From rlooyahoo at gmail.com  Tue Sep  1 00:06:56 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Mon, 31 Aug 2015 20:06:56 -0400
Subject: [openstack-dev] [ironic] weekly subteam status report
Message-ID: <CA+5K_1H-4XS2QXC+2shcjPUmrp27xYi2AOcTi0CPrR+y9nfsqQ@mail.gmail.com>

Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
============
As of Mon, Aug 31 (diff with Aug 24)
- Open: 144 (+3). 9 new (+4), 50 in progress, 0 critical, 13 high and 10
(+1) incomplete
- Nova bugs with Ironic tag: 24 (-1). 1 new, 0 critical, 1 (+1) high


Oslo (lintan)
==========
Oslo plans to retire oslo-incubator, as most of the code has already been
moved to libraries. In the future, other projects should maintain the code
under openstack/common themselves. It is also suggested to rename the folder
to avoid misunderstanding.


Testing (adam_g/jlvillal)
==================
Sent out an email asking for parties interested in functional testing. We
received one response, but believe there are at least two other people
interested in functional testing. Also, JayF pointed to the 'mimic' project,
which looks useful.


Inspector (dtantsur)
===============
- Our dsvm job is now voting on ironic-inspector. The next step is to make
it voting on the client.
- need reviews for IPA patch: https://review.openstack.org/#/c/205587/


Bifrost (TheJulia)
=============
- Gate is currently broken due to a conflicting change in ironic. A fix has
been approved and should be landing shortly.
- Our non-voting job should be passing soon, once the issues with the
ironic-agent dib element get sorted out and landed.
- Cleanup is underway to cut an initial release.


webclient (krotscheck / betherly)
=========================
- https://review.openstack.org/#/q/status:open+project:openstack/ironic-webclient,n,z
- https://review.openstack.org/#/c/199769


Drivers
======

iRMC (naohirot)
---------------------
https://review.openstack.org/#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z

Status: Active (soliciting the core team's spec review)
- Enhance Power Interface for Soft Reboot and NMI
  - bp/enhance-power-interface-for-soft-reboot-and-nmi

Status: Reactive (code review is ongoing)
- iRMC out of band inspection
  - bp/ironic-node-properties-discovery

Status: TODO
- New boot driver interface for iRMC drivers
........

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150831/47e36788/attachment.html>

From hejie.xu at intel.com  Tue Sep  1 00:45:19 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Tue, 1 Sep 2015 08:45:19 +0800
Subject: [openstack-dev] [nova] Nova API sub-team meeting
Message-ID: <6AFDD4B8-E6F3-49A1-8B0C-B10157592757@intel.com>

Hi,

We have our weekly Nova API meeting this week. The meeting is being held tomorrow, Tuesday, at 1200 UTC.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/0e43a9de/attachment.html>

From emilien at redhat.com  Tue Sep  1 00:57:07 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 31 Aug 2015 20:57:07 -0400
Subject: [openstack-dev] [puppet] CI: make heat, Ironic,
 glance beaker non voting on trusty
In-Reply-To: <CAHr1CO_r5pedTrAy+JHMKv=3bLQ0ScHaiye0f_GMndnb+fr68w@mail.gmail.com>
References: <CAN7WfkJ=cBnq5O=5EGd0Rz6VxBuwwMV_CdamvJZD9EOOwcO4XQ@mail.gmail.com>
 <CAHr1CO_r5pedTrAy+JHMKv=3bLQ0ScHaiye0f_GMndnb+fr68w@mail.gmail.com>
Message-ID: <55E4F7E3.6050604@redhat.com>

I did https://review.openstack.org/219078 to disable the voting on these
jobs.

Matt, usually packages are in a better shape after official releases...
Let's hope they will be :-)

On 08/31/2015 05:54 PM, Matt Fischer wrote:
> +1, I guess we need to watch and see if they pass eventually but that's
> no guarantee that they won't break again in L or M etc.
> 
> On Mon, Aug 31, 2015 at 3:48 PM, Emilien Macchi
> <emilien.macchi at gmail.com <mailto:emilien.macchi at gmail.com>> wrote:
> 
>     So it has been more than one month that Ubuntu hasn't provided fixes
>     for Heat, Ironic, and Glance.
>     It's blocking our CI from merging patches, so I propose we make the
>     beaker jobs non-voting on trusty for these three components only. They
>     would remain voting on centos7, so I think it's fine.
> 
>     Once they provide stable packages, we will obviously make them vote
>     again.
> 
>     Any feedback is welcome,
>     ---
>     Emilien Macchi
> 
> 
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150831/fe884995/attachment.pgp>

From yongli.he at intel.com  Tue Sep  1 02:29:04 2015
From: yongli.he at intel.com (He, Yongli)
Date: Tue, 1 Sep 2015 02:29:04 +0000
Subject: [openstack-dev] Intel PCI CI down today due to testing scripts bug
Message-ID: <837B116B6E5B934DA06D9AD0FD79C6A301AC1319@SHSMSX104.ccr.corp.intel.com>

Hello OpenStackers!

The Intel PCI CI is failing to report results to OpenStack Jenkins third-party tests, due to a recently added bug in the test scripts.

We have identified the bug and rolled the scripts back to a verified version. We are now discussing this with the infra team to
bring the CI back ASAP.

Sorry for the inconvenience, and thank you for your patience.

Regards
Yongli He



From germy.lure at gmail.com  Tue Sep  1 03:40:39 2015
From: germy.lure at gmail.com (Germy Lure)
Date: Tue, 1 Sep 2015 11:40:39 +0800
Subject: [openstack-dev] [Neutron] DHCP configuration
In-Reply-To: <CAO_F6JO2B21_bXTkdaUqLjQ7R_7eBseGo5NJArxXOBFhV0NZFg@mail.gmail.com>
References: <CAG9LJa5B4L=6AYqBbqAdMsD-DQzerYtbuNDUtonFEzbVGQ0D4g@mail.gmail.com>
 <CAO_F6JO2B21_bXTkdaUqLjQ7R_7eBseGo5NJArxXOBFhV0NZFg@mail.gmail.com>
Message-ID: <CAEfdOg2FGaPJhJYHLntbh0kHw9Fm1hb5Jkr7TA6H+V7EM4Q64g@mail.gmail.com>

+1
common.config should be global and general, while agent.config should be
local and related to the specific back-end.
Maybe we can add a different prefix to the same option.
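To make the prefix idea concrete, here is a minimal, self-contained sketch (plain Python rather than oslo.config, and the merge helper is hypothetical, not anything in neutron) of how prefixing agent-specific options avoids a name collision with the general ones:

```python
# Illustrative only: two DHCP option sets, one general
# (neutron.common.config) and one specific to the reference agent
# (neutron.agent.dhcp.config). Prefixing a colliding agent option
# keeps both values addressable after the merge.

GENERAL_DHCP_OPTS = {
    'dhcp_lease_duration': 86400,      # general, back-end independent
    'dns_domain': 'openstacklocal',
}

AGENT_DHCP_OPTS = {
    'dhcp_lease_duration': 120,        # hypothetical agent-level override
    'dhcp_confs': '$state_path/dhcp',  # agent/dnsmasq specific
}

def merge_with_prefix(general, agent, prefix='agent_'):
    """Merge option dicts, prefixing any agent option that would
    shadow a general option of the same name."""
    merged = dict(general)
    for name, value in agent.items():
        merged[prefix + name if name in general else name] = value
    return merged

conf = merge_with_prefix(GENERAL_DHCP_OPTS, AGENT_DHCP_OPTS)
```

After the merge, conf keeps both 'dhcp_lease_duration' and 'agent_dhcp_lease_duration', so neither option silently wins.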

Germy

On Mon, Aug 31, 2015 at 11:13 PM, Kevin Benton <blak111 at gmail.com> wrote:

> neutron.common.config should have general DHCP options that aren't
> specific to the reference DHCP agent. neutron.agent.dhcp.config should have
> all of the stuff specific to our agent and dnsmasq.
>
> On Mon, Aug 31, 2015 at 7:54 AM, Gal Sagie <gal.sagie at gmail.com> wrote:
>
>> Hello all,
>>
>> I went over the code and noticed that we have default DHCP configuration
>> both in neutron/common/config.py  (dhcp_lease_duration , dns_domain and
>> dhcp_agent_notification)
>>
>> But also we have it in neutron/agent/dhcp/config.py (DHCP_AGENT_OPTS,
>> DHCP_OPTS)
>>
>> I think we should consider merging them (especially the agent
>> configuration)
>> into one place so it will be easier to find them.
>>
>> I will add a bug on myself to address that, anyone know if this was done
>> in purpose
>> for some reason, or anyone have other thoughts regarding this?
>>
>> Thanks
>> Gal.
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/0951157d/attachment.html>

From choudharyvikas16 at gmail.com  Tue Sep  1 03:46:37 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Tue, 1 Sep 2015 09:16:37 +0530
Subject: [openstack-dev] Adding 'server_type' field to Baymodel
Message-ID: <CABJxuZqgFTjuNNkDO6TPUpnATjRnHn2oJznYdxaQMY_CouDfwg@mail.gmail.com>

Hi Team,

I have a doubt. What are all the values that the 'server_type' field, used
in the function call below, can have?
get_template_definition(cls, server_type, os, coe)

Everywhere in the code its value is currently 'vm' only, for example in the
classes representing template definitions (mesos, k8s and swarm):

class AtomicSwarmTemplateDefinition(BaseTemplateDefinition):
    provides = [
        {'server_type': 'vm', 'os': 'fedora-atomic', 'coe': 'swarm'},
    ]

os and coe are already fields of baymodel. Should 'server_type' not also be
another field in baymodel?

As I understand it, server_type can at least also be a baremetal node.
Please correct me if I am wrong.
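For illustration, here is a simplified, self-contained sketch of the kind of dispatch get_template_definition() performs; this is not magnum's actual implementation, and the baremetal 'bm' entry is purely hypothetical:

```python
# Sketch: match a (server_type, os, coe) tuple against each definition's
# 'provides' list. A hypothetical stand-in, not magnum code.

class BaseTemplateDefinition:
    provides = []

    @classmethod
    def supports(cls, server_type, os, coe):
        wanted = {'server_type': server_type, 'os': os, 'coe': coe}
        return any(entry == wanted for entry in cls.provides)

class AtomicSwarmTemplateDefinition(BaseTemplateDefinition):
    provides = [
        {'server_type': 'vm', 'os': 'fedora-atomic', 'coe': 'swarm'},
    ]

class BareMetalK8sTemplateDefinition(BaseTemplateDefinition):
    # Hypothetical: what a baremetal entry might look like if
    # server_type became a real baymodel field.
    provides = [
        {'server_type': 'bm', 'os': 'fedora', 'coe': 'kubernetes'},
    ]

def get_template_definition(server_type, os, coe):
    # Return the first definition whose 'provides' matches the tuple.
    for cls in BaseTemplateDefinition.__subclasses__():
        if cls.supports(server_type, os, coe):
            return cls
    raise LookupError('no template for (%s, %s, %s)'
                      % (server_type, os, coe))
```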


Regards
Vikas Choudhary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/03a06782/attachment.html>

From choudharyvikas16 at gmail.com  Tue Sep  1 04:04:30 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Tue, 1 Sep 2015 09:34:30 +0530
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
Message-ID: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>

Hi,

Can anybody please point me out some etherpad discussion page/spec  that
can help me understand why we are going to introduce barbican  for magnum
when we already had keystone for security management?




-Vikas Choudhary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/375c12e1/attachment.html>

From gong_ys2004 at aliyun.com  Tue Sep  1 04:44:45 2015
From: gong_ys2004 at aliyun.com (gong_ys2004)
Date: Tue, 01 Sep 2015 12:44:45 +0800
Subject: [openstack-dev] [neutron] subnetallocation is in core resource,
 while there is an extension for it?
Message-ID: <----1a------ucW1a$326fa33e-1ee8-455d-982b-8cb3ab951f94@aliyun.com>


Hi, neutron guys,

Look at
https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,
which defines an extension, Subnetallocation, but defines no extension
resource. Actually, it is implemented in the core resources, so I think we
should remove this extension.
I filed a bug for it: https://bugs.launchpad.net/neutron/+bug/1490815

Regards,
yong sheng gong
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/f646888b/attachment.html>

From nstarodubtsev at mirantis.com  Tue Sep  1 04:50:05 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Tue, 1 Sep 2015 07:50:05 +0300
Subject: [openstack-dev] Weekly Meeting on Sep 1
In-Reply-To: <D20A1F0A.1BD71%ian.cordasco@rackspace.com>
References: <CAOnDsYMacDSAnEnObNTygCJHtWWnXB4GP-DVqu+JZbaJJwsC_A@mail.gmail.com>
 <D20A1F0A.1BD71%ian.cordasco@rackspace.com>
Message-ID: <CAAa8YgAPJQiW1E-U-jyELNmWhraAqNpvkR6N3Z1ZjeqXNfZaYg@mail.gmail.com>

Hi Ian,
It's the murano IRC weekly meeting, today at 17:00 UTC.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-08-31 23:11 GMT+03:00 Ian Cordasco <ian.cordasco at rackspace.com>:

>
>
> On 8/31/15, 13:59, "Serg Melikyan" <smelikyan at mirantis.com> wrote:
>
> >Hi folks,
> >
> >
> >I want to let you know that I would not be able to chair tomorrow's
> >weekly meeting, Nikolai Starodubtsev is going to temporary replace me on
> >this meeting.
> >
> >
> >--
> >Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> >
> >http://mirantis.com <http://mirantis.com/> | smelikyan at mirantis.com
> >
> >+7 (495) 640-4904, 0261
> >+7 (903) 156-0836
>
> Which meeting?
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/46217fe4/attachment-0001.html>

From nstarodubtsev at mirantis.com  Tue Sep  1 04:52:16 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Tue, 1 Sep 2015 07:52:16 +0300
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <55E4847D.2020807@intel.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
Message-ID: <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>

All,
I'd like to propose use of #openstack-blazar for further communication and
coordination.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com>:

> Yes, Blazar is a really interesting project. I worked on it some time ago
> and I really enjoyed it. Sadly my obligations at work don't let me keep
> working on it, but I'm happy that there is still some interest in Blazar.
>
> Pablo.
> On 31/08/15 09:19, Zhenyu Zheng wrote:
> Hello,
> It seems like an interesting project.
>
> On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau <priteau at uchicago.edu
> <mailto:priteau at uchicago.edu>> wrote:
> Hello,
>
> The NSF-funded Chameleon project (https://www.chameleoncloud.org) uses
> Blazar to provide advance reservations of resources for running cloud
> computing experiments.
>
> We would be interested in contributing as well.
>
> Pierre Riteau
>
> On 28 Aug 2015, at 07:56, Ildikó Váncsa <ildiko.vancsa at ericsson.com>
> wrote:
>
> > Hi All,
> >
> > The resource reservation topic pops up time to time on different forums
> to cover use cases in terms of both IT and NFV. The Blazar project was
> intended to address this need, but according to my knowledge due to earlier
> integration and other difficulties the work has been stopped.
> >
> > My question is that who would be interested in resurrecting the Blazar
> project and/or working on a reservation system in OpenStack?
> >
> > Thanks and Best Regards,
> > Ildikó
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/44ec0a87/attachment.html>

From adrian.otto at rackspace.com  Tue Sep  1 05:02:32 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Tue, 1 Sep 2015 05:02:32 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
Message-ID: <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>

Simply put, Keystone is designed to generate tokens that are to be used for authentication and RBAC. Systems like Kubernetes do not support Keystone auth, but do support TLS. Using TLS provides a solution that is compatible with using these systems outside of an OpenStack cloud.

Barbican is designed for secure storage of arbitrary secrets, and currently also has a CA function. The reason that is compelling is that you can have Barbican generate, sign, and store a keypair without transmitting the private key over the network to the client that originates the signing request. It can be directly stored, and made available only to the clients that need access to it.

We are taking an iterative approach to TLS integration, so we can gradually take advantage of both keystone and Barbican features as they allow us to iterate toward a more secure integration.

Adrian

> On Aug 31, 2015, at 9:05 PM, Vikas Choudhary <choudharyvikas16 at gmail.com> wrote:
> 
> Hi,
> 
> Can anybody please point me out some etherpad discussion page/spec  that can help me understand why we are going to introduce barbican  for magnum when we already had keystone for security management?
> 
> 
> 
> 
> -Vikas Choudhary
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From wkqwu at cn.ibm.com  Tue Sep  1 05:23:27 2015
From: wkqwu at cn.ibm.com (Kai Qiang Wu)
Date: Tue, 1 Sep 2015 13:23:27 +0800
Subject: [openstack-dev] Adding 'server_type' field to Baymodel
In-Reply-To: <CABJxuZqgFTjuNNkDO6TPUpnATjRnHn2oJznYdxaQMY_CouDfwg@mail.gmail.com>
References: <CABJxuZqgFTjuNNkDO6TPUpnATjRnHn2oJznYdxaQMY_CouDfwg@mail.gmail.com>
Message-ID: <OF35502345.73FA2451-ON48257EB3.001D0DCD-48257EB3.001D9C92@cn.ibm.com>

HI Vikas,

The server_type field (the old name was 'platform'; we renamed it to be
more accurate) was introduced for wider support, like baremetal cases, etc.

Elsewhere it is hardcoded to 'vm' for now, as 'vm' was and still is
supported (it is widely used for VM cases in dev/test).


For baremetal, I did some work before but, considering time and priority,
have not finished it yet. Details can be checked in

review.openstack.org/#/c/198984



If you have further questions, you can check with me in IRC channel.



Thanks



Best Wishes,
--------------------------------------------------------------------------------
Kai Qiang Wu (Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wkqwu at cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
         No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193
--------------------------------------------------------------------------------
Follow your heart. You are miracle!



From:	Vikas Choudhary <choudharyvikas16 at gmail.com>
To:	openstack-dev at lists.openstack.org
Date:	01/09/2015 11:50 am
Subject:	[openstack-dev] Adding 'server_type' field to Baymodel



Hi Team,

I have a doubt.What all values 'server type' field that is being used in
below function call can have?
get_template_definition(cls, server_type, os, coe)

Everywhere in code currently its value is 'vm' only. for example in classes
representing template definitions(mesos, k8s and swarm):
class AtomicSwarmTemplateDefinition(BaseTemplateDefinition):
    provides = [
        {'server_type': 'vm', 'os': 'fedora-atomic', 'coe': 'swarm'},
    ]

os and coe are already fields of baymodel.Should not 'server_type' also be
another field in baymodel?

what i understand server-type can be baremetal_node also atleast.
please correct me if i am wrong.


Regards
Vikas Choudhary
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/b5b706bd/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/b5b706bd/attachment.gif>

From nakato at nakato.io  Tue Sep  1 06:00:11 2015
From: nakato at nakato.io (Sachi King)
Date: Tue, 1 Sep 2015 16:00:11 +1000
Subject: [openstack-dev] [Neutron] Netaddr 0.7.16 and gate breakage
In-Reply-To: <CALiLy7pddet-dQ1mBx3dvsx95Bd-oKt5C7Ffqo7vPkue+GzcUw@mail.gmail.com>
References: <CAK+RQeYhp1h4KbGexDmKd56O0WG-NFS7pk1sDNNveoZJo6Cp6w@mail.gmail.com>
 <CALiLy7ph1jYvVR5EnkMPyheQ8C2tqnuZ1T1sL-oBp7texf=kQg@mail.gmail.com>
 <20150831165334.GO7955@yuggoth.org>
 <CAK+RQeZ+1cN5TYxm23ZXD+FCz0WLB227nLx83-Mz6rG1_QimVg@mail.gmail.com>
 <CALiLy7pddet-dQ1mBx3dvsx95Bd-oKt5C7Ffqo7vPkue+GzcUw@mail.gmail.com>
Message-ID: <CA+=nq75DqBvn8v2_hPOfCt+ZJSt91bxaqM-16vadfCsfb=wLDw@mail.gmail.com>

On Tue, Sep 1, 2015 at 3:15 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:
> On Mon, Aug 31, 2015 at 11:02 AM, Armando M. <armamig at gmail.com> wrote:
>> On 31 August 2015 at 09:53, Jeremy Stanley <fungi at yuggoth.org> wrote:
>>> On 2015-08-31 10:33:07 -0600 (-0600), Carl Baldwin wrote:
>>> > I was under the impression that we had a plan to stop allowing these
>>> > external updates break our gate jobs.
>>> [...]
>>>
>>> We do, and it succeeded in protecting master branch integration test
>>> jobs from this new netaddr release:
>>>
>>>     https://review.openstack.org/218737
>>>
>>> This was able to get implemented fairly early because DevStack
>>> already contained mechanisms for relying on the requirements repo to
>>> define its behavior WRT dependencies. The logistics for pinning
>>> versions in every project's unit tests, however, are more complex
>>> and not yet in place (but are in progress). Also where grenade is
>>> concerned, the breakage is on the stable/kilo side where we don't
>>> have constraints pinning implemented (since that work has been
>>> primarily in master this cycle and targeting liberty).
>
> Thanks for the update Jeremy.  It looks like there is still some work
> to do that I wasn't aware of.  Is there somewhere where we can follow
> the progress of this work?

Hi Carl, currently we're working on getting the required work into project
config, which means the change to tox.ini is currently going through some
revisions.

Project config: https://review.openstack.org/207262/
Nova tox.ini example: https://review.openstack.org/205931

I've created a change for Neutron's tox.ini.  This doesn't change anything,
so it should pass CI, but if there's a merge conflict with a external CI's
tox.ini this will flush it out.

https://review.openstack.org/219134/

>> Great...if there is any collaboration required by the individual projects,
>> please do not hesitate to reach out...we'd be happy to do our part.
>
> +1  Let us know how to pitch in.

For now there's not much to do, the project-config change is the change
that has to go in first.  Once that is in we can get the 'constraints'
factor into individual projects tox.ini and add the constraints jobs.
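For readers unfamiliar with the mechanism, the 'constraints' factor generally amounts to a tox environment whose install command pins dependency versions against the requirements repo, along these lines (the exact section name and constraints URL here are an assumption based on the pattern under review, not the merged change):

```ini
[testenv:py27-constraints]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
commands = python setup.py testr --slowest --testr-args='{posargs}'
```

With pip's -c flag, a new release such as netaddr 0.7.16 cannot be installed in the job until the pinned version in upper-constraints.txt is explicitly bumped.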

If neutron would like to be one of the initial projects to enable these
tests that would be quite helpful as I've yet to recruit any initial
projects.

Cheers,
Sachi


From choudharyvikas16 at gmail.com  Tue Sep  1 06:42:27 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Tue, 1 Sep 2015 12:12:27 +0530
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
Message-ID: <CABJxuZpAb3pUjNgTdaHOgT=hWbcE4EtHPEW0BRkxd8dWz13BNA@mail.gmail.com>

Thanks Adrian for description.

If I understand correctly, magnum clients will use barbican for
authentication, as COEs don't support keystone for operations like
pod creation etc., while for the rest of the project clients, like the
nova/glance python clients, keystone will continue to serve.

Please correct me.



-Vikas


_________________________________________________________________________________________________________
Simply put, Keystone is designed to generate tokens that are to be
used for authentication and RBAC. Systems like Kubernetes do not
support Keystone auth, but do support TLS. Using TLS provides a
solution that is compatible with using these systems outside of an
OpenStack cloud.

Barbican is designed for secure storage of arbitrary secrets, and
currently also has a CA function. The reason that is compelling is
that you can have Barbican generate, sign, and store a keypair without
transmitting the private key over the network to the client that
originates the signing request. It can be directly stored, and made
available only to the clients that need access to it.

We are taking an iterative approach to TLS integration, so we can
gradually take advantage of both keystone and Barbican features as
they allow us to iterate toward a more secure integration.

Adrian

> On Aug 31, 2015, at 9:05 PM, Vikas Choudhary <choudharyvikas16 at gmail.com> wrote:
>
> Hi,
>
> Can anybody please point me out some etherpad discussion page/spec  that can help me understand why we are going to introduce barbican  for magnum when we already had keystone for security management?
>
>
>
>
> -Vikas Choudhary
>
>
> __________________________________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/c3da6f25/attachment.html>

From Tim.Bell at cern.ch  Tue Sep  1 06:49:20 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Tue, 1 Sep 2015 06:49:20 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3B9061@CERNXCHG44.cern.ch>

> -----Original Message-----
> From: Adrian Otto [mailto:adrian.otto at rackspace.com]
> Sent: 01 September 2015 07:03
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
> keystone and certs stored in barbican
> 
> Simply put, Keystone is designed to generate  tokens that are to be used for
> authentication and RBAC. Systems like Kunernetes do not support Keystone
> auth, but do support TLS. Using TLS provides a solution that is compatible
> with using these systems outside of an OpenStack cloud.
> 
> Barbican is designed for secure storage of arbitrary secrets, and currently
> also has a CA function. The reason that is compelling is that you can have
> Barbican generate, sign, and store a keypair without transmitting the private
> key over the network to the client that originates the signing request. It can
> be directly stored, and made available only to the clients that need access to
> it.
> 

Will it also be possible to use a different CA? In some environments there is already a corporate certificate authority server; this would ensure compliance with site security standards.

Tim

> We are taking an iterative approach to TLS integration, so we can gradually
> take advantage of both keystone and Barbican features as they allow us to
> iterate toward a more secure integration.
> 
> Adrian
> 
> > On Aug 31, 2015, at 9:05 PM, Vikas Choudhary
> <choudharyvikas16 at gmail.com> wrote:
> >
> > Hi,
> >
> > Can anybody please point me out some etherpad discussion page/spec
> that can help me understand why we are going to introduce barbican  for
> magnum when we already had keystone for security management?
> >
> >
> >
> >
> > -Vikas Choudhary
> >
> >
> >
> ________________________________________________________________
> ______
> > ____ OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ________________________________________________________________
> __________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ddovbii at mirantis.com  Tue Sep  1 07:03:19 2015
From: ddovbii at mirantis.com (Dmitro Dovbii)
Date: Tue, 1 Sep 2015 10:03:19 +0300
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
Message-ID: <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>

+1

2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:

> +1
>
> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com>
> wrote:
>
>> I'm pleased to nominate Nikolai for Murano core.
>>
>> He's been actively participating in the development of murano during
>> liberty and is among the top 5 contributors during the last 90 days. He's
>> also leading the CloudFoundry integration initiative.
>>
>> Here are some useful links:
>>
>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>> List of reviews:
>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>> Murano contribution during latest 90 days
>> http://stackalytics.com/report/contribution/murano/90
>>
>> Please vote with +1/-1 for approval/objections
>>
>> --
>> Kirill Zaitsev
>> Murano team
>> Software Engineer
>> Mirantis, Inc
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> http://mirantis.com | smelikyan at mirantis.com
>
> +7 (495) 640-4904, 0261
> +7 (903) 156-0836
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/9e8a3cfa/attachment.html>

From choudharyvikas16 at gmail.com  Tue Sep  1 07:06:22 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Tue, 1 Sep 2015 12:36:22 +0530
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
Message-ID: <CABJxuZoYzvDZRbC0e6aJ4zZ8aL3wydAw4rCdD++p4YsgyCt1eA@mail.gmail.com>

Is it that Keystone authenticates between the magnum client and the magnum
conductor, while the Barbican certs will be used between the conductor and
k8s/swarm?



Thanks

Vikas Choudhary

_____________________________________________________________
Simply put, Keystone is designed to generate tokens that are to be
used for authentication and RBAC. Systems like Kubernetes do not
support Keystone auth, but do support TLS. Using TLS provides a
solution that is compatible with using these systems outside of an
OpenStack cloud.

Barbican is designed for secure storage of arbitrary secrets, and
currently also has a CA function. The reason that is compelling is
that you can have Barbican generate, sign, and store a keypair without
transmitting the private key over the network to the client that
originates the signing request. It can be directly stored, and made
available only to the clients that need access to it.

We are taking an iterative approach to TLS integration, so we can
gradually take advantage of both keystone and Barbican features as
they allow us to iterate toward a more secure integration.

Adrian

> On Aug 31, 2015, at 9:05 PM, Vikas Choudhary <choudharyvikas16 at gmail.com> wrote:
>
>> Hi,
>>
>> Can anybody please point me out some etherpad discussion page/spec that can
>> help me understand why we are going to introduce barbican for magnum when we
>> already had keystone for security management?
>>
>> -Vikas Choudhary
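To illustrate the property Adrian describes (the CA generates and stores the keypair, and the private key is released only to authorized clients), here is a toy sketch. It is purely illustrative; the class and method names are hypothetical and this is not the Barbican API.

```python
import os
import uuid


class ToySecretStore:
    """Toy stand-in for a secret store with a CA-like keypair function.

    Purely illustrative; not the Barbican API. The point is that the
    private half of the keypair never leaves the store unless the
    caller is authorized to read it.
    """

    def __init__(self):
        self._secrets = {}

    def generate_keypair(self):
        # Stand-ins for real key material; a real CA would generate an
        # asymmetric keypair and sign a certificate here.
        private = os.urandom(32)
        public = os.urandom(16)
        ref = str(uuid.uuid4())
        self._secrets[ref] = private
        # Only a reference and the public half go back to the requester.
        return ref, public

    def get_private(self, ref, authorized):
        # The private key is handed out only to clients that need access.
        if not authorized:
            raise PermissionError("client not authorized for this secret")
        return self._secrets[ref]
```

The signing request originates at the store, so the private key is never transmitted over the network to the requester, which is the compelling part of the flow described above.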
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/92e5586e/attachment.html>

From ddovbii at mirantis.com  Tue Sep  1 07:30:37 2015
From: ddovbii at mirantis.com (Dmitro Dovbii)
Date: Tue, 1 Sep 2015 10:30:37 +0300
Subject: [openstack-dev] [murano] Let's minimize the list of pylint
	exceptions
Message-ID: <CAKSp79wPJ2N48J3SP7nbivkerwtC6beXx18z23eqCHXUjqGU3A@mail.gmail.com>

Hi folks!

We have a long list of pylint exceptions in the Murano code (please see an
example
<http://logs.openstack.org/10/207910/10/check/gate-murano-pylint/f54d298/console.html>).
I would like to invite you to take part in refactoring the code and
shrinking this list.
I've created a blueprint
<https://blueprints.launchpad.net/murano/+spec/reduce-pylint-warnings>
and an etherpad
document <https://beta.etherpad.org/p/reduce-pylint-warnings> describing
the structure of participation. Please feel free to choose some type of
warning and several modules containing it, then make a note in the document,
and finally fix the issues.
Let's make the murano code clearer together :)

Best regards,
Dmytro Dovbii
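As an illustrative example of the kind of fix involved (hypothetical code, not from the Murano tree), pylint's dangerous-default-value warning (W0102) is usually silenced by replacing a mutable default argument rather than by adding another exception:

```python
# Before: "def add_item(item, items=[])" triggers W0102, because the one
# default list is shared across every call of the function.
def add_item(item, items=None):
    """Append item, creating a fresh list per call when none is given."""
    if items is None:
        items = []
    items.append(item)
    return items
```

Fixing the code this way is preferable to growing the exceptions list further.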
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/cdecca1e/attachment.html>

From nstarodubtsev at mirantis.com  Tue Sep  1 07:37:33 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Tue, 1 Sep 2015 10:37:33 +0300
Subject: [openstack-dev] [murano] Let's minimize the list of pylint
	exceptions
In-Reply-To: <CAKSp79wPJ2N48J3SP7nbivkerwtC6beXx18z23eqCHXUjqGU3A@mail.gmail.com>
References: <CAKSp79wPJ2N48J3SP7nbivkerwtC6beXx18z23eqCHXUjqGU3A@mail.gmail.com>
Message-ID: <CAAa8YgCZ1Wb-kadOh5AiuKKbQu+i_cy52BWGHWiT-hkDOrOC=g@mail.gmail.com>

+1, good initiative



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-01 10:30 GMT+03:00 Dmitro Dovbii <ddovbii at mirantis.com>:

> Hi folks!
>
> We have a long list of pylint exceptions in the Murano code (please see an
> example
> <http://logs.openstack.org/10/207910/10/check/gate-murano-pylint/f54d298/console.html>).
> I would like to invite you to take part in refactoring the code and
> shrinking this list.
> I've created a blueprint
> <https://blueprints.launchpad.net/murano/+spec/reduce-pylint-warnings>
> and an etherpad document <https://beta.etherpad.org/p/reduce-pylint-warnings>
> describing the structure of participation. Please feel free to choose some
> type of warning and several modules containing it, then make a note in the
> document, and finally fix the issues.
> Let's make the murano code clearer together :)
>
> Best regards,
> Dmytro Dovbii
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/75dfb5dd/attachment.html>

From thierry at openstack.org  Tue Sep  1 07:45:46 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 1 Sep 2015 09:45:46 +0200
Subject: [openstack-dev] [puppet] [security] applying for
 vulnerability:managed tag
In-Reply-To: <55E498EE.2090509@redhat.com>
References: <55E498EE.2090509@redhat.com>
Message-ID: <55E557AA.5000304@openstack.org>

Emilien Macchi wrote:
> I would like the feedback from the community about applying (or not) to
> the vulnerability:managed tag [1].
> Being part of OpenStack ecosystem and the big tent, Puppet OpenStack
> project might want to follow some other projects in order to be
> consistent in Security management procedures.

Hi Emilien,

This tag is not managed by the TC, it's directly managed by the
Vulnerability Management Team (under the Security project team). It's
their decision to evaluate if your code is mature enough and if your
team is organized enough that they feel confident they can tackle the
additional workload.

You can find their contact details at:
https://security.openstack.org/

-- 
Thierry Carrez (ttx)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/21cba93d/attachment.pgp>

From anteaya at anteaya.info  Tue Sep  1 08:13:13 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Tue, 01 Sep 2015 04:13:13 -0400
Subject: [openstack-dev] Intel PCI CI down today due to testing scripts
 bug
In-Reply-To: <837B116B6E5B934DA06D9AD0FD79C6A301AC1319@SHSMSX104.ccr.corp.intel.com>
References: <837B116B6E5B934DA06D9AD0FD79C6A301AC1319@SHSMSX104.ccr.corp.intel.com>
Message-ID: <55E55E19.70601@anteaya.info>

On 08/31/2015 10:29 PM, He, Yongli wrote:
> Hello OpenStackers!
> 
> The Intel PCI CI, due to a recently added bug in its test scripts, is failing to report results to OpenStack Jenkins third-party tests. 
> 
> We have identified the bug and rolled the scripts back to a verified version. Now we are discussing this with the infra team to 
> bring it back ASAP.
> 
> Sorry for the inconvenience, and thank you for your patience.
> 
> Regards
> Yongli He
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
This matter is being dealt with in this thread:
http://lists.openstack.org/pipermail/third-party-announce/2015-August/000263.html

Please let us conduct the conversation in one place on one mailing list,
in this case the third-party-announce mailing list where it was started,
rather than having split conversations.

If you want to convey the status of your third party system to the
development community, please update your wikipage.

Thank you,
Anita.


From sbauza at redhat.com  Tue Sep  1 08:26:03 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 01 Sep 2015 10:26:03 +0200
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
Message-ID: <55E5611B.1090203@redhat.com>



On 01/09/2015 at 06:52, Nikolay Starodubtsev wrote:
> All,
> I'd like to propose use of #openstack-blazar for further communication 
> and coordination.
>


+2 to that. That's the first step of any communication. The channel logs 
are also recorded here, for async communication:
http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/

I don't see much benefit at the moment in running a weekly meeting. We 
can chat on purpose if needed.

Like I said to Ildiko, I'm fine with helping some people discover Blazar, 
but I won't have lots of time for actually working on it.

IMHO, the first things to do with Blazar are to reduce the tech debt by:
  1/ finishing the Climate->Blazar renaming
  2/ updating and using the latest oslo libraries instead of using the 
old incubator
  3/ using the Nova v2.1 API (which could be a bit difficult because there 
are no more extensions)

If I see some progress with Blazar, I'm OK with asking -infra to move 
Blazar to the OpenStack namespace, as was asked by James Blair here, 
because it seems Blazar is not defunct:
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html

-Sylvain



>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
>
> 2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com 
> <mailto:pablo.a.fuente at intel.com>>:
>
>     Yes, Blazar is a really interesting project. I worked on it some
>     time ago and I really enjoyed it. Sadly my obligations at work don't
>     let me keep working on it, but I'm happy that there is still some
>     interest in Blazar.
>
>     Pablo.
>     On 31/08/15 09:19, Zhenyu Zheng wrote:
>     Hello,
>     It seems like an interesting project.
>
>     On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau
>     <priteau at uchicago.edu
>     <mailto:priteau at uchicago.edu><mailto:priteau at uchicago.edu
>     <mailto:priteau at uchicago.edu>>> wrote:
>     Hello,
>
>     The NSF-funded Chameleon project (https://www.chameleoncloud.org)
>     uses Blazar to provide advance reservations of resources for
>     running cloud computing experiments.
>
>     We would be interested in contributing as well.
>
>     Pierre Riteau
>
>     On 28 Aug 2015, at 07:56, Ildikó Váncsa
>     <ildiko.vancsa at ericsson.com> wrote:
>
>     > Hi All,
>     >
>     > The resource reservation topic pops up from time to time on different
>     forums to cover use cases in terms of both IT and NFV. The Blazar
>     project was intended to address this need, but to my knowledge the
>     work has been stopped due to earlier integration and other
>     difficulties.
>     >
>     > My question is that who would be interested in resurrecting the
>     Blazar project and/or working on a reservation system in OpenStack?
>     >
>     > Thanks and Best Regards,
>     > Ildikó
>     >
>     >
>     __________________________________________________________________________
>     > OpenStack Development Mailing List (not for usage questions)
>     > Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><mailto:OpenStack-dev-request at lists.openstack.org
>     <mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/d71f8134/attachment.html>

From mzoeller at de.ibm.com  Tue Sep  1 08:28:36 2015
From: mzoeller at de.ibm.com (Markus Zoeller)
Date: Tue, 1 Sep 2015 10:28:36 +0200
Subject: [openstack-dev] [nova][bugs] more specific tags?
In-Reply-To: <55E4B617.4010308@linux.vnet.ibm.com>
References: <201508281611.t7SGB6B0000678@d06av11.portsmouth.uk.ibm.com>
 <55E4B617.4010308@linux.vnet.ibm.com>
Message-ID: <201509010832.t818WctF002753@d06av02.portsmouth.uk.ibm.com>

Matt Riedemann <mriedem at linux.vnet.ibm.com> wrote on 08/31/2015 10:16:23 
PM:

> From: Matt Riedemann <mriedem at linux.vnet.ibm.com>
> To: openstack-dev at lists.openstack.org
> Date: 08/31/2015 10:21 PM
> Subject: Re: [openstack-dev] [nova][bugs] more specific tags?
> 
> 
> 
> On 8/28/2015 10:37 AM, Markus Zoeller wrote:
> > This is a proposal to enhance the list of offical tags for our bugs
> > in Launchpad. During the tagging process in the last weeks it seems
> > to me that some of the tags are too coarse-grained. Would you see a
> > benefit in enhancing the official list to more fine-grained tags?
> >
> > [...]
> >
> > Would you see this as benefitial? If yes, which tags would you like to
> > have additionally? If no, what did I miss or overlook?
> >
> > Regards,
> > Markus Zoeller (markus_z)
> 
> Given my cynical nature, I have to ask, are the bug tags really helping 
> with focusing triage by sub-teams?  Or are we somehow pulling or 
> planning to pull this info into some analysis, like, 'we have a lot of 
> bugs in component x, use case y, so we need to refactor and/or beef up 
> testing there'.  Otherwise I feel like we're just adding more official 
> tags that very few people will be aware of or care about.
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann

Whether this would help the sub-teams to split things up cannot be answered
by me. I hope some of the sub-team leaders will give their opinion here.

From a user's point of view I don't care, as an example, if the libvirt
driver is (theoretically) without bugs. I'm interested in whether I can do a 
live-migration/snapshot/reboot/<your-favorite-feature-here> and, if I 
can do so, what my restrictions are. So, yes, the rationale behind the
proposal was indeed to get an impression of where it makes sense to invest
time and effort, mainly from a use case perspective through the layers.
We have enough open bugs to be busy all the time and I don't want to 
dictate any direction. You could see it as a navigation system:
it makes nice suggestions about which way you could go, but you are free to
take another route because of personal experience or unforeseen events.

Regards,
Markus Zoeller (markus_z)



From ihrachys at redhat.com  Tue Sep  1 09:14:03 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 1 Sep 2015 11:14:03 +0200
Subject: [openstack-dev] [neutron][db] reviewers: please mind the branch a
	script belongs to
Message-ID: <A0ADAA86-9762-4734-98E9-FF29CD617653@redhat.com>

Hi reviewers,

several days ago, a semantically expand-only migration script was merged into the contract branch [1]. This is not a disaster, though it would be a tiny one if a contract-only migration script were merged into the expand branch.

Please make sure you know the new migration strategy described in [2].

Previously, we introduced a check that validates that we don't mix down_revision heads, linking e.g. an expand script to a contract revision, or vice versa [3]. Apparently, it's not enough.

Ann is looking into introducing another check for the semantic correctness of scripts. I don't believe it can work for all the complex cases we may need to solve manually, but at least it should be able to catch add_* operations in contract scripts, or drop_* operations in the expand branch. Since there may be exceptions to general automation, we may also need a mechanism to disable such a sanity check for specific scripts.

So all in all, I kindly ask everyone to become aware of how we now manage migration scripts, and what that implies for how we should review code (e.g. looking at the paths as well as the code of alembic scripts). That is especially important until the check that Ann is looking to implement is merged.

[1]: https://bugs.launchpad.net/neutron/+bug/1490767
[2]: http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
[3]: https://review.openstack.org/#/c/206746/

Thanks
Ihar
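In spirit, such a sanity check could look like this sketch (hypothetical; not the actual implementation Ann is working on): flag drop_* operations appearing in expand scripts and add_*/create_* operations appearing in contract scripts.

```python
# Hypothetical sketch of a branch-correctness check; not real neutron code.
FORBIDDEN_PREFIXES = {
    'expand': ('drop_',),             # expand scripts must be additive only
    'contract': ('add_', 'create_'),  # contract scripts must not add schema
}


def violations(branch, op_names):
    """Return the alembic operation names that do not belong in `branch`."""
    return [op for op in op_names if op.startswith(FORBIDDEN_PREFIXES[branch])]
```

A real check would also need the per-script opt-out mentioned above, for exceptional cases that automation cannot classify.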
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/70b8f025/attachment.pgp>

From michal.gershenzon at alcatel-lucent.com  Tue Sep  1 09:14:08 2015
From: michal.gershenzon at alcatel-lucent.com (GERSHENZON, Michal (Michal))
Date: Tue, 1 Sep 2015 09:14:08 +0000
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
Message-ID: <53E57E3F2F67504C8F237A37C6D293DF80184368@FR711WXCHMBA02.zeu.alcatel-lucent.com>

+1

From: Dmitro Dovbii [mailto:ddovbii at mirantis.com]
Sent: Tuesday, September 01, 2015 10:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core

+1

2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com<mailto:smelikyan at mirantis.com>>:
+1

On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com<mailto:kzaitsev at mirantis.com>> wrote:
I'm pleased to nominate Nikolai for Murano core.

He's been actively participating in the development of murano during liberty and is among the top 5 contributors during the last 90 days. He's also leading the CloudFoundry integration initiative.

Here are some useful links:

Overall contribution: http://stackalytics.com/?user_id=starodubcevna
List of reviews: https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
Murano contribution during latest 90 days http://stackalytics.com/report/contribution/murano/90

Please vote with +1/-1 for approval/objections

--
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com<http://mirantis.com/> | smelikyan at mirantis.com<mailto:smelikyan at mirantis.com>

+7 (495) 640-4904, 0261
+7 (903) 156-0836

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/5e92d6e7/attachment.html>

From efedorova at mirantis.com  Tue Sep  1 09:17:39 2015
From: efedorova at mirantis.com (Ekaterina Chernova)
Date: Tue, 1 Sep 2015 12:17:39 +0300
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
Message-ID: <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>

+1

On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii <ddovbii at mirantis.com> wrote:

> +1
>
> 2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:
>
>> +1
>>
>> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com>
>> wrote:
>>
>>> I'm pleased to nominate Nikolai for Murano core.
>>>
>>> He's been actively participating in the development of murano during liberty
>>> and is among the top 5 contributors during the last 90 days. He's also
>>> leading the CloudFoundry integration initiative.
>>>
>>> Here are some useful links:
>>>
>>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>>> List of reviews:
>>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>> Murano contribution during latest 90 days
>>> http://stackalytics.com/report/contribution/murano/90
>>>
>>> Please vote with +1/-1 for approval/objections
>>>
>>> --
>>> Kirill Zaitsev
>>> Murano team
>>> Software Engineer
>>> Mirantis, Inc
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>> http://mirantis.com | smelikyan at mirantis.com
>>
>> +7 (495) 640-4904, 0261
>> +7 (903) 156-0836
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/6a36cab7/attachment.html>

From eantyshev at virtuozzo.com  Tue Sep  1 08:28:01 2015
From: eantyshev at virtuozzo.com (Evgeny Antyshev)
Date: Tue, 1 Sep 2015 12:28:01 +0400
Subject: [openstack-dev] [third-party][CI] Third-party oses in devstack-gate
Message-ID: <55E56191.1010905@virtuozzo.com>

Hello!

This letter is addressed to those third-party CI maintainers who need to
amend the upstream devstack-gate to suit their environment.

Some folks that I know use inline patching at the job level,
and some make private forks of devstack-gate (I even saw one on github).
There have been a few improvements to devstack-gate which made it
easier to use downstream, e.g. introducing DEVSTACK_LOCAL_CONFIG
(https://review.openstack.org/145321).

We particularly need it to recognize our rhel-based distribution as a
Fedora OS. We cannot define it explicitly in is_fedora() as it is not
officially supported upstream, but we can introduce the variable
DEVSTACK_GATE_IS_FEDORA, which makes is_fedora() agnostic to
distributions and lets it succeed when run on an undefined OS.

Here is the change: https://review.openstack.org/215029
I welcome everyone interested in the matter
to tell us if we do it right or not, and to review the change.

-- 
Best regards,
Evgeny Antyshev.



From sreejakannagundla08 at gmail.com  Tue Sep  1 09:40:43 2015
From: sreejakannagundla08 at gmail.com (sreeja kannagundla)
Date: Tue, 1 Sep 2015 15:10:43 +0530
Subject: [openstack-dev] Re keystone to keystone federation
Message-ID: <CAMyPZM39tEtSarRW6eVS_aoAqgAO=kWEABpe+4_CuXpgt3YN-Q@mail.gmail.com>

Hi

I am working on keystone2keystone federation, using the kilo version for
both keystone-sp and keystone-idp.
After configuring keystone-sp and keystone-idp I am trying to use the
command:

openstack federation project list --os-auth-type v3unscopedsaml
--os-identity-provider k2k  --os-auth-url https://10.63.13.161:35357/v3
--os-identity-provider-url
https://10.63.13.163:35357/v3/OS-FEDERATION/saml2/idp --os-username user
--os-password password

It returns an error:

ERROR: openstack Expecting to find application/json in Content-Type header
- the server could not comply with the request since it is either malformed
or otherwise incorrect. The client is assumed to be in error. (HTTP 400)
(Request-ID: req-4839f349-e3ed-403f-b456-dfc0d1aecbe4)

This is because in keystoneclient/contrib/auth/v3/saml2.py, while sending a
request to the keystone IdP for the SAML assertion, the content type used is
text/xml:

idp_response = session.post(
            self.identity_provider_url,
            headers={'Content-type': 'text/xml'},
            data=etree.tostring(idp_saml2_authn_request),
            requests_auth=(self.username, self.password),
            authenticated=False, log=False)

Why is the keystone IdP not accepting the content type text/xml?
What can be the workaround for this issue?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/3fe94788/attachment.html>

From nstarodubtsev at mirantis.com  Tue Sep  1 09:42:22 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Tue, 1 Sep 2015 12:42:22 +0300
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <55E5611B.1090203@redhat.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
Message-ID: <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>

Sylvain,
First of all we need to reanimate the blazar gate jobs, or we can't merge
anything. I tried to do it a year ago, but couldn't get the point of the
tests, so a better decision may be to rewrite them from scratch.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-01 11:26 GMT+03:00 Sylvain Bauza <sbauza at redhat.com>:

>
>
> On 01/09/2015 at 06:52, Nikolay Starodubtsev wrote:
>
> All,
> I'd like to propose use of #openstack-blazar for further communication and
> coordination.
>
>
>
> +2 to that. That's the first step of any communication. The channel logs
> are also recorded here, for async communication:
> http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/
>
> I don't see much benefit at the moment in running a weekly meeting. We
> can chat on purpose if needed.
>
> Like I said to Ildiko, I'm fine with helping some people discover Blazar,
> but I won't have lots of time for actually working on it.
>
> IMHO, the first things to do with Blazar are to reduce the tech debt by:
>  1/ finishing the Climate->Blazar renaming
>  2/ updating and using the latest oslo libraries instead of using the old
> incubator
>  3/ using the Nova v2.1 API (which could be a bit difficult because there
> are no more extensions)
>
> If I see some progress with Blazar, I'm OK with asking -infra to move
> Blazar to the OpenStack namespace, as was asked by James Blair here,
> because it seems Blazar is not defunct:
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>
> -Sylvain
>
>
>
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com>:
>
>> Yes, Blazar is a really interesting project. I worked on it some time ago
>> and I really enjoyed it. Sadly my obligations at work don't let me keep
>> working on it, but I'm happy that there is still some interest in Blazar.
>>
>> Pablo.
>> On 31/08/15 09:19, Zhenyu Zheng wrote:
>> Hello,
>> It seems like an interesting project.
>>
>> On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau <priteau at uchicago.edu
>> <mailto:priteau at uchicago.edu>> wrote:
>> Hello,
>>
>> The NSF-funded Chameleon project (https://www.chameleoncloud.org) uses
>> Blazar to provide advance reservations of resources for running cloud
>> computing experiments.
>>
>> We would be interested in contributing as well.
>>
>> Pierre Riteau
>>
>> On 28 Aug 2015, at 07:56, Ildikó Váncsa
>> <ildiko.vancsa at ericsson.com> wrote:
>>
>> > Hi All,
>> >
>> > The resource reservation topic pops up from time to time on different forums
>> to cover use cases in terms of both IT and NFV. The Blazar project was
>> intended to address this need, but to my knowledge the work has been stopped
>> due to earlier integration and other difficulties.
>> >
>> > My question is: who would be interested in resurrecting the Blazar
>> project and/or working on a reservation system in OpenStack?
>> >
>> > Thanks and Best Regards,
>> > Ildikó
>> >
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/94786752/attachment.html>

From hanlind at kth.se  Tue Sep  1 10:01:50 2015
From: hanlind at kth.se (Hans Lindgren)
Date: Tue, 1 Sep 2015 10:01:50 +0000
Subject: [openstack-dev] [nova] ProviderFirewallRules still available?
Message-ID: <C6C97104-CF8A-4108-8F02-5E68AF68923C@kth.se>

Some drivers (libvirt, xen) still query the database and try to set up provider rules. However, as the mail thread you link points out, there is no API for this feature and therefore no way to add or remove any rules. Because of this, the provider rule setup that is done today effectively does nothing.

An attempt at cleaning out all code related to this can be found here: https://review.openstack.org/184027


Hans



On 2015-08-31 21:23, "Asselin, Ramy" <ramy.asselin at hp.com> wrote:

>I saw this thread, "Undead DB objects: ProviderFirewallRule and InstanceGroupPolicy?" [1]
>
> 
>Seems to be in the process of getting removed from the DB, but are provider-level firewalls still available? I still see a lot of code that indicates they are [2]
> 
>I traced it to here, but I am not sure where the nova context is stored [3]
>
> 
>Ramy
>Irc:asselin
>[1] 
>http://lists.openstack.org/pipermail/openstack-dev/2014-November/050959.html <http://lists.openstack.org/pipermail/openstack-dev/2014-November/050959.html>
>[2] 
>https://github.com/openstack/nova/search?l=python&q=provider&utf8=%E2%9C%93 <https://github.com/openstack/nova/search?l=python&q=provider&utf8=%E2%9C%93>
>[3] 
>https://github.com/openstack/nova/blob/b0854ba0c697243aa3d91170d1a22896aed60e02/nova/conductor/rpcapi.py#L215 <https://github.com/openstack/nova/blob/b0854ba0c697243aa3d91170d1a22896aed60e02/nova/conductor/rpcapi.py#L215>
> 
> 
> 
>

From chdent at redhat.com  Tue Sep  1 10:23:51 2015
From: chdent at redhat.com (Chris Dent)
Date: Tue, 1 Sep 2015 11:23:51 +0100 (BST)
Subject: [openstack-dev] [api] [wsme] [ceilometer] Replacing WSME with
 _____	?
In-Reply-To: <alpine.OSX.2.11.1508281513570.47341@seed.local>
References: <alpine.OSX.2.11.1508281513570.47341@seed.local>
Message-ID: <alpine.OSX.2.11.1509011120070.75349@seed.local>

On Fri, 28 Aug 2015, Chris Dent wrote:

> The problem with the spec is that it doesn't know what to replace
> WSME with.

Thanks to everyone who provided some input.

The summary is: A lot of support for Flask. A fair bit of support
for the idea of using JSONSchema and publishing it.

Since this isn't a new API, we may find that using Flask to exactly
replicate the existing API is not possible, but if it is possible, Flask
seems like a good candidate.
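To illustrate the "publish JSONSchema" idea in a framework-agnostic way, here is a minimal sketch: the schema is plain data that could be served to API clients, and the same structure drives request validation. The field names and the hand-rolled validator are purely illustrative, not taken from the actual Ceilometer API (a real implementation would use the jsonschema library).

```python
# Illustrative schema; field names are hypothetical, not Ceilometer's real API.
METER_SCHEMA = {
    "type": "object",
    "required": ["name", "unit"],
    "properties": {
        "name": {"type": "string"},
        "unit": {"type": "string"},
        "volume": {"type": "number"},
    },
}

# Map JSON Schema type names to Python types (tiny subset for the sketch).
_TYPES = {"string": str, "number": (int, float), "object": dict}

def validate(document, schema):
    """Check required keys and property types against a schema subset."""
    errors = []
    if not isinstance(document, _TYPES[schema["type"]]):
        return ["document is not an object"]
    for key in schema.get("required", []):
        if key not in document:
            errors.append("missing required property: %s" % key)
    for key, subschema in schema.get("properties", {}).items():
        if key in document and not isinstance(document[key],
                                              _TYPES[subschema["type"]]):
            errors.append("wrong type for property: %s" % key)
    return errors
```

Because the schema is just a dict, publishing it to clients is a one-line JSON dump, which is the main attraction of the approach.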

Thanks again.

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From ikalnitsky at mirantis.com  Tue Sep  1 10:39:24 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Tue, 1 Sep 2015 13:39:24 +0300
Subject: [openstack-dev] [Fuel] Number of IP addresses in a public
	network
In-Reply-To: <BLU436-SMTP3251328352C799BF53C469AD6B0@phx.gbl>
References: <BLU436-SMTP3251328352C799BF53C469AD6B0@phx.gbl>
Message-ID: <CACo6NWBUkvQckcYe8JV8NgbciW1Kj7t+Qu9rE21iEbY-KsNp2g@mail.gmail.com>

Hello,

My 5 cents on it.

I don't think it's really a High or Critical bug for 7.0. If there are
not enough IPs, the CheckBeforeDeploymentTask will fail. And that's
actually OK: it may fail for a different reason without starting the actual
deployment (sending the message to Astute).

But I agree it's kinda strange that we don't check IPs during the network
verification step. The good fix in my opinion is to move this check
into the network checker (perhaps keeping it here as well), but that
definitely shouldn't be done in 7.0.
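The check being discussed is essentially arithmetic over IP ranges. A minimal sketch of what moving it into the network checker might look like, using the stdlib ipaddress module; the function names and the `nodes + VIPs` accounting are assumptions for illustration, not Fuel's actual code:

```python
import ipaddress

def addresses_available(ip_ranges):
    """Count usable addresses across [(start, end)] IP ranges (inclusive)."""
    total = 0
    for start, end in ip_ranges:
        total += (int(ipaddress.ip_address(end))
                  - int(ipaddress.ip_address(start)) + 1)
    return total

def check_public_network(ip_ranges, nodes_count, vips_count):
    """Return a warning string if the ranges cannot fit all nodes and VIPs."""
    required = nodes_count + vips_count
    available = addresses_available(ip_ranges)
    if available < required:
        return ("Public network has %d usable addresses, but %d are "
                "required (%d nodes + %d VIPs)"
                % (available, required, nodes_count, vips_count))
    return None
```

Running such a check at network-verification time would surface the shortage before deployment starts, which is the behavior the bug asks for.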

Thanks,
Igor


On Mon, Aug 31, 2015 at 2:54 PM, Roman Prykhodchenko <me at romcheg.me> wrote:
> Hi folks!
>
> Recently a problem was reported that the network check does not tell
> whether there are enough IP addresses in a public network [1]. That check
> is performed by the CheckBeforeDeployment task, but two problems happen
> because this verification is done that late:
>
>  - A deployment fails if there are not enough addresses in the specified ranges
>  - If a user wants to get the network configuration, they will get an error
>
> The solution for these problems seems easy, and a straightforward patch [2]
> was proposed. However, there is a hidden problem that the patch does not
> address: installed plugins may reserve VIPs for their needs. The issue is
> that they do it just before deployment, so it's not possible to get those
> reservations when a user wants to check their network setup.
>
> The important issue we have to address here is that the network
> configuration generator will fail if the specified ranges don't fit all
> VIPs. There were several proposals to fix that; I'd like to highlight two
> of them:
>
>  a) Allow VIPs to not have an IP address assigned if the network config
> generator works for API output.
>      That will prevent GET requests from failing, but since IP addresses
> for VIPs are required, the generator will still have to fail if it
> generates a configuration for the orchestrator.
>  b) Add a release note that users have to calculate IP addresses manually
> and put in sane ranges to avoid shooting themselves in the foot. Then it's
> also possible to change the network verification output to remind users to
> check the ranges before starting a deployment.
>
> In my opinion we cannot follow (a) because it only masks the problem
> instead of providing a fix. It also requires changing the API, which is
> not a good thing to do after the SCF. If we choose (b), then we can work
> on a firm solution in 8.0 and fix the problem for real.
>
>
> P. S. We can still merge [2], because it checks whether the IP ranges can
> at least fit the basic configuration. If you agree, I will update it soon.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1487996
> [2] https://review.openstack.org/#/c/217267/
>
>
>
> - romcheg
>
>
>
>
>


From mangelajo at redhat.com  Tue Sep  1 10:42:07 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Tue, 01 Sep 2015 12:42:07 +0200
Subject: [openstack-dev] [neutron][db] reviewers: please mind the branch
 a script belongs to
In-Reply-To: <A0ADAA86-9762-4734-98E9-FF29CD617653@redhat.com>
References: <A0ADAA86-9762-4734-98E9-FF29CD617653@redhat.com>
Message-ID: <55E580FF.4080206@redhat.com>

Good reminder. I believe automation will help us most of the time, but we
need to keep a good eye on contract/expand branches.
Ihar Hrachyshka wrote:
> Hi reviewers,
>
> several days ago, a semantically expand-only migration script was merged into the contract branch [1]. This is not a disaster, though it would be a tiny one if a contract-only migration script were merged into the expand branch.
>
> Please make sure you know the new migration strategy described in [2].
>
> Previously, we introduced a check that validates that we don't mix down_revision heads, linking e.g. an expand script to a contract revision, or vice versa [3]. Apparently, it's not enough.
>
> Ann is looking into introducing another check for the semantic correctness of scripts. I don't believe it can work for all the complex cases we may need to solve manually, but at least it should be able to catch add_* operations in contract scripts, or drop_* operations in the expand branch. Since there may be exceptions to general automation, we may also need a mechanism to disable such a sanity check for specific scripts.
>
> So all in all, I kindly ask everyone to become aware of how we now manage migration scripts, and what it implies for how we should review code (e.g. looking at paths as well as the code of alembic scripts). That is especially important until the test that Ann is looking to implement is merged.
>
> [1]: https://bugs.launchpad.net/neutron/+bug/1490767
> [2]: http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
> [3]: https://review.openstack.org/#/c/206746/
>
> Thanks
> Ihar
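A naive version of the sanity check discussed above might simply scan a migration script's source for operations that do not belong in its branch. This is only a sketch under stated assumptions: real Alembic scripts would better be inspected through their operation objects, and the lists of expand-only and contract-only operations here are illustrative, not Neutron's actual policy.

```python
import re

# Operations assumed to belong to only one branch; a real check would
# inspect alembic operation objects rather than grep raw source text.
EXPAND_ONLY = re.compile(r"op\.(add_column|create_table|create_index)\b")
CONTRACT_ONLY = re.compile(r"op\.(drop_column|drop_table|drop_index)\b")

def branch_violations(script_source, branch):
    """Return op calls in the script that are forbidden for its branch."""
    forbidden = CONTRACT_ONLY if branch == "expand" else EXPAND_ONLY
    return forbidden.findall(script_source)
```

A check like this would have flagged the expand-only script merged into the contract branch, at the cost of needing a per-script escape hatch for legitimate exceptions.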


From gkotton at vmware.com  Tue Sep  1 10:42:59 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Tue, 1 Sep 2015 10:42:59 +0000
Subject: [openstack-dev] [nova][vmware] compute log files are flooded with
	unnecessary data
Message-ID: <D20B5BDD.BAFBD%gkotton@vmware.com>

Hi,
The commit https://github.com/openstack/nova/commit/bcc002809894f39c84dca5c46034468ed0469c2b results in a ton of data in log files. For example, a tempest run produces about 80G of data in the nova compute log file. This makes debugging issues terribly difficult.
I have proposed a revert of this in - https://review.openstack.org/219225
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/d70ce537/attachment.html>

From ikalnitsky at mirantis.com  Tue Sep  1 10:43:49 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Tue, 1 Sep 2015 13:43:49 +0300
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
	issues
In-Reply-To: <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
Message-ID: <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>

Hi folks,

So basically..

* core reviewers won't be feature leads anymore
* core reviewers won't be assigned to features (or at least not full-time)
* core reviewers will spend time doing reviews and participating in design meetings
* core reviewers will spend time triaging bugs

Is that correct?

Thanks,
Igor

On Sun, Aug 30, 2015 at 2:29 AM, Tomasz Napierala
<tnapierala at mirantis.com> wrote:
>> On 27 Aug 2015, at 07:58, Evgeniy L <eli at mirantis.com> wrote:
>>
>> Hi Mike,
>>
>> I have several comments.
>>
>> >> SLA should be the driver of doing timely reviews, however we can't allow fast-tracking code into master at the expense of review quality ...
>>
>> As for me, the idea of an SLA contradicts qualitative reviews.
>
> We expect cores to be less loaded after this change, so you guys should have more time to spend on the right reviews, and not minor stuff. We hope this will also help keep the SLAs.
>
>> Another thing is I got a bit confused by the difference between Core Reviewer and Component Lead;
>> aren't those the same persons? Shouldn't every Core Reviewer know the architecture and best practices
>> and participate in design architecture sessions?
>
> Not really. You can have many core reviewers, but there should be one component lead. Currently, while Fuel is monolithic, we cannot enforce it in a technical way. But if we succeed in splitting Fuel into smaller projects, a component lead will be responsible for (most likely) one repo.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Product Engineering - Poland
>
>


From gkotton at vmware.com  Tue Sep  1 10:49:21 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Tue, 1 Sep 2015 10:49:21 +0000
Subject: [openstack-dev] [nova] periodic task
In-Reply-To: <55E49B55.2000602@linux.vnet.ibm.com>
References: <D20121C2.B8B51%gkotton@vmware.com>
 <55DC75DE.4070709@linux.vnet.ibm.com> <D201D0BF.B9219%gkotton@vmware.com>
 <55DC938B.6030801@linux.vnet.ibm.com> <D201FC83.B9349%gkotton@vmware.com>
 <20150825184311.GF3226@crypt> <D20486D7.B9B04%gkotton@vmware.com>
 <55E49B55.2000602@linux.vnet.ibm.com>
Message-ID: <D20B5C68.BAFC0%gkotton@vmware.com>



On 8/31/15, 9:22 PM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com> wrote:

>
>
>On 8/27/2015 1:22 AM, Gary Kotton wrote:
>> 
>> 
>> On 8/25/15, 2:43 PM, "Andrew Laski" <andrew at lascii.com> wrote:
>> 
>>> On 08/25/15 at 06:08pm, Gary Kotton wrote:
>>>>
>>>>
>>>> On 8/25/15, 9:10 AM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com>
>>>>wrote:
>>>>
>>>>>
>>>>>
>>>>> On 8/25/2015 10:03 AM, Gary Kotton wrote:
>>>>>>
>>>>>>
>>>>>> On 8/25/15, 7:04 AM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 8/24/2015 9:32 PM, Gary Kotton wrote:
>>>>>>>> In item #2 below the reboot is done via the guest and not the nova
>>>>>>>> APIs :)
>>>>>>>>
>>>>>>>> From: Gary Kotton <gkotton at vmware.com <mailto:gkotton at vmware.com>>
>>>>>>>> Reply-To: OpenStack List <openstack-dev at lists.openstack.org
>>>>>>>> <mailto:openstack-dev at lists.openstack.org>>
>>>>>>>> Date: Monday, August 24, 2015 at 7:18 PM
>>>>>>>> To: OpenStack List <openstack-dev at lists.openstack.org
>>>>>>>> <mailto:openstack-dev at lists.openstack.org>>
>>>>>>>> Subject: [openstack-dev] [nova] periodic task
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>> A couple of months ago I posted a patch for bug
>>>>>>>> https://launchpad.net/bugs/1463688. The issue is as follows: the
>>>>>>>> periodic task detects that the instance state does not match the
>>>>>>>> state
>>>>>>>> on the hypervisor and it shuts down the running VM. There are a
>>>>>>>> number
>>>>>>>> of ways that this may happen and I will try and explain:
>>>>>>>>
>>>>>>>>    1. Vmware driver example: a host where the instances are
>>>>>>>>running
>>>>>>>> goes
>>>>>>>>       down. This could be a power outage, host failure, etc. The
>>>>>>>> first
>>>>>>>>       iteration of the periodic task will determine that the actual
>>>>>>>>       instance is down. This will update the state of the
>>>>>>>>instance to
>>>>>>>>       DOWN. The VC has the ability to do HA and it will start the
>>>>>>>> instance
>>>>>>>>       up and running again. The next iteration of the periodic
>>>>>>>>task
>>>>>>>> will
>>>>>>>>       determine that the instance is up and the compute manager
>>>>>>>>will
>>>>>>>> stop
>>>>>>>>       the instance.
>>>>>>>>    2. All drivers. The tenant decides to do a reboot of the
>>>>>>>>instance
>>>>>>>> and
>>>>>>>>       that coincides with the periodic task state validation. At
>>>>>>>>this
>>>>>>>>       point in time the instance will not be up and the compute
>>>>>>>>node
>>>>>>>> will
>>>>>>>>       update the state of the instance as DOWN. Next iteration the
>>>>>>>> states
>>>>>>>>       will differ and the instance will be shutdown
>>>>>>>>
>>>>>>>> Basically the issue hit us with our CI and there was no CI running
>>>>>>>> for a
>>>>>>>> couple of hours due to the fact that the compute node decided to
>>>>>>>> shutdown the running instances. The hypervisor should be the
>>>>>>>>source
>>>>>>>> of
>>>>>>>> truth and it should not be the compute node that decides to
>>>>>>>>shutdown
>>>>>>>> instances. I posted a patch to deal with this
>>>>>>>> https://review.openstack.org/#/c/190047/. Which is the reason for
>>>>>>>> this
>>>>>>>> mail. The patch is backwards compatible so that the existing
>>>>>>>> deployments
>>>>>>>> and random shutdown continues as it works today and the admin now
>>>>>>>> has
>>>>>>>> an
>>>>>>>> ability just to do a log if there is a inconsistency.
>>>>>>>>
>>>>>>>> We do not want to disable the periodic task as knowing the current
>>>>>>>> state
>>>>>>>> of the instance is very important and has a ton of value, we just
>>>>>>>>do
>>>>>>>> not
>>>>>>>> want the periodic to task to shut down a running instance.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Gary
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> In #2 the guest shouldn't be rebooted by the user (tenant) outside
>>>>>>>of
>>>>>>> the nova-api.  I'm not sure if it's actually formally documented in
>>>>>>> the
>>>>>>> nova documentation, but from what I've always heard/known, nova is
>>>>>>> the
>>>>>>> control plane and you should be doing everything with your
>>>>>>>instances
>>>>>>> via
>>>>>>> the nova-api.  If the user rebooted via nova-api, the task_state
>>>>>>> would
>>>>>>> be set and the periodic task would ignore the instance.
>>>>>>
>>>>>> Matt, this is one case that I showed where the problem occurs. There
>>>>>> are
>>>>>> others and I can invest time to see them. The fact that the periodic
>>>>>> task
>>>>>> is there is important. What I don't understand is why having an
>>>>>>option
>>>>>> of
>>>>>> log indication for an admin is something that is not useful and
>>>>>> instead
>>>>>> we
>>>>>> are going with having the compute node shutdown instance when this
>>>>>> should
>>>>>> not happen. Our infrastructure is behaving like cattle. That should
>>>>>> not
>>>>>> be
>>>>>> the case and the hypervisor should be the source of truth.
>>>>>>
>>>>>> This is a serious issue and instances in production can and will go
>>>>>> down.
>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Matt Riedemann
>>>>>>>
>>>>>>>
>>>>>
>>>>> For the HA case #1, the periodic task checks to see if the
>>>>>instance.host
>>>>> doesn't match the compute service host [1] and skips if they don't
>>>>> match.
>>>>>
>>>>> Shouldn't your HA scenario be updating which host the instance is
>>>>> running on?  Or is this a vCenter-ism?
>>>>
>>>> The nova compute node has not changed. It is not the compute nodes
>>>>host.
>>>> The host that the instance was running on was down and those instances
>>>> were moved.
>>>
>>> So this is a case where a single compute node is managing multiple
>>> hypervisors?  It sounds like there is an assumption being made in the
>>> periodic task that doesn't hold true for the VMware driver, that a
>>> request for the power state of an instance would fail if the host was
>>> down.  This may be a better fix here: to not sync the state if the host
>>> is down.
>>>
>>>
>>>
>>>>
>>>> For libvirt the same issues could happen if a process goes down and is
>>>> restarted (there may be some race conditions). But I am not familiar
>>>> enough with the ins and outs there. Just the fact that suggesting in
>>>>some
>>>> cases that people disable the periodic task indicates that this too
>>>>is an
>>>> issue.
>>>>
>>>> But seriously, we need this and the change is non-intrusive,
>>>> configurable and backwards compatible. Honestly I see no reason why
>>>> this is being blocked.
>>>
>>> The change seems to be under discussion here because this is adding
>>>more
>>> complexity to an already quite complex method.  I believe the desire is
>>> to find a model that simplifies, or at least doesn't add to the
>>> complexity of, the way that syncs are handled.
>> 
>> I am not sure I understand what extra complexity is being added here -
>>the
>> patch in review just logs a message to the log file instead of stopping
>>a
>> running instance.
>> 
>> How do you guys suggest that we move forwards with this. At the moment
>>the
>> code is blocked and this is a real problem in deployment.
>> 
>> BTW I do not think that this is specific for the Vmware driver - it is
>> just that we hit it first :)
>> 
>>>
>>>>
>>>>
>>>>>
>>>>> [1]
>>>>> 
>>>>>http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager
>>>>>.p
>>>>> y#
>>>>> n5871
>>>>>
>>>>> --
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Matt Riedemann
>>>>>
>>>>>
>
>I think ideally the virt driver should determine that there is some
>background task going on with the instance (like the HA case of it
>migrating hosts in the VC cluster) and signal to the task in the compute
>manager that that instance should be skipped (basically the same logic
>as if instance.task_state is not None).  Barring that, I've proposed an
>alternative solution for what I think is the root issue you're having:
>
>https://review.openstack.org/#/c/218975/

I have a number of issues with this approach:
1. Say there is the edge case where the user does a reboot and the periodic
task detects that the instance is down. Then it will be randomly rebooted at
the next periodic task iteration.
2. If we are going to go the route of a configuration variable, then I do
not think that using an existing variable is the correct approach. That
may affect workloads for, say, libvirt.
3. What about having a new configuration variable indicating the action that
should be taken? For example:
	- stop
	- reboot
	- continue
Let's at least give the admin the power to decide how the guests should
behave.

I would really hate to have a user being in the middle of an operation and
suddenly the instance is rebooted.
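The configurable-action proposal above amounts to a small dispatch around the power-state sync. A minimal sketch follows; the option name, the driver interface, and the "log" default are assumptions for illustration, not Nova's actual configuration or code:

```python
import logging

LOG = logging.getLogger(__name__)

# Hypothetical config choices: what to do when the database power state
# disagrees with what the hypervisor reports for an instance.
SYNC_ACTIONS = ("log", "stop", "reboot")

def handle_power_state_mismatch(instance, driver, action="log"):
    """Dispatch on the configured action instead of always stopping."""
    if action not in SYNC_ACTIONS:
        raise ValueError("unknown sync_power_state_action: %s" % action)
    if action == "log":
        # Leave the instance alone; the hypervisor stays the source of truth.
        LOG.warning("Instance %s power state mismatch; taking no action",
                    instance)
        return "logged"
    if action == "stop":
        driver.power_off(instance)
        return "stopped"
    driver.reboot(instance)
    return "rebooted"
```

With the default set to today's behavior, such a knob would stay backwards compatible while letting operators opt out of automatic shutdowns.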

Thanks
Gary

>
>-- 
>
>Thanks,
>
>Matt Riedemann
>
>


From sean at dague.net  Tue Sep  1 11:17:56 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 1 Sep 2015 07:17:56 -0400
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
Message-ID: <55E58964.3060701@dague.net>

The current failure rate for the ironic pxe_ssh job is 100% -
http://graphite.openstack.org/render/?from=-200hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.{SUCCESS,FAILURE})),%2712hours%27),%20%27gate-tempest-dsvm-ironic-pxe_ssh%27),%27orange%27)

The reason is something wrong with diskimage-builder and upstream Ubuntu.

Which raises a much more pressing issue: why is an ironic integration
job building a disk-image-builder image from scratch on every run (and
connecting to the internet to do it)? Especially as this job sits on a
bunch of other projects beyond ironic. Architecturally this is not sound
enough to be a voting job.

I'm proposing we make it non-voting immediately, and until it's redone
so it's no longer dependent on pulling images directly from upstream, we
don't let it be voting.

	-Sean

-- 
Sean Dague
http://dague.net


From ociuhandu at cloudbasesolutions.com  Tue Sep  1 11:20:31 2015
From: ociuhandu at cloudbasesolutions.com (Octavian Ciuhandu)
Date: Tue, 1 Sep 2015 11:20:31 +0000
Subject: [openstack-dev] FW: [cinder] Microsoft CI Still Disabled
In-Reply-To: <8B57756E-0226-4A97-B8A7-7A47F382687C@microsoft.com>
References: <CAHcn5b1UUkK79Hf3t8=78XeeEUR_yzBDAkvU1gwJxC8jCXCLnQ@mail.gmail.com>
 <b3332c8378a94201a81cb73641285c19@DFM-CO1MBX15-06.exchange.corp.microsoft.com>
 <8B57756E-0226-4A97-B8A7-7A47F382687C@microsoft.com>
Message-ID: <9CA7F646-0713-485A-B346-31AEA76E7311@cloudbasesolutions.com>

Hi Mike,

I think the first mail was sent only to openstack-dev. We have already had
over a week of reliable results on the Microsoft Cinder CI on all three
jobs, and we have an updated set of results available at [1].

Please re-evaluate the activity of our CI and re-enable the gerrit account.

Thank you,

Octavian.

[1]: http://paste.openstack.org/show/437509/





On 24/8/15 20:19, "Octavian Ciuhandu" <ociuhandu at cloudbasesolutions.com> wrote:

>Hi Mike,
>
>We have stabilised the Microsoft Cinder CI and ensured dry-runs testing over
>the past days.
>
>The reason why we didn't report earlier on the status is that we were waiting 
>for the completion of a series of hardware maintenance activities performed in 
>the Microsoft datacenter where the CI is running, as mentioned by Hashir in the
>previous email.
>
>In the meantime we also improved the CI software infrastructure, including:
>
>1. Adding local pip and deb cache servers to reduce transient package
>download issues
>2. Improved resiliency in the under- and overcloud deployment scripts [1]
>3. Improved monitoring to reduce the response time on transient issues,
>especially for undercloud networking, which was the source of a significant
>amount of failures due to a switch misconfiguration in the datacenter,
>recently identified and solved.
>
>We have a list of recent runs for all 3 drivers available at [2].
>
>Please let us know if there's anything else that prevents re-enabling the 
>Microsoft Cinder CI account.
>
>Thank you,
>
>Octavian.
>
>[1]: https://github.com/cloudbase/cinder-ci
>[2]: http://paste.openstack.org/show/426144/
>
>
>
>
>
>
>On 24/8/15 16:48, "Hashir Abdi" <habdi at microsoft.com> wrote:
>
>>Mike:
>>
>>Apologies for the delayed response.
>>
>>We have been addressing bug-fixes and infrastructure changes for our CI, while working through vacation schedules.
>>
>>We have now been able to conduct successful dry runs of our CI, and I'll let Octavian follow up here with those latest successful results.
>>
>>Sincerely
>>
>>Hashir Abdi
>>
>>Principal Software Engineering Manager
>>Microsoft
>>
>>
>>
>>
>>
>>
>>
>>-----Original Message-----
>>From: Mike Perez [mailto:thingee at gmail.com]
>>Sent: Friday, August 21, 2015 8:39 PM
>>To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
>>Cc: Microsoft Cinder CI <Microsoft_Cinder_CI at microsoft.com>; Octavian Ciuhandu <ociuhandu at cloudbasesolutions.com>
>>Subject: [cinder] Microsoft CI Still Disabled
>>
>>The Microsoft CI has been disabled since July 22nd [1].
>>
>>Last I heard from the Cinder midcycle sprint, this CI was still not ready and it hasn't been for 30 days now.
>>
>>Where are we with things, and why has communication been so poor with the Cloud Base solutions team?
>>
>>[1] - https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2flists.openstack.org%2fpipermail%2fthird-party-announce%2f2015-July%2f000249.html&data=01%7c01%7chabdi%40exchange.microsoft.com%7c33ba5a82486e4dc6d06508d2aa8a115d%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=mPalAeUjUVBu9sVn6fDnehA4TN3uZOUB4DAKS6qcFaU%3d
>>
>>--
>>Mike Perez

From lucasagomes at gmail.com  Tue Sep  1 11:32:24 2015
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Tue, 1 Sep 2015 12:32:24 +0100
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <55E58964.3060701@dague.net>
References: <55E58964.3060701@dague.net>
Message-ID: <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>

Hi,

> The current failure rate for the ironic pxe_ssh job is 100% -
> http://graphite.openstack.org/render/?from=-200hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.{SUCCESS,FAILURE})),%2712hours%27),%20%27gate-tempest-dsvm-ironic-pxe_ssh%27),%27orange%27)
>
> The reason is something wrong with disk image builder and upstream ubuntu.
>
> Which raises a much more pressing issue, why is an ironic integration
> job building, from scratch a disk image builder image on every go (and
> connecting to the internet to do it)? Especially as this job sits on a
> bunch of other projects beyond ironic. Architecturally this is not sound
> enough to be a voting job.
>
> I'm proposing we make it non-voting immediately, and until it's redone
> so it's no longer dependent on pulling images directly from upstream, we
> don't let it be voting.
>

Yeah, I had a little time this morning, so I put up a potential fix to
test whether it would solve the problem:
https://review.openstack.org/#/c/219199/

This is one of the main jobs for Ironic and it would be great if we
could keep it voting. Could we perhaps change the base OS to something
else until Ubuntu is fixed ? ( Fedora / CentOS / Debian )

Cheers,
Lucas


From snikitin at mirantis.com  Tue Sep  1 11:39:57 2015
From: snikitin at mirantis.com (Sergey Nikitin)
Date: Tue, 1 Sep 2015 14:39:57 +0300
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
References: <55E58964.3060701@dague.net>
 <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
Message-ID: <CAM25k1S1gFrtuAqqFqsQhQpY_cy41J7s5Xc60PE4H6C2tm-Zng@mail.gmail.com>

Maybe the problem is in the Ubuntu hash sums. I tried to compare the hash
sums of the images from http://cloud-images.ubuntu.com/trusty/current/ but
the image names in the SHA256SUMS file differ from the actual image names.

The pxe_ssh job is failing with:

+ grep trusty-server-cloudimg-amd64-root.tar.gz
/opt/stack/new/.cache/image-create/SHA256SUMS.ubuntu.trusty.amd64
2015-09-01 09:01:01.512 | + sha256sum --check -
2015-09-01 09:01:01.514 | sha256sum: standard input: no properly formatted
SHA256 checksum lines found

I guess it couldn't grep the name of this image from the SHA256SUMS file. I
pinged the Ubuntu guys in IRC #ubuntu-devel. They said it sounds like a bug.
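For anyone who wants to reproduce the failure mode locally, here is a minimal sketch (temp files only; the real devstack cache paths are not reproduced): when the grepped image name is absent from the SHA256SUMS file, grep emits nothing, and `sha256sum --check -` fails just like the gate log above.

```shell
# Build a SHA256SUMS file whose entry uses a *different* image name than
# the one the job greps for, mimicking the renamed Ubuntu cloud images.
sums=$(mktemp)
img=$(mktemp)
echo "example image contents" > "$img"
echo "$(sha256sum "$img" | awk '{print $1}')  other-name.tar.gz" > "$sums"

# grep finds no matching line, so sha256sum receives empty input and
# reports that no properly formatted checksum lines were found.
if grep 'trusty-server-cloudimg-amd64-root.tar.gz' "$sums" | sha256sum --check -; then
    result=ok
else
    result=mismatch
fi
echo "$result"
rm -f "$sums" "$img"
```

So the checksum data itself may be fine; a rename of the published images is enough to break the check.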


2015-09-01 14:32 GMT+03:00 Lucas Alvares Gomes <lucasagomes at gmail.com>:

> Hi,
>
> > The current failure rate for the ironic pxe_ssh job is 100% -
> >
> http://graphite.openstack.org/render/?from=-200hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.{SUCCESS,FAILURE})),%2712hours%27),%20%27gate-tempest-dsvm-ironic-pxe_ssh%27),%27orange%27)
> >
> > The reason is something wrong with disk image builder and upstream
> ubuntu.
> >
> > Which raises a much more pressing issue, why is an ironic integration
> > job building, from scratch, a disk image builder image on every go (and
> > connecting to the internet to do it)? Especially as this job sits on a
> > bunch of other projects beyond ironic. Architecturally this is not sound
> > enough to be a voting job.
> >
> > I'm proposing we make it non-voting immediately, and until it's redone
> > so it's no longer dependent on pulling images directly from upstream, we
> > don't let it be voting.
> >
>
> Yeah, I had a little time this morning, so I put up a potential fix to
> test whether it would solve the problem:
> https://review.openstack.org/#/c/219199/
>
> This is one of the main jobs for Ironic and it would be great if we
> could keep it voting. Could we perhaps change the base OS to something
> else until Ubuntu is fixed ? ( Fedora / CentOS / Debian )
>
> Cheers,
> Lucas
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/157bc65c/attachment.html>

From vryzhenkin at mirantis.com  Tue Sep  1 11:47:24 2015
From: vryzhenkin at mirantis.com (Victor Ryzhenkin)
Date: Tue, 1 Sep 2015 14:47:24 +0300
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
 <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
Message-ID: <etPan.55e5904c.61791e85.14d@pegasus.local>

+1 from me ;)

--
Victor Ryzhenkin
Junior QA Engineer
freerunner on #freenode

On 1 September 2015 at 12:18:19, Ekaterina Chernova (efedorova at mirantis.com) wrote:

+1

On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii <ddovbii at mirantis.com> wrote:
+1

2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:
+1

On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com> wrote:
I'm pleased to nominate Nikolai for Murano core.

He's been actively participating in development of murano during liberty and is among the top 5 contributors during the last 90 days. He's also leading the CloudFoundry integration initiative.

Here are some useful links:

Overall contribution: http://stackalytics.com/?user_id=starodubcevna
List of reviews: https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
Murano contribution during latest 90 days: http://stackalytics.com/report/contribution/murano/90

Please vote with +1/-1 for approval/objections

--
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelikyan at mirantis.com

+7 (495)?640-4904, 0261
+7 (903) 156-0836

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/168f0d58/attachment.html>

From duncan.thomas at gmail.com  Tue Sep  1 11:48:27 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 1 Sep 2015 14:48:27 +0300
Subject: [openstack-dev] FW: [cinder] Microsoft CI Still Disabled
In-Reply-To: <9CA7F646-0713-485A-B346-31AEA76E7311@cloudbasesolutions.com>
References: <CAHcn5b1UUkK79Hf3t8=78XeeEUR_yzBDAkvU1gwJxC8jCXCLnQ@mail.gmail.com>
 <b3332c8378a94201a81cb73641285c19@DFM-CO1MBX15-06.exchange.corp.microsoft.com>
 <8B57756E-0226-4A97-B8A7-7A47F382687C@microsoft.com>
 <9CA7F646-0713-485A-B346-31AEA76E7311@cloudbasesolutions.com>
Message-ID: <CAOyZ2aFBGraWzsRNqxVGQVJR1pBb+Xm9UWS7W6RC0cJrkcSd2w@mail.gmail.com>

Mike is on vacation at the moment; somebody else will evaluate in his stead
and reply here shortly.

On 1 September 2015 at 14:20, Octavian Ciuhandu <
ociuhandu at cloudbasesolutions.com> wrote:

> Hi Mike,
>
> I think the first mail was sent only on openstack-dev. We have had already
> over
> a week of reliable results on the Microsoft Cinder CI on all three jobs,
> we have
> an updated set of results available at [1].
>
> Please re-evaluate the activity of our CI and re-enable the gerrit account.
>
> Thank you,
>
> Octavian.
>
> [1]: http://paste.openstack.org/show/437509/
>
>
>
>
>
> On 24/8/15 20:19, "Octavian Ciuhandu" <ociuhandu at cloudbasesolutions.com>
> wrote:
>
> >Hi Mike,
> >
> >We have stabilised the Microsoft Cinder CI and ensured dry-run testing
> over
> >the past days.
> >
> >The reason why we didn't report earlier on the status is that we were
> waiting
> >for the completion of a series of hardware maintenance activities
> performed in
> >the Microsoft datacenter where the CI is running, as mentioned by Hashir
> in the
> >previous email.
> >
> >In the meantime we also improved the CI software infrastructure,
> including:
> >
> >1. Adding local pip and deb cache servers to reduce transient package
> download
> >issues 2. Improved resiliency in the under and overcloud deployment
> scripts [1]
> >3. Improved monitoring to reduce the response time on transient issues,
> >especially for undercloud networking which was the source of a significant
> >amount of failures due to a switch misconfiguration in the datacenter,
> recently
> >identified and solved.
> >
> >We have a list of recent runs for all 3 drivers available at [2].
> >
> >Please let us know if there's anything else that prevents re-enabling the
> >Microsoft Cinder CI account.
> >
> >Thank you,
> >
> >Octavian.
> >
> >[1]: https://github.com/cloudbase/cinder-ci
> >[2]: http://paste.openstack.org/show/426144/
> >
> >
> >
> >
> >
> >
> >On 24/8/15 16:48, "Hashir Abdi" <habdi at microsoft.com> wrote:
> >
> >>Mike:
> >>
> >>Apologies for the delayed response.
> >>
> >>We have been addressing bug-fixes and infrastructure changes for our CI,
> while working through vacation schedules.
> >>
> >>We have now been able to conduct successful dry runs of our CI, and I'll
> let Octavian follow up here with those latest successful results.
> >>
> >>Sincerely
> >>
> >>Hashir Abdi
> >>
> >>Principal Software Engineering Manager
> >>Microsoft
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>-----Original Message-----
> >>From: Mike Perez [mailto:thingee at gmail.com]
> >>Sent: Friday, August 21, 2015 8:39 PM
> >>To: OpenStack Development Mailing List <
> openstack-dev at lists.openstack.org>
> >>Cc: Microsoft Cinder CI <Microsoft_Cinder_CI at microsoft.com>; Octavian
> Ciuhandu <ociuhandu at cloudbasesolutions.com>
> >>Subject: [cinder] Microsoft CI Still Disabled
> >>
> >>The Microsoft CI has been disabled since July 22nd [1].
> >>
> >>Last I heard from the Cinder midcycle sprint, this CI was still not
> ready and it hasn't been for 30 days now.
> >>
> >>Where are we with things, and why has communication been so poor with
> the Cloud Base solutions team?
> >>
> >>[1] - http://lists.openstack.org/pipermail/third-party-announce/2015-July/000249.html
> >>
> >>--
> >>Mike Perez
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/999e5dd0/attachment.html>

From duncan.thomas at gmail.com  Tue Sep  1 11:49:06 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 1 Sep 2015 14:49:06 +0300
Subject: [openstack-dev] FW: [cinder] Microsoft CI Still Disabled
In-Reply-To: <9CA7F646-0713-485A-B346-31AEA76E7311@cloudbasesolutions.com>
References: <CAHcn5b1UUkK79Hf3t8=78XeeEUR_yzBDAkvU1gwJxC8jCXCLnQ@mail.gmail.com>
 <b3332c8378a94201a81cb73641285c19@DFM-CO1MBX15-06.exchange.corp.microsoft.com>
 <8B57756E-0226-4A97-B8A7-7A47F382687C@microsoft.com>
 <9CA7F646-0713-485A-B346-31AEA76E7311@cloudbasesolutions.com>
Message-ID: <CAOyZ2aEg54BtPaZFAd+Pn3JBcb4hs1wKOb1CTmB3BLQObDCcCw@mail.gmail.com>

Mike is on vacation at the moment; somebody else will evaluate in his stead.

Hi Mike,

I think the first mail was sent only on openstack-dev. We have had already
over
a week of reliable results on the Microsoft Cinder CI on all three jobs, we
have
an updated set of results available at [1].

Please re-evaluate the activity of our CI and re-enable the gerrit account.

Thank you,

Octavian.

[1]: http://paste.openstack.org/show/437509/





On 24/8/15 20:19, "Octavian Ciuhandu" <ociuhandu at cloudbasesolutions.com>
wrote:

>Hi Mike,
>
>We have stabilised the Microsoft Cinder CI and ensured dry-run testing
over
>the past days.
>
>The reason why we didn't report earlier on the status is that we were
waiting
>for the completion of a series of hardware maintenance activities
performed in
>the Microsoft datacenter where the CI is running, as mentioned by Hashir in
the
>previous email.
>
>In the meantime we also improved the CI software infrastructure, including:
>
>1. Adding local pip and deb cache servers to reduce transient package
download
>issues 2. Improved resiliency in the under and overcloud deployment
scripts [1]
>3. Improved monitoring to reduce the response time on transient issues,
>especially for undercloud networking which was the source of a significant
>amount of failures due to a switch misconfiguration in the datacenter,
recently
>identified and solved.
>
>We have a list of recent runs for all 3 drivers available at [2].
>
>Please let us know if there's anything else that prevents re-enabling the
>Microsoft Cinder CI account.
>
>Thank you,
>
>Octavian.
>
>[1]: https://github.com/cloudbase/cinder-ci
>[2]: http://paste.openstack.org/show/426144/
>
>
>
>
>
>
>On 24/8/15 16:48, "Hashir Abdi" <habdi at microsoft.com> wrote:
>
>>Mike:
>>
>>Apologies for the delayed response.
>>
>>We have been addressing bug-fixes and infrastructure changes for our CI,
while working through vacation schedules.
>>
>>We have now been able to conduct successful dry runs of our CI, and I'll
let Octavian follow up here with those latest successful results.
>>
>>Sincerely
>>
>>Hashir Abdi
>>
>>Principal Software Engineering Manager
>>Microsoft
>>
>>
>>
>>
>>
>>
>>
>>-----Original Message-----
>>From: Mike Perez [mailto:thingee at gmail.com]
>>Sent: Friday, August 21, 2015 8:39 PM
>>To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
>>Cc: Microsoft Cinder CI <Microsoft_Cinder_CI at microsoft.com>; Octavian
Ciuhandu <ociuhandu at cloudbasesolutions.com>
>>Subject: [cinder] Microsoft CI Still Disabled
>>
>>The Microsoft CI has been disabled since July 22nd [1].
>>
>>Last I heard from the Cinder midcycle sprint, this CI was still not ready
and it hasn't been for 30 days now.
>>
>>Where are we with things, and why has communication been so poor with the
Cloud Base solutions team?
>>
>>[1] - http://lists.openstack.org/pipermail/third-party-announce/2015-July/000249.html
>>
>>--
>>Mike Perez
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/71791c91/attachment.html>

From bharath_ves at hotmail.com  Tue Sep  1 11:50:25 2015
From: bharath_ves at hotmail.com (bharath thiruveedula)
Date: Tue, 1 Sep 2015 17:20:25 +0530
Subject: [openstack-dev] Debug tool for neutron
Message-ID: <SNT153-W35C61683E528F2BDF7A127946A0@phx.gbl>

Hi,
We have some troubleshooting guides for OpenStack Neutron, but many people who are new to Neutron find them difficult to follow because they are not aware of what is happening behind the scenes. Is there a tool that tracks the packet flow from the VM, to debug issues such as why the VM is not getting an IP address or why the internet is not reachable from the VM? If so, can you please suggest some?

Regards,
Bharath
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/d339a582/attachment.html>

From tpb at dyncloud.net  Tue Sep  1 11:57:58 2015
From: tpb at dyncloud.net (Tom Barron)
Date: Tue, 1 Sep 2015 07:57:58 -0400
Subject: [openstack-dev] [cinder] L3 low pri review queue starvation
Message-ID: <55E592C6.7070808@dyncloud.net>

[Yesterday while discussing the following issue on IRC, jgriffith
suggested that I post to the dev list in preparation for a discussion in
Wednesday's cinder meeting.]

Please take a look at the 10 "Low" priority reviews in the cinder
Liberty 3 etherpad that were punted to Mitaka yesterday. [1]

Six of these *never* [2] received a vote from a core reviewer. With the
exception of the first in the list, which has 35 patch sets, none of the
others received a vote before Friday, August 28.  Of these, none had
more than -1s on minor issues, and these have been remedied.

Review https://review.openstack.org/#/c/213855 "Implement
manage/unmanage snapshot in Pure drivers" is a great example:

   * approved blueprint for a valuable feature
   * pristine code
   * passes CI and Jenkins (and by the deadline)
   * never reviewed

We have 11 core reviewers, all of whom were very busy doing reviews
during L3, but evidently this set of reviews didn't really have much
chance of making it.  This looks like a classic case where the
individually rational priority decisions of each core reviewer
collectively resulted in starving the Low Priority review queue.

One way to remedy would be for the 11 core reviewers to devote a day or
two to cleaning up this backlog of 10 outstanding reviews rather than
punting all of them out to Mitaka.

Thanks for your time and consideration.

Respectfully,

-- Tom Barron

[1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
[2] At the risk of stating the obvious, in this count I ignore purely
procedural votes such as the final -2.


From ativelkov at mirantis.com  Tue Sep  1 12:03:54 2015
From: ativelkov at mirantis.com (Alexander Tivelkov)
Date: Tue, 1 Sep 2015 15:03:54 +0300
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <etPan.55e5904c.61791e85.14d@pegasus.local>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
 <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
 <etPan.55e5904c.61791e85.14d@pegasus.local>
Message-ID: <CAM6FM9T-VRxqTgSbz3gcyEPA+F-+Hs3qCMqF2EpC85KvvwXvhw@mail.gmail.com>

+1. Well deserved.

--
Regards,
Alexander Tivelkov

On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin <vryzhenkin at mirantis.com>
wrote:

> +1 from me ;)
>
> --
> Victor Ryzhenkin
> Junior QA Engineer
> freerunner on #freenode
>
> On 1 September 2015 at 12:18:19, Ekaterina Chernova (
> efedorova at mirantis.com) wrote:
>
> +1
>
> On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii <ddovbii at mirantis.com>
> wrote:
>
>> +1
>>
>> 2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:
>>
>>> +1
>>>
>>> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com>
>>> wrote:
>>>
>>>> I'm pleased to nominate Nikolai for Murano core.
>>>>
>>>> He's been actively participating in development of murano during
>>>> liberty and is among the top 5 contributors during the last 90 days. He's also
>>>> leading the CloudFoundry integration initiative.
>>>>
>>>> Here are some useful links:
>>>>
>>>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>>>> List of reviews:
>>>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>>> Murano contribution during latest 90 days
>>>> http://stackalytics.com/report/contribution/murano/90
>>>>
>>>> Please vote with +1/-1 for approval/objections
>>>>
>>>> --
>>>> Kirill Zaitsev
>>>> Murano team
>>>> Software Engineer
>>>> Mirantis, Inc
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>>> http://mirantis.com | smelikyan at mirantis.com
>>>
>>> +7 (495) 640-4904, 0261
>>> +7 (903) 156-0836
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/60c5f34d/attachment.html>

From jdennis at redhat.com  Tue Sep  1 12:09:54 2015
From: jdennis at redhat.com (John Dennis)
Date: Tue, 1 Sep 2015 08:09:54 -0400
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3B9061@CERNXCHG44.cern.ch>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3B9061@CERNXCHG44.cern.ch>
Message-ID: <55E59592.5000304@redhat.com>

On 09/01/2015 02:49 AM, Tim Bell wrote:
> Will it also be possible to use a different CA ? In some
> environments, there is already a corporate certificate authority
> server. This would ensure compliance with site security standards.

A configurable CA was one of the original design goals when the Barbican 
work began. I have not tracked the work so I don't know if that is still 
the case, Ade Lee would know for sure.

-- 
John


From shardy at redhat.com  Tue Sep  1 12:41:48 2015
From: shardy at redhat.com (Steven Hardy)
Date: Tue, 1 Sep 2015 13:41:48 +0100
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
Message-ID: <20150901124147.GA4710@t430slt.redhat.com>

On Fri, Aug 28, 2015 at 01:35:52AM +0000, Angus Salkeld wrote:
>    Hi
>    I have been running some rally tests against convergence and our existing
>    implementation to compare.
>    So far I have done the following:
>     1. defined a template with a resource
>        group: https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
>     2. the inner resource looks like
>        this: https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template (it
>        uses TestResource to attempt to be a reasonable simulation of a
>        server+volume+floatingip)
>     3. defined a rally
>        job: https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml that
>        creates X resources then updates to X*2 then deletes.
>     4. I then ran the above with/without convergence and with 2,4,8
>        heat-engines
>    Here are the results compared:
>    https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing
>    Some notes on the results so far:
>      * convergence with only 2 engines does suffer from RPC overload (it
>        gets message timeouts on larger templates). I wonder if this is the
>        problem in our convergence gate...
>      * convergence does very well with a reasonable number of engines
>        running.
>      * delete is slightly slower on convergence
>    Still to test:
>      * the above, but measure memory usage
>      * many small templates (run concurrently)

So, I tried running my many-small-templates stress test here with convergence enabled:

https://bugs.launchpad.net/heat/+bug/1489548

In heat.conf I set:

max_resources_per_stack = -1
convergence_engine = true

Most other settings (particularly RPC and DB settings) are defaults.

Without convergence (but with max_resources_per_stack disabled) I see the
time to create a ResourceGroup of 400 nested stacks (each containing one
RandomString resource) is about 2.5 minutes (core i7 laptop w/SSD, 4 heat
workers, e.g. the default for a 4 core machine).

With convergence enabled, I see these errors from sqlalchemy:

  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 652, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 444, in checkout
    rec = pool._do_get()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 980, in _do_get
    (self.size(), self.overflow(), self._timeout))
TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30

I assume this means we're loading the DB much more in the convergence case
and overflowing the QueuePool?
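If the pool really is the limiting factor, one experiment (an assumption on my part, not a verified fix) would be raising the oslo.db pool limits in heat.conf; the defaults match the "size 5 overflow 10" in the error above:

```ini
[database]
# oslo.db connection pool options (names worth double-checking against
# your release); SQLAlchemy's defaults are pool size 5 / overflow 10,
# which is exactly the limit the TimeoutError reports.
max_pool_size = 20
max_overflow = 40
pool_timeout = 60
```

That only moves the ceiling, of course; the more interesting question is still why convergence issues so many more concurrent DB requests per stack.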

This seems to happen when the RPC call from the ResourceGroup tries to
create some of the 400 nested stacks.

Interestingly, after this error the parent stack moves to CREATE_FAILED,
but the engine remains (very) busy, to the point of being only partially
responsive, so it looks like maybe the cancel-on-fail isn't working (I'm
assuming it isn't error_wait_time, because the parent stack has been marked
FAILED and I'm pretty sure it's been more than 240s).

I'll dig a bit deeper when I get time, but for now you might like to try
the stress test too.  It's a bit of a synthetic test, but it turns out to
be a reasonable proxy for some performance issues we observed when creating
large-ish TripleO deployments (which also create a large number of nested
stacks concurrently).

Steve


From marcos.fermin.lobo at cern.ch  Tue Sep  1 12:50:26 2015
From: marcos.fermin.lobo at cern.ch (Marcos Fermin Lobo)
Date: Tue, 1 Sep 2015 12:50:26 +0000
Subject: [openstack-dev] [ec2api][puppet] EC2 api puppet module
Message-ID: <E6E80EA9C3C06E4FA36BF9685D57B3168BB3CC71@CERNXCHG51.cern.ch>

Hi all,

The standalone EC2 API project https://github.com/stackforge/ec2-api does not have a puppet module yet. I want to develop this puppet module, and my idea is to start in a public GitHub repo and get feedback from the community. All feedback and collaboration will be very welcome.

I would like to start by requesting your suggestions for the GitHub repository name. I would suggest: puppet-ec2-api

Does that sound good to you? Any other suggestions?

Thank you.

Regards,
Marcos.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/b998971b/attachment.html>

From adanin at mirantis.com  Tue Sep  1 13:00:47 2015
From: adanin at mirantis.com (Andrey Danin)
Date: Tue, 1 Sep 2015 16:00:47 +0300
Subject: [openstack-dev] [Fuel] Number of IP addresses in a public
	network
In-Reply-To: <CACo6NWBUkvQckcYe8JV8NgbciW1Kj7t+Qu9rE21iEbY-KsNp2g@mail.gmail.com>
References: <BLU436-SMTP3251328352C799BF53C469AD6B0@phx.gbl>
 <CACo6NWBUkvQckcYe8JV8NgbciW1Kj7t+Qu9rE21iEbY-KsNp2g@mail.gmail.com>
Message-ID: <CA+vYeFokEMsnjdAqF4wH4FU9sP4r-1gAopZvO0o69Hw57fab7w@mail.gmail.com>

+1 to Igor.

It's definitely not a High bug. The biggest problem I see here is a
confusing error message with a wrong number of required IPs. AFAIU we
cannot fix it easily now, so let's postpone it to 8.0 but change the message
itself [0] in 7.0.

[0]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/task/task.py#L1160-L1163
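For context, the check under discussion boils down to "do the configured ranges hold enough addresses for the nodes plus the VIPs". A rough sketch of that arithmetic (hypothetical function names, not Nailgun's actual code):

```python
import ipaddress

def ips_in_ranges(ranges):
    """Count the addresses covered by a list of inclusive (start, end) IP ranges."""
    return sum(
        int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1
        for start, end in ranges
    )

def check_public_network(ranges, n_nodes, n_vips):
    """Simplified pre-deployment check: does the public network hold
    enough addresses for every node plus every VIP?"""
    required = n_nodes + n_vips
    available = ips_in_ranges(ranges)
    return available >= required, required, available

# 10.0.0.2-10.0.0.10 is 9 addresses; 8 nodes + 2 VIPs need 10.
print(check_public_network([("10.0.0.2", "10.0.0.10")], n_nodes=8, n_vips=2))
```

The confusing part reported in the bug is presumably the `required` number here, which must also account for plugin-reserved VIPs to be accurate.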

On Tue, Sep 1, 2015 at 1:39 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Hello,
>
> My 5 cents on it.
>
> I don't think it's really a High or Critical bug for 7.0. If there's
> not enough IPs the CheckBeforeDeploymentTask will fail. And that's
> actually Ok, it may fail by different reason without starting actual
> deployment (sending message to Astute).
>
> But I agree it's kinda strange that we don't check IPs during network
> verification step. The good fix in my opinion is to move this check
> into network checker (perhaps keep it here either), but that
> definitely shouldn't be done in 7.0.
>
> Thanks,
> Igor
>
>
> On Mon, Aug 31, 2015 at 2:54 PM, Roman Prykhodchenko <me at romcheg.me>
> wrote:
> > Hi folks!
> >
> > Recently a problem that network check does not tell whether there's
> enough IP addresses in a public network [1] was reported. That check is
> performed by CheckBeforeDeployment task, but there are two problems that
> happen because this verification is done that late:
> >
> >  - A deployment fails, if there's not enough addresses in specified
> ranges
> >  - If a user wants to get network configuration they will get an error
> >
> > The solution for this problem seems to be easy, and a straightforward
> patch [2] was proposed. However, there is a hidden problem which that
> patch does not address: installed plugins may reserve VIPs for
> their needs. The issue is that they do it just before deployment and so
> it's not possible to get those reservations when a user wants to check
> their network set up.
> >
> > The important issue we have to address here is that network
> configuration generator will fail, if specified ranges don't fit all VIPs.
> There were several proposals to fix that, I'd like to highlight two of them:
> >
> >  a) Allow VIPs to not have an IP address assigned, if network config
> generator works for API output.
> >      That will prevent GET requests from failure, but since IP addresses
> for VIPs are required, generator will have to fail, if it generates a
> configuration for the orchestrator.
> >  b) Add a release note that users have to calculate IP addresses
> manually and put sane ranges in order not to shoot themselves in the foot. Then
> it's also possible to change network verification output to remind users to
> check the ranges before starting a deployment.
> >
> > In my opinion we cannot follow (a) because it only masks a problem
> instead of providing a fix. Also it requires to change the API which is not
> a good thing to do after the SCF. If we choose (b), then we can work on a
> firm solution in 8.0 and fix the problem for real.
> >
> >
> > P.S. We can still merge [2], because it checks whether the IP ranges can
> at least fit the basic configuration. If you agree, I will update it soon.
> >
> > [1] https://bugs.launchpad.net/fuel/+bug/1487996
> > [2] https://review.openstack.org/#/c/217267/
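To make the arithmetic behind the range check concrete, here is a minimal sketch of the kind of verification [2] performs. The function name, signature, and addresses are hypothetical; plugin VIP reservations are modelled as an estimate supplied by the caller, since their real count is only known just before deployment:

```python
from ipaddress import ip_address

def ranges_fit(ip_ranges, required_vips, reserved=0):
    """Check whether the given IP ranges can hold every needed VIP.

    ip_ranges: list of (start, end) address strings, inclusive.
    required_vips: number of VIPs the basic configuration needs.
    reserved: estimate of extra VIPs plugins may claim (their real
    count is unknown until just before deployment).
    """
    total = sum(
        int(ip_address(end)) - int(ip_address(start)) + 1
        for start, end in ip_ranges
    )
    return total >= required_vips + reserved

# Four addresses in the range: the basic configuration (3 VIPs) fits,
# but five extra plugin reservations would not.
print(ranges_fit([("10.0.0.2", "10.0.0.5")], required_vips=3))              # True
print(ranges_fit([("10.0.0.2", "10.0.0.5")], required_vips=3, reserved=5))  # False
```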
> >
> >
> >
> > - romcheg
> >
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Andrey Danin
adanin at mirantis.com
skype: gcon.monolake
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/fe7689d7/attachment.html>

From ociuhandu at cloudbasesolutions.com  Tue Sep  1 13:09:05 2015
From: ociuhandu at cloudbasesolutions.com (Octavian Ciuhandu)
Date: Tue, 1 Sep 2015 13:09:05 +0000
Subject: [openstack-dev] FW: [cinder] Microsoft CI Still Disabled
In-Reply-To: <CAOyZ2aFBGraWzsRNqxVGQVJR1pBb+Xm9UWS7W6RC0cJrkcSd2w@mail.gmail.com>
References: <CAHcn5b1UUkK79Hf3t8=78XeeEUR_yzBDAkvU1gwJxC8jCXCLnQ@mail.gmail.com>
 <b3332c8378a94201a81cb73641285c19@DFM-CO1MBX15-06.exchange.corp.microsoft.com>
 <8B57756E-0226-4A97-B8A7-7A47F382687C@microsoft.com>
 <9CA7F646-0713-485A-B346-31AEA76E7311@cloudbasesolutions.com>
 <CAOyZ2aFBGraWzsRNqxVGQVJR1pBb+Xm9UWS7W6RC0cJrkcSd2w@mail.gmail.com>
Message-ID: <507B7071-9F59-4CDC-84EB-BB789EDD6989@cloudbasesolutions.com>

Thank you for your help on this,

Octavian.

From: Duncan Thomas
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 1 September 2015 14:48
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] FW: [cinder] Microsoft CI Still Disabled

Mike is on vacation at the moment; somebody else will evaluate in his stead and reply here shortly

On 1 September 2015 at 14:20, Octavian Ciuhandu <ociuhandu at cloudbasesolutions.com<mailto:ociuhandu at cloudbasesolutions.com>> wrote:
Hi Mike,

I think the first mail was sent only to openstack-dev. We have already had over
a week of reliable results on the Microsoft Cinder CI on all three jobs; an
updated set of results is available at [1].

Please re-evaluate the activity of our CI and re-enable the gerrit account.

Thank you,

Octavian.

[1]: http://paste.openstack.org/show/437509/





On 24/8/15 20:19, "Octavian Ciuhandu" <ociuhandu at cloudbasesolutions.com<mailto:ociuhandu at cloudbasesolutions.com>> wrote:

>Hi Mike,
>
>We have stabilised the Microsoft Cinder CI and performed dry-run testing over
>the past days.
>
>The reason we didn't report on the status earlier is that we were waiting
>for the completion of a series of hardware maintenance activities performed in
>the Microsoft datacenter where the CI is running, as mentioned by Hashir in the
>previous email.
>
>In the meantime we also improved the CI software infrastructure, including:
>
>1. Added local pip and deb cache servers to reduce transient package download
>issues
>2. Improved resiliency in the undercloud and overcloud deployment scripts [1]
>3. Improved monitoring to reduce the response time on transient issues,
>especially for undercloud networking, which was the source of a significant
>number of failures due to a switch misconfiguration in the datacenter,
>recently identified and resolved.
>
>We have a list of recent runs for all 3 drivers available at [2].
>
>Please let us know if there's anything else that prevents re-enabling the
>Microsoft Cinder CI account.
>
>Thank you,
>
>Octavian.
>
>[1]: https://github.com/cloudbase/cinder-ci
>[2]: http://paste.openstack.org/show/426144/
>
>
>
>
>
>
>On 24/8/15 16:48, "Hashir Abdi" <habdi at microsoft.com<mailto:habdi at microsoft.com>> wrote:
>
>>Mike:
>>
>>Apologies for the delayed response.
>>
>>We have been addressing bug-fixes and infrastructure changes for our CI, while working through vacation schedules.
>>
>>We have now been able to conduct successful dry runs of our CI, and I'll let Octavian follow up here with those latest successful results.
>>
>>Sincerely
>>
>>Hashir Abdi
>>
>>Principal Software Engineering Manager
>>Microsoft
>>
>>
>>
>>
>>
>>
>>
>>-----Original Message-----
>>From: Mike Perez [mailto:thingee at gmail.com<mailto:thingee at gmail.com>]
>>Sent: Friday, August 21, 2015 8:39 PM
>>To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>Cc: Microsoft Cinder CI <Microsoft_Cinder_CI at microsoft.com<mailto:Microsoft_Cinder_CI at microsoft.com>>; Octavian Ciuhandu <ociuhandu at cloudbasesolutions.com<mailto:ociuhandu at cloudbasesolutions.com>>
>>Subject: [cinder] Microsoft CI Still Disabled
>>
>>The Microsoft CI has been disabled since July 22nd [1].
>>
>>Last I heard from the Cinder midcycle sprint, this CI was still not ready and it hasn't been for 30 days now.
>>
>>Where are we with things, and why has communication been so poor with the Cloudbase Solutions team?
>>
>>[1] - https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2flists.openstack.org%2fpipermail%2fthird-party-announce%2f2015-July%2f000249.html&data=01%7c01%7chabdi%40exchange.microsoft.com%7c33ba5a82486e4dc6d06508d2aa8a115d%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=mPalAeUjUVBu9sVn6fDnehA4TN3uZOUB4DAKS6qcFaU%3d
>>
>>--
>>Mike Perez
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/5cef247d/attachment.html>

From jim at jimrollenhagen.com  Tue Sep  1 13:17:53 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 1 Sep 2015 06:17:53 -0700
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
References: <55E58964.3060701@dague.net>
 <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
Message-ID: <66CE3668-14E6-40CA-B186-E24C0CA52A7B@jimrollenhagen.com>



> On Sep 1, 2015, at 04:32, Lucas Alvares Gomes <lucasagomes at gmail.com> wrote:
> 
> Hi,
> 
>> The current failure rate for the ironic pxe_ssh job is 100% -
>> http://graphite.openstack.org/render/?from=-200hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.{SUCCESS,FAILURE})),%2712hours%27),%20%27gate-tempest-dsvm-ironic-pxe_ssh%27),%27orange%27)
>> 
>> The reason is something wrong with disk image builder and upstream ubuntu.
>> 
>> Which raises a much more pressing issue: why is an ironic integration
>> job building, from scratch, a disk image builder image on every go (and
>> connecting to the internet to do it)? Especially as this job sits on a
>> bunch of other projects beyond ironic. Architecturally this is not sound
>> enough to be a voting job.
>> 
>> I'm proposing we make it non-voting immediately, and until it's redone
>> so it's no longer dependent on pulling images directly from upstream, we
>> don't let it be voting.
> 
> Yeah, I had little time this morning, but I put up a potential fix for
> the problem here, to test whether it would solve it:
> https://review.openstack.org/#/c/219199/
> 
> This is one of the main jobs for Ironic and it would be great if we
> could keep it voting. Could we perhaps change the base OS to something
> else until Ubuntu is fixed? (Fedora / CentOS / Debian)

Why don't we just use the pre-built agent ramdisks we already publish? AFAIK we already have a job for it, just need to switch the name in project-config for Nova. 

// jim 



From juliaashleykreger at gmail.com  Tue Sep  1 13:21:39 2015
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Tue, 1 Sep 2015 09:21:39 -0400
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <66CE3668-14E6-40CA-B186-E24C0CA52A7B@jimrollenhagen.com>
References: <55E58964.3060701@dague.net>
 <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
 <66CE3668-14E6-40CA-B186-E24C0CA52A7B@jimrollenhagen.com>
Message-ID: <CAF7gwdjZPw0TRXnJjPp6qAC17DkOm1J4Kx9EVKLScX13QmWA_w@mail.gmail.com>

On Tue, Sep 1, 2015 at 9:17 AM, Jim Rollenhagen <jim at jimrollenhagen.com>
wrote:

>
>
> > On Sep 1, 2015, at 04:32, Lucas Alvares Gomes <lucasagomes at gmail.com>
> wrote:
> >
> > Hi,
> >
> >> The current failure rate for the ironic pxe_ssh job is 100% -
> >>
> http://graphite.openstack.org/render/?from=-200hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.{SUCCESS,FAILURE})),%2712hours%27),%20%27gate-tempest-dsvm-ironic-pxe_ssh%27),%27orange%27)
> >>
> >> The reason is something wrong with disk image builder and upstream
> ubuntu.
> >>
> >> Which raises a much more pressing issue: why is an ironic integration
> >> job building, from scratch, a disk image builder image on every go (and
> >> connecting to the internet to do it)? Especially as this job sits on a
> >> bunch of other projects beyond ironic. Architecturally this is not sound
> >> enough to be a voting job.
> >>
> >> I'm proposing we make it non-voting immediately, and until it's redone
> >> so it's no longer dependent on pulling images directly from upstream, we
> >> don't let it be voting.
> >
> > Yeah, I had little time this morning, but I put up a potential fix for
> > the problem here, to test whether it would solve it:
> > https://review.openstack.org/#/c/219199/
> >
> > This is one of the main jobs for Ironic and it would be great if we
> > could keep it voting. Could we perhaps change the base OS to something
> > else until Ubuntu is fixed? (Fedora / CentOS / Debian)
>
> Why don't we just use the pre-built agent ramdisks we already publish?
> AFAIK we already have a job for it, just need to switch the name in
> project-config for Nova.
>
> // jim
>
>
We need to make sure the dib element is in working shape, which means
it actually has to be utilized somewhere along the line.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/b388a7be/attachment.html>

From sean at dague.net  Tue Sep  1 13:27:53 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 1 Sep 2015 09:27:53 -0400
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <CAF7gwdjZPw0TRXnJjPp6qAC17DkOm1J4Kx9EVKLScX13QmWA_w@mail.gmail.com>
References: <55E58964.3060701@dague.net>
 <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
 <66CE3668-14E6-40CA-B186-E24C0CA52A7B@jimrollenhagen.com>
 <CAF7gwdjZPw0TRXnJjPp6qAC17DkOm1J4Kx9EVKLScX13QmWA_w@mail.gmail.com>
Message-ID: <55E5A7D9.6070704@dague.net>

On 09/01/2015 09:21 AM, Julia Kreger wrote:
> 
> On Tue, Sep 1, 2015 at 9:17 AM, Jim Rollenhagen <jim at jimrollenhagen.com
> <mailto:jim at jimrollenhagen.com>> wrote:
<snip>
> 
>     Why don't we just use the pre-built agent ramdisks we already
>     publish? AFAIK we already have a job for it, just need to switch the
>     name in project-config for Nova.
> 
>     // jim
> 
> 
> We need to make sure the dib element is in working shape, which means
> it actually has to be utilized somewhere along the line.

Which should be a functional test on dib, or at some lower level, not
randomly done in the integration tests of unrelated projects. The
current ball-of-mud testing leads to high coupling and high failure
rates, because everything is secondarily tested. It should never be run
during Nova patches, because there is zero chance that a Nova patch can
impact that code.

We spent a lot of time trying to decouple libraries from projects and
put the tests in the right places to actually verify the behavior closer
to the code in question. dib / ironic is still doing gating the way other
projects were a year ago, which was fragile and led to unrelated lock-ups
like this quite often. So that needs to be prioritized soon.

	-Sean

-- 
Sean Dague
http://dague.net


From jaypipes at gmail.com  Tue Sep  1 13:31:01 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Tue, 01 Sep 2015 09:31:01 -0400
Subject: [openstack-dev] Debug tool for neutron
In-Reply-To: <SNT153-W35C61683E528F2BDF7A127946A0@phx.gbl>
References: <SNT153-W35C61683E528F2BDF7A127946A0@phx.gbl>
Message-ID: <55E5A895.6080707@gmail.com>

Check out https://github.com/yeasy/easyOVS for OVS-based setups.

Best,
-jay
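Alongside a dedicated tool, one low-level check for the "VM gets no IP" case is to capture DHCP traffic on the VM's tap device and look for replies. A sketch (the tap interface name is a placeholder, and running tcpdump requires root on the compute node):

```python
import re
import subprocess

# tcpdump -v prints "BOOTP/DHCP, Request" / "BOOTP/DHCP, Reply" lines.
DHCP_REPLY = re.compile(r"BOOTP/DHCP, Reply")

def saw_dhcp_reply(capture_lines):
    """Return True if any tcpdump -v output line looks like a DHCP reply."""
    return any(DHCP_REPLY.search(line) for line in capture_lines)

def capture_dhcp(interface="tapXXXXXXXX-XX", count=10):
    """Capture a few DHCP packets on a tap device and check for replies.

    The interface name is a placeholder; find the real one with
    `ip link` or `ovs-vsctl show` on the compute node. Needs root.
    """
    out = subprocess.run(
        ["tcpdump", "-nv", "-c", str(count), "-i", interface,
         "port", "67", "or", "port", "68"],
        capture_output=True, text=True, timeout=60,
    )
    return saw_dhcp_reply(out.stdout.splitlines())
```

If requests appear but no replies, the problem is usually upstream of the VM (security groups, the DHCP agent, or the wiring between the tap device and the integration bridge).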

On 09/01/2015 07:50 AM, bharath thiruveedula wrote:
> Hi,
>
> We have some troubleshooting guides for OpenStack Neutron, but many
> people who are new to Neutron find it difficult to follow the guides, as
> they are not aware of what is happening behind the scenes. So is there
> any tool which tracks the packet flow from the VM, to debug issues like
> why the VM is not getting an IP address or why the internet is not
> reachable from the VM? If so, can you please suggest some of them?
>
>
> Regards
> Bharath
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From nstarodubtsev at mirantis.com  Tue Sep  1 13:46:47 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Tue, 1 Sep 2015 16:46:47 +0300
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
Message-ID: <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>

Also, if we decide to continue development, we should add Blazar here [1],
according to the email [2].
So, my suggestion is to set up a timeframe this week or next week and
hold some kind of meeting.
[1]: https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
[2]:
http://lists.openstack.org/pipermail/openstack-dev/2015-August/073071.html




Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-01 12:42 GMT+03:00 Nikolay Starodubtsev <nstarodubtsev at mirantis.com>
:

> Sylvain,
> First of all we need to revive the Blazar gate jobs, or we can't merge
> anything. I tried to do it a year ago but couldn't make sense of the tests,
> so a better option may be to rewrite them from scratch.
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2015-09-01 11:26 GMT+03:00 Sylvain Bauza <sbauza at redhat.com>:
>
>>
>>
>> Le 01/09/2015 06:52, Nikolay Starodubtsev a écrit :
>>
>> All,
>> I'd like to propose use of #openstack-blazar for further communication
>> and coordination.
>>
>>
>>
>> +2 to that. That's the first step of any communication. The channel logs
>> are also recorded here, for async communication :
>> http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/
>>
>> At the moment I don't see much benefit in running a weekly meeting. We
>> can chat as needed.
>>
>> Like I said to Ildiko, I'm fine with helping people discover Blazar,
>> but I won't have lots of time for actually working on it.
>>
>> IMHO, the first things to do with Blazar are to reduce the tech debt by:
>>  1/ finishing the Climate->Blazar renaming
>>  2/ updating to the latest oslo libraries instead of using the
>> old incubator
>>  3/ using the Nova v2.1 API (which could be a bit difficult because there
>> are no more extensions)
>>
>> If I see some progress with Blazar, I'm OK with asking -infra to move
>> Blazar to the OpenStack namespace, as James Blair asked here, because
>> it seems Blazar is not defunct:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>>
>> -Sylvain
>>
>>
>>
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> 2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com>:
>>
>>> Yes, Blazar is a really interesting project. I worked on it some time
>>> ago and I really enjoyed it. Sadly my obligations at work don't let me
>>> keep working on it, but I'm happy that there is still some interest in Blazar.
>>>
>>> Pablo.
>>> On 31/08/15 09:19, Zhenyu Zheng wrote:
>>> Hello,
>>> It seems like an interesting project.
>>>
>>> On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau <priteau at uchicago.edu
>>> <mailto:priteau at uchicago.edu>> wrote:
>>> Hello,
>>>
>>> The NSF-funded Chameleon project (https://www.chameleoncloud.org) uses
>>> Blazar to provide advance reservations of resources for running cloud
>>> computing experiments.
>>>
>>> We would be interested in contributing as well.
>>>
>>> Pierre Riteau
>>>
>>> On 28 Aug 2015, at 07:56, Ildikó Váncsa <<mailto:
>>> ildiko.vancsa at ericsson.com>ildiko.vancsa at ericsson.com<mailto:
>>> ildiko.vancsa at ericsson.com>> wrote:
>>>
>>> > Hi All,
>>> >
>>> > The resource reservation topic pops up from time to time on different
>>> forums, covering use cases in terms of both IT and NFV. The Blazar project
>>> was intended to address this need, but to my knowledge the work has been
>>> stopped due to earlier integration and other difficulties.
>>> >
>>> > My question is: who would be interested in resurrecting the Blazar
>>> project and/or working on a reservation system in OpenStack?
>>> >
>>> > Thanks and Best Regards,
>>> > Ildikó
>>> >
>>> >
>>> __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<
>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<
>>> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/371cfc8c/attachment.html>

From jim at jimrollenhagen.com  Tue Sep  1 13:53:16 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 1 Sep 2015 06:53:16 -0700
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <CAF7gwdjZPw0TRXnJjPp6qAC17DkOm1J4Kx9EVKLScX13QmWA_w@mail.gmail.com>
References: <55E58964.3060701@dague.net>
 <CAB1EZBr0wv=P7rut=8chC===wDV_KSid4U9fkWhgxP5H7yUS1A@mail.gmail.com>
 <66CE3668-14E6-40CA-B186-E24C0CA52A7B@jimrollenhagen.com>
 <CAF7gwdjZPw0TRXnJjPp6qAC17DkOm1J4Kx9EVKLScX13QmWA_w@mail.gmail.com>
Message-ID: <20150901135316.GH9412@jimrollenhagen.com>

On Tue, Sep 01, 2015 at 09:21:39AM -0400, Julia Kreger wrote:
> On Tue, Sep 1, 2015 at 9:17 AM, Jim Rollenhagen <jim at jimrollenhagen.com>
> wrote:
> 
> >
> > Why don't we just use the pre-built agent ramdisks we already publish?
> > AFAIK we already have a job for it, just need to switch the name in
> > project-config for Nova.
> >
> > // jim
> >
> >
> We need to make sure the dib element is in working shape, which means means
> it actually has to be utilized somewhere along the line.

It's still running in Ironic's gate; I don't see any reason why it also
needs to be in Nova's gate.

// jim

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From sbauza at redhat.com  Tue Sep  1 14:18:08 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 01 Sep 2015 16:18:08 +0200
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
Message-ID: <55E5B3A0.7010504@redhat.com>



Le 01/09/2015 15:46, Nikolay Starodubtsev a écrit :
> Also, if we decide to continue development, we should add Blazar here
> [1], according to the email [2].
> So, my suggestion is to set up a timeframe this week or next week
> and hold some kind of meeting.
> [1]: https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
> [2]: 
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/073071.html
>

That's just what I said I was OK with adding, for sure.

>
> *__*
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
>
> 2015-09-01 12:42 GMT+03:00 Nikolay Starodubtsev 
> <nstarodubtsev at mirantis.com <mailto:nstarodubtsev at mirantis.com>>:
>
>     Sylvain,
>     First of all we need to revive the Blazar gate jobs, or we can't
>     merge anything. I tried to do it a year ago but couldn't make
>     sense of the tests,
>     so a better option may be to rewrite them from scratch.
>
>     *__*
>
>     Nikolay Starodubtsev
>
>     Software Engineer
>
>     Mirantis Inc.
>
>
>     Skype: dark_harlequine1
>
>
>     2015-09-01 11:26 GMT+03:00 Sylvain Bauza <sbauza at redhat.com
>     <mailto:sbauza at redhat.com>>:
>
>
>
>         Le 01/09/2015 06:52, Nikolay Starodubtsev a écrit :
>>         All,
>>         I'd like to propose use of #openstack-blazar for further
>>         communication and coordination.
>>
>
>
>         +2 to that. That's the first step of any communication. The
>         channel logs are also recorded here, for async communication :
>         http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/
>
>         At the moment I don't see much benefit in running a weekly
>         meeting. We can chat as needed.
>
>         Like I said to Ildiko, I'm fine with helping people
>         discover Blazar, but I won't have lots of time for
>         actually working on it.
>
>         IMHO, the first things to do with Blazar are to reduce the tech
>         debt by:
>          1/ finishing the Climate->Blazar renaming
>          2/ updating to the latest oslo libraries instead of
>         using the old incubator
>          3/ using the Nova v2.1 API (which could be a bit difficult
>         because there are no more extensions)
>
>         If I see some progress with Blazar, I'm OK with asking -infra
>         to move Blazar to the OpenStack namespace like it was asked by
>         James Blair here because it seems Blazar is not defunct :
>         http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>
>         -Sylvain
>
>
>
>
>>         *__*
>>
>>         Nikolay Starodubtsev
>>
>>         Software Engineer
>>
>>         Mirantis Inc.
>>
>>
>>         Skype: dark_harlequine1
>>
>>
>>         2015-08-31 19:44 GMT+03:00 Fuente, Pablo A
>>         <pablo.a.fuente at intel.com <mailto:pablo.a.fuente at intel.com>>:
>>
>>             Yes, Blazar is a really interesting project. I worked on
>>             it some time ago and I really enjoyed it. Sadly my
>>             obligations at work don't let me keep working on it,
>>             but I'm happy that there is still some interest in Blazar.
>>
>>             Pablo.
>>             On 31/08/15 09:19, Zhenyu Zheng wrote:
>>             Hello,
>>             It seems like an interesting project.
>>
>>             On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau
>>             <priteau at uchicago.edu
>>             <mailto:priteau at uchicago.edu><mailto:priteau at uchicago.edu
>>             <mailto:priteau at uchicago.edu>>> wrote:
>>             Hello,
>>
>>             The NSF-funded Chameleon project
>>             (https://www.chameleoncloud.org) uses Blazar to provide
>>             advance reservations of resources for running cloud
>>             computing experiments.
>>
>>             We would be interested in contributing as well.
>>
>>             Pierre Riteau
>>
>>             On 28 Aug 2015, at 07:56, Ildikó Váncsa
>>             <<mailto:ildiko.vancsa at ericsson.com
>>             <mailto:ildiko.vancsa at ericsson.com>>ildiko.vancsa at ericsson.com
>>             <mailto:ildiko.vancsa at ericsson.com><mailto:ildiko.vancsa at ericsson.com
>>             <mailto:ildiko.vancsa at ericsson.com>>> wrote:
>>
>>             > Hi All,
>>             >
>>             > The resource reservation topic pops up from time to time
>>             on different forums, covering use cases in terms of both IT
>>             and NFV. The Blazar project was intended to address this
>>             need, but to my knowledge the work has been stopped due to
>>             earlier integration and other difficulties.
>>             >
>>             > My question is that who would be interested in
>>             resurrecting the Blazar project and/or working on a
>>             reservation system in OpenStack?
>>             >
>>             > Thanks and Best Regards,
>>             > Ildikó
>>             >
>>             >
>>             __________________________________________________________________________
>>             > OpenStack Development Mailing List (not for usage
>>             questions)
>>             > Unsubscribe:
>>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>             <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>             >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>             __________________________________________________________________________
>>             OpenStack Development Mailing List (not for usage questions)
>>             Unsubscribe:
>>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>             <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>             __________________________________________________________________________
>>             OpenStack Development Mailing List (not for usage questions)
>>             Unsubscribe:
>>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>             <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><mailto:OpenStack-dev-request at lists.openstack.org
>>             <mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe>
>>             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>             __________________________________________________________________________
>>             OpenStack Development Mailing List (not for usage questions)
>>             Unsubscribe:
>>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>             <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>         __________________________________________________________________________
>>         OpenStack Development Mailing List (not for usage questions)
>>         Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe  <mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/88fa1654/attachment.html>

From jim at jimrollenhagen.com  Tue Sep  1 14:35:38 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 1 Sep 2015 07:35:38 -0700
Subject: [openstack-dev] [ironic] 100% failure in pxe_ssh job
In-Reply-To: <55E58964.3060701@dague.net>
References: <55E58964.3060701@dague.net>
Message-ID: <20150901143538.GI9412@jimrollenhagen.com>

On Tue, Sep 01, 2015 at 07:17:56AM -0400, Sean Dague wrote:
> The current failure rate for the ironic pxe_ssh job is 100% -
> http://graphite.openstack.org/render/?from=-200hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.{SUCCESS,FAILURE})),%2712hours%27),%20%27gate-tempest-dsvm-ironic-pxe_ssh%27),%27orange%27)
> 
> The reason is something wrong with disk image builder and upstream ubuntu.
> 
> Which raises a much more pressing issue, why is an ironic integration
> job building, from scratch a disk image builder image on every go (and
> connecting to the internet to do it)? Especially as this job sits on a
> bunch of other projects beyond ironic. Architecturally this is not sound
> enough to be a voting job.
> 
> I'm proposing we make it non-voting immediately, and until it's redone
> so it's no long dependent on pulling images directly from upstream, we
> don't let it be voting.

Quick update here after talking with Sean in IRC:

* pxe_ssh job has been made non-voting.

* We're switching to pxe_ipa job for Nova. This job uses a prebuilt
  ironic-python-agent ramdisk from tarballs.o.o, and is really what we
  want anyway since we're deprecating the bash ramdisk. This will remain
  non-voting.

* mtreinish and I are going to work on getting the full tempest job
  working for the Ironic driver, as has been the plan basically forever.
  Would love some extra hands if anyone is interested. :)

// jim


From rcresswe at cisco.com  Tue Sep  1 14:39:03 2015
From: rcresswe at cisco.com (Rob Cresswell (rcresswe))
Date: Tue, 1 Sep 2015 14:39:03 +0000
Subject: [openstack-dev]  [horizon] URL Sanity
Message-ID: <D20B7717.DE04%rcresswe@cisco.com>

Hi all,

I recently started looking into properly implementing breadcrumbs to make navigation clearer, especially around nested resources (Subnets Detail page, for example). The idea is to use the request.path to form a logical breadcrumb that isn't dependent on browser history ( https://review.openstack.org/#/c/129985/3/horizon/browsers/breadcrumb.py ). Unfortunately, this breaks down quite quickly because we use odd patterns like `<resources>/<resource_id>/detail`, and `<resources>/<resource_id>` doesn't exist.
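The request.path-based idea can be illustrated with a toy sketch (this is not the linked Horizon patch, just an illustration of the approach):

```python
# Toy sketch (not the linked Horizon patch): build breadcrumbs purely
# from request.path. This only yields valid links when every prefix of
# the path is itself a routable URL, which is exactly why consistent
# /<resources>/<resource_id> patterns matter.
def breadcrumbs(path):
    """Return (label, url) pairs for each path segment."""
    parts = [p for p in path.strip('/').split('/') if p]
    crumbs, url = [], ''
    for part in parts:
        url += '/' + part
        crumbs.append((part, url))
    return crumbs
```

With a pattern like `/admin/networks/<network_id>/detail`, the intermediate `/admin/networks/<network_id>` crumb is a dead link today; with the proposed CRUD-style patterns it resolves to the Network detail page.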

This made me realise how much of an inconsistent mess the URL patterns are.  I've started cleaning them up, so we move from these patterns:

`/admin/networks/<network_id>/detail` - Detail page for a Network
`/admin/networks/<network_id>/addsubnet` - Create page for a Subnet

To patterns in line with usual CRUD usages, such as:

`/admin/networks/<network_id>` - Detail page for a Network
`/admin/networks/<network_id>/subnets/create` - Create page for a Subnet

This is mostly trivial, just removing extraneous words and adding consistency, with the end goal of every panel following patterns like:

`/<resources>` - Index page
`/<resources>/<resource_id>` - Detail page for a single resource
`/<resources>/create` - Create new resource
`/<resources>/<resource_id>/update` - Update a resource

This gets a little complex around nested items. Should a Port, for example, which has a unique ID, be reachable in Horizon by just its ID? Ports must always be attached to a network, as I understand it. There are multiple ways to express this:

`/networks/ports/<port_id>` - Current implementation
`/networks/<network_id>/ports/<port_id>` - My preferred structure
`/ports/<port_id>` - Another option

Does anyone have any opinions on how to handle this structuring, or if it's even necessary?

Regards,
Rob
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/226b1345/attachment.html>

From aschultz at mirantis.com  Tue Sep  1 14:45:03 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Tue, 1 Sep 2015 09:45:03 -0500
Subject: [openstack-dev] [ec2api][puppet] EC2 api puppet module
In-Reply-To: <E6E80EA9C3C06E4FA36BF9685D57B3168BB3CC71@CERNXCHG51.cern.ch>
References: <E6E80EA9C3C06E4FA36BF9685D57B3168BB3CC71@CERNXCHG51.cern.ch>
Message-ID: <CABzFt8PVyt2DpS+CnhXSezoK8TkTytWUF3adyJhTH71GYuhVCg@mail.gmail.com>

Hey Marcos,

On Tue, Sep 1, 2015 at 7:50 AM, Marcos Fermin Lobo <
marcos.fermin.lobo at cern.ch> wrote:

> Hi all,
>
> The standalone EC2 api project https://github.com/stackforge/ec2-api does
> not have puppet module yet. I want to develop this puppet module and my
> idea is start in a public Github repo and get feedback from the community.
> All feedback and collaborations will be very welcome.
>
> I would like to start requesting your suggestions about the Github
> repository name. I would suggest: puppet-ec2-api
>
>

Just a note: I don't think dashes are allowed in Puppet module names, or
if they are, they may lead to some weirdness[0][1]. So puppet-ec2api might
be better.
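As a quick illustration of the naming rule (a hedged sketch; the exact rule is Puppet's, per the tickets linked below, not mine):

```python
import re

# Sketch of Puppet's module-name rule: a module name segment is
# lowercase letters, digits and underscores, starting with a letter.
# Dashes are not allowed, which is why 'ec2-api' is problematic.
MODULE_NAME = re.compile(r'^[a-z][a-z0-9_]*$')

def valid_module_name(name):
    return bool(MODULE_NAME.match(name))
```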




> Does that sound good to you? Any more suggestions?
>
> Thank you.
>
> Regards,
> Marcos.
>
>

Thanks,
-Alex



[0] https://projects.puppetlabs.com/issues/5268
[1] https://tickets.puppetlabs.com/browse/PUP-1397
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/73f3acd5/attachment.html>

From emilien.macchi at gmail.com  Tue Sep  1 14:49:35 2015
From: emilien.macchi at gmail.com (Emilien Macchi)
Date: Tue, 1 Sep 2015 10:49:35 -0400
Subject: [openstack-dev] [ec2api][puppet] EC2 api puppet module
In-Reply-To: <E6E80EA9C3C06E4FA36BF9685D57B3168BB3CC71@CERNXCHG51.cern.ch>
References: <E6E80EA9C3C06E4FA36BF9685D57B3168BB3CC71@CERNXCHG51.cern.ch>
Message-ID: <55E5BAFF.9080800@gmail.com>



On 09/01/2015 08:50 AM, Marcos Fermin Lobo wrote:
> Hi all,
> 
> The standalone EC2 api project https://github.com/stackforge/ec2-api
> does not have puppet module yet. I want to develop this puppet module
> and my idea is start in a public Github repo and get feedback from the
> community. All feedback and collaborations will be very welcome.
> 
> I would like to start requesting your suggestions about the Github
> repository name. I would suggest: puppet-ec2-api
> 
> Does that sound good to you? Any more suggestions?

I would look at https://wiki.openstack.org/wiki/Puppet/New_module
For the hosting place, I guess using the 'openstack' namespace is fine.

Ping us on IRC #puppet-openstack if you need more information,

Best,

> 
> Thank you.
> 
> Regards,
> Marcos.
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From robert.clark at hp.com  Tue Sep  1 14:57:50 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Tue, 1 Sep 2015 14:57:50 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
Message-ID: <D20B7929.29ACD%robert.clark@hp.com>


>The reason that is compelling is that you can have Barbican generate,
>sign, and store a keypair without transmitting the private key over the
>network to the client that originates the signing request. It can be
>directly stored, and made available only to the clients that need access
>to it.

This is absolutely _not_ how PKI for TLS is supposed to work. Yes, Barbican
can create keypairs etc. because sometimes that's useful, but in the
public-private PKI model that TLS expects, this is completely wrong. Magnum
nodes should be creating their own private key and CSR and submitting them
to some CA for signing.

Now this gets messy because you probably don't want to push keystone
credentials onto each node (that they would use to communicate with
Barbican).

I'm a bit conflicted writing this next bit because I'm not particularly
familiar with the Kubernetes/Magnum architectures and also because I'm one
of the core developers for Anchor, but here goes…

Have you considered using Anchor for this? It's a pretty lightweight
ephemeral CA that is built to work well in small PKI communities (like a
Kubernetes cluster). You can configure multiple methods for authentication
and build pretty simple validation rules for deciding if a host should be
given a certificate. Anchor is built to provide short-lifetime
certificates where each node re-requests a certificate, typically every
12-24 hours. This has some really nice properties like "passive
revocation" (think revocation that actually works) and strong ways to
enforce issuing logic on a per-host basis.

Anchor or not, I'd like to talk to you more about how you're attempting to
secure Magnum - I think it's an extremely interesting project that I'd
like to help out with.

-Rob
(Security Project PTL / Anchor flunkie)




From mriedem at linux.vnet.ibm.com  Tue Sep  1 14:58:50 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Tue, 1 Sep 2015 09:58:50 -0500
Subject: [openstack-dev] [nova][vmware][qa] vmware nsx CI appears gone
Message-ID: <55E5BD2A.6020908@linux.vnet.ibm.com>

I haven't seen the vmware nsx CI reporting on anything in a while, but I
don't see any outage events here:

https://wiki.openstack.org/wiki/NovaVMware/Minesweeper/Status

Is there some status?

-- 

Thanks,

Matt Riedemann



From john.griffith8 at gmail.com  Tue Sep  1 15:30:26 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 1 Sep 2015 09:30:26 -0600
Subject: [openstack-dev] [cinder] L3 low pri review queue starvation
In-Reply-To: <55E592C6.7070808@dyncloud.net>
References: <55E592C6.7070808@dyncloud.net>
Message-ID: <CAPWkaSUbTRJr1qXFSLN+d1qOF7zjWd7r64cVMpYuUxDYY6t4Ug@mail.gmail.com>

On Tue, Sep 1, 2015 at 5:57 AM, Tom Barron <tpb at dyncloud.net> wrote:

> [Yesterday while discussing the following issue on IRC, jgriffith
> suggested that I post to the dev list in preparation for a discussion in
> Wednesday's cinder meeting.]
>
> Please take a look at the 10 "Low" priority reviews in the cinder
> Liberty 3 etherpad that were punted to Mitaka yesterday. [1]
>
> Six of these *never* [2] received a vote from a core reviewer. With the
> exception of the first in the list, which has 35 patch sets, none of the
> others received a vote before Friday, August 28.  Of these, none had
> more than -1s on minor issues, and these have been remedied.
>
> Review https://review.openstack.org/#/c/213855 "Implement
> manage/unmanage snapshot in Pure drivers" is a great example:
>
>    * approved blueprint for a valuable feature
>    * pristine code
>    * passes CI and Jenkins (and by the deadline)
>    * never reviewed
>
> We have 11 core reviewers, all of whom were very busy doing reviews
> during L3, but evidently this set of reviews didn't really have much
> chance of making it.  This looks like a classic case where the
> individually rational priority decisions of each core reviewer
> collectively resulted in starving the Low Priority review queue.
>
> One way to remedy would be for the 11 core reviewers to devote a day or
> two to cleaning up this backlog of 10 outstanding reviews rather than
> punting all of them out to Mitaka.
>
> Thanks for your time and consideration.
>
> Respectfully,
>
> -- Tom Barron
>
> [1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
> [2] At the risk of stating the obvious, in this count I ignore purely
> procedural votes such as the final -2.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Thanks Tom, this is sadly an ongoing problem every release.  I think we
have a number of things we can talk about at the summit to try to make
some of this better.  I honestly think that if people were to actually
"use" Launchpad instead of creating tracking etherpads everywhere it would
help.  What I mean is that there is a ranked targeting of items in
Launchpad and we should use it; core team members should know that as the
source of truth for things that must get reviewed.

As far as Liberty and your patches: yesterday was the freeze point; the
entire Cinder team agreed on that (yourself included, both at the mid-cycle
meet-up and at the team meeting two weeks ago when Thingee reiterated the
deadlines).  If you noticed last week that your patches weren't going
anywhere, YOU should've wrangled up some reviews.

Furthermore, I've explained every release for the last 3 or 4 years that
there's no silver bullet, no magic process when it comes to review
throughput, ESPECIALLY when it comes to the third milestone.  You can try
landing strips, priority-listed etherpads, sponsors, etc., but the fact is
that things happen; the gate slows down (or we completely break on the
Cinder side like we did yesterday).  This doesn't mean "oh, well then you
get another day or two"; it means stuff happens and it sucks, but the first
course of action is to drop low priority items.  It just means if you really
wanted it you probably should've made it happen earlier.  Just so you know,
I run into this every release as well.  I had a number of things in
progress that I had hoped to finish last week and yesterday, BUT my
priority shifted to trying to help get the cinder patches back on track and
get the items in Launchpad updated to actually reflect something that was
somewhat possible.

The only thing that works is "submit early and review often"; it's simple.

Finally, I pointed out to you yesterday that we could certainly discuss as
a team what to do with your patches.  BUT, given how terribly far
behind we were in the process, I wanted reviewers to focus on medium,
high and critical prioritized items.  That's what prioritizations are for:
it means when crunch time hits and things hit the fan, it's usually the
"low" priority things that get bumped.

Thanks,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/a66cf8c2/attachment.html>

From mlowery at ebay.com  Tue Sep  1 15:41:30 2015
From: mlowery at ebay.com (Lowery, Mathew)
Date: Tue, 1 Sep 2015 15:41:30 +0000
Subject: [openstack-dev] [trove] [heat] Multi region support
Message-ID: <D20B3157.5150C%mlowery@ebay.com>

This is a Trove question, but I'm including Heat as they seem to have solved this problem.

Summary: Today, it seems that Trove is not capable of creating a cluster spanning multiple regions. Is that the case and, if so, are there any plans to work on that? Also, are we aware of any precedent solutions (e.g. remote stacks in Heat) or even partially completed spec/code in Trove?

More details:

I found this nice diagram<https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat/The_Missing_Diagram> created for Heat. As far as I understand it:

#1 is the absence of multi-region support (i.e. what we have today).

#2 seems to be a 100% client-based solution. In other words, the Heat services never know about the other stacks. In fact, there is nothing tying these stacks together at all.

#3 seems to show a "master" Heat server that understands "remote stacks" and simply converts those "remote stacks" into calls on regional Heats. I assume here the master stack record is stored by the master Heat. Because the "remote stacks" are full-fledged stacks, they can be managed by their regional Heats if availability of the master or other regional Heats is lost.

#4: the diagram doesn't seem to match the description (instead of one global Heat, it seems the diagram should show two regional Heats). In this one, a single arbitrary region becomes the owner of the stack and remote (low-level, not stack) resources are created as needed. One problem is that manageability is lost if the Heat in the owning region is lost.

Finally, #5 is just #4 but with one and only one Heat.

It seems like Heat solved this<https://review.openstack.org/#/c/53313/> using #3 (Master Orchestrator) but where there isn't necessarily a separate master Heat. Remote stacks can be created by any regional stack.

Trove questions:

  1.  Having sub-clusters (aka remote clusters aka nested clusters) seems to be useful (i.e. manageability isn't lost when a region is lost). But then again, does it make sense to perform a cluster operation on a sub-cluster?
  2.  You could forego sub-clusters and just create full-fledged remote standalone Trove instances.
  3.  If you don't create full-fledged remote Trove instances (but instead just call remote Nova), then you cannot do simple things like getting logs from a node without going through the owning region's Trove. This is an extra hop and a single point of failure.
  4.  Even with sub-clusters, the only record of them being related lives only in the "owning" region. Then again, some ID tying them all together could be passed to the remote regions.
  5.  Do we want to allow the piecing together of clusters (sort of like Heat's "adopt")?

These are some questions floating around my head and I'm sure there are plenty more. Any thoughts on any of this?

Thanks,
Mat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/3daf3c74/attachment.html>

From me at not.mn  Tue Sep  1 16:00:11 2015
From: me at not.mn (John Dickinson)
Date: Tue, 01 Sep 2015 09:00:11 -0700
Subject: [openstack-dev] [Swift] Swift 2.4.0 release
Message-ID: <CCE0321C-5F23-4917-9CF3-879C80859961@not.mn>

I'm pleased to announce that Swift 2.4.0 is available. As always, you can upgrade to
this version with no end-user downtime. This release has several very nice features,
so I recommend that you upgrade as soon as possible.

You can find the release at:
     https://launchpad.net/swift/liberty/2.4.0

Please note the dependency and config changes in the full release notes:
     https://github.com/openstack/swift/blob/master/CHANGELOG

This release is the result of 55 code contributors, including 29 first-time code
contributors. These are the first-time code contributors:

Timur Alperovich           Kazuhiro Miyahara
Minwoo Bae                 Takashi Natsume
Tim Burke                  Ondrej Novy
Carlos Cavanna             Falk Reimann
Emmanuel Cazenave          Brian Reitz
Clément Contini            Eran Rom
Oshrit Feder               Hamdi Roumani
Charles Hsu                Atsushi Sakai
Joanna H. Huang            Azhagu Selvan SP
Bill Huber                 Alexandra Settle
Jaivish Kothari            Pradeep Kumar Singh
Zhao Lei                   Victor Stinner
Ben Martin                 Akihito Takai
Kenichiro Matsuda          Kai Zhang
Michael Matur


I'd like to call out a few significant changes in this release. Please read
the release notes in the link above for more information about these new
features.

* Allow one or more object servers per disk deployment

  This new deployment model allows you to set the number of object servers per
  drive. The basic change is that you add every drive into the ring at a
  different port, and Swift will automatically run the configured number of
  server processes for each drive. This results in per-disk isolated IO so
  that one slow disk does not slow down every operation to that server. This
  change can dramatically lower request latency and increase requests per
  second.
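  As a hedged sketch of what this looks like operationally (option name
  assumed from this release's servers-per-port work; verify against the
  CHANGELOG linked above before use):

```ini
# object-server.conf -- sketch only, check the release notes.
[DEFAULT]
# Run this many object-server processes for each unique port found in
# the object ring; with one port per disk, IO is isolated per disk.
servers_per_port = 4
```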

* Improve performance for object writes to large containers

  When an object is written in Swift, the object servers will attempt to
  update the listing in the appropriate container server before giving a
  response to the client. If the container server is busy, object latency
  increases as the object server waits for the container server. In this
  release, the object server waits less time (and the wait time is
  configurable) for the container server response, thus lowering overall
  latency. Effectively, this means that object writes will no longer get
  slower as the number of objects in a container increases.

* Users can set per-object metadata with bulk upload

  Bulk uploads allow users to upload an archive (.tar) to Swift and store the
  individual referenced files as separate objects in the system. Swift now
  observes extended attributes set on these files and appropriately sets user
  metadata on the stored objects.

* Allow SLO PUTs to forgo per-segment integrity checks.

  Previously, each segment referenced in the manifest also needed the correct
  etag and bytes setting. These fields now allow the "null" value to skip
  those particular checks on the given segment.
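  For illustration, a hedged sketch of an SLO manifest (field names per
  Swift's SLO middleware; the segment paths are made up) where the first
  entry skips both checks:

```json
[
    {"path": "/segments/seg01", "etag": null, "size_bytes": null},
    {"path": "/segments/seg02",
     "etag": "d41d8cd98f00b204e9800998ecf8427e", "size_bytes": 1048576}
]
```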

There are many other changes that have gone into this release. I encourage you to review
the release notes and upgrade to this release.

Thank you to everyone who contributed to this release. If you would like to get involved
in Swift, please join us in #openstack-swift on freenode IRC.


--John



-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/df7e32bd/attachment.pgp>

From gord at live.ca  Tue Sep  1 16:02:54 2015
From: gord at live.ca (gord chung)
Date: Tue, 1 Sep 2015 12:02:54 -0400
Subject: [openstack-dev] [nova][ceilometer] cputime value resets on
 restart/shutdown
In-Reply-To: <m0twrfw8r9.fsf@danjou.info>
References: <BLU437-SMTP64E395201B29845166C408DE6B0@phx.gbl>
 <m0r3mjxu8g.fsf@danjou.info>
 <BLU436-SMTP135A306AC35A25D20CB3A4EDE6B0@phx.gbl>
 <m0twrfw8r9.fsf@danjou.info>
Message-ID: <BLU436-SMTP715696E1121BCBF5C5F8FDDE6A0@phx.gbl>



On 31/08/2015 3:36 PM, Julien Danjou wrote:
> On Mon, Aug 31 2015, gord chung wrote:
>
>> i'm not sure Gnocchi is where we should be fixing this as it really only
>> (potentially) fixes it for Gnocchi and not for any of the other ways Ceilometer
>> data can be consumed.
> The ideal way is to send the data as it is collected and do the
> treatment in the backend, otherwise you end up implementing the world in
> Ceilometer.

but if it's not actually cumulative in Ceilometer (pre-storage), should 
we really be tagging it as such?

>> a proposed solution in bug and in a previous thread suggests that a 'delta'
>> meter be built from cputime value libvirt provides which would better handle
>> the reset scenario. that said, is there another option to truly have a
>> cumulative meter?
> Yes, the hackish way might be to convert those cumulative to delta with
> a transformer in the pipeline and be done with it. Though you'll probably
> have data points missing on restart, which is not the ultimate solution. As
> soon as you start losing information in the Ceilometer pipeline by
> transforming I'll stand and say "bad idea". :-)

so i was thinking rather than modify the existing meter, we build a new 
cpu.delta meter, which gives the delta. it seems like a delta meter 
would make the behaviour more consistent.

>
> So yes, you have to have a smart storage solution in the end.
>
ideally, i was thinking Nova would capture this data and there would be 
absolutely no data loss as Nova knows exactly when instances are 
restarted/stopped... that said, it's arguably outside the scope of Nova.

cheers,

-- 
gord



From emilien at redhat.com  Tue Sep  1 16:07:22 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 1 Sep 2015 12:07:22 -0400
Subject: [openstack-dev] [puppet] weekly meeting #49
In-Reply-To: <55E44B2B.2090204@redhat.com>
References: <55E44B2B.2090204@redhat.com>
Message-ID: <55E5CD3A.8090507@redhat.com>



On 08/31/2015 08:40 AM, Emilien Macchi wrote:
> Hello,
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150901
> 
> Please add additional items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.

We did our meeting:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-01-15.00.html

Reminder: tomorrow is the start of our 3-day mid-cycle!
Don't forget to look at
https://etherpad.openstack.org/p/puppet-liberty-mid-cycle if you want to
participate.

Happy hacking,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/61e14a8a/attachment.pgp>

From julien at danjou.info  Tue Sep  1 16:10:06 2015
From: julien at danjou.info (Julien Danjou)
Date: Tue, 01 Sep 2015 18:10:06 +0200
Subject: [openstack-dev] [nova][ceilometer] cputime value resets on
	restart/shutdown
In-Reply-To: <BLU436-SMTP715696E1121BCBF5C5F8FDDE6A0@phx.gbl> (gord chung's
 message of "Tue, 1 Sep 2015 12:02:54 -0400")
References: <BLU437-SMTP64E395201B29845166C408DE6B0@phx.gbl>
 <m0r3mjxu8g.fsf@danjou.info>
 <BLU436-SMTP135A306AC35A25D20CB3A4EDE6B0@phx.gbl>
 <m0twrfw8r9.fsf@danjou.info>
 <BLU436-SMTP715696E1121BCBF5C5F8FDDE6A0@phx.gbl>
Message-ID: <m0bndmruht.fsf@danjou.info>

On Tue, Sep 01 2015, gord chung wrote:

> but if it's not actually cumulative in Ceilometer (pre-storage), should we
> really be tagging it as such?

We only have 3 meter types, and the cumulative definition I wrote
somewhere back in 2012 states that it can reset to 0. Sorry. :-)

> so i was thinking rather than modify the existing meter, we build a new
> cpu.delta meter, which gives the delta. it seems like a delta meter
> would make the behaviour more consistent.

…with data loss if you restart the polling agent and it then ignores the
previous value of the meter.

Except if you connect the polling system to the previous state of the
meter, which requires to either:
1. Connect the polling system to the storage system
2. Store locally the latest value you fetched

Option 1 sounds crazy, option 2 sounds less crazy, but still hackish.

Whereas having a system that can compute the delta afterward with all
the values at its reach sounds way better; that's why I'm in favor of
doing that in the storage system (e.g. Gnocchi).
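As a rough illustration of that post-storage approach (a sketch, not Gnocchi code), computing deltas over stored samples while tolerating resets to zero could look like:

```python
# Sketch (not Gnocchi code): derive deltas from a cumulative counter
# that may reset to zero, working over all stored samples at once.
def deltas(samples):
    """samples: time-ordered list of (timestamp, cumulative_value).

    A drop in value is treated as a counter reset, so the new value is
    taken as the delta accumulated since that reset.
    """
    out = []
    prev = None
    for ts, value in samples:
        if prev is not None:
            out.append((ts, value - prev if value >= prev else value))
        prev = value
    return out
```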

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/70e3444f/attachment.pgp>

From anlin.kong at gmail.com  Tue Sep  1 16:21:30 2015
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Wed, 2 Sep 2015 00:21:30 +0800
Subject: [openstack-dev] [mistral] Displaying wf hierarchy in CLI
In-Reply-To: <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
References: <407873FB-639D-40C0-ABD8-A62EA6FDF876@mirantis.com>
 <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
Message-ID: <CALjNAZ2B=cc-C_POc8D1+DgA=vF8hx6hNJV0c5B9bdfrRsi4OQ@mail.gmail.com>

Hi, Renat,

Actually, I had the same idea months ago, but what I thought of was to
provide task dependency information in a workflow definition, since as a
workflow designer, I have no idea how my workflow 'looks', unless I
create an execution of it, especially when there are a lot of tasks
within the workflow definition.

And what you want to address here is to get execution/task-execution
dependency information at run time, which will also give users more
detailed information about what happened in the system. I'd like to see it
land in Mistral.

To achieve that, we should record the execution/task-execution relationship
while an execution is running, because we have no such info currently.

On the CLI side, I agree we should add an option to the 'execution-get'
command, and accordingly we could add a new column 'dependent on' or
'parent' or something else to the result.
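Once a parent id is recorded per execution, rendering the tree in the CLI is straightforward; a hypothetical sketch (field names are placeholders, not Mistral's actual schema):

```python
# Hypothetical sketch of rendering a workflow-execution tree for the
# CLI, assuming each execution record carries a 'parent_id' (the
# relationship this thread proposes recording; None means top-level).
def render_tree(executions, parent=None, indent=0):
    """executions: list of dicts with 'id' and 'parent_id' keys."""
    lines = []
    for ex in executions:
        if ex.get('parent_id') == parent:
            lines.append('  ' * indent + ex['id'])
            lines.extend(render_tree(executions, ex['id'], indent + 1))
    return lines
```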

On Tue, Sep 1, 2015 at 1:47 AM, Joshua Harlow <harlowja at outlook.com> wrote:

> Would u guys have any use for the following being split out into its own
> library?
>
> https://github.com/openstack/taskflow/blob/master/taskflow/types/tree.py
>
> It already has a pformat method that could be used to do your drawing of
> the 'tree'...
>
>
> http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.tree.Node.pformat
>
> Might be useful for u folks? Taskflow uses it to be able to show
> information that is tree-like to the developer/user for similar purposes
> (it also supports using pydot to dump things out in dot graph format):
>
> For example http://tempsend.com/A8AA89F397/4663/car.pdf is the graph of
> an example (in particular
> https://github.com/openstack/taskflow/blob/master/taskflow/examples/build_a_car.py
> )
>
> -Josh
>
> Renat Akhmerov wrote:
>
>> Team,
>>
>> I'd like to discuss
>> https://blueprints.launchpad.net/mistral/+spec/mistral-execution-origin.
>>
>> To summarize what it's about: imagine that we have a workflow which
>> calls other workflows and those workflows call some workflows again,
>> etc. etc. In other words, we have a tree of workflows. Right now there
>> isn't a convenient way to track down the whole execution tree in CLI.
>> For example, I see a running workflow, but I have no idea whether it was
>> started by a user manually or called by another (parent) workflow. In many
>> cases it's crucial to know, otherwise it's really painful if we need to
>> debug something or just figure out the whole picture of what's going on.
>>
>> What this BP offers is that we have an 'origin ID' that would always
>> tell the top-level (the highest one) workflow execution from which it
>> all started. This is a kind of simple solution though, and I thought we
>> could massage this idea a little bit and come up with something
>> more interesting. For example, could we add a new option (i.e.
>> --detailed or --recursive) for the 'mistral execution-get' command, and if
>> it's provided, then we print out information not only about this wf
>> execution itself but about its children as well? The only question is:
>> how do we display a tree in the CLI?
>>
>> I also created an empty etherpad where we can sketch out how it could
>> look:
>> https://etherpad.openstack.org/p/mistral-cli-workflow-execution-tree
>>
>> Any other ideas? Thoughts?
>>
>> Renat Akhmerov
>> @ Mirantis Inc.
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*Regards!*
*-----------------------------------*
*Lingxian Kong*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/0c9d3433/attachment.html>

From rmeggins at redhat.com  Tue Sep  1 16:56:48 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Tue, 1 Sep 2015 10:56:48 -0600
Subject: [openstack-dev] [puppet][keystone] Keystone resource naming
 with domain support - no '::domain' if 'Default'
In-Reply-To: <55DF8321.4040109@redhat.com>
References: <55DCD07C.2040204@redhat.com> <55DEB546.1060207@redhat.com>
 <55DF0546.5050506@redhat.com> <55DF09E8.8030304@redhat.com>
 <55DF247F.1090405@redhat.com> <55DF8321.4040109@redhat.com>
Message-ID: <55E5D8D0.6080102@redhat.com>

To close this thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html

puppet-openstack will support Keystone domain scoped resource names 
without a '::domain' in the name, only if the 'default_domain_id' 
parameter in Keystone has _not_ been set. That is, if the default domain 
is 'Default'.  This means that if the user/operator doesn't care about 
domains at all, the operator doesn't have to deal with them.  However, 
once the user/operator uses `keystone_domain`, and uses `is_default => 
true`, this means the user/operator _must_ use '::domain' with _all_ 
domain scoped Keystone resource names.

In addition:

* In the OpenStack L release:
    If 'default_domain_id' is set, puppet will issue a warning if a name 
is used without '::domain'. I think this is a good thing to do, just in 
case someone sets the default_domain_id by mistake.

* In OpenStack M release:
    Puppet will issue a warning if a name is used without '::domain'.

* From Openstack N release:
    A name must be used with '::domain'.



From jdennis at redhat.com  Tue Sep  1 17:03:56 2015
From: jdennis at redhat.com (John Dennis)
Date: Tue, 1 Sep 2015 13:03:56 -0400
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <D20B7929.29ACD%robert.clark@hp.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com>
Message-ID: <55E5DA7C.9020309@redhat.com>

On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>
>> The reason that is compelling is that you can have Barbican generate,
>> sign, and store a keypair without transmitting the private key over the
>> network to the client that originates the signing request. It can be
>> directly stored, and made available only to the clients that need access
>> to it.
>
> This is absolutely _not_ how PKI for TLS is supposed to work, yes Barbican
> can create keypairs etc because sometimes that's useful but in the
> public-private PKI model that TLS expects this is completely wrong. Magnum
> nodes should be creating their own private key and CSR and submitting them
> to some CA for signing.
>
> Now this gets messy because you probably don't want to push keystone
> credentials onto each node (that they would use to communicate with
> Barbican).
>
> I'm a bit conflicted writing this next bit because I'm not particularly
> familiar with the Kubernetes/Magnum architectures and also because I'm one
> of the core developers for Anchor but here goes...
>
> Have you considered using Anchor for this? It's a pretty lightweight
> ephemeral CA that is built to work well in small PKI communities (like a
> Kubernetes cluster) you can configure multiple methods for authentication
> and build pretty simple validation rules for deciding if a host should be
> given a certificate. Anchor is built to provide short-lifetime
> certificates where each node re-requests a certificate typically every
> 12-24 hours, this has some really nice properties like "passive
> revocation" (Think revocation that actually works) and strong ways to
> enforce issuing logic on a per host basis.
>
> Anchor or not, I'd like to talk to you more about how you're attempting to
> secure Magnum - I think it's an extremely interesting project that I'd
> like to help out with.
>
> -Rob
> (Security Project PTL / Anchor flunkie)

Let's not reinvent the wheel. I can't comment on what Magnum is doing 
but I do know the members of the Barbican project are PKI experts and 
understand CSR's, key escrow, revocation, etc. Some of the design work 
is being done by engineers who currently contribute to products in use 
by the Dept. of Defense, an agency that takes their PKI infrastructure 
very seriously. They also have been involved with Keystone. I work with 
these engineers on a regular basis.

The Barbican blueprint states:

Barbican supports full lifecycle management including provisioning, 
expiration, reporting, etc. A plugin system allows for multiple 
certificate authority support (including public and private CAs).

Perhaps Anchor would be a great candidate for a Barbican plugin.

What I don't want to see is spinning our wheels, going backward, or 
inventing one-off solutions to a very demanding and complex problem 
space. There have been way too many one-off solutions in the past, we 
want to consolidate the expertise in one project that is designed by 
experts and fully vetted, this is the role of Barbican. Would you like 
to contribute to Barbican? I'm sure your skills would be a tremendous 
asset.


-- 
John


From mlowery at ebay.com  Tue Sep  1 17:18:29 2015
From: mlowery at ebay.com (Lowery, Mathew)
Date: Tue, 1 Sep 2015 17:18:29 +0000
Subject: [openstack-dev] [trove] Anyone using containers?
Message-ID: <D20B4812.515A1%mlowery@ebay.com>

Just curious if anyone is using containers in their deployments. If so, in what capacity? What are the advantages, gotchas, and pain points?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/4eb9f54a/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep  1 18:11:23 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 1 Sep 2015 18:11:23 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <55E5DA7C.9020309@redhat.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com>,<55E5DA7C.9020309@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2EE434@EX10MBOX03.pnnl.gov>

https://blueprints.launchpad.net/nova/+spec/instance-users

Please see the above spec. Nova, Keystone and Barbican have been working together on it this cycle and are hoping to implement it in Mitaka.

The problem of delivering secrets from the secret store to instances is not isolated to Magnum.

Thanks,
Kevin
________________________________________
From: John Dennis [jdennis at redhat.com]
Sent: Tuesday, September 01, 2015 10:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Difference between certs stored in keystone and certs stored in barbican

On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>
>> The reason that is compelling is that you can have Barbican generate,
>> sign, and store a keypair without transmitting the private key over the
>> network to the client that originates the signing request. It can be
>> directly stored, and made available only to the clients that need access
>> to it.
>
> This is absolutely _not_ how PKI for TLS is supposed to work, yes Barbican
> can create keypairs etc because sometimes that's useful but in the
> public-private PKI model that TLS expects this is completely wrong. Magnum
> nodes should be creating their own private key and CSR and submitting them
> to some CA for signing.
>
> Now this gets messy because you probably don't want to push keystone
> credentials onto each node (that they would use to communicate with
> Barbican).
>
> I'm a bit conflicted writing this next bit because I'm not particularly
> familiar with the Kubernetes/Magnum architectures and also because I'm one
> of the core developers for Anchor but here goes...
>
> Have you considered using Anchor for this? It's a pretty lightweight
> ephemeral CA that is built to work well in small PKI communities (like a
> Kubernetes cluster) you can configure multiple methods for authentication
> and build pretty simple validation rules for deciding if a host should be
> given a certificate. Anchor is built to provide short-lifetime
> certificates where each node re-requests a certificate typically every
> 12-24 hours, this has some really nice properties like "passive
> revocation" (Think revocation that actually works) and strong ways to
> enforce issuing logic on a per host basis.
>
> Anchor or not, I'd like to talk to you more about how you're attempting to
> secure Magnum - I think it's an extremely interesting project that I'd
> like to help out with.
>
> -Rob
> (Security Project PTL / Anchor flunkie)

Let's not reinvent the wheel. I can't comment on what Magnum is doing
but I do know the members of the Barbican project are PKI experts and
understand CSR's, key escrow, revocation, etc. Some of the design work
is being done by engineers who currently contribute to products in use
by the Dept. of Defense, an agency that takes their PKI infrastructure
very seriously. They also have been involved with Keystone. I work with
these engineers on a regular basis.

The Barbican blueprint states:

Barbican supports full lifecycle management including provisioning,
expiration, reporting, etc. A plugin system allows for multiple
certificate authority support (including public and private CAs).

Perhaps Anchor would be a great candidate for a Barbican plugin.

What I don't want to see is spinning our wheels, going backward, or
inventing one-off solutions to a very demanding and complex problem
space. There have been way too many one-off solutions in the past, we
want to consolidate the expertise in one project that is designed by
experts and fully vetted, this is the role of Barbican. Would you like
to contribute to Barbican? I'm sure your skills would be a tremendous
asset.


--
John

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From gord at live.ca  Tue Sep  1 18:20:25 2015
From: gord at live.ca (gord chung)
Date: Tue, 1 Sep 2015 14:20:25 -0400
Subject: [openstack-dev] [nova][ceilometer] cputime value resets on
 restart/shutdown
In-Reply-To: <m0bndmruht.fsf@danjou.info>
References: <BLU437-SMTP64E395201B29845166C408DE6B0@phx.gbl>
 <m0r3mjxu8g.fsf@danjou.info>
 <BLU436-SMTP135A306AC35A25D20CB3A4EDE6B0@phx.gbl>
 <m0twrfw8r9.fsf@danjou.info> <BLU436-SMTP715696E1121BCBF5C5F8FDDE6A0@phx.gbl>
 <m0bndmruht.fsf@danjou.info>
Message-ID: <BLU437-SMTP50B7DC5C43AF5B1D13B67DDE6A0@phx.gbl>



On 01/09/2015 12:10 PM, Julien Danjou wrote:
> On Tue, Sep 01 2015, gord chung wrote:
>
>> but if it's not actually cumulative in Ceilometer (pre-storage), should we
>> really be tagging it as such?
> We only have 3 meters type, and the cumulative definition I wrote
> somewhere back in 2012 states that it can reset to 0. Sorry. :-)

we shouldn't redefine real words -- no one reads definitions for words 
they already know... also, i'm not sure the 'reset to 0' definition is 
easily accessible (i'm not sure where it exists) :-\

>
>> so i was thinking rather than modify the existing meter, we build a new
>> cpu.delta meter, which gives the delta. it seems like giving a delta meter
>> would make the behaviour more consistent.
> ...with data loss if you restart the polling agent and it then ignores the
> previous value of the meter.
>
> Except if you connect the polling system to the previous state of the
> meter, which requires to either:
> 1. Connect the polling system to the storage system
> 2. Store locally the latest value you fetched
>
> Option 1 sounds crazy, option 2 sounds less crazy, but still hackish.
>
> Whereas having a system that can compute the delta afterward with all
> the value at its reach sounds way better -- that's why I'm in favor of
> doing that in the storage system (e.g. Gnocchi).
>
i realise now we're just rehashing the same topic from 3 years ago[1]. 
it seemed like the issue then was we couldn't scale out multiple 
notification consumers... which we can now.

i agree that (1) is bad. (2) has potential data loss on agent restart...
that said, you already have data loss regardless: when you poll a
'cumulative' meter, the cputime between poll cycles is lost during a
restart. if you handle this at the storage level, every subsequent sample
will be off by an arbitrary amount after each restart, whereas with the
delta solution you'll only be off by a single sample whenever the
instance/agent restarts.

[1]http://lists.openstack.org/pipermail/openstack-dev/2012-November/003297.html
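for reference, the "compute the delta afterward in the storage system"
approach is essentially differencing consecutive cumulative samples while
treating any decrease as a counter reset. a rough sketch (not Gnocchi's
actual code):

```python
def cumulative_to_deltas(samples):
    """Turn a time-ordered list of cumulative readings into deltas.

    A reading lower than its predecessor is treated as a counter reset
    (e.g. the instance or agent restarted), so that sample's delta is
    counted from zero rather than coming out negative.
    """
    deltas = []
    prev = None
    for value in samples:
        if prev is None:
            pass  # no baseline yet; the first sample yields no delta
        elif value >= prev:
            deltas.append(value - prev)
        else:
            deltas.append(value)  # reset: assume the counter restarted at 0
        prev = value
    return deltas
```

with the reset handling done here, a restart only distorts the one sample
that straddles it, which matches the trade-off described above.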

-- 
gord



From adrian.otto at rackspace.com  Tue Sep  1 18:23:40 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Tue, 1 Sep 2015 18:23:40 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <55E5DA7C.9020309@redhat.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
Message-ID: <DD318060-D6AC-409C-82C7-ED63443B41F0@rackspace.com>

John and Robert,

On Sep 1, 2015, at 10:03 AM, John Dennis <jdennis at redhat.com<mailto:jdennis at redhat.com>> wrote:

On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:

The reason that is compelling is that you can have Barbican generate,
sign, and store a keypair without transmitting the private key over the
network to the client that originates the signing request. It can be
directly stored, and made available only to the clients that need access
to it.

This is absolutely _not_ how PKI for TLS is supposed to work, yes Barbican
can create keypairs etc because sometimes that's useful but in the
public-private PKI model that TLS expects this is completely wrong. Magnum
nodes should be creating their own private key and CSR and submitting them
to some CA for signing.

Now this gets messy because you probably don't want to push keystone
credentials onto each node (that they would use to communicate with
Barbican).

Exactly. Using Keystone trust tokens was one option we discussed, but placing those on the bay nodes is problematic. They currently offer us the rough equivalent of a regular keystone token because not all OpenStack services check the scope of the token used to auth against the service, meaning that a trust token is effectively a bearer token for interacting with most of OpenStack. We would need to land patches in *every* OpenStack project in order to work around this gap. This is simply not practical in the time we have before the Liberty release, and is much more work than our contributors signed up for. Both our Magnum team and our Barbican team met about this together at our recent Midcycle meetup. We did talk about how to put additional trust support capabilities into Barbican to allow delegation and restricted use of Barbican by individual service accounts.

The bottom line is that we need a functional TLS implementation in Magnum for Kubernetes and Docker Swarm to use now, and we can't in good conscience claim that Magnum is suitable for production workloads until we address this. If we have to take some shortcuts to get this done, then that's fine, as long as we commit to revisiting our design compromises and correcting them.

There is another ML thread that references a new Nova spec for "Instance Users" which is still in the concept stage:

https://review.openstack.org/186617

We need something now, even if it's not perfect. The first thing we must solve is that the Kubernetes and Docker APIs are currently open on public networks with no protection (no authentication), allowing anyone to just start stuff up on your container clusters. We are going to crawl before we walk/run here. We plan to use a TLS certificate as an authentication mechanism, so that if you don't have the correct certificate, you can't communicate with the TLS-enabled API port.

This is what we are doing as a first step:

https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/magnum-as-a-ca,n,z
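To illustrate the client side of that model: with mutual TLS, a caller
that lacks the right certificate simply cannot complete the handshake with
the API port. A minimal Python sketch (the file paths and endpoint are
hypothetical, not part of Magnum's actual tooling):

```python
import ssl


def tls_client_context(ca_file, cert_file, key_file):
    """Build an SSL context that presents a client certificate.

    A server configured to require client certificates will reject any
    connection that cannot present a cert signed by the expected CA,
    which is the authentication model described above.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx


# Usage (hypothetical endpoint and paths):
# import urllib.request
# ctx = tls_client_context("bay-ca.pem", "client.pem", "client-key.pem")
# urllib.request.urlopen("https://kube-api.example:6443/version", context=ctx)
```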

I'm a bit conflicted writing this next bit because I'm not particularly
familiar with the Kubernetes/Magnum architectures and also because I'm one
of the core developers for Anchor but here goes...

Have you considered using Anchor for this? It's a pretty lightweight
ephemeral CA that is built to work well in small PKI communities (like a
Kubernetes cluster) you can configure multiple methods for authentication
and build pretty simple validation rules for deciding if a host should be
given a certificate. Anchor is built to provide short-lifetime
certificates where each node re-requests a certificate typically every
12-24 hours, this has some really nice properties like "passive
revocation" (Think revocation that actually works) and strong ways to
enforce issuing logic on a per host basis.

Anchor or not, I'd like to talk to you more about how you're attempting to
secure Magnum - I think it's an extremely interesting project that I'd
like to help out with.

-Rob
(Security Project PTL / Anchor flunkie)

Let's not reinvent the wheel. I can't comment on what Magnum is doing but I do know the members of the Barbican project are PKI experts and understand CSR's, key escrow, revocation, etc. Some of the design work is being done by engineers who currently contribute to products in use by the Dept. of Defense, an agency that takes their PKI infrastructure very seriously. They also have been involved with Keystone. I work with these engineers on a regular basis.

The Barbican blueprint states:

Barbican supports full lifecycle management including provisioning, expiration, reporting, etc. A plugin system allows for multiple certificate authority support (including public and private CAs).

Perhaps Anchor would be a great candidate for a Barbican plugin.

That would be cool. I'm not sure that the use case for Anchor exactly fits into Barbican's concept of a CA, but if there were a clean integration point there, I'd love to use it.

What I don't want to see is spinning our wheels, going backward, or inventing one-off solutions to a very demanding and complex problem space. There have been way too many one-off solutions in the past, we want to consolidate the expertise in one project that is designed by experts and fully vetted, this is the role of Barbican. Would you like to contribute to Barbican? I'm sure your skills would be a tremendous asset.

To be clear, Magnum has no interest in overlapping with Barbican. To the extent that we can iterate, and remove the cryptography code from Magnum, and leverage it in Barbican, we will do that over time, because we don't want a duplicated support burden for that codebase. We would applaud collaboration between Anchor and Barbican, as we found navigating the capabilities of each of these choices to be rather confusing.

Thanks,

Adrian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/20f20abd/attachment.html>

From gord at live.ca  Tue Sep  1 18:31:42 2015
From: gord at live.ca (gord chung)
Date: Tue, 1 Sep 2015 14:31:42 -0400
Subject: [openstack-dev] [oslo][versionedobjects][ceilometer] explain
 the benefits of ceilometer+versionedobjects
In-Reply-To: <F66AE3B0-96A6-4792-B322-E31C8299A0FD@cisco.com>
References: <BLU437-SMTP766811C9632CFCC37305CDDE6F0@phx.gbl>
 <20150828181856.ec477ff41bb188afb35f2b31@intel.com>
 <BLU437-SMTP74C2826061CBC4631D7108DE6E0@phx.gbl>
 <D205EA21.3535C%ahothan@cisco.com>
 <BLU436-SMTP14A77A6CC7A969283FD695DE6E0@phx.gbl>
 <F66AE3B0-96A6-4792-B322-E31C8299A0FD@cisco.com>
Message-ID: <BLU436-SMTP79E6616D1950645B7A9769DE6A0@phx.gbl>



On 28/08/2015 5:18 PM, Alec Hothan (ahothan) wrote:
>
>
>
>
> On 8/28/15, 11:39 AM, "gord chung" <gord at live.ca> wrote:
>
>> i should start by saying i re-read my subject line and it arguably comes
>> off aggressive -- i should probably have dropped 'explain' :)
>>
>> On 28/08/15 01:47 PM, Alec Hothan (ahothan) wrote:
>>> On 8/28/15, 10:07 AM, "gord chung" <gord at live.ca> wrote:
>>>
>>>> On 28/08/15 12:18 PM, Roman Dobosz wrote:
>>>>> So imagine we have new versions of the schema for the events, alarms or
>>>>> samples in ceilometer introduced in Mitaka release while you have all
>>>>> your ceilo services on Liberty release. To upgrade ceilometer you'll
>>>>> have to stop all services to avoid data corruption. With
>>>>> versionedobjects you can do this one by one without disrupting
>>>>> telemetry jobs.
>>>> are versions checked for every single message? has anyone considered the
>>>> overhead to validating each message? since ceilometer is queue based, we
>>>> could technically just publish to a new queue when schema changes... and
>>>> the consuming services will listen to the queue it knows of.
>>>>
>>>> ie. our notification service changes schema so it will now publish to a
>>>> v2 queue, the existing collector service consumes the v1 queue until
>>>> done at which point you can upgrade it and it will listen to v2 queue.
>>>>
>>>> this way there is no need to validate/convert anything and you can still
>>>> take services down one at a time. this support doesn't exist currently
>>>> (i just randomly thought of it) but assuming there's no flaw in my idea
>>>> (which there may be) isn't this more efficient?
>>> If high performance is a concern for ceilometer (and it should) then maybe
>>> there might be better options than JSON?
>>> JSON is great for many applications but can be inappropriate for other
>>> demanding applications.
>>> There are other popular open source encoding options that yield much more
>>> compact wire payload, more efficient encoding/decoding and handle
>>> versioning to a reasonable extent.
>> i should clarify. we let oslo.messaging serialise our dictionary how it
>> does... i believe it's JSON. i'd be interested to switch it to something
>> more efficient. maybe it's time we revive the msgpacks patch[1] or are
>> there better alternatives? (hoping i didn't just unleash a storm of
>> 'this is better' replies)
> I'd be curious to know if there is any benchmark on the oslo serializer for msgpack and how it compares to JSON?
> More important is to make sure we're optimizing in the right area.
> Do we have a good understanding of where ceilometer needs to improve to scale or is it still not quite clear cut?

re: serialisation, that probably isn't the biggest concern for 
Ceilometer performance. the main items are storage -- to be addressed by 
Gnocchi/tsdb, and polling load. i just thought i'd point out an existing 
serialisation patch since we were on the topic :-)

>
>>> Queue based versioning might be less runtime overhead per message but at
>>> the expense of a potentially complex queue version management (which can
>>> become tricky if you have more than 2 versions).
>>> I think Neutron was considering to use versioned queues as well for its
>>> rolling upgrade (along with versioned objects) and I already pointed out
>>> that managing the queues could be tricky.
>>>
>>> In general, trying to provide a versioning framework that allows to do
>>> arbitrary changes between versions is quite difficult (and often bound to
>>> fail).
>>>
>> yeah, so that's what a lot of the devs are debating about right now.
>> performance is our key driver so if we do something we think/know will
>> negatively impact performance, it better bring a whole lot more of
>> something else. if queue based versioning offers comparable
>> functionalities, i'd personally be more interested to explore that route
>> first. is there a thread/patch/log that we could read to see what
>> Neutron discovered when they looked into it?
> The versioning comments are buried in this mega patch if you are brave enough to dig in:
>
> https://review.openstack.org/#/c/190635
>
> The (offline) conclusion was that this was WIP and deserved more discussion (need to check back with Miguel and Ihar from the Neutron team).
> One option considered in that discussion was to use oslo messaging topics to manage flows of messages that had different versions (and still use versionedobjects). So if you have 3 versions in your cloud you'd end up with 3 topics (and as many queues when it comes to Rabbit). What is complex is to manage the queues/topic names (how to name them), how to discover them and how to deal with all the corner cases (like a new node coming in with an arbitrary version, nodes going away at any moment, downgrade cases).

conceptually, i would think only the consumers need to know about all 
the queues and even then, it should only really need to know about the 
ones it understands. the producers (polling agents) can just fire off to 
the correct versioned queue and be done... thanks for the above link 
(it'll help with discussion/spec design).
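a toy sketch of that split -- producers publish to the topic matching their
own schema version, consumers drain only the versions they can decode (the
topic naming scheme here is made up, and this is an in-memory stand-in,
not oslo.messaging):

```python
from collections import defaultdict


class VersionedBus:
    """Toy in-memory stand-in for version-suffixed notification topics."""

    def __init__(self):
        self.queues = defaultdict(list)

    def topic(self, base, version):
        return f"{base}.v{version}"  # naming scheme is an assumption

    def publish(self, base, version, payload):
        # a producer only needs to know its own schema version
        self.queues[self.topic(base, version)].append(payload)

    def consume(self, base, understood_versions):
        # a consumer drains only the versions it understands; messages in
        # newer topics wait for an upgraded consumer
        out = []
        for v in understood_versions:
            out.extend(self.queues.pop(self.topic(base, v), []))
        return out
```

this shows the property discussed above: no per-message version check or
conversion, at the cost of managing the set of live topic names.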

cheers,

-- 
gord



From douglas.mendizabal at rackspace.com  Tue Sep  1 18:38:47 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Tue, 1 Sep 2015 13:38:47 -0500
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <55E5DA7C.9020309@redhat.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
Message-ID: <55E5F0B7.5000307@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Added a few comments inline.

- - Douglas Mendizábal

On 9/1/15 12:03 PM, John Dennis wrote:
> On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>> 
>>> The reason that is compelling is that you can have Barbican 
>>> generate, sign, and store a keypair without transmitting the 
>>> private key over the network to the client that originates the
>>>  signing request. It can be directly stored, and made available
>>>  only to the clients that need access to it.
>> 
>> This is absolutely _not_ how PKI for TLS is supposed to work,
>> yes Barbican can create keypairs etc because sometimes that's
>> useful but in the public-private PKI model that TLS expects this
>> is completely wrong. Magnum nodes should be creating their own 
>> private key and CSR and submitting them to some CA for signing.
>> 

Barbican provides a lot of options for provisioning certificates. We
do support provisioning certs by only passing a CSR so that clients
can keep ownership of their keys, if that's what the client prefers.

Of course, when you're provisioning keys for every node in a cluster
for many clusters then key management becomes an issue, and if these
are not throwaway keys, then storing them in Barbican makes sense.

We can also provision the keys, and create CSRs on the Barbican side,
so we make it very easy for clients who don't want to do any of this
locally.
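The "client keeps ownership of its key" flow is simply: generate the key
locally, build a CSR, and hand only the CSR to the CA. A sketch using the
pyca/cryptography library (the subject name is made up, and this is not
Barbican's client API):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def make_key_and_csr(common_name):
    """Generate a private key locally, plus a CSR to submit for signing.

    Only the returned CSR PEM ever needs to leave the node; the private
    key stays local, which is the property discussed above.
    """
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(
            x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
        )
        .sign(key, hashes.SHA256())
    )
    key_pem = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    )
    return key_pem, csr.public_bytes(serialization.Encoding.PEM)
```

The alternative flow described above (Barbican generates and stores the
key) trades this locality for centralized key management.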

>> Now this gets messy because you probably don't want to push
>> keystone credentials onto each node (that they would use to 
>> communicate with Barbican).
>> 

Kevin Fox is working on a Nova spec to provide identity to VMs. I'm
really hoping this spec gains some traction because it's a problem
that not only Barbican, but all other user-facing projects can benefit
from.

See: https://blueprints.launchpad.net/nova/+spec/instance-users

>> I'm a bit conflicted writing this next bit because I'm not
>> particularly familiar with the Kubernetes/Magnum architectures
>> and also because I'm one of the core developers for Anchor but
>> here goes...
>> 
>> Have you considered using Anchor for this? It's a pretty
>> lightweight ephemeral CA that is built to work well in small PKI
>>  communities (like a Kubernetes cluster) you can configure 
>> multiple methods for authentication and build pretty simple 
>> validation rules for deciding if a host should be given a 
>> certificate. Anchor is built to provide short-lifetime 
>> certificates where each node re-requests a certificate typically
>>  every 12-24 hours, this has some really nice properties like 
>> "passive revocation" (Think revocation that actually works) and
>> strong ways to enforce issuing logic on a per host basis.
>> 

Someone from the Magnum team can correct me if I'm wrong, but I do
believe they considered Anchor for certificate provisioning.

As I understand the Magnum use case, they will be provisioning many
clusters across different tenants. While Anchor would work well for a
single cluster, they need the ability to provision new CA roots for
each and every cluster, and then provision certs off that root for
every node in the cluster. This way node certs are only valid in the
context of the cluster.

A new feature for Barbican Liberty will be the ability to add new CA
roots scoped to a tenant via API, which will address the Magnum
requirements of separating the certificate roots per cluster.

>> Anchor or not, I'd like to talk to you more about how you're
>> attempting to secure Magnum - I think it's an extremely
>> interesting project that I'd like to help out with.
>> 
>> -Rob (Security Project PTL / Anchor flunkie)
> 
> Let's not reinvent the wheel. I can't comment on what Magnum is 
> doing but I do know the members of the Barbican project are PKI 
> experts and understand CSR's, key escrow, revocation, etc. Some of
>  the design work is being done by engineers who currently
> contribute to products in use by the Dept. of Defense, an agency
> that takes their PKI infrastructure very seriously. They also have
> been involved with Keystone. I work with these engineers on a
> regular basis.
> 
> The Barbican blueprint states:
> 
> Barbican supports full lifecycle management including
> provisioning, expiration, reporting, etc. A plugin system allows
> for multiple certificate authority support (including public and
> private CAs).
> 
> Perhaps Anchor would be a great candidate for a Barbican plugin.
> 

We've talked about this before with the Security team, and I agree
that adding a CA plugin to Barbican to support Anchor would be awesome.

> What I don't want to see is spinning our wheels, going backward,
> or inventing one-off solutions to a very demanding and complex
> problem space. There have been way too many one-off solutions in
> the past, we want to consolidate the expertise in one project that
> is designed by experts and fully vetted, this is the role of
> Barbican. Would you like to contribute to Barbican? I'm sure your
> skills would be a tremendous asset.
> 
> 

Rob does help out on Barbican occasionally, and I agree his skills are
a tremendous asset. :)
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV5fC3AAoJEB7Z2EQgmLX7/+0P/i/YwO+LlONywiJkNZCHe7+D
Bx7W4BXu9L+fiStF8DFtFXeU/LZ0de8SOk8rCRQ4f2ZS3nDscE/B6NumooVo7rPI
tz/0Er9Tu2mFmObPaCzbcX5la1+6U3JVP4qzFoQ9Yxu+Ai5Kbav36c+P97v3fy6N
uwMm8UiZDGEgH9Z168C6X0qSJ2KOk2/34osxxllfFizYcKTu4A1PQDZbavB5btQ5
k9zOtNoEyiQcwOcyAa6Jr2/QYsiSthMZ33Q7BPkCa6h2vNUGt2wESSgroB+ThWJL
P8NkLZAhHM4Q+mULucmKcDlqNYb/nb/XOlsW1VTgVF0W1wYa7TqnFLqCptmjLCE/
d2P46OqV0eKN7uR6bK8xRrvbX66qTMm5pQN4mAmKPf9Oomj/9M4bsOjv4GhSf04+
o83Y/V4avcDKiO8zS19G/T7nay+YbBGSndjGATxr5EtmePIudHj98ygpGoo4vAOM
aLFCVYDIjTPvI9kWZUFD1wGu6hkUfeSpLFHjQfd1llt9qOgBhIb7R2cNE8mJMWWM
aZzbklGlp+SA6AzYF6x5WnMmr3C1MJngaRq9AL7S+0ti504eE5bibwtJjh91c8jo
v+pGC+A0+gw/x1fHijB+aUij15t8rH8FWnIsLXUBzrukt2xLJdpATrvef+6JI+QL
/Vqul3W0RzPRaoRp1bWc
=jtb/
-----END PGP SIGNATURE-----


From clint at fewbar.com  Tue Sep  1 18:46:45 2015
From: clint at fewbar.com (Clint Byrum)
Date: Tue, 01 Sep 2015 11:46:45 -0700
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAN25hfgNfaX6ajHRcSKyyMJ+iux=-cR3CUm+bCF_xWEw4Od1xw@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <CAAbQNRnBFqfiiDEHEQwY5f=HBbZifVqFk1CBO715pSgO1pz=nQ@mail.gmail.com>
 <CA+GZd78FrtjaLb8NAS_tmiyA3B-Z3L4Qsh495LCteP311w8phg@mail.gmail.com>
 <CAA16xcxwYg6fF7yTbaKioSjz4d1qHtRfkn6NeJCprJ-63GjuQQ@mail.gmail.com>
 <CAN25hfgNfaX6ajHRcSKyyMJ+iux=-cR3CUm+bCF_xWEw4Od1xw@mail.gmail.com>
Message-ID: <1441132932-sup-6074@fewbar.com>

Excerpts from Anant Patil's message of 2015-08-30 23:01:29 -0700:
> Hi Angus,
> 
> Thanks for doing the tests with convergence. We are now assured that
> convergence has not impacted the performance in a negative way. Given
> that, in convergence, a stack provisioning process goes through a lot of
> RPC calls, it puts a lot of load on the message broker and the request
> looses time in network traversal etc., and in effect would hamper the
> performance. As the results show, having more than 2 engines will always
> yield better results with convergence. Since the deployments usually
> have 2 or more engines, this works in favor of convergence.
> 
> I have always held that convergence is more for scale (how much/many)
> than for performance (response time), due to its design of distributing
> load (resource provisioning from single stack) among heat engines and
> also due to the fact that heat actually spends a lot of time waiting for
> the delegated resource request to be completed, not doing much
> computation. However, with these tests, we can eliminate any
> apprehension of performance issues which would have inadvertently
> sneaked in, with our focus more on scalability and reliability, than on
> performance.
> 
> I was thinking we should be doing some scale testing where we have many
> bigger stacks provisioned and compare the results with legacy, where we
> measure memory, CPU and network bandwidth.
> 

Convergence would be worth it if it was 2x slower in response time, and
scaled 10% worse. Because while scalability is super important, the main
point is resilience to failure of an engine. Add in engine restarts,
failures, etc, to these tests, and I think the advantages will be quite
a bit more skewed toward convergence.
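To make that resilience point concrete, here is a toy sketch of why distributing per-resource work over a shared queue survives engine failure (engine names and the round-robin scheduling are invented for illustration; real convergence dispatches over oslo.messaging RPC, not an in-process queue):

```python
import queue

# Toy model: per-resource provisioning tasks go onto a shared queue (the
# message broker); any surviving engine can pick them up. Losing an engine
# mid-stack just means another engine takes its tasks, not a stuck stack.
tasks = queue.Queue()
for resource in ["network", "subnet", "server-1", "server-2"]:
    tasks.put(resource)

engines = {"engine-1": [], "engine-2": [], "engine-3": []}
alive = ["engine-1", "engine-3"]  # engine-2 has crashed

i = 0
while not tasks.empty():
    engine = alive[i % len(alive)]  # round-robin among surviving engines
    engines[engine].append(tasks.get())
    i += 1

print(engines["engine-2"])  # -> [] : the dead engine did no work, yet all
                            # four resources were still provisioned
```

In the legacy path a single engine owned the whole stack for the duration, so that engine dying meant the stack was stranded; here the unit of loss is a single resource task.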

Really nice work everyone!


From zbitter at redhat.com  Tue Sep  1 18:47:22 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Tue, 1 Sep 2015 14:47:22 -0400
Subject: [openstack-dev] [trove] [heat] Multi region support
In-Reply-To: <D20B3157.5150C%mlowery@ebay.com>
References: <D20B3157.5150C%mlowery@ebay.com>
Message-ID: <55E5F2BA.9000703@redhat.com>

On 01/09/15 11:41, Lowery, Mathew wrote:
> This is a Trove question but including Heat as they seem to have solved
> this problem.
>
> Summary: Today, it seems that Trove is not capable of creating a cluster
> spanning multiple regions. Is that the case and, if so, are there any
> plans to work on that? Also, are we aware of any precedent solutions
> (e.g. remote stacks in Heat) or even partially completed spec/code in Trove?
>
> More details:
>
> I found this nice diagram
> <https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat/The_Missing_Diagram> created
> for Heat. As far as I understand it,

Clarifications below...

> #1 is the absence of multi-region
> support (i.e. what we have today). #2 seems to be a 100% client-based
> solution. In other words, the Heat services never know about the other
> stacks.

I guess you could say that.

> In fact, there is nothing tying these stacks together at all.

I wouldn't go that far. The regional stacks still appear as resources in 
their parent stack, so they're tied together by whatever inputs and 
outputs are connected up in that stack.
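As a hedged illustration of that wiring (resource names, region names, and the regional template body here are invented for the example), a parent template under approach #2 might look roughly like this, expressed as the Python dict a HOT template parses into:

```python
import json

# Hypothetical regional template: one server, exporting its address.
regional_template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "server": {
            "type": "OS::Nova::Server",
            "properties": {"flavor": "m1.small", "image": "cirros"},
        },
    },
    "outputs": {
        "server_ip": {"value": {"get_attr": ["server", "first_address"]}},
    },
}

# Parent template: each regional stack is an OS::Heat::Stack resource whose
# "context" property points Heat's client at another region's endpoint. The
# outputs of one regional stack can feed the parameters of another -- that
# is the "tie" between them, even though each region's Heat only knows
# about its own stack.
parent_template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "stack_region_one": {
            "type": "OS::Heat::Stack",
            "properties": {
                "context": {"region_name": "RegionOne"},
                "template": json.dumps(regional_template),
            },
        },
        "stack_region_two": {
            "type": "OS::Heat::Stack",
            "properties": {
                "context": {"region_name": "RegionTwo"},
                "template": json.dumps(regional_template),
            },
        },
    },
}

print(sorted(parent_template["resources"]))
```

In a real template you would pull one regional stack's `server_ip` output into the other via `get_attr` on the remote-stack resource, which is exactly the inputs/outputs coupling described above.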

> #3
> seems to show a "master" Heat server that understands "remote stacks"
> and simply converts those "remote stacks" into calls on regional Heats.
> I assume here the master stack record is stored by the master Heat.
> Because the "remote stacks" are full-fledged stacks, they can be managed
> by their regional Heats if availability of master or other regional
> Heats is lost.

Yeah.

> #4, the diagram doesn't seem to match the description
> (instead of one global Heat, it seems the diagram should show two
> regional Heats).

It does (they're the two orange boxes).

> In this one, a single arbitrary region becomes the
> owner of the stack and remote (low-level not stack) resources are
> created as needed. One problem is the manageability is lost if the Heat
> in the owning region is lost. Finally, #5. In #5, it's just #4 but with
> one and only one Heat.
>
> It seems like Heat solved this <https://review.openstack.org/#/c/53313/>
> using #3 (Master Orchestrator)

No, we implemented #2.

> but where there isn't necessarily a
> separate master Heat. Remote stacks can be created by any regional stack.

Yeah, that was the difference between #3 and #2 :)

cheers,
Zane.

> Trove questions:
>
>  1. Having sub-clusters (aka remote clusters aka nested clusters) seems
>     to be useful (i.e. manageability isn't lost when a region is lost).
>     But then again, does it make sense to perform a cluster operation on
>     a sub-cluster?
>  2. You could forego sub-clusters and just create full-fledged remote
>     standalone Trove instances.
>  3. If you don't create full-fledged remote Trove instances (but instead
>     just call remote Nova), then you cannot do simple things like
>     getting logs from a node without going through the owning region's
>     Trove. This is an extra hop and a single point of failure.
>  4. Even with sub-clusters, the only record of them being related lives
>     only in the "owning" region. Then again, some ID tying them all
>     together could be passed to the remote regions.
>  5. Do we want to allow the piecing together of clusters (sort of like
>     Heat's "adopt")?
>
> These are some questions floating around my head and I'm sure there are
> plenty more. Any thoughts on any of this?
>
> Thanks,
> Mat
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From fungi at yuggoth.org  Tue Sep  1 18:56:39 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 1 Sep 2015 18:56:39 +0000
Subject: [openstack-dev] [all] Criteria for applying vulnerability:managed
	tag
Message-ID: <20150901185638.GB7955@yuggoth.org>

Bringing OpenStack vulnerability management processes to the Big
Top started a couple of months ago with the creation of a deliverable tag
called vulnerability:managed, the definition of which can be found
at:

http://governance.openstack.org/reference/tags/vulnerability_managed.html

Its initial application is based on a crufty old wiki page which
previously listed what repos the VMT tracks for security
vulnerabilities with no discernible rationale (later extended from
repos to deliverables, as the TC decided against having per-repo
tags). As such, the requirements to qualify for this tag are
currently a rather dissatisfyingly non-transparent "ask the VMT and
we'll finalize it with the TC." It's past time we fix this.

In the spirit of proper transparency, I'm initiating a frank and
open dialogue on what our criteria for direct vulnerability
management within the VMT would require of a deliverable and its
controlling project-team. In the past a number of possible
requirements have been mentioned, so I'll enumerate the ones I can
recall here along with some pros and cons, and others can add to
this as they see fit...

1. Since the vulnerability:managed governance tag applies to
deliverables, all repos within a given deliverable must meet the
qualifying criteria. This means that if some repos in a deliverable
are in good enough shape to qualify, their vulnerability management
could be held back by other repos in the same deliverable. It might
be argued that perhaps this makes them separate deliverables (in
which case the governance projects.yaml should get an update to
reflect that), or maybe that we really have a use case for per-repo
tags and that the TC needs to consider deliverable and repo tags as
separate ideas.

2. The deliverable must have a dedicated point of contact for
security issues (which could be shared by multiple deliverables in a
given project-team if needed), so that the VMT can engage them to
triage reports of potential vulnerabilities. Deliverables with more
than a handful of core reviewers should (so as to limit the
unnecessary exposure of private reports) settle on a subset of these
to act as security core reviewers whose responsibility it is to be
able to confirm whether a bug report is accurate/applicable or at
least know other subject matter experts they can in turn subscribe
to perform those activities in a timely manner. They should also be
able to review and provide pre-approval of patches attached to
private bugs, which is why they are expected to be core reviewers
for the deliverable. These should be members of a group contact (for
example a <something>-coresec team in launchpad) so that the VMT can
easily subscribe them to new bugs.

3. The PTL for the deliverable should agree to act as (or delegate)
a vulnerability management liaison, serving as a point of escalation
for the VMT in situations where severe or lingering vulnerability
reports are failing to gain traction toward timely and thorough
resolution.

4. The bug tracker for the repos within the deliverable should be
configured to initially only provide access for the VMT to
privately-reported vulnerabilities. It is the responsibility of the
VMT to determine whether suspected vulnerabilities are reported
against the correct deliverable and redirect them when possible,
since reporters are often unfamiliar with our project structure and
may choose incorrectly. For Launchpad, this means making sure that
https://launchpad.net/<tracker-name>/+sharing shows only the
OpenStack Vulnerability Management Team (openstack-vuln-mgmt) with
sharing for Private Security: All. It implies some loss of control
for the project team over initial triage of bugs reported privately
as suspected vulnerabilities, but in some cases helps reduce the
number of people who have knowledge of them prior to public
disclosure.

5. The deliverable's repos should undergo a third-party review/audit
looking for obvious signs of insecure design or risky implementation
which could imply a large number of future vulnerability reports. As
much as anything this is a measure to keep the VMT's workload down,
since it is by necessity a group of constrained size and some of its
processes simply can't be scaled safely. It's not been identified
who would actually perform this review, though this is one place
members of the OpenStack Security project-team might volunteer to
provide their expertise and assistance.

I still have a few open questions as well...

A. Can the VMT accept deliverables in any programming language?
OpenStack has a lot of expertise in Python, so it's possible for us
to find people able to confidently review Python source code for
unsafe practices. We have tools and workflows which make it easier
to set up development and testing environments to confirm reported
vulnerabilities and test fixes. For Python-based deliverables we
have lots of solutions in place to improve code quality, and could
for example even require that proposed changes get gated by bandit.
On the other hand, all the VMT's work is on a best-effort basis, so
it may simply be that non-Python deliverables interested in VMT
oversight are willing to accept these shortcomings and the
inefficiencies they imply.

B. As we expand the VMT's ring within the Big Top to encircle more
and varied acts, are there parts of our current process we need to
reevaluate for better fit? For example, right now we have one list
of downstream stakeholders (primarily Linux distros and large public
providers) we notify of upcoming coordinated disclosures, but as the
list grows longer and the kinds of deliverables we support become
more diverse some of them can have different downstream communities
and so a single contact list may no longer make sense.

C. Should we be considering a different VMT configuration entirely,
to better service some under-represented subsets of the OpenStack
community? Perhaps multiple VMTs with different specialties or a
tiered structure with focused subteams.

D. Are there other improvements we can make so that our
recommendations and processes are more consumable by other groups
within OpenStack, further distributing the workload or making it
more self-service (perhaps reducing the need for direct VMT
oversight in more situations)?
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 949 bytes
Desc: Digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/ee29ddd6/attachment-0001.pgp>

From ildiko.vancsa at ericsson.com  Tue Sep  1 20:31:24 2015
From: ildiko.vancsa at ericsson.com (=?iso-8859-1?Q?Ildik=F3_V=E1ncsa?=)
Date: Tue, 1 Sep 2015 20:31:24 +0000
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <55E5B3A0.7010504@redhat.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
 <55E5B3A0.7010504@redhat.com>
Message-ID: <408D5BC6C96B654BBFC5B5A9B60D13431A80880F@ESESSMB105.ericsson.se>

Hi,

I'm glad to see the interest and I also support the idea of using the IRC channel that is already set up for further communication. Should we aim for a meeting/discussion there around the end of this week or during next week?

@Nikolay, Sylvain: Thanks for the support and for bringing together a list of action items as the very first steps.

@Pierre: You wrote that you are using Blazar. Are you using it as-is with an older version of OpenStack, or do you have a modified version of the project/code?

Best Regards,
Ildikó

> -----Original Message-----
> From: Sylvain Bauza [mailto:sbauza at redhat.com]
> Sent: September 01, 2015 16:18
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Blazar] Anyone interested?
> 
> 
> 
> Le 01/09/2015 15:46, Nikolay Starodubtsev a écrit :
> 
> 
> 	Also, if we decide to continue development we should add blazar here [1], according to the email [2].
> 	So, my suggestion is to set up some timeframe this week or next week and hold some kind of meeting.
> 	[1]: https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
> 	[2]: http://lists.openstack.org/pipermail/openstack-dev/2015-August/073071.html
> 
> 
> 
> 
> That's just what I said I was OK to add, for sure.
> 
> 
> 	Nikolay Starodubtsev
> 
> 
> 	Software Engineer
> 
> 	Mirantis Inc.
> 
> 
> 
> 
> 
> 	Skype: dark_harlequine1
> 
> 
> 
> 	2015-09-01 12:42 GMT+03:00 Nikolay Starodubtsev <nstarodubtsev at mirantis.com>:
> 
> 
> 		Sylvain,
> 		First of all we need to reanimate the blazar gate-jobs, or we can't merge anything. I tried to do it a year ago,
> but couldn't get the point of the tests,
> 		so a better decision may be to rewrite them from scratch.
> 
> 
> 
> 
> 
> 
> 
> 
> 		Nikolay Starodubtsev
> 
> 
> 		Software Engineer
> 
> 		Mirantis Inc.
> 
> 
> 
> 
> 
> 		Skype: dark_harlequine1
> 
> 
> 
> 		2015-09-01 11:26 GMT+03:00 Sylvain Bauza <sbauza at redhat.com>:
> 
> 
> 
> 
> 
> 			Le 01/09/2015 06:52, Nikolay Starodubtsev a écrit :
> 
> 
> 				All,
> 				I'd like to propose use of #openstack-blazar for further communication and
> coordination.
> 
> 
> 
> 
> 			+2 to that. That's the first step of any communication. The channel logs are also recorded
> here, for async communication :
> 			http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/
> 
> 			I don't see much benefit at the moment in running a weekly meeting. We can chat as
> needed.
> 
> 			Like I said to Ildiko, I'm fine with helping some people discover Blazar but I won't have
> lots of time for actually working on it.
> 
> 			IMHO, the first things to do with Blazar are to reduce the tech debt by:
> 			 1/ finishing the Climate->Blazar renaming
> 			 2/ updating to and using the latest oslo libraries instead of the old incubator
> 			 3/ using Nova V2.1 API (which could be a bit difficult because there are no more
> extensions)
> 
> 			If I see some progress with Blazar, I'm OK with asking -infra to move Blazar to the
> OpenStack namespace, as was asked by James Blair here, because it seems Blazar is not defunct:
> 			http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
> 
> 			-Sylvain
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 				Nikolay Starodubtsev
> 
> 
> 				Software Engineer
> 
> 				Mirantis Inc.
> 
> 
> 
> 
> 
> 				Skype: dark_harlequine1
> 
> 
> 
> 				2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com>:
> 
> 
> 					Yes, Blazar is a really interesting project. I worked on it some
> time ago and I really enjoyed it. Sadly my obligations at work don't let me keep working on it, but I'm happy that there is still some interest
> in Blazar.
> 
> 					Pablo.
> 					On 31/08/15 09:19, Zhenyu Zheng wrote:
> 					Hello,
> 					It seems like an interesting project.
> 
> 					On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau
> <priteau at uchicago.edu<mailto:priteau at uchicago.edu>> wrote:
> 					Hello,
> 
> 					The NSF-funded Chameleon project
> (https://www.chameleoncloud.org) uses Blazar to provide advance reservations of resources for running cloud computing
> experiments.
> 
> 					We would be interested in contributing as well.
> 
> 					Pierre Riteau
> 
> 					On 28 Aug 2015, at 07:56, Ildikó Váncsa
> <ildiko.vancsa at ericsson.com<mailto:ildiko.vancsa at ericsson.com>> wrote:
> 
> 					> Hi All,
> 					>
> 					> The resource reservation topic pops up from time to time on
> different forums to cover use cases in terms of both IT and NFV. The Blazar project was intended to address this need, but to
> my knowledge the work has been stopped due to earlier integration and other difficulties.
> 					>
> 					> My question is that who would be interested in resurrecting
> the Blazar project and/or working on a reservation system in OpenStack?
> 					>
> 					> Thanks and Best Regards,
> 					> Ildikó
> 					>
> 					>
> __________________________________________________________________________
> 					> OpenStack Development Mailing List (not for usage
> questions)
> 					> Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> 					> http://lists.openstack.org/cgi-
> bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 	__________________________________________________________________________
> 					OpenStack Development Mailing List (not for usage questions)
> 					Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> 					http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
> dev
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 	__________________________________________________________________________
> 			OpenStack Development Mailing List (not for usage questions)
> 			Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> 			http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> 
> 
> 
> 	__________________________________________________________________________
> 	OpenStack Development Mailing List (not for usage questions)
> 	Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> 	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From robert.clark at hp.com  Tue Sep  1 20:33:11 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Tue, 1 Sep 2015 20:33:11 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <55E5F0B7.5000307@rackspace.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
 <55E5F0B7.5000307@rackspace.com>
Message-ID: <D20B5643.29B92%robert.clark@hp.com>

On 01/09/2015 11:38, "Douglas Mendizábal"
<douglas.mendizabal at rackspace.com> wrote:

This turned into exactly what I was trying to avoid; I probably shouldn't
have mentioned Anchor. But as I started us down this road (where really I
was just expressing some concerns over certificate lifecycle), please see
my comments below :)

>-----BEGIN PGP SIGNED MESSAGE-----
>Hash: SHA512
>
>Added a few comments inline.
>
>- - Douglas Mendizábal
>
>On 9/1/15 12:03 PM, John Dennis wrote:
>> On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>>> 
>>>> The reason that is compelling is that you can have Barbican
>>>> generate, sign, and store a keypair without transmitting the
>>>> private key over the network to the client that originates the
>>>>  signing request. It can be directly stored, and made available
>>>>  only to the clients that need access to it.
>>> 
>>> This is absolutely _not_ how PKI for TLS is supposed to work,
>>> yes Barbican can create keypairs etc because sometimes that's
>>> useful but in the public-private PKI model that TLS expects this
>>> is completely wrong. Magnum nodes should be creating their own
>>> private key and CSR and submitting them to some CA for signing.
>>> 
>
>Barbican provides a lot of options for provisioning certificates. We
>do support provisioning certs by only passing a CSR so that clients
>can keep ownership of their keys, if that's what the client prefers.
>
>Of course, when you're provisioning keys for every node in a cluster
>for many clusters then key management becomes an issue, and if these
>are not throwaway keys, then storing them in Barbican makes sense.
>
>We can also provision the keys, and create CSRs on the Barbican side,
>so we make it very easy for clients who don't want to do any of this
>locally.

I wasn't bashing Barbican, just the way that it looks like it's being used
here. As I said, there are good reasons why Barbican works the way it does,
but not all available options are appropriate for all types of deployments.
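As a minimal sketch of the client-side flow being discussed (the private key is generated on the node and never leaves it; only the CSR travels to the CA, be it Barbican, Anchor, or anything else), using the third-party `cryptography` library; the node's common name is of course made up:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the keypair locally; this object never leaves the node.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR binding the public key to a (hypothetical) node identity,
# self-signed with the private key to prove possession.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"node-0.example-cluster.local"),
    ]))
    .sign(key, hashes.SHA256())
)

# Only this PEM blob is submitted to the CA for signing.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
print(csr_pem.splitlines()[0])
```

The key-management trade-off Douglas raises is exactly about who runs this snippet: on each node (keys never transit the network, but thousands of keys to look after) or centrally in Barbican (one escrow point, fewer moving parts on the nodes).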



>
>>> Now this gets messy because you probably don't want to push
>>> keystone credentials onto each node (that they would use to
>>> communicate with Barbican).
>>> 
>
>Kevin Fox is working on a Nova spec to provide identity to VMs. I'm
>really hoping this spec gains some traction because it's a problem
>that not only Barbican, but all other user-facing projects can benefit
>from.
>
>See: https://blueprints.launchpad.net/nova/+spec/instance-users
>
>>> I'm a bit conflicted writing this next bit because I'm not
>>> particularly familiar with the Kubernetes/Magnum architectures
>>> and also because I'm one of the core developers for Anchor, but
>>> here goes...
>>> 
>>> Have you considered using Anchor for this? It's a pretty
>>> lightweight ephemeral CA that is built to work well in small PKI
>>> communities (like a Kubernetes cluster); you can configure
>>> multiple methods for authentication and build pretty simple
>>> validation rules for deciding if a host should be given a
>>> certificate. Anchor is built to provide short-lifetime
>>> certificates where each node re-requests a certificate typically
>>> every 12-24 hours. This has some really nice properties like
>>> "passive revocation" (think revocation that actually works) and
>>> strong ways to enforce issuing logic on a per-host basis.
>>> 
>
>Someone from the Magnum team can correct me if I'm wrong, but I do
>believe they considered Anchor for certificate provisioning.
>
>As I understand the Magnum use case, they will be provisioning many
>clusters across different tenants. While Anchor would work well for a
>single cluster, they need the ability to provision new CA roots for
>each and every cluster, and then provision certs off that root for
>every node in the cluster. This way node certs are only valid in the
>context of the cluster.
>
>A new feature for Barbican Liberty will be the ability to add new CA
>roots scoped to a tenant via API, which will address the Magnum
>requirements of separating the certificate roots per cluster.

Interestingly, we recently added support for this: you can have multiple
CA roots within Anchor. When we announced it a few summits back we
focussed on how you can use Anchor in silo'd deployments where you would
spin up an instance for each cluster, but recently we added the ability to
run Anchor with multiple CA roots, each with its own auth and its own
validation chain.
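The "passive revocation" property quoted above falls out of the renewal schedule: each node re-requests its certificate shortly before expiry, so revoking a host is just the CA declining to re-issue. A toy sketch of the scheduling arithmetic (the helper name, lifetime, and margin are invented for illustration):

```python
import datetime

CERT_LIFETIME = datetime.timedelta(hours=12)  # short-lived by design
RENEW_MARGIN = datetime.timedelta(hours=1)    # re-request an hour before expiry

def seconds_until_renewal(not_after, now, margin=RENEW_MARGIN):
    """How long a node sleeps before asking the CA for a fresh certificate.

    If the CA declines to re-issue (the host failed a validation rule, or
    an operator removed it), the current cert simply lapses at not_after:
    revocation without CRLs or OCSP.
    """
    return max((not_after - now - margin).total_seconds(), 0.0)

issued = datetime.datetime(2015, 9, 1, 0, 0, 0)
expires = issued + CERT_LIFETIME
print(seconds_until_renewal(expires, issued))  # -> 39600.0 (11 hours)
```

The worst-case window a compromised-but-"revoked" host stays trusted is bounded by the certificate lifetime, which is why keeping it to hours rather than months matters here.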


>
>>> Anchor or not, I'd like to talk to you more about how you're
>>> attempting to secure Magnum - I think it's an extremely
>>> interesting project that I'd like to help out with.
>>> 
>>> -Rob (Security Project PTL / Anchor flunkie)
>> 
>> Let's not reinvent the wheel. I can't comment on what Magnum is
>> doing but I do know the members of the Barbican project are PKI
>> experts and understand CSR's, key escrow, revocation, etc. Some of
>>  the design work is being done by engineers who currently
>> contribute to products in use by the Dept. of Defense, an agency
>> that takes their PKI infrastructure very seriously. They also have
>> been involved with Keystone. I work with these engineers on a
>> regular basis.

To address some of the points above: Anchor and Barbican are complementary
technologies. I'm not suggesting that one be used over the other, just
that the way that Barbican is being proposed to be used looks scary to me.
As for the people that work on it, I think they're great! They're the same
sort of (in some cases exactly the same) people that contribute to Anchor.


>> 
>> The Barbican blueprint states:
>> 
>> Barbican supports full lifecycle management including
>> provisioning, expiration, reporting, etc. A plugin system allows
>> for multiple certificate authority support (including public and
>> private CAs).
>> 
>> Perhaps Anchor would be a great candidate for a Barbican plugin.
>> 
>
>We've talked about this before with the Security team, and I agree
>that adding a CA plugin to Barbican to support Anchor would be awesome.

+1

>
>> What I don't want to see is spinning our wheels, going backward,
>> or inventing one-off solutions to a very demanding and complex
>> problem space. There have been way too many one-off solutions in
>> the past, we want to consolidate the expertise in one project that
>> is designed by experts and fully vetted, this is the role of
>> Barbican. Would you like to contribute to Barbican? I'm sure your
>> skills would be a tremendous asset.
>> 
>> 
>
>Rob does help out on Barbican occasionally, and I agree his skills are
>a tremendous asset. :)

I have a lot of love for Barbican, just not enough time!

>







From robert.clark at hp.com  Tue Sep  1 20:35:27 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Tue, 1 Sep 2015 20:35:27 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A2EE434@EX10MBOX03.pnnl.gov>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2EE434@EX10MBOX03.pnnl.gov>
Message-ID: <D20B59B0.29BB2%robert.clark@hp.com>

Extremely interesting.

This is something that we are looking at during the Security mid-cycle
(happening this week) see "Secure communications between control plane and
tenant plane? under
https://etherpad.openstack.org/p/security-liberty-midcycle

This is a problem for a lot of different projects; we've added your
blueprint and hopefully we'll be able to help with this moving forward.

-Rob



On 01/09/2015 11:11, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:

>https://blueprints.launchpad.net/nova/+spec/instance-users
>
>Please see the above spec. Nova, Keystone and Barbican have been working
>together on it this cycle and are hoping to implement it in Mitaka.
>
>The problem of secrets from the secret store is not isolated to just
>Magnum.
>
>Thanks,
>Kevin
>________________________________________
>From: John Dennis [jdennis at redhat.com]
>Sent: Tuesday, September 01, 2015 10:03 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
>keystone and certs stored in barbican
>
>On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>>
>>> The reason that is compelling is that you can have Barbican generate,
>>> sign, and store a keypair without transmitting the private key over the
>>> network to the client that originates the signing request. It can be
>>> directly stored, and made available only to the clients that need
>>> access to it.
>>
>> This is absolutely _not_ how PKI for TLS is supposed to work; yes,
>> Barbican can create keypairs etc because sometimes that's useful, but in
>> the public-private PKI model that TLS expects this is completely wrong.
>> Magnum nodes should be creating their own private key and CSR and
>> submitting them to some CA for signing.
>>
>> Now this gets messy because you probably don't want to push keystone
>> credentials onto each node (that they would use to communicate with
>> Barbican).
>>
>> I'm a bit conflicted writing this next bit because I'm not particularly
>> familiar with the Kubernetes/Magnum architectures and also because I'm
>> one of the core developers for Anchor, but here goes...
>>
>> Have you considered using Anchor for this? It's a pretty lightweight
>> ephemeral CA that is built to work well in small PKI communities (like a
>> Kubernetes cluster); you can configure multiple methods for
>> authentication and build pretty simple validation rules for deciding if
>> a host should be given a certificate. Anchor is built to provide
>> short-lifetime certificates where each node re-requests a certificate
>> typically every 12-24 hours. This has some really nice properties like
>> "passive revocation" (think revocation that actually works) and strong
>> ways to enforce issuing logic on a per-host basis.
>>
>> Anchor or not, I'd like to talk to you more about how you're attempting
>> to secure Magnum - I think it's an extremely interesting project that I'd
>> like to help out with.
>>
>> -Rob
>> (Security Project PTL / Anchor flunkie)
>
>Let's not reinvent the wheel. I can't comment on what Magnum is doing,
>but I do know the members of the Barbican project are PKI experts who
>understand CSRs, key escrow, revocation, etc. Some of the design work
>is being done by engineers who currently contribute to products in use
>by the Dept. of Defense, an agency that takes its PKI infrastructure
>very seriously. They also have been involved with Keystone. I work with
>these engineers on a regular basis.
>
>The Barbican blueprint states:
>
>Barbican supports full lifecycle management including provisioning,
>expiration, reporting, etc. A plugin system allows for multiple
>certificate authority support (including public and private CAs).
>
>Perhaps Anchor would be a great candidate for a Barbican plugin.
>
>What I don't want to see is spinning our wheels, going backward, or
>inventing one-off solutions to a very demanding and complex problem
>space. There have been way too many one-off solutions in the past; we
>want to consolidate the expertise in one project that is designed by
>experts and fully vetted. This is the role of Barbican. Would you like
>to contribute to Barbican? I'm sure your skills would be a tremendous
>asset.
>
>
>--
>John
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From itzshamail at gmail.com  Tue Sep  1 20:38:02 2015
From: itzshamail at gmail.com (Shamail)
Date: Tue, 1 Sep 2015 16:38:02 -0400
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <55E5B3A0.7010504@redhat.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
 <55E5B3A0.7010504@redhat.com>
Message-ID: <26A351DB-970D-45EC-8217-F885ECB6CFB0@gmail.com>

Hi everyone,

Is the information on the wiki still up to date?
https://wiki.openstack.org/wiki/Blazar

I have copied the Product WG mailing list as well since one of the user stories[1] being discussed in that forum has to do with capacity management and resource reservation.

[1] http://specs.openstack.org/openstack/openstack-user-stories/user-stories/draft/capacity_management.html

Thanks,
Shamail 



> On Sep 1, 2015, at 10:18 AM, Sylvain Bauza <sbauza at redhat.com> wrote:
> 
> 
> 
> On 01/09/2015 15:46, Nikolay Starodubtsev wrote:
>> Also, if we decide to continue development we should add blazar here [1], according to the email [2].
>> So, my suggestion is to set up some timeframe this week or next week and hold some kind of meeting.
>> [1]: https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
>> [2]: http://lists.openstack.org/pipermail/openstack-dev/2015-August/073071.html
> 
> That's just what I said I was OK to add, for sure.
> 
>> 
>>                                   
>> Nikolay Starodubtsev
>> Software Engineer
>> Mirantis Inc.
>> 
>> Skype: dark_harlequine1
>> 
>> 2015-09-01 12:42 GMT+03:00 Nikolay Starodubtsev <nstarodubtsev at mirantis.com>:
>>> Sylvain,
>>> First of all we need to reanimate the blazar gate jobs, or we can't merge anything. I tried to do it a year ago but couldn't get the point of the tests,
>>> so a better decision may be to rewrite them from scratch.
>>> 
>>>                                   
>>> Nikolay Starodubtsev
>>> Software Engineer
>>> Mirantis Inc.
>>> 
>>> Skype: dark_harlequine1
>>> 
>>> 2015-09-01 11:26 GMT+03:00 Sylvain Bauza <sbauza at redhat.com>:
>>>> 
>>>> 
>>>> On 01/09/2015 06:52, Nikolay Starodubtsev wrote:
>>>>> All,
>>>>> I'd like to propose use of #openstack-blazar for further communication and coordination.
>>>> 
>>>> 
>>>> +2 to that. That's the first step of any communication. The channel logs are also recorded here, for async communication:
>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/
>>>> 
>>>> I don't see many benefits at the moment in running a weekly meeting. We can chat on purpose if needed.
>>>> 
>>>> Like I said to Ildiko, I'm fine to help some people discovering Blazar, but I won't have lots of time for actually working on it.
>>>> 
>>>> IMHO, the first things to do with Blazar are to reduce the tech debt by:
>>>>  1/ finishing the Climate->Blazar renaming
>>>>  2/ updating and using the latest oslo libraries instead of using the old incubator
>>>>  3/ using Nova V2.1 API (which could be a bit difficult because there are no more extensions)
>>>> 
>>>> If I see some progress with Blazar, I'm OK with asking -infra to move Blazar to the OpenStack namespace, as James Blair asked here, because it seems Blazar is not defunct:
>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>>>> 
>>>> -Sylvain
>>>> 
>>>> 
>>>> 
>>>> 
>>>>>                                   
>>>>> Nikolay Starodubtsev
>>>>> Software Engineer
>>>>> Mirantis Inc.
>>>>> 
>>>>> Skype: dark_harlequine1
>>>>> 
>>>>> 2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com>:
>>>>>> Yes, Blazar is a really interesting project. I worked on it some time ago and I really enjoyed it. Sadly my obligations at work don't let me keep working on it, but I'm happy that there is still some interest in Blazar.
>>>>>> 
>>>>>> Pablo.
>>>>>> On 31/08/15 09:19, Zhenyu Zheng wrote:
>>>>>> Hello,
>>>>>> It seems like an interesting project.
>>>>>> 
>>>>>> On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau <priteau at uchicago.edu<mailto:priteau at uchicago.edu>> wrote:
>>>>>> Hello,
>>>>>> 
>>>>>> The NSF-funded Chameleon project (https://www.chameleoncloud.org) uses Blazar to provide advance reservations of resources for running cloud computing experiments.
>>>>>> 
>>>>>> We would be interested in contributing as well.
>>>>>> 
>>>>>> Pierre Riteau
>>>>>> 
>>>>>> On 28 Aug 2015, at 07:56, Ildikó Váncsa <ildiko.vancsa at ericsson.com> wrote:
>>>>>> 
>>>>>> > Hi All,
>>>>>> >
>>>>>> > The resource reservation topic pops up from time to time on different forums to cover use cases in terms of both IT and NFV. The Blazar project was intended to address this need, but to my knowledge the work was stopped due to earlier integration and other difficulties.
>>>>>> >
>>>>>> > My question is: who would be interested in resurrecting the Blazar project and/or working on a reservation system in OpenStack?
>>>>>> >
>>>>>> > Thanks and Best Regards,
>>>>>> > Ildikó
>>>>>> >
>>>>>> > __________________________________________________________________________
>>>>>> > OpenStack Development Mailing List (not for usage questions)
>>>>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>> 
>>>>>> 
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> 
>>>>> 
>>>>> 
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> 
>>>> 
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/3aecdc70/attachment.html>

From dtroyer at gmail.com  Tue Sep  1 20:47:03 2015
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 1 Sep 2015 15:47:03 -0500
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <1440446092-sup-2361@lrrr.local>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com>
 <1440446092-sup-2361@lrrr.local>
Message-ID: <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>

[late catch-up]

On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
> > On 24/08/15 18:19 +0000, Tim Bell wrote:
> > >
> > >From a user perspective, where bare metal and VMs are just different
> flavors (with varying capabilities), can we not use the same commands
> (server create/rebuild/...) ? Containers will create the same conceptual
> problems.
> > >
> > >OSC can provide a converged interface but if we just replace '$ ironic
> XXXX' by '$ openstack baremetal XXXX', this seems to be a missed
> opportunity to hide the complexity from the end user.
> > >
> > >Can we re-use the existing server structures ?
>

I've wondered how users would see doing this; we've done it already
with the quota and limits commands (blurring the distinction between
project APIs). At some level I am sure users really do not care about some
of our project distinctions.


> To my knowledge, overriding or enhancing existing commands like that
> is not possible.
>
> You would have to do it in tree, by making the existing commands
> smart enough to talk to both nova and ironic, first to find the
> server (which service knows about something with UUID XYZ?) and
> then to take the appropriate action on that server using the right
> client. So it could be done, but it might lose some of the nuance
> between the server types by munging them into the same command. I
> don't know what sorts of operations are different, but it would be
> worth doing the analysis to see.
>

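Doug's lookup step ("which service knows about something with UUID XYZ?") amounts to probing each client in turn. A small sketch of that logic, using stand-in client objects rather than the real novaclient/ironicclient APIs (all names here are illustrative):

```python
class NotFound(Exception):
    pass

def find_server(uuid, clients):
    """Ask each service in turn and return (service_name, record) for
    the first one that recognizes the UUID, as Doug describes."""
    for name, client in clients:
        try:
            return name, client.get(uuid)
        except NotFound:
            continue
    raise NotFound("no service knows about %s" % uuid)

# Stand-in clients; a real version would wrap novaclient and ironicclient.
class FakeClient:
    def __init__(self, known):
        self._known = known
    def get(self, uuid):
        if uuid in self._known:
            return self._known[uuid]
        raise NotFound(uuid)

clients = [
    ("compute", FakeClient({"vm-1": {"uuid": "vm-1", "type": "vm"}})),
    ("baremetal", FakeClient({"bm-1": {"uuid": "bm-1", "type": "baremetal"}})),
]
service, record = find_server("bm-1", clients)
```

Once the owning service is known, the command can dispatch the action through the matching client, which is exactly the in-tree "smart command" approach Doug outlines.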
I do have an experimental plugin that hooks the server create command to
add some options and change its behaviour, so it is possible, but right now
I wouldn't call it supported at all. That might be something we could
consider doing, though, for things like this.

The current model for commands calling multiple project APIs is to put them
in openstackclient.common, so yes, in-tree.

Overall, though, to stay consistent with OSC you would map operations into
the current verbs as much as possible.  It is best to think in terms of how
the CLI user is thinking and what she wants to do, and not how the REST or
Python API is written.  In this case, 'baremetal' is a type of server, a
set of attributes of a server, etc.  As mentioned earlier, containers will
also have a similar paradigm to consider.
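Dean's advice to map operations into the current verbs can be illustrated with a minimal dispatcher keyed on the user-facing resource and verb rather than the backing project. All names below are invented for the sketch; the real OSC registers cliff command classes via setuptools entry points rather than a dict:

```python
# Keep the CLI surface verb-oriented ("server create") even when several
# projects (nova, ironic) back the same resource.
def create_server(args):
    backend = "ironic" if args.get("baremetal") else "nova"
    return {"action": "create", "backend": backend, "name": args["name"]}

COMMANDS = {
    ("server", "create"): create_server,
}

def dispatch(resource, verb, **args):
    try:
        handler = COMMANDS[(resource, verb)]
    except KeyError:
        raise SystemExit("unknown command: %s %s" % (resource, verb))
    return handler(args)

# The user says what she wants ("create a server, baremetal flavor"),
# not which REST API should handle it.
result = dispatch("server", "create", name="node1", baremetal=True)
```

The design point is the same as Dean's: the user thinks "server", and "baremetal" is an attribute of that server, not a separate top-level command tree.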

dt

-- 

Dean Troyer
dtroyer at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/a30e2a8a/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep  1 21:31:00 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 1 Sep 2015 21:31:00 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <D20B59B0.29BB2%robert.clark@hp.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2EE434@EX10MBOX03.pnnl.gov>,
 <D20B59B0.29BB2%robert.clark@hp.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2EE5DB@EX10MBOX03.pnnl.gov>

Awesome. Thanks. :)

Are there any plans for the summit yet? I think we should all get together and talk about it.

Thanks,
Kevin
________________________________________
From: Clark, Robert Graham [robert.clark at hp.com]
Sent: Tuesday, September 01, 2015 1:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Difference between certs stored in keystone and certs stored in barbican

Extremely interesting.

This is something that we are looking at during the Security mid-cycle
(happening this week); see "Secure communications between control plane and
tenant plane" under
https://etherpad.openstack.org/p/security-liberty-midcycle

This is a problem for a lot of different projects; we've added your
blueprint and hopefully we'll be able to help with this moving forward.

-Rob



On 01/09/2015 11:11, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:

>https://blueprints.launchpad.net/nova/+spec/instance-users
>
>Please see the above spec. Nova, Keystone and Barbican have been working
>together on it this cycle and are hoping to implement it in Mitaka.
>
>The problem of secrets from the secret store is not isolated to just
>Magnum.
>
>Thanks,
>Kevin
>________________________________________
>From: John Dennis [jdennis at redhat.com]
>Sent: Tuesday, September 01, 2015 10:03 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
>keystone and certs stored in barbican
>
>On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>>
>>> The reason that is compelling is that you can have Barbican generate,
>>> sign, and store a keypair without transmitting the private key over the
>>> network to the client that originates the signing request. It can be
>>> directly stored, and made available only to the clients that need
>>>access
>>> to it.
>>
>> This is absolutely _not_ how PKI for TLS is supposed to work. Yes, Barbican
>> can create keypairs etc. because sometimes that's useful, but in the
>> public-private PKI model that TLS expects this is completely wrong. Magnum
>> nodes should be creating their own private key and CSR and submitting them
>> to some CA for signing.
>>
>> Now this gets messy because you probably don't want to push keystone
>> credentials onto each node (that they would use to communicate with
>> Barbican).
>>
>> I'm a bit conflicted writing this next bit because I'm not particularly
>> familiar with the Kubernetes/Magnum architectures and also because I'm one
>> of the core developers for Anchor, but here goes....
>>
>> Have you considered using Anchor for this? It's a pretty lightweight
>> ephemeral CA that is built to work well in small PKI communities (like a
>> Kubernetes cluster). You can configure multiple methods for authentication
>> and build pretty simple validation rules for deciding if a host should be
>> given a certificate. Anchor is built to provide short-lifetime
>> certificates where each node re-requests a certificate, typically every
>> 12-24 hours. This has some really nice properties like "passive
>> revocation" (think revocation that actually works) and strong ways to
>> enforce issuing logic on a per-host basis.
>>
>> Anchor or not, I'd like to talk to you more about how you're attempting to
>> secure Magnum - I think it's an extremely interesting project that I'd
>> like to help out with.
>>
>> -Rob
>> (Security Project PTL / Anchor flunkie)
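The "passive revocation" property mentioned above follows directly from short lifetimes: a certificate that is not re-issued simply ages out, with no CRL or OCSP needed. A minimal sketch of the validity check, using a 12-hour lifetime taken from the 12-24 hour range quoted (illustrative only, not Anchor's actual code):

```python
import datetime

CERT_LIFETIME = datetime.timedelta(hours=12)

def is_valid(cert_issued_at, now=None):
    """A cert is trusted only within its short lifetime; revocation is
    'passive' because a node the CA stops re-issuing to simply ages out."""
    if now is None:
        now = datetime.datetime.utcnow()
    return now - cert_issued_at < CERT_LIFETIME

issued = datetime.datetime(2015, 9, 1, 0, 0)
still_trusted = is_valid(issued, now=datetime.datetime(2015, 9, 1, 6, 0))
aged_out = is_valid(issued, now=datetime.datetime(2015, 9, 2, 0, 0))
```

Contrast this with classic revocation, where every verifier must consult a CRL or OCSP responder that may be stale or unreachable.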
>
>Let's not reinvent the wheel. I can't comment on what Magnum is doing,
>but I do know the members of the Barbican project are PKI experts who
>understand CSRs, key escrow, revocation, etc. Some of the design work
>is being done by engineers who currently contribute to products in use
>by the Dept. of Defense, an agency that takes its PKI infrastructure
>very seriously. They also have been involved with Keystone. I work with
>these engineers on a regular basis.
>
>The Barbican blueprint states:
>
>Barbican supports full lifecycle management including provisioning,
>expiration, reporting, etc. A plugin system allows for multiple
>certificate authority support (including public and private CAs).
>
>Perhaps Anchor would be a great candidate for a Barbican plugin.
>
>What I don't want to see is spinning our wheels, going backward, or
>inventing one-off solutions to a very demanding and complex problem
>space. There have been way too many one-off solutions in the past; we
>want to consolidate the expertise in one project that is designed by
>experts and fully vetted. This is the role of Barbican. Would you like
>to contribute to Barbican? I'm sure your skills would be a tremendous
>asset.
>
>
>--
>John
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sbauza at redhat.com  Tue Sep  1 22:15:49 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 02 Sep 2015 00:15:49 +0200
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <408D5BC6C96B654BBFC5B5A9B60D13431A80880F@ESESSMB105.ericsson.se>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
 <55E5B3A0.7010504@redhat.com>
 <408D5BC6C96B654BBFC5B5A9B60D13431A80880F@ESESSMB105.ericsson.se>
Message-ID: <55E62395.1040901@redhat.com>



On 01/09/2015 22:31, Ildikó Váncsa wrote:
> Hi,
>
> I'm glad to see the interest and I also support the idea of using the IRC channel that is already set up for further communication. Should we aim for a meeting/discussion there around the end of this week or during next week?
>
> @Nikolay, Sylvain: Thanks for support and bringing together a list of action items as very first steps.
>
> @Pierre: You wrote that you are using Blazar. Are you using it as is with an older version of OpenStack, or do you have a modified version of the project/code?
I'm actually really surprised to see 
https://www.chameleoncloud.org/docs/user-guides/bare-metal-user-guide/ 
which describes quite well how to use Blazar/Climate, either via the 
CLI or through Horizon.

The latter is actually not provided within the git tree, so I guess 
Chameleon added it downstream. That's fine; maybe it is something we could 
upstream if Pierre and his team are okay?

-Sylvain



>
> Best Regards,
> Ildikó
>
>> -----Original Message-----
>> From: Sylvain Bauza [mailto:sbauza at redhat.com]
>> Sent: September 01, 2015 16:18
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Blazar] Anyone interested?
>>
>>
>>
>> On 01/09/2015 15:46, Nikolay Starodubtsev wrote:
>>
>>
>> 	Also, if we decide to continue development we should add blazar here [1], according to the email [2].
>> 	So, my suggestion is to set up some timeframe this week or next week and hold some kind of meeting.
>> 	[1]: https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
>> 	[2]: http://lists.openstack.org/pipermail/openstack-dev/2015-August/073071.html
>>
>>
>>
>>
>> That's just what I said I was OK to add, for sure.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 	Nikolay Starodubtsev
>>
>>
>> 	Software Engineer
>>
>> 	Mirantis Inc.
>>
>>
>>
>>
>>
>> 	Skype: dark_harlequine1
>>
>>
>>
>> 	2015-09-01 12:42 GMT+03:00 Nikolay Starodubtsev <nstarodubtsev at mirantis.com>:
>>
>>
>> 		Sylvain,
>> 		First of all we need to reanimate the blazar gate jobs, or we can't merge anything. I tried to do it a year ago
>> but couldn't get the point of the tests,
>> 		so a better decision may be to rewrite them from scratch.
>>
>>
>>
>>
>>
>>
>>
>>
>> 		Nikolay Starodubtsev
>>
>>
>> 		Software Engineer
>>
>> 		Mirantis Inc.
>>
>>
>>
>>
>>
>> 		Skype: dark_harlequine1
>>
>>
>>
>> 		2015-09-01 11:26 GMT+03:00 Sylvain Bauza <sbauza at redhat.com>:
>>
>>
>>
>>
>>
>> 			On 01/09/2015 06:52, Nikolay Starodubtsev wrote:
>>
>>
>> 				All,
>> 				I'd like to propose use of #openstack-blazar for further communication and
>> coordination.
>>
>>
>>
>>
>> 			+2 to that. That's the first step of any communication. The channel logs are also recorded
>> here, for async communication :
>> 			http://eavesdrop.openstack.org/irclogs/%23openstack-blazar/
>>
>> 			I don't see at the moment much benefits of running a weekly meeting. We can chat on
>> purpose if needed.
>>
>> 			Like I said to Ildiko, I'm fine to help some people discovering Blazar, but I won't have
>> lots of time for actually working on it.
>>
>> 			IMHO, the first things to do with Blazar are to reduce the tech debt by:
>> 			 1/ finishing the Climate->Blazar renaming
>> 			 2/ updating and using the latest oslo libraries instead of using the old incubator
>> 			 3/ using Nova V2.1 API (which could be a bit difficult because there are no more
>> extensions)
>>
>> 			If I see some progress with Blazar, I'm OK with asking -infra to move Blazar to the
>> OpenStack namespace like it was asked by James Blair here because it seems Blazar is not defunct :
>> 			http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>>
>> 			-Sylvain
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 				Nikolay Starodubtsev
>>
>>
>> 				Software Engineer
>>
>> 				Mirantis Inc.
>>
>>
>>
>>
>>
>> 				Skype: dark_harlequine1
>>
>>
>>
>> 				2015-08-31 19:44 GMT+03:00 Fuente, Pablo A <pablo.a.fuente at intel.com>:
>>
>>
>> 					Yes, Blazar is a really interesting project. I worked on it some
>> time ago and I really enjoyed it. Sadly my obligations at work don't let me keep working on it, but I'm happy that there is still some interest
>> in Blazar.
>>
>> 					Pablo.
>> 					On 31/08/15 09:19, Zhenyu Zheng wrote:
>> 					Hello,
>> 					It seems like an interesting project.
>>
>> 					On Fri, Aug 28, 2015 at 7:54 PM, Pierre Riteau
>> <priteau at uchicago.edu<mailto:priteau at uchicago.edu>> wrote:
>> 					Hello,
>>
>> 					The NSF-funded Chameleon project
>> (https://www.chameleoncloud.org) uses Blazar to provide advance reservations of resources for running cloud computing
>> experiments.
>>
>> 					We would be interested in contributing as well.
>>
>> 					Pierre Riteau
>>
>> 					On 28 Aug 2015, at 07:56, Ildikó Váncsa
>> <ildiko.vancsa at ericsson.com> wrote:
>>
>> 					> Hi All,
>> 					>
>> 					> The resource reservation topic pops up from time to time on
>> different forums to cover use cases in terms of both IT and NFV. The Blazar project was intended to address this need, but to
>> my knowledge the work was stopped due to earlier integration and other difficulties.
>> 					>
>> 					> My question is: who would be interested in resurrecting
>> the Blazar project and/or working on a reservation system in OpenStack?
>> 					>
>> 					> Thanks and Best Regards,
>> 					> Ildikó
>> 					>
>> 					>
>> __________________________________________________________________________
>> 					> OpenStack Development Mailing List (not for usage
>> questions)
>> 					> Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>> 					> http://lists.openstack.org/cgi-
>> bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> 	__________________________________________________________________________
>> 					OpenStack Development Mailing List (not for usage questions)
>> 					Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>> 					http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
>> dev
>>
>>
>>
>>
>>
>> 	__________________________________________________________________________
>> 					OpenStack Development Mailing List (not for usage questions)
>> 					Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>
>> 					http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
>> dev
>>
>>
>>
>>
>> 	__________________________________________________________________________
>> 					OpenStack Development Mailing List (not for usage questions)
>> 					Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe
>> 					http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
>> dev
>>
>>
>>
>>
>>
>>
>>
>> 	__________________________________________________________________________
>> 				OpenStack Development Mailing List (not for usage questions)
>> 				Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe
>> 				http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> 	__________________________________________________________________________
>> 			OpenStack Development Mailing List (not for usage questions)
>> 			Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> 			http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>>
>>
>>
>> 	__________________________________________________________________________
>> 	OpenStack Development Mailing List (not for usage questions)
>> 	Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> 	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From robert.clark at hp.com  Tue Sep  1 22:16:32 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Tue, 1 Sep 2015 22:16:32 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A2EE5DB@EX10MBOX03.pnnl.gov>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2EE434@EX10MBOX03.pnnl.gov>
 <D20B59B0.29BB2%robert.clark@hp.com>
 <1A3C52DFCD06494D8528644858247BF01A2EE5DB@EX10MBOX03.pnnl.gov>
Message-ID: <D20B7164.29BCC%robert.clark@hp.com>

I've requested several Security Project slots on the summit timetable; I'd
be happy to dedicate a fishbowl session to this on the security track.

-Rob

On 01/09/2015 14:31, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:

>Awesome. Thanks. :)
>
>Are there any plans for the summit yet? I think we should all get
>together and talk about it.
>
>Thanks,
>Kevin
>________________________________________
>From: Clark, Robert Graham [robert.clark at hp.com]
>Sent: Tuesday, September 01, 2015 1:35 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
>keystone and certs stored in barbican
>
>Extremely interesting.
>
>This is something that we are looking at during the Security mid-cycle
>(happening this week); see "Secure communications between control plane and
>tenant plane" under
>https://etherpad.openstack.org/p/security-liberty-midcycle
>
>This is a problem for a lot of different projects; we've added your
>blueprint and hopefully we'll be able to help with this moving forward.
>
>-Rob
>
>
>
>On 01/09/2015 11:11, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
>
>>https://blueprints.launchpad.net/nova/+spec/instance-users
>>
>>Please see the above spec. Nova, Keystone and Barbican have been working
>>together on it this cycle and are hoping to implement it in Mitaka.
>>
>>The problem of secrets from the secret store is not isolated to just
>>Magnum.
>>
>>Thanks,
>>Kevin
>>________________________________________
>>From: John Dennis [jdennis at redhat.com]
>>Sent: Tuesday, September 01, 2015 10:03 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
>>keystone and certs stored in barbican
>>
>>On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>>>
>>>> The reason that is compelling is that you can have Barbican generate,
>>>> sign, and store a keypair without transmitting the private key over
>>>>the
>>>> network to the client that originates the signing request. It can be
>>>> directly stored, and made available only to the clients that need
>>>>access
>>>> to it.
>>>
>>> This is absolutely _not_ how PKI for TLS is supposed to work, yes
>>>Barbican
>>> can create keypairs etc because sometimes that?s useful but in the
>>> public-private PKI model that TLS expects this is completely wrong.
>>>Magnum
>>> nodes should be creating their own private key and CSR and submitting
>>>them
>>> to some CA for signing.
>>>
>>> Now this gets messy because you probably don?t want to push keystone
>>> credentials onto each node (that they would use to communicate with
>>> Barbican).
>>>
>>> I'm a bit conflicted writing this next bit because I'm not particularly
>>> familiar with the Kubernetes/Magnum architectures and also because I'm
>>> one of the core developers for Anchor, but here goes...
>>>
>>> Have you considered using Anchor for this? It's a pretty lightweight
>>> ephemeral CA that is built to work well in small PKI communities (like
>>> a Kubernetes cluster). You can configure multiple methods for
>>> authentication and build pretty simple validation rules for deciding
>>> if a host should be given a certificate. Anchor is built to provide
>>> short-lifetime certificates where each node re-requests a certificate,
>>> typically every 12-24 hours. This has some really nice properties like
>>> "passive revocation" (think revocation that actually works) and strong
>>> ways to enforce issuing logic on a per-host basis.
>>>
>>> Anchor or not, I'd like to talk to you more about how you're attempting
>>> to secure Magnum - I think it's an extremely interesting project that
>>> I'd like to help out with.
>>>
>>> -Rob
>>> (Security Project PTL / Anchor flunkie)
>>
>>Let's not reinvent the wheel. I can't comment on what Magnum is doing,
>>but I do know the members of the Barbican project are PKI experts and
>>understand CSRs, key escrow, revocation, etc. Some of the design work
>>is being done by engineers who currently contribute to products in use
>>by the Dept. of Defense, an agency that takes its PKI infrastructure
>>very seriously. They also have been involved with Keystone. I work with
>>these engineers on a regular basis.
>>
>>The Barbican blueprint states:
>>
>>Barbican supports full lifecycle management including provisioning,
>>expiration, reporting, etc. A plugin system allows for multiple
>>certificate authority support (including public and private CAs).
>>
>>Perhaps Anchor would be a great candidate for a Barbican plugin.
>>
>>What I don't want to see is spinning our wheels, going backward, or
>>inventing one-off solutions to a very demanding and complex problem
>>space. There have been way too many one-off solutions in the past; we
>>want to consolidate the expertise in one project that is designed by
>>experts and fully vetted. This is the role of Barbican. Would you like
>>to contribute to Barbican? I'm sure your skills would be a tremendous
>>asset.
>>
>>
>>--
>>John
>>
>>__________________________________________________________________________
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
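Rob's "passive revocation" point above is worth pinning down: with short-lived certificates there is no CRL or OCSP check involved; the CA simply declines to re-issue, and the outstanding certificate expires on its own within the issuing window. A minimal sketch of that logic (illustrative only, not Anchor's actual API):

```python
# Illustrative sketch of passive revocation via short-lived certs.
# CERT_LIFETIME reflects the 12-24 hour window described above.
from datetime import datetime, timedelta

CERT_LIFETIME = timedelta(hours=24)

def issue(now, allowed):
    """Return an expiry time if issuing policy allows, else None."""
    return now + CERT_LIFETIME if allowed else None

def is_valid(expiry, now):
    return expiry is not None and now < expiry

t0 = datetime(2015, 9, 1, 12, 0)
expiry = issue(t0, allowed=True)

# Still valid halfway through the lifetime.
assert is_valid(expiry, t0 + timedelta(hours=12))

# Host falls out of policy: we simply stop renewing...
renewed = issue(t0 + CERT_LIFETIME, allowed=False)
assert renewed is None

# ...and the old certificate "revokes" itself by expiring.
assert not is_valid(expiry, t0 + timedelta(hours=25))
```

The appeal is that the revocation guarantee needs no cooperation from clients beyond ordinary expiry checking, which every TLS stack already performs.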


From sopatwar at gmail.com  Tue Sep  1 22:23:48 2015
From: sopatwar at gmail.com (Sourabh Patwardhan)
Date: Tue, 1 Sep 2015 15:23:48 -0700
Subject: [openstack-dev] [Nova] Placing VMs based on multiple criteria
In-Reply-To: <6F84DE007D2BD24A9221FF68E8A09FE0D2C32F@SJ-ITEXCH01.altera.priv.altera.com>
References: <6F84DE007D2BD24A9221FF68E8A09FE0D214D9@SJ-ITEXCH02.altera.priv.altera.com>
 <6F84DE007D2BD24A9221FF68E8A09FE0D21C31@SJ-ITEXCH02.altera.priv.altera.com>
 <6F84DE007D2BD24A9221FF68E8A09FE0D2C32F@SJ-ITEXCH01.altera.priv.altera.com>
Message-ID: <CA+mUx=Q5Gj0OUeKCifWVBZEx9O4yx11o3OVCwr2N2QPuTR7_Og@mail.gmail.com>

Hi Sundar,

Have you considered writing your own custom filter for hosts as described
in [1] ?

Thanks,
Sourabh

[1] http://docs.openstack.org/developer/nova/devref/filter_scheduler.html
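The filter scheduler described in [1] calls a host_passes() hook on each candidate host. A rough, self-contained sketch of such a filter for the resource scenario in Sundar's message below — note that the BaseHostFilter stand-in and the host/request field names here are illustrative, not Nova's real class or attributes:

```python
# Hypothetical sketch of a custom scheduler filter. A real filter would
# subclass nova.scheduler.filters.BaseHostFilter; it is stubbed here so
# the example is self-contained.

class BaseHostFilter(object):  # stand-in for Nova's base class
    def host_passes(self, host_state, filter_properties):
        raise NotImplementedError


class PCIeResourceFilter(BaseHostFilter):
    """Pass hosts whose PCIe device has a free instance of every
    requested resource (criterion #1 in the message below)."""

    def host_passes(self, host_state, filter_properties):
        wanted = filter_properties.get("requested_resources", set())
        free = host_state.get("free_resource_instances", {})
        return all(free.get(res, 0) > 0 for res in wanted)


f = PCIeResourceFilter()
host_ok = {"free_resource_instances": {"A": 2, "B": 1}}
host_full = {"free_resource_instances": {"A": 0, "B": 3}}
request = {"requested_resources": {"A", "B"}}
print(f.host_passes(host_ok, request))    # True
print(f.host_passes(host_full, request))  # False
```

Criterion #2 (falling back to hosts that could be reconfigured) would need a second, lower-weight pass or a weigher, since filters alone only accept or reject.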


On Mon, Aug 31, 2015 at 9:58 AM, Sundar Nadathur <snadathu at altera.com>
wrote:

> Hi all,
>
>     I'd appreciate it if the experts could point me in one direction or
> another. If there are existing mechanisms, we don't want to reinvent the
> wheel. If there aren't, I'd be interested in exploring clean ways to extend
> and enhance nova scheduling.
>
>
>
> Thank you very much.
>
>
>
> Cheers,
>
> Sundar
>
>
>
> *From:* Sundar Nadathur
> *Sent:* Monday, August 24, 2015 10:48 PM
> *To:* 'openstack-dev at lists.openstack.org'
> *Subject:* [Nova] Placing VMs based on multiple criteria
>
>
>
> Hi,
>
>    Please advise me whether the following scenario requires changes to
> nova scheduler or can be handled with existing scheduling mechanisms.
>
>
>
> I have a type of PCIe device (not necessarily a NIC or HBA). The device
> can be configured with a set of user-defined resources, say A, B, C. Each
> resource can be shared among a limited number of VMs -- say A can be
> shared among 4 VMs, B among 8, etc. A VM image may request a specific
> list of features, say A and B. Then I want to place the VM on a host
> according to these criteria:
>
> 1.       If there are hosts with a PCIe device that already has A and B
> configured, and has a free instance each of A and B, the VM must be placed
> on one of those hosts.
>
> 2.       Otherwise, find a host with this PCIe device that can be
> configured with one instance each of A and B.
>
>
>
> It is not clear that this can be handled through 3rd-party metadata.
> Suppose we create host aggregates with properties like "resource=A" and
> "resource=B", and also associate properties like "resource=A" with VM
> images. (A and B are UUIDs representing user-defined resources.) Perhaps
> Nova scheduler can match the properties to select host aggregates that
> have all properties that the VM requires. However:
>
> a.       This would not be dynamic (i.e., it would not track the free
> instances of each resource), and
>
> b.      This addresses only #1 above.
>
>
>
> Is there any way I can leverage existing scheduler mechanisms to solve
> this VM placement problem? If not, do you have thoughts/comments on what
> changes are needed?
>
>
>
> Thanks, and apologies in advance if I am not clear. Please feel free to
> ask questions.
>
>
>
> Cheers,
>
> Sundar
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/822791ea/attachment.html>

From mlowery at ebay.com  Tue Sep  1 22:25:24 2015
From: mlowery at ebay.com (Lowery, Mathew)
Date: Tue, 1 Sep 2015 22:25:24 +0000
Subject: [openstack-dev] [trove] [heat] Multi region support
In-Reply-To: <55E5F2BA.9000703@redhat.com>
References: <D20B3157.5150C%mlowery@ebay.com> <55E5F2BA.9000703@redhat.com>
Message-ID: <D20B897D.5160D%mlowery@ebay.com>

Thank you Zane for the clarifications!

I misunderstood #2 and that led to the other misunderstandings.

Further questions:
* Are nested stacks aware of their nested-ness? In other words, given any
nested stack (colocated with parent stack or not), can I trace it back to
the parent stack? (On a possibly related note, I see that adopting a stack
is an option to reassemble a new parent stack from its regional parts in
the event that the old parent stack is lost.)
* Has this design met the users' needs? In other words, are there any
plans to make major modifications to this design?

Thanks!

On 9/1/15, 1:47 PM, "Zane Bitter" <zbitter at redhat.com> wrote:

>On 01/09/15 11:41, Lowery, Mathew wrote:
>> This is a Trove question but including Heat as they seem to have solved
>> this problem.
>>
>> Summary: Today, it seems that Trove is not capable of creating a cluster
>> spanning multiple regions. Is that the case and, if so, are there any
>> plans to work on that? Also, are we aware of any precedent solutions
>> (e.g. remote stacks in Heat) or even partially completed spec/code in
>>Trove?
>>
>> More details:
>>
>> I found this nice diagram
>> 
>><https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for
>>_Heat/The_Missing_Diagram> created
>> for Heat. As far as I understand it,
>
>Clarifications below...
>
>> #1 is the absence of multi-region
>> support (i.e. what we have today). #2 seems to be a 100% client-based
>> solution. In other words, the Heat services never know about the other
>> stacks.
>
>I guess you could say that.
>
>> In fact, there is nothing tying these stacks together at all.
>
>I wouldn't go that far. The regional stacks still appear as resources in
>their parent stack, so they're tied together by whatever inputs and
>outputs are connected up in that stack.
>
>> #3
>> seems to show a "master" Heat server that understands "remote stacks"
>> and simply converts those "remote stacks" into calls on regional Heats.
>> I assume here the master stack record is stored by the master Heat.
>> Because the "remote stacks" are full-fledged stacks, they can be managed
>> by their regional Heats if availability of master or other regional
>> Heats is lost.
>
>Yeah.
>
>> #4, the diagram doesn't seem to match the description
>> (instead of one global Heat, it seems the diagram should show two
>> regional Heats).
>
>It does (they're the two orange boxes).
>
>> In this one, a single arbitrary region becomes the
>> owner of the stack and remote (low-level not stack) resources are
>> created as needed. One problem is the manageability is lost if the Heat
>> in the owning region is lost. Finally, #5. In #5, it's just #4 but with
>> one and only one Heat.
>>
>> It seems like Heat solved this <https://review.openstack.org/#/c/53313/>
>> using #3 (Master Orchestrator)
>
>No, we implemented #2.
>
>> but where there isn't necessarily a
>> separate master Heat. Remote stacks can be created by any regional
>>stack.
>
>Yeah, that was the difference between #3 and #2 :)
>
>cheers,
>Zane.
>
>> Trove questions:
>>
>>  1. Having sub-clusters (aka remote clusters aka nested clusters) seems
>>     to be useful (i.e. manageability isn't lost when a region is lost).
>>     But then again, does it make sense to perform a cluster operation on
>>     a sub-cluster?
>>  2. You could forego sub-clusters and just create full-fledged remote
>>     standalone Trove instances.
>>  3. If you don't create full-fledged remote Trove instances (but instead
>>     just call remote Nova), then you cannot do simple things like
>>     getting logs from a node without going through the owning region's
>>     Trove. This is an extra hop and a single point of failure.
>>  4. Even with sub-clusters, the only record of them being related lives
>>     only in the "owning" region. Then again, some ID tying them all
>>     together could be passed to the remote regions.
>>  5. Do we want to allow the piecing together of clusters (sort of like
>>     Heat's "adopt")?
>>
>> These are some questions floating around my head and I'm sure there are
>> plenty more. Any thoughts on any of this?
>>
>> Thanks,
>> Mat
>>
>>
>> 
>>
>
>



From Kevin.Fox at pnnl.gov  Tue Sep  1 22:36:17 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 1 Sep 2015 22:36:17 +0000
Subject: [openstack-dev] [magnum] Difference between certs stored in
 keystone and certs stored in barbican
In-Reply-To: <D20B7164.29BCC%robert.clark@hp.com>
References: <CABJxuZqekJo8pt=aJVASnLuih4+Y6umcv6SwvryYgJmPJCxsQw@mail.gmail.com>
 <A00CC0A3-18AE-44B3-8E05-30154D090367@rackspace.com>
 <D20B7929.29ACD%robert.clark@hp.com> <55E5DA7C.9020309@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2EE434@EX10MBOX03.pnnl.gov>
 <D20B59B0.29BB2%robert.clark@hp.com>
 <1A3C52DFCD06494D8528644858247BF01A2EE5DB@EX10MBOX03.pnnl.gov>,
 <D20B7164.29BCC%robert.clark@hp.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2EE6BB@EX10MBOX03.pnnl.gov>

Nice. Thank you.
Kevin
________________________________________
From: Clark, Robert Graham [robert.clark at hp.com]
Sent: Tuesday, September 01, 2015 3:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Difference between certs stored in keystone and certs stored in barbican

I've requested several Security Project slots on the summit timetable; I'd
be happy to dedicate a fishbowl session to this on the security track.

-Rob

On 01/09/2015 14:31, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:

>Awesome. Thanks. :)
>
>Are there any plans for the summit yet? I think we should all get
>together and talk about it.
>
>Thanks,
>Kevin
>________________________________________
>From: Clark, Robert Graham [robert.clark at hp.com]
>Sent: Tuesday, September 01, 2015 1:35 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
>keystone and certs stored in barbican
>
>Extremely interesting.
>
>This is something that we are looking at during the Security mid-cycle
>(happening this week) see "Secure communications between control plane and
>tenant plane" under
>https://etherpad.openstack.org/p/security-liberty-midcycle
>
>This is a problem for a lot of different projects; we've added your
>blueprint and hopefully we'll be able to help with this moving forward.
>
>-Rob
>
>
>
>On 01/09/2015 11:11, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
>
>>https://blueprints.launchpad.net/nova/+spec/instance-users
>>
>>Please see the above spec. Nova, Keystone and Barbican have been working
>>together on it this cycle and are hoping to implement it in Mitaka.
>>
>>The problem of secrets from the secret store is not isolated to just
>>Magnum.
>>
>>Thanks,
>>Kevin
>>________________________________________
>>From: John Dennis [jdennis at redhat.com]
>>Sent: Tuesday, September 01, 2015 10:03 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [magnum] Difference between certs stored in
>>keystone and certs stored in barbican
>>
>>On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:
>>>
>>>> The reason that is compelling is that you can have Barbican generate,
>>>> sign, and store a keypair without transmitting the private key over
>>>>the
>>>> network to the client that originates the signing request. It can be
>>>> directly stored, and made available only to the clients that need
>>>>access
>>>> to it.
>>>
>>> This is absolutely _not_ how PKI for TLS is supposed to work. Yes,
>>> Barbican can create keypairs etc. because sometimes that's useful, but
>>> in the public-private PKI model that TLS expects this is completely
>>> wrong. Magnum nodes should be creating their own private key and CSR
>>> and submitting them to some CA for signing.
>>>
>>> Now this gets messy because you probably don't want to push keystone
>>> credentials onto each node (that they would use to communicate with
>>> Barbican).
>>>
>>> I'm a bit conflicted writing this next bit because I'm not particularly
>>> familiar with the Kubernetes/Magnum architectures and also because I'm
>>> one of the core developers for Anchor, but here goes...
>>>
>>> Have you considered using Anchor for this? It's a pretty lightweight
>>> ephemeral CA that is built to work well in small PKI communities (like
>>> a Kubernetes cluster). You can configure multiple methods for
>>> authentication and build pretty simple validation rules for deciding
>>> if a host should be given a certificate. Anchor is built to provide
>>> short-lifetime certificates where each node re-requests a certificate,
>>> typically every 12-24 hours. This has some really nice properties like
>>> "passive revocation" (think revocation that actually works) and strong
>>> ways to enforce issuing logic on a per-host basis.
>>>
>>> Anchor or not, I'd like to talk to you more about how you're attempting
>>> to secure Magnum - I think it's an extremely interesting project that
>>> I'd like to help out with.
>>>
>>> -Rob
>>> (Security Project PTL / Anchor flunkie)
>>
>>Let's not reinvent the wheel. I can't comment on what Magnum is doing,
>>but I do know the members of the Barbican project are PKI experts and
>>understand CSRs, key escrow, revocation, etc. Some of the design work
>>is being done by engineers who currently contribute to products in use
>>by the Dept. of Defense, an agency that takes its PKI infrastructure
>>very seriously. They also have been involved with Keystone. I work with
>>these engineers on a regular basis.
>>
>>The Barbican blueprint states:
>>
>>Barbican supports full lifecycle management including provisioning,
>>expiration, reporting, etc. A plugin system allows for multiple
>>certificate authority support (including public and private CAs).
>>
>>Perhaps Anchor would be a great candidate for a Barbican plugin.
>>
>>What I don't want to see is spinning our wheels, going backward, or
>>inventing one-off solutions to a very demanding and complex problem
>>space. There have been way too many one-off solutions in the past; we
>>want to consolidate the expertise in one project that is designed by
>>experts and fully vetted. This is the role of Barbican. Would you like
>>to contribute to Barbican? I'm sure your skills would be a tremendous
>>asset.
>>
>>
>>--
>>John
>>
>>
>
>



From srics.r at gmail.com  Tue Sep  1 22:39:17 2015
From: srics.r at gmail.com (Sridhar Ramaswamy)
Date: Tue, 1 Sep 2015 15:39:17 -0700
Subject: [openstack-dev] [Tacker][NFV] Heads up: switch over to master
In-Reply-To: <CAK6Sh4BF46A+kDyf+Rdmp8fBNenZepFAUu6yVB3LoL6hpcvwEQ@mail.gmail.com>
References: <CAK6Sh4BF46A+kDyf+Rdmp8fBNenZepFAUu6yVB3LoL6hpcvwEQ@mail.gmail.com>
Message-ID: <CAK6Sh4D1ZNnEWe6RR6JWbOq2n1c6vMv0TDJ4G8Fn1h=hS26SQQ@mail.gmail.com>

The switch over to master is now complete. Please follow the updated (and a
bit easier) installation steps here:

https://wiki.openstack.org/wiki/Tacker/Installation


On Mon, Aug 31, 2015 at 1:32 PM, Sridhar Ramaswamy <srics.r at gmail.com>
wrote:

> Tacker dev & user teams:
>
> This shouldn't be news to folks regularly attending tacker weekly
> meetings.
>
> For others: until now the Tacker code base was using stable/kilo
> dependencies. Now we will be switching over to master-based dependencies
> using [1] and [2]. We are close to merging these two patchsets; however,
> this will make your existing devstack setups incompatible with the latest
> Tacker code base. Note that before merging these patchsets a stable/kilo
> branch will be pulled.
>
> You now have two options to continue Tacker activities:
>
> 1) Switch over to a new devstack setup post-merge (preferred)
> 2) If you are in the middle of something and want to continue your activity
> (say, a PoC in flight), consider switching the tacker & python-tackerclient
> repos to stable/kilo
>
> We understand this will cause some disruption. But this is essential to
> cut over to master and get ready for the upcoming liberty release.
>
> Thanks for your support. Questions / comments welcome!
>
> - Sridhar
>
> [1] https://review.openstack.org/#/c/211403/
> [2] https://review.openstack.org/#/c/211405/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/ddebd8f0/attachment.html>

From vahidhashemian at us.ibm.com  Tue Sep  1 22:45:22 2015
From: vahidhashemian at us.ibm.com (Vahid S Hashemian)
Date: Tue, 1 Sep 2015 15:45:22 -0700
Subject: [openstack-dev] [Murano] Documentation on how to Start
	Contributing
In-Reply-To: <20150827213651.GD7955@yuggoth.org>
References: <OF66E9C86D.793D83F7-ON87257E7E.007D9507-88257E7E.007E7E18@us.ibm.com>
 <OF7612A32D.A74FBFE9-ON87257EAC.007856AF-88257EAC.0082D57E@us.ibm.com>
 <etPan.55dd0c9b.32f0398f.134@pegasus.local>
 <OF07A18C60.2DCB65AF-ON87257EAD.0073B58E-88257EAD.0081D189@us.ibm.com>
 <etPan.55de598f.4be5294d.134@pegasus.local>
 <OFB2C81890.4195E87D-ON87257EAE.00047E86-88257EAE.00052066@us.ibm.com>
 <etPan.55de648b.7de6b469.134@pegasus.local>
 <OF11EA0A19.B9DED79C-ON87257EAE.0068DCA9-88257EAE.0069B4FE@us.ibm.com>
 <20150827203835.GA7955@yuggoth.org>
 <OF0E8904A0.1343D3CB-ON87257EAE.00721A5F-88257EAE.0072EF9E@us.ibm.com>
 <20150827213651.GD7955@yuggoth.org>
Message-ID: <OF6AC37E4D.A0EBDE0F-ON87257EB3.007CC092-88257EB3.007D0442@us.ibm.com>

Hi Jeremy,

Thanks for sharing these tips.

What is your advice on debugging a PyPI package? I'm modifying the code
for python-muranoclient and would like to be able to debug using Eclipse
(in which I'm coding) or any other convenient means.

Regards,
--Vahid Hashemian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/f0b4f28a/attachment.html>

From r1chardj0n3s at gmail.com  Tue Sep  1 23:32:35 2015
From: r1chardj0n3s at gmail.com (Richard Jones)
Date: Wed, 2 Sep 2015 09:32:35 +1000
Subject: [openstack-dev] [horizon] URL Sanity
In-Reply-To: <D20B7717.DE04%rcresswe@cisco.com>
References: <D20B7717.DE04%rcresswe@cisco.com>
Message-ID: <CAHrZfZCFLR5+BN=ajMdrGhD_sq6HHbyoUL+S2cDcJeJx2Lhrcg@mail.gmail.com>

Interesting idea, and in general I'm for consistency. I can't speak
directly to the network/port question, though it seems to me that if ports
must be attached to networks then it makes sense for the URL to reflect
that.

On the other hand, some could argue that the django URL routing is ...
legacy ... and shouldn't be messed with :)

On the gripping hand, thinking about this could inform future angular
routing planning...

On 2 September 2015 at 00:39, Rob Cresswell (rcresswe) <rcresswe at cisco.com>
wrote:

> Hi all,
>
> I recently started looking into properly implementing breadcrumbs to make
> navigation clearer, especially around nested resources (the Subnets Detail
> page, for example). The idea is to use the request.path to form a logical
> breadcrumb that isn't dependent on browser history (
> https://review.openstack.org/#/c/129985/3/horizon/browsers/breadcrumb.py ).
> Unfortunately, this breaks down quite quickly because we use odd patterns
> like `<resources>/<resource_id>/detail`, and `<resources>/<resource_id>`
> doesn't exist.
>
> This made me realise how much of an inconsistent mess the URL patterns
> are. I've started cleaning them up, so we move from these patterns:
>
> `/admin/networks/<network_id>/detail` - Detail page for a Network
> `/admin/networks/<network_id>/addsubnet` - Create page for a Subnet
>
> To patterns in line with usual CRUD usages, such as:
>
> `/admin/networks/<network_id>` - Detail page for a Network
> `/admin/networks/<network_id>/subnets/create` - Create page for a Subnet
>
> This is mostly trivial, just removing extraneous words and adding
> consistency, with end goal being every panel following patterns like:
>
> `/<resources>` - Index page
> `/<resources>/<resource_id>` - Detail page for a single resource
> `/<resources>/create` - Create new resource
> `/<resources>/<resource_id>/update` - Update a resource
>
> This gets a little complex around nested items. Should a Port, for example,
> which has a unique ID, be reachable in Horizon by just its ID? Ports must
> always be attached to a network as I understand it. There are multiple ways
> to express this:
>
> `/networks/ports/<port_id>` - Current implementation
> `/networks/<network_id>/ports/<port_id>` - My preferred structure
> `/ports/<port_id>` - Another option
>
> Does anyone have any opinions on how to handle this structuring, or if
> it's even necessary?
>
> Regards,
> Rob
>
>
>
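The consistent CRUD patterns Rob proposes can be prototyped with plain regular expressions (the same syntax Django's URL patterns use); the pattern names and paths here are illustrative:

```python
# Sketch of the proposed RESTful URL structure as regexes, using the
# "network contains subnets" nesting from the message above.
import re

PATTERNS = {
    "index":  r"^networks/$",
    "detail": r"^networks/(?P<network_id>[^/]+)/$",
    "subnet_create": r"^networks/(?P<network_id>[^/]+)/subnets/create$",
}

def resolve(path):
    """Return (pattern name, captured kwargs) for the first match."""
    for name, pattern in PATTERNS.items():
        m = re.match(pattern, path)
        if m:
            return name, m.groupdict()
    return None, {}

name, kwargs = resolve("networks/abc123/subnets/create")
print(name, kwargs)  # subnet_create {'network_id': 'abc123'}
```

Because the nested path carries the network ID, a breadcrumb builder can walk request.path segment by segment and resolve each prefix to a real page, which is exactly what the `/detail` suffix pattern breaks today.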
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/505e1e01/attachment.html>

From asalkeld at mirantis.com  Tue Sep  1 23:47:16 2015
From: asalkeld at mirantis.com (Angus Salkeld)
Date: Tue, 01 Sep 2015 23:47:16 +0000
Subject: [openstack-dev] [trove] [heat] Multi region support
In-Reply-To: <D20B897D.5160D%mlowery@ebay.com>
References: <D20B3157.5150C%mlowery@ebay.com> <55E5F2BA.9000703@redhat.com>
 <D20B897D.5160D%mlowery@ebay.com>
Message-ID: <CAA16xcxYGq=J9_cXxuB2VPQiCwubCvJ_d5pnii-Bex3A+E9wBA@mail.gmail.com>

On Wed, Sep 2, 2015 at 8:30 AM Lowery, Mathew <mlowery at ebay.com> wrote:

> Thank you Zane for the clarifications!
>
> I misunderstood #2 and that led to the other misunderstandings.
>
> Further questions:
> * Are nested stacks aware of their nested-ness? In other words, given any
> nested stack (colocated with parent stack or not), can I trace it back to
> the parent stack? (On a possibly related note, I see that adopting a stack
>

Yes, there is a link (URL) to the parent stack in the "links" section of
the stack-show output.


> is an option to reassemble a new parent stack from its regional parts in
> the event that the old parent stack is lost.)
> * Has this design met the users' needs? In other words, are there any
> plans to make major modifications to this design?
>

AFAIK we have had zero feedback on the multi-region feature.
There are no further plans, but we would obviously love feedback and
suggestions on how to improve region support.
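For anyone wanting to try the #2 pattern discussed in this thread: the parent stack simply declares one OS::Heat::Stack resource per region, pointing each at a region via the resource's `context` property. A sketch in Python-dict form (equivalent to the HOT YAML); the region names and child template URL are invented for illustration:

```python
# Minimal sketch of a multi-region parent template (pattern #2).
# "RegionOne"/"RegionTwo" and the child URL are hypothetical.
child_url = "http://example.com/child.yaml"

def remote_stack(region):
    # One OS::Heat::Stack resource, scoped to a region via `context`.
    return {
        "type": "OS::Heat::Stack",
        "properties": {
            "context": {"region_name": region},
            "template": {"get_file": child_url},
        },
    }

parent_template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "region_one": remote_stack("RegionOne"),
        "region_two": remote_stack("RegionTwo"),
    },
}

regions = sorted(
    r["properties"]["context"]["region_name"]
    for r in parent_template["resources"].values()
)
print(regions)  # ['RegionOne', 'RegionTwo']
```

This is what ties the regional stacks together: the parent holds the references, while each child remains a full stack manageable by its own regional Heat if the parent's region is lost.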

-Angus


>
> Thanks!
>
> On 9/1/15, 1:47 PM, "Zane Bitter" <zbitter at redhat.com> wrote:
>
> >On 01/09/15 11:41, Lowery, Mathew wrote:
> >> This is a Trove question but including Heat as they seem to have solved
> >> this problem.
> >>
> >> Summary: Today, it seems that Trove is not capable of creating a cluster
> >> spanning multiple regions. Is that the case and, if so, are there any
> >> plans to work on that? Also, are we aware of any precedent solutions
> >> (e.g. remote stacks in Heat) or even partially completed spec/code in
> >>Trove?
> >>
> >> More details:
> >>
> >> I found this nice diagram
> >>
> >><
> https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for
> >>_Heat/The_Missing_Diagram> created
> >> for Heat. As far as I understand it,
> >
> >Clarifications below...
> >
> >> #1 is the absence of multi-region
> >> support (i.e. what we have today). #2 seems to be a 100% client-based
> >> solution. In other words, the Heat services never know about the other
> >> stacks.
> >
> >I guess you could say that.
> >
> >> In fact, there is nothing tying these stacks together at all.
> >
> >I wouldn't go that far. The regional stacks still appear as resources in
> >their parent stack, so they're tied together by whatever inputs and
> >outputs are connected up in that stack.
> >
> >> #3
> >> seems to show a "master" Heat server that understands "remote stacks"
> >> and simply converts those "remote stacks" into calls on regional Heats.
> >> I assume here the master stack record is stored by the master Heat.
> >> Because the "remote stacks" are full-fledged stacks, they can be managed
> >> by their regional Heats if availability of master or other regional
> >> Heats is lost.
> >
> >Yeah.
> >
> >> #4, the diagram doesn't seem to match the description
> >> (instead of one global Heat, it seems the diagram should show two
> >> regional Heats).
> >
> >It does (they're the two orange boxes).
> >
> >> In this one, a single arbitrary region becomes the
> >> owner of the stack and remote (low-level not stack) resources are
> >> created as needed. One problem is the manageability is lost if the Heat
> >> in the owning region is lost. Finally, #5. In #5, it's just #4 but with
> >> one and only one Heat.
> >>
> >> It seems like Heat solved this <https://review.openstack.org/#/c/53313/
> >
> >> using #3 (Master Orchestrator)
> >
> >No, we implemented #2.
> >
> >> but where there isn't necessarily a
> >> separate master Heat. Remote stacks can be created by any regional
> >>stack.
> >
> >Yeah, that was the difference between #3 and #2 :)
> >
> >cheers,
> >Zane.
> >
> >> Trove questions:
> >>
> >>  1. Having sub-clusters (aka remote clusters aka nested clusters) seems
> >>     to be useful (i.e. manageability isn't lost when a region is lost).
> >>     But then again, does it make sense to perform a cluster operation on
> >>     a sub-cluster?
> >>  2. You could forego sub-clusters and just create full-fledged remote
> >>     standalone Trove instances.
> >>  3. If you don't create full-fledged remote Trove instances (but instead
> >>     just call remote Nova), then you cannot do simple things like
> >>     getting logs from a node without going through the owning region's
> >>     Trove. This is an extra hop and a single point of failure.
> >>  4. Even with sub-clusters, the only record of them being related lives
> >>     only in the "owning" region. Then again, some ID tying them all
> >>     together could be passed to the remote regions.
> >>  5. Do we want to allow the piecing together of clusters (sort of like
> >>     Heat's "adopt")?
> >>
> >> These are some questions floating around my head and I'm sure there are
> >> plenty more. Any thoughts on any of this?
> >>
> >> Thanks,
> >> Mat
> >>
> >>
> >>
> >>_________________________________________________________________________
> >>_
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150901/f6964b38/attachment.html>

From asalkeld at mirantis.com  Tue Sep  1 23:53:04 2015
From: asalkeld at mirantis.com (Angus Salkeld)
Date: Tue, 01 Sep 2015 23:53:04 +0000
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <20150901124147.GA4710@t430slt.redhat.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
Message-ID: <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>

On Tue, Sep 1, 2015 at 10:45 PM Steven Hardy <shardy at redhat.com> wrote:

> On Fri, Aug 28, 2015 at 01:35:52AM +0000, Angus Salkeld wrote:
> >    Hi
> >    I have been running some rally tests against convergence and our
> existing
> >    implementation to compare.
> >    So far I have done the following:
> >     1. defined a template with a resource
> >        group:
> https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
> >     2. the inner resource looks like
> >        this:
> https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
> (it
> >        uses TestResource to attempt to be a reasonable simulation of a
> >        server+volume+floatingip)
> >     3. defined a rally
> >        job:
> https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
> that
> >        creates X resources then updates to X*2 then deletes.
> >     4. I then ran the above with/without convergence and with 2,4,8
> >        heat-engines
> >    Here are the results compared:
> >
> https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing
> >    Some notes on the results so far:
> >      * convergence with only 2 engines does suffer from RPC overload
> (it
> >        gets message timeouts on larger templates). I wonder if this is
> the
> >        problem in our convergence gate...
> >      * convergence does very well with a reasonable number of engines
> >        running.
> >      * delete is slightly slower on convergence
> >    Still to test:
> >      * the above, but measure memory usage
> >      * many small templates (run concurrently)
>
> So, I tried running my many-small-templates here with convergence enabled:
>
> https://bugs.launchpad.net/heat/+bug/1489548
>
> In heat.conf I set:
>
> max_resources_per_stack = -1
> convergence_engine = true
>
> Most other settings (particularly RPC and DB settings) are defaults.
>
> Without convergence (but with max_resources_per_stack disabled) I see the
> time to create a ResourceGroup of 400 nested stacks (each containing one
> RandomString resource) is about 2.5 minutes (core i7 laptop w/SSD, 4 heat
> workers e.g. the default for a 4 core machine).
>
> With convergence enabled, I see these errors from sqlalchemy:
>
> File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 652, in
> _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', u'  File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 444, in
> checkout\n    rec = pool._do_get()\n', u'  File
> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 980, in
> _do_get\n    (self.size(), self.overflow(), self._timeout))\n',
> u'TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection
> timed out, timeout 30\n'].
>
> I assume this means we're loading the DB much more in the convergence case
> and overflowing the QueuePool?
>

Yeah, looks like it.
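The QueuePool limit being hit above is simply pool_size + max_overflow concurrent checkouts; once all slots are taken, further checkouts block and then time out. A stdlib-only toy model of that behaviour (MiniQueuePool is illustrative, not SQLAlchemy's actual API):

```python
import threading

class MiniQueuePool:
    """Toy model of SQLAlchemy QueuePool limits (not its real API):
    at most pool_size + max_overflow connections may be checked out at
    once; further checkouts block, then raise after `timeout` seconds."""

    def __init__(self, pool_size=5, max_overflow=10, timeout=30.0):
        self._sem = threading.BoundedSemaphore(pool_size + max_overflow)
        self._timeout = timeout

    def checkout(self):
        if not self._sem.acquire(timeout=self._timeout):
            raise TimeoutError("QueuePool limit reached, connection timed out")
        return object()  # stand-in for a DB connection

    def checkin(self, conn):
        self._sem.release()

pool = MiniQueuePool(pool_size=5, max_overflow=10, timeout=0.1)
conns = [pool.checkout() for _ in range(15)]  # 5 + 10: all slots used
try:
    pool.checkout()  # the 16th checkout times out, as in the traceback
    overflowed = False
except TimeoutError:
    overflowed = True
```

With the defaults in the traceback (5 + 10, 30s timeout), 400 concurrently-created nested stacks will exhaust the pool quickly unless checkouts are short-lived or the limits are raised.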


>
> This seems to happen when the RPC call from the ResourceGroup tries to
> create some of the 400 nested stacks.
>
> Interestingly after this error, the parent stack moves to CREATE_FAILED,
> but the engine remains (very) busy, to the point of being partially
> responsive, so it looks like maybe the cancel-on-fail isn't working (I'm
> assuming it isn't error_wait_time because the parent stack has been marked
> FAILED and I'm pretty sure it's been more than 240s).
>
> I'll dig a bit deeper when I get time, but for now you might like to try
> the stress test too.  It's a bit of a synthetic test, but it turns out to
> be a reasonable proxy for some performance issues we observed when creating
> large-ish TripleO deployments (which also create a large number of nested
> stacks concurrently).
>

Thanks a lot for testing Steve! I'll make 2 bugs for what you have raised
1. limit the number of resource actions in parallel (maybe based on the
number of cores)
2. the cancel on fail error
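Point 1 could be as simple as bounding the worker pool, defaulting to the core count while staying operator-configurable. A stdlib sketch (MAX_RESOURCE_ACTIONS is a hypothetical knob, not a real Heat option):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Default the cap to the core count, but let the operator override it.
max_actions = int(os.environ.get("MAX_RESOURCE_ACTIONS", os.cpu_count() or 1))

def create_resource(name):
    # stand-in for the real resource action (server create, etc.)
    return "%s: CREATE_COMPLETE" % name

with ThreadPoolExecutor(max_workers=max_actions) as pool:
    results = list(pool.map(create_resource, ["res-%d" % i for i in range(8)]))
```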

-Angus


>
> Steve
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From ryan.tidwell at hp.com  Tue Sep  1 23:59:07 2015
From: ryan.tidwell at hp.com (Tidwell, Ryan)
Date: Tue, 1 Sep 2015 23:59:07 +0000
Subject: [openstack-dev] [neutron] subnetallocation is in core resource,
 while there is a extension for it?
In-Reply-To: <----1a------ucW1a$326fa33e-1ee8-455d-982b-8cb3ab951f94@aliyun.com>
References: <----1a------ucW1a$326fa33e-1ee8-455d-982b-8cb3ab951f94@aliyun.com>
Message-ID: <B95D6A3085B24B4D9244DD4689285D4C9FCDE169@G4W3292.americas.hpqcorp.net>

This was a compromise we made toward the end of Kilo.  The subnetpools resource was implemented as a core resource, but for purposes of Horizon interaction and a lack of another method for evolving the Neutron API we deliberately added a shim extension.  I believe this was done with a couple of other 'extensions' like VLAN transparent networks.  I don't think we want to remove the shim extension.

-Ryan

From: gong_ys2004 [mailto:gong_ys2004 at aliyun.com]
Sent: Monday, August 31, 2015 9:45 PM
To: openstack-dev
Subject: [openstack-dev] [neutron] subnetallocation is in core resource, while there is a extension for it?




Hi, neutron guys,

look at https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,

which defines an extension Subnetallocation but defines no extension resource. Actually, it is implemented

as a core resource.

So I think we should remove this extension.



I filed a bug for it:

https://bugs.launchpad.net/neutron/+bug/1490815





Regards,

yong sheng gong

From Kevin.Fox at pnnl.gov  Wed Sep  2 00:13:29 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 2 Sep 2015 00:13:29 +0000
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>,
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2EE814@EX10MBOX03.pnnl.gov>

You can default it to the number of cores, but please make it configurable. Some ops cram lots of services onto one node, and no single service should get to monopolize all the cores.

Thanks,
Kevin
________________________________
From: Angus Salkeld [asalkeld at mirantis.com]
Sent: Tuesday, September 01, 2015 4:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] convergence rally test results (so far)

On Tue, Sep 1, 2015 at 10:45 PM Steven Hardy <shardy at redhat.com<mailto:shardy at redhat.com>> wrote:
On Fri, Aug 28, 2015 at 01:35:52AM +0000, Angus Salkeld wrote:
>    Hi
>    I have been running some rally tests against convergence and our existing
>    implementation to compare.
>    So far I have done the following:
>     1. defined a template with a resource
>        group: https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
>     2. the inner resource looks like
>        this: https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template (it
>        uses TestResource to attempt to be a reasonable simulation of a
>        server+volume+floatingip)
>     3. defined a rally
>        job: https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml that
>        creates X resources then updates to X*2 then deletes.
>     4. I then ran the above with/without convergence and with 2,4,8
>        heat-engines
>    Here are the results compared:
>    https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing
>    Some notes on the results so far:
>      * convergence with only 2 engines does suffer from RPC overload (it
>        gets message timeouts on larger templates). I wonder if this is the
>        problem in our convergence gate...
>      * convergence does very well with a reasonable number of engines
>        running.
>      * delete is slightly slower on convergence
>    Still to test:
>      * the above, but measure memory usage
>      * many small templates (run concurrently)

So, I tried running my many-small-templates here with convergence enabled:

https://bugs.launchpad.net/heat/+bug/1489548

In heat.conf I set:

max_resources_per_stack = -1
convergence_engine = true

Most other settings (particularly RPC and DB settings) are defaults.

Without convergence (but with max_resources_per_stack disabled) I see the
time to create a ResourceGroup of 400 nested stacks (each containing one
RandomString resource) is about 2.5 minutes (core i7 laptop w/SSD, 4 heat
workers e.g. the default for a 4 core machine).

With convergence enabled, I see these errors from sqlalchemy:

File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 652, in
_checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', u'  File
"/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 444, in
checkout\n    rec = pool._do_get()\n', u'  File
"/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 980, in
_do_get\n    (self.size(), self.overflow(), self._timeout))\n',
u'TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection
timed out, timeout 30\n'].

I assume this means we're loading the DB much more in the convergence case
and overflowing the QueuePool?

Yeah, looks like it.


This seems to happen when the RPC call from the ResourceGroup tries to
create some of the 400 nested stacks.

Interestingly after this error, the parent stack moves to CREATE_FAILED,
but the engine remains (very) busy, to the point of being partially
responsive, so it looks like maybe the cancel-on-fail isn't working (I'm
assuming it isn't error_wait_time because the parent stack has been marked
FAILED and I'm pretty sure it's been more than 240s).

I'll dig a bit deeper when I get time, but for now you might like to try
the stress test too.  It's a bit of a synthetic test, but it turns out to
be a reasonable proxy for some performance issues we observed when creating
large-ish TripleO deployments (which also create a large number of nested
stacks concurrently).

Thanks a lot for testing Steve! I'll make 2 bugs for what you have raised
1. limit the number of resource actions in parallel (maybe based on the number of cores)
2. the cancel on fail error

-Angus


Steve

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From nader.lahouti at gmail.com  Wed Sep  2 01:27:00 2015
From: nader.lahouti at gmail.com (Nader Lahouti)
Date: Tue, 1 Sep 2015 18:27:00 -0700
Subject: [openstack-dev] [oslo.messaging]
Message-ID: <CAF5T5jsSUfB4EXAyjB=EOkx2A1kgm3j3FySmMzmpmK-+rm3fRA@mail.gmail.com>

Hi,

I am considering using oslo.messaging to read messages from a RabbitMQ
queue. The messages are put into the queue by an external process.
In order to do that I need to specify routing_key in addition to other
parameters (i.e. exchange and queue,... name) for accessing the queue.  I
was looking at the oslo.messaging API and wasn't able to find anywhere to
specify the routing key.

Is it possible to set a routing_key when using oslo.messaging? If so, can you
please point me to the documentation?


Regards,
Nader.
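As far as I can tell, oslo.messaging's RPC layer derives the routing key from the topic rather than exposing it, so consuming an externally-produced queue usually means dropping down to kombu, where `Queue(name, exchange, routing_key=...)` is explicit. What a routing key does on a direct exchange can be shown with a stdlib-only toy (DirectExchange is illustrative, not kombu's API; the key names are hypothetical):

```python
from collections import defaultdict

class DirectExchange:
    """Toy AMQP direct exchange: a message is delivered to every queue
    whose binding's routing key exactly matches the message's key."""

    def __init__(self):
        self._bindings = defaultdict(list)  # routing_key -> bound queues

    def bind(self, queue, routing_key):
        self._bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        for queue in self._bindings[routing_key]:
            queue.append(message)

notifications, audit = [], []
exchange = DirectExchange()
exchange.bind(notifications, "notifications.info")  # hypothetical keys
exchange.bind(audit, "audit")
exchange.publish("notifications.info", {"event": "created"})
```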

From dolph.mathews at gmail.com  Wed Sep  2 01:36:00 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Tue, 1 Sep 2015 20:36:00 -0500
Subject: [openstack-dev] [api][keystone][openstackclient] Standards for
 object name attributes and filtering
In-Reply-To: <55E47864.3010903@redhat.com>
References: <F74A5456-FEA8-438D-B68A-5C6050F8C1B2@linux.vnet.ibm.com>
 <438ACB3F-AF51-4873-9BE8-5AA8420A7AEA@rackspace.com>
 <C5A0092C63E939488005F15F736A81120A8B09E8@SHSMSX103.ccr.corp.intel.com>
 <55E47864.3010903@redhat.com>
Message-ID: <CAC=h7gVJvofiHy9GoZNw+UQYuat4jAvBgw4QeCxGdr6JcPM6BQ@mail.gmail.com>

Does anyone have an example of an API outside of OpenStack that would
return 400 in this situation (arbitrary query string parameters)? Based on
my past experience, I'd expect them to be ignored, but I can't think of a
reason why a 400 would be a bad idea (but I suspect there's some prior art
/ discussion out there).
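Strict validation of query-string parameters, as suggested in the thread, is mechanically simple; a minimal sketch (the whitelist names are hypothetical, not Keystone's actual filter set):

```python
ALLOWED_FILTERS = {"name", "enabled", "domain_id"}  # hypothetical whitelist

def validate_query(params):
    """Return (status, body): 400 naming the offending parameters when
    any query-string key is unrecognized, else 200."""
    unknown = sorted(set(params) - ALLOWED_FILTERS)
    if unknown:
        return 400, {"error": "unknown query parameters: " + ", ".join(unknown)}
    return 200, {}
```

Naming the rejected parameters in the 400 body is what makes this dev-friendly: a typo like `nmae=` fails loudly instead of silently returning an unfiltered list.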

On Mon, Aug 31, 2015 at 10:53 AM, Ryan Brown <rybrown at redhat.com> wrote:

> On 08/27/2015 11:28 PM, Chen, Wei D wrote:
> >
> > I agree that returning 400 is a good idea; that way the client would know
> what happened.
> >
>
> +1, I think a 400 is the sensible choice here. It'd be much more likely
> to help devs catch their errors.
>
> --
> Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From rmeggins at redhat.com  Wed Sep  2 02:26:11 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Tue, 1 Sep 2015 20:26:11 -0600
Subject: [openstack-dev] correction: Re: [puppet][keystone] Keystone
 resource naming with domain support - no '::domain' if 'Default'
In-Reply-To: <55E5D8D0.6080102@redhat.com>
References: <55DCD07C.2040204@redhat.com> <55DEB546.1060207@redhat.com>
 <55DF0546.5050506@redhat.com> <55DF09E8.8030304@redhat.com>
 <55DF247F.1090405@redhat.com> <55DF8321.4040109@redhat.com>
 <55E5D8D0.6080102@redhat.com>
Message-ID: <55E65E43.8020007@redhat.com>

Slight correction below:

On 09/01/2015 10:56 AM, Rich Megginson wrote:
> To close this thread: 
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html
>
> puppet-openstack will support Keystone domain scoped resource names 
> without a '::domain' in the name, only if the 'default_domain_id' 
> parameter in Keystone has _not_ been set.

Or if the 'default_domain_id' parameter has been set to 'default'.

> That is, if the default domain is 'Default'.  This means that if the 
> user/operator doesn't care about domains at all, the operator doesn't 
> have to deal with them.  However, once the user/operator uses 
> `keystone_domain`, and uses `is_default => true`, this means the 
> user/operator _must_ use '::domain' with _all_ domain scoped Keystone 
> resource names.

Note that the domain named 'Default' with the UUID 'default' is created 
automatically by Keystone, so no need for puppet to create it or ensure 
that it exists.

>
> In addition:
>
> * In the OpenStack L release:
>    If 'default_domain_id' is set,
or if 'default_domain_id' is not 'default',
> puppet will issue a warning if a name is used without '::domain'. I 
> think this is a good thing to do, just in case someone sets the 
> default_domain_id by mistake.
>
> * In OpenStack M release:
>    Puppet will issue a warning if a name is used without '::domain'.
>
> * From Openstack N release:
>    A name must be used with '::domain'.
>
>
> __________________________________________________________________________ 
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From tnapierala at mirantis.com  Wed Sep  2 02:46:23 2015
From: tnapierala at mirantis.com (Tomasz Napierala)
Date: Tue, 1 Sep 2015 19:46:23 -0700
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
	issues
In-Reply-To: <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
Message-ID: <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>

> On 01 Sep 2015, at 03:43, Igor Kalnitsky <ikalnitsky at mirantis.com> wrote:
> 
> Hi folks,
> 
> So basically..
> 
> * core reviewers won't be feature leads anymore
> * core reviewers won't be assigned to features (or at least not full-time)
> * core reviewers will spend time doing review and participate design meetings
> * core reviewers will spend time triaging bugs
> 
> Is that correct?
> 

From what I understand, that is not correct. Core reviewers will still do all these activities in most cases. What we are trying to achieve is to direct cores' attention only to reviews that have already been reviewed by SMEs and peers. We
hope this will increase the quality of the code core reviewers are getting.

Regards,
-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland








From anant.techie at gmail.com  Wed Sep  2 04:12:57 2015
From: anant.techie at gmail.com (Anant Patil)
Date: Wed, 2 Sep 2015 09:42:57 +0530
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
Message-ID: <CAN25hfhDb04oJgwfmUEJC6z=vud0XJHVj2FE9nWSy3u+LeEk4Q@mail.gmail.com>

When the stack fails, it is marked as FAILED and all the sync points
that are needed to trigger the next set of resources are deleted. The
resources at the same level in the graph, as here, are supposed to
time out or fail with an exception. Many DB hits mean that the cache
data we were maintaining is not being used in the way we intended.

I don't think we really need 1; if it works with legacy without any such
constraints, it should work with convergence as well.

--
Anant

On Wed, Sep 2, 2015 at 5:23 AM, Angus Salkeld <asalkeld at mirantis.com> wrote:

> On Tue, Sep 1, 2015 at 10:45 PM Steven Hardy <shardy at redhat.com> wrote:
>
>> On Fri, Aug 28, 2015 at 01:35:52AM +0000, Angus Salkeld wrote:
>> >    Hi
>> >    I have been running some rally tests against convergence and our
>> existing
>> >    implementation to compare.
>> >    So far I have done the following:
>> >     1. defined a template with a resource
>> >        group:
>> https://github.com/asalkeld/convergence-rally/blob/master/templates/resource_group_test_resource.yaml.template
>> >     2. the inner resource looks like
>> >        this:
>> https://github.com/asalkeld/convergence-rally/blob/master/templates/server_with_volume.yaml.template
>> (it
>> >        uses TestResource to attempt to be a reasonable simulation of a
>> >        server+volume+floatingip)
>> >     3. defined a rally
>> >        job:
>> https://github.com/asalkeld/convergence-rally/blob/master/increasing_resources.yaml
>> that
>> >        creates X resources then updates to X*2 then deletes.
>> >     4. I then ran the above with/without convergence and with 2,4,8
>> >        heat-engines
>> >    Here are the results compared:
>> >
>> https://docs.google.com/spreadsheets/d/12kRtPsmZBl_y78aw684PTBg3op1ftUYsAEqXBtT800A/edit?usp=sharing
>> >    Some notes on the results so far:
>> >      * convergence with only 2 engines does suffer from RPC overload
>> (it
>> >        gets message timeouts on larger templates). I wonder if this is
>> the
>> >        problem in our convergence gate...
>> >      * convergence does very well with a reasonable number of engines
>> >        running.
>> >      * delete is slightly slower on convergence
>> >    Still to test:
>> >      * the above, but measure memory usage
>> >      * many small templates (run concurrently)
>>
>> So, I tried running my many-small-templates here with convergence enabled:
>>
>> https://bugs.launchpad.net/heat/+bug/1489548
>>
>> In heat.conf I set:
>>
>> max_resources_per_stack = -1
>> convergence_engine = true
>>
>> Most other settings (particularly RPC and DB settings) are defaults.
>>
>> Without convergence (but with max_resources_per_stack disabled) I see the
>> time to create a ResourceGroup of 400 nested stacks (each containing one
>> RandomString resource) is about 2.5 minutes (core i7 laptop w/SSD, 4 heat
>> workers e.g. the default for a 4 core machine).
>>
>> With convergence enabled, I see these errors from sqlalchemy:
>>
>> File "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 652, in
>> _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', u'  File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 444, in
>> checkout\n    rec = pool._do_get()\n', u'  File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 980, in
>> _do_get\n    (self.size(), self.overflow(), self._timeout))\n',
>> u'TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection
>> timed out, timeout 30\n'].
>>
>> I assume this means we're loading the DB much more in the convergence case
>> and overflowing the QueuePool?
>>
>
> Yeah, looks like it.
>
>
>>
>> This seems to happen when the RPC call from the ResourceGroup tries to
>> create some of the 400 nested stacks.
>>
>> Interestingly after this error, the parent stack moves to CREATE_FAILED,
>> but the engine remains (very) busy, to the point of being partially
>> responsive, so it looks like maybe the cancel-on-fail isn't working (I'm
>> assuming it isn't error_wait_time because the parent stack has been marked
>> FAILED and I'm pretty sure it's been more than 240s).
>>
>> I'll dig a bit deeper when I get time, but for now you might like to try
>> the stress test too.  It's a bit of a synthetic test, but it turns out to
>> be a reasonable proxy for some performance issues we observed when
>> creating
>> large-ish TripleO deployments (which also create a large number of nested
>> stacks concurrently).
>>
>
> Thanks a lot for testing Steve! I'll make 2 bugs for what you have raised
> 1. limit the number of resource actions in parallel (maybe based on the
> number of cores)
> 2. the cancel on fail error
>
> -Angus
>
>
>>
>> Steve
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From robertc at robertcollins.net  Wed Sep  2 04:33:36 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 2 Sep 2015 16:33:36 +1200
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
Message-ID: <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>

On 2 September 2015 at 11:53, Angus Salkeld <asalkeld at mirantis.com> wrote:

> 1. limit the number of resource actions in parallel (maybe base on the
> number of cores)

I'm having trouble mapping that back to 'and heat-engine is running on
3 separate servers'.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From Tim.Bell at cern.ch  Wed Sep  2 05:15:35 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Wed, 2 Sep 2015 05:15:35 +0000
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com> <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3C006E@CERNXCHG44.cern.ch>

It would be great to have plugins on the commands which are relevant to multiple projects; avoiding exposing all of the underlying projects as prefixes and getting more consistency would be much appreciated by the users.

Tim

From: Dean Troyer [mailto:dtroyer at gmail.com]
Sent: 01 September 2015 22:47
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] Command structure for OSC plugin

[late catch-up]

On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann <doug at doughellmann.com<mailto:doug at doughellmann.com>> wrote:
Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
> On 24/08/15 18:19 +0000, Tim Bell wrote:
> >
> >From a user perspective, where bare metal and VMs are just different flavors (with varying capabilities), can we not use the same commands (server create/rebuild/...) ? Containers will create the same conceptual problems.
> >
> >OSC can provide a converged interface but if we just replace '$ ironic XXXX' by '$ openstack baremetal XXXX', this seems to be a missed opportunity to hide the complexity from the end user.
> >
> >Can we re-use the existing server structures ?

I've wondered how users would see doing this; we've done it already with the quota and limits commands (blurring the distinction between project APIs).  At some level I am sure users really do not care about some of our project distinctions.


> To my knowledge, overriding or enhancing existing commands like that
> is not possible.

You would have to do it in tree, by making the existing commands
smart enough to talk to both nova and ironic, first to find the
server (which service knows about something with UUID XYZ?) and
then to take the appropriate action on that server using the right
client. So it could be done, but it might lose some of the nuance
between the server types by munging them into the same command. I
don't know what sorts of operations are different, but it would be
worth doing the analysis to see.

I do have an experimental plugin that hooks the server create command to add some options and change its behaviour so it is possible, but right now I wouldn't call it supported at all.  That might be something that we could consider doing though for things like this.

The current model for commands calling multiple project APIs is to put them in openstackclient.common, so yes, in-tree.

Overall, though, to stay consistent with OSC you would map operations into the current verbs as much as possible.  It is best to think in terms of how the CLI user is thinking and what she wants to do, and not how the REST or Python API is written.  In this case, 'baremetal' is a type of server, a set of attributes of a server, etc.  As mentioned earlier, containers will also have a similar paradigm to consider.
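For anyone unfamiliar with the mechanism: OSC plugins hang off setuptools entry points. A sketch of the wiring (package, module, and class names are hypothetical; `openstack.cli.extension` is OSC's documented plugin hook):

```ini
# setup.cfg of a hypothetical out-of-tree plugin
[entry_points]
openstack.cli.extension =
    mycloud = mycloudclient.osc.plugin

openstack.mycloud.v1 =
    baremetal_server_create = mycloudclient.osc.v1.server:CreateBaremetalServer
```
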

dt

--

Dean Troyer
dtroyer at gmail.com<mailto:dtroyer at gmail.com>

From gal.sagie at gmail.com  Wed Sep  2 05:59:21 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Wed, 2 Sep 2015 08:59:21 +0300
Subject: [openstack-dev] [Neutron] Port forwarding
Message-ID: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>

Hello All,

I have searched and found many past efforts to implement port forwarding in
Neutron.
I have found two incomplete blueprints [1], [2] and an abandoned patch [3].

There is even a project in Stackforge [4], [5] that claims
to implement this, but the L3 parts in it seem older than current master.

I have recently come across this requirement for various use cases. One of
them is
providing feature compliance with the Docker port-mapping feature (for Kuryr);
another is saving floating
IP space.
There have been many discussions in the past that call for this feature, so I
assume
there is demand to make this formal; just a few examples: [6], [7], [8],
[9]

The idea in a nutshell is to support port forwarding (TCP/UDP ports) on the
external router
leg from the public network to internal ports, so user can use one Floating
IP (the external
gateway router interface IP) and reach different internal ports depending
on the port numbers.
This should happen on the network node (and can also be leveraged for
security reasons).

I think the POC implementation in the Stackforge project shows that this
needs to be implemented inside the L3 parts of the current reference
implementation; it would be hard to maintain something like that in an
external repository.
(I also think that the API/DB extensions should live close to the current L3
reference implementation.)

I would like to renew the efforts on this feature and propose an RFE and a
spec for the next release; any comments/ideas/thoughts are welcome.
And of course, if any of the people interested, or any of the people who
worked on this before, want to join the effort, you are more than welcome to
join and comment.

Thanks
Gal.

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
[2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
[3] https://review.openstack.org/#/c/60512/
[4] https://github.com/stackforge/networking-portforwarding
[5] https://review.openstack.org/#/q/port+forwarding,n,z

[6]
https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
[7] http://www.gossamer-threads.com/lists/openstack/dev/34307
[8]
http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
[9]
http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/0c71663f/attachment.html>

From kirubak at zadarastorage.com  Wed Sep  2 06:22:02 2015
From: kirubak at zadarastorage.com (Kirubakaran Kaliannan)
Date: Wed, 2 Sep 2015 11:52:02 +0530
Subject: [openstack-dev] [Swift] ObjectController::async_update() latest
	change
In-Reply-To: <CCE0321C-5F23-4917-9CF3-879C80859961@not.mn>
References: <CCE0321C-5F23-4917-9CF3-879C80859961@not.mn>
Message-ID: <488cb9b5df6d4f80263961bbd9cfd213@mail.gmail.com>

Hi,

Regarding
https://github.com/openstack/swift/commit/2289137164231d7872731c2cf3d81b86f34f01a4

I am profiling each section of the Swift code. I noticed
ObjectController::async_update() has high latency, and tried to use threads
to parallelize the container_update. I then noticed the above changes and
profiled them as well.

On my setup, each container async_update takes 2 to 4 milliseconds, and it
takes 9-11 ms to complete the 3 async_updates. With the above changes it came
down to 7 to 9 ms, but I am expecting this to go further down, to 4 ms at least.

1.	Do we have any latency numbers for the improvement from the above change?
2.	Trying to understand the possibility: do we really want to wait for all
async_update() calls to complete? Can we just return success once one
async_update() is successful?
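The trade-off in question 2 can be sketched as follows (a sketch only, not Swift's actual implementation; `async_update_all` and `quorum` are hypothetical names): fire all container updates in parallel and return as soon as a quorum of them succeed, letting the stragglers finish in the background.

```python
# Sketch (not Swift code): run container updates concurrently and return
# success as soon as `quorum` of them succeed, instead of waiting for all.
from concurrent.futures import ThreadPoolExecutor, as_completed

def async_update_all(update_fns, quorum=1):
    """update_fns: callables returning True on success.

    Returns True once `quorum` updates have succeeded; any remaining
    updates keep running in the pool after we return.
    """
    pool = ThreadPoolExecutor(max_workers=len(update_fns))
    futures = [pool.submit(fn) for fn in update_fns]
    succeeded = 0
    try:
        for fut in as_completed(futures):
            if fut.result():
                succeeded += 1
                if succeeded >= quorum:
                    return True
        return False
    finally:
        pool.shutdown(wait=False)  # don't block on the stragglers
```

The risk, of course, is durability: returning after one success weakens the guarantee that all container replicas get the update, which is presumably why the current code waits.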


Thanks,
-kiru


From gkotton at vmware.com  Wed Sep  2 06:38:02 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Wed, 2 Sep 2015 06:38:02 +0000
Subject: [openstack-dev] [nova][vmware][qa] vmware nsx CI appears gone
In-Reply-To: <55E5BD2A.6020908@linux.vnet.ibm.com>
References: <55E5BD2A.6020908@linux.vnet.ibm.com>
Message-ID: <D20C73B6.BB30A%gkotton@vmware.com>

Hi,
We have an infra issue - seems related to
https://review.openstack.org/190047 - we are investigating.
Thanks
Gary

On 9/1/15, 5:58 PM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com> wrote:

>I haven't seen the vmware nsx CI reporting on anything in awhile but
>don't see any outage events here:
>
>https://wiki.openstack.org/wiki/NovaVMware/Minesweeper/Status
>
>Is there some status?
>
>-- 
>
>Thanks,
>
>Matt Riedemann
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From ramishra at redhat.com  Wed Sep  2 06:40:18 2015
From: ramishra at redhat.com (Rabi Mishra)
Date: Wed, 2 Sep 2015 02:40:18 -0400 (EDT)
Subject: [openstack-dev] [Heat] Use block_device_mapping_v2 for swap?
In-Reply-To: <55E4275F.5090908@redhat.com>
References: <55E0669B.5040406@redhat.com>
 <7a8461b0.10a12.14f82d71773.Coremail.tiantian223@163.com>
 <55E4275F.5090908@redhat.com>
Message-ID: <993515610.16698230.1441176018815.JavaMail.zimbra@redhat.com>



Rabi Mishra
+91-7757924167

----- Original Message -----
> On 31/08/15 11:19, TIANTIAN wrote:
> > 
> > 
> > At 2015-08-28 21:48:11, "marios" <marios at redhat.com> wrote:
> >> I am working with the OS::Nova::Server resource and looking at the tests
> >> [1], it should be possible to just define 'swap_size' and get a swap
> >> space created on the instance:
> >>
> >>  NovaCompute:
> >>    type: OS::Nova::Server
> >>    properties:
> >>      image:
> >>        {get_param: Image}
> >>      ...
> >>      block_device_mapping_v2:
> >>        - swap_size: 1
> >>
> >> When trying this the first thing I hit is a validation code nit that is
> >> already fixed @ [2] (I have slightly older heat) and I applied that fix.
> >> However, when I try and deploy with a Flavor that has a 2MB swap for
> >> example, and with the above template, I still end up with a 2MB swap.
> >>
> >> Am I right in my assumption that the above template is the equivalent of
> >> specifying --swap on the nova boot cli (i.e. should this work?)? I am
> >> working with the Ironic nova driver btw and when deploying using the
> >> nova cli using --swap works; has anyone used/tested this property
> >> recently? I'm not yet sure if this is worth filing a bug for yet.
> > 
> >>
> > ------According to the code of heat and novaclient, the above template is
> > the equivalent of specifying --swap on the nova boot cli:
> > https://github.com/openstack/python-novaclient/blob/master/novaclient/v2/shell.py#L142-L146
> > https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L822-L831
> > 
> > 
> > But I don't know much about nova, and I'm not sure how nova behaves if
> > different swap sizes are specified on the flavor and the BDM.
> 
> Hey TianTian, thanks very much for the pointers and sanity check. Yeah, I
> think it is intended to work that way (e.g. the tests on the heatclient
> also cover this, as per my original mail); I was mostly looking for a
> 'yeah, I did this recently and it worked ok for me'.

Hi Marios,

This seems to work fine with master and I do see swap created with size of the
'swap_size' specified in the template. 

[fedora at test-stack-novacompute-nwownbcokzra ~]$ swapon -s
Filename				Type		Size	Used	Priority
/dev/vdb                               	partition	524284	0	-1


Though I did face a novaclient issue with python-novaclient==2.26.0.

The above issue has been resolved by the below commit.
https://github.com/openstack/python-novaclient/commit/0a8fffffbaa48083ba2e79abf67096efa59fa18b

When specifying a swap_size larger than the flavor permits, we get
'CREATE_FAILED' with the following error, so I assume it works as expected.


resources.NovaCompute: Swap drive requested is larger than instance type
allows. (HTTP 400) (Request-ID: req-276150f5-082d-4c00-bb73-645c59e52727)


Thanks,
Rabi

> WRT the different swap size on flavor, in this case what is on the
> flavor becomes the effective maximum you can specify (and can override
> with --swap on the nova cli).
> 
> thanks! marios
> 
> > 
> > 
> >> thanks very much for reading! marios > >[1]
> >> >https://github.com/openstack/heat/blob/a1819ff0696635c516d0eb1c59fa4f70cae27d65/heat/tests/nova/test_server.py#L2446
> >> >[2]
> >> >https://review.openstack.org/#/q/I2c538161d88a51022b91b584f16c1439848e7ada,n,z
> >> >
> >> >__________________________________________________________________________
> >> >OpenStack Development Mailing List (not for usage questions)
> >> >Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From eduard.matei at cloudfounders.com  Wed Sep  2 06:58:23 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Wed, 2 Sep 2015 09:58:23 +0300
Subject: [openstack-dev] [cinder][ThirdPartyCI]CloudFounders
 OpenvStorage CI - request to re-add the cinder driver
In-Reply-To: <CAEOp6J_xvVk40sZT+uQ94ruMV5Z6P6vPJJfS+Z7ZkJwPGHFj-A@mail.gmail.com>
References: <CAEOp6J95YVvpzdcPCjrh=9b7fL78Gb2vXgLa4LWd8A-UDnJXzw@mail.gmail.com>
 <CAEOp6J_xvVk40sZT+uQ94ruMV5Z6P6vPJJfS+Z7ZkJwPGHFj-A@mail.gmail.com>
Message-ID: <CAEOp6J-GW_a5E3WTBMAE3U7Y4j5Q+hwxG0GpfV_0oC5asC4kFg@mail.gmail.com>

Hi,

Trying to get more attention to this ...

We had our driver removed by commit:
https://github.com/openstack/cinder/commit/f0ab819732d77a8a6dd1a91422ac183ac4894419
due to no CI.

Please let me know if there is something wrong so we can fix it ASAP and
have the driver back in Liberty (if possible).

The CI is commenting using the name "Open vStorage CI" instead of
"CloudFounders OpenvStorage CI".

Thanks,

Eduard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/46904ab7/attachment.html>

From ikalnitsky at mirantis.com  Wed Sep  2 07:00:40 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Wed, 2 Sep 2015 10:00:40 +0300
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
	issues
In-Reply-To: <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
 <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
Message-ID: <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>

It won't work that way. You are either busy writing code / leading a
feature, or doing reviews. The two can't be combined effectively; any
context switch between activities requires extra time to refocus.

On Wed, Sep 2, 2015 at 5:46 AM, Tomasz Napierala
<tnapierala at mirantis.com> wrote:
>> On 01 Sep 2015, at 03:43, Igor Kalnitsky <ikalnitsky at mirantis.com> wrote:
>>
>> Hi folks,
>>
>> So basically..
>>
>> * core reviewers won't be feature leads anymore
>> * core reviewers won't be assigned to features (or at least not full-time)
>> * core reviewers will spend time doing review and participate design meetings
>> * core reviewers will spend time triaging bugs
>>
>> Is that correct?
>>
>
> From what I understand, it is not correct. Core reviewers will still do all these activities in most cases. What we are trying to achieve is to get cores' attention only on reviews that have already been reviewed by SMEs and peers. We
> hope this will increase the quality of the code core reviewers are getting.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Product Engineering - Poland
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From rakhmerov at mirantis.com  Wed Sep  2 07:28:46 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 2 Sep 2015 13:28:46 +0600
Subject: [openstack-dev] [mistral] Displaying wf hierarchy in CLI
In-Reply-To: <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
References: <407873FB-639D-40C0-ABD8-A62EA6FDF876@mirantis.com>
 <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
Message-ID: <8F5D9511-9BB6-4EBE-998E-621A1D190881@mirantis.com>


> On 31 Aug 2015, at 23:47, Joshua Harlow <harlowja at outlook.com> wrote:
> 
> Would u guys have any use for the following being split out into its own library?
> 
> https://github.com/openstack/taskflow/blob/master/taskflow/types/tree.py

Do you mean we could move this, for instance, into oslo somewhere? Taskflow itself is in oslo as well, of course; I rather mean 'somewhere else in oslo'.

> It already has a pformat method that could be used to do your drawing of the 'tree'...
> 
> http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.tree.Node.pformat
> 
> Might be useful for u folks? Taskflow uses it to be able to show information that is tree-like to the developer/user for similar purposes (it also supports using pydot to dump things out in dot graph format):
> 
> For example http://tempsend.com/A8AA89F397/4663/car.pdf is the graph of an example (in particular https://github.com/openstack/taskflow/blob/master/taskflow/examples/build_a_car.py)

Yeah, that looks interesting. I think it can work out for our case. We just need to come to agreement on what exactly we want to display.
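A self-contained sketch of the kind of output this would give (this is a pure-Python stand-in, not taskflow's actual `Node` class, and the workflow/task names are made up) applied to a workflow-execution hierarchy:

```python
# Minimal stand-in (not taskflow's tree.Node) showing the style of indented
# tree rendering that pformat-like methods produce for nested executions.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def pformat(self, depth=0):
        """Return the tree as a list of indented lines."""
        lines = ["%s%s" % ("   " * depth, self.name)]
        for child in self.children:
            lines.extend(child.pformat(depth + 1))
        return lines

# A workflow execution containing a task that launched a sub-workflow:
root = Node("wf_main", [
    Node("task_a"),
    Node("task_b", [Node("sub_wf", [Node("sub_task_1")])]),
])
print("\n".join(root.pformat()))
```

The parent-execution field Renat mentions below is exactly what would let the CLI reconstruct such a tree from flat DB records.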

Thanks, Joshua.


Renat Akhmerov
@ Mirantis Inc.



From rakhmerov at mirantis.com  Wed Sep  2 07:33:38 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 2 Sep 2015 13:33:38 +0600
Subject: [openstack-dev] [mistral] Displaying wf hierarchy in CLI
In-Reply-To: <CALjNAZ2B=cc-C_POc8D1+DgA=vF8hx6hNJV0c5B9bdfrRsi4OQ@mail.gmail.com>
References: <407873FB-639D-40C0-ABD8-A62EA6FDF876@mirantis.com>
 <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
 <CALjNAZ2B=cc-C_POc8D1+DgA=vF8hx6hNJV0c5B9bdfrRsi4OQ@mail.gmail.com>
Message-ID: <71CC2E3C-A33A-401A-8268-B2FDB39A82F5@mirantis.com>


> On 01 Sep 2015, at 22:21, Lingxian Kong <anlin.kong at gmail.com> wrote:
> 
> To achieve that, we should record the execution/task-execution relationship while an execution is running, because we have no such info currently.

Well, in the DB model we do in fact have a field pointing to the parent task execution id (and hence we can figure out the workflow execution id easily); it's required because a child workflow needs to communicate its result to its parent workflow somehow. But it's not displayed anywhere. So we can use this field.


Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/a43f0b73/attachment.html>

From wei.d.chen at intel.com  Wed Sep  2 07:39:15 2015
From: wei.d.chen at intel.com (Chen, Wei D)
Date: Wed, 2 Sep 2015 07:39:15 +0000
Subject: [openstack-dev] [api][keystone][openstackclient] Standards for
 object name attributes and filtering
In-Reply-To: <CAC=h7gVJvofiHy9GoZNw+UQYuat4jAvBgw4QeCxGdr6JcPM6BQ@mail.gmail.com>
References: <F74A5456-FEA8-438D-B68A-5C6050F8C1B2@linux.vnet.ibm.com>
 <438ACB3F-AF51-4873-9BE8-5AA8420A7AEA@rackspace.com>
 <C5A0092C63E939488005F15F736A81120A8B09E8@SHSMSX103.ccr.corp.intel.com>
 <55E47864.3010903@redhat.com>
 <CAC=h7gVJvofiHy9GoZNw+UQYuat4jAvBgw4QeCxGdr6JcPM6BQ@mail.gmail.com>
Message-ID: <C5A0092C63E939488005F15F736A81120A8B2D71@SHSMSX103.ccr.corp.intel.com>

Dolph,



Ignoring these arbitrary query strings is what we did in Keystone; the
current implementation deliberately ignores them instead of passing them to
the backend driver (if these parameters went to the backend driver, we would
get nothing for sure). There may be no model answer for this situation, but I
guess there must have been some consideration at the time. Do you still
remember the concerns around this?





Best Regards,

Dave Chen



From: Dolph Mathews [mailto:dolph.mathews at gmail.com]
Sent: Wednesday, September 02, 2015 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering



Does anyone have an example of an API outside of OpenStack that would return 400 in this situation (arbitrary query string 
parameters)? Based on my past experience, I'd expect them to be ignored, but I can't think of a reason why a 400 would be a bad idea 
(but I suspect there's some prior art / discussion out there).



On Mon, Aug 31, 2015 at 10:53 AM, Ryan Brown <rybrown at redhat.com> wrote:

On 08/27/2015 11:28 PM, Chen, Wei D wrote:
>
> I agree that return 400 is good idea, thus client user would know what happened.
>

+1, I think a 400 is the sensible choice here. It'd be much more likely
to help devs catch their errors.
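The 400-on-unknown-parameter behavior being argued for could look like this (a sketch with hypothetical names; this is not Keystone's real validation code, and the allowed-filter set is made up):

```python
# Sketch of the proposal: reject unrecognized query-string parameters with a
# 400 instead of silently ignoring them. ALLOWED_FILTERS is illustrative.
ALLOWED_FILTERS = {"name", "enabled", "domain_id"}

def validate_query_params(params):
    """Return an (http_status, body) pair for the given query parameters.

    A 400 response names the unknown parameters so the client dev can see
    exactly which filter was mistyped, rather than getting empty results.
    """
    unknown = sorted(set(params) - ALLOWED_FILTERS)
    if unknown:
        return 400, "Unknown query parameters: %s" % ", ".join(unknown)
    return 200, "OK"
```

The error body naming the offending parameters is what makes this strictly friendlier than silent ignoring: a typo like `?nmae=foo` fails loudly instead of returning an empty list.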

--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/68cb9154/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6648 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/68cb9154/attachment.bin>

From sgolovatiuk at mirantis.com  Wed Sep  2 08:31:25 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Wed, 2 Sep 2015 10:31:25 +0200
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
Message-ID: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>

Hi,

I would like to nominate Alex Schultz to the Fuel-Library core team. He's
been doing a great job writing patches. At the same time, his reviews are
solid, with comments for further improvements. He's the #3 reviewer and #1
contributor, with 46 commits over the last 90 days [1]. Additionally, Alex
has been very active in IRC, providing great ideas. His 'librarian' blueprint
[3] was a big step toward the Puppet community.

Fuel Library, please vote with +1/-1 for approval/objection. Voting will be
open until September 9th. This will go forward after voting is closed if
there are no objections.

Overall contribution:
[0] http://stackalytics.com/?user_id=alex-schultz
Fuel library contribution for last 90 days:
[1] http://stackalytics.com/report/contribution/fuel-library/90
List of reviews:
[2]
https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
'Librarian activities' in the mailing list:
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/3c2f827c/attachment.html>

From gal.sagie at gmail.com  Wed Sep  2 08:37:22 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Wed, 2 Sep 2015 11:37:22 +0300
Subject: [openstack-dev] [Neutron] Un-addressed Port spec and implementation
Message-ID: <CAG9LJa7qJR2Dr0B8w5=a8YPXtdqkhr3TMBbHaegOvswwRXoN8A@mail.gmail.com>

Hello All,

The un-addressed port spec [1] was approved for Liberty.
I think this spec has good potential to provide very interesting solutions,
not only for NFV use cases but also for multi-site connectivity, and I would
really like to see it move forward with the community.

There are some issues we need to discuss regarding L2 population (both for
the reference
implementation and for any "SDN" solution), but we can iterate on them.

This email relates to a recent revert [2] that was done to prevent a
spoofing possibility introduced by recently merged work.

If I understand the problem correctly, an un-addressed port can now perform
ARP spoofing on the address of a port that already exists in the same network
and listen to its traffic
(a problem which becomes bigger with networks shared among tenants).

One possible way to prevent this is to keep flow entries that block the port
from pretending to have an IP that is already part of the network (or
subnet). So there would be ARP spoofing checks verifying that the port is not
answering for an IP that is already configured.
*Any thoughts/comments on that?*
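To make the suggestion concrete, here is a sketch of generating per-port ARP checks as OVS-style flow strings (illustrative only: this is not the reference implementation's code, and the port number and IPs are made up):

```python
# Illustrative only: emit OVS-style drop flows so an un-addressed port cannot
# answer ARP for addresses already allocated on the network/subnet.
def anti_spoof_flows(port_ofport, allocated_ips):
    """One high-priority drop rule per already-allocated IP.

    arp_spa matches the sender protocol address in the ARP packet, i.e. the
    IP the port is claiming to own.
    """
    return [
        "priority=100,arp,in_port=%s,arp_spa=%s,actions=drop"
        % (port_ofport, ip)
        for ip in allocated_ips
    ]

flows = anti_spoof_flows(7, ["10.0.0.5", "10.0.0.6"])
```

These rules would need refreshing whenever an address is allocated or released on the network, which is where the L2-population interaction mentioned above comes back in.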

Unrelated to this, I think that an un-addressed port should work in subnet
context when it comes to L2 population and traffic forwarding, so that the
un-addressed port only gets traffic for addresses that are not found but are
on the same subnet as it.
(I understand this is a bigger challenge and doesn't match the way Neutron
networks work today, but we can iterate on this as well, since it's unrelated
to the security subject.)

Thanks
Gal.

[1]
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/unaddressed-port.rst
[2] https://review.openstack.org/#/c/218470/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/1495b093/attachment.html>

From shardy at redhat.com  Wed Sep  2 08:55:47 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 2 Sep 2015 09:55:47 +0100
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
 <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>
Message-ID: <20150902085546.GA25909@t430slt.redhat.com>

On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
> On 2 September 2015 at 11:53, Angus Salkeld <asalkeld at mirantis.com> wrote:
> 
> > 1. limit the number of resource actions in parallel (maybe base on the
> > number of cores)
> 
> I'm having trouble mapping that back to 'and heat-engine is running on
> 3 separate servers'.

I think Angus was responding to my test feedback, which was a different
setup, one 4-core laptop running heat-engine with 4 worker processes.

In that environment, the level of additional concurrency becomes a problem
because all heat workers become so busy that creating a large stack
DoSes the Heat services, and in my case also the DB.

If we had a configurable option, similar to num_engine_workers, which
enabled control of the number of resource actions in parallel, I probably
could have controlled that explosion in activity into a more manageable
series of tasks, e.g. I'd set num_resource_actions to
(num_engine_workers*2) or something.
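The kind of throttle being suggested could be sketched like this (a sketch only: `num_resource_actions` is the hypothetical option proposed above, not an existing Heat config, and the worker counts are illustrative):

```python
# Sketch: cap concurrent resource actions with a semaphore, analogous to the
# proposed num_resource_actions = num_engine_workers * 2 option.
import threading
from concurrent.futures import ThreadPoolExecutor

NUM_ENGINE_WORKERS = 4
num_resource_actions = NUM_ENGINE_WORKERS * 2  # hypothetical config option
_throttle = threading.BoundedSemaphore(num_resource_actions)

def run_resource_action(action):
    """Run one resource action, never exceeding the configured cap."""
    with _throttle:
        return action()

# A large stack with many resources no longer floods the engine: at most
# num_resource_actions of them make progress at any one moment.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_resource_action, [lambda: "ok"] * 100))
```

The semaphore converts the "explosion" of concurrent work into a bounded queue, which is exactly the DoS-avoidance behavior described in the test feedback.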

Steve


From anlin.kong at gmail.com  Wed Sep  2 08:56:55 2015
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Wed, 2 Sep 2015 16:56:55 +0800
Subject: [openstack-dev] [mistral] Displaying wf hierarchy in CLI
In-Reply-To: <71CC2E3C-A33A-401A-8268-B2FDB39A82F5@mirantis.com>
References: <407873FB-639D-40C0-ABD8-A62EA6FDF876@mirantis.com>
 <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
 <CALjNAZ2B=cc-C_POc8D1+DgA=vF8hx6hNJV0c5B9bdfrRsi4OQ@mail.gmail.com>
 <71CC2E3C-A33A-401A-8268-B2FDB39A82F5@mirantis.com>
Message-ID: <CALjNAZ1XhAEPyYHL+AAu4WjpiJAvkrGfMznERO1hdxZUBgA_Ww@mail.gmail.com>

Hi, Renat,

I want to make it clear, then: what you want to see is dependencies between
workflow executions, or between task executions in one workflow? We know that
we could use either a separate task or a workflow as a 'task'.

On Wed, Sep 2, 2015 at 3:33 PM, Renat Akhmerov <rakhmerov at mirantis.com>
wrote:

>
> On 01 Sep 2015, at 22:21, Lingxian Kong <anlin.kong at gmail.com> wrote:
>
> To achieve that, we should record the execution/task-execution
> relationship while an execution is running, because we have no such info
> currently.
>
>
> Well, in the DB model we do in fact have a field pointing to the parent
> task execution id (and hence we can figure out the workflow execution id
> easily); it's required because a child workflow needs to communicate its
> result to its parent workflow somehow. But it's not displayed anywhere. So
> we can use this field.
>
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
*Regards!*
*-----------------------------------*
*Lingxian Kong*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/cb9062ac/attachment.html>

From geguileo at redhat.com  Wed Sep  2 09:19:02 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 2 Sep 2015 11:19:02 +0200
Subject: [openstack-dev] [cinder] L3 low pri review queue starvation
In-Reply-To: <CAPWkaSUbTRJr1qXFSLN+d1qOF7zjWd7r64cVMpYuUxDYY6t4Ug@mail.gmail.com>
References: <55E592C6.7070808@dyncloud.net>
 <CAPWkaSUbTRJr1qXFSLN+d1qOF7zjWd7r64cVMpYuUxDYY6t4Ug@mail.gmail.com>
Message-ID: <20150902091902.GB19943@localhost>

On Tue, Sep 01, 2015 at 09:30:26AM -0600, John Griffith wrote:
> On Tue, Sep 1, 2015 at 5:57 AM, Tom Barron <tpb at dyncloud.net> wrote:
> 
> > [Yesterday while discussing the following issue on IRC, jgriffith
> > suggested that I post to the dev list in preparation for a discussion in
> > Wednesday's cinder meeting.]
> >
> > Please take a look at the 10 "Low" priority reviews in the cinder
> > Liberty 3 etherpad that were punted to Mitaka yesterday. [1]
> >
> > Six of these *never* [2] received a vote from a core reviewer. With the
> > exception of the first in the list, which has 35 patch sets, none of the
> > others received a vote before Friday, August 28.  Of these, none had
> > more than -1s on minor issues, and these have been remedied.
> >
> > Review https://review.openstack.org/#/c/213855 "Implement
> > manage/unmanage snapshot in Pure drivers" is a great example:
> >
> >    * approved blueprint for a valuable feature
> >    * pristine code
> >    * passes CI and Jenkins (and by the deadline)
> >    * never reviewed
> >
> > We have 11 core reviewers, all of whom were very busy doing reviews
> > during L3, but evidently this set of reviews didn't really have much
> > chance of making it.  This looks like a classic case where the
> > individually rational priority decisions of each core reviewer
> > collectively resulted in starving the Low Priority review queue.
> >

I can't speak for other cores, but in my case reviewing was mostly not
based on my own priorities; I reviewed patches based on the already-set
priority of each patch, as well as patches that I was already
reviewing.

Some of those medium priority patches took me a lot of time to review,
since they were not trivial (some needed some serious rework).  As for
patches I was already reviewing, as you can imagine it wouldn't be fair
to just ignore a patch that I've been reviewing for some time just when
it's almost ready and the deadline is closing in.

Having said that, I have to agree that those patches didn't have much of a
chance, and I apologize for my part in that. While it is no excuse, I have
to agree with jgriffith when he says that the owners of those patches should
have pushed cores for reviews (even if this is clearly not the "right" way
to manage it).

> > One way to remedy would be for the 11 core reviewers to devote a day or
> > two to cleaning up this backlog of 10 outstanding reviews rather than
> > punting all of them out to Mitaka.
> >
> > Thanks for your time and consideration.
> >
> > Respectfully,
> >
> > -- Tom Barron
> >
> > [1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
> > [2] At the risk of stating the obvious, in this count I ignore purely
> > procedural votes such as the final -2.
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> Thanks Tom, this is sadly an ongoing problem every release.  I think we
> have a number of things we can talk about at the summit to try and
> make some of this better.  I honestly think that if people were to
> actually "use" Launchpad instead of creating tracking etherpads
> everywhere it would help.  What I mean is that there is a ranked
> targeting of items in Launchpad and we should use it; core team
> members should know it as the source of truth for things that must
> get reviewed.
> 

I agree, we should use Launchpad's functionality to track BPs and Bugs
targeted for each milestone, and maybe we can discuss on a workflow that
helps us reduce starvation at the same time that helps us keep track of
code reviewers responsible for each item.

Just spitballing here, but we could add to BP's work items and Bug's
comments what core members will be responsible for reviewing related
patches.  Although this means cores will have to check this on every
review they do that has a BP or Bug number, so if there are already 2
cores responsible for that feature they should preferably just move on
to other patches and if there are not 2 core reviewers they should add
themselves to LP.  This way patch owners know who they should bug for
reviews on their patches and if there are no core reviewers for them
they should look for some (or wait for them to "claim" that
bug/feature).

> As far as Liberty and your patches; Yesterday was the freeze point, the
> entire Cinder team agreed on that (yourself included both at the mid-cycle
> meet up and at the team meeting two weeks ago when Thingee reiterated the
> deadlines).  If you noticed last week that your patches weren't going
> anywhere YOU should've wrangled up some reviews.
> 
> Furthermore, I've explained every release for the last 3 or 4 years that
> there's no silver bullet, no magic process when it comes to review
> throughput.  ESPECIALLY when it comes to the 3rd milestone.  You can try
> landing strips, priority listed etherpads, sponsors etc etc but the fact is
> that things happen, the gate slows down (or we completely break on the
> Cinder side like we did yesterday).  This doesn't mean "oh, well then you
> get another day or two", it means stuff happens and it sucks but first
> course of action is drop low priority items.  It just means if you really
> wanted it you probably should've made it happen earlier.  Just so you know,
> I run into this every release as well.  I had a number of things in
> progress that I had hoped to finish last week and yesterday, BUT my
> priority shifted to trying to help get the cinder patches back on track and
> get the items in Launchpad updated to actually reflect something that was
> somewhat possible.
> 
> The only thing that works is "submit early and review often" it's simple.
> 

While this is true, sometimes it's not a matter of submitting early,
because some patches just keep getting ignored, and when the deadline closes
in, low priority patches will always have higher priority patches going
before them, unless we set up a workflow that gives them a better chance of
getting reviews.

> Finally, I pointed out to you yesterday that we could certainly discuss as
> a team what to do with your patches.  BUT that given how terribly far
> behind we were in the process that I wanted reviewers to focus on medium,
> high and critical prioritized items.  That's what prioritization's are for,
> it means when crunch time hits and things hit the fan it's usually the
> "low" priority things that get bumped.
> 
> Thanks,

I am not against discussing this, but in all fairness we should discuss
*everybody's* patches that were targeted for L3, not just tbarron's.

Cheers,
Gorka.


From rcresswe at cisco.com  Wed Sep  2 09:30:15 2015
From: rcresswe at cisco.com (Rob Cresswell (rcresswe))
Date: Wed, 2 Sep 2015 09:30:15 +0000
Subject: [openstack-dev] [horizon] URL Sanity
In-Reply-To: <CAHrZfZCFLR5+BN=ajMdrGhD_sq6HHbyoUL+S2cDcJeJx2Lhrcg@mail.gmail.com>
References: <D20B7717.DE04%rcresswe@cisco.com>
 <CAHrZfZCFLR5+BN=ajMdrGhD_sq6HHbyoUL+S2cDcJeJx2Lhrcg@mail.gmail.com>
Message-ID: <D20C7FB2.DEDE%rcresswe@cisco.com>

Yeah, the 'legacy' thought is what's making me second guess the effort. We're still in limbo with the language focus, IMO.

Are we nearing a change in routing? I remember work being demo'd at the last summit, but I haven't seen any of it since.

From: Richard Jones <r1chardj0n3s at gmail.com<mailto:r1chardj0n3s at gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Wednesday, 2 September 2015 00:32
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [horizon] URL Sanity

Interesting idea, and in general I'm for consistency. I can't speak directly to the network/port question, though it seems to me that if ports must be attached to networks then it makes sense for the URL to reflect that.

On the other hand, some could argue that the Django URL routing is ... legacy ... and shouldn't be messed with :)

On the gripping hand, thinking about this could inform future angular routing planning...

On 2 September 2015 at 00:39, Rob Cresswell (rcresswe) <rcresswe at cisco.com<mailto:rcresswe at cisco.com>> wrote:
Hi all,

I recently started looking into properly implementing breadcrumbs to make navigation clearer, especially around nested resources (Subnets Detail page, for example). The idea is to use the request.path to form a logical breadcrumb that isn't dependent on browser history ( https://review.openstack.org/#/c/129985/3/horizon/browsers/breadcrumb.py ). Unfortunately, this breaks down quite quickly because we use odd patterns like `<resources>/<resource_id>/detail`, and `<resources>/<resource_id>` doesn't exist.
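
A minimal sketch of the request.path idea in plain Python (hypothetical helper, not the actual patch), which also shows why every URL prefix needs to be routable:

```python
def breadcrumbs(path):
    """Derive (label, url) crumbs from a request path.

    Toy illustration only: a real implementation would also resolve each
    prefix against the URL conf and drop unroutable ones, which is
    exactly where `<resources>/<resource_id>/detail` breaks, since the
    `<resources>/<resource_id>` prefix has no route.
    """
    crumbs, url = [], ''
    for part in path.strip('/').split('/'):
        url += '/' + part
        crumbs.append((part, url + '/'))
    return crumbs

# breadcrumbs('/admin/networks/42/') ->
# [('admin', '/admin/'), ('networks', '/admin/networks/'),
#  ('42', '/admin/networks/42/')]
```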

This made me realise how much of an inconsistent mess the URL patterns are.  I've started cleaning them up, so we move from these patterns:

`/admin/networks/<network_id>/detail` - Detail page for a Network
`/admin/networks/<network_id>/addsubnet` - Create page for a Subnet

To patterns in line with usual CRUD usages, such as:

`/admin/networks/<network_id>` - Detail page for a Network
`/admin/networks/<network_id>/subnets/create` - Create page for a Subnet

This is mostly trivial, just removing extraneous words and adding consistency, with the end goal being that every panel follows patterns like:

`/<resources>` - Index page
`/<resources>/<resource_id>` - Detail page for a single resource
`/<resources>/create` - Create new resource
`/<resources>/<resource_id>/update` - Update a resource

This gets a little complex around nested items. Should a Port, for example, which has a unique ID, be reachable in Horizon by just its ID? Ports must always be attached to a network, as I understand it. There are multiple ways to express this:

`/networks/ports/<port_id>` - Current implementation
`/networks/<network_id>/ports/<port_id>` - My preferred structure
`/ports/<port_id>` - Another option
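
To make the comparison concrete, here is a rough sketch of my preferred scheme as Django-style regexes (plain Python `re`, hypothetical route names, not Horizon code). Note that, as in Django, the static `create` route must be registered before the `<network_id>` detail route, or "create" would be captured as an ID:

```python
import re

# Proposed canonical routes, expressed as Django-style regexes.
ROUTES = [
    ('index',  re.compile(r'^/admin/networks/$')),
    ('create', re.compile(r'^/admin/networks/create/$')),
    ('detail', re.compile(r'^/admin/networks/(?P<network_id>[^/]+)/$')),
    ('update', re.compile(r'^/admin/networks/(?P<network_id>[^/]+)/update/$')),
    # Nested resource: a port is always reached through its network.
    ('port_detail', re.compile(
        r'^/admin/networks/(?P<network_id>[^/]+)/ports/(?P<port_id>[^/]+)/$')),
]

def resolve(path):
    """Return (route_name, captured_ids) for the first matching route."""
    for name, regex in ROUTES:
        match = regex.match(path)
        if match:
            return name, match.groupdict()
    return None

# resolve('/admin/networks/net1/ports/p9/')
# -> ('port_detail', {'network_id': 'net1', 'port_id': 'p9'})
```

The old `addsubnet`-style suffixes would simply not resolve under this scheme, which is the point of the cleanup.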

Does anyone have any opinions on how to handle this structuring, or if it's even necessary?

Regards,
Rob

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/b803cc40/attachment.html>

From vstinner at redhat.com  Wed Sep  2 09:32:45 2015
From: vstinner at redhat.com (Victor Stinner)
Date: Wed, 2 Sep 2015 11:32:45 +0200
Subject: [openstack-dev] [cinder] How to link a change to the completed
 cinder-python3 blueprint?
Message-ID: <55E6C23D.4010102@redhat.com>

Hi,

I'm porting Cinder to Python 3. There are discussions on my patches to 
decide how to link my changes to the cinder-python3 blueprint (syntax of 
the "Blueprint cinder-python3" line).

Mike Perez completed the blueprint on 2015-08-15. I don't understand 
why, since the port is incomplete. I'm still writing patches to continue the port.

Because the blueprint is completed, launchpad is unable to find the 
blueprint when you click on the blueprint line from gerrit. Some people 
asked me to write the URL to the blueprint instead. The problem is that 
"git review" uses the topic "bp/https" instead of "bp/cinder-python3" in 
this case. I have to specify the topic manually ("git review -t 
bp/cinder-python3"). The URL may also change in the future; using a name 
is more future-proof.

Another question is how to mention the blueprint in the commit message. 
I already wrote 35 changes which were merged into Cinder using the 
syntax "Blueprint cinder-python3". But some people are now asking me to 
use the syntax "Implements: blueprint cinder-python3", "Partially 
implements: blueprint cinder-python3", "Implements: blueprint <link>" or 
something else.

I suggest continuing to use "Blueprint cinder-python3", and maybe 
changing the status of the blueprint so it can be found again in Launchpad.


cinder-python3 blueprint:
https://blueprints.launchpad.net/cinder/+spec/cinder-python3

My Python 3 patches for Cinder:
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/cinder-python3,n,z


Note: I wrote an email because people started to vote -1 on my changes 
because of the syntax of the "blueprint" line in my commit message.

Victor


From tpb at dyncloud.net  Wed Sep  2 09:35:37 2015
From: tpb at dyncloud.net (Tom Barron)
Date: Wed, 2 Sep 2015 05:35:37 -0400
Subject: [openstack-dev] [cinder] L3 low pri review queue starvation
In-Reply-To: <20150902091902.GB19943@localhost>
References: <55E592C6.7070808@dyncloud.net>
 <CAPWkaSUbTRJr1qXFSLN+d1qOF7zjWd7r64cVMpYuUxDYY6t4Ug@mail.gmail.com>
 <20150902091902.GB19943@localhost>
Message-ID: <55E6C2E9.3080600@dyncloud.net>

On 9/2/15 5:19 AM, Gorka Eguileor wrote:
> On Tue, Sep 01, 2015 at 09:30:26AM -0600, John Griffith wrote:
>> On Tue, Sep 1, 2015 at 5:57 AM, Tom Barron <tpb at dyncloud.net> wrote:
>>
>>> [Yesterday while discussing the following issue on IRC, jgriffith
>>> suggested that I post to the dev list in preparation for a discussion in
>>> Wednesday's cinder meeting.]
>>>
>>> Please take a look at the 10 "Low" priority reviews in the cinder
>>> Liberty 3 etherpad that were punted to Mitaka yesterday. [1]
>>>
>>> Six of these *never* [2] received a vote from a core reviewer. With the
>>> exception of the first in the list, which has 35 patch sets, none of the
>>> others received a vote before Friday, August 28.  Of these, none had
>>> more than -1s on minor issues, and these have been remedied.
>>>
>>> Review https://review.openstack.org/#/c/213855 "Implement
>>> manage/unmanage snapshot in Pure drivers" is a great example:
>>>
>>>    * approved blueprint for a valuable feature
>>>    * pristine code
>>>    * passes CI and Jenkins (and by the deadline)
>>>    * never reviewed
>>>
>>> We have 11 core reviewers, all of whom were very busy doing reviews
>>> during L3, but evidently this set of reviews didn't really have much
>>> chance of making it.  This looks like a classic case where the
>>> individually rational priority decisions of each core reviewer
>>> collectively resulted in starving the Low Priority review queue.
>>>
> 
> I can't speak for other cores, but in my case reviewing was mostly not
> based on my own priorities, I reviewed patches based on the already set
> priority of each patch as well as patches that I was already
> reviewing.
> 
> Some of those medium priority patches took me a lot of time to review,
> since they were not trivial (some needed some serious rework).  As for
> patches I was already reviewing, as you can imagine it wouldn't be fair
> to just ignore a patch that I've been reviewing for some time just when
> it's almost ready and the deadline is closing in.
> 

That's why I said that this situation is an outcome of individually
rational decisions.  It should be clear that none of this is intended as
a complaint about reviewers or reviewer's performance.

> Having said that, I have to agree that those patches didn't have much
> chance, and I apologize for my part in that.  While it is no excuse, I
> have to agree with jgriffith when he says that those patches should have
> pushed cores for reviews (even if this is clearly not the "right" way to
> manage it).

No apology required!

> 
>>> One way to remedy would be for the 11 core reviewers to devote a day or
>>> two to cleaning up this backlog of 10 outstanding reviews rather than
>>> punting all of them out to Mitaka.
>>>
>>> Thanks for your time and consideration.
>>>
>>> Respectfully,
>>>
>>> -- Tom Barron
>>>
>>> [1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
>>> [2] At the risk of stating the obvious, in this count I ignore purely
>>> procedural votes such as the final -2.
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ?Thanks Tom, this is sadly an ongoing problem every release.  I think we
>> have a number of things we can talk about at the summit to try and
>> make some of this better.  I honestly think that if people were to
>> actually "use" launchpad instead of creating tracking etherpads
>> everywhere it would help.  What I mean is that there is a ranked
>> targeting of items in Launchpad and we should use it; core team
>> members should treat that as the source of truth for the things that must
>> get reviewed.
>>
> 
> I agree, we should use Launchpad's functionality to track BPs and Bugs
> targeted for each milestone, and maybe we can discuss a workflow that
> helps us reduce starvation while also keeping track of the code
> reviewers responsible for each item.
> 
> Just spitballing here, but we could add to BP's work items and Bug's
> comments what core members will be responsible for reviewing related
> patches.  Although this means cores will have to check this on every
> review they do that has a BP or Bug number, so if there are already 2
> cores responsible for that feature they should preferably just move on
> to other patches and if there are not 2 core reviewers they should add
> themselves to LP.  This way patch owners know who they should bug for
> reviews on their patches and if there are no core reviewers for them
> they should look for some (or wait for them to "claim" that
> bug/feature).
> 
>> As far as Liberty and your patches; Yesterday was the freeze point, the
>> entire Cinder team agreed on that (yourself included both at the mid-cycle
>> meet up and at the team meeting two weeks ago when Thingee reiterated the
>> deadlines).  If you noticed last week that your patches weren't going
>> anywhere YOU should've wrangled up some reviews.
>>
>> Furthermore, I've explained every release for the last 3 or 4 years that
>> there's no silver bullet, no magic process when it comes to review
>> throughput.  ESPECIALLY when it comes to the 3rd milestone.  You can try
>> landing strips, priority listed etherpads, sponsors etc etc but the fact is
>> that things happen, the gate slows down (or we completely break on the
>> Cinder side like we did yesterday).  This doesn't mean "oh, well then you
>> get another day or two", it means stuff happens and it sucks but first
>> course of action is drop low priority items.  It just means if you really
>> wanted it you probably should've made it happen earlier.  Just so you know,
>> I run into this every release as well.  I had a number of things in
>> progress that I had hoped to finish last week and yesterday, BUT my
>> priority shifted to trying to help get the cinder patches back on track and
>> get the items in Launchpad updated to actually reflect something that was
>> somewhat possible.
>>
>> The only thing that works is "submit early and review often" it's simple.
>>
> 
> While this is true, sometimes it's not just a matter of submitting
> early: some patches simply keep getting ignored, and as the deadline
> closes in, low-priority patches will always have higher-priority
> patches going ahead of them unless we set up a workflow that gives
> them a better chance of getting reviews.
> 
>> Finally, I pointed out to you yesterday that we could certainly discuss as
>> a team what to do with your patches.  BUT that given how terribly far
>> behind we were in the process that I wanted reviewers to focus on medium,
>> high and critical prioritized items.  That's what prioritizations are for,
>> it means when crunch time hits and things hit the fan it's usually the
>> "low" priority things that get bumped.
>>
>> Thanks,
> 
> I am not against discussing this, but in all fairness we should discuss
> *everybody's* patches that were targeted for L3, not just tbarron's.

Completely agree, and I hope that everyone understands that this is the
way I've been trying to frame the issue myself.

-- Tom
> 
> Cheers,
> Gorka.
> 



From thierry at openstack.org  Wed Sep  2 09:36:14 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 2 Sep 2015 11:36:14 +0200
Subject: [openstack-dev] [trove] Anyone using containers?
In-Reply-To: <D20B4812.515A1%mlowery@ebay.com>
References: <D20B4812.515A1%mlowery@ebay.com>
Message-ID: <55E6C30E.1000908@openstack.org>

Lowery, Mathew wrote:
> Just curious if anyone is using containers in their deployments. If so,
> in what capacity? What are the advantages, gotchas, and pain points?

This might trigger more responses on the openstack-operators mailing-list.

-- 
Thierry Carrez (ttx)


From tom at openstack.org  Wed Sep  2 09:38:23 2015
From: tom at openstack.org (Tom Fifield)
Date: Wed, 2 Sep 2015 17:38:23 +0800
Subject: [openstack-dev] [trove] Anyone using containers?
In-Reply-To: <55E6C30E.1000908@openstack.org>
References: <D20B4812.515A1%mlowery@ebay.com> <55E6C30E.1000908@openstack.org>
Message-ID: <55E6C38F.3090006@openstack.org>

On 02/09/15 17:36, Thierry Carrez wrote:
> Lowery, Mathew wrote:
>> Just curious if anyone is using containers in their deployments. If so,
>> in what capacity? What are the advantages, gotchas, and pain points?
>
> This might trigger more responses on the openstack-operators mailing-list.
>

+1 :)

There are a few notes on using containers for deployment from the recent 
ops meetup here: 
https://etherpad.openstack.org/p/PAO-ops-containers-for-deployment


Regards,

Tom


From thierry at openstack.org  Wed Sep  2 09:50:24 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 2 Sep 2015 11:50:24 +0200
Subject: [openstack-dev] [all] Criteria for applying
 vulnerability:managed tag
In-Reply-To: <20150901185638.GB7955@yuggoth.org>
References: <20150901185638.GB7955@yuggoth.org>
Message-ID: <55E6C660.8010008@openstack.org>

Some out-of-context quotes and comments below:

Jeremy Stanley wrote:
> [...]
> 1. Since the vulnerability:managed governance tag applies to
> deliverables, all repos within a given deliverable must meet the
> qualifying criteria. This means that if some repos in a deliverable
> are in good enough shape to qualify, their vulnerability management
> could be held back by other repos in the same deliverable. It might
> be argued that perhaps this makes them separate deliverables (in
> which case the governance projects.yaml should get an update to
> reflect that), or maybe that we really have a use case for per-repo
> tags and that the TC needs to consider deliverable and repo tags as
> separate ideas.

If repositories forming a single deliverable have varying degrees of
maturity or support and that cannot be fixed quickly, then yes I would
argue that they do not form a coherent whole and need to be split into a
separate deliverable.

The idea behind applying tags to deliverables is to facilitate splitting
a given user-consumable product (deliverable) across multiple git
repositories. They should still make a coherent whole and have common
characteristics, otherwise they are not a single user-consumable
product, they are a bunch of repositories with various levels of quality
and support, thrown together for dubious reasons.

So if for example we can't apply the same tags to openstack/neutron and
openstack/neutron-*aas, then the *aas should probably form a separate
deliverable (called for example "neutron advanced services").

> [...]
> 5. The deliverable's repos should undergo a third-party review/audit
> looking for obvious signs of insecure design or risky implementation
> which could imply a large number of future vulnerability reports. As
> much as anything this is a measure to keep the VMT's workload down,
> since it is by necessity a group of constrained size and some of its
> processes simply can't be scaled safely. It's not been identified
> who would actually perform this review, though this is one place
> members of the OpenStack Security project-team might volunteer to
> provide their expertise and assistance.

About that one: one of the reasons we tried to have the projects audited
before inclusion was to avoid issuing a dozen OSSAs the first time a
project goes through a static code checker. So it is also about
proactively clearing the obvious stuff before it generates a spike in
VMT work.

-- 
Thierry Carrez (ttx)


From sxmatch1986 at gmail.com  Wed Sep  2 09:52:14 2015
From: sxmatch1986 at gmail.com (hao wang)
Date: Wed, 2 Sep 2015 17:52:14 +0800
Subject: [openstack-dev] [cinder] L3 low pri review queue starvation
In-Reply-To: <20150902091902.GB19943@localhost>
References: <55E592C6.7070808@dyncloud.net>
 <CAPWkaSUbTRJr1qXFSLN+d1qOF7zjWd7r64cVMpYuUxDYY6t4Ug@mail.gmail.com>
 <20150902091902.GB19943@localhost>
Message-ID: <CAOEh+o2YauE_D_okWesJ2+5WphgoBD9c=a7+BjUhkd8HM1mjbA@mail.gmail.com>

2015-09-02 17:19 GMT+08:00 Gorka Eguileor <geguileo at redhat.com>:
> On Tue, Sep 01, 2015 at 09:30:26AM -0600, John Griffith wrote:
>> On Tue, Sep 1, 2015 at 5:57 AM, Tom Barron <tpb at dyncloud.net> wrote:
>>
>> > [Yesterday while discussing the following issue on IRC, jgriffith
>> > suggested that I post to the dev list in preparation for a discussion in
>> > Wednesday's cinder meeting.]
>> >
>> > Please take a look at the 10 "Low" priority reviews in the cinder
>> > Liberty 3 etherpad that were punted to Mitaka yesterday. [1]
>> >
>> > Six of these *never* [2] received a vote from a core reviewer. With the
>> > exception of the first in the list, which has 35 patch sets, none of the
>> > others received a vote before Friday, August 28.  Of these, none had
>> > more than -1s on minor issues, and these have been remedied.
>> >
>> > Review https://review.openstack.org/#/c/213855 "Implement
>> > manage/unmanage snapshot in Pure drivers" is a great example:
>> >
>> >    * approved blueprint for a valuable feature
>> >    * pristine code
>> >    * passes CI and Jenkins (and by the deadline)
>> >    * never reviewed
>> >
>> > We have 11 core reviewers, all of whom were very busy doing reviews
>> > during L3, but evidently this set of reviews didn't really have much
>> > chance of making it.  This looks like a classic case where the
>> > individually rational priority decisions of each core reviewer
>> > collectively resulted in starving the Low Priority review queue.
>> >
>
> I can't speak for other cores, but in my case reviewing was mostly not
> based on my own priorities, I reviewed patches based on the already set
> priority of each patch as well as patches that I was already
> reviewing.
>
> Some of those medium priority patches took me a lot of time to review,
> since they were not trivial (some needed some serious rework).  As for
> patches I was already reviewing, as you can imagine it wouldn't be fair
> to just ignore a patch that I've been reviewing for some time just when
> it's almost ready and the deadline is closing in.
>
> Having said that, I have to agree that those patches didn't have much
> chance, and I apologize for my part in that.  While it is no excuse, I
> have to agree with jgriffith when he says that those patches should have
> pushed cores for reviews (even if this is clearly not the "right" way to
> manage it).
>
>> > One way to remedy would be for the 11 core reviewers to devote a day or
>> > two to cleaning up this backlog of 10 outstanding reviews rather than
>> > punting all of them out to Mitaka.
>> >
>> > Thanks for your time and consideration.
>> >
>> > Respectfully,
>> >
>> > -- Tom Barron
>> >
>> > [1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
>> > [2] At the risk of stating the obvious, in this count I ignore purely
>> > procedural votes such as the final -2.
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> Thanks Tom, this is sadly an ongoing problem every release.  I think we
>> have a number of things we can talk about at the summit to try and
>> make some of this better.  I honestly think that if people were to
>> actually "use" launchpad instead of creating tracking etherpads
>> everywhere it would help.  What I mean is that there is a ranked
>> targeting of items in Launchpad and we should use it; core team
>> members should treat that as the source of truth for the things that must
>> get reviewed.
>>
>
> I agree, we should use Launchpad's functionality to track BPs and Bugs
> targeted for each milestone, and maybe we can discuss a workflow that
> helps us reduce starvation while also keeping track of the code
> reviewers responsible for each item.
>
> Just spitballing here, but we could add to BP's work items and Bug's
> comments what core members will be responsible for reviewing related
> patches.  Although this means cores will have to check this on every
> review they do that has a BP or Bug number, so if there are already 2
> cores responsible for that feature they should preferably just move on
> to other patches and if there are not 2 core reviewers they should add
> themselves to LP.  This way patch owners know who they should bug for
> reviews on their patches and if there are no core reviewers for them
> they should look for some (or wait for them to "claim" that
> bug/feature).

I like Gorka's idea here. This could be a better way to alleviate this issue.
I hope patch owners would ask cores for help via IRC or some other channel if
there are no core reviewers on their patches.
>
>> As far as Liberty and your patches; Yesterday was the freeze point, the
>> entire Cinder team agreed on that (yourself included both at the mid-cycle
>> meet up and at the team meeting two weeks ago when Thingee reiterated the
>> deadlines).  If you noticed last week that your patches weren't going
>> anywhere YOU should've wrangled up some reviews.
>>
>> Furthermore, I've explained every release for the last 3 or 4 years that
>> there's no silver bullet, no magic process when it comes to review
>> throughput.  ESPECIALLY when it comes to the 3rd milestone.  You can try
>> landing strips, priority listed etherpads, sponsors etc etc but the fact is
>> that things happen, the gate slows down (or we completely break on the
>> Cinder side like we did yesterday).  This doesn't mean "oh, well then you
>> get another day or two", it means stuff happens and it sucks but first
>> course of action is drop low priority items.  It just means if you really
>> wanted it you probably should've made it happen earlier.  Just so you know,
>> I run into this every release as well.  I had a number of things in
>> progress that I had hoped to finish last week and yesterday, BUT my
>> priority shifted to trying to help get the cinder patches back on track and
>> get the items in Launchpad updated to actually reflect something that was
>> somewhat possible.
>>
>> The only thing that works is "submit early and review often" it's simple.
>>
>
> While this is true, sometimes it's not just a matter of submitting
> early: some patches simply keep getting ignored, and as the deadline
> closes in, low-priority patches will always have higher-priority
> patches going ahead of them unless we set up a workflow that gives
> them a better chance of getting reviews.
>
>> Finally, I pointed out to you yesterday that we could certainly discuss as
>> a team what to do with your patches.  BUT that given how terribly far
>> behind we were in the process that I wanted reviewers to focus on medium,
>> high and critical prioritized items.  That's what prioritizations are for,
>> it means when crunch time hits and things hit the fan it's usually the
>> "low" priority things that get bumped.
>>
>> Thanks,
>
> I am not against discussing this, but in all fairness we should discuss
> *everybody's* patches that were targeted for L3, not just tbarron's.
>
> Cheers,
> Gorka.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Best Wishes For You!


From rakhmerov at mirantis.com  Wed Sep  2 10:23:02 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 2 Sep 2015 16:23:02 +0600
Subject: [openstack-dev] [mistral] Displaying wf hierarchy in CLI
In-Reply-To: <CALjNAZ1XhAEPyYHL+AAu4WjpiJAvkrGfMznERO1hdxZUBgA_Ww@mail.gmail.com>
References: <407873FB-639D-40C0-ABD8-A62EA6FDF876@mirantis.com>
 <BLU436-SMTP17E153A426B8ECA10667E0D86B0@phx.gbl>
 <CALjNAZ2B=cc-C_POc8D1+DgA=vF8hx6hNJV0c5B9bdfrRsi4OQ@mail.gmail.com>
 <71CC2E3C-A33A-401A-8268-B2FDB39A82F5@mirantis.com>
 <CALjNAZ1XhAEPyYHL+AAu4WjpiJAvkrGfMznERO1hdxZUBgA_Ww@mail.gmail.com>
Message-ID: <049F8C5E-3E86-4F40-924F-7052D371E714@mirantis.com>


> On 02 Sep 2015, at 14:56, Lingxian Kong <anlin.kong at gmail.com> wrote:
> 
> I want to make it clear. Then, what you want to see is dependencies between workflow executions? or task executions in one workflow? We know that we could use a separate task or a workflow as a 'task'.

Dependencies between workflow executions. Dependencies between task executions are a different question.

But technically workflow executions are connected via task execution id. See [1].

[1] https://github.com/openstack/mistral/blob/master/mistral/db/v2/sqlalchemy/models.py#L217 <https://github.com/openstack/mistral/blob/master/mistral/db/v2/sqlalchemy/models.py#L217>

Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/b02e4174/attachment.html>

From rakhmerov at mirantis.com  Wed Sep  2 11:16:27 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 2 Sep 2015 17:16:27 +0600
Subject: [openstack-dev] [mistral][yaql] Addressing task result using YAQL
	function
Message-ID: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>

Hi,

I'd like to propose a few changes after the transition to yaql 1.0:

We already moved in the DSL from "$.__execution" to "execution()" and from "$.__env" to "env()", where "execution()" and "env()" are registered yaql functions. We had to do it because double underscores are prohibited in the new yaql.


In the spirit of these changes, I'm proposing a similar change for addressing a task result in the Mistral DSL.

Currently we have the syntax "$.task_name" that we can use in yaql expressions to refer to the corresponding task result. The current implementation of that is kind of tricky and, IMO, conceptually wrong, because referencing this kind of key triggers a DB read required to fetch the task result (it may be megabytes in size, so it can't be stored in the workflow context all the time). In other words, we create a dictionary with side effects and change the usual semantics of a dictionary. Along with the mentioned trickiness, this approach is inconvenient: for example, while debugging the code we can accidentally cause a DB operation.

So the solution I'm proposing is to have an explicit yaql function called "res" or "result" to extract a task result.

res() - extracts the result of the current task, e.g. in the "publish" clause.
res("task_name") - extracts the result of the task with the specified name. Can also be used in the "publish" clause, if needed.

This approach seems more flexible (because we can add other functions without significant changes in the WF engine) and more expressive, because the user won't confuse $.task_name with addressing a regular workflow context variable.

Of course, this breaks backwards compatibility to some extent. But we already changed the yaql version, which was necessary anyway, so it seems like a good time to do it.

I'd very much like to have your input on this.
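
Just to illustrate the point in plain Python (no Mistral internals, all names hypothetical):

```python
class LazyTaskContext(dict):
    """Rough model of the current approach: reading a key from the
    workflow context silently triggers a DB fetch of the task result."""

    def __init__(self, db):
        super(LazyTaskContext, self).__init__()
        self._db = db
        self.db_reads = 0

    def __missing__(self, key):
        self.db_reads += 1       # hidden side effect on a plain dict read
        return self._db[key]


def make_result_fn(db):
    """Rough model of the proposal: an explicit function makes the
    potentially expensive DB read visible at the call site."""
    def res(task_name):
        return db[task_name]
    return res


fake_db = {'task1': 'task1-result'}

ctx = LazyTaskContext(fake_db)
ctx['task1']              # looks like a dict read, but queries the "DB"

res = make_result_fn(fake_db)
res('task1')              # the same read, but explicit and greppable
```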

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/178c260d/attachment.html>

From nmakhotkin at mirantis.com  Wed Sep  2 11:35:36 2015
From: nmakhotkin at mirantis.com (Nikolay Makhotkin)
Date: Wed, 2 Sep 2015 14:35:36 +0300
Subject: [openstack-dev] [mistral][yaql] Addressing task result using
	YAQL function
In-Reply-To: <4540F52B-CAC2-4974-98F4-99E18BEE33D8@mirantis.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
 <4540F52B-CAC2-4974-98F4-99E18BEE33D8@mirantis.com>
Message-ID: <CACarOJamsObYCQOCUa68w+zaFX9kdYQwyix_rg=E5-iHLxtuXA@mail.gmail.com>

Hi,

I also thought about that recently. So, I absolutely agree with this
proposal. It would be nice to see this feature in Liberty.

On Wed, Sep 2, 2015 at 2:17 PM, Renat Akhmerov <rakhmerov at mirantis.com>
wrote:

> FYI
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> Begin forwarded message:
>
> *From: *Renat Akhmerov <rakhmerov at mirantis.com>
> *Subject: **[openstack-dev][mistral][yaql] Addressing task result using
> YAQL function*
> *Date: *2 Sep 2015 17:16:27 GMT+6
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
>
> Hi,
>
> I?d like to propose a few changes after transition to yaql 1.0:
>
> We already moved from using ?$.__execution? in DSL to "execution()? and
> from ?$.__env? to ?env()? where ?execution()? and ?env()" are registered
> yaql functions. We had to do it because double underscored are prohibited
> in the new yaql.
>
>
> In the spirit of these changes I?m proposing a similar change for
> addressing task result in Mistral DSL.
>
> Currently we have the syntax "$.task_name" that we can use in yaql
> expressions to refer to the corresponding task result. The current
> implementation of that is kind of tricky and, IMO, conceptually wrong,
> because referencing this kind of key leads to a DB read operation that's
> required to fetch the task result (it may be as big as megabytes, so it
> can't be stored in the workflow context all the time). In other words, we
> create a dictionary with a side effect and change the usual semantics of a
> dictionary. Along with the mentioned trickiness of this approach, it's not
> convenient, for example, to debug the code, because we can accidentally
> cause a DB operation.
>
> So the solution I'm proposing is to have an explicit yaql function called
> "res" or "result" to extract a task result.
>
> *res()* - extracts the result of the current task, e.g. in a "publish"
> clause.
> *res('task_name')* - extracts the result of the task with the specified
> name. Can also be used in a "publish" clause, if needed.
>
> This approach seems more flexible (because we can add other functions
> without having to make significant changes in the WF engine) and more
> expressive, because the user won't confuse $.task_name with a regular
> workflow context variable.
>
> Of course, this breaks backwards compatibility to some extent. But we
> already changed the yaql version, which was necessary anyway, so it seems
> like a good time to do it.
>
> I'd very much like to have your input on this.
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
>
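The semantic difference Renat describes can be sketched in plain Python
(the names below are illustrative only, not the actual Mistral/yaql API):

```python
# Sketch: why an explicit result() function beats a "magic" dict key.
# All names here are hypothetical stand-ins, not Mistral internals.

def fetch_task_result_from_db(task_name):
    # Stand-in for the expensive DB read described above.
    return {'task1': 'some large result'}[task_name]

class LazyContext(dict):
    # Current approach: a dict with a side effect. Merely reading a
    # missing key silently triggers a "DB" operation.
    def __missing__(self, key):
        return fetch_task_result_from_db(key)

def result(task_name):
    # Proposed approach: the DB read is an explicit function call.
    return fetch_task_result_from_db(task_name)

ctx = LazyContext({'regular_var': 42})
print(ctx['task1'])        # looks like a plain lookup, but hits the "DB"
print(ctx['regular_var'])  # plain context variable: 42
print(result('task1'))     # task result: clearly a function call
```

The explicit call makes the DB access visible at the call site, which is
exactly the debugging benefit described above.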


-- 
Best Regards,
Nikolay
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/a1738170/attachment.html>

From xiaohui.xin at intel.com  Wed Sep  2 11:37:45 2015
From: xiaohui.xin at intel.com (Xin, Xiaohui)
Date: Wed, 2 Sep 2015 11:37:45 +0000
Subject: [openstack-dev]  [cinder] FFE Request - capacity-headroom
Message-ID: <EC800DA72BAD6E42B5F9B1752C30DB040525404D@SHSMSX104.ccr.corp.intel.com>

Hi,
I would like to request a feature freeze exception for the implementation of capacity-headroom.
It calculates virtual free capacity and sends notifications to Ceilometer together with other storage capacity stats.

Blueprint:
https://blueprints.launchpad.net/cinder/+spec/capacity-headroom

Spec:
            https://review.openstack.org/#/c/170380/

Addressed by:
https://review.openstack.org/#/c/206923



I have addressed Gorka Eguileor's latest comments and suggestions related to active-active deployment.
Please review and evaluate it. Many thanks!

Thanks
Xiaohui

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/9c2fe607/attachment.html>

From jaypipes at gmail.com  Wed Sep  2 12:14:37 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 02 Sep 2015 08:14:37 -0400
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
 issues
In-Reply-To: <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
 <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
 <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>
Message-ID: <55E6E82D.6030100@gmail.com>

On 09/02/2015 03:00 AM, Igor Kalnitsky wrote:
> It won't work that way. You're either busy writing code / leading a
> feature or doing review. The two can't be combined effectively. Any
> context switch between activities requires extra time to refocus.

I don't agree with the above, Igor. I think there are plenty of examples
of people in OpenStack projects who both submit code (and lead
features) and do code review on a daily basis.

Best,
-jay


From ipovolotskaya at mirantis.com  Wed Sep  2 12:23:00 2015
From: ipovolotskaya at mirantis.com (Irina Povolotskaya)
Date: Wed, 2 Sep 2015 15:23:00 +0300
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for fuel-docs
	core
Message-ID: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>

Fuelers,

I'd like to nominate Evgeniy Konstantinov for the fuel-docs-core team.
He has contributed thousands of lines of documentation to Fuel over
the past several months, and has been a diligent reviewer:

http://stackalytics.com/?user_id=evkonstantinov&release=all&project_type=all&module=fuel-docs

I believe it's time to grant him core reviewer rights in the fuel-docs
repository.

Core reviewer approval process definition:
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

-- 
Best regards,

Irina
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/a75b4c80/attachment.html>

From jaypipes at gmail.com  Wed Sep  2 12:23:22 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 02 Sep 2015 08:23:22 -0400
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
Message-ID: <55E6EA3A.7080006@gmail.com>

I'm not a Fuel core or anything, but +1 from me. Alex has been very 
visible in the community and his work on librarian-puppet was a great 
step forward for the project.

Best,
-jay

On 09/02/2015 04:31 AM, Sergii Golovatiuk wrote:
> Hi,
>
> I would like to nominate Alex Schultz to the Fuel-Library Core team. He's
> been doing a great job writing patches. At the same time his reviews
> are solid, with comments for further improvements. He's the #3 reviewer and
> #1 contributor with 46 commits over the last 90 days [1]. Additionally, Alex
> has been very active in IRC providing great ideas. His "librarian"
> blueprint [3] made a big step towards the puppet community.
>
> Fuel Library, please vote with +1/-1 for approval/objection. Voting will
> be open until September 9th. This will go forward after voting is closed
> if there are no objections.
>
> Overall contribution:
> [0] http://stackalytics.com/?user_id=alex-schultz
> Fuel library contribution for last 90 days:
> [1] http://stackalytics.com/report/contribution/fuel-library/90
> List of reviews:
> [2]
> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
> "Librarian activities" in the mailing list:
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From dborodaenko at mirantis.com  Wed Sep  2 12:45:34 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Wed, 02 Sep 2015 12:45:34 +0000
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <55E6EA3A.7080006@gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
Message-ID: <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>

Huge +1 from me, Alex has all the best qualities of a core reviewer: great
engineer, great communicator, diligent, and patient.

On Wed, Sep 2, 2015 at 3:24 PM Jay Pipes <jaypipes at gmail.com> wrote:

> I'm not a Fuel core or anything, but +1 from me. Alex has been very
> visible in the community and his work on librarian-puppet was a great
> step forward for the project.
>
> Best,
> -jay
>
> On 09/02/2015 04:31 AM, Sergii Golovatiuk wrote:
> > Hi,
> >
> > I would like to nominate Alex Schultz to the Fuel-Library Core team. He's
> > been doing a great job writing patches. At the same time his reviews
> > are solid, with comments for further improvements. He's the #3 reviewer and
> > #1 contributor with 46 commits over the last 90 days [1]. Additionally, Alex
> > has been very active in IRC providing great ideas. His "librarian"
> > blueprint [3] made a big step towards the puppet community.
> >
> > Fuel Library, please vote with +1/-1 for approval/objection. Voting will
> > be open until September 9th. This will go forward after voting is closed
> > if there are no objections.
> >
> > Overall contribution:
> > [0] http://stackalytics.com/?user_id=alex-schultz
> > Fuel library contribution for last 90 days:
> > [1] http://stackalytics.com/report/contribution/fuel-library/90
> > List of reviews:
> > [2]
> >
> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
> > "Librarian activities" in the mailing list:
> > [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html
> >
> > --
> > Best regards,
> > Sergii Golovatiuk,
> > Skype #golserge
> > IRC #holser
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/90c85b74/attachment.html>

From ikalnitsky at mirantis.com  Wed Sep  2 12:45:56 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Wed, 2 Sep 2015 15:45:56 +0300
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
	issues
In-Reply-To: <55E6E82D.6030100@gmail.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
 <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
 <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>
 <55E6E82D.6030100@gmail.com>
Message-ID: <CACo6NWCjp-DTCY2nrKyDij1TPeSuTCr9PhTLQ25Vf_Y5cJ=sZQ@mail.gmail.com>

> I think there are plenty of examples of people in OpenStack projects
> who both submit code (and lead features) and do code review
> on a daily basis.

* Are these features huge?
* Are their code contributions huge, or just small patches?
* Did they get to master before FF?
* How many intersecting features do OpenStack projects have under
development? (Merge conflicts often require a lot of re-review.)
* How often are OpenStack people busy with other activities, such as
helping colleagues, troubleshooting customer issues, participating in
design meetings, and so on?

If so, are you sure they are humans then? :) I can only speak for
myself, and this is what I want to say: during the 7.0 dev cycle I burned
in hell and I don't want to continue that way.

Thanks,
Igor

On Wed, Sep 2, 2015 at 3:14 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> On 09/02/2015 03:00 AM, Igor Kalnitsky wrote:
>>
>> It won't work that way. You're either busy writing code / leading a
>> feature or doing review. The two can't be combined effectively. Any
>> context switch between activities requires extra time to refocus.
>
>
> I don't agree with the above, Igor. I think there are plenty of examples of
> people in OpenStack projects who both submit code (and lead features) and
> do code review on a daily basis.
>
> Best,
> -jay
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From dborodaenko at mirantis.com  Wed Sep  2 13:03:22 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Wed, 02 Sep 2015 13:03:22 +0000
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
Message-ID: <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>

+1. Evgeny has been the #1 committer in fuel-docs for a while; it's great to
see him picking up on the reviews, too.

On Wed, Sep 2, 2015 at 3:24 PM Irina Povolotskaya <
ipovolotskaya at mirantis.com> wrote:

> Fuelers,
>
> I'd like to nominate Evgeniy Konstantinov for the fuel-docs-core team.
> He has contributed thousands of lines of documentation to Fuel over
> the past several months, and has been a diligent reviewer:
>
>
> http://stackalytics.com/?user_id=evkonstantinov&release=all&project_type=all&module=fuel-docs
>
> I believe it's time to grant him core reviewer rights in the fuel-docs
> repository.
>
> Core reviewer approval process definition:
> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> --
> Best regards,
>
> Irina
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/67900423/attachment.html>

From sean at dague.net  Wed Sep  2 13:07:57 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 2 Sep 2015 09:07:57 -0400
Subject: [openstack-dev] [heat] [gate] gate-heat-dsvm-functional-orig-mysql
	fails 100%
Message-ID: <55E6F4AD.4020100@dague.net>

Today's gate firedrill #2 is the fact that in the last 12 hours heat's
functional tests went to 100% failure - http://goo.gl/YeVrD0

There are currently 14 heat changes in the gate; each will add 30-40
minutes of delay as it fails and resets things behind it. That's 7-10
hours of gate delay for everyone because of these patches.

Could heat team members get engaged and get to the bottom of this one?

	-Sean

-- 
Sean Dague
http://dague.net


From skraynev at mirantis.com  Wed Sep  2 13:23:44 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Wed, 2 Sep 2015 16:23:44 +0300
Subject: [openstack-dev] [heat] [gate]
 gate-heat-dsvm-functional-orig-mysql fails 100%
In-Reply-To: <55E6F4AD.4020100@dague.net>
References: <55E6F4AD.4020100@dague.net>
Message-ID: <CAAbQNRkZKqa22hc9JcW2pfSr_E9-5-PtysaxBfMNNDbO9Etgvg@mail.gmail.com>

Sean, thank you for raising this.

We know about this issue. It is related to glanceclient.
There is a fix for it:
https://review.openstack.org/#/c/219533/

If possible, could you drop the changes queued ahead of this patch?

Regards,
Sergey.

On 2 September 2015 at 16:07, Sean Dague <sean at dague.net> wrote:

> Today's gate firedrill #2 is the fact that in the last 12 hours heat's
> functional tests went to 100% failure - http://goo.gl/YeVrD0
>
> There are currently 14 heat changes in the gate; each will add 30-40
> minutes of delay as it fails and resets things behind it. That's 7-10
> hours of gate delay for everyone because of these patches.
>
> Could heat team members get engaged and get to the bottom of this one?
>
>         -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/f733fd77/attachment.html>

From shardy at redhat.com  Wed Sep  2 13:26:13 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 2 Sep 2015 14:26:13 +0100
Subject: [openstack-dev] [heat] [gate]
 gate-heat-dsvm-functional-orig-mysql fails 100%
In-Reply-To: <55E6F4AD.4020100@dague.net>
References: <55E6F4AD.4020100@dague.net>
Message-ID: <20150902132612.GC25909@t430slt.redhat.com>

On Wed, Sep 02, 2015 at 09:07:57AM -0400, Sean Dague wrote:
> Today's gate firedrill #2 is the fact that in the last 12 hours heat's
> functional tests went to 100% failure - http://goo.gl/YeVrD0
> 
> There are currently 14 heat changes in the gate; each will add 30-40
> minutes of delay as it fails and resets things behind it. That's 7-10
> hours of gate delay for everyone because of these patches.
> 
> Could heat team members get engaged and get to the bottom of this one?

I think this is the change we need to land to fix it:

https://review.openstack.org/#/c/219533/

Can it be bumped to the head of the queue?

Steve


From gokrokvertskhov at mirantis.com  Wed Sep  2 13:47:42 2015
From: gokrokvertskhov at mirantis.com (Georgy Okrokvertskhov)
Date: Wed, 2 Sep 2015 06:47:42 -0700
Subject: [openstack-dev] [oslo.messaging]
In-Reply-To: <CAF5T5jsSUfB4EXAyjB=EOkx2A1kgm3j3FySmMzmpmK-+rm3fRA@mail.gmail.com>
References: <CAF5T5jsSUfB4EXAyjB=EOkx2A1kgm3j3FySmMzmpmK-+rm3fRA@mail.gmail.com>
Message-ID: <CAG_6_oksKwsPLvXVsXR9wbqvxon3B9LwjFN=N_Qh1c6wLw96Zg@mail.gmail.com>

I believe that in oslo.messaging the routing_key is the topic.
Here is an example for ceilometer:
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/notifications.py#L212

And here is an oslo code for rabbitMQ driver:
https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_rabbit.py#L926

The routing_key is set to the topic.
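Roughly, the mapping from an oslo.messaging topic (and optional server) to
an AMQP routing key can be sketched like this (a simplified illustration of
what the rabbit driver does internally, not the driver code itself):

```python
def routing_key_for(topic, server=None):
    # Simplified sketch: the rabbit driver uses the topic as the AMQP
    # routing key; server-addressed messages use "topic.server".
    # See oslo_messaging/_drivers/impl_rabbit.py for the real logic.
    if server is None:
        return topic
    return '%s.%s' % (topic, server)

print(routing_key_for('notifications.info'))          # notifications.info
print(routing_key_for('conductor', server='host-1'))  # conductor.host-1
```

So to consume messages placed on a queue by an external process, the
routing key effectively has to be expressed through the topic.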

Thanks
Gosha

On Tue, Sep 1, 2015 at 6:27 PM, Nader Lahouti <nader.lahouti at gmail.com>
wrote:

> Hi,
>
> I am considering to use oslo.messaging to read messages from a rabbit
> queue. The messages are put into the queue by an external process.
> In order to do that I need to specify routing_key in addition to other
> parameters (i.e. exchange and queue,... name) for accessing the queue.  I
> was looking at the oslo.messaging API and wasn't able to find anywhere to
> specify the routing key.
>
> Is it possible to set routing_key when using oslo.messaging? if so, can
> you please point me to the document.
>
>
> Regards,
> Nader.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/196ceb07/attachment.html>

From fungi at yuggoth.org  Wed Sep  2 13:48:26 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 2 Sep 2015 13:48:26 +0000
Subject: [openstack-dev] [Murano] Documentation on how to Start
 Contributing
In-Reply-To: <OF6AC37E4D.A0EBDE0F-ON87257EB3.007CC092-88257EB3.007D0442@us.ibm.com>
References: <etPan.55dd0c9b.32f0398f.134@pegasus.local>
 <OF07A18C60.2DCB65AF-ON87257EAD.0073B58E-88257EAD.0081D189@us.ibm.com>
 <etPan.55de598f.4be5294d.134@pegasus.local>
 <OFB2C81890.4195E87D-ON87257EAE.00047E86-88257EAE.00052066@us.ibm.com>
 <etPan.55de648b.7de6b469.134@pegasus.local>
 <OF11EA0A19.B9DED79C-ON87257EAE.0068DCA9-88257EAE.0069B4FE@us.ibm.com>
 <20150827203835.GA7955@yuggoth.org>
 <OF0E8904A0.1343D3CB-ON87257EAE.00721A5F-88257EAE.0072EF9E@us.ibm.com>
 <20150827213651.GD7955@yuggoth.org>
 <OF6AC37E4D.A0EBDE0F-ON87257EB3.007CC092-88257EB3.007D0442@us.ibm.com>
Message-ID: <20150902134826.GC7955@yuggoth.org>

On 2015-09-01 15:45:22 -0700 (-0700), Vahid S Hashemian wrote:
[...]
> What is your advice on debugging a PyPi package? I'm modifying the code
> for python-muranoclient and would like to be able to debug using eclipse
> (in which I'm coding) or any other convenient means.

There are possibly better places than an OpenStack-specific mailing
list to ask general Python debugging questions. You can tell pip to
install from source in editable mode within a virtualenv and then
changes you make to the source code should be immediately testable
without needing to repackage/reinstall. Something like this
(assuming you have a recent version of virtualenv installed in your
current executable path):

    git clone git://git.openstack.org/openstack/python-muranoclient.git
    cd python-muranoclient
    virtualenv .testing
    . .testing/bin/activate
    pip install -e .

From that point on, running the murano CLI command or importing in
another script should get whatever changes you've made in your local
clone of the repository.
-- 
Jeremy Stanley


From fungi at yuggoth.org  Wed Sep  2 13:58:20 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 2 Sep 2015 13:58:20 +0000
Subject: [openstack-dev] [heat] [gate]
 gate-heat-dsvm-functional-orig-mysql fails 100%
In-Reply-To: <20150902132612.GC25909@t430slt.redhat.com>
References: <55E6F4AD.4020100@dague.net>
 <20150902132612.GC25909@t430slt.redhat.com>
Message-ID: <20150902135820.GD7955@yuggoth.org>

On 2015-09-02 14:26:13 +0100 (+0100), Steven Hardy wrote:
> I think this is the change we need to land to fix it:
> 
> https://review.openstack.org/#/c/219533/
> 
> Can it be bumped to the head of the queue?

It's there now with an ETA of ~25 minutes (barring any unforeseen
job failures).
-- 
Jeremy Stanley


From vkuklin at mirantis.com  Wed Sep  2 14:00:03 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Wed, 2 Sep 2015 17:00:03 +0300
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
 <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
Message-ID: <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>

+1, and he is also in a US timezone, which should help us cover the globe
with a continuous review process.

On Wed, Sep 2, 2015 at 3:45 PM, Dmitry Borodaenko <dborodaenko at mirantis.com>
wrote:

> Huge +1 from me, Alex has all the best qualities of a core reviewer: great
> engineer, great communicator, diligent, and patient.
>
> On Wed, Sep 2, 2015 at 3:24 PM Jay Pipes <jaypipes at gmail.com> wrote:
>
>> I'm not a Fuel core or anything, but +1 from me. Alex has been very
>> visible in the community and his work on librarian-puppet was a great
>> step forward for the project.
>>
>> Best,
>> -jay
>>
>> On 09/02/2015 04:31 AM, Sergii Golovatiuk wrote:
>> > Hi,
>> >
>> > I would like to nominate Alex Schultz to the Fuel-Library Core team. He's
>> > been doing a great job writing patches. At the same time his reviews
>> > are solid, with comments for further improvements. He's the #3 reviewer and
>> > #1 contributor with 46 commits over the last 90 days [1]. Additionally, Alex
>> > has been very active in IRC providing great ideas. His "librarian"
>> > blueprint [3] made a big step towards the puppet community.
>> >
>> > Fuel Library, please vote with +1/-1 for approval/objection. Voting will
>> > be open until September 9th. This will go forward after voting is closed
>> > if there are no objections.
>> >
>> > Overall contribution:
>> > [0] http://stackalytics.com/?user_id=alex-schultz
>> > Fuel library contribution for last 90 days:
>> > [1] http://stackalytics.com/report/contribution/fuel-library/90
>> > List of reviews:
>> > [2]
>> >
>> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
>> > "Librarian activities" in the mailing list:
>> > [3]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html
>> >
>> > --
>> > Best regards,
>> > Sergii Golovatiuk,
>> > Skype #golserge
>> > IRC #holser
>> >
>> >
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/c0858fc9/attachment.html>

From doug at doughellmann.com  Wed Sep  2 14:12:25 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 02 Sep 2015 10:12:25 -0400
Subject: [openstack-dev] [cinder] L3 low pri review queue starvation
In-Reply-To: <20150902091902.GB19943@localhost>
References: <55E592C6.7070808@dyncloud.net>
 <CAPWkaSUbTRJr1qXFSLN+d1qOF7zjWd7r64cVMpYuUxDYY6t4Ug@mail.gmail.com>
 <20150902091902.GB19943@localhost>
Message-ID: <1441203065-sup-3159@lrrr.local>

Excerpts from Gorka Eguileor's message of 2015-09-02 11:19:02 +0200:
> On Tue, Sep 01, 2015 at 09:30:26AM -0600, John Griffith wrote:
> > On Tue, Sep 1, 2015 at 5:57 AM, Tom Barron <tpb at dyncloud.net> wrote:
> > 
> > > [Yesterday while discussing the following issue on IRC, jgriffith
> > > suggested that I post to the dev list in preparation for a discussion in
> > > Wednesday's cinder meeting.]
> > >
> > > Please take a look at the 10 "Low" priority reviews in the cinder
> > > Liberty 3 etherpad that were punted to Mitaka yesterday. [1]
> > >
> > > Six of these *never* [2] received a vote from a core reviewer. With the
> > > exception of the first in the list, which has 35 patch sets, none of the
> > > others received a vote before Friday, August 28.  Of these, none had
> > > more than -1s on minor issues, and these have been remedied.
> > >
> > > Review https://review.openstack.org/#/c/213855 "Implement
> > > manage/unmanage snapshot in Pure drivers" is a great example:
> > >
> > >    * approved blueprint for a valuable feature
> > >    * pristine code
> > >    * passes CI and Jenkins (and by the deadline)
> > >    * never reviewed
> > >
> > > We have 11 core reviewers, all of whom were very busy doing reviews
> > > during L3, but evidently this set of reviews didn't really have much
> > > chance of making it.  This looks like a classic case where the
> > > individually rational priority decisions of each core reviewer
> > > collectively resulted in starving the Low Priority review queue.
> > >
> 
> I can't speak for other cores, but in my case reviewing was mostly not
> based on my own priorities, I reviewed patches based on the already set
> priority of each patch as well as patches that I was already
> reviewing.
> 
> Some of those medium priority patches took me a lot of time to review,
> since they were not trivial (some needed some serious rework).  As for
> patches I was already reviewing, as you can imagine it wouldn't be fair
> to just ignore a patch that I've been reviewing for some time just when
> it's almost ready and the deadline is closing in.
> 
> Having said that I have to agree that those patches didn't have much
> chance, and I apologize for my part in that.  While it is no excuse I
> have to agree with jgriffith when he says that those patches should have
> pushed cores for reviews (even if this is clearly not the "right" way to
> manage it).
> 
> > > One way to remedy would be for the 11 core reviewers to devote a day or
> > > two to cleaning up this backlog of 10 outstanding reviews rather than
> > > punting all of them out to Mitaka.
> > >
> > > Thanks for your time and consideration.
> > >
> > > Respectfully,
> > >
> > > -- Tom Barron
> > >
> > > [1] https://etherpad.openstack.org/p/cinder-liberty-3-reviews
> > > [2] At the risk of stating the obvious, in this count I ignore purely
> > > procedural votes such as the final -2.
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > 
> > Thanks Tom, this is sadly an ongoing problem every release.  I think we
> > have a number of things we can talk about at the summit to try and
> > make some of this better.  I honestly think that if people were to
> > actually "use" launchpad instead of creating tracking etherpads
> > everywhere it would help.  What I mean is that there is a ranked
> > targeting of items in Launchpad and we should use it, core team
> > members should know that as the source of truth and things that must
> > get reviewed.
> > 
> 
> I agree, we should use Launchpad's functionality to track BPs and Bugs
> targeted for each milestone, and maybe we can discuss on a workflow that
> helps us reduce starvation at the same time that helps us keep track of
> code reviewers responsible for each item.
> 
> Just spitballing here, but we could add to BP's work items and Bug's
> comments what core members will be responsible for reviewing related
> patches.  Although this means cores will have to check this on every
> review they do that has a BP or Bug number, so if there are already 2
> cores responsible for that feature they should preferably just move on
> to other patches and if there are not 2 core reviewers they should add
> themselves to LP.  This way patch owners know who they should bug for
> reviews on their patches and if there are no core reviewers for them
> they should look for some (or wait for them to "claim" that
> bug/feature).

I know some teams are using an etherpad to track that sort of
information, because priorities change from week to week (as items are
closed out) and it's easy to add arbitrary comments to the etherpad.
Maybe one of the teams doing that can comment on its effectiveness?

Doug


From doug at doughellmann.com  Wed Sep  2 14:20:56 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 02 Sep 2015 10:20:56 -0400
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3C006E@CERNXCHG44.cern.ch>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com> <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3C006E@CERNXCHG44.cern.ch>
Message-ID: <1441203556-sup-2613@lrrr.local>

Excerpts from Tim Bell's message of 2015-09-02 05:15:35 +0000:
> It would be great to have plugins on the commands which are relevant to multiple projects. Avoiding exposing all of the underlying projects as prefixes and getting more consistency would be very appreciated by the users.

That works in some cases, but in a lot of cases things that are
superficially similar have important differences in the inputs they
take. In those cases, it's better to be explicit about the differences
than to force the concepts together in a way that makes OSC present only
the least-common denominator interface.

Doug

> 
> Tim
> 
> From: Dean Troyer [mailto:dtroyer at gmail.com]
> Sent: 01 September 2015 22:47
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Ironic] Command structure for OSC plugin
> 
> [late catch-up]
> 
> On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann <doug at doughellmann.com<mailto:doug at doughellmann.com>> wrote:
> Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
> > On 24/08/15 18:19 +0000, Tim Bell wrote:
> > >
> > >From a user perspective, where bare metal and VMs are just different flavors (with varying capabilities), can we not use the same commands (server create/rebuild/...) ? Containers will create the same conceptual problems.
> > >
> > >OSC can provide a converged interface but if we just replace '$ ironic XXXX' by '$ openstack baremetal XXXX', this seems to be a missed opportunity to hide the complexity from the end user.
> > >
> > >Can we re-use the existing server structures ?
> 
> I've wondered about how users would see doing this; we've done it already with the quota and limits commands (blurring the distinction between project APIs).  At some level I am sure users really do not care about some of our project distinctions.
> 
> > To my knowledge, overriding or enhancing existing commands like that
> > is not possible.
> 
> You would have to do it in tree, by making the existing commands
> smart enough to talk to both nova and ironic, first to find the
> server (which service knows about something with UUID XYZ?) and
> then to take the appropriate action on that server using the right
> client. So it could be done, but it might lose some of the nuance
> between the server types by munging them into the same command. I
> don't know what sorts of operations are different, but it would be
> worth doing the analysis to see.
> 
> I do have an experimental plugin that hooks the server create command to add some options and change its behaviour so it is possible, but right now I wouldn't call it supported at all.  That might be something that we could consider doing though for things like this.
> 
> The current model for commands calling multiple project APIs is to put them in openstackclient.common, so yes, in-tree.
> 
> Overall, though, to stay consistent with OSC you would map operations into the current verbs as much as possible.  It is best to think in terms of how the CLI user is thinking and what she wants to do, and not how the REST or Python API is written.  In this case, 'baremetal' is a type of server, a set of attributes of a server, etc.  As mentioned earlier, containers will also have a similar paradigm to consider.
> 
> dt
> 
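
The two-service lookup Doug sketches above (ask each service which one knows about UUID XYZ, then act through that service's client) could look roughly like this in pure Python. This is an illustration only: the NotFound exception and the client objects are invented stand-ins, not the real novaclient/ironicclient interfaces.

```python
# Sketch of a converged server lookup: try each service's client until one
# recognizes the UUID, then return that client's answer. NotFound and the
# client objects are hypothetical stand-ins for illustration.

class NotFound(Exception):
    """Raised by a client when it does not know the given UUID."""


def find_server(uuid, clients):
    """Return (service_name, server) for whichever service owns the UUID.

    ``clients`` is an ordered list of (name, client) pairs, e.g.
    [("nova", nova_client), ("ironic", ironic_client)].
    """
    for name, client in clients:
        try:
            return name, client.get_server(uuid)
        except NotFound:
            continue
    raise NotFound("no service knows about server %s" % uuid)
```

A converged `server delete` or `server rebuild` would then dispatch the action to whichever client claimed the server, at the cost of one extra round trip per command.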


From Neil.Jerram at metaswitch.com  Wed Sep  2 14:42:48 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Wed, 2 Sep 2015 14:42:48 +0000
Subject: [openstack-dev] [glance] image-create --is-public removed?
Message-ID: <SN1PR02MB16957CD3D48A0CB690365C4199690@SN1PR02MB1695.namprd02.prod.outlook.com>

Was the --is-public option to 'glance image-create ...' just removed?  I've been running Devstack successfully during the last week, but now see this:

glance: error: unrecognized arguments: --is-public=true

from running this:

wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O - | glance image-create --name=cirros-0.3.2-x86_64 --disk-format=qcow2 \
      --container-format=bare --is-public=true

So, some questions:

- Is it correct that this option has just been removed?
- Where should I be looking / tracking to see announcements of changes like this?
- Out of interest, where is the code that implements these command line operations?

Thanks,
    Neil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/df3dc71c/attachment.html>

From nik.komawar at gmail.com  Wed Sep  2 14:59:40 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Wed, 2 Sep 2015 10:59:40 -0400
Subject: [openstack-dev] [glance] image-create --is-public removed?
In-Reply-To: <SN1PR02MB16957CD3D48A0CB690365C4199690@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <SN1PR02MB16957CD3D48A0CB690365C4199690@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <55E70EDC.60203@gmail.com>

You should check the version of your glanceclient.

`glance help` will give you help on most commands. Seems like you may
have upgraded your client and now it defaults to v2 of the server API.

You can track updates using the release on pypi and related
documentation on [1]; announcements are on the openstack-announce ML.

[1] http://docs.openstack.org/developer/python-glanceclient/
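
If the client has indeed started defaulting to the v2 server API, the v1 `--is-public` boolean was replaced there by the image's `visibility` property. A hedged sketch of the equivalent invocation follows; the exact flag spelling may vary by client release, so check `glance help image-create` on your installed version:

```shell
# Assumed v2 equivalent of the failing v1 command: "visibility" replaces
# the v1 "is_public" field. Verify the flag with `glance help image-create`.
wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O - \
  | glance image-create --name=cirros-0.3.2-x86_64 --disk-format=qcow2 \
      --container-format=bare --visibility=public
```

Alternatively, pinning the client back to v1 behavior (or an older glanceclient release) keeps the original command working.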

On 9/2/15 10:42 AM, Neil Jerram wrote:
> Was the --is-public option to 'glance image-create ...' just removed? 
> I've been running Devstack successfully during the last week, but now
> see this:
>
> glance: error: unrecognized arguments: --is-public=true
>
> from running this:
>
> wget
> http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O
> - | glance image-create --name=cirros-0.3.2-x86_64 --disk-format=qcow2 \
>       --container-format=bare --is-public=true
>
> So, some questions:
>
> - Is it correct that this option has just been removed?
> - Where should I be looking / tracking to see announcements of changes
> like this?
> - Out of interest, where is the code that implements these command
> line operations?
>
> Thanks,
>     Neil
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From dzimine at stackstorm.com  Wed Sep  2 15:01:48 2015
From: dzimine at stackstorm.com (Dmitri Zimine)
Date: Wed, 2 Sep 2015 08:01:48 -0700
Subject: [openstack-dev] [mistral][yaql] Addressing task result using
	YAQL function
In-Reply-To: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
Message-ID: <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>

Agreed,

with one detail: make it explicit - task(task_name).

res - we often see folks confused about the result of what (action, task, workflow), even though we cleaned up our lingo: action-output, task-result, workflow-output. But it's still worth being explicit.

And the full result is thought of as the root context $.

Publishing to global context may be ok for now, IMO.

DZ>

On Sep 2, 2015, at 4:16 AM, Renat Akhmerov <rakhmerov at mirantis.com> wrote:

> Hi,
> 
> I'd like to propose a few changes after transition to yaql 1.0:
> 
> We already moved from using "$.__execution" in DSL to "execution()" and from "$.__env" to "env()", where "execution()" and "env()" are registered yaql functions. We had to do it because double underscores are prohibited in the new yaql.
> 
> 
> In the spirit of these changes I'm proposing a similar change for addressing task result in Mistral DSL.
> 
> Currently we have the syntax "$.task_name" that we can use in yaql expressions to refer to the corresponding task result. The current implementation of that is kind of tricky and, IMO, conceptually wrong, because referencing this kind of key leads to a DB read operation that's required to fetch the task result (it may be as big as megabytes, so it can't be stored in the workflow context all the time). In other words, we create a dictionary with a side effect and change the initial semantics of a dictionary. Along with the mentioned trickiness of this approach, it's not convenient, for example, to debug the code because we can accidentally cause a DB operation.
> 
> So the solution I'm proposing is to have an explicit yaql function called "res" or "result" to extract a task result.
> 
> res() - extracts the result of the current task, i.e. in the "publish" clause.
> res("task_name") - extracts the result of the task with the specified name. Can also be used in the "publish" clause, if needed.
> 
> This approach seems more flexible (because we can add any other functions without having to make significant changes in the WF engine) and more expressive, because the user won't confuse $.task_name with addressing a regular workflow context variable.
> 
> Of course, this to some extent breaks backwards compatibility. But we already changed the yaql version, which was necessary anyway, so it seems like a good time to do it.
> 
> I'd very much like to have your input on this.
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
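
The motivation in the proposal above -- an explicit function instead of a context dictionary whose lookup secretly hits the DB -- can be sketched roughly as follows. This is pure illustration: ResultStore and make_res_function are invented names, not Mistral's real internals.

```python
# Illustration of the "explicit function" idea from the thread: task results
# are fetched only when res()/task() is actually called, instead of hiding a
# DB read behind dictionary access. ResultStore stands in for Mistral's
# persistence layer and is not its real API.

class ResultStore:
    def __init__(self, db):
        self._db = db  # e.g. {"task_name": <result>, ...}

    def load_result(self, task_name):
        # In a real engine this would be a (potentially expensive) DB read,
        # since task results can be megabytes in size.
        return self._db[task_name]


def make_res_function(store, current_task):
    """Build a res() callable bound to one workflow execution."""
    def res(task_name=None):
        # res() with no argument means "result of the current task",
        # as it would be used in a task's publish clause.
        return store.load_result(task_name or current_task)
    return res
```

The engine would register such a callable as a yaql function, so `res('task_name')` in an expression triggers exactly one explicit fetch, which also makes the cost visible when debugging.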

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/6c7decf5/attachment.html>

From ahothan at cisco.com  Wed Sep  2 15:25:09 2015
From: ahothan at cisco.com (Alec Hothan (ahothan))
Date: Wed, 2 Sep 2015 15:25:09 +0000
Subject: [openstack-dev] [oslo][versionedobjects][ceilometer] explain
 the benefits of ceilometer+versionedobjects
In-Reply-To: <BLU436-SMTP79E6616D1950645B7A9769DE6A0@phx.gbl>
References: <BLU437-SMTP766811C9632CFCC37305CDDE6F0@phx.gbl>
 <20150828181856.ec477ff41bb188afb35f2b31@intel.com>
 <BLU437-SMTP74C2826061CBC4631D7108DE6E0@phx.gbl>
 <D205EA21.3535C%ahothan@cisco.com>
 <BLU436-SMTP14A77A6CC7A969283FD695DE6E0@phx.gbl>
 <F66AE3B0-96A6-4792-B322-E31C8299A0FD@cisco.com>
 <BLU436-SMTP79E6616D1950645B7A9769DE6A0@phx.gbl>
Message-ID: <71CA3C44-C5EA-4D29-80CE-5F1046A6D2F5@cisco.com>






On 9/1/15, 11:31 AM, "gord chung" <gord at live.ca> wrote:

>
>
>On 28/08/2015 5:18 PM, Alec Hothan (ahothan) wrote:
>>
>>
>>
>>
>> On 8/28/15, 11:39 AM, "gord chung" <gord at live.ca> wrote:
>>
>>> i should start by saying i re-read my subject line and it arguably comes
>>> off aggressive -- i should probably have dropped 'explain' :)
>>>
>>> On 28/08/15 01:47 PM, Alec Hothan (ahothan) wrote:
>>>> On 8/28/15, 10:07 AM, "gord chung" <gord at live.ca> wrote:
>>>>
>>>>> On 28/08/15 12:18 PM, Roman Dobosz wrote:
>>>>>> So imagine we have new versions of the schema for the events, alarms or
>>>>>> samples in ceilometer introduced in Mitaka release while you have all
>>>>>> your ceilo services on Liberty release. To upgrade ceilometer you'll
>>>>>> have to stop all services to avoid data corruption. With
>>>>>> versionedobjects you can do this one by one without disrupting
>>>>>> telemetry jobs.
>>>>> are versions checked for every single message? has anyone considered the
>>>>> overhead to validating each message? since ceilometer is queue based, we
>>>>> could technically just publish to a new queue when schema changes... and
>>>>> the consuming services will listen to the queue it knows of.
>>>>>
>>>>> ie. our notification service changes schema so it will now publish to a
>>>>> v2 queue, the existing collector service consumes the v1 queue until
>>>>> done at which point you can upgrade it and it will listen to v2 queue.
>>>>>
>>>>> this way there is no need to validate/convert anything and you can still
>>>>> take services down one at a time. this support doesn't exist currently
>>>>> (i just randomly thought of it) but assuming there's no flaw in my idea
>>>>> (which there may be) isn't this more efficient?
>>>> If high performance is a concern for ceilometer (and it should) then maybe
>>>> there might be better options than JSON?
>>>> JSON is great for many applications but can be inappropriate for other
>>>> demanding applications.
>>>> There are other popular open source encoding options that yield much more
>>>> compact wire payload, more efficient encoding/decoding and handle
>>>> versioning to a reasonable extent.
>>> i should clarify. we let oslo.messaging serialise our dictionary how it
>>> does... i believe it's JSON. i'd be interested to switch it to something
>>> more efficient. maybe it's time we revive the msgpacks patch[1] or are
>>> there better alternatives? (hoping i didn't just unleash a storm of
>>> 'this is better' replies)
>> I'd be curious to know if there is any benchmark on the oslo serializer for msgpack and how it compares to JSON?
>> More important is to make sure we're optimizing in the right area.
>> Do we have a good understanding of where ceilometer needs to improve to scale or is it still not quite clear cut?
>
>re: serialisation, that probably isn't the biggest concern for 
>Ceilometer performance. the main items are storage -- to be addressed by 
>Gnocchi/tsdb, and polling load. i just thought i'd point out an existing 
>serialisation patch since we were on the topic :-)

Is there any data measuring the polling load on large scale deployments?
Was there a plan to reduce the polling load to an acceptable level? If yes could you provide any pointer if any?


>
>>
>>>> Queue based versioning might be less runtime overhead per message but at
>>>> the expense of a potentially complex queue version management (which can
>>>> become tricky if you have more than 2 versions).
>>>> I think Neutron was considering to use versioned queues as well for its
>>>> rolling upgrade (along with versioned objects) and I already pointed out
>>>> that managing the queues could be tricky.
>>>>
>>>> In general, trying to provide a versioning framework that allows to do
>>>> arbitrary changes between versions is quite difficult (and often bound to
>>>> fail).
>>>>
>>> yeah, so that's what a lot of the devs are debating about right now.
>>> performance is our key driver so if we do something we think/know will
>>> negatively impact performance, it better bring a whole lot more of
>>> something else. if queue based versioning offers comparable
>>> functionalities, i'd personally be more interested to explore that route
>>> first. is there a thread/patch/log that we could read to see what
>>> Neutron discovered when they looked into it?
>> The versioning comments are buried in this mega patch if you are brave enough to dig in:
>>
>> https://review.openstack.org/#/c/190635
>>
>> The (offline) conclusion was that this was WIP and deserved more discussion (need to check back with Miguel and Ihar from the Neutron team).
>> One option considered in that discussion was to use oslo messaging topics to manage flows of messages that had different versions (and still use versionedobjects). So if you have 3 versions in your cloud you'd end up with 3 topics (and as many queues when it comes to Rabbit). What is complex is to manage the queues/topic names (how to name them), how to discover them and how to deal with all the corner cases (like a new node coming in with an arbitrary version, nodes going away at any moment, downgrade cases).
>
>conceptually, i would think only the consumers need to know about all 
>the queues and even then, it should only really need to know about the 
>ones it understands. the producers (polling agents) can just fire off to 
>the correct versioned queue and be done... thanks for the above link 
>(it'll help with discussion/spec design).

When everything goes according to plan, any solution can work, but this is hardly the case in production, especially at scale.  Here are a few questions that may help in the discussion:
- how are versioned queues named?
- who creates a versioned queue (producer or consumer?) and who deletes it when no more entity of that version is running?
- how to make sure a producer is not producing in a queue that has no consumer (a messaging infra like rabbit is designed to decouple producers from consumers)
- all corner cases of entities (consumers or producers) popping up with newer or older version, and terminating (gracefully or not) during the upgrade/downgrade, what happens to the queues...

IMHO using a simple communication schema (1 topic/queue for all versions) with in-band message versioning is a much less complex proposition than juggling with versioned queues (not to say the former is simple to do). With versioned queues you're kind of trading off the per message versioning with per queue versioning but at the expense of:
- a complex queue management (if you want to do it right) 
- a no less complex per-queue message decoding (since the consumer needs to know how to decode and interpret every message depending on the version of the queue it comes from)
- a more difficult debug environment (harder to debug multiple queues than 1 queue)
- and added stress on oslo messaging (due to the use of transient queues)
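
The in-band alternative favored above -- one topic/queue for all versions, with the schema version carried inside each message -- can be sketched like this. The handler names and payload shapes are invented for illustration, not Ceilometer's actual schemas.

```python
# Sketch of in-band message versioning on a single queue: every payload
# carries a schema version, and the consumer dispatches to whichever handler
# understands that version. Handlers and field names are hypothetical.

HANDLERS = {}


def handles(version):
    """Decorator registering a handler for one schema version."""
    def register(fn):
        HANDLERS[version] = fn
        return fn
    return register


@handles(1)
def handle_v1(payload):
    return {"meter": payload["counter_name"], "value": payload["volume"]}


@handles(2)
def handle_v2(payload):
    # v2 renamed the fields; only this handler needs to know that.
    return {"meter": payload["name"], "value": payload["value"]}


def consume(message):
    version = message.get("version", 1)
    try:
        handler = HANDLERS[version]
    except KeyError:
        raise ValueError("unknown schema version %s" % version)
    return handler(message["payload"])
```

The trade-off discussed in the thread is visible here: each message pays a small dispatch cost, but there is exactly one queue to create, monitor, and debug, and an unknown version fails loudly instead of rotting in an unconsumed queue.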


Regards,

  Alec





>

From bharath at brocade.com  Wed Sep  2 15:37:32 2015
From: bharath at brocade.com (bharath)
Date: Wed, 2 Sep 2015 21:07:32 +0530
Subject: [openstack-dev] How to run fw testcases which are recently moved
 from tempest to neutron.
Message-ID: <55E717BC.6000904@brocade.com>

Hi,

How can I run the FW test cases that are now under neutron, using tempest?

If I try to list the cases from tempest (sudo -u stack -H testr
list-tests neutron.api), it results in an empty list.



Thanks,
bharath


From priteau at uchicago.edu  Wed Sep  2 15:50:30 2015
From: priteau at uchicago.edu (Pierre Riteau)
Date: Wed, 2 Sep 2015 16:50:30 +0100
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <55E62395.1040901@redhat.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
 <55E5B3A0.7010504@redhat.com>
 <408D5BC6C96B654BBFC5B5A9B60D13431A80880F@ESESSMB105.ericsson.se>
 <55E62395.1040901@redhat.com>
Message-ID: <A8F143FD-FF87-48A2-A1D6-4B66528699DD@uchicago.edu>

On 1 Sep 2015, at 23:15, Sylvain Bauza <sbauza at redhat.com> wrote:

> Le 01/09/2015 22:31, Ildikó Váncsa a écrit :
>> Hi,
>> 
>> I'm glad to see the interest and I also support the idea of using the IRC channel that is already set up for further communication. Should we aim for a meeting/discussion there around the end of this week or during next week?
>> 
>> @Nikolay, Sylvain: Thanks for support and bringing together a list of action items as very first steps.
>> 
>> @Pierre: You wrote that you are using Blazar. Are you using it as is with an older version of OpenStack or you have a modified version of the project/code?
> I'm actually really surprised to see https://www.chameleoncloud.org/docs/user-guides/bare-metal-user-guide/ which describes quite fine how to use Blazar/Climate either using the CLI or by Horizon.
> 
> The latter is actually not provided within the git tree, so I guess Chameleon added it downstream. That's fine, maybe something we could upstream if Pierre and his team are okay ?
> 
> -Sylvain

We are using a modified version of Blazar (based on the latest master branch commit) with OpenStack Juno (RDO packaging).

The only really mandatory patch that we had to develop was for blazar-nova due to functions moving from oslo-incubator to oslo.i18n: https://github.com/ChameleonCloud/blazar-nova/commit/346627320e87c8e067db6e842935d243c9640e6e
On top of this we developed a number of patches, most of them to fix bugs that we discovered in Blazar, with a few to get specific features or behavior required for Chameleon.

We also used the code developed by Pablo (http://lists.openstack.org/pipermail/openstack-dev/2014-June/038506.html) to provide a Horizon dashboard for creating and managing leases.
We even developed a Gantt chart of physical node reservations, but this code reads data straight from the SQL database rather than using a new Blazar API, so it cannot be contributed as is.

We have always wanted to contribute back our improvements to Blazar and we are looking forward to it now that there is renewed interest from the community.
All our OpenStack code should already be available on GitHub at https://github.com/ChameleonCloud/

Pierre

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/f7dbea85/attachment.html>

From zbitter at redhat.com  Wed Sep  2 16:01:31 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 2 Sep 2015 12:01:31 -0400
Subject: [openstack-dev] [trove] [heat] Multi region support
In-Reply-To: <CAA16xcxYGq=J9_cXxuB2VPQiCwubCvJ_d5pnii-Bex3A+E9wBA@mail.gmail.com>
References: <D20B3157.5150C%mlowery@ebay.com> <55E5F2BA.9000703@redhat.com>
 <D20B897D.5160D%mlowery@ebay.com>
 <CAA16xcxYGq=J9_cXxuB2VPQiCwubCvJ_d5pnii-Bex3A+E9wBA@mail.gmail.com>
Message-ID: <55E71D5B.6020206@redhat.com>

On 01/09/15 19:47, Angus Salkeld wrote:
> On Wed, Sep 2, 2015 at 8:30 AM Lowery, Mathew <mlowery at ebay.com
> <mailto:mlowery at ebay.com>> wrote:
>
>     Thank you Zane for the clarifications!
>
>     I misunderstood #2 and that led to the other misunderstandings.
>
>     Further questions:
>     * Are nested stacks aware of their nested-ness? In other words,
>     given any
>     nested stack (colocated with parent stack or not), can I trace it
>     back to
>     the parent stack? (On a possibly related note, I see that adopting a
>     stack
>
>
> Yes, there is a link (url) to the parent_stack in the links section of
> show stack.

That's true only for resources which derive from StackResource, and 
which are manipulated through the RPC API. Mat was, I think, asking 
specifically about OS::Heat::Stack resources, which may (or may not) be 
in remote regions and are manipulated through the ReST API. Those ones 
are not aware of their nested-ness.

>     is an option to reassemble a new parent stack from its regional parts in
>     the event that the old parent stack is lost.)
>     * Has this design met the users' needs? In other words, are there any
>     plans to make major modifications to this design?
>
>
> AFAIK we have had zero feedback from the multi region feature.
> No more plans, but we would obviously love feedback and suggestions
> on how to improve region support.

Yeah, this has not been around so long that there has been a lot of 
feedback.

I know people want to also do multi-cloud (i.e. where the remote region 
has a different keystone). It's tricky to implement because we need 
somewhere to store the credentials... we'll possibly end up saying that 
Keystone federation is required, and then we'll only have to pass the 
keystone auth URL in addition to what we already have.

cheers,
Zane.


From sbauza at redhat.com  Wed Sep  2 16:04:04 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 02 Sep 2015 18:04:04 +0200
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <A8F143FD-FF87-48A2-A1D6-4B66528699DD@uchicago.edu>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
 <55E5B3A0.7010504@redhat.com>
 <408D5BC6C96B654BBFC5B5A9B60D13431A80880F@ESESSMB105.ericsson.se>
 <55E62395.1040901@redhat.com>
 <A8F143FD-FF87-48A2-A1D6-4B66528699DD@uchicago.edu>
Message-ID: <55E71DF4.6020904@redhat.com>



Le 02/09/2015 17:50, Pierre Riteau a écrit :
> On 1 Sep 2015, at 23:15, Sylvain Bauza <sbauza at redhat.com 
> <mailto:sbauza at redhat.com>> wrote:
>
>> Le 01/09/2015 22:31, Ildikó Váncsa a écrit :
>>> Hi,
>>>
>>> I'm glad to see the interest and I also support the idea of using 
>>> the IRC channel that is already set up for further communication. 
>>> Should we aim for a meeting/discussion there around the end of this 
>>> week or during next week?
>>>
>>> @Nikolay, Sylvain: Thanks for support and bringing together a list 
>>> of action items as very first steps.
>>>
>>> @Pierre: You wrote that you are using Blazar. Are you using it as is 
>>> with an older version of OpenStack or you have a modified version of 
>>> the project/code?
>> I'm actually really surprised to see
>> https://www.chameleoncloud.org/docs/user-guides/bare-metal-user-guide/
>> which describes quite fine how to use Blazar/Climate either using the
>> CLI or by Horizon.
>>
>> The latter is actually not provided within the git tree, so I guess 
>> Chameleon added it downstream. That's fine, maybe something we could 
>> upstream if Pierre and his team are okay ?
>>
>> -Sylvain
>
> We are using a modified version of Blazar (based on the latest master 
> branch commit) with OpenStack Juno (RDO packaging).
>
> The only really mandatory patch that we had to develop was for 
> blazar-nova due to functions moving from oslo-incubator to oslo.i18n: 
> https://github.com/ChameleonCloud/blazar-nova/commit/346627320e87c8e067db6e842935d243c9640e6e
> On top of this we developed a number of patches, most of them to fix 
> bugs that we discovered in Blazar, with a few to get specific features 
> or behavior required for Chameleon.
>
> We also used the code developed by Pablo 
> (http://lists.openstack.org/pipermail/openstack-dev/2014-June/038506.html) 
> to provide an Horizon dashboard for creating and managing leases.
> We even developed a Gantt chart of physical node reservations, but 
> this code reads data straight from the SQL database rather than using 
> a new Blazar API, so it cannot be contributed as is.
>
> We have always wanted to contribute back our improvements to Blazar 
> and we are looking forward to it now that there is renewed interest 
> from the community.
> All our OpenStack code should already be available on GitHub at 
> https://github.com/ChameleonCloud/
>

That's great work, thanks for the explanations, Pierre.
Since you've been acting as the Blazar maintainer while Winter was coming,
I'd certainly consider your work trustworthy enough to be merged back into
the master branch.

Nikolay, you said that the Blazar gate is broken, right? I take that as
action #0 to fix, and then we can try to merge Pierre's changes into
the master branch.

Now, let's switch over to IRC; I don't want to pollute the ML with those
technical details.

Anyone who still wants to contribute to Blazar can join 
#openstack-blazar and yell, for sure.

-Sylvain

> Pierre
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/1531632b/attachment.html>

From bharath at brocade.com  Wed Sep  2 16:21:33 2015
From: bharath at brocade.com (bharath)
Date: Wed, 2 Sep 2015 21:51:33 +0530
Subject: [openstack-dev] [Neutron][horizon][neutron][L3][dvr][fwaas] FWaaS
Message-ID: <55E7220D.4020807@brocade.com>

Hi,

Horizon seems to be broken.

When I try to add a new firewall rule, Horizon breaks with a "'NoneType'
object has no attribute 'id'" error.
This was fine about 10 hours ago; it seems one of the latest commits
broke it.


Traceback in horizon:

2015-09-02 16:15:35.337872     return nodelist.render(context)
2015-09-02 16:15:35.337877   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 903, in render
2015-09-02 16:15:35.337893     bit = self.render_node(node, context)
2015-09-02 16:15:35.337899   File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 79, in render_node
2015-09-02 16:15:35.337903     return node.render(context)
2015-09-02 16:15:35.337908   File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 89, in render
2015-09-02 16:15:35.337913     output = self.filter_expression.resolve(context)
2015-09-02 16:15:35.337917   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 647, in resolve
2015-09-02 16:15:35.337922     obj = self.var.resolve(context)
2015-09-02 16:15:35.337927   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 787, in resolve
2015-09-02 16:15:35.337931     value = self._resolve_lookup(context)
2015-09-02 16:15:35.337936   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 825, in _resolve_lookup
2015-09-02 16:15:35.337940     current = getattr(current, bit)
2015-09-02 16:15:35.337945   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 59, in attr_string
2015-09-02 16:15:35.337950     return flatatt(self.get_final_attrs())
2015-09-02 16:15:35.337954   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 42, in get_final_attrs
2015-09-02 16:15:35.337959     final_attrs['class'] = self.get_final_css()
2015-09-02 16:15:35.337964   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 47, in get_final_css
2015-09-02 16:15:35.337981     default = " ".join(self.get_default_classes())
2015-09-02 16:15:35.337986   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 792, in get_default_classes
2015-09-02 16:15:35.337991     if not self.url:
2015-09-02 16:15:35.337995   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 756, in url
2015-09-02 16:15:35.338000     url = self.column.get_link_url(self.datum)
2015-09-02 16:15:35.338004   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 431, in get_link_url
2015-09-02 16:15:35.338009     return self.link(datum)
2015-09-02 16:15:35.338014   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/firewalls/tables.py", line 261, in get_policy_link
2015-09-02 16:15:35.338019     kwargs={'policy_id': datum.policy.id})
2015-09-02 16:15:35.338023 AttributeError: 'NoneType' object has no attribute 'id'
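
The traceback bottoms out in get_policy_link building a URL from datum.policy.id while datum.policy is None (a firewall rule not yet attached to a policy). A defensive guard of the kind below would avoid the crash by returning no link for such rows; this is a sketch of the idea, not the actual upstream Horizon patch, and the injectable `reverse` parameter is added here only so the sketch is self-contained.

```python
# Sketch of a defensive get_policy_link: if the firewall rule has no
# associated policy, return None so the table column renders as plain text
# instead of raising AttributeError. Not the actual Horizon fix.

def get_policy_link(datum, reverse=None):
    policy = getattr(datum, "policy", None)
    if policy is None:
        return None
    # In Horizon this would be django's reverse(); a stand-in is used here
    # so the sketch runs without django installed.
    reverse = reverse or (
        lambda view, kwargs: "%s?policy_id=%s" % (view, kwargs["policy_id"]))
    return reverse("horizon:project:firewalls:policydetails",
                   {"policy_id": policy.id})
```

The table column then shows the rule without a policy link rather than taking the whole page down.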


Thanks,
bharath
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/946af433/attachment.html>

From jaypipes at gmail.com  Wed Sep  2 16:21:48 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 02 Sep 2015 12:21:48 -0400
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
 issues
In-Reply-To: <CACo6NWCjp-DTCY2nrKyDij1TPeSuTCr9PhTLQ25Vf_Y5cJ=sZQ@mail.gmail.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
 <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
 <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>
 <55E6E82D.6030100@gmail.com>
 <CACo6NWCjp-DTCY2nrKyDij1TPeSuTCr9PhTLQ25Vf_Y5cJ=sZQ@mail.gmail.com>
Message-ID: <55E7221C.2070008@gmail.com>

On 09/02/2015 08:45 AM, Igor Kalnitsky wrote:
>> I think there's plenty of examples of people in OpenStack projects
>> that both submit code (and lead features) that also do code review
>> on a daily basis.
>
> * Do these features huge?

Yes.

> * Is their code contribution huge or just small patches?

Both.

> * Did they get to master before FF?

Yes.

> * How many intersecting features OpenStack projects have under
> development? (since often merge conflicts requires a lot of re-review)

I recognize that Fuel, like devstack, has lots of cross-project 
dependencies. That just makes things harder to handle for Fuel, but it's 
not a reason to have core reviewers not working on code or non-core 
reviewers not doing reviews.

> * How often OpenStack people are busy on other activities, such as
> helping fellas, troubleshooting customers, participate design meetings
> and so on?

Quite often. I'm personally on IRC participating in design discussions, 
code reviews, and helping people every day. Not troubleshooting 
customers, though...

> If so, are you sure they are humans then? :) I can only speak for
> myself, and that's what I want to say: during 7.0 dev cycle I burned
> in hell and I don't want to continue that way.

I think you mean you "burned out" :) But, yes, I hear you. I understand 
the pressure that you are under, and I sympathize with you. I just feel 
that the situation is not an either/or situation, and encouraging some 
folks to only do reviews and not participate in coding/feature 
development is a dangerous thing.

Best,
-jay

> Thanks,
> Igor
>
> On Wed, Sep 2, 2015 at 3:14 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> On 09/02/2015 03:00 AM, Igor Kalnitsky wrote:
>>>
>>> It won't work that way. You either busy on writing code / leading
>>> feature or doing review. It couldn't be combined effectively. Any
>>> context switch between activities requires an extra time to focus on.
>>
>>
>> I don't agree with the above, Igor. I think there's plenty of examples of
>> people in OpenStack projects who both submit code (and lead features) and
>> also do code review on a daily basis.
>>
>> Best,
>> -jay
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From armamig at gmail.com  Wed Sep  2 16:40:42 2015
From: armamig at gmail.com (Armando M.)
Date: Wed, 2 Sep 2015 09:40:42 -0700
Subject: [openstack-dev] [neutron] pushing changes through the gate
Message-ID: <CAK+RQebMqAL-ZTn4z3Tafnpr=feA3i4qaeF5Uu4K03SO_fFF9g@mail.gmail.com>

Hi,

By now you may have seen that I have taken out your change from the gate
and given it a -2: don't despair! I am only doing it to give priority to
the stuff that needs to merge in order to get [1] into a much better shape.

If you have an important fix, please target it for RC1 or talk to me or
Doug (or Kyle when he's back from his time off), before putting it in the
gate queue. If everyone is not conscious of the other, we'll only end up
stepping on each other, and nothing moves forward.

Let's give priority to gate stabilization fixes, and targeted stuff.

Happy merging...not!

Many thanks,
Armando

[1] https://launchpad.net/neutron/+milestone/liberty-3
[2] https://launchpad.net/neutron/+milestone/liberty-rc1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/e3c42c88/attachment.html>

From Neil.Jerram at metaswitch.com  Wed Sep  2 17:04:20 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Wed, 2 Sep 2015 17:04:20 +0000
Subject: [openstack-dev] [glance] image-create --is-public removed?
References: <SN1PR02MB16957CD3D48A0CB690365C4199690@SN1PR02MB1695.namprd02.prod.outlook.com>
 <55E70EDC.60203@gmail.com>
Message-ID: <SN1PR02MB1695615AEC9CB0E14C480B8A99690@SN1PR02MB1695.namprd02.prod.outlook.com>

Thanks, Nikhil.  I've been (repeatedly) cloning and using devstack from
scratch, on a fresh VM.  So 'the version of your glanceclient' is
whatever the latest devstack uses.

In case anyone is hitting this, it seems the correct replacement for
'--is-public=true' is '--visibility public'.
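
As a hedged illustration (the helper function below is mine, not part of glanceclient), the mapping from the removed v1-era flag to its v2 replacement can be sketched as:

```python
# Illustrative sketch: rewriting the legacy --is-public=<bool> argument into
# the v2-era --visibility flag. Not glanceclient code.

def translate_image_create_args(args):
    """Replace --is-public=<bool> with the equivalent --visibility flag."""
    out = []
    for arg in args:
        if arg.startswith('--is-public='):
            value = arg.split('=', 1)[1].lower()
            out.append('--visibility')
            out.append('public' if value in ('true', '1') else 'private')
        else:
            out.append(arg)
    return out


translated = translate_image_create_args(
    ['--name=cirros-0.3.2-x86_64', '--disk-format=qcow2', '--is-public=true'])
```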

Regards,
    Neil


On 02/09/15 16:03, Nikhil Komawar wrote:
> You should check the version of your glanceclient.
>
> `glance help` will give you help on most commands. Seems like you may
> have upgraded your client and now it defaults to v2 of the server API.
>
> You can track updates using the release on pypi and related
> documentation on [1]; announcements are on the openstack-announce ML.
>
> [1] http://docs.openstack.org/developer/python-glanceclient/
>
> On 9/2/15 10:42 AM, Neil Jerram wrote:
>> Was the --is-public option to 'glance image-create ...' just removed? 
>> I've been running Devstack successfully during the last week, but now
>> see this:
>>
>> glance: error: unrecognized arguments: --is-public=true
>>
>> from running this:
>>
>> wget
>> http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O
>> - | glance image-create --name=cirros-0.3.2-x86_64 --disk-format=qcow2 \
>>       --container-format=bare --is-public=true
>>
>> So, some questions:
>>
>> - Is it correct that this option has just been removed?
>> - Where should I be looking / tracking to see announcements of changes
>> like this?
>> - Out of interest, where is the code that implements these command
>> line operations?
>>
>> Thanks,
>>     Neil
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From amuller at redhat.com  Wed Sep  2 17:28:00 2015
From: amuller at redhat.com (Assaf Muller)
Date: Wed, 2 Sep 2015 13:28:00 -0400
Subject: [openstack-dev] How to run fw testcases which are recently
 moved from tempest to neutron.
In-Reply-To: <55E717BC.6000904@brocade.com>
References: <55E717BC.6000904@brocade.com>
Message-ID: <CABARBAafQ7WQPQT8edWUrRei=3eXHprbe3ktvY+emPcd5m8pnw@mail.gmail.com>

I just go to the Neutron dir and use: tox -e api.

On Wed, Sep 2, 2015 at 11:37 AM, bharath <bharath at brocade.com> wrote:

> Hi ,
>
> How to run FW testcases which are under neutron using tempest?
>
> If I try to list cases from tempest (sudo -u stack -H testr
> list-tests neutron.api), it results in an empty list
>
>
>
> Thanks,
> bharath
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/a12977c3/attachment.html>

From tdecacqu at redhat.com  Wed Sep  2 17:47:20 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Wed, 2 Sep 2015 17:47:20 +0000
Subject: [openstack-dev] [all] Criteria for applying
 vulnerability:managed tag
In-Reply-To: <20150901185638.GB7955@yuggoth.org>
References: <20150901185638.GB7955@yuggoth.org>
Message-ID: <55E73628.5000608@redhat.com>

Thank you Jeremy for starting this discussion :-)

The proposed criteria work for me, and they concur with what was
discussed in Vancouver.

My comments on the open questions are below.


On 09/01/2015 06:56 PM, Jeremy Stanley wrote:
> A. Can the VMT accept deliverables in any programming language?

Any programming language supported by the OpenStack project should/could
also be accepted for vulnerability management.
As long as there is a way to test patches, I think the VMT can support
other languages like Go or Puppet.


> 
> B. As we expand the VMT's ring within the Big Top to encircle more
> and varied acts, are there parts of our current process we need to
> reevaluate for better fit? For example, right now we have one list
> of downstream stakeholders (primarily Linux distros and large public
> providers) we notify of upcoming coordinated disclosures, but as the
> list grows longer and the kinds of deliverables we support becomes
> more diverse some of them can have different downstream communities
> and so a single contact list may no longer make sense.
> 
The risk is to divide downstream communities, and managing different
lists sounds like overkill for now. One improvement would be to maintain
that list publicly, like Xen does for its pre-disclosure list:
  http://www.xenproject.org/security-policy.html


> C. Should we be considering a different VMT configuration entirely,
> to better service some under-represented subsets of the OpenStack
> community? Perhaps multiple VMTs with different specialties or a
> tiered structure with focused subteams.
> 
> D. Are there other improvements we can make so that our
> recommendations and processes are more consumable by other groups
> within OpenStack, further distributing the workload or making it
> more self-service (perhaps reducing the need for direct VMT
> oversight in more situations)?
> -- Jeremy Stanley

With a public stakeholder list, we can clarify our vmt-process to be
directly usable without VMT supervision.

Anyway, IMO the five proposed criteria are good to be added to the
vulnerability:managed tag documentation.

Again, thank you fungi :-)
Tristan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/9f8b728b/attachment.pgp>

From zbitter at redhat.com  Wed Sep  2 17:51:29 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 2 Sep 2015 13:51:29 -0400
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <20150902085546.GA25909@t430slt.redhat.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
 <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>
 <20150902085546.GA25909@t430slt.redhat.com>
Message-ID: <55E73721.1000804@redhat.com>

On 02/09/15 04:55, Steven Hardy wrote:
> On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
>> On 2 September 2015 at 11:53, Angus Salkeld <asalkeld at mirantis.com> wrote:
>>
>>> 1. limit the number of resource actions in parallel (maybe base on the
>>> number of cores)
>>
>> I'm having trouble mapping that back to 'and heat-engine is running on
>> 3 separate servers'.
>
> I think Angus was responding to my test feedback, which was a different
> setup, one 4-core laptop running heat-engine with 4 worker processes.
>
> In that environment, the level of additional concurrency becomes a problem
> because all heat workers become so busy that creating a large stack
> DoSes the Heat services, and in my case also the DB.
>
> If we had a configurable option, similar to num_engine_workers, which
> enabled control of the number of resource actions in parallel, I probably
> could have controlled that explosion in activity to a more managable series
> of tasks, e.g I'd set num_resource_actions to (num_engine_workers*2) or
> something.

I think that's actually the opposite of what we need.

The resource actions are just sent to the worker queue to get processed 
whenever. One day we will get to the point where we are overflowing the 
queue, but I guarantee that we are nowhere near that day. If we are 
DoSing ourselves, it can only be because we're pulling *everything* off 
the queue and starting it in separate greenthreads.

In an ideal world, we might only ever pull one task off that queue at a 
time. Any time the task is sleeping, we would use for processing stuff 
off the engine queue (which needs a quick response, since it is serving 
the ReST API). The trouble is that you need a *huge* number of 
heat-engines to handle stuff in parallel. In the reductio-ad-absurdum 
case of a single engine only processing a single task at a time, we're 
back to creating resources serially. So we probably want a higher number 
than 1. (Phase 2 of convergence will make tasks much smaller, and may 
even get us down to the point where we can pull only a single task at a 
time.)

However, the fewer engines you have, the more greenthreads we'll have to 
allow to get some semblance of parallelism. To the extent that more 
cores means more engines (which assumes all running on one box, but 
still), the number of cores is negatively correlated with the number of 
tasks that we want to allow.

Note that all of the greenthreads run in a single CPU thread, so having 
more cores doesn't help us at all with processing more stuff in parallel.
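
To make the bounding idea concrete, here is a minimal self-contained sketch (plain OS threads rather than the greenthreads Heat actually uses, and none of Heat's queue machinery) of capping the number of in-flight tasks at a fixed worker count:

```python
# Hedged sketch: run a set of tasks with at most num_workers executing at
# once, analogous to the proposed num_resource_actions bound. Uses stdlib
# threads for self-containment; Heat itself uses eventlet greenthreads.
import queue
import threading


def run_bounded(tasks, num_workers):
    """Run the callables in `tasks` with at most num_workers in flight."""
    q = queue.Queue()
    for task in tasks:
        q.put(task)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return
            result = task()
            with lock:
                results.append(result)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results


# With num_workers=2, at most two tasks execute concurrently.
squares = run_bounded([lambda i=i: i * i for i in range(8)], num_workers=2)
```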

cheers,
Zane.


From emilien at redhat.com  Wed Sep  2 18:09:43 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 2 Sep 2015 14:09:43 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
Message-ID: <55E73B67.9020802@redhat.com>

TL;DR, I propose to move our developer documentation from wiki to
something like http://docs.openstack.org/developer/puppet-openstack

(Look at http://docs.openstack.org/developer/tempest/ for example).

For now, most of our documentation is on
https://wiki.openstack.org/wiki/Puppet but I think it would be great to
use RST format and Gerrit so anyone could submit documentation
contributions like we do for code.

I propose a basic table of contents now:
Puppet modules introductions
Coding Guide
Reviewing code

I'm taking the opportunity of the puppet sprint to run this discussion
and maybe start some work if people agree to move on.
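
A minimal Sphinx conf.py sketch of the kind of doc/source setup this would need (all names here are assumed for illustration; the real repo would pick its own):

```python
# doc/source/conf.py -- minimal illustrative Sphinx configuration.
# Project name, copyright holder, and theme below are assumptions.

project = u'puppet-openstack'
copyright = u'2015, OpenStack Foundation'
master_doc = 'index'        # index.rst would hold the table of contents
source_suffix = '.rst'
extensions = []             # e.g. add 'sphinx.ext.autodoc' if needed later
html_theme = 'default'
```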

Thanks,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/f0101252/attachment.pgp>

From iurygregory at gmail.com  Wed Sep  2 18:13:13 2015
From: iurygregory at gmail.com (Iury Gregory)
Date: Wed, 2 Sep 2015 15:13:13 -0300
Subject: [openstack-dev] [puppet] hosting developer documentation on
	http://docs.openstack.org/developer/
In-Reply-To: <55E73B67.9020802@redhat.com>
References: <55E73B67.9020802@redhat.com>
Message-ID: <CAJqPaj9z4q+zSjndDe_WpkjfNA0T0exJNVKXCLf=UDLCxmv5JA@mail.gmail.com>

I liked the idea (+1)

2015-09-02 15:09 GMT-03:00 Emilien Macchi <emilien at redhat.com>:

> TL;DR, I propose to move our developer documentation from wiki to
> something like http://docs.openstack.org/developer/puppet-openstack
>
> (Look at http://docs.openstack.org/developer/tempest/ for example).
>
> For now, most of our documentation is on
> https://wiki.openstack.org/wiki/Puppet but I think it would be great to
> use RST format and Gerrit so anyone could submit documentation
> contributions like we do for code.
>
> I propose a basic table of contents now:
> Puppet modules introductions
> Coding Guide
> Reviewing code
>
> I'm taking the opportunity of the puppet sprint to run this discussion
> and maybe start some work if people agree to move on.
>
> Thanks,
> --
> Emilien Macchi
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


*Att[]'s*
*Iury Gregory Melo Ferreira*
*Master student in Computer Science at UFCG*
*E-mail: iurygregory at gmail.com <iurygregory at gmail.com>*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/d5871fb4/attachment.html>

From guimalufb at gmail.com  Wed Sep  2 18:19:55 2015
From: guimalufb at gmail.com (Gui Maluf)
Date: Wed, 2 Sep 2015 15:19:55 -0300
Subject: [openstack-dev] [puppet] hosting developer documentation on
	http://docs.openstack.org/developer/
In-Reply-To: <CAJqPaj9z4q+zSjndDe_WpkjfNA0T0exJNVKXCLf=UDLCxmv5JA@mail.gmail.com>
References: <55E73B67.9020802@redhat.com>
 <CAJqPaj9z4q+zSjndDe_WpkjfNA0T0exJNVKXCLf=UDLCxmv5JA@mail.gmail.com>
Message-ID: <CAJArKkcovV8Kt_abz=oTtw6vwvhBpm=V2Mm+X95MEyOVeeRLmQ@mail.gmail.com>

I loved the idea. This could also cover good practices of the puppet
development workflow, like setting up an rspec environment, using beaker to
deploy and test one's own code, and things like that.

+1

On Wed, Sep 2, 2015 at 3:13 PM, Iury Gregory <iurygregory at gmail.com> wrote:

> I liked the idea (+1)
>
> 2015-09-02 15:09 GMT-03:00 Emilien Macchi <emilien at redhat.com>:
>
>> TL;DR, I propose to move our developer documentation from wiki to
>> something like http://docs.openstack.org/developer/puppet-openstack
>>
>> (Look at http://docs.openstack.org/developer/tempest/ for example).
>>
>> For now, most of our documentation is on
>> https://wiki.openstack.org/wiki/Puppet but I think it would be great to
>> use RST format and Gerrit so anyone could submit documentation
>> contributions like we do for code.
>>
>> I propose a basic table of contents now:
>> Puppet modules introductions
>> Coding Guide
>> Reviewing code
>>
>> I'm taking the opportunity of the puppet sprint to run this discussion
>> and maybe start some work if people agree to move on.
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>
> *Att[]'sIury Gregory Melo Ferreira **Master student in Computer Science
> at UFCG*
> *E-mail:  iurygregory at gmail.com <iurygregory at gmail.com>*
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
*guilherme* \n
\t *maluf*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/ad515bb4/attachment.html>

From nstarodubtsev at mirantis.com  Wed Sep  2 18:23:21 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Wed, 2 Sep 2015 21:23:21 +0300
Subject: [openstack-dev] [Blazar] Anyone interested?
In-Reply-To: <55E71DF4.6020904@redhat.com>
References: <408D5BC6C96B654BBFC5B5A9B60D13431A7FEA8C@ESESSMB105.ericsson.se>
 <0BFC56CD-8C73-4AB4-9A0C-673E49078A61@uchicago.edu>
 <CAO0b__8gJxAnhXz4dvT0y5VKz_2FQw=F3FyNZv6ZOFXU09qnhw@mail.gmail.com>
 <55E4847D.2020807@intel.com>
 <CAAa8YgBDEqbiy5n8dsMCq-p79-u0zJZ5cTig-G3v9nR=ZXK7ww@mail.gmail.com>
 <55E5611B.1090203@redhat.com>
 <CAAa8YgCZEFqGpwpY=P2JzxP+BmYJpHaYYFrX_fGf6-3s16NREQ@mail.gmail.com>
 <CAAa8YgBNHdqpWhKgoBTcm-cFTCD7hHU4iGWhKia3uePotg-UbA@mail.gmail.com>
 <55E5B3A0.7010504@redhat.com>
 <408D5BC6C96B654BBFC5B5A9B60D13431A80880F@ESESSMB105.ericsson.se>
 <55E62395.1040901@redhat.com>
 <A8F143FD-FF87-48A2-A1D6-4B66528699DD@uchicago.edu>
 <55E71DF4.6020904@redhat.com>
Message-ID: <CAAa8YgAQr3m4y6Z_g8VFT53pA3B2N7z49u10BeNP-zmTQ5anXw@mail.gmail.com>

Pierre, thanks for the explanation. It's great.

I agree with Sylvain. Let's switch to #openstack-blazar channel in IRC.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-02 19:04 GMT+03:00 Sylvain Bauza <sbauza at redhat.com>:

>
>
> Le 02/09/2015 17:50, Pierre Riteau a ?crit :
>
> On 1 Sep 2015, at 23:15, Sylvain Bauza <sbauza at redhat.com> wrote:
>
> Le 01/09/2015 22:31, Ildik? V?ncsa a ?crit :
>
> Hi,
>
> I'm glad to see the interest and I also support the idea of using the IRC
> channel that is already set up for further communication. Should we aim for
> a meeting/discussion there around the end of this week or during next week?
>
> @Nikolay, Sylvain: Thanks for support and bringing together a list of
> action items as very first steps.
>
> @Pierre: You wrote that you are using Blazar. Are you using it as is with
> an older version of OpenStack or you have a modified version of the
> project/code?
>
> I'm actually really surprised to see
> https://www.chameleoncloud.org/docs/user-guides/bare-metal-user-guide/ which
> describes quite well how to use Blazar/Climate, either via the CLI or via
> Horizon.
>
> The latter is actually not provided within the git tree, so I guess
> Chameleon added it downstream. That's fine, maybe something we could
> upstream if Pierre and his team are okay ?
>
> -Sylvain
>
>
> We are using a modified version of Blazar (based on the latest master
> branch commit) with OpenStack Juno (RDO packaging).
>
> The only really mandatory patch that we had to develop was for blazar-nova
> due to functions moving from oslo-incubator to oslo.i18n:
> https://github.com/ChameleonCloud/blazar-nova/commit/346627320e87c8e067db6e842935d243c9640e6e
> On top of this we developed a number of patches, most of them to fix bugs
> that we discovered in Blazar, with a few to get specific features or
> behavior required for Chameleon.
>
> We also used the code developed by Pablo (
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/038506.html)
> to provide an Horizon dashboard for creating and managing leases.
> We even developed a Gantt chart of physical node reservations, but this
> code reads data straight from the SQL database rather than using a new
> Blazar API, so it cannot be contributed as is.
>
> We have always wanted to contribute back our improvements to Blazar and we
> are looking forward to it now that there is renewed interest from the
> community.
> All our OpenStack code should already be available on GitHub at
> https://github.com/ChameleonCloud/
>
>
> That's great work, thanks for the explanations Pierre.
> Since you've been acting as the Blazar maintainer while winter was coming,
> I'd certainly consider your changes trustworthy enough to be backported
> to the master branch.
>
> Nikolay, you said that the Blazar gate is broken, right? I take that as
> action #0 to fix, and then we can try to backport Pierre's changes into the
> master branch.
>
> Now, let's switch over IRC, I don't want to pollute the ML for those
> technical details.
>
> Anyone who still wants to contribute to Blazar can join #openstack-blazar
> and yell, for sure.
>
> -Sylvain
>
> Pierre
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/1176c55d/attachment.html>

From matt at mattfischer.com  Wed Sep  2 18:25:20 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Wed, 2 Sep 2015 12:25:20 -0600
Subject: [openstack-dev] [puppet] hosting developer documentation on
	http://docs.openstack.org/developer/
In-Reply-To: <55E73B67.9020802@redhat.com>
References: <55E73B67.9020802@redhat.com>
Message-ID: <CAHr1CO8_7j54avnbx5KkMw3BaZ52DwdNO3rdJwYhGyN1HzJxJQ@mail.gmail.com>

+1

On Wed, Sep 2, 2015 at 12:09 PM, Emilien Macchi <emilien at redhat.com> wrote:

> TL;DR, I propose to move our developer documentation from wiki to
> something like http://docs.openstack.org/developer/puppet-openstack
>
> (Look at http://docs.openstack.org/developer/tempest/ for example).
>
> For now, most of our documentation is on
> https://wiki.openstack.org/wiki/Puppet but I think it would be great to
> use RST format and Gerrit so anyone could submit documentation
> contributions like we do for code.
>
> I propose a basic table of contents now:
> Puppet modules introductions
> Coding Guide
> Reviewing code
>
> I'm taking the opportunity of the puppet sprint to run this discussion
> and maybe start some work if people agree to move on.
>
> Thanks,
> --
> Emilien Macchi
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/61f0db6c/attachment.html>

From stdake at cisco.com  Wed Sep  2 18:41:30 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Wed, 2 Sep 2015 18:41:30 +0000
Subject: [openstack-dev] [kolla][announce] Announcing release of Liberty-3
	Milestone!
Message-ID: <D20C90CF.11C06%stdake@cisco.com>

The Kolla community is pleased to announce the release of the Kolla Liberty 3 milestone.  This release fixes 90 bugs and implements 16 blueprints!


During Liberty 3, Kolla joined the big tent governance!  Our project can be found here:


http://governance.openstack.org/reference/projects/index.html


As part of the project renames happening in OpenStack Infrastructure, the project will be moving to the openstack git namespace September 11th [1].


Our community developed the following notable features in Liberty-3:


  *   Multi-node deployment using Ansible with complete high availability.
  *   Ubuntu source building implemented.
  *   Compose support has been removed in favor of a new Ansible framework which supports multi-node deployment of all core services.
  *   Implementation of a new compact, fully featured build tool written in Python for building images.
  *   All Dockerfiles were converted to Jinja2 templates, allowing tidy multi-distro support.
  *   Building behind a proxy implemented.
  *   Vastly improved documentation (the documentation still needs a lot of work!).
  *   Improved coverage of stable image builds in gates.
  *   Packaging Kolla with the PBR Python toolset.
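
To illustrate the multi-distro templating idea (using stdlib string.Template for a self-contained sketch; Kolla itself uses Jinja2, and the package names below are assumptions):

```python
# Hedged sketch: rendering one Dockerfile template for multiple distros,
# the idea behind Kolla's Jinja2-templated Dockerfiles. Base images and
# install commands are illustrative.
from string import Template

DOCKERFILE_TPL = Template(
    "FROM ${base_image}\n"
    "RUN ${install_cmd} openstack-glance\n"
)

DISTRO_PARAMS = {
    'ubuntu': {'base_image': 'ubuntu:14.04', 'install_cmd': 'apt-get install -y'},
    'centos': {'base_image': 'centos:7', 'install_cmd': 'yum install -y'},
}


def render(distro):
    """Render the shared Dockerfile template for one distro."""
    return DOCKERFILE_TPL.substitute(DISTRO_PARAMS[distro])


ubuntu_dockerfile = render('ubuntu')
centos_dockerfile = render('centos')
```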


The following services are stable and may be deployed multi-node via Ansible:

  *   glance
  *   haproxy
  *   heat
  *   horizon
  *   keystone
  *   mariadb
  *   memcached
  *   neutron
  *   nova
  *   rabbitmq
  *   swift


Kolla's implementation is stable and Kolla is ready for evaluation by operators and third party projects. The Kolla community encourages individuals to evaluate the project and provide feedback via the mailing list or irc!


Finally, Kolla has a solid crew of reviewers who are not on the core team.  We hope that folks interested in joining the core reviewer team will continue reviewing - we definitely appreciate the reviews!  Our project is highly diverse.  To get an idea of who is contributing, check out [2].


Regards,

- The Kolla Development Team


 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/073049.html

 [2] http://stackalytics.com/?module=kolla-group&metric=person-day

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/ed9ba151/attachment.html>

From anteaya at anteaya.info  Wed Sep  2 18:48:02 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Wed, 2 Sep 2015 14:48:02 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
In-Reply-To: <55E73B67.9020802@redhat.com>
References: <55E73B67.9020802@redhat.com>
Message-ID: <55E74462.6030808@anteaya.info>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 09/02/2015 02:09 PM, Emilien Macchi wrote:
> TL;DR, I propose to move our developer documentation from wiki to 
> something like 
> http://docs.openstack.org/developer/puppet-openstack
> 
> (Look at http://docs.openstack.org/developer/tempest/ for 
> example).

Looking at the tempest example:
http://git.openstack.org/cgit/openstack/tempest/tree/doc/source
we see that the .rst files all live in the tempest repo in doc/source
(with the exception of the README.rst file with is referenced from
within doc/source when required:
http://git.openstack.org/cgit/openstack/tempest/tree/doc/source/overview.rst)

So question: Where should the source .rst files for puppet developer
documentation live? They will need a home.

Thanks,
Anita.

> 
> For now, most of our documentation is on 
> https://wiki.openstack.org/wiki/Puppet but I think it would be 
> great to use RST format and Gerrit so anyone could submit 
> documentation contributions like we do for code.
> 
> I propose a basic table of contents now: Puppet modules 
> introductions Coding Guide Reviewing code
> 
> I'm taking the opportunity of the puppet sprint to run this 
> discussion and maybe start some work if people agree to move on.
> 
> Thanks,
> 
> 
> 
> __________________________________________________________________________
>
>
> 
OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV50RhAAoJELmKyZugNFU0X1sIANlLUwxe8i+vlRISuPQFBzWR
a8h4VybYRz1kz64LthKbwYaX5yRGyyn5ir0BbwC4cvxaN1J8P46/hJ7lAEe3BxWL
t5hqPdL40xSNcBLLX8EaJPS1ohO9V13k/qFstbWu3pF0tXYcqRiX53X1pS7v8VpL
19qXElddTTu9Nn6NAGeJS8fL/h5N67dBC0/S2K0kEHaXQI7yRB2uvUSwOsWQTswC
s/XVuGy/wQgHESbIEaiNgk49BjMm+5bYB187hJa97SuIjsGyIcUZsz44nZcyjlbv
cAmhhjkxgtFgM2znGuYXJGb5CKfZn+1qFgNhDGxFuQhFUuxRpRyaxkGLxq1fGaw=
=rk94
-----END PGP SIGNATURE-----


From Tim.Bell at cern.ch  Wed Sep  2 18:50:57 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Wed, 2 Sep 2015 18:50:57 +0000
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <1441203556-sup-2613@lrrr.local>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com> <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3C006E@CERNXCHG44.cern.ch>
 <1441203556-sup-2613@lrrr.local>
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3CC061@CERNXCHG44.cern.ch>

> -----Original Message-----
> From: Doug Hellmann [mailto:doug at doughellmann.com]
> Sent: 02 September 2015 16:21
> To: openstack-dev <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Ironic] Command structure for OSC plugin
> 
> Excerpts from Tim Bell's message of 2015-09-02 05:15:35 +0000:
> > That would be great to have plugins on the commands which are relevant
> to multiple projects? avoiding exposing all of the underlying projects as
> prefixes and getting more consistency would be very appreciated by the
> users.
> 
> That works in some cases, but in a lot of cases things that are superficially
> similar have important differences in the inputs they take. In those cases, it's
> better to be explicit about the differences than to force the concepts together
> in a way that makes OSC present only the least-common denominator
> interface.
> 

I think the differences are in the options rather than the prefixes. Thus, if I want to create a bare metal server, I should be able to use 'openstack create' rather than 'openstack ironic create'. The implications for options etc. clearly depend on the target environment.

I would simply like to avoid the OSC becoming a set of project prefixes, i.e. where you need to know that ironic is for bare metal. If options are presented that are not supported in the desired context, they should be rejected.

At CERN, we're using OSC as the default CLI. This is partly because the support for Keystone v3 API is much more advanced but also because we do not want our end users to know the list of OpenStack projects, only the services we are offering them.

Tim

> Doug
> 
> >
> > Tim
> >
> > From: Dean Troyer [mailto:dtroyer at gmail.com]
> > Sent: 01 September 2015 22:47
> > To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev at lists.openstack.org>
> > Subject: Re: [openstack-dev] [Ironic] Command structure for OSC plugin
> >
> > [late catch-up]
> >
> > On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann
> <doug at doughellmann.com<mailto:doug at doughellmann.com>> wrote:
> > Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
> > > On 24/08/15 18:19 +0000, Tim Bell wrote:
> > > >
> > > >From a user perspective, where bare metal and VMs are just different
> flavors (with varying capabilities), can we not use the same commands
> (server create/rebuild/...) ? Containers will create the same conceptual
> problems.
> > > >
> > > >OSC can provide a converged interface but if we just replace '$ ironic
> XXXX' by '$ openstack baremetal XXXX', this seems to be a missed
> opportunity to hide the complexity from the end user.
> > > >
> > > >Can we re-use the existing server structures ?
> >
> > I've wondered about how users would see doing this, we've done it already
> with the quota and limits commands (blurring the distinction between
> project APIs).  At some level I am sure users really do not care about some of
> our project distinctions.
> >
> > > To my knowledge, overriding or enhancing existing commands like that
> > > is not possible.
> >
> > You would have to do it in tree, by making the existing commands smart
> > enough to talk to both nova and ironic, first to find the server
> > (which service knows about something with UUID XYZ?) and then to take
> > the appropriate action on that server using the right client. So it
> > could be done, but it might lose some of the nuance between the server
> > types by munging them into the same command. I don't know what sorts
> > of operations are different, but it would be worth doing the analysis
> > to see.
> >
> > I do have an experimental plugin that hooks the server create command to
> add some options and change its behaviour so it is possible, but right now I
> wouldn't call it supported at all.  That might be something that we could
> consider doing though for things like this.
> >
> > The current model for commands calling multiple project APIs is to put
> them in openstackclient.common, so yes, in-tree.
> >
> > Overall, though, to stay consistent with OSC you would map operations
> into the current verbs as much as possible.  It is best to think in terms of how
> the CLI user is thinking and what she wants to do, and not how the REST or
> Python API is written.  In this case, 'baremetal' is a type of server, a set of
> attributes of a server, etc.  As mentioned earlier, containers will also have a
> similar paradigm to consider.
> >
> > dt
> >
> 

From colleen at gazlene.net  Wed Sep  2 18:53:26 2015
From: colleen at gazlene.net (Colleen Murphy)
Date: Wed, 2 Sep 2015 11:53:26 -0700
Subject: [openstack-dev] [puppet] hosting developer documentation on
	http://docs.openstack.org/developer/
In-Reply-To: <55E73B67.9020802@redhat.com>
References: <55E73B67.9020802@redhat.com>
Message-ID: <CAJkgcEmsJmWa3u2+Yg-ujOHO9ZOSTYGpH_whpTLzJJxptd24qw@mail.gmail.com>

On Wed, Sep 2, 2015 at 11:09 AM, Emilien Macchi <emilien at redhat.com> wrote:

> TL;DR, I propose to move our developer documentation from wiki to
> something like http://docs.openstack.org/developer/puppet-openstack
>
> (Look at http://docs.openstack.org/developer/tempest/ for example).
>
> For now, most of our documentation is on
> https://wiki.openstack.org/wiki/Puppet but I think it would be great to
> use RST format and Gerrit so anyone could submit documentation
> contributions like we do for code.
>
> I propose a basic table of contents now:
> Puppet modules introductions
> Coding Guide
> Reviewing code
>
> I'm taking the opportunity of the puppet sprint to run this discussion
> and maybe start some work if people agree to move on.
>
> Thanks,
> --
> Emilien Macchi
>
Please consider the Puppet Approved criteria[1] when making decisions about
documentation. In particular, we should be making sure the README contained
within the module is complete. Publishing .rst docs to docs.o.o is not a
substitute.

The READMEs and examples/ in our modules are generally inaccurate or out of
date. We should focus on enhancing the content of our docs before worrying
about the logistics of publishing them.

Colleen

[1] https://forge.puppetlabs.com/approved/criteria

From emilien at redhat.com  Wed Sep  2 18:56:23 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 2 Sep 2015 14:56:23 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
In-Reply-To: <55E74462.6030808@anteaya.info>
References: <55E73B67.9020802@redhat.com> <55E74462.6030808@anteaya.info>
Message-ID: <55E74657.90904@redhat.com>



On 09/02/2015 02:48 PM, Anita Kuno wrote:
> On 09/02/2015 02:09 PM, Emilien Macchi wrote:
>> TL;DR, I propose to move our developer documentation from wiki to 
>> something like 
>> http://docs.openstack.org/developer/puppet-openstack
> 
>> (Look at http://docs.openstack.org/developer/tempest/ for 
>> example).
> 
> Looking at the tempest example:
> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source
> we see that the .rst files all live in the tempest repo in doc/source
> (with the exception of the README.rst file, which is referenced from
> within doc/source when required:
> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source/overview.rst)
> 
> So question: Where should the source .rst files for puppet developer
> documentation live? They will need a home.

I guess we would need a new repository for that.
It could be named puppet-openstack-doc (KISS)
or something else; any suggestion is welcome.
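For comparison, a minimal doc tree in the tempest style could look like the sketch below (the repository name puppet-openstack-doc and the page names are assumptions matching the proposed table of contents, not decisions):

```shell
# Hypothetical skeleton mirroring tempest's doc/source convention;
# all names here are illustrative, not agreed upon.
mkdir -p puppet-openstack-doc/doc/source
cat > puppet-openstack-doc/doc/source/index.rst <<'EOF'
Puppet OpenStack Developer Documentation
========================================

.. toctree::
   :maxdepth: 2

   introduction
   coding-guide
   reviewing-code
EOF
ls puppet-openstack-doc/doc/source
```

Each entry in the toctree would then map to one .rst file in doc/source, reviewed through Gerrit like code.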

> 
> Thanks,
> Anita.
> 
> 
>> For now, most of our documentation is on 
>> https://wiki.openstack.org/wiki/Puppet but I think it would be 
>> great to use RST format and Gerrit so anyone could submit 
>> documentation contributions like we do for code.
> 
>> I propose a basic table of contents now: Puppet modules 
>> introductions Coding Guide Reviewing code
> 
>> I'm taking the opportunity of the puppet sprint to run this 
>> discussion and maybe start some work if people agree to move on.
> 
>> Thanks,
> 
> 
> 
> 

-- 
Emilien Macchi


From emilien at redhat.com  Wed Sep  2 19:00:50 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 2 Sep 2015 15:00:50 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
In-Reply-To: <CAJkgcEmsJmWa3u2+Yg-ujOHO9ZOSTYGpH_whpTLzJJxptd24qw@mail.gmail.com>
References: <55E73B67.9020802@redhat.com>
 <CAJkgcEmsJmWa3u2+Yg-ujOHO9ZOSTYGpH_whpTLzJJxptd24qw@mail.gmail.com>
Message-ID: <55E74762.5040206@redhat.com>



On 09/02/2015 02:53 PM, Colleen Murphy wrote:
> 
> 
> On Wed, Sep 2, 2015 at 11:09 AM, Emilien Macchi <emilien at redhat.com
> <mailto:emilien at redhat.com>> wrote:
> 
>     TL;DR, I propose to move our developer documentation from wiki to
>     something like http://docs.openstack.org/developer/puppet-openstack
> 
>     (Look at http://docs.openstack.org/developer/tempest/ for example).
> 
>     For now, most of our documentation is on
>     https://wiki.openstack.org/wiki/Puppet but I think it would be great to
>     use RST format and Gerrit so anyone could submit documentation
>     contributions like we do for code.
> 
>     I propose a basic table of contents now:
>     Puppet modules introductions
>     Coding Guide
>     Reviewing code
> 
>     I'm taking the opportunity of the puppet sprint to run this discussion
>     and maybe start some work if people agree to move on.
> 
>     Thanks,
>     --
>     Emilien Macchi
> 
> Please consider the Puppet Approved criteria[1] when making decisions
> about documentation. In particular, we should be making sure the README
> contained within the module is complete. Publishing .rst docs to
> docs.o.o is not a substitute.

+1 for how to use and consume our puppet modules.
But my proposal is about developer documentation, which covers coding
style and code review manuals, not how to deploy the puppet-* modules
themselves. Tell me if I'm wrong and this should also live in the
READMEs, but I'm not sure it belongs there.

AFAIK, Hunter and Cody are working on improving the README docs during
this sprint to get the modules approved.

> The READMEs and examples/ in our modules are generally inaccurate or out
> of date. We should focus on enhancing the content of our docs before
> worrying about the logistics of publishing them.
> 
> Colleen 
> 
> [1] https://forge.puppetlabs.com/approved/criteria
> 
> 
> 

-- 
Emilien Macchi


From robert.clark at hp.com  Wed Sep  2 19:13:09 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Wed, 2 Sep 2015 19:13:09 +0000
Subject: [openstack-dev] [Security] Weekly meeting cancelled due to Mid-Cycle
Message-ID: <D20C9854.29E9C%robert.clark@hp.com>

Security folks,

Tomorrow's weekly meeting is cancelled because many of us are attending the mid-cycle.

-Rob



From doc at aedo.net  Wed Sep  2 19:22:29 2015
From: doc at aedo.net (Christopher Aedo)
Date: Wed, 2 Sep 2015 12:22:29 -0700
Subject: [openstack-dev] [app-catalog] IRC Meeting Thursday September 3rd at
	17:00UTC
Message-ID: <CA+odVQEdupZEYBVKQ3DFzNQg6Mr7pDuDjpiWVGhQFHy_D6TNPQ@mail.gmail.com>

Greetings! Our next OpenStack App Catalog meeting will take place this
Thursday September 3rd at 17:00 UTC in #openstack-meeting-3

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Please add agenda items if there's anything specific you would like to
discuss (or, if the meeting time is not convenient for you, join us any
time on IRC in #openstack-app-catalog).

Please join us if you can!

-Christopher


From doug at doughellmann.com  Wed Sep  2 19:29:02 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 02 Sep 2015 15:29:02 -0400
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010A3CC061@CERNXCHG44.cern.ch>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com> <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3C006E@CERNXCHG44.cern.ch>
 <1441203556-sup-2613@lrrr.local>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3CC061@CERNXCHG44.cern.ch>
Message-ID: <1441221251-sup-1972@lrrr.local>

Excerpts from Tim Bell's message of 2015-09-02 18:50:57 +0000:
> > -----Original Message-----
> > From: Doug Hellmann [mailto:doug at doughellmann.com]
> > Sent: 02 September 2015 16:21
> > To: openstack-dev <openstack-dev at lists.openstack.org>
> > Subject: Re: [openstack-dev] [Ironic] Command structure for OSC plugin
> > 
> > Excerpts from Tim Bell's message of 2015-09-02 05:15:35 +0000:
> > > That would be great to have plugins on the commands which are relevant
> > to multiple projects - avoiding exposing all of the underlying projects as
> > prefixes and getting more consistency would be very appreciated by the
> > users.
> > 
> > That works in some cases, but in a lot of cases things that are superficially
> > similar have important differences in the inputs they take. In those cases, it's
> > better to be explicit about the differences than to force the concepts together
> > in a way that makes OSC present only the least-common denominator
> > interface.
> > 
> 
> I think the differences are in the options rather than the prefixes. Thus, if I want to create a bare metal server, I should be able to use 'openstack create' rather than 'openstack ironic create'. The implications for options etc. clearly depend on the target environment.
> 
> I would simply like to avoid the OSC becoming a set of project prefixes, i.e. where you need to know that ironic is for bare metal. If options are presented that are not supported in the desired context, they should be rejected.

This is the long-standing debate over whether it's better to constrain
the inputs up front, or accept anything and then validate it and
reject bad inputs after they are presented. My UI training always
indicated that helping the user get the inputs right up front was better,
and that's what I think we're trying to do with OSC.

Having an "openstack server create" command that works for all
services that produce things that look like servers means the user
has to somehow determine which of the options are related to which
of the types of servers. We can do some work to group options in
help output, and express which are mutually exclusive, but that
only goes so far. At some point the user ends up having to know
that when "--type baremetal" is provided, the "--container-type"
option used for containers isn't valid at all. There's no way to
express that level of detail in the tools we're using right now,
in part because it results in a more complex UI.

Having an "openstack baremetal create" command is more like the
up-front constraint, because the user can use --help to discover
exactly which options are valid for a baremetal server. It shifts
that "--type baremetal" option into the command name, where the
tools we use to build OSC can let us express exactly which other
options are valid, and (implicitly) which are not.

Placing "baremetal" as the first part of the command was an intentional
choice -- we call this the "noun first" form, versus the "verb
first" form "create baremetal". We considered that the user understands
what kind of resource they're trying to issue a command on, but may
not know all of the commands available for that resource type. With
tab-completion enabled, "openstack baremetal<TAB>" will give them
the list of commands for doing anything to baremetal servers. It
also means all of the commands for a given resource type are listed
together in the global help output where we list all available
subcommands.
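The noun-first, options-per-resource idea can be sketched outside of OSC's real machinery (which uses cliff command classes registered through entry points); the snippet below is only an argparse illustration of the principle, with all names invented for the example:

```python
# Minimal argparse sketch of "noun first": each resource type owns a
# subparser, so only the options valid for that resource are accepted
# and anything else is rejected up front by the parser itself.
import argparse

parser = argparse.ArgumentParser(prog="openstack")
resources = parser.add_subparsers(dest="resource")

# "openstack baremetal create" knows only baremetal-relevant options ...
baremetal = resources.add_parser("baremetal").add_subparsers(dest="verb")
bm_create = baremetal.add_parser("create")
bm_create.add_argument("--driver")

# ... while "openstack container create" has its own, disjoint set.
container = resources.add_parser("container").add_subparsers(dest="verb")
ct_create = container.add_parser("create")
ct_create.add_argument("--container-type")

args = parser.parse_args(["baremetal", "create", "--driver", "ipmi"])
# Passing --container-type on the baremetal branch would be an error.
print(args.resource, args.verb, args.driver)
```

A single "server create" with a "--type" flag, by contrast, would have to accept the union of all options and validate the combinations afterwards.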

> 
> At CERN, we're using OSC as the default CLI. This is partly because the support for Keystone v3 API is much more advanced but also because we do not want our end users to know the list of OpenStack projects, only the services we are offering them.

Using resource type names rather than services is definitely
preferred. That falls down when we have 2 services providing similar
(or the same) resource types. For example, I could see some overlap
in command names and resource types for Cue and Zaqar.

Doug

> 
> Tim
> 
> > Doug
> > 
> > >
> > > Tim
> > >
> > > From: Dean Troyer [mailto:dtroyer at gmail.com]
> > > Sent: 01 September 2015 22:47
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > <openstack-dev at lists.openstack.org>
> > > Subject: Re: [openstack-dev] [Ironic] Command structure for OSC plugin
> > >
> > > [late catch-up]
> > >
> > > On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann
> > <doug at doughellmann.com<mailto:doug at doughellmann.com>> wrote:
> > > Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
> > > > On 24/08/15 18:19 +0000, Tim Bell wrote:
> > > > >
> > > > >From a user perspective, where bare metal and VMs are just different
> > flavors (with varying capabilities), can we not use the same commands
> > (server create/rebuild/...) ? Containers will create the same conceptual
> > problems.
> > > > >
> > > > >OSC can provide a converged interface but if we just replace '$ ironic
> > XXXX' by '$ openstack baremetal XXXX', this seems to be a missed
> > opportunity to hide the complexity from the end user.
> > > > >
> > > > >Can we re-use the existing server structures ?
> > >
> > > I've wondered about how users would see doing this, we've done it already
> > with the quota and limits commands (blurring the distinction between
> > project APIs).  At some level I am sure users really do not care about some of
> > our project distinctions.
> > >
> > > > To my knowledge, overriding or enhancing existing commands like that
> > > > is not possible.
> > >
> > > You would have to do it in tree, by making the existing commands smart
> > > enough to talk to both nova and ironic, first to find the server
> > > (which service knows about something with UUID XYZ?) and then to take
> > > the appropriate action on that server using the right client. So it
> > > could be done, but it might lose some of the nuance between the server
> > > types by munging them into the same command. I don't know what sorts
> > > of operations are different, but it would be worth doing the analysis
> > > to see.
> > >
> > > I do have an experimental plugin that hooks the server create command to
> > add some options and change its behaviour so it is possible, but right now I
> > wouldn't call it supported at all.  That might be something that we could
> > consider doing though for things like this.
> > >
> > > The current model for commands calling multiple project APIs is to put
> > them in openstackclient.common, so yes, in-tree.
> > >
> > > Overall, though, to stay consistent with OSC you would map operations
> > into the current verbs as much as possible.  It is best to think in terms of how
> > the CLI user is thinking and what she wants to do, and not how the REST or
> > Python API is written.  In this case, 'baremetal' is a type of server, a set of
> > attributes of a server, etc.  As mentioned earlier, containers will also have a
> > similar paradigm to consider.
> > >
> > > dt
> > >
> > 


From jpeeler at redhat.com  Wed Sep  2 19:42:20 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Wed, 2 Sep 2015 15:42:20 -0400
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
	multiple compute drivers?
Message-ID: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>

Hi folks,

I'm currently looking at supporting Ironic in the Kolla project [1], but
was unsure if it would be possible to run separate instances of nova
compute and controller (and scheduler too?) to enable both baremetal and
libvirt type deployments. I found this mailing list post from two years ago
[2], asking the same question. The last response in the thread seemed to
indicate work was being done on the scheduler to support multiple
configurations, but the review [3] ended up abandoned.

Are the current requirements the same? Perhaps using two availability zones
would work, but I'm not clear if that works on the same host.

[1] https://review.openstack.org/#/c/219747/
[2] http://lists.openstack.org/pipermail/openstack/2013-August/000504.html
[3] https://review.openstack.org/#/c/37407/
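For reference, one commonly described pattern at the time (a sketch based on general nova usage, not something decided in Kolla) was to run a second nova-compute service whose own config file selects the ironic driver, then steer instances between the two with flavors or host aggregates. The file names below are hypothetical:

```ini
# /etc/nova/nova-compute-libvirt.conf (hypothetical file name)
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

# /etc/nova/nova-compute-ironic.conf (hypothetical file name)
[DEFAULT]
compute_driver = nova.virt.ironic.IronicDriver
# The ironic-backed compute service reports the baremetal resources;
# scheduling between the two is then handled via flavors or aggregates.
```

Whether both services can share one host, as the question above asks, depends on the scheduler behaviour of the release in use.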

From aallen at Brocade.COM  Wed Sep  2 19:45:11 2015
From: aallen at Brocade.COM (Angela Smith)
Date: Wed, 2 Sep 2015 19:45:11 +0000
Subject: [openstack-dev] [cinder] Brocade CI
In-Reply-To: <a0bc1b18fcc44c80bb18ae1e2f2bb879@HQ1WP-EXMB12.corp.brocade.com>
References: <20150810003458.GA28937@gmail.com>
 <CAJadgubOympc+gfLm=Vj4OWFdN+ihLk6Xm2AQ9-pTeiL_9abvA@mail.gmail.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D169587162E0@G4W3223.americas.hpqcorp.net>
 <CAJadguYgKANkKdoaj+ikswV-wMXXsgKm2j1ni7hpObs7Y42srQ@mail.gmail.com>
 <4d7d2aa1dad748ef983c85e21467003d@HQ1WP-EXMB12.corp.brocade.com>
 <0523ed8f60d449928e7d96b0a4c9dbbe@HQ1WP-EXMB12.corp.brocade.com>
 <a0bc1b18fcc44c80bb18ae1e2f2bb879@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <46eca450e5734a84bf86e32ee6ca1f61@HQ1WP-EXMB12.corp.brocade.com>

Mike,

Brocade OpenStack CI is now complying with the requirements mentioned in your mail.

1. Reporting success/failure. (FYI, we had been doing this prior to your email, but the results were not visible via the lastcomment script unless the -m option was used.)

2. A link to the logs is now posted in the comment and in the right-hand results column, per the message format requirement.

3. Reporting consistently on all patchsets.

4. Recheck is working.

5. The failure message contains the wiki link and the recheck message.

6. Brocade OpenStack CI will remain non-voting.

Thanks,
Angela

From: Angela Smith
Sent: Thursday, August 27, 2015 2:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: RE: [openstack-dev] [cinder] Brocade CI

The full results of lastcomment script are here for last 400 commits: [1][2]

[1] http://paste.openstack.org/show/430074/
[2] http://paste.openstack.org/show/430088/


From: Angela Smith
Sent: Thursday, August 27, 2015 1:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: RE: [openstack-dev] [cinder] Brocade CI

Mike,
An update on Brocade CI progress. We are now using the format required for results to show up in the lastcomment script.
We have been consistently reporting for the last 9 days. See results here: [1].
We are still working on resolving the recheck issue and on adding a link to the wiki page in the failed-result comment message. An update will be sent when that is completed.
Thanks,
Angela

[1] http://paste.openstack.org/show/430074/

From: Angela Smith
Sent: Friday, August 21, 2015 1:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: RE: [openstack-dev] [cinder] Brocade CI

Mike,
I wanted to update you on our progress on the Brocade CI.
We are currently working on the remaining requirements: adding recheck support and adding a link to the wiki page for failed results.
Also, the CI has been consistently testing and reporting on all cinder reviews for the past 3 days.
Thanks,
Angela

From: Nagendra Jaladanki [mailto:nagendra.jaladanki at gmail.com]
Sent: Thursday, August 13, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: DL-GRP-ENG-Brocade-Openstack-CI
Subject: Re: [openstack-dev] [cinder] Brocade CI

Ramy,
Thanks for providing the correct message. We will update our commit message accordingly.
Thanks,
Nagendra Rao

On Thu, Aug 13, 2015 at 4:43 PM, Asselin, Ramy <ramy.asselin at hp.com<mailto:ramy.asselin at hp.com>> wrote:
Hi Nagendra,

It seems one of the issues is the format of the posted comments; the correct format is documented here [1].

Notice the format is not correct:
Incorrect: Brocade Openstack CI (non-voting) build SUCCESS logs at: http://144.49.208.28:8000/build_logs/2015-08-13_18-19-19/
Correct: * test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] some comment about the test

Ramy

[1] http://docs.openstack.org/infra/system-config/third_party.html#posting-result-to-gerrit
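The required comment line can be produced mechanically; the sketch below illustrates the documented format, with the function name, arguments, and sample values all invented for the example:

```python
# Sketch of emitting a Gerrit CI result line in the documented shape:
# "* test-name-no-spaces http://link.to/result : [SUCCESS|FAILURE] comment"
def format_ci_result(test_name, log_url, passed, note=""):
    status = "SUCCESS" if passed else "FAILURE"
    # Spaces in the test name are not allowed, so replace them.
    line = "* {} {} : {}".format(test_name.replace(" ", "-"), log_url, status)
    if note:
        line += " " + note
    return line

print(format_ci_result("brocade-dsvm-tempest",
                       "http://logs.example.com/2015-08-13/", True))
```

A failure line would additionally carry the wiki link and recheck instructions in the trailing comment.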

From: Nagendra Jaladanki [mailto:nagendra.jaladanki at gmail.com<mailto:nagendra.jaladanki at gmail.com>]
Sent: Wednesday, August 12, 2015 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Cc: Brocade-Openstack-CI at brocade.com<mailto:Brocade-Openstack-CI at brocade.com>
Subject: Re: [openstack-dev] [cinder] Brocade CI

Mike,

Thanks for your feedback and suggestions. I sent my response yesterday, but it looks like it didn't get posted to lists.openstack.org<http://lists.openstack.org>, hence posting it here again.

We reviewed your comments; the following issues were identified. Some are fixed, and fixes for the others are in progress:

1) Not posting success or failure.
   The Brocade CI is non-voting. The CI is posting comments for build successes and failures, but the report tool is not seeing them. We are working on correcting this.
2) Not posting a result link to view logs.
   We could not find any cases in the generated report where the CI failed to post the link to the logs. If you have any specific cases where it failed to post the logs link, please share them with us. We did, however, see that the CI posted no comment at all for some review patch sets; we are root-causing that issue.
3) Not consistently doing runs.
   There were planned downtimes, and the CI did not post during those periods. We also observed that the CI was not posting failures in some cases where it failed due to non-OpenStack issues. We have corrected this; the CI should now post a result, either success or failure, for every patch set.
We are also doing the following:
- Enhancing the message format to be in line with other CIs.
- Closely monitoring incoming Jenkins requests versus outgoing builds and correcting any issues.

Once again, thanks for your feedback and suggestions. We will continue to post updates to this list.

Thanks & Regards,

Nagendra Rao Jaladanki

Manager, Software Engineering Manageability Brocade

130 Holger Way, San Jose, CA 95134

On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez <thingee at gmail.com<mailto:thingee at gmail.com>> wrote:
People have asked me at the Cinder midcycle sprint to look at the Brocade CI
to:

1) Keep the zone manager driver in Liberty.
2) Consider approving additional specs that we're submitted before the
   deadline.

Here are the current problems with the last 100 runs [1]:

1) Not posting success or failure.
2) Not posting a result link to view logs.
3) Not consistently doing runs. If you compare with other CIs, there are plenty
   of runs missing in a day.

This CI does not follow the guidelines [2]. Please get help [3].

[1] - http://paste.openstack.org/show/412316/
[2] - http://docs.openstack.org/infra/system-config/third_party.html#requirements
[3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions

--
Mike Perez



From christophe.sauthier at objectif-libre.com  Wed Sep  2 20:03:03 2015
From: christophe.sauthier at objectif-libre.com (Christophe Sauthier)
Date: Wed, 02 Sep 2015 22:03:03 +0200
Subject: [openstack-dev] [ CloudKitty ] New release
Message-ID: <d4cf2e7b1ce6a48eefc648a4cef79d2f@objectif-libre.com>

We are really pleased to announce that a few days ago the latest
CloudKitty version (0.4.1) was released.
For those of you who don't know CloudKitty, it is an additional
component for OpenStack that tackles the pricing and chargeback aspects.
It is fully open source (Apache 2.0), developed in Python, and uses the
same processes and libraries as the rest of the OpenStack components.
CloudKitty is designed like any other OpenStack component, providing an
API, a client and a Horizon integration.

Using CloudKitty you are able to:
- define the price of the different services you are providing
- charge your users for their usage
- give your users the ability to know in advance the price of the
resources they are about to launch
- give your users a report of their past usage
- and many more


If you are interested, please grab the latest release directly from
stackforge:
- https://github.com/stackforge/cloudkitty/releases
- https://github.com/stackforge/cloudkitty-dashboard/releases
- https://github.com/stackforge/python-cloudkittyclient/releases

This release is targeted at Kilo; if you want to try it out in an older
environment you can use virtualenv or your favorite container
solution.
A Docker container will soon be available to help people who want to
try CloudKitty even on older releases of OpenStack.
We are currently working on getting CloudKitty integrated with the
major Linux distributions; meanwhile we have repositories available with
up-to-date packages at http://archive.objectif-libre.com/cloudkitty/
You can find installation instructions here:
http://cloudkitty.readthedocs.org/en/latest/installation.html



We hope you'll like this release. Here is an overview of some of the
major enhancements that we are planning for the Liberty release:
- Keystone auth plugins (less duplicated configuration code)
- Keystone sessions
- Rating module rules management (validity periods, per tenant, etc.)
- Rating modules enabling you to write and manage custom Python code
for your own pricing rules
- Scalability enhancements with the introduction of asynchronous
workers and clustering
- Addition of CSV as a native report output file format
- Documentation improvements
- More code coverage and unit tests
- and many more...


Feel free to join us on #cloudkitty if you have any questions or want 
to take part in the project!

                 The CloudKitty team



----
Christophe Sauthier                       Mail : 
christophe.sauthier at objectif-libre.com
CEO                                       Mob : +33 (0) 6 16 98 63 96
Objectif Libre                            URL : www.objectif-libre.com
Infrastructure et Formations Linux        Twitter : @objectiflibre


From dprince at redhat.com  Wed Sep  2 20:22:17 2015
From: dprince at redhat.com (Dan Prince)
Date: Wed, 02 Sep 2015 16:22:17 -0400
Subject: [openstack-dev] [TripleO][Heat] instance_user fallout,
 keeping the 'heat-admin' user working
Message-ID: <1441225337.1917.16.camel@redhat.com>

We had an IRC discussion today about the 'heat-admin' user in #tripleo.

Upstream Heat recently reverted the 'instance_user' config file option
which we relied on in TripleO to standardize the default (admin) user
on our nodes. It is my understanding that Heat would prefer not to
maintain this option because it causes subtle compatibility issues
across the OpenStack and AWS APIs and the interactions between cloud
-init version, etc. So it was deprecated in Icehouse... and recently
removed in [1].

We could just go with the default distro user (centos, fedora, ubuntu,
etc.) but it would be really nice to standardize on a user name for
maintenance should anyone ever spin up a cloud using multiple distros
or something.

So, a couple of options. We could just go ahead and update our templates
like this [2]. This actually seems pretty clean to me, but it would
require anybody who has created custom firstboot scripts to do the same
(we have proposed docker patches with firstboot scripts that need
similar updates).

Alternately, we could propose that Heat restore the instance_user
feature, or some version of it. We've been using it for a year or two
now, and it has actually been fairly nice to set the default that way.

Thoughts?


[1] https://review.openstack.org/103928 

[2] https://review.openstack.org/#/c/219861/


From gord at live.ca  Wed Sep  2 20:27:39 2015
From: gord at live.ca (gord chung)
Date: Wed, 2 Sep 2015 16:27:39 -0400
Subject: [openstack-dev] [oslo][versionedobjects][ceilometer] explain
 the benefits of ceilometer+versionedobjects
In-Reply-To: <71CA3C44-C5EA-4D29-80CE-5F1046A6D2F5@cisco.com>
References: <BLU437-SMTP766811C9632CFCC37305CDDE6F0@phx.gbl>
 <20150828181856.ec477ff41bb188afb35f2b31@intel.com>
 <BLU437-SMTP74C2826061CBC4631D7108DE6E0@phx.gbl>
 <D205EA21.3535C%ahothan@cisco.com>
 <BLU436-SMTP14A77A6CC7A969283FD695DE6E0@phx.gbl>
 <F66AE3B0-96A6-4792-B322-E31C8299A0FD@cisco.com>
 <BLU436-SMTP79E6616D1950645B7A9769DE6A0@phx.gbl>
 <71CA3C44-C5EA-4D29-80CE-5F1046A6D2F5@cisco.com>
Message-ID: <BLU436-SMTP24497BA1BA54E187DF433CCDE690@phx.gbl>



On 02/09/2015 11:25 AM, Alec Hothan (ahothan) wrote:
>
>
>
>
> On 9/1/15, 11:31 AM, "gord chung" <gord at live.ca> wrote:
>
>> re: serialisation, that probably isn't the biggest concern for
>> Ceilometer performance. the main items are storage -- to be addressed by
>> Gnocchi/tsdb, and polling load. i just thought i'd point out an existing
>> serialisation patch since we were on the topic :-)
> Is there any data measuring the polling load on large scale deployments?
> Was there a plan to reduce the polling load to an acceptable level? If yes could you provide any pointer if any?

i'm not sure any user has provided numbers when raising the issue -- 
just that it's 'high'. this should probably be done in a separate thread 
as i don't want it to get lost in a completely unrelated subject. that 
said, an initial patch to minimise load was done in Liberty[1] and a 
secondary proposal exists for M*[2].

>> conceptually, i would think only the consumers need to know about all 
>> the queues and even then, it should only really need to know about 
>> the ones it understands. the producers (polling agents) can just fire 
>> off to the correct versioned queue and be done... thanks for the 
>> above link (it'll help with discussion/spec design). 
> When everything goes according to plan, any solution can work but this is hardly the case in production, especially at scale.  Here are a few question that may help in the discussion:
> - how are versioned queue named?
> - who creates a versioned queue (producer or consumer?) and who deletes it when no more entity of that version is running?
> - how to make sure a producer is not producing in a queue that has no consumer (a messaging infra like rabbit is designed to decouple producers from consumers)
> - all corner cases of entities (consumers or producers) popping up with newer or older version, and terminating (gracefully or not) during the upgrade/downgrade, what happens to the queues...
>
> IMHO using a simple communication schema (1 topic/queue for all versions) with in-band message versioning is a much less complex proposition than juggling with versioned queues (not to say the former is simple to do). With versioned queues you're kind of trading off the per message versioning with per queue versioning but at the expense of:
> - a complex queue management (if you want to do it right)
> - a not less complex per queue message decoding (since the consumer needs to know how to decode and interpret every message depending on the version of the queue it comes from)
> - a more difficult debug environment (harder to debug multiple queues than 1 queue)
> - and added stress on oslo messaging (due to the use of transient queues)
>
thanks, good items to think about when building spec. will be sure to 
add link when initial draft is ready.

[1] 
https://blueprints.launchpad.net/ceilometer/+spec/resource-metadata-caching
[2] https://review.openstack.org/#/c/209799/
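The in-band versioning approach Alec argues for above can be sketched roughly as follows. All names here (the `handles` decorator, the payload fields, the version strings) are hypothetical and are not Ceilometer's actual wire format:

```python
# Sketch of in-band message versioning: one shared queue, every message
# carries a version field, and the consumer dispatches on it. This avoids
# the per-version queue management problems listed in the quoted mail.

HANDLERS = {}


def handles(version):
    """Register a handler for one message version."""
    def register(fn):
        HANDLERS[version] = fn
        return fn
    return register


@handles('1.0')
def handle_v1(payload):
    return payload['counter_volume']


@handles('2.0')
def handle_v2(payload):
    # hypothetical newer format: the volume moved into a 'measures' list
    return payload['measures'][0]


def consume(message):
    handler = HANDLERS.get(message['version'])
    if handler is None:
        # an older consumer can park or skip messages from newer producers
        raise ValueError('unknown message version %r' % message['version'])
    return handler(message['payload'])


print(consume({'version': '1.0', 'payload': {'counter_volume': 42}}))  # → 42
```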

-- 
gord



From fungi at yuggoth.org  Wed Sep  2 20:40:29 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 2 Sep 2015 20:40:29 +0000
Subject: [openstack-dev] [openstack-announce] [release][nova]
 python-novaclient release 2.27.0 (liberty)
In-Reply-To: <20150902145556.C5132C00018@frontend1.nyi.internal>
References: <20150902145556.C5132C00018@frontend1.nyi.internal>
Message-ID: <20150902204028.GG7955@yuggoth.org>

On 2015-09-02 10:55:56 -0400 (-0400), doug at doughellmann.com wrote:
> We are thrilled to announce the release of:
> 
> python-novaclient 2.27.0: Client library for OpenStack Compute API
[...]

Just as a heads up, there's some indication that this release is
currently broken against many popular service providers (behavior
ranging from 401 unauthorized errors to hanging indefinitely,
apparently because providers filter or do not support version
detection in various ways).

    https://launchpad.net/bugs/1491579

-- 
Jeremy Stanley


From tim at styra.com  Wed Sep  2 20:41:17 2015
From: tim at styra.com (Tim Hinrichs)
Date: Wed, 02 Sep 2015 20:41:17 +0000
Subject: [openstack-dev] [Congress] Feedback on distributed architecture
Message-ID: <CAJjxPAB9D9Hn5Hc-75=fDm+tPHEAZ=oYZ0AXyCB+4mLqDxxiDg@mail.gmail.com>

I ran the basics of our new distributed architecture by Shawn from
Twitter.  Here's his response.


<Tim> We just held a Congress mid-cycle meet-up to discuss its distributed
architecture.  We decided on an architecture where the policy engine runs
in its own process; each datasource driver runs in its own process; the API
runs in its own process; and all communicate over a RabbitMQ-style message
bus.

This would enable you to choose which part of Congress you run on which
physical host.  You could run the policy engine on the same box as the
service you're using policy to protect.  You can build the datasource
driver logic into a datasource and push deltas directly to the RabbitMQ
message bus.  And of course you could run everything on a
Congress-dedicated box and ignore these details.

See any major problems with Congress's new distributed architecture, or
things you think should change?  Do you think that running the policy
engine on the same box as the service you're protecting is a good idea or a
bad one?

<Shawn>I suspect RabbitMQ may end up being a bottleneck but it makes sense
to deal with that when it comes up. Will you be relying on
RabbitMQ-specific features or will any given AMQP broker suffice? Which
version of AMQP will you be using?


<Tim>OpenStack has a wrapper around AMQP buses, and we're using that.  We
haven't gone any further into choices about which plugin to use.

<Shawn>I think the choice of where to run the datasource makes sense,
though I need to give some thought to wiring up my datasources directly to
the AMQP broker instead of submitting via a well-defined REST or thrift API
with realtime payload validation, backpressure, etc. I can't tell whether
my hesitation is legit or just FUD.

Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/0caf91df/attachment.html>

From carl at ecbaldwin.net  Wed Sep  2 20:42:48 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Wed, 2 Sep 2015 14:42:48 -0600
Subject: [openstack-dev] [neutron][db] reviewers: please mind the branch
 a script belongs to
In-Reply-To: <A0ADAA86-9762-4734-98E9-FF29CD617653@redhat.com>
References: <A0ADAA86-9762-4734-98E9-FF29CD617653@redhat.com>
Message-ID: <CALiLy7oV2kE1xHBhjH71Xwuw6nC_aT6bEXX7=hvM47kxSDOm7Q@mail.gmail.com>

Thanks, I learned a thing or two from the document that you linked.
Thanks for reminding us of that.

Carl

On Tue, Sep 1, 2015 at 3:14 AM, Ihar Hrachyshka <ihrachys at redhat.com> wrote:
> Hi reviewers,
>
> several days ago, a semantically expand-only migration script was merged into the contract branch [1]. This is not a disaster, though it would be a minor one if a contract-only migration script were merged into the expand branch.
>
> Please make sure you know the new migration strategy described in [2].
>
> Previously, we introduced a check that validates that we don't mix down_revision heads, linking e.g. an expand script to a contract revision, or vice versa [3]. Apparently, it's not enough.
>
> Ann is looking into introducing another check for the semantic correctness of scripts. I don't believe it can work for all the complex cases we may need to solve manually, but at least it should be able to catch add_* operations in contract scripts, or drop_* operations in the expand branch. Since there may be exceptions to general automation, we may also need a mechanism to disable such a sanity check for specific scripts.
>
> So all in all, I kindly ask everyone to become aware of how we now manage migration scripts, and what that implies for how we review code (e.g. looking at the paths as well as the code of alembic scripts). That is especially important until the check that Ann is looking to implement is merged.
>
> [1]: https://bugs.launchpad.net/neutron/+bug/1490767
> [2]: http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
> [3]: https://review.openstack.org/#/c/206746/
>
> Thanks
> Ihar
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
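As a purely illustrative aside on the expand/contract split Ihar describes: expand scripts only add schema elements and can run while services are live, while contract scripts only remove them. The table and column names below are hypothetical, and a real Neutron script would use alembic's global `op` and live under the matching `expand/` or `contract/` directory; a tiny recording stub stands in here so the sketch is self-contained:

```python
# Illustrative sketch of the split migration branches (hypothetical
# revision contents, not real Neutron migrations).

class RecordingOp(object):
    """Stand-in for alembic's ``op`` that just records operations."""

    def __init__(self):
        self.calls = []

    def add_column(self, table, column):
        self.calls.append(('add_column', table, column))

    def drop_column(self, table, column):
        self.calls.append(('drop_column', table, column))


def expand_upgrade(op):
    # additive only: belongs under .../versions/<cycle>/expand/
    op.add_column('routers', 'flavor_id')


def contract_upgrade(op):
    # destructive: belongs under .../versions/<cycle>/contract/
    op.drop_column('routers', 'legacy_mode')


op = RecordingOp()
expand_upgrade(op)
contract_upgrade(op)
print(op.calls)
```

The review point from the thread is exactly this: an `add_column` appearing in a file under `contract/` (or a `drop_column` under `expand/`) is what the proposed check would flag.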


From german.eichberger at hp.com  Wed Sep  2 21:28:04 2015
From: german.eichberger at hp.com (Eichberger, German)
Date: Wed, 2 Sep 2015 21:28:04 +0000
Subject: [openstack-dev] [Neutron][horizon][neutron][L3][dvr][fwaas]
 FWaaS
In-Reply-To: <55E7220D.4020807@brocade.com>
References: <55E7220D.4020807@brocade.com>
Message-ID: <D20CB7D3.169F0%german.eichberger@hp.com>

Hi Bharath,

I am wondering if you can file this as a launchpad bug, please.

Thanks,
German

From: bharath <bharath at brocade.com<mailto:bharath at brocade.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Wednesday, September 2, 2015 at 9:21 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [Neutron][horizon][neutron][L3][dvr][fwaas] FWaaS

Hi,

Horizon seems to be broken.

When I try to add a new firewall rule, Horizon breaks with a "'NoneType' object has no attribute 'id'" error.
This was fine about 10 hours ago; it seems one of the latest commits broke it.


Traceback in horizon:


2015-09-02 16:15:35.337872     return nodelist.render(context)
2015-09-02 16:15:35.337877   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 903, in render
2015-09-02 16:15:35.337893     bit = self.render_node(node, context)
2015-09-02 16:15:35.337899   File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 79, in render_node
2015-09-02 16:15:35.337903     return node.render(context)
2015-09-02 16:15:35.337908   File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 89, in render
2015-09-02 16:15:35.337913     output = self.filter_expression.resolve(context)
2015-09-02 16:15:35.337917   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 647, in resolve
2015-09-02 16:15:35.337922     obj = self.var.resolve(context)
2015-09-02 16:15:35.337927   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 787, in resolve
2015-09-02 16:15:35.337931     value = self._resolve_lookup(context)
2015-09-02 16:15:35.337936   File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 825, in _resolve_lookup
2015-09-02 16:15:35.337940     current = getattr(current, bit)
2015-09-02 16:15:35.337945   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 59, in attr_string
2015-09-02 16:15:35.337950     return flatatt(self.get_final_attrs())
2015-09-02 16:15:35.337954   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 42, in get_final_attrs
2015-09-02 16:15:35.337959     final_attrs['class'] = self.get_final_css()
2015-09-02 16:15:35.337964   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 47, in get_final_css
2015-09-02 16:15:35.337981     default = " ".join(self.get_default_classes())
2015-09-02 16:15:35.337986   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 792, in get_default_classes
2015-09-02 16:15:35.337991     if not self.url:
2015-09-02 16:15:35.337995   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 756, in url
2015-09-02 16:15:35.338000     url = self.column.get_link_url(self.datum)
2015-09-02 16:15:35.338004   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 431, in get_link_url
2015-09-02 16:15:35.338009     return self.link(datum)
2015-09-02 16:15:35.338014   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/firewalls/tables.py", line 261, in get_policy_link
2015-09-02 16:15:35.338019     kwargs={'policy_id': datum.policy.id})
2015-09-02 16:15:35.338023 AttributeError: 'NoneType' object has no attribute 'id'
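From the traceback, the crash is `datum.policy` being `None` for a rule with no associated policy, so `datum.policy.id` raises `AttributeError`. Below is a minimal None-safe sketch of the link callable; the function name comes from the traceback, but the fix itself is hypothetical and not the actual Horizon patch:

```python
# Hypothetical None-safe rewrite of get_policy_link from
# dashboards/project/firewalls/tables.py (names from the traceback).

def get_policy_link(datum):
    policy = getattr(datum, 'policy', None)
    if policy is not None:
        # A real Horizon table column would build a URL with reverse();
        # returning the kwargs keeps this sketch self-contained.
        return {'policy_id': policy.id}
    return None  # no link: the cell renders as plain text


class RuleWithoutPolicy(object):
    policy = None


print(get_policy_link(RuleWithoutPolicy()))  # → None, no longer raises
```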



Thanks,
bharath


From sgordon at redhat.com  Wed Sep  2 21:37:52 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 2 Sep 2015 17:37:52 -0400 (EDT)
Subject: [openstack-dev] [nfv][telcowg] Issues with vIMS and SBC submissions
In-Reply-To: <1550920089.37129215.1441229071142.JavaMail.zimbra@redhat.com>
Message-ID: <186673128.37146571.1441229872769.JavaMail.zimbra@redhat.com>

Hi Calum,

After the discussion at the end of the meeting I *believe* I have fixed the issues with these two - please review and check I didn't accidentally drop any of your changes:

* SBC: https://review.openstack.org/#/c/176301/
  - Removed the erroneously included usecases/Virtual_IMS_Core.rst (on review it seems this was the same as the one in 179142 below).
  - Attempted to fix the syntax of the literal block around your ASCII flow diagram (seemed to test ok locally...).
* vIMS: https://review.openstack.org/#/c/179142/
  - Removed the dependency on a long-abandoned draft.

Thanks,

-Steve


From mriedem at linux.vnet.ibm.com  Wed Sep  2 21:48:28 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 2 Sep 2015 16:48:28 -0500
Subject: [openstack-dev] [openstack-announce] [release][nova]
 python-novaclient release 2.27.0 (liberty)
In-Reply-To: <20150902204028.GG7955@yuggoth.org>
References: <20150902145556.C5132C00018@frontend1.nyi.internal>
 <20150902204028.GG7955@yuggoth.org>
Message-ID: <55E76EAC.5030103@linux.vnet.ibm.com>



On 9/2/2015 3:40 PM, Jeremy Stanley wrote:
> On 2015-09-02 10:55:56 -0400 (-0400), doug at doughellmann.com wrote:
>> We are thrilled to announce the release of:
>>
>> python-novaclient 2.27.0: Client library for OpenStack Compute API
> [...]
>
> Just as a heads up, there's some indication that this release is
> currently broken by many popular service providers (behavior ranging
> from 401 unauthorized errors to hanging indefinitely due, it seems,
> to filtering or not supporting version detection in various ways).
>
>      https://launchpad.net/bugs/1491579
>

And:

https://bugs.launchpad.net/python-novaclient/+bug/1491325

We have a fix for ^ and I plan on putting in the request for 2.27.1 
tonight once the fix is merged.  That should unblock manila.

For the version discovery bug, we plan on talking about that in the nova 
meeting tomorrow.

-- 

Thanks,

Matt Riedemann



From os.lcheng at gmail.com  Wed Sep  2 22:49:52 2015
From: os.lcheng at gmail.com (Lin Hua Cheng)
Date: Wed, 2 Sep 2015 15:49:52 -0700
Subject: [openstack-dev] [Neutron][horizon][neutron][L3][dvr][fwaas]
	FWaaS
In-Reply-To: <D20CB7D3.169F0%german.eichberger@hp.com>
References: <55E7220D.4020807@brocade.com>
 <D20CB7D3.169F0%german.eichberger@hp.com>
Message-ID: <CABtBEBXujewJTE74OM+=mr_YVQ6J5A8b5JsQ4CcxVKR9FPUAZg@mail.gmail.com>

Opened a launchpad bug for tracking:
https://bugs.launchpad.net/horizon/+bug/1491637

-Lin

On Wed, Sep 2, 2015 at 2:28 PM, Eichberger, German <german.eichberger at hp.com
> wrote:

> Hi Bharath,
>
> I am wondering if you can file this as a launchpad bug, please.
>
> Thanks,
> German
>
> From: bharath <bharath at brocade.com<mailto:bharath at brocade.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> Date: Wednesday, September 2, 2015 at 9:21 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> Subject: [openstack-dev] [Neutron][horizon][neutron][L3][dvr][fwaas] FWaaS
>
> Hi,
>
> Horizon seems to be broken.
>
> When I try to add a new firewall rule, Horizon breaks with a "'NoneType'
> object has no attribute 'id'" error.
> This was fine about 10 hours ago; it seems one of the latest commits broke
> it.
>
>
> Traceback in horizon:
>
>
> 2015-09-02 16:15:35.337872     return nodelist.render(context)
> 2015-09-02 16:15:35.337877   File
> "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 903,
> in render
> 2015-09-02 16:15:35.337893     bit = self.render_node(node, context)
> 2015-09-02 16:15:35.337899   File
> "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 79,
> in render_node
> 2015-09-02 16:15:35.337903     return node.render(context)
> 2015-09-02 16:15:35.337908   File
> "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 89,
> in render
> 2015-09-02 16:15:35.337913     output =
> self.filter_expression.resolve(context)
> 2015-09-02 16:15:35.337917   File
> "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 647,
> in resolve
> 2015-09-02 16:15:35.337922     obj = self.var.resolve(context)
> 2015-09-02 16:15:35.337927   File
> "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 787,
> in resolve
> 2015-09-02 16:15:35.337931     value = self._resolve_lookup(context)
> 2015-09-02 16:15:35.337936   File
> "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 825,
> in _resolve_lookup
> 2015-09-02 16:15:35.337940     current = getattr(current, bit)
> 2015-09-02 16:15:35.337945   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py",
> line 59, in attr_string
> 2015-09-02 16:15:35.337950     return flatatt(self.get_final_attrs())
> 2015-09-02 16:15:35.337954   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py",
> line 42, in get_final_attrs
> 2015-09-02 16:15:35.337959     final_attrs['class'] = self.get_final_css()
> 2015-09-02 16:15:35.337964   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py",
> line 47, in get_final_css
> 2015-09-02 16:15:35.337981     default = "
> ".join(self.get_default_classes())
> 2015-09-02 16:15:35.337986   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
> line 792, in get_default_classes
> 2015-09-02 16:15:35.337991     if not self.url:
> 2015-09-02 16:15:35.337995   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
> line 756, in url
> 2015-09-02 16:15:35.338000     url = self.column.get_link_url(self.datum)
> 2015-09-02 16:15:35.338004   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py",
> line 431, in get_link_url
> 2015-09-02 16:15:35.338009     return self.link(datum)
> 2015-09-02 16:15:35.338014   File
> "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/firewalls/tables.py",
> line 261, in get_policy_link
> 2015-09-02 16:15:35.338019     kwargs={'policy_id': datum.policy.id})
> 2015-09-02 16:15:35.338023 AttributeError: 'NoneType' object has no
> attribute 'id'
>
>
>
> Thanks,
> bharath
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/051bad1f/attachment.html>

From dtroyer at gmail.com  Wed Sep  2 23:04:55 2015
From: dtroyer at gmail.com (Dean Troyer)
Date: Wed, 2 Sep 2015 18:04:55 -0500
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <1441221251-sup-1972@lrrr.local>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com>
 <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3C006E@CERNXCHG44.cern.ch>
 <1441203556-sup-2613@lrrr.local>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3CC061@CERNXCHG44.cern.ch>
 <1441221251-sup-1972@lrrr.local>
Message-ID: <CAOJFoEtu4eTRD18kEG+18Q0QmOWHoFgSE=jw=9sv5A1D4fZvTw@mail.gmail.com>

On Wed, Sep 2, 2015 at 2:29 PM, Doug Hellmann <doug at doughellmann.com> wrote:

> Excerpts from Tim Bell's message of 2015-09-02 18:50:57 +0000:
> > I think the difference is in the options rather than the prefixes.
> > Thus, if I want to create a bare metal server, I should be able to use
> > 'openstack create' rather than 'openstack ironic create'. The various
> > implications on options etc. are clearly dependent on the target
> > environment.
> >
> > I would simply like to avoid the OSC becoming a prefix, i.e. you
> > need to know that ironic is for baremetal. If options are presented which
> > are not supported in the desired context, they should be rejected.
>
> This is the long-standing debate over whether it's better to constrain
> the inputs up front, or accept anything and then validate it and
> reject bad inputs after they are presented. My UI training always
> indicated that assisting to get the inputs right up front was better,
> and that's what I think we're trying to do with OSC.
>
> Having an "openstack server create" command that works for all
> services that produce things that look like servers means the user
> has to somehow determine which of the options are related to which
> of the types of servers. We can do some work to group options in
> help output, and express which are mutually exclusive, but that
> only goes so far. At some point the user ends up having to know
> that when "--type baremetal" is provided, the "--container-type"
> option used for containers isn't valid at all. There's no way to
> express that level of detail in the tools we're using right now,
> in part because it results in a more complex UI.
>

To do this now requires manually implementing it in the commands
themselves, but I am willing to do that, as I do think the end result is a
lower-footprint UI.  The biggest problem we have is the help output; that
is not solved yet, but we have been clear in the documentation when options
are only applicable with or without other options also present.


> Having an "openstack baremetal create" command is more like the
> up-front constraint, because the user can use --help to discover
> exactly which options are valid for a baremetal server. It shifts
> that "--type baremetal" option into the command name, where the
> tools we use to build OSC can let us express exactly which other
> options are valid, and (implicitly) which are not.
>

We have started using multi-word nouns for disambiguation; in this case,
I would suggest 'baremetal server', as 'baremetal' is not a thing by
itself.  I think this is an acceptable compromise, as it still contains
'server' and the concepts involved are fundamentally the same thing.
 'server create --type baremetal' would still be my preference, even with
it being more work inside OSC/plugins.
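The two UI styles being debated can be contrasted with a small stdlib-argparse sketch. The option names are hypothetical, and OSC itself is built on cliff rather than raw argparse; this only illustrates the up-front-constraint point:

```python
# Contrast: one generic "server create" command that must validate option
# combinations after parsing, vs. a dedicated subcommand whose parser
# constrains the inputs up front (and whose --help shows only valid options).
import argparse

# Style 1: one command accepting every option.
flat = argparse.ArgumentParser(prog='openstack server create')
flat.add_argument('--type', choices=['vm', 'baremetal', 'container'])
flat.add_argument('--container-type')  # only meaningful for containers

args = flat.parse_args(['--type', 'baremetal', '--container-type', 'docker'])
# Nothing stopped the invalid combination; the command has to reject it itself.
invalid = args.type == 'baremetal' and args.container_type is not None

# Style 2: the type lives in the command name, so the subcommand's parser
# simply never exposes --container-type.
top = argparse.ArgumentParser(prog='openstack')
sub = top.add_subparsers(dest='cmd')
sub.add_parser('baremetal-server-create')

ns = top.parse_args(['baremetal-server-create'])
print(invalid, ns.cmd)
```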

dt

-- 

Dean Troyer
dtroyer at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/4160b969/attachment.html>

From kunalhgandhi at gmail.com  Wed Sep  2 23:21:58 2015
From: kunalhgandhi at gmail.com (Gandhi, Kunal)
Date: Wed, 2 Sep 2015 16:21:58 -0700
Subject: [openstack-dev] [kosmos][designate][lbaas] Intial Kosmos source
	files for review in gerrit
In-Reply-To: <5AB7669C-C2DE-4A95-8C6D-DCE3D1860EB0@gmail.com>
References: <5AB7669C-C2DE-4A95-8C6D-DCE3D1860EB0@gmail.com>
Message-ID: <7C316D06-D32D-4172-8682-DA250F1CF18B@gmail.com>

A gentle reminder to please take a look at the code review and provide feedback.

Regards
Kunal

> On Aug 30, 2015, at 8:26 PM, Gandhi, Kunal <kunalhgandhi at gmail.com> wrote:
> 
> Hi All
> 
> I have checked in the initial source files to gerrit for review.
> 
> https://review.openstack.org/#/c/218709/ <https://review.openstack.org/#/c/218709/>
> 
> Please take a look at it and provide feedback.
> 
> Regards
> Kunal
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/603c9a17/attachment.html>

From mriedem at linux.vnet.ibm.com  Wed Sep  2 23:40:15 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 2 Sep 2015 18:40:15 -0500
Subject: [openstack-dev] subunit2html location on images changing
In-Reply-To: <20150826222242.GA31428@sazabi.kortar.org>
References: <20150826222242.GA31428@sazabi.kortar.org>
Message-ID: <55E788DF.9040305@linux.vnet.ibm.com>



On 8/26/2015 5:22 PM, Matthew Treinish wrote:
> Hi Everyone,
>
> There is a pending change up for review that will move the location of the subunit2html
> jenkins slave script:
>
> https://review.openstack.org/212864/
>
> It switches from a locally installed copy to using the version packaged in
> os-testr which is installed in a system venv. This was done in an effort to
> unify and package up some of the tooling which was copied around locally under
> the covers but is generally useful. If you have a local gate hook
> which is manually calling the script realize that when this change merges and
> takes effect on the next round of nodepool images things will start failing. To
> fix it either update the location in your hook, or an even better solution would
> be to just rely on devstack-gate to do the right thing for you. For example,
> calling out to:
>
> http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/functions.sh#n571
>
> will do the right thing.
>
> Thanks,
>
> Matthew Treinish
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Some things seem to be blowing up because of this:

https://bugs.launchpad.net/openstack-gate/+bug/1491646

-- 

Thanks,

Matt Riedemann



From adidenko at mirantis.com  Wed Sep  2 23:49:40 2015
From: adidenko at mirantis.com (Aleksandr Didenko)
Date: Thu, 3 Sep 2015 02:49:40 +0300
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
 <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
 <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>
Message-ID: <CAOe9ns7CtSmgKuzZu1qvWyh2mU2zCZUvwRCa2DQWaUvpZuPiqQ@mail.gmail.com>

Hi,

+1 from me.

Regards,
Alex

On Wed, Sep 2, 2015 at 5:00 PM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> +1, and also he is in a US timezone, which should help us cover the globe
> with the Continuous Review process.
>
> On Wed, Sep 2, 2015 at 3:45 PM, Dmitry Borodaenko <
> dborodaenko at mirantis.com> wrote:
>
>> Huge +1 from me, Alex has all the best qualities of a core reviewer:
>> great engineer, great communicator, diligent, and patient.
>>
>> On Wed, Sep 2, 2015 at 3:24 PM Jay Pipes <jaypipes at gmail.com> wrote:
>>
>>> I'm not a Fuel core or anything, but +1 from me. Alex has been very
>>> visible in the community and his work on librarian-puppet was a great
>>> step forward for the project.
>>>
>>> Best,
>>> -jay
>>>
>>> On 09/02/2015 04:31 AM, Sergii Golovatiuk wrote:
>>> > Hi,
>>> >
>>> > I would like to nominate Alex Schultz to the Fuel-Library Core team. He's
>>> > been doing a great job writing patches. At the same time, his reviews
>>> > are solid, with comments for further improvements. He's the #3 reviewer and
>>> > #1 contributor with 46 commits over the last 90 days [1]. Additionally, Alex
>>> > has been very active in IRC providing great ideas. His 'librarian'
>>> > blueprint [3] made a big step towards the Puppet community.
>>> >
>>> > Fuel Library, please vote with +1/-1 for approval/objection. Voting
>>> will
>>> > be open until September 9th. This will go forward after voting is
>>> closed
>>> > if there are no objections.
>>> >
>>> > Overall contribution:
>>> > [0] http://stackalytics.com/?user_id=alex-schultz
>>> > Fuel library contribution for last 90 days:
>>> > [1] http://stackalytics.com/report/contribution/fuel-library/90
>>> > List of reviews:
>>> > [2]
>>> >
>>> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
>>> > 'Librarian activities' in mailing list:
>>> > [3]
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html
>>> >
>>> > --
>>> > Best regards,
>>> > Sergii Golovatiuk,
>>> > Skype #golserge
>>> > IRC #holser
>>> >
>>> >
>>> >
>>> __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/ba04694f/attachment.html>

From blak111 at gmail.com  Wed Sep  2 23:58:37 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Wed, 2 Sep 2015 16:58:37 -0700
Subject: [openstack-dev] [Neutron] Un-addressed Port spec and
	implementation
In-Reply-To: <CAG9LJa7qJR2Dr0B8w5=a8YPXtdqkhr3TMBbHaegOvswwRXoN8A@mail.gmail.com>
References: <CAG9LJa7qJR2Dr0B8w5=a8YPXtdqkhr3TMBbHaegOvswwRXoN8A@mail.gmail.com>
Message-ID: <CAO_F6JO-O3O0CRLrLKORjpjjwSOyOwxvnoAW89fytDDo2R1-Cw@mail.gmail.com>

That patch was reverted because it relied on a non-obvious corner case to
work. A port would not get any spoofing prevention if it had no IP
addresses.

At first we reasoned that this would be okay, since the only way to create a
port without IPs was if the network had no subnets, and it doesn't make
sense for Neutron to do L3 protection on a network whose L3 it doesn't
manage. However, this became an issue once a subnet was subsequently added
to the network: a port could still remain without IP addresses, and it
wouldn't have any spoofing prevention. We don't want this kind of corner
case in the API, so we reverted it.

>One possible solution we could do to prevent this is to keep flow entries
that block the port from pretending to have an IP that is already part of
the network (or subnet).

Three issues I can see with this right away:

   - This breaks protection for provider network scenarios where the
   provider has a router that Neutron doesn't know about.
   - It introduces a window of attack where you can send gratuitous ARP for
   all of the IP addresses which aren't in use and collect traffic to new
   ports as they come online before the ARP entries time out.
   - Each L2 agent is now going to require an ARP flow rule for every other
   port's IP/MAC on the same network. This could easily be 10,000+ rules on
   a densely packed node (50 VMs on 200-port networks). Syncing this info
   will need a reliability mechanism to make sure there are no missed
   messages (which would result in vulnerabilities).


Why can't you just use the port security API to disable port security for
the port? If the issue is just that you want MAC spoofing prevention but
nothing else, then we need to adjust the port security API to be more
fine-grained.
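To put a rough number on the third point above (the per-port figures are the
ones cited in the thread; the arithmetic itself is just an illustration):

```python
# Rough estimate of per-node ARP flow state for the scenario above:
# every L2 agent holds one anti-spoofing rule for each *other* port's
# IP/MAC on the same network. Numbers are the ones cited in the thread.
vms_per_node = 50            # densely packed compute node
ports_per_network = 200      # ports on each VM's network
peer_ports = ports_per_network - 1           # all ports except the VM's own
flows_per_node = vms_per_node * peer_ports   # one flow rule per peer port
print(flows_per_node)  # 9950 -- roughly the "10,000+" rules mentioned above
```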

On Wed, Sep 2, 2015 at 1:37 AM, Gal Sagie <gal.sagie at gmail.com> wrote:

> Hello All,
>
> The un-addressed port spec [1] was approved for Liberty.
> I think this spec has good potential to provide very interesting solutions
> for NFV use cases
> but also for multi site connectivity and i would really want to see
> it move forward with the community.
>
> There are some issues we need to discuss regarding L2 population (both for
> the reference
> implementation and for any "SDN" solution), but we can iterate on them.
>
> This email relates to a recent revert [2] that was done to prevent
> spoofing possibility
> due to recent work that was merged.
>
> If i understand the problem correctly, an un-addressed port can now
> perform ARP spoofing
> on an address of a port that already exists in the same network and listen
> to its traffic.
> (a problem which becomes bigger with shared network among tenants)
>
> One possible solution we could do to prevent this is to keep flow entries
> that block the port
> from pretending to have an IP that is already part of the network (or
> subnet).
> So there will be ARP spoofing checks that check the port is not answering
> for an IP that is already
> configured.
> *Any thoughts/comments on that?*
>
> Unrelated to this, i think that an un-address port should work in subnet
> context when it comes
> to L2 population and traffic forwarding, so that un-address port only gets
> traffic for addresses
> that are not found, but are on the same subnet as the un-address port.
> (I understand this is a bigger challenge and is not working with the way
> Neutron networks
> work today, but we can iterate on this as well since its unrelated to the
> security subject)
>
> Thanks
> Gal.
>
> [1]
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/unaddressed-port.rst
> [2] https://review.openstack.org/#/c/218470/
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/66085b67/attachment.html>

From james.slagle at gmail.com  Thu Sep  3 00:32:25 2015
From: james.slagle at gmail.com (James Slagle)
Date: Wed, 2 Sep 2015 20:32:25 -0400
Subject: [openstack-dev] [TripleO][Heat] instance_user fallout,
 keeping the 'heat-admin' user working
In-Reply-To: <1441225337.1917.16.camel@redhat.com>
References: <1441225337.1917.16.camel@redhat.com>
Message-ID: <CAHV77z84o6=S0wJr=q+fNjbmMiytb=-pskib5jX9q3vPxPeJJA@mail.gmail.com>

On Wed, Sep 2, 2015 at 4:22 PM, Dan Prince <dprince at redhat.com> wrote:
> We had an IRC discussion today about the 'heat-admin' user in #tripleo.
>
> Upstream Heat recently reverted the 'instance_user' config file option
> which we relied on in TripleO to standardize the default (admin) user
> on our nodes. It is my understanding that Heat would prefer not to
> maintain this option because it causes subtle compatibility issues
> across the OpenStack and AWS APIs and the interactions between cloud
> -init version, etc. So it was deprecated in Icehouse... and recently
> removed in [1].
>
> We could just go with the default distro user (centos, fedora, ubuntu,
> etc.) but it would be really nice to standardize on a user name for
> maintenance should anyone ever spin up a cloud using multiple distros
> or something.
>
> So a couple of options. We could just go on and update our templates
> like this [2]. This actually seems pretty clean to me, but it would
> require anybody who has created custom firstboot scripts to do the same
> (we have proposed docker patches with firstboot scripts that need
> similar updates).

Yeah, that's the main reason I'm not fond of this approach. It really
feels like it clutters up the firstboot interface, in that anyone who
wants to plug in their own config there has to remember to also
include this snippet. It leads to copying/pasting YAML around, which I
don't think is a great pattern going forward.

It would be nice to have a cleaner separation between the interfaces
that we offer to users and those that need to be reserved/used for
TripleO's own purposes.

I'm not sure of a better solution though other than a native
SoftwareDeployment resource in the templates directly that creates a
known user and reads the ssh keys from the user data (via a script).

Or, what about baking in some static configuration for cloud-init into
our images that creates the known user?
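For the baked-in option, I'm picturing something like this dropped into the
image (an untested sketch; the path is illustrative and the key names would
need checking against the cloud-init docs):

```yaml
# /etc/cloud/cloud.cfg.d/99-heat-admin.cfg (hypothetical path)
users:
  - default
  - name: heat-admin
    gecos: Heat admin user
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    lock_passwd: true
```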

> Alternately, we could propose that Heat revert the instance_user
> feature or some version of it. We've been using that for a year or two
> now and it has actually been fairly nice to set the default that way.

I really liked having the one consistent user no matter the cloud
image you deployed from as well. I'm not sure we could successfully
persuade it to go back in though given it was deprecated in Icehouse.

>
> Thoughts?
>
>
> [1] https://review.openstack.org/103928
>
> [2] https://review.openstack.org/#/c/219861/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--


From dprince at redhat.com  Thu Sep  3 01:04:04 2015
From: dprince at redhat.com (Dan Prince)
Date: Wed, 02 Sep 2015 21:04:04 -0400
Subject: [openstack-dev] [TripleO][Heat] instance_user fallout,
 keeping the 'heat-admin' user working
In-Reply-To: <CAHV77z84o6=S0wJr=q+fNjbmMiytb=-pskib5jX9q3vPxPeJJA@mail.gmail.com>
References: <1441225337.1917.16.camel@redhat.com>
 <CAHV77z84o6=S0wJr=q+fNjbmMiytb=-pskib5jX9q3vPxPeJJA@mail.gmail.com>
Message-ID: <1441242244.4192.19.camel@redhat.com>

On Wed, 2015-09-02 at 20:32 -0400, James Slagle wrote:
> On Wed, Sep 2, 2015 at 4:22 PM, Dan Prince <dprince at redhat.com>
> wrote:
> > We had an IRC discussion today about the 'heat-admin' user in
> > #tripleo.
> > 
> > Upstream Heat recently reverted the 'instance_user' config file
> > option
> > which we relied on in TripleO to standardize the default (admin)
> > user
> > on our nodes. It is my understanding that Heat would prefer not to
> > maintain this option because it causes subtle compatibility issues
> > across the OpenStack and AWS APIs and the interactions between
> > cloud
> > -init version, etc. So it was deprecated in Icehouse... and
> > recently
> > removed in [1].
> > 
> > We could just go with the default distro user (centos, fedora,
> > ubuntu,
> > etc.) but it would be really nice to standardize on a user name for
> > maintenance should anyone ever spin up a cloud using multiple
> > distros
> > or something.
> > 
> > So a couple of options. We could just go on and update our
> > templates
> > like this [2]. This actually seems pretty clean to me, but it would
> > require anybody who has created custom firstboot scripts to do the
> > same
> > (we have proposed docker patches with firstboot scripts that need
> > similar updates).
> 
> Yea, that's the main reason I'm not fond of this approach. It really
> feels like cluttering up the firstboot interface, in that anyone who
> wants to plug in their own config there has to remember to also
> include this snippet. It leads to copying/pasting around yaml, which
> I
> don't think is a great pattern going forward.
> 

Well, ideally we'll have a minimal number of firstboot scripts... in
tree there are only 3 that I can see right now, and the meaningful code
is actually quite small. Just this:

  user_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        user: 'heat-admin'

Any out-of-tree firstboot scripts would need the same implementation
fixes though. It is those that primarily concern me. Still, if we were
to validate this somehow and fail fast, that could be a fair compromise.

> It would be nice to have a cleaner separation between the interfaces
> that we offer to users and those that need to be reserved/used for
> TripleO's own purposes.

Agree. The ability to compose userdata via OS::Heat::MultipartMime does
already exist somewhat. Perhaps we could fashion a better way to
include a global user config (from a heat-admin-user-data.yaml or
something). Haven't tried this yet...
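Something along these lines, maybe (untested; the resource and parameter
names here are illustrative, and `user_config` refers to the CloudConfig
snippet earlier in this thread):

```yaml
  user_data:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: user_config}       # global heat-admin part
        - config: {get_param: FirstbootUserConfig}  # user-supplied firstboot part
```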

> 
> I'm not sure of a better solution though other than a native
> SoftwareDeployment resource in the templates directly that creates a
> known user and reads the ssh keys from the user data (via a script).

This seems like duplication of a feature we get for free w/ cloud-init
though right?

If we wanted to validate the user exists a script (executed via a
SoftwareDeployment) could be useful for that.

> 
> Or, what about baking in some static configuration for cloud-init
> into
> our images that creates the known user?

Agree this would work but it might be undesirable for images like
Atomic which some would like to avoid repackaging for use via TripleO.
At least that is an idea on the table with the Docker implementation...
as in we don't actually create a custom TripleO version of Atomic. It
just works out of the box with the stock version.

> 
> > Alternately, we could propose that Heat revert the instance_user
> > feature or some version of it. We've been using that for a year or
> > two
> > now and it has actually been fairly nice to set the default that
> > way.
> 
> I really liked having the one consistent user no matter the cloud
> image you deployed from as well. I'm not sure we could successfully
> persuade it to go back in though given it was deprecated in Icehouse.

Agree. I miss the old feature quite a bit. Just wanted to give the new
idea a try first before posting a revert...

> 
> > 
> > Thoughts?
> > 
> > 
> > [1] https://review.openstack.org/103928
> > 
> > [2] https://review.openstack.org/#/c/219861/
> > 
> > ___________________________________________________________________
> > _______
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


From nik.komawar at gmail.com  Thu Sep  3 02:11:40 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Wed, 2 Sep 2015 22:11:40 -0400
Subject: [openstack-dev]  [Glance] Feature Freeze Exception proposal
Message-ID: <55E7AC5C.9010504@gmail.com>

Hi,

I wanted to propose the 'Single disk image OVA import' [1] feature for an
exception. This looks like a reasonably safe proposal that should be able
to fit into the extended time period of Liberty. It was discussed at the
Vancouver summit during a work session, the proposal was trimmed down per
the suggestions made there, and it was accepted overall by those present
during the discussions (barring a few changes needed on the spec itself).
Being an addition to the already existing import task, it doesn't involve
an API change or a change to any of the core Image functionality as of now.

Please give your vote: +1 or -1 .

[1] https://review.openstack.org/#/c/194868/

-- 

Thanks,
Nikhil



From openstack at lanabrindley.com  Thu Sep  3 02:55:41 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Thu, 3 Sep 2015 12:55:41 +1000
Subject: [openstack-dev] [nova] [docs] Are cells still 'experimental'?
Message-ID: <55E7B6AD.6050007@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi John,

I noticed while looking through some Nova docs bugs that the Config Ref
lists cells as experimental:

http://docs.openstack.org/kilo/config-reference/content/section_compute-
cells.html

Is this still true?

Thanks,
Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV57atAAoJELppzVb4+KUyjzYIAIain4YZauEcMEMNYfdI74Lj
qmUO4U5kTkg7dFcsW1DJhhPvPjgsJPKRcMFofcZEB7qV+QcCbDx9g691NlB3u1dG
MEOtBq9y5o1PJMPxl8xcbHaOLm028E4f7oUrlODpQs/dlWS8vfXpOeT/CwYsqFG4
lF08/YpvNaNLBytCjbFgFqmQt5I+8gLBmyXgRl06+HflgjYsr6fQyjQzMlVfioPW
5IYg0p+Zj4B/MxRo5xCWph0e9YdeE3CBpqGB33iay06341Sh0cVi0O4QPTZ/f2tA
TbZzskHDKJoEb6kqbz4jMtzoDSr76N4+ltwMynzpCY/I8tyuV+Yj5vIWO79Wo6Q=
=hjD8
-----END PGP SIGNATURE-----


From feilong at catalyst.net.nz  Thu Sep  3 03:25:48 2015
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Thu, 03 Sep 2015 15:25:48 +1200
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <55E7AC5C.9010504@gmail.com>
References: <55E7AC5C.9010504@gmail.com>
Message-ID: <55E7BDBC.6080101@catalyst.net.nz>

+1 It would be nice to have it.

On 03/09/15 14:11, Nikhil Komawar wrote:
> Hi,
>
> I wanted to propose 'Single disk image OVA import' [1] feature proposal
> for exception. This looks like a decently safe proposal that should be
> able to adjust in the extended time period of Liberty. It has been
> discussed at the Vancouver summit during a work session and the proposal
> has been trimmed down as per the suggestions then; has been overall
> accepted by those present during the discussions (barring a few changes
> needed on the spec itself). Being an addition to the already existing
> import task, it doesn't involve an API change or a change to any of the
> core Image functionality as of now.
>
> Please give your vote: +1 or -1 .
>
> [1] https://review.openstack.org/#/c/194868/
>

-- 
Cheers & Best regards,
Fei Long Wang (???)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-------------------------------------------------------------------------- 



From vikschw at gmail.com  Thu Sep  3 03:33:17 2015
From: vikschw at gmail.com (Vikram Choudhary)
Date: Thu, 3 Sep 2015 09:03:17 +0530
Subject: [openstack-dev] [neutron][db] reviewers: please mind the branch
 a script belongs to
In-Reply-To: <CALiLy7oV2kE1xHBhjH71Xwuw6nC_aT6bEXX7=hvM47kxSDOm7Q@mail.gmail.com>
References: <A0ADAA86-9762-4734-98E9-FF29CD617653@redhat.com>
 <CALiLy7oV2kE1xHBhjH71Xwuw6nC_aT6bEXX7=hvM47kxSDOm7Q@mail.gmail.com>
Message-ID: <CAFeBh8s3rAeUAJVM9GouKw15ni7QktMbqdPZMkCeCCjHoF_Aeg@mail.gmail.com>

Thanks for sharing this Ihar!

Thanks
Vikram
On Sep 3, 2015 2:13 AM, "Carl Baldwin" <carl at ecbaldwin.net> wrote:

> Thanks, I learned a thing or two from the document that you linked.
> Thanks for reminding us of that.
>
> Carl
>
> On Tue, Sep 1, 2015 at 3:14 AM, Ihar Hrachyshka <ihrachys at redhat.com>
> wrote:
> > Hi reviewers,
> >
> > several days ago, a semantically expand-only migration script was merged
> into contract branch [1]. This is not a disaster, though it would be a tiny
> one if a contract-only migration script would be merged into expand branch.
> >
> > Please make sure you know the new migration strategy described in [2].
> >
> > Previously, we introduced a check that validates that we don't mix
> down_revision heads, linking e.g. expand script to contract revision, or
> vice versa [3]. Apparently, it's not enough.
> >
> > Ann is looking into introducing another check for semantical correctness
> of scripts. I don't believe it may work for all complex cases we may need
> to solve manually, but at least it should be able to catch add_* operations
> in contract scripts, or drop_* operations in expand branch. Since there may
> be exceptions to general automation, we may also need a mechanism to
> disable such a sanity check for specific scripts.
> >
> > So all in all, I kindly ask everyone to become aware of how we now
> manage migration scripts, and what it implies in how we should review code
> (f.e. looking at paths as well as the code of alembic scripts). That is
> especially important before the test that Ann is looking to implement is
> not merged.
> >
> > [1]: https://bugs.launchpad.net/neutron/+bug/1490767
> > [2]:
> http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
> > [3]: https://review.openstack.org/#/c/206746/
> >
> > Thanks
> > Ihar
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
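For illustration, the branch-aware sanity check described above could look
roughly like this (my own sketch, not the actual check being implemented):

```python
# Hedged sketch: flag alembic operation names that don't belong in a given
# migration branch -- e.g. add_* in contract scripts, drop_* in expand
# scripts -- as described in the thread above.
BAD_PREFIXES = {
    'expand': ('drop_',),    # expand scripts must be purely additive
    'contract': ('add_',),   # contract scripts must not add schema
}

def forbidden_ops(branch, op_names):
    """Return the operation names that are not allowed in this branch."""
    prefixes = BAD_PREFIXES[branch]
    return [name for name in op_names if name.startswith(prefixes)]

print(forbidden_ops('contract', ['add_column', 'drop_column']))  # ['add_column']
print(forbidden_ops('expand', ['drop_column', 'add_column']))    # ['drop_column']
```

A real check would also need the escape hatch mentioned above for scripts
that legitimately break the pattern.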
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/2119571b/attachment.html>

From aurlapova at mirantis.com  Thu Sep  3 05:49:29 2015
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Thu, 3 Sep 2015 08:49:29 +0300
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
 <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
Message-ID: <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>

+1

On Wed, Sep 2, 2015 at 4:03 PM, Dmitry Borodaenko <dborodaenko at mirantis.com>
wrote:

> +1, Evgeny has been a #1 committer in fuel-docs for a while, it's great to
> see him pick up on the reviews, too.
>
> On Wed, Sep 2, 2015 at 3:24 PM Irina Povolotskaya <
> ipovolotskaya at mirantis.com> wrote:
>
>> Fuelers,
>>
>> I'd like to nominate Evgeniy Konstantinov for the fuel-docs-core team.
>> He has contributed thousands of lines of documentation to Fuel over
>> the past several months, and has been a diligent reviewer:
>>
>>
>> http://stackalytics.com/?user_id=evkonstantinov&release=all&project_type=all&module=fuel-docs
>>
>> I believe it's time to grant him core reviewer rights in the fuel-docs
>> repository.
>>
>> Core reviewer approval process definition:
>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>> --
>> Best regards,
>>
>> Irina
>>
>>
>>
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/74cbef9b/attachment.html>

From aurlapova at mirantis.com  Thu Sep  3 05:49:43 2015
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Thu, 3 Sep 2015 08:49:43 +0300
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CAOe9ns7CtSmgKuzZu1qvWyh2mU2zCZUvwRCa2DQWaUvpZuPiqQ@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
 <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
 <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>
 <CAOe9ns7CtSmgKuzZu1qvWyh2mU2zCZUvwRCa2DQWaUvpZuPiqQ@mail.gmail.com>
Message-ID: <CAC+XjbYo+Nd6zPY7vkwhFSjp5J7sAPYooU5FzzfBKRfDjgb1-A@mail.gmail.com>

+1

On Thu, Sep 3, 2015 at 2:49 AM, Aleksandr Didenko <adidenko at mirantis.com>
wrote:

> Hi,
>
> +1 from me.
>
> Regards,
> Alex
>
> On Wed, Sep 2, 2015 at 5:00 PM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
>
>> +1 and also he is in US timezone, which should help us cover the globe
>> with Continuous Review process.
>>
>> On Wed, Sep 2, 2015 at 3:45 PM, Dmitry Borodaenko <
>> dborodaenko at mirantis.com> wrote:
>>
>>> Huge +1 from me, Alex has all the best qualities of a core reviewer:
>>> great engineer, great communicator, diligent, and patient.
>>>
>>> On Wed, Sep 2, 2015 at 3:24 PM Jay Pipes <jaypipes at gmail.com> wrote:
>>>
>>>> I'm not a Fuel core or anything, but +1 from me. Alex has been very
>>>> visible in the community and his work on librarian-puppet was a great
>>>> step forward for the project.
>>>>
>>>> Best,
>>>> -jay
>>>>
>>>> On 09/02/2015 04:31 AM, Sergii Golovatiuk wrote:
>>>> > Hi,
>>>> >
>>>> > I would like to nominate Alex Schultz to Fuel-Library Core team. He's
>>>> > been doing a great job in writing patches. At the same time his
>>>> reviews
>>>> > are solid with comments for further improvements. He's #3 reviewer and
>>>> > #1 contributor with 46 commits for last 90 days [1]. Additionally,
>>>> Alex
>>>> > has been very active in IRC providing great ideas. His 'librarian'
>>>> > blueprint [3] made a big step towards to puppet community.
>>>> >
>>>> > Fuel Library, please vote with +1/-1 for approval/objection. Voting
>>>> will
>>>> > be open until September 9th. This will go forward after voting is
>>>> closed
>>>> > if there are no objections.
>>>> >
>>>> > Overall contribution:
>>>> > [0] http://stackalytics.com/?user_id=alex-schultz
>>>> > Fuel library contribution for last 90 days:
>>>> > [1] http://stackalytics.com/report/contribution/fuel-library/90
>>>> > List of reviews:
>>>> > [2]
>>>> >
>>>> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
>>>> > 'Librarian activities' in mailing list:
>>>> > [3]
>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html
>>>> >
>>>> > --
>>>> > Best regards,
>>>> > Sergii Golovatiuk,
>>>> > Skype #golserge
>>>> > IRC #holser
>>>> >
>>>> >
>>>> >
>>>> __________________________________________________________________________
>>>> > OpenStack Development Mailing List (not for usage questions)
>>>> > Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com <http://www.mirantis.ru/>
>> www.mirantis.ru
>> vkuklin at mirantis.com
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/a866eca2/attachment.html>

From mscherbakov at mirantis.com  Thu Sep  3 06:19:27 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Thu, 03 Sep 2015 06:19:27 +0000
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
	issues
In-Reply-To: <55E7221C.2070008@gmail.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
 <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
 <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>
 <55E6E82D.6030100@gmail.com>
 <CACo6NWCjp-DTCY2nrKyDij1TPeSuTCr9PhTLQ25Vf_Y5cJ=sZQ@mail.gmail.com>
 <55E7221C.2070008@gmail.com>
Message-ID: <CAKYN3rN_+ZSOvWYerchULuH1Regu_r3nRZ77+E1XAXgEWrwGbQ@mail.gmail.com>

Thank you all for the feedback.


Dims -

> 1) I'd advise to codify a proposal in fuel-specs under a 'policy'
directory

I think it's a great idea and I'll do it.


> 2) We don't have SME terminology, but we do have Maintainers both in
oslo-incubator

I like "Maintainers" more than SMEs, thank you for the suggestion. I'd switch
SME -> Maintainer everywhere.


> 3) Is there a plan to split existing repos to more repos? Then each repo
can have a core team (one core team for one repo), PTL takes care of all
repos and MAINTAINERS take care of directories within a repo. That will
line up well with what we are doing elsewhere in the community (essentially
"Component Lead" is a core team which may not be a single person).

That's the plan, with one difference though. In your version, there is a
core team per repo without a lead identified. In my proposal, I'd like to
ensure that we always choose a component lead through a voting process.
It has to be a fair process.


> We do not have a concept of SLA anywhere that i know of, so it will have
to be some kind of social consensus and not a real carrot/stick.

> As for me the idea of SLA contradicts to qualitative reviews.

I'm not thinking about a carrot or stick here. I'd like to ensure that core
reviewers and component leads aim to complete reviews within a certain
time. If that doesn't happen, then the patchset needs to be discussed
during the IRC meeting to see whether we can delegate some testing, etc. to
someone. If there are many patchsets out of SLA, then we'd consider other
changes (decide to drop something from the release to free up resources, or
something else).

We had a problem in the past when we would not pay attention to a patch
proposed by someone until it was escalated. I'm suggesting a solution for
that problem. An SLA is a contract between contributor and reviewer, so
both would have the same expectations on how long it takes to review a
patch. Without expectations aligned, contributors can easily get upset.
They may expect that their code will be reviewed and merged within hours,
while in fact it takes days. I'm not even talking about patches which can
be forgotten and hang in the queue for months...


> If we succeed in reducing the load on core reviewers, it will mean that
core reviewers will do less code reviews. This could lead to core reviewer
demotion.

I expect that there will be a drop in the number of code reviews done by
the core reviewers team. This is the point, actually - do fewer reviews,
but do them more thoroughly. Don't work on patches which have easy
mistakes, as those should be caught by maintainers before the patches reach
the core reviewer's plate. I don't think, though, that it will lead to core
reviewer "demotion". You will still be doing many reviews - just fewer than
before - and others who did few will do more, potentially joining the core
team later.


> It would be nice if Jenkins could add reviewers after CI +1, or we can
use gerrit dashboard for SMEs to not waste their time on review that has
not yet passed CI and does not have +1 from other reviewers.

This is a good suggestion. I agree.


> AFAIK Boris Pavlovic introduced some scripts

> in Rally which do basic preliminary check of review message, checking

> that it's formally correct.

Thanks Igor, I believe this can be applied as well.


> Another thing is I got a bit confused by the difference between Core
Reviewer and Component Lead,

> aren't those the same persons? Shouldn't every Core Reviewer know the
architecture, best practises

> and participate in design architecture sessions?

The component lead is elected by the core reviewer team as the lead. So it's
just another core reviewer / architect, but with the right to have the final
word. Also, for large parts like fuel-library / nailgun, I'd expect this
person to be free from any feature work. For smaller things, like the
network verifier, I don't think we'd need a dedicated component lead who is
free from any feature work.


Igor K., Evgeny L, did I address your questions regarding SLA and component
lead vs core reviewer?

Thank you,

On Wed, Sep 2, 2015 at 9:28 AM Jay Pipes <jaypipes at gmail.com> wrote:

> On 09/02/2015 08:45 AM, Igor Kalnitsky wrote:
> >> I think there's plenty of examples of people in OpenStack projects
> >> that both submit code (and lead features) that also do code review
> >> on a daily basis.
> >
> > * Do these features huge?
>
> Yes.
>
> > * Is their code contribution huge or just small patches?
>
> Both.
>
> > * Did they get to master before FF?
>
> Yes.
>
> > * How many intersecting features OpenStack projects have under
> > development? (since often merge conflicts requires a lot of re-review)
>
> I recognize that Fuel, like devstack, has lots of cross-project
> dependencies. That just makes things harder to handle for Fuel, but it's
> not a reason to have core reviewers not working on code or non-core
> reviewers not doing reviews.
>
> > * How often OpenStack people are busy on other activities, such as
> > helping fellas, troubleshooting customers, participate design meetings
> > and so on?
>
> Quite often. I'm personally on IRC participating in design discussions,
> code reviews, and helping people every day. Not troubleshooting
> customers, though...
>
> > If so, do you sure they are humans then? :) I can only speak for
> > myself, and that's what I want to say: during 7.0 dev cycle I burned
> > in hell and I don't want to continue that way.
>
> I think you mean you "burned out" :) But, yes, I hear you. I understand
> the pressure that you are under, and I sympathize with you. I just feel
> that the situation is not an either/or situation, and encouraging some
> folks to only do reviews and not participate in coding/feature
> development is a dangerous thing.
>
> Best,
> -jay
>
> > Thanks,
> > Igor
> >
> > On Wed, Sep 2, 2015 at 3:14 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> >> On 09/02/2015 03:00 AM, Igor Kalnitsky wrote:
> >>>
> >>> It won't work that way. You either busy on writing code / leading
> >>> feature or doing review. It couldn't be combined effectively. Any
> >>> context switch between activities requires an extra time to focus on.
> >>
> >>
> >> I don't agree with the above, Igor. I think there's plenty of examples
> of
> >> people in OpenStack projects that both submit code (and lead features)
> that
> >> also do code review on a daily basis.
> >>
> >> Best,
> >> -jay
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/9cad859f/attachment.html>

From sukhdevkapur at gmail.com  Thu Sep  3 06:25:08 2015
From: sukhdevkapur at gmail.com (Sukhdev Kapur)
Date: Wed, 2 Sep 2015 23:25:08 -0700
Subject: [openstack-dev] [Ironic][Neutron] Ironic-Neutron Integration
 meeting cancelled for Sept 7th
Message-ID: <CA+wZVHTR8mmdmPe9F0_f+0+GtDe327mZE4E0JEf1S4KaF29DpA@mail.gmail.com>

Folks,

On account of the Labor Day holiday on Monday (9/7/15) in the U.S., we will
not be holding the Ironic-Neutron integration meeting.

On a separate note, the Ironic core team is planning for Liberty-RC1
sometime in the third week of September. This means we need to test and
complete all our patches ASAP so that they can be merged in a timely fashion.

I have started to test the published patches and have noticed an issue in
the newly modified Ironic CLI. I have notified the patch owner so that the
issue can be addressed in a timely manner.

Even though we are not going to meet next week, we should keep the momentum
going. Please use IRC as well as email to reach each other so that we can
quickly address any issues.

Thanks
-Sukhdev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150902/cfd7c73a/attachment.html>

From mikal at stillhq.com  Thu Sep  3 06:33:49 2015
From: mikal at stillhq.com (Michael Still)
Date: Thu, 3 Sep 2015 16:33:49 +1000
Subject: [openstack-dev] [nova] [docs] Are cells still 'experimental'?
In-Reply-To: <55E7B6AD.6050007@lanabrindley.com>
References: <55E7B6AD.6050007@lanabrindley.com>
Message-ID: <CAEd1pt6v_du3nj+krP-Gq=kcxHsd=6HSyg1-UnEjZPPkxwHoOw@mail.gmail.com>

I think they should not be marked experimental. For better or for worse,
there are multiple sites deployed, and we will support them if something
goes wrong.

That said, I wouldn't be encouraging new deployments of cells v1 at this
point.

Michael

On Thu, Sep 3, 2015 at 12:55 PM, Lana Brindley <openstack at lanabrindley.com>
wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi John,
>
> I noticed while looking through some Nova docs bugs that the Config Ref
> lists cells as experimental:
>
> http://docs.openstack.org/kilo/config-reference/content/section_compute-
> cells.html
>
> Is this still true?
>
> Thanks,
> Lana
>
> - --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2.0.22 (GNU/Linux)
>
> iQEcBAEBAgAGBQJV57atAAoJELppzVb4+KUyjzYIAIain4YZauEcMEMNYfdI74Lj
> qmUO4U5kTkg7dFcsW1DJhhPvPjgsJPKRcMFofcZEB7qV+QcCbDx9g691NlB3u1dG
> MEOtBq9y5o1PJMPxl8xcbHaOLm028E4f7oUrlODpQs/dlWS8vfXpOeT/CwYsqFG4
> lF08/YpvNaNLBytCjbFgFqmQt5I+8gLBmyXgRl06+HflgjYsr6fQyjQzMlVfioPW
> 5IYg0p+Zj4B/MxRo5xCWph0e9YdeE3CBpqGB33iay06341Sh0cVi0O4QPTZ/f2tA
> TbZzskHDKJoEb6kqbz4jMtzoDSr76N4+ltwMynzpCY/I8tyuV+Yj5vIWO79Wo6Q=
> =hjD8
> -----END PGP SIGNATURE-----
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/0b7ca661/attachment.html>

From derekh at redhat.com  Thu Sep  3 06:34:15 2015
From: derekh at redhat.com (Derek Higgins)
Date: Thu, 03 Sep 2015 07:34:15 +0100
Subject: [openstack-dev] [TripleO] Status of CI changes
Message-ID: <55E7E9E7.7070808@redhat.com>

Hi All,

The patch to reshuffle our CI jobs has merged[1], along with the patch 
to switch the f21-noha job to be instack based[2] (with centos images).

So the current status is that our CI has been removed from most of the
non-TripleO projects (with the exception of nova/neutron/heat and ironic,
where it is only available with check experimental until we are sure it's
reliable).

The last big move is to pull some repositories into the upstream[3]
gerrit, so until this happens we still have to worry about some projects
being on GerritHub (the instack-based CI pulls them in from GerritHub
for now). I'll follow up with a mail once this happens.

A lot of CI stuff still needs to be worked on (and improved) e.g.
  o Add ceph support to the instack based job
  o Add ha support to the instack based job
  o Improve the logs exposed
  o Pull out a lot of workarounds that have gone into the CI job
  o move out some of the parts we still use in tripleo-incubator
  o other stuff

Please make yourself known if you're interested in any of the above.

thanks,
Derek.

[1] https://review.openstack.org/#/c/205479/
[2] https://review.openstack.org/#/c/185151/
[3] https://review.openstack.org/#/c/215186/


From ken1ohmichi at gmail.com  Thu Sep  3 06:37:32 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Thu, 3 Sep 2015 15:37:32 +0900
Subject: [openstack-dev] [QA] Meeting Thursday September 3rd at 9:00 UTC
Message-ID: <CAA393vhsVHf4wnuZdS457T2EhR7M=yYWbHjnW6pn8FiE9AKp8A@mail.gmail.com>

Hi everyone,

This is a reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, September 3rd at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones,
tomorrow's meeting will be at:

05:00 EDT
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT
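
For reference, the conversions above can be reproduced with the standard library (this is an illustrative sketch; the IANA zone names chosen to stand in for the abbreviations in the list are my mapping, and `zoneinfo` requires Python 3.9+):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The meeting time: Thursday, September 3rd 2015, 9:00 UTC.
meeting = datetime(2015, 9, 3, 9, 0, tzinfo=ZoneInfo("UTC"))

# Convert into a few of the local timezones listed above.
for tz in ("America/New_York", "Asia/Tokyo", "Europe/Berlin",
           "America/Los_Angeles"):
    local = meeting.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%H:%M"))
```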

Thanks
Ken Ohmichi


From asalkeld at mirantis.com  Thu Sep  3 06:56:19 2015
From: asalkeld at mirantis.com (Angus Salkeld)
Date: Thu, 03 Sep 2015 06:56:19 +0000
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <55E73721.1000804@redhat.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
 <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>
 <20150902085546.GA25909@t430slt.redhat.com> <55E73721.1000804@redhat.com>
Message-ID: <CAA16xcxAnXj9mDhoATFDvhvkBJiF5g4taTv7g3LNoONhuRo4jA@mail.gmail.com>

On Thu, Sep 3, 2015 at 3:53 AM Zane Bitter <zbitter at redhat.com> wrote:

> On 02/09/15 04:55, Steven Hardy wrote:
> > On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
> >> On 2 September 2015 at 11:53, Angus Salkeld <asalkeld at mirantis.com>
> wrote:
> >>
> >>> 1. limit the number of resource actions in parallel (maybe base on the
> >>> number of cores)
> >>
> >> I'm having trouble mapping that back to 'and heat-engine is running on
> >> 3 separate servers'.
> >
> > I think Angus was responding to my test feedback, which was a different
> > setup, one 4-core laptop running heat-engine with 4 worker processes.
> >
> > In that environment, the level of additional concurrency becomes a
> problem
> > because all heat workers become so busy that creating a large stack
> > DoSes the Heat services, and in my case also the DB.
> >
> > If we had a configurable option, similar to num_engine_workers, which
> > enabled control of the number of resource actions in parallel, I probably
> > could have controlled that explosion in activity to a more managable
> series
> > of tasks, e.g I'd set num_resource_actions to (num_engine_workers*2) or
> > something.
>
> I think that's actually the opposite of what we need.
>
> The resource actions are just sent to the worker queue to get processed
> whenever. One day we will get to the point where we are overflowing the
> queue, but I guarantee that we are nowhere near that day. If we are
> DoSing ourselves, it can only be because we're pulling *everything* off
> the queue and starting it in separate greenthreads.
>

The worker does not use a greenthread per job like service.py does.
The issue is that if you have actions that are fast, you can hit the DB hard.

QueuePool limit of size 5 overflow 10 reached, connection timed out,
timeout 30

It seems like it's not very hard to hit this limit. It comes from simply
loading the resource in the worker:
"/home/angus/work/heat/heat/engine/worker.py", line 276, in check_resource
"/home/angus/work/heat/heat/engine/worker.py", line 145, in _load_resource
"/home/angus/work/heat/heat/engine/resource.py", line 290, in load
resource_objects.Resource.get_obj(context, resource_id)
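
The error message above matches SQLAlchemy's QueuePool defaults: `pool_size=5` persistent connections plus `max_overflow=10` extras, after which a checkout blocks for `timeout=30` seconds and then fails. A toy, pure-Python sketch of that accounting (not SQLAlchemy itself, and without the real blocking behaviour) shows why fifteen concurrent resource loads is the ceiling:

```python
class MiniQueuePool:
    """Toy model of SQLAlchemy QueuePool checkout limits:
    pool_size persistent slots plus max_overflow extras; beyond
    that, a real pool blocks for `timeout` seconds, then errors."""

    def __init__(self, pool_size=5, max_overflow=10):
        self.limit = pool_size + max_overflow
        self.checked_out = 0

    def checkout(self):
        if self.checked_out >= self.limit:
            raise TimeoutError(
                "QueuePool limit of size 5 overflow 10 reached")
        self.checked_out += 1

    def checkin(self):
        self.checked_out -= 1

pool = MiniQueuePool()
for _ in range(15):      # 5 + 10 checkouts succeed
    pool.checkout()
try:
    pool.checkout()      # the 16th exceeds the limit
    ok = False
except TimeoutError:
    ok = True
print(ok)  # True
```

So with many fast check_resource jobs each grabbing a connection, a single engine process exhausts the pool well before the message queue itself is a bottleneck.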



>
> In an ideal world, we might only ever pull one task off that queue at a
> time. Any time the task is sleeping, we would use for processing stuff
> off the engine queue (which needs a quick response, since it is serving
> the ReST API). The trouble is that you need a *huge* number of
> heat-engines to handle stuff in parallel. In the reductio-ad-absurdum
> case of a single engine only processing a single task at a time, we're
> back to creating resources serially. So we probably want a higher number
> than 1. (Phase 2 of convergence will make tasks much smaller, and may
> even get us down to the point where we can pull only a single task at a
> time.)
>
> However, the fewer engines you have, the more greenthreads we'll have to
> allow to get some semblance of parallelism. To the extent that more
> cores means more engines (which assumes all running on one box, but
> still), the number of cores is negatively correlated with the number of
> tasks that we want to allow.
>
> Note that all of the greenthreads run in a single CPU thread, so having
> more cores doesn't help us at all with processing more stuff in parallel.
>

Except, as I said above, we are not creating greenthreads in worker.

-A


>
> cheers,
> Zane.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/f12197ce/attachment.html>

From eduard.matei at cloudfounders.com  Thu Sep  3 07:33:01 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Thu, 3 Sep 2015 10:33:01 +0300
Subject: [openstack-dev] Fwd: [cinder][ThirdPartyCI]CloudFounders
 OpenvStorage CI - request to re-add the cinder driver
In-Reply-To: <CAEOp6J-GW_a5E3WTBMAE3U7Y4j5Q+hwxG0GpfV_0oC5asC4kFg@mail.gmail.com>
References: <CAEOp6J95YVvpzdcPCjrh=9b7fL78Gb2vXgLa4LWd8A-UDnJXzw@mail.gmail.com>
 <CAEOp6J_xvVk40sZT+uQ94ruMV5Z6P6vPJJfS+Z7ZkJwPGHFj-A@mail.gmail.com>
 <CAEOp6J-GW_a5E3WTBMAE3U7Y4j5Q+hwxG0GpfV_0oC5asC4kFg@mail.gmail.com>
Message-ID: <CAEOp6J_Qj+rhoj2AOYEcLw3smXuu8nWaFNxhxYTF0Dc9pQidDg@mail.gmail.com>

Hi,

Trying to get more attention to this ...

We had our driver removed by commit:
https://github.com/openstack/cinder/commit/f0ab819732d77a8a6dd1a91422ac183ac4894419
due to no CI.

Please let me know if there is something wrong so we can fix it ASAP and
have the driver back in Liberty (if possible).

The CI is commenting using the name "Open vStorage CI" instead of
"CloudFounders OpenvStorage CI".

Thanks,

Eduard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/f8523505/attachment.html>

From maishsk at maishsk.com  Thu Sep  3 08:04:22 2015
From: maishsk at maishsk.com (Maish Saidel-Keesing)
Date: Thu, 3 Sep 2015 11:04:22 +0300
Subject: [openstack-dev] [Neutron] Allowing DNS suffix to be set per subnet
 (at least per tenant)
Message-ID: <55E7FF06.2010207@maishsk.com>

Hello all (cross-posting to openstack-operators as well)

Today the DNS suffix that is provided to the instance is passed through
the dhcp_agent.

There is the option of setting different DNS servers per subnet (and
therefore per tenant), but the domain suffix is something that stays the
same throughout the whole system.

I see that this is not a current neutron feature.

Is this on the roadmap? Are there ways to achieve this today? If so, I
would be very interested in hearing how.
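
For context, the dnsmasq processes that the DHCP agent launches can, in raw dnsmasq configuration, scope a domain to an address range; something like the following (domains and subnet ranges purely illustrative, and not an existing Neutron option):

```
# dnsmasq supports scoping a domain to an address range:
domain=tenant-a.example.com,10.0.1.0/24
domain=tenant-b.example.com,10.0.2.0/24
```

So the underlying tooling could support per-subnet suffixes; what's missing is a Neutron API to drive it.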

Thanks
-- 
Best Regards,
Maish Saidel-Keesing


From flavio at redhat.com  Thu Sep  3 08:52:24 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Thu, 3 Sep 2015 10:52:24 +0200
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <55E7AC5C.9010504@gmail.com>
References: <55E7AC5C.9010504@gmail.com>
Message-ID: <20150903085224.GD30997@redhat.com>

On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>Hi,
>
>I wanted to propose 'Single disk image OVA import' [1] feature proposal
>for exception. This looks like a decently safe proposal that should be
>able to adjust in the extended time period of Liberty. It has been
>discussed at the Vancouver summit during a work session and the proposal
>has been trimmed down as per the suggestions then; has been overall
>accepted by those present during the discussions (barring a few changes
>needed on the spec itself). It being a addition to already existing
>import task, doesn't involve API change or change to any of the core
>Image functionality as of now.
>
>Please give your vote: +1 or -1 .
>
>[1] https://review.openstack.org/#/c/194868/

I'd like to see support for OVF finally implemented in Glance.
Unfortunately, I think there are too many open questions in the spec
right now to make this FFE-worthy.

Could those questions be answered before the EOW?

With those questions answered, we'll be able to provide a more realistic
vote.

Also, I'd like us to evaluate how mature the implementation[0] is and
the likelihood of it addressing the concerns/comments in time.

For now, it's a -1 from me.

Thanks all for working on this; this has been a long-requested format to
have in Glance.
Flavio

[0] https://review.openstack.org/#/c/214810/


-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/31644087/attachment.pgp>

From malini.k.bhandaru at intel.com  Thu Sep  3 09:10:07 2015
From: malini.k.bhandaru at intel.com (Bhandaru, Malini K)
Date: Thu, 3 Sep 2015 09:10:07 +0000
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <20150903085224.GD30997@redhat.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
Message-ID: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>

Flavio, first thing in the morning Kent will upload a new BP that addresses the comments. We would very much appreciate a +1 on the FFE.

Regards
Malini



-----Original Message-----
From: Flavio Percoco [mailto:flavio at redhat.com] 
Sent: Thursday, September 03, 2015 1:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>Hi,
>
>I wanted to propose 'Single disk image OVA import' [1] feature proposal 
>for exception. This looks like a decently safe proposal that should be 
>able to adjust in the extended time period of Liberty. It has been 
>discussed at the Vancouver summit during a work session and the 
>proposal has been trimmed down as per the suggestions then; has been 
>overall accepted by those present during the discussions (barring a few 
>changes needed on the spec itself). It being a addition to already 
>existing import task, doesn't involve API change or change to any of 
>the core Image functionality as of now.
>
>Please give your vote: +1 or -1 .
>
>[1] https://review.openstack.org/#/c/194868/

I'd like to see support for OVF being, finally, implemented in Glance.
Unfortunately, I think there are too many open questions in the spec right now to make this FFE worthy.

Could those questions be answered to before the EOW?

With those questions answered, we'll be able to provide a more, realistic, vote.

Also, I'd like us to evaluate how mature the implementation[0] is and the likelihood of it addressing the concerns/comments in time.

For now, it's a -1 from me.

Thanks all for working on this, this has been a long time requested format to have in Glance.
Flavio

[0] https://review.openstack.org/#/c/214810/


--
@flaper87
Flavio Percoco

From nstarodubtsev at mirantis.com  Thu Sep  3 10:08:15 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Thu, 3 Sep 2015 13:08:15 +0300
Subject: [openstack-dev] [murano] Let's minimaze the list of pylint
	exceptions
In-Reply-To: <CAAa8YgCZ1Wb-kadOh5AiuKKbQu+i_cy52BWGHWiT-hkDOrOC=g@mail.gmail.com>
References: <CAKSp79wPJ2N48J3SP7nbivkerwtC6beXx18z23eqCHXUjqGU3A@mail.gmail.com>
 <CAAa8YgCZ1Wb-kadOh5AiuKKbQu+i_cy52BWGHWiT-hkDOrOC=g@mail.gmail.com>
Message-ID: <CAAa8YgBcgAtU8tV9j5C+Yk0jOYTAKz3aOn8Oz1NYG0q4q4f3-Q@mail.gmail.com>

If somebody wants to join, here is the code for a vim plugin which may help
you: http://paste.openstack.org/show/444042/
Copy it and save it in your vim plugin directory. After that, press <F10>
or <F12> to see pylint errors in a single file or all pylint errors in the
current code.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-01 10:37 GMT+03:00 Nikolay Starodubtsev <nstarodubtsev at mirantis.com>
:

> +1, good initiative
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2015-09-01 10:30 GMT+03:00 Dmitro Dovbii <ddovbii at mirantis.com>:
>
>> Hi folks!
>>
>> We have a long list of pylint exceptions in code of Murano (please see
>> example
>> <http://logs.openstack.org/10/207910/10/check/gate-murano-pylint/f54d298/console.html>).
>> I would like to propose you to take a part in refactoring of code and
>> minimization of this list.
>> I've created blueprint
>> <https://blueprints.launchpad.net/murano/+spec/reduce-pylint-warnings>
>> and etherpad document
>> <https://beta.etherpad.org/p/reduce-pylint-warnings> describing the
>> structure of participation. Please feel free to choose some type of warning
>> and several modules containing it, then make notice in document, and
>> finally fix issues.
>> Let's make murano code more clear together :)
>>
>> Best regards,
>> Dmytro Dovbii
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/5da688a5/attachment.html>

From kuvaja at hp.com  Thu Sep  3 10:14:20 2015
From: kuvaja at hp.com (Kuvaja, Erno)
Date: Thu, 3 Sep 2015 10:14:20 +0000
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>

Malini, all,

My current opinion is -1 for FFE based on the concerns in the spec and implementation.

I'm more than happy to realign my stance after we have an updated spec and a) it's agreed to be the approach as of now, and b) we can evaluate how much work the implementation needs to meet the revisited spec.

If we end up in the unfortunate situation that this functionality does not merge in time for Liberty, I'm confident that this will be one of the first things in Mitaka. I really don't think there is much left to do; we just might run out of time.

Thanks for your patience and endless effort to get this done.

Best,
Erno

> -----Original Message-----
> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
> Sent: Thursday, September 03, 2015 10:10 AM
> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
> 
> Flavio, first thing in the morning Kent will upload a new BP that addresses the
> comments. We would very much appreciate a +1 on the FFE.
> 
> Regards
> Malini
> 
> 
> 
> -----Original Message-----
> From: Flavio Percoco [mailto:flavio at redhat.com]
> Sent: Thursday, September 03, 2015 1:52 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
> 
> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
> >Hi,
> >
> >I wanted to propose 'Single disk image OVA import' [1] feature proposal
> >for exception. This looks like a decently safe proposal that should be
> >able to adjust in the extended time period of Liberty. It has been
> >discussed at the Vancouver summit during a work session and the
> >proposal has been trimmed down as per the suggestions then; has been
> >overall accepted by those present during the discussions (barring a few
> >changes needed on the spec itself). It being a addition to already
> >existing import task, doesn't involve API change or change to any of
> >the core Image functionality as of now.
> >
> >Please give your vote: +1 or -1 .
> >
> >[1] https://review.openstack.org/#/c/194868/
> 
> I'd like to see support for OVF being, finally, implemented in Glance.
> Unfortunately, I think there are too many open questions in the spec right
> now to make this FFE worthy.
> 
> Could those questions be answered to before the EOW?
> 
> With those questions answered, we'll be able to provide a more, realistic,
> vote.
> 
> Also, I'd like us to evaluate how mature the implementation[0] is and the
> likelihood of it addressing the concerns/comments in time.
> 
> For now, it's a -1 from me.
> 
> Thanks all for working on this, this has been a long time requested format to
> have in Glance.
> Flavio
> 
> [0] https://review.openstack.org/#/c/214810/
> 
> 
> --
> @flaper87
> Flavio Percoco
> __________________________________________________________
> ________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ihrachys at redhat.com  Thu Sep  3 10:32:08 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Thu, 3 Sep 2015 12:32:08 +0200
Subject: [openstack-dev] How to run fw testcases which are recently
	moved from tempest to neutron.
In-Reply-To: <55E717BC.6000904@brocade.com>
References: <55E717BC.6000904@brocade.com>
Message-ID: <559086F2-A26E-45B0-A5A0-88BDAD156939@redhat.com>

> On 02 Sep 2015, at 17:37, bharath <bharath at brocade.com> wrote:
> 
> Hi ,
> 
> How to run FW testcases which are under neutron using tempest?
> 
> If i am trying to list cases from tempest(sudo -u stack -H testr list-tests neutron.api
> ), its resulting to empty list
> 

You would need to set OS_TEST_PATH too for this to work. But yes, as Assaf said, you should use tox.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/a7f5a76d/attachment.pgp>

From lxsli at hpe.com  Thu Sep  3 10:34:31 2015
From: lxsli at hpe.com (Alexis Lee)
Date: Thu, 3 Sep 2015 11:34:31 +0100
Subject: [openstack-dev] Tracing a request (NOVA)
In-Reply-To: <CAM6Kgs=6eDst4UPP9uSs1f=zvPSU1L=rziyqAGs-8A6ZG4OzVw@mail.gmail.com>
References: <CANovBq5N+J6wMQQ3diEuAAaKytSb+aYYp7w4TnUqR9M6ipTg4w@mail.gmail.com>
 <BLU436-SMTP2208AB0941585B3AC56C75AD86D0@phx.gbl>
 <CAM6Kgs=6eDst4UPP9uSs1f=zvPSU1L=rziyqAGs-8A6ZG4OzVw@mail.gmail.com>
Message-ID: <20150903103431.GG15292@hpe.com>

Vedsar Kushwaha said on Sat, Aug 29, 2015 at 09:30:05AM +0530:
> *i just want to understand as to how the request goes from the api-call to
> the nova-api and so on after that.*

For a code-level walkthrough:
    http://docs-draft.openstack.org/67/210467/7/check/gate-nova-docs/51ee4c9//doc/build/html/trace.html

Unfortunately this code path is changing a lot at the moment, but the
walkthrough is only a few days out of date so far.


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79


From shardy at redhat.com  Thu Sep  3 10:46:52 2015
From: shardy at redhat.com (Steven Hardy)
Date: Thu, 3 Sep 2015 11:46:52 +0100
Subject: [openstack-dev] [TripleO][Heat] instance_user fallout,
 keeping the 'heat-admin' user working
In-Reply-To: <CAHV77z84o6=S0wJr=q+fNjbmMiytb=-pskib5jX9q3vPxPeJJA@mail.gmail.com>
References: <1441225337.1917.16.camel@redhat.com>
 <CAHV77z84o6=S0wJr=q+fNjbmMiytb=-pskib5jX9q3vPxPeJJA@mail.gmail.com>
Message-ID: <20150903104648.GA16636@t430slt.redhat.com>

On Wed, Sep 02, 2015 at 08:32:25PM -0400, James Slagle wrote:
> On Wed, Sep 2, 2015 at 4:22 PM, Dan Prince <dprince at redhat.com> wrote:
> > We had an IRC discussion today about the 'heat-admin' user in #tripleo.
> >
> > Upstream Heat recently reverted the 'instance_user' config file option
> > which we relied on in TripleO to standardize the default (admin) user
> > on our nodes. It is my understanding that Heat would prefer not to
> > maintain this option because it causes subtle compatibility issues
> > across the OpenStack and AWS APIs and the interactions between cloud
> > -init version, etc. So it was deprecated in Icehouse... and recently
> > removed in [1].
> >
> > We could just go with the default distro user (centos, fedora, ubuntu,
> > etc.) but it would be really nice to standardize on a user name for
> > maintenance should anyone every spin up a cloud using multiple distros
> > or something.
> >
> > So a couple of options. We could just go on and update our templates
> > like this [2]. This actually seems pretty clean to me, but it would
> > require anybody who has created custom firstboot scripts to do the same
> > (we have proposed docker patches with firstboot scripts that need
> > similar updates).
> 
> Yea, that's the main reason I'm not fond of this approach. It really
> feels like cluttering up the firstboot interface, in that anyone who
> wants to plugin in their own config there has to remember to also
> include this snippet. It leads to copying/pasting around yaml, which I
> don't think is a great pattern going forward.

Yeah, I agree, so I proposed an alternative approach:

https://review.openstack.org/#/c/220057/

This leaves the existing NodeUserData interface intact, and introduces a
new NodeAdminUserData interface (which is specifically for injecting the
admin user, and leaves the existing interface alone).

> It would be nice to have a cleaner separation between the interfaces
> that we offer to users and those that need to be reserved/used for
> TripleO's own purposes.

Agreed, that's what I was aiming for with my patch above, we should
probably work towards more clearly documenting what's supported "external"
vs internal interfaces tho.

> I'm not sure of a better solution though other than a native
> SoftwareDeployment resource in the templates directly that creates a
> known user and reads the ssh keys from the user data (via a script).

I think it's better to avoid using SoftwareDeployment resources for this
if possible, because you really want the user configured as early as
possible for easier debugging e.g when credentials or network are broken
and SoftwareDeployments don't work because occ can't reach heat.

> Or, what about baking in some static configuration for cloud-init into
> our images that creates the known user?

That could work, but you'd still need a user-data script to collect the SSH
key from the nova metadata server, because we won't know that at
image-build time.

You're right tho, this probably could be built into the image, it just
seems a little more flexible to expose it in the templates.

> > Alternately, we could propose that Heat revert the instance_user
> > feature or some version of it. We've been using that for a year or two
> > now and it has actually been fairly nice to set the default that way.
> 
> I really liked having the one consistent user no matter the cloud
> image you deployed from as well. I'm not sure we could successfully
> persuade it to go back in though given it was deprecated in Icehouse.

So, I don't think this is impossible, but let me try to explain/remember
the reasons for removing it:

1. Many users disliked that we injected a non-default user and wanted
transparency, e.g just let them configure the users they want via
cloud-init vs "hidden" configuration injected via heat.  This actually
originated with Clint, back in 2013[1] and was also flagged as impacting
TripleO back in 2014.

2. We have historically had to maintain code to inject this user which works
over a wide range of cloud-init versions [2]. This has proven to be very
difficult, for various reasons, including:
 - It's impossible to find a cloud-config syntax which works over all
   versions of cloud-init we (heat) expect to support (e.g Ubuntu 12.04 LTS
   still ships 0.6.3 which doesn't support cloud-config syntax *at all*,
   and it's expected to be supported until 2017).
 - The alternative to cloud-config was an approach which injects a boothook
   to create the user, but this ran into issues on systems with selinux
   which were difficult/impossible to resolve in a way compatible with
   0.6.3.

So, IIRC, we reached the conclusion that it made sense to just punt the
decision of what users to create, and how to create them to the template
author, who will presumably know if they want a user created, and if so
what version of cloud-init they expect to support in their images
(something the deployers and/or template authors know but heat maintainers
can't possibly know with any confidence).

There are other advantages to the config-via-template approach too, e.g
what if you wanted to create a different admin user per cloud when TripleO
supports deploying multiple overclouds?  Or if a deployer wanted to inject
some additional user configuration such as SSH keys or group membership,
sudo rules, whatever.  All of this is easy via templates and impossible via
the hard-coded instance_user option.

On balance, I think we're best to let the removal of instance_user stand,
and go with the template approach I proposed above, but I'm still open to
discussing it if folks feel strongly otherwise :)

Steve

[1] https://bugs.launchpad.net/tripleo/+bug/1229849
[2] https://bugs.launchpad.net/heat/+bug/1257410


From rakhmerov at mirantis.com  Thu Sep  3 11:23:20 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Thu, 3 Sep 2015 17:23:20 +0600
Subject: [openstack-dev] [mistral][yaql] Addressing task result using
	YAQL function
In-Reply-To: <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
 <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>
Message-ID: <FA465E58-4611-44B4-9E5D-F353C778D5FF@mirantis.com>


> On 02 Sep 2015, at 21:01, Dmitri Zimine <dzimine at stackstorm.com> wrote:
> 
> Agree, 
> 
> with one detail: make it explicit -  task(task_name). 

So do you suggest we just replace res() with task() and it looks like

task() - get task result when we are in 'publish'
task(task_name) - get task result from anywhere

?

Do you mean we must always specify a task name? The reason I'd like to have a simplified form (without a task name) is that in a lot of our workflows we have to repeat the task name in publish, which looks too verbose to me, especially in the case of very long task names.

Consider something like this:

tasks:
  get_volumes_by_names:
    with-items: name in <% $.vol_names %>
    workflow: get_volume_by_name name=<% $.name %>
    publish:
      volumes: <% $.get_volumes_by_names %>

So in publish we have to repeat the task name; there's no other way now. I'd like to soften this requirement, but if you still want to use task names you'll be able to.
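To make the comparison concrete, here's a sketch of what the same task could look like with the simplified form (proposed syntax under discussion, not something Mistral supports today):

```yaml
tasks:
  get_volumes_by_names:
    with-items: name in <% $.vol_names %>
    workflow: get_volume_by_name name=<% $.name %>
    publish:
      # implicit form: result of the enclosing task
      volumes: <% task() %>
      # explicit form, usable from anywhere:
      # volumes: <% task(get_volumes_by_names) %>
```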


> res - we often see folks confused by result of what (action, task, workflow) although we cleaned up our lingo: action-output, task-result, workflow-output... but still worth being explicit.
> 
> And full result is being thought as the root context $.
> 
> Publishing to global context may be ok for now, IMO.

Not sure what you meant by "Publishing to global context". Can you clarify please?


Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/de14227f/attachment.html>

From sgordon at redhat.com  Thu Sep  3 11:23:45 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Thu, 3 Sep 2015 07:23:45 -0400 (EDT)
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <CFE03EEA.66B51%harlowja@yahoo-inc.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <1640C0EA-107E-4DEF-94FD-DE45CB18C04D@gmail.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
Message-ID: <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Joshua Harlow" <harlowja at yahoo-inc.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>, "PAUL CARVER"
> 
> Jump on #cloud-init on freenode, smoser and I (and the other folks there) are
> both pretty friendly ;)

Hi all,

Apologies for necro'ing an old thread, but did this ever happen?

Thanks!

Steve

> From: Joshua Harlow <harlowja at yahoo-inc.com<mailto:harlowja at yahoo-inc.com>>
> Date: Monday, July 7, 2014 at 12:10 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>,
> "CARVER, PAUL" <pc2929 at att.com<mailto:pc2929 at att.com>>
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
> 
> It wouldn't be sufficient for OpenStack to support an IPv6 metadata address
> as long as
> most tenants are likely to be using a version of cloud-init that doesn't know
> about IPv6
> so step one would be to find out whether the maintainer of cloud-init is open
> to the
> idea of IPv4-less clouds.
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zigo at debian.org  Thu Sep  3 11:24:58 2015
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 03 Sep 2015 13:24:58 +0200
Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch imports
	from fonts.googleapis.com
Message-ID: <55E82E0A.1050507@debian.org>

Hi,

When doing:
grep -r fonts.googleapis.com *

there's 56 lines of this kind of result:
xstatic/pkg/bootswatch/data/cyborg/bootstrap.css:@import
url("https://fonts.googleapis.com/css?family=Roboto:400,700");

This is wrong because:

1/ This is a privacy breach, and one may not agree to hitting a web
server they don't control. It's a problem in itself for packaging
in Debian, and is currently stopping me from uploading.

2/ More importantly (and even if you don't care about this kind of
privacy breach), this requires Internet access, which isn't at all
granted in some installations.

So I wonder if using bootswatch, which includes such a problem, is
really a good idea. Are these font imports completely mandatory? Or can
I patch them out? Will the result be ugly if I patch it out?
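For what it's worth, patching the imports out is mechanical. A sketch (using a throwaway demo tree in place of the real xstatic/pkg/bootswatch/data layout):

```shell
# Build a throwaway copy mimicking one theme file (illustration only).
mkdir -p demo/cyborg
cat > demo/cyborg/bootstrap.css <<'EOF'
@import url("https://fonts.googleapis.com/css?family=Roboto:400,700");
body { font-family: "Roboto", sans-serif; }
EOF

# Drop every line importing from fonts.googleapis.com; browsers then
# fall back to the next font in the theme's local font stack.
find demo -name '*.css' -exec sed -i '/fonts\.googleapis\.com/d' {} +

grep -r fonts.googleapis.com demo || echo "no external font imports left"
```

Whether the result looks ugly depends on each theme's fallback stack; Roboto-based themes would simply render with a generic sans-serif.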

Your thoughts?

Cheers,

Thomas Goirand (zigo)


From james.slagle at gmail.com  Thu Sep  3 11:28:34 2015
From: james.slagle at gmail.com (James Slagle)
Date: Thu, 3 Sep 2015 07:28:34 -0400
Subject: [openstack-dev] [TripleO][Heat] instance_user fallout,
 keeping the 'heat-admin' user working
In-Reply-To: <20150903104648.GA16636@t430slt.redhat.com>
References: <1441225337.1917.16.camel@redhat.com>
 <CAHV77z84o6=S0wJr=q+fNjbmMiytb=-pskib5jX9q3vPxPeJJA@mail.gmail.com>
 <20150903104648.GA16636@t430slt.redhat.com>
Message-ID: <CAHV77z8cMdhPC+VMKc0iW3MSX9B3Yen8nHQfvW+0iJ5c5RpanA@mail.gmail.com>

On Thu, Sep 3, 2015 at 6:46 AM, Steven Hardy <shardy at redhat.com> wrote:
> On Wed, Sep 02, 2015 at 08:32:25PM -0400, James Slagle wrote:
>> On Wed, Sep 2, 2015 at 4:22 PM, Dan Prince <dprince at redhat.com> wrote:
>> > We had an IRC discussion today about the 'heat-admin' user in #tripleo.
>> >
>> > Upstream Heat recently reverted the 'instance_user' config file option
>> > which we relied on in TripleO to standardize the default (admin) user
>> > on our nodes. It is my understanding that Heat would prefer not to
>> > maintain this option because it causes subtle compatibility issues
>> > across the OpenStack and AWS APIs and the interactions between cloud
>> > -init version, etc. So it was deprecated in Icehouse... and recently
>> > removed in [1].
>> >
>> > We could just go with the default distro user (centos, fedora, ubuntu,
>> > etc.) but it would be really nice to standardize on a user name for
>> > maintenance should anyone ever spin up a cloud using multiple distros
>> > or something.
>> >
>> > So a couple of options. We could just go on and update our templates
>> > like this [2]. This actually seems pretty clean to me, but it would
>> > require anybody who has created custom firstboot scripts to do the same
>> > (we have proposed docker patches with firstboot scripts that need
>> > similar updates).
>>
>> Yea, that's the main reason I'm not fond of this approach. It really
>> feels like cluttering up the firstboot interface, in that anyone who
>> wants to plug in their own config there has to remember to also
>> include this snippet. It leads to copying/pasting around yaml, which I
>> don't think is a great pattern going forward.
>
> Yeah, I agree, so I proposed an alternative approach:
>
> https://review.openstack.org/#/c/220057/
>
> This leaves the existing NodeUserData interface intact, and introduces a
> new NodeAdminUserData interface specifically for injecting the
> admin user.

This looks like a nice solution, and it addresses the concern. Thanks
for pulling it together.

>
>> It would be nice to have a cleaner separation between the interfaces
>> that we offer to users and those that need to be reserved/used for
>> TripleO's own purposes.
>
> Agreed, that's what I was aiming for with my patch above, we should
> probably work towards more clearly documenting what's supported "external"
> vs internal interfaces tho.
>
>> I'm not sure of a better solution though other than a native
>> SoftwareDeployment resource in the templates directly that creates a
>> known user and reads the ssh keys from the user data (via a script).
>
> I think it's better to avoid using SoftwareDeployment resources for this
> if possible, because you really want the user configured as early as
> possible for easier debugging e.g when credentials or network are broken
> and SoftwareDeployments don't work because occ can't reach heat.
>
>> Or, what about baking in some static configuration for cloud-init into
>> our images that creates the known user?
>
> That could work, but you'd still need a user-data script to collect the SSH
> key from the nova metadata server, because we won't know that at
> image-build time.
>
> You're right tho, this probably could be built into the image, it just
> seems a little more flexible to expose it in the templates.
>
>> > Alternately, we could propose that Heat revert the instance_user
>> > feature or some version of it. We've been using that for a year or two
>> > now and it has actually been fairly nice to set the default that way.
>>
>> I really liked having the one consistent user no matter the cloud
>> image you deployed from as well. I'm not sure we could successfully
>> persuade it to go back in though given it was deprecated in Icehouse.
>
> So, I don't think this is impossible, but let me try to explain/remember
> the reasons for removing it:
>
> 1. Many users disliked that we injected a non-default user and wanted
> transparency, e.g just let them configure the users they want via
> cloud-init vs "hidden" configuration injected via heat.  This actually
> originated with Clint, back in 2013[1] and was also flagged as impacting
> TripleO back in 2014.
>
> 2. We have historically had to maintain code to inject this user which works
> over a wide range of cloud-init versions [2]. This has proven to be very
> difficult, for various reasons, including:
>  - It's impossible to find a cloud-config syntax which works over all
>    versions of cloud-init we (heat) expect to support (e.g Ubuntu 12.04 LTS
>    still ships 0.6.3 which doesn't support cloud-config syntax *at all*,
>    and it's expected to be supported until 2017).
>  - The alternative to cloud-config was an approach which injects a boothook
>    to create the user, but this ran into issues on systems with selinux
>    which were difficult/impossible to resolve in a way compatible with
>    0.6.3.
>
> So, IIRC, we reached the conclusion that it made sense to just punt the
> decision of what users to create, and how to create them to the template
> author, who will presumably know if they want a user created, and if so
> what version of cloud-init they expect to support in their images
> (something the deployers and/or template authors know but heat maintainers
> can't possibly know with any confidence).
>
> There are other advantages to the config-via-template approach too, e.g
> what if you wanted to create a different admin user per cloud when TripleO
> supports deploying multiple overclouds?  Or if a deployer wanted to inject
> some additional user configuration such as SSH keys or group membership,
> sudo rules, whatever.  All of this is easy via templates and impossible via
> the hard-coded instance_user option.
>
> On balance, I think we're best to let the removal of instance_user stand,
> and go with the template approach I proposed above, but I'm still open to
> discussing it if folks feel strongly otherwise :)

Yea, makes sense, thanks for the recap. I think the patch you've
proposed will work out.

-- 
-- James Slagle
--


From vkuklin at mirantis.com  Thu Sep  3 11:37:46 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 3 Sep 2015 14:37:46 +0300
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
 <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
 <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>
Message-ID: <CAHAWLf2apU=0b_xOhEMA=DjKoEKRsSCtys4sGnjyBmQckgXhUA@mail.gmail.com>

+2

On Thu, Sep 3, 2015 at 8:49 AM, Anastasia Urlapova <aurlapova at mirantis.com>
wrote:

> +1
>
> On Wed, Sep 2, 2015 at 4:03 PM, Dmitry Borodaenko <
> dborodaenko at mirantis.com> wrote:
>
>> +1, Evgeny has been a #1 committer in fuel-docs for a while, it's great
>> to see him pick up on the reviews, too.
>>
>> On Wed, Sep 2, 2015 at 3:24 PM Irina Povolotskaya <
>> ipovolotskaya at mirantis.com> wrote:
>>
>>> Fuelers,
>>>
>>> I'd like to nominate Evgeniy Konstantinov for the fuel-docs-core team.
>>> He has contributed thousands of lines of documentation to Fuel over
>>> the past several months, and has been a diligent reviewer:
>>>
>>>
>>> http://stackalytics.com/?user_id=evkonstantinov&release=all&project_type=all&module=fuel-docs
>>>
>>> I believe it's time to grant him core reviewer rights in the fuel-docs
>>> repository.
>>>
>>> Core reviewer approval process definition:
>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>
>>> --
>>> Best regards,
>>>
>>> Irina
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/e97b1693/attachment.html>

From bdobrelia at mirantis.com  Thu Sep  3 12:15:13 2015
From: bdobrelia at mirantis.com (Bogdan Dobrelya)
Date: Thu, 03 Sep 2015 14:15:13 +0200
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
	Fuel-Library Core
Message-ID: <55E839D1.7030404@mirantis.com>

+1
Thank you Alex for your hard work.

> Hi,
> 
> I would like to nominate Alex Schultz to the Fuel-Library Core team. He's been
> doing a great job writing patches. At the same time his reviews are
> solid, with comments for further improvements. He's the #3 reviewer and #1
> contributor with 46 commits over the last 90 days [1]. Additionally, Alex has
> been very active in IRC providing great ideas. His 'librarian' blueprint
> [3] was a big step toward the puppet community.
> 
> Fuel Library, please vote with +1/-1 for approval/objection. Voting will be
> open until September 9th. This will go forward after voting is closed if
> there are no objections.
> 
> Overall contribution:
> [0] http://stackalytics.com/?user_id=alex-schultz
> Fuel library contribution for last 90 days:
> [1] http://stackalytics.com/report/contribution/fuel-library/90
> List of reviews:
> [2]
> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
> 'Librarian activities' in the mailing list:
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando


From thierry at openstack.org  Thu Sep  3 12:22:28 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 3 Sep 2015 14:22:28 +0200
Subject: [openstack-dev] [all] Base feature deprecation policy
Message-ID: <55E83B84.5000000@openstack.org>

Hi everyone,

A feature deprecation policy is a standard way to communicate and
perform the removal of user-visible behaviors and capabilities. It helps
set user expectations about how much and for how long they can rely on a
feature being present, and gives them reassurance about the timeframe
they have to adapt in such cases.

In OpenStack we have always had a feature deprecation policy that applied
to "integrated projects"; however, it was never written down. It was
something like "to remove a feature, you mark it deprecated for n
releases, then you can remove it".

We don't have an "integrated release" anymore, but having a base
deprecation policy, and knowing which projects are mature enough to
follow it, is a great piece of information to communicate to our users.

That's why the next-tags workgroup at the Technical Committee has been
working to propose such a base policy as a 'tag' that project teams can
opt to apply to their projects when they agree to apply it to one of
their deliverables:

https://review.openstack.org/#/c/207467/

Before going through the last stage of this, we want to survey existing
projects to see which deprecation policy they currently follow, and
verify that our proposed base deprecation policy makes sense. The goal
is not to dictate something new from the top, it's to reflect what's
generally already applied on the field.

In particular, the current proposal says:

"At the very minimum the feature [...] should be marked deprecated (and
still be supported) in the next two coordinated end-of-cycle releases.
For example, a feature deprecated during the M development cycle should
still appear in the M and N releases and cannot be removed before the
beginning of the O development cycle."

That would be a n+2 deprecation policy. Some suggested that this is too
far-reaching, and that a n+1 deprecation policy (feature deprecated
during the M development cycle can't be removed before the start of the
N cycle) would better reflect what's being currently done. Or that
config options (which are user-visible things) should have n+1 as long
as the underlying feature (or behavior) is not removed.

Please let us know what makes the most sense. In particular between the
3 options (but feel free to suggest something else):

1. n+2 overall
2. n+2 for features and capabilities, n+1 for config options
3. n+1 overall

Thanks in advance for your input.

-- 
Thierry Carrez (ttx)


From andrey.mp at gmail.com  Thu Sep  3 12:43:28 2015
From: andrey.mp at gmail.com (Andrey Pavlov)
Date: Thu, 3 Sep 2015 15:43:28 +0300
Subject: [openstack-dev] [nova] [neutron] [rally] Neutron or nova
	degradation?
Message-ID: <CAKdBrSdyj-Q3vKY6uFkSQW4G4gqOmOdCm4Vz6MsmPqByihU=+A@mail.gmail.com>

Hello,

We have rally job with fake virt driver. And we run it periodically.
This job runs 200 servers and measures 'show' operations.

On 18.08 it ran well [1]. But on 21.08 it failed with a timeout [2].
I tried to understand what happens.
I tried to check this job with 20 servers only [3]. It passed, but I see
that operations with neutron take more time now (list subnets, list
network interfaces), and as a result starting and showing instances also
take more time.

Maybe anyone knows what happens?


[1]
http://logs.openstack.org/13/211613/6/experimental/ec2-api-rally-dsvm-fakevirt/fac263e/
[2]
http://logs.openstack.org/74/213074/7/experimental/ec2-api-rally-dsvm-fakevirt/91d0675/
[3]
http://logs.openstack.org/46/219846/1/experimental/ec2-api-rally-dsvm-fakevirt/dad98f0/

-- 
Kind regards,
Andrey Pavlov.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/842e282a/attachment.html>

From john at johngarbutt.com  Thu Sep  3 12:48:24 2015
From: john at johngarbutt.com (John Garbutt)
Date: Thu, 3 Sep 2015 13:48:24 +0100
Subject: [openstack-dev] [nova] [docs] Are cells still 'experimental'?
In-Reply-To: <CAEd1pt6v_du3nj+krP-Gq=kcxHsd=6HSyg1-UnEjZPPkxwHoOw@mail.gmail.com>
References: <55E7B6AD.6050007@lanabrindley.com>
 <CAEd1pt6v_du3nj+krP-Gq=kcxHsd=6HSyg1-UnEjZPPkxwHoOw@mail.gmail.com>
Message-ID: <CABib2_r+OJFeAB1CjuQbGotMR-f_wOFnk9eq47wiK5p6MqqEqw@mail.gmail.com>

Cells v1 were marked as experimental because of the total lack of testing.

Some of that has now been resolved, we have a check job that tests a
subset of our functionality on cells.

But honestly, given we plan to remove cells v1, and the limitations
you will find using cells v1, it's tempting to keep the experimental
label, purely to discourage new deployments of cells v1.

Really I think we need to start to formalise how we communicate to our
users about the maturity and usability level of all our features. I
have tried to draft out how that might look here:
https://review.openstack.org/#/c/215664/

Thanks,
John

On 3 September 2015 at 07:33, Michael Still <mikal at stillhq.com> wrote:
> I think they should not be marked experimental. For better or for worse,
> there are multiple sites deployed, and we will support them if something
> goes wrong.
>
> That said, I wouldn't be encouraging new deployments of cells v1 at this
> point.
>
> Michael
>
> On Thu, Sep 3, 2015 at 12:55 PM, Lana Brindley <openstack at lanabrindley.com>
> wrote:
>>
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> Hi John,
>>
>> I noticed while looking through some Nova docs bugs that the Config Ref
>> lists cells as experimental:
>>
>> http://docs.openstack.org/kilo/config-reference/content/section_compute-
>> cells.html
>>
>> Is this still true?
>>
>> Thanks,
>> Lana
>>
>> - --
>> Lana Brindley
>> Technical Writer
>> Rackspace Cloud Builders Australia
>> http://lanabrindley.com
>> -----BEGIN PGP SIGNATURE-----
>> Version: GnuPG v2.0.22 (GNU/Linux)
>>
>> iQEcBAEBAgAGBQJV57atAAoJELppzVb4+KUyjzYIAIain4YZauEcMEMNYfdI74Lj
>> qmUO4U5kTkg7dFcsW1DJhhPvPjgsJPKRcMFofcZEB7qV+QcCbDx9g691NlB3u1dG
>> MEOtBq9y5o1PJMPxl8xcbHaOLm028E4f7oUrlODpQs/dlWS8vfXpOeT/CwYsqFG4
>> lF08/YpvNaNLBytCjbFgFqmQt5I+8gLBmyXgRl06+HflgjYsr6fQyjQzMlVfioPW
>> 5IYg0p+Zj4B/MxRo5xCWph0e9YdeE3CBpqGB33iay06341Sh0cVi0O4QPTZ/f2tA
>> TbZzskHDKJoEb6kqbz4jMtzoDSr76N4+ltwMynzpCY/I8tyuV+Yj5vIWO79Wo6Q=
>> =hjD8
>> -----END PGP SIGNATURE-----
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Rackspace Australia


From sean at dague.net  Thu Sep  3 12:51:16 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 3 Sep 2015 08:51:16 -0400
Subject: [openstack-dev] [nova] [docs] Are cells still 'experimental'?
In-Reply-To: <CABib2_r+OJFeAB1CjuQbGotMR-f_wOFnk9eq47wiK5p6MqqEqw@mail.gmail.com>
References: <55E7B6AD.6050007@lanabrindley.com>
 <CAEd1pt6v_du3nj+krP-Gq=kcxHsd=6HSyg1-UnEjZPPkxwHoOw@mail.gmail.com>
 <CABib2_r+OJFeAB1CjuQbGotMR-f_wOFnk9eq47wiK5p6MqqEqw@mail.gmail.com>
Message-ID: <55E84244.4020703@dague.net>

On 09/03/2015 08:48 AM, John Garbutt wrote:
> Cells v1 were marked as experimental because of the total lack of testing.
> 
> Some of that has now been resolved, we have a check job that tests a
> subset of our functionality on cells.
> 
> But honestly, given we plan to remove cells v1, and the limitations
> you will find using cells v1, it's tempting to keep the experimental
> label, purely to discourage new deployments of cells v1.
> 
> Really I think we need to start to formalise how we communicate to our
> users about the maturity and usability level of all our features. I
> have tried to draft out how that might look here:
> https://review.openstack.org/#/c/215664/

Honestly, it's still got somewhat limited test coverage, and using Cells
means some features of Nova don't work. So Experimental is still the
right state for cells v1.

	-Sean

-- 
Sean Dague
http://dague.net

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/61b4ccd3/attachment.pgp>

From andrew at lascii.com  Thu Sep  3 13:01:15 2015
From: andrew at lascii.com (Andrew Laski)
Date: Thu, 3 Sep 2015 09:01:15 -0400
Subject: [openstack-dev] [nova] [docs] Are cells still 'experimental'?
In-Reply-To: <CAEd1pt6v_du3nj+krP-Gq=kcxHsd=6HSyg1-UnEjZPPkxwHoOw@mail.gmail.com>
References: <55E7B6AD.6050007@lanabrindley.com>
 <CAEd1pt6v_du3nj+krP-Gq=kcxHsd=6HSyg1-UnEjZPPkxwHoOw@mail.gmail.com>
Message-ID: <20150903130115.GG3226@crypt>

On 09/03/15 at 04:33pm, Michael Still wrote:
>I think they should not be marked experimental. For better or for worse,
>there are multiple sites deployed, and we will support them if something
>goes wrong.

I don't recall all of the criteria that went into the decision to mark 
them experimental, but a large factor was the lack of CI testing.  There 
is now a gating CI job so this no longer holds.  There is still the 
issue of missing features like security groups and host aggregates.  And 
a cells deployment does not benefit from all of the guarantees that we 
make around upgrades.  It is not possible to run cells from multiple 
releases in the same deployment without issue.

So while experimental may be the wrong description for cells, it is 
important to call out the fact that they aren't up to the level of a 
non-cells deployment in many ways.

>
>That said, I wouldn't be encouraging new deployments of cells v1 at this
>point.

Agreed.

>
>Michael
>
>On Thu, Sep 3, 2015 at 12:55 PM, Lana Brindley <openstack at lanabrindley.com>
>wrote:
>
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> Hi John,
>>
>> I noticed while looking through some Nova docs bugs that the Config Ref
>> lists cells as experimental:
>>
>> http://docs.openstack.org/kilo/config-reference/content/section_compute-
>> cells.html
>>
>> Is this still true?
>>
>> Thanks,
>> Lana
>>
>> - --
>> Lana Brindley
>> Technical Writer
>> Rackspace Cloud Builders Australia
>> http://lanabrindley.com
>> -----BEGIN PGP SIGNATURE-----
>> Version: GnuPG v2.0.22 (GNU/Linux)
>>
>> iQEcBAEBAgAGBQJV57atAAoJELppzVb4+KUyjzYIAIain4YZauEcMEMNYfdI74Lj
>> qmUO4U5kTkg7dFcsW1DJhhPvPjgsJPKRcMFofcZEB7qV+QcCbDx9g691NlB3u1dG
>> MEOtBq9y5o1PJMPxl8xcbHaOLm028E4f7oUrlODpQs/dlWS8vfXpOeT/CwYsqFG4
>> lF08/YpvNaNLBytCjbFgFqmQt5I+8gLBmyXgRl06+HflgjYsr6fQyjQzMlVfioPW
>> 5IYg0p+Zj4B/MxRo5xCWph0e9YdeE3CBpqGB33iay06341Sh0cVi0O4QPTZ/f2tA
>> TbZzskHDKJoEb6kqbz4jMtzoDSr76N4+ltwMynzpCY/I8tyuV+Yj5vIWO79Wo6Q=
>> =hjD8
>> -----END PGP SIGNATURE-----
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>-- 
>Rackspace Australia

>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From amuller at redhat.com  Thu Sep  3 13:07:39 2015
From: amuller at redhat.com (Assaf Muller)
Date: Thu, 3 Sep 2015 09:07:39 -0400
Subject: [openstack-dev] [nova] [neutron] [rally] Neutron or nova
	degradation?
In-Reply-To: <CAKdBrSdyj-Q3vKY6uFkSQW4G4gqOmOdCm4Vz6MsmPqByihU=+A@mail.gmail.com>
References: <CAKdBrSdyj-Q3vKY6uFkSQW4G4gqOmOdCm4Vz6MsmPqByihU=+A@mail.gmail.com>
Message-ID: <CABARBAbBZXqi7QOVs5Cza6mphqk6pfEZi1jburPPGYZNP63zzg@mail.gmail.com>

On Thu, Sep 3, 2015 at 8:43 AM, Andrey Pavlov <andrey.mp at gmail.com> wrote:

> Hello,
>
> We have rally job with fake virt driver. And we run it periodically.
> This job runs 200 servers and measures 'show' operations.
>
> On 18.08 it ran well[1], but on 21.08 it failed with a timeout[2].
> I tried to understand what happened.
> I tried this job with only 20 servers[3]. It passed, but I see that
> operations with neutron now take more time (list subnets, list network
> interfaces), and as a result starting and showing instances also take
> more time.
>
> Does anyone know what happened?
>

Looking at the merged Neutron patches between the 18th and 21st, there are a
lot of candidates, including QoS and the work around quotas.

I think the best way to find out would be to run a profiler against Neutron
from the 18th and Neutron from the 21st while running the Rally tests, and
see whether the major bottlenecks moved. Last time I profiled Neutron I used
GreenletProfiler:
https://pypi.python.org/pypi/GreenletProfiler

Ironically, I was having issues with the profiler that comes with Eventlet.
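
If it helps, here's a rough sketch of the compare-two-profiles approach. I'm
using the stdlib cProfile purely for illustration (for an eventlet-based
service like Neutron you'd want GreenletProfiler, which exposes a similar
start/stop workflow), and the two workload functions are hypothetical
stand-ins for the Rally runs against the two Neutron versions:

```python
import cProfile
import io
import pstats

def profile_run(workload, label):
    """Profile a callable and return the formatted stats report."""
    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
    print("=== %s ===" % label)
    print(out.getvalue())
    return out.getvalue()

# Hypothetical stand-ins for "Rally against Neutron from the 18th" vs.
# "from the 21st"; diff the two reports to see if the bottlenecks moved.
def neutron_18th():
    sum(i * i for i in range(100000))

def neutron_21st():
    sorted(range(100000), key=lambda i: -i)

report_old = profile_run(neutron_18th, "neutron @ 18th (stand-in)")
report_new = profile_run(neutron_21st, "neutron @ 21st (stand-in)")
```

If the top cumulative-time entries shift between the two reports, that
points at the patch range responsible for the slowdown.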


>
>
> [1]
> http://logs.openstack.org/13/211613/6/experimental/ec2-api-rally-dsvm-fakevirt/fac263e/
> [2]
> http://logs.openstack.org/74/213074/7/experimental/ec2-api-rally-dsvm-fakevirt/91d0675/
> [3]
> http://logs.openstack.org/46/219846/1/experimental/ec2-api-rally-dsvm-fakevirt/dad98f0/
>
> --
> Kind regards,
> Andrey Pavlov.
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/cfdcd275/attachment.html>

From brian.rosmaita at RACKSPACE.COM  Thu Sep  3 13:15:53 2015
From: brian.rosmaita at RACKSPACE.COM (Brian Rosmaita)
Date: Thu, 3 Sep 2015 13:15:53 +0000
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
Message-ID: <D20DBFD8.210FE%brian.rosmaita@rackspace.com>

I added an agenda item for this for today's Glance meeting:
   https://etherpad.openstack.org/p/glance-team-meeting-agenda

I'd prefer to hold my vote until after the meeting.

cheers,
brian


On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:

>Malini, all,
>
>My current opinion is -1 for FFE based on the concerns in the spec and
>implementation.
>
>I'm more than happy to realign my stand after we have an updated spec and
>a) it's agreed to be the approach as of now, and b) we can evaluate how
>much work the implementation needs to meet the revisited spec.
>
>If we end up in the unfortunate situation that this functionality does
>not merge in time for Liberty, I'm confident that this will be one of the
>first things in Mitaka. I really don't think there is too much left to do;
>we just might run out of time.
>
>Thanks for your patience and endless effort to get this done.
>
>Best,
>Erno
>
>> -----Original Message-----
>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>> Sent: Thursday, September 03, 2015 10:10 AM
>> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
>> questions)
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>> 
>> Flavio, first thing in the morning Kent will upload a new BP that
>>addresses the
>> comments. We would very much appreciate a +1 on the FFE.
>> 
>> Regards
>> Malini
>> 
>> 
>> 
>> -----Original Message-----
>> From: Flavio Percoco [mailto:flavio at redhat.com]
>> Sent: Thursday, September 03, 2015 1:52 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>> 
>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>> >Hi,
>> >
>> >I wanted to propose 'Single disk image OVA import' [1] feature proposal
>> >for exception. This looks like a decently safe proposal that should be
>> >able to adjust in the extended time period of Liberty. It has been
>> >discussed at the Vancouver summit during a work session and the
>> >proposal has been trimmed down as per the suggestions then; has been
>> >overall accepted by those present during the discussions (barring a few
>> >changes needed on the spec itself). Being an addition to the already
>> >existing import task, it doesn't involve an API change or changes to
>> >any of the core Image functionality as of now.
>> >
>> >Please give your vote: +1 or -1 .
>> >
>> >[1] https://review.openstack.org/#/c/194868/
>> 
>> I'd like to see support for OVF being, finally, implemented in Glance.
>> Unfortunately, I think there are too many open questions in the spec
>>right
>> now to make this FFE worthy.
>> 
>> Could those questions be answered before the EOW?
>> 
>> With those questions answered, we'll be able to provide a more realistic
>> vote.
>> 
>> Also, I'd like us to evaluate how mature the implementation[0] is and
>>the
>> likelihood of it addressing the concerns/comments in time.
>> 
>> For now, it's a -1 from me.
>> 
>> Thanks all for working on this, this has been a long time requested
>>format to
>> have in Glance.
>> Flavio
>> 
>> [0] https://review.openstack.org/#/c/214810/
>> 
>> 
>> --
>> @flaper87
>> Flavio Percoco
>



From rlooyahoo at gmail.com  Thu Sep  3 14:09:56 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Thu, 3 Sep 2015 10:09:56 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55E83B84.5000000@openstack.org>
References: <55E83B84.5000000@openstack.org>
Message-ID: <CA+5K_1FqSmMBnEh5s88+AAq2UsaK5rHRgu68LFNWer42K8d6Pg@mail.gmail.com>

On 3 September 2015 at 08:22, Thierry Carrez <thierry at openstack.org> wrote:

> ...
> In particular, the current proposal says:
>
> "At the very minimum the feature [...] should be marked deprecated (and
> still be supported) in the next two coordinated end-of-cyle releases.
> For example, a feature deprecated during the M development cycle should
> still appear in the M and N releases and cannot be removed before the
> beginning of the O development cycle."
>
> That would be a n+2 deprecation policy. Some suggested that this is too
> far-reaching, and that a n+1 deprecation policy (feature deprecated
> during the M development cycle can't be removed before the start of the
> N cycle) would better reflect what's being currently done. Or that
> config options (which are user-visible things) should have n+1 as long
> as the underlying feature (or behavior) is not removed.
>
> Please let us know what makes the most sense. In particular between the
> 3 options (but feel free to suggest something else):
>
> 1. n+2 overall
> 2. n+2 for features and capabilities, n+1 for config options
> 3. n+1 overall
>

ironic does n+1 for config options and features. It is possible that ironic
did n+2 as well, but if so, I don't recall.

Thanks for asking!

--ruby
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/3d371a02/attachment.html>

From openstack-dev at storpool.com  Thu Sep  3 14:17:04 2015
From: openstack-dev at storpool.com (Peter Penchev)
Date: Thu, 3 Sep 2015 17:17:04 +0300
Subject: [openstack-dev] [cinder][third-party] StorPool Cinder CI
In-Reply-To: <CAFPwUpFyKns-T_2WGDX_y4YZYGr-8UWfcC_CrpMxJEESvgSgmg@mail.gmail.com>
References: <CAFPwUpGhs9=99_417B5MUSBfxnaqTy=EWQ0DTYHJCoQaG4forA@mail.gmail.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AA409D@G9W0753.americas.hpqcorp.net>
 <CAFPwUpFZEhW8Rza1F3ZOwOTJx8HqQx+ruXMzBGkDfYzz9nSxGQ@mail.gmail.com>
 <CAFPwUpFyKns-T_2WGDX_y4YZYGr-8UWfcC_CrpMxJEESvgSgmg@mail.gmail.com>
Message-ID: <CAFPwUpFnV-mmixzq568KbiXXJvBUYd3_JxemksEUQYu0wCdgHw@mail.gmail.com>

On Fri, Aug 28, 2015 at 6:29 PM, Peter Penchev
<openstack-dev at storpool.com> wrote:
> On Fri, Aug 28, 2015 at 1:03 AM, Peter Penchev
> <openstack-dev at storpool.com> wrote:
>> On Fri, Aug 28, 2015 at 12:22 AM, Asselin, Ramy <ramy.asselin at hp.com> wrote:
>>> Hi Peter,
>>>
>>> Your log files require downloads. Please fix it such that they can be viewed directly [1]
>>
>> Hi, and thanks for the fast reply!  Yes, I'll try to change the
>> webserver's configuration, although the snippet in the FAQ won't help
>> a lot, since it's a lighttpd server, not Apache.  I'll get back to you
>> when I've figured something out.
>
> OK, it took some twiddling with the lighttpd config, but it's done -
> now files with a .gz extension are served uncompressed in a way that
> makes the browser display them and not save them to disk.
>
> About the rebasing over our local patches: I made the script display
> the subject lines and the file lists of the commits on our local
> branch (the ones that the source is being rebased onto).  Pay no
> attention to the several commits to "devstack"; they are mostly
> artifacts of our own infrastructure and the setup of the machines, and
> in most cases they are no-ops.
>
> About your question about 3129 and 217802/1 - well, to be fair, this
> is not a Cinder patch, so it's kind of expected that you won't find it
> in the Cinder commits :)  It's a Brick patch and it is indeed listed a
> couple of lines down in the os-brick section :)
>
> So, a couple of examples of our shiny new log setup (well, pretty much
> the same as the old boring log setup, but oh well):
>
> http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3166/
> http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3167/
> http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3168/

Hi,

Yesterday I realized that Mike Perez was on vacation; well, is there
anybody else who could take a look at our CI's logs and drop a line to
the third-party CI team to reenable our Gerrit account?

As an additional step of making our CI's patches public, I just
submitted https://review.openstack.org/220155/ - "Reintroduce the
StorPool driver", and updated https://review.openstack.org/#/c/192639/
- add the os-brick StorPool connector (just for informational
purposes).  And, yes, I do realize that the Liberty feature freeze is
pretty much upon us, but, well, it's been three weeks since the
message to third-party-announce that our CI was ready to be
reenabled... so would there be any chance for our Cinder driver to be
reintroduced?

Thanks in advance for any assistance!

G'luck,
Peter


From sean at coreitpro.com  Thu Sep  3 14:30:11 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Thu, 3 Sep 2015 14:30:11 +0000
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <1640C0EA-107E-4DEF-94FD-DE45CB18C04D@gmail.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
Message-ID: <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>

It's not a case of cloud-init supporting IPv6 - the Amazon EC2 metadata API defines transport-level details, and currently it only specifies a well-known IPv4 link-local address to connect to. No well-known link-local IPv6 address has been defined.

I usually recommend config-drive for IPv6-enabled clouds because of this.
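
To make the transport detail concrete, a small sketch (the IPv4 address is
the one the EC2 metadata API defines; the config-drive path follows the
conventional layout, and the sample payload is illustrative only, not real
instance data):

```python
import json

# The thread's point: the EC2 metadata service is only defined at a
# well-known IPv4 link-local address; no IPv6 equivalent is specified,
# so cloud-init has nothing to connect to on an IPv6-only network.
EC2_METADATA_URL = "http://169.254.169.254/latest/meta-data/"

# Config-drive avoids the network entirely: metadata is a JSON file on
# an attached disk (conventional path shown; the mount point varies).
CONFIG_DRIVE_METADATA = "/mnt/config/openstack/latest/meta_data.json"

def parse_config_drive_metadata(raw_json):
    """Parse the meta_data.json payload read from the config drive."""
    meta = json.loads(raw_json)
    return meta.get("uuid"), meta.get("hostname")

# Illustrative payload, not real instance data.
sample = '{"uuid": "0000-demo", "hostname": "ipv6-guest"}'
print(parse_config_drive_metadata(sample))
```
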
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/c36dc45b/attachment.html>

From anteaya at anteaya.info  Thu Sep  3 14:35:33 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Thu, 3 Sep 2015 10:35:33 -0400
Subject: [openstack-dev] [cinder][third-party] StorPool Cinder CI
In-Reply-To: <CAFPwUpFnV-mmixzq568KbiXXJvBUYd3_JxemksEUQYu0wCdgHw@mail.gmail.com>
References: <CAFPwUpGhs9=99_417B5MUSBfxnaqTy=EWQ0DTYHJCoQaG4forA@mail.gmail.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AA409D@G9W0753.americas.hpqcorp.net>
 <CAFPwUpFZEhW8Rza1F3ZOwOTJx8HqQx+ruXMzBGkDfYzz9nSxGQ@mail.gmail.com>
 <CAFPwUpFyKns-T_2WGDX_y4YZYGr-8UWfcC_CrpMxJEESvgSgmg@mail.gmail.com>
 <CAFPwUpFnV-mmixzq568KbiXXJvBUYd3_JxemksEUQYu0wCdgHw@mail.gmail.com>
Message-ID: <55E85AB5.4060008@anteaya.info>

On 09/03/2015 10:17 AM, Peter Penchev wrote:
> On Fri, Aug 28, 2015 at 6:29 PM, Peter Penchev
> <openstack-dev at storpool.com> wrote:
>> On Fri, Aug 28, 2015 at 1:03 AM, Peter Penchev
>> <openstack-dev at storpool.com> wrote:
>>> On Fri, Aug 28, 2015 at 12:22 AM, Asselin, Ramy <ramy.asselin at hp.com> wrote:
>>>> Hi Peter,
>>>>
>>>> Your log files require downloads. Please fix it such that they can be viewed directly [1]
>>>
>>> Hi, and thanks for the fast reply!  Yes, I'll try to change the
>>> webserver's configuration, although the snippet in the FAQ won't help
>>> a lot, since it's a lighttpd server, not Apache.  I'll get back to you
>>> when I've figured something out.
>>
>> OK, it took some twiddling with the lighttpd config, but it's done -
>> now files with a .gz extension are served uncompressed in a way that
>> makes the browser display them and not save them to disk.
>>
>> About the rebasing over our local patches: I made the script display
>> the subject lines and the file lists of the commits on our local
>> branch (the ones that the source is being rebased onto).  Pay no
>> attention to the several commits to "devstack"; they are mostly
>> artifacts of our own infrastructure and the setup of the machines, and
>> in most cases they are no-ops.
>>
>> About your question about 3129 and 217802/1 - well, to be fair, this
>> is not a Cinder patch, so it's kind of expected that you won't find it
>> in the Cinder commits :)  It's a Brick patch and it is indeed listed a
>> couple of lines down in the os-brick section :)
>>
>> So, a couple of examples of our shiny new log setup (well, pretty much
>> the same as the old boring log setup, but oh well):
>>
>> http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3166/
>> http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3167/
>> http://ci-openstack.storpool.com:8080/job/dsvm-tempest-storpool-cinder-driver/3168/
> 
> Hi,
> 
> Yesterday I realized that Mike Perez was on vacation; well, is there
> anybody else who could take a look at our CI's logs and drop a line to
> the third-party CI team to reenable our Gerrit account?

There is no third party CI team. Disabling and re-enabling CI accounts
is the responsibility of the infra team.

But you are correct, we are waiting to hear from Mike Perez or a
designate.

And while I'm here a big thank you to Ramy Asselin for all his hard work
helping you.

Thank you,
Anita.


> 
> As an additional step of making our CI's patches public, I just
> submitted https://review.openstack.org/220155/ - "Reintroduce the
> StorPool driver", and updated https://review.openstack.org/#/c/192639/
> - add the os-brick StorPool connector (just for informational
> purposes).  And, yes, I do realize that the Liberty feature freeze is
> pretty much upon us, but, well, it's been three weeks since the
> message to third-party-announce that our CI was ready to be
> reenabled... so would there be any chance for our Cinder driver to be
> reintroduced?
> 
> Thanks in advance for any assistance!
> 
> G'luck,
> Peter
> 
> 



From zbitter at redhat.com  Thu Sep  3 14:42:41 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Thu, 3 Sep 2015 10:42:41 -0400
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <CAA16xcxAnXj9mDhoATFDvhvkBJiF5g4taTv7g3LNoONhuRo4jA@mail.gmail.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
 <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>
 <20150902085546.GA25909@t430slt.redhat.com> <55E73721.1000804@redhat.com>
 <CAA16xcxAnXj9mDhoATFDvhvkBJiF5g4taTv7g3LNoONhuRo4jA@mail.gmail.com>
Message-ID: <55E85C61.5000808@redhat.com>

On 03/09/15 02:56, Angus Salkeld wrote:
> On Thu, Sep 3, 2015 at 3:53 AM Zane Bitter <zbitter at redhat.com
> <mailto:zbitter at redhat.com>> wrote:
>
>     On 02/09/15 04:55, Steven Hardy wrote:
>      > On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
>      >> On 2 September 2015 at 11:53, Angus Salkeld
>     <asalkeld at mirantis.com <mailto:asalkeld at mirantis.com>> wrote:
>      >>
>      >>> 1. limit the number of resource actions in parallel (maybe base
>     on the
>      >>> number of cores)
>      >>
>      >> I'm having trouble mapping that back to 'and heat-engine is
>     running on
>      >> 3 separate servers'.
>      >
>      > I think Angus was responding to my test feedback, which was a
>     different
>      > setup, one 4-core laptop running heat-engine with 4 worker processes.
>      >
>      > In that environment, the level of additional concurrency becomes
>     a problem
>      > because all heat workers become so busy that creating a large stack
>      > DoSes the Heat services, and in my case also the DB.
>      >
>      > If we had a configurable option, similar to num_engine_workers, which
>      > enabled control of the number of resource actions in parallel, I
>     probably
>      > could have controlled that explosion in activity to a more
>     managable series
>      > of tasks, e.g I'd set num_resource_actions to
>     (num_engine_workers*2) or
>      > something.
>
>     I think that's actually the opposite of what we need.
>
>     The resource actions are just sent to the worker queue to get processed
>     whenever. One day we will get to the point where we are overflowing the
>     queue, but I guarantee that we are nowhere near that day. If we are
>     DoSing ourselves, it can only be because we're pulling *everything* off
>     the queue and starting it in separate greenthreads.
>
>
> worker does not use a greenthread per job like service.py does.
> The issue is that if you have fast actions, you can hit the db hard.
>
> QueuePool limit of size 5 overflow 10 reached, connection timed out,
> timeout 30
>
> It seems like it's not very hard to hit this limit. It comes from simply
> loading
> the resource in the worker:
> "/home/angus/work/heat/heat/engine/worker.py", line 276, in check_resource
> "/home/angus/work/heat/heat/engine/worker.py", line 145, in _load_resource
> "/home/angus/work/heat/heat/engine/resource.py", line 290, in load
> resource_objects.Resource.get_obj(context, resource_id)

This is probably me being naive, but that sounds strange. I would have 
thought that there is no way to exhaust the connection pool by doing 
lots of actions in rapid succession. I'd have guessed that the only way 
to exhaust a connection pool would be to have lots of connections open 
simultaneously. That suggests to me that either we are failing to 
expeditiously close connections and return them to the pool, or that we 
are - explicitly or implicitly - processing a bunch of messages in parallel.

>     In an ideal world, we might only ever pull one task off that queue at a
>     time. Any time the task is sleeping, we would use for processing stuff
>     off the engine queue (which needs a quick response, since it is serving
>     the ReST API). The trouble is that you need a *huge* number of
>     heat-engines to handle stuff in parallel. In the reductio-ad-absurdum
>     case of a single engine only processing a single task at a time, we're
>     back to creating resources serially. So we probably want a higher number
>     than 1. (Phase 2 of convergence will make tasks much smaller, and may
>     even get us down to the point where we can pull only a single task at a
>     time.)
>
>     However, the fewer engines you have, the more greenthreads we'll have to
>     allow to get some semblance of parallelism. To the extent that more
>     cores means more engines (which assumes all running on one box, but
>     still), the number of cores is negatively correlated with the number of
>     tasks that we want to allow.
>
>     Note that all of the greenthreads run in a single CPU thread, so having
>     more cores doesn't help us at all with processing more stuff in
>     parallel.
>
>
> Except, as I said above, we are not creating greenthreads in worker.

Well, maybe we'll need to in order to make things still work sanely with 
a low number of engines :) (Should be pretty easy to do with a semaphore.)

I think what y'all are suggesting is limiting the number of jobs that go 
into the queue... that's quite wrong IMO. Apart from the fact it's 
impossible (resources put jobs into the queue entirely independently, 
and have no knowledge of the global state required to throttle inputs), 
we shouldn't implement an in-memory queue with long-running tasks 
containing state that can be lost if the process dies - the whole point 
of convergence is we have... a message queue for that. We need to limit 
the rate that stuff comes *out* of the queue. And, again, since we have 
no knowledge of global state, we can only control the rate at which an 
individual worker processes tasks. The way to avoid killing the DB is to
put a constant ceiling on the workers * concurrent_tasks_per_worker product.
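
For concreteness, a toy sketch of that semaphore idea - plain threads rather
than greenthreads, and all names are illustrative, not Heat's actual worker
code. Twenty tasks arrive, but at most CONCURRENT_TASKS_PER_WORKER ever run
(and hit the DB) at once:

```python
import threading
import time

# Per-worker cap on concurrently executing resource actions; the global
# DB load is then bounded by workers * CONCURRENT_TASKS_PER_WORKER.
CONCURRENT_TASKS_PER_WORKER = 4
task_slots = threading.Semaphore(CONCURRENT_TASKS_PER_WORKER)

peak = 0
active = 0
counter_lock = threading.Lock()

def check_resource(resource_id):
    """Stand-in for a worker task that would otherwise hammer the DB."""
    global peak, active
    with task_slots:            # block here rather than running unboundedly
        with counter_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # pretend to do DB work
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=check_resource, args=(i,))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrency:", peak)
```

The queue itself stays unbounded; only the rate stuff comes *out* of it on
each worker is throttled, which is exactly the ceiling described above.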

cheers,
Zane.


From seanroberts66 at gmail.com  Thu Sep  3 15:24:56 2015
From: seanroberts66 at gmail.com (sean roberts)
Date: Thu, 3 Sep 2015 08:24:56 -0700
Subject: [openstack-dev] [akanda] outstanding issues from last meeting
Message-ID: <CAEoPogF_ig6YSY1dmX1hF9AHqTuLk4edh40=j+S3bO4KxhJD6g@mail.gmail.com>

This coming Monday is a holiday, so I propose to cancel the 07 September
meeting.

I do want to highlight a few things we need to iron out in the mean time.

As part of getting ready for the design summit, the Akanda project needs to
elect a PTL. It has been my pleasure to run the weekly Akanda development
meetings this year. As we transition into being a fully functional project,
we need someone who has shown technical leadership in the project to
date. That person is Adam Gandelman. I nominate Adam to be the Akanda PTL.
We can delay the voting until the next meeting, but I wanted to put it out
there for discussion.

Set the F2F hack for either the week of 13 Sept (PDX), the week of 20 Sept
(SFO), or the week of 27 Sept (PDX). Adam will be able to attend if we do
PDX. Mark's schedule is really tight. I would like to hold it 28-29 Sept in
PDX. That should allow everyone to join. Let's agree soonish so I can find
a space for us and we can start booking things.

Set either 10 or 17 Sept for Mitaka design summit planning.

~ sean
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/fab9db8b/attachment.html>

From emilien at redhat.com  Thu Sep  3 15:30:27 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 3 Sep 2015 11:30:27 -0400
Subject: [openstack-dev] [tripleo] Upgrade plans for RDO Manager -
 Brainstorming
In-Reply-To: <55DB6C8A.7040602@redhat.com>
References: <55DB6C8A.7040602@redhat.com>
Message-ID: <55E86793.30408@redhat.com>



On 08/24/2015 03:12 PM, Emilien Macchi wrote:
> Hi,
> 
> So I've been working on OpenStack deployments for 4 years now and so far
> RDO Manager is the second installer -after SpinalStack [1]- I'm working on.
> 
> SpinalStack already had interested features [2] that allowed us to
> upgrade our customer platforms almost every months, with full testing
> and automation.
> 
> Now, we have RDO Manager, I would be happy to share my little experience
> on the topic and help to make it possible in the next cycle.
> 
> For that, I created an etherpad [3], which is not too long and focused
> on basic topics for now. This is technical and focused on Infrastructure
> upgrade automation.
> 

One week without discussion or thoughts in the etherpad.
Can anyone who cares about upgrades participate in the thread?

Thank you,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/e41881d3/attachment.pgp>

From guillermo.ramirez-garcia at hpe.com  Thu Sep  3 16:01:11 2015
From: guillermo.ramirez-garcia at hpe.com (Ramirez Garcia, Guillermo)
Date: Thu, 3 Sep 2015 16:01:11 +0000
Subject: [openstack-dev] [freezer] Public IRC meetings every Thursday at 4PM
 (GMT0)
Message-ID: <3B8B4D8AB492364DA40126A7BF51E6191493FAF1@G4W3221.americas.hpqcorp.net>

Hi,

Reminder that we have a public meeting every Thursday at 4PM (GMT0) in the #openstack-freezer channel to discuss Freezer, a backup and restore tool for OpenStack.



Project repositories:
- https://github.com/stackforge/freezer
- https://github.com/stackforge/freezer-api
- https://github.com/stackforge/freezer-web-ui

Launchpad
  - https://blueprints.launchpad.net/freezer

Bug Tracker
- https://bugs.launchpad.net/freezer

Etherpad documents
  - https://etherpad.openstack.org/p/freezer_meetings


Regards
Guillermo Ramirez Garcia
memo at hpe.com<mailto:memo at hpe.com>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/e362555a/attachment.html>

From jsbryant at electronicjungle.net  Thu Sep  3 16:13:04 2015
From: jsbryant at electronicjungle.net (Jay Bryant)
Date: Thu, 03 Sep 2015 16:13:04 +0000
Subject: [openstack-dev] [cinder] FFE Request - capacity-headroom
In-Reply-To: <EC800DA72BAD6E42B5F9B1752C30DB040525404D@SHSMSX104.ccr.corp.intel.com>
References: <EC800DA72BAD6E42B5F9B1752C30DB040525404D@SHSMSX104.ccr.corp.intel.com>
Message-ID: <CA+0S5sS014yoSvuxTE2oW5eWW4SejFq+j2pXHz8jm6ineYjiJg@mail.gmail.com>

Since Mike is out I wanted to respond to this:

I don't think we are planning to take FFE requests this time around. Also,
the core team discussed this particular item and felt that there are still
too many open questions on the patch, and that it is too invasive to merge
this late in the game.

Please resubmit for Mitaka.

Thanks!
Jay
On Wed, Sep 2, 2015 at 6:38 AM Xin, Xiaohui <xiaohui.xin at intel.com> wrote:

> Hi,
>
> I would like to request feature freeze exception for the implementation of
> capacity-headroom.
>
> It calculates virtual free memory and sends notifications to Ceilometer
> together with other storage capacity stats.
>
>
> Blueprint:
>
> https://blueprints.launchpad.net/cinder/+spec/capacity-headroom
>
>
> Spec:
>             https://review.openstack.org/#/c/170380/
>
>
>
> Addressed by:
>
> https://review.openstack.org/#/c/206923
>
>
>
>
> I have addressed the latest comments related to active-active deployment
> according to Gorka Eguileor's comments and suggestions.
>
> Please kindly review and evaluate it. Great Thanks!
>
>
>
> Thanks
>
> Xiaohui
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/cb96509e/attachment.html>

From sean.wilcox at oracle.com  Thu Sep  3 16:32:50 2015
From: sean.wilcox at oracle.com (Sean Wilcox)
Date: Thu, 03 Sep 2015 10:32:50 -0600
Subject: [openstack-dev] [nova] Question regarding fix for 1392527
Message-ID: <55E87632.1020105@oracle.com>

With the fix patched into our Juno code base, the instance object that 
is handed into the driver.delete_instance_files() does not contain the 
instance_metadata from the db.  Is there a reason that is being left 
out?  If not, should it be included?

-- 
Sean Wilcox
3032729711
x79711



From nik.komawar at gmail.com  Thu Sep  3 16:41:46 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 3 Sep 2015 12:41:46 -0400
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <D20DBFD8.210FE%brian.rosmaita@rackspace.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com>
Message-ID: <55E8784A.4060809@gmail.com>

We agreed to hold off on granting it an FFE until tomorrow.

There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
14:30 UTC ( #openstack-glance ). Please be there to voice your opinion
and cast your vote.

On 9/3/15 9:15 AM, Brian Rosmaita wrote:
> I added an agenda item for this for today's Glance meeting:
>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>
> I'd prefer to hold my vote until after the meeting.
>
> cheers,
> brian
>
>
> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>
>> Malini, all,
>>
>> My current opinion is -1 for FFE based on the concerns in the spec and
>> implementation.
>>
>> I'm more than happy to realign my stand after we have updated spec and a)
>> it's agreed to be the approach as of now and b) we can evaluate how much
>> work the implementation needs to meet with the revisited spec.
>>
>> If we end up to the unfortunate situation that this functionality does
>> not merge in time for Liberty, I'm confident that this is one of the
>> first things in Mitaka. I really don't think there is too much to go, we
>> just might run out of time.
>>
>> Thanks for your patience and endless effort to get this done.
>>
>> Best,
>> Erno
>>
>>> -----Original Message-----
>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>> Sent: Thursday, September 03, 2015 10:10 AM
>>> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
>>> questions)
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>>>
>>> Flavio, first thing in the morning Kent will upload a new BP that
>>> addresses the
>>> comments. We would very much appreciate a +1 on the FFE.
>>>
>>> Regards
>>> Malini
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>> Sent: Thursday, September 03, 2015 1:52 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>>>
>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>> Hi,
>>>>
>>>> I wanted to propose 'Single disk image OVA import' [1] feature proposal
>>>> for exception. This looks like a decently safe proposal that should be
>>>> able to adjust in the extended time period of Liberty. It has been
>>>> discussed at the Vancouver summit during a work session and the
>>>> proposal has been trimmed down as per the suggestions then; has been
>>>> overall accepted by those present during the discussions (barring a few
>>>> changes needed on the spec itself). It being a addition to already
>>>> existing import task, doesn't involve API change or change to any of
>>>> the core Image functionality as of now.
>>>>
>>>> Please give your vote: +1 or -1 .
>>>>
>>>> [1] https://review.openstack.org/#/c/194868/
>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>> Unfortunately, I think there are too many open questions in the spec
>>> right
>>> now to make this FFE worthy.
>>>
>>> Could those questions be answered to before the EOW?
>>>
>>> With those questions answered, we'll be able to provide a more,
>>> realistic,
>>> vote.
>>>
>>> Also, I'd like us to evaluate how mature the implementation[0] is and
>>> the
>>> likelihood of it addressing the concerns/comments in time.
>>>
>>> For now, it's a -1 from me.
>>>
>>> Thanks all for working on this, this has been a long time requested
>>> format to
>>> have in Glance.
>>> Flavio
>>>
>>> [0] https://review.openstack.org/#/c/214810/
>>>
>>>
>>> --
>>> @flaper87
>>> Flavio Percoco
>>> __________________________________________________________
>>> ________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-
>>> request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From gord at live.ca  Thu Sep  3 16:42:03 2015
From: gord at live.ca (gord chung)
Date: Thu, 3 Sep 2015 12:42:03 -0400
Subject: [openstack-dev] [ceilometer] proposal to add Pradeep Kilambi to
 Ceilometer core
In-Reply-To: <BLU436-SMTP25C53BF066A7D6FA66D94DE6B0@phx.gbl>
References: <BLU436-SMTP25C53BF066A7D6FA66D94DE6B0@phx.gbl>
Message-ID: <BLU436-SMTP228F027E5FFF2D72020C354DE680@phx.gbl>



On 31/08/15 09:13 AM, gord chung wrote:
> hi,
>
> we'd like to nominate Pradeep Kilambi to the Ceilometer core team. he 
> has contributed by adding declarative meter support in Ceilometer and 
> provides feedback/input in regards to packaging and design.
>
> as we did last time, please vote here: 
> https://review.openstack.org/#/c/218822/ . if for whatever reason you 
> cannot vote there, please respond to this.
>
> reviews:
> https://review.openstack.org/#/q/reviewer:%22Pradeep+Kilambi%22++project:openstack/ceilometer,n,z 
>
>
> patches:
> https://review.openstack.org/#/q/owner:%22Pradeep+Kilambi%22+status:merged+project:openstack/ceilometer,n,z 
>
>
> cheers,
>
i'm pleased to welcome Pradeep to the Ceilometer core team. keep on 
keeping on.

cheers,

-- 
gord



From gord at live.ca  Thu Sep  3 16:44:40 2015
From: gord at live.ca (gord chung)
Date: Thu, 3 Sep 2015 12:44:40 -0400
Subject: [openstack-dev] [ceilometer] proposal to add Liusheng to
 Ceilometer core
In-Reply-To: <BLU436-SMTP251BACB04D2AAC2C5AC1FE3DE6B0@phx.gbl>
References: <BLU436-SMTP251BACB04D2AAC2C5AC1FE3DE6B0@phx.gbl>
Message-ID: <BLU436-SMTP193869A45996037118731EBDE680@phx.gbl>



On 31/08/15 09:18 AM, gord chung wrote:
> hi,
>
> we'd like to nominate Liusheng to the Ceilometer core team. he has 
> been a leading contributor in Ceilometer, provides solid reviews, and 
> regularly adds ideas for new improvements.
>
> as we did last time, please vote here: 
> https://review.openstack.org/#/c/218819/ . if for whatever reason you 
> cannot vote there, please respond to this.
>
> reviews:
> https://review.openstack.org/#/q/reviewer:liusheng+project:openstack/ceilometer,n,z 
>
>
> patches:
> https://review.openstack.org/#/q/owner:liusheng+status:merged+project:openstack/ceilometer,n,z 
>
>
> cheers,
>

it's my pleasure to welcome Liusheng to the Ceilometer core team. thanks 
for the great work you've done and will do.

cheers,

-- 
gord



From pkilambi at redhat.com  Thu Sep  3 16:47:10 2015
From: pkilambi at redhat.com (Pradeep Kilambi)
Date: Thu, 3 Sep 2015 12:47:10 -0400
Subject: [openstack-dev] [ceilometer] proposal to add Pradeep Kilambi to
 Ceilometer core
In-Reply-To: <BLU436-SMTP228F027E5FFF2D72020C354DE680@phx.gbl>
References: <BLU436-SMTP25C53BF066A7D6FA66D94DE6B0@phx.gbl>
 <BLU436-SMTP228F027E5FFF2D72020C354DE680@phx.gbl>
Message-ID: <CAOvC4jsQLaigS5MOYDMcU7dsy1=f59hA9LERLcK5KqseELuqqg@mail.gmail.com>

On Thu, Sep 3, 2015 at 12:42 PM, gord chung <gord at live.ca> wrote:

>
>
> On 31/08/15 09:13 AM, gord chung wrote:
>
>> hi,
>>
>> we'd like to nominate Pradeep Kilambi to the Ceilometer core team. he has
>> contributed by adding declarative meter support in Ceilometer and provides
>> feedback/input in regards to packaging and design.
>>
>> as we did last time, please vote here:
>> https://review.openstack.org/#/c/218822/ . if for whatever reason you
>> cannot vote there, please respond to this.
>>
>> reviews:
>>
>> https://review.openstack.org/#/q/reviewer:%22Pradeep+Kilambi%22++project:openstack/ceilometer,n,z
>>
>> patches:
>>
>> https://review.openstack.org/#/q/owner:%22Pradeep+Kilambi%22+status:merged+project:openstack/ceilometer,n,z
>>
>> cheers,
>>
>> i'm please to welcome Pradeep to the Ceilometer core team. keep on
> keeping on.



Thanks! Appreciate the opportunity!



-- 
--
Pradeep Kilambi; irc: pradk
OpenStack Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/b7b0907f/attachment.html>

From malini.k.bhandaru at intel.com  Thu Sep  3 17:13:45 2015
From: malini.k.bhandaru at intel.com (Bhandaru, Malini K)
Date: Thu, 3 Sep 2015 17:13:45 +0000
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <55E8784A.4060809@gmail.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com> <55E8784A.4060809@gmail.com>
Message-ID: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>

Thank you Nikhil and Brian!

-----Original Message-----
From: Nikhil Komawar [mailto:nik.komawar at gmail.com] 
Sent: Thursday, September 03, 2015 9:42 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

We agreed to hold off on granting it a FFE until tomorrow.

There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast your vote.

On 9/3/15 9:15 AM, Brian Rosmaita wrote:
> I added an agenda item for this for today's Glance meeting:
>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>
> I'd prefer to hold my vote until after the meeting.
>
> cheers,
> brian
>
>
> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>
>> Malini, all,
>>
>> My current opinion is -1 for FFE based on the concerns in the spec 
>> and implementation.
>>
>> I'm more than happy to realign my stand after we have updated spec 
>> and a) it's agreed to be the approach as of now and b) we can 
>> evaluate how much work the implementation needs to meet with the revisited spec.
>>
>> If we end up to the unfortunate situation that this functionality 
>> does not merge in time for Liberty, I'm confident that this is one of 
>> the first things in Mitaka. I really don't think there is too much to 
>> go, we just might run out of time.
>>
>> Thanks for your patience and endless effort to get this done.
>>
>> Best,
>> Erno
>>
>>> -----Original Message-----
>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>> Sent: Thursday, September 03, 2015 10:10 AM
>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>> usage
>>> questions)
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>> addresses the comments. We would very much appreciate a +1 on the 
>>> FFE.
>>>
>>> Regards
>>> Malini
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>> Sent: Thursday, September 03, 2015 1:52 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>> Hi,
>>>>
>>>> I wanted to propose 'Single disk image OVA import' [1] feature 
>>>> proposal for exception. This looks like a decently safe proposal 
>>>> that should be able to adjust in the extended time period of 
>>>> Liberty. It has been discussed at the Vancouver summit during a 
>>>> work session and the proposal has been trimmed down as per the 
>>>> suggestions then; has been overall accepted by those present during 
>>>> the discussions (barring a few changes needed on the spec itself). 
>>>> It being a addition to already existing import task, doesn't 
>>>> involve API change or change to any of the core Image functionality as of now.
>>>>
>>>> Please give your vote: +1 or -1 .
>>>>
>>>> [1] https://review.openstack.org/#/c/194868/
>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>> Unfortunately, I think there are too many open questions in the spec 
>>> right now to make this FFE worthy.
>>>
>>> Could those questions be answered to before the EOW?
>>>
>>> With those questions answered, we'll be able to provide a more, 
>>> realistic, vote.
>>>
>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>> and the likelihood of it addressing the concerns/comments in time.
>>>
>>> For now, it's a -1 from me.
>>>
>>> Thanks all for working on this, this has been a long time requested 
>>> format to have in Glance.
>>> Flavio
>>>
>>> [0] https://review.openstack.org/#/c/214810/
>>>
>>>
>>> --
>>> @flaper87
>>> Flavio Percoco
>>> __________________________________________________________
>>> ________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-
>>> request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> _____________________________________________________________________
>> _____ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ______________________________________________________________________
> ____ OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From malini.k.bhandaru at intel.com  Thu Sep  3 17:33:39 2015
From: malini.k.bhandaru at intel.com (Bhandaru, Malini K)
Date: Thu, 3 Sep 2015 17:33:39 +0000
Subject: [openstack-dev] [Glance] loading a new conf file
Message-ID: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33983B@fmsmsx117.amr.corp.intel.com>

Sorry for the spam, but email works better for us across two disparate time zones.

We are a little stuck on how to integrate a new conf file and read it,
or perhaps we are approaching it wrong/missing something obvious.
This is with respect to https://review.openstack.org/#/c/194868/11/specs/liberty/ovf-lite.rst line 146
Steps as we see them:
1)	Add a new conf file in <glance>/etc
2)	Provide a sample
3)	Deploy it (part of packaging/install)
4)	Load it in the code

Any pointer by way of a class name, file name, or link would be much appreciated.

Regards
Malini



From openstack at wormley.com  Thu Sep  3 17:37:35 2015
From: openstack at wormley.com (Steve Wormley)
Date: Thu, 3 Sep 2015 10:37:35 -0700
Subject: [openstack-dev] [Neutron] Allowing DNS suffix to be set per
 subnet (at least per tenant)
In-Reply-To: <55E7FF06.2010207@maishsk.com>
References: <55E7FF06.2010207@maishsk.com>
Message-ID: <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>

As far as I am aware it is not presently built into OpenStack. You'll need
to add a dnsmasq_config_file option to your dhcp agent configurations and
then populate the file with:
domain=DOMAIN_NAME,CIDR for each network
i.e.
domain=example.com,10.11.22.0/24
...
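A hedged sketch of how the two pieces above fit together (the file paths and second domain entry here are illustrative, not prescribed by Neutron):

```ini
# /etc/neutron/dhcp_agent.ini (illustrative path) -- point the DHCP agent
# at an extra dnsmasq configuration file:
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf -- one domain= line per network,
# following the domain=DOMAIN_NAME,CIDR pattern described above:
domain=example.com,10.11.22.0/24
domain=other.example.net,10.11.23.0/24
```

dnsmasq then hands out the matching domain to clients in each CIDR.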

-Steve


On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing <maishsk at maishsk.com>
wrote:

> Hello all (cross-posting to openstack-operators as well)
>
> Today the setting of the dns suffix that is provided to the instance is
> passed through dhcp_agent.
>
> There is the option of setting different DNS servers per subnet (and
> therefore per tenant), but the domain suffix is something that stays the
> same throughout the whole system.
>
> I see that this is not a current neutron feature.
>
> Is this on the roadmap? Are there ways to achieve this today? If so I
> would be very interested in hearing how.
>
> Thanks
> --
> Best Regards,
> Maish Saidel-Keesing
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/84ef07d4/attachment.html>

From gal.sagie at gmail.com  Thu Sep  3 17:51:24 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Thu, 3 Sep 2015 20:51:24 +0300
Subject: [openstack-dev] [Neutron] Allowing DNS suffix to be set per
 subnet (at least per tenant)
In-Reply-To: <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
References: <55E7FF06.2010207@maishsk.com>
 <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
Message-ID: <CAG9LJa6TnqPDefNGbsCY8A7U8sd+gdNuHTFiQL+nYrfgHQZ0BQ@mail.gmail.com>

I am not sure if this addresses what you need specifically, but it would be
worth checking these two approved Liberty specs:

1)
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
2)
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst

On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley <openstack at wormley.com> wrote:

> As far as I am aware it is not presently built-in to Openstack. You'll
> need to add a dnsmasq_config_file option to your dhcp agent configurations
> and then populate the file with:
> domain=DOMAIN_NAME,CIDR for each network
> i.e.
> domain=example.com,10.11.22.0/24
> ...
>
> -Steve
>
>
> On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing <maishsk at maishsk.com>
> wrote:
>
>> Hello all (cross-posting to openstack-operators as well)
>>
>> Today the setting of the dns suffix that is provided to the instance is
>> passed through dhcp_agent.
>>
>> There is the option of setting different DNS servers per subnet (and
>> therefore per tenant), but the domain suffix is something that stays the
>> same throughout the whole system.
>>
>> I see that this is not a current neutron feature.
>>
>> Is this on the roadmap? Are there ways to achieve this today? If so I
>> would be very interested in hearing how.
>>
>> Thanks
>> --
>> Best Regards,
>> Maish Saidel-Keesing
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/3a5bbec3/attachment.html>

From blak111 at gmail.com  Thu Sep  3 17:57:10 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 3 Sep 2015 10:57:10 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <1640C0EA-107E-4DEF-94FD-DE45CB18C04D@gmail.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
Message-ID: <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>

When we discussed this before on the neutron channel, I thought it was
because cloud-init doesn't support IPv6. We had wasted quite a bit of time
talking about adding support to our metadata service because I was under
the impression that cloud-init already did support IPv6.

IIRC, the argument against adding IPv6 support to cloud-init was that it
might be incompatible with how AWS chooses to implement IPv6 metadata, so
AWS would require a fork or other incompatible alternative to cloud-init in
all of their images.

Is that right?

On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins <sean at coreitpro.com> wrote:

> It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
> API defines transport level details about the API - and currently only
> defines a well known IPv4 link local address to connect to. No well known
> link local IPv6 address has been defined.
>
> I usually recommend config-drive for IPv6 enabled clouds due to this.
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/0d2d5f4d/attachment.html>

From hurgleburgler at gmail.com  Thu Sep  3 17:58:50 2015
From: hurgleburgler at gmail.com (Diana Whitten)
Date: Thu, 3 Sep 2015 10:58:50 -0700
Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch imports
 from fonts.googleapis.com
Message-ID: <CABswzdFv-q7gz+W=ss917SvtTDZzkA55Q67RFpY-S=DaidM-8Q@mail.gmail.com>

Thomas,

Sorry for the slow response, since I wasn't on the right mailing list yet.

1. I'm trying to figure out the best possible way to address this privacy
concern.  I think the best way to fix this is to augment Bootswatch to only
use the URL through a parameter that can be easily configured.  I have an
issue open on their code right now for this very feature.

Until then, I think that we can easily address the issue from the point of
view of Horizon, such that we:
1. Remove all instances of 'fonts.googleapis.com' from the SCSS during the
preprocessor step. Therefore, no outside URLs that point to this location
EVER get hit
*or*
2. Until the issue that I created on Bootswatch can be addressed, we can
include the file that is making the call in our tree and remove the
@import entirely.
*or*
3. Until the issue that I created on Bootswatch can be addressed, we can
include the two files that we need from the Bootswatch 'paper' theme
directly, and remove Bootswatch as a requirement until we get an updated
package.
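For option 1, a minimal sketch of the kind of preprocessor filter meant here (the function name and regex are illustrative, not actual Horizon code):

```python
import re

# Matches CSS @import rules that fetch fonts from fonts.googleapis.com, e.g.
# @import url("https://fonts.googleapis.com/css?family=Roboto:400,700");
GOOGLE_FONTS_IMPORT = re.compile(
    r'@import\s+url\(["\']?https?://fonts\.googleapis\.com[^)]*\)\s*;?')

def strip_google_fonts(css):
    """Remove external Google Fonts imports so no outside URL is ever hit."""
    return GOOGLE_FONTS_IMPORT.sub('', css)
```

Run over the SCSS/CSS before it is served, this drops only the external @import lines and leaves the rest of the stylesheet intact; the fonts then fall back to system fonts as described below.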

2. It's not getting used at all anyway.  I packaged up the font and made it
available via XStatic as well.  I realized there were some questions about
where the versioning came from, but it looks like you might have been
looking at the wrong GitHub repo:
https://github.com/Templarian/MaterialDesign-Webfont/releases

You can absolutely patch out the fonts.  The result will not be ugly; each
font should fall back to a nice system font.  But, we are only using the
'Paper' theme out of Bootswatch right now and therefore only packaged up
the specific font required for it.

Ping me on IRC @hurgleburgler

- Diana


On Thu, Sep 3, 2015 at 9:55 AM, Thai Q Tran <tqtran at us.ibm.com> wrote:

>
>
>
> ----- Original message -----
> From: Thomas Goirand <zigo at debian.org>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Cc:
> Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch
> imports from fonts.googleapis.com
> Date: Thu, Sep 3, 2015 4:30 AM
>
> Hi,
>
> When doing:
> grep -r fonts.googleapis.com *
>
> there's 56 lines of this kind of result:
> xstatic/pkg/bootswatch/data/cyborg/bootstrap.css:@import
> url("https://fonts.googleapis.com/css?family=Roboto:400,700");
>
> This is wrong because:
>
> 1/ This is a privacy breach, and one may not agree on hitting any web
> server which he doesn't control. It's a problem in itself for packaging
> in Debian, which is currently stopping me from uploading.
>
> 2/ More importantly (and even if you don't care about this kind of
> privacy breach), this requires Internet access, which isn't at all
> granted in some installations.
>
> So I wonder if using bootswatch, which includes such a problem, is
> really a good idea. Are these fonts import completely mandatory? Or can
> I patch them out? Will the result be ugly if I patch it out?
>
> Your thoughts?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/e81e7f67/attachment.html>

From sgordon at redhat.com  Thu Sep  3 18:06:44 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Thu, 3 Sep 2015 14:06:44 -0400 (EDT)
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
Message-ID: <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Kevin Benton" <blak111 at gmail.com>
> 
> When we discussed this before on the neutron channel, I thought it was
> because cloud-init doesn't support IPv6. We had wasted quite a bit of time
> talking about adding support to our metadata service because I was under
> the impression that cloud-init already did support IPv6.
> 
> IIRC, the argument against adding IPv6 support to cloud-init was that it
> might be incompatible with how AWS chooses to implement IPv6 metadata, so
> AWS would require a fork or other incompatible alternative to cloud-init in
> all of their images.
> 
> Is that right?

That's certainly my understanding of the status quo, I was enquiring primarily to check it was still accurate.

-Steve

> On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins <sean at coreitpro.com> wrote:
> 
> > It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
> > API defines transport level details about the API - and currently only
> > defines a well known IPv4 link local address to connect to. No well known
> > link local IPv6 address has been defined.
> >
> > I usually recommend config-drive for IPv6 enabled clouds due to this.
> > --
> > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >


From nik.komawar at gmail.com  Thu Sep  3 18:31:05 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 3 Sep 2015 14:31:05 -0400
Subject: [openstack-dev] [Glance] loading a new conf file
In-Reply-To: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33983B@fmsmsx117.amr.corp.intel.com>
References: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33983B@fmsmsx117.amr.corp.intel.com>
Message-ID: <55E891E9.5050102@gmail.com>

A decent way would be to adopt the approach metadefs are using.

Introduce a folder 'glance-tasks' and have such configs as part of an
"ova-lite" group in image-import.conf.sample. Of course, this needs to
come as part of the packaging under the standard conf directory or
wherever is convenient.

I think Erno's pointer on that is a good one: glance-api.conf doesn't
need to include all the configs that a non-mandatory task sub-script uses.
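To answer the mechanical "load it in the code" step: a minimal, illustrative sketch of reading a separate conf file. This uses stdlib configparser just to show the shape; Glance itself would register options via oslo.config and pass an extra --config-file, and the file name and option here are hypothetical.

```python
from configparser import ConfigParser

def load_task_conf(path):
    """Read a standalone task conf file and return the parsed config.

    Illustrative only: Glance would do this through oslo.config, but the
    shape of the lookup (group + option name) is the same.
    """
    parser = ConfigParser()
    found = parser.read(path)  # returns the list of files actually read
    if not found:
        raise RuntimeError("could not read config file: %s" % path)
    return parser

# A hypothetical image-import.conf with an [ova-lite] group might contain:
# [ova-lite]
# allowed_disk_formats = vmdk
```

The deploy step (3) then just needs packaging to install the sample file under the standard conf directory, and step (4) is the call above at service or task start-up.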

On 9/3/15 1:33 PM, Bhandaru, Malini K wrote:
> Sorry for the spam .. but email better for us in 2 disparate time zones,
>
> We are a little stuck on how to integrate a new conf file and read it. 
> Or perhaps we are approaching it wrong/missing something obvious.
> This is with respect https://review.openstack.org/#/c/194868/11/specs/liberty/ovf-lite.rst line 146
> Steps as we see it
> 1)	goal a new conf file in <glance>/etc
> 2)	Provide a sample
> 3)	Deploy it (part of packaging/install)
> 4)	Load it the code 
>
> Any pointer by way of a class file name, or link or other would be very appreciated.
>
> Regards
> Malini
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From harlowja at outlook.com  Thu Sep  3 18:41:13 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Thu, 3 Sep 2015 11:41:13 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
Message-ID: <BLU437-SMTP76050D04789EE848C5B5F4D8680@phx.gbl>

I'm pretty sure this got implemented :)

http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1042 
and https://bugs.launchpad.net/cloud-init/+bug/1391695

That's the RHEL support; since cloud-init translates from a Ubuntu-style
networking format, the Ubuntu/Debian style format should also work.

Steve Gordon wrote:
> ----- Original Message -----
>> From: "Kevin Benton"<blak111 at gmail.com>
>>
>> When we discussed this before on the neutron channel, I thought it was
>> because cloud-init doesn't support IPv6. We had wasted quite a bit of time
>> talking about adding support to our metadata service because I was under
>> the impression that cloud-init already did support IPv6.
>>
>> IIRC, the argument against adding IPv6 support to cloud-init was that it
>> might be incompatible with how AWS chooses to implement IPv6 metadata, so
>> AWS would require a fork or other incompatible alternative to cloud-init in
>> all of their images.
>>
>> Is that right?
>
> That's certainly my understanding of the status quo, I was enquiring primarily to check it was still accurate.
>
> -Steve
>
>> On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins<sean at coreitpro.com>  wrote:
>>
>>> It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
>>> API defines transport level details about the API - and currently only
>>> defines a well known IPv4 link local address to connect to. No well known
>>> link local IPv6 address has been defined.
>>>
>>> I usually recommend config-drive for IPv6 enabled clouds due to this.
>>> --
>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From vgridnev at mirantis.com  Thu Sep  3 18:49:44 2015
From: vgridnev at mirantis.com (Vitaly Gridnev)
Date: Thu, 3 Sep 2015 21:49:44 +0300
Subject: [openstack-dev] [sahara] Request for Feature Freeze Exception
Message-ID: <CA+O3VAi689gyN-7Vu1qmBsv_T3xyaOOiL0foBo4YLLTJFW60ww@mail.gmail.com>

Hey folks!

I would like to propose to add to list of FFE's following blueprint:
https://blueprints.launchpad.net/sahara/+spec/drop-hadoop-1

The reasoning is as follows:

 1. HDP 1.3.2 and Vanilla 1.2.1 have not been gated for a whole release
cycle, so there may be several bugs in these versions;
 2. The removal is minimal risk: it doesn't touch the versions that we
already have.
 3. All required changes were already uploaded for review:
https://review.openstack.org/#/q/status:open+project:openstack/sahara+branch:master+topic:bp/drop-hadoop-1,n,z

Waiting for approval of FFE.
-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/661e0ed2/attachment.html>

From emilien at redhat.com  Thu Sep  3 19:00:54 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 3 Sep 2015 15:00:54 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
In-Reply-To: <55E74657.90904@redhat.com>
References: <55E73B67.9020802@redhat.com> <55E74462.6030808@anteaya.info>
 <55E74657.90904@redhat.com>
Message-ID: <55E898E6.8050406@redhat.com>



On 09/02/2015 02:56 PM, Emilien Macchi wrote:
> 
> 
> On 09/02/2015 02:48 PM, Anita Kuno wrote:
>> On 09/02/2015 02:09 PM, Emilien Macchi wrote:
>>> TL;DR, I propose to move our developer documentation from wiki to 
>>> something like 
>>> http://docs.openstack.org/developer/puppet-openstack
>>
>>> (Look at http://docs.openstack.org/developer/tempest/ for 
>>> example).
>>
>> Looking at the tempest example:
>> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source
>> we see that the .rst files all live in the tempest repo in doc/source
>> (with the exception of the README.rst file with is referenced from
>> within doc/source when required:
>> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source/overview.rst)
>>
>> So question: Where should the source .rst files for puppet developer
>> documentation live? They will need a home.
> 
> I guess we would need a new repository for that.
> It could be puppet-openstack-doc (kiss)
> or something else, any suggestion is welcome.

Are we OK with the name?
Proposals:
puppet-openstack-doc
puppet-openstack-documentation

Any suggestion is welcome.

>>
>> Thanks,
>> Anita.
>>
>>
>>> For now, most of our documentation is on 
>>> https://wiki.openstack.org/wiki/Puppet but I think it would be 
>>> great to use RST format and Gerrit so anyone could contribute 
>>> documentation like we do for code.
>>
>>> I propose a basic table of contents for now: Puppet modules 
>>> introduction, Coding Guide, Reviewing code.
>>
>>> I'm taking the opportunity of the puppet sprint to run this 
>>> discussion and maybe start some work if people agree to move on.
>>
>>> Thanks,
>>
>>
>>
> 
> 
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/b3e43c3b/attachment.pgp>

From mrunge at redhat.com  Thu Sep  3 19:02:32 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Thu, 3 Sep 2015 21:02:32 +0200
Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch
 imports from fonts.googleapis.com
In-Reply-To: <55E82E0A.1050507@debian.org>
References: <55E82E0A.1050507@debian.org>
Message-ID: <55E89948.6050803@redhat.com>

On 03/09/15 13:24, Thomas Goirand wrote:
> Hi,
> 
> When doing:
> grep -r fonts.googleapis.com *
> 
> there are 56 lines of this kind of result:
> xstatic/pkg/bootswatch/data/cyborg/bootstrap.css:@import
> url("https://fonts.googleapis.com/css?family=Roboto:400,700");
> 
> This is wrong because:
> 
> 1/ This is a privacy breach, and one may not agree on hitting any web
> server which he doesn't control. It's a problem in itself for packaging
> in Debian, which is currently stopping me from uploading.
> 
> 2/ More importantly (and even if you don't care about this kind of
> privacy breach), this requires Internet access, which isn't at all
> granted in some installations.
> 
> So I wonder if using bootswatch, which includes such a problem, is
> really a good idea. Are these font imports completely mandatory? Or can
> I patch them out? Will the result be ugly if I patch it out?
> 
Thomas,

You're right! I'd assume this happened by accident. Nevertheless, it
should be solved.

My simple POV is: solve it upstream, or do not use bootswatch, rather than
patching something out, which will lead to unexpected results for users
and to complaints about stupid packagers (or else).

Matthias



From gord at live.ca  Thu Sep  3 19:12:43 2015
From: gord at live.ca (gord chung)
Date: Thu, 3 Sep 2015 15:12:43 -0400
Subject: [openstack-dev] [oslo][versionedobjects][ceilometer] explain
 the benefits of ceilometer+versionedobjects
In-Reply-To: <55E0A05E.6080100@danplanet.com>
References: <BLU437-SMTP766811C9632CFCC37305CDDE6F0@phx.gbl>
 <55E0910F.2080006@danplanet.com>
 <BLU437-SMTP78FC2B58548051114A0A33DE6E0@phx.gbl>
 <55E0A05E.6080100@danplanet.com>
Message-ID: <BLU437-SMTP9423D46199E6D36A11BD8EDE680@phx.gbl>



On 28/08/15 01:54 PM, Dan Smith wrote:
>> we store everything as primitives: floats, time, integer, etc... since
>> we need to query on attributes. it seems like versionedobjects might not
>> be useful to our db configuration currently.
> I don't think the former determines the latter -- we have lots of things
> stored as rows of column primitives and query them out as objects, but
> then you're not storing the object and version (unless you do it
> separately) So, if it doesn't buy you anything, then there's no reason
> to use it.
sorry, i misunderstood this. i thought you were saying ovo may not fit 
into Ceilometer.

i guess to give it more of a real context for us to understand, 
regarding the database layer, if we have an events model which consists of:

- id: uuid
- event_type: string
- generated: timestamp
- raw: dictionary value (not meant for querying, just for auditing purposes)
- traits: [list of tuples (key, value, type)]

given this model, each of our backend drivers takes this data and, using 
its connection to the db, stores it accordingly:
- in mongodb, the attributes are all stored in documents similar to 
json, raw attr is stored as json
- in elasticsearch, they're stored in documents as well but traits are 
mapped differently than in mongo
- in hbase, the attributes and traits are all mapped to columns
- in sql, the data is mapped to an Event table, traits are mapped to 
different traits tables depending on type, raw attribute stored as a string.

considering everything is stored differently depending on db, how does 
ovo work? is it normalising it into a specific format pre-storage? how 
does data with different schemas co-exist in the same db?
- is there some version tag applied to each item and a version schema 
table created somewhere?
- do we need to migrate the db to handle different sets of 
attributes, and what happens for nosql dbs?

also, from an api querying pov, if i want to query a db, how do you 
query/filter across different versions?
- does ovo tell the api what versions exist in the db and then you can 
filter across any attribute from any schema version?
- are certain attributes effectively unqueryable because they may not 
exist across all versions?

apologies for not understanding how it all works, or if the above has 
nothing to do with ovo (i wasn't joking about the 'explain it to me like 
i'm 5' request :-) ) ... i think part of the wariness is that the code 
seemingly does nothing now (or the logic is extremely hidden) but if we 
merge these x hundred/thousand lines of code, it will do something later 
if something changes.

cheers,

-- 
gord



From svasilenko at mirantis.com  Thu Sep  3 19:14:49 2015
From: svasilenko at mirantis.com (Sergey Vasilenko)
Date: Thu, 3 Sep 2015 22:14:49 +0300
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAHAWLf2apU=0b_xOhEMA=DjKoEKRsSCtys4sGnjyBmQckgXhUA@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
 <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
 <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>
 <CAHAWLf2apU=0b_xOhEMA=DjKoEKRsSCtys4sGnjyBmQckgXhUA@mail.gmail.com>
Message-ID: <CAPQe3Ln-Rv2Z-8LyWPo914mFk+xhxHe05Vj=wxR=yuoUd2+PyA@mail.gmail.com>

+1


/sv
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/21f3f3c2/attachment.html>

From svasilenko at mirantis.com  Thu Sep  3 19:15:19 2015
From: svasilenko at mirantis.com (Sergey Vasilenko)
Date: Thu, 3 Sep 2015 22:15:19 +0300
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <55E839D1.7030404@mirantis.com>
References: <55E839D1.7030404@mirantis.com>
Message-ID: <CAPQe3LndECPZAUuU_7embLBjir7trTe1L9YXi9q8+bhdVh0cwg@mail.gmail.com>

+1

/sv
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/03dc788c/attachment.html>

From svasilenko at mirantis.com  Thu Sep  3 19:16:07 2015
From: svasilenko at mirantis.com (Sergey Vasilenko)
Date: Thu, 3 Sep 2015 22:16:07 +0300
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CAC+XjbYo+Nd6zPY7vkwhFSjp5J7sAPYooU5FzzfBKRfDjgb1-A@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
 <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
 <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>
 <CAOe9ns7CtSmgKuzZu1qvWyh2mU2zCZUvwRCa2DQWaUvpZuPiqQ@mail.gmail.com>
 <CAC+XjbYo+Nd6zPY7vkwhFSjp5J7sAPYooU5FzzfBKRfDjgb1-A@mail.gmail.com>
Message-ID: <CAPQe3Lmwc6SyYuEj0LOjMBEMLLUqMKaqNAPG=puv_8skQ+Gu9Q@mail.gmail.com>

+1

/sv
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/acbf726d/attachment.html>

From guimalufb at gmail.com  Thu Sep  3 19:17:08 2015
From: guimalufb at gmail.com (Gui Maluf)
Date: Thu, 3 Sep 2015 16:17:08 -0300
Subject: [openstack-dev] [puppet] hosting developer documentation on
	http://docs.openstack.org/developer/
In-Reply-To: <55E898E6.8050406@redhat.com>
References: <55E73B67.9020802@redhat.com> <55E74462.6030808@anteaya.info>
 <55E74657.90904@redhat.com> <55E898E6.8050406@redhat.com>
Message-ID: <CAJArKkcQThe_4T0iTeauEU2C-r9xyg54_z7t=Fxm6KGLt9+pog@mail.gmail.com>

On Thu, Sep 3, 2015 at 4:00 PM, Emilien Macchi <emilien at redhat.com> wrote:

> puppet-openstack-doc



+1

-- 
*guilherme* \n
\t *maluf*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/5120060b/attachment.html>

From bpiotrowski at mirantis.com  Thu Sep  3 19:34:55 2015
From: bpiotrowski at mirantis.com (Bartlomiej Piotrowski)
Date: Thu, 3 Sep 2015 21:34:55 +0200
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CAPQe3Lmwc6SyYuEj0LOjMBEMLLUqMKaqNAPG=puv_8skQ+Gu9Q@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
 <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
 <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>
 <CAOe9ns7CtSmgKuzZu1qvWyh2mU2zCZUvwRCa2DQWaUvpZuPiqQ@mail.gmail.com>
 <CAC+XjbYo+Nd6zPY7vkwhFSjp5J7sAPYooU5FzzfBKRfDjgb1-A@mail.gmail.com>
 <CAPQe3Lmwc6SyYuEj0LOjMBEMLLUqMKaqNAPG=puv_8skQ+Gu9Q@mail.gmail.com>
Message-ID: <CALMh7SAY0Him7mp-1u48MqGhrJ-P9oB=sO1J8nELZ65oXAQozQ@mail.gmail.com>

I have no idea if I'm eligible to vote, but I'll do it anyway:

+1

Bartłomiej

On Thu, Sep 3, 2015 at 9:16 PM, Sergey Vasilenko <svasilenko at mirantis.com>
wrote:

> +1
>
> /sv
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/ceab84ae/attachment.html>

From msm at redhat.com  Thu Sep  3 19:53:41 2015
From: msm at redhat.com (michael mccune)
Date: Thu, 3 Sep 2015 15:53:41 -0400
Subject: [openstack-dev] [sahara] Request for Feature Freeze Exception
In-Reply-To: <CA+O3VAi689gyN-7Vu1qmBsv_T3xyaOOiL0foBo4YLLTJFW60ww@mail.gmail.com>
References: <CA+O3VAi689gyN-7Vu1qmBsv_T3xyaOOiL0foBo4YLLTJFW60ww@mail.gmail.com>
Message-ID: <55E8A545.7000602@redhat.com>

On 09/03/2015 02:49 PM, Vitaly Gridnev wrote:
> Hey folks!
>
> I would like to propose adding the following blueprint to the list of FFEs:
> https://blueprints.launchpad.net/sahara/+spec/drop-hadoop-1
>
> The reasoning is as follows:
>
>   1. HDP 1.3.2 and Vanilla 1.2.1 have not been gated for a whole release
> cycle, which may have allowed several bugs into these versions;
>   2. Minimal risk of removal: it doesn't touch versions that we already
> have.
>   3. All required changes have already been uploaded for review:
> https://review.openstack.org/#/q/status:open+project:openstack/sahara+branch:master+topic:bp/drop-hadoop-1,n,z

this sounds reasonable to me

mike



From maishsk at maishsk.com  Thu Sep  3 19:56:39 2015
From: maishsk at maishsk.com (Maish Saidel-Keesing)
Date: Thu, 3 Sep 2015 22:56:39 +0300
Subject: [openstack-dev] [Neutron] Allowing DNS suffix to be set per
 subnet (at least per tenant)
In-Reply-To: <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
References: <55E7FF06.2010207@maishsk.com>
 <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
Message-ID: <55E8A5F7.9040608@maishsk.com>

On 09/03/15 20:37, Steve Wormley wrote:
> As far as I am aware it is not presently built-in to Openstack. You'll 
> need to add a dnsmasq_config_file option to your dhcp agent 
> configurations and then populate the file with:
> domain=DOMAIN_NAME,CIDR for each network
> i.e.
> domain=example.com,10.11.22.0/24
> ...
>
> -Steve
>
Thanks Steve,

I am aware of that 'hack', which has a substantial number of downsides.

The biggest one being that it is per subnet - and I am afraid to find out 
what would happen if more than one tenant configures the same subnet - 
this could cause all sorts of problems.

And all of this has to be done with appropriate shell permissions to the 
neutron node.

In other words, it could work, but only for a very certain use case.
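For reference, the override Steve describes might look roughly like this (file paths and domain names are illustrative, not taken from any particular deployment):

```ini
# dhcp_agent.ini -- point the agent at a custom dnsmasq config
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf -- one domain=<suffix>,<CIDR> line
# per network that needs its own suffix
domain=tenant-a.example.com,10.11.22.0/24
domain=tenant-b.example.com,10.11.33.0/24
```

The per-subnet matching relies on dnsmasq's own `domain=` option, which is exactly why it has to be maintained by hand on the node running the dhcp agent.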

>
> On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing 
> <maishsk at maishsk.com <mailto:maishsk at maishsk.com>> wrote:
>
>     Hello all (cross-posting to openstack-operators as well)
>
>     Today the setting of the DNS suffix that is provided to the
>     instance is passed through dhcp_agent.
>
>     There is the option of setting different DNS servers per subnet
>     (and therefore per tenant), but the domain suffix is something that
>     stays the same throughout the whole system.
>
>     I see that this is not a current neutron feature.
>
>     Is this on the roadmap? Are there ways to achieve this today? If
>     so I would be very interested in hearing how.
>
>     Thanks
>     -- 
>     Best Regards,
>     Maish Saidel-Keesing
>

-- 
Best Regards,
Maish Saidel-Keesing
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/ee032cac/attachment.html>

From dms at danplanet.com  Thu Sep  3 20:02:02 2015
From: dms at danplanet.com (Dan Smith)
Date: Thu, 3 Sep 2015 13:02:02 -0700
Subject: [openstack-dev] [oslo][versionedobjects][ceilometer] explain
 the benefits of ceilometer+versionedobjects
In-Reply-To: <BLU437-SMTP9423D46199E6D36A11BD8EDE680@phx.gbl>
References: <BLU437-SMTP766811C9632CFCC37305CDDE6F0@phx.gbl>
 <55E0910F.2080006@danplanet.com>
 <BLU437-SMTP78FC2B58548051114A0A33DE6E0@phx.gbl>
 <55E0A05E.6080100@danplanet.com>
 <BLU437-SMTP9423D46199E6D36A11BD8EDE680@phx.gbl>
Message-ID: <55E8A73A.5020203@danplanet.com>

>>> we store everything as primitives: floats, time, integer, etc... since
>>> we need to query on attributes. it seems like versionedobjects might not
>>> be useful to our db configuration currently.
>> I don't think the former determines the latter -- we have lots of things
>> stored as rows of column primitives and query them out as objects, but
>> then you're not storing the object and version (unless you do it
>> separately) So, if it doesn't buy you anything, then there's no reason
>> to use it.

> sorry, i misunderstood this. i thought you were saying ovo may not fit
> into Ceilometer.

Nope, what I meant was: there's no reason to use the technique of
storing serialized objects as blobs in the database if you don't want to
store things like that.

> i guess to give it more of a real context for us to understand,
> regarding the database layer, if we have an events model which consists of:
> 
> - id: uuid
> - event_type: string
> - generated: timestamp
> - raw: dictionary value (not meant for querying, just for auditing
> purposes)
> - traits: [list of tuples (key, value, type)]
> 
> given this model, each of our backend drivers takes this data and, using
> its connection to the db, stores it accordingly:
> - in mongodb, the attributes are all stored in documents similar to
> json, raw attr is stored as json

Right, so you could store the serialized version of the object in mongo
like this very easily. When you go to pull data out of the database
later, you have a strict format, and a version tied to it so that you
know exactly how it was stored. If you have storage drivers that handle
taking the generic thing and turning it into something appropriate for a
given store, then it's entirely possible that you are best suited to be
tolerant of old data there.

In Nova, we treat the object schema as the interface the rest of the
code uses and expects. Tolerance of the actual persistence schema moving
underneath and over time is hidden in this layer so that things above
don't have to know about it.

> - in sql, the data is mapped to an Event table, traits are mapped to
> different traits tables depending on type, raw attribute stored as a
> string.

Yep, so when storing in a SQL database, you'd (presumably) not store the
serialized blobs, and rather pick the object apart to store it as a row
(like most of the things in nova are stored).

> considering everything is stored differently depending on db, how does
> ovo work? is it normalising it into a specific format pre-storage? how
> does data with different schemas co-exist in the same db?

This is completely up to your implementation. You could end up with a
top-level object like Event that doesn't implement .save(), and then
subclasses like SQLEvent and MongoEvent that do. All the structure could
be defined at the top, but the implementations of how to store/retrieve
them are separate.

The mongo one might be very simple because it can just use the object
infrastructure to get the serialized blob and store it. The SQL one
would turn the object's fields into an INSERT statement (or a SQLAlchemy
thing).
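The split described here -- one schema and version at the top, per-backend save() implementations below -- can be sketched roughly as follows. This is a stdlib-only stand-in for the oslo.versionedobjects machinery, not its real API; the class names, field list, and version number are illustrative:

```python
import json
import uuid
from abc import ABC, abstractmethod


class Event(ABC):
    """Top-level object: declares the schema and a version, but no .save()."""

    VERSION = '1.1'  # bumped whenever the field list changes

    def __init__(self, event_type, generated, raw=None, traits=None):
        self.id = str(uuid.uuid4())
        self.event_type = event_type
        self.generated = generated
        self.raw = raw or {}
        self.traits = traits or []  # list of (key, value, type) tuples

    def obj_to_primitive(self):
        """Serialize the fields plus the version, so a later reader
        knows exactly which schema the record was written with."""
        return {
            'versioned_object.version': self.VERSION,
            'versioned_object.data': {
                'id': self.id,
                'event_type': self.event_type,
                'generated': self.generated,
                'raw': self.raw,
                'traits': self.traits,
            },
        }

    @abstractmethod
    def save(self, store):
        """Each backend decides how the primitive maps onto its store."""


class MongoEvent(Event):
    def save(self, store):
        # Document store: dump the whole serialized blob, version included.
        store.append(self.obj_to_primitive())


class SQLEvent(Event):
    def save(self, store):
        # Row store: pick the object apart into per-column values instead.
        data = self.obj_to_primitive()['versioned_object.data']
        store.append((data['id'], data['event_type'], data['generated'],
                      json.dumps(data['raw'])))
```

Both drivers share one schema and one version; only the persistence step differs, which is the facade Dan describes.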

> - is there some version tag applied to each item and a version schema
> table created somewhere?

The object defines the schema as a list of tightly typed fields, a bunch
of methods, and a version. In this purely DB-specific case, all it does
is provide you a facade with which to hide things like storing to a
different version or format of schema. For projects that send things
over RPC and then dump them in the database, it's super convenient that
this is all one thing.

> - do we need to migrate the db to handle different sets of
> attributes, and what happens for nosql dbs?

No, Nova made no schema changes as a result of moving to objects.

> also, from api querying pov, if i want to query a db, how do you
> query/filter across different versions?
> - does ovo tell the api what versions exist in the db and then you can
> filter across any attribute from any schema version?

Nope, o.vo doesn't do any of this for you magically. It merely sets up a
place for you to do that work. In nova, we use them for RPC and DB
storage, which means if we have an old node that receives a new object
over RPC (or the opposite) we have rules that define how we handle that.
Thus, we can apply the same rules to reading the DB, where some objects
might be older or newer.
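Those "rules" for handling version skew can be sketched as a small downgrade hook, loosely modelled on oslo.versionedobjects' obj_make_compatible(); this is a standalone illustration, not the library's code, and the assumption that version 1.1 added a 'raw' field unknown to 1.0 readers is invented for the example:

```python
def make_compatible(primitive, target_version):
    """Downgrade a serialized event so an older reader can consume it.

    Hypothetical rule: version 1.1 added a 'raw' field, so a 1.0
    consumer gets the record with that field stripped out.
    """
    data = dict(primitive['versioned_object.data'])  # copy, don't mutate
    if target_version == '1.0':
        data.pop('raw', None)  # field introduced in 1.1
    return {
        'versioned_object.version': target_version,
        'versioned_object.data': data,
    }
```

The same hook serves both directions Dan mentions: an old node receiving a new object over RPC, or new code reading an old row out of the DB.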

> apologies for not understanding how it all works, or if the above has
> nothing to do with ovo (i wasn't joking about the 'explain it to me like
> i'm 5' request :-) ) ... i think part of the wariness is that the code
> seemingly does nothing now (or the logic is extremely hidden) but if we
> merge these x hundred/thousand lines of code, it will do something later
> if something changes.

It really isn't magic and really doesn't do a huge amount of work for
you. It's a pattern as much as anything, and most of the benefit comes
from the serialization and version handling of things over RPC.

Part of the reason why my previous responses are so vague is that I
really don't care if you use o.vo or not. What I do care about is that
critical openstack projects move (quickly) to supporting rolling
upgrades, the likes of what nova supports now and the goals we're trying
to achieve. If the pattern that nova defined and spun out into the
library helps, then that's good for repeatability. However, the model
nova chose clearly mostly applies to projects spawned from nova or those
that were more or less designed in its image.

I think the goal should be "Monitoring of your cloud doesn't ever have
to be turned off to upgrade it." Presumably you never want to leave your
cloud unmonitored while you take a big upgrade. How that goal is
realized really doesn't matter to me at all, as long as we get there.

--Dan


From maishsk at maishsk.com  Thu Sep  3 20:07:37 2015
From: maishsk at maishsk.com (Maish Saidel-Keesing)
Date: Thu, 3 Sep 2015 23:07:37 +0300
Subject: [openstack-dev] [Neutron] Allowing DNS suffix to be set per
 subnet (at least per tenant)
In-Reply-To: <CAG9LJa6TnqPDefNGbsCY8A7U8sd+gdNuHTFiQL+nYrfgHQZ0BQ@mail.gmail.com>
References: <55E7FF06.2010207@maishsk.com>
 <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
 <CAG9LJa6TnqPDefNGbsCY8A7U8sd+gdNuHTFiQL+nYrfgHQZ0BQ@mail.gmail.com>
Message-ID: <55E8A889.3030502@maishsk.com>

On 09/03/15 20:51, Gal Sagie wrote:
> I am not sure if this address what you need specifically, but it would 
> be worth checking these
> two approved liberty specs:
>
> 1) 
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
> 2) 
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst
>
Thanks Gal,

So I see from the bp [1] that the fqdn will be configurable for each and 
every port?

I think that this does open up a number of interesting possibilities, 
but I would also think that it would be sufficient to do this on a 
subnet level?

We do already have the option of setting nameservers per subnet - I 
assume the data model is already implemented - which is interesting - 
because I don't see that as part of the information that is sent by 
dnsmasq so it must be coming from neutron somewhere.

The domain suffix, though, is definitely handled by dnsmasq.


> On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley <openstack at wormley.com 
> <mailto:openstack at wormley.com>> wrote:
>
>     As far as I am aware it is not presently built-in to Openstack.
>     You'll need to add a dnsmasq_config_file option to your dhcp agent
>     configurations and then populate the file with:
>     domain=DOMAIN_NAME,CIDR for each network
>     i.e.
>     domain=example.com,10.11.22.0/24
>     ...
>
>     -Steve
>
>
>     On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing
>     <maishsk at maishsk.com <mailto:maishsk at maishsk.com>> wrote:
>
>         Hello all (cross-posting to openstack-operators as well)
>
>         Today the setting of the DNS suffix that is provided to the
>         instance is passed through dhcp_agent.
>
>         There is the option of setting different DNS servers per
>         subnet (and therefore per tenant), but the domain suffix is
>         something that stays the same throughout the whole system.
>
>         I see that this is not a current neutron feature.
>
>         Is this on the roadmap? Are there ways to achieve this today?
>         If so I would be very interested in hearing how.
>
>         Thanks
>         -- 
>         Best Regards,
>         Maish Saidel-Keesing
>
>
>
>
>
>
>
>
> -- 
> Best Regards ,
>
> The G.

-- 
Best Regards,
Maish Saidel-Keesing
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/4fb6dbc0/attachment.html>

From egafford at redhat.com  Thu Sep  3 20:27:53 2015
From: egafford at redhat.com (Ethan Gafford)
Date: Thu, 3 Sep 2015 16:27:53 -0400 (EDT)
Subject: [openstack-dev] [sahara] Request for Feature Freeze Exception
In-Reply-To: <55E8A545.7000602@redhat.com>
References: <CA+O3VAi689gyN-7Vu1qmBsv_T3xyaOOiL0foBo4YLLTJFW60ww@mail.gmail.com>
 <55E8A545.7000602@redhat.com>
Message-ID: <47194693.21252941.1441312073237.JavaMail.zimbra@redhat.com>

Agreed. We've talked about this for a while, and it's very low risk.

Thanks,
Ethan

----- Original Message -----
From: "michael mccune" <msm at redhat.com>
To: openstack-dev at lists.openstack.org
Sent: Thursday, September 3, 2015 3:53:41 PM
Subject: Re: [openstack-dev] [sahara] Request for Feature Freeze Exception

On 09/03/2015 02:49 PM, Vitaly Gridnev wrote:
> Hey folks!
>
> I would like to propose adding the following blueprint to the list of FFEs:
> https://blueprints.launchpad.net/sahara/+spec/drop-hadoop-1
>
> The reasoning is as follows:
>
>   1. HDP 1.3.2 and Vanilla 1.2.1 have not been gated for a whole release
> cycle, which may have allowed several bugs into these versions;
>   2. Minimal risk of removal: it doesn't touch versions that we already
> have.
>   3. All required changes have already been uploaded for review:
> https://review.openstack.org/#/q/status:open+project:openstack/sahara+branch:master+topic:bp/drop-hadoop-1,n,z

this sounds reasonable to me

mike




From henryn at linux.vnet.ibm.com  Thu Sep  3 20:38:45 2015
From: henryn at linux.vnet.ibm.com (Henry Nash)
Date: Thu, 3 Sep 2015 21:38:45 +0100
Subject: [openstack-dev] FFE Request for completion of data driven
	assignment testing in Keystone
Message-ID: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>

The approved Keystone Blueprint (https://review.openstack.org/#/c/190996/) for enhancing our testing of the assignment backend was split into 7 patches. 5 of these landed before the liberty-3 freeze, but two had not yet been approved.

I would like to request an FFE for the remaining two patches that are already in review (https://review.openstack.org/#/c/153897/ and https://review.openstack.org/#/c/154485/).  These contain only test code and no functional changes, and increase our test coverage - as well as enabling other items to re-use the list_role_assignment backend method.

Henry
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/06c2ffbe/attachment.html>

From blak111 at gmail.com  Thu Sep  3 20:38:31 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 3 Sep 2015 13:38:31 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <BLU437-SMTP76050D04789EE848C5B5F4D8680@phx.gbl>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <BLU437-SMTP76050D04789EE848C5B5F4D8680@phx.gbl>
Message-ID: <CAO_F6JO4N4AevZN0n4K=fTyXpbkOi+T1Up_7ukYHodzTTPim3A@mail.gmail.com>

I think that's different than what is being asked here. That patch appears
to just add IPv6 interface information if it's available in the metadata.
This thread is about getting cloud-init to connect to an IPv6 address
instead of 169.254.169.254 for pure IPv6 environments.

On Thu, Sep 3, 2015 at 11:41 AM, Joshua Harlow <harlowja at outlook.com> wrote:

> I'm pretty sure this got implemented :)
>
> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1042
> and https://bugs.launchpad.net/cloud-init/+bug/1391695
>
> That's the RHEL support; since cloud-init translates from a ubuntu-style
> networking format, the ubuntu/debian format should also work.
>
>
> Steve Gordon wrote:
>
>> ----- Original Message -----
>>
>>> From: "Kevin Benton"<blak111 at gmail.com>
>>>
>>> When we discussed this before on the neutron channel, I thought it was
>>> because cloud-init doesn't support IPv6. We had wasted quite a bit of
>>> time
>>> talking about adding support to our metadata service because I was under
>>> the impression that cloud-init already did support IPv6.
>>>
>>> IIRC, the argument against adding IPv6 support to cloud-init was that it
>>> might be incompatible with how AWS chooses to implement IPv6 metadata, so
>>> AWS would require a fork or other incompatible alternative to cloud-init
>>> in
>>> all of their images.
>>>
>>> Is that right?
>>>
>>
>> That's certainly my understanding of the status quo; I was enquiring
>> primarily to check it was still accurate.
>>
>> -Steve
>>
>> On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins<sean at coreitpro.com>
>>> wrote:
>>>
>>> It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
>>>> API defines transport level details about the API - and currently only
>>>> defines a well known IPv4 link local address to connect to. No well
>>>> known
>>>> link local IPv6 address has been defined.
>>>>
>>>> I usually recommend config-drive for IPv6 enabled clouds due to this.
>>>> --
>>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/cd338f27/attachment.html>

From henryn at linux.vnet.ibm.com  Thu Sep  3 20:49:35 2015
From: henryn at linux.vnet.ibm.com (Henry Nash)
Date: Thu, 3 Sep 2015 21:49:35 +0100
Subject: [openstack-dev] FFE Request for list role assignment in tree
	blueprint in Keystone
Message-ID: <31ACB7D9-4B6B-4DBF-B0D7-9705F4AA1F32@linux.vnet.ibm.com>

The approved Keystone blueprint (https://review.openstack.org/#/c/187045/) was held up during the Liberty cycle by the need for the data-driven testing to be in place (https://review.openstack.org/#/c/190996/).  The main implementation is already in review, and it adds a new API without modifying other existing APIs.

Given the narrowness of the addition being made (and that its implementation is already well advanced), I would like to request an FFE for the completion of this blueprint, since it provides valuable administrative capabilities for enterprise customers in conjunction with the Keystone reseller capabilities.

Thanks

Henry
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/c8a05f15/attachment.html>

From blak111 at gmail.com  Thu Sep  3 21:21:43 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 3 Sep 2015 14:21:43 -0700
Subject: [openstack-dev] [Openstack-operators] [Neutron] Allowing DNS
 suffix to be set per subnet (at least per tenant)
In-Reply-To: <55E8A889.3030502@maishsk.com>
References: <55E7FF06.2010207@maishsk.com>
 <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
 <CAG9LJa6TnqPDefNGbsCY8A7U8sd+gdNuHTFiQL+nYrfgHQZ0BQ@mail.gmail.com>
 <55E8A889.3030502@maishsk.com>
Message-ID: <CAO_F6JMG5P6EhnrZm3WdzohAdxRh+vzhrXZZ_k+4quAJqEf+bA@mail.gmail.com>

Support for that blueprint has already merged[1], so it's a little late to
change it to per-subnet. If that is too fine-grained for your use case, I
would file an RFE bug[2] to allow it to be set at the subnet level.


1. https://review.openstack.org/#/c/200952/
2.
http://docs.openstack.org/developer/neutron/policies/blueprints.html#rfe-submission-guidelines

On Thu, Sep 3, 2015 at 1:07 PM, Maish Saidel-Keesing <maishsk at maishsk.com>
wrote:

> On 09/03/15 20:51, Gal Sagie wrote:
>
> I am not sure if this addresses what you need specifically, but it would be
> worth checking these
> two approved liberty specs:
>
> 1)
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
> 2)
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst
>
> Thanks Gal,
>
> So I see from the bp [1] that the fqdn will be configurable for each and
> every port?
>
> I think that this does open up a number of interesting possibilities, but
> I would also think that it would be sufficient to do this on a subnet level?
>
> We do already have the option of setting nameservers per subnet - I assume
> the data model is already implemented - which is interesting, because I
> don't see that as part of the information sent by dnsmasq, so it
> must be coming from neutron somewhere.
>
> The domain suffix is definitely handled by dnsmasq.
>
>
>
> On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley <openstack at wormley.com>
> wrote:
>
>> As far as I am aware it is not presently built-in to Openstack. You'll
>> need to add a dnsmasq_config_file option to your dhcp agent configurations
>> and then populate the file with:
>> domain=DOMAIN_NAME,CIDR for each network
>> i.e.
>> domain=example.com,10.11.22.0/24
>> ...
>>
>> -Steve
>>
>>
>> On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing <
>> <maishsk at maishsk.com>maishsk at maishsk.com> wrote:
>>
>>> Hello all (cross-posting to openstack-operators as well)
>>>
>>> Today the setting of the dns suffix that is provided to the instance is
>>> passed through dhcp_agent.
>>>
>>> There is the option of setting different DNS servers per subnet (and
>>> therefore per tenant), but the domain suffix is something that stays the
>>> same throughout the whole system.
>>>
>>> I see that this is not a current neutron feature.
>>>
>>> Is this on the roadmap? Are there ways to achieve this today? If so I
>>> would be very interested in hearing how.
>>>
>>> Thanks
>>> --
>>> Best Regards,
>>> Maish Saidel-Keesing
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
>
> --
> Best Regards,
> Maish Saidel-Keesing
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/a0964550/attachment.html>
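[Editorial note: the per-network dnsmasq workaround Steve Wormley describes in
the message above can be sketched as below. This is a hypothetical helper, not
Neutron code; the network names and CIDRs are invented for illustration. The
resulting file would be pointed to via dnsmasq_config_file in dhcp_agent.ini.]

```python
# Sketch: build a dnsmasq extra-config file with one "domain=NAME,CIDR"
# line per network, as suggested in the message above. All names here
# are illustrative assumptions, not part of Neutron.

networks = {
    "example.com": "10.11.22.0/24",
    "lab.example.org": "10.11.23.0/24",
}

def render_dnsmasq_conf(domains):
    # One "domain=DOMAIN,CIDR" entry per network, sorted for stable output
    return "\n".join(
        "domain=%s,%s" % (name, cidr)
        for name, cidr in sorted(domains.items())
    ) + "\n"

if __name__ == "__main__":
    # The output would be written to the file named by dnsmasq_config_file
    print(render_dnsmasq_conf(networks))
```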

From dpyzhov at mirantis.com  Thu Sep  3 21:41:08 2015
From: dpyzhov at mirantis.com (Dmitry Pyzhov)
Date: Fri, 4 Sep 2015 00:41:08 +0300
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAPQe3Ln-Rv2Z-8LyWPo914mFk+xhxHe05Vj=wxR=yuoUd2+PyA@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
 <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
 <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>
 <CAHAWLf2apU=0b_xOhEMA=DjKoEKRsSCtys4sGnjyBmQckgXhUA@mail.gmail.com>
 <CAPQe3Ln-Rv2Z-8LyWPo914mFk+xhxHe05Vj=wxR=yuoUd2+PyA@mail.gmail.com>
Message-ID: <CAEg2Y8M2HW2QLbNNNga2jMwCm4Z-78wxoZaQCGuW-q_-3PqjwA@mail.gmail.com>

+1

On Thu, Sep 3, 2015 at 10:14 PM, Sergey Vasilenko <svasilenko at mirantis.com>
wrote:

> +1
>
>
> /sv
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/f7437097/attachment.html>

From armamig at gmail.com  Thu Sep  3 22:00:08 2015
From: armamig at gmail.com (Armando M.)
Date: Thu, 3 Sep 2015 15:00:08 -0700
Subject: [openstack-dev] [neutron] pushing changes through the gate
In-Reply-To: <CAK+RQebMqAL-ZTn4z3Tafnpr=feA3i4qaeF5Uu4K03SO_fFF9g@mail.gmail.com>
References: <CAK+RQebMqAL-ZTn4z3Tafnpr=feA3i4qaeF5Uu4K03SO_fFF9g@mail.gmail.com>
Message-ID: <CAK+RQeYKgsBMg7Ray6wqHoPdJ7ODq-uy8N-Sp6iruq7oXvbTkA@mail.gmail.com>

On 2 September 2015 at 09:40, Armando M. <armamig at gmail.com> wrote:

> Hi,
>
> By now you may have seen that I have taken out your change from the gate
> and given it a -2: don't despair! I am only doing it to give priority to
> the stuff that needs to merge in order to get [1] into a much better shape.
>
> If you have an important fix, please target it for RC1 or talk to me or
> Doug (or Kyle when he's back from his time off), before putting it in the
> gate queue. If everyone is not conscious of the other, we'll only end up
> stepping on each other, and nothing moves forward.
>
> Let's give priority to gate stabilization fixes, and targeted stuff.
>
> Happy merging...not!
>
> Many thanks,
> Armando
>
> [1] https://launchpad.net/neutron/+milestone/liberty-3
> [2] https://launchpad.net/neutron/+milestone/liberty-rc1
>

Download files for the milestone are available in [1]. We still have a lot
to do as there are outstanding bugs and blueprints that will have to be
merged in the RC time window.

Please be conscious of what you approve. Give priority to:

- Targeted bugs and blueprints in [2];
- Gate stability fixes or patches that aim at helping troubleshooting;

In these busy times, please refrain from proposing/merging:

- Silly rebase generators (e.g. spelling mistakes);
- Cosmetic changes (e.g. minor doc strings/comment improvements);
- Refactoring required while dealing with the above;
- A dozen patches stacked on top of each other;

Every rule has its own exception, so don't take this literally.

If you are unsure, please reach out to me, Kyle or your Lieutenant and
we'll target stuff that is worth targeting.

As for the rest, I am gonna be merciless and -2 anything that I can find,
in order to keep our gate lean and sane :)

Thanks and happy hacking.

A.

[1] https://launchpad.net/neutron/+milestone/liberty-3
[2] https://launchpad.net/neutron/+milestone/liberty-rc1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/0d670a05/attachment-0001.html>

From zigo at debian.org  Thu Sep  3 22:06:26 2015
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 04 Sep 2015 00:06:26 +0200
Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch
 imports from fonts.googleapis.com
In-Reply-To: <CABswzdFv-q7gz+W=ss917SvtTDZzkA55Q67RFpY-S=DaidM-8Q@mail.gmail.com>
References: <CABswzdFv-q7gz+W=ss917SvtTDZzkA55Q67RFpY-S=DaidM-8Q@mail.gmail.com>
Message-ID: <55E8C462.5050804@debian.org>

On 09/03/2015 07:58 PM, Diana Whitten wrote:
> Thomas,
> 
> Sorry for the slow response, since I wasn't on the right mailing list yet.
> 
> 1. I'm trying to figure out the best way possible to address this
> security breach.  I think that the best way to fix this is to augment
> Bootswatch to only use the URL through a parameter, that can be easily
> configured.  I have an Issue open on their code right now for this very
> feature.
> 
> Until then, I think that we can easily address the issue from the point
> of view of Horizon, such that we:
> 1. Remove all instances of 'fonts.googleapis.com
> <http://fonts.googleapis.com>' from the SCSS during the preprocessor
> step. Therefore, no outside URLs that point to this location EVER get hit
> *or*
> 2. Until the issue that I created on Bootswatch can be addressed,  we
> can include that file that is making the call in the tree and remove the
> @import entirely. 
> *or*
> 3. Until the issue that I created on Bootswatch can be addressed,  we
> can include the two files that we need from bootswatch 'paper' entirely,
> and remove Bootswatch as a requirement until we can get an updated package
> 
> 2. It's not getting used at all anyway.  I packaged up the font and
> made it also available via xstatic.  I realized there were some questions
> about where the versioning came from, but it looks like you might have
> been looking at the wrong github repo:
> https://github.com/Templarian/MaterialDesign-Webfont/releases
> 
> You can absolutely patch out the fonts.  The result will not be ugly;
> each font should fall back to a nice system font.  But, we are only
> using the 'Paper' theme out of Bootswatch right now and therefore only
> packaged up the specific font required for it.
> 
> Ping me on IRC @hurgleburgler
> 
> - Diana

Diana,

Thanks a lot for all of these answers. It's really helping!

So if I understand correctly, xstatic-bootswatch is an already stripped-down
version of the upstream bootswatch, but Horizon only uses a single theme
out of the 16 available in the XStatic package. Then why aren't we using
an xstatic package that includes only the paper theme? Or is there
something I didn't understand?

Having Horizon remove the fonts.googleapis.com import at runtime isn't an
option for distributions, as we don't want to ship a .css file containing
such an import anyway. So I'd definitely be patching the @import away.
But will there then be a mechanism to load the Roboto font, packaged as
xstatic? Falling back to a system font could have surprising results.

This was for the bootswatch issue. Now, about the mdi, which IMO isn't
as much of a problem.

The Git repository at:
https://github.com/Templarian/MaterialDesign-Webfont/releases

I wonder how it was created. Apparently, the font is made up of images
that are coming from this repository:
https://github.com/google/material-design-icons

The question, then, is how this font was made. Was it done "by hand"
by an artist? Or was there some kind of scripting involved? If it is the
latter, then I'd like to build the font out of the original sources if
possible. If I can't find how it was done, then I'll probably end up
just packaging the font as-is, but I'd very much prefer to understand
what has been done.

Cheers,

Thomas Goirand (zigo)
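[Editorial note: option 1 from Diana's list above (removing the
fonts.googleapis.com @import during the SCSS preprocessor step) could be
sketched as below. The regular expression and function are assumptions for
illustration, not Horizon's actual implementation.]

```python
import re

# Sketch: strip any @import line that references fonts.googleapis.com
# from SCSS text before it is compiled, so no external URL is ever hit.
# Illustrative only; not what Horizon actually ships.

GOOGLE_FONTS_IMPORT = re.compile(
    r'^\s*@import\s+url\([^)]*fonts\.googleapis\.com[^)]*\)\s*;\s*$',
    re.MULTILINE)

def strip_google_fonts(scss_text):
    """Remove fonts.googleapis.com @import lines from SCSS text."""
    return GOOGLE_FONTS_IMPORT.sub('', scss_text)
```

A distribution could apply the same idea as a build-time patch instead of a
runtime filter, which is what the message above argues for.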



From Kevin.Fox at pnnl.gov  Thu Sep  3 23:03:48 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 3 Sep 2015 23:03:48 +0000
Subject: [openstack-dev] cloud-init IPv6 support
In-Reply-To: <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>,
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>

So if we define the well known address and cloud-init adopts it, then Amazon should be inclined to adopt it too. :)

Why always chase Amazon?

Thanks,
Kevin
________________________________________
From: Steve Gordon [sgordon at redhat.com]
Sent: Thursday, September 03, 2015 11:06 AM
To: Kevin Benton
Cc: OpenStack Development Mailing List (not for usage questions); PAUL CARVER
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

----- Original Message -----
> From: "Kevin Benton" <blak111 at gmail.com>
>
> When we discussed this before on the neutron channel, I thought it was
> because cloud-init doesn't support IPv6. We had wasted quite a bit of time
> talking about adding support to our metadata service because I was under
> the impression that cloud-init already did support IPv6.
>
> IIRC, the argument against adding IPv6 support to cloud-init was that it
> might be incompatible with how AWS chooses to implement IPv6 metadata, so
> AWS would require a fork or other incompatible alternative to cloud-init in
> all of their images.
>
> Is that right?

That's certainly my understanding of the status quo; I was enquiring primarily to check it was still accurate.

-Steve

> On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins <sean at coreitpro.com> wrote:
>
> > It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
> > API defines transport level details about the API - and currently only
> > defines a well known IPv4 link local address to connect to. No well known
> > link local IPv6 address has been defined.
> >
> > I usually recommend config-drive for IPv6 enabled clouds due to this.
> > --
> > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
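[Editorial note: the endpoint-selection logic under discussion above could be
sketched as below. The IPv6 link-local address shown is purely an assumption
for illustration: as the thread notes, no well-known IPv6 metadata address had
been defined at this point.]

```python
# Sketch of how a cloud-init-style client might choose metadata endpoints.
# 169.254.169.254 is the real EC2 IPv4 link-local address; the IPv6
# address below is hypothetical (link-local addresses need a zone id).

IPV4_METADATA = "http://169.254.169.254/latest/meta-data/"
IPV6_METADATA = "http://[fe80::a9fe:a9fe%eth0]/latest/meta-data/"  # assumed

def candidate_endpoints(have_ipv4, have_ipv6):
    """Return metadata URLs to try, preferring IPv4 for compatibility."""
    urls = []
    if have_ipv4:
        urls.append(IPV4_METADATA)
    if have_ipv6:
        urls.append(IPV6_METADATA)
    return urls
```

In a pure-IPv6 environment only the second URL would be attempted, which is
exactly the case the thread says is unsupported without a defined address.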


From hurgleburgler at gmail.com  Thu Sep  3 23:11:48 2015
From: hurgleburgler at gmail.com (Diana Whitten)
Date: Thu, 3 Sep 2015 16:11:48 -0700
Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch imports
 from fonts.googleapis.com
Message-ID: <CABswzdHTq_WqX6XmfMpdDrt0pDa2DZS3spw0O26rErtE7h9emg@mail.gmail.com>

Thomas,

Lots of movement on this today.  I was able to get Bootswatch to roll a new
package to accommodate our need to not pull in the URL by default any
longer.  This is now a configurable value that can be set by a variable.
The variable's default value is still the google URL, but Horizon will
reset that when we pull it in.

The bootswatch package isn't a stripped down version of upstream
bootswatch, but it was created from the already existing bower package for
Bootswatch.  It is easier to maintain parity with the bower package than
to pull specific themes out of it.  Also, some upcoming
features plan to take advantage of some of the other themes as well.

As for the MDI package, there are services out there that can convert the
raw SVG that is available directly from google (
https://github.com/google/material-design-icons) into a variety of Web Font
Formats, BUT ... this is not a direct mapping of Google's Material Design
Icons.  The Templarian repo is actually a bigger set of icons: it includes
Google's icons, but also a number of community-supported and contributed
(under the same license) icons.  See the full set here:
https://materialdesignicons.com/.  Templarian maintains the SVGs of these
at https://github.com/Templarian/MaterialDesign, however, they also
maintain the Bower package (that the xstatic inherits from) at
https://github.com/Templarian/MaterialDesign-Webfont.

Best,
Diana



On Thu, Sep 3, 2015 at 3:06 PM, Thomas Goirand <zigo at debian.org> wrote:

> On 09/03/2015 07:58 PM, Diana Whitten wrote:
> > Thomas,
> >
> > Sorry for the slow response, since I wasn't on the right mailing list
> yet.
> >
> > 1. I'm trying to figure out the best way possible to address this
> > security breach.  I think that the best way to fix this is to augment
> > Bootswatch to only use the URL through a parameter, that can be easily
> > configured.  I have an Issue open on their code right now for this very
> > feature.
> >
> > Until then, I think that we can easily address the issue from the point
> > of view of Horizon, such that we:
> > 1. Remove all instances of 'fonts.googleapis.com
> > <http://fonts.googleapis.com>' from the SCSS during the preprocessor
> > step. Therefore, no outside URLs that point to this location EVER get hit
> > *or*
> > 2. Until the issue that I created on Bootswatch can be addressed,  we
> > can include that file that is making the call in the tree and remove the
> > @import entirely.
> > *or*
> > 3. Until the issue that I created on Bootswatch can be addressed,  we
> > can include the two files that we need from bootswatch 'paper' entirely,
> > and remove Bootswatch as a requirement until we can get an updated
> package
> >
> > 2. It's not getting used at all anyway.  I packaged up the font and
> > made it also available via xstatic.  I realized there were some questions
> > about where the versioning came from, but it looks like you might have
> > been looking at the wrong github repo:
> > https://github.com/Templarian/MaterialDesign-Webfont/releases
> >
> > You can absolutely patch out the fonts.  The result will not be ugly;
> > each font should fall back to a nice system font.  But, we are only
> > using the 'Paper' theme out of Bootswatch right now and therefore only
> > packaged up the specific font required for it.
> >
> > Ping me on IRC @hurgleburgler
> >
> > - Diana
>
> Diana,
>
> Thanks a lot for all of these answers. It's really helping!
>
> So if I understand correctly, xstatic-bootswatch is an already stripped-down
> version of the upstream bootswatch, but Horizon only uses a single theme
> out of the 16 available in the XStatic package. Then why aren't we using
> an xstatic package that includes only the paper theme? Or is there
> something I didn't understand?
>
> Having Horizon remove the fonts.googleapis.com import at runtime isn't an
> option for distributions, as we don't want to ship a .css file containing
> such an import anyway. So I'd definitely be patching the @import away.
> But will there then be a mechanism to load the Roboto font, packaged as
> xstatic? Falling back to a system font could have surprising results.
>
> This was for the bootswatch issue. Now, about the mdi, which IMO isn't
> as much of a problem.
>
> The Git repository at:
> https://github.com/Templarian/MaterialDesign-Webfont/releases
>
> I wonder how it was created. Apparently, the font is made up of images
> that are coming from this repository:
> https://github.com/google/material-design-icons
>
> The question, then, is how this font was made. Was it done "by hand"
> by an artist? Or was there some kind of scripting involved? If it is the
> latter, then I'd like to build the font out of the original sources if
> possible. If I can't find how it was done, then I'll probably end up
> just packaging the font as-is, but I'd very much prefer to understand
> what has been done.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/1be665a8/attachment.html>

From asalkeld at mirantis.com  Fri Sep  4 00:17:20 2015
From: asalkeld at mirantis.com (Angus Salkeld)
Date: Fri, 04 Sep 2015 00:17:20 +0000
Subject: [openstack-dev] [Heat] convergence rally test results (so far)
In-Reply-To: <55E85C61.5000808@redhat.com>
References: <CAA16xcx4BQ6meZ7HBCoEAwdQ_9k50T=wwt4wcNLUfBO9Y_LFbQ@mail.gmail.com>
 <20150901124147.GA4710@t430slt.redhat.com>
 <CAA16xcwVvJE35yKRLeELaRx3BxSUGD1okt3b-LHSuxz4BYqx0w@mail.gmail.com>
 <CAJ3HoZ1RKCBV5if4YS_b-h0WzGu0HySkAVEQGKbvyuOpz9LYGg@mail.gmail.com>
 <20150902085546.GA25909@t430slt.redhat.com> <55E73721.1000804@redhat.com>
 <CAA16xcxAnXj9mDhoATFDvhvkBJiF5g4taTv7g3LNoONhuRo4jA@mail.gmail.com>
 <55E85C61.5000808@redhat.com>
Message-ID: <CAA16xcx7R0eNatKKrq1rxLX7OGL37acKRFis5S1Pmt8-ZJq+dg@mail.gmail.com>

On Fri, Sep 4, 2015 at 12:48 AM Zane Bitter <zbitter at redhat.com> wrote:

> On 03/09/15 02:56, Angus Salkeld wrote:
> > On Thu, Sep 3, 2015 at 3:53 AM Zane Bitter <zbitter at redhat.com
> > <mailto:zbitter at redhat.com>> wrote:
> >
> >     On 02/09/15 04:55, Steven Hardy wrote:
> >      > On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
> >      >> On 2 September 2015 at 11:53, Angus Salkeld
> >     <asalkeld at mirantis.com <mailto:asalkeld at mirantis.com>> wrote:
> >      >>
> >      >>> 1. limit the number of resource actions in parallel (maybe base
> >     on the
> >      >>> number of cores)
> >      >>
> >      >> I'm having trouble mapping that back to 'and heat-engine is
> >     running on
> >      >> 3 separate servers'.
> >      >
> >      > I think Angus was responding to my test feedback, which was a
> >     different
> >      > setup, one 4-core laptop running heat-engine with 4 worker
> processes.
> >      >
> >      > In that environment, the level of additional concurrency becomes
> >     a problem
> >      > because all heat workers become so busy that creating a large
> stack
> >      > DoSes the Heat services, and in my case also the DB.
> >      >
> >      > If we had a configurable option, similar to num_engine_workers,
> which
> >      > enabled control of the number of resource actions in parallel, I
> >     probably
> >      > could have controlled that explosion in activity to a more
> >     managable series
> >      > of tasks, e.g I'd set num_resource_actions to
> >     (num_engine_workers*2) or
> >      > something.
> >
> >     I think that's actually the opposite of what we need.
> >
> >     The resource actions are just sent to the worker queue to get
> processed
> >     whenever. One day we will get to the point where we are overflowing
> the
> >     queue, but I guarantee that we are nowhere near that day. If we are
> >     DoSing ourselves, it can only be because we're pulling *everything*
> off
> >     the queue and starting it in separate greenthreads.
> >
> >
> > The worker does not use a greenthread per job like service.py does.
> > The issue is that if you have actions that are fast, you can hit the DB hard.
> >
> > QueuePool limit of size 5 overflow 10 reached, connection timed out,
> > timeout 30
> >
> > It seems like it's not very hard to hit this limit. It comes from simply
> > loading
> > the resource in the worker:
> > "/home/angus/work/heat/heat/engine/worker.py", line 276, in
> check_resource
> > "/home/angus/work/heat/heat/engine/worker.py", line 145, in
> _load_resource
> > "/home/angus/work/heat/heat/engine/resource.py", line 290, in load
> > resource_objects.Resource.get_obj(context, resource_id)
>
> This is probably me being naive, but that sounds strange. I would have
> thought that there is no way to exhaust the connection pool by doing
> lots of actions in rapid succession. I'd have guessed that the only way
> to exhaust a connection pool would be to have lots of connections open
> simultaneously. That suggests to me that either we are failing to
> expeditiously close connections and return them to the pool, or that we
> are - explicitly or implicitly - processing a bunch of messages in
> parallel.
>

I suspect we are leaking sessions, I have updated this bug to make sure we
focus on figuring out the root cause of this before jumping to conclusions:
https://bugs.launchpad.net/heat/+bug/1491185

-A


>
> >     In an ideal world, we might only ever pull one task off that queue
> at a
> >     time. Any time the task is sleeping, we would use for processing
> stuff
> >     off the engine queue (which needs a quick response, since it is
> serving
> >     the ReST API). The trouble is that you need a *huge* number of
> >     heat-engines to handle stuff in parallel. In the reductio-ad-absurdum
> >     case of a single engine only processing a single task at a time,
> we're
> >     back to creating resources serially. So we probably want a higher
> number
> >     than 1. (Phase 2 of convergence will make tasks much smaller, and may
> >     even get us down to the point where we can pull only a single task
> at a
> >     time.)
> >
> >     However, the fewer engines you have, the more greenthreads we'll
> have to
> >     allow to get some semblance of parallelism. To the extent that more
> >     cores means more engines (which assumes all running on one box, but
> >     still), the number of cores is negatively correlated with the number
> of
> >     tasks that we want to allow.
> >
> >     Note that all of the greenthreads run in a single CPU thread, so
> having
> >     more cores doesn't help us at all with processing more stuff in
> >     parallel.
> >
> >
> > Except, as I said above, we are not creating greenthreads in worker.
>
> Well, maybe we'll need to in order to make things still work sanely with
> a low number of engines :) (Should be pretty easy to do with a semaphore.)
>
> I think what y'all are suggesting is limiting the number of jobs that go
> into the queue... that's quite wrong IMO. Apart from the fact it's
> impossible (resources put jobs into the queue entirely independently,
> and have no knowledge of the global state required to throttle inputs),
> we shouldn't implement an in-memory queue with long-running tasks
> containing state that can be lost if the process dies - the whole point
> of convergence is we have... a message queue for that. We need to limit
> the rate that stuff comes *out* of the queue. And, again, since we have
> no knowledge of global state, we can only control the rate at which an
> individual worker processes tasks. The way to avoid killing the DB is to
> put a constant ceiling on the workers * concurrent_tasks_per_worker
> product.
>
> cheers,
> Zane.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/10ac4234/attachment.html>
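[Editorial note: the per-worker throttle Zane suggests above (a semaphore so
that workers * concurrent_tasks_per_worker bounds DB load) could be sketched
as below. The names, the ceiling value, and the plain-callable task interface
are assumptions; Heat's real workers would use an eventlet semaphore around
greenthreads rather than OS threads.]

```python
import threading

# Sketch: cap the number of tasks a worker processes concurrently, so
# the rate of work coming *out* of the queue (and the number of DB
# connections held at once) stays bounded. Illustrative only.

MAX_CONCURRENT_TASKS = 4  # assumed per-worker ceiling

_task_slots = threading.Semaphore(MAX_CONCURRENT_TASKS)

def process_task(task):
    # Block until a slot frees up instead of starting every dequeued
    # message at once; each running task holds at most one DB session,
    # so pool exhaustion (QueuePool size 5, overflow 10) is avoided.
    with _task_slots:
        return task()
```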

From jim at jimrollenhagen.com  Fri Sep  4 00:18:15 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Thu, 3 Sep 2015 17:18:15 -0700
Subject: [openstack-dev] [Ironic] Liberty release plans
Message-ID: <20150904001815.GA21846@jimrollenhagen.com>

Hi all,

I wanted to lay down my thoughts on the rest of the cycle here.

As you may know, we recently released Ironic 4.0.0. We've also released
python-ironicclient 0.8.0 and ironic-lib 0.1.0 this week. Yay!

I'd like to do two more server releases this cycle.

* 4.1.0 - minor release to clean up some bugs on 4.0.0. The last
  patch[0] I wanted in this is in the gate right now. I'd like to
  release this on Tuesday, September 8.

* 4.2.0 - this will become the stable/liberty branch. I'd like to
  release this in coordination with the rest of OpenStack's RC releases,
  and backport bug fixes as necessary, releasing 4.2.x for those.

I've made lists of features and bugs we want to land in 4.2.0 on our
whiteboard[1]. Let's try to prioritize code and review for those as much
as possible.

I'd like to try to release 4.2.0 on Thursday, September 24. As such, I'd
like reviewers to respect a soft code freeze beginning on Thursday,
September 17. I don't want to say "don't merge features", but please
don't merge anything risky after that date.

As always, questions/comments/concerns welcome.

// jim

[0] https://review.openstack.org/#/c/219398/
[1] https://etherpad.openstack.org/p/IronicWhiteBoard


From jim at jimrollenhagen.com  Fri Sep  4 00:24:00 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Thu, 3 Sep 2015 17:24:00 -0700
Subject: [openstack-dev] [Ironic] Introducing ironic-lib 0.1.0
Message-ID: <20150904002400.GB21846@jimrollenhagen.com>

Hi all,

I'm proud to announce the initial release of ironic-lib! This library
was built to share code between various Ironic projects. The initial
release contains an up-to-date copy of Ironic's disk partitioning code,
to be shared between Ironic and ironic-python-agent.

At the beginning of the Mitaka cycle, we'll begin to refactor Ironic to
use this code, and also start using it in IPA to be able to deploy
partition images, support ephemeral volumes, etc., without relying on
iSCSI.

PyPI: https://pypi.python.org/pypi/ironic-lib
Git: http://git.openstack.org/cgit/openstack/ironic-lib/
Launchpad: https://launchpad.net/ironic-lib
global-requirements patch: https://review.openstack.org/#/c/219011/

As always, questions/comments/concerns welcome. :)

// jim


From ichihara.hirofumi at lab.ntt.co.jp  Fri Sep  4 00:55:40 2015
From: ichihara.hirofumi at lab.ntt.co.jp (Hirofumi Ichihara)
Date: Fri, 4 Sep 2015 09:55:40 +0900
Subject: [openstack-dev] [neutron] pushing changes through the gate
In-Reply-To: <CAK+RQeYKgsBMg7Ray6wqHoPdJ7ODq-uy8N-Sp6iruq7oXvbTkA@mail.gmail.com>
References: <CAK+RQebMqAL-ZTn4z3Tafnpr=feA3i4qaeF5Uu4K03SO_fFF9g@mail.gmail.com>
 <CAK+RQeYKgsBMg7Ray6wqHoPdJ7ODq-uy8N-Sp6iruq7oXvbTkA@mail.gmail.com>
Message-ID: <AB2E7CE7-ECCC-49D9-AB20-46135559D3DE@lab.ntt.co.jp>

Good work and thank you for your help with my patch.

Anyway, I don't know by when a bp owner has to merge the code.
I can see the following sentence in the bp rules[1]:
"The PTL will create a <release>-backlog directory during the RC window and move all specs which didn't make the <release> there."
Did we have to merge the implementation by L-3? Or can we merge it in RC-1?

[1]: http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-bp-and-spec-notes

Thanks,
Hirofumi

> On 2015/09/04, at 7:00, Armando M. <armamig at gmail.com> wrote:
> 
> 
> 
> On 2 September 2015 at 09:40, Armando M. <armamig at gmail.com <mailto:armamig at gmail.com>> wrote:
> Hi,
> 
> By now you may have seen that I have taken out your change from the gate and given it a -2: don't despair! I am only doing it to give priority to the stuff that needs to merge in order to get [1] into a much better shape.
> 
> If you have an important fix, please target it for RC1 or talk to me or Doug (or Kyle when he's back from his time off), before putting it in the gate queue. If everyone is not conscious of the other, we'll only end up stepping on each other, and nothing moves forward.
> 
> Let's give priority to gate stabilization fixes, and targeted stuff.
> 
> Happy merging...not!
> 
> Many thanks,
> Armando
> 
> [1] https://launchpad.net/neutron/+milestone/liberty-3 <https://launchpad.net/neutron/+milestone/liberty-3>
> [2] https://launchpad.net/neutron/+milestone/liberty-rc1 <https://launchpad.net/neutron/+milestone/liberty-rc1>
> 
> Download files for the milestone are available in [1]. We still have a lot to do as there are outstanding bugs and blueprints that will have to be merged in the RC time windows.
> 
> Please be conscious of what you approve. Give priority to:
> 
> - Targeted bugs and blueprints in [2];
> - Gate stability fixes or patches that aim at helping troubleshooting;
> 
> In these busy times, please refrain from proposing/merging:
> 
> - Silly rebase generators (e.g. spelling mistakes);
> - Cosmetic changes (e.g. minor doc strings/comment improvements);
> - Refactoring required while dealing with the above;
> - A dozen of patches stacked on top of each other; 
> 
> Every rule has its own exception, so don't take this literally.
> 
> If you are unsure, please reach out to me, Kyle or your Lieutenant and we'll target stuff that is worth targeting.
> 
> As for the rest, I am gonna be merciless and -2 anything that I can find, in order to keep our gate lean and sane :)
> 
> Thanks and happy hacking.
> 
> A.
> 
> [1] https://launchpad.net/neutron/+milestone/liberty-3 <https://launchpad.net/neutron/+milestone/liberty-3>
> [2] https://launchpad.net/neutron/+milestone/liberty-rc1 <https://launchpad.net/neutron/+milestone/liberty-rc1>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org <mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>

From wanghua.humble at gmail.com  Fri Sep  4 01:43:49 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Fri, 4 Sep 2015 09:43:49 +0800
Subject: [openstack-dev] [magnum]keystone version
Message-ID: <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=pFO27-g@mail.gmail.com>

Hi all,

Now the keystoneclient in magnum only supports keystone v3. Is it necessary
to support keystone v2? Keystone v2 doesn't support trusts.

Regards,
Wanghua

From nakato at nakato.io  Fri Sep  4 02:16:00 2015
From: nakato at nakato.io (Sachi King)
Date: Fri, 04 Sep 2015 12:16:00 +1000
Subject: [openstack-dev] [Cross Project] [Neutron] [Nova] Tox.ini changes
	for Constraints testing
Message-ID: <1721376.LIrBB9dSzH@youmu>

Hi,

I'm working on setting up both Nova and Neutron with constrained unit tests.
More details about this can be found in its blueprint [0].

An example issue this will prevent is Neutron's recent gate breakage caused
by a new netaddr version. [1]

Now that the base changes have landed in project-config the next step is to
modify tox.ini to run an alternate install command when called with the
'constraints' factor.

Nova: https://review.openstack.org/205931/
Neutron: https://review.openstack.org/219134/

This change is a no-op for current gate jobs and developer workflows, only
adding the functionality required for the new constraints jobs and manual
execution of the constrained tests when desired.
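A hedged sketch of what such a 'constraints' factor can look like in
tox.ini (illustrative only; the Nova/Neutron reviews linked below are
authoritative, and the constraints-file URL is an assumption):

```ini
# Illustrative tox.ini fragment -- not the exact contents of the reviews.
[testenv]
# Default install command, used by the existing (unconstrained) jobs.
install_command = pip install -U {opts} {packages}

[testenv:py27-constraints]
# The -c option pins every dependency to the version listed in
# upper-constraints.txt, so a new release of e.g. netaddr cannot break
# the gate. UPPER_CONSTRAINTS_FILE lets a job supply a local copy.
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```

Running `tox -e py27-constraints` would then install the pinned
dependency set before executing the usual py27 test commands.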

Once these have been added, we can then proceed with adding the py27 and py34
jobs, which will be non-voting at this point.

Nova: https://review.openstack.org/219582/
Neutron: https://review.openstack.org/219610/

[0] http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/073239.html

If there are any other projects interested in being an early adopter of
constrained unit tests, please let me know.

Cheers,
Sachi


From dstanek at dstanek.com  Fri Sep  4 02:28:09 2015
From: dstanek at dstanek.com (David Stanek)
Date: Fri, 04 Sep 2015 02:28:09 +0000
Subject: [openstack-dev] FFE Request for completion of data driven
 assignment testing in Keystone
In-Reply-To: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
References: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
Message-ID: <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>

On Thu, Sep 3, 2015 at 3:44 PM Henry Nash <henryn at linux.vnet.ibm.com> wrote:

>
> I would like to request an FFE for the remaining two patches that are
> already in review (https://review.openstack.org/#/c/153897/ and
> https://review.openstack.org/#/c/154485/).  These contain only test code
> and no functional changes, and increase our test coverage - as well as
> enable other items to re-use the list_role_assignment backend method.
>

Do we need an FFE for changes to tests?

-- David

From armamig at gmail.com  Fri Sep  4 02:36:51 2015
From: armamig at gmail.com (Armando M.)
Date: Thu, 3 Sep 2015 19:36:51 -0700
Subject: [openstack-dev] [neutron] pushing changes through the gate
In-Reply-To: <AB2E7CE7-ECCC-49D9-AB20-46135559D3DE@lab.ntt.co.jp>
References: <CAK+RQebMqAL-ZTn4z3Tafnpr=feA3i4qaeF5Uu4K03SO_fFF9g@mail.gmail.com>
 <CAK+RQeYKgsBMg7Ray6wqHoPdJ7ODq-uy8N-Sp6iruq7oXvbTkA@mail.gmail.com>
 <AB2E7CE7-ECCC-49D9-AB20-46135559D3DE@lab.ntt.co.jp>
Message-ID: <CAK+RQea9d57qka7st1zWHMzGF4qoWWB3MWrhQZThEn0LBFEEtg@mail.gmail.com>

On 3 September 2015 at 17:55, Hirofumi Ichihara <
ichihara.hirofumi at lab.ntt.co.jp> wrote:

> Good work and thank you for your help with my patch.
>
> Anyway, I don't know by when a bp owner has to merge the code.
> I can see the following sentence in the bp rules[1]:
> "The PTL will create a <release>-backlog directory during the RC window
> and move all specs which didn't make the <release> there."
> Did we have to merge the implementation by L-3? Or can we merge it in RC-1?
>
> [1]:
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-bp-and-spec-notes
>
>
It depends on the extent of the changes remaining to merge. There is no
hard and fast rule, because every blueprint is different: the code can be
complex and pervasive, or simple and isolated. In the former even a small
patch may be deferred, in the latter even a dozen patches could be merged
during RC. Some other blueprints are best completed during feature freeze,
because of the rebase risk they cause...

Bottom line: never leave it to last minute!


> Thanks,
> Hirofumi
>
> On 2015/09/04, at 7:00, Armando M. <armamig at gmail.com> wrote:
>
>
>
> On 2 September 2015 at 09:40, Armando M. <armamig at gmail.com> wrote:
>
>> Hi,
>>
>> By now you may have seen that I have taken out your change from the gate
>> and given it a -2: don't despair! I am only doing it to give priority to
>> the stuff that needs to merge in order to get [1] into a much better shape.
>>
>> If you have an important fix, please target it for RC1 or talk to me or
>> Doug (or Kyle when he's back from his time off), before putting it in the
>> gate queue. If everyone is not conscious of the other, we'll only end up
>> stepping on each other, and nothing moves forward.
>>
>> Let's give priority to gate stabilization fixes, and targeted stuff.
>>
>> Happy merging...not!
>>
>> Many thanks,
>> Armando
>>
>> [1] https://launchpad.net/neutron/+milestone/liberty-3
>> [2] https://launchpad.net/neutron/+milestone/liberty-rc1
>>
>
> Download files for the milestone are available in [1]. We still have a lot
> to do as there are outstanding bugs and blueprints that will have to be
> merged in the RC time windows.
>
> Please be conscious of what you approve. Give priority to:
>
> - Targeted bugs and blueprints in [2];
> - Gate stability fixes or patches that aim at helping troubleshooting;
>
> In these busy times, please refrain from proposing/merging:
>
> - Silly rebase generators (e.g. spelling mistakes);
> - Cosmetic changes (e.g. minor doc strings/comment improvements);
> - Refactoring required while dealing with the above;
> - A dozen of patches stacked on top of each other;
>
> Every rule has its own exception, so don't take this literally.
>
> If you are unsure, please reach out to me, Kyle or your Lieutenant and
> we'll target stuff that is worth targeting.
>
> As for the rest, I am gonna be merciless and -2 anything that I can find,
> in order to keep our gate lean and sane :)
>
> Thanks and happy hacking.
>
> A.
>
> [1] https://launchpad.net/neutron/+milestone/liberty-3
> [2] https://launchpad.net/neutron/+milestone/liberty-rc1
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From morgan.fainberg at gmail.com  Fri Sep  4 02:48:17 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Thu, 3 Sep 2015 19:48:17 -0700
Subject: [openstack-dev] FFE Request for completion of data driven
	assignment testing in Keystone
In-Reply-To: <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>
References: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
 <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>
Message-ID: <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>




> On Sep 3, 2015, at 19:28, David Stanek <dstanek at dstanek.com> wrote:
> 
> 
>> On Thu, Sep 3, 2015 at 3:44 PM Henry Nash <henryn at linux.vnet.ibm.com> wrote:
>> 
>> I would like to request an FFE for the remaining two patches that are already in review (https://review.openstack.org/#/c/153897/ and https://review.openstack.org/#/c/154485/).  These contain only test code and no functional changes, and increase our test coverage - as well as enable other items to re-use the list_role_assignment backend method.
> 
> Do we need an FFE for changes to tests?
> 

I would say "no". 

From ken1ohmichi at gmail.com  Fri Sep  4 03:14:07 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Fri, 4 Sep 2015 12:14:07 +0900
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <20150625142223.GC2646@crypt>
References: <20150625142223.GC2646@crypt>
Message-ID: <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>

Hi Andrew,

Sorry for this late response, I missed it.

2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
> I have been growing concerned recently with some attempts to formalize
> scheduler hints, both with API validation and Nova objects defining them,
> and want to air those concerns and see if others agree or can help me see
> why I shouldn't worry.
>
> Starting with the API I think the strict input validation that's being done,
> as seen in
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
> is unnecessary, and potentially problematic.
>
> One problem is that it doesn't indicate anything useful for a client.  The
> schema indicates that there are hints available but can make no claim about
> whether or not they're actually enabled.  So while a microversion bump would
> typically indicate a new feature available to an end user, in the case of a
> new scheduler hint a microversion bump really indicates nothing at all.  It
> does ensure that if a scheduler hint is used that it's spelled properly and
> the data type passed is correct, but that's primarily useful because there
> is no feedback mechanism to indicate an invalid or unused scheduler hint.  I
> think the API schema is a poor proxy for that deficiency.
>
> Since the exposure of a hint means nothing as far as its usefulness, I don't
> think we should be codifying them as part of our API schema at this time.
> At some point I imagine we'll evolve a more useful API for passing
> information to the scheduler as part of a request, and when that happens I
> don't think needing to support a myriad of meaningless hints in older API
> versions is going to be desirable.
>
> Finally, at this time I'm not sure we should take the stance that only
> in-tree scheduler hints are supported.  While I completely agree with the
> desire to expose things in cross-cloud ways as we've done and are looking to
> do with flavor and image properties I think scheduling is an area where we
> want to allow some flexibility for deployers to write and expose scheduling
> capabilities that meet their specific needs.  Over time I hope we will get
> to a place where some standardization can happen, but I don't think locking
> in the current scheduling hints is the way forward for that.  I would love
> to hear from multi-cloud users here and get some input on whether that's
> crazy and they are expecting benefits from validation on the current
> scheduler hints.
>
> Now, objects.  As part of the work to formalize the request spec sent to the
> scheduler there's an effort to make a scheduler hints object.  This
> formalizes them in the same way as the API with no benefit that I can see.
> I won't duplicate my arguments above, but I feel the same way about the
> objects as I do with the API.  I don't think needing to update an object
> version every time a new hint is added is useful at this time, nor do I
> think we should lock in the current in-tree hints.
>
> In the end this boils down to my concern that the scheduling hints api is a
> really horrible user experience and I don't want it to be solidified in the
> API or objects yet.  I think we should re-examine how they're handled before
> that happens.

Now we are discussing this on https://review.openstack.org/#/c/217727/
for allowing out-of-tree scheduler-hints.
When we wrote the API schema for scheduler-hints, it was difficult to know
what API parameters were available for scheduler-hints.
The current API schema exposes them, and I guess that is useful for API users too.

One idea is that: How about auto-extending scheduler-hint API schema
based on loaded schedulers?
Now API schemas of "create/update/resize/rebuild a server" APIs are
auto-extended based on loaded extensions by using stevedore
library[1].
I guess we can apply the same way to scheduler-hints in the long term.
Each scheduler would implement a method which returns its available API
parameter formats, and nova-api would fetch them and then extend the
scheduler-hints API schema with them.
That means out-of-tree schedulers will also be available if they
implement the method.
# In the short term, I can see the "blocking additionalProperties"
validation being disabled instead.
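The auto-extension idea above can be sketched in plain Python. This is a
hypothetical illustration: `get_hint_schema`, `build_hints_schema`, and the
filter classes are invented names, not Nova's actual interfaces, and real
plugin loading would go through stevedore entry points:

```python
# Sketch of auto-extending a scheduler-hints JSON schema from loaded
# filters. Each filter contributes a schema fragment for the hints it
# understands; the API layer merges them into one strict schema.
# All names here are illustrative, not Nova's real API.

import copy

BASE_HINTS_SCHEMA = {
    "type": "object",
    "properties": {},
    # Strict validation: unknown hints are rejected once every loaded
    # filter has merged its fragment in.
    "additionalProperties": False,
}


class GroupAffinityFilter:
    @staticmethod
    def get_hint_schema():
        return {"group": {"type": "string"}}


class BuildNearHostFilter:
    @staticmethod
    def get_hint_schema():
        return {"build_near_host_ip": {"type": "string"}}


def build_hints_schema(filters):
    """Merge each filter's hint-parameter fragment into the base schema."""
    schema = copy.deepcopy(BASE_HINTS_SCHEMA)
    for f in filters:
        schema["properties"].update(f.get_hint_schema())
    return schema


schema = build_hints_schema([GroupAffinityFilter, BuildNearHostFilter])
# An out-of-tree filter loaded via stevedore would extend it the same way,
# simply by implementing get_hint_schema().
```

The point of the sketch is that the API schema becomes a function of
whatever schedulers are actually loaded, rather than a fixed in-tree list.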

Thanks
Ken Ohmichi

---
[1]: https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema


From stdake at cisco.com  Fri Sep  4 03:20:16 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Fri, 4 Sep 2015 03:20:16 +0000
Subject: [openstack-dev] [kolla][doc] Kolla documentation now live on
	docs.openstack.org!
Message-ID: <D20E5BEE.11D6D%stdake@cisco.com>

Hi folks,

Kolla documentation is now published live to docs.openstack.org.  Our documentation still needs significant attention to be really high quality, but what we have is a start.  Every code change results in new published documentation.

I'd like to invite folks interested in getting involved in Kolla development to take a look at improving the documentation.  One of the major components of any good open source project is fantastic documentation, and the more contributions we receive the better.  As further encouragement, no specific bugs or blueprints need be filed to make documentation changes.  Our community decided on this exception early on to facilitate enhancement of the documentation.  The broader community looks forward to any contributions you can make!

The documentation is located here:

http://docs.openstack.org/developer/kolla

Thanks,
-steve


From morgan.fainberg at gmail.com  Fri Sep  4 04:23:20 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Thu, 3 Sep 2015 21:23:20 -0700
Subject: [openstack-dev] FFE Request for completion of data driven
	assignment testing in Keystone
In-Reply-To: <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>
References: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
 <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>
 <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>
Message-ID: <E409D08D-203E-494A-9584-925740B121DE@gmail.com>



> On Sep 3, 2015, at 19:48, Morgan Fainberg <morgan.fainberg at gmail.com> wrote:
> 
> 
> 
> 
>> On Sep 3, 2015, at 19:28, David Stanek <dstanek at dstanek.com> wrote:
>> 
>> 
>>> On Thu, Sep 3, 2015 at 3:44 PM Henry Nash <henryn at linux.vnet.ibm.com> wrote:
>>> 
>>> I would like to request an FFE for the remaining two patches that are already in review (https://review.openstack.org/#/c/153897/ and https://review.openstack.org/#/c/154485/).  These contain only test code and no functional changes, and increase our test coverage - as well as enable other items to re-use the list_role_assignment backend method.
>> 
>> Do we need an FFE for changes to tests?
> 
> I would say "no". 

To clarify: the "no" means an FFE is not needed here, not "no" to the additional testing.

From comnea.dani at gmail.com  Fri Sep  4 06:14:31 2015
From: comnea.dani at gmail.com (Daniel Comnea)
Date: Fri, 4 Sep 2015 09:14:31 +0300
Subject: [openstack-dev] [Openstack-operators] [Neutron] Allowing DNS
 suffix to be set per subnet (at least per tenant)
In-Reply-To: <CAO_F6JMG5P6EhnrZm3WdzohAdxRh+vzhrXZZ_k+4quAJqEf+bA@mail.gmail.com>
References: <55E7FF06.2010207@maishsk.com>
 <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
 <CAG9LJa6TnqPDefNGbsCY8A7U8sd+gdNuHTFiQL+nYrfgHQZ0BQ@mail.gmail.com>
 <55E8A889.3030502@maishsk.com>
 <CAO_F6JMG5P6EhnrZm3WdzohAdxRh+vzhrXZZ_k+4quAJqEf+bA@mail.gmail.com>
Message-ID: <CAOBAnZNj52t1-iKXqMPs6EBrw4SkL+OWrogMVb7ML_zyoRNiLA@mail.gmail.com>

Kevin,

Am I right in saying that the merge above was packaged into Liberty?

Any chance of it being ported to Juno?


Cheers,
Dani



On Fri, Sep 4, 2015 at 12:21 AM, Kevin Benton <blak111 at gmail.com> wrote:

> Support for that blueprint already merged[1] so it's a little late to
> change it to per-subnet. If that is too fine-grained for your use-case, I
> would file an RFE bug[2] to allow it to be set at the subnet level.
>
>
> 1. https://review.openstack.org/#/c/200952/
> 2.
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#rfe-submission-guidelines
>
> On Thu, Sep 3, 2015 at 1:07 PM, Maish Saidel-Keesing <maishsk at maishsk.com>
> wrote:
>
>> On 09/03/15 20:51, Gal Sagie wrote:
>>
>> I am not sure if this address what you need specifically, but it would be
>> worth checking these
>> two approved liberty specs:
>>
>> 1)
>> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
>> 2)
>> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst
>>
>> Thanks Gal,
>>
>> So I see that from the bp [1] the fqdn will be configurable for each and
>> every port ?
>>
>> I think that this does open up a number of interesting possibilities, but
>> I would also think that it would be sufficient to do this on a subnet level?
>>
>> We do already have the option of setting nameservers per subnet - I
>> assume the data model is already implemented - which is interesting  -
>> because I don't see that as part of the information that is sent by dnsmasq
>> so it must be coming from neutron somewhere.
>>
>> The domain suffix - definitely is handled by dnsmasq.
>>
>>
>>
>> On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley <openstack at wormley.com>
>> wrote:
>>
>>> As far as I am aware it is not presently built-in to Openstack. You'll
>>> need to add a dnsmasq_config_file option to your dhcp agent configurations
>>> and then populate the file with:
>>> domain=DOMAIN_NAME,CIDR for each network
>>> i.e.
>>> domain=example.com,10.11.22.0/24
>>> ...
>>>
>>> -Steve
>>>
>>>
>>> On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing <
>>> <maishsk at maishsk.com>maishsk at maishsk.com> wrote:
>>>
>>>> Hello all (cross-posting to openstack-operators as well)
>>>>
>>>> Today the setting of the dns suffix that is provided to the instance is
>>>> passed through dhcp_agent.
>>>>
>>>> There is the option of setting different DNS servers per subnet (and
>>>> therefore per tenant), but the domain suffix is something that stays
>>>> the same throughout the whole system.
>>>>
>>>> I see that this is not a current neutron feature.
>>>>
>>>> Is this on the roadmap? Are there ways to achieve this today? If so I
>>>> would be very interested in hearing how.
>>>>
>>>> Thanks
>>>> --
>>>> Best Regards,
>>>> Maish Saidel-Keesing
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>>
>> --
>> Best Regards,
>> Maish Saidel-Keesing
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
>
> --
> Kevin Benton
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

From openstack at lanabrindley.com  Fri Sep  4 06:25:09 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Fri, 4 Sep 2015 16:25:09 +1000
Subject: [openstack-dev] What's Up, Doc? 4 September, 2015
Message-ID: <55E93945.4040107@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi everyone,

This has been a fairly busy week, with Summit preparations beginning,
more newly migrated RST books going live, and testing starting on the
Install Guide. I've been spending time on sorting out the Liberty
blueprints still outstanding, and also working on some old bugs.

== Progress towards Liberty ==

40 days to go!

* RST conversion:
** Is now completed! Well done and a huge thank you to everyone who
converted pages, approved reviews, and participated in publishing the
new guides. This was a truly phenomenal effort :)

* User Guides information architecture overhaul
** Some user analysis has begun, and we have a new blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised

* Greater focus on helping out devs with docs in their repo
** A certain amount of progress has been made here, and some wrinkles
sorted out which will improve this process for the future.

* Improve how we communicate with and support our corporate contributors
** If you currently work on documentation for a company that would like
to improve their upstream commits for documentation, please contact me!

* Improve communication with Docs Liaisons
** I'm very pleased to see liaisons getting more involved in our bugs
and reviews. Keep up the good work!

* Clearing out old bugs
** The last lot of old bugs are still languishing. I'm assuming you all
hate them so very much that I've decided to give you three more to pick
from. Have at it!

== Countdown to Summit ==

With the Liberty release less than two months away, that means it's
nearly Summit time again: https://www.openstack.org/summit/tokyo-2015/

The schedule has now been released, congratulations to everyone who had
a talk accepted this time around:
https://www.openstack.org/summit/tokyo-2015/schedule/

All ATCs should have received their pass by now, so now is the time to
be booking your travel and accommodation:
https://www.openstack.org/summit/tokyo-2015/tokyo-and-travel/

== Conventions ==

A new governance patch has landed which changes the way we capitalise
service names (I know almost exactly 50% of you will be happy about
this!):
https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
Please be aware of this when editing files, and remember that the
'source of truth' for these things is the projects.yaml file:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

== Docs Tools ==

openstack-doc-tools 0.30.0 and openstackdocstheme 1.2.1 have been
released. openstack-doc-tools allows translation of the Install Guide.
openstackdocstheme contains fixes for the inclusion of metatags, removes
unused images and javascript files, and fixes the "Docs Home" link.

== Doc team meeting ==

The APAC meeting was not held this week. The minutes from the previous
US meeting are here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-08-26

The next meetings are:
US: Wednesday 9 September, 14:00:00 UTC
APAC: Wednesday 16 September, 00:30:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

== Spotlight bugs for this week ==

Let's give these three a little love:

https://bugs.launchpad.net/openstack-manuals/+bug/1280092 end user guide
lacks doc on admin password injection

https://bugs.launchpad.net/openstack-manuals/+bug/1282765 Chapter 6.
Block Storage in OpenStack Cloud Administrator Guide

https://bugs.launchpad.net/openstack-manuals/+bug/1284215 Driver for IBM
SONAS and Storwize V7000 Unified

- --

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openstack at lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV6TlFAAoJELppzVb4+KUy/zkIAKYKbKdw78Nv8dpB8d9Rj4qh
+JTK2rTlz/Up5F10OzIoJoNMIvySeKH+jHV1CP0qL9KigYaepkEeMn8RnNSayYww
cgSmk/8gpzGTTd17JK0Rrn+RjOb3XMYeNH2d4OkvIQPGBAYsnerODrvEK3GG7YHO
oo5xYSkLdYH54qnXhhvNZxjxDclT1P5QgpUP6M6KcB3bcKt4niHGLHnBHFoqvlMR
gJA1BtKR6CackhZbkJpPFCpEHimm4xdWwF+q7xRezy599MbkkPAIxR/oMuEkqU2H
zj+tm9sHDxOoH2j4Hfkbw7xxF+/NjvGtm41JCPsUVBxuAocaBbJ1kZRbbRzrafI=
=2TAI
-----END PGP SIGNATURE-----


From eantyshev at virtuozzo.com  Fri Sep  4 06:11:45 2015
From: eantyshev at virtuozzo.com (Evgeny Antyshev)
Date: Fri, 4 Sep 2015 10:11:45 +0400
Subject: [openstack-dev] [infra][third-party][CI] Third-party oses in
 devstack-gate
In-Reply-To: <55E56191.1010905@virtuozzo.com>
References: <55E56191.1010905@virtuozzo.com>
Message-ID: <55E93621.5050909@virtuozzo.com>

On 01.09.2015 12:28, Evgeny Antyshev wrote:
> Hello!
>
> I address this letter to those third-party CI maintainers who need to
> amend the upstream devstack-gate to satisfy their environment.
>
> Some folks that I know use inline patching at job level,
> some make private forks of devstack-gate (I even saw one on github).
> There have been a few improvements to devstack-gate which have made it
> easier to use downstream, e.g. the introduction of DEVSTACK_LOCAL_CONFIG
> (https://review.openstack.org/145321).
>
> We particularly need it to recognize our rhel-based distribution as a 
> Fedora OS.
> We cannot define it explicitly in is_fedora() as it is not officially 
> supported upstream,
> but we can introduce the variable DEVSTACK_GATE_IS_FEDORA, which makes
> is_fedora() distribution-agnostic and lets it succeed on an otherwise
> unrecognized OS.
>
> Here is the change: https://review.openstack.org/215029
> I welcome everyone interested in the matter
> to tell us if we do it right or not, and to review the change.
>
Adding the [infra] tag to draw more attention.

-- 
Best regards,
Evgeny Antyshev.
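For readers unfamiliar with devstack-gate's distro detection, the override Evgeny proposes can be sketched roughly like this (the function body and detection logic are illustrative, not the actual devstack-gate code; see review 215029 for the real change):

```shell
# Illustrative sketch (NOT the real devstack-gate code): an explicit
# DEVSTACK_GATE_IS_FEDORA override lets an otherwise unrecognized,
# Fedora-like distribution pass the is_fedora check.
function is_fedora {
    if [ "${DEVSTACK_GATE_IS_FEDORA:-}" = "1" ]; then
        return 0  # downstream says: treat this OS as Fedora
    fi
    # Fall back to the usual /etc/os-release detection.
    local id
    id=$( (. /etc/os-release 2>/dev/null && echo "$ID") )
    case "$id" in
        fedora|rhel|centos) return 0 ;;
        *) return 1 ;;
    esac
}

# A downstream CI job on an unsupported RHEL derivative would simply
# export the variable before the gate functions run:
DEVSTACK_GATE_IS_FEDORA=1
if is_fedora; then echo "fedora-like"; else echo "other"; fi
```

The point of the variable is that upstream never has to enumerate downstream distributions; the job opts in explicitly.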



From ihrachys at redhat.com  Fri Sep  4 07:26:23 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 4 Sep 2015 09:26:23 +0200
Subject: [openstack-dev] [Openstack-operators] [Neutron] Allowing DNS
	suffix to be set per subnet (at least per tenant)
In-Reply-To: <CAOBAnZNj52t1-iKXqMPs6EBrw4SkL+OWrogMVb7ML_zyoRNiLA@mail.gmail.com>
References: <55E7FF06.2010207@maishsk.com>
 <CAF47bhHfR9qAPAkUFQ=81=mYUpu_wMZ-OyKAEtTP9kJ22KpP9A@mail.gmail.com>
 <CAG9LJa6TnqPDefNGbsCY8A7U8sd+gdNuHTFiQL+nYrfgHQZ0BQ@mail.gmail.com>
 <55E8A889.3030502@maishsk.com>
 <CAO_F6JMG5P6EhnrZm3WdzohAdxRh+vzhrXZZ_k+4quAJqEf+bA@mail.gmail.com>
 <CAOBAnZNj52t1-iKXqMPs6EBrw4SkL+OWrogMVb7ML_zyoRNiLA@mail.gmail.com>
Message-ID: <32F63AEB-460C-46FB-8F30-517C2AEA1563@redhat.com>

> On 04 Sep 2015, at 08:14, Daniel Comnea <comnea.dani at gmail.com> wrote:
> 
> Kevin,
> 
> Am I right in saying that the merge above was packaged into Liberty?
> 
> Any chance to be ported to Juno?
> 

There is no chance a new feature will be backported to any stable branch, even Kilo; at least not upstream.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/5267415e/attachment.pgp>

From jmferrer.paradigmatecnologico at gmail.com  Fri Sep  4 07:58:11 2015
From: jmferrer.paradigmatecnologico at gmail.com (Jose Manuel Ferrer Mosteiro)
Date: Fri, 04 Sep 2015 09:58:11 +0200
Subject: [openstack-dev] [Openstack] [ANN] OpenStack Kilo on Ubuntu
 fully automated with Ansible! Ready for NFV L2 Bridges via Heat!
In-Reply-To: <CAODS64y37o31okPqbdKVLgH5rwwQB9gEsnsEc57j94WvkjGFog@mail.gmail.com>
References: <CAJSM8J0Pnh-wawPwRZfAiWuKsJ0Nb1voDH0KBqGMHuy9sbhkGg@mail.gmail.com>
 <0c0c010043aa4679c4bfaaa89fbc6e8f@fermosit.es>
 <CAODS64y37o31okPqbdKVLgH5rwwQB9gEsnsEc57j94WvkjGFog@mail.gmail.com>
Message-ID: <b671b11c2e3440458f33159aa4f3f191@fermosit.es>

 

Hi 

It is a very, very early pre-alpha version that just automates the Juno
Ubuntu install guide up to and including the dashboard. The Block Storage
service is very important but does not work yet.

The vCenter-like controller will always run whatever operating system
makes my life easier; today that is Ubuntu.

The hypervisor is also Ubuntu, but it will eventually support Ubuntu,
CentOS, and Debian.

I will announce the project when it is more advanced.

Thanks 

On 2015-08-31 15:08, Sabrina Bajorat wrote: 

> That is great!!! Can it be used with Debian 7 too?
> 
> Thanks 
> 
> On Mon, Aug 31, 2015 at 2:54 PM, Jose Manuel Ferrer Mosteiro <jmferrer.paradigmatecnologico at gmail.com> wrote:
> 
> Nice job. I am building a VMware vCenter-like setup in https://github.com/elmanytas/ansible-openstack-vcenter [1], and I solved the problem of duplicate endpoints in line 106 of https://github.com/elmanytas/ansible-openstack-vcenter/blob/master/etc_ansible/roles/keystone/tasks/main.yml [2]. This makes the playbooks idempotent.
> 
> Maybe you could be interested. 
> 
> On 2015-08-26 00:30, Martinx - ????? wrote: 
> Hello Stackers!
> 
> I'm proud to announce an Ansible Playbook to deploy OpenStack on Ubuntu!
> 
> Check it out!
> 
> * https://github.com/sandvine/os-ansible-deployment-lite [3]
> 
> Powered by Sandvine! ;-)
> 
> Basically, this is the automation of what we have documented here:
> 
> * http://docs.openstack.org/kilo/install-guide/install/apt/content/ [4]
> 
> Instructions:
> 
> 1- Install Ubuntu 14.04, fully upgraded (with
> "linux-generic-lts-vivid" installed), plus "/etc/hostname" and
> "/etc/hosts" configured according.
> 
> 2- Deploy OpenStack with 1 command:
> 
> * Open vSwitch (default):
> 
> bash <(curl -s
> https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install.sh [5])
> 
> * Linux Bridges (alternative):
> 
> bash <(curl -s
> https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install-lbr.sh [6])
> 
> 3- Launch a NFV L2 Stack:
> 
> heat stack-create demo -f
> ~/os-ansible-deployment-lite/misc/os-heat-templates/nfv-l2-bridge-basic-stack-ubuntu-little.yaml
> 
> IMPORTANT NOTES:
> 
> Only run "step 2" on top of a freshly installed Ubuntu 14.04! It can
> be a Server or Desktop, but it must be a fresh install. Do not pre-install
> MySQL, RabbitMQ, Keystone, etc. Let Ansible do its magic!
> 
> Also, make sure you can use "sudo" without password.
> 
> Some features of our Ansible Playbook:
> 
> 1- Deploys OpenStack with one single command, in one physical box
> (all-in-one), helper script (./os-deploy.sh) available;
> 
> 2- Supports NFV instances that can act as a L2 Bridge between two
> VXLAN Networks;
> 
> 3- Plenty of Heat Templates;
> 
> 4- 100% Ubuntu based;
> 
> 5- Very simple setup (simpler topology; dummy interfaces for both
> "br-ex" and "vxlan"; no containers for each service (yet));
> 
> 6- Ubuntu PPA available, with a few OpenStack patches backported from
> Liberty, to Kilo (to add "port_security_enabled" Heat support);
> 
> https://launchpad.net/~sandvine/+archive/ubuntu/cloud-archive-kilo/ [7]
> 
> 7- Only requires one physical ethernet card;
> 
> 8- Both "Linux Bridges" and "Open vSwitch" deployments are supported;
> 
> 9- Planning to add DPDK support;
> 
> 10- Multi-node support under development;
> 
> 11- IPv6 support coming...
> 
> * Notes about Vagrant support:
> 
> Under development (it doesn't work yet).
> 
> There is a preliminary Vagrant support (there is still a bug on MySQL
> startup, pull requests are welcome).
> 
> Just "git clone" our Ansible playbooks and run "vagrant up" (or
> ./os-deploy-vagrant.sh to auto-config your Ansible vars / files for
> you).
> 
> We have tried it only with Mac / VirtualBox, but that does not support
> VT-in-VT (nested virtualization), so we're looking at KVM / Libvirt
> on Ubuntu Desktop instead. Still, it would be nice to at least launch
> OpenStack in a VirtualBox on your Mac... =)
> 
> Hope you guys enjoy it!
> 
> Cheers!
> Thiago
> 
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [8]
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [8]
> 
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [8]
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [8]
 

Links:
------
[1] https://github.com/elmanytas/ansible-openstack-vcenter
[2]
https://github.com/elmanytas/ansible-openstack-vcenter/blob/master/etc_ansible/roles/keystone/tasks/main.yml
[3] https://github.com/sandvine/os-ansible-deployment-lite
[4] http://docs.openstack.org/kilo/install-guide/install/apt/content/
[5]
https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install.sh
[6]
https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install-lbr.sh
[7] https://launchpad.net/~sandvine/+archive/ubuntu/cloud-archive-kilo/
[8] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
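The duplicate-endpoint fix mentioned above boils down to "look up before create". A minimal, self-contained shell sketch of that idempotency guard (the lookup is a stub standing in for a real keystone/openstack CLI query, since the actual fix lives in an Ansible task; all names here are made up for illustration):

```shell
# Stub standing in for "openstack endpoint list", so the logic runs
# anywhere. Pretend the registry already contains a glance endpoint.
lookup_endpoint() {
    case "$1" in
        glance) echo "b17e5b8c" ;;   # fake existing endpoint ID
        *) echo "" ;;
    esac
}

# Only "create" an endpoint when none exists for the service, so that
# running the script twice produces the same end state (idempotency).
ensure_endpoint() {
    local service="$1"
    if [ -z "$(lookup_endpoint "$service")" ]; then
        echo "creating endpoint for $service"
    else
        echo "endpoint for $service already exists, skipping"
    fi
}

ensure_endpoint glance   # already present: skipped
ensure_endpoint nova     # missing: created
```

In a playbook, the same guard becomes a registered lookup task plus a `when:` condition on the create task.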
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/792ad425/attachment.html>

From thierry at openstack.org  Fri Sep  4 08:17:29 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 4 Sep 2015 10:17:29 +0200
Subject: [openstack-dev] FFE Request for completion of data driven
 assignment testing in Keystone
In-Reply-To: <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>
References: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
 <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>
 <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>
Message-ID: <55E95399.1070903@openstack.org>

Morgan Fainberg wrote:
> 
>>     I would like to request an FFE for the remaining two patches that
>>     are already in review
>>     (https://review.openstack.org/#/c/153897/ and https://review.openstack.org/#/c/154485/). 
>>     These contain only test code and no functional changes, and
>>     increase our test coverage - as well as enable other items to be
>>     re-use the list_role_assignment backend method.
>>
>> Do we need a FFE for changes to tests?
>>
> 
> I would say "no". 

Right. Extra tests (or extra docs for that matter) don't count as a
"feature" for the freeze. In particular it doesn't change the behavior
of the software or invalidate testing that may have been conducted.

-- 
Thierry Carrez (ttx)


From daniel.mellado.es at ieee.org  Fri Sep  4 08:42:57 2015
From: daniel.mellado.es at ieee.org (Daniel Mellado)
Date: Fri, 4 Sep 2015 10:42:57 +0200
Subject: [openstack-dev] [infra][third-party][CI] Third-party oses in
 devstack-gate
In-Reply-To: <55E93621.5050909@virtuozzo.com>
References: <55E56191.1010905@virtuozzo.com> <55E93621.5050909@virtuozzo.com>
Message-ID: <55E95991.4050105@ieee.org>

On 04/09/15 at 08:11, Evgeny Antyshev wrote:
> On 01.09.2015 12:28, Evgeny Antyshev wrote:
>> Hello!
>>
>> This letter I address to those third-party CI maintainers who needs
>> to amend
>> the upstream devstack-gate to satisfy their environment.
>>
>> Some folks that I know use inline patching at job level,
>> some make private forks of devstack-gate (I even saw one on github).
>> There have been a few improvements to devstack-gate, which made it
>> easier to use it
>> downstream, f.e. introducing DEVSTACK_LOCAL_CONFIG
>> (https://review.openstack.org/145321)
>>
>> We particularly need it to recognize our rhel-based distribution as a
>> Fedora OS.
>> We cannot define it explicitly in is_fedora() as it is not officially
>> supported upstream,
>> but we can introduce the variable DEVSTACK_GATE_IS_FEDORA which makes
>> is_fedora() agnostic to distributions and to succeed if run on an
>> undefined OS.
>>
>> Here is the change: https://review.openstack.org/215029
>> I welcome everyone interested in the matter
>> to tell us if we do it right or not, and to review the change.
>>
> Prepending with [infra] tag to draw more attention
>
Personally I think that would be great, as it would greatly help find
Fedora-ish issues with devstack, which are currently only caught by
developers due to the lack of a gate.


From ddovbii at mirantis.com  Fri Sep  4 09:36:56 2015
From: ddovbii at mirantis.com (Dmitro Dovbii)
Date: Fri, 4 Sep 2015 12:36:56 +0300
Subject: [openstack-dev] [murano] [dashboard] Remove the owner filter from
 "Package Definitions" page
Message-ID: <CAKSp79y8cCU7z0S-Pzgy2k1TNJZZMsyVYXk-bEtSj6ByoB4JZQ@mail.gmail.com>

Hi folks!

I want to suggest deleting the owner filter (3 tabs) from the Package
Definitions page. Previously this filter was available to all users, and we
agreed that it is useless. Now it is available only to admins, but I don't
think that improves the UX either. Moreover, this filter blocks the
implementation of search by name, because the two filters can interact
inconsistently.
So, please express your opinion on this issue. If you agree, I will remove
this filter ASAP.

Best regards,
Dmytro Dovbii
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/c9546f3a/attachment.html>

From sreshetniak at mirantis.com  Fri Sep  4 09:40:55 2015
From: sreshetniak at mirantis.com (Sergey Reshetnyak)
Date: Fri, 4 Sep 2015 12:40:55 +0300
Subject: [openstack-dev] [sahara] Request for Feature Freeze Exception
In-Reply-To: <47194693.21252941.1441312073237.JavaMail.zimbra@redhat.com>
References: <CA+O3VAi689gyN-7Vu1qmBsv_T3xyaOOiL0foBo4YLLTJFW60ww@mail.gmail.com>
 <55E8A545.7000602@redhat.com>
 <47194693.21252941.1441312073237.JavaMail.zimbra@redhat.com>
Message-ID: <CAOB5mPxaTM5QKm410c2956QMfnsaz9QqT7XreMyxPmdrK1E0Og@mail.gmail.com>

+1 from me.

Thanks,
Sergey R.

2015-09-03 23:27 GMT+03:00 Ethan Gafford <egafford at redhat.com>:

> Agreed. We've talked about this for a while, and it's very low risk.
>
> Thanks,
> Ethan
>
> ----- Original Message -----
> From: "michael mccune" <msm at redhat.com>
> To: openstack-dev at lists.openstack.org
> Sent: Thursday, September 3, 2015 3:53:41 PM
> Subject: Re: [openstack-dev] [sahara] Request for Feature Freeze Exception
>
> On 09/03/2015 02:49 PM, Vitaly Gridnev wrote:
> > Hey folks!
> >
> > I would like to propose to add to list of FFE's following blueprint:
> > https://blueprints.launchpad.net/sahara/+spec/drop-hadoop-1
> >
> > Reasoning of that is following:
> >
> >   1. HDP 1.3.2 and Vanilla 1.2.1 have not been gated for a whole release
> > cycle, so they may be the source of several bugs in these versions;
> >   2. Minimal risk of removal: it doesn't touch versions that we already
> > have.
> >   3. All required changes were already uploaded to the review:
> >
> https://review.openstack.org/#/q/status:open+project:openstack/sahara+branch:master+topic:bp/drop-hadoop-1,n,z
>
> this sounds reasonable to me
>
> mike
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/5c660e15/attachment.html>

From sreshetniak at mirantis.com  Fri Sep  4 09:54:18 2015
From: sreshetniak at mirantis.com (Sergey Reshetnyak)
Date: Fri, 4 Sep 2015 12:54:18 +0300
Subject: [openstack-dev] [sahara] FFE request for heat wait condition support
Message-ID: <CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A@mail.gmail.com>

Hi,

I would like to request an FFE for wait condition support in the Heat
engine. A wait condition reports a signal when an instance has booted.

Blueprint:
https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions

Spec:
https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst

Patch:
https://review.openstack.org/#/c/169338/

Thanks,
Sergey Reshetnyak
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/6e8d42e5/attachment.html>

From ken1ohmichi at gmail.com  Fri Sep  4 09:54:49 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Fri, 4 Sep 2015 18:54:49 +0900
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
Message-ID: <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=KcaOzvM_pzL4Ea+g@mail.gmail.com>

2015-09-04 12:14 GMT+09:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
> Hi Andrew,
>
> Sorry for this late response, I missed it.
>
> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
>> I have been growing concerned recently with some attempts to formalize
>> scheduler hints, both with API validation and Nova objects defining them,
>> and want to air those concerns and see if others agree or can help me see
>> why I shouldn't worry.
>>
>> Starting with the API I think the strict input validation that's being done,
>> as seen in
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>> is unnecessary, and potentially problematic.
>>
>> One problem is that it doesn't indicate anything useful for a client.  The
>> schema indicates that there are hints available but can make no claim about
>> whether or not they're actually enabled.  So while a microversion bump would
>> typically indicate a new feature available to an end user, in the case of a
>> new scheduler hint a microversion bump really indicates nothing at all.  It
>> does ensure that if a scheduler hint is used that it's spelled properly and
>> the data type passed is correct, but that's primarily useful because there
>> is no feedback mechanism to indicate an invalid or unused scheduler hint.  I
>> think the API schema is a poor proxy for that deficiency.
>>
>> Since the exposure of a hint means nothing as far as its usefulness, I don't
>> think we should be codifying them as part of our API schema at this time.
>> At some point I imagine we'll evolve a more useful API for passing
>> information to the scheduler as part of a request, and when that happens I
>> don't think needing to support a myriad of meaningless hints in older API
>> versions is going to be desirable.
>>
>> Finally, at this time I'm not sure we should take the stance that only
>> in-tree scheduler hints are supported.  While I completely agree with the
>> desire to expose things in cross-cloud ways as we've done and are looking to
>> do with flavor and image properties I think scheduling is an area where we
>> want to allow some flexibility for deployers to write and expose scheduling
>> capabilities that meet their specific needs.  Over time I hope we will get
>> to a place where some standardization can happen, but I don't think locking
>> in the current scheduling hints is the way forward for that.  I would love
>> to hear from multi-cloud users here and get some input on whether that's
>> crazy and they are expecting benefits from validation on the current
>> scheduler hints.
>>
>> Now, objects.  As part of the work to formalize the request spec sent to the
>> scheduler there's an effort to make a scheduler hints object.  This
>> formalizes them in the same way as the API with no benefit that I can see.
>> I won't duplicate my arguments above, but I feel the same way about the
>> objects as I do with the API.  I don't think needing to update an object
>> version every time a new hint is added is useful at this time, nor do I
>> think we should lock in the current in-tree hints.
>>
>> In the end this boils down to my concern that the scheduling hints api is a
>> really horrible user experience and I don't want it to be solidified in the
>> API or objects yet.  I think we should re-examine how they're handled before
>> that happens.
>
> Now we are discussing this on https://review.openstack.org/#/c/217727/
> for allowing out-of-tree scheduler-hints.
> When we wrote the API schema for scheduler-hints, it was difficult to know
> which API parameters were available for scheduler-hints.
> The current API schema exposes them, and I guess that is useful for API users too.
>
> One idea: how about auto-extending the scheduler-hint API schema
> based on the loaded schedulers?
> Today the API schemas of the "create/update/resize/rebuild a server" APIs
> are auto-extended based on loaded extensions using the stevedore
> library[1].
> I guess we can apply the same approach to scheduler-hints in the long term.
> Each scheduler would implement a method returning its available API
> parameter formats; nova-api would collect them and extend the
> scheduler-hints API schema accordingly.
> That way, out-of-tree schedulers would also work if they implement
> the method.
> # In the short term, I can see the "additionalProperties" blocking
> validation being disabled, by the way.

https://review.openstack.org/#/c/220440 is a prototype for the above idea.

Thanks
Ken Ohmichi
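The validation trade-off discussed in this thread, a strict schema rejecting unknown hints versus a permissive one letting out-of-tree hints through, can be illustrated with a tiny hand-rolled checker mimicking JSON Schema's `additionalProperties` behavior (the hint names and types below are illustrative, not Nova's actual schema):

```python
# Known hints and their expected Python types, standing in for the
# "properties" section of a JSON Schema.
KNOWN_HINTS = {"group": str, "same_host": list}

def validate_hints(hints, allow_unknown):
    """Type-check known hints; allow_unknown plays the role of
    JSON Schema's additionalProperties flag."""
    for name, value in hints.items():
        if name in KNOWN_HINTS:
            if not isinstance(value, KNOWN_HINTS[name]):
                return False  # known hint, wrong type
        elif not allow_unknown:
            return False  # unknown hint rejected (additionalProperties: false)
    return True

hints = {"group": "g1", "my_custom_hint": "x"}
print(validate_hints(hints, allow_unknown=False))  # False: unknown hint rejected
print(validate_hints(hints, allow_unknown=True))   # True: unknown hint tolerated
```

This is exactly why a microversion bump for a new hint says so little: the strict schema only spell-checks names, it cannot tell the client whether the corresponding filter is actually enabled.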


From soulxu at gmail.com  Fri Sep  4 10:03:57 2015
From: soulxu at gmail.com (Alex Xu)
Date: Fri, 4 Sep 2015 18:03:57 +0800
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
Message-ID: <CAH7mGauOgfvVkfW2OYPm7D=7zgXhRHpx4a7_jZMyBtND3iirGQ@mail.gmail.com>

2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:

> Hi Andrew,
>
> Sorry for this late response, I missed it.
>
> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
> > I have been growing concerned recently with some attempts to formalize
> > scheduler hints, both with API validation and Nova objects defining them,
> > and want to air those concerns and see if others agree or can help me see
> > why I shouldn't worry.
> >
> > Starting with the API I think the strict input validation that's being
> done,
> > as seen in
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
> ,
> > is unnecessary, and potentially problematic.
> >
> > One problem is that it doesn't indicate anything useful for a client.
> The
> > schema indicates that there are hints available but can make no claim
> about
> > whether or not they're actually enabled.  So while a microversion bump
> would
> > typically indicate a new feature available to an end user, in the case
> of a
> > new scheduler hint a microversion bump really indicates nothing at all.
> It
> > does ensure that if a scheduler hint is used that it's spelled properly
> and
> > the data type passed is correct, but that's primarily useful because
> there
> > is no feedback mechanism to indicate an invalid or unused scheduler
> hint.  I
> > think the API schema is a poor proxy for that deficiency.
> >
> > Since the exposure of a hint means nothing as far as its usefulness, I
> don't
> > think we should be codifying them as part of our API schema at this time.
> > At some point I imagine we'll evolve a more useful API for passing
> > information to the scheduler as part of a request, and when that happens
> I
> > don't think needing to support a myriad of meaningless hints in older API
> > versions is going to be desirable.
> >
> > Finally, at this time I'm not sure we should take the stance that only
> > in-tree scheduler hints are supported.  While I completely agree with the
> > desire to expose things in cross-cloud ways as we've done and are
> looking to
> > do with flavor and image properties I think scheduling is an area where
> we
> > want to allow some flexibility for deployers to write and expose
> scheduling
> > capabilities that meet their specific needs.  Over time I hope we will
> get
> > to a place where some standardization can happen, but I don't think
> locking
> > in the current scheduling hints is the way forward for that.  I would
> love
> > to hear from multi-cloud users here and get some input on whether that's
> > crazy and they are expecting benefits from validation on the current
> > scheduler hints.
> >
> > Now, objects.  As part of the work to formalize the request spec sent to
> the
> > scheduler there's an effort to make a scheduler hints object.  This
> > formalizes them in the same way as the API with no benefit that I can
> see.
> > I won't duplicate my arguments above, but I feel the same way about the
> > objects as I do with the API.  I don't think needing to update an object
> > version every time a new hint is added is useful at this time, nor do I
> > think we should lock in the current in-tree hints.
> >
> > In the end this boils down to my concern that the scheduling hints api
> is a
> > really horrible user experience and I don't want it to be solidified in
> the
> > API or objects yet.  I think we should re-examine how they're handled
> before
> > that happens.
>
> Now we are discussing this on https://review.openstack.org/#/c/217727/
> for allowing out-of-tree scheduler-hints.
> When we wrote API schema for scheduler-hints, it was difficult to know
> what are available API parameters for scheduler-hints.
> Current API schema exposes them and I guess that is useful for API users
> also.
>
> One idea is that: How about auto-extending scheduler-hint API schema
> based on loaded schedulers?
> Now API schemas of "create/update/resize/rebuild a server" APIs are
> auto-extended based on loaded extensions by using stevedore
> library[1].
>

Hmm... we will deprecate extensions from our API; this sounds like adding
yet another extension mechanism.


> I guess we can apply the same way for scheduler-hints also in long-term.
> Each scheduler needs to implement a method which returns available API
> parameter formats and nova-api tries to get them then extends
> scheduler-hints API schema with them.
> That means out-of-tree schedulers also will be available if they
> implement the method.
> # In short-term, I can see "blocking additionalProperties" validation
> disabled by the way.
>
> Thanks
> Ken Ohmichi
>
> ---
> [1]:
> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/d10cb2fb/attachment.html>

From ativelkov at mirantis.com  Fri Sep  4 10:06:57 2015
From: ativelkov at mirantis.com (Alexander Tivelkov)
Date: Fri, 4 Sep 2015 13:06:57 +0300
Subject: [openstack-dev] [murano] [dashboard] Remove the owner filter
 from "Package Definitions" page
In-Reply-To: <CAKSp79y8cCU7z0S-Pzgy2k1TNJZZMsyVYXk-bEtSj6ByoB4JZQ@mail.gmail.com>
References: <CAKSp79y8cCU7z0S-Pzgy2k1TNJZZMsyVYXk-bEtSj6ByoB4JZQ@mail.gmail.com>
Message-ID: <CAM6FM9S47YmJsTYGVNoPc7L2JGjBpCB+-s-HTd=d+HK939GEEg@mail.gmail.com>

+1 on this.

Filtering by ownership makes sense only in the Catalog view (i.e. on the
page of usable apps), but not in an admin-like console such as the list of
package definitions.

--
Regards,
Alexander Tivelkov

On Fri, Sep 4, 2015 at 12:36 PM, Dmitro Dovbii <ddovbii at mirantis.com> wrote:

> Hi folks!
>
> I want to suggest deleting the owner filter (3 tabs) from the Package
> Definitions page. Previously this filter was available to all users, and we
> agreed that it is useless. Now it is available only to admins, but I don't
> think that improves the UX either. Moreover, this filter blocks the
> implementation of search by name, because the two filters can interact
> inconsistently.
> So, please express your opinion on this issue. If you agree, I will remove
> this filter ASAP.
>
> Best regards,
> Dmytro Dovbii
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/904f4318/attachment.html>

From thierry at openstack.org  Fri Sep  4 10:14:06 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 4 Sep 2015 12:14:06 +0200
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
	allocation
Message-ID: <55E96EEE.4070306@openstack.org>

Hi PTLs,

Here is the proposed slot allocation for every "big tent" project team
at the Mitaka Design Summit in Tokyo. This is based on the requests the
liberty PTLs have made, space availability and project activity &
collaboration needs.

We have a lot less space (and time slots) in Tokyo compared to
Vancouver, so we were unable to give every team what they wanted. In
particular, there were far more workroom requests than we have
available, so we had to cut down on those quite heavily. Please note
that we'll have a large lunch room with roundtables inside the Design
Summit space that can easily be abused (outside of lunch) as space for
extra discussions.

Here is the allocation:

| fb: fishbowl 40-min slots
| wr: workroom 40-min slots
| cm: Friday contributors meetup
| | day: full day, morn: only morning, aft: only afternoon

Neutron: 12fb, cm:day
Nova: 14fb, cm:day
Cinder: 5fb, 4wr, cm:day	
Horizon: 2fb, 7wr, cm:day	
Heat: 4fb, 8wr, cm:morn
Keystone: 7fb, 3wr, cm:day
Ironic: 4fb, 4wr, cm:morn
Oslo: 3fb, 5wr
Rally: 1fb, 2wr
Kolla: 3fb, 5wr, cm:aft
Ceilometer: 2fb, 7wr, cm:morn
TripleO: 2fb, 1wr, cm:full
Sahara: 2fb, 5wr, cm:aft
Murano: 2wr, cm:full
Glance: 3fb, 5wr, cm:full	
Manila: 2fb, 4wr, cm:morn
Magnum: 5fb, 5wr, cm:full	
Swift: 2fb, 12wr, cm:full	
Trove: 2fb, 4wr, cm:aft
Barbican: 2fb, 6wr, cm:aft
Designate: 1fb, 4wr, cm:aft
OpenStackClient: 1fb, 1wr, cm:morn
Mistral: 1fb, 3wr	
Zaqar: 1fb, 3wr
Congress: 3wr
Cue: 1fb, 1wr
Solum: 1fb
Searchlight: 1fb, 1wr
MagnetoDB: won't be present

Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
PuppetOpenStack: 2fb, 3wr
Documentation: 2fb, 4wr, cm:morn
Quality Assurance: 4fb, 4wr, cm:full
OpenStackAnsible: 2fb, 1wr, cm:aft
Release management: 1fb, 1wr (shared meetup with QA)
Security: 2fb, 2wr
ChefOpenstack: will camp in the lunch room all week
App catalog: 1fb, 1wr
I18n: cm:morn
OpenStack UX: 2wr
Packaging-deb: 2wr
Refstack: 2wr
RpmPackaging: 1fb, 1wr

We'll start working on laying out those sessions over the available
rooms and time slots. If you have constraints (I already know
searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
Manila with Cinder, Solum with Magnum...) please let me know, we'll do
our best to limit them.

-- 
Thierry Carrez (ttx)


From ken1ohmichi at gmail.com  Fri Sep  4 10:18:43 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Fri, 04 Sep 2015 10:18:43 +0000
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAH7mGauOgfvVkfW2OYPm7D=7zgXhRHpx4a7_jZMyBtND3iirGQ@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAH7mGauOgfvVkfW2OYPm7D=7zgXhRHpx4a7_jZMyBtND3iirGQ@mail.gmail.com>
Message-ID: <CAA393vhfetUH3PkJHkpcP9sf8vjzS+Tm-Fcp7O_D6mo3Q_S-xA@mail.gmail.com>

Hi Alex,

Thanks for your comment.
IMO, this idea is different from the extensions we will remove.
It is about modularity, to reduce the maintenance burden.
With this idea, we can put the corresponding schema in each filter.
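Ken'ichi's auto-extension idea can be sketched without stevedore: each loaded filter contributes its own schema fragment, and the API layer merges them (the filter classes and the get_hint_schema() method are hypothetical; real Nova would discover filters via stevedore entry points):

```python
# Sketch: build the scheduler-hints schema from whatever filters are
# loaded, instead of hardcoding the hint list in the API layer.

class GroupAntiAffinityFilter:
    @staticmethod
    def get_hint_schema():
        # Hypothetical per-filter hook returning its hint schema fragment.
        return {"group": {"type": "string"}}

class SameHostFilter:
    @staticmethod
    def get_hint_schema():
        return {"same_host": {"type": "array", "items": {"type": "string"}}}

def build_hints_schema(filters):
    """Merge each filter's fragment into one JSON-Schema-style object."""
    properties = {}
    for f in filters:
        properties.update(f.get_hint_schema())
    return {
        "type": "object",
        "properties": properties,
        # Because out-of-tree filters extend "properties" themselves,
        # unknown keys can safely be rejected again.
        "additionalProperties": False,
    }

schema = build_hints_schema([GroupAntiAffinityFilter, SameHostFilter])
print(sorted(schema["properties"]))  # ['group', 'same_host']
```

An out-of-tree filter would participate simply by being loaded and implementing the same hook, which is the modularity argument above.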

On Fri, 4 Sep 2015 at 19:04, Alex Xu <soulxu at gmail.com> wrote:

> 2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
>
>> Hi Andrew,
>>
>> Sorry for this late response, I missed it.
>>
>> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
>> > I have been growing concerned recently with some attempts to formalize
>> > scheduler hints, both with API validation and Nova objects defining
>> them,
>> > and want to air those concerns and see if others agree or can help me
>> see
>> > why I shouldn't worry.
>> >
>> > Starting with the API I think the strict input validation that's being
>> done,
>> > as seen in
>> >
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
>> ,
>> > is unnecessary, and potentially problematic.
>> >
>> > One problem is that it doesn't indicate anything useful for a client.
>> The
>> > schema indicates that there are hints available but can make no claim
>> about
>> > whether or not they're actually enabled.  So while a microversion bump
>> would
>> > typically indicate a new feature available to an end user, in the case
>> of a
>> > new scheduler hint a microversion bump really indicates nothing at
>> all.  It
>> > does ensure that if a scheduler hint is used that it's spelled properly
>> and
>> > the data type passed is correct, but that's primarily useful because
>> there
>> > is no feedback mechanism to indicate an invalid or unused scheduler
>> hint.  I
>> > think the API schema is a poor proxy for that deficiency.
>> >
>> > Since the exposure of a hint means nothing as far as its usefulness, I
>> don't
>> > think we should be codifying them as part of our API schema at this
>> time.
>> > At some point I imagine we'll evolve a more useful API for passing
>> > information to the scheduler as part of a request, and when that
>> happens I
>> > don't think needing to support a myriad of meaningless hints in older
>> API
>> > versions is going to be desirable.
>> >
>> > Finally, at this time I'm not sure we should take the stance that only
>> > in-tree scheduler hints are supported.  While I completely agree with
>> the
>> > desire to expose things in cross-cloud ways as we've done and are
>> looking to
>> > do with flavor and image properties I think scheduling is an area where
>> we
>> > want to allow some flexibility for deployers to write and expose
>> scheduling
>> > capabilities that meet their specific needs.  Over time I hope we will
>> get
>> > to a place where some standardization can happen, but I don't think
>> locking
>> > in the current scheduling hints is the way forward for that.  I would
>> love
>> > to hear from multi-cloud users here and get some input on whether that's
>> > crazy and they are expecting benefits from validation on the current
>> > scheduler hints.
>> >
>> > Now, objects.  As part of the work to formalize the request spec sent
>> to the
>> > scheduler there's an effort to make a scheduler hints object.  This
>> > formalizes them in the same way as the API with no benefit that I can
>> see.
>> > I won't duplicate my arguments above, but I feel the same way about the
>> > objects as I do with the API.  I don't think needing to update and
>> object
>> > version every time a new hint is added is useful at this time, nor do I
>> > think we should lock in the current in-tree hints.
>> >
>> > In the end this boils down to my concern that the scheduling hints api
>> is a
>> > really horrible user experience and I don't want it to be solidified in
>> the
>> > API or objects yet.  I think we should re-examine how they're handled
>> before
>> > that happens.
>>
>> Now we are discussing this on https://review.openstack.org/#/c/217727/
>> for allowing out-of-tree scheduler-hints.
>> When we wrote the API schema for scheduler-hints, it was difficult to know
>> what the available API parameters for scheduler-hints were.
>> The current API schema exposes them, and I guess that is useful for API
>> users as well.
>>
>> One idea: how about auto-extending the scheduler-hint API schema
>> based on the loaded schedulers?
>> Now API schemas of "create/update/resize/rebuild a server" APIs are
>> auto-extended based on loaded extensions by using stevedore
>> library[1].
>>
>
> Em... we will deprecate the extensions from our API; this sounds like
> adding more extension mechanisms.
>
>
>> I guess we can apply the same approach to scheduler-hints in the long term.
>> Each scheduler needs to implement a method which returns its available API
>> parameter formats, and nova-api gets them and extends the
>> scheduler-hints API schema with them.
>> That means out-of-tree schedulers will also be available if they
>> implement the method.
>> # In the short term, I can see disabling the "blocking
>> additionalProperties" validation instead.
>>
>> Thanks
>> Ken Ohmichi
>>
>> ---
>> [1]:
>> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
>>
> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/34f28fe5/attachment.html>

From vgridnev at mirantis.com  Fri Sep  4 10:37:25 2015
From: vgridnev at mirantis.com (Vitaly Gridnev)
Date: Fri, 4 Sep 2015 13:37:25 +0300
Subject: [openstack-dev] [sahara] FFE request for heat wait condition
	support
In-Reply-To: <CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A@mail.gmail.com>
References: <CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A@mail.gmail.com>
Message-ID: <CA+O3VAhA2Xi_hKCaCB2PoWr8jUM0bQhwnSUAGx2gOGB0ksii6w@mail.gmail.com>

+1 for FFE, because of

 1. Low risk of issues; fully covered by the current scenario tests;
 2. Implementation is already under review.

On Fri, Sep 4, 2015 at 12:54 PM, Sergey Reshetnyak <sreshetniak at mirantis.com
> wrote:

> Hi,
>
> I would like to request FFE for wait condition support for Heat engine.
> Wait condition reports signal about booting instance.
>
> Blueprint:
> https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions
>
> Spec:
>
> https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst
>
> Patch:
> https://review.openstack.org/#/c/169338/
>
> Thanks,
> Sergey Reshetnyak
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/92bbca9d/attachment.html>

From sbauza at redhat.com  Fri Sep  4 10:56:51 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Fri, 04 Sep 2015 12:56:51 +0200
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAA393vhfetUH3PkJHkpcP9sf8vjzS+Tm-Fcp7O_D6mo3Q_S-xA@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAH7mGauOgfvVkfW2OYPm7D=7zgXhRHpx4a7_jZMyBtND3iirGQ@mail.gmail.com>
 <CAA393vhfetUH3PkJHkpcP9sf8vjzS+Tm-Fcp7O_D6mo3Q_S-xA@mail.gmail.com>
Message-ID: <55E978F3.2070804@redhat.com>



On 04/09/2015 12:18, Ken'ichi Ohmichi wrote:
>
> Hi Alex,
>
> Thanks for your comment.
> IMO, this idea is different from the extensions we are going to remove;
> it is about modularity to reduce the maintenance burden.
> With this idea, we can put the corresponding schema in each filter.
>
>

While I think it could be a nice move to have stevedore-loaded filters 
for the FilterScheduler, for many reasons, I actually wouldn't want to 
delay the compatibility change that relaxes API validation of the 
scheduler hints any longer than necessary.

In order to have a smooth transition, I'd rather first provide a change 
for using stevedore with the filters and weighers (even if the 
weighers are not exposed through the API), and then, once that is 
implemented, do the necessary change at the API level like the one you 
proposed.
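The transition described here can be sketched roughly as follows (a hypothetical illustration only; the real mechanism, stevedore, discovers plugins through setuptools entry points, while a plain registry stands in for that here, and all names are invented):

```python
# Hypothetical sketch of the loading order described above: first make
# filters/weighers pluggable, then derive the API behavior from whatever
# actually got loaded in a given deployment.

FILTER_REGISTRY = {}


def register_filter(name):
    """Stand-in for a stevedore entry point, e.g. under a namespace
    like 'nova.scheduler.filters' (illustrative, not the real one)."""
    def wrapper(cls):
        FILTER_REGISTRY[name] = cls
        return cls
    return wrapper


@register_filter('same_host')
class SameHostFilter:
    def host_passes(self, host, spec):
        # Pass if the host is in the requested set (default: any host).
        return host in spec.get('same_host', [host])


def load_filters(enabled):
    """Mimics stevedore's NamedExtensionManager: instantiate only the
    enabled plugins; unknown names are simply skipped."""
    return [FILTER_REGISTRY[name]() for name in enabled if name in FILTER_REGISTRY]


filters = load_filters(['same_host', 'out_of_tree_filter'])
# Only plugins actually present get loaded, which is why any API schema
# derived from them would reflect the deployment, not a hardcoded list.
```

Once filters load this way, the API-level change (building the hints schema from the loaded set) becomes a separate, later step, which is the smooth transition argued for above.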

In the meantime, IMHO we should accept rather sooner than later (meaning 
for Liberty) https://review.openstack.org/#/c/217727/

Thanks for that good idea, I like it,

-Sylvain


> On Fri, Sep 4, 2015 at 19:04, Alex Xu <soulxu at gmail.com 
> <mailto:soulxu at gmail.com>> wrote:
>
>     2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com
>     <mailto:ken1ohmichi at gmail.com>>:
>
>         Hi Andrew,
>
>         Sorry for this late response, I missed it.
>
>         2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com
>         <mailto:andrew at lascii.com>>:
>         > I have been growing concerned recently with some attempts to
>         formalize
>         > scheduler hints, both with API validation and Nova objects
>         defining them,
>         > and want to air those concerns and see if others agree or
>         can help me see
>         > why I shouldn't worry.
>         >
>         > Starting with the API I think the strict input validation
>         that's being done,
>         > as seen in
>         >
>         http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>         > is unnecessary, and potentially problematic.
>         >
>         > One problem is that it doesn't indicate anything useful for
>         a client.  The
>         > schema indicates that there are hints available but can make
>         no claim about
>         > whether or not they're actually enabled.  So while a
>         microversion bump would
>         > typically indicate a new feature available to an end user,
>         in the case of a
>         > new scheduler hint a microversion bump really indicates
>         nothing at all.  It
>         > does ensure that if a scheduler hint is used that it's
>         spelled properly and
>         > the data type passed is correct, but that's primarily useful
>         because there
>         > is no feedback mechanism to indicate an invalid or unused
>         scheduler hint.  I
>         > think the API schema is a poor proxy for that deficiency.
>         >
>         > Since the exposure of a hint means nothing as far as its
>         usefulness, I don't
>         > think we should be codifying them as part of our API schema
>         at this time.
>         > At some point I imagine we'll evolve a more useful API for
>         passing
>         > information to the scheduler as part of a request, and when
>         that happens I
>         > don't think needing to support a myriad of meaningless hints
>         in older API
>         > versions is going to be desirable.
>         >
>         > Finally, at this time I'm not sure we should take the stance
>         that only
>         > in-tree scheduler hints are supported.  While I completely
>         agree with the
>         > desire to expose things in cross-cloud ways as we've done
>         and are looking to
>         > do with flavor and image properties I think scheduling is an
>         area where we
>         > want to allow some flexibility for deployers to write and
>         expose scheduling
>         > capabilities that meet their specific needs. Over time I
>         hope we will get
>         > to a place where some standardization can happen, but I
>         don't think locking
>         > in the current scheduling hints is the way forward for
>         that.  I would love
>         > to hear from multi-cloud users here and get some input on
>         whether that's
>         > crazy and they are expecting benefits from validation on the
>         current
>         > scheduler hints.
>         >
>         > Now, objects.  As part of the work to formalize the request
>         spec sent to the
>         > scheduler there's an effort to make a scheduler hints
>         object.  This
>         > formalizes them in the same way as the API with no benefit
>         that I can see.
>         > I won't duplicate my arguments above, but I feel the same
>         way about the
>         > objects as I do with the API.  I don't think needing to
>         update and object
>         > version every time a new hint is added is useful at this
>         time, nor do I
>         > think we should lock in the current in-tree hints.
>         >
>         > In the end this boils down to my concern that the scheduling
>         hints api is a
>         > really horrible user experience and I don't want it to be
>         solidified in the
>         > API or objects yet.  I think we should re-examine how
>         they're handled before
>         > that happens.
>
>         Now we are discussing this on
>         https://review.openstack.org/#/c/217727/
>         for allowing out-of-tree scheduler-hints.
>         When we wrote API schema for scheduler-hints, it was difficult
>         to know
>         what are available API parameters for scheduler-hints.
>         Current API schema exposes them and I guess that is useful for
>         API users also.
>
>         One idea is that: How about auto-extending scheduler-hint API
>         schema
>         based on loaded schedulers?
>         Now API schemas of "create/update/resize/rebuild a server"
>         APIs are
>         auto-extended based on loaded extensions by using stevedore
>         library[1].
>
>
>     Em... we will deprecate the extensions from our API; this sounds
>     like adding more extension mechanisms.
>
>         I guess we can apply the same way for scheduler-hints also in
>         long-term.
>         Each scheduler needs to implement a method which returns
>         available API
>         parameter formats and nova-api tries to get them then extends
>         scheduler-hints API schema with them.
>         That means out-of-tree schedulers also will be available if they
>         implement the method.
>         # In short-term, I can see "blocking additionalProperties"
>         validation
>         disabled by the way.
>
>         Thanks
>         Ken Ohmichi
>
>         ---
>         [1]:
>         https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
>
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/ba8e2d21/attachment.html>

From sean at dague.net  Fri Sep  4 11:15:01 2015
From: sean at dague.net (Sean Dague)
Date: Fri, 4 Sep 2015 07:15:01 -0400
Subject: [openstack-dev] [openstack-announce] [release][nova]
 python-novaclient release 2.28.0 (liberty)
In-Reply-To: <55E76EAC.5030103@linux.vnet.ibm.com>
References: <20150902145556.C5132C00018@frontend1.nyi.internal>
 <20150902204028.GG7955@yuggoth.org> <55E76EAC.5030103@linux.vnet.ibm.com>
Message-ID: <55E97D35.9020507@dague.net>

On 09/02/2015 05:48 PM, Matt Riedemann wrote:
> 
> 
> On 9/2/2015 3:40 PM, Jeremy Stanley wrote:
>> On 2015-09-02 10:55:56 -0400 (-0400), doug at doughellmann.com wrote:
>>> We are thrilled to announce the release of:
>>>
>>> python-novaclient 2.27.0: Client library for OpenStack Compute API
>> [...]
>>
>> Just as a heads up, there's some indication that this release is
>> currently broken by many popular service providers (behavior ranging
>> from 401 unauthorized errors to hanging indefinitely due, it seems,
>> to filtering or not supporting version detection in various ways).
>>
>>      https://launchpad.net/bugs/1491579
>>
> 
> And:
> 
> https://bugs.launchpad.net/python-novaclient/+bug/1491325
> 
> We have a fix for ^ and I plan on putting in the request for 2.27.1
> tonight once the fix is merged.  That should unblock manila.
> 
> For the version discovery bug, we plan on talking about that in the nova
> meeting tomorrow.

The issues with novaclient version detection not working correctly
against various clouds have now been fixed in 2.28.0 - the bug
https://bugs.launchpad.net/python-novaclient/+bug/1491579 has been
updated to hopefully contain all the relevant details of the issue.

	-Sean

-- 
Sean Dague
http://dague.net


From sean at dague.net  Fri Sep  4 11:29:45 2015
From: sean at dague.net (Sean Dague)
Date: Fri, 4 Sep 2015 07:29:45 -0400
Subject: [openstack-dev] [openstack-announce] [release][nova]
 python-novaclient release 2.28.0 (liberty)
In-Reply-To: <55E97D35.9020507@dague.net>
References: <20150902145556.C5132C00018@frontend1.nyi.internal>
 <20150902204028.GG7955@yuggoth.org> <55E76EAC.5030103@linux.vnet.ibm.com>
 <55E97D35.9020507@dague.net>
Message-ID: <55E980A9.9020101@dague.net>

On 09/04/2015 07:15 AM, Sean Dague wrote:
> On 09/02/2015 05:48 PM, Matt Riedemann wrote:
>>
>>
>> On 9/2/2015 3:40 PM, Jeremy Stanley wrote:
>>> On 2015-09-02 10:55:56 -0400 (-0400), doug at doughellmann.com wrote:
>>>> We are thrilled to announce the release of:
>>>>
>>>> python-novaclient 2.27.0: Client library for OpenStack Compute API
>>> [...]
>>>
>>> Just as a heads up, there's some indication that this release is
>>> currently broken by many popular service providers (behavior ranging
>>> from 401 unauthorized errors to hanging indefinitely due, it seems,
>>> to filtering or not supporting version detection in various ways).
>>>
>>>      https://launchpad.net/bugs/1491579
>>>
>>
>> And:
>>
>> https://bugs.launchpad.net/python-novaclient/+bug/1491325
>>
>> We have a fix for ^ and I plan on putting in the request for 2.27.1
>> tonight once the fix is merged.  That should unblock manila.
>>
>> For the version discovery bug, we plan on talking about that in the nova
>> meeting tomorrow.
> 
> The issues with novaclient version detection not working correctly
> against various clouds have now been fixed in 2.28.0 - the bug
> https://bugs.launchpad.net/python-novaclient/+bug/1491579 has been
> updated to hopefully contain all the relevant details of the issue.

It also looks like a big reason this unexpected behavior existed in the
field is that configuring SSL termination correctly (so that link
following in the REST documents works) requires setting a ton of
additional and divergent configuration options in various services.
Thanks to the folks who looked into the issue in the bug and helped
explain the behavior we saw.

We're not yet testing for that in Tempest, so people are probably not
realizing that their API environments are a bit janky.

Honestly, the fact that deployers are required to do this is crazy. The
service catalog already has this information, and the services should be
reflecting it back. However, people spent a lot of time working around
the service catalog here, probably because they didn't understand it,
creating a configuration hairball in the process.

This I think raises the importance of really getting the Service Catalog
into shape in this next cycle so that we can get ahead of issues like
this one in the future, and actually ensure that out of the box cloud
installs work in situations like this.
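The point about the catalog already having this information can be illustrated with a rough sketch (the catalog shape and URL below are invented for illustration, not what any real service returns):

```python
# Hypothetical sketch: the service catalog already maps service types to
# public endpoints, so a service building version links (or a client
# following them) could resolve the URL from the catalog instead of
# relying on per-service SSL/termination config options.
catalog = [
    {'type': 'compute', 'endpoints': [
        {'interface': 'public',
         'url': 'https://cloud.example.com/compute/v2.1'}]},
]


def public_endpoint(catalog, service_type):
    """Return the public URL for a service type from the catalog."""
    for service in catalog:
        if service['type'] != service_type:
            continue
        for ep in service['endpoints']:
            if ep['interface'] == 'public':
                return ep['url']
    raise LookupError('no public endpoint for %s' % service_type)
```

If services derived their self-referential links this way, the divergent termination-related configuration options described above would be unnecessary.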

	-Sean

-- 
Sean Dague
http://dague.net


From flavio at redhat.com  Fri Sep  4 11:37:05 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 4 Sep 2015 13:37:05 +0200
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E96EEE.4070306@openstack.org>
References: <55E96EEE.4070306@openstack.org>
Message-ID: <20150904113705.GH30997@redhat.com>

On 04/09/15 12:14 +0200, Thierry Carrez wrote:
>Hi PTLs,
>
>Here is the proposed slot allocation for every "big tent" project team
>at the Mitaka Design Summit in Tokyo. This is based on the requests the
>liberty PTLs have made, space availability and project activity &
>collaboration needs.
>
>We have a lot less space (and time slots) in Tokyo compared to
>Vancouver, so we were unable to give every team what they wanted. In
>particular, there were far more workroom requests than we have
>available, so we had to cut down on those quite heavily. Please note
>that we'll have a large lunch room with roundtables inside the Design
>Summit space that can easily be abused (outside of lunch) as space for
>extra discussions.
>
>Here is the allocation:
>
>| fb: fishbowl 40-min slots
>| wr: workroom 40-min slots
>| cm: Friday contributors meetup
>| | day: full day, morn: only morning, aft: only afternoon
>
>Neutron: 12fb, cm:day
>Nova: 14fb, cm:day
>Cinder: 5fb, 4wr, cm:day	
>Horizon: 2fb, 7wr, cm:day	
>Heat: 4fb, 8wr, cm:morn
>Keystone: 7fb, 3wr, cm:day
>Ironic: 4fb, 4wr, cm:morn
>Oslo: 3fb, 5wr
>Rally: 1fb, 2wr
>Kolla: 3fb, 5wr, cm:aft
>Ceilometer: 2fb, 7wr, cm:morn
>TripleO: 2fb, 1wr, cm:full
>Sahara: 2fb, 5wr, cm:aft
>Murano: 2wr, cm:full
>Glance: 3fb, 5wr, cm:full	
>Manila: 2fb, 4wr, cm:morn
>Magnum: 5fb, 5wr, cm:full	
>Swift: 2fb, 12wr, cm:full	
>Trove: 2fb, 4wr, cm:aft
>Barbican: 2fb, 6wr, cm:aft
>Designate: 1fb, 4wr, cm:aft
>OpenStackClient: 1fb, 1wr, cm:morn
>Mistral: 1fb, 3wr	
>Zaqar: 1fb, 3wr
>Congress: 3wr
>Cue: 1fb, 1wr
>Solum: 1fb
>Searchlight: 1fb, 1wr
>MagnetoDB: won't be present
>
>Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
>PuppetOpenStack: 2fb, 3wr
>Documentation: 2fb, 4wr, cm:morn
>Quality Assurance: 4fb, 4wr, cm:full
>OpenStackAnsible: 2fb, 1wr, cm:aft
>Release management: 1fb, 1wr (shared meetup with QA)
>Security: 2fb, 2wr
>ChefOpenstack: will camp in the lunch room all week
>App catalog: 1fb, 1wr
>I18n: cm:morn
>OpenStack UX: 2wr
>Packaging-deb: 2wr
>Refstack: 2wr
>RpmPackaging: 1fb, 1wr
>
>We'll start working on laying out those sessions over the available
>rooms and time slots. If you have constraints (I already know
>searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
>Manila with Cinder, Solum with Magnum...) please let me know, we'll do
>our best to limit them.

From a very selfish POV, I'd like to avoid conflicts between Glance
and Zaqar.

From a community POV, it'd be cool if we could avoid conflicts between
Zaqar and Sahara (at least in one wr slot) since we'd like to dedicate
one to Sahara.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/e7b9b548/attachment.pgp>

From efedorova at mirantis.com  Fri Sep  4 11:48:53 2015
From: efedorova at mirantis.com (Ekaterina Chernova)
Date: Fri, 4 Sep 2015 14:48:53 +0300
Subject: [openstack-dev] [murano] [dashboard] Remove the owner filter
 from "Package Definitions" page
In-Reply-To: <CAM6FM9S47YmJsTYGVNoPc7L2JGjBpCB+-s-HTd=d+HK939GEEg@mail.gmail.com>
References: <CAKSp79y8cCU7z0S-Pzgy2k1TNJZZMsyVYXk-bEtSj6ByoB4JZQ@mail.gmail.com>
 <CAM6FM9S47YmJsTYGVNoPc7L2JGjBpCB+-s-HTd=d+HK939GEEg@mail.gmail.com>
Message-ID: <CAOFFu8Zo5SRVPUytGk7kj4UgNN5KJ5m39d9NeJpKoB427FbzfA@mail.gmail.com>

Agreed.

Currently, pagination is broken on the "Package Definitions" page, so
removing that filter will fix it. Also, the 'Other' tab looks unhelpful;
the page should indicate to the admin which tenant each package belongs
to. That improvement will be added later.

Regards,
Kate.

On Fri, Sep 4, 2015 at 1:06 PM, Alexander Tivelkov <ativelkov at mirantis.com>
wrote:

> +1 on this.
>
> Filtering by ownership makes sense only on the Catalog view (i.e. on the page
> of usable apps), but not on an admin-like console like the list of package
> definitions.
>
> --
> Regards,
> Alexander Tivelkov
>
> On Fri, Sep 4, 2015 at 12:36 PM, Dmitro Dovbii <ddovbii at mirantis.com>
> wrote:
>
>> Hi folks!
>>
>> I want to suggest deleting the owner filter (3 tabs) from the Package
>> Definitions page. Previously this filter was available to all users, and we
>> agreed that it is useless. Now it is available only to admins, but I think
>> it still doesn't improve the UX. Moreover, this filter prevents the
>> implementation of search by name, because the two filters
>> can behave inconsistently.
>> So, please express your opinion on this issue. If you agree, I will
>> remove this filter ASAP.
>>
>> Best regards,
>> Dmytro Dovbii
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/4a269156/attachment.html>

From dtantsur at redhat.com  Fri Sep  4 11:52:41 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 4 Sep 2015 13:52:41 +0200
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E96EEE.4070306@openstack.org>
References: <55E96EEE.4070306@openstack.org>
Message-ID: <55E98609.4060708@redhat.com>

On 09/04/2015 12:14 PM, Thierry Carrez wrote:
> Hi PTLs,
>
> Here is the proposed slot allocation for every "big tent" project team
> at the Mitaka Design Summit in Tokyo. This is based on the requests the
> liberty PTLs have made, space availability and project activity &
> collaboration needs.
>
> We have a lot less space (and time slots) in Tokyo compared to
> Vancouver, so we were unable to give every team what they wanted. In
> particular, there were far more workroom requests than we have
> available, so we had to cut down on those quite heavily. Please note
> that we'll have a large lunch room with roundtables inside the Design
> Summit space that can easily be abused (outside of lunch) as space for
> extra discussions.
>
> Here is the allocation:
>
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | day: full day, morn: only morning, aft: only afternoon
>
> Neutron: 12fb, cm:day
> Nova: 14fb, cm:day
> Cinder: 5fb, 4wr, cm:day	
> Horizon: 2fb, 7wr, cm:day	
> Heat: 4fb, 8wr, cm:morn
> Keystone: 7fb, 3wr, cm:day
> Ironic: 4fb, 4wr, cm:morn
> Oslo: 3fb, 5wr
> Rally: 1fb, 2wr
> Kolla: 3fb, 5wr, cm:aft
> Ceilometer: 2fb, 7wr, cm:morn
> TripleO: 2fb, 1wr, cm:full
> Sahara: 2fb, 5wr, cm:aft
> Murano: 2wr, cm:full
> Glance: 3fb, 5wr, cm:full	
> Manila: 2fb, 4wr, cm:morn
> Magnum: 5fb, 5wr, cm:full	
> Swift: 2fb, 12wr, cm:full	
> Trove: 2fb, 4wr, cm:aft
> Barbican: 2fb, 6wr, cm:aft
> Designate: 1fb, 4wr, cm:aft
> OpenStackClient: 1fb, 1wr, cm:morn
> Mistral: 1fb, 3wr	
> Zaqar: 1fb, 3wr
> Congress: 3wr
> Cue: 1fb, 1wr
> Solum: 1fb
> Searchlight: 1fb, 1wr
> MagnetoDB: won't be present
>
> Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
> PuppetOpenStack: 2fb, 3wr
> Documentation: 2fb, 4wr, cm:morn
> Quality Assurance: 4fb, 4wr, cm:full
> OpenStackAnsible: 2fb, 1wr, cm:aft
> Release management: 1fb, 1wr (shared meetup with QA)
> Security: 2fb, 2wr
> ChefOpenstack: will camp in the lunch room all week
> App catalog: 1fb, 1wr
> I18n: cm:morn
> OpenStack UX: 2wr
> Packaging-deb: 2wr
> Refstack: 2wr
> RpmPackaging: 1fb, 1wr
>
> We'll start working on laying out those sessions over the available
> rooms and time slots. If you have constraints (I already know
> searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> our best to limit them.
>

It would be cool to avoid conflicts between Ironic and TripleO.


From shardy at redhat.com  Fri Sep  4 12:04:54 2015
From: shardy at redhat.com (Steven Hardy)
Date: Fri, 4 Sep 2015 13:04:54 +0100
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E96EEE.4070306@openstack.org>
References: <55E96EEE.4070306@openstack.org>
Message-ID: <20150904120453.GA10703@t430slt.redhat.com>

On Fri, Sep 04, 2015 at 12:14:06PM +0200, Thierry Carrez wrote:
<snip>
> 
> We'll start working on laying out those sessions over the available
> rooms and time slots. If you have constraints (I already know
> searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> our best to limit them.

If possible it'd be best to avoid significant overlap between the Heat and
TripleO sessions as a number of folks are heavily involved with both.

Also between TripleO and Kolla if possible, as again there are some key
folks involved in cross-team efforts to enable containerized TripleO.

Thanks!

Steve


From henryn at linux.vnet.ibm.com  Fri Sep  4 09:07:32 2015
From: henryn at linux.vnet.ibm.com (Henry Nash)
Date: Fri, 4 Sep 2015 10:07:32 +0100
Subject: [openstack-dev] FFE Request for completion of data driven
	assignment testing in Keystone
In-Reply-To: <55E95399.1070903@openstack.org>
References: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
 <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>
 <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>
 <55E95399.1070903@openstack.org>
Message-ID: <F7ACA0E7-46DB-441C-A306-14510E9CC431@linux.vnet.ibm.com>

Great, thanks.

Henry
> On 4 Sep 2015, at 09:17, Thierry Carrez <thierry at openstack.org> wrote:
> 
> Morgan Fainberg wrote:
>> 
>>>    I would like to request an FFE for the remaining two patches that
>>>    are already in review
>>>    (https://review.openstack.org/#/c/153897/ and https://review.openstack.org/#/c/154485/). 
>>>    These contain only test code and no functional changes, and
>>>    increase our test coverage - as well as enable other items to be
>>>    re-use the list_role_assignment backend method.
>>> 
>>> Do we need a FFE for changes to tests?
>>> 
>> 
>> I would say "no". 
> 
> Right. Extra tests (or extra docs for that matter) don't count as a
> "feature" for the freeze. In particular it doesn't change the behavior
> of the software or invalidate testing that may have been conducted.
> 
> -- 
> Thierry Carrez (ttx)
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From ndipanov at redhat.com  Fri Sep  4 12:57:20 2015
From: ndipanov at redhat.com (=?UTF-8?Q?Nikola_=c4=90ipanov?=)
Date: Fri, 4 Sep 2015 13:57:20 +0100
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <558C2338.1060204@inaugust.com>
References: <20150625142223.GC2646@crypt> <558C2338.1060204@inaugust.com>
Message-ID: <55E99530.4040907@redhat.com>

On 06/25/2015 04:50 PM, Monty Taylor wrote:
> On 06/25/2015 10:22 AM, Andrew Laski wrote:
>> I have been growing concerned recently with some attempts to formalize
>> scheduler hints, both with API validation and Nova objects defining
>> them, and want to air those concerns and see if others agree or can help
>> me see why I shouldn't worry.
>>
>> Starting with the API I think the strict input validation that's being
>> done, as seen in
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>> is unnecessary, and potentially problematic.
>>
>> One problem is that it doesn't indicate anything useful for a client. 
>> The schema indicates that there are hints available but can make no
>> claim about whether or not they're actually enabled.  So while a
>> microversion bump would typically indicate a new feature available to an
>> end user, in the case of a new scheduler hint a microversion bump really
>> indicates nothing at all.  It does ensure that if a scheduler hint is
>> used that it's spelled properly and the data type passed is correct, but
>> that's primarily useful because there is no feedback mechanism to
>> indicate an invalid or unused scheduler hint.  I think the API schema is
>> a poor proxy for that deficiency.
>>
>> Since the exposure of a hint means nothing as far as its usefulness, I
>> don't think we should be codifying them as part of our API schema at
>> this time.  At some point I imagine we'll evolve a more useful API for
>> passing information to the scheduler as part of a request, and when that
>> happens I don't think needing to support a myriad of meaningless hints
>> in older API versions is going to be desirable.
> 
> I totally agree.
> 
> If hints are to become an object, then they need to be _real_ resources that
> can be listed, and that have structured metadata that has an API.
> Flavors are a great example of this. From an end user perspective, I can
> ask the cloud what flavors exist, those flavors tell me information that
> I can use to make a decision, and I can pass in a reference to those
> things. If I pass in an invalid flavor, I get a meaningful error message.
> 
>> Finally, at this time I'm not sure we should take the stance that only
>> in-tree scheduler hints are supported.  While I completely agree with
>> the desire to expose things in cross-cloud ways as we've done and are
>> looking to do with flavor and image properties I think scheduling is an
>> area where we want to allow some flexibility for deployers to write and
>> expose scheduling capabilities that meet their specific needs.  Over
>> time I hope we will get to a place where some standardization can
>> happen, but I don't think locking in the current scheduling hints is the
>> way forward for that.  I would love to hear from multi-cloud users here
>> and get some input on whether that's crazy and they are expecting
>> benefits from validation on the current scheduler hints.
> 
> As a multi-cloud user, I do not use scheduler hints because there is no
> API to discover that they exist, and also no shared sense of semantics.
> (I know a flavor that claims 8G of RAM will give me, you guessed it, 8G
> of ram) So I consider scheduler hints currently to be COMPLETE vendor
> lock-in and/or only things to be used by private cloud folks who are
> also admins of their clouds.
> 
> I would not touch them with a 10 foot pole until such a time as there is
> an actual API for listing, describing and selecting them.
> 
> I would suggest that if we make one of those, we should quickly
> formalize meanings of fields - so that cloud can have specific hints
> that seem like cloud content - but that the way I learn about them is
> the same, and if there are two hints that do the same thing I can expect
> them to look the same in two different clouds.
> 

So this kind of argumentation keeps confusing me TBH. Unless I am not
understanding some basic things about how Nova works, the above argument
cleanly applies to flavors as well. Flavor '42' is not going to be the
same thing across clouds, but that's not where it ends: once you throw
in extra_specs, in particular those related to PCI devices and NUMA/CPU
pinning features, there is really no discoverability there whatsoever (*).

What I am trying to get to is not whether this is right or wrong, but to
point out the fact that Flavors are simply not a good abstraction that
can have reasonable meaning "across cloud boundaries" (i.e. different
Nova deployments), at least the way they are implemented at the moment.
We should not pretend that they are, and try to demonize useful code
making use of them, but come up with a better abstraction that can have
reasonable meaning across different deployments.

I think this is what Andrew was hinting at when he said that scheduling
is an area that cannot reasonably be standardized in this way.

I recently spoke to John briefly about this and got the feeling that he
has similar views - but I'd encourage him to comment if he wishes to do so.

N.

(*) For PCI devices, we can list aliases per host, but that's clearly an
admin API so not suitable for end user consumption really, and the alias
is an opaque string that is defined in nova.conf - has no meaning
outside a particular deployment really.
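To make the footnote concrete, here is a hypothetical sketch (the flavor names and extra_specs keys below are invented for illustration, not taken from any real deployment) of how the same capability can surface under entirely different opaque keys in two clouds, leaving a cross-cloud client nothing to rely on:

```python
# Hypothetical extra_specs as two different deployments might define them.
# The keys are opaque strings whose meaning exists only inside each
# deployment's nova.conf and scheduler configuration.
cloud_a_flavor = {
    "name": "gpu.large",
    "extra_specs": {"pci_passthrough:alias": "gpu-k80:1"},
}
cloud_b_flavor = {
    "name": "g1.xlarge",
    "extra_specs": {"accel:gpu_model": "K80"},
}

def shared_spec_keys(*flavors):
    """Return the extra_specs keys common to all flavors --
    i.e. what a cross-cloud client could rely on. Here: nothing."""
    key_sets = [set(f["extra_specs"]) for f in flavors]
    return set.intersection(*key_sets)

print(shared_spec_keys(cloud_a_flavor, cloud_b_flavor))  # set()
```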


From gord at live.ca  Fri Sep  4 13:09:38 2015
From: gord at live.ca (gord chung)
Date: Fri, 4 Sep 2015 09:09:38 -0400
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E96EEE.4070306@openstack.org>
References: <55E96EEE.4070306@openstack.org>
Message-ID: <BLU437-SMTP94FFDC71909B10399384AADE570@phx.gbl>



On 04/09/2015 6:14 AM, Thierry Carrez wrote:
> Hi PTLs,
>
> Here is the proposed slot allocation for every "big tent" project team
> at the Mitaka Design Summit in Tokyo. This is based on the requests the
> liberty PTLs have made, space availability and project activity &
> collaboration needs.
>
> We have a lot less space (and time slots) in Tokyo compared to
> Vancouver, so we were unable to give every team what they wanted. In
> particular, there were far more workroom requests than we have
> available, so we had to cut down on those quite heavily. Please note
> that we'll have a large lunch room with roundtables inside the Design
> Summit space that can easily be abused (outside of lunch) as space for
> extra discussions.
>
> Here is the allocation:
>
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | day: full day, morn: only morning, aft: only afternoon
>
> Neutron: 12fb, cm:day
> Nova: 14fb, cm:day
> Cinder: 5fb, 4wr, cm:day	
> Horizon: 2fb, 7wr, cm:day	
> Heat: 4fb, 8wr, cm:morn
> Keystone: 7fb, 3wr, cm:day
> Ironic: 4fb, 4wr, cm:morn
> Oslo: 3fb, 5wr
> Rally: 1fb, 2wr
> Kolla: 3fb, 5wr, cm:aft
> Ceilometer: 2fb, 7wr, cm:morn
> TripleO: 2fb, 1wr, cm:full
> Sahara: 2fb, 5wr, cm:aft
> Murano: 2wr, cm:full
> Glance: 3fb, 5wr, cm:full	
> Manila: 2fb, 4wr, cm:morn
> Magnum: 5fb, 5wr, cm:full	
> Swift: 2fb, 12wr, cm:full	
> Trove: 2fb, 4wr, cm:aft
> Barbican: 2fb, 6wr, cm:aft
> Designate: 1fb, 4wr, cm:aft
> OpenStackClient: 1fb, 1wr, cm:morn
> Mistral: 1fb, 3wr	
> Zaqar: 1fb, 3wr
> Congress: 3wr
> Cue: 1fb, 1wr
> Solum: 1fb
> Searchlight: 1fb, 1wr
> MagnetoDB: won't be present
>
> Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
> PuppetOpenStack: 2fb, 3wr
> Documentation: 2fb, 4wr, cm:morn
> Quality Assurance: 4fb, 4wr, cm:full
> OpenStackAnsible: 2fb, 1wr, cm:aft
> Release management: 1fb, 1wr (shared meetup with QA)
> Security: 2fb, 2wr
> ChefOpenstack: will camp in the lunch room all week
> App catalog: 1fb, 1wr
> I18n: cm:morn
> OpenStack UX: 2wr
> Packaging-deb: 2wr
> Refstack: 2wr
> RpmPackaging: 1fb, 1wr
>
> We'll start working on laying out those sessions over the available
> rooms and time slots. If you have constraints (I already know
> searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> our best to limit them.
>
this works well for us. thanks! i would say, if possible, please avoid 
overlaps between Ceilometer and Oslo.

cheers,

-- 
gord



From emilien at redhat.com  Fri Sep  4 13:28:47 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 4 Sep 2015 09:28:47 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
In-Reply-To: <55E898E6.8050406@redhat.com>
References: <55E73B67.9020802@redhat.com> <55E74462.6030808@anteaya.info>
 <55E74657.90904@redhat.com> <55E898E6.8050406@redhat.com>
Message-ID: <55E99C8F.4090000@redhat.com>



On 09/03/2015 03:00 PM, Emilien Macchi wrote:
> 
> 
> On 09/02/2015 02:56 PM, Emilien Macchi wrote:
>>
>>
>> On 09/02/2015 02:48 PM, Anita Kuno wrote:
>>> On 09/02/2015 02:09 PM, Emilien Macchi wrote:
>>>> TL;DR, I propose to move our developer documentation from wiki to 
>>>> something like 
>>>> http://docs.openstack.org/developer/puppet-openstack
>>>
>>>> (Look at http://docs.openstack.org/developer/tempest/ for 
>>>> example).
>>>
>>> Looking at the tempest example:
>>> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source
>>> we see that the .rst files all live in the tempest repo in doc/source
>>> (with the exception of the README.rst file with is referenced from
>>> within doc/source when required:
>>> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source/overview.rst)
>>>
>>> So question: Where should the source .rst files for puppet developer
>>> documentation live? They will need a home.
>>
>> I guess we would need a new repository for that.
>> It could be puppet-openstack-doc (kiss)
>> or something else, any suggestion is welcome.
> 
> Are we ok for the name?
> proposal:
> puppet-openstack-doc
> puppet-openstack-documentation

Let's go for puppet-openstack-docs.

> 
> Any suggestion is welcome,
> 
>>>
>>> Thanks,
>>> Anita.
>>>
>>>
>>>> For now, most of our documentation is on 
>>>> https://wiki.openstack.org/wiki/Puppet but I think it would be 
>>>> great to use RST format and Gerrit so anyone could submit 
>>>> documentation contribute like we do for code.
>>>
>>>> I propose a basic table of contents now: Puppet modules 
>>>> introductions, Coding Guide, Reviewing code
>>>
>>>> I'm taking the opportunity of the puppet sprint to run this 
>>>> discussion and maybe start some work if people agree to move on.
>>>
>>>> Thanks,
>>>
>>>
>>>
>>>> __________________________________________________________________________
>>>
>>>
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: 
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/30e1cf6f/attachment.pgp>

From sbauza at redhat.com  Fri Sep  4 13:31:24 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Fri, 04 Sep 2015 15:31:24 +0200
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <55E99530.4040907@redhat.com>
References: <20150625142223.GC2646@crypt> <558C2338.1060204@inaugust.com>
 <55E99530.4040907@redhat.com>
Message-ID: <55E99D2C.5030309@redhat.com>



On 04/09/2015 at 14:57, Nikola Đipanov wrote:
> On 06/25/2015 04:50 PM, Monty Taylor wrote:
>> On 06/25/2015 10:22 AM, Andrew Laski wrote:
>>> I have been growing concerned recently with some attempts to formalize
>>> scheduler hints, both with API validation and Nova objects defining
>>> them, and want to air those concerns and see if others agree or can help
>>> me see why I shouldn't worry.
>>>
>>> Starting with the API I think the strict input validation that's being
>>> done, as seen in
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>>> is unnecessary, and potentially problematic.
>>>
>>> One problem is that it doesn't indicate anything useful for a client.
>>> The schema indicates that there are hints available but can make no
>>> claim about whether or not they're actually enabled.  So while a
>>> microversion bump would typically indicate a new feature available to an
>>> end user, in the case of a new scheduler hint a microversion bump really
>>> indicates nothing at all.  It does ensure that if a scheduler hint is
>>> used that it's spelled properly and the data type passed is correct, but
>>> that's primarily useful because there is no feedback mechanism to
>>> indicate an invalid or unused scheduler hint.  I think the API schema is
>>> a poor proxy for that deficiency.
>>>
>>> Since the exposure of a hint means nothing as far as its usefulness, I
>>> don't think we should be codifying them as part of our API schema at
>>> this time.  At some point I imagine we'll evolve a more useful API for
>>> passing information to the scheduler as part of a request, and when that
>>> happens I don't think needing to support a myriad of meaningless hints
>>> in older API versions is going to be desirable.
>> I totally agree.
>>
>> If hints are to become an object, then they need to be _real_ resources that
>> can be listed, and that have structured metadata that has an API.
>> Flavors are a great example of this. From an end user perspective, I can
>> ask the cloud what flavors exist, those flavors tell me information that
>> I can use to make a decision, and I can pass in a reference to those
>> things. If I pass in an invalid flavor, I get a meaningful error message.
>>
>>> Finally, at this time I'm not sure we should take the stance that only
>>> in-tree scheduler hints are supported.  While I completely agree with
>>> the desire to expose things in cross-cloud ways as we've done and are
>>> looking to do with flavor and image properties I think scheduling is an
>>> area where we want to allow some flexibility for deployers to write and
>>> expose scheduling capabilities that meet their specific needs.  Over
>>> time I hope we will get to a place where some standardization can
>>> happen, but I don't think locking in the current scheduling hints is the
>>> way forward for that.  I would love to hear from multi-cloud users here
>>> and get some input on whether that's crazy and they are expecting
>>> benefits from validation on the current scheduler hints.
>> As a multi-cloud user, I do not use scheduler hints because there is no
>> API to discover that they exist, and also no shared sense of semantics.
>> (I know a flavor that claims 8G of RAM will give me, you guessed it, 8G
>> of ram) So I consider scheduler hints currently to be COMPLETE vendor
>> lock-in and/or only things to be used by private cloud folks who are
>> also admins of their clouds.
>>
>> I would not touch them with a 10 foot pole until such a time as there is
>> an actual API for listing, describing and selecting them.
>>
>> I would suggest that if we make one of those, we should quickly
>> formalize meanings of fields - so that cloud can have specific hints
>> that seem like cloud content - but that the way I learn about them is
>> the same, and if there are two hints that do the same thing I can expect
>> them to look the same in two different clouds.
>>
> So this kind of argumentation keeps confusing me TBH. Unless I am not
> understanding some basic things about how Nova works, the above argument
> cleanly applies to flavors as well. Flavor '42' is not going to be the
> same thing across clouds, but that's not where this ends. Once you throw
> in extra_specs, in particular related to PCI devices and NUMA/CPU
> pinning features. There is really no discoverability there whatsoever (*).
>
> What I am trying to get to is not whether this is right or wrong, but to
> point out the fact that Flavors are simply not a good abstraction that
> can have reasonable meaning "across cloud boundaries" (i.e. different
> Nova deployments), at least the way they are implemented at the moment.
> We should not pretend that they are, and try to demonize useful code
> making use of them, but come up with a better abstraction that can have
> reasonable meaning across different deployments.
>
> I think this is what Andrew was hinting at when he said that scheduling
> is an area that cannot reasonably be standardized in this way.
>
> I recently spoke to John briefly about this and got the feeling that he
> has similar views - but I'd encourage him to comment if he wishes to do so.
>
> N.
>
> (*) For PCI devices, we can list aliases per host, but that's clearly an
> admin API so not suitable for end user consumption really, and the alias
> is an opaque string that is defined in nova.conf - has no meaning
> outside a particular deployment really.

So, MHO on that is that hints and Flavors are children of a cloud. You 
can have the same name for two different boys, but unless you ask them 
who their parents are, you can't tell them apart.

Okay, the analogy is really bad. Let's stop it, and let me just provide 
some thoughts. While I'm really happy to have a consistent API for 
querying Flavors, we still need to ask the API to get the list of 
flavors and how they're set.

Why couldn't it be the same for scheduler hints? An API endpoint that is 
really clear about how to ask and what the response body fields are is 
IMHO needed, to help users know what hints they can use and how to use 
them.

My .02 euros,
-Sylvain
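A minimal sketch of what such a discovery response could look like (the field names and the shape of the body are purely illustrative -- no such Nova endpoint exists today; only the hint names themselves correspond to in-tree filters):

```python
# Purely illustrative response body for a hypothetical scheduler-hint
# discovery endpoint. Field names are invented for this sketch.
hint_catalog = {
    "scheduler_hints": [
        {
            "name": "same_host",
            "enabled": True,
            "description": "Schedule on the same host as the given "
                           "instance UUIDs.",
            "value_type": "list of instance UUIDs",
        },
        {
            "name": "different_host",
            "enabled": False,
            "description": "Schedule on a host different from the "
                           "given instance UUIDs.",
            "value_type": "list of instance UUIDs",
        },
    ]
}

# A client could then check a hint is actually enabled before using it,
# instead of guessing from the API schema:
enabled = {h["name"] for h in hint_catalog["scheduler_hints"]
           if h["enabled"]}
print("same_host" in enabled)       # True
print("different_host" in enabled)  # False
```

The key point of the sketch is the "enabled" flag: it answers the question the current API schema cannot, namely whether a hint is merely spelled correctly or actually honoured by this deployment.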



> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From mordred at inaugust.com  Fri Sep  4 14:04:34 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 04 Sep 2015 10:04:34 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look like to
	the user
Message-ID: <55E9A4F2.5030809@inaugust.com>

mordred at camelot:~$ neutron net-create test-net-mt
Policy doesn't allow create_network to be performed.

Thank you neutron. Excellent job.

Here's what that looks like at the REST layer:

DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015 
13:55:47 GMT connection: close content-type: application/json; 
charset=UTF-8 content-length: 130 x-openstack-request-id: 
req-ba05b555-82f4-4aaf-91b2-bae37916498d
RESP BODY: {"NeutronError": {"message": "Policy doesn't allow 
create_network to be performed.", "type": "PolicyNotAuthorized", 
"detail": ""}}

As a user, I am not confused. I do not think that maybe I made a mistake 
with my credentials. The cloud in question simply does not allow user 
creation of networks. I'm fine with that. (as a user, that might make 
this cloud unusable to me - but that's a choice I can now make with 
solid information easily. Turns out, I don't need to create networks for 
my application, so this actually makes it easier for me personally)
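For a client author, a response body like the one above is easy to handle programmatically. A rough sketch of what that could look like (the parsing below assumes only the NeutronError body shown earlier in this message; it is not taken from any particular client library):

```python
import json

def explain_failure(status_code, body):
    """Turn a Neutron policy-denial response into a clear message,
    distinguishing it from an authentication problem."""
    if status_code != 403:
        return None
    error = json.loads(body).get("NeutronError", {})
    if error.get("type") == "PolicyNotAuthorized":
        # The cloud disallows this operation by policy;
        # the user's credentials are fine.
        return "Disabled by cloud policy: %s" % error.get("message", "")
    return None

# The REST body quoted above:
body = ('{"NeutronError": {"message": "Policy doesn\'t allow '
        'create_network to be performed.", "type": "PolicyNotAuthorized", '
        '"detail": ""}}')
print(explain_failure(403, body))
# Disabled by cloud policy: Policy doesn't allow create_network to be performed.
```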

In any case- rather than complaining and being a whiny brat about 
something that annoys me - I thought I'd say something nice about 
something that the neutron team has done that especially pleases me. I 
would love it if this became the experience across the board in 
OpenStack for times when a feature of the API is disabled by local 
policy. It's possible it already is and I just haven't directly 
experienced it - so please don't take this as a backhanded condemnation 
of anyone else.

Monty


From nik.komawar at gmail.com  Fri Sep  4 14:24:26 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 4 Sep 2015 10:24:26 -0400
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E96EEE.4070306@openstack.org>
References: <55E96EEE.4070306@openstack.org>
Message-ID: <55E9A99A.5040001@gmail.com>

No dedicated time slot for cross-project sessions this time around?

On 9/4/15 6:14 AM, Thierry Carrez wrote:
> Hi PTLs,
>
> Here is the proposed slot allocation for every "big tent" project team
> at the Mitaka Design Summit in Tokyo. This is based on the requests the
> liberty PTLs have made, space availability and project activity &
> collaboration needs.
>
> We have a lot less space (and time slots) in Tokyo compared to
> Vancouver, so we were unable to give every team what they wanted. In
> particular, there were far more workroom requests than we have
> available, so we had to cut down on those quite heavily. Please note
> that we'll have a large lunch room with roundtables inside the Design
> Summit space that can easily be abused (outside of lunch) as space for
> extra discussions.
>
> Here is the allocation:
>
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | day: full day, morn: only morning, aft: only afternoon
>
> Neutron: 12fb, cm:day
> Nova: 14fb, cm:day
> Cinder: 5fb, 4wr, cm:day	
> Horizon: 2fb, 7wr, cm:day	
> Heat: 4fb, 8wr, cm:morn
> Keystone: 7fb, 3wr, cm:day
> Ironic: 4fb, 4wr, cm:morn
> Oslo: 3fb, 5wr
> Rally: 1fb, 2wr
> Kolla: 3fb, 5wr, cm:aft
> Ceilometer: 2fb, 7wr, cm:morn
> TripleO: 2fb, 1wr, cm:full
> Sahara: 2fb, 5wr, cm:aft
> Murano: 2wr, cm:full
> Glance: 3fb, 5wr, cm:full	
> Manila: 2fb, 4wr, cm:morn
> Magnum: 5fb, 5wr, cm:full	
> Swift: 2fb, 12wr, cm:full	
> Trove: 2fb, 4wr, cm:aft
> Barbican: 2fb, 6wr, cm:aft
> Designate: 1fb, 4wr, cm:aft
> OpenStackClient: 1fb, 1wr, cm:morn
> Mistral: 1fb, 3wr	
> Zaqar: 1fb, 3wr
> Congress: 3wr
> Cue: 1fb, 1wr
> Solum: 1fb
> Searchlight: 1fb, 1wr
> MagnetoDB: won't be present
>
> Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
> PuppetOpenStack: 2fb, 3wr
> Documentation: 2fb, 4wr, cm:morn
> Quality Assurance: 4fb, 4wr, cm:full
> OpenStackAnsible: 2fb, 1wr, cm:aft
> Release management: 1fb, 1wr (shared meetup with QA)
> Security: 2fb, 2wr
> ChefOpenstack: will camp in the lunch room all week
> App catalog: 1fb, 1wr
> I18n: cm:morn
> OpenStack UX: 2wr
> Packaging-deb: 2wr
> Refstack: 2wr
> RpmPackaging: 1fb, 1wr
>
> We'll start working on laying out those sessions over the available
> rooms and time slots. If you have constraints (I already know
> searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> our best to limit them.
>

-- 

Thanks,
Nikhil



From ndipanov at redhat.com  Fri Sep  4 14:31:07 2015
From: ndipanov at redhat.com (=?UTF-8?Q?Nikola_=c4=90ipanov?=)
Date: Fri, 4 Sep 2015 15:31:07 +0100
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <55E99D2C.5030309@redhat.com>
References: <20150625142223.GC2646@crypt> <558C2338.1060204@inaugust.com>
 <55E99530.4040907@redhat.com> <55E99D2C.5030309@redhat.com>
Message-ID: <55E9AB2B.7020108@redhat.com>

On 09/04/2015 02:31 PM, Sylvain Bauza wrote:
> 
> 
> On 04/09/2015 at 14:57, Nikola Đipanov wrote:
>> On 06/25/2015 04:50 PM, Monty Taylor wrote:
>>> On 06/25/2015 10:22 AM, Andrew Laski wrote:
>>>> I have been growing concerned recently with some attempts to formalize
>>>> scheduler hints, both with API validation and Nova objects defining
>>>> them, and want to air those concerns and see if others agree or can
>>>> help
>>>> me see why I shouldn't worry.
>>>>
>>>> Starting with the API I think the strict input validation that's being
>>>> done, as seen in
>>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>>>>
>>>> is unnecessary, and potentially problematic.
>>>>
>>>> One problem is that it doesn't indicate anything useful for a client.
>>>> The schema indicates that there are hints available but can make no
>>>> claim about whether or not they're actually enabled.  So while a
>>>> microversion bump would typically indicate a new feature available
>>>> to an
>>>> end user, in the case of a new scheduler hint a microversion bump
>>>> really
>>>> indicates nothing at all.  It does ensure that if a scheduler hint is
>>>> used that it's spelled properly and the data type passed is correct,
>>>> but
>>>> that's primarily useful because there is no feedback mechanism to
>>>> indicate an invalid or unused scheduler hint.  I think the API
>>>> schema is
>>>> a poor proxy for that deficiency.
>>>>
>>>> Since the exposure of a hint means nothing as far as its usefulness, I
>>>> don't think we should be codifying them as part of our API schema at
>>>> this time.  At some point I imagine we'll evolve a more useful API for
>>>> passing information to the scheduler as part of a request, and when
>>>> that
>>>> happens I don't think needing to support a myriad of meaningless hints
>>>> in older API versions is going to be desirable.
>>> I totally agree.
>>>
>>> If hints are to become an object, then they need to be _real_ resources that
>>> can be listed, and that have structured metadata that has an API.
>>> Flavors are a great example of this. From an end user perspective, I can
>>> ask the cloud what flavors exist, those flavors tell me information that
>>> I can use to make a decision, and I can pass in a reference to those
>>> things. If I pass in an invalid flavor, I get a meaningful error
>>> message.
>>>
>>>> Finally, at this time I'm not sure we should take the stance that only
>>>> in-tree scheduler hints are supported.  While I completely agree with
>>>> the desire to expose things in cross-cloud ways as we've done and are
>>>> looking to do with flavor and image properties I think scheduling is an
>>>> area where we want to allow some flexibility for deployers to write and
>>>> expose scheduling capabilities that meet their specific needs.  Over
>>>> time I hope we will get to a place where some standardization can
>>>> happen, but I don't think locking in the current scheduling hints is
>>>> the
>>>> way forward for that.  I would love to hear from multi-cloud users here
>>>> and get some input on whether that's crazy and they are expecting
>>>> benefits from validation on the current scheduler hints.
>>> As a multi-cloud user, I do not use scheduler hints because there is no
>>> API to discover that they exist, and also no shared sense of semantics.
>>> (I know a flavor that claims 8G of RAM will give me, you guessed it, 8G
>>> of ram) So I consider scheduler hints currently to be COMPLETE vendor
>>> lock-in and/or only things to be used by private cloud folks who are
>>> also admins of their clouds.
>>>
>>> I would not touch them with a 10 foot pole until such a time as there is
>>> an actual API for listing, describing and selecting them.
>>>
>>> I would suggest that if we make one of those, we should quickly
>>> formalize meanings of fields - so that cloud can have specific hints
>>> that seem like cloud content - but that the way I learn about them is
>>> the same, and if there are two hints that do the same thing I can expect
>>> them to look the same in two different clouds.
>>>
>> So this kind of argumentation keeps confusing me TBH. Unless I am not
>> understanding some basic things about how Nova works, the above argument
>> cleanly applies to flavors as well. Flavor '42' is not going to be the
>> same thing across clouds, but that's not where this ends. Once you throw
>> in extra_specs, in particular related to PCI devices and NUMA/CPU
>> pinning features. There is really no discoverability there whatsoever
>> (*).
>>
>> What I am trying to get to is not whether this is right or wrong, but to
>> point out the fact that Flavors are simply not a good abstraction that
>> can have reasonable meaning "across cloud boundaries" (i.e. different
>> Nova deployments), at least the way they are implemented at the moment.
>> We should not pretend that they are, and try to demonize useful code
>> making use of them, but come up with a better abstraction that can have
>> reasonable meaning across different deployments.
>>
>> I think this is what Andrew was hinting at when he said that scheduling
>> is an area that cannot reasonably be standardized in this way.
>>
>> I recently spoke to John briefly about this and got the feeling that he
>> has similar views - but I'd encourage him to comment if he wishes to
>> do so.
>>
>> N.
>>
>> (*) For PCI devices, we can list aliases per host, but that's clearly an
>> admin API so not suitable for end user consumption really, and the alias
>> is an opaque string that is defined in nova.conf - has no meaning
>> outside a particular deployment really.
> 
> So, MHO on that is that hints and Flavors are children of a cloud. You
> can have the same name for two different boys, but unless you ask them
> who are their parents, you can't know who to ask.
> 
> Okay, the analogy is really bad. Let's stop it, and let me just provide
> some thoughts. While I'm really happy to have a consistent API for
> querying Flavors, we still need to ask the API to get the list of
> flavors and how they're set.
> 

You can't actually do this at all because extra specs are deployment
specific. Things that you expose have no meaning outside of the cloud
you are querying.


> Why couldn't it be the same for Scheduler hints ? Creating an API
> endpoint really clear about how to ask and what are the response body
> fields is IMHO then needed to help the users knowing what hints they can
> have and how to use them.
> 

Same as above.

> My .02 euros,
> -Sylvain
> 
> 
> 
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From andrew at lascii.com  Fri Sep  4 14:36:27 2015
From: andrew at lascii.com (Andrew Laski)
Date: Fri, 4 Sep 2015 10:36:27 -0400
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <55E99530.4040907@redhat.com>
References: <20150625142223.GC2646@crypt> <558C2338.1060204@inaugust.com>
 <55E99530.4040907@redhat.com>
Message-ID: <20150904143627.GH3226@crypt>

On 09/04/15 at 01:57pm, Nikola Đipanov wrote:
>On 06/25/2015 04:50 PM, Monty Taylor wrote:
>> On 06/25/2015 10:22 AM, Andrew Laski wrote:
>>> I have been growing concerned recently with some attempts to formalize
>>> scheduler hints, both with API validation and Nova objects defining
>>> them, and want to air those concerns and see if others agree or can help
>>> me see why I shouldn't worry.
>>>
>>> Starting with the API I think the strict input validation that's being
>>> done, as seen in
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>>> is unnecessary, and potentially problematic.
>>>
>>> One problem is that it doesn't indicate anything useful for a client.
>>> The schema indicates that there are hints available but can make no
>>> claim about whether or not they're actually enabled.  So while a
>>> microversion bump would typically indicate a new feature available to an
>>> end user, in the case of a new scheduler hint a microversion bump really
>>> indicates nothing at all.  It does ensure that if a scheduler hint is
>>> used that it's spelled properly and the data type passed is correct, but
>>> that's primarily useful because there is no feedback mechanism to
>>> indicate an invalid or unused scheduler hint.  I think the API schema is
>>> a poor proxy for that deficiency.
>>>
>>> Since the exposure of a hint means nothing as far as its usefulness, I
>>> don't think we should be codifying them as part of our API schema at
>>> this time.  At some point I imagine we'll evolve a more useful API for
>>> passing information to the scheduler as part of a request, and when that
>>> happens I don't think needing to support a myriad of meaningless hints
>>> in older API versions is going to be desirable.
>>
>> I totally agree.
>>
>> If hints are to become an object, then need to be _real_ resources that
>> can be listed, and that have structured metadata that has an API.
>> Flavors are a great example of this. From an end user perspective, I can
>> ask the cloud what flavors exist, those flavors tell me information that
>> I can use to make a decision, and I can pass in a reference to those
>> things. If I pass in an invalid flavor, I get a meaningful error message.
>>
>>> Finally, at this time I'm not sure we should take the stance that only
>>> in-tree scheduler hints are supported.  While I completely agree with
>>> the desire to expose things in cross-cloud ways as we've done and are
>>> looking to do with flavor and image properties I think scheduling is an
>>> area where we want to allow some flexibility for deployers to write and
>>> expose scheduling capabilities that meet their specific needs.  Over
>>> time I hope we will get to a place where some standardization can
>>> happen, but I don't think locking in the current scheduling hints is the
>>> way forward for that.  I would love to hear from multi-cloud users here
>>> and get some input on whether that's crazy and they are expecting
>>> benefits from validation on the current scheduler hints.
>>
>> As a multi-cloud user, I do not use scheduler hints because there is no
>> API to discover that they exist, and also no shared sense of semantics.
>> (I know a flavor that claims 8G of RAM will give me, you guessed it, 8G
>> of ram) So I consider scheduler hints currently to be COMPLETE vendor
>> lock-in and/or only things to be used by private cloud folks who are
>> also admins of their clouds.
>>
>> I would not touch them with a 10 foot pole until such a time as there is
>> an actual API for listing, describing and selecting them.
>>
>> I would suggest that if we make one of those, we should quickly
>> formalize meanings of fields - so that cloud can have specific hints
>> that seem like cloud content - but that the way I learn about them is
>> the same, and if there are two hints that do the same thing I can expect
>> them to look the same in two different clouds.
>>
>
>So this kind of argumentation keeps confusing me TBH. Unless I am not
>understanding some basic things about how Nova works, the above argument
>cleanly applies to flavors as well. Flavor '42' is not going to be the
>same thing across clouds, and that's not where it ends: once you throw
>in extra_specs, in particular those related to PCI devices and NUMA/CPU
>pinning features, there is really no discoverability there whatsoever (*).
>
>What I am trying to get to is not whether this is right or wrong, but to
>point out the fact that Flavors are simply not a good abstraction that
>can have reasonable meaning "across cloud boundaries" (i.e. different
>Nova deployments), at least the way they are implemented at the moment.
>We should not pretend that they are, and try to demonize useful code
>making use of them, but come up with a better abstraction that can have
>reasonable meaning across different deployments.
>
>I think this is what Andrew was hinting at when he said that scheduling
>is an area that cannot reasonably be standardized in this way.

Essentially.  Flavors work out because they can be queried in order to 
get at the important information.  It doesn't matter what the flavor is 
called in each cloud because you can programmatically find the flavor 
that gives you 8 GB of root disk.  I'm ignoring the other part of your 
point for now which is that there's no guarantee you can find a matching 
flavor across clouds or that you can even ascertain the exact meaning of 
a flavor when extra_specs are involved, but I agree.

But flavors still work well enough because something like root_gb is a 
concrete concept that doesn't change across clouds.  Scheduling concepts 
are more difficult to nail down in a consistent way.  There are some 
things like affinity that seem easy enough to express but still have 
some subjectivity to them.  How far away or how close should instances 
be in order to satisfy those constraints?  The in-tree hints require 
instances to be on the same host or different hosts.  But what if we 
wanted to provide affinity between a set of hosts, what's important to 
express there?  The number of hosts in that set, the fact that they're 
physically close to the same volume store, their IP space, etc...?  
There are so many potential things that deployers may want to provide 
that we almost need a full grammar to describe it all.  Maybe we'll get 
there, but for now I think we should be hesitant about locking this 
down.
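Andrew's flavor point can be made concrete with a toy sketch. The flavor
data below is invented, but the selection logic only touches concrete
fields like root_gb, which is exactly what makes flavors portable across
clouds in a way today's scheduler hints are not:

```python
# Toy illustration: matching a flavor by concrete, cloud-independent
# fields. The flavor names and sizes are made up for this example.
def find_flavors(flavors, min_root_gb=0, min_ram_mb=0):
    """Return flavors meeting concrete, cloud-independent criteria."""
    return [f for f in flavors
            if f["root_gb"] >= min_root_gb and f["ram_mb"] >= min_ram_mb]

cloud_a = [
    {"name": "small", "root_gb": 4, "ram_mb": 4096},
    {"name": "medium", "root_gb": 8, "ram_mb": 8192},
]

# The *name* of the matching flavor differs per cloud, but the selection
# logic does not, because root_gb means the same thing everywhere.
matches = find_flavors(cloud_a, min_root_gb=8)
```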

>
>I recently spoke to John briefly about this and got the feeling that he
>has similar views - but I'd encourage him to comment if he wishes to do so.
>
>N.
>
>(*) For PCI devices, we can list aliases per host, but that's clearly an
>admin API so not suitable for end user consumption really, and the alias
>is an opaque string that is defined in nova.conf - has no meaning
>outside a particular deployment really.
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From egafford at redhat.com  Fri Sep  4 14:40:55 2015
From: egafford at redhat.com (Ethan Gafford)
Date: Fri, 4 Sep 2015 10:40:55 -0400 (EDT)
Subject: [openstack-dev] [Openstack] [Horizon] [Sahara] FFE request for
 Sahara unified job interface map UI
In-Reply-To: <1213082964.21674808.1441377081740.JavaMail.zimbra@redhat.com>
Message-ID: <508867389.21683555.1441377655696.JavaMail.zimbra@redhat.com>

Hello all,

I request a FFE for the change at: https://review.openstack.org/#/c/209683/

This change enables a significant improvement to UX in Sahara's elastic data processing flow, which is already in the server and client layers of Sahara. Because it specifically aims at improving ease of use and comprehensibility, Horizon integration is critical to the success of the feature. The change itself is reasonably modular and thus low-risk; it will have no impact outside Sahara's job template creation and launch flow, and (barring unforeseen issues) no impact on users of the existing flow who choose not to use this feature.

Thank you,
Ethan


From andrew at lascii.com  Fri Sep  4 14:45:38 2015
From: andrew at lascii.com (Andrew Laski)
Date: Fri, 4 Sep 2015 10:45:38 -0400
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=KcaOzvM_pzL4Ea+g@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=KcaOzvM_pzL4Ea+g@mail.gmail.com>
Message-ID: <20150904144538.GI3226@crypt>

On 09/04/15 at 06:54pm, Ken'ichi Ohmichi wrote:
>2015-09-04 12:14 GMT+09:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
>> Hi Andrew,
>>
>> Sorry for this late response, I missed it.
>>
>> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
>>> I have been growing concerned recently with some attempts to formalize
>>> scheduler hints, both with API validation and Nova objects defining them,
>>> and want to air those concerns and see if others agree or can help me see
>>> why I shouldn't worry.
>>>
>>> Starting with the API I think the strict input validation that's being done,
>>> as seen in
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>>> is unnecessary, and potentially problematic.
>>>
>>> One problem is that it doesn't indicate anything useful for a client.  The
>>> schema indicates that there are hints available but can make no claim about
>>> whether or not they're actually enabled.  So while a microversion bump would
>>> typically indicate a new feature available to an end user, in the case of a
>>> new scheduler hint a microversion bump really indicates nothing at all.  It
>>> does ensure that if a scheduler hint is used that it's spelled properly and
>>> the data type passed is correct, but that's primarily useful because there
>>> is no feedback mechanism to indicate an invalid or unused scheduler hint.  I
>>> think the API schema is a poor proxy for that deficiency.
>>>
>>> Since the exposure of a hint means nothing as far as its usefulness, I don't
>>> think we should be codifying them as part of our API schema at this time.
>>> At some point I imagine we'll evolve a more useful API for passing
>>> information to the scheduler as part of a request, and when that happens I
>>> don't think needing to support a myriad of meaningless hints in older API
>>> versions is going to be desirable.
>>>
>>> Finally, at this time I'm not sure we should take the stance that only
>>> in-tree scheduler hints are supported.  While I completely agree with the
>>> desire to expose things in cross-cloud ways as we've done and are looking to
>>> do with flavor and image properties I think scheduling is an area where we
>>> want to allow some flexibility for deployers to write and expose scheduling
>>> capabilities that meet their specific needs.  Over time I hope we will get
>>> to a place where some standardization can happen, but I don't think locking
>>> in the current scheduling hints is the way forward for that.  I would love
>>> to hear from multi-cloud users here and get some input on whether that's
>>> crazy and they are expecting benefits from validation on the current
>>> scheduler hints.
>>>
>>> Now, objects.  As part of the work to formalize the request spec sent to the
>>> scheduler there's an effort to make a scheduler hints object.  This
>>> formalizes them in the same way as the API with no benefit that I can see.
>>> I won't duplicate my arguments above, but I feel the same way about the
>>> objects as I do with the API.  I don't think needing to update and object
>>> version every time a new hint is added is useful at this time, nor do I
>>> think we should lock in the current in-tree hints.
>>>
>>> In the end this boils down to my concern that the scheduling hints api is a
>>> really horrible user experience and I don't want it to be solidified in the
>>> API or objects yet.  I think we should re-examine how they're handled before
>>> that happens.
>>
>> Now we are discussing this on https://review.openstack.org/#/c/217727/
>> for allowing out-of-tree scheduler-hints.
>> When we wrote API schema for scheduler-hints, it was difficult to know
>> what are available API parameters for scheduler-hints.
>> Current API schema exposes them and I guess that is useful for API users also.
>>
>> One idea is that: How about auto-extending scheduler-hint API schema
>> based on loaded schedulers?
>> Now API schemas of "create/update/resize/rebuild a server" APIs are
>> auto-extended based on loaded extensions by using stevedore
>> library[1].
>> I guess we can apply the same way for scheduler-hints also in long-term.
>> Each scheduler needs to implement a method which returns available API
>> parameter formats and nova-api tries to get them then extends
>> scheduler-hints API schema with them.
>> That means out-of-tree schedulers also will be available if they
>> implement the method.
>> # In short-term, I can see "blocking additionalProperties" validation
>> disabled by the way.
>
>https://review.openstack.org/#/c/220440 is a prototype for the above idea.
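Roughly, the auto-extension idea quoted above amounts to merging
per-filter parameter schemas into the base scheduler-hints schema. The
sketch below uses a plain list where Nova would use stevedore to discover
loaded filters; the class and method names are hypothetical, not the
prototype's actual code:

```python
# Sketch: each loaded scheduler filter contributes its own hint schema,
# and the API layer merges them into one JSON-Schema-style document.
BASE_SCHEMA = {
    "type": "object",
    "properties": {},
    "additionalProperties": False,
}

class SameHostFilter:
    @staticmethod
    def get_hint_schema():
        # Hypothetical hook an in-tree or out-of-tree filter would
        # implement to advertise its accepted parameters.
        return {"same_host": {"type": "array", "items": {"type": "string"}}}

def build_hints_schema(loaded_filters):
    schema = dict(BASE_SCHEMA)
    schema["properties"] = {}
    for f in loaded_filters:
        # Out-of-tree filters get included too, as long as they
        # implement get_hint_schema().
        schema["properties"].update(f.get_hint_schema())
    return schema

schema = build_hints_schema([SameHostFilter])
```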

I like the idea of providing strict API validation for the scheduler 
hints if it accounts for out of tree extensions like this would do.  I 
do have a slight concern about how this works in a world where the 
scheduler does eventually get an HTTP interface that Nova uses and the 
code isn't necessarily accessible, but that can be worried about later.

This does mean that the scheduler hints are not controlled by 
microversions though, since we don't have a mechanism for out of tree 
extensions to signal their presence that way.  And even if they could it 
would still mean that identical microversions on different clouds 
wouldn't offer the same hints.  If we're accepting of that, which isn't 
really any different than having "additionalProperties: True", then this 
seems reasonable to me.
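The trade-off Andrew describes between a strict schema and
"additionalProperties: True" can be shown with a hand-rolled check; the
hint names and helper below are illustrative, not Nova's actual schema
code:

```python
# Minimal sketch of the additionalProperties trade-off: strict
# validation rejects unknown (e.g. out-of-tree) hints, permissive
# validation lets them pass through silently.
KNOWN_HINTS = {"group": str, "same_host": list, "different_host": list}

def validate_hints(hints, allow_additional):
    for name, value in hints.items():
        if name in KNOWN_HINTS:
            if not isinstance(value, KNOWN_HINTS[name]):
                raise ValueError("bad type for hint %r" % name)
        elif not allow_additional:
            # strict schema: unknown hints are rejected outright
            raise ValueError("unknown hint %r" % name)
        # with allow_additional=True, unknown hints pass with no feedback

validate_hints({"same_host": ["uuid-1"]}, allow_additional=False)
validate_hints({"my_custom_hint": "x"}, allow_additional=True)
```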

>
>Thanks
>Ken Ohmichi
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From morgan.fainberg at gmail.com  Fri Sep  4 14:55:31 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 4 Sep 2015 07:55:31 -0700
Subject: [openstack-dev] This is what disabled-by-policy should look
	like to the user
In-Reply-To: <55E9A4F2.5030809@inaugust.com>
References: <55E9A4F2.5030809@inaugust.com>
Message-ID: <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>



> On Sep 4, 2015, at 07:04, Monty Taylor <mordred at inaugust.com> wrote:
> 
> mordred at camelot:~$ neutron net-create test-net-mt
> Policy doesn't allow create_network to be performed.
> 
> Thank you neutron. Excellent job.
> 
> Here's what that looks like at the REST layer:
> 
> DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015 13:55:47 GMT connection: close content-type: application/json; charset=UTF-8 content-length: 130 x-openstack-request-id: req-ba05b555-82f4-4aaf-91b2-bae37916498d
> RESP BODY: {"NeutronError": {"message": "Policy doesn't allow create_network to be performed.", "type": "PolicyNotAuthorized", "detail": ""}}
> 
> As a user, I am not confused. I do not think that maybe I made a mistake with my credentials. The cloud in question simply does not allow user creation of networks. I'm fine with that. (as a user, that might make this cloud unusable to me - but that's a choice I can now make with solid information easily. Turns out, I don't need to create networks for my application, so this actually makes it easier for me personally)
> 

The 403 (yay good HTTP error choice) and message is great here.

We should make this the default (I think we can do something like this by baking it into the enforcer in oslo.policy so that it is consistent across OpenStack). Obviously the translation of errors would be more difficult if the enforcer is generating messages.
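A consistent denial payload of the kind suggested here could come from a
small shared helper. This is a toy sketch mirroring the Neutron body
quoted above; the helper itself is hypothetical, not oslo.policy's real
interface:

```python
# Toy sketch of a uniform "policy denied" body; field names mirror the
# NeutronError example quoted earlier in the thread.
import json

def policy_denied_body(action, service="Neutron"):
    return json.dumps({
        "%sError" % service: {
            "message": "Policy doesn't allow %s to be performed." % action,
            "type": "PolicyNotAuthorized",
            "detail": "",
        }
    })

# Served with HTTP 403, this tells the user exactly why the call failed,
# rather than leaving them guessing about bad credentials.
body = policy_denied_body("create_network")
```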

--Morgan




From lucasagomes at gmail.com  Fri Sep  4 15:03:21 2015
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Fri, 4 Sep 2015 16:03:21 +0100
Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swift] FYI: Defaulting
	to Keystone v3 API
Message-ID: <CAB1EZBomKom6_vXb36Yu=2v1EvYYMyX9Ufa9WzZuyvC6TAGFAQ@mail.gmail.com>

Hi,

This email is just an FYI: recently the patch [1] got merged in
DevStack and broke the Ironic gate [2], I haven't had time to dig into
the problem yet so I reverted the patch [3] to unblock our gate.

The work to convert to v3 seems to be close enough but not yet there
so I just want to bring a broader attention to it with this email.

Also, the Ironic job that is currently running in the DevStack gate is
not testing Ironic with the Swift module; there's a patch [4] changing
that, so I hope we will be able to identify the problem before we break
things next time.

[1] https://review.openstack.org/#/c/186684/
[2] http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
[3] https://review.openstack.org/220532
[4] https://review.openstack.org/#/c/220516/

Cheers,
Lucas


From tim at styra.com  Fri Sep  4 15:39:48 2015
From: tim at styra.com (Tim Hinrichs)
Date: Fri, 04 Sep 2015 15:39:48 +0000
Subject: [openstack-dev] [Congress] bugs for liberty release
Message-ID: <CAJjxPADr9u7nmwAtVZhhE_j7F=xUrpXpk1Q7exs+x4QVFvx_rw@mail.gmail.com>

Hi all,

I've found a few bugs that we could/should fix by the liberty release.  I
tagged them with "liberty-rc".  If we could all pitch in, that'd be great.
Let me know which ones you'd like to work on so I can assign them to you in
launchpad.

https://bugs.launchpad.net/congress/+bugs/?field.tag=liberty-rc

Thanks,
Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/71943880/attachment.html>

From thierry at openstack.org  Fri Sep  4 15:51:47 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 4 Sep 2015 17:51:47 +0200
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E9A99A.5040001@gmail.com>
References: <55E96EEE.4070306@openstack.org> <55E9A99A.5040001@gmail.com>
Message-ID: <55E9BE13.50102@openstack.org>

Nikhil Komawar wrote:
> No dedicated time slot for cross-project sessions this time around?

That's on the Tuesday. 3 parallel sessions all day.
In addition, the Ops track runs on Tuesday and Wednesday.

-- 
Thierry Carrez (ttx)


From anteaya at anteaya.info  Fri Sep  4 15:51:58 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Fri, 4 Sep 2015 11:51:58 -0400
Subject: [openstack-dev] [puppet] hosting developer documentation on
 http://docs.openstack.org/developer/
In-Reply-To: <55E99C8F.4090000@redhat.com>
References: <55E73B67.9020802@redhat.com> <55E74462.6030808@anteaya.info>
 <55E74657.90904@redhat.com> <55E898E6.8050406@redhat.com>
 <55E99C8F.4090000@redhat.com>
Message-ID: <55E9BE1E.5020806@anteaya.info>

On 09/04/2015 09:28 AM, Emilien Macchi wrote:
> 
> 
> On 09/03/2015 03:00 PM, Emilien Macchi wrote:
>> 
>> 
>> On 09/02/2015 02:56 PM, Emilien Macchi wrote:
>>> 
>>> 
>>> On 09/02/2015 02:48 PM, Anita Kuno wrote:
>>>> On 09/02/2015 02:09 PM, Emilien Macchi wrote:
>>>>> TL;DR, I propose to move our developer documentation from
>>>>> wiki to something like 
>>>>> http://docs.openstack.org/developer/puppet-openstack
>>>> 
>>>>> (Look at http://docs.openstack.org/developer/tempest/ for 
>>>>> example).
>>>> 
>>>> Looking at the tempest example:
>>>> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source
>>>> we see that the .rst files all live in the tempest repo in doc/source
>>>> (with the exception of the README.rst file, which is referenced
>>>> from within doc/source when required:
>>>> http://git.openstack.org/cgit/openstack/tempest/tree/doc/source/overview.rst)
>>>>
>>>> So question: Where should the source .rst files for puppet developer
>>>> documentation live? They will need a home.
>>> 
>>> I guess we would need a new repository for that. It could be
>>> puppet-openstack-doc (kiss) or something else, any suggestion
>>> is welcome.
>> 
>> Are we ok for the name? proposal: puppet-openstack-doc 
>> puppet-openstack-documentation
> 
> Let's go for puppet-openstack-docs.

Here is a patch: https://review.openstack.org/#/c/220555/
I still need to offer the corresponding patch to the governance repo.

Thanks,
Anita.

> 
>> 
>> Any suggestion is welcome,
>> 
>>>> 
>>>> Thanks, Anita.
>>>> 
>>>> 
>>>>> For now, most of our documentation is on 
>>>>> https://wiki.openstack.org/wiki/Puppet but I think it would
>>>>> be great to use RST format and Gerrit so anyone could
>>>>> submit documentation contributions like we do for code.
>>>> 
>>>>> I propose a basic table of contents now: Puppet modules
>>>>> introduction, Coding Guide, Reviewing code.
>>>> 
>>>>> I'm taking the opportunity of the puppet sprint to run this
>>>>> discussion and maybe start some work if people agree to
>>>>> move on.
>>>> 
>>>>> Thanks,



From henriquecostatruta at gmail.com  Fri Sep  4 16:11:06 2015
From: henriquecostatruta at gmail.com (Henrique Truta)
Date: Fri, 04 Sep 2015 16:11:06 +0000
Subject: [openstack-dev] [keystone] FFE Request for Reseller
Message-ID: <CABj-22jEYcQT03QUftK4DJZJ7dvLfoFZsLzCNiN92mOwsYuUCw@mail.gmail.com>

Hi Folks,

As you may know, the Reseller Blueprint was proposed and approved in Kilo (
https://review.openstack.org/#/c/139824/) with development postponed to
Liberty.

During this time, the 3 main patches of the chain were split into 8,
becoming smaller and easier to review. The first 2 of them were merged
before liberty-3 freeze, and some of the others have already received +2s.
The code is very mature and has had a keystone core member's support
throughout the whole release cycle.

I would like to request an FFE for the remaining 9 patches (reseller core)
which are already in review (starting from
https://review.openstack.org/#/c/213448/ to
https://review.openstack.org/#/c/161854/).

Henrique
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/a84ed847/attachment.html>

From msm at redhat.com  Fri Sep  4 16:16:24 2015
From: msm at redhat.com (michael mccune)
Date: Fri, 4 Sep 2015 12:16:24 -0400
Subject: [openstack-dev] [sahara] FFE request for heat wait condition
 support
In-Reply-To: <CA+O3VAhA2Xi_hKCaCB2PoWr8jUM0bQhwnSUAGx2gOGB0ksii6w@mail.gmail.com>
References: <CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A@mail.gmail.com>
 <CA+O3VAhA2Xi_hKCaCB2PoWr8jUM0bQhwnSUAGx2gOGB0ksii6w@mail.gmail.com>
Message-ID: <55E9C3D8.2080606@redhat.com>

makes sense to me, +1

mike

On 09/04/2015 06:37 AM, Vitaly Gridnev wrote:
> +1 for FFE, because of
>   1. Low risk of issues, fully covered with current scenario tests;
>   2. Implementation already on review
>
> On Fri, Sep 4, 2015 at 12:54 PM, Sergey Reshetnyak
> <sreshetniak at mirantis.com <mailto:sreshetniak at mirantis.com>> wrote:
>
>     Hi,
>
>     I would like to request FFE for wait condition support for Heat engine.
>     Wait condition reports signal about booting instance.
>
>     Blueprint:
>     https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions
>
>     Spec:
>     https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst
>
>     Patch:
>     https://review.openstack.org/#/c/169338/
>
>     Thanks,
>     Sergey Reshetnyak
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Best Regards,
> Vitaly Gridnev
> Mirantis, Inc
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From msm at redhat.com  Fri Sep  4 16:17:41 2015
From: msm at redhat.com (michael mccune)
Date: Fri, 4 Sep 2015 12:17:41 -0400
Subject: [openstack-dev] [Openstack] [Horizon] [Sahara] FFE request for
 Sahara unified job interface map UI
In-Reply-To: <508867389.21683555.1441377655696.JavaMail.zimbra@redhat.com>
References: <508867389.21683555.1441377655696.JavaMail.zimbra@redhat.com>
Message-ID: <55E9C425.1000103@redhat.com>

On 09/04/2015 10:40 AM, Ethan Gafford wrote:
> Hello all,
>
> I request a FFE for the change at: https://review.openstack.org/#/c/209683/
>
> This change enables a significant improvement to UX in Sahara's elastic data processing flow, which is already in the server and client layers of Sahara. Because it specifically aims at improving ease of use and comprehensibility, Horizon integration is critical to the success of the feature. The change itself is reasonably modular and thus low-risk; it will have no impact outside Sahara's job template creation and launch flow, and (barring unforeseen issues) no impact on users of the existing flow who choose not to use this feature.
>
> Thank you,
> Ethan

+1 from me, this feature has received much attention and seems pretty 
low-risk of introducing critical errors.

mike



From morgan.fainberg at gmail.com  Fri Sep  4 16:31:11 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 4 Sep 2015 09:31:11 -0700
Subject: [openstack-dev] FFE Request for completion of data driven
 assignment testing in Keystone
In-Reply-To: <F7ACA0E7-46DB-441C-A306-14510E9CC431@linux.vnet.ibm.com>
References: <DB4FED6C-267C-45E4-BA7B-5FB42D816F60@linux.vnet.ibm.com>
 <CAO69Nd=i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA@mail.gmail.com>
 <88142A7C-67DF-440F-A3B7-02966AAE6A9E@gmail.com>
 <55E95399.1070903@openstack.org>
 <F7ACA0E7-46DB-441C-A306-14510E9CC431@linux.vnet.ibm.com>
Message-ID: <CAGnj6auSoVWXWPGBoQ=Gq5Xbcn3GnbOgYn23DSvsUtBLV+nfwA@mail.gmail.com>

Henry,

I've applied the -2 (for Feature Freeze) to a bunch of patchsets, I think I
excluded your testing ones, but if the testing ones got -2'd please let me
know so I can remove the -2.

--Morgan

On Fri, Sep 4, 2015 at 2:07 AM, Henry Nash <henryn at linux.vnet.ibm.com>
wrote:

> Great, thanks.
>
> Henry
> > On 4 Sep 2015, at 09:17, Thierry Carrez <thierry at openstack.org> wrote:
> >
> > Morgan Fainberg wrote:
> >>
> >>>    I would like to request an FFE for the remaining two patches that
> >>>    are already in review
> >>>    (https://review.openstack.org/#/c/153897/ and
> https://review.openstack.org/#/c/154485/).
> >>>    These contain only test code and no functional changes, and
> >>>    increase our test coverage - as well as enable other items to be
> >>>    re-use the list_role_assignment backend method.
> >>>
> >>> Do we need a FFE for changes to tests?
> >>>
> >>
> >> I would say "no".
> >
> > Right. Extra tests (or extra docs for that matter) don't count as a
> > "feature" for the freeze. In particular it doesn't change the behavior
> > of the software or invalidate testing that may have been conducted.
> >
> > --
> > Thierry Carrez (ttx)
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/fc436312/attachment.html>

From ben at swartzlander.org  Fri Sep  4 16:36:26 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Fri, 4 Sep 2015 12:36:26 -0400
Subject: [openstack-dev] [Manila] Gate breakage and technical FFEs
Message-ID: <55E9C88A.3080907@swartzlander.org>

The Manila gate is finally unblocked, thanks to the efforts of Valeriy! 
I see patches going in now so all of the features which were granted 
technical FFEs should start merging IMMEDIATELY.

By my calculations, we lost approximately 30 hours of time to merge 
stuff before the scheduled feature freeze, so I'm going to grant about 
30 hours from now, plus the weekend to get everything in.

The new deadline is Tuesday 9/8, at 1200 UTC. Everything must be merged 
by that time, or it will be rescheduled to Mitaka, no exceptions, no 
excuses (if the gate breaks again, then fix it).

Those of you who weren't going to make the original deadline and were 
planning on asking for FFEs should consider yourself lucky. The gate 
getting stuck has bought you nearly 5 extra days to get your patches in 
order. For this reason I don't plan to grant any additional FFEs.

The following patches are on my list to get -2 if they're not merged by 
the deadline.

Migration

179790 - ganso - Add Share Migration feature
179791 - ganso - Share Migration support in generic driver
220278 - ganso - Add Share Migration tempest functional tests

CGs

215343 - cknight - Add DB changes for consistency-groups
215344 - cknight - Scheduler changes for consistency groups
215345 - cknight - Add Consistency Groups API
219891 - cknight - Consistency Group Support for the Generic Driver
215346 - cknight - Add functional tests for Manila consistency groups

NetApp CGs

215347 - cknight - Consistency groups in NetApp cDOT drivers

Mount Automation

201669 - vponomaryov - Add share hooks

Tempest Plugin

201955 - mkoderer - Use Tempest plugin interface

Windows SMB Driver

200154 - plucian - Add Windows SMB share driver

GlusterFS Driver

214462 - chenk - glusterfs*: factor out common parts
214921 - chenk - glusterfs/common: refactor GlusterManager
215021 - chenk - glusterfs-native: cut back on redundancy
215172 - chenk - glusterfs/layout: add layout base classes
215173 - chenk - glusterfs: volume mapped share layout
215293 - chenk - glusterfs: directory mapped share layout


-Ben Swartzlander



From stevemar at ca.ibm.com  Fri Sep  4 16:38:27 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Fri, 4 Sep 2015 11:38:27 -0500
Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swift][Barbican]
 FYI: Defaulting to Keystone v3 API
In-Reply-To: <CAB1EZBomKom6_vXb36Yu=2v1EvYYMyX9Ufa9WzZuyvC6TAGFAQ@mail.gmail.com>
References: <CAB1EZBomKom6_vXb36Yu=2v1EvYYMyX9Ufa9WzZuyvC6TAGFAQ@mail.gmail.com>
Message-ID: <201509041638.t84GcbI5013710@d03av05.boulder.ibm.com>


This change affected Barbican too, but they quickly tossed up a patch
to resolve the gate failures [1]. As much as I would like DevStack and
OpenStackClient to default to Keystone's v3 API, we should, considering
how close we are in the schedule, revert the initial patch (which I see
sdague already did). We need to determine which projects are hosting their
own devstack plugin scripts and update those first before bringing back the
original patch.

[1] https://review.openstack.org/#/c/220396/

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Lucas Alvares Gomes <lucasagomes at gmail.com>
To:	OpenStack Development Mailing List
            <openstack-dev at lists.openstack.org>
Date:	2015/09/04 10:07 AM
Subject:	[openstack-dev] [DevStack][Keystone][Ironic][Swift] FYI:
            Defaulting	to Keystone v3 API



Hi,

This email is just an FYI: recently, patch [1] was merged in
DevStack and broke the Ironic gate [2]. I haven't had time to dig into
the problem yet, so I reverted the patch [3] to unblock our gate.

The work to convert to v3 seems to be close, but it is not there yet,
so I just want to bring broader attention to it with this email.

Also, the Ironic job that is currently running in the DevStack gate is
not testing Ironic with the Swift module; there's a patch [4] changing
that, so I hope we will be able to identify the problem before we break
things next time.

[1] https://review.openstack.org/#/c/186684/
[2]
http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994

[3] https://review.openstack.org/220532
[4] https://review.openstack.org/#/c/220516/
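
For readers wondering what the v2-to-v3 conversion entails at the API
level, the token-request bodies differ structurally. Below is a minimal
sketch of the two payload shapes only; it is illustrative, not code from
any of the patches above, and the user/project names are placeholders:

```python
# Structural difference between Keystone v2.0 and v3 token requests.
# Illustrative sketch only; "demo"/"secret" are placeholder values.


def v2_token_request(username, password, tenant_name):
    """Body POSTed to /v2.0/tokens under the v2.0 API."""
    return {
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant_name,
        }
    }


def v3_token_request(username, password, project_name, domain_name="Default"):
    """Body POSTed to /v3/auth/tokens under the v3 API.

    Authentication ("identity") and authorization scope are separate
    sections, and users/projects are namespaced by domain.
    """
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain_name},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project_name, "domain": {"name": domain_name}}
            },
        }
    }
```

Code written against the v2.0 shape (e.g. anything reading `tenantName`)
breaks when DevStack starts handing out v3-style credentials, which is why
each project's devstack plugin needs auditing before the default flips.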

Cheers,
Lucas

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




From doug at doughellmann.com  Fri Sep  4 16:39:31 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 04 Sep 2015 12:39:31 -0400
Subject: [openstack-dev] [ptl][release] flushing unreleased client library
	changes
Message-ID: <1441384328-sup-9901@lrrr.local>


PTLs,

We have quite a few unreleased client changes pending, and it would
be good to go ahead and publish them so they can be tested as part
of the release candidate process. I have the full list of changes for
each project below, so please find yours, review them, and then
propose a release request to the openstack/releases repository.
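
For reference, a release request is a small YAML deliverable file
proposed against the openstack/releases repository. The sketch below is
hypothetical (the project, version number, and hash are placeholders;
the authoritative schema is described in that repository's README):

```yaml
# deliverables/liberty/python-exampleclient.yaml  (hypothetical sketch)
launchpad: python-exampleclient
releases:
  - version: 1.1.0
    projects:
      - repo: openstack/python-exampleclient
        hash: <full-40-char-commit-sha>
```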

On a separate note, for next cycle we need to do a better job of
releasing these much earlier (a few of these changes are at
least a month old). Remember that changes to libraries do not go
into the gate for consuming projects until that library is released.
If you have any suggestions for how to improve our tracking of
needed releases, let me know.

Doug


[ Unreleased changes in openstack/python-barbicanclient ]

Changes in python-barbicanclient 3.3.0..97cc46a
-----------------------------------------------
4572293 2015-08-27 19:38:39 -0500 Add epilog to parser
17ed50a 2015-08-25 14:58:25 +0000 Add Unit Tests for Store and Update Payload when Payload is zero
34256de 2015-08-25 09:56:24 -0500 Allow Barbican Client Secret Update Functionality

[ Unreleased changes in openstack/python-ceilometerclient ]

Changes in python-ceilometerclient 1.4.0..2006902
-------------------------------------------------
2006902 2015-08-26 14:09:00 +0000 Updated from global requirements
2429dae 2015-08-25 01:32:56 +0000 Don't try to get aodh endpoint if auth_url didn't provided
6498d55 2015-08-13 20:21:24 +0000 Updated from global requirements

[ Unreleased changes in openstack/python-cinderclient ]

Changes in python-cinderclient 1.3.1..1c82825
---------------------------------------------
1c82825 2015-09-02 17:18:28 -0700 Update path to subunit2html in post_test_hook
471aea8 2015-09-02 00:53:40 +0000 Adds command to fetch specified backend capabilities
2d979dc 2015-09-01 22:35:28 +0800 Volume status management for volume migration
50758ba 2015-08-27 09:39:58 -0500 Fixed test_password_prompted
dc6e823 2015-08-26 23:04:16 -0700 Fix help message for reset-state commands
f805f5a 2015-08-25 15:15:20 +0300 Add functional tests for python-cinderclient
8cc3ee2 2015-08-19 18:05:34 +0300 Add support '--all-tenants' for cinder backup-list
2c3169e 2015-08-11 10:18:01 -0400 CLI: Non-disruptive backup
5e26906 2015-08-11 13:14:04 +0300 Add tests for python-cinderclient
780a129 2015-08-09 15:57:00 +0900 Replace assertEqual(None, *) with assertIsNone in tests
a9405b1 2015-08-08 05:36:45 -0400 CLI: Clone CG
03542ee 2015-08-04 14:21:52 -0700 Fix ClientException init when there is no message on py34
2ec9a22 2015-08-03 11:14:44 +0800 Fixes table when there are multiline in result data
04caf88 2015-07-30 18:24:57 +0300 Set default OS_VOLUME_API_VERSION to '2'
bae0bb3 2015-07-27 01:28:00 +0000 Add commands for modifying image metadata
629e548 2015-07-24 03:34:55 +0000 Updated from global requirements
b51e43e 2015-07-21 01:04:02 +0000 Remove H302
b426b71 2015-07-20 13:21:02 +0800 Show backup and volume info in backup_restore
dc1186d 2015-07-16 19:45:08 -0700 Add response message when volume delete
075381d 2015-07-15 09:20:48 +0800 Add more details for replication
953f766 2015-07-14 18:51:58 +0800 New mock release(1.1.0) broke unit/function tests
8afc06c 2015-07-08 12:11:07 -0500 Remove unnecessary check for tenant information
c23586b 2015-07-08 11:42:59 +0800 Remove redundant statement and refactor
891ef3e 2015-06-25 13:27:42 +0000 Use shared shell arguments provided by Session

[ Unreleased changes in openstack/python-congressclient ]

Changes in python-congressclient 1.1.0..0874721
-----------------------------------------------
0f699f8 2015-09-02 14:47:32 +0800 Add actions listing command
d7fa523 2015-08-26 14:09:29 +0000 Updated from global requirements
36f2b47 2015-08-11 01:38:32 +0000 Updated from global requirements
ee07cb3 2015-07-27 15:47:12 +0800 Fix constant name
f9858a8 2015-07-27 15:19:04 +0800 Support version list API in client
9693132 2015-06-20 22:38:10 +0900 Adding a test of datasource table show CLI
a102014 2015-05-21 04:52:12 -0300 Favor the use of importlib over Python internal __import__ statement
726d560 2015-05-07 23:37:04 +0000 Updated from global requirements
b8a176b 2015-04-24 16:31:49 +0800 Replace stackforge with openstack in README.rst
8c31d3f 2015-03-31 15:33:53 -0700 Add api bindings for datasource request-request trigger

[ Unreleased changes in openstack/python-cueclient ]

Changes in python-cueclient 0.0.1..d9ac712
------------------------------------------
d9ac712 2015-08-26 14:09:42 +0000 Updated from global requirements
47b81c3 2015-08-11 13:38:14 -0700 Update python-binding section for keystone v3 support
d30c3b1 2015-08-11 01:38:34 +0000 Updated from global requirements
14e5f05 2015-08-06 10:15:13 -0700 Adding size field cue cluster list command
0c7559b 2015-07-16 10:29:07 -0700 Rename end_points to endpoints in API
0e36051 2015-07-13 12:55:29 -0700 updating docs from stackforge->openstack
cccccda 2015-07-13 12:29:04 -0700 fixing oslo_serialization reference
b9352df 2015-07-10 17:04:45 -0700 Update .gitreview file for project rename
1acc546 2015-06-08 03:19:18 -0700 Rename cue command shell commands with message-broker
d5b3acd 2015-04-23 12:57:55 -0700 Refactor cue client tests
9590b5f 2015-04-18 21:13:39 -0700 Change nic type to list
3779999 2015-04-16 12:26:16 -0700 Resolving cluster wrapper issue Closes bug: #1445175
665dee5 2015-04-14 14:05:28 -0700 Add .coveragerc to better control coverage outputs
a992e48 2015-04-09 17:46:58 +0000 Remove cluster wrapper from response body
b302486 2015-04-01 16:19:19 -0700 Modifying CRD return type from dict to object
7618f95 2015-03-31 15:19:27 -0700 Expose endpoints in cluster list command
8c47aa6 2015-03-06 17:45:24 -0800 Removing openstackclient dependency
9c125cc 2015-03-06 16:27:16 -0800 Adding cue python binding

[ Unreleased changes in openstack/python-designateclient ]

Changes in python-designateclient 1.4.0..52d68e5
------------------------------------------------
f7d4dbb 2015-09-02 15:54:08 +0200 V2 CLI Support
86d988d 2015-08-26 14:09:56 +0000 Updated from global requirements
83d4cea 2015-08-19 18:31:49 +0800 Update github's URL
1e1b94c 2015-08-13 20:21:31 +0000 Updated from global requirements
71f465c 2015-08-11 10:12:25 +0200 Don't wildcard resolve names
74ee1a1 2015-08-11 01:38:35 +0000 Updated from global requirements
08191bb 2015-08-10 01:10:02 +0000 Updated from global requirements
035657c 2015-08-07 18:41:29 +0200 Improve help strings

[ Unreleased changes in openstack/python-glanceclient ]

Changes in python-glanceclient 1.0.0..90b7dc4
---------------------------------------------
90b7dc4 2015-09-04 10:29:01 +0900 Update path to subunit2html in post_test_hook
1e2274a 2015-09-01 18:03:41 +0200 Password should be prompted once

[ Unreleased changes in openstack/python-ironicclient ]

Changes in python-ironicclient 0.8.0..6a58f9d
---------------------------------------------
156ca47 2015-09-03 12:55:41 -0700 Fix functional tests job

[ Unreleased changes in openstack/python-ironic-inspector-client ]

Changes in python-ironic-inspector-client 1.0.1..7ac591e
--------------------------------------------------------
1ce6380 2015-08-26 14:10:34 +0000 Updated from global requirements
29e38a9 2015-08-25 18:48:38 +0200 Make our README friendly to OpenStack release-tools
9625bf7 2015-08-13 20:21:36 +0000 Updated from global requirements
95133c7 2015-08-12 14:58:58 +0200 Make sure we expose all API elements in the top-level package
bd3737c 2015-08-12 14:43:45 +0200 Drop comment about changing functional tests to use released inspector
69dc6ee 2015-08-12 14:37:05 +0200 Fix error message for unsupported API version
89695da 2015-08-11 01:38:40 +0000 Updated from global requirements
fe67f67 2015-08-10 01:10:07 +0000 Updated from global requirements
61448ba 2015-08-04 12:50:13 +0200 Implement optional API versioning
7d443fb 2015-07-23 14:13:38 +0200 Create own functional tests for the client
e7bb103 2015-07-22 04:59:33 +0000 Updated from global requirements
1e3d334 2015-07-17 16:17:44 +0000 Updated from global requirements
8d68f61 2015-07-12 15:22:07 +0000 Updated from global requirements
16dc081 2015-07-09 17:52:05 +0200 Use released ironic-inspector for functional testing
0df30c9 2015-07-08 20:40:34 +0200 Don't repeat requirements in tox.ini
2ea4b8c 2015-07-01 14:35:07 +0200 Add functional test
66f8551 2015-06-22 22:35:16 +0000 Updated from global requirements
71d491b 2015-06-23 00:02:29 +0900 Change to Capital letters

[ Unreleased changes in openstack/python-keystoneclient ]

Changes in python-keystoneclient 1.6.0..6231459
-----------------------------------------------
3e862bb 2015-09-02 17:20:17 -0700 Update path to subunit2html in post_test_hook
1697fd7 2015-09-02 11:39:35 -0500 Deprecate create Discover without session
3e26ff8 2015-08-31 12:49:34 -0700 Mask passwords when logging the HTTP response
f58661e 2015-08-31 15:36:04 +0000 Updated from global requirements
7c545e5 2015-08-29 11:28:01 -0500 Update deprecation text for Session properties
e76423f 2015-08-29 11:28:01 -0500 Proper deprecation for httpclient.USER_AGENT
42bd016 2015-08-29 11:28:01 -0500 Deprecate create HTTPClient without session
e0276c6 2015-08-26 06:24:27 +0000 Fix Accept header in SAML2 requests
d22cd9d 2015-08-20 17:10:05 +0000 Updated from global requirements
0cb46c9 2015-08-15 07:36:09 +0800 Expose token_endpoint.Token as admin_token
4bdbb83 2015-08-13 19:01:42 -0500 Proper deprecation for UserManager project argument
a50f8a1 2015-08-13 19:01:42 -0500 Proper deprecation for CredentialManager data argument
4e4dede 2015-08-13 19:01:42 -0500 Deprecate create v3 Client without session
b94a610 2015-08-13 19:01:42 -0500 Deprecate create v2_0 Client without session
962ab57 2015-08-13 19:01:42 -0500 Proper deprecation for Session.get_token()
afcf4a1 2015-08-13 19:01:42 -0500 Deprecate use of cert and key
58cc453 2015-08-13 18:59:31 -0500 Proper deprecation for Session.construct()
0d293ea 2015-08-13 18:58:27 -0500 Deprecate ServiceCatalog.get_urls() with no attr
803eb23 2015-08-13 18:57:31 -0500 Deprecate ServiceCatalog(region_name)
cba0a68 2015-08-13 20:21:41 +0000 Updated from global requirements
1cbfb2e 2015-08-13 02:18:54 +0000 Updated from global requirements
43e69cc 2015-08-10 01:10:11 +0000 Updated from global requirements
b54d9f1 2015-08-06 14:44:12 -0500 Stop using .keys() on dicts where not needed
6dae40e 2015-08-06 16:57:32 +0000 Inhrerit roles project calls on keystoneclient v3
51d9d12 2015-08-05 12:28:30 -0500 Deprecate openstack.common.apiclient
16e834d 2015-08-05 11:24:08 -0500 Move apiclient.base.Resource into keystoneclient
26534da 2015-08-05 14:59:23 +0000 oslo-incubator apiclient.exceptions to keystoneclient.exceptions
eaa7ddd 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient session and adapter properties
0c2fef5 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient.request methods
ada04ac 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient.tenant_id|name
1721e01 2015-08-04 09:56:43 -0500 Proper deprecation for HTTPClient tenant_id, tenant_name parameters
a9ef92a 2015-08-04 00:48:54 +0000 Updated from global requirements
22236fd 2015-08-02 11:22:18 -0500 Clarify setting socket_options
aa5738c 2015-08-02 11:18:45 -0500 Remove check for requests version
9e470a5 2015-07-29 03:50:34 +0000 Updated from global requirements
0b74590 2015-07-26 06:54:23 -0500 Fix tests passing user, project, and token
9f17732 2015-07-26 06:54:23 -0500 Proper deprecation for httpclient.request()
fb28e1a 2015-07-26 06:54:23 -0500 Proper deprecation for Dicover.raw_version_data unstable parameter
a303cbc 2015-07-26 06:54:23 -0500 Proper deprecation for Dicover.available_versions()
5547fe8 2015-07-26 06:54:23 -0500 Proper deprecation for is_ans1_token
ce58b07 2015-07-26 06:54:23 -0500 Proper deprecation for client.HTTPClient
c5b0319 2015-07-26 06:54:23 -0500 Proper deprecation for Manager.api
fee5ba7 2015-07-26 06:54:22 -0500 Stop using Manager.api
b1496ab 2015-07-26 06:54:22 -0500 Proper deprecation for BaseIdentityPlugin trust_id property
799e1fa 2015-07-26 06:54:22 -0500 Proper deprecation for BaseIdentityPlugin username, password, token_id properties
85b32fc 2015-07-26 06:54:22 -0500 Proper deprecations for modules
6950527 2015-07-25 09:51:42 +0000 Use UUID values in v3 test fixtures
1a2ccb0 2015-07-24 11:05:05 -0500 Proper deprecation for AccessInfo management_url property
6d82f1f 2015-07-24 11:05:05 -0500 Proper deprecation for AccessInfo auth_url property
66fd1eb 2015-07-24 11:04:04 -0500 Stop using deprecated AccessInfo.auth_url and management_url
f782ee8 2015-07-24 09:14:40 -0500 Proper deprecation for AccessInfo scoped property
8d65259 2015-07-24 08:16:03 -0500 Proper deprecation for AccessInfo region_name parameter
610844d 2015-07-24 08:05:13 -0500 Deprecations fixture support calling deprecated function
c6b14f9 2015-07-23 20:14:14 -0500 Set reasonable defaults for TCP Keep-Alive
bb6463e 2015-07-23 07:44:44 +0000 Updated from global requirements
0d5415e 2015-07-23 07:22:57 +0000 Remove unused time_patcher
7d5d8b3 2015-07-22 23:41:07 +0300 Make OAuth testcase use actual request headers
98326c7 2015-07-19 09:49:04 -0500 Prevent attempts to "filter" list() calls by globally unique IDs
a4584c4 2015-07-15 22:01:14 +0000 Add get_token_data to token CRUD
2f90bb6 2015-07-15 01:37:25 +0000 Updated from global requirements
3668d9c 2015-07-13 04:53:17 -0700 py34 not py33 is tested and supported
d3b9755 2015-07-12 15:22:13 +0000 Updated from global requirements
8bab2c2 2015-07-11 08:01:39 -0500 Remove confusing deprecation comment from token_to_cms
4034366 2015-07-08 20:12:31 +0000 Fixes modules index generated by Sphinx
c503c29 2015-07-02 18:57:20 +0000 Updated from global requirements
31f326d 2015-06-30 12:58:55 -0500 Unit tests catch deprecated function usage
225832f 2015-06-30 12:58:55 -0500 Switch from deprecated oslo_utils.timeutils.strtime
97c2c69 2015-06-30 12:58:55 -0500 Switch from deprecated isotime
ef0f267 2015-06-29 00:12:44 +0000 Remove keystoneclient CLI references in README
20db11f 2015-06-29 00:12:11 +0000 Update README.rst and remove ancient reference
a951023 2015-06-28 05:49:46 +0000 Remove unused images from docs
2b058ba 2015-06-22 20:00:20 +0000 Updated from global requirements
02f07cf 2015-06-17 11:15:03 -0400 Add openid connect client support
350b795 2015-06-13 09:02:09 -0500 Stop using tearDown
f249332 2015-06-13 09:02:09 -0500 Use mock rather than mox
75d4b16 2015-06-13 09:01:44 -0500 Remove unused setUp from ClientTest
08783e0 2015-06-11 00:48:15 +0000 Updated from global requirements
d99c56f 2015-06-09 13:42:53 -0400 Iterate over copy of sys.modules keys in Python2/3
945e519 2015-06-08 21:11:54 -0500 Use random strings for test fixtures
c0046d7 2015-06-08 20:29:07 -0500 Stop using function deprecated in Python 3
2a032a5 2015-06-05 09:45:08 -0400 Use python-six shim for assertRaisesRegex/p
86018ca 2015-06-03 21:01:18 -0500 tox env for Bandit
f756798 2015-05-31 10:27:01 -0500 Cleanup fixture imports
28fd6d5 2015-05-30 12:36:16 +0000 Removes unused debug logging code
0ecf9b1 2015-05-26 17:05:09 +1000 Add get_communication_params interface to plugins
8994d90 2015-05-04 16:07:31 +0800 add --slowest flag to testr
831ba03 2015-03-31 08:47:25 +1100 Support /auth routes for list projects and domains

[ Unreleased changes in openstack/python-magnumclient ]

Changes in python-magnumclient 0.2.1..e6dd7bb
---------------------------------------------
31417f7 2015-08-27 04:18:52 +0000 Updated from global requirements
97dbb71 2015-08-26 18:15:06 +0000 Rename existing service-* to coe-service-*
39e7b24 2015-08-26 14:11:20 +0000 Updated from global requirements
ea83d71 2015-08-21 22:21:37 +0000 Remove name from test token
b450891 2015-08-18 05:28:06 -0400 This adds proxy feature in magnum client
d55e7f3 2015-08-13 20:21:44 +0000 Updated from global requirements
ba689b8 2015-08-10 01:10:14 +0000 Updated from global requirements
9de9a3a 2015-08-04 00:48:58 +0000 Updated from global requirements
292310c 2015-08-03 11:22:06 -0400 Add support for multiple master nodes
3d5e0ed 2015-07-29 03:50:37 +0000 Updated from global requirements
24577e3 2015-07-22 23:49:23 +0300 Remove uuidutils from openstack.common
6455a1c 2015-07-22 04:59:40 +0000 Updated from global requirements
b9681e9 2015-07-21 23:16:57 +0000 Updated from global requirements
c457cda 2015-07-17 23:06:58 +0800 Remove H803 rule
de2e368 2015-07-15 01:37:31 +0000 Updated from global requirements
70830ed 2015-07-12 15:22:16 +0000 Updated from global requirements
017fccb 2015-06-30 20:03:09 +0000 Updated from global requirements
252586a 2015-06-24 10:55:11 +0800 Rename image_id to image when create a container
d85771a 2015-06-22 08:27:54 +0000 Updated from global requirements
0bdc3ce 2015-06-18 15:24:46 -0700 Add missing dependency oslo.serialization
0469af8 2015-06-16 19:23:05 +0000 Updated from global requirements
0c9f735 2015-06-16 11:44:54 +0530 Add additional arguments to CLI for container-create.
b9ca1d5 2015-06-15 12:54:41 +0900 Pass environment variables of proxy to tox
ac983f1 2015-06-12 05:30:38 +0000 Change container-execute to container-exec
8e75123 2015-06-11 00:48:19 +0000 Updated from global requirements
80d8f1a 2015-06-08 11:09:42 +0000 Sync from latest oslo-incubator
7f58e6f 2015-06-04 16:24:29 +0000 Updated from global requirements
0ea4159 2015-05-27 11:44:04 +0200 Fix translation setup

[ Unreleased changes in openstack/python-manilaclient ]

Changes in python-manilaclient 1.2.0..0c7b857
---------------------------------------------
0c7b857 2015-08-27 04:18:54 +0000 Updated from global requirements
fec43dd 2015-08-19 22:01:12 -0400 Move requirement Openstack client to test-requirements
fa05919 2015-08-15 20:54:25 +0000 Updated from global requirements
5f45b18 2015-08-12 17:20:01 +0000 Make spec_driver_handles_share_servers required
f0c6685 2015-08-06 10:47:11 +0800 Modify the manage command prompt information
8a3702d 2015-08-04 00:57:34 +0000 Updated from global requirements
4454542 2015-07-22 04:59:42 +0000 Updated from global requirements
e919065 2015-07-16 06:59:53 -0400 Add functional tests for access rules
c954684 2015-07-15 20:45:50 +0000 Updated from global requirements
2a4f79c 2015-07-15 08:37:48 -0400 Fix post_test_hook and update test-requirements
4f25278 2015-06-30 22:45:35 +0000 Updated from global requirements
92643e8 2015-06-22 08:27:56 +0000 Updated from global requirements
d2c0e26 2015-06-16 19:23:07 +0000 Updated from global requirements
6b2121e 2015-06-05 17:21:43 +0300 Add share shrink API
611d4fa 2015-06-04 16:24:31 +0000 Updated from global requirements
6733ae3 2015-06-03 11:35:01 +0000 Updated from global requirements
bd28eda 2015-06-02 10:32:55 +0000 Add rw functional tests for shares metadata
ada9825 2015-06-02 12:46:34 +0300 Add rw functional tests for shares
c668f00 2015-05-29 22:53:37 +0000 Updated from global requirements

[ Unreleased changes in openstack/python-muranoclient ]

Changes in python-muranoclient 0.6.3..7897d7f
---------------------------------------------
54918f5 2015-09-03 10:39:45 +0000 Added the support of Glance Artifact Repository
39967d9 2015-09-02 19:09:10 +0000 Copy the code of Glance V3 (artifacts) client
1141dd5 2015-09-02 17:18:28 +0300 Fixed issue with cacert parameter
66af770 2015-08-31 21:53:45 +0800 Update the git ingore
6e9f436 2015-08-31 13:48:21 +0800 Fix the reversed incoming parameters of assertEqual
b632ede 2015-08-27 14:18:12 +0800 Add olso.log into muranoclient's requirements.txt
75a616f 2015-08-26 14:11:36 +0000 Updated from global requirements
6242211 2015-08-26 12:45:10 +0800 Standardise help parameter of CLI commands
924a83f 2015-08-26 09:56:04 +0800 Fix some spelling mistakes of setup files

[ Unreleased changes in openstack/python-neutronclient ]

Changes in python-neutronclient 2.6.0..d75f79f
----------------------------------------------
d75f79f 2015-09-04 11:06:00 +0900 Update path to subunit2html in post_test_hook
627f68e 2015-09-02 08:19:20 +0000 Updated from global requirements
0558b49 2015-09-01 02:01:07 +0000 Add REJECT rule on FWaaS Client
a4f64f6 2015-08-27 09:36:04 -0700 Update tls_container_id to tls_container_ref
9a51f2d 2015-08-27 04:18:58 +0000 Updated from global requirements
31df9de 2015-08-26 16:32:21 +0300 Support CLI changes for QoS (2/2).
002a0c7 2015-08-26 16:26:21 +0300 Support QoS neutron-client (1/2).
a174215 2015-08-25 09:26:00 +0800 Clear the extension requirement
bb7124e 2015-08-23 05:28:30 +0000 Updated from global requirements
c44b57f 2015-08-21 16:44:52 +0000 Make subnetpool-list show correct address scope column
abc2b65 2015-08-21 16:44:28 +0000 Fix find_resourceid_by_name call for address scopes
45ed3ec 2015-08-20 21:57:25 +0800 Add extension name to extension's command help text line
f6ca3a1 2015-08-20 12:04:37 +0530 Adding registration interface for non_admin_status_resources
54e7b94 2015-08-20 12:00:52 +0800 Add document for entry point in setup.cfg
de5d3bb 2015-08-19 11:32:54 +0000 Create hooks for running functional test
d749973 2015-08-19 13:51:30 +0530 Support Command line changes for Address Scope
5271890 2015-08-13 14:26:59 +0000 Remove --shared option from firewall-create
16e02dd 2015-08-12 18:05:56 +0300 Disable failing vpn tests
22c8492 2015-08-10 17:14:54 +0530 Support RBAC neutron-client changes.
8da3dc8 2015-08-10 12:59:58 +0300 Remove newlines from request and response log
ccf6fb8 2015-07-17 16:18:04 +0000 Updated from global requirements
d61a5b5 2015-07-15 01:37:35 +0000 Updated from global requirements
ab7d9e8 2015-07-14 18:00:43 +0300 Devref documentation for client command extension support
0094e51 2015-07-14 15:55:08 +0530 Support CLI changes for associating subnetpools and address-scopes.
f936493 2015-07-13 09:49:57 +0900 Remove unused AlreadyAttachedClient
31f8f23 2015-07-13 09:11:53 +0900 Avoid overwriting parsed_args
043656c 2015-07-12 21:47:32 +0000 Determine ip version during subnet create.
52721a8 2015-07-12 19:59:00 +0000 Call UnsetStub/VerifyAll properly for tests with exceptions
25a947b 2015-07-12 15:22:20 +0000 Updated from global requirements
f4ddc6e 2015-07-03 00:17:32 -0500 Support resource plurals not ending in 's'
f446ab5 2015-06-30 22:45:38 +0000 Updated from global requirements
da3a415 2015-06-30 13:47:29 +0300 Revert "Add '--router:external' option to 'net-create'"
8557cd9 2015-06-23 21:50:00 +0000 Updated from global requirements
dcb7401 2015-06-16 19:23:09 +0000 Updated from global requirements
f13161b 2015-06-12 20:41:47 +0000 Fixes indentation for bash completion script
c809e06 2015-06-12 13:32:44 -0700 Allow bash completion script to work with BSD sed
58a5ec6 2015-06-12 11:38:02 -0400 Add alternative login description in neutronclient docs
a788a3e 2015-06-08 21:20:10 +0000 Updated from global requirements
a2ae8eb 2015-06-08 18:28:23 +0000 Raise user-friendly exceptions in str2dict
e3f61c9 2015-06-08 19:49:24 +0200 LBaaS v2: Fix listing pool members
7eb3241 2015-05-26 16:26:51 +0200 Fix functional tests and tox 2.0 errors
d536020 2015-05-11 16:12:41 +0800 Add missing tenant_id to lbaas-v2 resources creation
df93b27 2015-05-11 06:08:16 +0000 Add InvalidIpForSubnetClient exception
ada1568 2015-03-04 16:28:19 +0530 "neutron help router-update" help info updated

[ Unreleased changes in openstack/python-novaclient ]

Changes in python-novaclient 2.28.0..5d50603
--------------------------------------------
d970de4 2015-09-03 21:53:53 +0000 Adds missing internationalization for help message

[ Unreleased changes in openstack/python-openstackclient ]

Changes in python-openstackclient 1.6.0..9210cac
------------------------------------------------
d751a21 2015-09-01 15:51:58 -0700 Fix 'auhentication' spelling error/mistake
5171a42 2015-08-28 09:32:05 -0600 Ignore flavor and image find errors on server show
f142516 2015-08-24 10:38:43 -0500 default OS_VOLUME_API_VERSION to v2
59d12a6 2015-08-21 15:33:48 -0500 unwedge the osc gate
8fb19bc 2015-08-21 16:07:58 +0000 additional functional tests for identity providers
1966663 2015-08-19 16:46:55 -0400 Adds documentation  on weekly meeting
1004e06 2015-08-19 11:01:26 -0600 Update the plugin docs for designate
0f837df 2015-08-19 11:29:29 -0400 Added note to install openstackclient
0f0d66f 2015-08-14 10:31:53 -0400 Running 'limits show' returns nothing
ac5e289 2015-08-13 20:21:57 +0000 Updated from global requirements
a6c8c8f 2015-08-13 09:31:11 +0000 Updated from global requirements
e908492 2015-08-13 02:19:22 +0000 Updated from global requirements

[ Unreleased changes in openstack/python-swiftclient ]

Changes in python-swiftclient 2.5.0..93666bb
--------------------------------------------
d5eb818 2015-09-02 13:06:13 +0100 Cleanup and improve tests for download
3c02898 2015-08-31 22:03:26 +0100 Log and report trace on service operation fails
4b62732 2015-08-27 00:01:22 +0800 Increase httplib._MAXHEADERS to 256.
4b31008 2015-08-25 09:47:09 +0100 Stop Connection class modifying os_options parameter
1789c26 2015-08-24 10:54:15 +0100 Add minimal working service token support.
91d82af 2015-08-19 14:54:03 -0700 Drop flake8 ignores for already-passing tests
38a82e9 2015-08-18 19:19:22 -0700 flake8 ignores same hacks as swift
7c7f46a 2015-08-06 14:51:10 -0700 Update mock to get away from env markers
be0f1aa 2015-08-06 18:50:33 +0900 change deprecated assertEquals to assertEqual
a056f1b 2015-08-04 11:34:51 +0900 fix old style class definition(H238)
847f135 2015-07-30 09:55:51 +0200 Block comment PEP8 fix.
1c644d8 2015-07-30 09:48:00 +0200 Test auth params together with --help option.
3cd1faa 2015-07-24 10:57:29 -0700 make Connection.get_auth set url and token attributes on self
a8c4df9 2015-07-20 20:44:51 +0100 Reduce memory usage for download/delete and add --no-shuffle option to st_download
7442f0d 2015-07-17 16:03:39 +0900 swiftclient: add short options to help message
ef467dd 2015-06-28 07:40:26 +0530 Python 3: Replacing unicode with six.text_type for py3 compatibility

[ Unreleased changes in openstack/python-troveclient ]

Changes in python-troveclient 1.2.0..fcc0e73
--------------------------------------------
ec666ca 2015-09-04 04:19:26 +0000 Updated from global requirements
55af7dd 2015-09-03 20:41:25 +0000 Use more appropriate exceptions for validation
d95ceff 2015-09-03 14:49:38 -0400 Redis Clustering Initial Implementation
7ec45db 2015-08-28 00:02:45 +0000 Revert "Root enablement for Vertica clusters/instances"
608ef3d 2015-08-21 09:39:09 +0000 Implements Datastore Registration API
77960ee 2015-08-20 15:38:20 -0700 Root enablement for Vertica clusters/instances
57bb542 2015-08-13 20:22:07 +0000 Updated from global requirements
d3a9f9e 2015-08-10 01:10:31 +0000 Updated from global requirements
3e6c219 2015-07-31 23:54:26 -0400 Add a --marker argument to the backup commands.
f3f0cbd 2015-07-31 16:27:55 -0700 Fixed missing periods in positional arguments
fd81067 2015-07-22 04:59:56 +0000 Updated from global requirements
fbbc025 2015-07-15 01:37:52 +0000 Updated from global requirements
2598641 2015-07-12 15:22:33 +0000 Updated from global requirements
398bc8e 2015-07-12 00:56:25 -0400 Error message on cluster-create is misleading
7f82bcc 2015-07-10 10:05:21 +0900 Make subcommands accept flavor name and cluster name
29d0703 2015-06-29 10:13:12 +0900 Fix flavor-show problems with UUID
0702365 2015-06-22 20:00:41 +0000 Updated from global requirements
1d30a5f 2015-06-18 12:19:53 -0700 Allow a user to pass an insecure environment variable
dffbd6f 2015-06-16 19:23:22 +0000 Updated from global requirements
61a756a 2015-06-08 10:51:40 +0000 Added more unit-tests to improve code coverage
93f70ca 2015-06-04 20:05:14 +0000 Updated from global requirements
ad68fb2 2015-06-03 08:31:26 +0000 Fixes the non-existent exception NoTokenLookupException

[ Unreleased changes in openstack/python-tuskarclient ]

Changes in python-tuskarclient 0.1.18..edec875
----------------------------------------------
edec875 2015-08-04 08:05:11 +0000 Switch to oslo_i18n
92a1834 2015-07-23 18:40:47 +0200 Replace assert_called_once() calls
caa2b4d 2015-06-15 22:08:13 +0000 Updated from global requirements
39ea687 2015-06-12 06:48:45 -0400 Fix output of "tuskar plan-list --verbose"
00c3de2 2015-06-10 17:36:59 +0000 Enable SSL-related CLI opts
09d73e0 2015-06-09 07:34:55 -0400 Calling tuskar role-list would output blank lines
52dfbce 2015-06-03 16:44:14 +0200 Handle creation of plan with existing name
24087b3 2015-05-29 14:01:35 +0200 Filter and format parameters for plan role in OSC
b3e37fc 2015-05-22 21:02:32 +0100 Bump hacking version
47fdee0 2015-05-13 16:38:51 +0000 Updated from global requirements
af2597d 2015-05-12 12:29:42 +0100 Implement download Plan for the OpenStack client
6228020 2015-05-12 08:44:53 +0100 Implement Plan remove Role for the OpenStack client
14e273b 2015-05-11 11:48:26 +0100 Implement Plan add Role for the OpenStack client
7a85951 2015-05-11 11:48:26 +0100 Implement show Plan for the OpenStack client

[ Unreleased changes in openstack/python-zaqarclient ]

Changes in python-zaqarclient 0.1.1..c140a58
--------------------------------------------
2490ed4 2015-08-31 15:59:15 +0200 Send claims `limit` as a query param
baf6fa7 2015-08-31 15:29:36 +0200 v1.1 and v2 claims return document not list
0d80728 2015-08-28 11:42:57 +0200 Make sure the API version is passed down
407925c 2015-08-28 11:30:40 +0200 Make v1.1 the default CLI version
895aad2 2015-08-27 23:26:17 +0000 Updated from global requirements
8a81c44 2015-08-27 13:19:58 +0200 Updated from global requirements
705ee75 2015-08-26 22:53:09 +0530 Implement CLI support for flavor
32a847e 2015-07-16 09:25:58 +0530 Implements CLI for pool
964443d 2015-06-26 14:13:26 +0200 Raises an error if the queue name is empty
e9a8d01 2015-06-25 17:05:00 +0200 Added support to pools and flavors
f46979b 2015-06-05 06:59:44 +0000 Removed deprecated 'shard' methods
1a85f83 2015-04-21 16:07:46 +0000 Update README to work with release tools


From nik.komawar at gmail.com  Fri Sep  4 16:44:25 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 4 Sep 2015 12:44:25 -0400
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com> <55E8784A.4060809@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>
Message-ID: <55E9CA69.9030003@gmail.com>

Hi Malini et al.,

We had a sync up earlier today on this topic and a few items were
discussed including new comments on the spec and existing code proposal.
You can find the logs of the conversation here [1].

There are 3 main outcomes of the discussion:
1. We hope to get a commitment on the feature (spec and code) that the
comments will be addressed and the code will be ready by Sept 18th,
after which RC1 is planned to be cut [2]. Our hope is that the spec
merges well before then and the implementation is at the very least
ready, if not merged. The comments on the spec and merge proposal are
currently specific to implementation details, so we were positive on
this front.
2. The decision to grant FFE will be on Tuesday Sept 8th after the spec
has newer patch sets with major concerns addressed.
3. We cannot commit to granting a backport for this feature, so we ask
the implementors to consider using the pluggability and modularity of
the taskflow library. You may consult developers who have already worked
on adopting this library in Glance (Flavio, Sabari and Harsh). Deployers
can then use those scripts and drop them back into their Liberty
deployments even if the feature is not in the standard tarball.

Please let me know if you have more questions.

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
[2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule

On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
> Thank you Nikhil and Brian!
>
> -----Original Message-----
> From: Nikhil Komawar [mailto:nik.komawar at gmail.com] 
> Sent: Thursday, September 03, 2015 9:42 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>
> We agreed to hold off on granting it an FFE until tomorrow.
>
> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast your vote.
>
> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>> I added an agenda item for this for today's Glance meeting:
>>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>
>> I'd prefer to hold my vote until after the meeting.
>>
>> cheers,
>> brian
>>
>>
>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>>
>>> Malini, all,
>>>
>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>> and implementation.
>>>
>>> I'm more than happy to realign my stand after we have updated spec 
>>> and a) it's agreed to be the approach as of now and b) we can 
>>> evaluate how much work the implementation needs to meet with the revisited spec.
>>>
>>> If we end up in the unfortunate situation that this functionality 
>>> does not merge in time for Liberty, I'm confident that this is one of 
>>> the first things in Mitaka. I really don't think there is too much 
>>> left to do; we just might run out of time.
>>>
>>> Thanks for your patience and endless effort to get this done.
>>>
>>> Best,
>>> Erno
>>>
>>>> -----Original Message-----
>>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>>> Sent: Thursday, September 03, 2015 10:10 AM
>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>>> usage
>>>> questions)
>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>> proposal
>>>>
>>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>>> addresses the comments. We would very much appreciate a +1 on the 
>>>> FFE.
>>>>
>>>> Regards
>>>> Malini
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>>> Sent: Thursday, September 03, 2015 1:52 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>> proposal
>>>>
>>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>>> Hi,
>>>>>
>>>>> I wanted to propose the 'Single disk image OVA import' [1] feature 
>>>>> for an exception. This looks like a decently safe proposal 
>>>>> that should be able to fit in the extended time period of 
>>>>> Liberty. It has been discussed at the Vancouver summit during a 
>>>>> work session and the proposal has been trimmed down as per the 
>>>>> suggestions made then; it has been overall accepted by those present 
>>>>> during the discussions (barring a few changes needed on the spec 
>>>>> itself). Being an addition to the already existing import task, it 
>>>>> doesn't involve an API change or a change to any of the core Image 
>>>>> functionality as of now.
>>>>>
>>>>> Please give your vote: +1 or -1 .
>>>>>
>>>>> [1] https://review.openstack.org/#/c/194868/
>>>> I'd like to see support for OVF finally implemented in Glance.
>>>> Unfortunately, I think there are too many open questions in the spec 
>>>> right now to make this FFE worthy.
>>>>
>>>> Could those questions be answered before the EOW?
>>>>
>>>> With those questions answered, we'll be able to provide a more 
>>>> realistic vote.
>>>>
>>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>>> and the likelihood of it addressing the concerns/comments in time.
>>>>
>>>> For now, it's a -1 from me.
>>>>
>>>> Thanks all for working on this; it has been a long-requested 
>>>> format to have in Glance.
>>>> Flavio
>>>>
>>>> [0] https://review.openstack.org/#/c/214810/
>>>>
>>>>
>>>> --
>>>> @flaper87
>>>> Flavio Percoco
>>>> __________________________________________________________
>>>> ________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-
>>>> request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From mordred at inaugust.com  Fri Sep  4 16:50:33 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 04 Sep 2015 12:50:33 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
References: <55E9A4F2.5030809@inaugust.com>
 <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
Message-ID: <55E9CBD9.7080603@inaugust.com>

On 09/04/2015 10:55 AM, Morgan Fainberg wrote:
>
>
>> On Sep 4, 2015, at 07:04, Monty Taylor <mordred at inaugust.com>
>> wrote:
>>
>> mordred at camelot:~$ neutron net-create test-net-mt Policy doesn't
>> allow create_network to be performed.
>>
>> Thank you neutron. Excellent job.
>>
>> Here's what that looks like at the REST layer:
>>
>> DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015
>> 13:55:47 GMT connection: close content-type: application/json;
>> charset=UTF-8 content-length: 130 x-openstack-request-id:
>> req-ba05b555-82f4-4aaf-91b2-bae37916498d RESP BODY:
>> {"NeutronError": {"message": "Policy doesn't allow create_network
>> to be performed.", "type": "PolicyNotAuthorized", "detail": ""}}
>>
>> As a user, I am not confused. I do not think that maybe I made a
>> mistake with my credentials. The cloud in question simply does not
>> allow user creation of networks. I'm fine with that. (as a user,
>> that might make this cloud unusable to me - but that's a choice I
>> can now make with solid information easily. Turns out, I don't need
>> to create networks for my application, so this actually makes it
>> easier for me personally)
>>
>
> The 403 (yay good HTTP error choice) and message is great here.
>
> We should make this the default (I think we can do something like
> this by baking it into the enforcer in oslo.policy so that it is
> consistent across openstack).

Great idea!

> Obviously the translation of errors
> would be more difficult if the enforcer is generating messages.

The type: "PolicyNotAuthorized" is a good general key. Also - even 
though the command I sent was:

neutron net-create

On the command line, the entry in the policy_file is "create_network" - 
so honestly I think that policy.json and oslo.policy should have (or be 
able to have) all of the info needed to create almost the exact same 
message. Perhaps "NeutronError" would just need to be 
"OpenStackPolicyError"?

Oh. Wait. You meant translation like i18n translation. In that case, I 
think it's easy:

message = _("Policy doesn't allow %(policy_key)s to be performed") % {
    "policy_key": "create_network"}

/me waves hands
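Baking a consistent denial into the enforcer, as suggested above, could look roughly like the following stand-alone sketch. `PolicyNotAuthorized` and `Enforcer` here are simplified stand-ins modeled on the thread's discussion, not the actual oslo.policy classes:

```python
# Illustrative sketch of a consistent policy-denial error. The class
# names mirror the ones mentioned in the thread but are simplified
# stand-ins, not the real oslo.policy API.

class PolicyNotAuthorized(Exception):
    def __init__(self, policy_key):
        self.policy_key = policy_key
        # One canonical message template, so every service reports
        # denials the same way (and i18n only translates one string).
        msg = "Policy doesn't allow %(policy_key)s to be performed" % {
            "policy_key": policy_key}
        super(PolicyNotAuthorized, self).__init__(msg)

    def to_dict(self):
        # Shape mirrors the NeutronError body from the 403 response above.
        return {"message": str(self),
                "type": "PolicyNotAuthorized",
                "detail": ""}


class Enforcer(object):
    def __init__(self, rules):
        # policy_key -> callable(credentials) -> bool
        self.rules = rules

    def enforce(self, policy_key, credentials):
        allowed = self.rules.get(policy_key, lambda c: False)(credentials)
        if not allowed:
            raise PolicyNotAuthorized(policy_key)
        return True


enforcer = Enforcer({"create_network": lambda creds: creds.get("is_admin")})
try:
    enforcer.enforce("create_network", {"is_admin": False})
except PolicyNotAuthorized as e:
    error_body = e.to_dict()
```

With something like this in the common enforcer, every service would emit the same body and `type` key for a denial, which is exactly the consistency being asked for.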

> --Morgan
>
>
>



From nik.komawar at gmail.com  Fri Sep  4 17:06:45 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 4 Sep 2015 13:06:45 -0400
Subject: [openstack-dev]  [Glance] Liberty RC reviews
Message-ID: <55E9CFA5.3030706@gmail.com>

Hi all,

Please take some time to go through the etherpad I've created to
prioritize reviews needed for Liberty RC period. These reviews have been
categorized to help pick your favorite but all of them are important. As
we have a decent amount of time to identify bugs and fix them, reviewing
the mentioned merge proposals would be beneficial in the near future.

https://etherpad.openstack.org/p/glance-liberty-rc-reviews

-- 

Thanks,
Nikhil



From james.slagle at gmail.com  Fri Sep  4 17:28:39 2015
From: james.slagle at gmail.com (James Slagle)
Date: Fri, 4 Sep 2015 13:28:39 -0400
Subject: [openstack-dev] [TripleO] Status of CI changes
In-Reply-To: <55E7E9E7.7070808@redhat.com>
References: <55E7E9E7.7070808@redhat.com>
Message-ID: <CAHV77z8VWUKznLjAYkBCTYS1eVm78RZoj3=5FmE6vSFkyOXL1w@mail.gmail.com>

On Thu, Sep 3, 2015 at 2:34 AM, Derek Higgins <derekh at redhat.com> wrote:
> Hi All,
>
> The patch to reshuffle our CI jobs has merged[1], along with the patch to
> switch the f21-noha job to be instack based[2] (with centos images).
>
> So the current status is that our CI has been removed from most of the non
> tripleo projects (with the exception of nova/neutron/heat and ironic,
> where it is only available via check experimental until we are sure it's
> reliable).
>
> The last big move is to pull in some repositories into the upstream[3]
> gerrit, so until this happens we still have to worry about some projects
> being on gerrithub (the instack-based CI pulls them in from gerrithub for
> now). I'll follow up with a mail once this happens.
>
> A lot of CI stuff still needs to be worked on (and improved) e.g.
>  o Add ceph support to the instack based job
>  o Add ha support to the instack based job
>  o Improve the logs exposed
>  o Pull out a lot of workarounds that have gone into the CI job
>  o move out some of the parts we still use in tripleo-incubator
>  o other stuff
>
> Please make yourself known if you're interested in any of the above
>

As usual, a huge thanks for your effort in pushing this forward.

I've been working on getting the instack-undercloud documentation
updated with the OpenStack theme, as well as getting them updated to
match what the CI job is testing:
https://fedorapeople.org/~slagle/tripleo-docs/

These docs will eventually be hosted at
http://docs.openstack.org/developer/tripleo-docs
when the below mentioned patch [3] merges.

There's also some set of content from the existing tripleo-incubator
documentation that is still relevant, so I think we should roll that
into tripleo-docs as well. I'll have a look at that once the repo is
setup.


> [1] https://review.openstack.org/#/c/205479/
> [2] https://review.openstack.org/#/c/185151/
> [3] https://review.openstack.org/#/c/215186/




-- 
-- James Slagle
--


From henryn at linux.vnet.ibm.com  Fri Sep  4 17:28:51 2015
From: henryn at linux.vnet.ibm.com (Henry Nash)
Date: Fri, 4 Sep 2015 18:28:51 +0100
Subject: [openstack-dev] FFE Request for moving inherited assignment to core
	in Keystone
Message-ID: <EF5BE73F-25BA-45E9-83D4-C6E0A1B67423@linux.vnet.ibm.com>

Keystone has, for a number of releases, supported the concept of inherited role assignments via the OS-INHERIT extension. At the Keystone mid-cycle we agreed that moving this to core was a good target for Liberty, but it was held up by the need for the data-driven testing to be in place (https://review.openstack.org/#/c/190996/).

Inherited roles are becoming an integral part of Keystone, especially with the move to hierarchical projects (which is core already) - and so moving inheritance to core makes a lot of sense. At the same time as the move, we want to tidy up the API (https://review.openstack.org/#/c/200434/ <https://review.openstack.org/#/c/187045/>) to be more consistent with project hierarchies (before the old API semantics get too widely used), although we will continue to support the old API via the extension for a number of cycles.

I would like to request an FFE for the move of inheritance to core.

Henry
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/89bf5aeb/attachment.html>

From ben at swartzlander.org  Fri Sep  4 17:29:28 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Fri, 4 Sep 2015 13:29:28 -0400
Subject: [openstack-dev] [Manila] Gate breakage and technical FFEs
In-Reply-To: <55E9C88A.3080907@swartzlander.org>
References: <55E9C88A.3080907@swartzlander.org>
Message-ID: <55E9D4F8.4090006@swartzlander.org>

On 09/04/2015 12:36 PM, Ben Swartzlander wrote:
> The Manila gate is finally unblocked, thanks to the efforts of 
> Valeriy! I see patches going in now so all of the features which were 
> granted technical FFEs should start merging IMMEDIATELY.
>
> By my calculations, we lost approximately 30 hours of time to merge 
> stuff before the scheduled feature freeze, so I'm going to grant about 
> 30 hours from now, plus the weekend to get everything in.
>
> The new deadline is Tuesday 8/8, at 1200 UTC. Everything must be 
> merged by that time, or it will be rescheduled to Mitaka, no 
> exceptions, no excuses (if the gate breaks again, then fix it).
>
> Those of you who weren't going to make the original deadline and were 
> planning on asking for FFEs should consider yourself lucky. The gate 
> getting stuck has bought you nearly 5 extra days to get your patches 
> in order. For this reason I don't plan to grant any additional FFEs.
>
> The following patches are on my list to get -2 if they're not merged 
> by the deadline.
>
> Migration
>
> 179790 - ganso - Add Share Migration feature
> 179791 - ganso - Share Migration support in generic driver
> 220278 - ganso - Add Share Migration tempest functional tests
>
> CGs
>
> 215343 - cknight - Add DB changes for consistency-groups
> 215344 - cknight - Scheduler changes for consistency groups
> 215345 - cknight - Add Consistency Groups API
> 219891 - cknight - Consistency Group Support for the Generic Driver
> 215346 - cknight - Add functional tests for Manila consistency groups
>
> NetApp CGs
>
> 215347 - cknight - Consistency groups in NetApp cDOT drivers
>
> Mount Automation
>
> 201669 - vponomaryov - Add share hooks
>
> Tempest Plugin
>
> 201955 - mkoderer - Use Tempest plugin interface
>
> Windows SMB Driver
>
> 200154 - plucian - Add Windows SMB share driver
>
> GlusterFS Driver
>
> 214462 - chenk - glusterfs*: factor out common parts
> 214921 - chenk - glusterfs/common: refactor GlusterManager
> 215021 - chenk - glusterfs-native: cut back on redundancy
> 215172 - chenk - glusterfs/layout: add layout base classes
> 215173 - chenk - glusterfs: volume mapped share layout
> 215293 - chenk - glusterfs: directory mapped share layout
>

Also a note for core reviewers: for the 2 chains of patches (CGs and 
GlusterFS) please workflow them in reverse order so they go through the 
gate all at once. I would rather avoid a situation where we have to back 
out a half-merged feature in the event it doesn't make the deadline.

>
> -Ben Swartzlander
>
>



From ben at swartzlander.org  Fri Sep  4 17:30:49 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Fri, 4 Sep 2015 13:30:49 -0400
Subject: [openstack-dev] [Manila] Gate breakage and technical FFEs
In-Reply-To: <55E9C88A.3080907@swartzlander.org>
References: <55E9C88A.3080907@swartzlander.org>
Message-ID: <55E9D549.3020002@swartzlander.org>

On 09/04/2015 12:36 PM, Ben Swartzlander wrote:
> The Manila gate is finally unblocked, thanks to the efforts of 
> Valeriy! I see patches going in now so all of the features which were 
> granted technical FFEs should start merging IMMEDIATELY.
>
> By my calculations, we lost approximately 30 hours of time to merge 
> stuff before the scheduled feature freeze, so I'm going to grant about 
> 30 hours from now, plus the weekend to get everything in.
>
> The new deadline is Tuesday 8/8, at 1200 UTC. Everything must be 
> merged by that time, or it will be rescheduled to Mitaka, no 
> exceptions, no excuses (if the gate breaks again, then fix it).

Important correction: that's September 8, not August 8. My brain isn't at 
full strength after these last 2 days...


> Those of you who weren't going to make the original deadline and were 
> planning on asking for FFEs should consider yourself lucky. The gate 
> getting stuck has bought you nearly 5 extra days to get your patches 
> in order. For this reason I don't plan to grant any additional FFEs.
>
> The following patches are on my list to get -2 if they're not merged 
> by the deadline.
>
> Migration
>
> 179790 - ganso - Add Share Migration feature
> 179791 - ganso - Share Migration support in generic driver
> 220278 - ganso - Add Share Migration tempest functional tests
>
> CGs
>
> 215343 - cknight - Add DB changes for consistency-groups
> 215344 - cknight - Scheduler changes for consistency groups
> 215345 - cknight - Add Consistency Groups API
> 219891 - cknight - Consistency Group Support for the Generic Driver
> 215346 - cknight - Add functional tests for Manila consistency groups
>
> NetApp CGs
>
> 215347 - cknight - Consistency groups in NetApp cDOT drivers
>
> Mount Automation
>
> 201669 - vponomaryov - Add share hooks
>
> Tempest Plugin
>
> 201955 - mkoderer - Use Tempest plugin interface
>
> Windows SMB Driver
>
> 200154 - plucian - Add Windows SMB share driver
>
> GlusterFS Driver
>
> 214462 - chenk - glusterfs*: factor out common parts
> 214921 - chenk - glusterfs/common: refactor GlusterManager
> 215021 - chenk - glusterfs-native: cut back on redundancy
> 215172 - chenk - glusterfs/layout: add layout base classes
> 215173 - chenk - glusterfs: volume mapped share layout
> 215293 - chenk - glusterfs: directory mapped share layout
>
>
> -Ben Swartzlander
>
>



From mgagne at internap.com  Fri Sep  4 17:35:11 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Fri, 4 Sep 2015 13:35:11 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55E9CBD9.7080603@inaugust.com>
References: <55E9A4F2.5030809@inaugust.com>
 <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
 <55E9CBD9.7080603@inaugust.com>
Message-ID: <55E9D64F.5050200@internap.com>

On 2015-09-04 12:50 PM, Monty Taylor wrote:
> On 09/04/2015 10:55 AM, Morgan Fainberg wrote:
>>
>> Obviously the translation of errors
>> would be more difficult if the enforcer is generating messages.
> 
> The type: "PolicyNotAuthorized" is a good general key. Also - even
> though the command I sent was:
> 
> neutron net-create
> 
> On the command line, the entry in the policy_file is "create_network" -
> so honestly I think that policy.json and oslo.policy should have (or be
> able to have) all of the info needed to create almost the exact same
> message. Perhaps "NeutronError" would just need to be
> "OpenStackPolicyError"?
> 
> Oh. Wait. You meant translation like i18n translation. In that case, I
> think it's easy:
> 
> message=_("Policy doesn't allow %(policy_key)s to be performed",
> policy_key="create_network")
> 
> /me waves hands
> 

I don't feel like this error message would be user-friendly:

"Policy doesn't allow os_compute_api:os-instance-actions to be performed"

Policy names aren't human-readable and match nothing on the client side.

-- 
Mathieu


From srikumar at appcito.net  Fri Sep  4 17:36:33 2015
From: srikumar at appcito.net (Srikumar Chari)
Date: Fri, 4 Sep 2015 10:36:33 -0700
Subject: [openstack-dev] custom lbaas driver
In-Reply-To: <CAHPHmHcGEHMHpus-0pphrPviShoe--PcRik9HHY+iU0x1-Q74g@mail.gmail.com>
References: <CAHPHmHcGEHMHpus-0pphrPviShoe--PcRik9HHY+iU0x1-Q74g@mail.gmail.com>
Message-ID: <CAHPHmHedh+zuMVJnyQN1SHgWZVXgW2GMge1CeHzw14rGDX=1NQ@mail.gmail.com>

Hello,

I am trying to write a custom lbaas v2.0 driver and was wondering if there's
a document on how to go about that. I see a number of implementations as
part of the source code but they all seem to be different. For instance
HAProxy is completely different when compared to the other vendors. I am
assuming that HAProxy is the "standard" as it is the default load balancer
for OpenStack. Is there a design doc or some kinda write up?

This thread was the closest I could get to a write-up:
https://openstack.nimeyo.com/21628/openstack-dev-neutron-lbaas-need-help-with-lbaas-drivers
.

I guess I could reverse engineer the HAProxy namespace driver but I am
probably going to miss some of the design elements. Any help/pointers/links
would be great.
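In the absence of a design doc, one pattern the in-tree LBaaS v2 drivers broadly share is a top-level driver object exposing one manager per resource (load balancer, listener, pool, member), each implementing create/update/delete hooks. The sketch below illustrates only that shape; all class and method names here are hypothetical stand-ins, not the real neutron-lbaas base classes:

```python
# Schematic sketch of the manager-per-resource driver pattern. Everything
# here is illustrative; a real driver would subclass the neutron-lbaas
# base classes and report status back to the plugin.

class BaseManager(object):
    def __init__(self, driver):
        self.driver = driver

    def create(self, context, obj):
        raise NotImplementedError

    def update(self, context, old_obj, obj):
        raise NotImplementedError

    def delete(self, context, obj):
        raise NotImplementedError


class LoadBalancerManager(BaseManager):
    def create(self, context, lb):
        # A real driver would program its backend here, then mark the
        # resource ACTIVE (or ERROR) via the plugin callbacks.
        self.driver.backend_calls.append(("create_lb", lb["id"]))


class ListenerManager(BaseManager):
    def create(self, context, listener):
        self.driver.backend_calls.append(("create_listener", listener["id"]))


class MyVendorDriver(object):
    """Hypothetical vendor driver wiring the managers together."""
    def __init__(self):
        self.backend_calls = []  # stands in for talking to a device/API
        self.load_balancer = LoadBalancerManager(self)
        self.listener = ListenerManager(self)


driver = MyVendorDriver()
driver.load_balancer.create(None, {"id": "lb-1"})
driver.listener.create(None, {"id": "listener-1"})
```

The per-vendor differences you see in the tree are mostly in what happens inside those hooks; the HAProxy namespace driver additionally manages local processes, which is why it looks so different from the appliance-style drivers.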

thanks
Sri
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/8e171502/attachment.html>

From john.griffith8 at gmail.com  Fri Sep  4 17:42:25 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 4 Sep 2015 11:42:25 -0600
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55E9D64F.5050200@internap.com>
References: <55E9A4F2.5030809@inaugust.com>
 <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
 <55E9CBD9.7080603@inaugust.com> <55E9D64F.5050200@internap.com>
Message-ID: <CAPWkaSU--XLza5i_-_cUXM=E9ftQynDUbZvDj38UQSNAQ6=kTA@mail.gmail.com>

On Fri, Sep 4, 2015 at 11:35 AM, Mathieu Gagné <mgagne at internap.com> wrote:

> On 2015-09-04 12:50 PM, Monty Taylor wrote:
> > On 09/04/2015 10:55 AM, Morgan Fainberg wrote:
> >>
> >> Obviously the translation of errors
> >> would be more difficult if the enforcer is generating messages.
> >
> > The type: "PolicyNotAuthorized" is a good general key. Also - even
> > though the command I sent was:
> >
> > neutron net-create
> >
> > On the command line, the entry in the policy_file is "create_network" -
> > so honestly I think that policy.json and oslo.policy should have (or be
> > able to have) all of the info needed to create almost the exact same
> > message. Perhaps "NeutronError" would just need to be
> > "OpenStackPolicyError"?
> >
> > Oh. Wait. You meant translation like i18n translation. In that case, I
> > think it's easy:
> >
> > message=_("Policy doesn't allow %(policy_key)s to be performed",
> > policy_key="create_network")
> >
> > /me waves hands
> >
>
> I don't feel like this error message would be user-friendly:
>
> "Policy doesn't allow os_compute_api:os-instance-actions to be performed"
>
> Policy name aren't human readable and match nothing on the client side.
>
> --
> Mathieu
>
>

Ok, so this:

ubuntu at devbox:~$ cinder reset-state 9dee0fae-864c-44f9-bdd7-3330a0f4e899
Reset state for volume 9dee0fae-864c-44f9-bdd7-3330a0f4e899 failed: Policy
doesn't allow volume_extension:volume_admin_actions:reset_status to be
performed. (HTTP 403) (Request-ID: req-8ed2c895-0d1f-4b2c-9859-ee15c19267de)
ERROR: Unable to reset the state for the specified volume(s).
ubuntu at devbox:~$

Is no good?  You would like to see "less" in the output; like just the
command name itself and "Policy doesn't allow"?

To Mathieu's point, fair statement WRT the visibility of the policy name.
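One client-side mitigation for the readability problem (purely hypothetical; nothing like this exists in the clients today) would be a small lookup table mapping known policy keys to human-readable action descriptions:

```python
# Hypothetical client-side mapping from policy-file keys to friendlier
# text. The keys below are real policy entries quoted in this thread;
# the descriptions and the helper itself are illustrative only.

FRIENDLY_POLICY_NAMES = {
    "create_network": "create a network",
    "volume_extension:volume_admin_actions:reset_status":
        "reset a volume's state (admin action)",
}

def friendly_denial(policy_key):
    # Fall back to the raw key when we have no better wording.
    action = FRIENDLY_POLICY_NAMES.get(policy_key, policy_key)
    return "Policy doesn't allow you to %s." % action

msg = friendly_denial("volume_extension:volume_admin_actions:reset_status")
```

The server would still return the canonical key (so scripts can match on it), while the CLI could render the friendlier wording when it knows one.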

?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/1043b3af/attachment.html>

From morgan.fainberg at gmail.com  Fri Sep  4 17:45:34 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 4 Sep 2015 10:45:34 -0700
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55E9D64F.5050200@internap.com>
References: <55E9A4F2.5030809@inaugust.com>
 <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
 <55E9CBD9.7080603@inaugust.com> <55E9D64F.5050200@internap.com>
Message-ID: <CAGnj6atk-fLFBvZ_TqMkjX1SnaMbkoCc-+SbnGtuufMRYsLjOQ@mail.gmail.com>

On Fri, Sep 4, 2015 at 10:35 AM, Mathieu Gagné <mgagne at internap.com> wrote:

> On 2015-09-04 12:50 PM, Monty Taylor wrote:
> > On 09/04/2015 10:55 AM, Morgan Fainberg wrote:
> >>
> >> Obviously the translation of errors
> >> would be more difficult if the enforcer is generating messages.
> >
> > The type: "PolicyNotAuthorized" is a good general key. Also - even
> > though the command I sent was:
> >
> > neutron net-create
> >
> > On the command line, the entry in the policy_file is "create_network" -
> > so honestly I think that policy.json and oslo.policy should have (or be
> > able to have) all of the info needed to create almost the exact same
> > message. Perhaps "NeutronError" would just need to be
> > "OpenStackPolicyError"?
> >
> > Oh. Wait. You meant translation like i18n translation. In that case, I
> > think it's easy:
> >
> > message=_("Policy doesn't allow %(policy_key)s to be performed",
> > policy_key="create_network")
> >
> > /me waves hands
> >
>
> I don't feel like this error message would be user-friendly:
>
> "Policy doesn't allow os_compute_api:os-instance-actions to be performed"
>
> Policy name aren't human readable and match nothing on the client side.
>
>
To be fair, the message can be improved. Right now this is far above what
you get in most cases. Digging a bit deeper, a lot of this is in
oslo.policy, but it appears we have projects doing custom layers of
enforcement that change the results. The short-term solution is to clean
up the code paths so that a consistent exception is raised, and then work
on the messaging.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/120fc8bb/attachment.html>

From mriedem at linux.vnet.ibm.com  Fri Sep  4 17:49:33 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Fri, 4 Sep 2015 12:49:33 -0500
Subject: [openstack-dev] [nova][i18n] Is there any point in using _() in
	python-novaclient?
Message-ID: <55E9D9AD.1000402@linux.vnet.ibm.com>

I noticed this today:

https://review.openstack.org/#/c/219768/

And it got me thinking about something I've wondered before - why do we 
even use _() in python-novaclient?  It doesn't have any .po files for 
babel message translation, it has no babel config, there is nothing in 
setup.cfg about extracting messages and compiling them into .mo's, there 
is nothing on Transifex for python-novaclient, etc.

Is there a way to change your locale and get translated output in nova 
CLIs?  I didn't find anything in docs from a quick google search.

Comparing to python-openstackclient, that does have a babel config and 
some locale po files in tree, at least for de and zh_TW.

So if this doesn't work in python-novaclient, do we need any of the i18n 
code in there?  It doesn't really hurt, but it seems pointless to push 
changes for it or try to keep user-facing messages in mind in the code.
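For reference, this is how gettext behaves without a compiled catalog: with fallback enabled, `_()` degrades to the identity function, which is why the calls are harmless but pointless today. A quick stdlib check (the domain name is just a placeholder):

```python
import gettext

# With fallback=True and no compiled .mo catalog for the domain,
# gettext.translation() returns a NullTranslations object whose
# gettext() returns its argument unchanged -- so _() costs almost
# nothing, but also translates nothing.
t = gettext.translation('python-novaclient', fallback=True)
_ = t.gettext

msg = _("ERROR: Unable to reset the state for the specified volume(s).")
# msg comes back unchanged because no catalog was found.
```

So keeping `_()` in the tree only pays off if the babel extraction/compilation pipeline and the Transifex resource are actually set up; until then it is dead weight.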

-- 

Thanks,

Matt Riedemann



From gessau at cisco.com  Fri Sep  4 17:53:20 2015
From: gessau at cisco.com (Henry Gessau)
Date: Fri, 4 Sep 2015 13:53:20 -0400
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <CAO_F6JO4N4AevZN0n4K=fTyXpbkOi+T1Up_7ukYHodzTTPim3A@mail.gmail.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <BLU437-SMTP76050D04789EE848C5B5F4D8680@phx.gbl>
 <CAO_F6JO4N4AevZN0n4K=fTyXpbkOi+T1Up_7ukYHodzTTPim3A@mail.gmail.com>
Message-ID: <55E9DA90.1090208@cisco.com>

Some thought has been given to this. See
https://bugs.launchpad.net/neutron/+bug/1460177

I like the third option, a well-known name using DNS.
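As a rough, hedged illustration of the trade-off being weighed (link-local IPv4 vs. a well-known DNS name), a metadata client might order its candidate endpoints like this; the DNS name below is a hypothetical placeholder, since only 169.254.169.254 is actually standardized:

```python
# Hedged sketch: how an IPv6-aware metadata client might order its candidate
# endpoints. Only 169.254.169.254 is a standardized well-known address; the
# DNS name is a hypothetical placeholder for the "third option" above.
WELL_KNOWN_V4 = "http://169.254.169.254/latest/meta-data/"
HYPOTHETICAL_DNS = "http://metadata.example.cloud/latest/meta-data/"

def metadata_candidates(prefer_dns=True):
    """Return metadata URLs in the order a client might try them."""
    if prefer_dns:
        # A well-known DNS name resolves the same way for IPv4-only,
        # IPv6-only, and dual-stack guests -- the appeal of the DNS option.
        return [HYPOTHETICAL_DNS, WELL_KNOWN_V4]
    return [WELL_KNOWN_V4]

print(metadata_candidates())
```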

On Thu, Sep 03, 2015, Kevin Benton <blak111 at gmail.com> wrote:
> I think that's different than what is being asked here. That patch appears to
> just add IPv6 interface information if it's available in the metadata. This
> thread is about getting cloud-init to connect to an IPv6 address instead of
> 169.254.169.254 for pure IPv6 environments.
>
> On Thu, Sep 3, 2015 at 11:41 AM, Joshua Harlow <harlowja at outlook.com
> <mailto:harlowja at outlook.com>> wrote:
>
>     I'm pretty sure this got implemented :)
>
>     http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1042
>     <http://bazaar.launchpad.net/%7Ecloud-init-dev/cloud-init/trunk/revision/1042>
>     and https://bugs.launchpad.net/cloud-init/+bug/1391695
>
>     That's the RHEL support; since cloud-init translates an Ubuntu-style
>     networking config, the ubuntu/debian style format should also work.
>
>
>     Steve Gordon wrote:
>
>         ----- Original Message -----
>
>             From: "Kevin Benton"<blak111 at gmail.com <mailto:blak111 at gmail.com>>
>
>             When we discussed this before on the neutron channel, I thought it was
>             because cloud-init doesn't support IPv6. We had wasted quite a bit
>             of time
>             talking about adding support to our metadata service because I was
>             under
>             the impression that cloud-init already did support IPv6.
>
>             IIRC, the argument against adding IPv6 support to cloud-init was
>             that it
>             might be incompatible with how AWS chooses to implement IPv6
>             metadata, so
>             AWS would require a fork or other incompatible alternative to
>             cloud-init in
>             all of their images.
>
>             Is that right?
>
>
>         That's certainly my understanding of the status quo; I was enquiring
>         primarily to check it was still accurate.
>
>         -Steve
>
>             On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins<sean at coreitpro.com
>             <mailto:sean at coreitpro.com>>  wrote:
>
>                 It's not a case of cloud-init supporting IPv6 - The Amazon EC2
>                 metadata
>                 API defines transport level details about the API - and
>                 currently only
>                 defines a well known IPv4 link local address to connect to. No
>                 well known
>                 link local IPv6 address has been defined.
>
>                 I usually recommend config-drive for IPv6 enabled clouds due
>                 to this.
>                 --
>                 Sent from my Android device with K-9 Mail. Please excuse my
>                 brevity.
>                 __________________________________________________________________________
>                 OpenStack Development Mailing List (not for usage questions)
>                 Unsubscribe:
>                 OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
>
> -- 
> Kevin Benton
>
>


From jim at jimrollenhagen.com  Fri Sep  4 17:55:17 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 4 Sep 2015 10:55:17 -0700
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E98609.4060708@redhat.com>
References: <55E96EEE.4070306@openstack.org>
 <55E98609.4060708@redhat.com>
Message-ID: <20150904175517.GC21846@jimrollenhagen.com>

On Fri, Sep 04, 2015 at 01:52:41PM +0200, Dmitry Tantsur wrote:
> On 09/04/2015 12:14 PM, Thierry Carrez wrote:
> >Hi PTLs,
> >
> >Here is the proposed slot allocation for every "big tent" project team
> >at the Mitaka Design Summit in Tokyo. This is based on the requests the
> >liberty PTLs have made, space availability and project activity &
> >collaboration needs.
> >
> >We have a lot less space (and time slots) in Tokyo compared to
> >Vancouver, so we were unable to give every team what they wanted. In
> >particular, there were far more workroom requests than we have
> >available, so we had to cut down on those quite heavily. Please note
> >that we'll have a large lunch room with roundtables inside the Design
> >Summit space that can easily be abused (outside of lunch) as space for
> >extra discussions.
> >
> >Here is the allocation:
> >
> >| fb: fishbowl 40-min slots
> >| wr: workroom 40-min slots
> >| cm: Friday contributors meetup
> >| | day: full day, morn: only morning, aft: only afternoon
> >
> >Neutron: 12fb, cm:day
> >Nova: 14fb, cm:day
> >Cinder: 5fb, 4wr, cm:day	
> >Horizon: 2fb, 7wr, cm:day	
> >Heat: 4fb, 8wr, cm:morn
> >Keystone: 7fb, 3wr, cm:day
> >Ironic: 4fb, 4wr, cm:morn
> >Oslo: 3fb, 5wr
> >Rally: 1fb, 2wr
> >Kolla: 3fb, 5wr, cm:aft
> >Ceilometer: 2fb, 7wr, cm:morn
> >TripleO: 2fb, 1wr, cm:full
> >Sahara: 2fb, 5wr, cm:aft
> >Murano: 2wr, cm:full
> >Glance: 3fb, 5wr, cm:full	
> >Manila: 2fb, 4wr, cm:morn
> >Magnum: 5fb, 5wr, cm:full	
> >Swift: 2fb, 12wr, cm:full	
> >Trove: 2fb, 4wr, cm:aft
> >Barbican: 2fb, 6wr, cm:aft
> >Designate: 1fb, 4wr, cm:aft
> >OpenStackClient: 1fb, 1wr, cm:morn
> >Mistral: 1fb, 3wr	
> >Zaqar: 1fb, 3wr
> >Congress: 3wr
> >Cue: 1fb, 1wr
> >Solum: 1fb
> >Searchlight: 1fb, 1wr
> >MagnetoDB: won't be present
> >
> >Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
> >PuppetOpenStack: 2fb, 3wr
> >Documentation: 2fb, 4wr, cm:morn
> >Quality Assurance: 4fb, 4wr, cm:full
> >OpenStackAnsible: 2fb, 1wr, cm:aft
> >Release management: 1fb, 1wr (shared meetup with QA)
> >Security: 2fb, 2wr
> >ChefOpenstack: will camp in the lunch room all week
> >App catalog: 1fb, 1wr
> >I18n: cm:morn
> >OpenStack UX: 2wr
> >Packaging-deb: 2wr
> >Refstack: 2wr
> >RpmPackaging: 1fb, 1wr
> >
> >We'll start working on laying out those sessions over the available
> >rooms and time slots. If you have constraints (I already know
> >searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> >Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> >our best to limit them.
> >
> 
> Would be cool to avoid conflicts between Ironic and TripleO.

I'd also like to save room for one Ironic/Nova session, and one
Ironic/Neutron session.

// jim


From doc at aedo.net  Fri Sep  4 18:05:03 2015
From: doc at aedo.net (Christopher Aedo)
Date: Fri, 4 Sep 2015 11:05:03 -0700
Subject: [openstack-dev] [app-catalog] App Catalog IRC meeting minutes -
	9/3/2015
Message-ID: <CA+odVQG1=Z7CTQ8Hv_ho-xw6DGUNBsbK2btTW2tx+dcTTdH_aA@mail.gmail.com>

We had a good meeting yesterday which included a little cheer for the
Community App Catalog project being accepted into the big tent (thanks
to everyone who has been supporting our efforts all along!)  The bulk
of the meeting was then devoted to discussing our plans for the next
generation of the site.  We've started the effort by outlining on
etherpad [1] all the features we need the site to support.  Next up we
are going to start evaluating potential frameworks we can use to build
a new site quickly while still making it easy to deploy and support
(and extend down the road.)  If you have useful opinions/experience
and are interested in helping plan this, please check out the etherpad
and join us on IRC to discuss.

As always, please join us on IRC (#openstack-app-catalog), or speak up
here on the mailing list if you want to help us make this the top
destination for people using OpenStack clouds!

[1] https://etherpad.openstack.org/p/app-catalog-v2-backend

-Christopher

=================================
#openstack-meeting-3: app-catalog
=================================
Meeting started by docaedo at 17:00:15 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2015/app_catalog.2015-09-03-17.00.log.html
.

Meeting summary
---------------
* rollcall  (docaedo, 17:00:30)

* Status updates (docaedo)  (docaedo, 17:01:21)
  * LINK: https://review.openstack.org/#/c/217957/  (docaedo, 17:01:43)
  * LINK: https://review.openstack.org/#/c/219809/  (docaedo, 17:02:57)
  * LINK: http://bit.ly/1N37aMS  (docaedo, 17:03:09)
  * LINK: https://review.openstack.org/#/c/218898/  (docaedo, 17:06:31)

* Review/discuss "new site plans" etherpad (docaedo)  (docaedo,
  17:10:47)
  * LINK: https://etherpad.openstack.org/p/app-catalog-v2-backend
    (docaedo, 17:10:52)

* Open discussion  (docaedo, 17:38:11)

Meeting ended at 17:43:30 UTC.

People present (lines said)
---------------------------
* docaedo (55)
* kfox1111 (47)
* ativelkov (17)
* kzaitsev_mb (9)
* j^2 (4)
* openstack (3)

Generated by `MeetBot`_ 0.1.4


From everett.toews at RACKSPACE.COM  Fri Sep  4 18:41:21 2015
From: everett.toews at RACKSPACE.COM (Everett Toews)
Date: Fri, 4 Sep 2015 18:41:21 +0000
Subject: [openstack-dev] [all][api] New API Guidelines Ready for Cross
	Project Review
In-Reply-To: <25743032-241F-4958-B8E4-53419046549F@rackspace.com>
References: <25743032-241F-4958-B8E4-53419046549F@rackspace.com>
Message-ID: <BCB8B7D2-2A83-4A88-87B4-A6436F541EC4@rackspace.com>

On Aug 27, 2015, at 10:48 AM, Everett Toews <everett.toews at rackspace.com> wrote:

> Hi All,
> 
> The following API guidelines are ready for cross project review. They will be merged on Sept. 4 if there's no further feedback.
> 
> 1. Add description of pagination parameters
> https://review.openstack.org/#/c/190743/
> 
> 2. Require "OpenStack-" in headers
> https://review.openstack.org/#/c/215683/
> 
> 3. Add the condition for using a project term
> https://review.openstack.org/#/c/208264/
> 
> 4. Added note about caching of responses when using https
> https://review.openstack.org/#/c/185288/
> 
> 5. add section describing 501 common mistake
> https://review.openstack.org/#/c/183456/

API guidelines 2-5 above have been merged. Guideline 1 needs further work.

Thanks for your feedback,
Everett



From mordred at inaugust.com  Fri Sep  4 18:41:48 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 04 Sep 2015 14:41:48 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <CAPWkaSU--XLza5i_-_cUXM=E9ftQynDUbZvDj38UQSNAQ6=kTA@mail.gmail.com>
References: <55E9A4F2.5030809@inaugust.com>
 <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
 <55E9CBD9.7080603@inaugust.com> <55E9D64F.5050200@internap.com>
 <CAPWkaSU--XLza5i_-_cUXM=E9ftQynDUbZvDj38UQSNAQ6=kTA@mail.gmail.com>
Message-ID: <55E9E5EC.9050300@inaugust.com>

On 09/04/2015 01:42 PM, John Griffith wrote:
> On Fri, Sep 4, 2015 at 11:35 AM, Mathieu Gagn? <mgagne at internap.com> wrote:
>
>> On 2015-09-04 12:50 PM, Monty Taylor wrote:
>>> On 09/04/2015 10:55 AM, Morgan Fainberg wrote:
>>>>
>>>> Obviously the translation of errors
>>>> would be more difficult if the enforcer is generating messages.
>>>
>>> The type: "PolicyNotAuthorized" is a good general key. Also - even
>>> though the command I sent was:
>>>
>>> neutron net-create
>>>
>>> On the command line, the entry in the policy_file is "create_network" -
>>> so honestly I think that policy.json and oslo.policy should have (or be
>>> able to have) all of the info needed to create almost the exact same
>>> message. Perhaps "NeutronError" would just need to be
>>> "OpenStackPolicyError"?
>>>
>>> Oh. Wait. You meant translation like i18n translation. In that case, I
>>> think it's easy:
>>>
>>> message=_("Policy doesn't allow %(policy_key)s to be performed",
>>> policy_key="create_network")
>>>
>>> /me waves hands
>>>
>>
>> I don't feel like this error message would be user-friendly:
>>
>> "Policy doesn't allow os_compute_api:os-instance-actions to be performed"
>>
>> Policy name aren't human readable and match nothing on the client side.
>>
>> --
>> Mathieu
>>
>>
>
> ?Ok, so this:
>
> ubuntu at devbox:~$ cinder reset-state 9dee0fae-864c-44f9-bdd7-3330a0f4e899
> Reset state for volume 9dee0fae-864c-44f9-bdd7-3330a0f4e899 failed: Policy
> doesn't allow volume_extension:volume_admin_actions:reset_status to be
> performed. (HTTP 403) (Request-ID: req-8ed2c895-0d1f-4b2c-9859-ee15c19267de)
> ERROR: Unable to reset the state for the specified volume(s).
> ubuntu at devbox:~$?
>
> ?Is no good?  You would like to see "less" in the output; like just the
> command name itself and "Policy doesn't allow"?
>
> To Mathieu's point, fair statement WRT the visibility of the policy name.

Totally agree on the policy name. The one I did happened to be clear - 
that is not always the case. I'd love to see that.

But more to your question - yes, as an end user, I don't know what a 
volume_extension:volume_admin_actions:reset_status is - but I do know 
that I ran "cinder reset-state" - so getting:

"Cloud policy does not allow you to run reset_status"

would be fairly clear to me.

The other bits, the 403, the request-id and then the additional error 
message are a bit too busy. (they seem like output for a debug or 
verbose flag IMHO)

NOW -

  ERROR: Unable to reset the state for the specified volume(s) - Policy 
does not allow reset_status

would also work and would also be clear "this did not occur, the reason 
is that you are not allowed to do this because the cloud admin has set a 
policy."

Now that I'm talking out loud though - I think "policy" is a little 
confusing - because policy is not an end-user concept in any way.

"Your cloud administrator has disabled this API function"

is clearer and more to the point with less jargon.

I think the key points to communicate (verbally or through crafting):

- Yes, you logged in
- Yes, the API you called is a correct and real API
- No, you did not make a syntax error
- No, you are not allowed to call that real API on _this_ cloud

(without knowing those things, I tend to debug a TON of things before 
figuring out "oh, the cloud admin turned off part of the API")
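The message shape being described above could be sketched roughly as follows; this is an illustrative mock-up, not oslo.policy's or cinderclient's actual behavior, and the function name is invented:

```python
# Illustrative mock-up (not real oslo.policy/cinderclient behavior): keep the
# end-user message jargon-free, and push the policy key and HTTP details
# behind a verbose/debug flag, as suggested in the thread.
def user_facing_policy_error(command, policy_key, verbose=False):
    msg = "ERROR: Your cloud administrator has disabled this API function."
    if verbose:
        # Operators and bug reports still want the precise policy key.
        msg += " (command: %s, policy: %s, HTTP 403)" % (command, policy_key)
    return msg

print(user_facing_policy_error(
    "cinder reset-state",
    "volume_extension:volume_admin_actions:reset_status"))
```

This keeps the four key points communicated (logged in, real API, no syntax error, disallowed on this cloud) without surfacing the internal policy name by default.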



From ben at swartzlander.org  Fri Sep  4 18:51:10 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Fri, 4 Sep 2015 14:51:10 -0400
Subject: [openstack-dev] [ptl][release] flushing unreleased client
 library changes
In-Reply-To: <1441384328-sup-9901@lrrr.local>
References: <1441384328-sup-9901@lrrr.local>
Message-ID: <55E9E81E.9060401@swartzlander.org>

On 09/04/2015 12:39 PM, Doug Hellmann wrote:
> PTLs,
>
> We have quite a few unreleased client changes pending, and it would
> be good to go ahead and publish them so they can be tested as part
> of the release candidate process. I have the full list of changes for
> each project below, so please find yours and review them and then
> propose a release request to the openstack/releases repository.

Manila had multiple gate-breaking bugs this week and I've extended our 
feature freeze to next Tuesday to compensate. As a result our L-3 
milestone release is not really representative of Liberty and we'd 
rather not do a client release until we reach RC1.

-Ben Swartzlander


> On a separate note, for next cycle we need to do a better job of
> releasing these much much earlier (a few of these changes are at
> least a month old). Remember that changes to libraries do not go
> into the gate for consuming projects until that library is released.
> If you have any suggestions for how to improve our tracking for
> needed releases, let me know.
>
> Doug
>
>
> [ Unreleased changes in openstack/python-barbicanclient ]
>
> Changes in python-barbicanclient 3.3.0..97cc46a
> -----------------------------------------------
> 4572293 2015-08-27 19:38:39 -0500 Add epilog to parser
> 17ed50a 2015-08-25 14:58:25 +0000 Add Unit Tests for Store and Update Payload when Payload is zero
> 34256de 2015-08-25 09:56:24 -0500 Allow Barbican Client Secret Update Functionality
>
> [ Unreleased changes in openstack/python-ceilometerclient ]
>
> Changes in python-ceilometerclient 1.4.0..2006902
> -------------------------------------------------
> 2006902 2015-08-26 14:09:00 +0000 Updated from global requirements
> 2429dae 2015-08-25 01:32:56 +0000 Don't try to get aodh endpoint if auth_url didn't provided
> 6498d55 2015-08-13 20:21:24 +0000 Updated from global requirements
>
> [ Unreleased changes in openstack/python-cinderclient ]
>
> Changes in python-cinderclient 1.3.1..1c82825
> ---------------------------------------------
> 1c82825 2015-09-02 17:18:28 -0700 Update path to subunit2html in post_test_hook
> 471aea8 2015-09-02 00:53:40 +0000 Adds command to fetch specified backend capabilities
> 2d979dc 2015-09-01 22:35:28 +0800 Volume status management for volume migration
> 50758ba 2015-08-27 09:39:58 -0500 Fixed test_password_prompted
> dc6e823 2015-08-26 23:04:16 -0700 Fix help message for reset-state commands
> f805f5a 2015-08-25 15:15:20 +0300 Add functional tests for python-cinderclient
> 8cc3ee2 2015-08-19 18:05:34 +0300 Add support '--all-tenants' for cinder backup-list
> 2c3169e 2015-08-11 10:18:01 -0400 CLI: Non-disruptive backup
> 5e26906 2015-08-11 13:14:04 +0300 Add tests for python-cinderclient
> 780a129 2015-08-09 15:57:00 +0900 Replace assertEqual(None, *) with assertIsNone in tests
> a9405b1 2015-08-08 05:36:45 -0400 CLI: Clone CG
> 03542ee 2015-08-04 14:21:52 -0700 Fix ClientException init when there is no message on py34
> 2ec9a22 2015-08-03 11:14:44 +0800 Fixes table when there are multiline in result data
> 04caf88 2015-07-30 18:24:57 +0300 Set default OS_VOLUME_API_VERSION to '2'
> bae0bb3 2015-07-27 01:28:00 +0000 Add commands for modifying image metadata
> 629e548 2015-07-24 03:34:55 +0000 Updated from global requirements
> b51e43e 2015-07-21 01:04:02 +0000 Remove H302
> b426b71 2015-07-20 13:21:02 +0800 Show backup and volume info in backup_restore
> dc1186d 2015-07-16 19:45:08 -0700 Add response message when volume delete
> 075381d 2015-07-15 09:20:48 +0800 Add more details for replication
> 953f766 2015-07-14 18:51:58 +0800 New mock release(1.1.0) broke unit/function tests
> 8afc06c 2015-07-08 12:11:07 -0500 Remove unnecessary check for tenant information
> c23586b 2015-07-08 11:42:59 +0800 Remove redundant statement and refactor
> 891ef3e 2015-06-25 13:27:42 +0000 Use shared shell arguments provided by Session
>
> [ Unreleased changes in openstack/python-congressclient ]
>
> Changes in python-congressclient 1.1.0..0874721
> -----------------------------------------------
> 0f699f8 2015-09-02 14:47:32 +0800 Add actions listing command
> d7fa523 2015-08-26 14:09:29 +0000 Updated from global requirements
> 36f2b47 2015-08-11 01:38:32 +0000 Updated from global requirements
> ee07cb3 2015-07-27 15:47:12 +0800 Fix constant name
> f9858a8 2015-07-27 15:19:04 +0800 Support version list API in client
> 9693132 2015-06-20 22:38:10 +0900 Adding a test of datasource table show CLI
> a102014 2015-05-21 04:52:12 -0300 Favor the use of importlib over Python internal __import__ statement
> 726d560 2015-05-07 23:37:04 +0000 Updated from global requirements
> b8a176b 2015-04-24 16:31:49 +0800 Replace stackforge with openstack in README.rst
> 8c31d3f 2015-03-31 15:33:53 -0700 Add api bindings for datasource request-request trigger
>
> [ Unreleased changes in openstack/python-cueclient ]
>
> Changes in python-cueclient 0.0.1..d9ac712
> ------------------------------------------
> d9ac712 2015-08-26 14:09:42 +0000 Updated from global requirements
> 47b81c3 2015-08-11 13:38:14 -0700 Update python-binding section for keystone v3 support
> d30c3b1 2015-08-11 01:38:34 +0000 Updated from global requirements
> 14e5f05 2015-08-06 10:15:13 -0700 Adding size field cue cluster list command
> 0c7559b 2015-07-16 10:29:07 -0700 Rename end_points to endpoints in API
> 0e36051 2015-07-13 12:55:29 -0700 updating docs from stackforge->openstack
> cccccda 2015-07-13 12:29:04 -0700 fixing oslo_serialization reference
> b9352df 2015-07-10 17:04:45 -0700 Update .gitreview file for project rename
> 1acc546 2015-06-08 03:19:18 -0700 Rename cue command shell commands with message-broker
> d5b3acd 2015-04-23 12:57:55 -0700 Refactor cue client tests
> 9590b5f 2015-04-18 21:13:39 -0700 Change nic type to list
> 3779999 2015-04-16 12:26:16 -0700 Resolving cluster wrapper issue Closes bug: #1445175
> 665dee5 2015-04-14 14:05:28 -0700 Add .coveragerc to better control coverage outputs
> a992e48 2015-04-09 17:46:58 +0000 Remove cluster wrapper from response body
> b302486 2015-04-01 16:19:19 -0700 Modifying CRD return type from dict to object
> 7618f95 2015-03-31 15:19:27 -0700 Expose endpoints in cluster list command
> 8c47aa6 2015-03-06 17:45:24 -0800 Removing openstackclient dependency
> 9c125cc 2015-03-06 16:27:16 -0800 Adding cue python binding
>
> [ Unreleased changes in openstack/python-designateclient ]
>
> Changes in python-designateclient 1.4.0..52d68e5
> ------------------------------------------------
> f7d4dbb 2015-09-02 15:54:08 +0200 V2 CLI Support
> 86d988d 2015-08-26 14:09:56 +0000 Updated from global requirements
> 83d4cea 2015-08-19 18:31:49 +0800 Update github's URL
> 1e1b94c 2015-08-13 20:21:31 +0000 Updated from global requirements
> 71f465c 2015-08-11 10:12:25 +0200 Don't wildcard resolve names
> 74ee1a1 2015-08-11 01:38:35 +0000 Updated from global requirements
> 08191bb 2015-08-10 01:10:02 +0000 Updated from global requirements
> 035657c 2015-08-07 18:41:29 +0200 Improve help strings
>
> [ Unreleased changes in openstack/python-glanceclient ]
>
> Changes in python-glanceclient 1.0.0..90b7dc4
> ---------------------------------------------
> 90b7dc4 2015-09-04 10:29:01 +0900 Update path to subunit2html in post_test_hook
> 1e2274a 2015-09-01 18:03:41 +0200 Password should be prompted once
>
> [ Unreleased changes in openstack/python-ironicclient ]
>
> Changes in python-ironicclient 0.8.0..6a58f9d
> ---------------------------------------------
> 156ca47 2015-09-03 12:55:41 -0700 Fix functional tests job
>
> [ Unreleased changes in openstack/python-ironic-inspector-client ]
>
> Changes in python-ironic-inspector-client 1.0.1..7ac591e
> --------------------------------------------------------
> 1ce6380 2015-08-26 14:10:34 +0000 Updated from global requirements
> 29e38a9 2015-08-25 18:48:38 +0200 Make our README friendly to OpenStack release-tools
> 9625bf7 2015-08-13 20:21:36 +0000 Updated from global requirements
> 95133c7 2015-08-12 14:58:58 +0200 Make sure we expose all API elements in the top-level package
> bd3737c 2015-08-12 14:43:45 +0200 Drop comment about changing functional tests to use released inspector
> 69dc6ee 2015-08-12 14:37:05 +0200 Fix error message for unsupported API version
> 89695da 2015-08-11 01:38:40 +0000 Updated from global requirements
> fe67f67 2015-08-10 01:10:07 +0000 Updated from global requirements
> 61448ba 2015-08-04 12:50:13 +0200 Implement optional API versioning
> 7d443fb 2015-07-23 14:13:38 +0200 Create own functional tests for the client
> e7bb103 2015-07-22 04:59:33 +0000 Updated from global requirements
> 1e3d334 2015-07-17 16:17:44 +0000 Updated from global requirements
> 8d68f61 2015-07-12 15:22:07 +0000 Updated from global requirements
> 16dc081 2015-07-09 17:52:05 +0200 Use released ironic-inspector for functional testing
> 0df30c9 2015-07-08 20:40:34 +0200 Don't repeat requirements in tox.ini
> 2ea4b8c 2015-07-01 14:35:07 +0200 Add functional test
> 66f8551 2015-06-22 22:35:16 +0000 Updated from global requirements
> 71d491b 2015-06-23 00:02:29 +0900 Change to Capital letters
>
> [ Unreleased changes in openstack/python-keystoneclient ]
>
> Changes in python-keystoneclient 1.6.0..6231459
> -----------------------------------------------
> 3e862bb 2015-09-02 17:20:17 -0700 Update path to subunit2html in post_test_hook
> 1697fd7 2015-09-02 11:39:35 -0500 Deprecate create Discover without session
> 3e26ff8 2015-08-31 12:49:34 -0700 Mask passwords when logging the HTTP response
> f58661e 2015-08-31 15:36:04 +0000 Updated from global requirements
> 7c545e5 2015-08-29 11:28:01 -0500 Update deprecation text for Session properties
> e76423f 2015-08-29 11:28:01 -0500 Proper deprecation for httpclient.USER_AGENT
> 42bd016 2015-08-29 11:28:01 -0500 Deprecate create HTTPClient without session
> e0276c6 2015-08-26 06:24:27 +0000 Fix Accept header in SAML2 requests
> d22cd9d 2015-08-20 17:10:05 +0000 Updated from global requirements
> 0cb46c9 2015-08-15 07:36:09 +0800 Expose token_endpoint.Token as admin_token
> 4bdbb83 2015-08-13 19:01:42 -0500 Proper deprecation for UserManager project argument
> a50f8a1 2015-08-13 19:01:42 -0500 Proper deprecation for CredentialManager data argument
> 4e4dede 2015-08-13 19:01:42 -0500 Deprecate create v3 Client without session
> b94a610 2015-08-13 19:01:42 -0500 Deprecate create v2_0 Client without session
> 962ab57 2015-08-13 19:01:42 -0500 Proper deprecation for Session.get_token()
> afcf4a1 2015-08-13 19:01:42 -0500 Deprecate use of cert and key
> 58cc453 2015-08-13 18:59:31 -0500 Proper deprecation for Session.construct()
> 0d293ea 2015-08-13 18:58:27 -0500 Deprecate ServiceCatalog.get_urls() with no attr
> 803eb23 2015-08-13 18:57:31 -0500 Deprecate ServiceCatalog(region_name)
> cba0a68 2015-08-13 20:21:41 +0000 Updated from global requirements
> 1cbfb2e 2015-08-13 02:18:54 +0000 Updated from global requirements
> 43e69cc 2015-08-10 01:10:11 +0000 Updated from global requirements
> b54d9f1 2015-08-06 14:44:12 -0500 Stop using .keys() on dicts where not needed
> 6dae40e 2015-08-06 16:57:32 +0000 Inhrerit roles project calls on keystoneclient v3
> 51d9d12 2015-08-05 12:28:30 -0500 Deprecate openstack.common.apiclient
> 16e834d 2015-08-05 11:24:08 -0500 Move apiclient.base.Resource into keystoneclient
> 26534da 2015-08-05 14:59:23 +0000 oslo-incubator apiclient.exceptions to keystoneclient.exceptions
> eaa7ddd 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient session and adapter properties
> 0c2fef5 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient.request methods
> ada04ac 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient.tenant_id|name
> 1721e01 2015-08-04 09:56:43 -0500 Proper deprecation for HTTPClient tenant_id, tenant_name parameters
> a9ef92a 2015-08-04 00:48:54 +0000 Updated from global requirements
> 22236fd 2015-08-02 11:22:18 -0500 Clarify setting socket_options
> aa5738c 2015-08-02 11:18:45 -0500 Remove check for requests version
> 9e470a5 2015-07-29 03:50:34 +0000 Updated from global requirements
> 0b74590 2015-07-26 06:54:23 -0500 Fix tests passing user, project, and token
> 9f17732 2015-07-26 06:54:23 -0500 Proper deprecation for httpclient.request()
> fb28e1a 2015-07-26 06:54:23 -0500 Proper deprecation for Dicover.raw_version_data unstable parameter
> a303cbc 2015-07-26 06:54:23 -0500 Proper deprecation for Dicover.available_versions()
> 5547fe8 2015-07-26 06:54:23 -0500 Proper deprecation for is_ans1_token
> ce58b07 2015-07-26 06:54:23 -0500 Proper deprecation for client.HTTPClient
> c5b0319 2015-07-26 06:54:23 -0500 Proper deprecation for Manager.api
> fee5ba7 2015-07-26 06:54:22 -0500 Stop using Manager.api
> b1496ab 2015-07-26 06:54:22 -0500 Proper deprecation for BaseIdentityPlugin trust_id property
> 799e1fa 2015-07-26 06:54:22 -0500 Proper deprecation for BaseIdentityPlugin username, password, token_id properties
> 85b32fc 2015-07-26 06:54:22 -0500 Proper deprecations for modules
> 6950527 2015-07-25 09:51:42 +0000 Use UUID values in v3 test fixtures
> 1a2ccb0 2015-07-24 11:05:05 -0500 Proper deprecation for AccessInfo management_url property
> 6d82f1f 2015-07-24 11:05:05 -0500 Proper deprecation for AccessInfo auth_url property
> 66fd1eb 2015-07-24 11:04:04 -0500 Stop using deprecated AccessInfo.auth_url and management_url
> f782ee8 2015-07-24 09:14:40 -0500 Proper deprecation for AccessInfo scoped property
> 8d65259 2015-07-24 08:16:03 -0500 Proper deprecation for AccessInfo region_name parameter
> 610844d 2015-07-24 08:05:13 -0500 Deprecations fixture support calling deprecated function
> c6b14f9 2015-07-23 20:14:14 -0500 Set reasonable defaults for TCP Keep-Alive
> bb6463e 2015-07-23 07:44:44 +0000 Updated from global requirements
> 0d5415e 2015-07-23 07:22:57 +0000 Remove unused time_patcher
> 7d5d8b3 2015-07-22 23:41:07 +0300 Make OAuth testcase use actual request headers
> 98326c7 2015-07-19 09:49:04 -0500 Prevent attempts to "filter" list() calls by globally unique IDs
> a4584c4 2015-07-15 22:01:14 +0000 Add get_token_data to token CRUD
> 2f90bb6 2015-07-15 01:37:25 +0000 Updated from global requirements
> 3668d9c 2015-07-13 04:53:17 -0700 py34 not py33 is tested and supported
> d3b9755 2015-07-12 15:22:13 +0000 Updated from global requirements
> 8bab2c2 2015-07-11 08:01:39 -0500 Remove confusing deprecation comment from token_to_cms
> 4034366 2015-07-08 20:12:31 +0000 Fixes modules index generated by Sphinx
> c503c29 2015-07-02 18:57:20 +0000 Updated from global requirements
> 31f326d 2015-06-30 12:58:55 -0500 Unit tests catch deprecated function usage
> 225832f 2015-06-30 12:58:55 -0500 Switch from deprecated oslo_utils.timeutils.strtime
> 97c2c69 2015-06-30 12:58:55 -0500 Switch from deprecated isotime
> ef0f267 2015-06-29 00:12:44 +0000 Remove keystoneclient CLI references in README
> 20db11f 2015-06-29 00:12:11 +0000 Update README.rst and remove ancient reference
> a951023 2015-06-28 05:49:46 +0000 Remove unused images from docs
> 2b058ba 2015-06-22 20:00:20 +0000 Updated from global requirements
> 02f07cf 2015-06-17 11:15:03 -0400 Add openid connect client support
> 350b795 2015-06-13 09:02:09 -0500 Stop using tearDown
> f249332 2015-06-13 09:02:09 -0500 Use mock rather than mox
> 75d4b16 2015-06-13 09:01:44 -0500 Remove unused setUp from ClientTest
> 08783e0 2015-06-11 00:48:15 +0000 Updated from global requirements
> d99c56f 2015-06-09 13:42:53 -0400 Iterate over copy of sys.modules keys in Python2/3
> 945e519 2015-06-08 21:11:54 -0500 Use random strings for test fixtures
> c0046d7 2015-06-08 20:29:07 -0500 Stop using function deprecated in Python 3
> 2a032a5 2015-06-05 09:45:08 -0400 Use python-six shim for assertRaisesRegex/p
> 86018ca 2015-06-03 21:01:18 -0500 tox env for Bandit
> f756798 2015-05-31 10:27:01 -0500 Cleanup fixture imports
> 28fd6d5 2015-05-30 12:36:16 +0000 Removes unused debug logging code
> 0ecf9b1 2015-05-26 17:05:09 +1000 Add get_communication_params interface to plugins
> 8994d90 2015-05-04 16:07:31 +0800 add --slowest flag to testr
> 831ba03 2015-03-31 08:47:25 +1100 Support /auth routes for list projects and domains
>
> [ Unreleased changes in openstack/python-magnumclient ]
>
> Changes in python-magnumclient 0.2.1..e6dd7bb
> ---------------------------------------------
> 31417f7 2015-08-27 04:18:52 +0000 Updated from global requirements
> 97dbb71 2015-08-26 18:15:06 +0000 Rename existing service-* to coe-service-*
> 39e7b24 2015-08-26 14:11:20 +0000 Updated from global requirements
> ea83d71 2015-08-21 22:21:37 +0000 Remove name from test token
> b450891 2015-08-18 05:28:06 -0400 This adds proxy feature in magnum client
> d55e7f3 2015-08-13 20:21:44 +0000 Updated from global requirements
> ba689b8 2015-08-10 01:10:14 +0000 Updated from global requirements
> 9de9a3a 2015-08-04 00:48:58 +0000 Updated from global requirements
> 292310c 2015-08-03 11:22:06 -0400 Add support for multiple master nodes
> 3d5e0ed 2015-07-29 03:50:37 +0000 Updated from global requirements
> 24577e3 2015-07-22 23:49:23 +0300 Remove uuidutils from openstack.common
> 6455a1c 2015-07-22 04:59:40 +0000 Updated from global requirements
> b9681e9 2015-07-21 23:16:57 +0000 Updated from global requirements
> c457cda 2015-07-17 23:06:58 +0800 Remove H803 rule
> de2e368 2015-07-15 01:37:31 +0000 Updated from global requirements
> 70830ed 2015-07-12 15:22:16 +0000 Updated from global requirements
> 017fccb 2015-06-30 20:03:09 +0000 Updated from global requirements
> 252586a 2015-06-24 10:55:11 +0800 Rename image_id to image when create a container
> d85771a 2015-06-22 08:27:54 +0000 Updated from global requirements
> 0bdc3ce 2015-06-18 15:24:46 -0700 Add missing dependency oslo.serialization
> 0469af8 2015-06-16 19:23:05 +0000 Updated from global requirements
> 0c9f735 2015-06-16 11:44:54 +0530 Add additional arguments to CLI for container-create.
> b9ca1d5 2015-06-15 12:54:41 +0900 Pass environment variables of proxy to tox
> ac983f1 2015-06-12 05:30:38 +0000 Change container-execute to container-exec
> 8e75123 2015-06-11 00:48:19 +0000 Updated from global requirements
> 80d8f1a 2015-06-08 11:09:42 +0000 Sync from latest oslo-incubator
> 7f58e6f 2015-06-04 16:24:29 +0000 Updated from global requirements
> 0ea4159 2015-05-27 11:44:04 +0200 Fix translation setup
>
> [ Unreleased changes in openstack/python-manilaclient ]
>
> Changes in python-manilaclient 1.2.0..0c7b857
> ---------------------------------------------
> 0c7b857 2015-08-27 04:18:54 +0000 Updated from global requirements
> fec43dd 2015-08-19 22:01:12 -0400 Move requirement Openstack client to test-requirements
> fa05919 2015-08-15 20:54:25 +0000 Updated from global requirements
> 5f45b18 2015-08-12 17:20:01 +0000 Make spec_driver_handles_share_servers required
> f0c6685 2015-08-06 10:47:11 +0800 Modify the manage command prompt information
> 8a3702d 2015-08-04 00:57:34 +0000 Updated from global requirements
> 4454542 2015-07-22 04:59:42 +0000 Updated from global requirements
> e919065 2015-07-16 06:59:53 -0400 Add functional tests for access rules
> c954684 2015-07-15 20:45:50 +0000 Updated from global requirements
> 2a4f79c 2015-07-15 08:37:48 -0400 Fix post_test_hook and update test-requirements
> 4f25278 2015-06-30 22:45:35 +0000 Updated from global requirements
> 92643e8 2015-06-22 08:27:56 +0000 Updated from global requirements
> d2c0e26 2015-06-16 19:23:07 +0000 Updated from global requirements
> 6b2121e 2015-06-05 17:21:43 +0300 Add share shrink API
> 611d4fa 2015-06-04 16:24:31 +0000 Updated from global requirements
> 6733ae3 2015-06-03 11:35:01 +0000 Updated from global requirements
> bd28eda 2015-06-02 10:32:55 +0000 Add rw functional tests for shares metadata
> ada9825 2015-06-02 12:46:34 +0300 Add rw functional tests for shares
> c668f00 2015-05-29 22:53:37 +0000 Updated from global requirements
>
> [ Unreleased changes in openstack/python-muranoclient ]
>
> Changes in python-muranoclient 0.6.3..7897d7f
> ---------------------------------------------
> 54918f5 2015-09-03 10:39:45 +0000 Added the support of Glance Artifact Repository
> 39967d9 2015-09-02 19:09:10 +0000 Copy the code of Glance V3 (artifacts) client
> 1141dd5 2015-09-02 17:18:28 +0300 Fixed issue with cacert parameter
> 66af770 2015-08-31 21:53:45 +0800 Update the git ingore
> 6e9f436 2015-08-31 13:48:21 +0800 Fix the reversed incoming parameters of assertEqual
> b632ede 2015-08-27 14:18:12 +0800 Add olso.log into muranoclient's requirements.txt
> 75a616f 2015-08-26 14:11:36 +0000 Updated from global requirements
> 6242211 2015-08-26 12:45:10 +0800 Standardise help parameter of CLI commands
> 924a83f 2015-08-26 09:56:04 +0800 Fix some spelling mistakes of setup files
>
> [ Unreleased changes in openstack/python-neutronclient ]
>
> Changes in python-neutronclient 2.6.0..d75f79f
> ----------------------------------------------
> d75f79f 2015-09-04 11:06:00 +0900 Update path to subunit2html in post_test_hook
> 627f68e 2015-09-02 08:19:20 +0000 Updated from global requirements
> 0558b49 2015-09-01 02:01:07 +0000 Add REJECT rule on FWaaS Client
> a4f64f6 2015-08-27 09:36:04 -0700 Update tls_container_id to tls_container_ref
> 9a51f2d 2015-08-27 04:18:58 +0000 Updated from global requirements
> 31df9de 2015-08-26 16:32:21 +0300 Support CLI changes for QoS (2/2).
> 002a0c7 2015-08-26 16:26:21 +0300 Support QoS neutron-client (1/2).
> a174215 2015-08-25 09:26:00 +0800 Clear the extension requirement
> bb7124e 2015-08-23 05:28:30 +0000 Updated from global requirements
> c44b57f 2015-08-21 16:44:52 +0000 Make subnetpool-list show correct address scope column
> abc2b65 2015-08-21 16:44:28 +0000 Fix find_resourceid_by_name call for address scopes
> 45ed3ec 2015-08-20 21:57:25 +0800 Add extension name to extension's command help text line
> f6ca3a1 2015-08-20 12:04:37 +0530 Adding registration interface for non_admin_status_resources
> 54e7b94 2015-08-20 12:00:52 +0800 Add document for entry point in setup.cfg
> de5d3bb 2015-08-19 11:32:54 +0000 Create hooks for running functional test
> d749973 2015-08-19 13:51:30 +0530 Support Command line changes for Address Scope
> 5271890 2015-08-13 14:26:59 +0000 Remove --shared option from firewall-create
> 16e02dd 2015-08-12 18:05:56 +0300 Disable failing vpn tests
> 22c8492 2015-08-10 17:14:54 +0530 Support RBAC neutron-client changes.
> 8da3dc8 2015-08-10 12:59:58 +0300 Remove newlines from request and response log
> ccf6fb8 2015-07-17 16:18:04 +0000 Updated from global requirements
> d61a5b5 2015-07-15 01:37:35 +0000 Updated from global requirements
> ab7d9e8 2015-07-14 18:00:43 +0300 Devref documentation for client command extension support
> 0094e51 2015-07-14 15:55:08 +0530 Support CLI changes for associating subnetpools and address-scopes.
> f936493 2015-07-13 09:49:57 +0900 Remove unused AlreadyAttachedClient
> 31f8f23 2015-07-13 09:11:53 +0900 Avoid overwriting parsed_args
> 043656c 2015-07-12 21:47:32 +0000 Determine ip version during subnet create.
> 52721a8 2015-07-12 19:59:00 +0000 Call UnsetStub/VerifyAll properly for tests with exceptions
> 25a947b 2015-07-12 15:22:20 +0000 Updated from global requirements
> f4ddc6e 2015-07-03 00:17:32 -0500 Support resource plurals not ending in 's'
> f446ab5 2015-06-30 22:45:38 +0000 Updated from global requirements
> da3a415 2015-06-30 13:47:29 +0300 Revert "Add '--router:external' option to 'net-create'"
> 8557cd9 2015-06-23 21:50:00 +0000 Updated from global requirements
> dcb7401 2015-06-16 19:23:09 +0000 Updated from global requirements
> f13161b 2015-06-12 20:41:47 +0000 Fixes indentation for bash completion script
> c809e06 2015-06-12 13:32:44 -0700 Allow bash completion script to work with BSD sed
> 58a5ec6 2015-06-12 11:38:02 -0400 Add alternative login description in neutronclient docs
> a788a3e 2015-06-08 21:20:10 +0000 Updated from global requirements
> a2ae8eb 2015-06-08 18:28:23 +0000 Raise user-friendly exceptions in str2dict
> e3f61c9 2015-06-08 19:49:24 +0200 LBaaS v2: Fix listing pool members
> 7eb3241 2015-05-26 16:26:51 +0200 Fix functional tests and tox 2.0 errors
> d536020 2015-05-11 16:12:41 +0800 Add missing tenant_id to lbaas-v2 resources creation
> df93b27 2015-05-11 06:08:16 +0000 Add InvalidIpForSubnetClient exception
> ada1568 2015-03-04 16:28:19 +0530 "neutron help router-update" help info updated
>
> [ Unreleased changes in openstack/python-novaclient ]
>
> Changes in python-novaclient 2.28.0..5d50603
> --------------------------------------------
> d970de4 2015-09-03 21:53:53 +0000 Adds missing internationalization for help message
>
> [ Unreleased changes in openstack/python-openstackclient ]
>
> Changes in python-openstackclient 1.6.0..9210cac
> ------------------------------------------------
> d751a21 2015-09-01 15:51:58 -0700 Fix 'auhentication' spelling error/mistake
> 5171a42 2015-08-28 09:32:05 -0600 Ignore flavor and image find errors on server show
> f142516 2015-08-24 10:38:43 -0500 default OS_VOLUME_API_VERSION to v2
> 59d12a6 2015-08-21 15:33:48 -0500 unwedge the osc gate
> 8fb19bc 2015-08-21 16:07:58 +0000 additional functional tests for identity providers
> 1966663 2015-08-19 16:46:55 -0400 Adds documentation  on weekly meeting
> 1004e06 2015-08-19 11:01:26 -0600 Update the plugin docs for designate
> 0f837df 2015-08-19 11:29:29 -0400 Added note to install openstackclient
> 0f0d66f 2015-08-14 10:31:53 -0400 Running 'limits show' returns nothing
> ac5e289 2015-08-13 20:21:57 +0000 Updated from global requirements
> a6c8c8f 2015-08-13 09:31:11 +0000 Updated from global requirements
> e908492 2015-08-13 02:19:22 +0000 Updated from global requirements
>
> [ Unreleased changes in openstack/python-swiftclient ]
>
> Changes in python-swiftclient 2.5.0..93666bb
> --------------------------------------------
> d5eb818 2015-09-02 13:06:13 +0100 Cleanup and improve tests for download
> 3c02898 2015-08-31 22:03:26 +0100 Log and report trace on service operation fails
> 4b62732 2015-08-27 00:01:22 +0800 Increase httplib._MAXHEADERS to 256.
> 4b31008 2015-08-25 09:47:09 +0100 Stop Connection class modifying os_options parameter
> 1789c26 2015-08-24 10:54:15 +0100 Add minimal working service token support.
> 91d82af 2015-08-19 14:54:03 -0700 Drop flake8 ignores for already-passing tests
> 38a82e9 2015-08-18 19:19:22 -0700 flake8 ignores same hacks as swift
> 7c7f46a 2015-08-06 14:51:10 -0700 Update mock to get away from env markers
> be0f1aa 2015-08-06 18:50:33 +0900 change deprecated assertEquals to assertEqual
> a056f1b 2015-08-04 11:34:51 +0900 fix old style class definition(H238)
> 847f135 2015-07-30 09:55:51 +0200 Block comment PEP8 fix.
> 1c644d8 2015-07-30 09:48:00 +0200 Test auth params together with --help option.
> 3cd1faa 2015-07-24 10:57:29 -0700 make Connection.get_auth set url and token attributes on self
> a8c4df9 2015-07-20 20:44:51 +0100 Reduce memory usage for download/delete and add --no-shuffle option to st_download
> 7442f0d 2015-07-17 16:03:39 +0900 swiftclient: add short options to help message
> ef467dd 2015-06-28 07:40:26 +0530 Python 3: Replacing unicode with six.text_type for py3 compatibility
>
> [ Unreleased changes in openstack/python-troveclient ]
>
> Changes in python-troveclient 1.2.0..fcc0e73
> --------------------------------------------
> ec666ca 2015-09-04 04:19:26 +0000 Updated from global requirements
> 55af7dd 2015-09-03 20:41:25 +0000 Use more appropriate exceptions for validation
> d95ceff 2015-09-03 14:49:38 -0400 Redis Clustering Initial Implementation
> 7ec45db 2015-08-28 00:02:45 +0000 Revert "Root enablement for Vertica clusters/instances"
> 608ef3d 2015-08-21 09:39:09 +0000 Implements Datastore Registration API
> 77960ee 2015-08-20 15:38:20 -0700 Root enablement for Vertica clusters/instances
> 57bb542 2015-08-13 20:22:07 +0000 Updated from global requirements
> d3a9f9e 2015-08-10 01:10:31 +0000 Updated from global requirements
> 3e6c219 2015-07-31 23:54:26 -0400 Add a --marker argument to the backup commands.
> f3f0cbd 2015-07-31 16:27:55 -0700 Fixed missing periods in positional arguments
> fd81067 2015-07-22 04:59:56 +0000 Updated from global requirements
> fbbc025 2015-07-15 01:37:52 +0000 Updated from global requirements
> 2598641 2015-07-12 15:22:33 +0000 Updated from global requirements
> 398bc8e 2015-07-12 00:56:25 -0400 Error message on cluster-create is misleading
> 7f82bcc 2015-07-10 10:05:21 +0900 Make subcommands accept flavor name and cluster name
> 29d0703 2015-06-29 10:13:12 +0900 Fix flavor-show problems with UUID
> 0702365 2015-06-22 20:00:41 +0000 Updated from global requirements
> 1d30a5f 2015-06-18 12:19:53 -0700 Allow a user to pass an insecure environment variable
> dffbd6f 2015-06-16 19:23:22 +0000 Updated from global requirements
> 61a756a 2015-06-08 10:51:40 +0000 Added more unit-tests to improve code coverage
> 93f70ca 2015-06-04 20:05:14 +0000 Updated from global requirements
> ad68fb2 2015-06-03 08:31:26 +0000 Fixes the non-existent exception NoTokenLookupException
>
> [ Unreleased changes in openstack/python-tuskarclient ]
>
> Changes in python-tuskarclient 0.1.18..edec875
> ----------------------------------------------
> edec875 2015-08-04 08:05:11 +0000 Switch to oslo_i18n
> 92a1834 2015-07-23 18:40:47 +0200 Replace assert_called_once() calls
> caa2b4d 2015-06-15 22:08:13 +0000 Updated from global requirements
> 39ea687 2015-06-12 06:48:45 -0400 Fix output of "tuskar plan-list --verbose"
> 00c3de2 2015-06-10 17:36:59 +0000 Enable SSL-related CLI opts
> 09d73e0 2015-06-09 07:34:55 -0400 Calling tuskar role-list would output blank lines
> 52dfbce 2015-06-03 16:44:14 +0200 Handle creation of plan with existing name
> 24087b3 2015-05-29 14:01:35 +0200 Filter and format parameters for plan role in OSC
> b3e37fc 2015-05-22 21:02:32 +0100 Bump hacking version
> 47fdee0 2015-05-13 16:38:51 +0000 Updated from global requirements
> af2597d 2015-05-12 12:29:42 +0100 Implement download Plan for the OpenStack client
> 6228020 2015-05-12 08:44:53 +0100 Implement Plan remove Role for the OpenStack client
> 14e273b 2015-05-11 11:48:26 +0100 Implement Plan add Role for the OpenStack client
> 7a85951 2015-05-11 11:48:26 +0100 Implement show Plan for the OpenStack client
>
> [ Unreleased changes in openstack/python-zaqarclient ]
>
> Changes in python-zaqarclient 0.1.1..c140a58
> --------------------------------------------
> 2490ed4 2015-08-31 15:59:15 +0200 Send claims `limit` as a query param
> baf6fa7 2015-08-31 15:29:36 +0200 v1.1 and v2 claims return document not list
> 0d80728 2015-08-28 11:42:57 +0200 Make sure the API version is passed down
> 407925c 2015-08-28 11:30:40 +0200 Make v1.1 the default CLI version
> 895aad2 2015-08-27 23:26:17 +0000 Updated from global requirements
> 8a81c44 2015-08-27 13:19:58 +0200 Updated from global requirements
> 705ee75 2015-08-26 22:53:09 +0530 Implement CLI support for flavor
> 32a847e 2015-07-16 09:25:58 +0530 Implements CLI for pool
> 964443d 2015-06-26 14:13:26 +0200 Raises an error if the queue name is empty
> e9a8d01 2015-06-25 17:05:00 +0200 Added support to pools and flavors
> f46979b 2015-06-05 06:59:44 +0000 Removed deprecated 'shard' methods
> 1a85f83 2015-04-21 16:07:46 +0000 Update README to work with release tools
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From swapnil.tamse at gmail.com  Fri Sep  4 19:00:33 2015
From: swapnil.tamse at gmail.com (Swapnil Tamse)
Date: Fri, 4 Sep 2015 15:00:33 -0400
Subject: [openstack-dev] help
Message-ID: <CALnfN4+KFvyKyKmqQnZG_PxXk6L57LbGG09ChP5oGZ2AppMp2A@mail.gmail.com>

On 4 September 2015 at 08:00, <openstack-dev-request at lists.openstack.org>
wrote:

> Send OpenStack-dev mailing list submissions to
>         openstack-dev at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> or, via email, send a message with subject or body 'help' to
>         openstack-dev-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-dev-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-dev digest..."
>
>
> Today's Topics:
>
>    1. Re: [horizon] Concern about XStatic-bootswatch imports from
>       fonts.googleapis.com (Thomas Goirand)
>    2. Re: cloud-init IPv6 support (Fox, Kevin M)
>    3. [horizon] Concern about XStatic-bootswatch imports from
>       fonts.googleapis.com (Diana Whitten)
>    4. Re: [Heat] convergence rally test results (so far) (Angus Salkeld)
>    5. [Ironic] Liberty release plans (Jim Rollenhagen)
>    6. [Ironic] Introducing ironic-lib 0.1.0 (Jim Rollenhagen)
>    7. Re: [neutron] pushing changes through the gate (Hirofumi Ichihara)
>    8. [magnum]keystone version (??)
>    9. [Cross Project] [Neutron] [Nova] Tox.ini changes  for
>       Constraints testing (Sachi King)
>   10. Re: FFE Request for completion of data driven assignment
>       testing in Keystone (David Stanek)
>   11. Re: [neutron] pushing changes through the gate (Armando M.)
>   12. Re: FFE Request for completion of data driven     assignment
>       testing in Keystone (Morgan Fainberg)
>   13. Re: Scheduler hints, API and Objects (Ken'ichi Ohmichi)
>   14. [kolla][doc] Kolla documentation now live on
>       docs.openstack.org! (Steven Dake (stdake))
>   15. Re: FFE Request for completion of data driven     assignment
>       testing in Keystone (Morgan Fainberg)
>   16. Re: [Openstack-operators] [Neutron] Allowing DNS suffix to be
>       set per subnet (at least per tenant) (Daniel Comnea)
>   17. What's Up, Doc? 4 September, 2015 (Lana Brindley)
>   18. Re: [infra][third-party][CI] Third-party oses in
>       devstack-gate (Evgeny Antyshev)
>   19. Re: [Openstack-operators] [Neutron] Allowing DNS  suffix to be
>       set per subnet (at least per tenant) (Ihar Hrachyshka)
>   20. Re: [Openstack] [ANN] OpenStack Kilo on Ubuntu fully
>       automated with Ansible! Ready for NFV L2 Bridges via Heat!
>       (Jose Manuel Ferrer Mosteiro)
>   21. Re: FFE Request for completion of data driven assignment
>       testing in Keystone (Thierry Carrez)
>   22. Re: [infra][third-party][CI] Third-party oses in
>       devstack-gate (Daniel Mellado)
>   23. [murano] [dashboard] Remove the owner filter from "Package
>       Definitions" page (Dmitro Dovbii)
>   24. Re: [sahara] Request for Feature Freeze Exception
>       (Sergey Reshetnyak)
>   25. [sahara] FFE request for heat wait condition support
>       (Sergey Reshetnyak)
>   26. Re: Scheduler hints, API and Objects (Ken'ichi Ohmichi)
>   27. Re: Scheduler hints, API and Objects (Alex Xu)
>   28. Re: [murano] [dashboard] Remove the owner filter from
>       "Package Definitions" page (Alexander Tivelkov)
>   29. [all] Mitaka Design Summit - Proposed slot        allocation
>       (Thierry Carrez)
>   30. Re: Scheduler hints, API and Objects (Ken'ichi Ohmichi)
>   31. Re: [sahara] FFE request for heat wait condition  support
>       (Vitaly Gridnev)
>   32. Re: Scheduler hints, API and Objects (Sylvain Bauza)
>   33. Re: [openstack-announce] [release][nova] python-novaclient
>       release 2.28.0 (liberty) (Sean Dague)
>   34. Re: [openstack-announce] [release][nova] python-novaclient
>       release 2.28.0 (liberty) (Sean Dague)
>   35. Re: [all] Mitaka Design Summit - Proposed slot allocation
>       (Flavio Percoco)
>   36. Re: [murano] [dashboard] Remove the owner filter from
>       "Package Definitions" page (Ekaterina Chernova)
>   37. Re: [all] Mitaka Design Summit - Proposed slot allocation
>       (Dmitry Tantsur)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 04 Sep 2015 00:06:26 +0200
> From: Thomas Goirand <zigo at debian.org>
> To: Diana Whitten <hurgleburgler at gmail.com>
> Cc: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [horizon] Concern about
>         XStatic-bootswatch imports from fonts.googleapis.com
> Message-ID: <55E8C462.5050804 at debian.org>
> Content-Type: text/plain; charset=utf-8
>
> On 09/03/2015 07:58 PM, Diana Whitten wrote:
> > Thomas,
> >
> > Sorry for the slow response, since I wasn't on the right mailing list
> yet.
> >
> > 1. I'm trying to figure out the best way possible to address this
> > security breach.  I think that the best way to fix this is to augment
> > Bootswatch to only use the URL through a parameter, that can be easily
> > configured.  I have an Issue open on their code right now for this very
> > feature.
> >
> > Until then, I think that we can easily address the issue from the point
> > of view of Horizon, such that we:
> > 1. Remove all instances of 'fonts.googleapis.com
> > <http://fonts.googleapis.com>' from the SCSS during the preprocessor
> > step. Therefore, no outside URLs that point to this location EVER get hit
> > *or*
> > 2. Until the issue that I created on Bootswatch can be addressed,  we
> > can include that file that is making the call in the tree and remove the
> > @import entirely.
> > *or*
> > 3. Until the issue that I created on Bootswatch can be addressed,  we
> > can include the two files that we need from bootswatch 'paper' entirely,
> > and remove Bootswatch as a requirement until we can get an updated
> package
> >
> > 2. It's not getting used at all ... anyway.  I packaged up the font and
> > made it also available via xstatic.  I realized there were some questions
> > about where the versioning came from, but it looks like you might have
> > been looking at the wrong github repo:
> > https://github.com/Templarian/MaterialDesign-Webfont/releases
> >
> > You can absolutely patch out the fonts.  The result will not be ugly;
> > each font should fall back to a nice system font.  But, we are only
> > using the 'Paper' theme out of Bootswatch right now and therefore only
> > packaged up the specific font required for it.
> >
> > Ping me on IRC @hurgleburgler
> >
> > - Diana
>
> Diana,
>
> Thanks a lot for all of these answers. It's really helping!
>
> So if I understand correctly, xstatic-bootswatch is an already stripped-down
> version of the upstream bootswatch. But Horizon only uses a single theme
> out of the 16 available in the XStatic package. Then why aren't we using
> an xstatic package which would include only the paper theme? Or is there
> something that I didn't understand?
>
> Removing the fonts.googleapis.com at runtime by Horizon isn't an option
> for distributions, as we don't want to ship a .css file including such
> an import anyway. So definitely, I'd be patching the @import away.
> But will there be a mechanism to load the Roboto font, packaged as
> xstatic, then? Falling back to a system font could have surprising results.
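The @import-stripping Thomas describes patching in at package time can be sketched as a small preprocessing filter. This is an illustrative sketch only: the function name and regex are hypothetical, not Horizon's or any distribution's actual code.

```python
import re

# Hypothetical sketch: drop any @import that pulls from
# fonts.googleapis.com before the SCSS is compiled, so the rendered
# page never makes a request to an outside host.
GOOGLE_FONTS_IMPORT = re.compile(
    r'^\s*@import\s+url\([\'"]?https?://fonts\.googleapis\.com[^)]*\)\s*;\s*$',
    re.MULTILINE)

def strip_google_fonts_imports(scss_source):
    """Return the SCSS with all fonts.googleapis.com imports removed."""
    return GOOGLE_FONTS_IMPORT.sub('', scss_source)
```

A distribution patch could run a filter like this over each theme's stylesheet before packaging, then load a locally packaged Roboto font instead.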
>
> This was for the bootswatch issue. Now, about the mdi, which IMO isn't
> as much as a problem.
>
> The Git repository at:
> https://github.com/Templarian/MaterialDesign-Webfont/releases
>
> I wonder how it was created. Apparently, the font is made up of images
> that are coming from this repository:
> https://github.com/google/material-design-icons
>
> the question is then, how has this font been made? Was it done "by hand"
> by an artist? Or was there some kind of scripting involved? If it is the
> latter, then I'd like to build the font out of the original sources if
> possible. If I can't find how it was done, then I'll probably end up
> just packaging the font as-is, but I'd very much prefer to understand
> what has been done.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 3 Sep 2015 23:03:48 +0000
> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>, Kevin Benton <
> blak111 at gmail.com>
> Cc: PAUL CARVER <pc2929 at att.com>
> Subject: Re: [openstack-dev] cloud-init IPv6 support
> Message-ID:
>         <1A3C52DFCD06494D8528644858247BF01A2F0CB6 at EX10MBOX03.pnnl.gov>
> Content-Type: text/plain; charset="us-ascii"
>
> So if we define the well known address and cloud-init adopts it, then
> Amazon should be inclined to adopt it too. :)
>
> Why always chase Amazon?
>
> Thanks,
> Kevin
> ________________________________________
> From: Steve Gordon [sgordon at redhat.com]
> Sent: Thursday, September 03, 2015 11:06 AM
> To: Kevin Benton
> Cc: OpenStack Development Mailing List (not for usage questions); PAUL
> CARVER
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
>
> ----- Original Message -----
> > From: "Kevin Benton" <blak111 at gmail.com>
> >
> > When we discussed this before on the neutron channel, I thought it was
> > because cloud-init doesn't support IPv6. We had wasted quite a bit of
> time
> > talking about adding support to our metadata service because I was under
> > the impression that cloud-init already did support IPv6.
> >
> > IIRC, the argument against adding IPv6 support to cloud-init was that it
> > might be incompatible with how AWS chooses to implement IPv6 metadata, so
> > AWS would require a fork or other incompatible alternative to cloud-init
> in
> > all of their images.
> >
> > Is that right?
>
> That's certainly my understanding of the status quo; I was enquiring
> primarily to check it was still accurate.
>
> -Steve
>
> > On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins <sean at coreitpro.com>
> wrote:
> >
> > > It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
> > > API defines transport level details about the API - and currently only
> > > defines a well known IPv4 link local address to connect to. No well
> known
> > > link local IPv6 address has been defined.
> > >
> > > I usually recommend config-drive for IPv6 enabled clouds due to this.
> > > --
> > > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
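The transport detail discussed in the thread above — a well-known IPv4 link-local address with no defined IPv6 counterpart — can be made concrete with a short sketch. 169.254.169.254 is the conventional EC2 metadata address; the helper function below is illustrative, not part of cloud-init.

```python
# Illustrative sketch of the point made in the thread: the EC2
# metadata API is only reachable at a well-known IPv4 link-local
# address. No equivalent IPv6 link-local address has been defined,
# which is why config-drive is recommended for IPv6-only guests.
# This helper is hypothetical, not cloud-init code.
EC2_METADATA_IPV4 = "169.254.169.254"  # well-known EC2 metadata address

def metadata_url(path, version="latest"):
    """Build a metadata URL against the well-known IPv4 address."""
    return "http://%s/%s/%s" % (EC2_METADATA_IPV4, version, path.lstrip("/"))
```

An IPv6-only instance has nothing to put in place of `EC2_METADATA_IPV4`, so such a guest would fall back to config-drive for its configuration data.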
>
>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 3 Sep 2015 16:11:48 -0700
> From: Diana Whitten <hurgleburgler at gmail.com>
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch
>         imports from fonts.googleapis.com
> Message-ID:
>         <
> CABswzdHTq_WqX6XmfMpdDrt0pDa2DZS3spw0O26rErtE7h9emg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thomas,
>
> Lots of movement on this today.  I was able to get Bootswatch to roll a new
> package to accommodate our need to not pull in the URL by default any
> longer.  This is now a configurable value that can be set by a variable.
> The variable's default value is still the google URL, but Horizon will
> reset that when we pull it in.
>
> The bootswatch package isn't a stripped down version of upstream
> bootswatch, but it was created from the already existing bower package for
> Bootswatch.  It is easier to maintain parity with the bower package than
> to try to pull very specific themes out of it.  Also, some upcoming
> features plan to take advantage of some of the other themes as well.
>
> As for the MDI package, there are services out there that can convert the
> raw SVG that is available directly from google (
> https://github.com/google/material-design-icons) into a variety of Web
> Font
> Formats, BUT ... this is not a direct mapping of Google's Material Design
> Icons.  The Templarian repo is actually a bigger set of icons: it includes
> Google's icons, but also a number of community-supported and contributed
> (under the same license) icons.  See the full set here:
> https://materialdesignicons.com/.  Templarian maintains the SVGs of these
> at https://github.com/Templarian/MaterialDesign, however, they also
> maintain the Bower package (that the xstatic inherits from) at
> https://github.com/Templarian/MaterialDesign-Webfont.
>
> Best,
> Diana
>
>
>
> On Thu, Sep 3, 2015 at 3:06 PM, Thomas Goirand <zigo at debian.org> wrote:
>
> > On 09/03/2015 07:58 PM, Diana Whitten wrote:
> > > Thomas,
> > >
> > > Sorry for the slow response, since I wasn't on the right mailing list
> > yet.
> > >
> > > 1. I'm trying to figure out the best way possible to address this
> > > security breach.  I think that the best way to fix this is to augment
> > > Bootswatch to only use the URL through a parameter, that can be easily
> > > configured.  I have an Issue open on their code right now for this very
> > > feature.
> > >
> > > Until then, I think that we can easily address the issue from the point
> > > of view of Horizon, such that we:
> > > 1. Remove all instances of 'fonts.googleapis.com
> > > <http://fonts.googleapis.com>' from the SCSS during the preprocessor
> > > step. Therefore, no outside URLs that point to this location EVER get
> hit
> > > *or*
> > > 2. Until the issue that I created on Bootswatch can be addressed,  we
> > > can include that file that is making the call in the tree and remove
> the
> > > @import entirely.
> > > *or*
> > > 3. Until the issue that I created on Bootswatch can be addressed,  we
> > > can include the two files that we need from bootswatch 'paper'
> entirely,
> > > and remove Bootswatch as a requirement until we can get an updated
> > package
> > >
> > > 2. It's not getting used at all ... anyway.  I packaged up the font and
> > > made it also available via xstatic.  I realized there were some
> > > questions
> > > about where the versioning came from, but it looks like you might have
> > > been looking at the wrong github repo:
> > > https://github.com/Templarian/MaterialDesign-Webfont/releases
> > >
> > > You can absolutely patch out the fonts.  The result will not be ugly;
> > > each font should fall back to a nice system font.  But, we are only
> > > using the 'Paper' theme out of Bootswatch right now and therefore only
> > > packaged up the specific font required for it.
> > >
> > > Ping me on IRC @hurgleburgler
> > >
> > > - Diana
> >
> > Diana,
> >
> > Thanks a lot for all of these answers. It's really helping!
> >
> > So if I understand correctly, xstatic-bootswatch is an already stripped-down
> > version of the upstream bootswatch. But Horizon only uses a single theme
> > out of the 16 available in the XStatic package. Then why aren't we using
> > an xstatic package which would include only the paper theme? Or is there
> > something that I didn't understand?
> >
> > Removing the fonts.googleapis.com at runtime by Horizon isn't an option
> > for distributions, as we don't want to ship a .css file including such
> > an import anyway. So definitely, I'd be patching the @import away.
> > But will there be a mechanism to load the Roboto font, packaged as
> > xstatic, then? Falling back to a system font could have surprising
> results.
> >
> > This was for the bootswatch issue. Now, about the mdi, which IMO isn't
> > as much as a problem.
> >
> > The Git repository at:
> > https://github.com/Templarian/MaterialDesign-Webfont/releases
> >
> > I wonder how it was created. Apparently, the font is made up of images
> > that are coming from this repository:
> > https://github.com/google/material-design-icons
> >
> > the question is then, how has this font been made? Was it done "by hand"
> > by an artist? Or was there some kind of scripting involved? If it is the
> > latter, then I'd like to build the font out of the original sources if
> > possible. If I can't find how it was done, then I'll probably end up
> > just packaging the font as-is, but I'd very much prefer to understand
> > what has been done.
> >
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> >
>
> ------------------------------
>
> Message: 4
> Date: Fri, 04 Sep 2015 00:17:20 +0000
> From: Angus Salkeld <asalkeld at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Heat] convergence rally test results (so
>         far)
> Message-ID:
>         <
> CAA16xcx7R0eNatKKrq1rxLX7OGL37acKRFis5S1Pmt8-ZJq+dg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Fri, Sep 4, 2015 at 12:48 AM Zane Bitter <zbitter at redhat.com> wrote:
>
> > On 03/09/15 02:56, Angus Salkeld wrote:
> > > On Thu, Sep 3, 2015 at 3:53 AM Zane Bitter <zbitter at redhat.com
> > > <mailto:zbitter at redhat.com>> wrote:
> > >
> > >     On 02/09/15 04:55, Steven Hardy wrote:
> > >      > On Wed, Sep 02, 2015 at 04:33:36PM +1200, Robert Collins wrote:
> > >      >> On 2 September 2015 at 11:53, Angus Salkeld
> > >     <asalkeld at mirantis.com <mailto:asalkeld at mirantis.com>> wrote:
> > >      >>
> > >      >>> 1. limit the number of resource actions in parallel (maybe
> > >      >>> based on the number of cores)
> > >      >>
> > >      >> I'm having trouble mapping that back to 'and heat-engine is
> > >     running on
> > >      >> 3 separate servers'.
> > >      >
> > >      > I think Angus was responding to my test feedback, which was a
> > >     different
> > >      > setup, one 4-core laptop running heat-engine with 4 worker
> > processes.
> > >      >
> > >      > In that environment, the level of additional concurrency becomes
> > >     a problem
> > >      > because all heat workers become so busy that creating a large
> > stack
> > >      > DoSes the Heat services, and in my case also the DB.
> > >      >
> > >      > If we had a configurable option, similar to num_engine_workers,
> > which
> > >      > enabled control of the number of resource actions in parallel, I
> > >     probably
> > >      > could have controlled that explosion in activity to a more
> > >     manageable series
> > >      > of tasks, e.g. I'd set num_resource_actions to
> > >     (num_engine_workers*2) or
> > >      > something.
> > >
> > >     I think that's actually the opposite of what we need.
> > >
> > >     The resource actions are just sent to the worker queue to get
> > processed
> > >     whenever. One day we will get to the point where we are overflowing
> > the
> > >     queue, but I guarantee that we are nowhere near that day. If we are
> > >     DoSing ourselves, it can only be because we're pulling *everything*
> > off
> > >     the queue and starting it in separate greenthreads.
> > >
> > >
> > > worker does not use a greenthread per job like service.py does.
> > > The issue is that if you have actions that are fast, you can hit the db
> > > hard.
> > >
> > > QueuePool limit of size 5 overflow 10 reached, connection timed out,
> > > timeout 30
> > >
> > > It seems like it's not very hard to hit this limit. It comes from
> > > simply loading the resource in the worker:
> > > "/home/angus/work/heat/heat/engine/worker.py", line 276, in
> > check_resource
> > > "/home/angus/work/heat/heat/engine/worker.py", line 145, in
> > _load_resource
> > > "/home/angus/work/heat/heat/engine/resource.py", line 290, in load
> > > resource_objects.Resource.get_obj(context, resource_id)
> >
> > This is probably me being naive, but that sounds strange. I would have
> > thought that there is no way to exhaust the connection pool by doing
> > lots of actions in rapid succession. I'd have guessed that the only way
> > to exhaust a connection pool would be to have lots of connections open
> > simultaneously. That suggests to me that either we are failing to
> > expeditiously close connections and return them to the pool, or that we
> > are - explicitly or implicitly - processing a bunch of messages in
> > parallel.
> >
>
> I suspect we are leaking sessions, I have updated this bug to make sure we
> focus on figuring out the root cause of this before jumping to conclusions:
> https://bugs.launchpad.net/heat/+bug/1491185
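The failure mode under discussion can be sketched with a stdlib-only toy pool (this is not Heat or SQLAlchemy code; it just mimics QueuePool's "size 5, overflow 10" checkout semantics): checkouts that are never returned exhaust the pool even though requests arrive one at a time, which is why a leak rather than raw parallelism is the prime suspect.

```python
import queue


class TinyPool:
    """Toy pool mimicking SQLAlchemy QueuePool's size/overflow/timeout."""

    def __init__(self, size=5, max_overflow=10, timeout=1.0):
        self._pool = queue.Queue()
        for n in range(size):
            self._pool.put(f"conn-{n}")
        self._overflow_left = max_overflow
        self._timeout = timeout

    def checkout(self):
        try:
            return self._pool.get_nowait()
        except queue.Empty:
            if self._overflow_left > 0:     # open an overflow connection
                self._overflow_left -= 1
                return "overflow-conn"
            # Pool and overflow exhausted: wait, then fail like QueuePool.
            try:
                return self._pool.get(timeout=self._timeout)
            except queue.Empty:
                raise TimeoutError(
                    "QueuePool limit of size 5 overflow 10 reached")

    def checkin(self, conn):
        self._pool.put(conn)


pool = TinyPool()
# 15 sequential checkouts with no checkin() succeed (5 pooled + 10 overflow)...
leaked = [pool.checkout() for _ in range(15)]
try:
    pool.checkout()                         # ...but the 16th times out
    exhausted = False
except TimeoutError:
    exhausted = True
```

With a `checkin()` after every `checkout()`, sequential use never exhausts the pool regardless of how many tasks run, matching Zane's observation that only open-but-unreturned connections can trigger the error.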
>
> -A
>
>
> >
> > >     In an ideal world, we might only ever pull one task off that queue
> > at a
> > >     time. Any time the task is sleeping, we would use for processing
> > stuff
> > >     off the engine queue (which needs a quick response, since it is
> > serving
> > >     the ReST API). The trouble is that you need a *huge* number of
> > >     heat-engines to handle stuff in parallel. In the
> reductio-ad-absurdum
> > >     case of a single engine only processing a single task at a time,
> > we're
> > >     back to creating resources serially. So we probably want a higher
> > number
> > >     than 1. (Phase 2 of convergence will make tasks much smaller, and
> may
> > >     even get us down to the point where we can pull only a single task
> > at a
> > >     time.)
> > >
> > >     However, the fewer engines you have, the more greenthreads we'll
> > have to
> > >     allow to get some semblance of parallelism. To the extent that more
> > >     cores means more engines (which assumes all running on one box, but
> > >     still), the number of cores is negatively correlated with the
> number
> > of
> > >     tasks that we want to allow.
> > >
> > >     Note that all of the greenthreads run in a single CPU thread, so
> > having
> > >     more cores doesn't help us at all with processing more stuff in
> > >     parallel.
> > >
> > >
> > > Except, as I said above, we are not creating greenthreads in worker.
> >
> > Well, maybe we'll need to in order to make things still work sanely with
> > a low number of engines :) (Should be pretty easy to do with a
> semaphore.)
> >
> > I think what y'all are suggesting is limiting the number of jobs that go
> > into the queue... that's quite wrong IMO. Apart from the fact it's
> > impossible (resources put jobs into the queue entirely independently,
> > and have no knowledge of the global state required to throttle inputs),
> > we shouldn't implement an in-memory queue with long-running tasks
> > containing state that can be lost if the process dies - the whole point
> > of convergence is we have... a message queue for that. We need to limit
> > the rate that stuff comes *out* of the queue. And, again, since we have
> > no knowledge of global state, we can only control the rate at which an
> > individual worker processes tasks. The way to avoid killing the DB is to
> > put a constant ceiling on the workers * concurrent_tasks_per_worker
> > product.
> >
> > cheers,
> > Zane.
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/10ac4234/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 5
> Date: Thu, 3 Sep 2015 17:18:15 -0700
> From: Jim Rollenhagen <jim at jimrollenhagen.com>
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [Ironic] Liberty release plans
> Message-ID: <20150904001815.GA21846 at jimrollenhagen.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hi all,
>
> I wanted to lay down my thoughts on the rest of the cycle here.
>
> As you may know, we recently released Ironic 4.0.0. We've also released
> python-ironicclient 0.8.0 and ironic-lib 0.1.0 this week. Yay!
>
> I'd like to do two more server releases this cycle.
>
> * 4.1.0 - minor release to clean up some bugs on 4.0.0. The last
>   patch[0] I wanted in this is in the gate right now. I'd like to
>   release this on Tuesday, September 8.
>
> * 4.2.0 - this will become the stable/liberty branch. I'd like to
>   release this in coordination with the rest of OpenStack's RC releases,
>   and backport bug fixes as necessary, releasing 4.2.x for those.
>
> I've made lists of features and bugs we want to land in 4.2.0 on our
> whiteboard[1]. Let's try to prioritize code and review for those as much
> as possible.
>
> I'd like to try to release 4.2.0 on Thursday, September 24. As such, I'd
> like reviewers to respect a soft code freeze beginning on Thursday,
> September 17. I don't want to say "don't merge features", but please
> don't merge anything risky after that date.
>
> As always, questions/comments/concerns welcome.
>
> // jim
>
> [0] https://review.openstack.org/#/c/219398/
> [1] https://etherpad.openstack.org/p/IronicWhiteBoard
>
>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 3 Sep 2015 17:24:00 -0700
> From: Jim Rollenhagen <jim at jimrollenhagen.com>
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [Ironic] Introducing ironic-lib 0.1.0
> Message-ID: <20150904002400.GB21846 at jimrollenhagen.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hi all,
>
> I'm proud to announce the initial release of ironic-lib! This library
> was built to share code between various Ironic projects. The initial
> release contains an up-to-date copy of Ironic's disk partitioning code,
> to be shared between Ironic and ironic-python-agent.
>
> At the beginning of the Mitaka cycle, we'll begin to refactor Ironic to
> use this code, and also start using it in IPA to be able to deploy
> partition images, support ephemeral volumes, etc., without relying on
> iSCSI.
>
> PyPI: https://pypi.python.org/pypi/ironic-lib
> Git: http://git.openstack.org/cgit/openstack/ironic-lib/
> Launchpad: https://launchpad.net/ironic-lib
> global-requirements patch: https://review.openstack.org/#/c/219011/
>
> As always, questions/comments/concerns welcome. :)
>
> // jim
>
>
>
> ------------------------------
>
> Message: 7
> Date: Fri, 4 Sep 2015 09:55:40 +0900
> From: Hirofumi Ichihara <ichihara.hirofumi at lab.ntt.co.jp>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] pushing changes through the
>         gate
> Message-ID: <AB2E7CE7-ECCC-49D9-AB20-46135559D3DE at lab.ntt.co.jp>
> Content-Type: text/plain; charset="utf-8"
>
> Good work and thank you for your help with my patch.
>
> Anyway, I don't know when the bp owner has to merge the code by.
> I can see the following sentence in the bp rules[1]:
> "The PTL will create a <release>-backlog directory during the RC window
> and move all specs which didn't make the <release> there."
> Did we have to merge the implementation by L-3? Or can we merge it in RC-1?
>
> [1]:
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-bp-and-spec-notes
>
> Thanks,
> Hirofumi
>
> > On 2015/09/04, at 7:00, Armando M. <armamig at gmail.com> wrote:
> >
> >
> >
> > On 2 September 2015 at 09:40, Armando M. <armamig at gmail.com <mailto:
> armamig at gmail.com>> wrote:
> > Hi,
> >
> > By now you may have seen that I have taken out your change from the gate
> and given it a -2: don't despair! I am only doing it to give priority to
> the stuff that needs to merge in order to get [1] into a much better shape.
> >
> > If you have an important fix, please target it for RC1 or talk to me or
> Doug (or Kyle when he's back from his time off), before putting it in the
> gate queue. If everyone is not conscious of the other, we'll only end up
> stepping on each other, and nothing moves forward.
> >
> > Let's give priority to gate stabilization fixes, and targeted stuff.
> >
> > Happy merging...not!
> >
> > Many thanks,
> > Armando
> >
> > [1] https://launchpad.net/neutron/+milestone/liberty-3 <
> https://launchpad.net/neutron/+milestone/liberty-3>
> > [2] https://launchpad.net/neutron/+milestone/liberty-rc1 <
> https://launchpad.net/neutron/+milestone/liberty-rc1>
> >
> > Download files for the milestone are available in [1]. We still have a
> lot to do as there are outstanding bugs and blueprints that will have to be
> merged in the RC time windows.
> >
> > Please be conscious of what you approve. Give priority to:
> >
> > - Targeted bugs and blueprints in [2];
> > - Gate stability fixes or patches that aim at helping troubleshooting;
> >
> > In these busy times, please refrain from proposing/merging:
> >
> > - Silly rebase generators (e.g. spelling mistakes);
> > - Cosmetic changes (e.g. minor doc strings/comment improvements);
> > - Refactoring required while dealing with the above;
> > - A dozen patches stacked on top of each other;
> >
> > Every rule has its own exception, so don't take this literally.
> >
> > If you are unsure, please reach out to me, Kyle or your Lieutenant and
> we'll target stuff that is worth targeting.
> >
> > As for the rest, I am gonna be merciless and -2 anything that I can
> find, in order to keep our gate lean and sane :)
> >
> > Thanks and happy hacking.
> >
> > A.
> >
> > [1] https://launchpad.net/neutron/+milestone/liberty-3 <
> https://launchpad.net/neutron/+milestone/liberty-3>
> > [2] https://launchpad.net/neutron/+milestone/liberty-rc1 <
> https://launchpad.net/neutron/+milestone/liberty-rc1>
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org <mailto:
> OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/f3e022b3/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 8
> Date: Fri, 4 Sep 2015 09:43:49 +0800
> From: ?? <wanghua.humble at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [magnum]keystone version
> Message-ID:
>         <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=
> pFO27-g at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> Now the keystoneclient in magnum only supports keystone v3. Is it necessary
> to support keystone v2? Keystone v2 doesn't support trusts.
>
> Regards,
> Wanghua
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/cfb1e890/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 9
> Date: Fri, 04 Sep 2015 12:16:00 +1000
> From: Sachi King <nakato at nakato.io>
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [Cross Project] [Neutron] [Nova] Tox.ini
>         changes for Constraints testing
> Message-ID: <1721376.LIrBB9dSzH at youmu>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi,
>
> I'm working on setting up both Nova and Neutron with constrained unit
> tests.
> More details about these changes can be found in the blueprint [0].
>
> An example issue this will prevent is Neutron's recent gate breakage caused
> by a new netaddr version. [1]
>
> Now that the base changes have landed in project-config the next step is to
> modify tox.ini to run an alternate install command when called with the
> 'constraints' factor.
>
> Nova: https://review.openstack.org/205931/
> Neutron: https://review.openstack.org/219134/
>
> This change is a no-op for current gate jobs and developer workflows, only
> adding the functionality required for the new constraints jobs and manual
> execution of the constrained tests when desired.
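The shape of the tox.ini change looks roughly like the fragment below (sketched from the cross-project constraints work; the exact factor names and pin-file URL should be taken from the linked reviews, not from this sketch):

```ini
# A '-constraints' factor swaps in a pip install command pinned to the
# global upper-constraints file; plain factors keep the default command.
[testenv:py27-constraints]
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```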
>
> Once these have been added we can then proceed adding the py27 and 34 jobs,
> which will be non-voting at this point.
>
> Nova: https://review.openstack.org/219582/
> Neutron: https://review.openstack.org/219610/
>
> [0]
> http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/073239.html
>
> If there are any other projects interested in being an early adopter of
> constrained unit tests, please let me know.
>
> Cheers,
> Sachi
>
>
>
> ------------------------------
>
> Message: 10
> Date: Fri, 04 Sep 2015 02:28:09 +0000
> From: David Stanek <dstanek at dstanek.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] FFE Request for completion of data driven
>         assignment testing in Keystone
> Message-ID:
>         <CAO69Nd=
> i84FrR1f+0xHqb1S1jHytNFcbL+3+y+YjpDEcDQVimA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, Sep 3, 2015 at 3:44 PM Henry Nash <henryn at linux.vnet.ibm.com>
> wrote:
>
> >
> > I would like to request an FFE for the remaining two patches that are
> > already in review (https://review.openstack.org/#/c/153897/ and
> > https://review.openstack.org/#/c/154485/).  These contain only test code
> > and no functional changes, and increase our test coverage - as well as
> > enable other items to re-use the list_role_assignment backend method.
> >
>
> Do we need an FFE for changes to tests?
>
> -- David
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/c592cdd5/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 11
> Date: Thu, 3 Sep 2015 19:36:51 -0700
> From: "Armando M." <armamig at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] pushing changes through the
>         gate
> Message-ID:
>         <
> CAK+RQea9d57qka7st1zWHMzGF4qoWWB3MWrhQZThEn0LBFEEtg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On 3 September 2015 at 17:55, Hirofumi Ichihara <
> ichihara.hirofumi at lab.ntt.co.jp> wrote:
>
> > Good work and thank you for your help with my patch.
> >
> > Anyway, I don't know when the bp owner has to merge the code by.
> > I can see the following sentence in the bp rules[1]:
> > "The PTL will create a <release>-backlog directory during the RC window
> > and move all specs which didn't make the <release> there."
> > Did we have to merge the implementation by L-3? Or can we merge it in
> RC-1?
> >
> > [1]:
> >
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-bp-and-spec-notes
> >
> >
> It depends on the extent of the changes remaining to merge. There is no
> hard and fast rule, because every blueprint is different: the code can be
> complex and pervasive, or simple and isolated. In the former even a small
> patch may be deferred, in the latter even a dozen patches could be merged
> during RC. Some other blueprints are best completed during feature freeze,
> because of the rebase risk they cause...
>
> Bottom line: never leave it to last minute!
>
>
> > Thanks,
> > Hirofumi
> >
> > On 2015/09/04, at 7:00, Armando M. <armamig at gmail.com> wrote:
> >
> >
> >
> > On 2 September 2015 at 09:40, Armando M. <armamig at gmail.com> wrote:
> >
> >> Hi,
> >>
> >> By now you may have seen that I have taken out your change from the gate
> >> and given it a -2: don't despair! I am only doing it to give priority to
> >> the stuff that needs to merge in order to get [1] into a much better
> shape.
> >>
> >> If you have an important fix, please target it for RC1 or talk to me or
> >> Doug (or Kyle when he's back from his time off), before putting it in
> the
> >> gate queue. If everyone is not conscious of the other, we'll only end up
> >> stepping on each other, and nothing moves forward.
> >>
> >> Let's give priority to gate stabilization fixes, and targeted stuff.
> >>
> >> Happy merging...not!
> >>
> >> Many thanks,
> >> Armando
> >>
> >> [1] https://launchpad.net/neutron/+milestone/liberty-3
> >> [2] https://launchpad.net/neutron/+milestone/liberty-rc1
> >>
> >
> > Download files for the milestone are available in [1]. We still have a
> lot
> > to do as there are outstanding bugs and blueprints that will have to be
> > merged in the RC time windows.
> >
> > Please be conscious of what you approve. Give priority to:
> >
> > - Targeted bugs and blueprints in [2];
> > - Gate stability fixes or patches that aim at helping troubleshooting;
> >
> > In these busy times, please refrain from proposing/merging:
> >
> > - Silly rebase generators (e.g. spelling mistakes);
> > - Cosmetic changes (e.g. minor doc strings/comment improvements);
> > - Refactoring required while dealing with the above;
> > - A dozen patches stacked on top of each other;
> >
> > Every rule has its own exception, so don't take this literally.
> >
> > If you are unsure, please reach out to me, Kyle or your Lieutenant and
> > we'll target stuff that is worth targeting.
> >
> > As for the rest, I am gonna be merciless and -2 anything that I can find,
> > in order to keep our gate lean and sane :)
> >
> > Thanks and happy hacking.
> >
> > A.
> >
> > [1] https://launchpad.net/neutron/+milestone/liberty-3
> > [2] https://launchpad.net/neutron/+milestone/liberty-rc1
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/05913eda/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 12
> Date: Thu, 3 Sep 2015 19:48:17 -0700
> From: Morgan Fainberg <morgan.fainberg at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] FFE Request for completion of data driven
>         assignment testing in Keystone
> Message-ID: <88142A7C-67DF-440F-A3B7-02966AAE6A9E at gmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
>
>
> > On Sep 3, 2015, at 19:28, David Stanek <dstanek at dstanek.com> wrote:
> >
> >
> >> On Thu, Sep 3, 2015 at 3:44 PM Henry Nash <henryn at linux.vnet.ibm.com>
> wrote:
> >>
> >> I would like to request an FFE for the remaining two patches that are
> already in review (https://review.openstack.org/#/c/153897/ and
> https://review.openstack.org/#/c/154485/).  These contain only test code
> and no functional changes, and increase our test coverage - as well as
> enable other items to re-use the list_role_assignment backend method.
> >
> > Do we need an FFE for changes to tests?
> >
>
> I would say "no".
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/6af2e685/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 13
> Date: Fri, 4 Sep 2015 12:14:07 +0900
> From: "Ken'ichi Ohmichi" <ken1ohmichi at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Scheduler hints, API and Objects
> Message-ID:
>         <CAA393vixHPJ=Ay=
> 79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hi Andrew,
>
> Sorry for this late response, I missed it.
>
> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
> > I have been growing concerned recently with some attempts to formalize
> > scheduler hints, both with API validation and Nova objects defining them,
> > and want to air those concerns and see if others agree or can help me see
> > why I shouldn't worry.
> >
> > Starting with the API I think the strict input validation that's being
> done,
> > as seen in
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
> ,
> > is unnecessary, and potentially problematic.
> >
> > One problem is that it doesn't indicate anything useful for a client.
> The
> > schema indicates that there are hints available but can make no claim
> about
> > whether or not they're actually enabled.  So while a microversion bump
> would
> > typically indicate a new feature available to an end user, in the case
> of a
> > new scheduler hint a microversion bump really indicates nothing at all.
> It
> > does ensure that if a scheduler hint is used that it's spelled properly
> and
> > the data type passed is correct, but that's primarily useful because
> there
> > is no feedback mechanism to indicate an invalid or unused scheduler
> hint.  I
> > think the API schema is a poor proxy for that deficiency.
> >
> > Since the exposure of a hint means nothing as far as its usefulness, I
> don't
> > think we should be codifying them as part of our API schema at this time.
> > At some point I imagine we'll evolve a more useful API for passing
> > information to the scheduler as part of a request, and when that happens
> I
> > don't think needing to support a myriad of meaningless hints in older API
> > versions is going to be desirable.
> >
> > Finally, at this time I'm not sure we should take the stance that only
> > in-tree scheduler hints are supported.  While I completely agree with the
> > desire to expose things in cross-cloud ways as we've done and are
> looking to
> > do with flavor and image properties I think scheduling is an area where
> we
> > want to allow some flexibility for deployers to write and expose
> scheduling
> > capabilities that meet their specific needs.  Over time I hope we will
> get
> > to a place where some standardization can happen, but I don't think
> locking
> > in the current scheduling hints is the way forward for that.  I would
> love
> > to hear from multi-cloud users here and get some input on whether that's
> > crazy and they are expecting benefits from validation on the current
> > scheduler hints.
> >
> > Now, objects.  As part of the work to formalize the request spec sent to
> the
> > scheduler there's an effort to make a scheduler hints object.  This
> > formalizes them in the same way as the API with no benefit that I can
> see.
> > I won't duplicate my arguments above, but I feel the same way about the
> > objects as I do with the API.  I don't think needing to update and object
> > version every time a new hint is added is useful at this time, nor do I
> > think we should lock in the current in-tree hints.
> >
> > In the end this boils down to my concern that the scheduling hints api
> is a
> > really horrible user experience and I don't want it to be solidified in
> the
> > API or objects yet.  I think we should re-examine how they're handled
> before
> > that happens.
>
> Now we are discussing this on https://review.openstack.org/#/c/217727/
> for allowing out-of-tree scheduler-hints.
> When we wrote the API schema for scheduler-hints, it was difficult to know
> which API parameters were available for scheduler-hints.
> The current API schema exposes them, and I guess that is useful for API
> users too.
>
> One idea is that: How about auto-extending scheduler-hint API schema
> based on loaded schedulers?
> Now API schemas of "create/update/resize/rebuild a server" APIs are
> auto-extended based on loaded extensions by using stevedore
> library[1].
> I guess we can apply the same way for scheduler-hints also in long-term.
> Each scheduler needs to implement a method which returns its available API
> parameter formats; nova-api then gathers them and extends the
> scheduler-hints API schema with them.
> That means out-of-tree schedulers also will be available if they
> implement the method.
> # In the short term, I could see the "blocking additionalProperties"
> validation being disabled, by the way.
>
> Thanks
> Ken Ohmichi
>
> ---
> [1]:
> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
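The auto-extension idea above can be sketched in plain Python (the `get_hint_schema()` hook and the filter classes are hypothetical, not Nova's API; in Nova the loading would go through a stevedore ExtensionManager over a setuptools entry-point namespace rather than a hard-coded list): each loaded scheduler/filter contributes its parameter schema, and nova-api merges them into one strict scheduler-hints schema.

```python
class SameHostFilter:
    @staticmethod
    def get_hint_schema():
        # Hypothetical per-filter hook returning JSON-schema properties.
        return {"same_host": {"type": "array", "items": {"type": "string"}}}


class GroupAffinityFilter:
    @staticmethod
    def get_hint_schema():
        return {"group": {"type": "string"}}


def build_hints_schema(loaded_filters):
    """Merge the per-filter property schemas into one strict schema."""
    properties = {}
    for f in loaded_filters:
        properties.update(f.get_hint_schema())
    return {
        "type": "object",
        "properties": properties,
        # Strict validation covers exactly what is loaded, so out-of-tree
        # schedulers extend the schema instead of fighting it.
        "additionalProperties": False,
    }


schema = build_hints_schema([SameHostFilter, GroupAffinityFilter])
```

An out-of-tree scheduler would then only need to implement the same hook to have its hints pass validation, which is the long-term behaviour described above.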
>
>
>
> ------------------------------
>
> Message: 14
> Date: Fri, 4 Sep 2015 03:20:16 +0000
> From: "Steven Dake (stdake)" <stdake at cisco.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [kolla][doc] Kolla documentation now live on
>         docs.openstack.org!
> Message-ID: <D20E5BEE.11D6D%stdake at cisco.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Hi folks,
>
> Kolla documentation is now published live to docs.openstack.org.  Our
> documentation still needs significant attention to be really high quality,
> but what we have is a start.  Every code change results in new published
> documentation.
>
> I'd like to invite folks interested in getting involved in Kolla
> development to take a look at improving the documentation.  One of the
> major components of any good open source project is fantastic
> documentation, and the more contribution we receive the better.  As further
> encouragement to improving documentation no specific bugs or blueprints
> need be filed to make documentation changes.  Our community decided on this
> exception early on to facilitate the enhancement of the documentation.  The
> broader community looks forward to any contributions you can make!
>
> The documentation is located here:
>
> http://docs.openstack.org/developer/kolla
>
> Thanks,
> -steve
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/87d9cdd7/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 15
> Date: Thu, 3 Sep 2015 21:23:20 -0700
> From: Morgan Fainberg <morgan.fainberg at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] FFE Request for completion of data driven
>         assignment testing in Keystone
> Message-ID: <E409D08D-203E-494A-9584-925740B121DE at gmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
>
> > On Sep 3, 2015, at 19:48, Morgan Fainberg <morgan.fainberg at gmail.com>
> wrote:
> >
> >
> >
> >
> >> On Sep 3, 2015, at 19:28, David Stanek <dstanek at dstanek.com> wrote:
> >>
> >>
> >>> On Thu, Sep 3, 2015 at 3:44 PM Henry Nash <henryn at linux.vnet.ibm.com>
> wrote:
> >>>
> >>> I would like to request an FFE for the remaining two patches that are
> already in review (https://review.openstack.org/#/c/153897/ and
> https://review.openstack.org/#/c/154485/).  These contain only test code
> and no functional changes, and increase our test coverage - as well as
> enable other items to re-use the list_role_assignment backend method.
> >>
> >> Do we need an FFE for changes to tests?
> >
> > I would say "no".
>
> To clarify: "no", an FFE is not needed here. Not "no" to the additional
> testing.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150903/fd68da5f/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 16
> Date: Fri, 4 Sep 2015 09:14:31 +0300
> From: Daniel Comnea <comnea.dani at gmail.com>
> To: Kevin Benton <blak111 at gmail.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
>         <openstack-dev at lists.openstack.org>,
>         "openstack-operators at lists.openstack.org"
>         <openstack-operators at lists.openstack.org>
> Subject: Re: [openstack-dev] [Openstack-operators] [Neutron] Allowing
>         DNS suffix to be set per subnet (at least per tenant)
> Message-ID:
>         <
> CAOBAnZNj52t1-iKXqMPs6EBrw4SkL+OWrogMVb7ML_zyoRNiLA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Kevin,
>
> Am I right in saying that the merge above was packaged into Liberty?
>
> Any chance it could be ported to Juno?
>
>
> Cheers,
> Dani
>
>
>
> On Fri, Sep 4, 2015 at 12:21 AM, Kevin Benton <blak111 at gmail.com> wrote:
>
> > Support for that blueprint already merged[1] so it's a little late to
> > change it to per-subnet. If that is too fine-grained for your use-case, I
> > would file an RFE bug[2] to allow it to be set at the subnet level.
> >
> >
> > 1. https://review.openstack.org/#/c/200952/
> > 2.
> >
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#rfe-submission-guidelines
> >
> > On Thu, Sep 3, 2015 at 1:07 PM, Maish Saidel-Keesing <
> maishsk at maishsk.com>
> > wrote:
> >
> >> On 09/03/15 20:51, Gal Sagie wrote:
> >>
> >> I am not sure if this address what you need specifically, but it would
> be
> >> worth checking these
> >> two approved liberty specs:
> >>
> >> 1)
> >>
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
> >> 2)
> >>
> https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst
> >>
> >> Thanks Gal,
> >>
> >> So I see from the bp [1] that the fqdn will be configurable for each and
> >> every port?
> >>
> >> I think that this does open up a number of interesting possibilities,
> but
> >> I would also think that it would be sufficient to do this on a subnet
> level?
> >>
> >> We already have the option of setting nameservers per subnet - I
> >> assume the data model is already implemented - which is interesting,
> >> because I don't see that as part of the information sent by dnsmasq,
> >> so it must be coming from neutron somewhere.
> >>
> >> The domain suffix - definitely is handled by dnsmasq.
> >>
> >>
> >>
> >> On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley <openstack at wormley.com>
> >> wrote:
> >>
> >>> As far as I am aware it is not presently built-in to Openstack. You'll
> >>> need to add a dnsmasq_config_file option to your dhcp agent
> configurations
> >>> and then populate the file with:
> >>> domain=DOMAIN_NAME,CIDR for each network
> >>> e.g.
> >>> domain=example.com,10.11.22.0/24
> >>> ...
> >>>
> >>> -Steve
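As a sketch of Steve's suggestion above (the file paths and the config-file name here are illustrative assumptions, not taken from the thread), the DHCP agent configuration plus a custom dnsmasq file might look like:

```ini
# /etc/neutron/dhcp_agent.ini (illustrative path)
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
# one domain= line per network, as described above
domain=example.com,10.11.22.0/24
domain=other.example.org,10.11.23.0/24
```

Each subnet's CIDR maps to its own domain suffix, which dnsmasq then hands out via DHCP.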
> >>>
> >>>
> >>> On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing <
> >>> <maishsk at maishsk.com>maishsk at maishsk.com> wrote:
> >>>
> >>>> Hello all (cross-posting to openstack-operators as well)
> >>>>
> >>>> Today the setting of the dns suffix that is provided to the instance
> is
> >>>> passed through dhcp_agent.
> >>>>
> >>>> There is the option of setting different DNS servers per subnet (and
> >>>> therefore per tenant), but the domain suffix is something that stays
> >>>> the same throughout the whole system.
> >>>>
> >>>> I see that this is not a current neutron feature.
> >>>>
> >>>> Is this on the roadmap? Are there ways to achieve this today? If so I
> >>>> would be very interested in hearing how.
> >>>>
> >>>> Thanks
> >>>> --
> >>>> Best Regards,
> >>>> Maish Saidel-Keesing
> >>>>
> >>>>
> >>>>
> __________________________________________________________________________
> >>>> OpenStack Development Mailing List (not for usage questions)
> >>>> Unsubscribe:
> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> Best Regards ,
> >>
> >> The G.
> >>
> >>
> >> --
> >> Best Regards,
> >> Maish Saidel-Keesing
> >>
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> OpenStack-operators at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >>
> >
> >
> > --
> > Kevin Benton
> >
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/21eab95b/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 17
> Date: Fri, 4 Sep 2015 16:25:09 +1000
> From: Lana Brindley <openstack at lanabrindley.com>
> To: "openstack-docs at lists.openstack.org"
>         <openstack-docs at lists.openstack.org>, OpenStack Development
> Mailing
>         List <openstack-dev at lists.openstack.org>,
>         "openstack-i18n at lists.openstack.org"
>         <openstack-i18n at lists.openstack.org>
> Subject: [openstack-dev] What's Up, Doc? 4 September, 2015
> Message-ID: <55E93945.4040107 at lanabrindley.com>
> Content-Type: text/plain; charset=utf-8
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi everyone,
>
> This has been a fairly busy week, with Summit preparations beginning,
> more newly migrated RST books going live, and testing starting on the
> Install Guide. I've been spending time on sorting out the Liberty
> blueprints still outstanding, and also working on some old bugs.
>
> == Progress towards Liberty ==
>
> 40 days to go!
>
> * RST conversion:
> ** Is now completed! Well done and a huge thank you to everyone who
> converted pages, approved reviews, and participated in publishing the
> new guides. This was a truly phenomenal effort :)
>
> * User Guides information architecture overhaul
> ** Some user analysis has begun, and we have a new blueprint:
> https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
>
> * Greater focus on helping out devs with docs in their repo
> ** A certain amount of progress has been made here, and some wrinkles
> sorted out which will improve this process for the future.
>
> * Improve how we communicate with and support our corporate contributors
> ** If you currently work on documentation for a company that would like
> to improve their upstream commits for documentation, please contact me!
>
> * Improve communication with Docs Liaisons
> ** I'm very pleased to see liaisons getting more involved in our bugs
> and reviews. Keep up the good work!
>
> * Clearing out old bugs
> ** The last lot of old bugs are still languishing. I'm assuming you all
> hate them so very much that I've decided to give you three more to pick
> from. Have at it!
>
> == Countdown to Summit ==
>
> With the Liberty release less than two months away, that means it's
> nearly Summit time again: https://www.openstack.org/summit/tokyo-2015/
>
> The schedule has now been released, congratulations to everyone who had
> a talk accepted this time around:
> https://www.openstack.org/summit/tokyo-2015/schedule/
>
> All ATCs should have received their pass by now, so now is the time to
> be booking your travel and accommodation:
> https://www.openstack.org/summit/tokyo-2015/tokyo-and-travel/
>
> == Conventions ==
>
> A new governance patch has landed which changes the way we capitalise
> service names (I know almost exactly 50% of you will be happy about
> this!):
> https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
> Please be aware of this when editing files, and remember that the
> 'source of truth' for these things is the projects.yaml file:
> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
>
> == Docs Tools ==
>
> openstack-doc-tools 0.30.0 and openstackdocstheme 1.2.1 have been
> released. openstack-doc-tools allows translation of the Install Guide.
> openstackdocstheme contains fixes for the inclusion of metatags, removes
> unused images and javascript files, and fixes the "Docs Home" link.
>
> == Doc team meeting ==
>
> The APAC meeting was not held this week. The minutes from the previous
> US meeting are here:
> https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-08-26
>
> The next meetings are:
> US: Wednesday 9 September, 14:00:00 UTC
> APAC: Wednesday 16 September, 00:30:00 UTC
>
> Please go ahead and add any agenda items to the meeting page here:
> https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting
>
> == Spotlight bugs for this week ==
>
> Let's give these three a little love:
>
> https://bugs.launchpad.net/openstack-manuals/+bug/1280092 end user guide
> lacks doc on admin password injection
>
> https://bugs.launchpad.net/openstack-manuals/+bug/1282765 Chapter 6.
> Block Storage in OpenStack Cloud Administrator Guide
>
> https://bugs.launchpad.net/openstack-manuals/+bug/1284215 Driver for IBM
> SONAS and Storwize V7000 Unified
>
> - --
>
> Remember, if you have content you would like to add to this newsletter,
> or you would like to be added to the distribution list, please email me
> directly at openstack at lanabrindley.com, or visit:
> https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc
>
> Keep on doc'ing!
>
> Lana
>
> - --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2.0.22 (GNU/Linux)
>
> iQEcBAEBAgAGBQJV6TlFAAoJELppzVb4+KUy/zkIAKYKbKdw78Nv8dpB8d9Rj4qh
> +JTK2rTlz/Up5F10OzIoJoNMIvySeKH+jHV1CP0qL9KigYaepkEeMn8RnNSayYww
> cgSmk/8gpzGTTd17JK0Rrn+RjOb3XMYeNH2d4OkvIQPGBAYsnerODrvEK3GG7YHO
> oo5xYSkLdYH54qnXhhvNZxjxDclT1P5QgpUP6M6KcB3bcKt4niHGLHnBHFoqvlMR
> gJA1BtKR6CackhZbkJpPFCpEHimm4xdWwF+q7xRezy599MbkkPAIxR/oMuEkqU2H
> zj+tm9sHDxOoH2j4Hfkbw7xxF+/NjvGtm41JCPsUVBxuAocaBbJ1kZRbbRzrafI=
> =2TAI
> -----END PGP SIGNATURE-----
>
>
>
> ------------------------------
>
> Message: 18
> Date: Fri, 4 Sep 2015 10:11:45 +0400
> From: Evgeny Antyshev <eantyshev at virtuozzo.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [infra][third-party][CI] Third-party oses
>         in devstack-gate
> Message-ID: <55E93621.5050909 at virtuozzo.com>
> Content-Type: text/plain; charset="windows-1252"; format=flowed
>
> On 01.09.2015 12:28, Evgeny Antyshev wrote:
> > Hello!
> >
> > This letter is addressed to those third-party CI maintainers who need
> > to amend the upstream devstack-gate to suit their environment.
> >
> > Some folks that I know use inline patching at job level,
> > some make private forks of devstack-gate (I even saw one on github).
> > There have been a few improvements to devstack-gate, which made it
> > easier to use it
> > downstream, f.e. introducing DEVSTACK_LOCAL_CONFIG
> > (https://review.openstack.org/145321)
> >
> > We particularly need it to recognize our RHEL-based distribution as a
> > Fedora OS.
> > We cannot list it explicitly in is_fedora() as it is not officially
> > supported upstream,
> > but we can introduce the variable DEVSTACK_GATE_IS_FEDORA, which makes
> > is_fedora() distribution-agnostic and lets it succeed when run on an
> > unrecognized OS.
> >
> > Here is the change: https://review.openstack.org/215029
> > I welcome everyone interested in the matter
> > to tell us if we do it right or not, and to review the change.
> >
> Prepending with [infra] tag to draw more attention
>
> --
> Best regards,
> Evgeny Antyshev.
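For readers following along, a minimal sketch of how such an override could behave (the variable name comes from the review above; the function body is an assumption for illustration, not the actual devstack-gate code):

```shell
# Hypothetical sketch: DEVSTACK_GATE_IS_FEDORA forces the Fedora code path
# on distributions the upstream check does not know about.
is_fedora() {
    # Operator override for unrecognized, Fedora-compatible distros.
    if [ "${DEVSTACK_GATE_IS_FEDORA:-0}" = "1" ]; then
        return 0
    fi
    # Fall back to the usual os-release detection (simplified).
    . /etc/os-release 2>/dev/null || return 1
    case "$ID" in
        fedora|centos|rhel) return 0 ;;
        *) return 1 ;;
    esac
}
```

With the override set, a downstream RHEL derivative is treated as Fedora without needing to be listed upstream.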
>
>
>
>
> ------------------------------
>
> Message: 19
> Date: Fri, 4 Sep 2015 09:26:23 +0200
> From: Ihar Hrachyshka <ihrachys at redhat.com>
> To: Daniel Comnea <comnea.dani at gmail.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
>         <openstack-dev at lists.openstack.org>,
>         "openstack-operators at lists.openstack.org"
>         <openstack-operators at lists.openstack.org>
> Subject: Re: [openstack-dev] [Openstack-operators] [Neutron] Allowing
>         DNS     suffix to be set per subnet (at least per tenant)
> Message-ID: <32F63AEB-460C-46FB-8F30-517C2AEA1563 at redhat.com>
> Content-Type: text/plain; charset="us-ascii"
>
> > On 04 Sep 2015, at 08:14, Daniel Comnea <comnea.dani at gmail.com> wrote:
> >
> > Kevin,
> >
> > Am I right in saying that the merge above was packaged into Liberty?
> >
> > Any chance it could be ported to Juno?
> >
>
> There is no chance a new feature will be backported to any stable branch,
> even Kilo. At least in upstream.
>
> Ihar
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: signature.asc
> Type: application/pgp-signature
> Size: 455 bytes
> Desc: Message signed with OpenPGP using GPGMail
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/5267415e/attachment-0001.pgp
> >
>
> ------------------------------
>
> Message: 20
> Date: Fri, 04 Sep 2015 09:58:11 +0200
> From: Jose Manuel Ferrer Mosteiro
>         <jmferrer.paradigmatecnologico at gmail.com>
> To: Sabrina Bajorat <sabrina.bajorat at gmail.com>
> Cc: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>, openstack at lists.openstack.org
> Subject: Re: [openstack-dev] [Openstack] [ANN] OpenStack Kilo on
>         Ubuntu fully automated with Ansible! Ready for NFV L2 Bridges via
>         Heat!
> Message-ID: <b671b11c2e3440458f33159aa4f3f191 at fermosit.es>
> Content-Type: text/plain; charset="utf-8"
>
>
>
> Hi
>
> It is a very early (pre-pre-pre-alpha) version that just installs the
> Juno Ubuntu guide up to and including the dashboard. The Block Storage
> Service is very important but does not work yet.
>
> vCenter will always be the operating system that makes my life easier.
> Today it is Ubuntu.
>
> The hypervisor is also Ubuntu, but it will be Ubuntu, CentOS and Debian.
>
> I will announce the project when it is more advanced.
>
> Thanks
>
> On 2015-08-31 15:08, Sabrina Bajorat wrote:
>
> > That is great!!! Can it be used with Debian 7 too?
> >
> > Thanks
> >
> > On Mon, Aug 31, 2015 at 2:54 PM, Jose Manuel Ferrer Mosteiro <
> jmferrer.paradigmatecnologico at gmail.com> wrote:
> >
> > Nice job. I am building a VMware vCenter-like setup in
> https://github.com/elmanytas/ansible-openstack-vcenter [1] and I solved
> the problem of duplicate endpoints in line 106 of
> https://github.com/elmanytas/ansible-openstack-vcenter/blob/master/etc_ansible/roles/keystone/tasks/main.yml
> [2]. This makes the playbooks idempotent.
> >
> > Maybe you could be interested.
> >
> > On 2015-08-26 00:30, Martinx - ????? wrote:
> > Hello Stackers!
> >
> > I'm proud to announce an Ansible Playbook to deploy OpenStack on Ubuntu!
> >
> > Check it out!
> >
> > * https://github.com/sandvine/os-ansible-deployment-lite [3]
> >
> > Powered by Sandvine! ;-)
> >
> > Basically, this is the automation of what we have documented here:
> >
> > * http://docs.openstack.org/kilo/install-guide/install/apt/content/ [4]
> >
> > Instructions:
> >
> > 1- Install Ubuntu 14.04, fully upgraded (with
> > "linux-generic-lts-vivid" installed), plus "/etc/hostname" and
> > "/etc/hosts" configured accordingly.
> >
> > 2- Deploy OpenStack with 1 command:
> >
> > * Open vSwitch (default):
> >
> > bash <(curl -s
> >
> https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install.sh
> [5])
> >
> > * Linux Bridges (alternative):
> >
> > bash <(curl -s
> >
> https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install-lbr.sh
> [6])
> >
> > 3- Launch a NFV L2 Stack:
> >
> > heat stack-create demo -f
> >
> ~/os-ansible-deployment-lite/misc/os-heat-templates/nfv-l2-bridge-basic-stack-ubuntu-little.yaml
> >
> > IMPORTANT NOTES:
> >
> > Only run "step 2" on top of a freshly installed Ubuntu 14.04! It can
> > be a Server or Desktop, but it must be freshly installed. Do not
> > pre-install MySQL, RabbitMQ, Keystone, etc... Let Ansible do its magic!
> >
> > Also, make sure you can use "sudo" without a password.
> >
> > Some features of our Ansible Playbook:
> >
> > 1- Deploys OpenStack with one single command, in one physical box
> > (all-in-one), helper script (./os-deploy.sh) available;
> >
> > 2- Supports NFV instances that can act as a L2 Bridge between two
> > VXLAN Networks;
> >
> > 3- Plenty of Heat Templates;
> >
> > 4- 100% Ubuntu based;
> >
> > 5- Very simple setup (simpler topology; dummy interfaces for both
> > "br-ex" and "vxlan"; no containers for each service (yet));
> >
> > 6- Ubuntu PPA available, with a few OpenStack patches backported from
> > Liberty, to Kilo (to add "port_security_enabled" Heat support);
> >
> > https://launchpad.net/~sandvine/+archive/ubuntu/cloud-archive-kilo/ [7]
> >
> > 7- Only requires one physical ethernet card;
> >
> > 8- Both "Linux Bridges" and "Open vSwitch" deployments are supported;
> >
> > 9- Planning to add DPDK support;
> >
> > 10- Multi-node support under development;
> >
> > 11- IPv6 support coming...
> >
> > * Notes about Vagrant support:
> >
> > Under development (it doesn't work yet).
> >
> > There is preliminary Vagrant support (there is still a bug in MySQL
> > startup; pull requests are welcome).
> >
> > Just "git clone" our Ansible playbooks and run "vagrant up" (or
> > ./os-deploy-vagrant.sh to auto-config your Ansible vars / files for
> > you).
> >
> > We tried it only with Mac / VirtualBox, but it does not support
> > VT-in-VT (nested virtualization), so we're looking at KVM / Libvirt
> > on Ubuntu Desktop instead. Still, it would be nice to, at least, launch
> > OpenStack in a VirtualBox on your Mac... =)
> >
> > Hope you guys enjoy it!
> >
> > Cheers!
> > Thiago
> >
> > _______________________________________________
> > Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [8]
> > Post to : openstack at lists.openstack.org
> > Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [8]
> >
>
>
> Links:
> ------
> [1] https://github.com/elmanytas/ansible-openstack-vcenter
> [2]
>
> https://github.com/elmanytas/ansible-openstack-vcenter/blob/master/etc_ansible/roles/keystone/tasks/main.yml
> [3] https://github.com/sandvine/os-ansible-deployment-lite
> [4] http://docs.openstack.org/kilo/install-guide/install/apt/content/
> [5]
>
> https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install.sh
> [6]
>
> https://raw.githubusercontent.com/sandvine/os-ansible-deployment-lite/kilo/misc/os-install-lbr.sh
> [7] https://launchpad.net/~sandvine/+archive/ubuntu/cloud-archive-kilo/
> [8] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/792ad425/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 21
> Date: Fri, 4 Sep 2015 10:17:29 +0200
> From: Thierry Carrez <thierry at openstack.org>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] FFE Request for completion of data driven
>         assignment testing in Keystone
> Message-ID: <55E95399.1070903 at openstack.org>
> Content-Type: text/plain; charset=windows-1252
>
> Morgan Fainberg wrote:
> >
> >>     I would like to request an FFE for the remaining two patches that
> >>     are already in review
> >>     (https://review.openstack.org/#/c/153897/ and
> https://review.openstack.org/#/c/154485/).
> >>     These contain only test code and no functional changes, and
> >>     increase our test coverage - as well as enable other items to
> >>     re-use the list_role_assignment backend method.
> >>
> >> Do we need an FFE for changes to tests?
> >>
> >
> > I would say "no".
>
> Right. Extra tests (or extra docs, for that matter) don't count as a
> "feature" for the freeze. In particular, they don't change the behavior
> of the software or invalidate testing that may have been conducted.
>
> --
> Thierry Carrez (ttx)
>
>
>
> ------------------------------
>
> Message: 22
> Date: Fri, 4 Sep 2015 10:42:57 +0200
> From: Daniel Mellado <daniel.mellado.es at ieee.org>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [infra][third-party][CI] Third-party oses
>         in devstack-gate
> Message-ID: <55E95991.4050105 at ieee.org>
> Content-Type: text/plain; charset=windows-1252
>
> On 04/09/15 at 08:11, Evgeny Antyshev wrote:
> > On 01.09.2015 12:28, Evgeny Antyshev wrote:
> >> Hello!
> >>
> >> This letter is addressed to those third-party CI maintainers who need
> >> to amend the upstream devstack-gate to suit their environment.
> >>
> >> Some folks that I know use inline patching at job level,
> >> some make private forks of devstack-gate (I even saw one on github).
> >> There have been a few improvements to devstack-gate, which made it
> >> easier to use it
> >> downstream, f.e. introducing DEVSTACK_LOCAL_CONFIG
> >> (https://review.openstack.org/145321)
> >>
> >> We particularly need it to recognize our RHEL-based distribution as a
> >> Fedora OS.
> >> We cannot list it explicitly in is_fedora() as it is not officially
> >> supported upstream,
> >> but we can introduce the variable DEVSTACK_GATE_IS_FEDORA, which makes
> >> is_fedora() distribution-agnostic and lets it succeed when run on an
> >> unrecognized OS.
> >>
> >> Here is the change: https://review.openstack.org/215029
> >> I welcome everyone interested in the matter
> >> to tell us if we do it right or not, and to review the change.
> >>
> > Prepending with [infra] tag to draw more attention
> >
> Personally I think that would be great, as it would greatly help finding
> Fedora-ish issues with devstack, which are now only being raised by
> developers due to the lack of a gate.
>
>
>
> ------------------------------
>
> Message: 23
> Date: Fri, 4 Sep 2015 12:36:56 +0300
> From: Dmitro Dovbii <ddovbii at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [murano] [dashboard] Remove the owner filter
>         from "Package Definitions" page
> Message-ID:
>         <
> CAKSp79y8cCU7z0S-Pzgy2k1TNJZZMsyVYXk-bEtSj6ByoB4JZQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi folks!
>
> I want to suggest deleting the owner filter (3 tabs) from the Package
> Definitions page. Previously this filter was available to all users, and we
> agreed that it is useless. Now it is available only to admins, but I think
> it still doesn't improve the UX. Moreover, this filter blocks the
> implementation of search by name, because the two filters can work
> inconsistently together.
> So, please express your opinion on this issue. If you agree, I will remove
> this filter ASAP.
>
> Best regards,
> Dmytro Dovbii
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/c9546f3a/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 24
> Date: Fri, 4 Sep 2015 12:40:55 +0300
> From: Sergey Reshetnyak <sreshetniak at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [sahara] Request for Feature Freeze
>         Exception
> Message-ID:
>         <
> CAOB5mPxaTM5QKm410c2956QMfnsaz9QqT7XreMyxPmdrK1E0Og at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> +1 from me.
>
> Thanks,
> Sergey R.
>
> 2015-09-03 23:27 GMT+03:00 Ethan Gafford <egafford at redhat.com>:
>
> > Agreed. We've talked about this for a while, and it's very low risk.
> >
> > Thanks,
> > Ethan
> >
> > ----- Original Message -----
> > From: "michael mccune" <msm at redhat.com>
> > To: openstack-dev at lists.openstack.org
> > Sent: Thursday, September 3, 2015 3:53:41 PM
> > Subject: Re: [openstack-dev] [sahara] Request for Feature Freeze
> Exception
> >
> > On 09/03/2015 02:49 PM, Vitaly Gridnev wrote:
> > > Hey folks!
> > >
> > > I would like to propose to add to list of FFE's following blueprint:
> > > https://blueprints.launchpad.net/sahara/+spec/drop-hadoop-1
> > >
> > > Reasoning of that is following:
> > >
> > >   1. HDP 1.3.2 and Vanilla 1.2.1 have not been gated for a whole release
> > > cycle, so there may be several bugs in these versions;
> > >   2. Minimal risk of removal: it doesn't touch the versions that we
> > > keep.
> > >   3. All required changes were already uploaded for review:
> > >
> >
> https://review.openstack.org/#/q/status:open+project:openstack/sahara+branch:master+topic:bp/drop-hadoop-1,n,z
> >
> > this sounds reasonable to me
> >
> > mike
> >
> >
> >
> >
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/5c660e15/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 25
> Date: Fri, 4 Sep 2015 12:54:18 +0300
> From: Sergey Reshetnyak <sreshetniak at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [sahara] FFE request for heat wait condition
>         support
> Message-ID:
>         <
> CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> I would like to request FFE for wait condition support for Heat engine.
> Wait condition reports signal about booting instance.
>
> Blueprint:
> https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions
>
> Spec:
>
> https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst
>
> Patch:
> https://review.openstack.org/#/c/169338/
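For context, a Heat wait condition of the kind described above typically looks like the following minimal template sketch (illustrative only, not taken from the sahara patch; the image and flavor names are placeholders):

```yaml
heat_template_version: 2014-10-16

resources:
  wait_handle:
    type: OS::Heat::WaitConditionHandle

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: {get_resource: wait_handle}
      # fail the stack if the instance does not signal within 10 minutes
      timeout: 600

  instance:
    type: OS::Nova::Server
    properties:
      image: some-image    # placeholder
      flavor: m1.small     # placeholder
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            # signal Heat once boot is complete
            wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            wc_notify: {get_attr: [wait_handle, curl_cli]}
```

The booting instance calls back to Heat via the handle's signal URL, so the stack only proceeds (or sahara only continues provisioning) once the instance reports it is up.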
>
> Thanks,
> Sergey Reshetnyak
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/6e8d42e5/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 26
> Date: Fri, 4 Sep 2015 18:54:49 +0900
> From: "Ken'ichi Ohmichi" <ken1ohmichi at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Scheduler hints, API and Objects
> Message-ID:
>         <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=
> KcaOzvM_pzL4Ea+g at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> 2015-09-04 12:14 GMT+09:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
> > Hi Andrew,
> >
> > Sorry for this late response, I missed it.
> >
> > 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
> >> I have been growing concerned recently with some attempts to formalize
> >> scheduler hints, both with API validation and Nova objects defining
> them,
> >> and want to air those concerns and see if others agree or can help me
> see
> >> why I shouldn't worry.
> >>
> >> Starting with the API I think the strict input validation that's being
> done,
> >> as seen in
> >>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
> ,
> >> is unnecessary, and potentially problematic.
> >>
> >> One problem is that it doesn't indicate anything useful for a client.
> The
> >> schema indicates that there are hints available but can make no claim
> about
> >> whether or not they're actually enabled.  So while a microversion bump
> would
> >> typically indicate a new feature available to an end user, in the case
> of a
> >> new scheduler hint a microversion bump really indicates nothing at
> all.  It
> >> does ensure that if a scheduler hint is used that it's spelled properly
> and
> >> the data type passed is correct, but that's primarily useful because
> there
> >> is no feedback mechanism to indicate an invalid or unused scheduler
> hint.  I
> >> think the API schema is a poor proxy for that deficiency.
> >>
> >> Since the exposure of a hint means nothing as far as its usefulness, I
> don't
> >> think we should be codifying them as part of our API schema at this
> time.
> >> At some point I imagine we'll evolve a more useful API for passing
> >> information to the scheduler as part of a request, and when that
> happens I
> >> don't think needing to support a myriad of meaningless hints in older
> API
> >> versions is going to be desirable.
> >>
> >> Finally, at this time I'm not sure we should take the stance that only
> >> in-tree scheduler hints are supported.  While I completely agree with
> the
> >> desire to expose things in cross-cloud ways as we've done and are
> looking to
> >> do with flavor and image properties I think scheduling is an area where
> we
> >> want to allow some flexibility for deployers to write and expose
> scheduling
> >> capabilities that meet their specific needs.  Over time I hope we will
> get
> >> to a place where some standardization can happen, but I don't think
> locking
> >> in the current scheduling hints is the way forward for that.  I would
> love
> >> to hear from multi-cloud users here and get some input on whether that's
> >> crazy and they are expecting benefits from validation on the current
> >> scheduler hints.
> >>
> >> Now, objects.  As part of the work to formalize the request spec sent
> to the
> >> scheduler there's an effort to make a scheduler hints object.  This
> >> formalizes them in the same way as the API with no benefit that I can
> see.
> >> I won't duplicate my arguments above, but I feel the same way about the
> >> objects as I do with the API.  I don't think needing to update and
> object
> >> version every time a new hint is added is useful at this time, nor do I
> >> think we should lock in the current in-tree hints.
> >>
> >> In the end this boils down to my concern that the scheduling hints api
> is a
> >> really horrible user experience and I don't want it to be solidified in
> the
> >> API or objects yet.  I think we should re-examine how they're handled
> before
> >> that happens.
> >
> > Now we are discussing this on https://review.openstack.org/#/c/217727/
> > for allowing out-of-tree scheduler-hints.
> > When we wrote the API schema for scheduler-hints, it was difficult to know
> > which API parameters were available for scheduler-hints.
> > The current API schema exposes them, and I guess that is useful for API
> > users as well.
> >
> > One idea: how about auto-extending the scheduler-hints API schema
> > based on the loaded schedulers?
> > The API schemas of the "create/update/resize/rebuild a server" APIs are
> > already auto-extended based on loaded extensions by using the stevedore
> > library[1].
> > I guess we can apply the same approach to scheduler-hints in the long term.
> > Each scheduler needs to implement a method which returns its available API
> > parameter formats, and nova-api gathers them and extends the
> > scheduler-hints API schema with them.
> > That means out-of-tree schedulers will also be available if they
> > implement the method.
> > # In the short term, I can see the "blocking additionalProperties"
> > validation being disabled this way.
>
> https://review.openstack.org/#/c/220440 is a prototype for the above idea.
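As a rough, hypothetical sketch of the auto-extension idea (all class and method names below are illustrative only; the actual prototype in the review above may differ), each filter could declare a JSON-schema fragment for its own hints, and the API layer could merge them into one scheduler-hints schema:

```python
# Hypothetical sketch: auto-building the scheduler-hints API schema from
# the loaded filters. Not actual nova code.

class BaseFilter:
    def get_hint_schema(self):
        # Each filter declares the JSON-schema fragment for its own hints.
        return {}

class SameHostFilter(BaseFilter):
    def get_hint_schema(self):
        return {"same_host": {"type": "array", "items": {"type": "string"}}}

class GroupFilter(BaseFilter):
    def get_hint_schema(self):
        return {"group": {"type": "string"}}

def build_hints_schema(filters, allow_extra=True):
    """Merge per-filter fragments into one scheduler-hints schema.

    allow_extra=True corresponds to relaxing additionalProperties so
    hints consumed by out-of-tree filters still validate.
    """
    properties = {}
    for f in filters:
        properties.update(f.get_hint_schema())
    return {
        "type": "object",
        "properties": properties,
        "additionalProperties": allow_extra,
    }

schema = build_hints_schema([SameHostFilter(), GroupFilter()])
```

With `additionalProperties` left open, hints aimed at out-of-tree filters would still pass validation, matching the short-term relaxation discussed in the thread.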
>
> Thanks
> Ken Ohmichi
>
>
>
> ------------------------------
>
> Message: 27
> Date: Fri, 4 Sep 2015 18:03:57 +0800
> From: Alex Xu <soulxu at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Scheduler hints, API and Objects
> Message-ID:
>         <CAH7mGauOgfvVkfW2OYPm7D=
> 7zgXhRHpx4a7_jZMyBtND3iirGQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> 2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
>
> > Hi Andrew,
> >
> > Sorry for this late response, I missed it.
> >
> > 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
> > > I have been growing concerned recently with some attempts to formalize
> > > scheduler hints, both with API validation and Nova objects defining
> them,
> > > and want to air those concerns and see if others agree or can help me
> see
> > > why I shouldn't worry.
> > >
> > > Starting with the API I think the strict input validation that's being
> > done,
> > > as seen in
> > >
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
> > ,
> > > is unnecessary, and potentially problematic.
> > >
> > > One problem is that it doesn't indicate anything useful for a client.
> > The
> > > schema indicates that there are hints available but can make no claim
> > about
> > > whether or not they're actually enabled.  So while a microversion bump
> > would
> > > typically indicate a new feature available to an end user, in the case
> > of a
> > > new scheduler hint a microversion bump really indicates nothing at all.
> > It
> > > does ensure that if a scheduler hint is used that it's spelled properly
> > and
> > > the data type passed is correct, but that's primarily useful because
> > there
> > > is no feedback mechanism to indicate an invalid or unused scheduler
> > hint.  I
> > > think the API schema is a poor proxy for that deficiency.
> > >
> > > Since the exposure of a hint means nothing as far as its usefulness, I
> > don't
> > > think we should be codifying them as part of our API schema at this
> time.
> > > At some point I imagine we'll evolve a more useful API for passing
> > > information to the scheduler as part of a request, and when that
> happens
> > I
> > > don't think needing to support a myriad of meaningless hints in older
> API
> > > versions is going to be desirable.
> > >
> > > Finally, at this time I'm not sure we should take the stance that only
> > > in-tree scheduler hints are supported.  While I completely agree with
> the
> > > desire to expose things in cross-cloud ways as we've done and are
> > looking to
> > > do with flavor and image properties I think scheduling is an area where
> > we
> > > want to allow some flexibility for deployers to write and expose
> > scheduling
> > > capabilities that meet their specific needs.  Over time I hope we will
> > get
> > > to a place where some standardization can happen, but I don't think
> > locking
> > > in the current scheduling hints is the way forward for that.  I would
> > love
> > > to hear from multi-cloud users here and get some input on whether
> that's
> > > crazy and they are expecting benefits from validation on the current
> > > scheduler hints.
> > >
> > > Now, objects.  As part of the work to formalize the request spec sent
> to
> > the
> > > scheduler there's an effort to make a scheduler hints object.  This
> > > formalizes them in the same way as the API with no benefit that I can
> > see.
> > > I won't duplicate my arguments above, but I feel the same way about the
> > > objects as I do with the API.  I don't think needing to update and
> object
> > > version every time a new hint is added is useful at this time, nor do I
> > > think we should lock in the current in-tree hints.
> > >
> > > In the end this boils down to my concern that the scheduling hints api
> > is a
> > > really horrible user experience and I don't want it to be solidified in
> > the
> > > API or objects yet.  I think we should re-examine how they're handled
> > before
> > > that happens.
> >
> > Now we are discussing this on https://review.openstack.org/#/c/217727/
> > for allowing out-of-tree scheduler-hints.
> > When we wrote API schema for scheduler-hints, it was difficult to know
> > what are available API parameters for scheduler-hints.
> > Current API schema exposes them and I guess that is useful for API users
> > also.
> >
> > One idea is that: How about auto-extending scheduler-hint API schema
> > based on loaded schedulers?
> > Now API schemas of "create/update/resize/rebuild a server" APIs are
> > auto-extended based on loaded extensions by using stevedore
> > library[1].
> >
>
> Hmm... we plan to deprecate extensions from our API; this sounds like
> adding yet another extension mechanism.
>
>
> > I guess we can apply the same way for scheduler-hints also in long-term.
> > Each scheduler needs to implement a method which returns available API
> > parameter formats and nova-api tries to get them then extends
> > scheduler-hints API schema with them.
> > That means out-of-tree schedulers also will be available if they
> > implement the method.
> > # In short-term, I can see "blocking additionalProperties" validation
> > disabled by the way.
> >
> > Thanks
> > Ken Ohmichi
> >
> > ---
> > [1]:
> >
> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/d10cb2fb/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 28
> Date: Fri, 4 Sep 2015 13:06:57 +0300
> From: Alexander Tivelkov <ativelkov at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [murano] [dashboard] Remove the owner
>         filter from "Package Definitions" page
> Message-ID:
>         <CAM6FM9S47YmJsTYGVNoPc7L2JGjBpCB+-s-HTd=
> d+HK939GEEg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> +1 on this.
>
> Filtering by ownership makes sense only on the Catalog view (i.e. the page
> of usable apps), but not on an admin-like console such as the list of
> package definitions.
>
> --
> Regards,
> Alexander Tivelkov
>
> On Fri, Sep 4, 2015 at 12:36 PM, Dmitro Dovbii <ddovbii at mirantis.com>
> wrote:
>
> > Hi folks!
> >
> > I want to suggest deleting the owner filter (3 tabs) from the Package
> > Definitions page. Previously this filter was available to all users, and
> > we agreed that it was useless. Now it is available only to admins, but I
> > think this still doesn't improve the UX. Moreover, this filter blocks the
> > implementation of search by name, because the two filters can behave
> > inconsistently.
> > So, please express your opinion on this issue. If you agree, I will
> > remove this filter ASAP.
> >
> > Best regards,
> > Dmytro Dovbii
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/904f4318/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 29
> Date: Fri, 4 Sep 2015 12:14:06 +0200
> From: Thierry Carrez <thierry at openstack.org>
> To: OpenStack Development Mailing List
>         <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
>         allocation
> Message-ID: <55E96EEE.4070306 at openstack.org>
> Content-Type: text/plain; charset=utf-8
>
> Hi PTLs,
>
> Here is the proposed slot allocation for every "big tent" project team
> at the Mitaka Design Summit in Tokyo. This is based on the requests the
> Liberty PTLs have made, space availability, and project activity &
> collaboration needs.
>
> We have a lot less space (and time slots) in Tokyo compared to
> Vancouver, so we were unable to give every team what they wanted. In
> particular, there were far more workroom requests than we have
> available, so we had to cut down on those quite heavily. Please note
> that we'll have a large lunch room with roundtables inside the Design
> Summit space that can easily be abused (outside of lunch) as space for
> extra discussions.
>
> Here is the allocation:
>
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | day: full day, morn: only morning, aft: only afternoon
>
> Neutron: 12fb, cm:day
> Nova: 14fb, cm:day
> Cinder: 5fb, 4wr, cm:day
> Horizon: 2fb, 7wr, cm:day
> Heat: 4fb, 8wr, cm:morn
> Keystone: 7fb, 3wr, cm:day
> Ironic: 4fb, 4wr, cm:morn
> Oslo: 3fb, 5wr
> Rally: 1fb, 2wr
> Kolla: 3fb, 5wr, cm:aft
> Ceilometer: 2fb, 7wr, cm:morn
> TripleO: 2fb, 1wr, cm:full
> Sahara: 2fb, 5wr, cm:aft
> Murano: 2wr, cm:full
> Glance: 3fb, 5wr, cm:full
> Manila: 2fb, 4wr, cm:morn
> Magnum: 5fb, 5wr, cm:full
> Swift: 2fb, 12wr, cm:full
> Trove: 2fb, 4wr, cm:aft
> Barbican: 2fb, 6wr, cm:aft
> Designate: 1fb, 4wr, cm:aft
> OpenStackClient: 1fb, 1wr, cm:morn
> Mistral: 1fb, 3wr
> Zaqar: 1fb, 3wr
> Congress: 3wr
> Cue: 1fb, 1wr
> Solum: 1fb
> Searchlight: 1fb, 1wr
> MagnetoDB: won't be present
>
> Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)
> PuppetOpenStack: 2fb, 3wr
> Documentation: 2fb, 4wr, cm:morn
> Quality Assurance: 4fb, 4wr, cm:full
> OpenStackAnsible: 2fb, 1wr, cm:aft
> Release management: 1fb, 1wr (shared meetup with QA)
> Security: 2fb, 2wr
> ChefOpenstack: will camp in the lunch room all week
> App catalog: 1fb, 1wr
> I18n: cm:morn
> OpenStack UX: 2wr
> Packaging-deb: 2wr
> Refstack: 2wr
> RpmPackaging: 1fb, 1wr
>
> We'll start working on laying out those sessions over the available
> rooms and time slots. If you have constraints (I already know
> searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> our best to limit them.
>
> --
> Thierry Carrez (ttx)
>
>
>
> ------------------------------
>
> Message: 30
> Date: Fri, 04 Sep 2015 10:18:43 +0000
> From: "Ken'ichi Ohmichi" <ken1ohmichi at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Scheduler hints, API and Objects
> Message-ID:
>         <
> CAA393vhfetUH3PkJHkpcP9sf8vjzS+Tm-Fcp7O_D6mo3Q_S-xA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Alex,
>
> Thanks for your comment.
> IMO, this idea is different from the extensions we will remove; it is
> about modularity, to reduce the maintenance burden.
> With this idea, we can put the corresponding schema in each filter.
>
> On Fri, Sep 4, 2015 at 19:04, Alex Xu <soulxu at gmail.com> wrote:
>
> > 2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
> >
> >> Hi Andrew,
> >>
> >> Sorry for this late response, I missed it.
> >>
> >> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
> >> > I have been growing concerned recently with some attempts to formalize
> >> > scheduler hints, both with API validation and Nova objects defining
> >> them,
> >> > and want to air those concerns and see if others agree or can help me
> >> see
> >> > why I shouldn't worry.
> >> >
> >> > Starting with the API I think the strict input validation that's being
> >> done,
> >> > as seen in
> >> >
> >>
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
> >> ,
> >> > is unnecessary, and potentially problematic.
> >> >
> >> > One problem is that it doesn't indicate anything useful for a client.
> >> The
> >> > schema indicates that there are hints available but can make no claim
> >> about
> >> > whether or not they're actually enabled.  So while a microversion bump
> >> would
> >> > typically indicate a new feature available to an end user, in the case
> >> of a
> >> > new scheduler hint a microversion bump really indicates nothing at
> >> all.  It
> >> > does ensure that if a scheduler hint is used that it's spelled
> properly
> >> and
> >> > the data type passed is correct, but that's primarily useful because
> >> there
> >> > is no feedback mechanism to indicate an invalid or unused scheduler
> >> hint.  I
> >> > think the API schema is a poor proxy for that deficiency.
> >> >
> >> > Since the exposure of a hint means nothing as far as its usefulness, I
> >> don't
> >> > think we should be codifying them as part of our API schema at this
> >> time.
> >> > At some point I imagine we'll evolve a more useful API for passing
> >> > information to the scheduler as part of a request, and when that
> >> happens I
> >> > don't think needing to support a myriad of meaningless hints in older
> >> API
> >> > versions is going to be desirable.
> >> >
> >> > Finally, at this time I'm not sure we should take the stance that only
> >> > in-tree scheduler hints are supported.  While I completely agree with
> >> the
> >> > desire to expose things in cross-cloud ways as we've done and are
> >> looking to
> >> > do with flavor and image properties I think scheduling is an area
> where
> >> we
> >> > want to allow some flexibility for deployers to write and expose
> >> scheduling
> >> > capabilities that meet their specific needs.  Over time I hope we will
> >> get
> >> > to a place where some standardization can happen, but I don't think
> >> locking
> >> > in the current scheduling hints is the way forward for that.  I would
> >> love
> >> > to hear from multi-cloud users here and get some input on whether
> that's
> >> > crazy and they are expecting benefits from validation on the current
> >> > scheduler hints.
> >> >
> >> > Now, objects.  As part of the work to formalize the request spec sent
> >> to the
> >> > scheduler there's an effort to make a scheduler hints object.  This
> >> > formalizes them in the same way as the API with no benefit that I can
> >> see.
> >> > I won't duplicate my arguments above, but I feel the same way about
> the
> >> > objects as I do with the API.  I don't think needing to update and
> >> object
> >> > version every time a new hint is added is useful at this time, nor do
> I
> >> > think we should lock in the current in-tree hints.
> >> >
> >> > In the end this boils down to my concern that the scheduling hints api
> >> is a
> >> > really horrible user experience and I don't want it to be solidified
> in
> >> the
> >> > API or objects yet.  I think we should re-examine how they're handled
> >> before
> >> > that happens.
> >>
> >> Now we are discussing this on https://review.openstack.org/#/c/217727/
> >> for allowing out-of-tree scheduler-hints.
> >> When we wrote API schema for scheduler-hints, it was difficult to know
> >> what are available API parameters for scheduler-hints.
> >> Current API schema exposes them and I guess that is useful for API users
> >> also.
> >>
> >> One idea is that: How about auto-extending scheduler-hint API schema
> >> based on loaded schedulers?
> >> Now API schemas of "create/update/resize/rebuild a server" APIs are
> >> auto-extended based on loaded extensions by using stevedore
> >> library[1].
> >>
> >
> > Em....we will deprecate the extension from our API. this sounds like add
> > more extension mechanism.
> >
> >
> >> I guess we can apply the same way for scheduler-hints also in long-term.
> >> Each scheduler needs to implement a method which returns available API
> >> parameter formats and nova-api tries to get them then extends
> >> scheduler-hints API schema with them.
> >> That means out-of-tree schedulers also will be available if they
> >> implement the method.
> >> # In short-term, I can see "blocking additionalProperties" validation
> >> disabled by the way.
> >>
> >> Thanks
> >> Ken Ohmichi
> >>
> >> ---
> >> [1]:
> >>
> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
> >>
> >
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/34f28fe5/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 31
> Date: Fri, 4 Sep 2015 13:37:25 +0300
> From: Vitaly Gridnev <vgridnev at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [sahara] FFE request for heat wait
>         condition       support
> Message-ID:
>         <
> CA+O3VAhA2Xi_hKCaCB2PoWr8jUM0bQhwnSUAGx2gOGB0ksii6w at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> +1 for FFE, because of:
>
>  1. Low risk of issues, fully covered by the current scenario tests;
>  2. The implementation is already up for review.
>
> On Fri, Sep 4, 2015 at 12:54 PM, Sergey Reshetnyak <
> sreshetniak at mirantis.com
> > wrote:
>
> > Hi,
> >
> > I would like to request an FFE for wait-condition support in the Heat
> > engine. A wait condition signals that a booting instance has come up.
> >
> > Blueprint:
> >
> https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions
> >
> > Spec:
> >
> >
> https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst
> >
> > Patch:
> > https://review.openstack.org/#/c/169338/
> >
> > Thanks,
> > Sergey Reshetnyak
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
>
> --
> Best Regards,
> Vitaly Gridnev
> Mirantis, Inc
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/92bbca9d/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 32
> Date: Fri, 04 Sep 2015 12:56:51 +0200
> From: Sylvain Bauza <sbauza at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Scheduler hints, API and Objects
> Message-ID: <55E978F3.2070804 at redhat.com>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
>
> On 04/09/2015 12:18, Ken'ichi Ohmichi wrote:
> >
> > Hi Alex,
> >
> > Thanks for  your comment.
> > IMO, this idea is different from the extension we will remove.
> > That is modularity for the maintenance burden.
> > By this idea, we can put the corresponding schema in each filter.
> >
> >
>
> While I think it could be a nice move to have stevedore-loaded filters
> for the FilterScheduler, for many reasons, I wouldn't want to delay the
> compatibility change that relaxes the API validation of scheduler hints
> any longer than needed.
>
> In order to have a smooth transition, I'd rather first provide a change
> for using stevedore with the filters and weighers (even if the weighers
> are not exposed through the API), and then, once that is implemented, do
> the necessary change at the API level like the one you proposed.
>
> In the meantime, IMHO we should accept
> https://review.openstack.org/#/c/217727/ sooner rather than later
> (meaning for Liberty).
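Stevedore-style filter loading, as suggested here, might look roughly like the sketch below. A plain in-process registry stands in for real setuptools entry points so the example is self-contained, and all class and hint names are hypothetical:

```python
# Rough emulation of stevedore-style loading of scheduler filters.
# A dict registry stands in for entry points; names are illustrative.

FILTER_REGISTRY = {}

def register_filter(name):
    """Register a filter class under a name (what an entry point would do)."""
    def deco(cls):
        FILTER_REGISTRY[name] = cls
        return cls
    return deco

@register_filter("same_host")
class SameHostFilter:
    def host_passes(self, host, spec):
        # Pass only hosts listed in the hint; pass everything if unset.
        return host in spec.get("same_host", [host])

@register_filter("availability_zone")
class AvailabilityZoneFilter:
    def host_passes(self, host, spec):
        return True

def load_filters(enabled_names):
    """What a NamedExtensionManager would do: instantiate only the
    filters named in configuration, in order."""
    return [FILTER_REGISTRY[n]() for n in enabled_names if n in FILTER_REGISTRY]

filters = load_filters(["same_host"])
```

Because out-of-tree packages could register their own entry points the same way, the loaded-filter list (and, per the proposal above, the merged hint schema) would follow deployment configuration rather than the in-tree code.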
>
> Thanks for that good idea, I like it,
>
> -Sylvain
>
>
> > On Fri, Sep 4, 2015 at 19:04, Alex Xu <soulxu at gmail.com
> > <mailto:soulxu at gmail.com>> wrote:
> >
> >     2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com
> >     <mailto:ken1ohmichi at gmail.com>>:
> >
> >         Hi Andrew,
> >
> >         Sorry for this late response, I missed it.
> >
> >         2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com
> >         <mailto:andrew at lascii.com>>:
> >         > I have been growing concerned recently with some attempts to
> >         formalize
> >         > scheduler hints, both with API validation and Nova objects
> >         defining them,
> >         > and want to air those concerns and see if others agree or
> >         can help me see
> >         > why I shouldn't worry.
> >         >
> >         > Starting with the API I think the strict input validation
> >         that's being done,
> >         > as seen in
> >         >
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da
> ,
> >         > is unnecessary, and potentially problematic.
> >         >
> >         > One problem is that it doesn't indicate anything useful for
> >         a client.  The
> >         > schema indicates that there are hints available but can make
> >         no claim about
> >         > whether or not they're actually enabled.  So while a
> >         microversion bump would
> >         > typically indicate a new feature available to an end user,
> >         in the case of a
> >         > new scheduler hint a microversion bump really indicates
> >         nothing at all.  It
> >         > does ensure that if a scheduler hint is used that it's
> >         spelled properly and
> >         > the data type passed is correct, but that's primarily useful
> >         because there
> >         > is no feedback mechanism to indicate an invalid or unused
> >         scheduler hint.  I
> >         > think the API schema is a poor proxy for that deficiency.
> >         >
> >         > Since the exposure of a hint means nothing as far as its
> >         usefulness, I don't
> >         > think we should be codifying them as part of our API schema
> >         at this time.
> >         > At some point I imagine we'll evolve a more useful API for
> >         passing
> >         > information to the scheduler as part of a request, and when
> >         that happens I
> >         > don't think needing to support a myriad of meaningless hints
> >         in older API
> >         > versions is going to be desirable.
> >         >
> >         > Finally, at this time I'm not sure we should take the stance
> >         that only
> >         > in-tree scheduler hints are supported.  While I completely
> >         agree with the
> >         > desire to expose things in cross-cloud ways as we've done
> >         and are looking to
> >         > do with flavor and image properties I think scheduling is an
> >         area where we
> >         > want to allow some flexibility for deployers to write and
> >         expose scheduling
> >         > capabilities that meet their specific needs. Over time I
> >         hope we will get
> >         > to a place where some standardization can happen, but I
> >         don't think locking
> >         > in the current scheduling hints is the way forward for
> >         that.  I would love
> >         > to hear from multi-cloud users here and get some input on
> >         whether that's
> >         > crazy and they are expecting benefits from validation on the
> >         current
> >         > scheduler hints.
> >         >
> >         > Now, objects.  As part of the work to formalize the request
> >         spec sent to the
> >         > scheduler there's an effort to make a scheduler hints
> >         object.  This
> >         > formalizes them in the same way as the API with no benefit
> >         that I can see.
> >         > I won't duplicate my arguments above, but I feel the same
> >         way about the
> >         > objects as I do with the API.  I don't think needing to
> >         update and object
> >         > version every time a new hint is added is useful at this
> >         time, nor do I
> >         > think we should lock in the current in-tree hints.
> >         >
> >         > In the end this boils down to my concern that the scheduling
> >         hints api is a
> >         > really horrible user experience and I don't want it to be
> >         solidified in the
> >         > API or objects yet.  I think we should re-examine how
> >         they're handled before
> >         > that happens.
> >
> >         Now we are discussing this on
> >         https://review.openstack.org/#/c/217727/
> >         for allowing out-of-tree scheduler-hints.
> >         When we wrote API schema for scheduler-hints, it was difficult
> >         to know
> >         what are available API parameters for scheduler-hints.
> >         Current API schema exposes them and I guess that is useful for
> >         API users also.
> >
> >         One idea is that: How about auto-extending scheduler-hint API
> >         schema
> >         based on loaded schedulers?
> >         Now API schemas of "create/update/resize/rebuild a server"
> >         APIs are
> >         auto-extended based on loaded extensions by using stevedore
> >         library[1].
> >
> >
> >     Em....we will deprecate the extension from our API. this sounds
> >     like add more extension mechanism.
> >
> >         I guess we can apply the same way for scheduler-hints also in
> >         long-term.
> >         Each scheduler needs to implement a method which returns
> >         available API
> >         parameter formats and nova-api tries to get them then extends
> >         scheduler-hints API schema with them.
> >         That means out-of-tree schedulers also will be available if they
> >         implement the method.
> >         # In short-term, I can see "blocking additionalProperties"
> >         validation
> >         disabled by the way.
> >
> >         Thanks
> >         Ken Ohmichi
> >
> >         ---
> >         [1]:
> >
> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
> >
> >
>  __________________________________________________________________________
> >         OpenStack Development Mailing List (not for usage questions)
> >         Unsubscribe:
> >         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >         <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>  __________________________________________________________________________
> >     OpenStack Development Mailing List (not for usage questions)
> >     Unsubscribe:
> >     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >     <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/ba8e2d21/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 33
> Date: Fri, 4 Sep 2015 07:15:01 -0400
> From: Sean Dague <sean at dague.net>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-announce] [release][nova]
>         python-novaclient release 2.28.0 (liberty)
> Message-ID: <55E97D35.9020507 at dague.net>
> Content-Type: text/plain; charset=utf-8
>
> On 09/02/2015 05:48 PM, Matt Riedemann wrote:
> >
> >
> > On 9/2/2015 3:40 PM, Jeremy Stanley wrote:
> >> On 2015-09-02 10:55:56 -0400 (-0400), doug at doughellmann.com wrote:
> >>> We are thrilled to announce the release of:
> >>>
> >>> python-novaclient 2.27.0: Client library for OpenStack Compute API
> >> [...]
> >>
> >> Just as a heads up, there's some indication that this release is
> >> currently broken by many popular service providers (behavior ranging
> >> from 401 unauthorized errors to hanging indefinitely due, it seems,
> >> to filtering or not supporting version detection in various ways).
> >>
> >>      https://launchpad.net/bugs/1491579
> >>
> >
> > And:
> >
> > https://bugs.launchpad.net/python-novaclient/+bug/1491325
> >
> > We have a fix for ^ and I plan on putting in the request for 2.27.1
> > tonight once the fix is merged.  That should unblock manila.
> >
> > For the version discovery bug, we plan on talking about that in the nova
> > meeting tomorrow.
>
> The issues that kept novaclient version detection from working correctly
> against various clouds have now been fixed in 2.28.0 - the bug
> https://bugs.launchpad.net/python-novaclient/+bug/1491579 has been
> updated to hopefully contain all the relevant details of the issue.
>
>         -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
>
> ------------------------------
>
> Message: 34
> Date: Fri, 4 Sep 2015 07:29:45 -0400
> From: Sean Dague <sean at dague.net>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-announce] [release][nova]
>         python-novaclient release 2.28.0 (liberty)
> Message-ID: <55E980A9.9020101 at dague.net>
> Content-Type: text/plain; charset=utf-8
>
> On 09/04/2015 07:15 AM, Sean Dague wrote:
> > On 09/02/2015 05:48 PM, Matt Riedemann wrote:
> >>
> >>
> >> On 9/2/2015 3:40 PM, Jeremy Stanley wrote:
> >>> On 2015-09-02 10:55:56 -0400 (-0400), doug at doughellmann.com wrote:
> >>>> We are thrilled to announce the release of:
> >>>>
> >>>> python-novaclient 2.27.0: Client library for OpenStack Compute API
> >>> [...]
> >>>
> >>> Just as a heads up, there's some indication that this release is
> >>> currently broken by many popular service providers (behavior ranging
> >>> from 401 unauthorized errors to hanging indefinitely due, it seems,
> >>> to filtering or not supporting version detection in various ways).
> >>>
> >>>      https://launchpad.net/bugs/1491579
> >>>
> >>
> >> And:
> >>
> >> https://bugs.launchpad.net/python-novaclient/+bug/1491325
> >>
> >> We have a fix for ^ and I plan on putting in the request for 2.27.1
> >> tonight once the fix is merged.  That should unblock manila.
> >>
> >> For the version discovery bug, we plan on talking about that in the nova
> >> meeting tomorrow.
> >
> > The issues exposed in novaclient version detection against various
> > clouds have now been fixed in 2.28.0 - the bug
> > https://bugs.launchpad.net/python-novaclient/+bug/1491579 has been
> > updated to hopefully contain all the relevant details of the issue.
>
> It also looks like a big reason this unexpected behavior existed in
> the field was that configuring SSL termination correctly (so that
> link following in the REST documents works) requires setting a ton of
> additional and divergent configuration options in various services.
> Thanks for folks looking into the issue in the bug and helping explain
> the behavior we saw.
>
> We're not yet testing for that in Tempest, so people are probably not
> realizing that their API environments are a bit janky.
>
> Honestly, the fact that deployers are required to do this is crazy. The
> service catalog already has this information, and the services should be
> reflecting it back. However, people spent a lot of time working around
> the service catalog here, probably because they didn't understand it,
> creating a configuration hairball in the process.
>
> This, I think, raises the importance of really getting the Service
> Catalog into shape this next cycle so that we can get ahead of issues
> like this one in the future, and actually ensure that out-of-the-box
> cloud installs work in situations like this.
>
>         -Sean
>
> --
> Sean Dague
> http://dague.net
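The version discovery Sean describes — a client fetching the JSON "versions" document a service publishes at its root URL and following the advertised link — can be sketched roughly as below. This is a hypothetical illustration of the general mechanism, not novaclient's actual code; the function name and document shape are assumptions modeled on the Compute API's versions document.

```python
# Hedged sketch of REST API version discovery: pick the newest usable
# version advertised in a service's "versions" document and return the
# link to follow.  Illustrative only, not novaclient's implementation.

def pick_version(versions_doc, statuses=("CURRENT", "SUPPORTED")):
    """Return (version_id, href) for the newest acceptable version."""
    candidates = []
    for ver in versions_doc.get("versions", []):
        if ver.get("status") not in statuses:
            continue
        hrefs = [link["href"] for link in ver.get("links", [])
                 if link.get("rel") == "self"]
        if hrefs:
            # Sort key from the numeric parts of the id: "v2.1" -> (2, 1)
            key = tuple(int(p) for p in ver["id"].lstrip("v").split("."))
            candidates.append((key, ver["id"], hrefs[0]))
    if not candidates:
        raise LookupError("no usable API version advertised")
    candidates.sort()
    return candidates[-1][1], candidates[-1][2]

# A document shaped like what a compute endpoint might return at "/":
doc = {"versions": [
    {"id": "v2.0", "status": "SUPPORTED",
     "links": [{"rel": "self", "href": "https://cloud.example/compute/v2/"}]},
    {"id": "v2.1", "status": "CURRENT",
     "links": [{"rel": "self", "href": "https://cloud.example/compute/v2.1/"}]},
]}
```

The hrefs in that document are exactly where deployments tripped up: if SSL termination is not configured so those advertised links are reachable from outside, link-following clients hang or hit the wrong endpoint, which is the failure mode described in the thread.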
>
>
>
> ------------------------------
>
> Message: 35
> Date: Fri, 4 Sep 2015 13:37:05 +0200
> From: Flavio Percoco <flavio at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [all] Mitaka Design Summit - Proposed
>         slot allocation
> Message-ID: <20150904113705.GH30997 at redhat.com>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> On 04/09/15 12:14 +0200, Thierry Carrez wrote:
> >Hi PTLs,
> >
> >Here is the proposed slot allocation for every "big tent" project team
> >at the Mitaka Design Summit in Tokyo. This is based on the requests the
> >liberty PTLs have made, space availability and project activity &
> >collaboration needs.
> >
> >We have a lot less space (and time slots) in Tokyo compared to
> >Vancouver, so we were unable to give every team what they wanted. In
> >particular, there were far more workroom requests than we have
> >available, so we had to cut down on those quite heavily. Please note
> >that we'll have a large lunch room with roundtables inside the Design
> >Summit space that can easily be abused (outside of lunch) as space for
> >extra discussions.
> >
> >Here is the allocation:
> >
> >| fb: fishbowl 40-min slots
> >| wr: workroom 40-min slots
> >| cm: Friday contributors meetup
> >| | day: full day, morn: only morning, aft: only afternoon
> >
> >Neutron: 12fb, cm:day
> >Nova: 14fb, cm:day
> >Cinder: 5fb, 4wr, cm:day
> >Horizon: 2fb, 7wr, cm:day
> >Heat: 4fb, 8wr, cm:morn
> >Keystone: 7fb, 3wr, cm:day
> >Ironic: 4fb, 4wr, cm:morn
> >Oslo: 3fb, 5wr
> >Rally: 1fb, 2wr
> >Kolla: 3fb, 5wr, cm:aft
> >Ceilometer: 2fb, 7wr, cm:morn
> >TripleO: 2fb, 1wr, cm:full
> >Sahara: 2fb, 5wr, cm:aft
> >Murano: 2wr, cm:full
> >Glance: 3fb, 5wr, cm:full
> >Manila: 2fb, 4wr, cm:morn
> >Magnum: 5fb, 5wr, cm:full
> >Swift: 2fb, 12wr, cm:full
> >Trove: 2fb, 4wr, cm:aft
> >Barbican: 2fb, 6wr, cm:aft
> >Designate: 1fb, 4wr, cm:aft
> >OpenStackClient: 1fb, 1wr, cm:morn
> >Mistral: 1fb, 3wr
> >Zaqar: 1fb, 3wr
> >Congress: 3wr
> >Cue: 1fb, 1wr
> >Solum: 1fb
> >Searchlight: 1fb, 1wr
> >MagnetoDB: won't be present
> >
> >Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)
> >PuppetOpenStack: 2fb, 3wr
> >Documentation: 2fb, 4wr, cm:morn
> >Quality Assurance: 4fb, 4wr, cm:full
> >OpenStackAnsible: 2fb, 1wr, cm:aft
> >Release management: 1fb, 1wr (shared meetup with QA)
> >Security: 2fb, 2wr
> >ChefOpenstack: will camp in the lunch room all week
> >App catalog: 1fb, 1wr
> >I18n: cm:morn
> >OpenStack UX: 2wr
> >Packaging-deb: 2wr
> >Refstack: 2wr
> >RpmPackaging: 1fb, 1wr
> >
> >We'll start working on laying out those sessions over the available
> >rooms and time slots. If you have constraints (I already know
> >searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> >Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> >our best to limit them.
>
> From a very selfish POV, I'd like to avoid conflicts between Glance
> and Zaqar.
>
> From a community POV, it'd be cool if we could avoid conflicts between
> Zaqar and Sahara (at least in one wr slot) since we'd like to dedicate
> one to Sahara.
>
> Cheers,
> Flavio
>
> --
> @flaper87
> Flavio Percoco
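As an aside, the allocation lines quoted above follow a simple machine-readable format ("Cinder: 5fb, 4wr, cm:day"). A minimal sketch of parsing it — the parser is illustrative, not an official summit tool:

```python
# Hedged sketch: parse one slot-allocation line from the message above
# into a dict.  The line format comes from the thread; the code is an
# illustration only.
import re

def parse_allocation(line):
    """Parse e.g. 'Cinder: 5fb, 4wr, cm:day' -> dict of slot counts."""
    team, _, rest = line.partition(":")
    alloc = {"team": team.strip(), "fb": 0, "wr": 0, "cm": None}
    for token in (t.strip() for t in rest.split(",")):
        m = re.fullmatch(r"(\d+)(fb|wr)", token)
        if m:
            # "5fb" -> fb = 5, "4wr" -> wr = 4
            alloc[m.group(2)] = int(m.group(1))
        elif token.startswith("cm:"):
            # contributors-meetup attendance: day, morn, aft, full
            alloc["cm"] = token[3:]
    return alloc
```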
>
> ------------------------------
>
> Message: 36
> Date: Fri, 4 Sep 2015 14:48:53 +0300
> From: Ekaterina Chernova <efedorova at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>         <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [murano] [dashboard] Remove the owner
>         filter from "Package Definitions" page
> Message-ID:
>         <
> CAOFFu8Zo5SRVPUytGk7kj4UgNN5KJ5m39d9NeJpKoB427FbzfA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Agreed.
>
Currently, pagination is broken on the "Package Definitions" page, so
removing that filter will fix it. Also, the 'Other' tab looks
unhelpful; the admin should be shown which tenant each package belongs
to. That improvement will be added later.
>
> Regards,
> Kate.
>
> On Fri, Sep 4, 2015 at 1:06 PM, Alexander Tivelkov <ativelkov at mirantis.com
> >
> wrote:
>
> > +1 on this.
> >
> > Filtering by ownership makes sense only on the Catalog view (i.e. on
> > the page of usable apps), but not on an admin-like console like the
> > list of package definitions.
> >
> > --
> > Regards,
> > Alexander Tivelkov
> >
> > On Fri, Sep 4, 2015 at 12:36 PM, Dmitro Dovbii <ddovbii at mirantis.com>
> > wrote:
> >
> >> Hi folks!
> >>
> >> I want to suggest deleting the owner filter (3 tabs) from the
> >> Package Definition page. Previously this filter was available for
> >> all users, and we agreed that it is useless. Now it is available
> >> only for admins, but I think this fact still doesn't improve the
> >> UX. Moreover, this filter blocks the implementation of search by
> >> name, because the two filters could work inconsistently.
> >> So, please express your opinion on this issue. If you agree, I will
> >> remove this filter ASAP.
> >>
> >> Best regards,
> >> Dmytro Dovbii
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
> ------------------------------
>
> Message: 37
> Date: Fri, 4 Sep 2015 13:52:41 +0200
> From: Dmitry Tantsur <dtantsur at redhat.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all] Mitaka Design Summit - Proposed
>         slot allocation
> Message-ID: <55E98609.4060708 at redhat.com>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
> On 09/04/2015 12:14 PM, Thierry Carrez wrote:
> > Hi PTLs,
> >
> > Here is the proposed slot allocation for every "big tent" project team
> > at the Mitaka Design Summit in Tokyo. This is based on the requests the
> > liberty PTLs have made, space availability and project activity &
> > collaboration needs.
> >
> > We have a lot less space (and time slots) in Tokyo compared to
> > Vancouver, so we were unable to give every team what they wanted. In
> > particular, there were far more workroom requests than we have
> > available, so we had to cut down on those quite heavily. Please note
> > that we'll have a large lunch room with roundtables inside the Design
> > Summit space that can easily be abused (outside of lunch) as space for
> > extra discussions.
> >
> > Here is the allocation:
> >
> > [...]
> >
> > We'll start working on laying out those sessions over the available
> > rooms and time slots. If you have constraints (I already know
> > searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> > Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> > our best to limit them.
> >
>
> Would be cool to avoid conflicts between Ironic and TripleO.
>
>
>
> ------------------------------
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> End of OpenStack-dev Digest, Vol 41, Issue 9
> ********************************************
>

From mgagne at internap.com  Fri Sep  4 19:07:32 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Fri, 4 Sep 2015 15:07:32 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55E9E5EC.9050300@inaugust.com>
References: <55E9A4F2.5030809@inaugust.com>
 <BBCB05C9-C0D0-456E-BA4D-817D45F00563@gmail.com>
 <55E9CBD9.7080603@inaugust.com> <55E9D64F.5050200@internap.com>
 <CAPWkaSU--XLza5i_-_cUXM=E9ftQynDUbZvDj38UQSNAQ6=kTA@mail.gmail.com>
 <55E9E5EC.9050300@inaugust.com>
Message-ID: <55E9EBF4.9040905@internap.com>

On 2015-09-04 2:41 PM, Monty Taylor wrote:
> On 09/04/2015 01:42 PM, John Griffith wrote:
>>
>> "Is no good"? You would like to see "less" in the output; like just
>> the command name itself and "Policy doesn't allow"?
>>
>> To Mathieu's point, fair statement WRT the visibility of the policy name.
> 
> Totally agree on the policy name. The one I did happened to be clear -
> that is not always the case. I'd love to see that.
> 
> But more to your question - yes, as an end user, I don't know what a
> volume_extension:volume_admin_actions:reset_status is - but I do know
> that I ran "cinder reset-state" - so getting:
> 
> "Cloud policy does not allow you to run reset_status"
> 
> would be fairly clear to me.

Don't assume the user will run it from the (supposedly) deprecated
Cinder CLI. It could be from the new openstackclient or even an SDK
written in Ruby which might not name it "reset_status".

I would prefer a generic message over an overly specific one which
makes a lot of wrong assumptions about the consumer.

-- 
Mathieu
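The client-agnostic failure message the thread converges on can be sketched as follows — a hypothetical shape loosely modeled on oslo.policy-style enforcement, not cinder's actual code; all names here are illustrative:

```python
# Hedged sketch: report the policy rule that denied a request without
# assuming which CLI or SDK the caller used.  Illustrative only.

class PolicyNotAuthorized(Exception):
    """Raised when policy denies an action; names the rule, not a CLI."""

    def __init__(self, rule):
        self.rule = rule
        super().__init__(
            "Policy does not allow this request (rule: %s)" % rule)

def enforce(rule, granted_rules):
    """Raise a client-agnostic error if 'rule' is not granted."""
    if rule not in granted_rules:
        raise PolicyNotAuthorized(rule)
```

The message carries the rule name (so an operator can look it up) but no command name, which is Mathieu's point: a Ruby SDK, openstackclient, and the cinder CLI would all surface the same error.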


From ihrachys at redhat.com  Fri Sep  4 19:07:47 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 4 Sep 2015 21:07:47 +0200
Subject: [openstack-dev] [ptl][release] flushing unreleased client
	library changes
In-Reply-To: <1441384328-sup-9901@lrrr.local>
References: <1441384328-sup-9901@lrrr.local>
Message-ID: <8ED5035B-D141-4A8B-9021-A3934ACCB97F@redhat.com>

> On 04 Sep 2015, at 18:39, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> 
> PTLs,
> 
> We have quite a few unreleased client changes pending, and it would
> be good to go ahead and publish them so they can be tested as part
> of the release candidate process. I have the full list of changes for
> each project below, so please find yours and review them and then
> propose a release request to the openstack/releases repository.
> 
> On a separate note, for next cycle we need to do a better job of
> releasing these much much earlier (a few of these changes are at
> least a month old). Remember that changes to libraries do not go
> into the gate for consuming projects until that library is released.
> If you have any suggestions for how to improve our tracking of
> needed releases, let me know.
> 
> Doug
> 
> 
> [ Unreleased changes in openstack/python-barbicanclient ]
> 
> Changes in python-barbicanclient 3.3.0..97cc46a


+++

It may also block some efforts; e.g. we cannot move forward with fullstack tests for the QoS feature in neutron because those rely on neutronclient changes that are not yet released.

Ihar

From doug at doughellmann.com  Fri Sep  4 19:21:11 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 04 Sep 2015 15:21:11 -0400
Subject: [openstack-dev] [ptl][release] flushing unreleased client
	library changes
In-Reply-To: <55E9E81E.9060401@swartzlander.org>
References: <1441384328-sup-9901@lrrr.local>
 <55E9E81E.9060401@swartzlander.org>
Message-ID: <1441394380-sup-7270@lrrr.local>

Excerpts from Ben Swartzlander's message of 2015-09-04 14:51:10 -0400:
> On 09/04/2015 12:39 PM, Doug Hellmann wrote:
> > PTLs,
> >
> > We have quite a few unreleased client changes pending, and it would
> > be good to go ahead and publish them so they can be tested as part
> > of the release candidate process. I have the full list of changes for
> > each project below, so please find yours and review them and then
> > propose a release request to the openstack/releases repository.
> 
> Manila had multiple gate-breaking bugs this week and I've extended our 
> feature freeze to next Tuesday to compensate. As a result our L-3 
> milestone release is not really representative of Liberty and we'd 
> rather not do a client release until we reach RC1.

Keep in mind that the unreleased changes are not being used to test
anything at all in the gate, so there's an integration "penalty" for
delaying releases. You can have as many releases as you want, and we can
create the stable branch from the last useful release any time after it
is created. So, I still recommend releasing early and often unless you
anticipate making API or CLI breaking changes between now and RC1.

Doug

> 
> -Ben Swartzlander
> 
> > On a separate note, for next cycle we need to do a better job of
> > releasing these much much earlier (a few of these changes are at
> > least a month old). Remember that changes to libraries do not go
> > into the gate for consuming projects until that library is released.
> > If you have any suggestions for how to improve our tracking of
> > needed releases, let me know.
> >
> > Doug
> >
> >
> > [ Unreleased changes in openstack/python-barbicanclient ]
> >
> > Changes in python-barbicanclient 3.3.0..97cc46a
> > -----------------------------------------------
> > 4572293 2015-08-27 19:38:39 -0500 Add epilog to parser
> > 17ed50a 2015-08-25 14:58:25 +0000 Add Unit Tests for Store and Update Payload when Payload is zero
> > 34256de 2015-08-25 09:56:24 -0500 Allow Barbican Client Secret Update Functionality
> >
> > [ Unreleased changes in openstack/python-ceilometerclient ]
> >
> > Changes in python-ceilometerclient 1.4.0..2006902
> > -------------------------------------------------
> > 2006902 2015-08-26 14:09:00 +0000 Updated from global requirements
> > 2429dae 2015-08-25 01:32:56 +0000 Don't try to get aodh endpoint if auth_url didn't provided
> > 6498d55 2015-08-13 20:21:24 +0000 Updated from global requirements
> >
> > [ Unreleased changes in openstack/python-cinderclient ]
> >
> > Changes in python-cinderclient 1.3.1..1c82825
> > ---------------------------------------------
> > 1c82825 2015-09-02 17:18:28 -0700 Update path to subunit2html in post_test_hook
> > 471aea8 2015-09-02 00:53:40 +0000 Adds command to fetch specified backend capabilities
> > 2d979dc 2015-09-01 22:35:28 +0800 Volume status management for volume migration
> > 50758ba 2015-08-27 09:39:58 -0500 Fixed test_password_prompted
> > dc6e823 2015-08-26 23:04:16 -0700 Fix help message for reset-state commands
> > f805f5a 2015-08-25 15:15:20 +0300 Add functional tests for python-cinderclient
> > 8cc3ee2 2015-08-19 18:05:34 +0300 Add support '--all-tenants' for cinder backup-list
> > 2c3169e 2015-08-11 10:18:01 -0400 CLI: Non-disruptive backup
> > 5e26906 2015-08-11 13:14:04 +0300 Add tests for python-cinderclient
> > 780a129 2015-08-09 15:57:00 +0900 Replace assertEqual(None, *) with assertIsNone in tests
> > a9405b1 2015-08-08 05:36:45 -0400 CLI: Clone CG
> > 03542ee 2015-08-04 14:21:52 -0700 Fix ClientException init when there is no message on py34
> > 2ec9a22 2015-08-03 11:14:44 +0800 Fixes table when there are multiline in result data
> > 04caf88 2015-07-30 18:24:57 +0300 Set default OS_VOLUME_API_VERSION to '2'
> > bae0bb3 2015-07-27 01:28:00 +0000 Add commands for modifying image metadata
> > 629e548 2015-07-24 03:34:55 +0000 Updated from global requirements
> > b51e43e 2015-07-21 01:04:02 +0000 Remove H302
> > b426b71 2015-07-20 13:21:02 +0800 Show backup and volume info in backup_restore
> > dc1186d 2015-07-16 19:45:08 -0700 Add response message when volume delete
> > 075381d 2015-07-15 09:20:48 +0800 Add more details for replication
> > 953f766 2015-07-14 18:51:58 +0800 New mock release(1.1.0) broke unit/function tests
> > 8afc06c 2015-07-08 12:11:07 -0500 Remove unnecessary check for tenant information
> > c23586b 2015-07-08 11:42:59 +0800 Remove redundant statement and refactor
> > 891ef3e 2015-06-25 13:27:42 +0000 Use shared shell arguments provided by Session
> >
> > [ Unreleased changes in openstack/python-congressclient ]
> >
> > Changes in python-congressclient 1.1.0..0874721
> > -----------------------------------------------
> > 0f699f8 2015-09-02 14:47:32 +0800 Add actions listing command
> > d7fa523 2015-08-26 14:09:29 +0000 Updated from global requirements
> > 36f2b47 2015-08-11 01:38:32 +0000 Updated from global requirements
> > ee07cb3 2015-07-27 15:47:12 +0800 Fix constant name
> > f9858a8 2015-07-27 15:19:04 +0800 Support version list API in client
> > 9693132 2015-06-20 22:38:10 +0900 Adding a test of datasource table show CLI
> > a102014 2015-05-21 04:52:12 -0300 Favor the use of importlib over Python internal __import__ statement
> > 726d560 2015-05-07 23:37:04 +0000 Updated from global requirements
> > b8a176b 2015-04-24 16:31:49 +0800 Replace stackforge with openstack in README.rst
> > 8c31d3f 2015-03-31 15:33:53 -0700 Add api bindings for datasource request-request trigger
> >
> > [ Unreleased changes in openstack/python-cueclient ]
> >
> > Changes in python-cueclient 0.0.1..d9ac712
> > ------------------------------------------
> > d9ac712 2015-08-26 14:09:42 +0000 Updated from global requirements
> > 47b81c3 2015-08-11 13:38:14 -0700 Update python-binding section for keystone v3 support
> > d30c3b1 2015-08-11 01:38:34 +0000 Updated from global requirements
> > 14e5f05 2015-08-06 10:15:13 -0700 Adding size field cue cluster list command
> > 0c7559b 2015-07-16 10:29:07 -0700 Rename end_points to endpoints in API
> > 0e36051 2015-07-13 12:55:29 -0700 updating docs from stackforge->openstack
> > cccccda 2015-07-13 12:29:04 -0700 fixing oslo_serialization reference
> > b9352df 2015-07-10 17:04:45 -0700 Update .gitreview file for project rename
> > 1acc546 2015-06-08 03:19:18 -0700 Rename cue command shell commands with message-broker
> > d5b3acd 2015-04-23 12:57:55 -0700 Refactor cue client tests
> > 9590b5f 2015-04-18 21:13:39 -0700 Change nic type to list
> > 3779999 2015-04-16 12:26:16 -0700 Resolving cluster wrapper issue Closes bug: #1445175
> > 665dee5 2015-04-14 14:05:28 -0700 Add .coveragerc to better control coverage outputs
> > a992e48 2015-04-09 17:46:58 +0000 Remove cluster wrapper from response body
> > b302486 2015-04-01 16:19:19 -0700 Modifying CRD return type from dict to object
> > 7618f95 2015-03-31 15:19:27 -0700 Expose endpoints in cluster list command
> > 8c47aa6 2015-03-06 17:45:24 -0800 Removing openstackclient dependency
> > 9c125cc 2015-03-06 16:27:16 -0800 Adding cue python binding
> >
> > [ Unreleased changes in openstack/python-designateclient ]
> >
> > Changes in python-designateclient 1.4.0..52d68e5
> > ------------------------------------------------
> > f7d4dbb 2015-09-02 15:54:08 +0200 V2 CLI Support
> > 86d988d 2015-08-26 14:09:56 +0000 Updated from global requirements
> > 83d4cea 2015-08-19 18:31:49 +0800 Update github's URL
> > 1e1b94c 2015-08-13 20:21:31 +0000 Updated from global requirements
> > 71f465c 2015-08-11 10:12:25 +0200 Don't wildcard resolve names
> > 74ee1a1 2015-08-11 01:38:35 +0000 Updated from global requirements
> > 08191bb 2015-08-10 01:10:02 +0000 Updated from global requirements
> > 035657c 2015-08-07 18:41:29 +0200 Improve help strings
> >
> > [ Unreleased changes in openstack/python-glanceclient ]
> >
> > Changes in python-glanceclient 1.0.0..90b7dc4
> > ---------------------------------------------
> > 90b7dc4 2015-09-04 10:29:01 +0900 Update path to subunit2html in post_test_hook
> > 1e2274a 2015-09-01 18:03:41 +0200 Password should be prompted once
> >
> > [ Unreleased changes in openstack/python-ironicclient ]
> >
> > Changes in python-ironicclient 0.8.0..6a58f9d
> > ---------------------------------------------
> > 156ca47 2015-09-03 12:55:41 -0700 Fix functional tests job
> >
> > [ Unreleased changes in openstack/python-ironic-inspector-client ]
> >
> > Changes in python-ironic-inspector-client 1.0.1..7ac591e
> > --------------------------------------------------------
> > 1ce6380 2015-08-26 14:10:34 +0000 Updated from global requirements
> > 29e38a9 2015-08-25 18:48:38 +0200 Make our README friendly to OpenStack release-tools
> > 9625bf7 2015-08-13 20:21:36 +0000 Updated from global requirements
> > 95133c7 2015-08-12 14:58:58 +0200 Make sure we expose all API elements in the top-level package
> > bd3737c 2015-08-12 14:43:45 +0200 Drop comment about changing functional tests to use released inspector
> > 69dc6ee 2015-08-12 14:37:05 +0200 Fix error message for unsupported API version
> > 89695da 2015-08-11 01:38:40 +0000 Updated from global requirements
> > fe67f67 2015-08-10 01:10:07 +0000 Updated from global requirements
> > 61448ba 2015-08-04 12:50:13 +0200 Implement optional API versioning
> > 7d443fb 2015-07-23 14:13:38 +0200 Create own functional tests for the client
> > e7bb103 2015-07-22 04:59:33 +0000 Updated from global requirements
> > 1e3d334 2015-07-17 16:17:44 +0000 Updated from global requirements
> > 8d68f61 2015-07-12 15:22:07 +0000 Updated from global requirements
> > 16dc081 2015-07-09 17:52:05 +0200 Use released ironic-inspector for functional testing
> > 0df30c9 2015-07-08 20:40:34 +0200 Don't repeat requirements in tox.ini
> > 2ea4b8c 2015-07-01 14:35:07 +0200 Add functional test
> > 66f8551 2015-06-22 22:35:16 +0000 Updated from global requirements
> > 71d491b 2015-06-23 00:02:29 +0900 Change to Capital letters
> >
> > [ Unreleased changes in openstack/python-keystoneclient ]
> >
> > Changes in python-keystoneclient 1.6.0..6231459
> > -----------------------------------------------
> > 3e862bb 2015-09-02 17:20:17 -0700 Update path to subunit2html in post_test_hook
> > 1697fd7 2015-09-02 11:39:35 -0500 Deprecate create Discover without session
> > 3e26ff8 2015-08-31 12:49:34 -0700 Mask passwords when logging the HTTP response
> > f58661e 2015-08-31 15:36:04 +0000 Updated from global requirements
> > 7c545e5 2015-08-29 11:28:01 -0500 Update deprecation text for Session properties
> > e76423f 2015-08-29 11:28:01 -0500 Proper deprecation for httpclient.USER_AGENT
> > 42bd016 2015-08-29 11:28:01 -0500 Deprecate create HTTPClient without session
> > e0276c6 2015-08-26 06:24:27 +0000 Fix Accept header in SAML2 requests
> > d22cd9d 2015-08-20 17:10:05 +0000 Updated from global requirements
> > 0cb46c9 2015-08-15 07:36:09 +0800 Expose token_endpoint.Token as admin_token
> > 4bdbb83 2015-08-13 19:01:42 -0500 Proper deprecation for UserManager project argument
> > a50f8a1 2015-08-13 19:01:42 -0500 Proper deprecation for CredentialManager data argument
> > 4e4dede 2015-08-13 19:01:42 -0500 Deprecate create v3 Client without session
> > b94a610 2015-08-13 19:01:42 -0500 Deprecate create v2_0 Client without session
> > 962ab57 2015-08-13 19:01:42 -0500 Proper deprecation for Session.get_token()
> > afcf4a1 2015-08-13 19:01:42 -0500 Deprecate use of cert and key
> > 58cc453 2015-08-13 18:59:31 -0500 Proper deprecation for Session.construct()
> > 0d293ea 2015-08-13 18:58:27 -0500 Deprecate ServiceCatalog.get_urls() with no attr
> > 803eb23 2015-08-13 18:57:31 -0500 Deprecate ServiceCatalog(region_name)
> > cba0a68 2015-08-13 20:21:41 +0000 Updated from global requirements
> > 1cbfb2e 2015-08-13 02:18:54 +0000 Updated from global requirements
> > 43e69cc 2015-08-10 01:10:11 +0000 Updated from global requirements
> > b54d9f1 2015-08-06 14:44:12 -0500 Stop using .keys() on dicts where not needed
> > 6dae40e 2015-08-06 16:57:32 +0000 Inhrerit roles project calls on keystoneclient v3
> > 51d9d12 2015-08-05 12:28:30 -0500 Deprecate openstack.common.apiclient
> > 16e834d 2015-08-05 11:24:08 -0500 Move apiclient.base.Resource into keystoneclient
> > 26534da 2015-08-05 14:59:23 +0000 oslo-incubator apiclient.exceptions to keystoneclient.exceptions
> > eaa7ddd 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient session and adapter properties
> > 0c2fef5 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient.request methods
> > ada04ac 2015-08-04 09:56:44 -0500 Proper deprecation for HTTPClient.tenant_id|name
> > 1721e01 2015-08-04 09:56:43 -0500 Proper deprecation for HTTPClient tenant_id, tenant_name parameters
> > a9ef92a 2015-08-04 00:48:54 +0000 Updated from global requirements
> > 22236fd 2015-08-02 11:22:18 -0500 Clarify setting socket_options
> > aa5738c 2015-08-02 11:18:45 -0500 Remove check for requests version
> > 9e470a5 2015-07-29 03:50:34 +0000 Updated from global requirements
> > 0b74590 2015-07-26 06:54:23 -0500 Fix tests passing user, project, and token
> > 9f17732 2015-07-26 06:54:23 -0500 Proper deprecation for httpclient.request()
> > fb28e1a 2015-07-26 06:54:23 -0500 Proper deprecation for Dicover.raw_version_data unstable parameter
> > a303cbc 2015-07-26 06:54:23 -0500 Proper deprecation for Dicover.available_versions()
> > 5547fe8 2015-07-26 06:54:23 -0500 Proper deprecation for is_ans1_token
> > ce58b07 2015-07-26 06:54:23 -0500 Proper deprecation for client.HTTPClient
> > c5b0319 2015-07-26 06:54:23 -0500 Proper deprecation for Manager.api
> > fee5ba7 2015-07-26 06:54:22 -0500 Stop using Manager.api
> > b1496ab 2015-07-26 06:54:22 -0500 Proper deprecation for BaseIdentityPlugin trust_id property
> > 799e1fa 2015-07-26 06:54:22 -0500 Proper deprecation for BaseIdentityPlugin username, password, token_id properties
> > 85b32fc 2015-07-26 06:54:22 -0500 Proper deprecations for modules
> > 6950527 2015-07-25 09:51:42 +0000 Use UUID values in v3 test fixtures
> > 1a2ccb0 2015-07-24 11:05:05 -0500 Proper deprecation for AccessInfo management_url property
> > 6d82f1f 2015-07-24 11:05:05 -0500 Proper deprecation for AccessInfo auth_url property
> > 66fd1eb 2015-07-24 11:04:04 -0500 Stop using deprecated AccessInfo.auth_url and management_url
> > f782ee8 2015-07-24 09:14:40 -0500 Proper deprecation for AccessInfo scoped property
> > 8d65259 2015-07-24 08:16:03 -0500 Proper deprecation for AccessInfo region_name parameter
> > 610844d 2015-07-24 08:05:13 -0500 Deprecations fixture support calling deprecated function
> > c6b14f9 2015-07-23 20:14:14 -0500 Set reasonable defaults for TCP Keep-Alive
> > bb6463e 2015-07-23 07:44:44 +0000 Updated from global requirements
> > 0d5415e 2015-07-23 07:22:57 +0000 Remove unused time_patcher
> > 7d5d8b3 2015-07-22 23:41:07 +0300 Make OAuth testcase use actual request headers
> > 98326c7 2015-07-19 09:49:04 -0500 Prevent attempts to "filter" list() calls by globally unique IDs
> > a4584c4 2015-07-15 22:01:14 +0000 Add get_token_data to token CRUD
> > 2f90bb6 2015-07-15 01:37:25 +0000 Updated from global requirements
> > 3668d9c 2015-07-13 04:53:17 -0700 py34 not py33 is tested and supported
> > d3b9755 2015-07-12 15:22:13 +0000 Updated from global requirements
> > 8bab2c2 2015-07-11 08:01:39 -0500 Remove confusing deprecation comment from token_to_cms
> > 4034366 2015-07-08 20:12:31 +0000 Fixes modules index generated by Sphinx
> > c503c29 2015-07-02 18:57:20 +0000 Updated from global requirements
> > 31f326d 2015-06-30 12:58:55 -0500 Unit tests catch deprecated function usage
> > 225832f 2015-06-30 12:58:55 -0500 Switch from deprecated oslo_utils.timeutils.strtime
> > 97c2c69 2015-06-30 12:58:55 -0500 Switch from deprecated isotime
> > ef0f267 2015-06-29 00:12:44 +0000 Remove keystoneclient CLI references in README
> > 20db11f 2015-06-29 00:12:11 +0000 Update README.rst and remove ancient reference
> > a951023 2015-06-28 05:49:46 +0000 Remove unused images from docs
> > 2b058ba 2015-06-22 20:00:20 +0000 Updated from global requirements
> > 02f07cf 2015-06-17 11:15:03 -0400 Add openid connect client support
> > 350b795 2015-06-13 09:02:09 -0500 Stop using tearDown
> > f249332 2015-06-13 09:02:09 -0500 Use mock rather than mox
> > 75d4b16 2015-06-13 09:01:44 -0500 Remove unused setUp from ClientTest
> > 08783e0 2015-06-11 00:48:15 +0000 Updated from global requirements
> > d99c56f 2015-06-09 13:42:53 -0400 Iterate over copy of sys.modules keys in Python2/3
> > 945e519 2015-06-08 21:11:54 -0500 Use random strings for test fixtures
> > c0046d7 2015-06-08 20:29:07 -0500 Stop using function deprecated in Python 3
> > 2a032a5 2015-06-05 09:45:08 -0400 Use python-six shim for assertRaisesRegex/p
> > 86018ca 2015-06-03 21:01:18 -0500 tox env for Bandit
> > f756798 2015-05-31 10:27:01 -0500 Cleanup fixture imports
> > 28fd6d5 2015-05-30 12:36:16 +0000 Removes unused debug logging code
> > 0ecf9b1 2015-05-26 17:05:09 +1000 Add get_communication_params interface to plugins
> > 8994d90 2015-05-04 16:07:31 +0800 add --slowest flag to testr
> > 831ba03 2015-03-31 08:47:25 +1100 Support /auth routes for list projects and domains
> >
> > [ Unreleased changes in openstack/python-magnumclient ]
> >
> > Changes in python-magnumclient 0.2.1..e6dd7bb
> > ---------------------------------------------
> > 31417f7 2015-08-27 04:18:52 +0000 Updated from global requirements
> > 97dbb71 2015-08-26 18:15:06 +0000 Rename existing service-* to coe-service-*
> > 39e7b24 2015-08-26 14:11:20 +0000 Updated from global requirements
> > ea83d71 2015-08-21 22:21:37 +0000 Remove name from test token
> > b450891 2015-08-18 05:28:06 -0400 This adds proxy feature in magnum client
> > d55e7f3 2015-08-13 20:21:44 +0000 Updated from global requirements
> > ba689b8 2015-08-10 01:10:14 +0000 Updated from global requirements
> > 9de9a3a 2015-08-04 00:48:58 +0000 Updated from global requirements
> > 292310c 2015-08-03 11:22:06 -0400 Add support for multiple master nodes
> > 3d5e0ed 2015-07-29 03:50:37 +0000 Updated from global requirements
> > 24577e3 2015-07-22 23:49:23 +0300 Remove uuidutils from openstack.common
> > 6455a1c 2015-07-22 04:59:40 +0000 Updated from global requirements
> > b9681e9 2015-07-21 23:16:57 +0000 Updated from global requirements
> > c457cda 2015-07-17 23:06:58 +0800 Remove H803 rule
> > de2e368 2015-07-15 01:37:31 +0000 Updated from global requirements
> > 70830ed 2015-07-12 15:22:16 +0000 Updated from global requirements
> > 017fccb 2015-06-30 20:03:09 +0000 Updated from global requirements
> > 252586a 2015-06-24 10:55:11 +0800 Rename image_id to image when create a container
> > d85771a 2015-06-22 08:27:54 +0000 Updated from global requirements
> > 0bdc3ce 2015-06-18 15:24:46 -0700 Add missing dependency oslo.serialization
> > 0469af8 2015-06-16 19:23:05 +0000 Updated from global requirements
> > 0c9f735 2015-06-16 11:44:54 +0530 Add additional arguments to CLI for container-create.
> > b9ca1d5 2015-06-15 12:54:41 +0900 Pass environment variables of proxy to tox
> > ac983f1 2015-06-12 05:30:38 +0000 Change container-execute to container-exec
> > 8e75123 2015-06-11 00:48:19 +0000 Updated from global requirements
> > 80d8f1a 2015-06-08 11:09:42 +0000 Sync from latest oslo-incubator
> > 7f58e6f 2015-06-04 16:24:29 +0000 Updated from global requirements
> > 0ea4159 2015-05-27 11:44:04 +0200 Fix translation setup
> >
> > [ Unreleased changes in openstack/python-manilaclient ]
> >
> > Changes in python-manilaclient 1.2.0..0c7b857
> > ---------------------------------------------
> > 0c7b857 2015-08-27 04:18:54 +0000 Updated from global requirements
> > fec43dd 2015-08-19 22:01:12 -0400 Move requirement Openstack client to test-requirements
> > fa05919 2015-08-15 20:54:25 +0000 Updated from global requirements
> > 5f45b18 2015-08-12 17:20:01 +0000 Make spec_driver_handles_share_servers required
> > f0c6685 2015-08-06 10:47:11 +0800 Modify the manage command prompt information
> > 8a3702d 2015-08-04 00:57:34 +0000 Updated from global requirements
> > 4454542 2015-07-22 04:59:42 +0000 Updated from global requirements
> > e919065 2015-07-16 06:59:53 -0400 Add functional tests for access rules
> > c954684 2015-07-15 20:45:50 +0000 Updated from global requirements
> > 2a4f79c 2015-07-15 08:37:48 -0400 Fix post_test_hook and update test-requirements
> > 4f25278 2015-06-30 22:45:35 +0000 Updated from global requirements
> > 92643e8 2015-06-22 08:27:56 +0000 Updated from global requirements
> > d2c0e26 2015-06-16 19:23:07 +0000 Updated from global requirements
> > 6b2121e 2015-06-05 17:21:43 +0300 Add share shrink API
> > 611d4fa 2015-06-04 16:24:31 +0000 Updated from global requirements
> > 6733ae3 2015-06-03 11:35:01 +0000 Updated from global requirements
> > bd28eda 2015-06-02 10:32:55 +0000 Add rw functional tests for shares metadata
> > ada9825 2015-06-02 12:46:34 +0300 Add rw functional tests for shares
> > c668f00 2015-05-29 22:53:37 +0000 Updated from global requirements
> >
> > [ Unreleased changes in openstack/python-muranoclient ]
> >
> > Changes in python-muranoclient 0.6.3..7897d7f
> > ---------------------------------------------
> > 54918f5 2015-09-03 10:39:45 +0000 Added the support of Glance Artifact Repository
> > 39967d9 2015-09-02 19:09:10 +0000 Copy the code of Glance V3 (artifacts) client
> > 1141dd5 2015-09-02 17:18:28 +0300 Fixed issue with cacert parameter
> > 66af770 2015-08-31 21:53:45 +0800 Update the git ingore
> > 6e9f436 2015-08-31 13:48:21 +0800 Fix the reversed incoming parameters of assertEqual
> > b632ede 2015-08-27 14:18:12 +0800 Add olso.log into muranoclient's requirements.txt
> > 75a616f 2015-08-26 14:11:36 +0000 Updated from global requirements
> > 6242211 2015-08-26 12:45:10 +0800 Standardise help parameter of CLI commands
> > 924a83f 2015-08-26 09:56:04 +0800 Fix some spelling mistakes of setup files
> >
> > [ Unreleased changes in openstack/python-neutronclient ]
> >
> > Changes in python-neutronclient 2.6.0..d75f79f
> > ----------------------------------------------
> > d75f79f 2015-09-04 11:06:00 +0900 Update path to subunit2html in post_test_hook
> > 627f68e 2015-09-02 08:19:20 +0000 Updated from global requirements
> > 0558b49 2015-09-01 02:01:07 +0000 Add REJECT rule on FWaaS Client
> > a4f64f6 2015-08-27 09:36:04 -0700 Update tls_container_id to tls_container_ref
> > 9a51f2d 2015-08-27 04:18:58 +0000 Updated from global requirements
> > 31df9de 2015-08-26 16:32:21 +0300 Support CLI changes for QoS (2/2).
> > 002a0c7 2015-08-26 16:26:21 +0300 Support QoS neutron-client (1/2).
> > a174215 2015-08-25 09:26:00 +0800 Clear the extension requirement
> > bb7124e 2015-08-23 05:28:30 +0000 Updated from global requirements
> > c44b57f 2015-08-21 16:44:52 +0000 Make subnetpool-list show correct address scope column
> > abc2b65 2015-08-21 16:44:28 +0000 Fix find_resourceid_by_name call for address scopes
> > 45ed3ec 2015-08-20 21:57:25 +0800 Add extension name to extension's command help text line
> > f6ca3a1 2015-08-20 12:04:37 +0530 Adding registration interface for non_admin_status_resources
> > 54e7b94 2015-08-20 12:00:52 +0800 Add document for entry point in setup.cfg
> > de5d3bb 2015-08-19 11:32:54 +0000 Create hooks for running functional test
> > d749973 2015-08-19 13:51:30 +0530 Support Command line changes for Address Scope
> > 5271890 2015-08-13 14:26:59 +0000 Remove --shared option from firewall-create
> > 16e02dd 2015-08-12 18:05:56 +0300 Disable failing vpn tests
> > 22c8492 2015-08-10 17:14:54 +0530 Support RBAC neutron-client changes.
> > 8da3dc8 2015-08-10 12:59:58 +0300 Remove newlines from request and response log
> > ccf6fb8 2015-07-17 16:18:04 +0000 Updated from global requirements
> > d61a5b5 2015-07-15 01:37:35 +0000 Updated from global requirements
> > ab7d9e8 2015-07-14 18:00:43 +0300 Devref documentation for client command extension support
> > 0094e51 2015-07-14 15:55:08 +0530 Support CLI changes for associating subnetpools and address-scopes.
> > f936493 2015-07-13 09:49:57 +0900 Remove unused AlreadyAttachedClient
> > 31f8f23 2015-07-13 09:11:53 +0900 Avoid overwriting parsed_args
> > 043656c 2015-07-12 21:47:32 +0000 Determine ip version during subnet create.
> > 52721a8 2015-07-12 19:59:00 +0000 Call UnsetStub/VerifyAll properly for tests with exceptions
> > 25a947b 2015-07-12 15:22:20 +0000 Updated from global requirements
> > f4ddc6e 2015-07-03 00:17:32 -0500 Support resource plurals not ending in 's'
> > f446ab5 2015-06-30 22:45:38 +0000 Updated from global requirements
> > da3a415 2015-06-30 13:47:29 +0300 Revert "Add '--router:external' option to 'net-create'"
> > 8557cd9 2015-06-23 21:50:00 +0000 Updated from global requirements
> > dcb7401 2015-06-16 19:23:09 +0000 Updated from global requirements
> > f13161b 2015-06-12 20:41:47 +0000 Fixes indentation for bash completion script
> > c809e06 2015-06-12 13:32:44 -0700 Allow bash completion script to work with BSD sed
> > 58a5ec6 2015-06-12 11:38:02 -0400 Add alternative login description in neutronclient docs
> > a788a3e 2015-06-08 21:20:10 +0000 Updated from global requirements
> > a2ae8eb 2015-06-08 18:28:23 +0000 Raise user-friendly exceptions in str2dict
> > e3f61c9 2015-06-08 19:49:24 +0200 LBaaS v2: Fix listing pool members
> > 7eb3241 2015-05-26 16:26:51 +0200 Fix functional tests and tox 2.0 errors
> > d536020 2015-05-11 16:12:41 +0800 Add missing tenant_id to lbaas-v2 resources creation
> > df93b27 2015-05-11 06:08:16 +0000 Add InvalidIpForSubnetClient exception
> > ada1568 2015-03-04 16:28:19 +0530 "neutron help router-update" help info updated
> >
> > [ Unreleased changes in openstack/python-novaclient ]
> >
> > Changes in python-novaclient 2.28.0..5d50603
> > --------------------------------------------
> > d970de4 2015-09-03 21:53:53 +0000 Adds missing internationalization for help message
> >
> > [ Unreleased changes in openstack/python-openstackclient ]
> >
> > Changes in python-openstackclient 1.6.0..9210cac
> > ------------------------------------------------
> > d751a21 2015-09-01 15:51:58 -0700 Fix 'auhentication' spelling error/mistake
> > 5171a42 2015-08-28 09:32:05 -0600 Ignore flavor and image find errors on server show
> > f142516 2015-08-24 10:38:43 -0500 default OS_VOLUME_API_VERSION to v2
> > 59d12a6 2015-08-21 15:33:48 -0500 unwedge the osc gate
> > 8fb19bc 2015-08-21 16:07:58 +0000 additional functional tests for identity providers
> > 1966663 2015-08-19 16:46:55 -0400 Adds documentation  on weekly meeting
> > 1004e06 2015-08-19 11:01:26 -0600 Update the plugin docs for designate
> > 0f837df 2015-08-19 11:29:29 -0400 Added note to install openstackclient
> > 0f0d66f 2015-08-14 10:31:53 -0400 Running 'limits show' returns nothing
> > ac5e289 2015-08-13 20:21:57 +0000 Updated from global requirements
> > a6c8c8f 2015-08-13 09:31:11 +0000 Updated from global requirements
> > e908492 2015-08-13 02:19:22 +0000 Updated from global requirements
> >
> > [ Unreleased changes in openstack/python-swiftclient ]
> >
> > Changes in python-swiftclient 2.5.0..93666bb
> > --------------------------------------------
> > d5eb818 2015-09-02 13:06:13 +0100 Cleanup and improve tests for download
> > 3c02898 2015-08-31 22:03:26 +0100 Log and report trace on service operation fails
> > 4b62732 2015-08-27 00:01:22 +0800 Increase httplib._MAXHEADERS to 256.
> > 4b31008 2015-08-25 09:47:09 +0100 Stop Connection class modifying os_options parameter
> > 1789c26 2015-08-24 10:54:15 +0100 Add minimal working service token support.
> > 91d82af 2015-08-19 14:54:03 -0700 Drop flake8 ignores for already-passing tests
> > 38a82e9 2015-08-18 19:19:22 -0700 flake8 ignores same hacks as swift
> > 7c7f46a 2015-08-06 14:51:10 -0700 Update mock to get away from env markers
> > be0f1aa 2015-08-06 18:50:33 +0900 change deprecated assertEquals to assertEqual
> > a056f1b 2015-08-04 11:34:51 +0900 fix old style class definition(H238)
> > 847f135 2015-07-30 09:55:51 +0200 Block comment PEP8 fix.
> > 1c644d8 2015-07-30 09:48:00 +0200 Test auth params together with --help option.
> > 3cd1faa 2015-07-24 10:57:29 -0700 make Connection.get_auth set url and token attributes on self
> > a8c4df9 2015-07-20 20:44:51 +0100 Reduce memory usage for download/delete and add --no-shuffle option to st_download
> > 7442f0d 2015-07-17 16:03:39 +0900 swiftclient: add short options to help message
> > ef467dd 2015-06-28 07:40:26 +0530 Python 3: Replacing unicode with six.text_type for py3 compatibility
> >
> > [ Unreleased changes in openstack/python-troveclient ]
> >
> > Changes in python-troveclient 1.2.0..fcc0e73
> > --------------------------------------------
> > ec666ca 2015-09-04 04:19:26 +0000 Updated from global requirements
> > 55af7dd 2015-09-03 20:41:25 +0000 Use more appropriate exceptions for validation
> > d95ceff 2015-09-03 14:49:38 -0400 Redis Clustering Initial Implementation
> > 7ec45db 2015-08-28 00:02:45 +0000 Revert "Root enablement for Vertica clusters/instances"
> > 608ef3d 2015-08-21 09:39:09 +0000 Implements Datastore Registration API
> > 77960ee 2015-08-20 15:38:20 -0700 Root enablement for Vertica clusters/instances
> > 57bb542 2015-08-13 20:22:07 +0000 Updated from global requirements
> > d3a9f9e 2015-08-10 01:10:31 +0000 Updated from global requirements
> > 3e6c219 2015-07-31 23:54:26 -0400 Add a --marker argument to the backup commands.
> > f3f0cbd 2015-07-31 16:27:55 -0700 Fixed missing periods in positional arguments
> > fd81067 2015-07-22 04:59:56 +0000 Updated from global requirements
> > fbbc025 2015-07-15 01:37:52 +0000 Updated from global requirements
> > 2598641 2015-07-12 15:22:33 +0000 Updated from global requirements
> > 398bc8e 2015-07-12 00:56:25 -0400 Error message on cluster-create is misleading
> > 7f82bcc 2015-07-10 10:05:21 +0900 Make subcommands accept flavor name and cluster name
> > 29d0703 2015-06-29 10:13:12 +0900 Fix flavor-show problems with UUID
> > 0702365 2015-06-22 20:00:41 +0000 Updated from global requirements
> > 1d30a5f 2015-06-18 12:19:53 -0700 Allow a user to pass an insecure environment variable
> > dffbd6f 2015-06-16 19:23:22 +0000 Updated from global requirements
> > 61a756a 2015-06-08 10:51:40 +0000 Added more unit-tests to improve code coverage
> > 93f70ca 2015-06-04 20:05:14 +0000 Updated from global requirements
> > ad68fb2 2015-06-03 08:31:26 +0000 Fixes the non-existent exception NoTokenLookupException
> >
> > [ Unreleased changes in openstack/python-tuskarclient ]
> >
> > Changes in python-tuskarclient 0.1.18..edec875
> > ----------------------------------------------
> > edec875 2015-08-04 08:05:11 +0000 Switch to oslo_i18n
> > 92a1834 2015-07-23 18:40:47 +0200 Replace assert_called_once() calls
> > caa2b4d 2015-06-15 22:08:13 +0000 Updated from global requirements
> > 39ea687 2015-06-12 06:48:45 -0400 Fix output of "tuskar plan-list --verbose"
> > 00c3de2 2015-06-10 17:36:59 +0000 Enable SSL-related CLI opts
> > 09d73e0 2015-06-09 07:34:55 -0400 Calling tuskar role-list would output blank lines
> > 52dfbce 2015-06-03 16:44:14 +0200 Handle creation of plan with existing name
> > 24087b3 2015-05-29 14:01:35 +0200 Filter and format parameters for plan role in OSC
> > b3e37fc 2015-05-22 21:02:32 +0100 Bump hacking version
> > 47fdee0 2015-05-13 16:38:51 +0000 Updated from global requirements
> > af2597d 2015-05-12 12:29:42 +0100 Implement download Plan for the OpenStack client
> > 6228020 2015-05-12 08:44:53 +0100 Implement Plan remove Role for the OpenStack client
> > 14e273b 2015-05-11 11:48:26 +0100 Implement Plan add Role for the OpenStack client
> > 7a85951 2015-05-11 11:48:26 +0100 Implement show Plan for the OpenStack client
> >
> > [ Unreleased changes in openstack/python-zaqarclient ]
> >
> > Changes in python-zaqarclient 0.1.1..c140a58
> > --------------------------------------------
> > 2490ed4 2015-08-31 15:59:15 +0200 Send claims `limit` as a query param
> > baf6fa7 2015-08-31 15:29:36 +0200 v1.1 and v2 claims return document not list
> > 0d80728 2015-08-28 11:42:57 +0200 Make sure the API version is passed down
> > 407925c 2015-08-28 11:30:40 +0200 Make v1.1 the default CLI version
> > 895aad2 2015-08-27 23:26:17 +0000 Updated from global requirements
> > 8a81c44 2015-08-27 13:19:58 +0200 Updated from global requirements
> > 705ee75 2015-08-26 22:53:09 +0530 Implement CLI support for flavor
> > 32a847e 2015-07-16 09:25:58 +0530 Implements CLI for pool
> > 964443d 2015-06-26 14:13:26 +0200 Raises an error if the queue name is empty
> > e9a8d01 2015-06-25 17:05:00 +0200 Added support to pools and flavors
> > f46979b 2015-06-05 06:59:44 +0000 Removed deprecated 'shard' methods
> > 1a85f83 2015-04-21 16:07:46 +0000 Update README to work with release tools
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From doug at doughellmann.com  Fri Sep  4 19:22:16 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 04 Sep 2015 15:22:16 -0400
Subject: [openstack-dev] [ptl][release] flushing unreleased client
	library changes
In-Reply-To: <8ED5035B-D141-4A8B-9021-A3934ACCB97F@redhat.com>
References: <1441384328-sup-9901@lrrr.local>
 <8ED5035B-D141-4A8B-9021-A3934ACCB97F@redhat.com>
Message-ID: <1441394484-sup-878@lrrr.local>

Excerpts from Ihar Hrachyshka's message of 2015-09-04 21:07:47 +0200:
> > On 04 Sep 2015, at 18:39, Doug Hellmann <doug at doughellmann.com> wrote:
> > 
> > 
> > PTLs,
> > 
> > We have quite a few unreleased client changes pending, and it would
> > be good to go ahead and publish them so they can be tested as part
> > of the release candidate process. I have the full list of changes for
> > each project below, so please find yours and review them and then
> > propose a release request to the openstack/releases repository.
> > 
> > On a separate note, for next cycle we need to do a better job of
> > releasing these much much earlier (a few of these changes are at
> > least a month old). Remember that changes to libraries do not go
> > into the gate for consuming projects until that library is released.
> > If you have any suggestions for how to improve our tracking for
> > needed releases, let me know.
> > 
> > Doug
> > 
> > 
> > [ Unreleased changes in openstack/python-barbicanclient ]
> > 
> > Changes in python-barbicanclient 3.3.0..97cc46a
> 
> 
> +++
> 
> It may also block some efforts, e.g. we cannot move forward with fullstack tests for the QoS feature in neutron because those rely on neutronclient changes that are not yet released.

The next release of neutronclient has some breaking changes, and since
we're heading into a 3-day weekend here in the US, Kyle and I have agreed
to put that release off until Tuesday of next week. I'll do it early
US/Eastern time.

Doug

> 
> Ihar


From mestery at mestery.com  Fri Sep  4 20:11:10 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Fri, 4 Sep 2015 15:11:10 -0500
Subject: [openstack-dev] [neutron] pushing changes through the gate
In-Reply-To: <CAK+RQeYKgsBMg7Ray6wqHoPdJ7ODq-uy8N-Sp6iruq7oXvbTkA@mail.gmail.com>
References: <CAK+RQebMqAL-ZTn4z3Tafnpr=feA3i4qaeF5Uu4K03SO_fFF9g@mail.gmail.com>
 <CAK+RQeYKgsBMg7Ray6wqHoPdJ7ODq-uy8N-Sp6iruq7oXvbTkA@mail.gmail.com>
Message-ID: <CAL3VkVz1iRBhH-BT7XNhtFBR+zqSuDTH0N9ay5++eHEg-xbH=Q@mail.gmail.com>

I would like to second everything Armando has said below. Please, Neutron
core reviewers, follow the advice below for the rest of the Liberty cycle
as we work to merge patches targeted at Liberty.

I'd also like to thank Armando for jumping in and running things the past
week while I was on a (planned last spring) vacation with my family. He's
done a fabulous job, and his tireless effort this week shouldn't go
unnoticed.

Thanks for all your hard work Armando!

Kyle

On Thu, Sep 3, 2015 at 5:00 PM, Armando M. <armamig at gmail.com> wrote:

>
>
> On 2 September 2015 at 09:40, Armando M. <armamig at gmail.com> wrote:
>
>> Hi,
>>
>> By now you may have seen that I have taken out your change from the gate
>> and given it a -2: don't despair! I am only doing it to give priority to
>> the stuff that needs to merge in order to get [1] into a much better shape.
>>
>> If you have an important fix, please target it for RC1 or talk to me or
>> Doug (or Kyle when he's back from his time off), before putting it in the
>> gate queue. If everyone is not conscious of the other, we'll only end up
>> stepping on each other, and nothing moves forward.
>>
>> Let's give priority to gate stabilization fixes, and targeted stuff.
>>
>> Happy merging...not!
>>
>> Many thanks,
>> Armando
>>
>> [1] https://launchpad.net/neutron/+milestone/liberty-3
>> [2] https://launchpad.net/neutron/+milestone/liberty-rc1
>>
>
> Download files for the milestone are available in [1]. We still have a lot
> to do as there are outstanding bugs and blueprints that will have to be
> merged in the RC time window.
>
> Please be conscious of what you approve. Give priority to:
>
> - Targeted bugs and blueprints in [2];
> - Gate stability fixes or patches that aim at helping troubleshooting;
>
> In these busy times, please refrain from proposing/merging:
>
> - Silly rebase generators (e.g. spelling mistakes);
> - Cosmetic changes (e.g. minor doc strings/comment improvements);
> - Refactoring required while dealing with the above;
> - A dozen patches stacked on top of each other;
>
> Every rule has its own exception, so don't take this literally.
>
> If you are unsure, please reach out to me, Kyle or your Lieutenant and
> we'll target stuff that is worth targeting.
>
> As for the rest, I am gonna be merciless and -2 anything that I can find,
> in order to keep our gate lean and sane :)
>
> Thanks and happy hacking.
>
> A.
>
> [1] https://launchpad.net/neutron/+milestone/liberty-3
> [2] https://launchpad.net/neutron/+milestone/liberty-rc1
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/5adf182a/attachment.html>

From mriedem at linux.vnet.ibm.com  Fri Sep  4 20:13:17 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Fri, 4 Sep 2015 15:13:17 -0500
Subject: [openstack-dev] 9/4 state of the gate
Message-ID: <55E9FB5D.3080603@linux.vnet.ibm.com>

There are a few things blowing up in the last 24 hours so might as well 
make people aware.

1. gate-tempest-dsvm-large-ops was failing at a decent rate:

https://bugs.launchpad.net/nova/+bug/1491949

Turns out devstack was changed to run multihost=true and that doesn't 
work so well with the large-ops job that's creating hundreds of fake 
instances on a single node.  We reverted the devstack change so things 
should be good there now.


2. gate-tempest-dsvm-cells was regressed because nova has an in-tree 
blacklist regex of tests that don't work with cells and renaming some of 
those in tempest broke the regex.

https://bugs.launchpad.net/nova/+bug/1492255

There is a patch in the gate but it's getting bounced on #3.  Long-term 
we want to bring that blacklist regex down to 0 and instead use feature 
toggles in Tempest for the cells job, we just aren't there yet.  Help 
wanted...
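For anyone unfamiliar with how the cells job skips tests: the blacklist is a regex matched against full tempest test names, which is why a rename on the tempest side can silently regress the job. A minimal sketch of the failure mode (the test names and pattern below are hypothetical, not the actual nova blacklist):

```python
import re

# Hypothetical blacklist regex keyed on the old tempest test names.
BLACKLIST = re.compile(
    r"(tempest\.api\.compute\.test_attach_interfaces"
    r"|tempest\.api\.compute\.test_server_rescue)"
)

def is_blacklisted(test_name):
    """Return True if the test should be skipped for the cells job."""
    return bool(BLACKLIST.search(test_name))

# Matches while the tempest name is unchanged...
print(is_blacklisted(
    "tempest.api.compute.test_attach_interfaces.AttachTest.test_create"))
# ...but after a rename in tempest the pattern silently stops matching,
# so the known-broken test starts running (and failing) in the cells job.
print(is_blacklisted(
    "tempest.api.compute.test_interface_attach.AttachTest.test_create"))
```

This is also why moving to feature toggles in Tempest, as mentioned above, is more robust than a name-based blacklist.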


3. gate-tempest-dsvm-full-ceph is broken with glance-store 0.9.0:

https://bugs.launchpad.net/glance-store/+bug/1492432

It looks like the gate-tempest-dsvm-full-ceph-src-glance_store job was 
not actually testing trunk glance_store code because of a problem in the 
upper-constraints.txt file in the requirements repo - pip was capping 
glance_store at 0.8.0 in the src job so we actually haven't been testing 
latest glance-store.  dhellmann posted a fix:

https://review.openstack.org/#/c/220648/

But I'm assuming glance-store 0.9.0 is still busted. I've posted a 
change which I think might be related:

https://review.openstack.org/#/c/220646/

If ^ fixes the issue we'll need to blacklist 0.9.0 from global-requirements.
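As background on why the stale cap mattered: upper-constraints.txt pins packages with exact `===` specifiers, and pip treats the file as a hard ceiling, so a leftover `glance_store===0.8.0` line means the -src job installs the release instead of trunk. A rough illustration of that pinning logic (the file excerpt below is made up; the real file lives in the openstack/requirements repo):

```python
def parse_constraints(text):
    """Parse upper-constraints.txt style lines into {name: pinned_version}."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("===")
        pins[name] = version
    return pins

def allowed(pins, name, candidate):
    """With '===' pins, only the exact pinned version is installable."""
    return pins.get(name) == candidate

# Made-up excerpt for illustration.
constraints = """\
glance_store===0.8.0
oslo.config===2.3.0
"""
pins = parse_constraints(constraints)
print(allowed(pins, "glance_store", "0.8.0"))  # True: the capped release
print(allowed(pins, "glance_store", "0.9.0"))  # False: trunk/newer is excluded
```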

--

As always, it's fun to hit this stuff right before the weekend, 
especially a long US holiday weekend. :)

-- 

Thanks,

Matt Riedemann



From mestery at mestery.com  Fri Sep  4 20:13:22 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Fri, 4 Sep 2015 15:13:22 -0500
Subject: [openstack-dev] [neutron] FFE process for Liberty
Message-ID: <CAL3VkVz_UHQT3+QpMQsyhOzaeBOP_KYGc+mi5-EVVWECQ_xhQQ@mail.gmail.com>

Folks, Armando did a great job targeting things which missed Liberty-3
towards Liberty-RC1 here [1]. For the most part, I hope most of those will
land in the coming week or so, so lets work as a team to land those.

For things which are not already targeted there (or were targeted by
someone else, like [2]), the process to get these in as an FFE is to send
an email to the openstack-dev list and request this. We'll review these in
the team meeting next Tuesday morning and see what we can add. At this
point, things are mostly full, given we already have 15 BPs targeted. But
if something is small enough or only needs one more patch to land, we'll
concentrate on helping to move it along.

Thanks!
Kyle

[1] https://launchpad.net/neutron/+milestone/liberty-rc1
[2] https://blueprints.launchpad.net/neutron/+spec/ovs-tunnel-csum-option

From blak111 at gmail.com  Fri Sep  4 20:20:23 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Fri, 4 Sep 2015 13:20:23 -0700
Subject: [openstack-dev] cloud-init IPv6 support
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
Message-ID: <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>

Right, it depends on your perspective of who 'owns' the API. Is it
cloud-init or EC2?

At this point I would argue that cloud-init is in control because it would
be a large undertaking to switch all of the AMIs on Amazon to something
else. However, I know Sean disagrees with me on this point so I'll let him
reply here.


On Thu, Sep 3, 2015 at 4:03 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> So if we define the well known address and cloud-init adopts it, then
> Amazon should be inclined to adopt it too. :)
>
> Why always chase Amazon?
>
> Thanks,
> Kevin
> ________________________________________
> From: Steve Gordon [sgordon at redhat.com]
> Sent: Thursday, September 03, 2015 11:06 AM
> To: Kevin Benton
> Cc: OpenStack Development Mailing List (not for usage questions); PAUL
> CARVER
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
>
> ----- Original Message -----
> > From: "Kevin Benton" <blak111 at gmail.com>
> >
> > When we discussed this before on the neutron channel, I thought it was
> > because cloud-init doesn't support IPv6. We had wasted quite a bit of
> time
> > talking about adding support to our metadata service because I was under
> > the impression that cloud-init already did support IPv6.
> >
> > IIRC, the argument against adding IPv6 support to cloud-init was that it
> > might be incompatible with how AWS chooses to implement IPv6 metadata, so
> > AWS would require a fork or other incompatible alternative to cloud-init
> in
> > all of their images.
> >
> > Is that right?
>
> That's certainly my understanding of the status quo, I was enquiring
> primarily to check it was still accurate.
>
> -Steve
>
> > On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins <sean at coreitpro.com>
> wrote:
> >
> > > It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
> > > API defines transport level details about the API - and currently only
> > > defines a well known IPv4 link local address to connect to. No well
> known
> > > link local IPv6 address has been defined.
> > >
> > > I usually recommend config-drive for IPv6 enabled clouds due to this.
> > > --
> > > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
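For readers joining the thread: the "well known IPv4 link local address" being discussed is 169.254.169.254, and the open question is what an instance should contact in a v6-only network. A toy sketch of the decision cloud-init-style code faces (the fallback logic here is a simplified assumption, not cloud-init's actual code; no IPv6 metadata address has been standardized):

```python
# The only transport the EC2 metadata API defines today: an IPv4
# link-local address. There is no standardized IPv6 counterpart.
METADATA_IPV4 = "http://169.254.169.254/latest/meta-data/"

def metadata_source(has_ipv4, has_config_drive):
    """Pick a metadata source for an instance.

    In a v6-only network (has_ipv4=False) the IPv4 link-local endpoint
    is unreachable, which is why config-drive is the usual recommendation.
    """
    if has_ipv4:
        return METADATA_IPV4
    if has_config_drive:
        return "config-drive"
    # Nothing defined for pure IPv6 yet -- the gap this thread is about.
    return None

print(metadata_source(True, False))   # the IPv4 link-local URL
print(metadata_source(False, True))   # 'config-drive'
print(metadata_source(False, False))  # None
```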

From blak111 at gmail.com  Fri Sep  4 20:25:23 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Fri, 4 Sep 2015 13:25:23 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <55E9DA90.1090208@cisco.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <BLU437-SMTP76050D04789EE848C5B5F4D8680@phx.gbl>
 <CAO_F6JO4N4AevZN0n4K=fTyXpbkOi+T1Up_7ukYHodzTTPim3A@mail.gmail.com>
 <55E9DA90.1090208@cisco.com>
Message-ID: <CAO_F6JMVw2R0w=zsaMaUjLbYutuih=BThObmo1B8QDgH=FCD0A@mail.gmail.com>

Thanks for pointing that out. I like the DNS option too. That has to be
done carefully though to make sure it's not easy for an attacker to get the
name of the DNS entry that the instance tries to look up.

On Fri, Sep 4, 2015 at 10:53 AM, Henry Gessau <gessau at cisco.com> wrote:

> Some thought has been given to this. See
> https://bugs.launchpad.net/neutron/+bug/1460177
>
> I like the third option, a well-known name using DNS.
>
>
> On Thu, Sep 03, 2015, Kevin Benton <blak111 at gmail.com> <blak111 at gmail.com>
> wrote:
>
> I think that's different than what is being asked here. That patch appears
> to just add IPv6 interface information if it's available in the metadata.
> This thread is about getting cloud-init to connect to an IPv6 address
> instead of 169.254.169.254 for pure IPv6 environments.
>
> On Thu, Sep 3, 2015 at 11:41 AM, Joshua Harlow <harlowja at outlook.com>
> wrote:
>
>> I'm pretty sure this got implemented :)
>>
>> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1042
>> and https://bugs.launchpad.net/cloud-init/+bug/1391695
>>
>> That's the RHEL support, since cloud-init translates a ubuntu style
>> networking style the ubuntu/debian style format should also work.
>>
>>
>> Steve Gordon wrote:
>>
>>> ----- Original Message -----
>>>
>>>> From: "Kevin Benton"<blak111 at gmail.com>
>>>>
>>>> When we discussed this before on the neutron channel, I thought it was
>>>> because cloud-init doesn't support IPv6. We had wasted quite a bit of
>>>> time
>>>> talking about adding support to our metadata service because I was under
>>>> the impression that cloud-init already did support IPv6.
>>>>
>>>> IIRC, the argument against adding IPv6 support to cloud-init was that it
>>>> might be incompatible with how AWS chooses to implement IPv6 metadata,
>>>> so
>>>> AWS would require a fork or other incompatible alternative to
>>>> cloud-init in
>>>> all of their images.
>>>>
>>>> Is that right?
>>>>
>>>
>>> That's certainly my understanding of the status quo, I was enquiring
>>> primarily to check it was still accurate.
>>>
>>> -Steve
>>>
>>> On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins< <sean at coreitpro.com>
>>>> sean at coreitpro.com>  wrote:
>>>>
>>>> It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
>>>>> API defines transport level details about the API - and currently only
>>>>> defines a well known IPv4 link local address to connect to. No well
>>>>> known
>>>>> link local IPv6 address has been defined.
>>>>>
>>>>> I usually recommend config-drive for IPv6 enabled clouds due to this.
>>>>> --
>>>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>>
>>>
>>>
>>
>
>
> --
> Kevin Benton
>
>
>
>
>
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/8fdd4f8e/attachment.html>

From ben at swartzlander.org  Fri Sep  4 20:54:13 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Fri, 4 Sep 2015 16:54:13 -0400
Subject: [openstack-dev] [ptl][release] flushing unreleased client
 library changes
In-Reply-To: <1441394380-sup-7270@lrrr.local>
References: <1441384328-sup-9901@lrrr.local>
 <55E9E81E.9060401@swartzlander.org> <1441394380-sup-7270@lrrr.local>
Message-ID: <55EA04F5.2020006@swartzlander.org>



On 09/04/2015 03:21 PM, Doug Hellmann wrote:
> Excerpts from Ben Swartzlander's message of 2015-09-04 14:51:10 -0400:
>> On 09/04/2015 12:39 PM, Doug Hellmann wrote:
>>> PTLs,
>>>
>>> We have quite a few unreleased client changes pending, and it would
>>> be good to go ahead and publish them so they can be tested as part
>>> of the release candidate process. I have the full list of changes for
>>> each project below, so please find yours and review them and then
>>> propose a release request to the openstack/releases repository.
>> Manila had multiple gate-breaking bugs this week and I've extended our
>> feature freeze to next Tuesday to compensate. As a result our L-3
>> milestone release is not really representative of Liberty and we'd
>> rather not do a client release until we reach RC1.
> Keep in mind that the unreleased changes are not being used to test
> anything at all in the gate, so there's an integration "penalty" for
> delaying releases. You can have as many releases as you want, and we can
> create the stable branch from the last useful release any time after it
> is created. So, I still recommend releasing early and often unless you
> anticipate making API or CLI breaking changes between now and RC1.

There is currently an API-breaking change that needs to be fixed. It 
will be fixed before the RC so that Kilo<->Liberty upgrades go smoothly, 
but the L-3 milestone is broken with regard to forward and backward 
compatibility.

https://bugs.launchpad.net/manila/+bug/1488624

I would actually want to release a milestone between L-3 and RC1, after 
we get to the real Manila FF date, but since that's not in line with the 
official release process I'm okay waiting for RC1. Since there is no 
official process for client releases (that I know about), I'd rather just 
wait to do the client until RC1. We'll plan for an early RC1 by 
aggressively driving the bugs to zero instead of putting time into 
testing the L-3 milestone.

-Ben



From pcarver at paulcarver.us  Fri Sep  4 22:33:01 2015
From: pcarver at paulcarver.us (Paul Carver)
Date: Fri, 4 Sep 2015 18:33:01 -0400
Subject: [openstack-dev] [docs][networking-sfc]
Message-ID: <55EA1C1D.1080508@paulcarver.us>


Can someone from the Docs team take a look at why there isn't a docs URL 
for the networking-sfc repo?

Compare [1] vs [2]
The first URL appears to be a rendering of the doc/source/index.rst 
from the Neutron Git repo, but the second one gives a Not Found even 
though there is a doc/source/index.rst in the networking-sfc repo.

If I've guessed the wrong URL, please let me know. I just guessed that 
replacing the name of the neutron repo in the URL with the name of the 
networking-sfc repo should have given me the right URL.

Compare [3] vs [4]

Both of these exist and as far as I can tell [1] is rendered from [3] 
and I would just naturally expect [2] to be rendered from [4] but it isn't.

[1] http://docs.openstack.org/developer/neutron/
[2] http://docs.openstack.org/developer/networking-sfc/
[3] 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/index.rst
[4] 
https://git.openstack.org/cgit/openstack/networking-sfc/tree/doc/source/index.rst


From dolph.mathews at gmail.com  Fri Sep  4 23:40:56 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Fri, 4 Sep 2015 18:40:56 -0500
Subject: [openstack-dev] FFE Request for moving inherited assignment to
 core in Keystone
In-Reply-To: <EF5BE73F-25BA-45E9-83D4-C6E0A1B67423@linux.vnet.ibm.com>
References: <EF5BE73F-25BA-45E9-83D4-C6E0A1B67423@linux.vnet.ibm.com>
Message-ID: <CAC=h7gVqtGN6GZ=Gn2xck3ZLP2mswgYO-i_8urSGC2tRU1T5_w@mail.gmail.com>

-1

Unless there's something more to this, I don't think it's worth any sort of
risk to stability just to shuffle API implementations around; that can
wait for Mitaka.

On Fri, Sep 4, 2015 at 12:28 PM, Henry Nash <henryn at linux.vnet.ibm.com>
wrote:

> Keystone has, for a number of releases, supported the concept of
> inherited role assignments via the OS-INHERIT extension. At the Keystone
> mid-cycle we agreed that moving this to core was a good target for Liberty,
> but this was held up by the need for the data-driven testing to be in place (
> https://review.openstack.org/#/c/190996/).
>
> Inherited roles are becoming an integral part of Keystone, especially with
> the move to hierarchical projects (which is core already) - and so moving
> inheritance to core makes a lot of sense.  At the same time as the move, we
> want to tidy up the API (https://review.openstack.org/#/c/200434/
> <https://review.openstack.org/#/c/187045/>) to be more consistent with
> project hierarchies (before the old API semantics get too widely used),
> although we will continue to support the old API via the extension for a
> number of cycles.
>
> I would like to request an FFE for the move of inheritance to core.
>
> Henry
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/06cd892d/attachment.html>

From Kevin.Fox at pnnl.gov  Fri Sep  4 23:55:24 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 4 Sep 2015 23:55:24 +0000
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <CAO_F6JMVw2R0w=zsaMaUjLbYutuih=BThObmo1B8QDgH=FCD0A@mail.gmail.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <BLU437-SMTP76050D04789EE848C5B5F4D8680@phx.gbl>
 <CAO_F6JO4N4AevZN0n4K=fTyXpbkOi+T1Up_7ukYHodzTTPim3A@mail.gmail.com>
 <55E9DA90.1090208@cisco.com>,
 <CAO_F6JMVw2R0w=zsaMaUjLbYutuih=BThObmo1B8QDgH=FCD0A@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F163B@EX10MBOX03.pnnl.gov>

Adding a DNS server adds more complexity to the mix. You need to support both a DNS server and a metadata server at that point.

________________________________
From: Kevin Benton [blak111 at gmail.com]
Sent: Friday, September 04, 2015 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

Thanks for pointing that out. I like the DNS option too. That has to be done carefully though to make sure it's not easy for an attacker to get the name of the DNS entry that the instance tries to look up.

On Fri, Sep 4, 2015 at 10:53 AM, Henry Gessau <gessau at cisco.com<mailto:gessau at cisco.com>> wrote:
Some thought has been given to this. See
https://bugs.launchpad.net/neutron/+bug/1460177

I like the third option, a well-known name using DNS.


On Thu, Sep 03, 2015, Kevin Benton <blak111 at gmail.com><mailto:blak111 at gmail.com> wrote:
I think that's different than what is being asked here. That patch appears to just add IPv6 interface information if it's available in the metadata. This thread is about getting cloud-init to connect to an IPv6 address instead of 169.254.169.254 for pure IPv6 environments.
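The distinction Kevin draws here can be made concrete with a short sketch of how a cloud-init-like client might order its metadata endpoint candidates. This is illustrative only: the IPv4 address is the well-known EC2 link-local one from the thread, while the IPv6 address is a made-up stand-in, since (as Sean notes below) no well-known IPv6 link-local metadata address has been defined.

```python
# Illustrative sketch only. WELL_KNOWN_IPV4 comes from the thread;
# HYPOTHETICAL_IPV6 is invented for the example and is NOT a standard.
WELL_KNOWN_IPV4 = "169.254.169.254"
HYPOTHETICAL_IPV6 = "fe80::a9fe:a9fe"

def metadata_urls(prefer_ipv6=False):
    """Return candidate metadata base URLs in probe order."""
    v4 = "http://%s/latest/meta-data/" % WELL_KNOWN_IPV4
    # IPv6 literals must be bracketed in URLs (RFC 3986).
    v6 = "http://[%s]/latest/meta-data/" % HYPOTHETICAL_IPV6
    return [v6, v4] if prefer_ipv6 else [v4, v6]
```

Until a well-known IPv6 address is standardized, the v6 candidate has nothing authoritative to point at, which is exactly the gap being discussed.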

On Thu, Sep 3, 2015 at 11:41 AM, Joshua Harlow <harlowja at outlook.com<mailto:harlowja at outlook.com>> wrote:
I'm pretty sure this got implemented :)

http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/revision/1042<http://bazaar.launchpad.net/%7Ecloud-init-dev/cloud-init/trunk/revision/1042> and https://bugs.launchpad.net/cloud-init/+bug/1391695

That's the RHEL support; since cloud-init translates the Ubuntu-style networking format, the Ubuntu/Debian-style format should also work.


Steve Gordon wrote:
----- Original Message -----
From: "Kevin Benton"<blak111 at gmail.com<mailto:blak111 at gmail.com>>

When we discussed this before on the neutron channel, I thought it was
because cloud-init doesn't support IPv6. We had wasted quite a bit of time
talking about adding support to our metadata service because I was under
the impression that cloud-init already did support IPv6.

IIRC, the argument against adding IPv6 support to cloud-init was that it
might be incompatible with how AWS chooses to implement IPv6 metadata, so
AWS would require a fork or other incompatible alternative to cloud-init in
all of their images.

Is that right?

That's certainly my understanding of the status quo; I was enquiring primarily to check it was still accurate.

-Steve

On Thu, Sep 3, 2015 at 7:30 AM, Sean M. Collins <sean at coreitpro.com> wrote:

It's not a case of cloud-init supporting IPv6 - The Amazon EC2 metadata
API defines transport level details about the API - and currently only
defines a well known IPv4 link local address to connect to. No well known
link local IPv6 address has been defined.

I usually recommend config-drive for IPv6 enabled clouds due to this.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.






--
Kevin Benton










--
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/c5164f16/attachment.html>

From armamig at gmail.com  Sat Sep  5 00:36:06 2015
From: armamig at gmail.com (Armando M.)
Date: Fri, 4 Sep 2015 17:36:06 -0700
Subject: [openstack-dev] [docs][networking-sfc]
In-Reply-To: <55EA1C1D.1080508@paulcarver.us>
References: <55EA1C1D.1080508@paulcarver.us>
Message-ID: <CAK+RQeYb0_CYzYj+5-baP4K04Uhn+PokMj1gJoLaBB9YL=TifA@mail.gmail.com>

On 4 September 2015 at 15:33, Paul Carver <pcarver at paulcarver.us> wrote:

>
> Can someone from the Docs team take a look at why there isn't a docs URL
> for the networking-sfc repo?
>

Everything in OpenStack is code driven, and doc publishing happens through
code review as much as anything else. [1] should provide pointers, but
another great source of recipes and clues is [2]. You'll find
your answer in there somewhere.

HTH
Armando

[1]
http://docs.openstack.org/infra/manual/creators.html#add-basic-jenkins-jobs
[2] https://github.com/openstack-infra/project-config/commits/master


>
> Compare [1] vs [2]
> The first URL appears to be a rendering of the docs/source/index.rst from
> the Neutron Git repo, but the second one gives a Not Found even though
> there is a docs/source/index.rst in the networking-sfc repo.
>
> If I've guessed the wrong URL, please let me know. I just guessed that
> replacing the name of the neutron repo in the URL with the name of the
> networking-sfc repo should have given me the right URL.
>
> Compare [3] vs [4]
>
> Both of these exist and as far as I can tell [1] is rendered from [3] and
> I would just naturally expect [2] to be rendered from [4] but it isn't.
>
> [1] http://docs.openstack.org/developer/neutron/
> [2] http://docs.openstack.org/developer/networking-sfc/
> [3]
> https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/index.rst
> [4]
> https://git.openstack.org/cgit/openstack/networking-sfc/tree/doc/source/index.rst
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150904/76125264/attachment.html>

From mriedem at linux.vnet.ibm.com  Sat Sep  5 01:43:04 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Fri, 4 Sep 2015 20:43:04 -0500
Subject: [openstack-dev] 9/4 state of the gate
In-Reply-To: <55E9FB5D.3080603@linux.vnet.ibm.com>
References: <55E9FB5D.3080603@linux.vnet.ibm.com>
Message-ID: <55EA48A8.6060505@linux.vnet.ibm.com>



On 9/4/2015 3:13 PM, Matt Riedemann wrote:
> There are a few things blowing up in the last 24 hours so might as well
> make people aware.
>
> 1. gate-tempest-dsvm-large-ops was failing at a decent rate:
>
> https://bugs.launchpad.net/nova/+bug/1491949
>
> Turns out devstack was changed to run multihost=true and that doesn't
> work so well with the large-ops job that's creating hundreds of fake
> instances on a single node.  We reverted the devstack change so things
> should be good there now.
>
>
> 2. gate-tempest-dsvm-cells was regressed because nova has an in-tree
> blacklist regex of tests that don't work with cells and renaming some of
> those in tempest broke the regex.
>
> https://bugs.launchpad.net/nova/+bug/1492255
>
> There is a patch in the gate but it's getting bounced on #3.  Long-term
> we want to bring that blacklist regex down to 0 and instead use feature
> toggles in Tempest for the cells job, we just aren't there yet.  Help
> wanted...
>
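The fragility in item 2 is easy to see in miniature: a blacklist keyed on test names by regex breaks silently the moment the names change upstream. A toy illustration (the test names here are invented for the example, not the real cells blacklist):

```python
import re

# Hypothetical name-based skip list, in the style of an in-tree blacklist.
BLACKLIST = [r".*test_list_servers_by_admin.*"]

def is_blacklisted(test_name):
    """True if any blacklist pattern matches the given test name."""
    return any(re.match(p, test_name) for p in BLACKLIST)
```

After a rename in tempest, the pattern stops matching, so the test silently starts running in the job it was meant to be excluded from. Feature toggles, as suggested above, avoid coupling to names entirely.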
>
> 3. gate-tempest-dsvm-full-ceph is broken with glance-store 0.9.0:
>
> https://bugs.launchpad.net/glance-store/+bug/1492432
>
> It looks like the gate-tempest-dsvm-full-ceph-src-glance_store job was
> not actually testing trunk glance_store code because of a problem in the
> upper-constraints.txt file in the requirements repo - pip was capping
> glance_store at 0.8.0 in the src job so we actually haven't been testing
> latest glance-store.  dhellmann posted a fix:
>
> https://review.openstack.org/#/c/220648/
>
> But I'm assuming glance-store 0.9.0 is still busted. I've posted a
> change which I think might be related:
>
> https://review.openstack.org/#/c/220646/
>
> If ^ fixes the issue we'll need to blacklist 0.9.0 from
> global-requirements.
>
> --
>
> As always, it's fun to hit this stuff right before the weekend,
> especially a long US holiday weekend. :)
>

I haven't seen the elastic-recheck bot comment on any changes in a while 
either, so I'm wondering if it's not running.

Also, here is another new(ish) gate bug I'm just seeing tonight (it 
bumped a fix for #3 above):

https://bugs.launchpad.net/keystonemiddleware/+bug/1492508
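For reference, the upper-constraints problem in item 3 above boils down to pip honoring a pin like `glance_store===0.8.0` even when a -src job intends to test trunk. A toy sketch of how such a pin hides newer code (the file content and helper names here are invented for the example):

```python
def parse_constraints(text):
    """Parse upper-constraints.txt style 'name===version' pins."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("===")
        pins[name] = version
    return pins

def is_capped(pins, package, candidate):
    """True if the pin would prevent installing `candidate`."""
    pinned = pins.get(package)
    return pinned is not None and pinned != candidate
```

With `glance_store===0.8.0` pinned, a job that checks out trunk (0.9.0) still installs 0.8.0, so trunk is never actually exercised, which is exactly the failure mode described above.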

-- 

Thanks,

Matt Riedemann



From akirayoshiyama at gmail.com  Sat Sep  5 02:10:24 2015
From: akirayoshiyama at gmail.com (Akira Yoshiyama)
Date: Sat, 5 Sep 2015 11:10:24 +0900
Subject: [openstack-dev] Tracing a request (NOVA)
In-Reply-To: <CANovBq5N+J6wMQQ3diEuAAaKytSb+aYYp7w4TnUqR9M6ipTg4w@mail.gmail.com>
References: <CANovBq5N+J6wMQQ3diEuAAaKytSb+aYYp7w4TnUqR9M6ipTg4w@mail.gmail.com>
Message-ID: <CAOmk7ZtXFKDJYz+rJJ9zGoz+-X_Sj9mBqG5zH0HgDLMBkE=Tvg@mail.gmail.com>

You may like below:
https://gist.github.com/yosshy/5da0c2d6af1b446088bc

Akira

> Hi,
>
> I'm trying to trace a request made for an instance and looking at the flow
> in the code.
> I'm just trying to understand better how the request goes from the
> dashboard to the nova-api , to the other internal components of nova and to
> the scheduler and back with a suitable host and launching of the instance.
>
> I just want to understand how the request goes from the API call to
> the nova-api and so on after that.
> I have understood the nova-scheduler and in that, the filter_scheduler
> receives something called request_spec that is the specifications of the
> request that is made, and I want to see where this comes from. I was not
> very successful in reverse engineering this.
>
> I could use some help as I want to implement a scheduling algorithm of my
> own but for that I need to understand how and where the requests come in
> and how the flow works.
>
> If someone could guide me as to where I can find help or point me in some
> direction, it would be of great help.
>
> --
> Dhvanan Shah
>
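For intuition, a filter in the filter-scheduler model is essentially a predicate over (host, request_spec): each filter prunes the host list, then weighers rank what remains. The sketch below mimics that shape without depending on Nova; the class and field names are illustrative, not Nova's actual BaseHostFilter API.

```python
class RamFilter:
    """Toy analogue of a filter-scheduler host filter: keep hosts
    with enough free RAM for the requested flavor."""

    def host_passes(self, host, request_spec):
        wanted = request_spec["flavor"]["memory_mb"]
        return host["free_ram_mb"] >= wanted

def filter_hosts(filters, hosts, request_spec):
    """Apply each filter in turn, as the filter scheduler does."""
    for f in filters:
        hosts = [h for h in hosts if f.host_passes(h, request_spec)]
    return hosts
```

A custom scheduling algorithm largely amounts to writing such a predicate (and optionally a weigher) and configuring the scheduler to load it; request_spec is built from the API request as it passes from nova-api to the scheduler.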


-- 
????? <akirayoshiyama at gmail.com>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150905/8b78a59c/attachment.html>

From ayoung at redhat.com  Sat Sep  5 02:43:08 2015
From: ayoung at redhat.com (Adam Young)
Date: Fri, 4 Sep 2015 22:43:08 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55E9A4F2.5030809@inaugust.com>
References: <55E9A4F2.5030809@inaugust.com>
Message-ID: <55EA56BC.70408@redhat.com>

On 09/04/2015 10:04 AM, Monty Taylor wrote:
> mordred at camelot:~$ neutron net-create test-net-mt
> Policy doesn't allow create_network to be performed.
>
> Thank you neutron. Excellent job.
>
> Here's what that looks like at the REST layer:
>
> DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015 
> 13:55:47 GMT connection: close content-type: application/json; 
> charset=UTF-8 content-length: 130 x-openstack-request-id: 
> req-ba05b555-82f4-4aaf-91b2-bae37916498d
> RESP BODY: {"NeutronError": {"message": "Policy doesn't allow 
> create_network to be performed.", "type": "PolicyNotAuthorized", 
> "detail": ""}}
>
> As a user, I am not confused. I do not think that maybe I made a 
> mistake with my credentials. The cloud in question simply does not 
> allow user creation of networks. I'm fine with that. (as a user, that 
> might make this cloud unusable to me - but that's a choice I can now 
> make with solid information easily. Turns out, I don't need to create 
> networks for my application, so this actually makes it easier for me 
> personally)
>
> In any case- rather than complaining and being a whiny brat about 
> something that annoys me - I thought I'd say something nice about 
> something that the neutron team has done that especially pleases me.

Then let me hijack:

Policy is still broken.  We need the pieces of Dynamic policy.

I am going to call for a cross project policy discussion for the 
upcoming summit.  Please, please, please all the projects attend. The 
operators have made it clear they need better policy support.


> I would love it if this became the experience across the board in 
> OpenStack for times when a feature of the API is disabled by local 
> policy. It's possible it already is and I just haven't directly 
> experienced it - so please don't take this as a backhanded 
> condemnation of anyone else.
>
> Monty
>



From tim.kelsey at hp.com  Sat Sep  5 16:46:08 2015
From: tim.kelsey at hp.com (Kelsey, Timothy John)
Date: Sat, 5 Sep 2015 16:46:08 +0000
Subject: [openstack-dev] [security][bandit] Looking to the future
Message-ID: <D210C829.12F1F%tim.kelsey@hp.com>

Hey Bandit Folks,
Thanks for all the great work done during the recent security mid cycle, we have made some really solid progress on key areas like documentation, testing, and code quality. It was also great to see people in person! This email follows on from various conversations with the hope of keeping our momentum and planning out our next steps.

Key Focus Areas

Documentation
We made good progress here getting our docs layout and initial content down. The next steps now are to keep pushing to bring our docs up to scratch across the board, covering all testing and report plugins we have available today. As cores, I would suggest we don't accept any new tests without accompanying documentation. Work will now be done to integrate our sphinx build with infra to get our stuff available online, much in the same way as Anchor has done here: http://docs.openstack.org/developer/anchor/

Testing
We had a strong push to add unit tests to supplement our existing functional tests. Going forward we should continue to focus on bringing our coverage up and fixing bugs as we go. Cores should be mindful of coverage when reviewing new patches, and significant blocks of new work should of course be accompanied by unit tests. To help with this, coverage reporting will be added to the current tox output report.

Code Quality
Bandit is growing fast; new and interesting stuff is being added all the time, but it's worth keeping in mind that there is a lot of code that was hastily written for the original prototype and still persists in the code base today. This is a source of potential bugs and unnecessary complexity; any effort directed at improving this situation would be a good thing. Refactoring is also a perfect opportunity to bring up our test coverage.

Releases
Up to this point bandit has had a fairly ad-hoc release schedule, with new releases being pushed once a significant number of new features/bug fixes have accumulated. Going forward we should review this strategy and determine if it is still appropriate. We should also consider how our releases could best tie into the overarching OpenStack release cadence. I would very much like to hear people's thoughts on this matter.

Anyway, please let me know what people think of this, or anything else that I haven't covered here.

Thanks again for all your hard work

--
Tim Kelsey
Cloud Security Engineer
HP Helion


From malini.k.bhandaru at intel.com  Sat Sep  5 20:40:42 2015
From: malini.k.bhandaru at intel.com (Bhandaru, Malini K)
Date: Sat, 5 Sep 2015 20:40:42 +0000
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <55E9CA69.9030003@gmail.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com> <55E8784A.4060809@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>
 <55E9CA69.9030003@gmail.com>
Message-ID: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33A963@fmsmsx117.amr.corp.intel.com>

Thank you Nikhil and Glance team on the FFE consideration.
We are committed to making the revisions per the suggestions and will separately seek help from Flavio, Sabari, and Harsh.
Regards
Malini, Kent, and Jakub 


-----Original Message-----
From: Nikhil Komawar [mailto:nik.komawar at gmail.com] 
Sent: Friday, September 04, 2015 9:44 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

Hi Malini et.al.,

We had a sync up earlier today on this topic and a few items were discussed including new comments on the spec and existing code proposal.
You can find the logs of the conversation here [1].

There are 3 main outcomes of the discussion:
1. We hope to get a commitment on the feature (spec and the code) that the comments would be addressed and the code would be ready by Sept 18th, after which RC1 is planned to be cut [2]. Our hope is that the spec is merged well before then and the implementation is at the very least ready, if not merged. The comments on the spec and merge proposal are currently specific to implementation details, so we were positive on this front.
2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has newer patch sets with major concerns addressed.
3. We cannot commit to granting a backport for this feature, so we ask the implementors to consider using the pluggability and modularity of the taskflow library. You may consult developers who have already worked on adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can then use those scripts and put them back in their Liberty deployments even if it's not in the standard tarball.

Please let me know if you have more questions.

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
[2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule

On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
> Thank you Nikhil and Brian!
>
> -----Original Message-----
> From: Nikhil Komawar [mailto:nik.komawar at gmail.com]
> Sent: Thursday, September 03, 2015 9:42 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
> proposal
>
> We agreed to hold off on granting it a FFE until tomorrow.
>
> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast your vote.
>
> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>> I added an agenda item for this for today's Glance meeting:
>>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>
>> I'd prefer to hold my vote until after the meeting.
>>
>> cheers,
>> brian
>>
>>
>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>>
>>> Malini, all,
>>>
>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>> and implementation.
>>>
>>> I'm more than happy to realign my stand after we have updated spec 
>>> and a) it's agreed to be the approach as of now and b) we can 
>>> evaluate how much work the implementation needs to meet with the revisited spec.
>>>
>>> If we end up to the unfortunate situation that this functionality 
>>> does not merge in time for Liberty, I'm confident that this is one 
>>> of the first things in Mitaka. I really don't think there is too 
>>> much to go, we just might run out of time.
>>>
>>> Thanks for your patience and endless effort to get this done.
>>>
>>> Best,
>>> Erno
>>>
>>>> -----Original Message-----
>>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>>> Sent: Thursday, September 03, 2015 10:10 AM
>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>>> usage
>>>> questions)
>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>> proposal
>>>>
>>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>>> addresses the comments. We would very much appreciate a +1 on the 
>>>> FFE.
>>>>
>>>> Regards
>>>> Malini
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>>> Sent: Thursday, September 03, 2015 1:52 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>> proposal
>>>>
>>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>>> Hi,
>>>>>
>>>>> I wanted to propose 'Single disk image OVA import' [1] feature 
>>>>> proposal for exception. This looks like a decently safe proposal 
>>>>> that should be able to adjust in the extended time period of 
>>>>> Liberty. It has been discussed at the Vancouver summit during a 
>>>>> work session and the proposal has been trimmed down as per the 
>>>>> suggestions then; has been overall accepted by those present 
>>>>> during the discussions (barring a few changes needed on the spec itself).
>>>>> Being an addition to the already existing import task, it doesn't
>>>>> involve an API change or changes to any of the core Image functionality as of now.
>>>>>
>>>>> Please give your vote: +1 or -1 .
>>>>>
>>>>> [1] https://review.openstack.org/#/c/194868/
>>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>>> Unfortunately, I think there are too many open questions in the 
>>>> spec right now to make this FFE worthy.
>>>>
>>>> Could those questions be answered before the EOW?
>>>>
>>>> With those questions answered, we'll be able to provide a more
>>>> realistic vote.
>>>>
>>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>>> and the likelihood of it addressing the concerns/comments in time.
>>>>
>>>> For now, it's a -1 from me.
>>>>
>>>> Thanks all for working on this; this has been a long-requested
>>>> format to have in Glance.
>>>> Flavio
>>>>
>>>> [0] https://review.openstack.org/#/c/214810/
>>>>
>>>>
>>>> --
>>>> @flaper87
>>>> Flavio Percoco
>> _____________________________________________________________________
>> _ ____ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From dzimine at stackstorm.com  Sat Sep  5 21:14:09 2015
From: dzimine at stackstorm.com (Dmitri Zimine)
Date: Sat, 5 Sep 2015 14:14:09 -0700
Subject: [openstack-dev] [mistral][yaql] Addressing task result using
	YAQL function
In-Reply-To: <FA465E58-4611-44B4-9E5D-F353C778D5FF@mirantis.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
 <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>
 <FA465E58-4611-44B4-9E5D-F353C778D5FF@mirantis.com>
Message-ID: <869980B9-AB93-4EC1-A74B-76F4D9DDC326@stackstorm.com>

Yes, I meant to ask for consistency in referencing task results. So it's task(task_name) regardless of where it is used. 

One use case in favor of this is tooling: I refactor a workflow with an automated tool that wants to rename the task EVERYWHERE. You guys know well by now that renaming a task is a source of too many frustrating errors :)

What do others think? 

DZ. 
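For illustration, the two access forms being discussed (quoted below) might look roughly like this in a workflow; the task and workflow names here are invented and this is only a sketch of the proposal, not final syntax:

```yaml
# Hypothetical sketch of the proposed task() function; names are made up.
tasks:
  get_volumes_by_names:
    with-items: name in <% $.vol_names %>
    workflow: get_volume_by_name name=<% $.name %>
    publish:
      # short form: result of the task currently being published from
      volumes: <% task() %>

  report:
    # explicit form: result of a named task, usable from anywhere
    action: std.echo output=<% task(get_volumes_by_names) %>
```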

On Sep 3, 2015, at 4:23 AM, Renat Akhmerov <rakhmerov at mirantis.com> wrote:

> 
>> On 02 Sep 2015, at 21:01, Dmitri Zimine <dzimine at stackstorm.com> wrote:
>> 
>> Agree, 
>> 
>> with one detail: make it explicit -  task(task_name). 
> 
> So do you suggest we just replace res() with task() and it looks like
> 
> task() - get task result when we are in 'publish'
> task(task_name) - get task result from anywhere
> 
> ?
> 
> Is it correct that you mean we must always specify a task name? The reason I'd like to have a simplified form (w/o a task name) is that in a lot of workflows we have to repeat the task name in publish, which just looks too verbose to me, especially in the case of very long task names.
> 
> Consider something like this:
> 
> tasks:
>   get_volumes_by_names:
>     with-items: name in <% $.vol_names %>
>     workflow: get_volume_by_name name=<% $.name %>
>     publish:
>       volumes: <% $.get_volumes_by_names %>
> 
> So in publish we have to repeat a task name, there's no other way now. I'd like to soften this requirement, but if you still want to use task names you'll be able to.
> 
> 
>> res - we often see folks confused about the result of what (action, task, workflow) although we cleaned up our lingo: action-output, task-result, workflow-output. But it is still worth being explicit.
>> 
>> And the full result is thought of as the root context $.
>> 
>> Publishing to global context may be ok for now, IMO.
> 
> Not sure what you meant by "Publishing to global context". Can you clarify, please?
> 
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150905/67b6ef93/attachment.html>

From john.griffith8 at gmail.com  Sat Sep  5 22:02:25 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Sat, 5 Sep 2015 16:02:25 -0600
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55E9A4F2.5030809@inaugust.com>
References: <55E9A4F2.5030809@inaugust.com>
Message-ID: <CAPWkaSWFha9Ro5JuoeMPJmH9QiVaaa96eLuoX64UNZnriZdBmQ@mail.gmail.com>

On Fri, Sep 4, 2015 at 8:04 AM, Monty Taylor <mordred at inaugust.com> wrote:

> mordred at camelot:~$ neutron net-create test-net-mt
> Policy doesn't allow create_network to be performed.
>
> Thank you neutron. Excellent job.
>
> Here's what that looks like at the REST layer:
>
> DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015 13:55:47
> GMT connection: close content-type: application/json; charset=UTF-8
> content-length: 130 x-openstack-request-id:
> req-ba05b555-82f4-4aaf-91b2-bae37916498d
> RESP BODY: {"NeutronError": {"message": "Policy doesn't allow
> create_network to be performed.", "type": "PolicyNotAuthorized", "detail":
> ""}}
>
> As a user, I am not confused. I do not think that maybe I made a mistake
> with my credentials. The cloud in question simply does not allow user
> creation of networks. I'm fine with that. (as a user, that might make this
> cloud unusable to me - but that's a choice I can now make with solid
> information easily. Turns out, I don't need to create networks for my
> application, so this actually makes it easier for me personally)
>
> In any case- rather than complaining and being a whiny brat about
> something that annoys me - I thought I'd say something nice about something
> that the neutron team has done that especially pleases me. I would love it
> if this became the experience across the board in OpenStack for times when
> a feature of the API is disabled by local policy. It's possible it already
> is and I just haven't directly experienced it - so please don't take this
> as a backhanded condemnation of anyone else.
>

By the way, I think feedback is good, and pointing out positive feedback
for something isn't as frequent as it probably should be.  So before the
sentiment gets completely lost...

"Cool!!!! Thanks for pointing it out, and yeah; maybe we can look at some
improvements in Cinder and other projects."
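As a sketch (not neutronclient's actual code), a client could distinguish this kind of policy denial from a credential failure by inspecting the response body quoted above; the helper function here is invented for illustration:

```python
import json

def classify_failure(status_code, body):
    """Hypothetical helper: interpret a Neutron REST failure for the user."""
    if status_code == 401:
        return "bad credentials - re-authenticate"
    if status_code == 403:
        try:
            err = json.loads(body).get("NeutronError", {})
        except ValueError:
            err = {}
        if err.get("type") == "PolicyNotAuthorized":
            # The cloud works fine; this operation is simply disabled by policy.
            return err.get("message", "Operation disallowed by policy")
    return "unexpected error"

# Response body taken from the example in this thread.
body = ('{"NeutronError": {"message": "Policy doesn\'t allow '
        'create_network to be performed.", "type": "PolicyNotAuthorized", '
        '"detail": ""}}')
print(classify_failure(403, body))
```

The point being made in the thread is exactly this separation: a 403 with `PolicyNotAuthorized` tells the user their credentials are fine and the feature is deliberately disabled.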


>
> Monty
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150905/cd2aad01/attachment.html>

From sean at coreitpro.com  Sat Sep  5 22:19:48 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Sat, 5 Sep 2015 22:19:48 +0000
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
Message-ID: <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>

On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
> Right, it depends on your perspective of who 'owns' the API. Is it
> cloud-init or EC2?
> 
> At this point I would argue that cloud-init is in control because it would
> be a large undertaking to switch all of the AMI's on Amazon to something
> else. However, I know Sean disagrees with me on this point so I'll let him
> reply here.


Here's my take:

Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
in both the Neutron and Nova projects should follow all the details of the
Metadata API that is documented at:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

This means that this is a compatibility layer that OpenStack has
implemented so that users can use appliances, applications, and
operating system images in both Amazon EC2 and an OpenStack environment.

Yes, we can make changes to cloud-init. However, there is no guarantee
that all users of the Metadata API are exclusively using cloud-init as
their client. It is highly unlikely that people are rolling their own
Metadata API clients, but it's a contract we've made with users. This
includes transport level details like the IP address that the service
listens on. 
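As a sketch of that transport-level contract: because the endpoint address is fixed, any client (cloud-init or hand-rolled) can derive the same URLs. The helper below is hypothetical; the address and path layout follow the AWS document linked above.

```python
# The fixed link-local address is part of the contract Sean describes.
METADATA_IP = "169.254.169.254"
API_VERSION = "latest"

def metadata_url(path):
    """Build an EC2-style metadata URL for a given meta-data key."""
    return "http://%s/%s/meta-data/%s" % (METADATA_IP, API_VERSION, path)

print(metadata_url("instance-id"))
# -> http://169.254.169.254/latest/meta-data/instance-id
```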

The Metadata API is an established API that Amazon introduced years ago,
and we shouldn't be "improving" APIs that we don't control. If Amazon
were to introduce IPv6 support in the Metadata API tomorrow, we would
naturally implement it exactly the way they implemented it in EC2. We'd
honor the contract that Amazon made with its users, in our Metadata API,
since it is a compatibility layer.

However, since they haven't defined transport level details of the
Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
solution. It is not our API.

The nice thing about config-drive is that we've created a new mechanism
for bootstrapping instances - by replacing the transport level details
of the API. Rather than being a link-local address that instances access
over HTTP, it's a device that guests can mount and read. The actual
contents of the drive may have a similar schema as the Metadata API, but
I think at this point we've made enough of a differentiation between the
EC2 Metadata API and config-drive that I believe the contents of the
actual drive that the instance mounts can be changed without breaking
user expectations - since config-drive was developed by the OpenStack
community. The point being that we call it "config-drive" in
conversation and our docs. Users understand that config-drive is a
different feature.
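A minimal sketch of that model: the guest mounts a device and reads files instead of fetching URLs. Here a temporary directory stands in for the mounted drive; the `openstack/latest/meta_data.json` layout is the commonly documented one, and the instance data is fake.

```python
import json
import os
import tempfile

# Simulate a mounted config-drive with a temporary directory.
drive = tempfile.mkdtemp()
os.makedirs(os.path.join(drive, "openstack", "latest"))
with open(os.path.join(drive, "openstack", "latest", "meta_data.json"), "w") as f:
    json.dump({"uuid": "fake-instance-uuid"}, f)

def read_meta_data(mount_point):
    """Read instance metadata from a (simulated) config-drive mount."""
    path = os.path.join(mount_point, "openstack", "latest", "meta_data.json")
    with open(path) as f:
        return json.load(f)

print(read_meta_data(drive)["uuid"])
```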

I've had this same conversation about the Security Group API that we
have. We've named it the same thing as the Amazon API, but then went and
made all the fields different, inexplicably. Thankfully, it's just the
names of the fields, rather than being huge conceptual changes.

http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html

Basically, I believe that OpenStack should create APIs that are
community driven and owned, and that we should only emulate
non-community APIs where appropriate, and explicitly state that we only
are emulating them. Putting improvements in APIs that came from
somewhere else, instead of creating new OpenStack branded APIs is a lost
opportunity to differentiate OpenStack from other projects, as well as
Amazon AWS.

Thanks for reading, and have a great holiday.

-- 
Sean M. Collins


From zhenzan.zhou at intel.com  Sun Sep  6 00:13:55 2015
From: zhenzan.zhou at intel.com (Zhou, Zhenzan)
Date: Sun, 6 Sep 2015 00:13:55 +0000
Subject: [openstack-dev] [Congress] bugs for liberty release
In-Reply-To: <CAJjxPADr9u7nmwAtVZhhE_j7F=xUrpXpk1Q7exs+x4QVFvx_rw@mail.gmail.com>
References: <CAJjxPADr9u7nmwAtVZhhE_j7F=xUrpXpk1Q7exs+x4QVFvx_rw@mail.gmail.com>
Message-ID: <EB8DB51184817F479FC9C47B120861EE0470F3A7@SHSMSX101.ccr.corp.intel.com>

I have taken two, thanks.
https://bugs.launchpad.net/congress/+bug/1492308
https://bugs.launchpad.net/congress/+bug/1492354

BR
Zhou Zhenzan
From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 4, 2015 23:40
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] bugs for liberty release

Hi all,

I've found a few bugs that we could/should fix by the liberty release.  I tagged them with "liberty-rc".  If we could all pitch in, that'd be great.  Let me know which ones you'd like to work on so I can assign them to you in launchpad.

https://bugs.launchpad.net/congress/+bugs/?field.tag=liberty-rc

Thanks,
Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/016a0a9b/attachment.html>

From joe.gordon0 at gmail.com  Sun Sep  6 01:50:18 2015
From: joe.gordon0 at gmail.com (Joe Gordon)
Date: Sat, 5 Sep 2015 18:50:18 -0700
Subject: [openstack-dev] 9/4 state of the gate
In-Reply-To: <55EA48A8.6060505@linux.vnet.ibm.com>
References: <55E9FB5D.3080603@linux.vnet.ibm.com>
 <55EA48A8.6060505@linux.vnet.ibm.com>
Message-ID: <CAHXdxOd5YL8QE_SfrypBUUsp9xAM4nQJxsHiP7rr7LH=ZHbg8Q@mail.gmail.com>

On Fri, Sep 4, 2015 at 6:43 PM, Matt Riedemann <mriedem at linux.vnet.ibm.com>
wrote:

>
>
> On 9/4/2015 3:13 PM, Matt Riedemann wrote:
>
>> There are a few things blowing up in the last 24 hours so might as well
>> make people aware.
>>
>> 1. gate-tempest-dsvm-large-ops was failing at a decent rate:
>>
>> https://bugs.launchpad.net/nova/+bug/1491949
>>
>> Turns out devstack was changed to run multihost=true and that doesn't
>> work so well with the large-ops job that's creating hundreds of fake
>> instances on a single node.  We reverted the devstack change so things
>> should be good there now.
>>
>>
>> 2. gate-tempest-dsvm-cells was regressed because nova has an in-tree
>> blacklist regex of tests that don't work with cells and renaming some of
>> those in tempest broke the regex.
>>
>> https://bugs.launchpad.net/nova/+bug/1492255
>>
>> There is a patch in the gate but it's getting bounced on #3.  Long-term
>> we want to bring that blacklist regex down to 0 and instead use feature
>> toggles in Tempest for the cells job, we just aren't there yet.  Help
>> wanted...
>>
>>
>> 3. gate-tempest-dsvm-full-ceph is broken with glance-store 0.9.0:
>>
>> https://bugs.launchpad.net/glance-store/+bug/1492432
>>
>> It looks like the gate-tempest-dsvm-full-ceph-src-glance_store job was
>> not actually testing trunk glance_store code because of a problem in the
>> upper-constraints.txt file in the requirements repo - pip was capping
>> glance_store at 0.8.0 in the src job so we actually haven't been testing
>> latest glance-store.  dhellmann posted a fix:
>>
>> https://review.openstack.org/#/c/220648/
>>
>> But I'm assuming glance-store 0.9.0 is still busted. I've posted a
>> change which I think might be related:
>>
>> https://review.openstack.org/#/c/220646/
>>
>> If ^ fixes the issue we'll need to blacklist 0.9.0 from
>> global-requirements.
>>
>> --
>>
>> As always, it's fun to hit this stuff right before the weekend,
>> especially a long US holiday weekend. :)
>>
>>
> I haven't seen the elastic-recheck bot comment on any changes in a while
> either so I'm wondering if that's not running.
>

Looks like there was a suspicious 4 day gap in elastic-recheck, but it
appears to be running again?

$ ./lastcomment.py
Checking name: Elastic Recheck
[0] 2015-09-06 01:12:40 (0:35:54 old) https://review.openstack.org/220386
'Reject the cell name include '!', '.' and '@' for Nova API'
[1] 2015-09-02 00:54:54 (4 days, 0:53:40 old)
https://review.openstack.org/218781 'Remove the unnecassary
volume_api.get(context, volume_id)'


>
> Also, here is another new(ish) gate bug I'm just seeing tonight (bumped a
> fix for #3 above):
>
> https://bugs.launchpad.net/keystonemiddleware/+bug/1492508
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150905/0f85e1af/attachment.html>

From wanghua.humble at gmail.com  Sun Sep  6 01:56:45 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Sun, 6 Sep 2015 09:56:45 +0800
Subject: [openstack-dev] [magnum]keystone version
In-Reply-To: <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=pFO27-g@mail.gmail.com>
References: <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=pFO27-g@mail.gmail.com>
Message-ID: <CAH5-jC_e28VjS+0bPr4xr6i7YJJr8vL+p6-6QMobvoXpuEiO_A@mail.gmail.com>

any comments on this?

On Fri, Sep 4, 2015 at 9:43 AM, 王华 <wanghua.humble at gmail.com> wrote:

> Hi all,
>
> Now the keystoneclient in magnum only supports keystone v3. Is it necessary
> to support keystone v2? Keystone v2 doesn't support trusts.
>
> Regards,
> Wanghua
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/0c798fcf/attachment.html>

From donald.d.dugger at intel.com  Sun Sep  6 02:11:31 2015
From: donald.d.dugger at intel.com (Dugger, Donald D)
Date: Sun, 6 Sep 2015 02:11:31 +0000
Subject: [openstack-dev] [nova-scheduler] Scheduler sub-group meeting -
	Cancel for 9/8?
Message-ID: <6AF484C0160C61439DE06F17668F3BCB53FDCF79@ORSMSX114.amr.corp.intel.com>

It belatedly occurred to me that it's a US holiday this week (Labor Day) and most of us will be gone, so we should probably cancel this week's meeting.  The IRC channel will be there but many of us won't.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/c9c6af03/attachment.html>

From davanum at gmail.com  Sun Sep  6 02:38:37 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Sat, 5 Sep 2015 19:38:37 -0700
Subject: [openstack-dev] [oslo] Oslo meeting - Cancel for Sept 7th, 2015
Message-ID: <CANw6fcGDfeUDc1tBBdumvMdTU3VTceN8NxqZOvGZ+OvyshgNjg@mail.gmail.com>

Team,

Just realized that 9/8 is a holiday for some of us. Let's meet again the
next week.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150905/23351162/attachment.html>

From germy.lure at gmail.com  Sun Sep  6 02:39:55 2015
From: germy.lure at gmail.com (Germy Lure)
Date: Sun, 6 Sep 2015 10:39:55 +0800
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
Message-ID: <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>

Hi, Gal

Thank you for bringing this up. But I have some suggestions for the API.

An operator or some other component may want to reach several RELATED VMs,
not only one, or one by one. Here, RELATED means that the VMs are in one
subnet or network or on one host (similar to reaching Docker containers on a host).

Via the API you mentioned, a user must ssh to one VM, then update, or even
delete and re-add, a port forwarding entry to ssh to another. To the admin of a
VPC (with 20 subnets?), that's a nightmare.

Germy


On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:

> Hello All,
>
> I have searched and found many past efforts to implement port forwarding
> in Neutron.
> I have found two incomplete blueprints [1], [2] and an abandoned patch [3].
>
> There is even a project in Stackforge [4], [5] that claims
> to implement this, but the L3 parts in it seem older than current master.
>
> I have recently come across this requirement for various use cases, one of
> them is
> providing feature compliance with the Docker port-mapping feature (for Kuryr),
> and saving floating
> IP space.
> There have been many discussions in the past that require this feature, so
> I assume
> there is a demand to make this formal; just a few examples: [6], [7],
> [8], [9]
>
> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
> the external router
> leg from the public network to internal ports, so a user can use one
> Floating IP (the external
> gateway router interface IP) and reach different internal ports depending
> on the port numbers.
> This should happen on the network node (and can also be leveraged for
> security reasons).
>
> I think that the POC implementation in the Stackforge project shows that
> this needs to be
> implemented inside the L3 parts of the current reference implementation,
> it will be hard
> to maintain something like that in an external repository.
> (I also think that the API/DB extensions should be close to the current L3
> reference
> implementation)
>
> I would like to renew the efforts on this feature and propose a RFE and a
> spec for this to the
> next release, any comments/ideas/thoughts are welcome.
> And of course if any of the people interested or any of the people that
> worked on this before
> want to join the effort, you are more than welcome to join and comment.
>
> Thanks
> Gal.
>
> [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> [3] https://review.openstack.org/#/c/60512/
> [4] https://github.com/stackforge/networking-portforwarding
> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>
> [6]
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> [8]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> [9]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
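As an illustrative sketch (not Neutron code) of the idea quoted above: one floating IP whose external TCP/UDP ports fan out to different internal endpoints, rendered here as rough iptables DNAT rules of the kind an L3 agent might program. All addresses and ports are invented.

```python
# (external port) -> (internal ip, internal port); values are made up.
forwardings = {
    2201: ("10.0.0.5", 22),
    2202: ("10.0.0.6", 22),
    8080: ("10.0.0.7", 80),
}

def dnat_rule(floating_ip, ext_port, proto="tcp"):
    """Render roughly the DNAT rule that would implement one forwarding."""
    ip, port = forwardings[ext_port]
    return ("-A PREROUTING -d %s -p %s --dport %d "
            "-j DNAT --to-destination %s:%d"
            % (floating_ip, proto, ext_port, ip, port))

print(dnat_rule("203.0.113.10", 2201))
```

With such a table, one floating IP serves ssh to several VMs (ports 2201, 2202, ...) instead of consuming one floating IP per VM.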
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/1569b7fa/attachment.html>

From adrian.otto at rackspace.com  Sun Sep  6 04:46:57 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Sun, 6 Sep 2015 04:46:57 +0000
Subject: [openstack-dev] [magnum]keystone version
In-Reply-To: <CAH5-jC_e28VjS+0bPr4xr6i7YJJr8vL+p6-6QMobvoXpuEiO_A@mail.gmail.com>
References: <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=pFO27-g@mail.gmail.com>,
 <CAH5-jC_e28VjS+0bPr4xr6i7YJJr8vL+p6-6QMobvoXpuEiO_A@mail.gmail.com>
Message-ID: <EB0FE4CA-E173-43F3-A6FE-8A017C9A59F2@rackspace.com>

Keystone v2 will soon be deprecated. It makes sense for Magnum to use v3.

--
Adrian

On Sep 5, 2015, at 6:57 PM, 王华 <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>> wrote:

any comments on this?

On Fri, Sep 4, 2015 at 9:43 AM, 王华 <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>> wrote:
Hi all,

Now the keystoneclient in magnum only supports keystone v3. Is it necessary to support keystone v2? Keystone v2 doesn't support trusts.

Regards,
Wanghua

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/19b1ae7b/attachment.html>

From gal.sagie at gmail.com  Sun Sep  6 05:05:31 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Sun, 6 Sep 2015 08:05:31 +0300
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
 <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>
Message-ID: <CAG9LJa40-NM1LTJOZxuTa27W6LyZgpJwb7D4XVcqoiH63GaASg@mail.gmail.com>

Hi Germy,

I am not sure I understand what you mean; can you please explain it
further?

Thanks
Gal.

On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.lure at gmail.com> wrote:

> Hi, Gal
>
> Thank you for bringing this up. But I have some suggestions for the API.
>
> An operator or some other component may want to reach several RELATED VMs,
> not only one, or one by one. Here, RELATED means that the VMs are in one
> subnet or network or on one host (similar to reaching Docker containers on
> a host).
>
> Via the API you mentioned, a user must ssh to one VM, then update, or even
> delete and re-add, a port forwarding entry to ssh to another. To the admin
> of a VPC (with 20 subnets?), that's a nightmare.
>
> Germy
>
>
> On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>
>> Hello All,
>>
>> I have searched and found many past efforts to implement port forwarding
>> in Neutron.
>> I have found two incomplete blueprints [1], [2] and an abandoned patch
>> [3].
>>
>> There is even a project in Stackforge [4], [5] that claims
>> to implement this, but the L3 parts in it seem older than current master.
>>
>> I have recently come across this requirement for various use cases, one
>> of them is
>> providing feature compliance with the Docker port-mapping feature (for
>> Kuryr), and saving floating
>> IP space.
>> There have been many discussions in the past that require this feature, so
>> I assume
>> there is a demand to make this formal; just a few examples: [6], [7],
>> [8], [9]
>>
>> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
>> the external router
>> leg from the public network to internal ports, so a user can use one
>> Floating IP (the external
>> gateway router interface IP) and reach different internal ports depending
>> on the port numbers.
>> This should happen on the network node (and can also be leveraged for
>> security reasons).
>>
>> I think that the POC implementation in the Stackforge project shows that
>> this needs to be
>> implemented inside the L3 parts of the current reference implementation,
>> it will be hard
>> to maintain something like that in an external repository.
>> (I also think that the API/DB extensions should be close to the current
>> L3 reference
>> implementation)
>>
>> I would like to renew the efforts on this feature and propose a RFE and a
>> spec for this to the
>> next release, any comments/ideas/thoughts are welcome.
>> And of course if any of the people interested or any of the people that
>> worked on this before
>> want to join the effort, you are more than welcome to join and comment.
>>
>> Thanks
>> Gal.
>>
>> [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
>> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
>> [3] https://review.openstack.org/#/c/60512/
>> [4] https://github.com/stackforge/networking-portforwarding
>> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>>
>> [6]
>> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
>> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
>> [8]
>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
>> [9]
>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/8308ebb3/attachment.html>

From stevemar at ca.ibm.com  Sun Sep  6 05:14:23 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Sun, 6 Sep 2015 01:14:23 -0400
Subject: [openstack-dev] [magnum]keystone version
In-Reply-To: <EB0FE4CA-E173-43F3-A6FE-8A017C9A59F2@rackspace.com>
References: <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=pFO27-g@mail.gmail.com>, 
 <CAH5-jC_e28VjS+0bPr4xr6i7YJJr8vL+p6-6QMobvoXpuEiO_A@mail.gmail.com>
 <EB0FE4CA-E173-43F3-A6FE-8A017C9A59F2@rackspace.com>
Message-ID: <201509060514.t865Eeju000528@d03av02.boulder.ibm.com>

+1, we're trying to deprecate the v2 API as soon as is sanely possible.
Plus, there's no reason to not use v3 since you can achieve everything you
could in v2, plus more goodness.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Adrian Otto <adrian.otto at rackspace.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	2015/09/06 12:48 AM
Subject:	Re: [openstack-dev] [magnum]keystone version



Keystone v2 will soon be deprecated. It makes sense for Magnum to use v3.

--
Adrian

On Sep 5, 2015, at 6:57 PM, 王华 <wanghua.humble at gmail.com> wrote:

      any comments on this?

      On Fri, Sep 4, 2015 at 9:43 AM, 王华 <wanghua.humble at gmail.com>
      wrote:
        Hi all,

        Now the keystoneclient in magnum only supports keystone v3. Is it
        necessary to support keystone v2? Keystone v2 doesn't support trusts.

        Regards,
        Wanghua

      __________________________________________________________________________

      OpenStack Development Mailing List (not for usage questions)
      Unsubscribe: OpenStack-dev-request at lists.openstack.org
      ?subject:unsubscribe
      http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
      __________________________________________________________________________

      OpenStack Development Mailing List (not for usage questions)
      Unsubscribe:
      OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
      http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/a5e89bad/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/a5e89bad/attachment.gif>

From stevemar at ca.ibm.com  Sun Sep  6 05:18:25 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Sun, 6 Sep 2015 01:18:25 -0400
Subject: [openstack-dev] [nova][i18n] Is there any point in using _()
 inpython-novaclient?
In-Reply-To: <55E9D9AD.1000402@linux.vnet.ibm.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
Message-ID: <201509060518.t865IeSf019572@d01av05.pok.ibm.com>


Isn't this just a matter of setting up novaclient for translation? IIRC
using _() is harmless if there's no translation bits set up for the
project.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Matt Riedemann <mriedem at linux.vnet.ibm.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>,
            openstack-i18n at lists.openstack.org
Date:	2015/09/04 01:50 PM
Subject:	[openstack-dev] [nova][i18n] Is there any point in using _() in
            python-novaclient?



I noticed this today:

https://review.openstack.org/#/c/219768/

And it got me thinking about something I've wondered before - why do we
even use _() in python-novaclient?  It doesn't have any .po files for
babel message translation, it has no babel config, there is nothing in
setup.cfg about extracting messages and compiling them into .mo's, there
is nothing on Transifex for python-novaclient, etc.

Is there a way to change your locale and get translated output in nova
CLIs?  I didn't find anything in docs from a quick google search.

Comparing to python-openstackclient, that does have a babel config and
some locale po files in tree, at least for de and zh_TW.

So if this doesn't work in python-novaclient, do we need any of the i18n
code in there?  It doesn't really hurt, but it seems pointless to push
changes for it or try to keep user-facing messages in mind in the code.

--

Thanks,

Matt Riedemann


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/9bd65a2b/attachment.html>

From stevemar at ca.ibm.com  Sun Sep  6 05:23:13 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Sun, 6 Sep 2015 01:23:13 -0400
Subject: [openstack-dev] [keystone] FFE Request for Reseller
In-Reply-To: <CABj-22jEYcQT03QUftK4DJZJ7dvLfoFZsLzCNiN92mOwsYuUCw@mail.gmail.com>
References: <CABj-22jEYcQT03QUftK4DJZJ7dvLfoFZsLzCNiN92mOwsYuUCw@mail.gmail.com>
Message-ID: <201509060523.t865NLM6026751@d01av02.pok.ibm.com>


I suspect we'll vote on this topic during the next meeting on Tuesday, but
this seems like a huge amount of code to land.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Henrique Truta <henriquecostatruta at gmail.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	2015/09/04 12:12 PM
Subject:	[openstack-dev] [keystone] FFE Request for Reseller



Hi Folks,



As you may know, the Reseller blueprint was proposed and approved in Kilo (
https://review.openstack.org/#/c/139824/), with development postponed to
Liberty.

During this time, the 3 main patches of the chain were split into 8,
becoming smaller and easier to review. The first 2 of them were merged
before the liberty-3 freeze, and some of the others have already received +2s.
The code is very mature, having had a keystone core member's support through
the whole release cycle.



I would like to request an FFE for the remaining 9 patches (reseller core)
which are already in review (starting from
https://review.openstack.org/#/c/213448/ to
https://review.openstack.org/#/c/161854/).



Henrique
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/20f1912a/attachment.html>

From jamielennox at redhat.com  Sun Sep  6 08:40:10 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Sun, 6 Sep 2015 04:40:10 -0400 (EDT)
Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swift][Barbican]
 FYI: Defaulting to Keystone v3 API
In-Reply-To: <201509041638.t84GcbI5013710@d03av05.boulder.ibm.com>
References: <CAB1EZBomKom6_vXb36Yu=2v1EvYYMyX9Ufa9WzZuyvC6TAGFAQ@mail.gmail.com>
 <201509041638.t84GcbI5013710@d03av05.boulder.ibm.com>
Message-ID: <1299339546.18129044.1441528810582.JavaMail.zimbra@redhat.com>

Note that fixing this does not mean ironic has to support keystone v3 (but please fix that too). It just means that somewhere in ironic's gate it is doing something like an "openstack user create" or a role assignment directly with the OSC tool assuming v2, rather than using the helpers that devstack provides, like get_or_create_user. Keystone v2 still exists and is running; we just changed the default API for devstack OSC commands.

I'm kind of annoyed we reverted this patch (though I was surprised to see it merge recently, as it's been around for a while). It was known to possibly break people, which is why it was on the agenda for the QA meetings. However, given that devstack has plugins and there is third-party CI, there is absolutely no way we can make sure that everyone has fixed this; we just need to make a breaking change. Granted, coinciding with freeze is unfortunate. Luckily this doesn't affect most people, because they use the devstack helper functions, and for those that don't it's an almost trivial fix to start using them.

Jamie

----- Original Message -----
> From: "Steve Martinelli" <stevemar at ca.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Saturday, September 5, 2015 2:38:27 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican] FYI: Defaulting to Keystone v3 API
> 
> 
> 
> This change affected Barbican too, but they quickly tossed up a patch to
> resolve the gate failures [1]. As much as I would like DevStack and
> OpenStackClient to default to Keystone's v3 API, we should, considering how
> close we are in the schedule, revert the initial patch (which I see sdague
> already did). We need to determine which projects are hosting their own
> devstack plugin scripts and update those first before bringing back the
> original patch.
> 
> https://review.openstack.org/#/c/220396/
> 
> Thanks,
> 
> Steve Martinelli
> OpenStack Keystone Core
> 
> Lucas Alvares Gomes ---2015/09/04 10:07:51 AM---Hi, This is email is just a
> FYI: Recently the patch [1] got merged in
> 
> From: Lucas Alvares Gomes <lucasagomes at gmail.com>
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Date: 2015/09/04 10:07 AM
> Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swit] FYI: Defaulting
> to Keystone v3 API
> 
> 
> 
> 
> Hi,
> 
> This email is just an FYI: Recently the patch [1] got merged in
> DevStack and broke the Ironic gate [2], I haven't had time to dig into
> the problem yet so I reverted the patch [3] to unblock our gate.
> 
> The work to convert to v3 seems to be close enough but not yet there
> so I just want to bring a broader attention to it with this email.
> 
> Also, the Ironic job that is currently running in the DevStack gate is
> not testing Ironic with the Swift module, there's a patch [4] changing
> that, so I hope we will be able to identify the problem before we break
> things next time.
> 
> [1] https://review.openstack.org/#/c/186684/
> [2]
> http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
> [3] https://review.openstack.org/220532
> [4] https://review.openstack.org/#/c/220516/
> 
> Cheers,
> Lucas
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From huan.xie at citrix.com  Sun Sep  6 09:03:47 2015
From: huan.xie at citrix.com (Huan Xie)
Date: Sun, 6 Sep 2015 09:03:47 +0000
Subject: [openstack-dev]  [neutron] Fail to get ipv4 address from dhcp
Message-ID: <27E8119E14BEBA418E5E368E4DF2CA71FCB66B@SINPEX01CL02.citrite.net>


Hi all,

I'm trying to deploy OpenStack environment using DevStack with latest master code.
I use Xenserver + neutron, with ML2 plugins and VLAN type.

The problem I met is that the instances cannot really get an IP address (I use DHCP), although the VM shows an IP in Horizon.
I ran tcpdump on both the VM side and the DHCP server side: I can see the DHCP request packets leave the VM, but no request packets ever arrive at the DHCP server.
But after I reboot the q-agt, the VM can get an IP successfully.
Comparing the flows before and after the q-agt restart, the only differences I can see are the flow rules for ARP spoofing protection.

These are dom0's flow rules on the q-agt's br-int; the two unindented rules below are the newly added ones:

                NXST_FLOW reply (xid=0x4):
               cookie=0x824d13a352a4e216, duration=163244.088s, table=0, n_packets=93, n_bytes=18140, idle_age=4998, hard_age=65534, priority=0 actions=NORMAL
cookie=0x824d13a352a4e216, duration=163215.062s, table=0, n_packets=7, n_bytes=294, idle_age=33540, hard_age=65534, priority=10,arp,in_port=5 actions=resubmit(,24)
               cookie=0x824d13a352a4e216, duration=163230.050s, table=0, n_packets=25179, n_bytes=2839586, idle_age=5, hard_age=65534, priority=3,in_port=2,dl_vlan=1023 actions=mod_vlan_vid:1,NORMAL
               cookie=0x824d13a352a4e216, duration=163236.775s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=2 actions=drop
               cookie=0x824d13a352a4e216, duration=163243.516s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
               cookie=0x824d13a352a4e216, duration=163242.953s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x824d13a352a4e216, duration=163215.636s, table=24, n_packets=7, n_bytes=294, idle_age=33540, hard_age=65534, priority=2,arp,in_port=5,arp_spa=10.0.0.6 actions=NORMAL

I cannot see any other changes after rebooting q-agt, and these rules seem to be only for ARP spoofing; yet the instance can now get an IP from DHCP.
I also googled this problem but could not find a solution.
Has anyone met this problem before, or does anyone have a suggestion on how to debug it?

Thanks a lot

BR//Huan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/b6edc0b2/attachment.html>

From joehuang at huawei.com  Sun Sep  6 09:28:04 2015
From: joehuang at huawei.com (joehuang)
Date: Sun, 6 Sep 2015 09:28:04 +0000
Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware &
 multiple keystone endpoints
In-Reply-To: <55DC3005.9000306@ericsson.com>
References: <55D5CABC.1020808@ericsson.com>
 <1512600172.12898075.1440488257977.JavaMail.zimbra@redhat.com>
 <55DC3005.9000306@ericsson.com>
Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF542F516C@szxema505-mbx.china.huawei.com>

Hello, Jamie and Hans,

The patch "Allow specifying a region name to auth_token" (https://review.openstack.org/#/c/216579) has just been merged.

But unfortunately, when I apply this patch in the multisite cloud with Fernet tokens, the issue is still there: requests are routed to an incorrect endpoint.

I also checked the region_name configuration in the source code; it is correct.

The issue mentioned in the bug report is not addressed yet: https://bugs.launchpad.net/keystonemiddleware/+bug/1488347

Is there anyone who tested it successfully in your environment?


In the Glance API log below, the request was redirected to http://172.17.0.95:35357, but that address is not a Keystone endpoint (http://172.17.0.98:35357 and http://172.17.0.41:35357 are the correct Keystone endpoints).
//////////////////////////////////////////
2015-09-06 07:50:43.447 194 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://172.17.0.98:35357 -H "Accept: application/json" -H "User-Agent: python-keystoneclient" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-06 07:50:43.468 194 DEBUG keystoneclient.session [-] RESP: [300] content-length: 593 vary: X-Auth-Token connection: keep-alive date: Sun, 06 Sep 2015 07:50:43 GMT content-type: application/json x-distribution: Ubuntu 
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://172.17.0.98:35357/v3/", "rel": "self"}]}, {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://172.17.0.98:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
2015-09-06 07:50:43.469 194 DEBUG keystoneclient.auth.identity.v3 [-] Making authentication request to http://172.17.0.98:35357/v3/auth/tokens get_auth_ref /usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v3.py:125
2015-09-06 07:50:43.574 194 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://172.17.0.95:35357 -H "Accept: application/json" -H "User-Agent: python-keystoneclient" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-06 07:50:46.576 194 WARNING keystoneclient.auth.identity.base [-] Failed to contact the endpoint at http://172.17.0.95:35357 for discovery. Fallback to using that endpoint as the base url.
2015-09-06 07:50:46.576 194 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://172.17.0.95:35357/auth/tokens -H "X-Subject-Token: {SHA1}640964e1f8716ecbb10ca3d8b5b08c8e7abfac1d" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}386777062718e0992cc818780e3ec7fa0671d8e9" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-06 07:50:49.576 194 INFO keystoneclient.session [-] Failure: Unable to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in 0.5s.
2015-09-06 07:50:52.576 194 INFO keystoneclient.session [-] Failure: Unable to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in 1.0s.
2015-09-06 07:50:55.576 194 INFO keystoneclient.session [-] Failure: Unable to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in 2.0s.
2015-09-06 07:50:58.576 194 WARNING keystonemiddleware.auth_token [-] Authorization failed for token


Best Regards
Chaoyi Huang ( Joe Huang )


-----Original Message-----
From: Hans Feldt [mailto:hans.feldt at ericsson.com] 
Sent: Tuesday, August 25, 2015 5:06 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple keystone endpoints



On 2015-08-25 09:37, Jamie Lennox wrote:
>
>
> ----- Original Message -----
>> From: "Hans Feldt" <hans.feldt at ericsson.com>
>> To: openstack-dev at lists.openstack.org
>> Sent: Thursday, August 20, 2015 10:40:28 PM
>> Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple	keystone endpoints
>>
>> How do you configure/use keystonemiddleware for a specific identity 
>> endpoint among several?
>>
>> In an OPNFV multi region prototype I have keystone endpoints per 
>> region. I would like keystonemiddleware (in context of glance-api) to 
>> use the local keystone for performing user token validation. Instead 
>> keystonemiddleware seems to use the first listed keystone endpoint in 
>> the service catalog (which could be wrong/non-optimal in most 
>> regions).
>>
>> I found this closed, related bug:
>> https://bugs.launchpad.net/python-keystoneclient/+bug/1147530
>
> Hey,
>
> There's two points to this.
>
> * If you are using an auth plugin then you're right it will just pick the first endpoint. You can look at project specific endpoints[1] so that there is only one keystone endpoint returned for the services project. I've also just added a review for this feature[2].

I am not.

> * If you're not using an auth plugin (so the admin_X options) then keystone will always use the endpoint that is configured in the options (identity_uri).

Yes for getting its own admin/service token. But for later user token validation it seems to pick the first identity service in the stored (?) service catalog.

By patching keystonemiddleware's _create_identity_server to pass an endpoint_override parameter to the Adapter constructor, I can get it to use the local keystone for token validation. I am looking for an official way of achieving the same.
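The selection problem described above can be modeled in a few lines of plain Python (a sketch, no OpenStack libraries; the catalog entries reuse the endpoints from the Glance log upthread): a naive lookup returns the first identity endpoint in the catalog, while a region filter or an endpoint override picks the local one.

```python
# Pure-Python sketch of catalog-based endpoint selection. A lookup without
# a region (or override) always returns the first matching entry, which is
# the behavior Hans observes in keystonemiddleware.
catalog = [
    {"type": "identity", "region": "RegionOne",
     "url": "http://172.17.0.98:35357"},
    {"type": "identity", "region": "RegionTwo",
     "url": "http://172.17.0.41:35357"},
]

def pick_endpoint(catalog, service_type, region=None, override=None):
    """Return override if set, else the first matching catalog entry."""
    if override:
        return override
    for ep in catalog:
        if ep["type"] == service_type and region in (None, ep["region"]):
            return ep["url"]
    raise LookupError("no endpoint for %s" % service_type)

# Default behavior: the first identity endpoint wins, even on RegionTwo nodes.
print(pick_endpoint(catalog, "identity"))                      # 172.17.0.98
# Region-aware lookup returns the local keystone instead.
print(pick_endpoint(catalog, "identity", region="RegionTwo"))  # 172.17.0.41
```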

Thanks,
Hans

>
> Hope that helps,
>
> Jamie
>
>
> [1] 
> https://github.com/openstack/keystone-specs/blob/master/specs/juno/endpoint-group-filter.rst
> [2] https://review.openstack.org/#/c/216579
>
>> Thanks,
>> Hans
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From gkotton at vmware.com  Sun Sep  6 11:56:55 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Sun, 6 Sep 2015 11:56:55 +0000
Subject: [openstack-dev] [nova] Bug importance
Message-ID: <D21204B9.BBED2%gkotton@vmware.com>

Hi,
In the past I was able to set the importance of a bug, but now I am unable to. Has the policy changed? Can someone please clarify? If the policy has changed, who is responsible for deciding the priority of a bug?
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/713e53b1/attachment.html>

From davanum at gmail.com  Sun Sep  6 13:10:03 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Sun, 6 Sep 2015 06:10:03 -0700
Subject: [openstack-dev] [nova] Bug importance
In-Reply-To: <D21204B9.BBED2%gkotton@vmware.com>
References: <D21204B9.BBED2%gkotton@vmware.com>
Message-ID: <CANw6fcHqyza2Z4hDZWdFr9_3V=FbVpbsvBNOLFprW_9F+Ma8ow@mail.gmail.com>

Gary,

Not sure what changed...

On this page (https://bugs.launchpad.net/nova/) on the right hand side, do
you see "Bug Supervisor" set to "Nova Bug Team"?  I believe "Nova Bug Team"
is open and you can add yourself, so if you do not see yourself in that
group, can you please add it and try?

-- Dims

On Sun, Sep 6, 2015 at 4:56 AM, Gary Kotton <gkotton at vmware.com> wrote:

> Hi,
> In the past I was able to set the importance of a bug. Now I am unable to
> do this? Has the policy changed? Can someone please clarify. If the policy
> has changed who is responsible for deciding the priority of a bug?
> Thanks
> Gary
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/a4567a36/attachment.html>

From gkotton at vmware.com  Sun Sep  6 13:25:53 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Sun, 6 Sep 2015 13:25:53 +0000
Subject: [openstack-dev] [nova] Bug importance
In-Reply-To: <CANw6fcHqyza2Z4hDZWdFr9_3V=FbVpbsvBNOLFprW_9F+Ma8ow@mail.gmail.com>
References: <D21204B9.BBED2%gkotton@vmware.com>
 <CANw6fcHqyza2Z4hDZWdFr9_3V=FbVpbsvBNOLFprW_9F+Ma8ow@mail.gmail.com>
Message-ID: <D212198D.BBF16%gkotton@vmware.com>

That works.
Thanks!

From: "davanum at gmail.com<mailto:davanum at gmail.com>" <davanum at gmail.com<mailto:davanum at gmail.com>>
Reply-To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Sunday, September 6, 2015 at 4:10 PM
To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [nova] Bug importance

Gary,

Not sure what changed...

On this page (https://bugs.launchpad.net/nova/) on the right hand side, do you see "Bug Supervisor" set to "Nova Bug Team"?  I believe "Nova Bug Team" is open and you can add yourself, so if you do not see yourself in that group, can you please add it and try?

-- Dims

On Sun, Sep 6, 2015 at 4:56 AM, Gary Kotton <gkotton at vmware.com<mailto:gkotton at vmware.com>> wrote:
Hi,
In the past I was able to set the importance of a bug. Now I am unable to do this? Has the policy changed? Can someone please clarify. If the policy has changed who is responsible for deciding the priority of a bug?
Thanks
Gary

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Davanum Srinivas :: https://twitter.com/dims<https://urldefense.proofpoint.com/v2/url?u=https-3A__twitter.com_dims&d=BQMFaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=VlZxHpZBmzzkWT5jqz9JYBk8YTeq9N3-diTlNj4GyNc&m=CfiDSvUWP9eHhl22u1JTc8kwyMoUP2d4vFHTaAAIILw&s=jGlwF0Z-FeR61gMs6CbYPVbMQyzw6A-wqaWnZWu4sK8&e=>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/5a07bf13/attachment.html>

From zhipengh512 at gmail.com  Sun Sep  6 13:28:07 2015
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Sun, 6 Sep 2015 21:28:07 +0800
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
	allocation
In-Reply-To: <20150904175517.GC21846@jimrollenhagen.com>
References: <55E96EEE.4070306@openstack.org> <55E98609.4060708@redhat.com>
 <20150904175517.GC21846@jimrollenhagen.com>
Message-ID: <CAHZqm+XLtqN75LXcFuSmC92ep17cHo1CjDOn+NE-A+kSQ0VAPg@mail.gmail.com>

Do we have enough logistics for projects that are not in the schedule above
but also want to have ad hoc sessions at the design summit venue? For
example, in Paris we would usually just grab a table and slap a card with the
project name on it.

On Sat, Sep 5, 2015 at 1:55 AM, Jim Rollenhagen <jim at jimrollenhagen.com>
wrote:

> On Fri, Sep 04, 2015 at 01:52:41PM +0200, Dmitry Tantsur wrote:
> > On 09/04/2015 12:14 PM, Thierry Carrez wrote:
> > >Hi PTLs,
> > >
> > >Here is the proposed slot allocation for every "big tent" project team
> > >at the Mitaka Design Summit in Tokyo. This is based on the requests the
> > >liberty PTLs have made, space availability and project activity &
> > >collaboration needs.
> > >
> > >We have a lot less space (and time slots) in Tokyo compared to
> > >Vancouver, so we were unable to give every team what they wanted. In
> > >particular, there were far more workroom requests than we have
> > >available, so we had to cut down on those quite heavily. Please note
> > >that we'll have a large lunch room with roundtables inside the Design
> > >Summit space that can easily be abused (outside of lunch) as space for
> > >extra discussions.
> > >
> > >Here is the allocation:
> > >
> > >| fb: fishbowl 40-min slots
> > >| wr: workroom 40-min slots
> > >| cm: Friday contributors meetup
> > >| | day: full day, morn: only morning, aft: only afternoon
> > >
> > >Neutron: 12fb, cm:day
> > >Nova: 14fb, cm:day
> > >Cinder: 5fb, 4wr, cm:day
> > >Horizon: 2fb, 7wr, cm:day
> > >Heat: 4fb, 8wr, cm:morn
> > >Keystone: 7fb, 3wr, cm:day
> > >Ironic: 4fb, 4wr, cm:morn
> > >Oslo: 3fb, 5wr
> > >Rally: 1fb, 2wr
> > >Kolla: 3fb, 5wr, cm:aft
> > >Ceilometer: 2fb, 7wr, cm:morn
> > >TripleO: 2fb, 1wr, cm:full
> > >Sahara: 2fb, 5wr, cm:aft
> > >Murano: 2wr, cm:full
> > >Glance: 3fb, 5wr, cm:full
> > >Manila: 2fb, 4wr, cm:morn
> > >Magnum: 5fb, 5wr, cm:full
> > >Swift: 2fb, 12wr, cm:full
> > >Trove: 2fb, 4wr, cm:aft
> > >Barbican: 2fb, 6wr, cm:aft
> > >Designate: 1fb, 4wr, cm:aft
> > >OpenStackClient: 1fb, 1wr, cm:morn
> > >Mistral: 1fb, 3wr
> > >Zaqar: 1fb, 3wr
> > >Congress: 3wr
> > >Cue: 1fb, 1wr
> > >Solum: 1fb
> > >Searchlight: 1fb, 1wr
> > >MagnetoDB: won't be present
> > >
> > >Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)
> > >PuppetOpenStack: 2fb, 3wr
> > >Documentation: 2fb, 4wr, cm:morn
> > >Quality Assurance: 4fb, 4wr, cm:full
> > >OpenStackAnsible: 2fb, 1wr, cm:aft
> > >Release management: 1fb, 1wr (shared meetup with QA)
> > >Security: 2fb, 2wr
> > >ChefOpenstack: will camp in the lunch room all week
> > >App catalog: 1fb, 1wr
> > >I18n: cm:morn
> > >OpenStack UX: 2wr
> > >Packaging-deb: 2wr
> > >Refstack: 2wr
> > >RpmPackaging: 1fb, 1wr
> > >
> > >We'll start working on laying out those sessions over the available
> > >rooms and time slots. If you have constraints (I already know
> > >searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> > >Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> > >our best to limit them.
> > >
> >
> > Would be cool to avoid conflicts between Ironic and TripleO.
>
> I'd also like to save room for one Ironic/Nova session, and one
> Ironic/Neutron session.
>
> // jim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/2eba131d/attachment.html>

From emilien at redhat.com  Sun Sep  6 14:59:12 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Sun, 6 Sep 2015 10:59:12 -0400
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <55E96EEE.4070306@openstack.org>
References: <55E96EEE.4070306@openstack.org>
Message-ID: <55EC54C0.5040800@redhat.com>



On 09/04/2015 06:14 AM, Thierry Carrez wrote:
> Hi PTLs,
> 
> Here is the proposed slot allocation for every "big tent" project team
> at the Mitaka Design Summit in Tokyo. This is based on the requests the
> liberty PTLs have made, space availability and project activity &
> collaboration needs.
> 
> We have a lot less space (and time slots) in Tokyo compared to
> Vancouver, so we were unable to give every team what they wanted. In
> particular, there were far more workroom requests than we have
> available, so we had to cut down on those quite heavily. Please note
> that we'll have a large lunch room with roundtables inside the Design
> Summit space that can easily be abused (outside of lunch) as space for
> extra discussions.
> 
> Here is the allocation:
> 
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | day: full day, morn: only morning, aft: only afternoon
> 
> Neutron: 12fb, cm:day
> Nova: 14fb, cm:day
> Cinder: 5fb, 4wr, cm:day	
> Horizon: 2fb, 7wr, cm:day	
> Heat: 4fb, 8wr, cm:morn
> Keystone: 7fb, 3wr, cm:day
> Ironic: 4fb, 4wr, cm:morn
> Oslo: 3fb, 5wr
> Rally: 1fb, 2wr
> Kolla: 3fb, 5wr, cm:aft
> Ceilometer: 2fb, 7wr, cm:morn
> TripleO: 2fb, 1wr, cm:full
> Sahara: 2fb, 5wr, cm:aft
> Murano: 2wr, cm:full
> Glance: 3fb, 5wr, cm:full	
> Manila: 2fb, 4wr, cm:morn
> Magnum: 5fb, 5wr, cm:full	
> Swift: 2fb, 12wr, cm:full	
> Trove: 2fb, 4wr, cm:aft
> Barbican: 2fb, 6wr, cm:aft
> Designate: 1fb, 4wr, cm:aft
> OpenStackClient: 1fb, 1wr, cm:morn
> Mistral: 1fb, 3wr	
> Zaqar: 1fb, 3wr
> Congress: 3wr
> Cue: 1fb, 1wr
> Solum: 1fb
> Searchlight: 1fb, 1wr
> MagnetoDB: won't be present
> 
> Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)	
> PuppetOpenStack: 2fb, 3wr

I know Infra & PuppetOpenStack share a lot of common bits; if we
could avoid slot conflicts, it would be awesome.

Thanks,

> Documentation: 2fb, 4wr, cm:morn
> Quality Assurance: 4fb, 4wr, cm:full
> OpenStackAnsible: 2fb, 1wr, cm:aft
> Release management: 1fb, 1wr (shared meetup with QA)
> Security: 2fb, 2wr
> ChefOpenstack: will camp in the lunch room all week
> App catalog: 1fb, 1wr
> I18n: cm:morn
> OpenStack UX: 2wr
> Packaging-deb: 2wr
> Refstack: 2wr
> RpmPackaging: 1fb, 1wr
> 
> We'll start working on laying out those sessions over the available
> rooms and time slots. If you have constraints (I already know
> searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
> Manila with Cinder, Solum with Magnum...) please let me know, we'll do
> our best to limit them.
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/75f5a4f7/attachment.pgp>

From morgan.fainberg at gmail.com  Sun Sep  6 16:01:46 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Sun, 6 Sep 2015 09:01:46 -0700
Subject: [openstack-dev] [magnum]keystone version
In-Reply-To: <201509060514.t865Eeju000528@d03av02.boulder.ibm.com>
References: <CAH5-jC8FQ9C7ADVygXXVRyKMt867iBFsjimKp26db6=pFO27-g@mail.gmail.com>
 <CAH5-jC_e28VjS+0bPr4xr6i7YJJr8vL+p6-6QMobvoXpuEiO_A@mail.gmail.com>
 <EB0FE4CA-E173-43F3-A6FE-8A017C9A59F2@rackspace.com>
 <201509060514.t865Eeju000528@d03av02.boulder.ibm.com>
Message-ID: <71DB15A7-CB63-466C-BDCE-D92317E70ADA@gmail.com>



> On Sep 5, 2015, at 22:14, Steve Martinelli <stevemar at ca.ibm.com> wrote:
> 
> +1, we're trying to deprecate the v2 API as soon as is sanely possible. Plus, there's no reason to not use v3 since you can achieve everything you could in v2, plus more goodness.
> 
> Thanks,
> 
> Steve Martinelli
> OpenStack Keystone Core
> 
> 
Steve hit the mark dead on. Please aim to use the v3 Keystone API (and if you can't because of a bug or behavior, please let us know before settling on v2 so we can try to fix it).
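For readers unfamiliar with what "moving to v3" means at the wire level, here is a hand-built sketch of the Identity v3 password-auth request body (an illustration, not project code; the user, project, and domain names are placeholders). The key differences from v2 are the explicit domains and the explicit scope block:

```python
# Sketch: the POST body for /v3/auth/tokens (password method).
import json

def v3_password_auth(user, password, project,
                     user_domain="Default", project_domain="Default"):
    """Build the Identity v3 password-auth request body."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            },
            # v3 makes token scoping explicit; v2 hid this behind tenantName.
            "scope": {"project": {"name": project,
                                  "domain": {"name": project_domain}}},
        }
    }

body = v3_password_auth("demo", "secret", "demo")
print(json.dumps(body, indent=2))
```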

--Morgan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/be25ba07/attachment.html>

From juvenboy1987 at gmail.com  Sun Sep  6 16:18:31 2015
From: juvenboy1987 at gmail.com (lu jander)
Date: Mon, 7 Sep 2015 00:18:31 +0800
Subject: [openstack-dev] [sahara] FFE request for scheduler and suspend EDP
 job for sahara
Message-ID: <CAMfz_LOZmrju2eRrQ1ASCK2yjyxa-UhuV=uv3SRrtKkoPDP-bQ@mail.gmail.com>

Hi, Guys

I would like to request an FFE for the scheduler EDP job and suspend EDP job
features for sahara. These patches have been reviewed for a long time, across
many patch sets.

Blueprint:

(1) https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs
(2)
https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs


Spec:

(1) https://review.openstack.org/#/c/175719/
(2) https://review.openstack.org/#/c/198264/


Patch:

(1) https://review.openstack.org/#/c/182310/
(2) https://review.openstack.org/#/c/201448/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/6b2a818c/attachment.html>

From mordred at inaugust.com  Sun Sep  6 16:43:38 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Sun, 06 Sep 2015 12:43:38 -0400
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
Message-ID: <55EC6D3A.8050604@inaugust.com>

On 09/05/2015 06:19 PM, Sean M. Collins wrote:
> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
>> Right, it depends on your perspective of who 'owns' the API. Is it
>> cloud-init or EC2?
>>
>> At this point I would argue that cloud-init is in control because it would
>> be a large undertaking to switch all of the AMI's on Amazon to something
>> else. However, I know Sean disagrees with me on this point so I'll let him
>> reply here.
>
>
> Here's my take:
>
> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
> in both the Neutron and Nova projects should implement all the details of
> the Metadata API that are documented at:
>
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>
> This means that this is a compatibility layer that OpenStack has
> implemented so that users can use appliances, applications, and
> operating system images in both Amazon EC2 and an OpenStack environment.
>
> Yes, we can make changes to cloud-init. However, there is no guarantee
> that all users of the Metadata API are exclusively using cloud-init as
> their client. It is highly unlikely that people are rolling their own
> Metadata API clients, but it's a contract we've made with users. This
> includes transport level details like the IP address that the service
> listens on.
>
> The Metadata API is an established API that Amazon introduced years ago,
> and we shouldn't be "improving" APIs that we don't control. If Amazon
> were to introduce IPv6 support to the Metadata API tomorrow, we would
> naturally implement it exactly the way they implemented it in EC2. We'd
> honor the contract that Amazon made with its users, in our Metadata API,
> since it is a compatibility layer.
>
> However, since they haven't defined transport level details of the
> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
> solution. It is not our API.
>
> The nice thing about config-drive is that we've created a new mechanism
> for bootstrapping instances - by replacing the transport level details
> of the API. Rather than being a link-local address that instances access
> over HTTP, it's a device that guests can mount and read. The actual
> contents of the drive may have a similar schema as the Metadata API, but
> I think at this point we've made enough of a differentiation between the
> EC2 Metadata API and config-drive that I believe the contents of the
> actual drive that the instance mounts can be changed without breaking
> user expectations - since config-drive was developed by the OpenStack
> community. The point being that we call it "config-drive" in
> conversation and our docs. Users understand that config-drive is a
> different feature.

Another great part about config-drive is that it's scalable. At infra's
application scale, we take pains to disable anything in our images that
might want to contact the metadata API, because we're essentially a DDoS
on it.

config-drive being local to the hypervisor host makes it MUCH more
stable at scale.

cloud-init supports config-drive.

If it were up to me, nobody would be enabling the metadata API in new
deployments.

I totally agree that we should not make changes in the metadata API.

> I've had this same conversation about the Security Group API that we
> have. We've named it the same thing as the Amazon API, but then went and
> made all the fields different, inexplicably. Thankfully, it's just the
> names of the fields, rather than being huge conceptual changes.
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>
> Basically, I believe that OpenStack should create APIs that are
> community driven and owned, and that we should only emulate
> non-community APIs where appropriate, and explicitly state that we only
> are emulating them. Putting improvements in APIs that came from
> somewhere else, instead of creating new OpenStack branded APIs is a lost
> opportunity to differentiate OpenStack from other projects, as well as
> Amazon AWS.
>
> Thanks for reading, and have a great holiday.
>

I could not possibly agree more if our brains were physically fused.
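To make the contrast concrete, here is a hedged sketch of the two bootstrap paths the thread keeps comparing — the EC2-compatible HTTP metadata service versus the locally mounted config-drive. The paths and JSON key follow the documented OpenStack layout, but treat this as illustrative, not a complete client:

```python
import json
import os
import urllib.request


def from_metadata_service(base='http://169.254.169.254'):
    """One HTTP round trip per boot against a shared link-local service --
    the path that behaves like a DDoS at large scale."""
    url = base + '/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)


def from_config_drive(mount='/mnt/config'):
    """Purely local read from the mounted drive: no network and no shared
    service, which is why it scales with the hypervisor count."""
    path = os.path.join(mount, 'openstack', 'latest', 'meta_data.json')
    with open(path) as f:
        return json.load(f)
```

Both return the same metadata document; only the transport differs, which is exactly the property that makes config-drive the friendlier option at scale.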


From gal.sagie at gmail.com  Sun Sep  6 16:44:04 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Sun, 6 Sep 2015 19:44:04 +0300
Subject: [openstack-dev] [Neutron] RFE Bugs - Add Tags to resources and Port
	Forwarding
Message-ID: <CAG9LJa4QGVS0qdmgya9wHTtr0591FBt3dh2Sse-bTtpYcgNUNg@mail.gmail.com>

Hello All,

I wanted to remind the community of some RFE bugs that need
review/processing.

1) Add tags to Neutron resources

    RFE: https://bugs.launchpad.net/neutron/+bug/1489291
    Spec: https://review.openstack.org/#/c/216021/

    In terms of the RFE, I think the drivers team discussed it and decided
    to at least move on to the design/spec step (the bug currently doesn't
    reflect that yet).

    Regarding the spec itself, I think we had some good comments on it, and
    they are all addressed in the updated patch (link above).
    I would love to get more comments on it and start working on the
    implementation, and hopefully we can also discuss it at the summit.
    (There is one dependency: I am waiting on some work by Kevin Benton for
    a common Neutron object which the tags implementation should leverage;
    I will of course let him expand on it once he is ready.)

2) Port forwarding

     RFE: https://bugs.launchpad.net/neutron/+bug/1491317

     I have sent an email to the mailing list:

http://lists.openstack.org/pipermail/openstack-dev/2015-September/073461.html

     That email describes all the past efforts to implement this feature. I
     think it is valuable functionality, and all the previous interest in it
     shows that as well.
     I would love to move forward with this too and start drafting a spec,
     and would like the drivers team to look at it and see if I should move
     on to the next step.

Thanks,
Gal.
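As toy sketches only (not the specs, whose details are still under review), here is roughly what the two RFEs would enable — free-form tags on Neutron resources with tag-based list filtering, and floating-IP port forwarding that maps external ports to internal (address, port) pairs:

```python
def filter_by_tags(resources, tags, match_all=True):
    """Keep resources whose 'tags' contain all (or, with match_all=False,
    any) of the requested tags."""
    wanted = set(tags)
    out = []
    for r in resources:
        have = set(r.get('tags', ()))
        if (wanted <= have) if match_all else (wanted & have):
            out.append(r)
    return out


def add_forwarding(table, fip, ext_port, int_ip, int_port):
    """Map (floating IP, external port) -> (internal IP, internal port);
    each external port on a floating IP may be claimed only once."""
    key = (fip, ext_port)
    if key in table:
        raise ValueError('port %d already forwarded on %s' % (ext_port, fip))
    table[key] = (int_ip, int_port)


nets = [{'name': 'red', 'tags': ['prod', 'dmz']},
        {'name': 'blue', 'tags': ['dev']}]
fwd = {}
add_forwarding(fwd, '203.0.113.5', 8022, '10.0.0.4', 22)
```

The real implementations would of course live behind the Neutron API and agents; the point here is just the data model each feature adds.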
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/ed987953/attachment.html>

From duncan.thomas at gmail.com  Sun Sep  6 19:31:04 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Sun, 6 Sep 2015 22:31:04 +0300
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55EA56BC.70408@redhat.com>
References: <55E9A4F2.5030809@inaugust.com>
	<55EA56BC.70408@redhat.com>
Message-ID: <CAOyZ2aEKoAb9m=qz69u93G9cbkS6toybz7gdv9JR97jv9RVMgA@mail.gmail.com>

On 5 Sep 2015 05:47, "Adam Young" <ayoung at redhat.com> wrote:

> Then let me hijack:
>
> Policy is still broken.  We need the pieces of Dynamic policy.
>
> I am going to call for a cross project policy discussion for the upcoming
summit.  Please, please, please all the projects attend. The operators have
made it clear they need better policy support.

Can you give us a heads-up on the perceived shortcomings, please, together
with an overview of any proposed changes? Introducing something in advance
over email, so that people can ruminate on the details and be better
prepared to discuss them, is probably more productive than expecting tired,
jet-lagged people to think on their feet in a session.

In general, I think the practice of introducing new things at design
summits, rather than letting people prepare, is slowing us down as a
community.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/4991026a/attachment.html>

From fungi at yuggoth.org  Sun Sep  6 20:00:29 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 6 Sep 2015 20:00:29 +0000
Subject: [openstack-dev] [all] Criteria for applying
 vulnerability:managed tag
In-Reply-To: <55E73628.5000608@redhat.com>
References: <20150901185638.GB7955@yuggoth.org> <55E73628.5000608@redhat.com>
Message-ID: <20150906200029.GU7955@yuggoth.org>

On 2015-09-02 17:47:20 +0000 (+0000), Tristan Cacqueray wrote:
[...]
> Any programming language supported by the OpenStack project should/could
> also be accepted for vulnerability management.
> As long as there is a way to test a patch, I think the VMT can support
> other languages like Go or Puppet.

Okay, so for me that implies an extra criterion: the repos for the covered
deliverable should have testing. Great point; it seems pretty important,
really, and was absent from my initial list.

> The risk is to divide downstream communities, and managing different
> lists sounds like overkill for now. One improvement would be to maintain
> that list publicly like xen do for their pre-disclosure list:
>   http://www.xenproject.org/security-policy.html
[...]
> With a public stakeholder list, we can clarify our vmt-process to be
> directly usable without vmt supervision.
[...]

Unlike many communities, our commercial popularity and corresponding
desire from many vendors to make their involvement in OpenStack as
obvious as possible leads to a bit of a "me too" situation whenever
we create public lists of organizations. I'm all for making our
stakeholder *criteria* clearly documented, but worry that turning
the list of who gets advance notification of embargoed vulnerability
fixes into a public roster will put undue pressure on vendors to be
seen as one of the "privileged few" (creating additional work for
the VMT and potentially resulting in downstream stakeholders who
don't actually intend to make use of the notification and so
needlessly increase the risk of leaks and premature disclosure).

An alternative solution we've discussed to make reaching downstream
stakeholders easier for our developers is adding them to a private
mailing list reserved only for advance notification of embargoed
vulnerability fixes. The VMT could control manual subscription of
new stakeholders and moderate posts to ensure that subsequent
discussion is pushed back to the embargoed bug reports themselves
(we should also probably create a corresponding stakeholders group
in the bug tracker so they can be subscribed to private bugs at the
same time advance notifications are sent, and start including the
bug links in those downstream notifications).
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 949 bytes
Desc: Digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/8d628cea/attachment.pgp>

From blak111 at gmail.com  Sun Sep  6 20:25:43 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Sun, 6 Sep 2015 13:25:43 -0700
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
Message-ID: <CAO_F6JO+ZnpW61XoipHu-hxsa6TBStiynFO0Kh+GFvMNN8Ni0g@mail.gmail.com>

So it's been pointed out that http://169.254.169.254/openstack is completely
OpenStack-invented. I don't quite understand how that's not violating the
contract you said we have with end users about EC2 compatibility under the
restriction of 'no new stuff'.

If we added an IPv6 endpoint that the metadata service listens on, it would
just be another place that non-cloud-init clients don't know how to talk
to. It's not going to break our compatibility with any clients that connect
to the IPv4 address.
On Sep 5, 2015 3:22 PM, "Sean M. Collins" <sean at coreitpro.com> wrote:

> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
> > Right, it depends on your perspective of who 'owns' the API. Is it
> > cloud-init or EC2?
> >
> > At this point I would argue that cloud-init is in control because it
> would
> > be a large undertaking to switch all of the AMI's on Amazon to something
> > else. However, I know Sean disagrees with me on this point so I'll let
> him
> > reply here.
>
>
> Here's my take:
>
> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
> in both the Neutron and Nova projects should implement all the details of
> the Metadata API that are documented at:
>
>
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>
> This means that this is a compatibility layer that OpenStack has
> implemented so that users can use appliances, applications, and
> operating system images in both Amazon EC2 and an OpenStack environment.
>
> Yes, we can make changes to cloud-init. However, there is no guarantee
> that all users of the Metadata API are exclusively using cloud-init as
> their client. It is highly unlikely that people are rolling their own
> Metadata API clients, but it's a contract we've made with users. This
> includes transport level details like the IP address that the service
> listens on.
>
> The Metadata API is an established API that Amazon introduced years ago,
> and we shouldn't be "improving" APIs that we don't control. If Amazon
> were to introduce IPv6 support to the Metadata API tomorrow, we would
> naturally implement it exactly the way they implemented it in EC2. We'd
> honor the contract that Amazon made with its users, in our Metadata API,
> since it is a compatibility layer.
>
> However, since they haven't defined transport level details of the
> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
> solution. It is not our API.
>
> The nice thing about config-drive is that we've created a new mechanism
> for bootstrapping instances - by replacing the transport level details
> of the API. Rather than being a link-local address that instances access
> over HTTP, it's a device that guests can mount and read. The actual
> contents of the drive may have a similar schema as the Metadata API, but
> I think at this point we've made enough of a differentiation between the
> EC2 Metadata API and config-drive that I believe the contents of the
> actual drive that the instance mounts can be changed without breaking
> user expectations - since config-drive was developed by the OpenStack
> community. The point being that we call it "config-drive" in
> conversation and our docs. Users understand that config-drive is a
> different feature.
>
> I've had this same conversation about the Security Group API that we
> have. We've named it the same thing as the Amazon API, but then went and
> made all the fields different, inexplicably. Thankfully, it's just the
> names of the fields, rather than being huge conceptual changes.
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>
> Basically, I believe that OpenStack should create APIs that are
> community driven and owned, and that we should only emulate
> non-community APIs where appropriate, and explicitly state that we only
> are emulating them. Putting improvements in APIs that came from
> somewhere else, instead of creating new OpenStack branded APIs is a lost
> opportunity to differentiate OpenStack from other projects, as well as
> Amazon AWS.
>
> Thanks for reading, and have a great holiday.
>
> --
> Sean M. Collins
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/0e0e2ce8/attachment.html>

From everett.toews at RACKSPACE.COM  Sun Sep  6 21:24:59 2015
From: everett.toews at RACKSPACE.COM (Everett Toews)
Date: Sun, 6 Sep 2015 21:24:59 +0000
Subject: [openstack-dev] [api][keystone][openstackclient] Standards for
 object name attributes and filtering
In-Reply-To: <CAC=h7gVJvofiHy9GoZNw+UQYuat4jAvBgw4QeCxGdr6JcPM6BQ@mail.gmail.com>
References: <F74A5456-FEA8-438D-B68A-5C6050F8C1B2@linux.vnet.ibm.com>
 <438ACB3F-AF51-4873-9BE8-5AA8420A7AEA@rackspace.com>
 <C5A0092C63E939488005F15F736A81120A8B09E8@SHSMSX103.ccr.corp.intel.com>
 <55E47864.3010903@redhat.com>
 <CAC=h7gVJvofiHy9GoZNw+UQYuat4jAvBgw4QeCxGdr6JcPM6BQ@mail.gmail.com>
Message-ID: <4622AC87-4585-4C41-B48C-E6C2CCC238A1@rackspace.com>

On Sep 1, 2015, at 8:36 PM, Dolph Mathews <dolph.mathews at gmail.com> wrote:

> Does anyone have an example of an API outside of OpenStack that would return 400 in this situation (arbitrary query string parameters)? Based on my past experience, I'd expect them to be ignored, but I can't think of a reason why a 400 would be a bad idea (but I suspect there's some prior art / discussion out there).

Good question! I think it's a great idea to look outside OpenStack-land to see what's going on in the wider world of APIs.

One example is listing containers in the Docker API [1]. A number of other resources will also return a 400 if a bad parameter is used.

Everett

[1] https://docs.docker.com/reference/api/docker_remote_api_v1.20/#list-containers
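A toy sketch of the strict-validation behaviour under discussion — reject a request outright with 400 when it carries a query parameter the API does not define, instead of silently ignoring it. The parameter names here are invented for illustration, not taken from any OpenStack or Docker resource:

```python
# Parameters this hypothetical list endpoint actually understands.
KNOWN_PARAMS = {'limit', 'marker', 'sort_key', 'sort_dir'}


def validate_query(params):
    """Return an (HTTP status, body) pair for a parsed query-string dict,
    naming every unrecognized parameter in the 400 body so the caller
    can fix the typo instead of silently getting unfiltered results."""
    unknown = set(params) - KNOWN_PARAMS
    if unknown:
        return 400, {'error': 'unknown parameters: %s'
                              % ', '.join(sorted(unknown))}
    return 200, {'results': []}
```

The argument for this over ignore-and-continue is that a misspelled filter (`colour` for `color`, say) fails loudly at the client instead of quietly returning the wrong result set.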



From stevemar at ca.ibm.com  Sun Sep  6 23:27:55 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Sun, 6 Sep 2015 19:27:55 -0400
Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
 FYI: Defaulting to Keystone v3 API
In-Reply-To: <1299339546.18129044.1441528810582.JavaMail.zimbra@redhat.com>
References: <CAB1EZBomKom6_vXb36Yu=2v1EvYYMyX9Ufa9WzZuyvC6TAGFAQ@mail.gmail.com>
 <201509041638.t84GcbI5013710@d03av05.boulder.ibm.com>
 <1299339546.18129044.1441528810582.JavaMail.zimbra@redhat.com>
Message-ID: <201509062328.t86NS5xN017975@d01av02.pok.ibm.com>


So, digging into this a bit more, the failures that Barbican and Ironic saw
were very different. The Barbican team managed to fix their issues by
replacing their `keystone endpoint-create` and such commands with the
`openstack endpoint create` alternatives.

Looking at why Ironic failed led me down a rabbit hole I wish I hadn't gone
down. There are no plugins managed by Ironic, like there were in the
Barbican case, so there were no easy commands to replace. Instead it was
failing on a few Swift-related commands, as evidenced in the log that Lucas
copied:

http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
2015-09-04 09:04:55.527 | + swift post -m 'Temp-URL-Key: secretkey'
2015-09-04 09:04:55.994 | Authorization Failure. Authorization Failed: The
resource could not be found. (HTTP 404)

Jamie's patch sets everything to v3, so why is it failing now? I tried this
in my own environment to make sure:
steve at steve-vm:~/devstack$ export OS_IDENTITY_API_VERSION=3
steve at steve-vm:~/devstack$ export OS_AUTH_URL='http://10.0.2.15:5000/v3'
steve at steve-vm:~/devstack$ swift stat
Authorization Failure. Authorization Failed: The resource could not be
found. (HTTP 404) (Request-ID: req-63bee2a6-ca9d-49f4-baad-b6a1eef916df)
steve at steve-vm:~/devstack$ swift --debug stat
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
http://10.0.2.15:5000/v3/tokens

And I saw that swiftclient was creating a v2 client instance against a v3
endpoint, which is no bueno.

As I continued to dig into this, it seems swiftclient doesn't honor the
OS_IDENTITY_API_VERSION flag that was set; instead it relies on
--auth-version or OS_AUTH_VERSION:
steve at steve-vm:~/devstack$ export OS_AUTH_VERSION=3
steve at steve-vm:~/devstack$ swift --debug stat
DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to
http://10.0.2.15:5000/v3/auth/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection
(1): 10.0.2.15
.... it continued and was happy

So, we could easily propose Jamie's patch again, but this time also set
OS_AUTH_VERSION, or we could fix swiftclient to honor the
OS_IDENTITY_API_VERSION flag. I'd prefer doing the former first to get
Jamie's patch back in, and the latter as the long-term plan, but looking at
the code, there doesn't seem to be a plan to deprecate OS_AUTH_VERSION.
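The mismatch boils down to which environment variable the client consults. A minimal sketch of the observed behaviour (simplified — the real swiftclient has more fallbacks and a --auth-version flag with the same effect):

```python
import os


def pick_auth_version(environ=None):
    """Return the auth version a swiftclient-style reader would use:
    only OS_AUTH_VERSION is consulted, and OS_IDENTITY_API_VERSION is
    silently ignored, defaulting to the v2 plugin."""
    environ = os.environ if environ is None else environ
    return environ.get('OS_AUTH_VERSION', '2.0')


env = {'OS_IDENTITY_API_VERSION': '3'}  # what the DevStack patch exported
pick_auth_version(env)                  # still picks the v2 plugin
env['OS_AUTH_VERSION'] = '3'            # the workaround described above
```

Which is exactly why exporting OS_AUTH_VERSION=3 alongside the other variables makes the `swift stat` call succeed.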

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Jamie Lennox <jamielennox at redhat.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	2015/09/06 04:41 AM
Subject:	Re: [openstack-dev]
            [DevStack][Keystone][Ironic][Swit][Barbican] FYI: Defaulting to
            Keystone v3 API



Note that fixing this does not mean Ironic has to support Keystone v3 (but
please fix that too). It just means that somewhere in Ironic's gate it is
doing something like an "openstack user create" or a role assignment
directly with the OSC tool, assuming v2, rather than using the helpers that
DevStack provides like get_or_create_user. Keystone v2 still exists and is
running; we just changed the default API for DevStack OSC commands.

I'm kind of annoyed we reverted this patch (though I was surprised to see
it merge recently, as it's been around for a while), as it was known to
possibly break people, which is why it was up for discussion at the QA
meetings. However, given that DevStack has plugins and there is third-party
CI, there is absolutely no way we can make sure that everyone has fixed
this; we just need to make a breaking change. Granted, coinciding with
freeze is unfortunate. Luckily this doesn't affect most people, because
they use the DevStack helper functions, and for those that don't it's an
almost trivial fix to start using them.

Jamie

----- Original Message -----
> From: "Steve Martinelli" <stevemar at ca.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
> Sent: Saturday, September 5, 2015 2:38:27 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
FYI: Defaulting to Keystone v3 API
>
>
>
> This change also affected Barbican too, but they quickly tossed up a
patch to
> resolve the gate failures [1]. As much as I would like DevStack and
> OpenStackClient to default to Keystone's v3 API, we should - considering
how
> close we are in the schedule, revert the initial patch (which I see
sdague
> already did). We need to determine which projects are hosting their own
> devstack plugin scripts and update those first before bringing back the
> original patch.
>
> https://review.openstack.org/#/c/220396/
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
> Lucas Alvares Gomes ---2015/09/04 10:07:51 AM---Hi, This is email is just
a
> FYI: Recently the patch [1] got merged in
>
> From: Lucas Alvares Gomes <lucasagomes at gmail.com>
> To: OpenStack Development Mailing List
<openstack-dev at lists.openstack.org>
> Date: 2015/09/04 10:07 AM
> Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swit] FYI:
Defaulting
> to Keystone v3 API
>
>
>
>
> Hi,
>
> This email is just an FYI: recently the patch [1] got merged in
> DevStack and broke the Ironic gate [2]. I haven't had time to dig into
> the problem yet, so I reverted the patch [3] to unblock our gate.
>
> The work to convert to v3 seems to be close, but not quite there yet,
> so I just want to bring broader attention to it with this email.
>
> Also, the Ironic job that is currently running in the DevStack gate is
> not testing Ironic with the Swift module; there's a patch [4] changing
> that, so I hope we will be able to identify the problem before we break
> things next time.
>
> [1] https://review.openstack.org/#/c/186684/
> [2]
>
http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994

> [3] https://review.openstack.org/220532
> [4] https://review.openstack.org/#/c/220516/
>
> Cheers,
> Lucas
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/c759f352/attachment.html>

From hongbin.lu at huawei.com  Sun Sep  6 23:47:22 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Sun, 6 Sep 2015 23:47:22 +0000
Subject: [openstack-dev] [magnum] Steps to upload magnum images
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCCF310@SZXEMI503-MBS.china.huawei.com>

Hi team,

As you may know, Magnum is tested with pre-built Fedora Atomic images. Basically, these images are standard Atomic images with the k8s packages pre-installed. The images can be downloaded from fedorapeople.org [1]. In most cases, you will be able to test Magnum by using the images there. If you are not satisfied with the existing images, you are welcome to build a new image and share it with the team. Here [2] are the instructions for how to build a new Atomic image. After you successfully build an image, you may want to upload it to the public file server, which is what I am going to talk about.

Below are the steps to upload an image:

1.       Register an account here: https://admin.fedoraproject.org/accounts/

2.       Sign the contributor agreement (On the home page after you login: "My Account" -> "Contributor Agreement").

3.       Upload your public key ("My Account" -> "Public SSH Key").

4.       Apply to join the magnum group ("Join a group" -> search "magnum" -> "apply"). If you cannot find the "apply" link under "Status" (I didn't), you can wait a few minutes or skip this step and ask Steven Dake to add you to the group instead.

5.       Ping Steven Dake (stdake at cisco.com) to approve your application.

6.       After 30-60 minutes, you should be able to SSH to the file server (ssh <yourname>@fedorapeople.org). Our images are stored in /srv/groups/magnum.

Notes on using the file server:

*         Avoid typing "sudo ...".

*         Activities there are logged, so don't expect any privacy.

*         Not all contents are allowed. Please make sure your contents are acceptable before uploading them.

[1] https://fedorapeople.org/groups/magnum/
[2] https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-build-atomic-image.rst
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/d381a8d1/attachment.html>

From jim at jimrollenhagen.com  Mon Sep  7 00:02:24 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Sun, 6 Sep 2015 17:02:24 -0700
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
	cloud-init IPv6 support
In-Reply-To: <55EC6D3A.8050604@inaugust.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <55EC6D3A.8050604@inaugust.com>
Message-ID: <782B38CA-9629-48D0-BC4F-79F15322C494@jimrollenhagen.com>



> On Sep 6, 2015, at 09:43, Monty Taylor <mordred at inaugust.com> wrote:
> 
>> On 09/05/2015 06:19 PM, Sean M. Collins wrote:
>>> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
>>> Right, it depends on your perspective of who 'owns' the API. Is it
>>> cloud-init or EC2?
>>> 
>>> At this point I would argue that cloud-init is in control because it would
>>> be a large undertaking to switch all of the AMI's on Amazon to something
>>> else. However, I know Sean disagrees with me on this point so I'll let him
>>> reply here.
>> 
>> 
>> Here's my take:
>> 
>> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
>> in both the Neutron and Nova projects should implement all the details of
>> the Metadata API that are documented at:
>> 
>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>> 
>> This means that this is a compatibility layer that OpenStack has
>> implemented so that users can use appliances, applications, and
>> operating system images in both Amazon EC2 and an OpenStack environment.
>> 
>> Yes, we can make changes to cloud-init. However, there is no guarantee
>> that all users of the Metadata API are exclusively using cloud-init as
>> their client. It is highly unlikely that people are rolling their own
>> Metadata API clients, but it's a contract we've made with users. This
>> includes transport level details like the IP address that the service
>> listens on.
>> 
>> The Metadata API is an established API that Amazon introduced years ago,
>> and we shouldn't be "improving" APIs that we don't control. If Amazon
>> were to introduce IPv6 support to the Metadata API tomorrow, we would
>> naturally implement it exactly the way they implemented it in EC2. We'd
>> honor the contract that Amazon made with its users, in our Metadata API,
>> since it is a compatibility layer.
>> 
>> However, since they haven't defined transport level details of the
>> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
>> solution. It is not our API.
>> 
>> The nice thing about config-drive is that we've created a new mechanism
>> for bootstrapping instances - by replacing the transport level details
>> of the API. Rather than being a link-local address that instances access
>> over HTTP, it's a device that guests can mount and read. The actual
>> contents of the drive may have a similar schema as the Metadata API, but
>> I think at this point we've made enough of a differentiation between the
>> EC2 Metadata API and config-drive that I believe the contents of the
>> actual drive that the instance mounts can be changed without breaking
>> user expectations - since config-drive was developed by the OpenStack
>> community. The point being that we call it "config-drive" in
>> conversation and our docs. Users understand that config-drive is a
>> different feature.
> 
> Another great part about config-drive is that it's scalable. At infra's application scale, we take pains to disable anything in our images that might want to contact the metadata API, because we're essentially a DDoS on it.

So, I tend to think a simple API service like this should never be hard to scale. Put a bunch of hosts behind a load balancer, boom, done. Even 1000 requests/s shouldn't be hard, though it may require many hosts, and that's far beyond what infra would hit today. 

The one problem I have with config-drive is that it is static. I'd love for systems like cloud-init, glean, etc. to be able to see changes to mounted disks, attached networks, etc. Attaching things after the fact isn't uncommon, and making the user configure the thing is a terrible experience. :(
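For reference, the config-drive-first behaviour discussed above can be sketched roughly like this. This is a hedged illustration, not cloud-init's or glean's actual code; the path and URL follow the usual OpenStack conventions, and the reader callables are injected so the logic is testable:

```python
import json

# Sketch of how a guest agent might prefer the static config-drive and
# fall back to the network metadata service. Hypothetical, not the real
# cloud-init/glean implementation.

CONFIG_DRIVE_PATH = "/mnt/config/openstack/latest/meta_data.json"
METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"


def load_metadata(read_file, http_get):
    """Return instance metadata, preferring the local config-drive."""
    try:
        return json.loads(read_file(CONFIG_DRIVE_PATH))
    except (OSError, ValueError):
        # No config-drive (or unreadable): fall back to the network API,
        # which is the path that loads the metadata service at scale.
        return json.loads(http_get(METADATA_URL))


if __name__ == "__main__":
    fake_disk = {CONFIG_DRIVE_PATH: '{"uuid": "abc", "name": "vm1"}'}
    md = load_metadata(lambda p: fake_disk[p], lambda u: '{"uuid": "net"}')
    print(md["name"])  # prints vm1
```

The static-drive problem Jim raises is visible here: once the drive contents are read, nothing refreshes them; only the HTTP path can reflect later changes.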

// jim 

> 
> config-drive being local to the hypervisor host makes it MUCH more stable at scale.
> 
> cloud-init supports config-drive
> 
> If it were up to me, nobody would be enabling the metadata API in new deployments.
> 
> I totally agree that we should not make changes in the metadata API.
> 
>> I've had this same conversation about the Security Group API that we
>> have. We've named it the same thing as the Amazon API, but then went and
>> made all the fields different, inexplicably. Thankfully, it's just the
>> names of the fields, rather than being huge conceptual changes.
>> 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>> 
>> Basically, I believe that OpenStack should create APIs that are
>> community driven and owned, and that we should only emulate
>> non-community APIs where appropriate, and explicitly state that we only
>> are emulating them. Putting improvements in APIs that came from
>> somewhere else, instead of creating new OpenStack branded APIs is a lost
>> opportunity to differentiate OpenStack from other projects, as well as
>> Amazon AWS.
>> 
>> Thanks for reading, and have a great holiday.
> 
> I could not possibly agree more if our brains were physically fused.
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ken1ohmichi at gmail.com  Mon Sep  7 00:27:09 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Mon, 7 Sep 2015 09:27:09 +0900
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <20150904144538.GI3226@crypt>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=KcaOzvM_pzL4Ea+g@mail.gmail.com>
 <20150904144538.GI3226@crypt>
Message-ID: <CAA393vitrR+MhU+sLKCTiQtdLvq8qVUi6g5Vo-2bhoPeG50MWA@mail.gmail.com>

Hi Andrew,

2015-09-04 23:45 GMT+09:00 Andrew Laski <andrew at lascii.com>:
>>>
>>> Now we are discussing this on https://review.openstack.org/#/c/217727/
>>> for allowing out-of-tree scheduler-hints.
>>> When we wrote API schema for scheduler-hints, it was difficult to know
>>> what are available API parameters for scheduler-hints.
>>> Current API schema exposes them and I guess that is useful for API users
>>> also.
>>>
>>> One idea is that: How about auto-extending scheduler-hint API schema
>>> based on loaded schedulers?
>>> Now API schemas of "create/update/resize/rebuild a server" APIs are
>>> auto-extended based on loaded extensions by using stevedore
>>> library[1].
>>> I guess we can apply the same way for scheduler-hints also in long-term.
>>> Each scheduler needs to implement a method which returns available API
>>> parameter formats and nova-api tries to get them then extends
>>> scheduler-hints API schema with them.
>>> That means out-of-tree schedulers also will be available if they
>>> implement the method.
>>> # In short-term, I can see "blocking additionalProperties" validation
>>> disabled by the way.
>>
>>
>> https://review.openstack.org/#/c/220440 is a prototype for the above idea.
>
>
> I like the idea of providing strict API validation for the scheduler hints
> if it accounts for out of tree extensions like this would do.  I do have a
> slight concern about how this works in a world where the scheduler does
> eventually get an HTTP interface that Nova uses and the code isn't
> necessarily accessible, but that can be worried about later.
>
> This does mean that the scheduler hints are not controlled by microversions
> though, since we don't have a mechanism for out of tree extensions to signal
> their presence that way.  And even if they could it would still mean that
> identical microversions on different clouds wouldn't offer the same hints.
> If we're accepting of that, which isn't really any different than having
> "additionalProperties: True", then this seems reasonable to me.

In the short term, yes; that is almost the same as "additionalProperties: True".
But in the long term, no. Describing each scheduler-hint parameter
with JSON-Schema will be more useful than "additionalProperties:
True", because API parameters will be exposed in JSON-Schema format
via JSON-Home or something similar.
If we allow customization of scheduler-hints (new filters,
out-of-tree filters) without microversions, API users cannot know the
available scheduler-hint parameters from the microversion number.
It will be helpful for those API users if nova can provide the available
parameters via JSON-Home or something similar.
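A tiny sketch of the auto-extension idea being discussed: each loaded filter (in-tree or out-of-tree, e.g. discovered via stevedore) exposes a JSON-Schema fragment for the hints it understands, and the API layer merges them into one strict schema instead of falling back to "additionalProperties: True". The method name and filter classes here are hypothetical, not Nova's actual interfaces:

```python
# Hypothetical sketch: merge per-filter hint schemas into one strict
# scheduler-hints schema, so unknown hints are rejected but out-of-tree
# filters can still register theirs.

def build_hints_schema(filters):
    """Merge JSON-Schema fragments contributed by each loaded filter."""
    schema = {
        "type": "object",
        "properties": {},
        "additionalProperties": False,  # strict: no unregistered hints
    }
    for f in filters:
        # get_hint_schema() is an assumed method name, not a real Nova API.
        schema["properties"].update(f.get_hint_schema())
    return schema


class SameHostFilter:       # stand-in for an in-tree scheduler filter
    def get_hint_schema(self):
        return {"same_host": {"type": "array", "items": {"type": "string"}}}


class CustomRackFilter:     # stand-in for an out-of-tree filter
    def get_hint_schema(self):
        return {"rack": {"type": "string"}}


schema = build_hints_schema([SameHostFilter(), CustomRackFilter()])
print(sorted(schema["properties"]))  # prints ['rack', 'same_host']
```

The merged schema is also what could be published via JSON-Home or similar, so users can discover the hints a particular cloud accepts.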

Thanks
Ken Ohmichi


From douglas.mendizabal at rackspace.com  Mon Sep  7 00:43:27 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Sun, 6 Sep 2015 19:43:27 -0500
Subject: [openstack-dev] [barbican] No IRC meeting tomorrow September 7
Message-ID: <55ECDDAF.80704@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Barbicaneers,

We'll be skipping the weekly IRC meeting tomorrow since I expect most
folks will be out due to the US holiday.

Thanks,

Douglas Mendizábal
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV7N2vAAoJEB7Z2EQgmLX7uGIP/1mpkl9rtJchktROwhMkO/Kk
OI3oMkVyOLOQwje7lwYjF8DR55cV/wrE+kYUyJlNnkELH5+CUskBWERN+fFOjyJ7
oCbUK3ISg9QIJf+wyov3Wbzyp+vFwLFDzHucz6rfb2MJCwNQxUH9lujBCQjdGrJC
MBV0IWQ9A8KE6yNj5Sk0S8T1ZnkPTGKojKNv8rY2NQE9O1RdPIH7IxeYPfA6Z4xY
9Z/5KEb1BpbLPnlAKxF3gLb1H+FcjF+QiSdaek3t+6QOVJ1dTjA53bbxETrWlrcU
mjDGm99bixZxlON1Pce4fg/EtshAI+LPKK9epWh2yQ5M73JfXQnm6/eFVDecmGPI
kgYnUCUb8k05ag8qN3Zrgd51SyhrSsQg2IvV6I7/mfbChFuQlg/pPLmlCATSr4Kj
zPSylNocoT5kqXAAdqjYMsXmIgyCAMEo7q5xH7ZbIC3SWdDDoQVOWrEgbo6mHfIW
aemFB2nS+ybwk4UvcYJTsrLXU8xEGAYYLJaWWHjzbd5Lt1C502rQGED2T6YkwSJY
+7Udct8gTCTxWuQykBbXgGBh2RjrcAj+ArQhUJpluzgyolNmYILOpRTGo/HhwKk9
CqTlm/R+EcDEsxpp3JmhxwwpxCc9gMcU5oUWpHcWKkyqAW0XpE3iS3nXm2YNeCXw
XI2iForVuqYIZYb6XHXT
=T39J
-----END PGP SIGNATURE-----


From emilien.macchi at gmail.com  Mon Sep  7 00:59:11 2015
From: emilien.macchi at gmail.com (Emilien Macchi)
Date: Sun, 6 Sep 2015 20:59:11 -0400
Subject: [openstack-dev] [puppet] Liberty Sprint Retrospective
Message-ID: <CAN7WfkJopv0vNRmzeixmov1LMWS-icAXm7V9F47t3gOKQ47muA@mail.gmail.com>

Hi,

With the goal of continually improving the way we work together, I would like
to run a Sprint Retrospective on what happened last week.

The first step is to gather data on this etherpad:
https://etherpad.openstack.org/p/puppet-liberty-sprint-retrospective
The second step, which we will probably do during our weekly meeting, will
be to fill in the "Generate insights" and "Decide what to do" sections.
Then I'll write a summary of our thoughts and close the retrospective with
some documentation that will help us make the next time even better.

Feel free to participate to this discussion, any feedback is welcome,
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/597bfba9/attachment.html>

From matt at mattfischer.com  Mon Sep  7 01:16:58 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Sun, 6 Sep 2015 19:16:58 -0600
Subject: [openstack-dev] [puppet] Liberty Sprint Retrospective
In-Reply-To: <CAN7WfkJopv0vNRmzeixmov1LMWS-icAXm7V9F47t3gOKQ47muA@mail.gmail.com>
References: <CAN7WfkJopv0vNRmzeixmov1LMWS-icAXm7V9F47t3gOKQ47muA@mail.gmail.com>
Message-ID: <CAHr1CO-hMFQD0ntzqrBJPjuzyjrwAC=e8wMAqeWtxoCB9ZdhDA@mail.gmail.com>

I've updated the bug triage portion but tomorrow is a US holiday so you may
not see much traction there until Tuesday.

On Sun, Sep 6, 2015 at 6:59 PM, Emilien Macchi <emilien.macchi at gmail.com>
wrote:

> Hi,
>
> With the goal of continually improving the way we work together, I would
> like to run a Sprint Retrospective on what happened last week.
>
> The first step is to gather data on this etherpad:
> https://etherpad.openstack.org/p/puppet-liberty-sprint-retrospective
> The second step, which we will probably do during our weekly meeting, will
> be to fill in the "Generate insights" and "Decide what to do" sections.
> Then I'll write a summary of our thoughts and close the retrospective with
> some documentation that will help us make the next time even better.
>
> Feel free to participate to this discussion, any feedback is welcome,
> --
> Emilien Macchi
>
>
>

From emilien.macchi at gmail.com  Mon Sep  7 01:29:17 2015
From: emilien.macchi at gmail.com (Emilien Macchi)
Date: Sun, 6 Sep 2015 21:29:17 -0400
Subject: [openstack-dev] [puppet] Tokyo Summit
Message-ID: <CAN7WfkJdju3ii8pOpmOGwZQrQowM+wKBdXjVfidKhcAgBiq-hA@mail.gmail.com>

Hi,

I created an etherpad to gather all the information about the Tokyo Summit [1].
You can see the resources we will have at our disposal, as well as a "Topics" section.
Feel free to add anything you would like to discuss.

During the next weekly meetings, we will iterate on the etherpad to make sure
we use our few slots efficiently.

[1] https://etherpad.openstack.org/p/HND-puppet

Looking forward to seeing you there,
-- 
Emilien Macchi

From wanghua.humble at gmail.com  Mon Sep  7 02:00:43 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Mon, 7 Sep 2015 10:00:43 +0800
Subject: [openstack-dev] [keystone]how to get service_catalog
Message-ID: <CAH5-jC-L8six0MCQupX=0g-xA6DBAMGHRMLLEdrjAouhb377sg@mail.gmail.com>

Hi all,

When I use a token to init a keystoneclient and try to get the service_catalog
from it, an error occurs. I found that keystone doesn't return a service_catalog
when we authenticate with a token. Is there a way to get the service_catalog by
token? In magnum, we currently use a trick: we init a keystoneclient with the
service_catalog that is contained in the token_info returned by
keystonemiddleware in the auth_ref parameter.

I would like a way to get the service_catalog by token. Or can we init a
keystoneclient directly from the token_info returned by keystonemiddleware?

Regards,
Wanghua

From germy.lure at gmail.com  Mon Sep  7 02:21:50 2015
From: germy.lure at gmail.com (Germy Lure)
Date: Mon, 7 Sep 2015 10:21:50 +0800
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAG9LJa40-NM1LTJOZxuTa27W6LyZgpJwb7D4XVcqoiH63GaASg@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
 <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>
 <CAG9LJa40-NM1LTJOZxuTa27W6LyZgpJwb7D4XVcqoiH63GaASg@mail.gmail.com>
Message-ID: <CAEfdOg0oUVpDCMU6Ko59MvkyqdzFnxOAWqZpk1QF-eYbZ_ebeA@mail.gmail.com>

Hi Gal,

I'm sorry for my poor English. Let me try again.

What an operator wants to access is several related instances, not just one,
or one at a time. The use case is periodic checks and maintenance. RELATED
means the instances may be in one subnet, or one network, or on one host. The
host case is similar to accessing the Docker containers on a host, as you
mentioned before.

Via the API you mentioned, a user must ssh into an instance and then invoke
the API to update the IP address and port, or even create a new PF entry, to
access another one. It would be a nightmare for a VPC operator who owns so
many instances.

In a word, I think the "inside_addr" should be a "subnet" or "host".

Hope this is clear enough.

Germy
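For readers following along, the port-forwarding feature discussed in this thread boils down to DNAT rules on the router's external leg. A rough illustration of the mapping involved is below; the namespace name, addresses, ports, and rule layout are all hypothetical, not the actual Neutron L3 agent implementation:

```python
# Rough illustration of what a port-forwarding entry amounts to on the
# network node: a DNAT rule in the router namespace mapping
# (floating IP, external port) -> (instance IP, internal port).
# All names and addresses here are made up for the example.

def dnat_rule(router_ns, ext_ip, ext_port, int_ip, int_port, proto="tcp"):
    """Build the iptables command string for one forwarding entry."""
    return (
        f"ip netns exec {router_ns} iptables -t nat -A PREROUTING "
        f"-d {ext_ip} -p {proto} --dport {ext_port} "
        f"-j DNAT --to-destination {int_ip}:{int_port}"
    )


# One floating IP (the router's external gateway IP) fanning out to
# several instances by port number:
rules = [
    dnat_rule("qrouter-1234", "203.0.113.10", 2201, "10.0.0.5", 22),
    dnat_rule("qrouter-1234", "203.0.113.10", 2202, "10.0.0.6", 22),
]
for r in rules:
    print(r)
```

Germy's point can be read against this sketch: with per-instance entries like these, reaching twenty subnets' worth of instances means managing many individual mappings, hence the suggestion that "inside_addr" cover a subnet or host.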

On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie <gal.sagie at gmail.com> wrote:

> Hi Germy,
>
> I am not sure i understand what you mean, can you please explain it
> further?
>
> Thanks
> Gal.
>
> On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.lure at gmail.com> wrote:
>
>> Hi, Gal
>>
>> Thank you for bringing this up. But I have some suggestions for the API.
>>
>> An operator or some other component wants to reach several VMs related
>> NOT only one or one by one. Here, RELATED means that the VMs are in one
>> subnet or network or a host(similar to reaching dockers on a host).
>>
>> Via the API you mentioned, user must ssh one VM and update even delete
>> and add PF to ssh another. To a VPC(with 20 subnets?) admin, it's totally a
>> nightmare.
>>
>> Germy
>>
>>
>> On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>
>>> Hello All,
>>>
>>> I have searched and found many past efforts to implement port forwarding
>>> in Neutron.
>>> I have found two incomplete blueprints [1], [2] and an abandoned patch
>>> [3].
>>>
>>> There is even a project in Stackforge [4], [5] that claims
>>> to implement this, but the L3 parts in it seem older than the current
>>> master.
>>>
>>> I have recently come across this requirement for various use cases, one
>>> of them is
>>> providing feature compliance with Docker port-mapping feature (for
>>> Kuryr), and saving floating
>>> IP's space.
>>> There have been many discussions in the past requiring this feature,
>>> so I assume
>>> there is demand to make this formal; just a few examples: [6], [7],
>>> [8], [9]
>>>
>>> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
>>> the external router
>>> leg from the public network to internal ports, so user can use one
>>> Floating IP (the external
>>> gateway router interface IP) and reach different internal ports
>>> depending on the port numbers.
>>> This should happen on the network node (and can also be leveraged for
>>> security reasons).
>>>
>>> I think that the POC implementation in the Stackforge project shows that
>>> this needs to be
>>> implemented inside the L3 parts of the current reference implementation,
>>> it will be hard
>>> to maintain something like that in an external repository.
>>> (I also think that the API/DB extensions should be close to the current
>>> L3 reference
>>> implementation)
>>>
>>> I would like to renew the efforts on this feature and propose a RFE and
>>> a spec for this to the
>>> next release, any comments/ideas/thoughts are welcome.
>>> And of course if any of the people interested or any of the people that
>>> worked on this before
>>> want to join the effort, you are more than welcome to join and comment.
>>>
>>> Thanks
>>> Gal.
>>>
>>> [1]
>>> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
>>> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
>>> [3] https://review.openstack.org/#/c/60512/
>>> [4] https://github.com/stackforge/networking-portforwarding
>>> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>>>
>>> [6]
>>> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
>>> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
>>> [8]
>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
>>> [9]
>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
>
>

From sean at coreitpro.com  Mon Sep  7 02:34:54 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Mon, 7 Sep 2015 02:34:54 +0000
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <CAO_F6JO+ZnpW61XoipHu-hxsa6TBStiynFO0Kh+GFvMNN8Ni0g@mail.gmail.com>
References: <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <CAO_F6JO+ZnpW61XoipHu-hxsa6TBStiynFO0Kh+GFvMNN8Ni0g@mail.gmail.com>
Message-ID: <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>

On Sun, Sep 06, 2015 at 04:25:43PM EDT, Kevin Benton wrote:
> So it's been pointed out that http://169.254.169.254/openstack is completely
> OpenStack-invented. I don't quite understand how that's not violating the
> contract you said we have with end users about EC2 compatibility under the
> restriction of 'no new stuff'.

I think that is a violation. But I don't think that breaking the contract
once allows us to make more changes, as if a second infraction were less
significant.

> If we added an IPv6 endpoint that the metadata service listens on, it would
> just be another place that non cloud-init clients don't know how to talk
> to. It's not going to break our compatibility with any clients that connect
> to the IPv4 address.

No, but if Amazon were to make a decision about how to implement IPv6 in
EC2 and how to make the Metadata API service work with IPv6 we'd be
supporting two implementations - the one we came up with and one for
supporting the way Amazon implemented it.

-- 
Sean M. Collins


From ghanshyammann at gmail.com  Mon Sep  7 03:15:00 2015
From: ghanshyammann at gmail.com (GHANSHYAM MANN)
Date: Mon, 7 Sep 2015 12:15:00 +0900
Subject: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1) fixes
 be applicable for v2.1 too?
Message-ID: <CACE3TKWnnCtjc-CM408zO4BLfG733Rz4s90ap69PdE2jmvNWmg@mail.gmail.com>

Hi All,

As we all know, the api-paste.ini default setting for /v2 was changed to
run those requests on v2.1 (v2.0 on v2.1), which is a really great thing for
easier code maintenance in the future (removal of the v2 code).

To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some bugs
were found[1] and fixed. But I think we should fix those only for v2
compatibility mode, not for v2.1.

For example, in bug #1491325, 'device' in the volume attachment request is
an optional param[2] (which does not mean 'null-able' is allowed), and
v2.1 used to detect and error on usage of 'device' as "None". But as
it was used as 'None' by many /v2 users, and so as not to break them, we
should also allow 'None' in v2 compatibility mode. But we should not
allow the same for v2.1.

IMO v2.1's strong input validation feature (which helps people use the API
correctly) should not be changed; for v2 compatibility mode we should find
another solution that does not affect v2.1 behavior, maybe by having a
different schema for v2 compatibility mode and making the necessary fixes
there.

I would like to hear others' opinions on this, in case I missed something
in earlier discussions.

[1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325
      https://bugs.launchpad.net/nova/+bug/1491511

[2]: http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
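The "different schema for v2 compatibility mode" idea above can be sketched as two JSON-Schema variants for the 'device' parameter. The fragments and the tiny validator below are illustrative only, not Nova's actual schema definitions:

```python
# Sketch: keep v2.1's strict schema as-is and give the v2 compatibility
# mode its own, more permissive variant. Illustrative JSON-Schema
# fragments, not Nova's real ones.

DEVICE_V21 = {"type": "string"}                  # strict: must be a string
DEVICE_V2_COMPAT = {"type": ["string", "null"]}  # legacy /v2 users sent None


def type_ok(value, schema):
    """Minimal stand-in for JSON-Schema 'type' validation."""
    names = schema["type"]
    if isinstance(names, str):
        names = [names]
    checks = {
        "string": lambda v: isinstance(v, str),
        "null": lambda v: v is None,
    }
    return any(checks[n](value) for n in names)


assert type_ok("/dev/vdb", DEVICE_V21)
assert not type_ok(None, DEVICE_V21)        # v2.1 keeps rejecting None
assert type_ok(None, DEVICE_V2_COMPAT)      # v2 compat mode allows it
print("ok")  # prints ok
```

This keeps the strict validation contract of v2.1 intact while the compat-mode schema absorbs the legacy behavior.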

-- 
Thanks & Regards
Ghanshyam Mann


From skalinowski at mirantis.com  Mon Sep  7 04:48:46 2015
From: skalinowski at mirantis.com (Sebastian Kalinowski)
Date: Mon, 7 Sep 2015 06:48:46 +0200
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <CALMh7SAY0Him7mp-1u48MqGhrJ-P9oB=sO1J8nELZ65oXAQozQ@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <55E6EA3A.7080006@gmail.com>
 <CAM0pNLMwsWK_N8EaobnCDXmFdfB0aTPMK9urXnbScGmJtvqfoA@mail.gmail.com>
 <CAHAWLf1Ed6fPqDUDuume+1JdtxJuLn+AFrcxkUvbZvxAokBWYA@mail.gmail.com>
 <CAOe9ns7CtSmgKuzZu1qvWyh2mU2zCZUvwRCa2DQWaUvpZuPiqQ@mail.gmail.com>
 <CAC+XjbYo+Nd6zPY7vkwhFSjp5J7sAPYooU5FzzfBKRfDjgb1-A@mail.gmail.com>
 <CAPQe3Lmwc6SyYuEj0LOjMBEMLLUqMKaqNAPG=puv_8skQ+Gu9Q@mail.gmail.com>
 <CALMh7SAY0Him7mp-1u48MqGhrJ-P9oB=sO1J8nELZ65oXAQozQ@mail.gmail.com>
Message-ID: <CAGRGKG7VRYEU7y+rVMMCRt4GLcgXOeJxZRFHHoKO9W+kchET4A@mail.gmail.com>

+1

2015-09-03 21:34 GMT+02:00 Bartlomiej Piotrowski <bpiotrowski at mirantis.com>:

> I have no idea if I'm eligible to vote, but I'll do it anyway:
>
> +1
>
> Bartłomiej
>
> On Thu, Sep 3, 2015 at 9:16 PM, Sergey Vasilenko <svasilenko at mirantis.com>
> wrote:
>
>> +1
>>
>> /sv
>>
>>
>>
>
>
>

From jamielennox at redhat.com  Mon Sep  7 04:54:26 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Mon, 7 Sep 2015 00:54:26 -0400 (EDT)
Subject: [openstack-dev] [keystone]how to get service_catalog
In-Reply-To: <CAH5-jC-L8six0MCQupX=0g-xA6DBAMGHRMLLEdrjAouhb377sg@mail.gmail.com>
References: <CAH5-jC-L8six0MCQupX=0g-xA6DBAMGHRMLLEdrjAouhb377sg@mail.gmail.com>
Message-ID: <809341212.18213995.1441601666881.JavaMail.zimbra@redhat.com>



----- Original Message -----
> From: "王华" <wanghua.humble at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Monday, 7 September, 2015 12:00:43 PM
> Subject: [openstack-dev] [keystone]how to get service_catalog
> 
> Hi all,
> 
> When I use a token to init a keystoneclient and try to get service_catalog by
> it, error occurs. I find that keystone doesn't return service_catalog when
> we use a token. Is there a way to get service_catalog by token? In magnum,
> we now make a trick. We init a keystoneclient with service_catalog which is
> contained in the token_info returned by keystonemiddleware in auth_ref
> parameter.
> 
> I want a way to get service_catalog by token. Or can we init a keystoneclient
> by the token_info return by keystonemiddleware directly?
> 
> Regards,
> Wanghua

Sort of. 

The problem you are hitting is that a token is just a string: an identifier for some information stored in keystone. Given a token at __init__ time, the client doesn't try to validate it in any way; it just assumes you know what you are doing. You can do a variation of this, though, in which you use an existing token to fetch a new token with the same rights (the expiry etc. will be the same), and then you will get a fresh service catalog. Using auth plugins, that's the Token family of plugins.

However, I don't _think_ that's exactly what you're looking for in magnum. What token are you trying to reuse?

If it's the user's token, then auth_token passes down an auth plugin in the ENV['keystone.token_auth'] variable[1], and you can pass that to a client to reuse the token and service catalog. If you are loading up magnum-specific auth, then again have a look at using keystoneclient's auth plugins and reusing one across multiple requests.

Trying to pass around a bundle of token id and service catalog is pretty much exactly what an auth plugin does, and you should be able to do something there.


Jamie

[1] https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L164
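To make the "plugin instead of bare token" point concrete, here is a self-contained sketch. The classes are illustrative stand-ins, not keystoneclient's real plugin interfaces: the point is only that a plugin bundles the token with its service catalog, so a client given the plugin never handles a raw catalog blob:

```python
# Conceptual sketch of why an auth plugin beats passing a bare token:
# the plugin carries both the token and the catalog, so any client can
# authenticate AND resolve endpoints from one object. Stand-in classes,
# not keystoneclient's actual API.

class TokenAuthPlugin:
    """Stand-in for the plugin auth_token puts in ENV['keystone.token_auth']."""

    def __init__(self, token, catalog):
        self._token = token
        self._catalog = catalog  # {service_type: endpoint_url}

    def get_token(self):
        return self._token

    def get_endpoint(self, service_type):
        return self._catalog[service_type]


def call_service(plugin, service_type, path):
    """A client only needs the plugin; it never sees a raw catalog."""
    url = plugin.get_endpoint(service_type) + path
    headers = {"X-Auth-Token": plugin.get_token()}
    return url, headers


plugin = TokenAuthPlugin("tok123", {"orchestration": "http://heat:8004/v1"})
url, headers = call_service(plugin, "orchestration", "/stacks")
print(url)  # prints http://heat:8004/v1/stacks
```

In real code the equivalent is constructing a keystoneclient session from the plugin found in the WSGI environ (link [1] above) and handing that session to each service client.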
> 


From jamielennox at redhat.com  Mon Sep  7 05:07:34 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Mon, 7 Sep 2015 01:07:34 -0400 (EDT)
Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
 FYI: Defaulting to Keystone v3 API
In-Reply-To: <201509062328.t86NS5xN017975@d01av02.pok.ibm.com>
References: <CAB1EZBomKom6_vXb36Yu=2v1EvYYMyX9Ufa9WzZuyvC6TAGFAQ@mail.gmail.com>
 <201509041638.t84GcbI5013710@d03av05.boulder.ibm.com>
 <1299339546.18129044.1441528810582.JavaMail.zimbra@redhat.com>
 <201509062328.t86NS5xN017975@d01av02.pok.ibm.com>
Message-ID: <1258219114.18215619.1441602454204.JavaMail.zimbra@redhat.com>

----- Original Message -----

> From: "Steve Martinelli" <stevemar at ca.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Sent: Monday, 7 September, 2015 9:27:55 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
> FYI: Defaulting to Keystone v3 API

> So, digging into this a bit more, the failures that Barbican and Ironic saw
> were very different. The barbican team managed to fix their issues by
> replacing their `keystone endpoint-create` and such commands with the
> `openstack endpoint create` alternatives.

> Looking at the failure for why Ironic failed led me down a rabbit hole I wish
> I hadn't gone down. There are no plugins managed by ironic, like there
> were in the barbican case, so there were no easy commands to replace. Instead
> it was failing on a few swift related commands, as evidenced in the log that
> Lucas copied:

> http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
> 2015-09-04 09:04:55.527 | + swift post -m 'Temp-URL-Key: secretkey'
> 2015-09-04 09:04:55.994 | Authorization Failure. Authorization Failed: The
> resource could not be found. (HTTP 404)

> Jamie's patch sets everything to v3, so why is failing now? I tried this in
> my own environment to make sure:
> steve at steve-vm:~/devstack$ export OS_IDENTITY_API_VERSION=3
> steve at steve-vm:~/devstack$ export OS_AUTH_URL='http://10.0.2.15:5000/v3'
> steve at steve-vm:~/devstack$ swift stat
> Authorization Failure. Authorization Failed: The resource could not be found.
> (HTTP 404) (Request-ID: req-63bee2a6-ca9d-49f4-baad-b6a1eef916df)
> steve at steve-vm:~/devstack$ swift --debug stat
> DEBUG:keystoneclient.auth.identity.v2:Making authentication request to
> http://10.0.2.15:5000/v3/tokens

> And saw that swiftclient was creating a v2 client instance with a v3
> endpoint; this is no bueno.

> As I continued to dig into this, it seems like swiftclient doesn't honor the
> OS_IDENTITY_API_VERSION flag that was set, instead it relies on
> --auth-version or OS_AUTH_VERSION
> steve at steve-vm:~/devstack$ export OS_AUTH_VERSION=3
> steve at steve-vm:~/devstack$ swift --debug stat
> DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to
> http://10.0.2.15:5000/v3/auth/tokens
> INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection
> (1): 10.0.2.15
> .... it continued and was happy

> So, we could easily propose Jamies patch again, but this time also set
> OS_AUTH_VERSION, or we could fix swiftclient to honor the
> OS_IDENTITY_API_VERSION flag. I'd prefer doing the former first to get
> Jamie's patch back in, and the latter for the long term plan, but looking at
> the code, there doesn't seem to be a plan on deprecating OS_AUTH_VERSION.

> Thanks,
Thanks for looking into this, Steve; as evidenced by how well I phrased my response, I shouldn't answer emails late at night.

So I would like to consider this a renewed call for deprecating ALL the project-specific CLIs in favour of using openstack client. The mix and match of different parameters accepted by different clients is not just a problem for our users; even developers who actually interact with this code can't keep them all straight. Does anyone know what the path would be to get this actually agreed on by all the projects? Cross-project blueprint? TC?

Does OSC have the required commands so that we can remove use of the swift CLI?

I can understand having this reverted whilst feature freeze is underway, and I didn't do a sufficient job of alerting people to the possibility of a breaking change (partially as I wasn't expecting it to merge). However, to get this back in, I think the easiest thing to do is just set OS_AUTH_VERSION in devstack with a comment about "needed for swift". I think we should consider OS_IDENTITY_API_VERSION an OSC flag rather than something we want all the CLIs to copy (as API_VERSION shouldn't necessarily imply auth_version/method).

Jamie 

> Steve Martinelli
> OpenStack Keystone Core

> Jamie Lennox ---2015/09/06 04:41:23 AM---Note that this fixing this does not
> mean ironic has to support keystone v3 (but please fix that too)

> From: Jamie Lennox <jamielennox at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 2015/09/06 04:41 AM
> Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
> FYI: Defaulting to Keystone v3 API

> Note that fixing this does not mean ironic has to support keystone v3
> (but please fix that too). It just means that somewhere in ironic's gate it
> is doing something like an "openstack user create" or a role assignment
> directly with the OSC tool assuming v2, rather than using the helpers that
> devstack provides like get_or_create_user. Keystone v2 still exists and is
> running; we just changed the default API for devstack OSC commands.

> I'm kind of annoyed we reverted this patch (though I was surprised to see it
> merge recently, as it's been around for a while), as it was known to possibly
> break people, which is why it was on the agenda for the QA meetings.
> However, given that devstack has plugins and there is third-party CI, there is
> absolutely no way we can make sure that everyone has fixed this; we just
> need to make a breaking change. Granted, coinciding with freeze is
> unfortunate. Luckily this doesn't affect most people, because they use the
> devstack helper functions, and for those that don't it's an almost trivial
> fix to start using them.

> Jamie

> ----- Original Message -----
> > From: "Steve Martinelli" <stevemar at ca.ibm.com>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev at lists.openstack.org>
> > Sent: Saturday, September 5, 2015 2:38:27 AM
> > Subject: Re: [openstack-dev] [DevStack][Keystone][Ironic][Swit][Barbican]
> > FYI: Defaulting to Keystone v3 API
> >
> >
> >
> > This change affected Barbican too, but they quickly tossed up a patch
> > to
> > resolve the gate failures [1]. As much as I would like DevStack and
> > OpenStackClient to default to Keystone's v3 API, we should, considering how
> > close we are in the schedule, revert the initial patch (which I see sdague
> > already did). We need to determine which projects are hosting their own
> > devstack plugin scripts and update those first before bringing back the
> > original patch.
> >
> > https://review.openstack.org/#/c/220396/
> >
> > Thanks,
> >
> > Steve Martinelli
> > OpenStack Keystone Core
> >
> > Lucas Alvares Gomes ---2015/09/04 10:07:51 AM---Hi, This is email is just a
> > FYI: Recently the patch [1] got merged in
> >
> > From: Lucas Alvares Gomes <lucasagomes at gmail.com>
> > To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> > Date: 2015/09/04 10:07 AM
> > Subject: [openstack-dev] [DevStack][Keystone][Ironic][Swit] FYI: Defaulting
> > to Keystone v3 API
> >
> >
> >
> >
> > Hi,
> >
> > This email is just an FYI: Recently the patch [1] got merged in
> > DevStack and broke the Ironic gate [2], I haven't had time to dig into
> > the problem yet so I reverted the patch [3] to unblock our gate.
> >
> > The work to convert to v3 seems to be close, but it's not quite there yet,
> > so I just want to bring broader attention to it with this email.
> >
> > Also, the Ironic job that is currently running in the DevStack gate is
> > not testing Ironic with the Swift module, there's a patch [4] changing
> > that so I hope we will be able to identify the problem before we break
> > things next time.
> >
> > [1] https://review.openstack.org/#/c/186684/
> > [2]
> > http://logs.openstack.org/68/217068/14/check/gate-tempest-dsvm-ironic-agent_ssh/18d8590/logs/devstacklog.txt.gz#_2015-09-04_09_04_55_994
> > [3] https://review.openstack.org/220532
> > [4] https://review.openstack.org/#/c/220516/
> >
> > Cheers,
> > Lucas
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/018a0532/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/018a0532/attachment.gif>

From jamielennox at redhat.com  Mon Sep  7 05:23:52 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Mon, 7 Sep 2015 01:23:52 -0400 (EDT)
Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware &
 multiple keystone endpoints
In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF542F516C@szxema505-mbx.china.huawei.com>
References: <55D5CABC.1020808@ericsson.com>
 <1512600172.12898075.1440488257977.JavaMail.zimbra@redhat.com>
 <55DC3005.9000306@ericsson.com>
 <5E7A3D1BF5FD014E86E5F971CF446EFF542F516C@szxema505-mbx.china.huawei.com>
Message-ID: <1700909195.18217744.1441603432407.JavaMail.zimbra@redhat.com>



----- Original Message -----
> From: "joehuang" <joehuang at huawei.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Sunday, 6 September, 2015 7:28:04 PM
> Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple keystone endpoints
> 
> Hello, Jamie and Hans,
> 
> The patch " Allow specifying a region name to auth_token "
> https://review.openstack.org/#/c/216579 has just been merged.
> 
> But unfortunately, when I modified the source code as this patch did, in the
> multisite cloud with Fernet tokens the issue is still there, and requests are
> routed to an incorrect endpoint.
> 
> I also checked the region_name configuration in the source code; it's correct.
> 
> The issue mentioned in the bug report is not addressed yet:
> https://bugs.launchpad.net/keystonemiddleware/+bug/1488347
> 
> Is there anyone who tested it successfully in your environment?

Hey Joe, 

The way this patch is implemented requires you to have configured auth_token middleware with an auth plugin, for example [1]. I should have called this out better in the help for the config option. Making the old admin_user etc. options region-aware would be quite a big change, because in that case the URL configured as identity_uri is always used for all keystone operations.

Can you try configuring with the auth plugin options and see if the regions work after that? 


Jamie

[1] http://www.jamielennox.net/blog/2015/02/23/v3-authentication-with-auth-token-middleware/
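For reference, a plugin-based auth_token configuration along those lines might look like this (an illustrative sketch; the hostname and credentials are placeholders, and option names may vary by keystonemiddleware release):

```ini
[keystone_authtoken]
# Plugin-based auth; region_name only takes effect with an auth plugin,
# since the legacy admin_user/identity_uri options pin a single URL.
auth_plugin = password
auth_url = http://keystone.example.com:35357   # placeholder
username = glance
password = secret
project_name = services
user_domain_id = default
project_domain_id = default
region_name = RegionOne   # the option added by the patch above
```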



> In the Glance API log, the request was redirected to
> http://172.17.0.95:35357, but this address is not a Keystone endpoint.
> (http://172.17.0.98:35357 and http://172.17.0.41:35357 are the correct
> Keystone endpoints.)
> //////////////////////////////////////////
> 2015-09-06 07:50:43.447 194 DEBUG keystoneclient.session [-] REQ: curl -g -i
> -X GET http://172.17.0.98:35357 -H "Accept: application/json" -H
> "User-Agent: python-keystoneclient" _http_log_request
> /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
> 2015-09-06 07:50:43.468 194 DEBUG keystoneclient.session [-] RESP: [300]
> content-length: 593 vary: X-Auth-Token connection: keep-alive date: Sun, 06
> Sep 2015 07:50:43 GMT content-type: application/json x-distribution: Ubuntu
> RESP BODY: {"versions": {"values": [{"status": "stable", "updated":
> "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type":
> "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links":
> [{"href": "http://172.17.0.98:35357/v3/", "rel": "self"}]}, {"status":
> "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base":
> "application/json", "type":
> "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links":
> [{"href": "http://172.17.0.98:35357/v2.0/", "rel": "self"}, {"href":
> "http://docs.openstack.org/", "type": "text/html", "rel":
> "describedby"}]}]}}
>  _http_log_response
>  /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
> 2015-09-06 07:50:43.469 194 DEBUG keystoneclient.auth.identity.v3 [-] Making
> authentication request to http://172.17.0.98:35357/v3/auth/tokens
> get_auth_ref
> /usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v3.py:125
> 2015-09-06 07:50:43.574 194 DEBUG keystoneclient.session [-] REQ: curl -g -i
> -X GET http://172.17.0.95:35357 -H "Accept: application/json" -H
> "User-Agent: python-keystoneclient" _http_log_request
> /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
> 2015-09-06 07:50:46.576 194 WARNING keystoneclient.auth.identity.base [-]
> Failed to contact the endpoint at http://172.17.0.95:35357 for discovery.
> Fallback to using that endpoint as the base url.
> 2015-09-06 07:50:46.576 194 DEBUG keystoneclient.session [-] REQ: curl -g -i
> -X GET http://172.17.0.95:35357/auth/tokens -H "X-Subject-Token:
> {SHA1}640964e1f8716ecbb10ca3d8b5b08c8e7abfac1d" -H "User-Agent:
> python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
> {SHA1}386777062718e0992cc818780e3ec7fa0671d8e9" _http_log_request
> /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
> 2015-09-06 07:50:49.576 194 INFO keystoneclient.session [-] Failure: Unable
> to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in
> 0.5s.
> 2015-09-06 07:50:52.576 194 INFO keystoneclient.session [-] Failure: Unable
> to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in
> 1.0s.
> 2015-09-06 07:50:55.576 194 INFO keystoneclient.session [-] Failure: Unable
> to establish connection to http://172.17.0.95:35357/auth/tokens. Retrying in
> 2.0s.
> 2015-09-06 07:50:58.576 194 WARNING keystonemiddleware.auth_token [-]
> Authorization failed for token
> 
> 
> Best Regards
> Chaoyi Huang ( Joe Huang )
> 
> 
> -----Original Message-----
> From: Hans Feldt [mailto:hans.feldt at ericsson.com]
> Sent: Tuesday, August 25, 2015 5:06 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple
> keystone endpoints
> 
> 
> 
> On 2015-08-25 09:37, Jamie Lennox wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Hans Feldt" <hans.feldt at ericsson.com>
> >> To: openstack-dev at lists.openstack.org
> >> Sent: Thursday, August 20, 2015 10:40:28 PM
> >> Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple
> >> 	keystone endpoints
> >>
> >> How do you configure/use keystonemiddleware for a specific identity
> >> endpoint among several?
> >>
> >> In an OPNFV multi region prototype I have keystone endpoints per
> >> region. I would like keystonemiddleware (in context of glance-api) to
> >> use the local keystone for performing user token validation. Instead
> >> keystonemiddleware seems to use the first listed keystone endpoint in
> >> the service catalog (which could be wrong/non-optimal in most
> >> regions).
> >>
> >> I found this closed, related bug:
> >> https://bugs.launchpad.net/python-keystoneclient/+bug/1147530
> >
> > Hey,
> >
> > There's two points to this.
> >
> > * If you are using an auth plugin then you're right it will just pick the
> > first endpoint. You can look at project specific endpoints[1] so that
> > there is only one keystone endpoint returned for the services project.
> > I've also just added a review for this feature[2].
> 
> I am not.
> 
> > * If you're not using an auth plugin (so the admin_X options) then keystone
> > will always use the endpoint that is configured in the options
> > (identity_uri).
> 
> Yes for getting its own admin/service token. But for later user token
> validation it seems to pick the first identity service in the stored (?)
> service catalog.
> 
> By patching keystonemiddleware's _create_identity_server and the call to the
> Adapter constructor to pass an endpoint_override parameter, I can get it to
> use the local keystone for token validation. I am looking for an official way
> of achieving the same.
> 
> Thanks,
> Hans
> 
> >
> > Hope that helps,
> >
> > Jamie
> >
> >
> > [1]
> > https://github.com/openstack/keystone-specs/blob/master/specs/juno/end
> > point-group-filter.rst [2] https://review.openstack.org/#/c/216579
> >
> >> Thanks,
> >> Hans
> >>
> >> _____________________________________________________________________
> >> _____ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ______________________________________________________________________
> > ____ OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From tony at bakeyournoodle.com  Mon Sep  7 05:40:30 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Mon, 7 Sep 2015 15:40:30 +1000
Subject: [openstack-dev] [libraries] gate-tempest-dsvm-neutron-src test and
 major version bumps
Message-ID: <20150907054030.GF29438@thor.bakeyournoodle.com>

Hi All,
    We have the test in $subject that is used for most (if not all) libraries
via the 'lib-forward-testing' job-group. Its aim is to "test their proposed
commits to ensure they don't break OpenStack on their next release." [1]

This is of course a good idea.

The problem I'm having is trying to land a patch in stable/juno which is
running devstack-gate in stable/juno, except for the library in question, where
it's grabbing master[2].  I think this is a problem because master has
introduced incompatible changes (and bumped $major).  So this results in
violating global-requirements and, IMO, an invalid test.

So what is the correct way forward?

 1. Disable lib-forward-testing on stable branches
    - This seems easy but wrong ....
 2. Use the appropriate stable/x branch for stable tests, when master has made
    a $major version bump?
 3. Something smarter that I can't see because I'm not familiar enough with
    what we can do in job definitions.

Of course I could be way off and this isn't really a problem at all.

Yours Tony.

[1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n72
[2] http://logs.openstack.org/54/216954/1/check/gate-tempest-dsvm-neutron-src-oslo.i18n/044dea9/logs/devstacklog.txt.gz#_2015-08-26_04_25_37_264
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/c08b324d/attachment.pgp>

From changzhi1990 at gmail.com  Mon Sep  7 06:06:56 2015
From: changzhi1990 at gmail.com (zhi)
Date: Mon, 7 Sep 2015 14:06:56 +0800
Subject: [openstack-dev] [neutron] Fail to get ipv4 address from dhcp
In-Reply-To: <27E8119E14BEBA418E5E368E4DF2CA71FCB66B@SINPEX01CL02.citrite.net>
References: <27E8119E14BEBA418E5E368E4DF2CA71FCB66B@SINPEX01CL02.citrite.net>
Message-ID: <CAENZyq7F6F=icS6Tx4=3MoouKLfzB=-Ti29qX5b3hci8RxDDKw@mail.gmail.com>

Hi, if you turn off the "ARP spoofing" flag and restart the q-agt service,
can the VM get an IP successfully?
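For reference, the flag in question is the agent-side option below (an illustrative fragment; the file path and default may differ by deployment and release):

```ini
# In the neutron OVS agent config (e.g. /etc/neutron/plugins/ml2/ml2_conf.ini)
[AGENT]
# Removing the ARP-spoofing-protection flow rules requires restarting
# the agent (q-agt) after changing this.
prevent_arp_spoofing = False
```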

2015-09-06 17:03 GMT+08:00 Huan Xie <huan.xie at citrix.com>:

>
>
> Hi all,
>
>
>
> I'm trying to deploy an OpenStack environment using DevStack with the latest
> master code.
>
> I use Xenserver + neutron, with ML2 plugins and VLAN type.
>
>
>
> The problem I met is that the instances cannot actually get an IP address (I
> use DHCP), although we can see the VM with an IP in horizon.
>
> I ran tcpdump on both the VM side and the DHCP server side; I can see the
> DHCP request packets on the VM side but no request packets on the DHCP
> server side.
>
> But after I reboot the q-agt, the VM can get IP successfully.
>
> Checking the difference before and after the q-agt restart, all I have seen
> are the flow rules related to ARP spoofing.
>
>
>
> This is the q-agt's br-int port; these are dom0's flow rules, and the bold
> parts are the newly added ones
>
>
>
>                 NXST_FLOW reply (xid=0x4):
>
>                cookie=0x824d13a352a4e216, duration=163244.088s, table=0,
> n_packets=93, n_bytes=18140, idle_age=4998, hard_age=65534, priority=0
> actions=NORMAL
>
> *cookie=0x824d13a352a4e216, duration=163215.062s, table=0, n_packets=7,
> n_bytes=294, idle_age=33540, hard_age=65534, priority=10,arp,in_port=5
> actions=resubmit(,24)*
>
>                cookie=0x824d13a352a4e216, duration=163230.050s, table=0,
> n_packets=25179, n_bytes=2839586, idle_age=5, hard_age=65534,
> priority=3,in_port=2,dl_vlan=1023 actions=mod_vlan_vid:1,NORMAL
>
>                cookie=0x824d13a352a4e216, duration=163236.775s, table=0,
> n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534,
> priority=2,in_port=2 actions=drop
>
>                cookie=0x824d13a352a4e216, duration=163243.516s, table=23,
> n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0
> actions=drop
>
>                cookie=0x824d13a352a4e216, duration=163242.953s, table=24,
> n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0
> actions=drop
>
> *cookie=0x824d13a352a4e216, duration=163215.636s, table=24, n_packets=7,
> n_bytes=294, idle_age=33540, hard_age=65534,
> priority=2,arp,in_port=5,arp_spa=10.0.0.6 actions=NORMAL*
>
>
>
> I cannot see any other changes after rebooting q-agt, and these rules seem
> to be only for ARP spoofing; however, the instance can now get an IP from
> DHCP.
> 
> I also googled for this problem, but failed to solve it.
> 
> Has anyone met this problem before, or does anyone have any suggestion on
> how to debug this?
>
>
>
> Thanks a lot
>
>
>
> BR//Huan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/5c8c217f/attachment.html>

From chenrui.momo at gmail.com  Mon Sep  7 06:25:03 2015
From: chenrui.momo at gmail.com (Rui Chen)
Date: Mon, 7 Sep 2015 14:25:03 +0800
Subject: [openstack-dev] [Congress] bugs for liberty release
In-Reply-To: <EB8DB51184817F479FC9C47B120861EE0470F3A7@SHSMSX101.ccr.corp.intel.com>
References: <CAJjxPADr9u7nmwAtVZhhE_j7F=xUrpXpk1Q7exs+x4QVFvx_rw@mail.gmail.com>
 <EB8DB51184817F479FC9C47B120861EE0470F3A7@SHSMSX101.ccr.corp.intel.com>
Message-ID: <CABHH=5BLmp-H_5VgN4+0FuwB3kFxD=NS2kSC_+U+OPdBDWXgaw@mail.gmail.com>

I've started fixing https://bugs.launchpad.net/congress/+bug/1492329.
If I have enough time, I can take another one or two bugs.

2015-09-06 8:13 GMT+08:00 Zhou, Zhenzan <zhenzan.zhou at intel.com>:

> I have taken two, thanks.
>
> https://bugs.launchpad.net/congress/+bug/1492308
>
> https://bugs.launchpad.net/congress/+bug/1492354
>
>
>
> BR
>
> Zhou Zhenzan
>
> *From:* Tim Hinrichs [mailto:tim at styra.com]
> *Sent:* Friday, September 4, 2015 23:40
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Congress] bugs for liberty release
>
>
>
> Hi all,
>
>
>
> I've found a few bugs that we could/should fix by the liberty release.  I
> tagged them with "liberty-rc".  If we could all pitch in, that'd be great.
> Let me know which ones you'd like to work on so I can assign them to you in
> launchpad.
>
>
>
> https://bugs.launchpad.net/congress/+bugs/?field.tag=liberty-rc
>
>
>
> Thanks,
>
> Tim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/58cdefd2/attachment.html>

From ton at us.ibm.com  Mon Sep  7 06:50:58 2015
From: ton at us.ibm.com (Ton Ngo)
Date: Sun, 6 Sep 2015 23:50:58 -0700
Subject: [openstack-dev] [magnum] Steps to upload magnum images
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6BCCF310@SZXEMI503-MBS.china.huawei.com>
References: <0957CD8F4B55C0418161614FEC580D6BCCF310@SZXEMI503-MBS.china.huawei.com>
Message-ID: <OFEB81AFE0.2B795588-ON88257EB9.00256DA9-88257EB9.0025A26E@us.ibm.com>

Thanks Hongbin for the useful info.
I don't see the "apply" link either.  It looks like the group is set to
"Invite Only", so I will need to ping Steve Dake.
Ton Ngo,



From:	Hongbin Lu <hongbin.lu at huawei.com>
To:	"openstack-dev at lists.openstack.org"
            <openstack-dev at lists.openstack.org>
Date:	09/06/2015 04:51 PM
Subject:	[openstack-dev] [magnum] Steps to upload magnum images



Hi team,

As you may know, magnum is tested with pre-built Fedora Atomic images.
Basically, these images are standard atomic images with the k8s packages
pre-installed. The images can be downloaded from fedorapeople.org [1]. In
most cases, you are able to test magnum by using the images there. If you are
not satisfied with the existing images, you are welcome to build a new image
and share it with the team. Here [2] is the instruction for how to build a new
atomic image. After you successfully build an image, you may want to upload
it to the public file server, which is what I am going to talk about.

Below are the steps to upload an image:
      1.       Register an account here:
      https://admin.fedoraproject.org/accounts/
      2.       Sign the contributor agreement (on the home page after you
      log in: "My Account" -> "Contributor Agreement").
      3.       Upload your public key ("My Account" -> "Public SSH Key").
      4.       Apply to join the magnum group ("Join a group" -> search
      "magnum" -> "apply"). If you cannot find the "apply" link under
      "Status" (I didn't), you can wait a few minutes or skip this step and
      ask Steven Dake to add you to the group instead.
      5.       Ping Steven Dake (stdake at cisco.com) to approve your
      application.
      6.       After 30-60 minutes, you should be able to SSH to the file
      server (ssh <yourname>@fedorapeople.org). Our images are stored
      in /srv/groups/magnum.

Notes on using the file server:
      -         Avoid typing "sudo ...".
      -         Activities there are logged, so don't expect any privacy.
      -         Not all contents are allowed. Please make sure your
      uploaded contents are acceptable before uploading.

[1] https://fedorapeople.org/groups/magnum/
[2]
https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-build-atomic-image.rst
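Once the account is set up, the upload itself is an ordinary scp (a hypothetical example; the image file name is a placeholder):

```shell
# Copy a locally built image into the magnum group area (step 6 above).
IMAGE=fedora-atomic-k8s.qcow2        # placeholder name for your built image
DEST=/srv/groups/magnum              # group area on fedorapeople.org
# scp "$IMAGE" <yourname>@fedorapeople.org:"$DEST"/   # uncomment to run
```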
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/2cf80326/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150906/2cf80326/attachment.gif>

From hejie.xu at intel.com  Mon Sep  7 06:51:03 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Mon, 7 Sep 2015 14:51:03 +0800
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAA393vitrR+MhU+sLKCTiQtdLvq8qVUi6g5Vo-2bhoPeG50MWA@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=KcaOzvM_pzL4Ea+g@mail.gmail.com>
 <20150904144538.GI3226@crypt>
 <CAA393vitrR+MhU+sLKCTiQtdLvq8qVUi6g5Vo-2bhoPeG50MWA@mail.gmail.com>
Message-ID: <7A5E0505-8993-47BC-98F5-7AC84A2FFF66@intel.com>


> On Sep 7, 2015, at 8:27 AM, Ken'ichi Ohmichi <ken1ohmichi at gmail.com> wrote:
> 
> If we allow customization of scheduler-hints, like new filters or
> out-of-tree filters without microversions, API users cannot know the
> available scheduler-hints parameters from the microversion number.
> It would be helpful for API users if nova could provide the available
> parameters with JSON-Home or something.


I'm thinking we should distinguish API discovery from capabilities discovery.

JSON-Home and JSON-Schema are used for API discovery.
Discovering whether scheduler-hints are enabled in the deployment is capabilities discovery.

So this should be done in two parts:
1. JSON-Schema is used for the scheduler-hints API contract.
2. A separate capabilities discovery API is used to discover whether a hint is enabled in the deployment.

For JSON-Schema, we should think of the API contract as "the scheduler hints API only accepts a dict". Then each hint should have its own contract, and scheduler filters add their own hints schemas to the API schema for API discovery.

So now I think Ken'ichi's proposal https://review.openstack.org/#/c/220440 makes sense to me. But whether a hint is enabled in the deployment should be discovered another way: a capabilities discovery API.
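As a sketch of part 1, the per-filter schema contract could look something like this (purely illustrative, not Nova code; the filter names and hint keys are made up):

```python
# Each filter contributes a JSON-Schema fragment for the hints it knows
# about; the API layer merges them into one scheduler-hints schema.

# Hypothetical per-filter fragments:
GROUP_AFFINITY_HINTS = {
    "group": {"type": "string"},
}
SAME_HOST_HINTS = {
    "same_host": {"type": "array", "items": {"type": "string"}},
}

def build_hints_schema(filter_fragments):
    """Merge per-filter hint fragments into one API schema.

    additionalProperties stays True so out-of-tree hints are still
    accepted, matching the relaxed-validation direction above.
    """
    properties = {}
    for fragment in filter_fragments:
        properties.update(fragment)
    return {
        "type": "object",
        "properties": properties,
        "additionalProperties": True,
    }

schema = build_hints_schema([GROUP_AFFINITY_HINTS, SAME_HOST_HINTS])
```

Capabilities discovery (part 2) would then be a separate API reporting which of these hints the deployment actually honours.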



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/19f35f58/attachment.html>

From ken1ohmichi at gmail.com  Mon Sep  7 07:16:04 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Mon, 7 Sep 2015 16:16:04 +0900
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <55E978F3.2070804@redhat.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAH7mGauOgfvVkfW2OYPm7D=7zgXhRHpx4a7_jZMyBtND3iirGQ@mail.gmail.com>
 <CAA393vhfetUH3PkJHkpcP9sf8vjzS+Tm-Fcp7O_D6mo3Q_S-xA@mail.gmail.com>
 <55E978F3.2070804@redhat.com>
Message-ID: <CAA393vhC1-gMyYdwpbrcRrraQnQZrZiA-3yB=pRudyqVvbjQVg@mail.gmail.com>

Hi Sylvain,

2015-09-04 19:56 GMT+09:00 Sylvain Bauza <sbauza at redhat.com>:
>
>
> Le 04/09/2015 12:18, Ken'ichi Ohmichi a écrit :
>
> Hi Alex,
>
> Thanks for your comment.
> IMO, this idea is different from the extension we will remove.
> That is about modularity, to reduce the maintenance burden.
> By this idea, we can put the corresponding schema in each filter.
>
>
>
> While I think it could be a nice move to have stevedore-loaded filters for
> the FilterScheduler due to many reasons, I actually wouldn't want to delay
> more than needed the compatibility change for the API validation relaxing
> the scheduler hints.
>
> In order to have a smooth transition, I'd rather just provide a change for
> using stevedore with the filters and weighters (even if the weighters are
> not using the API), and then once implemented, then do the necessary change
> on the API level like the one you proposed.
>
> In the meantime, IMHO we should accept rather sooner than later (meaning for
> Liberty) https://review.openstack.org/#/c/217727/
>
> Thanks for that good idea, I like it,

Thanks for your feedback :)

While implementing the above idea, I found a bug in the API validation
for the cell filter:
https://bugs.launchpad.net/nova/+bug/1492925

I now feel this approach will really help us maintain the code.

Thanks
Ken Ohmichi

---

> 2015-09-04 (Fri) 19:04 Alex Xu <soulxu at gmail.com>:
>>
>> 2015-09-04 11:14 GMT+08:00 Ken'ichi Ohmichi <ken1ohmichi at gmail.com>:
>>>
>>> Hi Andrew,
>>>
>>> Sorry for this late response, I missed it.
>>>
>>> 2015-06-25 23:22 GMT+09:00 Andrew Laski <andrew at lascii.com>:
>>> > I have been growing concerned recently with some attempts to formalize
>>> > scheduler hints, both with API validation and Nova objects defining
>>> > them,
>>> > and want to air those concerns and see if others agree or can help me
>>> > see
>>> > why I shouldn't worry.
>>> >
>>> > Starting with the API I think the strict input validation that's being
>>> > done,
>>> > as seen in
>>> >
>>> > http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3/scheduler_hints.py?id=53677ebba6c86bd02ae80867028ed5f21b1299da,
>>> > is unnecessary, and potentially problematic.
>>> >
>>> > One problem is that it doesn't indicate anything useful for a client.
>>> > The schema indicates that there are hints available but can make no
>>> > claim about whether or not they're actually enabled.  So while a
>>> > microversion bump would typically indicate a new feature available to an
>>> > end user, in the case of a new scheduler hint a microversion bump really
>>> > indicates nothing at all.  It does ensure that if a scheduler hint is
>>> > used it's spelled properly and the data type passed is correct, but
>>> > that's primarily useful because there is no feedback mechanism to
>>> > indicate an invalid or unused scheduler hint.  I think the API schema is
>>> > a poor proxy for that deficiency.
>>> >
>>> > Since the exposure of a hint means nothing as far as its usefulness, I
>>> > don't think we should be codifying them as part of our API schema at
>>> > this time.  At some point I imagine we'll evolve a more useful API for
>>> > passing information to the scheduler as part of a request, and when that
>>> > happens I don't think needing to support a myriad of meaningless hints
>>> > in older API versions is going to be desirable.
>>> >
>>> > Finally, at this time I'm not sure we should take the stance that only
>>> > in-tree scheduler hints are supported.  While I completely agree with
>>> > the desire to expose things in cross-cloud ways, as we've done and are
>>> > looking to do with flavor and image properties, I think scheduling is an
>>> > area where we want to allow some flexibility for deployers to write and
>>> > expose scheduling capabilities that meet their specific needs.  Over
>>> > time I hope we will get to a place where some standardization can
>>> > happen, but I don't think locking in the current scheduling hints is the
>>> > way forward for that.  I would love to hear from multi-cloud users here
>>> > and get some input on whether that's crazy and whether they are
>>> > expecting benefits from validation on the current scheduler hints.
>>> >
>>> > Now, objects.  As part of the work to formalize the request spec sent to
>>> > the scheduler, there's an effort to make a scheduler hints object.  This
>>> > formalizes them in the same way as the API, with no benefit that I can
>>> > see.  I won't duplicate my arguments above, but I feel the same way
>>> > about the objects as I do about the API.  I don't think needing to
>>> > update an object version every time a new hint is added is useful at
>>> > this time, nor do I think we should lock in the current in-tree hints.
>>> >
>>> > In the end this boils down to my concern that the scheduling hints API
>>> > is a really horrible user experience, and I don't want it to be
>>> > solidified in the API or objects yet.  I think we should re-examine how
>>> > they're handled before that happens.
>>>
>>> Now we are discussing this on https://review.openstack.org/#/c/217727/
>>> for allowing out-of-tree scheduler-hints.
>>> When we wrote the API schema for scheduler-hints, it was difficult to
>>> know which API parameters were available for scheduler-hints.
>>> The current API schema exposes them, and I guess that is useful for API
>>> users as well.
>>>
>>> One idea is that: How about auto-extending scheduler-hint API schema
>>> based on loaded schedulers?
>>> Now API schemas of "create/update/resize/rebuild a server" APIs are
>>> auto-extended based on loaded extensions by using stevedore
>>> library[1].
>>
>>
>> Hmm... we are going to deprecate extensions from our API; this sounds
>> like adding more extension mechanisms.
>>
>>>
>>> I guess we can apply the same approach to scheduler-hints in the long
>>> term. Each scheduler needs to implement a method which returns its
>>> available API parameter formats; nova-api would collect them and extend
>>> the scheduler-hints API schema with them.
>>> That means out-of-tree schedulers would also be available if they
>>> implement the method.
>>> # In the short term, I can see the blocking "additionalProperties"
>>> # validation being disabled that way.
>>>
>>> Thanks
>>> Ken Ohmichi
>>>
>>> ---
>>> [1]:
>>> https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst#json-schema
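Ken's auto-extension idea could be sketched roughly like this (purely illustrative; the `get_hint_schema()` hook and the filter class are hypothetical, not existing Nova code):

```python
# Sketch of auto-extending the scheduler-hints JSON schema from
# parameter formats reported by loaded schedulers/filters. The
# get_hint_schema() method name is hypothetical.

BASE_HINTS_SCHEMA = {
    "type": "object",
    "properties": {},
    # Short-term workaround from the thread: leave this True so unknown
    # (e.g. out-of-tree) hints are not rejected by validation.
    "additionalProperties": True,
}


class GroupAffinityFilter:
    """Stand-in for a loaded filter exposing its hint format."""

    @staticmethod
    def get_hint_schema():
        return {"group": {"type": "string"}}


def build_hints_schema(filters):
    """Merge each filter's hint formats into a copy of the base schema."""
    schema = dict(BASE_HINTS_SCHEMA,
                  properties=dict(BASE_HINTS_SCHEMA["properties"]))
    for f in filters:
        schema["properties"].update(f.get_hint_schema())
    return schema


schema = build_hints_schema([GroupAffinityFilter()])
print(sorted(schema["properties"]))  # ['group']
```

In the real proposal the filters would be discovered via stevedore entry points rather than passed in a list.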
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
>
>
>


From thierry at openstack.org  Mon Sep  7 08:05:19 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 7 Sep 2015 10:05:19 +0200
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
 allocation
In-Reply-To: <CAHZqm+XLtqN75LXcFuSmC92ep17cHo1CjDOn+NE-A+kSQ0VAPg@mail.gmail.com>
References: <55E96EEE.4070306@openstack.org> <55E98609.4060708@redhat.com>
 <20150904175517.GC21846@jimrollenhagen.com>
 <CAHZqm+XLtqN75LXcFuSmC92ep17cHo1CjDOn+NE-A+kSQ0VAPg@mail.gmail.com>
Message-ID: <55ED453F.4020503@openstack.org>

Zhipeng Huang wrote:
> Do we have enough logistics for projects that are not in the schedule
> above but also want to have ad hoc sessions at the design summit venue?
> For example like in Paris we usually just grab a table and slap a card
> with project name on it.

Space is very limited (more like Paris than like Vancouver), but we have
a lunch room with roundtables which can be abused (outside lunch) for
ad-hoc discussions.

-- 
Thierry Carrez (ttx)


From sbhou at cn.ibm.com  Mon Sep  7 08:15:30 2015
From: sbhou at cn.ibm.com (Sheng Bo Hou)
Date: Mon, 7 Sep 2015 16:15:30 +0800
Subject: [openstack-dev] [Cinder] The devref for volume migration in Cinder
In-Reply-To: <CAHV77z_QPa+hjMcCmjWrNzZU8hVSQ1xg-3ngnZv6+vATCLSGyQ@mail.gmail.com>
References: <CAHV77z_QPa+hjMcCmjWrNzZU8hVSQ1xg-3ngnZv6+vATCLSGyQ@mail.gmail.com>
Message-ID: <OFB5A47CB9.97BB2D7D-ON48257EB9.002CAFC8-48257EB9.002D5DAB@cn.ibm.com>

Hi everyone interested in volume migration:

I have just drafted the devref for volume migration in Cinder. It covers 
everything I can think of for volume migration.

LINK: https://review.openstack.org/#/c/214941

For folks who have experience with migration:
Please help me check it to see if it is comprehensive and precise. If there 
is anything missing or confusing, please comment on it.

For folks who are not familiar with migration:
Please read the devref from beginning to end to see whether you can grasp 
what volume migration is about, how to use it, how to configure it, etc. 
Check that it is clear and easy to understand.

Thank you very much.

Best wishes,
Vincent Hou (???)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM at IBMCN    E-mail: sbhou at cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/73c4296a/attachment.html>

From rahulsharmaait at gmail.com  Mon Sep  7 08:26:07 2015
From: rahulsharmaait at gmail.com (Rahul Sharma)
Date: Mon, 7 Sep 2015 04:26:07 -0400
Subject: [openstack-dev] [Nova] understanding SSL behavior
Message-ID: <CAAw0FvQPDxf1AKCT+zFEhY7DJ4WvPoeuYE39E5b0CEo8spw2TA@mail.gmail.com>

Hi All,

I am trying to configure the endpoints to communicate over https. I am
trying to debug a particular behavior of the code but am unable to relate
my sequence of actions to that behavior. Kindly guide me through the
scenario described below.

For SSL, I have generated a self-signed CA cert and used it for signing CSR
request of host/controller. I placed the CA cert in the trusted-root
authority of my host and all the services work fine. They are able to talk
with each other over https. I was able to access the url
https://<controller>:8774
from anywhere.

I went ahead and modified the nova.conf and added ssl_ca_file in [DEFAULT]
section.
[DEFAULT]
.......
ssl_ca_file=<path-to-ca-file>
ssl_cert_file=<path-to-cert-file>
ssl_key_file=<path-to-key-file>
.......

Nova services come up fine, but now I am unable to access the url
https://<controller>:8774.
If I remove the ssl_ca_file from nova.conf again, it starts working fine.

Looking at the code, I could see that it's being used in nova/wsgi.py.

if CONF.ssl_ca_file:
    ssl_kwargs['ca_certs'] = ca_file
    ssl_kwargs['cert_reqs'] = ssl.CERT_REQUIRED
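For what it's worth, `cert_reqs = ssl.CERT_REQUIRED` on the server side means the *client* must present a certificate that validates against `ca_certs` (mutual TLS), so plain requests without a client certificate fail the TLS handshake. A minimal illustration with Python's stdlib `ssl` module (not nova's actual wsgi code):

```python
import ssl

# Server-side context analogous to what nova's wsgi server builds when
# ssl_ca_file is set: CERT_REQUIRED here demands a certificate *from the
# connecting client* and verifies it against the CA bundle. Browsers and
# services without client certificates are then rejected during the
# handshake, matching the behaviour described above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED
# In real use you would also load the CA bundle and server keypair:
# ctx.load_verify_locations(cafile="<path-to-ca-file>")
# ctx.load_cert_chain(certfile="<path-to-cert-file>",
#                     keyfile="<path-to-key-file>")
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

So if ssl_ca_file is set, every client talking to nova on port 8774 would need to present its own certificate signed by that CA.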

I am missing something very basic here; can someone please help me
understand the sequence of steps involved and what I need to do to
communicate with the service? The service is running and listening on port
8774, but it looks like I might have to provide something else with the
request. Since various other services will be communicating with nova, do I
need to configure some specific parameter in those services? Any pointers
would be really helpful.

Thanks.

*Rahul Sharma*
*MS in Computer Science, 2016*
College of Computer and Information Science, Northeastern University
Mobile:  801-706-7860
Email: rahulsharmaait at gmail.com
Linkedin: www.linkedin.com/in/rahulsharmaait
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/82864a50/attachment-0001.html>

From scroiset at mirantis.com  Mon Sep  7 08:31:55 2015
From: scroiset at mirantis.com (Swann Croiset)
Date: Mon, 7 Sep 2015 10:31:55 +0200
Subject: [openstack-dev] [Fuel][Plugins] Deployment order with custom role
Message-ID: <CAOmgvhym_UG+V2UaCvbN955Ex7HipnE+2Sw_tOKL_AAnM_CCTg@mail.gmail.com>

Hi fuelers,

We're currently porting nearly all LMA plugins to the new plugin fwk 3.0.0
to leverage custom role capabilities.
That brings a lot of simplifications for node assignment, disk
management, network config, reuse of core tasks and so on, thanks to the fwk.

However, we still need deployment ordering between independent plugins, and
it seems impossible to define the priorities [0] in *deployment_tasks.yaml*.
The only way to preserve deployment order would be to keep *tasks.yaml* too.

So, I'm wondering: is this the recommended solution to address plugin
deployment order with plugin fwk 3.0.0?
And furthermore, will *tasks.yaml* still be supported by the plugin fwk in
the future, or should the fwk evolve by adding priority definitions in
*deployment_tasks.yaml*?

Thanks

[0] https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/2bfacbc0/attachment.html>

From john at johngarbutt.com  Mon Sep  7 08:43:43 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 7 Sep 2015 09:43:43 +0100
Subject: [openstack-dev] [all] Mitaka Design Summit - Proposed slot
	allocation
In-Reply-To: <20150904175517.GC21846@jimrollenhagen.com>
References: <55E96EEE.4070306@openstack.org> <55E98609.4060708@redhat.com>
 <20150904175517.GC21846@jimrollenhagen.com>
Message-ID: <CABib2_pSWxJvWczQJ=YJ7UBZbyJ88QLahNFou1xyKR-zmDqo=A@mail.gmail.com>

On 4 September 2015 at 18:55, Jim Rollenhagen <jim at jimrollenhagen.com> wrote:
> On Fri, Sep 04, 2015 at 01:52:41PM +0200, Dmitry Tantsur wrote:
>> On 09/04/2015 12:14 PM, Thierry Carrez wrote:
>> >Hi PTLs,
>> >
>> >Here is the proposed slot allocation for every "big tent" project team
>> >at the Mitaka Design Summit in Tokyo. This is based on the requests the
>> >liberty PTLs have made, space availability and project activity &
>> >collaboration needs.
>> >
>> >We have a lot less space (and time slots) in Tokyo compared to
>> >Vancouver, so we were unable to give every team what they wanted. In
>> >particular, there were far more workroom requests than we have
>> >available, so we had to cut down on those quite heavily. Please note
>> >that we'll have a large lunch room with roundtables inside the Design
>> >Summit space that can easily be abused (outside of lunch) as space for
>> >extra discussions.
>> >
>> >Here is the allocation:
>> >
>> >| fb: fishbowl 40-min slots
>> >| wr: workroom 40-min slots
>> >| cm: Friday contributors meetup
>> >| | day: full day, morn: only morning, aft: only afternoon
>> >
>> >Neutron: 12fb, cm:day
>> >Nova: 14fb, cm:day
>> >Cinder: 5fb, 4wr, cm:day
>> >Horizon: 2fb, 7wr, cm:day
>> >Heat: 4fb, 8wr, cm:morn
>> >Keystone: 7fb, 3wr, cm:day
>> >Ironic: 4fb, 4wr, cm:morn
>> >Oslo: 3fb, 5wr
>> >Rally: 1fb, 2wr
>> >Kolla: 3fb, 5wr, cm:aft
>> >Ceilometer: 2fb, 7wr, cm:morn
>> >TripleO: 2fb, 1wr, cm:full
>> >Sahara: 2fb, 5wr, cm:aft
>> >Murano: 2wr, cm:full
>> >Glance: 3fb, 5wr, cm:full
>> >Manila: 2fb, 4wr, cm:morn
>> >Magnum: 5fb, 5wr, cm:full
>> >Swift: 2fb, 12wr, cm:full
>> >Trove: 2fb, 4wr, cm:aft
>> >Barbican: 2fb, 6wr, cm:aft
>> >Designate: 1fb, 4wr, cm:aft
>> >OpenStackClient: 1fb, 1wr, cm:morn
>> >Mistral: 1fb, 3wr
>> >Zaqar: 1fb, 3wr
>> >Congress: 3wr
>> >Cue: 1fb, 1wr
>> >Solum: 1fb
>> >Searchlight: 1fb, 1wr
>> >MagnetoDB: won't be present
>> >
>> >Infrastructure: 3fb, 4wr (shared meetup with Ironic and QA)
>> >PuppetOpenStack: 2fb, 3wr
>> >Documentation: 2fb, 4wr, cm:morn
>> >Quality Assurance: 4fb, 4wr, cm:full
>> >OpenStackAnsible: 2fb, 1wr, cm:aft
>> >Release management: 1fb, 1wr (shared meetup with QA)
>> >Security: 2fb, 2wr
>> >ChefOpenstack: will camp in the lunch room all week
>> >App catalog: 1fb, 1wr
>> >I18n: cm:morn
>> >OpenStack UX: 2wr
>> >Packaging-deb: 2wr
>> >Refstack: 2wr
>> >RpmPackaging: 1fb, 1wr
>> >
>> >We'll start working on laying out those sessions over the available
>> >rooms and time slots. If you have constraints (I already know
>> >searchlight wants to avoid conflicts with Horizon, Kolla with Magnum,
>> >Manila with Cinder, Solum with Magnum...) please let me know, we'll do
>> >our best to limit them.
>> >
>>
>> Would be cool to avoid conflicts between Ironic and TripleO.
>
> I'd also like to save room for one Ironic/Nova session, and one
> Ironic/Neutron session.

I am thinking we could use a Nova slot for the Ironic + Nova session,
picking a slot when there is nothing scheduled for ironic (or
TripleO).

This will all be part of our regular Nova session selection process,
but honestly I hope we can schedule that one.

Thanks,
John

PS
This is to support reserving cross-project slots for topics that involve
more than two projects.


From john at johngarbutt.com  Mon Sep  7 08:49:13 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 7 Sep 2015 09:49:13 +0100
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>
References: <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <CAO_F6JO+ZnpW61XoipHu-hxsa6TBStiynFO0Kh+GFvMNN8Ni0g@mail.gmail.com>
 <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>
Message-ID: <CABib2_pMOwfr4M6Oefn8B8fh9+U3zq1boq1ewf64iuLyuF86og@mail.gmail.com>

On 7 September 2015 at 03:34, Sean M. Collins <sean at coreitpro.com> wrote:
> On Sun, Sep 06, 2015 at 04:25:43PM EDT, Kevin Benton wrote:
>> So it's been pointed out that http://169.254.169.254/openstack is completely
>> OpenStack-invented. I don't quite understand how that's not violating the
>> contract you said we have with end users about EC2 compatibility under the
>> restriction of 'no new stuff'.
>
> I think that is a violation. But I don't think breaking the contract once
> allows us to make more changes, on the grounds that a second infraction is
> less significant.

I see the OpenStack part of the metadata service as a different
interface that happens to be accessed in a similar way to EC2.

>> If we added an IPv6 endpoint that the metadata service listens on, it would
>> just be another place that non cloud-init clients don't know how to talk
>> to. It's not going to break our compatibility with any clients that connect
>> to the IPv4 address.
>
> No, but if Amazon were to make a decision about how to implement IPv6 in
> EC2 and how to make the Metadata API service work with IPv6 we'd be
> supporting two implementations - the one we came up with and one for
> supporting the way Amazon implemented it.

Yes, that's the cost of moving first.
Honestly, I would assume we end up implementing two access routes, if
we support IPv6 first.

Thanks,
John


From flavio at redhat.com  Mon Sep  7 08:50:05 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Mon, 7 Sep 2015 10:50:05 +0200
Subject: [openstack-dev] [zaqar] Mitaka summit brainstorm
Message-ID: <20150907085005.GA6373@redhat.com>

Greetings,

A couple of weeks ago, the Zaqar team started brainstorming about
possible topics for the Mitaka summit. Now it is time to start
collecting those proposals in a single place[0] so that we can start
voting and selecting topics.

The next PTL will likely use this etherpad as a reference to create a
schedule for the summit.

Happy Brainstorming,
Flavio

[0] https://etherpad.openstack.org/p/Mitaka-Zaqar

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/dd8de1ae/attachment.pgp>

From john at johngarbutt.com  Mon Sep  7 08:58:34 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 7 Sep 2015 09:58:34 +0100
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <782B38CA-9629-48D0-BC4F-79F15322C494@jimrollenhagen.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <55EC6D3A.8050604@inaugust.com>
 <782B38CA-9629-48D0-BC4F-79F15322C494@jimrollenhagen.com>
Message-ID: <CABib2_r3gzUiUvfvnwNxXFX+9Cod1NZ6R9-0WyURKZmE-Pzjew@mail.gmail.com>

On 7 September 2015 at 01:02, Jim Rollenhagen <jim at jimrollenhagen.com> wrote:
>> On Sep 6, 2015, at 09:43, Monty Taylor <mordred at inaugust.com> wrote:
>>> On 09/05/2015 06:19 PM, Sean M. Collins wrote:
>>>> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
>>>> Right, it depends on your perspective of who 'owns' the API. Is it
>>>> cloud-init or EC2?
>>>>
>>>> At this point I would argue that cloud-init is in control because it would
>>>> be a large undertaking to switch all of the AMI's on Amazon to something
>>>> else. However, I know Sean disagrees with me on this point so I'll let him
>>>> reply here.
>>>
>>>
>>> Here's my take:
>>>
>>> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
>>> in both the Neutron and Nova projects should implement all the details of the
>>> Metadata API that is documented at:
>>>
>>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>>>
>>> This means that this is a compatibility layer that OpenStack has
>>> implemented so that users can use appliances, applications, and
>>> operating system images in both Amazon EC2 and an OpenStack environment.
>>>
>>> Yes, we can make changes to cloud-init. However, there is no guarantee
>>> that all users of the Metadata API are exclusively using cloud-init as
>>> their client. It is highly unlikely that people are rolling their own
>>> Metadata API clients, but it's a contract we've made with users. This
>>> includes transport level details like the IP address that the service
>>> listens on.
>>>
>>> The Metadata API is an established API that Amazon introduced years ago,
>>> and we shouldn't be "improving" APIs that we don't control. If Amazon
>>> were to introduce IPv6 support in the Metadata API tomorrow, we would
>>> naturally implement it exactly the way they implemented it in EC2. We'd
>>> honor the contract that Amazon made with its users, in our Metadata API,
>>> since it is a compatibility layer.
>>>
>>> However, since they haven't defined transport level details of the
>>> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
>>> solution. It is not our API.
>>>
>>> The nice thing about config-drive is that we've created a new mechanism
>>> for bootstrapping instances - by replacing the transport level details
>>> of the API. Rather than being a link-local address that instances access
>>> over HTTP, it's a device that guests can mount and read. The actual
>>> contents of the drive may have a similar schema as the Metadata API, but
>>> I think at this point we've made enough of a differentiation between the
>>> EC2 Metadata API and config-drive that I believe the contents of the
>>> actual drive that the instance mounts can be changed without breaking
>>> user expectations - since config-drive was developed by the OpenStack
>>> community. The point being that we call it "config-drive" in
>>> conversation and our docs. Users understand that config-drive is a
>>> different feature.
>>
>> Another great part about config-drive is that it's scalable. At infra's application scale, we take pains to disable anything in our images that might want to contact the metadata API because we're essentially a DDoS on it.
>
> So, I tend to think a simple API service like this should never be hard to scale. Put a bunch of hosts behind a load balancer, boom, done. Even 1000 requests/s shouldn't be hard, though it may require many hosts, and that's far beyond what infra would hit today.
>
> The one problem I have with config-drive is that it is static. I'd love for systems like cloud-init, glean, etc, to be able to see changes to mounted disks, attached networks, etc. Attaching things after the fact isn't uncommon, and making the user configure the thing is a terrible experience. :(

While I would love to avoid the complexity of the metadata service,
its dynamic nature is the key bit you lose with config drive.

For example, our mechanism for passwords (sure, I wish everyone used
keys) uses the openstack metadata service as a two-way communication
system. Add VIF is probably a better example.
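To make the static/dynamic distinction concrete, here is a tiny sketch (with hypothetical sample data) of how a consumer like cloud-init or glean reads config-drive metadata. The JSON is written once at boot into openstack/latest/meta_data.json on the drive, so later changes such as newly attached VIFs or disks never show up in it:

```python
import json

# Hypothetical metadata blob as found at openstack/latest/meta_data.json
# on a mounted config drive (label "config-2"). It is generated once at
# boot and never refreshed -- the "static" limitation discussed above.
SAMPLE = '{"uuid": "aaaa-bbbb", "name": "vm-1", "meta": {"role": "web"}}'

def read_metadata(raw):
    """Parse the one-shot metadata a config drive provides."""
    md = json.loads(raw)
    return md["name"], md["meta"].get("role")

print(read_metadata(SAMPLE))  # ('vm-1', 'web')
```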

Thanks,
John

PS
If we get the per instance keystone token 'injection' working
correctly using purely config drive, then instances could just
authenticate to the regular API, avoiding the need for the metadata
service in its current form, but that's probably a red herring at this
point in time.

>> config-drive being local to the hypervisor host makes it MUCH more stable at scale.
>>
>> cloud-init supports config-drive
>>
>> If it were up to me, nobody would be enabling the metadata API in new deployments.
>>
>> I totally agree that we should not make changes in the metadata API.
>>
>>> I've had this same conversation about the Security Group API that we
>>> have. We've named it the same thing as the Amazon API, but then went and
>>> made all the fields different, inexplicably. Thankfully, it's just the
>>> names of the fields, rather than being huge conceptual changes.
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>>>
>>> Basically, I believe that OpenStack should create APIs that are
>>> community driven and owned, and that we should only emulate
>>> non-community APIs where appropriate, and explicitly state that we only
>>> are emulating them. Putting improvements in APIs that came from
>>> somewhere else, instead of creating new OpenStack branded APIs is a lost
>>> opportunity to differentiate OpenStack from other projects, as well as
>>> Amazon AWS.
>>>
>>> Thanks for reading, and have a great holiday.
>>
>> I could not possibly agree more if our brains were physically fused.
>>
>


From ikalnitsky at mirantis.com  Mon Sep  7 09:12:42 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Mon, 7 Sep 2015 12:12:42 +0300
Subject: [openstack-dev] [Fuel][Plugins] Deployment order with custom
	role
In-Reply-To: <CAOmgvhym_UG+V2UaCvbN955Ex7HipnE+2Sw_tOKL_AAnM_CCTg@mail.gmail.com>
References: <CAOmgvhym_UG+V2UaCvbN955Ex7HipnE+2Sw_tOKL_AAnM_CCTg@mail.gmail.com>
Message-ID: <CACo6NWD4aj0+DnB3+cT=pVBQNUYchn2aDa3qTiAsdRf3YLNT6A@mail.gmail.com>

Hi Swann,

> However, we still need deployment order between independent
> plugins and it seems impossible to define the priorities

There's no such thing as priorities for now. Perhaps we could introduce
some kind of anchors instead of priorities, but that's another story.

Currently the only way to synchronize two plugins is to make one know
about the other. That means you need to properly set up the "requires"
field:

    - id: my-plugin-b-task
      type: puppet
      role: [my-plugin-b-role]
      required_for: [post_deployment_end]
      requires: [post_deployment_start, PLUGIN-A-TASK]
      parameters:
        puppet_manifest: some-puppet.pp
        puppet_modules: /etc/puppet/modules
        timeout: 3600
        cwd: /
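For completeness, PLUGIN-A-TASK above is a placeholder for a task id that the other plugin defines in its own deployment_tasks.yaml; a matching (purely illustrative) definition on plugin A's side might look like:

```yaml
# Hypothetical plugin A task that plugin B's "requires" points at;
# PLUGIN-A-TASK in the snippet above would be replaced by this id.
    - id: my-plugin-a-task
      type: puppet
      role: [my-plugin-a-role]
      required_for: [post_deployment_end]
      requires: [post_deployment_start]
      parameters:
        puppet_manifest: plugin-a.pp
        puppet_modules: /etc/puppet/modules
        timeout: 3600
        cwd: /
```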

Thanks,
Igor

On Mon, Sep 7, 2015 at 11:31 AM, Swann Croiset <scroiset at mirantis.com> wrote:
> Hi fuelers,
>
> We're currently porting nearly all LMA plugins to the new plugin fwk 3.0.0
> to leverage custom role capabilities.
> That brings up a lot of simplifications for node assignment, disk
> management, network config, reuse core tasks and so on .. thanks to the fwk.
>
> However, we still need deployment order between independent plugins and it
> seems impossible to define the priorities [0] in deployment_tasks.yaml,
> The only way to preserve deployment order would be to keep tasks.yaml too.
>
> So, I'm wondering if this is the recommended solution to address plugins
> order deployment with plugin fwk 3.0.0?
> And furthermore if tasks.yaml will still be supported in future by the
> plugin fwk or if the fwk shouldn't evolve  by adding priorities definitions
> in deployment_tasks.yaml ?
>
> Thanks
>
> [0] https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
>
>


From joehuang at huawei.com  Mon Sep  7 09:26:08 2015
From: joehuang at huawei.com (joehuang)
Date: Mon, 7 Sep 2015 09:26:08 +0000
Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware &
 multiple keystone endpoints
In-Reply-To: <1700909195.18217744.1441603432407.JavaMail.zimbra@redhat.com>
References: <55D5CABC.1020808@ericsson.com>
 <1512600172.12898075.1440488257977.JavaMail.zimbra@redhat.com>
 <55DC3005.9000306@ericsson.com>
 <5E7A3D1BF5FD014E86E5F971CF446EFF542F516C@szxema505-mbx.china.huawei.com>
 <1700909195.18217744.1441603432407.JavaMail.zimbra@redhat.com>
Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF542F546A@szxema505-mbx.china.huawei.com>

Hello, Jamie, 

Thanks for your guidance. If the item "include_service_catalog" is not configured, then the region name can be used to specify the region for the token validation.

But if I configure "include_service_catalog = False", then the token validation will be redirected to the wrong keystone server.

In a multi-site cloud scenario there are dozens of endpoints, so it's reasonable to set "include_service_catalog = False".

The log is attached here. 172.17.0.135:35357 and 172.17.0.36:35357 are KeyStone servers; our intention is to use the local KeyStone server 172.17.0.135 for token validation, but it forwards the request to 172.17.0.36, the KeyStone server in another region.

It seems that overriding the endpoint is a better choice, just like what I did in https://docs.google.com/document/d/1258g0VTC4wktevo2ymS7SaNhDeY8-S2QWY45them7ZM/edit (I just borrowed the configuration item auth_uri; there are so many similarly named configuration items, it's confusing).

----------------------------------------------------------- 

2015-09-07 09:02:16.257 242 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://172.17.0.135:35357 -H "Accept: application/json" -H "User-Agent
: python-keystoneclient" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-07 09:02:16.280 242 DEBUG keystoneclient.session [-] RESP: [300] content-length: 595 vary: X-Auth-Token connection: keep-alive date: Mon, 07 Sep 2
015 09:02:16 GMT content-type: application/json x-distribution: Ubuntu
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "applicat
ion/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://172.17.0.135:35357/v3/", "rel": "self"}]}, {"status": "stable", "updated":
 "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"
href": "http://172.17.0.135:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
2015-09-07 09:02:16.280 242 DEBUG keystoneclient.auth.identity.v3 [-] Making authentication request to http://172.17.0.135:35357/v3/auth/tokens get_auth_r
ef /usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v3.py:125
2015-09-07 09:02:19.382 242 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://172.17.0.36:35357 -H "Accept: application/json" -H "User-Agent:
 python-keystoneclient" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-07 09:02:19.386 242 DEBUG keystoneclient.session [-] RESP: [300] content-length: 593 vary: X-Auth-Token connection: keep-alive date: Mon, 07 Sep 2
015 09:02:19 GMT content-type: application/json x-distribution: Ubuntu
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "applicat
ion/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://172.17.0.36:35357/v3/", "rel": "self"}]}, {"status": "stable", "updated":
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"h
ref": "http://172.17.0.36:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
2015-09-07 09:02:19.387 242 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://172.17.0.36:35357/v3/auth/tokens?nocatalog -H "X-Subject-Token:
 {SHA1}6e306214e70d1c9547b2d22d6962cefb6354164f" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}f9014aa76c16
2b0db646b325daf813e258c8e2a5" _http_log_request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-07 09:02:19.492 242 DEBUG keystoneclient.session [-] RESP: [200] content-length: 491 x-subject-token: {SHA1}6e306214e70d1c9547b2d22d6962cefb635416
4f vary: X-Auth-Token x-distribution: Ubuntu connection: keep-alive date: Mon, 07 Sep 2015 09:02:19 GMT content-type: application/json x-openstack-request
-id: req-c752beb4-c87b-4812-93dc-b2ea00fbf7b1
RESP BODY: {"token": {"methods": ["password"], "roles": [{"id": "a4935779c40f45d3ba7a8eeada0f7714", "name": "admin"}], "expires_at": "2015-09-07T09:02:20.
000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "79185427679a44019d919ce34112c971", "name": "admin"}, "extras": {}, "user": {"
domain": {"id": "default", "name": "Default"}, "id": "3c94437bbd0a455d938ed1d95c7049d1", "name": "osadmin"}, "audit_ids": ["t9QR8DxdSXiQjuJW-nYxfw"], "iss
ued_at": "2015-09-07T08:02:20.000000Z"}}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
2015-09-07 09:02:19.499 242 DEBUG oslo_policy.openstack.common.fileutils [req-e905e497-2f45-4bb2-a384-ee4f97a23cec 3c94437bbd0a455d938ed1d95c7049d1 791854
27679a44019d919ce34112c971 - - -] Reloading cached file /etc/glance/policy.json read_cached_file /usr/lib/python2.7/dist-packages/oslo_policy/openstack/co
mmon/fileutils.py:64
2015-09-07 09:02:19.499 242 DEBUG oslo_policy.policy [req-e905e497-2f45-4bb2-a384-ee4f97a23cec 3c94437bbd0a455d938ed1d95c7049d1 79185427679a44019d919ce341
12c971 - - -] Reloaded policy file: /etc/glance/policy.json _load_policy_file /usr/lib/python2.7/dist-packages/oslo_policy/policy.py:403
2015-09-07 09:02:19.501 242 DEBUG glance.common.client [req-e905e497-2f45-4bb2-a384-ee4f97a23cec 3c94437bbd0a455d938ed1d95c7049d1 79185427679a44019d919ce3
4112c971 - - -] Constructed URL: http://172.17.0.144:9191/images/detail?sort_key=name&sort_dir=asc&limit=20 _construct_url /usr/lib/python2.7/dist-package
s/glance/common/client.py:401

Best Regards
Chaoyi Huang ( Joe Huang )


-----Original Message-----
From: Jamie Lennox [mailto:jamielennox at redhat.com] 
Sent: Monday, September 07, 2015 1:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple keystone endpoints



----- Original Message -----
> From: "joehuang" <joehuang at huawei.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev at lists.openstack.org>
> Sent: Sunday, 6 September, 2015 7:28:04 PM
> Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & 
> multiple keystone endpoints
> 
> Hello, Jamie and Hans,
> 
> The patch " Allow specifying a region name to auth_token "
> https://review.openstack.org/#/c/216579 has just been merged.
> 
> But unfortunately, when I modify the source code as this patch did in
> the multisite cloud with Fernet tokens, the issue is still there, and
> requests are routed to an incorrect endpoint.
> 
> I also checked the region_name configuration in the source code; it's correct.
> 
> The issue mentioned in the bug report is not addressed yet:
> https://bugs.launchpad.net/keystonemiddleware/+bug/1488347
> 
> Is there anyone who tested it successfully in your environment?

Hey Joe, 

The way that this patch is implemented requires you to have configured the auth_token middleware with an auth plugin; see [1] for an example. I should have called this out better in the help for the config option. Making the old admin_user etc. options region aware would be a fairly big change, because in that case the URL configured as identity_uri is always used for all keystone operations.

Can you try and configure with the auth plugin options and see if the regions work after that? 


Jamie

[1] http://www.jamielennox.net/blog/2015/02/23/v3-authentication-with-auth-token-middleware/
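
[For readers following the thread: a minimal sketch of what such a plugin-based configuration for glance-api might look like. This is not authoritative — it assumes the password plugin, the hosts, credentials, and region value are placeholders, and the region_name option is the one added by the review linked above.]

```ini
[keystone_authtoken]
# Use an auth plugin instead of the legacy admin_* / identity_uri options.
auth_plugin = password
auth_url = http://keystone.example.com:35357
username = glance
password = SECRET
project_name = services
user_domain_id = default
project_domain_id = default
# With https://review.openstack.org/#/c/216579 merged, validate tokens
# against the region-local identity endpoint.
region_name = RegionOne
```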



> The log of Glance API, the request was redirected to 
> http://172.17.0.95:35357, but this address is not a KeyStone endpoint.
> (http://172.17.0.98:35357 and http://172.17.0.41:35357 are correct 
> KeyStone endpoints ) //////////////////////////////////////////
> 2015-09-06 07:50:43.447 194 DEBUG keystoneclient.session [-] REQ: curl 
> -g -i -X GET http://172.17.0.98:35357 -H "Accept: application/json" -H
> "User-Agent: python-keystoneclient" _http_log_request
> /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
> 2015-09-06 07:50:43.468 194 DEBUG keystoneclient.session [-] RESP: 
> [300]
> content-length: 593 vary: X-Auth-Token connection: keep-alive date: 
> Sun, 06 Sep 2015 07:50:43 GMT content-type: application/json 
> x-distribution: Ubuntu RESP BODY: {"versions": {"values": [{"status": "stable", "updated":
> "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type":
> "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links":
> [{"href": "http://172.17.0.98:35357/v3/", "rel": "self"}]}, {"status":
> "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base":
> "application/json", "type":
> "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links":
> [{"href": "http://172.17.0.98:35357/v2.0/", "rel": "self"}, {"href":
> "http://docs.openstack.org/", "type": "text/html", "rel":
> "describedby"}]}]}}
>  _http_log_response
>  /usr/lib/python2.7/dist-packages/keystoneclient/session.py:223
> 2015-09-06 07:50:43.469 194 DEBUG keystoneclient.auth.identity.v3 [-] 
> Making authentication request to 
> http://172.17.0.98:35357/v3/auth/tokens
> get_auth_ref
> /usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v3.py:12
> 5
> 2015-09-06 07:50:43.574 194 DEBUG keystoneclient.session [-] REQ: curl 
> -g -i -X GET http://172.17.0.95:35357 -H "Accept: application/json" -H
> "User-Agent: python-keystoneclient" _http_log_request
> /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
> 2015-09-06 07:50:46.576 194 WARNING keystoneclient.auth.identity.base 
> [-] Failed to contact the endpoint at http://172.17.0.95:35357 for discovery.
> Fallback to using that endpoint as the base url.
> 2015-09-06 07:50:46.576 194 DEBUG keystoneclient.session [-] REQ: curl 
> -g -i -X GET http://172.17.0.95:35357/auth/tokens -H "X-Subject-Token:
> {SHA1}640964e1f8716ecbb10ca3d8b5b08c8e7abfac1d" -H "User-Agent:
> python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
> {SHA1}386777062718e0992cc818780e3ec7fa0671d8e9" _http_log_request
> /usr/lib/python2.7/dist-packages/keystoneclient/session.py:195
> 2015-09-06 07:50:49.576 194 INFO keystoneclient.session [-] Failure: 
> Unable to establish connection to 
> http://172.17.0.95:35357/auth/tokens. Retrying in 0.5s.
> 2015-09-06 07:50:52.576 194 INFO keystoneclient.session [-] Failure: 
> Unable to establish connection to 
> http://172.17.0.95:35357/auth/tokens. Retrying in 1.0s.
> 2015-09-06 07:50:55.576 194 INFO keystoneclient.session [-] Failure: 
> Unable to establish connection to 
> http://172.17.0.95:35357/auth/tokens. Retrying in 2.0s.
> 2015-09-06 07:50:58.576 194 WARNING keystonemiddleware.auth_token [-] 
> Authorization failed for token
> 
> 
> Best Regards
> Chaoyi Huang ( Joe Huang )
> 
> 
> -----Original Message-----
> From: Hans Feldt [mailto:hans.feldt at ericsson.com]
> Sent: Tuesday, August 25, 2015 5:06 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Keystone][Glance] keystonemiddleware & 
> multiple keystone endpoints
> 
> 
> 
> On 2015-08-25 09:37, Jamie Lennox wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Hans Feldt" <hans.feldt at ericsson.com>
> >> To: openstack-dev at lists.openstack.org
> >> Sent: Thursday, August 20, 2015 10:40:28 PM
> >> Subject: [openstack-dev] [Keystone][Glance] keystonemiddleware & multiple
> >> 	keystone endpoints
> >>
> >> How do you configure/use keystonemiddleware for a specific identity 
> >> endpoint among several?
> >>
> >> In an OPNFV multi region prototype I have keystone endpoints per 
> >> region. I would like keystonemiddleware (in context of glance-api) 
> >> to use the local keystone for performing user token validation. 
> >> Instead keystonemiddleware seems to use the first listed keystone 
> >> endpoint in the service catalog (which could be wrong/non-optimal 
> >> in most regions).
> >>
> >> I found this closed, related bug:
> >> https://bugs.launchpad.net/python-keystoneclient/+bug/1147530
> >
> > Hey,
> >
> > There's two points to this.
> >
> > * If you are using an auth plugin then you're right it will just 
> > pick the first endpoint. You can look at project specific 
> > endpoints[1] so that there is only one keystone endpoint returned for the services project.
> > I've also just added a review for this feature[2].
> 
> I am not.
> 
> > * If you're not using an auth plugin (so the admin_X options) then 
> > keystone will always use the endpoint that is configured in the 
> > options (identity_uri).
> 
> Yes for getting its own admin/service token. But for later user token 
> validation it seems to pick the first identity service in the stored 
> (?) service catalog.
> 
> By patching keystonemiddleware's _create_identity_server and passing an
> endpoint_override parameter to the Adapter constructor, I can get
> it to use the local keystone for token validation. I am looking for an
> official way of achieving the same.
> 
> Thanks,
> Hans
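
[A self-contained toy illustration of the behaviour Hans describes: without an override, the client uses the first identity endpoint from the stored service catalog, which may be in the wrong region; endpoint_override pins token validation to the region-local keystone. The Adapter class below is a hypothetical stand-in, not the real keystoneclient.adapter.Adapter, though that class does accept an endpoint_override argument.]

```python
class Adapter:
    """Toy stand-in for keystoneclient.adapter.Adapter."""

    def __init__(self, catalog_endpoints, endpoint_override=None):
        self._catalog = list(catalog_endpoints)
        self._override = endpoint_override

    def get_endpoint(self):
        if self._override:
            return self._override
        # Default behaviour: the first identity endpoint in the catalog
        # wins, which may be non-local in a multi-region deployment.
        return self._catalog[0]


catalog = ["http://keystone-region1:35357", "http://keystone-region2:35357"]
default_url = Adapter(catalog).get_endpoint()
local_url = Adapter(
    catalog, endpoint_override="http://keystone-region2:35357").get_endpoint()
```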
> 
> >
> > Hope that helps,
> >
> > Jamie
> >
> >
> > [1]
> > https://github.com/openstack/keystone-specs/blob/master/specs/juno/e
> > nd point-group-filter.rst [2] 
> > https://review.openstack.org/#/c/216579
> >
> >> Thanks,
> >> Hans
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> 
> 

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From tomer.shtilman at alcatel-lucent.com  Mon Sep  7 09:27:07 2015
From: tomer.shtilman at alcatel-lucent.com (SHTILMAN, Tomer (Tomer))
Date: Mon, 7 Sep 2015 09:27:07 +0000
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
Message-ID: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>

Hi
Currently in Heat we have the ability to deploy a remote stack in a different region using OS::Heat::Stack and region_name in the context.

My question is about multiple nodes, with separate keystones, using keystone federation.
Is there an option in a HOT template to send a stack to a different node, using the keystone federation feature?
For example, if I have two nodes (N1 and N2) with separate keystones (and keystone federation), I would like to deploy a stack on N1 with a nested stack that deploys on N2, similar to what we have now for regions.
Thanks
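
[For reference, the existing region-based form the question builds on looks roughly like the sketch below. The resource name, nested template, and parameter are illustrative only; whether a similar context key could target a federated node is exactly the open question.]

```yaml
heat_template_version: 2015-04-30

resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      context:
        # today: another region sharing the same keystone
        region_name: RegionTwo
      template: { get_file: nested.yaml }
      parameters:
        flavor: m1.small
```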
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/4b2f10b9/attachment.html>

From john at johngarbutt.com  Mon Sep  7 09:54:49 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 7 Sep 2015 10:54:49 +0100
Subject: [openstack-dev] [nova] Bug importance
In-Reply-To: <D212198D.BBF16%gkotton@vmware.com>
References: <D21204B9.BBED2%gkotton@vmware.com>
 <CANw6fcHqyza2Z4hDZWdFr9_3V=FbVpbsvBNOLFprW_9F+Ma8ow@mail.gmail.com>
 <D212198D.BBF16%gkotton@vmware.com>
Message-ID: <CABib2_obj4rLvESZkPbP4qMSumBpbLyD5Xwh9YoVO1fd3vGi0A@mail.gmail.com>

I have a feeling Launchpad asked me to renew my membership of nova-bug
recently, and said it would drop me from the list if I didn't do that.

Not sure if that's intentional, to keep the list fresh? It's the first I
knew about it.

Unsure, but that could be related?

Thanks,
John

On 6 September 2015 at 14:25, Gary Kotton <gkotton at vmware.com> wrote:
> That works.
> Thanks!
>
> From: "davanum at gmail.com" <davanum at gmail.com>
> Reply-To: OpenStack List <openstack-dev at lists.openstack.org>
> Date: Sunday, September 6, 2015 at 4:10 PM
> To: OpenStack List <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [nova] Bug importance
>
> Gary,
>
> Not sure what changed...
>
> On this page (https://bugs.launchpad.net/nova/) on the right hand side, do
> you see "Bug Supervisor" set to "Nova Bug Team"?  I believe "Nova Bug Team"
> is open and you can add yourself, so if you do not see yourself in that
> group, can you please add it and try?
>
> -- Dims
>
> On Sun, Sep 6, 2015 at 4:56 AM, Gary Kotton <gkotton at vmware.com> wrote:
>>
>> Hi,
>> In the past I was able to set the importance of a bug. Now I am unable to
>> do this. Has the policy changed? Can someone please clarify? If the policy
>> has changed, who is responsible for deciding the priority of a bug?
>> Thanks
>> Gary
>>
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
>


From vgridnev at mirantis.com  Mon Sep  7 10:48:35 2015
From: vgridnev at mirantis.com (Vitaly Gridnev)
Date: Mon, 7 Sep 2015 13:48:35 +0300
Subject: [openstack-dev] [sahara] FFE request for scheduler and suspend
 EDP job for sahara
In-Reply-To: <CAMfz_LOZmrju2eRrQ1ASCK2yjyxa-UhuV=uv3SRrtKkoPDP-bQ@mail.gmail.com>
References: <CAMfz_LOZmrju2eRrQ1ASCK2yjyxa-UhuV=uv3SRrtKkoPDP-bQ@mail.gmail.com>
Message-ID: <CA+O3VAivX4To4MQnAwP-fj2iX2hb3dsUj7rP3zMh9dRyPHFeHA@mail.gmail.com>

Hey!

From my point of view, we definitely should not give an FFE for the
add-suspend-resume-ability-for-edp-jobs
<https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs>
spec, because the client side for this change is not included in the
official Liberty release.

By the way, I am not sure about an FFE for enable-scheduled-edp-jobs
<https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs>,
because the progress of this blueprint is not clear. Its implementation
consists of 2 patch sets, and one of them is marked as Work In Progress.


On Sun, Sep 6, 2015 at 7:18 PM, lu jander <juvenboy1987 at gmail.com> wrote:

> Hi guys,
>
> I would like to request an FFE for the scheduler EDP job and the suspend EDP
> job for sahara. These patches have been reviewed for a long time, with lots
> of patch sets.
>
> Blueprint:
>
> (1)
> https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs
> (2)
> https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
>
>
> Spec:
>
> (1) https://review.openstack.org/#/c/175719/
> (2) https://review.openstack.org/#/c/198264/
>
>
> Patch:
>
> (1) https://review.openstack.org/#/c/182310/
> (2) https://review.openstack.org/#/c/201448/
>
>
>
>


-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/3920734e/attachment.html>

From svilgelm at mirantis.com  Mon Sep  7 10:48:34 2015
From: svilgelm at mirantis.com (Sergey Vilgelm)
Date: Mon, 7 Sep 2015 13:48:34 +0300
Subject: [openstack-dev] [nova][oslo][policy] oslo.policy adoption in
	Nova.
In-Reply-To: <CAPTR=Uhx0WHHNJz+JKR1ftM3aRPXBB-O0MuSn5xeT0owJKRXaA@mail.gmail.com>
References: <CANw6fcHK26tJJq10Ggum+SxUR+0HDPSyHdg6w6ubXCEYzijfLA@mail.gmail.com>
 <1438593167-sup-5001@lrrr.local>
 <CANw6fcHyAsEJ3OF7LxgNrPCeyfAPH96Htv=8vz=HQez6b_3Fjg@mail.gmail.com>
 <1438600138-sup-8453@lrrr.local>
 <CAPTR=Uj8rmeYK=eG_WXxrxZQn1GikEdmWOmhBJHcY0a8Sn+G1A@mail.gmail.com>
 <1438625471-sup-8982@lrrr.local>
 <CAPTR=UgOkC2zdZT675myaoGMZ9hwEpMdhsj4SJ2JyL1YdKxzeg@mail.gmail.com>
 <1438631011-sup-3247@lrrr.local>
 <B59DD537-9577-4731-A945-DAF59A4600CB@gmail.com>
 <1438632980-sup-9565@lrrr.local> <1438637056-sup-1576@lrrr.local>
 <CAPTR=Uhx0WHHNJz+JKR1ftM3aRPXBB-O0MuSn5xeT0owJKRXaA@mail.gmail.com>
Message-ID: <CA6215BB-8AA8-46BF-BD85-061077DA80C0@mirantis.com>

Hi nova-team,

Jeffrey Zhang has updated his patch[1].
Dan Smith, Could you remove -2?

[1] https://review.openstack.org/#/c/198065 <https://review.openstack.org/#/c/198065>


> On Aug 20, 2015, at 17:26, Sergey Vilgelm <svilgelm at mirantis.com> wrote:
> 
> Nova-cores,
> Do you have any decision about the patch: https://review.openstack.org/#/c/198065/ ?
> Dan Smith, Could you remove -2?
> Jeffrey Zhang, What is your opinion?
> 
> On Tue, Aug 4, 2015 at 12:26 AM, Doug Hellmann <doug at doughellmann.com> wrote:
> Excerpts from Doug Hellmann's message of 2015-08-03 16:19:31 -0400:
> > Excerpts from Morgan Fainberg's message of 2015-08-04 06:05:56 +1000:
> > >
> > > > On Aug 4, 2015, at 05:49, Doug Hellmann <doug at doughellmann.com> wrote:
> > > >
> > > > Excerpts from Sergey Vilgelm's message of 2015-08-03 22:11:50 +0300:
> > > >>> On Mon, Aug 3, 2015 at 9:37 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> > > >>>
> > > >>> Making that function public may be the most expedient fix, but the
> > > >>> parser was made private for a reason, so before we expose it we
> > > >>> should understand why, and if there are alternatives (such as
> > > >>> creating a fixture in oslo.policy to do what the nova tests need).
> > > >>
> > > >> Probably we may extend the Rules class and add similar functions as a
> > > >> classmethod?
> > > >> I've created a patch for oslo.policy as an example[1]
> > > >
> > > > Well, my point was that the folks working on that library considered the
> > > > entire parser to be private. That could just be overly ambitious API
> > > > pruning, or there could be some underlying reason (like, the syntax may
> > > > be changing or we want apps to interact with APIs and not generate DSL
> > > > and feed it to the library). So we should find out about the reason
> > > > before looking for alternative places to expose the parser.
> > > >
> > >
> > > The idea is to have APIs vs DSL generation. But we did an "everything private that isn't clearly used" pass as a starting point. I would prefer not to make this public and to have a fixture instead. That said, I am not hard-set against a change to make it public.
> >
> > It would be easy enough to provide a fixture, which would make it clear
> > that the API is meant for testing and not for general use. I see a
> > couple of options:
> >
> > 1. a fixture that takes some DSL text and creates a new Enforcer
> >    instance populated with the rules based on parsing the text
> >
> > 2. a fixture that takes some DSL text *and* an existing Enforcer
> >    instance and replaces the rules inside that Enforcer instance with the
> >    rules represented by the DSL text
> >
> > Option 1 feels a little cleaner but option 2 is more like how Nova
> > is using parse_rule() now and may be easier to drop in.
> 
> Brant also pointed out on IRC that the Rules class already has a
> load_json() class method that invokes the parser, so maybe the thing to
> do is update nova's tests to use that method. A fixture would still be
> an improvement, but using the existing code will let us move ahead
> faster (assuming we've decided not to wait for the new features to be
> implemented).
> 
> Doug
> 
> >
> > Doug
> >
> > >
> > > > Doug
> > > >
> > > >>
> > > >> [1] https://review.openstack.org/#/c/208617/
> > > >
> > >
> >
> 
> 
> 
> 
> -- 
> Thanks,
> Sergey Vilgelm
> OpenStack Software Engineer
> Mirantis Inc.
> Skype: sergey.vilgelm
> Phone: +36 70 512 3836


--
Sergey Vilgelm
OpenStack Software Engineer
Mirantis Inc.
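[To make Doug's "option 2" from the quoted thread concrete: a self-contained sketch of a fixture that swaps the rules inside an existing enforcer for rules parsed from policy text, restoring the originals on cleanup. The Enforcer and parse_rules stand-ins below are hypothetical; in the real library the enforcer is oslo_policy.policy.Enforcer and parsing is done by Rules.load_json().]

```python
import json


class Enforcer:
    """Minimal stand-in for oslo_policy.policy.Enforcer."""

    def __init__(self, rules=None):
        self.rules = rules or {}


def parse_rules(text):
    """Stand-in for Rules.load_json: parse serialized policy text."""
    return json.loads(text)


class OverridePolicyFixture:
    """Swap an enforcer's rules for the duration of a test."""

    def __init__(self, enforcer, rules_text):
        self._enforcer = enforcer
        self._rules_text = rules_text
        self._saved = None

    def __enter__(self):
        # Save the original rules, then install the parsed override.
        self._saved = self._enforcer.rules
        self._enforcer.rules = parse_rules(self._rules_text)
        return self._enforcer

    def __exit__(self, *exc):
        # Cleanup: restore the original rules.
        self._enforcer.rules = self._saved


enforcer = Enforcer(rules={"default": "role:admin"})
with OverridePolicyFixture(enforcer, '{"default": ""}') as e:
    overridden = dict(e.rules)
restored = dict(enforcer.rules)
```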

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/41c26e1d/attachment.html>

From john at johngarbutt.com  Mon Sep  7 10:48:43 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 7 Sep 2015 11:48:43 +0100
Subject: [openstack-dev]  [all] Cross-Project meeting, Tue Sept 8th,
	21:00 UTC
Message-ID: <CABib2_omLusMhD3S01=KKHtv8tmwxuviRNeY=QMv90r8wM=DVA@mail.gmail.com>

Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting tomorrow at 21:00 UTC, with the
following agenda:

* Review past action items
* Team announcements (horizontal, vertical, diagonal)
* Base feature deprecation policy [1]
* Open discussion

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and
have something to communicate to the other teams, feel free to abuse the
relevant sections of that meeting and make sure it gets #info-ed by the
meetbot in the meeting summary.

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

Thanks,
johnthetubaguy


From hejie.xu at intel.com  Mon Sep  7 10:50:01 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Mon, 7 Sep 2015 18:50:01 +0800
Subject: [openstack-dev] [nova] Nova API sub-team meeting
Message-ID: <237D4B03-84A3-4BA7-AD10-545062815751@intel.com>

Hi,

We have our weekly Nova API meeting this week. The meeting is being held tomorrow, Tuesday, at 1200 UTC.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI <https://wiki.openstack.org/wiki/Meetings/NovaAPI>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/b7ed11fa/attachment.html>

From julien at danjou.info  Mon Sep  7 11:29:07 2015
From: julien at danjou.info (Julien Danjou)
Date: Mon, 07 Sep 2015 13:29:07 +0200
Subject: [openstack-dev] [Aodh][Gnocchi] Change of Launchpad ownership
Message-ID: <m0oahexybg.fsf@danjou.info>

Hi folks,

Per a recent discussion with the team, and to reflect the fact that we
have different core reviewer groups on Gerrit for
Ceilometer/Gnocchi/Aodh, I've implemented the same arrangement on
Launchpad.

There are now 2 new teams:

  https://launchpad.net/~aodh-drivers
  https://launchpad.net/~gnocchi-drivers

These teams own and maintain those projects.

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/1357a35a/attachment.pgp>

From scroiset at mirantis.com  Mon Sep  7 12:27:49 2015
From: scroiset at mirantis.com (Swann Croiset)
Date: Mon, 7 Sep 2015 14:27:49 +0200
Subject: [openstack-dev] [Fuel][Plugins] Deployment order with custom
	role
In-Reply-To: <CACo6NWD4aj0+DnB3+cT=pVBQNUYchn2aDa3qTiAsdRf3YLNT6A@mail.gmail.com>
References: <CAOmgvhym_UG+V2UaCvbN955Ex7HipnE+2Sw_tOKL_AAnM_CCTg@mail.gmail.com>
 <CACo6NWD4aj0+DnB3+cT=pVBQNUYchn2aDa3qTiAsdRf3YLNT6A@mail.gmail.com>
Message-ID: <CAOmgvhwtLwCFshYHMxsT+8aBd1GaFk8k2p77=x=-9zGXF_+Xmw@mail.gmail.com>

On Mon, Sep 7, 2015 at 11:12 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Hi Swann,
>
> > However, we still need deployment order between independent
> > plugins and it seems impossible to define the priorities
>
> There's no such thing as priorities for now... perhaps we can
> introduce some kind of anchors instead of priorities, but that's
> another story.
>
Yes, it's another story for the next release(s); anchors could reuse the actual
convention of ranges used (disk, network, software, monitoring).
That said, I'm not sure anchors are sufficient; we still need priorities to
specify ordering for independent and/or optional plugins (they don't know
each other).



> Currently the only way to synchronize two plugins is to make one
> know about the other. That means you need to properly set up the "requires"
> field:
>
>     - id: my-plugin-b-task
>       type: puppet
>       role: [my-plugin-b-role]
>       required_for: [post_deployment_end]
>       requires: [post_deployment_start, PLUGIN-A-TASK]
>       parameters:
>         puppet_manifest: some-puppet.pp
>         puppet_modules: /etc/puppet/modules
>         timeout: 3600
>         cwd: /
>
We thought about this solution _but_ in our case we cannot use it, because the
plugin is optional and may not be installed/enabled. So I guess this would
break things if we referenced a nonexistent plugin-a-task in 'requires'.
For example, with the LMA plugins, the LMA-Collector plugin must be
deployed/installed before the LMA-Infrastructure-Alerting plugin (to avoid
false UNKNOWN-state alerts), but the latter may not be enabled for the
deployment.

Thanks,
> Igor
>
>
About tasks.yaml, we must support it until an equivalent 'deployment order'
is implemented with the plugin-custom-role feature.


> On Mon, Sep 7, 2015 at 11:31 AM, Swann Croiset <scroiset at mirantis.com>
> wrote:
> > Hi fuelers,
> >
> > We're currently porting nearly all LMA plugins to the new plugin fwk
> 3.0.0
> > to leverage custom role capabilities.
> > That brings up a lot of simplifications for node assignment, disk
> > management, network config, reuse core tasks and so on .. thanks to the
> fwk.
> >
> > However, we still need deployment order between independent plugins and
> it
> > seems impossible to define the priorities [0] in deployment_tasks.yaml,
> > The only way to preserve deployment order would be to keep tasks.yaml
> too.
> >
> > So, I'm wondering if this is the recommended solution to address plugins
> > order deployment with plugin fwk 3.0.0?
> > And furthermore if tasks.yaml will still be supported in future by the
> > plugin fwk or if the fwk shouldn't evolve  by adding priorities
> definitions
> > in deployment_tasks.yaml ?
> >
> > Thanks
> >
> > [0]
> https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
> >
> >
> >
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/8d9a60ce/attachment.html>

From nik.komawar at gmail.com  Mon Sep  7 12:49:37 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Mon, 7 Sep 2015 08:49:37 -0400
Subject: [openstack-dev]  [Glance] Artifacts meeting canceled today.
Message-ID: <55ED87E1.7000906@gmail.com>

Hi,

We do not have any items to be discussed, and nothing has been proposed
by requesters on the agenda etherpad. Also, as it is a holiday in the US, we
are likely to have smaller participation.

Hence, the artifacts sub-team meeting has been canceled for today. See
you all next time around!

-- 

Thanks,
Nikhil



From rakhmerov at mirantis.com  Mon Sep  7 13:12:46 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Mon, 7 Sep 2015 19:12:46 +0600
Subject: [openstack-dev] [mistral] Team meeting - 09/07/2015
Message-ID: <3785EC24-26E7-493A-B172-5363283FC73E@mirantis.com>

Hi,

This is a reminder that we'll have a team meeting today in the #openstack-meeting IRC channel at 16.00 UTC.

Agenda:
* Review action items
* Current status (progress, issues, roadblocks, further plans)
* Wrapping up Liberty-3
* Open discussion

Feel free to add your topics to https://wiki.openstack.org/wiki/Meetings/MistralAgenda <https://wiki.openstack.org/wiki/Meetings/MistralAgenda>.

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/75fec341/attachment.html>

From slagun at mirantis.com  Mon Sep  7 13:18:10 2015
From: slagun at mirantis.com (Stan Lagun)
Date: Mon, 7 Sep 2015 16:18:10 +0300
Subject: [openstack-dev] [mistral][yaql] Addressing task result using
	YAQL function
In-Reply-To: <869980B9-AB93-4EC1-A74B-76F4D9DDC326@stackstorm.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
 <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>
 <FA465E58-4611-44B4-9E5D-F353C778D5FF@mirantis.com>
 <869980B9-AB93-4EC1-A74B-76F4D9DDC326@stackstorm.com>
Message-ID: <CAOCoZiaPjM6k+OiRaH0c-UoUbS1SX87YvoggzGCsvRgvopke9A@mail.gmail.com>

I believe this is a good change. $.task_name requires that $ point to
the tasks dictionary. But in the middle of a query like
[1, 2, 3].select($ + 1), "$" will change its value. With a function approach
you can write [1, 2, 3].select($ + task(taskName)). However, the name "task"
looks confusing, as to my understanding tasks may have attributes other than
the result. It may make sense to use task(taskName).result instead.
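
[To make the two addressing forms concrete, a sketch of how they might appear in a workflow. The syntax here is hypothetical; the function name, its argument form, and whether .result is required are precisely what this thread is debating.]

```yaml
version: '2.0'

example_wf:
  tasks:
    get_volumes_by_names:
      with-items: name in <% $.vol_names %>
      workflow: get_volume_by_name name=<% $.name %>
      publish:
        # simplified form under discussion: the current task's result
        volumes: <% task().result %>
    report:
      # explicit form: address another task's result by name from anywhere
      action: std.echo output=<% task(get_volumes_by_names).result %>
```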

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

<slagun at mirantis.com>

On Sun, Sep 6, 2015 at 12:14 AM, Dmitri Zimine <dzimine at stackstorm.com>
wrote:

> Yes, I meant to ask for consistency in referencing task results. So it's
> task(task_name) regardless of where.
>
> One use case in favor of this is tooling: I refactor a workflow with an
> automated tool that wants to automatically rename the task
> EVERYWHERE. You guys know well by now that renaming a task is a source of
> too many frustrating errors :)
>
> What do others think?
>
> DZ.
>
> On Sep 3, 2015, at 4:23 AM, Renat Akhmerov <rakhmerov at mirantis.com> wrote:
>
>
> On 02 Sep 2015, at 21:01, Dmitri Zimine <dzimine at stackstorm.com> wrote:
>
> Agree,
>
> with one detail: make it explicit -  task(task_name).
>
>
> So do you suggest we just replace res() with task() and it looks like
>
> task() - get task result when we are in 'publish'
> task(task_name) - get task result from anywhere
>
> ?
>
> Is it correct that you mean we must always specify a task name? The reason
> I'd like to have a simplified form (without a task name) is that in a lot of
> workflows we have to repeat the task name in publish, which looks too
> verbose to me, especially in the case of a very long task name.
>
> Consider something like this:
>
> tasks:
>   get_volumes_by_names:
>     with-items: name in <% $.vol_names %>
>     workflow: get_volume_by_name name=<% $.name %>
>     publish:
>       volumes: <% $.get_volumes_by_names %>
>
> So in publish we have to repeat a task name, there's no other way now. I'd
> like to soften this requirement, but if you still want to use task names
> you'll be able to.
>
>
> res - we often see folks confused about what the result belongs to (action,
> task, workflow) although we cleaned up our lingo: action-output, task-result,
> workflow-output... but it is still worth being explicit.
>
> And full result is being thought as the root context $.
>
> Publishing to global context may be ok for now, IMO.
>
>
> Not sure what you meant by "Publishing to global context". Can you clarify,
> please?
>
>
> Renat Akhmerov
> @ Mirantis Inc.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/bba0cedd/attachment.html>

From ikalnitsky at mirantis.com  Mon Sep  7 14:25:46 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Mon, 7 Sep 2015 17:25:46 +0300
Subject: [openstack-dev] [Fuel][Plugins] Deployment order with custom
	role
In-Reply-To: <CAOmgvhwtLwCFshYHMxsT+8aBd1GaFk8k2p77=x=-9zGXF_+Xmw@mail.gmail.com>
References: <CAOmgvhym_UG+V2UaCvbN955Ex7HipnE+2Sw_tOKL_AAnM_CCTg@mail.gmail.com>
 <CACo6NWD4aj0+DnB3+cT=pVBQNUYchn2aDa3qTiAsdRf3YLNT6A@mail.gmail.com>
 <CAOmgvhwtLwCFshYHMxsT+8aBd1GaFk8k2p77=x=-9zGXF_+Xmw@mail.gmail.com>
Message-ID: <CACo6NWDgjQYc3ZuU-85URByHAoyOe2VgN1tzo3N6JbqixxNT5A@mail.gmail.com>

> that said, I'm not sure anchors are sufficient; we still need priorities to
> specify order for independent and/or optional plugins (they don't know
> each other)

If you need this, you're probably doing something wrong. Priorities
won't solve your problem here, because plugins would need to know about
priorities in other plugins, and that's weird. The only working
solution here is to make a plugin aware of another plugin when it must
be deployed strictly after that plugin.

> So I guess this will break things if we reference in 'require' a
> nonexistent plugin-a-task.

That's true. I think the right approach here is to implement some sort
of conditional tasks, so that different tasks are executed in different
cases.
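A purely hypothetical sketch of what such a conditional task might look like,
building on the 'requires' example later in this thread. The 'condition'
field below did not exist in the Fuel task framework at the time; it is
invented here only to illustrate the idea.

```yaml
# Hypothetical: 'condition' is an invented field illustrating the idea of
# conditional tasks; it is not part of the real Fuel task schema.
- id: my-plugin-b-task
  type: puppet
  role: [my-plugin-b-role]
  required_for: [post_deployment_end]
  requires: [post_deployment_start]
  # Only take the ordering dependency into account when plugin-a is
  # actually installed and enabled for this deployment:
  condition: "settings:plugin-a.metadata.enabled == true"
```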

> About tasks.yaml, we must support it until an equivalent 'deployment order'
> is implemented with plugin-custom-role feature.

This is not about plugin-custom-role; this is about our task
deployment framework. I heard there were some plans for improving
it.

Regards,
Igor

On Mon, Sep 7, 2015 at 3:27 PM, Swann Croiset <scroiset at mirantis.com> wrote:
>
>
> On Mon, Sep 7, 2015 at 11:12 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
>>
>> Hi Swann,
>>
>> > However, we still need deployment order between independent
>> > plugins and it seems impossible to define the priorities
>>
>> There's no such thing as priorities for now. Perhaps we can
>> introduce some kind of anchors instead of priorities, but that's
>> another story.
>
> yes, it's another story for the next release(s); anchors could reuse the
> actual convention of ranges used (disk, network, software, monitoring).
> That said, I'm not sure anchors are sufficient; we still need priorities to
> specify order for independent and/or optional plugins (they don't know each
> other)
>
>
>>
>> Currently the only way to synchronize two plugins is to make one know
>> about the other one. That means you need to properly set up the "requires"
>> field:
>>
>>     - id: my-plugin-b-task
>>       type: puppet
>>       role: [my-plugin-b-role]
>>       required_for: [post_deployment_end]
>>       requires: [post_deployment_start, PLUGIN-A-TASK]
>>       parameters:
>>         puppet_manifest: some-puppet.pp
>>         puppet_modules: /etc/puppet/modules
>>         timeout: 3600
>>         cwd: /
>>
> We thought about this solution _but_ in our case we cannot use it because
> the plugin is optional and may not be installed/enabled. So I guess this
> will break things if we reference a nonexistent plugin-a-task in 'requires'.
> For example with the LMA plugins, the LMA-Collector plugin must be
> deployed/installed before the LMA-Infrastructure-Alerting plugin (to avoid
> false UNKNOWN-state alerts), but the latter may not be enabled for the
> deployment.
>
>> Thanks,
>> Igor
>>
>
> About tasks.yaml: we must support it until an equivalent 'deployment order'
> is implemented with the plugin-custom-role feature.
>
>>
>> On Mon, Sep 7, 2015 at 11:31 AM, Swann Croiset <scroiset at mirantis.com>
>> wrote:
>> > Hi fuelers,
>> >
>> > We're currently porting nearly all LMA plugins to the new plugin fwk
>> > 3.0.0 to leverage custom role capabilities.
>> > That brings a lot of simplifications for node assignment, disk
>> > management, network config, reuse of core tasks and so on... thanks to
>> > the fwk.
>> >
>> > However, we still need deployment order between independent plugins, and
>> > it seems impossible to define the priorities [0] in deployment_tasks.yaml.
>> > The only way to preserve deployment order would be to keep tasks.yaml
>> > too.
>> >
>> > So, I'm wondering if this is the recommended solution to address plugin
>> > deployment order with plugin fwk 3.0.0, and furthermore whether
>> > tasks.yaml will still be supported by the plugin fwk in the future, or
>> > whether the fwk should evolve by adding priority definitions in
>> > deployment_tasks.yaml?
>> >
>> > Thanks
>> >
>> > [0]
>> > https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
>> >
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From yguenane at redhat.com  Mon Sep  7 14:57:23 2015
From: yguenane at redhat.com (Yanis Guenane)
Date: Mon, 7 Sep 2015 16:57:23 +0200
Subject: [openstack-dev] [puppet] Parameters possible default value
In-Reply-To: <55C3231E.3090203@redhat.com>
References: <55A8C32A.2030406@redhat.com> <55ACAC3A.9080001@redhat.com>
 <55B601A9.4000204@redhat.com>
 <CACEfbZiHuOOV+6Ha-A_Br4T_5-gYu1HACt9KeNcqAMdFgJinfQ@mail.gmail.com>
 <20150730103132.GA14993@baloo.sebian.fr>
 <CACEfbZhBg++-e38NnT2Xgp9jF4ff1FCqis_9nwg7_k7yKXq9FQ@mail.gmail.com>
 <55C3231E.3090203@redhat.com>
Message-ID: <55EDA5D3.4050802@redhat.com>

Hello,

A few weeks ago, the patches needed for a cleaner way of managing parameter
defaults were merged [1].

After that, each module was updated to rely on this new feature.
All but puppet-neutron [3] and puppet-glance [4] have their patches merged.

Following on this effort, here is the first review to try to converge on
what we expected: https://review.openstack.org/#/c/221005/

Cinder has been picked as the canary module: if it can make it there, it can
make it everywhere :)
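As a hedged illustration of the pattern under discussion: the class and
option names below are invented for the example; only the '<SERVICE DEFAULT>'
sentinel and the ensure_absent_val behavior come from this thread.

```puppet
# Sketch only: 'cinder::example' and 'DEFAULT/backend_host' are invented
# names used for illustration.
class cinder::example (
  $backend_host = '<SERVICE DEFAULT>',
) {
  # With the inifile provider behavior discussed here, a value equal to
  # ensure_absent_val (by default '<SERVICE DEFAULT>') makes the resource
  # ensure the option is absent from the config file, so the service's
  # own built-in default applies instead of a value hardcoded in the
  # manifest.
  cinder_config { 'DEFAULT/backend_host':
    value => $backend_host,
  }
}
```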

If you have any feedback, please provide it on the review.


Thank you in advance,

[1]
https://github.com/openstack/puppet-openstacklib/commit/3b85306d042292713d0fd89fa508e0a0fbf99671
[2] https://review.openstack.org/#/c/209875/
[3] https://review.openstack.org/#/c/209894/

--
Yanis Guenane

On 08/06/2015 11:04 AM, Yanis Guenane wrote:
> Hi Andrew,
>
> Sorry for the delay in this answer
>
> On 07/30/2015 09:20 PM, Andrew Woodward wrote:
>> On Thu, Jul 30, 2015 at 3:36 AM Sebastien Badia <sbadia at redhat.com> wrote:
>>
>>> On Mon, Jul 27, 2015 at 09:43:28PM (+0000), Andrew Woodward wrote:
>>>> Sorry, I forgot to finish this up and send it out.
>>>>
>>>> #--SNIP--
>>>> def absent_default(
>>>>   $value,
>>>>   $default,
>>>>   $unset_when_default = true,
>>>> ){
>>>>   if ( $value == $default ) and $unset_when_default {
>>>>     # I cant think of a way to deal with this in a define so lets pretend
>>>>     # we can re-use this with multiple providers like we could if this
>>> was
>>>>     # in the actual provider.
>>>>
>>>>     keystone_config {$name: ensure => absent,}
>>>>   } else {
>>>>     keystone_config {$name: value => $value,}
>>>>   }
>>>> }
>>>>
>>>> # Usage:
>>>> absent_default{'DEFAULT/foo': default => 'bar', value => $foo }
>>> Hi,
>>>
>>> Hum, but you want to add this definition in all our modules, or directly in
>>> openstacklib?
>>>
>> I only mocked it up in a puppet define because it's easier for me (my Ruby
>> is terrible). It should be done by adding these kinds of extra providers to
>> the inifile provider override that Yanis proposed.
>>
>>
>>> In case of openstacklib, in which manner do you define the
>>> <component>_config
>>> resource? (eg, generic def, but specialized resource).
>>>
>>>> #--SNIP--
>>>>
>>>> (I threw this together and haven't tried to run it yet, so It might not
>>> run
>>>> verbatim, I will create a test project with it to show it working)
>>>>
>>>> So in the long term we should be able to add some new functionality to
>>> the
>>>> inifile provider to simply just do this for us. We can add the 'default'
>>>> and 'unset_when_default' parameter so that we can use them straight w/o a
>>>> wrapping function (but the wrapping function could be used too). This
>>> would
>>>> give us the defaults (I have an idea on that too that I will try to put
>>>> into the prototype) that should allow us to have something that looks
>>> quite
>>>> clean, but is highly functional
>>>>
>>>>> Keystone_config{unset_when_default => true} #probably flatly enabled in
>>>> our inifile provider for the module
>>>>> keystone_config {'DEFAULT/foo': value => 'bar', default => 'bar'}
>>> I'm not sure I see the difference from Yanis's solution here, and I'm not
>>> sure I see the link between the defined resource and the type/provider
>>> resource.
>>>
>> This adds on to Yanis's solution so that we can authoritatively understand
>> what the default value is and how it should be treated (instead of hoping
>> some magic word doesn't conflict).
> So I think we agree on most points here. The '<SERVICE DEFAULT>' value was
> chosen at our weekly meeting two weeks ago, but it remains
> customizable (via the ensure_absent_val parameter).
>
> We need an explicit one by default so it can be set as a default value
> in all manifests.
>
> We mainly picked that value because we thought it was the least likely to
> be used as a valid value in any OpenStack-related component
> configuration file.
>
> If by any chance it turns out to be a valid value for a parameter, we
> can use the temporary fix of changing ensure_absent_val for this
> specific parameter and raise the point during a meeting.
>
> I take the point: I will make it clear in the README that if an X_config
> resource has its value set to '<SERVICE DEFAULT>', it will ensure absent
> on the resource.
>
> Does that sound good to you?
>
>> Seb
>>> https://review.openstack.org/#/c/202574/
>>> --
>>> Sebastien Badia
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> Yanis Guenane
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From tleontovich at mirantis.com  Mon Sep  7 15:30:47 2015
From: tleontovich at mirantis.com (Tatyana Leontovich)
Date: Mon, 7 Sep 2015 18:30:47 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for fuel-ostf
	core
Message-ID: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>

Fuelers,

I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
He's been doing a great job writing patches (support for detached
services).
Also, his review comments always contain a lot of detailed information for
further improvements.

http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf

Please vote with +1/-1 for approval/objection.

Core reviewer approval process definition:
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

-- 
Best regards,
Tatyana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/8b5ee4c6/attachment.html>

From emilien.macchi at gmail.com  Mon Sep  7 15:54:23 2015
From: emilien.macchi at gmail.com (Emilien Macchi)
Date: Mon, 7 Sep 2015 11:54:23 -0400
Subject: [openstack-dev] [puppet] weekly meeting #50
Message-ID: <CAN7WfkJMV8_1KWp7kYGjrM6GY4+E3iS1Rk89_Ud2rpHm2_HRRw@mail.gmail.com>

Hello,

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150908

Please add additional items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

Regards,
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/98a5a121/attachment.html>

From lyz at princessleia.com  Mon Sep  7 16:51:05 2015
From: lyz at princessleia.com (Elizabeth K. Joseph)
Date: Mon, 7 Sep 2015 09:51:05 -0700
Subject: [openstack-dev] [Infra] Meeting Tuesday September 8th at 19:00 UTC
Message-ID: <CABesOu0tNR067iM17Fvpm-zjdafRkjXK6e=dtYkZw4VY-ss6dw@mail.gmail.com>

Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday September 8th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-01-19.01.log.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-01-19.01.txt
Log: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-01-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2


From rakhmerov at mirantis.com  Mon Sep  7 16:58:47 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Mon, 7 Sep 2015 22:58:47 +0600
Subject: [openstack-dev] [mistral][yaql] Addressing task result using
	YAQL function
In-Reply-To: <CAOCoZiaPjM6k+OiRaH0c-UoUbS1SX87YvoggzGCsvRgvopke9A@mail.gmail.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
 <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>
 <FA465E58-4611-44B4-9E5D-F353C778D5FF@mirantis.com>
 <869980B9-AB93-4EC1-A74B-76F4D9DDC326@stackstorm.com>
 <CAOCoZiaPjM6k+OiRaH0c-UoUbS1SX87YvoggzGCsvRgvopke9A@mail.gmail.com>
Message-ID: <E6D08D36-14AA-4D62-9A39-DFC1E382B0D0@mirantis.com>


> On 07 Sep 2015, at 19:18, Stan Lagun <slagun at mirantis.com> wrote:
> 
> I believe this is a good change. $.task_name requires that $ point to the tasks dictionary. But in the middle of a query like [1, 2, 3].select($ + 1), "$" changes its value. With a function approach
> you can write [1, 2, 3].select($ + task(taskName)). However, the name "task" looks confusing because, to my understanding, tasks may have attributes other than result. It may make sense to use task(taskName).result instead.

Yes, I like the idea that task() should be more than just a result. Good point!

Renat Akhmerov
@ Mirantis Inc.



From nmakhotkin at mirantis.com  Mon Sep  7 17:02:39 2015
From: nmakhotkin at mirantis.com (Nikolay Makhotkin)
Date: Mon, 7 Sep 2015 20:02:39 +0300
Subject: [openstack-dev]  [mistral] Team meeting minutes
Message-ID: <CACarOJYrfsiqPEW-m5zh4gSGjcqGuVn4o_g_ZURVfN=OcVF4vA@mail.gmail.com>

Thanks for joining the meeting today!

Meeting minutes:
*http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-07-16.00.html
<http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-07-16.00.html>*
Meeting log: *http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-07-16.00.log.html
<http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-07-16.00.log.html>*

The next meeting will be on Sept 14. You can post your agenda items at
https://wiki.openstack.org/wiki/Meetings/MistralAgenda

-- 
Best Regards,
Nikolay
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/7ef4782b/attachment.html>

From juvenboy1987 at gmail.com  Mon Sep  7 17:26:49 2015
From: juvenboy1987 at gmail.com (lu jander)
Date: Tue, 8 Sep 2015 01:26:49 +0800
Subject: [openstack-dev] [sahara] FFE request for scheduler and suspend
 EDP job for sahara
In-Reply-To: <CA+O3VAivX4To4MQnAwP-fj2iX2hb3dsUj7rP3zMh9dRyPHFeHA@mail.gmail.com>
References: <CAMfz_LOZmrju2eRrQ1ASCK2yjyxa-UhuV=uv3SRrtKkoPDP-bQ@mail.gmail.com>
 <CA+O3VAivX4To4MQnAwP-fj2iX2hb3dsUj7rP3zMh9dRyPHFeHA@mail.gmail.com>
Message-ID: <CAMfz_LO6P0=VF-QSJNJeo_8XfNBShLUH3q6Nt=ecL48YEkebCQ@mail.gmail.com>

Hi Vitaly,
The enable-scheduled-edp-jobs
<https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs>
patch has been through 34 patch sets of review
(https://review.openstack.org/#/c/182310/), and it has no impact on the
other work-in-progress patch.

2015-09-07 18:48 GMT+08:00 Vitaly Gridnev <vgridnev at mirantis.com>:

> Hey!
>
> From my point of view, we definitely should not give an FFE for the
> add-suspend-resume-ability-for-edp-jobs
> <https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs> spec,
> because the client side for this change is not included in the official
> liberty release.
>
> By the way, I am not sure about an FFE for enable-scheduled-edp-jobs
> <https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs>,
> because it's not clear what the progress of this blueprint is. Its
> implementation consists of 2 patch sets, and one of them is marked as Work
> In Progress.
>
>
> On Sun, Sep 6, 2015 at 7:18 PM, lu jander <juvenboy1987 at gmail.com> wrote:
>
>> Hi, Guys
>>
>>  I would like to request FFE for scheduler EDP job and suspend EDP job
>> for sahara. these patches has been reviewed for a long time with lots of
>> patch sets.
>>
>> Blueprint:
>>
>> (1)
>> https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs
>> (2)
>> https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
>>
>>
>> Spec:
>>
>> (1) https://review.openstack.org/#/c/175719/
>> (2) https://review.openstack.org/#/c/198264/
>>
>>
>> Patch:
>>
>> (1) https://review.openstack.org/#/c/182310/
>> (2) https://review.openstack.org/#/c/201448/
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Vitaly Gridnev
> Mirantis, Inc
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/ee3fbec5/attachment.html>

From bhagyashree.iitg at gmail.com  Mon Sep  7 17:59:03 2015
From: bhagyashree.iitg at gmail.com (Bhagyashree Uday)
Date: Mon, 7 Sep 2015 23:29:03 +0530
Subject: [openstack-dev] Getting Started : OpenStack
Message-ID: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>

Hi ,

I am Bhagyashree from India (IRC nick: bee2502). I have previous
experience in data analytics, including Machine Learning, NLP, IR, and User
Experience Research. I am interested in contributing to OpenStack on
projects involving data analysis. Also, if these projects could be a part
of Outreachy, it would be an added bonus. I went through the project ideas
listed on https://wiki.openstack.org/wiki/Internship_ideas and one of these
projects interested me a lot -
Understand OpenStack Operations via Insights from Logs and Metrics: A Data
Science Perspective
However, this project does not have a mentor listed, and I was hoping
you could provide me with an individual contact from the OpenStack community
who would be interested in mentoring this project, or some mailing
list/thread/IRC channel where I could look for a mentor. Other open data
science project/idea suggestions are also welcome.

Regards,
Bhagyashree
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/02d1d319/attachment.html>

From victoria at vmartinezdelacruz.com  Mon Sep  7 19:29:24 2015
From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=)
Date: Mon, 7 Sep 2015 16:29:24 -0300
Subject: [openstack-dev] Getting Started : OpenStack
In-Reply-To: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>
References: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>
Message-ID: <CAJ_e2gA81srzjQ4iN+GnXcHzPdHwWoDbE0r1aCo2+pzqM_sd-Q@mail.gmail.com>

Hi Bhagyashree,

Welcome!

That project seems to belong to Ceilometer, but I'm not sure about that.
Ceilometer is the code name for OpenStack Telemetry; if you are interested
in it, a good place to start is
https://wiki.openstack.org/wiki/Ceilometer.

Those internship ideas are from previous Outreachy/Google Summer of Code
rounds. Outreachy applications will open next September 22nd, so there is
not much information about the next round's mentors/projects yet.

Call for mentors is going to be launched soon, so keep track of that wiki
for updates. Feel free to pass by #openstack-opw as well and we can help
you set your development environment.

Cheers,

Victoria

2015-09-07 14:59 GMT-03:00 Bhagyashree Uday <bhagyashree.iitg at gmail.com>:

> Hi ,
>
> I am Bhagyashree from India (IRC nick: bee2502). I have previous
> experience in data analytics, including Machine Learning, NLP, IR, and User
> Experience Research. I am interested in contributing to OpenStack on
> projects involving data analysis. Also, if these projects could be a
> part of Outreachy, it would be an added bonus. I went through project ideas
> listed on https://wiki.openstack.org/wiki/Internship_ideas and one of
> these projects interested me a lot -
> Understand OpenStack Operations via Insights from Logs and Metrics: A Data
> Science Perspective
> However, this project does not have any mentioned mentor and I was hoping
> you could provide me with some individual contact from OpenStack community
> who would be interested in mentoring this project or some mailing
> list/thread/IRC community where I could look for a mentor. Other open data
> science projects/idea suggestions are also welcome.
>
> Regards,
> Bhagyashree
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/3e459b4b/attachment.html>

From adanin at mirantis.com  Mon Sep  7 22:05:08 2015
From: adanin at mirantis.com (Andrey Danin)
Date: Tue, 8 Sep 2015 01:05:08 +0300
Subject: [openstack-dev] [Fuel][Plugins] add health check for plugins
In-Reply-To: <2d9b21315bb63501f34fbbdeb36fc0cb@mail.gmail.com>
References: <CAGq0MBgitgb5ep1vCYr45hrX7u1--gBEP3_LCcz0xoCCQw9asQ@mail.gmail.com>
 <CAOq3GZVoB_7rOU0zr=vaFB3bC4W1eNoyJV-B6Rm2xZzGnTHtRg@mail.gmail.com>
 <2d9b21315bb63501f34fbbdeb36fc0cb@mail.gmail.com>
Message-ID: <CA+vYeFqOeRQgEzuoSx7mgEhNcVqfNYBweUS=3we8-tTcMD0-Mw@mail.gmail.com>

Hi.

Sorry for bringing this thread back from the grave, but it looks quite
interesting to me.

Sheena, could you please explain what pre-deployment sanity checks would
look like? I don't get what they are.

From the Health Check point of view, plugins may be divided into two groups:

1) A plugin that doesn't change already-covered functionality and thus
doesn't require extra tests. Such plugins may be Contrail and
almost all SDN plugins, Glance or Cinder backend plugins, and others which
don't bring any changes to the OSt API or any extra OSt components.

2) A plugin that adds new elements to OSt or changes the API or a standard
behavior. Such plugins may be Contrail (because it actually adds the Contrail
Controller, which may be covered by Health Check too), the Cisco ASR plugin
(because it always creates HA routers), some Swift plugins (we don't have
the Swift/S3 API covered by Health Check at all now), SR-IOV plugins (because
they require special network preparation and extra drivers to be present
in an image), or deployments that combine different ML2 plugins or
hypervisors (because you need to test all network underlayers or HVs).

So, all that means we eventually need to make OSTF extensible with tests
shipped by Fuel plugins.


On Mon, Aug 10, 2015 at 5:17 PM, Sheena Gregson <sgregson at mirantis.com>
wrote:

> I like that idea a lot - I also think there would be value in adding
> pre-deployment sanity checks that could be called from the Health Check
> screen prior to deployment.  Thoughts?
>
>
>
> *From:* Simon Pasquier [mailto:spasquier at mirantis.com]
> *Sent:* Monday, August 10, 2015 9:00 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev at lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel][Plugins] add health check for
> plugins
>
>
>
> Hello Samuel,
>
> This looks like an interesting idea. Do you have any concrete example to
> illustrate your point (with one of your plugins maybe)?
>
> BR,
>
> Simon
>
>
>
> On Mon, Aug 10, 2015 at 12:04 PM, Samuel Bartel <
> samuel.bartel.pro at gmail.com> wrote:
>
> Hi all,
>
>
>
> Actually, with Fuel plugins there are tests for the plugins used by the
> CI/CD, but after a deployment it is not possible for the user to easily
> test whether a plugin is correctly deployed or not.
>
> I am wondering if it could be interesting to improve the Fuel plugin
> framework to be able to define tests for each plugin, which would be
> added to the Health Check. The user would then be able to test the plugin
> when running the deployment tests.
>
>
>
> What do you think about that?
>
>
>
>
>
> Kind regards
>
>
>
> Samuel
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
adanin at mirantis.com
skype: gcon.monolake
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/8402e57b/attachment.html>

From rodrigodsousa at gmail.com  Mon Sep  7 22:20:23 2015
From: rodrigodsousa at gmail.com (Rodrigo Duarte)
Date: Mon, 7 Sep 2015 19:20:23 -0300
Subject: [openstack-dev] [keystone] FFE Request for Reseller
In-Reply-To: <201509060523.t865NLM6026751@d01av02.pok.ibm.com>
References: <CABj-22jEYcQT03QUftK4DJZJ7dvLfoFZsLzCNiN92mOwsYuUCw@mail.gmail.com>
 <201509060523.t865NLM6026751@d01av02.pok.ibm.com>
Message-ID: <CAAJsUK+b=HMixyw+eVv+CO2Afu1QNMXYKQRjRwgJr7U998wk=w@mail.gmail.com>

Hi all,

Although Steve is right about the amount of code that needs to land, the
code has received lots of iterations. If the team doesn't agree to land
the whole patch chain, maybe agree on a reasonable cut for
Liberty (3 or 4 more patches, for example) so we have good prospects for M.

Thanks,

On Sun, Sep 6, 2015 at 2:23 AM, Steve Martinelli <stevemar at ca.ibm.com>
wrote:

> I suspect we'll vote on this topic during the next meeting on Tuesday, but
> this seems like a huge amount of code to land.
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
>
> From: Henrique Truta <henriquecostatruta at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Date: 2015/09/04 12:12 PM
> Subject: [openstack-dev] [keystone] FFE Request for Reseller
> ------------------------------
>
>
>
> Hi Folks,
>
>
> As you may know, the Reseller Blueprint was proposed and approved in Kilo (
> *https://review.openstack.org/#/c/139824/*
> <https://review.openstack.org/#/c/139824/>) with the developing postponed
> to Liberty.
>
> During this time, the 3 main patches of the chain were split into 8,
> becoming smaller and easier to review. The first 2 of them were merged
> before liberty-3 freeze, and some of the others have already received +2s.
> The code is very mature, having a keystone core member support through the
> whole release cycle.
>
>
> I would like to request an FFE for the remaining 9 patches (reseller core)
> which are already in review (starting from
> *https://review.openstack.org/#/c/213448/*
> <https://review.openstack.org/#/c/213448/> to
> *https://review.openstack.org/#/c/161854/*
> <https://review.openstack.org/#/c/161854/>).
>
>
> Henrique
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
MSc in Computer Science
http://rodrigods.com <http://lsd.ufcg.edu.br/%7Erodrigods>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/de911c0b/attachment.html>

From rmeggins at redhat.com  Mon Sep  7 22:53:27 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Mon, 7 Sep 2015 16:53:27 -0600
Subject: [openstack-dev] [puppet][keystone] plan for domain name handling
Message-ID: <55EE1567.4010401@redhat.com>

This is to outline the plan for the implementation of "puppet-openstack 
will support Keystone domain scoped resource names
without a '::domain' in the name, only if the 'default_domain_id'
parameter in Keystone has _not_ been set.  That is, if the default
domain is 'Default'."

Details here:
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072878.html
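As a rough illustration of the naming rule above (a plain-Python sketch with hypothetical helper names, not the actual puppet-keystone code): a resource name may carry an explicit '::domain' suffix, and an unsuffixed name is only resolved against the default domain when that default is still 'Default'.

```python
# Illustrative sketch only (NOT actual puppet-keystone code): split a
# Keystone resource name into (name, domain), falling back to the default
# domain when no '::domain' suffix is present.

def split_domain(resource_name, default_domain="Default"):
    """Return (name, domain) for a possibly domain-suffixed resource name."""
    if "::" in resource_name:
        # rsplit so only the last '::' segment is treated as the domain
        name, domain = resource_name.rsplit("::", 1)
        return name, domain
    # No explicit domain: only safe if the default domain is 'Default'
    return resource_name, default_domain

print(split_domain("admin::LDAP"))  # ('admin', 'LDAP')
print(split_domain("admin"))        # ('admin', 'Default')
```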

In the process of implementation, several bugs were found in the underlying
code and fixed (fixes up for review):
https://bugs.launchpad.net/puppet-keystone/+bug/1492843
- review https://review.openstack.org/221119
https://bugs.launchpad.net/puppet-keystone/+bug/1492846
- review https://review.openstack.org/221120
https://bugs.launchpad.net/puppet-keystone/+bug/1492848
- review https://review.openstack.org/221121

I think the best course of action will be to rebase both
https://review.openstack.org/#/c/218044 and
https://review.openstack.org/#/c/218059/ on top of these, so that
https://review.openstack.org/#/c/218059 can pass the gate tests.

The next step will be to get rid of the introspection/indirection calls, 
which were a mistake from the beginning (terrible for performance), but 
that will be easily done on top of the above patches.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/30bda30b/attachment.html>

From edwin.zhai at intel.com  Mon Sep  7 23:26:46 2015
From: edwin.zhai at intel.com (Zhai, Edwin)
Date: Tue, 8 Sep 2015 07:26:46 +0800 (CST)
Subject: [openstack-dev] [Ceilometer][Aodh] event-alarm fire policy
Message-ID: <alpine.DEB.2.10.1509080704210.32581@edwin-gen>

All,
Currently, event-alarm is one-shot style: it doesn't fire again for the same
event. But threshold-alarm is a limited periodic style:
1. You only get one fire for continuous valid datapoints.
2. You get a new fire if insufficient data is followed by valid datapoints,
as we reset the alarm state upon insufficient data.

So maybe event-alarm should be periodic also. But I'm not sure when to reset
the alarm state to 'UNKNOWN': after each fire, or when we receive a
different event.

Filed a bug at
https://bugs.launchpad.net/aodh/+bug/1493171
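The two policies can be contrasted with a toy state machine (a hypothetical sketch to frame the question, not actual Aodh code):

```python
# Toy sketch of the two firing policies under discussion (hypothetical,
# NOT actual Aodh code). An alarm starts in 'UNKNOWN' and transitions to
# 'ALARM' when a matching event arrives.

class EventAlarm:
    def __init__(self, reset_after_fire=False):
        self.state = "UNKNOWN"
        self.reset_after_fire = reset_after_fire  # periodic vs one-shot
        self.fired = 0

    def on_event(self, matches):
        if matches and self.state != "ALARM":
            self.state = "ALARM"
            self.fired += 1
            if self.reset_after_fire:
                # the "reset state after each fire" option
                self.state = "UNKNOWN"

one_shot = EventAlarm()
periodic = EventAlarm(reset_after_fire=True)
for _ in range(3):                # same event arrives three times
    one_shot.on_event(True)
    periodic.on_event(True)

print(one_shot.fired)  # 1 (never fires again for the same event)
print(periodic.fired)  # 3 (fires every time, since state is reset)
```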

Best Rgds,
Edwin


From wanghua.humble at gmail.com  Tue Sep  8 01:36:15 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Tue, 8 Sep 2015 09:36:15 +0800
Subject: [openstack-dev] [keystone]how to get service_catalog
In-Reply-To: <809341212.18213995.1441601666881.JavaMail.zimbra@redhat.com>
References: <CAH5-jC-L8six0MCQupX=0g-xA6DBAMGHRMLLEdrjAouhb377sg@mail.gmail.com>
 <809341212.18213995.1441601666881.JavaMail.zimbra@redhat.com>
Message-ID: <CAH5-jC8UAh9PiMhx9wkkM2dpEFVLV_CTYnan3B74juGYBjSimw@mail.gmail.com>

Hi Jamie,

We want to reuse the user token in magnum, but there is no convenient way
to do so. It would be better if we could use ENV['keystone.token_auth'] to
init keystoneclient directly. Currently we need to construct an auth_ref,
which is a parameter of the keystoneclient init function, from
ENV['keystone.token_auth']. I think it is a common case that a service wants
to reuse the user token to do something like getting the service_catalog.
Can keystoneclient provide this feature?

Regards,
Wanghua

On Mon, Sep 7, 2015 at 12:54 PM, Jamie Lennox <jamielennox at redhat.com>
wrote:

>
>
> ----- Original Message -----
> > From: "??" <wanghua.humble at gmail.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> > Sent: Monday, 7 September, 2015 12:00:43 PM
> > Subject: [openstack-dev] [keystone]how to get service_catalog
> >
> > Hi all,
> >
> > When I use a token to init a keystoneclient and try to get
> service_catalog by
> > it, error occurs. I find that keystone doesn't return service_catalog
> when
> > we use a token. Is there a way to get service_catalog by token? In
> magnum,
> > we now make a trick. We init a keystoneclient with service_catalog which
> is
> > contained in the token_info returned by keystonemiddleware in auth_ref
> > parameter.
> >
> > I want a way to get service_catalog by token. Or can we init a
> keystoneclient
> > by the token_info return by keystonemiddleware directly?
> >
> > Regards,
> > Wanghua
>
> Sort of.
>
> The problem you are hitting is that a token is just a string, an
> identifier for some information stored in keystone. Given a token at
> __init__ time the client doesn't try to validate this in anyway it just
> assumes you know what you are doing. You can do a variation of this though
> in which you use an existing token to fetch a new token with the same
> rights (the expiry etc will be the same) and then you will get a fresh
> service catalog. Using auth plugins that's the Token family of plugins.
>
> However i don't _think_ that's exactly what you're looking for in magnum.
> What token are you trying to reuse?
>
> If it's the users token then auth_token passes down an auth plugin in the
> ENV['keystone.token_auth'] variable[1] and you can pass that to a client to
> reuse the token and service catalog. If you are loading up magnum specific
> auth then again have a look at using keystoneclient's auth plugins and
> reusing it across multiple requests.
>
> Trying to pass around a bundle of token id and service catalog is pretty
> much exactly what an auth plugin does and you should be able to do
> something there.
>
>
> Jamie
>
> [1]
> https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L164
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/ccebd830/attachment.html>

From wanghua.humble at gmail.com  Tue Sep  8 01:43:56 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Tue, 8 Sep 2015 09:43:56 +0800
Subject: [openstack-dev] [keystone]how to get service_catalog
In-Reply-To: <CAH5-jC8UAh9PiMhx9wkkM2dpEFVLV_CTYnan3B74juGYBjSimw@mail.gmail.com>
References: <CAH5-jC-L8six0MCQupX=0g-xA6DBAMGHRMLLEdrjAouhb377sg@mail.gmail.com>
 <809341212.18213995.1441601666881.JavaMail.zimbra@redhat.com>
 <CAH5-jC8UAh9PiMhx9wkkM2dpEFVLV_CTYnan3B74juGYBjSimw@mail.gmail.com>
Message-ID: <CAH5-jC__kfqf2hzp-cPnVRGgZ78Pedb+pPU+2up8hWZB07r4HA@mail.gmail.com>

Hi Jamie,

I find that when I use an existing token to fetch a new token with the same
rights (the expiry etc. will be the same), keystone doesn't return the
service_catalog in the response. I wonder why this is different from
authentication with a password.

Regards,
Wanghua

On Tue, Sep 8, 2015 at 9:36 AM, ?? <wanghua.humble at gmail.com> wrote:

> Hi Jamie,
>
> We want to reuse the user token in magnum. But there is no convenient way
> to reuse it. It is better that we can use ENV['keystone.token_auth'] to
> init keystoneclient directly. Now we need to construct a auth_ref which is
> a parameter in keystoneclient init function according to
> ENV['keystone.token_auth']. I think it is a common case which service wants
> to reuse user token to do something like getting service_catalog. Can
> keystoneclient provide this feature?
>
> Regards,
> Wanghua
>
> On Mon, Sep 7, 2015 at 12:54 PM, Jamie Lennox <jamielennox at redhat.com>
> wrote:
>
>>
>>
>> ----- Original Message -----
>> > From: "??" <wanghua.humble at gmail.com>
>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev at lists.openstack.org>
>> > Sent: Monday, 7 September, 2015 12:00:43 PM
>> > Subject: [openstack-dev] [keystone]how to get service_catalog
>> >
>> > Hi all,
>> >
>> > When I use a token to init a keystoneclient and try to get
>> service_catalog by
>> > it, error occurs. I find that keystone doesn't return service_catalog
>> when
>> > we use a token. Is there a way to get service_catalog by token? In
>> magnum,
>> > we now make a trick. We init a keystoneclient with service_catalog
>> which is
>> > contained in the token_info returned by keystonemiddleware in auth_ref
>> > parameter.
>> >
>> > I want a way to get service_catalog by token. Or can we init a
>> keystoneclient
>> > by the token_info return by keystonemiddleware directly?
>> >
>> > Regards,
>> > Wanghua
>>
>> Sort of.
>>
>> The problem you are hitting is that a token is just a string, an
>> identifier for some information stored in keystone. Given a token at
>> __init__ time the client doesn't try to validate this in anyway it just
>> assumes you know what you are doing. You can do a variation of this though
>> in which you use an existing token to fetch a new token with the same
>> rights (the expiry etc will be the same) and then you will get a fresh
>> service catalog. Using auth plugins that's the Token family of plugins.
>>
>> However i don't _think_ that's exactly what you're looking for in magnum.
>> What token are you trying to reuse?
>>
>> If it's the users token then auth_token passes down an auth plugin in the
>> ENV['keystone.token_auth'] variable[1] and you can pass that to a client to
>> reuse the token and service catalog. If you are loading up magnum specific
>> auth then again have a look at using keystoneclient's auth plugins and
>> reusing it across multiple requests.
>>
>> Trying to pass around a bundle of token id and service catalog is pretty
>> much exactly what an auth plugin does and you should be able to do
>> something there.
>>
>>
>> Jamie
>>
>> [1]
>> https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L164
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/db9c7b55/attachment.html>

From jamielennox at redhat.com  Tue Sep  8 01:44:18 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Mon, 7 Sep 2015 21:44:18 -0400 (EDT)
Subject: [openstack-dev] [keystone]how to get service_catalog
In-Reply-To: <CAH5-jC8UAh9PiMhx9wkkM2dpEFVLV_CTYnan3B74juGYBjSimw@mail.gmail.com>
References: <CAH5-jC-L8six0MCQupX=0g-xA6DBAMGHRMLLEdrjAouhb377sg@mail.gmail.com>
 <809341212.18213995.1441601666881.JavaMail.zimbra@redhat.com>
 <CAH5-jC8UAh9PiMhx9wkkM2dpEFVLV_CTYnan3B74juGYBjSimw@mail.gmail.com>
Message-ID: <1987013173.18601597.1441676658667.JavaMail.zimbra@redhat.com>

----- Original Message -----

> From: "??" <wanghua.humble at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Sent: Tuesday, 8 September, 2015 11:36:15 AM
> Subject: Re: [openstack-dev] [keystone]how to get service_catalog

> Hi Jamie,

> We want to reuse the user token in magnum. But there is no convenient way to
> reuse it. It is better that we can use ENV['keystone.token_auth'] to init
> keystoneclient directly. Now we need to construct a auth_ref which is a
> parameter in keystoneclient init function according to
> ENV['keystone.token_auth']. I think it is a common case which service wants
> to reuse user token to do something like getting service_catalog. Can
> keystoneclient provide this feature?

> Regards,
> Wanghua

Yes, that's exactly what ENV['keystone.token_auth'] is for. 

You would need to create a session object [1], which can live on the process (long lived). Then when you create a client you pass Client(session=sess, auth=ENV['keystone.token_auth']). This works for all the clients I know of, with the exception of swift. This will reuse the user's auth token and the user's service catalog. 

For keystone there is an issue with doing this via the client.Client() object, as it still wants a URL passed (there is a bug for this I can find if you're interested). If you are able to, I recommend using client.v3.Client() directly. 

Jamie 

[1] https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/session.py#L845 
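The shape of the pattern can be sketched in plain Python (a toy sketch only, with made-up class names, NOT the real keystoneclient API): one long-lived session per process, and per-request clients that reuse the auth plugin (token plus service catalog) handed down by the auth_token middleware.

```python
# Toy sketch of "Client(session=sess, auth=ENV['keystone.token_auth'])"
# (hypothetical names, NOT the real keystoneclient API).

class TokenAuth:
    """Stands in for ENV['keystone.token_auth']: bundles token + catalog."""
    def __init__(self, token, catalog):
        self.token = token
        self.catalog = catalog

class Session:
    """Long-lived, per-process; holds transport state, not credentials."""
    pass

class Client:
    def __init__(self, session, auth):
        self.session = session  # shared across requests
        self.auth = auth        # per-user token + catalog

    def endpoint_for(self, service):
        # Endpoints come from the user's own catalog: no extra
        # round-trip to keystone is needed.
        return self.auth.catalog[service]

sess = Session()  # created once, reused for every request
auth = TokenAuth("tok123", {"compute": "http://nova:8774/v2.1"})
client = Client(session=sess, auth=auth)
print(client.endpoint_for("compute"))  # http://nova:8774/v2.1
```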

> On Mon, Sep 7, 2015 at 12:54 PM, Jamie Lennox < jamielennox at redhat.com >
> wrote:

> > ----- Original Message -----
> 
> > > From: "??" < wanghua.humble at gmail.com >
> 
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > > openstack-dev at lists.openstack.org >
> 
> > > Sent: Monday, 7 September, 2015 12:00:43 PM
> 
> > > Subject: [openstack-dev] [keystone]how to get service_catalog
> 
> > >
> 
> > > Hi all,
> 
> > >
> 
> > > When I use a token to init a keystoneclient and try to get
> > > service_catalog
> > > by
> 
> > > it, error occurs. I find that keystone doesn't return service_catalog
> > > when
> 
> > > we use a token. Is there a way to get service_catalog by token? In
> > > magnum,
> 
> > > we now make a trick. We init a keystoneclient with service_catalog which
> > > is
> 
> > > contained in the token_info returned by keystonemiddleware in auth_ref
> 
> > > parameter.
> 
> > >
> 
> > > I want a way to get service_catalog by token. Or can we init a
> > > keystoneclient
> 
> > > by the token_info return by keystonemiddleware directly?
> 
> > >
> 
> > > Regards,
> 
> > > Wanghua
> 

> > Sort of.
> 

> > The problem you are hitting is that a token is just a string, an identifier
> > for some information stored in keystone. Given a token at __init__ time the
> > client doesn't try to validate this in anyway it just assumes you know what
> > you are doing. You can do a variation of this though in which you use an
> > existing token to fetch a new token with the same rights (the expiry etc
> > will be the same) and then you will get a fresh service catalog. Using auth
> > plugins that's the Token family of plugins.
> 

> > However i don't _think_ that's exactly what you're looking for in magnum.
> > What token are you trying to reuse?
> 

> > If it's the users token then auth_token passes down an auth plugin in the
> > ENV['keystone.token_auth'] variable[1] and you can pass that to a client to
> > reuse the token and service catalog. If you are loading up magnum specific
> > auth then again have a look at using keystoneclient's auth plugins and
> > reusing it across multiple requests.
> 

> > Trying to pass around a bundle of token id and service catalog is pretty
> > much
> > exactly what an auth plugin does and you should be able to do something
> > there.
> 

> > Jamie
> 

> > [1]
> > https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L164
> 
> > > __________________________________________________________________________
> 
> > > OpenStack Development Mailing List (not for usage questions)
> 
> > > Unsubscribe:
> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> > >
> 

> > __________________________________________________________________________
> 
> > OpenStack Development Mailing List (not for usage questions)
> 
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150907/82787869/attachment.html>

From gal.sagie at gmail.com  Tue Sep  8 05:58:39 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Tue, 8 Sep 2015 08:58:39 +0300
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAEfdOg0oUVpDCMU6Ko59MvkyqdzFnxOAWqZpk1QF-eYbZ_ebeA@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
 <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>
 <CAG9LJa40-NM1LTJOZxuTa27W6LyZgpJwb7D4XVcqoiH63GaASg@mail.gmail.com>
 <CAEfdOg0oUVpDCMU6Ko59MvkyqdzFnxOAWqZpk1QF-eYbZ_ebeA@mail.gmail.com>
Message-ID: <CAG9LJa675MAQ0pRZ=WRBpTDsKudZHk0vRchNaVNG45_Z3tBXDw@mail.gmail.com>

Hi Germy,

Port forwarding, the way I see it, is a way of reusing the same floating IP
to access several different Neutron ports (VMs, containers).
For example, if we have floating IP 172.20.20.10, we can assign
172.20.20.10:4001 to VM1 and 172.20.20.10:4002 to VM2 (both behind the same
router, which has an external gateway).
The user uses the same IP, but based on the TCP/UDP port, Neutron performs
a mapping in the virtual router namespace to the private IP, and possibly
to a different port running on that instance, for example port 80.

So, for example, if we have two VMs with private IPs 10.0.0.1 and 10.0.0.2,
and a floating IP of 172.20.20.10 assigned to the router, with port
forwarding we can build the following mapping:

172.20.20.10:4001  =>  10.0.0.1:80
172.20.20.10:4002  =>  10.0.0.2:80
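The mapping above amounts to a lookup table keyed on the public <floating_ip, port> pair (a plain-Python toy sketch with illustrative names; the actual Neutron API/DB model hadn't been defined at this point):

```python
# Toy model of a port-forwarding table on a router's external gateway
# (illustrative only, not the actual Neutron API). Keys are the publicly
# exposed (floating_ip, port) pairs; values are the (private_ip, port)
# endpoints the router NATs them to.

def forward(table, floating_ip, port):
    """Resolve a public <floating_ip, port> pair to its private endpoint."""
    return table.get((floating_ip, port))

table = {
    ("172.20.20.10", 4001): ("10.0.0.1", 80),
    ("172.20.20.10", 4002): ("10.0.0.2", 80),
}

print(forward(table, "172.20.20.10", 4001))  # ('10.0.0.1', 80)
print(forward(table, "172.20.20.10", 4002))  # ('10.0.0.2', 80)
```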

And this is only the Neutron API side; this feature is useful when you
offer PaaS or SaaS and have an automated framework that calls the API
to allocate these "client ports".

I am not sure why you think the operator will need to ssh into the
instances; the operator just needs to build the mapping of
<floating_ip, port> to the instance's private IP.
Of course, keep in mind that we haven't yet discussed full API details,
but it's going to be something like that (at least the way I see it).

Hope that explains it.

Gal.

On Mon, Sep 7, 2015 at 5:21 AM, Germy Lure <germy.lure at gmail.com> wrote:

> Hi Gal,
>
> I'm sorry for my poor English. Let me try again.
>
> What operator wants to access is several related instances, instead of
> only one or one by one. The use case is periodical check and maintain.
> RELATED means instance maybe in one subnet, or one network, or one host.
> The host's scene is similar to access the docker on the host as you
> mentioned before.
>
> Via what you mentioned of API, user must ssh an instance and then invoke
> API to update the IP address and port, or even create a new PF to access
> another one. It will be a nightmare to a VPC operator who owns so many
> instances.
>
> In a word, I think the "inside_addr" should be "subnet" or "host".
>
> Hope this is clear enough.
>
> Germy
>
> On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>
>> Hi Germy,
>>
>> I am not sure i understand what you mean, can you please explain it
>> further?
>>
>> Thanks
>> Gal.
>>
>> On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.lure at gmail.com> wrote:
>>
>>> Hi, Gal
>>>
>>> Thank you for bringing this up. But I have some suggestions for the API.
>>>
>>> An operator or some other component wants to reach several VMs related
>>> NOT only one or one by one. Here, RELATED means that the VMs are in one
>>> subnet or network or a host(similar to reaching dockers on a host).
>>>
>>> Via the API you mentioned, user must ssh one VM and update even delete
>>> and add PF to ssh another. To a VPC(with 20 subnets?) admin, it's totally a
>>> nightmare.
>>>
>>> Germy
>>>
>>>
>>> On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>>
>>>> Hello All,
>>>>
>>>> I have searched and found many past efforts to implement port
>>>> forwarding in Neutron.
>>>> I have found two incomplete blueprints [1], [2] and an abandoned patch
>>>> [3].
>>>>
>>>> There is even a project in Stackforge [4], [5] that claims
>>>> to implement this, but the L3 parts in it seems older then current
>>>> master.
>>>>
>>>> I have recently came across this requirement for various use cases, one
>>>> of them is
>>>> providing feature compliance with Docker port-mapping feature (for
>>>> Kuryr), and saving floating
>>>> IP's space.
>>>> There has been many discussions in the past that require this feature,
>>>> so i assume
>>>> there is a demand to make this formal, just a small examples [6], [7],
>>>> [8], [9]
>>>>
>>>> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
>>>> the external router
>>>> leg from the public network to internal ports, so user can use one
>>>> Floating IP (the external
>>>> gateway router interface IP) and reach different internal ports
>>>> depending on the port numbers.
>>>> This should happen on the network node (and can also be leveraged for
>>>> security reasons).
>>>>
>>>> I think that the POC implementation in the Stackforge project shows
>>>> that this needs to be
>>>> implemented inside the L3 parts of the current reference
>>>> implementation, it will be hard
>>>> to maintain something like that in an external repository.
>>>> (I also think that the API/DB extensions should be close to the current
>>>> L3 reference
>>>> implementation)
>>>>
>>>> I would like to renew the efforts on this feature and propose a RFE and
>>>> a spec for this to the
>>>> next release, any comments/ideas/thoughts are welcome.
>>>> And of course if any of the people interested or any of the people that
>>>> worked on this before
>>>> want to join the effort, you are more then welcome to join and comment.
>>>>
>>>> Thanks
>>>> Gal.
>>>>
>>>> [1]
>>>> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
>>>> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
>>>> [3] https://review.openstack.org/#/c/60512/
>>>> [4] https://github.com/stackforge/networking-portforwarding
>>>> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>>>>
>>>> [6]
>>>> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
>>>> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
>>>> [8]
>>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
>>>> [9]
>>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>>>>
>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/6f4f4a15/attachment.html>

From aurlapova at mirantis.com  Tue Sep  8 06:02:57 2015
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Tue, 8 Sep 2015 09:02:57 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for
	fuel-ostf core
In-Reply-To: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
References: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
Message-ID: <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>

+1

On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <tleontovich at mirantis.com
> wrote:

> Fuelers,
>
> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
> He's been doing a great job in writing patches (support for detached
> services).
> Also his review comments always have a lot of detailed information for
> further improvements
>
>
> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>
> Please vote with +1/-1 for approval/objection.
>
> Core reviewer approval process definition:
> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> --
> Best regards,
> Tatyana
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/1e4133f3/attachment.html>

From abhishek.talwar at tcs.com  Tue Sep  8 06:04:40 2015
From: abhishek.talwar at tcs.com (Abhishek Talwar)
Date: Tue, 8 Sep 2015 11:34:40 +0530
Subject: [openstack-dev]  [Ceilometer] Meters
Message-ID: <OF6974FD42.90ACA49D-ON65257EBA.0021633A-65257EBA.0021633C@tcs.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/4c1d5e29/attachment.html>

From dshulyak at mirantis.com  Tue Sep  8 06:07:58 2015
From: dshulyak at mirantis.com (Dmitriy Shulyak)
Date: Tue, 8 Sep 2015 09:07:58 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for
	fuel-ostf core
In-Reply-To: <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>
References: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
 <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>
Message-ID: <CAP2-cGd9ex57e7g+WuqKK=d9kxDZs77hUHR7QJa-n8fjLwHbpw@mail.gmail.com>

+1

On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova <aurlapova at mirantis.com>
wrote:

> +1
>
> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
> tleontovich at mirantis.com> wrote:
>
>> Fuelers,
>>
>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>> He's been doing a great job in writing patches (support for detached
>> services).
>> Also his review comments always have a lot of detailed information for
>> further improvements
>>
>>
>> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>>
>> Please vote with +1/-1 for approval/objection.
>>
>> Core reviewer approval process definition:
>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>> --
>> Best regards,
>> Tatyana
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/0956e820/attachment.html>

From moshe.elisha at alcatel-lucent.com  Tue Sep  8 06:39:11 2015
From: moshe.elisha at alcatel-lucent.com (ELISHA, Moshe (Moshe))
Date: Tue, 8 Sep 2015 06:39:11 +0000
Subject: [openstack-dev] [mistral][yaql] Addressing task result
	using	YAQL function
In-Reply-To: <E6D08D36-14AA-4D62-9A39-DFC1E382B0D0@mirantis.com>
References: <B93B8F94-DE9D-4723-A22D-DC527DCC54FB@mirantis.com>
 <04C0E7F3-1E41-4C2A-8A03-EB5C3A598861@stackstorm.com>
 <FA465E58-4611-44B4-9E5D-F353C778D5FF@mirantis.com>
 <869980B9-AB93-4EC1-A74B-76F4D9DDC326@stackstorm.com>
 <CAOCoZiaPjM6k+OiRaH0c-UoUbS1SX87YvoggzGCsvRgvopke9A@mail.gmail.com>
 <E6D08D36-14AA-4D62-9A39-DFC1E382B0D0@mirantis.com>
Message-ID: <68E0028885A28B42BA72B18D58B31E8E9425DB@FR711WXCHMBA08.zeu.alcatel-lucent.com>

I agree with the task object as well.
It should be an object from which you can get info like the start time, end time, error code, etc.

-----Original Message-----
From: Renat Akhmerov [mailto:rakhmerov at mirantis.com] 
Sent: Monday, 07 September 2015 19:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistral][yaql] Addressing task result using YAQL function


> On 07 Sep 2015, at 19:18, Stan Lagun <slagun at mirantis.com> wrote:
> 
> I believe this is a good change. $.task_name requires you that $ be 
> pointing to a tasks dictionary. But in the middle of the query like [1.2.3].select($ + 1)  "$" will change its value. With a function approach you can write [1, 2, 3].select($ + task(taskName)). However the name "task" looks confusing as to my understanding tasks may have attributes other than result. It may make sense to use task(taskName).result instead.

Yes, I like the idea that task() should be more than just a result. Good point!

Renat Akhmerov
@ Mirantis Inc.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From marcos.fermin.lobo at cern.ch  Tue Sep  8 07:03:02 2015
From: marcos.fermin.lobo at cern.ch (Marcos Fermin Lobo)
Date: Tue, 8 Sep 2015 07:03:02 +0000
Subject: [openstack-dev] [ec2api][murano][rdo] RPMs for ec2api and murano
Message-ID: <E6E80EA9C3C06E4FA36BF9685D57B3168BB41537@CERNXCHG51.cern.ch>

Hi all,

I would like to announce that I'm building the RPMs for OpenStack EC2 API https://github.com/stackforge/ec2-api and OpenStack Murano https://github.com/openstack/murano (server and client) for stable/kilo.

At this moment, I have those packages up-and-running in single VMs connected to a Packstack instance, in order to test the RPMs.

As soon as I finish my sponsorship process as a Fedora packager https://bugzilla.redhat.com/show_bug.cgi?id=1257178 I'll push those RPMs upstream.

Also, I've started a puppet module for OpenStack EC2 API http://lists.openstack.org/pipermail/openstack-dev/2015-September/073402.html.

If you have any questions, please contact me. All contributions are very welcome.

Regards,
Marcos.
IRC: mflobo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/15ae73c6/attachment.html>

From germy.lure at gmail.com  Tue Sep  8 07:33:03 2015
From: germy.lure at gmail.com (Germy Lure)
Date: Tue, 8 Sep 2015 15:33:03 +0800
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAG9LJa675MAQ0pRZ=WRBpTDsKudZHk0vRchNaVNG45_Z3tBXDw@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
 <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>
 <CAG9LJa40-NM1LTJOZxuTa27W6LyZgpJwb7D4XVcqoiH63GaASg@mail.gmail.com>
 <CAEfdOg0oUVpDCMU6Ko59MvkyqdzFnxOAWqZpk1QF-eYbZ_ebeA@mail.gmail.com>
 <CAG9LJa675MAQ0pRZ=WRBpTDsKudZHk0vRchNaVNG45_Z3tBXDw@mail.gmail.com>
Message-ID: <CAEfdOg0Y1YehBMty-9MEw20BPuuE9PVwcsjcQLxxoBLVMrkGrQ@mail.gmail.com>

Hi Gal,

Thank you for your explanation.
As you mentioned, PF is a way of reusing a floating IP to access several
Neutron ports. I agree with your point of view completely.
Let me extend your example to explain where I was going.
T1 has 20 subnets behind a router, and one of them is 10.0.0.0/24, named s1.
There are 100 VMs named VM1~VM100 in the subnet s1, and T1 wants to update
the same file (or something else) in those VMs. Let's have a look at how
T1 will do it.

T1 invokes the Neutron API to create a port-mapping for VM1 (maybe that will
be done by an operator).
For example :  172.20.20.10:4001  =>  10.0.0.1:80
And then T1 does the update task via 172.20.20.10:4001.

Now for VM2, VM3, ... VM100, T1 must repeat the steps above with different
ports. And T1 must clean up those records (100 records in the DB) after
accessing. That's bad, I think.
Note that T1 still has 19 subnets to deal with. That's a nightmare for
T1.
For PaaS and SaaS, that is also a big problem.

So, can we do it like this?
T1 invokes the Neutron API one time for s1 (not VM1), and Neutron sets up a
group of port-mapping relations. For example:
172.20.20.10:4001  =>  10.0.0.1:80
172.20.20.10:4002  =>  10.0.0.2:80
172.20.20.10:4003  =>  10.0.0.3:80
......                                   ......
172.20.20.10:4100  =>  10.0.0.100:80
Now T1 just needs to focus on his/her business work, not PF.

We just store one record in the Neutron DB for such a one-time API
invocation. For the single-VM scene, we can specify a private IP range
instead of a subnet, for example, 10.0.0.1 to 10.0.0.3. The mapped ports
(like 4001, 4002...) can be returned in the response body, for example,
4001 to 4003; we could also just return a base number (4000) and let the
upper layer rework it, for example, 4000+1,
where 1 is the last number in the private IP address of VM1.
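The base-port scheme described above can be sketched in a few lines of Python (purely illustrative; no such bulk-mapping Neutron API exists today, and the function name is invented):

```python
import ipaddress

def build_port_map(floating_ip, first_ip, last_ip,
                   inside_port=80, base_port=4000):
    """Map floating_ip:(base_port + last octet) to each private IP in
    the given range, following the proposal above."""
    mappings = {}
    ip = ipaddress.IPv4Address(first_ip)
    last = ipaddress.IPv4Address(last_ip)
    while ip <= last:
        outside_port = base_port + (int(ip) & 0xff)  # last octet
        mappings['%s:%d' % (floating_ip, outside_port)] = \
            '%s:%d' % (ip, inside_port)
        ip += 1
    return mappings
```

So build_port_map('172.20.20.10', '10.0.0.1', '10.0.0.3') yields the three mappings 172.20.20.10:4001-4003 => 10.0.0.1-3:80 from the example, and only the range plus the base port would need to be stored.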

Forgive my poor E.
Hope that's clear enough, and I am happy to discuss it further if necessary.

Germy


On Tue, Sep 8, 2015 at 1:58 PM, Gal Sagie <gal.sagie at gmail.com> wrote:

> Hi Germy,
>
> Port forwarding, the way I see it, is a way of reusing the same floating IP
> to access several different Neutron ports (VMs, containers).
> So for example if we have floating IP 172.20.20.10, we can assign
> 172.20.20.10:4001 to VM1 and 172.20.20.10:4002 to VM2 (which are behind
> that same router,
> which has an external gw).
> The user uses the same IP, but according to the TCP/UDP port, Neutron
> performs mapping in the virtual router namespace to the private IP, and
> possibly to a different port
> that is running on that instance, for example port 80.
>
> So for example if we have two VM's with private IP's 10.0.0.1 and 10.0.0.2
> and we have a floating ip assigned to the router of 172.20.20.10
> with port forwarding we can build the following mapping:
>
> 172.20.20.10:4001  =>  10.0.0.1:80
> 172.20.20.10:4002  =>  10.0.0.2:80
>
> And this is only from the Neutron API; this feature is useful when you
> offer PaaS or SaaS and have an automated framework that calls the API
> to allocate these "client ports".
>
> I am not sure why you think the operator will need to ssh into the
> instances; the operator just needs to build the mapping of <floating_ip,
> port> to the instance private IP.
> Of course, keep in mind that we didn't yet discuss full API details, but
> it's going to be something like that (at least the way I see it).
>
> Hope that explains it.
>
> Gal.
>
> On Mon, Sep 7, 2015 at 5:21 AM, Germy Lure <germy.lure at gmail.com> wrote:
>
>> Hi Gal,
>>
>> I'm sorry for my poor English. Let me try again.
>>
>> What the operator wants to access is several related instances, instead of
>> only one, or one by one. The use case is periodic checks and maintenance.
>> RELATED means the instances may be in one subnet, or one network, or on
>> one host. The host scene is similar to accessing the Dockers on the host
>> as you mentioned before.
>>
>> Via the API you mentioned, the user must ssh into an instance and then
>> invoke the API to update the IP address and port, or even create a new PF,
>> to access another one. It will be a nightmare for a VPC operator who owns
>> so many instances.
>>
>> In a word, I think the "inside_addr" should be "subnet" or "host".
>>
>> Hope this is clear enough.
>>
>> Germy
>>
>> On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>
>>> Hi Germy,
>>>
>>> I am not sure i understand what you mean, can you please explain it
>>> further?
>>>
>>> Thanks
>>> Gal.
>>>
>>> On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.lure at gmail.com> wrote:
>>>
>>>> Hi, Gal
>>>>
>>>> Thank you for bringing this up. But I have some suggestions for the API.
>>>>
>>>> An operator or some other component wants to reach several related VMs,
>>>> NOT only one, or one by one. Here, RELATED means that the VMs are in one
>>>> subnet or network or on a host (similar to reaching Dockers on a host).
>>>>
>>>> Via the API you mentioned, the user must ssh into one VM and update, or
>>>> even delete and add, a PF to ssh into another. To a VPC (with 20
>>>> subnets?) admin, it's totally a nightmare.
>>>>
>>>> Germy
>>>>
>>>>
>>>> On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>>>
>>>>> Hello All,
>>>>>
>>>>> I have searched and found many past efforts to implement port
>>>>> forwarding in Neutron.
>>>>> I have found two incomplete blueprints [1], [2] and an abandoned patch
>>>>> [3].
>>>>>
>>>>> There is even a project in Stackforge [4], [5] that claims
>>>>> to implement this, but the L3 parts in it seem older than current
>>>>> master.
>>>>>
>>>>> I have recently come across this requirement for various use cases,
>>>>> one of them is
>>>>> providing feature compliance with the Docker port-mapping feature (for
>>>>> Kuryr), and saving floating
>>>>> IP space.
>>>>> There have been many discussions in the past that require this feature,
>>>>> so I assume
>>>>> there is demand to make this formal; just a few examples: [6], [7],
>>>>> [8], [9]
>>>>>
>>>>> The idea in a nutshell is to support port forwarding (TCP/UDP ports)
>>>>> on the external router
>>>>> leg from the public network to internal ports, so user can use one
>>>>> Floating IP (the external
>>>>> gateway router interface IP) and reach different internal ports
>>>>> depending on the port numbers.
>>>>> This should happen on the network node (and can also be leveraged for
>>>>> security reasons).
>>>>>
>>>>> I think that the POC implementation in the Stackforge project shows
>>>>> that this needs to be
>>>>> implemented inside the L3 parts of the current reference
>>>>> implementation, it will be hard
>>>>> to maintain something like that in an external repository.
>>>>> (I also think that the API/DB extensions should be close to the
>>>>> current L3 reference
>>>>> implementation)
>>>>>
>>>>> I would like to renew the efforts on this feature and propose a RFE
>>>>> and a spec for this to the
>>>>> next release, any comments/ideas/thoughts are welcome.
>>>>> And of course, if any of the people interested or any of the people
>>>>> that worked on this before
>>>>> want to join the effort, you are more than welcome to join and comment.
>>>>>
>>>>> Thanks
>>>>> Gal.
>>>>>
>>>>> [1]
>>>>> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
>>>>> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
>>>>> [3] https://review.openstack.org/#/c/60512/
>>>>> [4] https://github.com/stackforge/networking-portforwarding
>>>>> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>>>>>
>>>>> [6]
>>>>> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
>>>>> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
>>>>> [8]
>>>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
>>>>> [9]
>>>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards ,
>>>
>>> The G.
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/b4a94ab3/attachment.html>

From zhang.lei.fly at gmail.com  Tue Sep  8 07:35:34 2015
From: zhang.lei.fly at gmail.com (Lei Zhang)
Date: Tue, 8 Sep 2015 15:35:34 +0800
Subject: [openstack-dev] [nova][oslo][policy] oslo.policy adoption in
	Nova.
In-Reply-To: <CA6215BB-8AA8-46BF-BD85-061077DA80C0@mirantis.com>
References: <CANw6fcHK26tJJq10Ggum+SxUR+0HDPSyHdg6w6ubXCEYzijfLA@mail.gmail.com>
 <1438593167-sup-5001@lrrr.local>
 <CANw6fcHyAsEJ3OF7LxgNrPCeyfAPH96Htv=8vz=HQez6b_3Fjg@mail.gmail.com>
 <1438600138-sup-8453@lrrr.local>
 <CAPTR=Uj8rmeYK=eG_WXxrxZQn1GikEdmWOmhBJHcY0a8Sn+G1A@mail.gmail.com>
 <1438625471-sup-8982@lrrr.local>
 <CAPTR=UgOkC2zdZT675myaoGMZ9hwEpMdhsj4SJ2JyL1YdKxzeg@mail.gmail.com>
 <1438631011-sup-3247@lrrr.local>
 <B59DD537-9577-4731-A945-DAF59A4600CB@gmail.com>
 <1438632980-sup-9565@lrrr.local> <1438637056-sup-1576@lrrr.local>
 <CAPTR=Uhx0WHHNJz+JKR1ftM3aRPXBB-O0MuSn5xeT0owJKRXaA@mail.gmail.com>
 <CA6215BB-8AA8-46BF-BD85-061077DA80C0@mirantis.com>
Message-ID: <CAATxhGcqp-S54OSaNBUwyobxna6x5Gdig1EdNajn2Zu=Wev2Zg@mail.gmail.com>

The oslo.policy Rules class has a class method `from_dict` which can be helpful.
So I replaced `parse_rule` with `from_dict`, and it works as expected.

Is there any other feature that oslo.policy doesn't have?
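For readers unfamiliar with the two entry points discussed here: parse_rule() takes policy DSL text, while from_dict() builds a Rules mapping from a plain dict. A toy stand-in (not oslo.policy itself; the real library parses each check string into a check object) to show the shape of the API:

```python
class Rules(dict):
    """Toy model of oslo.policy's Rules mapping. Checks are kept as
    raw strings here purely to illustrate the from_dict shape."""

    @classmethod
    def from_dict(cls, rules_dict):
        # oslo.policy would parse each value into a check tree;
        # this sketch just stores the mapping as-is.
        return cls(rules_dict)

rules = Rules.from_dict({
    'admin_api': 'role:admin',
    'compute:get': '',          # empty check string: always allowed
})
```

The appeal over parse_rule() is that the caller never touches the DSL parser, which is exactly the private surface the oslo team wants to keep hidden.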



On Mon, Sep 7, 2015 at 6:48 PM, Sergey Vilgelm <svilgelm at mirantis.com>
wrote:

> Hi nova-team,
>
> Jeffrey Zhang has updated his patch[1].
> Dan Smith, Could you remove -2?
>
> [1] https://review.openstack.org/#/c/198065
>
>
> On Aug 20, 2015, at 17:26, Sergey Vilgelm <svilgelm at mirantis.com> wrote:
>
> Nova-cores,
> Do you have any decision about the patch:
> https://review.openstack.org/#/c/198065/ ?
> Dan Smith, Could you remove -2?
> Jeffrey Zhang, What is your opinion?
>
> On Tue, Aug 4, 2015 at 12:26 AM, Doug Hellmann <doug at doughellmann.com
> > wrote:
> Excerpts from Doug Hellmann's message of 2015-08-03 16:19:31 -0400:
> > Excerpts from Morgan Fainberg's message of 2015-08-04 06:05:56 +1000:
> > >
> > > > On Aug 4, 2015, at 05:49, Doug Hellmann <doug at doughellmann.com>
> wrote:
> > > >
> > > > Excerpts from Sergey Vilgelm's message of 2015-08-03 22:11:50 +0300:
> > > >>> On Mon, Aug 3, 2015 at 9:37 PM, Doug Hellmann <
> doug at doughellmann.com> wrote:
> > > >>>
> > > >>> Making that function public may be the most expedient fix, but the
> > > >>> parser was made private for a reason, so before we expose it we
> > > >>> should understand why, and if there are alternatives (such as
> > > >>> creating a fixture in oslo.policy to do what the nova tests need).
> > > >>
> > > >> Probably we may extend the Rules class and add similar
> > > >> functions as a
> > > >> classmethod?
> > > >> I've created a patch for oslo.policy as an example [1]
> > > >
> > > > Well, my point was that the folks working on that library considered
> the
> > > > entire parser to be private. That could just be overly ambitious API
> > > > pruning, or there could be some underlying reason (like, the syntax
> may
> > > > be changing or we want apps to interact with APIs and not generate
> DSL
> > > > and feed it to the library). So we should find out about the reason
> > > > before looking for alternative places to expose the parser.
> > > >
> > >
> > > The idea is to have APIs vs DSL generation. But we did an "everything
> > > private that isn't clearly used" as a starting point. I would prefer
> > > not to make this public and to have a fixture instead. That said, I am
> > > not hard-set against a change to make it public.
> >
> > It would be easy enough to provide a fixture, which would make it clear
> > that the API is meant for testing and not for general use. I see a
> > couple of options:
> >
> > 1. a fixture that takes some DSL text and creates a new Enforcer
> >    instance populated with the rules based on parsing the text
> >
> > 2. a fixture that takes some DSL text *and* an existing Enforcer
> >    instance and replaces the rules inside that Enforcer instance with the
> >    rules represented by the DSL text
> >
> > Option 1 feels a little cleaner but option 2 is more like how Nova
> > is using parse_rule() now and may be easier to drop in.
>
> Brant also pointed out on IRC that the Rules class already has a
> load_json() class method that invokes the parser, so maybe the thing to
> do is update nova's tests to use that method. A fixture would still be
> an improvement, but using the existing code will let us move ahead
> faster (assuming we've decided not to wait for the new features to be
> implemented).
>
> Doug
>
> >
> > Doug
> >
> > >
> > > > Doug
> > > >
> > > >>
> > > >> [1] https://review.openstack.org/#/c/208617/
> > > >
> > > >
> __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Thanks,
> Sergey Vilgelm
> OpenStack Software Engineer
> Mirantis Inc.
> Skype: sergey.vilgelm
> Phone: +36 70 512 3836
>
>
>
>
> Sergey Vilgelm
> OpenStack Software Engineer
> Mirantis Inc.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jeffrey Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/263636e7/attachment.html>

From shardy at redhat.com  Tue Sep  8 08:40:55 2015
From: shardy at redhat.com (Steven Hardy)
Date: Tue, 8 Sep 2015 09:40:55 +0100
Subject: [openstack-dev] [tripleo] Plugin integration and environment file
	naming
Message-ID: <20150908084055.GA23526@t430slt.redhat.com>

Hi all,

So, lately we're seeing an increasing number of patches adding integration
for various third-party plugins, such as different neutron and cinder
backends.

This is great to see, but it also poses the question of how we organize the
user-visible interfaces to these things long term.

Originally, I was hoping to land some Heat composability improvements[1]
which would allow for tagging templates as providing a particular
capability (such as "provides neutron ML2 plugin"), but this has stalled on
some negative review feedback and isn't going to be implemented for
Liberty.

However, today looking at [2] and [3], (which both add t-h-t integration to
enable neutron ML2 plugins), a simpler interim solution occurred to me,
which is just to make use of a suggested/mandatory naming convention.

For example:

environments/neutron-ml2-bigswitch.yaml
environments/neutron-ml2-cisco-nexus-ucsm.yaml

Or via directory structure:

environments/neutron-ml2/bigswitch.yaml
environments/neutron-ml2/cisco-nexus-ucsm.yaml

This would require enforcement via code review, but it could potentially
provide a much more intuitive interface for users when they go to create
their cloud. In particular, it would make life much easier for any UX to
ask "choose which neutron-ml2 plugin you want", because the available
options can simply be listed by looking at the available environment
files.
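For example, with the flat naming convention, a UX tool could discover the available options with a simple glob. This is only a sketch of the idea; the paths are the ones proposed above and the function name is invented:

```python
import glob
import os

def list_ml2_environments(tht_root):
    """Return the neutron-ml2 environment files found under tht_root,
    relying purely on the proposed naming convention."""
    pattern = os.path.join(tht_root, 'environments', 'neutron-ml2-*.yaml')
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```

The directory-structure variant would be discoverable the same way, by globbing environments/neutron-ml2/*.yaml instead.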

What do folks think of this, is now a good time to start enforcing such a
convention?

Steve

[1] https://review.openstack.org/#/c/196656/
[2] https://review.openstack.org/#/c/213142/
[3] https://review.openstack.org/#/c/198754/


From xiaoyan.li at intel.com  Tue Sep  8 08:53:03 2015
From: xiaoyan.li at intel.com (Li, Xiaoyan)
Date: Tue, 8 Sep 2015 08:53:03 +0000
Subject: [openstack-dev] [cinder]Review request for data transfer between
 encrypted volumes and images
Message-ID: <AEE495BD65FC0144A24ECB647C8270EF1DCAF1@shsmsx102.ccr.corp.intel.com>

Hi, 

@Jay, 
As discussed with you on IRC, I have updated the patches that fix the bugs in Cinder around creating encrypted volumes from images and uploading encrypted volumes to images.
Please help to review. 

https://review.openstack.org/#/c/216567/
https://review.openstack.org/#/c/217557/

Also welcome others' review. 

Best wishes
Lisa



From liusheng1175 at 126.com  Tue Sep  8 08:54:48 2015
From: liusheng1175 at 126.com (liusheng)
Date: Tue, 8 Sep 2015 16:54:48 +0800
Subject: [openstack-dev] [Ceilometer][Aodh] event-alarm fire policy
In-Reply-To: <alpine.DEB.2.10.1509080704210.32581@edwin-gen>
References: <alpine.DEB.2.10.1509080704210.32581@edwin-gen>
Message-ID: <55EEA258.1090705@126.com>

Just a personal thought: can we add an ACK to the alarm notifier? When an
event-alarm fires, the alarm state transitions to "alarm"; if
'alarm_action' has been set, the 'alarm_action' will be triggered and will
notify. For an event-alarm, a timeout can be set to wait for the ACK from
the alarm notifier: if the ACK is received, reset the alarm state to OK;
if a timeout occurs, set the alarm state to 'UNKNOWN'. If 'alarm_action'
has not been set, we just need to record the alarm state transition history.
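A toy sketch of that ACK-with-timeout flow (hypothetical; Aodh has no notifier ACK today, and the actual timeout wait is elided to a boolean here):

```python
ALARM, OK, UNKNOWN = 'alarm', 'ok', 'unknown'

class EventAlarm(object):
    """Toy event alarm following the proposed ACK scheme."""

    def __init__(self, alarm_action=None):
        self.state = OK
        self.alarm_action = alarm_action
        self.history = []

    def fire(self, ack_received=False):
        """Handle a matching event: transition to 'alarm', notify, then
        resolve the state based on whether the notifier ACKed in time."""
        self.state = ALARM
        self.history.append(ALARM)
        if self.alarm_action is None:
            return  # no action set: just record the transition
        self.alarm_action()  # trigger the notification
        # Wait (with a timeout) for the notifier's ACK -- elided here.
        self.state = OK if ack_received else UNKNOWN
        self.history.append(self.state)
```

With this shape the alarm can fire again on the next matching event, since a successful ACK returns the state to OK rather than leaving it latched at 'alarm'.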

On 2015/9/8 7:26, Zhai, Edwin wrote:
> All,
> Currently, event-alarm is one-shot style: it doesn't fire again for the
> same event.  But threshold-alarm is limited periodic style:
> 1. You only get 1 fire for continuous valid datapoints.
> 2. You would get a new fire if insufficient data is followed by valid
> ones, as we reset the alarm state upon insufficient data.
>
> So maybe event-alarm should be periodic also. But I'm not sure when to 
> reset the alarm state to 'UNKNOWN': after each fire, or when receive 
> different event.
>
> Filed a bug at
> https://bugs.launchpad.net/aodh/+bug/1493171
>
> Best Rgds,
> Edwin
>
> __________________________________________________________________________ 
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




From akostrikov at mirantis.com  Tue Sep  8 10:06:18 2015
From: akostrikov at mirantis.com (Alexander Kostrikov)
Date: Tue, 8 Sep 2015 13:06:18 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for
	fuel-ostf core
In-Reply-To: <CAP2-cGd9ex57e7g+WuqKK=d9kxDZs77hUHR7QJa-n8fjLwHbpw@mail.gmail.com>
References: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
 <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>
 <CAP2-cGd9ex57e7g+WuqKK=d9kxDZs77hUHR7QJa-n8fjLwHbpw@mail.gmail.com>
Message-ID: <CAFNR43P3BLkWcvay3KewNwFdLjjVLi16iHY7GVZecsexnkaDNA@mail.gmail.com>

+1

On Tue, Sep 8, 2015 at 9:07 AM, Dmitriy Shulyak <dshulyak at mirantis.com>
wrote:

> +1
>
> On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova <aurlapova at mirantis.com
> > wrote:
>
>> +1
>>
>> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
>> tleontovich at mirantis.com> wrote:
>>
>>> Fuelers,
>>>
>>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>>> He's been doing a great job writing patches (support for detached
>>> services).
>>> Also, his review comments always have a lot of detailed information for
>>> further improvements.
>>>
>>>
>>> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>>>
>>> Please vote with +1/-1 for approval/objection.
>>>
>>> Core reviewer approval process definition:
>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>
>>> --
>>> Best regards,
>>> Tatyana
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Kind Regards,

Alexandr Kostrikov,

Mirantis, Inc.

35b/3, Vorontsovskaya St., 109147, Moscow, Russia


Tel.: +7 (495) 640-49-04
Tel.: +7 (925) 716-64-52

Skype: akostrikov_mirantis

E-mail: akostrikov at mirantis.com

www.mirantis.com
www.mirantis.ru
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/3a137d6d/attachment-0001.html>

From sean at dague.net  Tue Sep  8 10:45:28 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 8 Sep 2015 06:45:28 -0400
Subject: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1)
 fixes be applicable for v2.1 too?
In-Reply-To: <CACE3TKWnnCtjc-CM408zO4BLfG733Rz4s90ap69PdE2jmvNWmg@mail.gmail.com>
References: <CACE3TKWnnCtjc-CM408zO4BLfG733Rz4s90ap69PdE2jmvNWmg@mail.gmail.com>
Message-ID: <55EEBC48.6080200@dague.net>

On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
> Hi All,
> 
> As we all know, the api-paste.ini default setting for /v2 was changed to
> run those on v2.1 (v2.0 on v2.1), which is a really great thing for easy
> code maintenance in the future (removal of v2 code).
> 
> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some bugs
> were found[1] and fixed. But I think we should fix those only for v2
> compatible mode not for v2.1.
> 
> For example bug#1491325, 'device' on volume attachment Request is
> optional param[2] (which does not mean 'null-able' is allowed) and
> v2.1 used to detect and error on usage of 'device' as "None". But as
> it was used as 'None' by many /v2 users and not to break those, we
> should allow 'None' on v2 compatible mode also. But we should not
> allow the same for v2.1.
> 
> IMO v2.1 strong input validation feature (which helps to make API
> usage in correct manner) should not be changed, and for v2 compatible
> mode we should have another solution without affecting v2.1 behavior
> may be having different schema for v2 compatible mode and do the
> necessary fixes there.
> 
> Trying to know other's opinion on this or something I missed during
> any discussion.
> 
> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325
>       https://bugs.launchpad.net/nova/+bug/1491511
> 
> [2]: http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume

A lot of these issues need a case-by-case determination.

In this particular case, we had to consider the documentation, the nova
code, the clients, and the future.

The documentation: device is optional. That means it should be a string
or not there at all. The schema was set to enforce this on v2.1

The nova code: device = None was accepted previously, because device is
a mandatory parameter all the way down the call stack. Two layers in, we
default it to None if it wasn't specified.

The clients: both python-novaclient and ruby fog sent device=None in the
common case. While only 2 data points, this does demonstrate this is
more widespread than just our buggy code.

The future: it turns out we really can't honor this parameter in most
cases anyway, and passing it just means causing bugs. This is an
artifact of the EC2 API that only works on specific (and possibly
forked) versions of Xen that Amazon runs. Most hypervisor / guest
relationships don't allow this to be set. The long term direction is
going to be removing it from our API.

Given that, it seemed fine to relax this across all APIs. We screwed up
and didn't test this case correctly, and long term we're going to dump
it. So we don't want to honor 3 different versions of this API,
especially as no client seems to have been written against the
documentation; they were written against the code in question. If they
write to the docs, they'll be fine. But the clients that are out in the
wild will be fine as well.
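The distinction at stake here -- "optional" versus "null-able" -- can be shown with a toy validator. This is illustrative only: nova's real v2.1 validation is JSON-Schema based, and the function below is invented for the example:

```python
def validate_attach(body, allow_null_device=False):
    """Toy check for the volume-attach request body: an absent 'device'
    is always fine (optional), while device=None passes only when we
    relax validation, as in v2-compatibility mode."""
    attach = body['volumeAttachment']
    if 'device' not in attach:
        return True                  # optional: absent is always valid
    if attach['device'] is None:
        return allow_null_device     # null only allowed when relaxed
    return isinstance(attach['device'], str)
```

With allow_null_device=True the old python-novaclient/fog behavior (sending device=None) keeps working, while the strict path still rejects it, which is the trade-off the thread is weighing.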

	-Sean

-- 
Sean Dague
http://dague.net


From sean at dague.net  Tue Sep  8 10:49:58 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 8 Sep 2015 06:49:58 -0400
Subject: [openstack-dev] 9/4 state of the gate
In-Reply-To: <CAHXdxOd5YL8QE_SfrypBUUsp9xAM4nQJxsHiP7rr7LH=ZHbg8Q@mail.gmail.com>
References: <55E9FB5D.3080603@linux.vnet.ibm.com>
 <55EA48A8.6060505@linux.vnet.ibm.com>
 <CAHXdxOd5YL8QE_SfrypBUUsp9xAM4nQJxsHiP7rr7LH=ZHbg8Q@mail.gmail.com>
Message-ID: <55EEBD56.9060302@dague.net>

On 09/05/2015 09:50 PM, Joe Gordon wrote:
> 
> 
> On Fri, Sep 4, 2015 at 6:43 PM, Matt Riedemann
> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
<snip>
> 
>     I haven't seen the elastic-recheck bot comment on any changes in
>     awhile either so I'm wondering if that's not running.
> 
> 
> Looks like there was a suspicious 4 day gap in elastic-recheck, but it
> appears to be running again?
> 
> $ ./lastcomment.py 
> Checking name: Elastic Recheck
> [0] 2015-09-06 01:12:40 (0:35:54 old)
> https://review.openstack.org/220386 'Reject the cell name include '!',
> '.' and '@' for Nova API' 
> [1] 2015-09-02 00:54:54 (4 days, 0:53:40 old)
> https://review.openstack.org/218781 'Remove the unnecassary
> volume_api.get(context, volume_id)' 

Remember, there is a 15-minute report contract on the bot: we assume that
if we're more than 15 minutes late, enough of the environment is backed up
that there is no point in waiting. We had some pretty substantial backups
in Elastic Search recently.

	-Sean


-- 
Sean Dague
http://dague.net


From scroiset at mirantis.com  Tue Sep  8 11:23:54 2015
From: scroiset at mirantis.com (Swann Croiset)
Date: Tue, 8 Sep 2015 13:23:54 +0200
Subject: [openstack-dev] [Fuel][Plugins] Deployment order with custom
	role
In-Reply-To: <CACo6NWDgjQYc3ZuU-85URByHAoyOe2VgN1tzo3N6JbqixxNT5A@mail.gmail.com>
References: <CAOmgvhym_UG+V2UaCvbN955Ex7HipnE+2Sw_tOKL_AAnM_CCTg@mail.gmail.com>
 <CACo6NWD4aj0+DnB3+cT=pVBQNUYchn2aDa3qTiAsdRf3YLNT6A@mail.gmail.com>
 <CAOmgvhwtLwCFshYHMxsT+8aBd1GaFk8k2p77=x=-9zGXF_+Xmw@mail.gmail.com>
 <CACo6NWDgjQYc3ZuU-85URByHAoyOe2VgN1tzo3N6JbqixxNT5A@mail.gmail.com>
Message-ID: <CAOmgvhxcf3irjxGEcp2WjjcyGpsaTVDREqp=F=rLagdQue03VA@mail.gmail.com>

On Mon, Sep 7, 2015 at 4:25 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> > That said, I'm not sure anchors are sufficient; we still need priorities
> > to specify orders for independent and/or optional plugins (they don't
> > know each other)
>
> If you need this, you're probably doing something wrong. Priorities
> won't solve your problem here, because plugins will need to know about
> priorities in other plugins and that's weird.

Yes, that's weird; by convention all LMA plugins have priorities well
defined between each other. It works well as long as we manage all the
related plugins, but it reaches its limit for other plugins if we don't
align them together.
This kind of workaround was the only solution at the time...


> The only working
> solution here is to make plugin to know about other plugin if it's
> important to make deployment precisely after other plugin.
>
> > So I guess this will break things if we reference a nonexistent
> > plugin-a-task in 'requires'.
>
> That's true. I think the right case here is to implement some sort of
> conditional tasks, so different tasks will be executed in different
> cases.
>
Conditional tasks sound good indeed; how can we bootstrap this feature?


> > About tasks.yaml, we must support it until an equivalent 'deployment
> > order'
> > is implemented with the plugin-custom-role feature.
>
> This is not about plugin-custom-role, this is about our task
> deployment framework. I heard there were some plans on its
> improvements.
>
From the POV of plugin development, priorities did the trick so far, even
if it doesn't look so natural...
I'm just speaking up to preserve the same flexibility within the next
plugin framework releases.
If this effort can be made on the whole Fuel internals and plugins can
enjoy it, I would be happy.
Do you have any pointers about these rumours, any BP?

--
BR
Swann


> Regards,
> Igor
>
> On Mon, Sep 7, 2015 at 3:27 PM, Swann Croiset <scroiset at mirantis.com>
> wrote:
> >
> >
> > On Mon, Sep 7, 2015 at 11:12 AM, Igor Kalnitsky <ikalnitsky at mirantis.com
> >
> > wrote:
> >>
> >> Hi Swann,
> >>
> >> > However, we still need deployment order between independent
> >> > plugins and it seems impossible to define the priorities
> >>
> >> There's no such thing as priorities for now... perhaps we can
> >> introduce some kind of anchors instead of priorities, but that's
> >> another story.
> >
> > Yes, it's another story for the next release(s); anchors could reuse the
> > actual convention of ranges used (disk, network, software, monitoring).
> > That said, I'm not sure anchors are sufficient; we still need priorities
> > to specify ordering for independent and/or optional plugins (they don't
> > know each other)
> >
> >
> >>
> >> Currently the only way to synchronize two plugins is to make one know
> >> about the other one. That means you need to properly set up the
> >> "requires" field:
> >>
> >>     - id: my-plugin-b-task
> >>       type: puppet
> >>       role: [my-plugin-b-role]
> >>       required_for: [post_deployment_end]
> >>       requires: [post_deployment_start, PLUGIN-A-TASK]
> >>       parameters:
> >>         puppet_manifest: some-puppet.pp
> >>         puppet_modules: /etc/puppet/modules
> >>         timeout: 3600
> >>         cwd: /
> >>
> > We thought about this solution _but_ in our case we cannot use it,
> > because the plugin is optional and may not be installed/enabled. So I
> > guess this will break things if we reference a nonexistent
> > plugin-a-task in 'requires'.
> > For example with the LMA plugins, the LMA-Collector plugin must be
> > deployed/installed before the LMA-Infrastructure-Alerting plugin (to
> > avoid false-alert UNKNOWN states), but the latter may not be enabled
> > for the deployment.
> >
> >> Thanks,
> >> Igor
> >>
> >
> > About tasks.yaml, we must support it until an equivalent 'deployment
> order'
> > is implemented with plugin-custom-role feature.
> >
> >>
> >> On Mon, Sep 7, 2015 at 11:31 AM, Swann Croiset <scroiset at mirantis.com>
> >> wrote:
> >> > Hi fuelers,
> >> >
> >> > We're currently porting nearly all LMA plugins to the new plugin fwk
> >> > 3.0.0 to leverage custom role capabilities.
> >> > That brings a lot of simplifications for node assignment, disk
> >> > management, network config, reuse of core tasks and so on... thanks
> >> > to the fwk.
> >> >
> >> > However, we still need deployment order between independent plugins,
> >> > and it seems impossible to define the priorities [0] in
> >> > deployment_tasks.yaml.
> >> > The only way to preserve deployment order would be to keep tasks.yaml
> >> > too.
> >> >
> >> > So, I'm wondering whether this is the recommended solution to address
> >> > plugin deployment order with plugin fwk 3.0.0?
> >> > And furthermore, will tasks.yaml still be supported in the future by
> >> > the plugin fwk, or should the fwk evolve by adding priority
> >> > definitions in deployment_tasks.yaml?
> >> >
> >> > Thanks
> >> >
> >> > [0]
> >> > https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
> >> >
> >> >
> >> >
> __________________________________________________________________________
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> >
> >
> >
> >
> >
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/921c1a43/attachment.html>

From kirubak at zadarastorage.com  Tue Sep  8 11:33:56 2015
From: kirubak at zadarastorage.com (Kirubakaran Kaliannan)
Date: Tue, 8 Sep 2015 17:03:56 +0530
Subject: [openstack-dev] profiling Latency of single PUT operation on proxy
	+ storage
References: <fb2530751a3507f06e8dfbf23fd066c8@mail.gmail.com>
 <faf8b873dff689b9c13833e18d66b5dc@mail.gmail.com>
 <8ae1b3c7e8816e36ca2267cce75aa420@mail.gmail.com>
Message-ID: <3dc023363802cdce5dd13c87e70a385e@mail.gmail.com>

Hi All,



I have attached a simple timeline of proxy+object latency chart for a
single PUT request. Please check.



I am profiling the Swift proxy + object server to improve the latency of a
single PUT request. This may help improve the overall OPS performance.

Test configuration: 4 CPUs + 16 GB + 1 proxy node + 1 storage node + 1
replica for the object ring, 3 replicas for the container ring on SSD;
perform 4K PUT requests one by one.

Every 4K PUT request in the above case takes 22 ms (30 ms with a replica
count of 3 for objects). The target is to bring each 4K PUT request below
10 ms, to double the overall OPS performance.



There are some potential places where we can improve the latency to achieve
this. Can you please provide your thoughts?



*Performance optimization-1:* The proxy server doesn't have to block in
connect()/getexpect() until the object server responds.

*Problem today:* On a PUT request, the proxy server's _connect_put_node()
waits for the response from the object server (getexpect()) after the
connection is established. Once the response ("HTTP_CONTINUE") is received,
the proxy server goes ahead and spawns the send_file thread to send data to
the object servers. The code looks serialized between the proxy and the
object server.

*Optimization*:

*Option1:* Avoid waiting for all the connects to complete before proceeding
with send_data to the already-connected object servers.

*Option2:* The purpose of the getexpect() call is not very clear. Can we
relax this, so that the proxy server goes ahead, reads the data_source and
sends it to the object server right after the connection is established? We
may have to handle extra failure cases here. (FYI: this shaves 3 ms off a
single PUT request.)

    def _connect_put_node(self, nodes, part, path, headers,
                          logger_thread_locals, req):
        """Method for a file PUT connect"""
        ...
        with Timeout(self.app.node_timeout):
            resp = conn.getexpect()
        ...



*Performance Optimization-2*: The object server serializes the
container_update after the data write.

*Problem today:* On a PUT request, after writing the data and metadata, the
object server calls container_update(), which runs serially against all
storage nodes (3-way). Each container update takes 3 ms, which adds up to
9 ms for the container_update to complete.

*Optimization:* Can we make this parallel using green threads, and probably
*return success on the first successful container update* if there is no
connection error? I am trying to understand whether this will have any data
integrity issues; can you please provide your feedback on this?

*(FYI:* this reduces latency by at least 5 ms)
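
To make the "first success" idea concrete, here is a minimal sketch of its
shape. This is illustrative only: Swift's object server uses eventlet green
threads rather than the stdlib pool used here, and update_one_container is
a stand-in for the real per-node container-update request. If I recall
correctly, Swift already records failed container updates as async pendings
and replays them later, which seems relevant to the data-integrity question.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def update_one_container(node):
    # Stand-in for the per-node container-update HTTP request.
    return node != "bad-node"  # pretend only "bad-node" fails

def parallel_container_update(nodes):
    """Fire all container updates in parallel and report success as soon
    as the first one succeeds; the rest keep running in the background."""
    pool = ThreadPoolExecutor(max_workers=len(nodes))
    futures = [pool.submit(update_one_container, n) for n in nodes]
    ok = False
    for f in as_completed(futures):
        if f.result():
            ok = True
            break  # first success: we could respond to the client now
    pool.shutdown(wait=False)  # let any stragglers finish on their own
    return ok

print(parallel_container_update(["node1", "node2", "bad-node"]))  # True
```

The open question remains whether acknowledging after one update, instead
of a quorum, is acceptable for container listing consistency.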



*Performance Optimization-3*: write(metadata) in the object server takes 2
to 3 ms.

*Problem today:* After writing the data to the file, writer.put(metadata)
calls _finalize_put() to process the post-write operation. This takes an
average of 3 ms for every PUT request.

*Optimization:*

*Option 1:* Is it possible to flush the file (or a group of files)
asynchronously in _finalize_put()?

*Option 2:* Can we make this put(metadata) an asynchronous call, so the
container update can happen in parallel? Error conditions must be handled
properly.
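
As a thought experiment for these options, the flush could be handed to a
background worker so the PUT path only enqueues. Everything below is
hypothetical (finalize_put_async, the queue, and the path are illustrative,
not Swift's actual diskfile API):

```python
import queue
import threading

flush_queue = queue.Queue()
flushed = []

def flusher():
    # Background worker: does the durability work (stand-in for os.fsync
    # of the .data file and its directory) off the PUT request path.
    while True:
        path = flush_queue.get()
        if path is None:
            break  # shutdown sentinel
        flushed.append(path)
        flush_queue.task_done()

threading.Thread(target=flusher, daemon=True).start()

def finalize_put_async(path):
    """Enqueue the flush instead of blocking the PUT request on it.
    Trade-off: the 201 response no longer guarantees the data is on disk."""
    flush_queue.put(path)

finalize_put_async("objects/1441713098.00000.data")
flush_queue.join()  # only for the demo; the PUT path would not wait
print(flushed)  # ['objects/1441713098.00000.data']
```

The trade-off in the comment is the crux: batching or deferring fsync buys
latency at the cost of a weaker durability guarantee on acknowledgment.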



I would like to know whether any work has been done in this area already, so
as not to repeat the effort.



The motivation for this work is that 30 ms for a single 4K I/O looks too
high; at that latency the only way to scale is to add more servers. I am
trying to see whether we can achieve something quickly by modifying some
portion of the code, or whether this will require quite a bit of rewriting.



Also, please advise whether this approach of working on the latency of a
single PUT request is the right one.





Thanks

-kiru



*From:* Shyam Kaushik [mailto:shyam at zadarastorage.com
<shyam at zadarastorage.com>]
*Sent:* Friday, September 04, 2015 11:53 AM
*To:* Kirubakaran Kaliannan
*Subject:* RE: profiling per I/O logs



*Hi Kiru,*



I listed a couple of optimization options below. Can you please write up 3-4
optimizations in a similar format and pass them back to me for a quick
review? Once we finalize, let's bounce them off the community to see what
they think.



*Performance optimization-1:* Proxy server: on a PUT request, drive the
client side independently of auth/object-server connection establishment.

*Problem today:* On a PUT request, the client connects and sends its headers
to the proxy server. The proxy server goes to auth, then looks up the ring
and connects to each object server, sending a header. Then, when the object
servers accept the connection, the proxy server sends HTTP continue to the
client, the client writes data into the proxy server, and the proxy server
writes the data to the object servers.

*Optimization:* The proxy server can drive the client side independently of
the backend side. I.e., once auth completes, the proxy server, through a
thread, can send HTTP continue to the client and ask for the data to be
written. In the background it can try to connect to the object servers,
writing the headers. This way, while the backend side is doing work,
parallel work is done at the proxy front end, thereby reducing the overall
I/O processing latency.



<<Can you pls confirm if this is the case>>

*Performance optimization-2:* The proxy does a TCP connect/disconnect on
every PUT to the object server, and similarly the object server does for
container-server updates.

*Problem today:* swift/common/bufferedhttp.py does a TCP connect for every
BufferedHTTPConnection::connect().

*Optimization:* Maintain a refcounted TCP connection pool below
bufferedhttp.py. A connection pool manager periodically cleans up
unreferenced connections. Reusing past TCP connections makes HTTPConnection
setup quicker.
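
A toy sketch of the pooling idea (ignoring refcounting and the periodic
cleanup for brevity; connect_fn stands in for the real
BufferedHTTPConnection setup):

```python
class ConnectionPool:
    """Idle connections keyed by (host, port); reuse skips the handshake."""

    def __init__(self, connect_fn):
        self.connect_fn = connect_fn
        self.idle = {}      # (host, port) -> list of idle connections
        self.connects = 0   # how many real TCP connects we performed

    def get(self, host, port):
        bucket = self.idle.get((host, port))
        if bucket:
            return bucket.pop()  # reuse: no TCP handshake cost
        self.connects += 1
        return self.connect_fn(host, port)

    def release(self, host, port, conn):
        # Return a healthy connection to the pool for later reuse.
        self.idle.setdefault((host, port), []).append(conn)

pool = ConnectionPool(lambda host, port: object())
c1 = pool.get("storage1", 6000)
pool.release("storage1", 6000, c1)
c2 = pool.get("storage1", 6000)
print(c2 is c1, pool.connects)  # True 1
```

A real version would also need idle timeouts and detection of half-closed
sockets, since a pooled connection may have been dropped by the peer.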



--Shyam



*From:* Kirubakaran Kaliannan [mailto:kirubak at zadarastorage.com]
*Sent:* Thursday, September 03, 2015 3:10 PM
*To:* Shyam Kaushik
*Subject:* profiling per I/O logs





Hi Shyam,



You can look at the directory



/mnt/work/kirubak/profile/perio/*



This is for single object replica with 3 way container.



The list of potential identified tasks that we can work on with the
community is in the issues section (P1, O1, O2), which we can discuss:



https://docs.google.com/spreadsheets/d/1We577s7CQRfq2RmpPCN04kEHc8HD_4g_54ELPn-F0g0/edit#gid=288817690



Thanks

-kiru
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/009026bb/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: perf_timeline_low.png
Type: image/png
Size: 28302 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/009026bb/attachment.png>

From jistr at redhat.com  Tue Sep  8 11:47:34 2015
From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=)
Date: Tue, 8 Sep 2015 13:47:34 +0200
Subject: [openstack-dev] [tripleo] Plugin integration and environment
 file naming
In-Reply-To: <20150908084055.GA23526@t430slt.redhat.com>
References: <20150908084055.GA23526@t430slt.redhat.com>
Message-ID: <55EECAD6.9060409@redhat.com>

On 8.9.2015 10:40, Steven Hardy wrote:
> Hi all,
>
> So, lately we're seeing an increasing number of patches adding integration
> for various third-party plugins, such as different neutron and cinder
> backends.
>
> This is great to see, but it also poses the question of how we organize the
> user-visible interfaces to these things long term.
>
> Originally, I was hoping to land some Heat composability improvements[1]
> which would allow for tagging templates as providing a particular
> capability (such as "provides neutron ML2 plugin"), but this has stalled on
> some negative review feedback and isn't going to be implemented for
> Liberty.
>
> However, today looking at [2] and [3], (which both add t-h-t integration to
> enable neutron ML2 plugins), a simpler interim solution occured to me,
> which is just to make use of a suggested/mandatory naming convention.
>
> For example:
>
> environments/neutron-ml2-bigswitch.yaml
> environments/neutron-ml2-cisco-nexus-ucsm.yaml
>
> Or via directory structure:
>
> environments/neutron-ml2/bigswitch.yaml
> environments/neutron-ml2/cisco-nexus-ucsm.yaml

+1 for this one ^

>
> This would require enforcement via code-review, but could potentially
> provide a much more intuitive interface for users when they go to create
> their cloud, and particularly it would make life much easier for any Ux to
> ask "choose which neutron-ml2 plugin you want", because the available
> options can simply be listed by looking at the available environment
> files?

Yeah, I like the idea of more structure in placing the environment files.
It seems like customization of deployments via those files is becoming
common, so we might see more environment files appearing over time.
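
To illustrate the UX point about listing options, with such a convention a
tool could discover the available ML2 choices just by listing files. This is
a sketch only; the directory layout mirrors the proposal above, not anything
t-h-t ships today:

```python
import os
import tempfile

def list_ml2_options(tht_root):
    """List available neutron-ml2 environment files by base name."""
    env_dir = os.path.join(tht_root, "environments", "neutron-ml2")
    return sorted(f[:-len(".yaml")] for f in os.listdir(env_dir)
                  if f.endswith(".yaml"))

# Demo with a throwaway tree mirroring the proposed layout:
root = tempfile.mkdtemp()
ml2 = os.path.join(root, "environments", "neutron-ml2")
os.makedirs(ml2)
for name in ("bigswitch.yaml", "cisco-nexus-ucsm.yaml"):
    open(os.path.join(ml2, name), "w").close()
print(list_ml2_options(root))  # ['bigswitch', 'cisco-nexus-ucsm']
```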

>
> What do folks think of this, is now a good time to start enforcing such a
> convention?

We'd probably need to do this at some point anyway, and sooner seems 
better than later :)


Apart from "cinder" and "neutron-ml2" directories, we could also have a
"combined" (or something similar) directory for env files which combine
multiple other env files. The use case I see is extra pre-deployment
configs which would commonly be used together, e.g. combining the Neutron
and Horizon extensions of a single vendor [4].

Maybe a couple of other categories could also be found, like "network"
(for things related mainly to network isolation) or "devel" [5].


Jirka

[4] 
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml
[5] 
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/puppet-ceph-devel.yaml

>
> Steve
>
> [1] https://review.openstack.org/#/c/196656/
> [2] https://review.openstack.org/#/c/213142/
> [3] https://review.openstack.org/#/c/198754/
>
>



From anikm99 at yahoo.com  Tue Sep  8 11:51:38 2015
From: anikm99 at yahoo.com (Anik)
Date: Tue, 8 Sep 2015 11:51:38 +0000 (UTC)
Subject: [openstack-dev] GSLB
Message-ID: <675198506.3158990.1441713098377.JavaMail.yahoo@mail.yahoo.com>

Hello,
Recently I saw some discussions in the Designate mailing list archive around GSLB, and some API snippets subsequently. It seems like early days for this project, but I am highly excited that there is now some traction on GSLB.
I would like to find out [1] whether there have been discussions around how GSLB will work across multiple OpenStack regions, and [2] what level of integration is planned between Designate and GSLB. Any pointers in this regard will be helpful.

Regards,
Anik
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/c96b1d96/attachment.html>

From sean at dague.net  Tue Sep  8 12:13:14 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 8 Sep 2015 08:13:14 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55E83B84.5000000@openstack.org>
References: <55E83B84.5000000@openstack.org>
Message-ID: <55EED0DA.6090201@dague.net>

On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> A feature deprecation policy is a standard way to communicate and
> perform the removal of user-visible behaviors and capabilities. It helps
> setting user expectations on how much and how long they can rely on a
> feature being present. It gives them reassurance over the timeframe they
> have to adapt in such cases.
> 
> In OpenStack we always had a feature deprecation policy that would apply
> to "integrated projects", however it was never written down. It was
> something like "to remove a feature, you mark it deprecated for n
> releases, then you can remove it".
> 
> We don't have an "integrated release" anymore, but having a base
> deprecation policy, and knowing which projects are mature enough to
> follow it, is a great piece of information to communicate to our users.
> 
> That's why the next-tags workgroup at the Technical Committee has been
> working to propose such a base policy as a 'tag' that project teams can
> opt to apply to their projects when they agree to apply it to one of
> their deliverables:
> 
> https://review.openstack.org/#/c/207467/
> 
> Before going through the last stage of this, we want to survey existing
> projects to see which deprecation policy they currently follow, and
> verify that our proposed base deprecation policy makes sense. The goal
> is not to dictate something new from the top, it's to reflect what's
> generally already applied on the field.
> 
> In particular, the current proposal says:
> 
> "At the very minimum the feature [...] should be marked deprecated (and
> still be supported) in the next two coordinated end-of-cycle releases.
> For example, a feature deprecated during the M development cycle should
> still appear in the M and N releases and cannot be removed before the
> beginning of the O development cycle."
> 
> That would be a n+2 deprecation policy. Some suggested that this is too
> far-reaching, and that a n+1 deprecation policy (feature deprecated
> during the M development cycle can't be removed before the start of the
> N cycle) would better reflect what's being currently done. Or that
> config options (which are user-visible things) should have n+1 as long
> as the underlying feature (or behavior) is not removed.
> 
> Please let us know what makes the most sense. In particular between the
> 3 options (but feel free to suggest something else):
> 
> 1. n+2 overall
> 2. n+2 for features and capabilities, n+1 for config options
> 3. n+1 overall
> 
> Thanks in advance for your input.

Based on my experience of what OpenStack projects are doing today:

Configuration options are either N or N+1: either they are just changed,
or there is a single deprecation cycle (i.e. deprecated by Milestone 3
of release N, removed before milestone 1 of release N+1). I know a lot
of projects continue to just change configs based on the number of
changes we block landing with Grenade.

An N+1 policy for configuration seems sensible. N+2 ends up pretty
burdensome because typically removing a config option means dropping a
code path as well, and an N+2 policy means the person deprecating the
config may very well not be the one removing the code, leading to debt
or more bugs.

For features, this is all over the map. I've seen removals in 0 cycles
because everyone is convinced that the feature doesn't work anyway (and
had been broken for some amount of time). I've seen 1 cycle deprecations
for minor features that are believed to be little used. In Nova we did
XML deprecation over 2 cycles IIRC. EC2 is going to be 2+ (we're still
waiting to get field data back on the alternate approach). The API
version deprecations by lots of projects are measured in years at this
point.

I feel like a realistic bit of compromise that won't drive everyone nuts
would be:

config options: n+1
minor features: n+1
major features: at least n+2 (larger is ok)

And come up with some fuzzy words around minor / major features.

I also think it would be really good to ensure that any project that gets
this tag publishes a list of deprecations in its release notes, and that
this gets checked going forward.

	-Sean

-- 
Sean Dague
http://dague.net


From jistr at redhat.com  Tue Sep  8 12:20:51 2015
From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=)
Date: Tue, 8 Sep 2015 14:20:51 +0200
Subject: [openstack-dev] [tripleo] Plugin integration and environment
 file naming
In-Reply-To: <55EECAD6.9060409@redhat.com>
References: <20150908084055.GA23526@t430slt.redhat.com>
 <55EECAD6.9060409@redhat.com>
Message-ID: <55EED2A3.6040206@redhat.com>

On 8.9.2015 13:47, Jiří Stránský wrote:
> Apart from "cinder" and "neutron-ml2" directories, we could also have a
> "combined" (or sth similar) directory for env files which combine
> multiple other env files. The use case which i see is for extra
> pre-deployment configs which would be commonly used together. E.g.
> combining Neutron and Horizon extensions of a single vendor [4].

Ah, I mixed up two things in this paragraph -- env files vs. extraconfig
nested stacks. I'm not sure if we want to start namespacing the extraconfig
bits in a parallel manner, e.g.
"puppet/extraconfig/pre_deploy/controller/cinder",
"puppet/extraconfig/pre_deploy/controller/neutron-ml2". It would be nice,
especially if we're able to map the extraconfig categories to env file
categories most of the time. OTOH, the directory nesting is getting quite
deep there :)

J.

> [4]
> https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml



From james.slagle at gmail.com  Tue Sep  8 12:27:49 2015
From: james.slagle at gmail.com (James Slagle)
Date: Tue, 8 Sep 2015 08:27:49 -0400
Subject: [openstack-dev] [TripleO] Mitaka proposed design sessions
Message-ID: <CAHV77z-XLZoYFc+9wWhpq3en-+oejbkEoTZBG02FLrsWL4LrYA@mail.gmail.com>

Hi everyone,

I started an etherpad to capture some ideas for the TripleO design
sessions in Tokyo:
https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions

Please add your ideas and proposals to the etherpad. Once we have some
set of proposals, we can come back around and have everyone assign a
ranking to them so we can pick the actual sessions we'll have.

-- 
-- James Slagle
--


From slagun at mirantis.com  Tue Sep  8 12:28:42 2015
From: slagun at mirantis.com (Stan Lagun)
Date: Tue, 8 Sep 2015 15:28:42 +0300
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAM6FM9T-VRxqTgSbz3gcyEPA+F-+Hs3qCMqF2EpC85KvvwXvhw@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
 <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
 <etPan.55e5904c.61791e85.14d@pegasus.local>
 <CAM6FM9T-VRxqTgSbz3gcyEPA+F-+Hs3qCMqF2EpC85KvvwXvhw@mail.gmail.com>
Message-ID: <CAOCoZiZO+4f=+yrbcpH44rzkUe0+h6xZtaG8sm6QT1M1CuVr-g@mail.gmail.com>

+1

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

<slagun at mirantis.com>

On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov <ativelkov at mirantis.com>
wrote:

> +1. Well deserved.
>
> --
> Regards,
> Alexander Tivelkov
>
> On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin <vryzhenkin at mirantis.com>
> wrote:
>
>> +1 from me ;)
>>
>> --
>> Victor Ryzhenkin
>> Junior QA Engineer
>> freerunner on #freenode
>>
>> On September 1, 2015 at 12:18:19, Ekaterina Chernova (
>> efedorova at mirantis.com) wrote:
>>
>> +1
>>
>> On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii <ddovbii at mirantis.com>
>> wrote:
>>
>>> +1
>>>
>>> 2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:
>>>
>>>> +1
>>>>
>>>> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com>
>>>> wrote:
>>>>
>>>>> I'm pleased to nominate Nikolai for Murano core.
>>>>>
>>>>> He's been actively participating in the development of Murano during
>>>>> Liberty and is among the top 5 contributors during the last 90 days.
>>>>> He's also leading the CloudFoundry integration initiative.
>>>>>
>>>>> Here are some useful links:
>>>>>
>>>>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>>>>> List of reviews:
>>>>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>>>> Murano contribution during latest 90 days
>>>>> http://stackalytics.com/report/contribution/murano/90
>>>>>
>>>>> Please vote with +1/-1 for approval/objections
>>>>>
>>>>> --
>>>>> Kirill Zaitsev
>>>>> Murano team
>>>>> Software Engineer
>>>>> Mirantis, Inc
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>>>> http://mirantis.com | smelikyan at mirantis.com
>>>>
>>>> +7 (495) 640-4904, 0261
>>>> +7 (903) 156-0836
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/4a0184f9/attachment.html>

From gal.sagie at gmail.com  Tue Sep  8 12:29:31 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Tue, 8 Sep 2015 15:29:31 +0300
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAEfdOg0Y1YehBMty-9MEw20BPuuE9PVwcsjcQLxxoBLVMrkGrQ@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
 <CAEfdOg2AS8uDZUyi5FzyAEKqU3D9axFaKBs5ibzjFudSR5JFGw@mail.gmail.com>
 <CAG9LJa40-NM1LTJOZxuTa27W6LyZgpJwb7D4XVcqoiH63GaASg@mail.gmail.com>
 <CAEfdOg0oUVpDCMU6Ko59MvkyqdzFnxOAWqZpk1QF-eYbZ_ebeA@mail.gmail.com>
 <CAG9LJa675MAQ0pRZ=WRBpTDsKudZHk0vRchNaVNG45_Z3tBXDw@mail.gmail.com>
 <CAEfdOg0Y1YehBMty-9MEw20BPuuE9PVwcsjcQLxxoBLVMrkGrQ@mail.gmail.com>
Message-ID: <CAG9LJa6xOGCRWeM+U9bK-v=BS6-4vbCX6gFH517HV8kXeC5skw@mail.gmail.com>

Hi Germy,

Yes, I understand now.
What you are requesting is an enhancement to the API so that these port
forwarding rules can be assigned in bulk, per subnet.
I will make sure to mention this in the spec I am writing for this.

Thanks!
Gal.
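
A quick sketch of how the bulk allocation Germy describes below could
derive the external ports from a base number (purely illustrative; this is
not a proposed Neutron API, and the function name is made up):

```python
import ipaddress

def bulk_port_mappings(floating_ip, base_port, first_ip, last_ip,
                       inside_port=80):
    """Map floating_ip:(base_port + i) to each private IP's inside_port,
    where i is the host's 1-based position in the requested range."""
    start = ipaddress.ip_address(first_ip)
    end = ipaddress.ip_address(last_ip)
    mappings = {}
    for offset in range(int(end) - int(start) + 1):
        inside = str(start + offset)
        ext = "%s:%d" % (floating_ip, base_port + offset + 1)
        mappings[ext] = "%s:%d" % (inside, inside_port)
    return mappings

m = bulk_port_mappings("172.20.20.10", 4000, "10.0.0.1", "10.0.0.3")
for ext, inside in sorted(m.items()):
    print(ext, "=>", inside)
# 172.20.20.10:4001 => 10.0.0.1:80
# 172.20.20.10:4002 => 10.0.0.2:80
# 172.20.20.10:4003 => 10.0.0.3:80
```

One API call could then return just the base number (4000 here) and let the
caller derive each VM's external port from its private address.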


On Tue, Sep 8, 2015 at 10:33 AM, Germy Lure <germy.lure at gmail.com> wrote:

> Hi Gal,
>
> Thank you for your explanation.
> As you mentioned, PF is a way of reusing floating IP to access several
> Neutron ports. I agree with your point of view completely.
> Let me extend your example to explain where I was going.
> T1 has 20 subnets behind a router, and one of them is 10.0.0.0/24, named
> s1. There are 100 VMs named VM1~VM100 in subnet s1, and T1 wants to
> update the same file (or something else) in those VMs. Let's have a look
> at how T1 will do it.
>
> T1 invokes the Neutron API to create a port mapping for VM1 (maybe that
> will be done by the operator).
> For example:  172.20.20.10:4001  =>  10.0.0.1:80
> And then T1 does the update task via 172.20.20.10:4001.
>
> Now, for VM2, VM3, ... VM100, T1 must repeat the steps above with
> different ports, and T1 must clean up those records (100 records in the
> DB) after accessing. That's bad, I think.
> Note that T1 still has 19 subnets to deal with. That's a nightmare for
> T1.
> For PaaS and SaaS, that is also a big problem.
>
> So, can we do it like this?
> T1 invokes Neutron API one time for s1(not VM1), and Neutron setups a
> group of port-mapping relation. For example:
> 172.20.20.10:4001  =>  10.0.0.1:80
> 172.20.20.10:4002  =>  10.0.0.2:80
> 172.20.20.10:4003  =>  10.0.0.3:80
> ......                                   ......
> 172.20.20.10:4100  =>  10.0.0.100:80
> Now T1 just needs to focus on his/her business work, not PF.
>
> We would store just one record in the Neutron DB for such a one-time API
> invocation. For the single-VM scenario, we can specify a private IP range
> instead of a subnet, for example 10.0.0.1 to 10.0.0.3. The mapped ports
> (like 4001, 4002, ...) can be returned in the response body, for example
> 4001 to 4003; or we can just return a base number (4000) and let the
> upper layer derive the rest, for example 4000+1, where 1 is the last
> octet of VM1's private IP address.
>
> Forgive my poor English.
> Hope that's clear enough; I am happy to discuss it further if necessary.
>
> Germy
>
>
> On Tue, Sep 8, 2015 at 1:58 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>
>> Hi Germy,
>>
>> Port forwarding, the way I see it, is a way of reusing the same floating
>> IP to access several different Neutron ports (VMs, containers).
>> So for example, if we have floating IP 172.20.20.10, we can assign
>> 172.20.20.10:4001 to VM1 and 172.20.20.10:4002 to VM2 (which are behind
>> the same router, which has an external gateway).
>> The user uses the same IP, but according to the TCP/UDP port, Neutron
>> performs mapping in the virtual router namespace to the private IP, and
>> possibly to a different port running on that instance, for example port
>> 80.
>>
>> So for example if we have two VM's with private IP's 10.0.0.1 and
>> 10.0.0.2 and we have a floating ip assigned to the router of 172.20.20.10
>> with port forwarding we can build the following mapping:
>>
>> 172.20.20.10:4001  =>  10.0.0.1:80
>> 172.20.20.10:4002  =>  10.0.0.2:80
>>
>> And this is only from the Neutron API; this feature is useful when you
>> offer PaaS or SaaS and have an automated framework that calls the API
>> to allocate these "client ports".
>>
>> I am not sure why you think the operator will need to ssh into the
>> instances; the operator just needs to build the mapping of
>> <floating_ip, port> to the instance's private IP.
>> Of course, keep in mind that we didn't yet discuss full API details,
>> but it's going to be something like that (at least the way I see it).
>>
>> Hope that explains it.
>>
>> Gal.
>>
>> On Mon, Sep 7, 2015 at 5:21 AM, Germy Lure <germy.lure at gmail.com> wrote:
>>
>>> Hi Gal,
>>>
>>> I'm sorry for my poor English. Let me try again.
>>>
>>> What the operator wants to access is several related instances, not
>>> just one, or one by one. The use case is periodic checks and
>>> maintenance. RELATED means the instances may be in one subnet, one
>>> network, or on one host. The host case is similar to accessing the
>>> Docker containers on a host, as you mentioned before.
>>>
>>> With the API you mentioned, the user must SSH into an instance and then
>>> invoke the API to update the IP address and port, or even create a new
>>> PF rule, to access another one. That will be a nightmare for a VPC
>>> operator who owns many instances.
>>>
>>> In a word, I think the "inside_addr" should be "subnet" or "host".
>>>
>>> Hope this is clear enough.
>>>
>>> Germy
>>>
>>> On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>>
>>>> Hi Germy,
>>>>
>>>> I am not sure i understand what you mean, can you please explain it
>>>> further?
>>>>
>>>> Thanks
>>>> Gal.
>>>>
>>>> On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.lure at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi, Gal
>>>>>
>>>>> Thank you for bringing this up. But I have some suggestions for the
>>>>> API.
>>>>>
>>>>> An operator or some other component wants to reach several related
>>>>> VMs, not just one, or one by one. Here, RELATED means that the VMs are
>>>>> in one subnet, one network, or on one host (similar to reaching Docker
>>>>> containers on a host).
>>>>>
>>>>> With the API you mentioned, the user must SSH into one VM and update,
>>>>> or even delete and add, a PF rule to SSH into another. For a VPC (with
>>>>> 20 subnets?) admin, that is a total nightmare.
>>>>>
>>>>> Germy
>>>>>
>>>>>
>>>>> On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>>>>
>>>>>> Hello All,
>>>>>>
>>>>>> I have searched and found many past efforts to implement port
>>>>>> forwarding in Neutron.
>>>>>> I have found two incomplete blueprints [1], [2] and an abandoned
>>>>>> patch [3].
>>>>>>
>>>>>> There is even a project on Stackforge [4], [5] that claims
>>>>>> to implement this, but the L3 parts in it seem older than current
>>>>>> master.
>>>>>>
>>>>>> I have recently come across this requirement for various use cases,
>>>>>> one of them being feature compliance with Docker's port-mapping
>>>>>> feature (for Kuryr), and saving floating IP space.
>>>>>> There have been many discussions in the past that require this
>>>>>> feature, so I assume there is demand to make this formal; just a few
>>>>>> examples: [6], [7], [8], [9].
>>>>>>
>>>>>> The idea in a nutshell is to support port forwarding (TCP/UDP ports)
>>>>>> on the external router leg from the public network to internal ports,
>>>>>> so a user can use one floating IP (the external gateway router
>>>>>> interface IP) and reach different internal ports depending on the
>>>>>> port number. This should happen on the network node (and can also be
>>>>>> leveraged for security reasons).
>>>>>>
>>>>>> I think that the POC implementation in the Stackforge project shows
>>>>>> that this needs to be implemented inside the L3 parts of the current
>>>>>> reference implementation; it would be hard to maintain something like
>>>>>> that in an external repository.
>>>>>> (I also think that the API/DB extensions should stay close to the
>>>>>> current L3 reference implementation.)
>>>>>>
>>>>>> I would like to renew the efforts on this feature and propose an RFE
>>>>>> and a spec for it for the next release; any comments/ideas/thoughts
>>>>>> are welcome.
>>>>>> And of course, if any of the people interested, or any of the people
>>>>>> who worked on this before, want to join the effort, you are more than
>>>>>> welcome to join and comment.
>>>>>>
>>>>>> Thanks
>>>>>> Gal.
>>>>>>
>>>>>> [1]
>>>>>> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
>>>>>> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
>>>>>> [3] https://review.openstack.org/#/c/60512/
>>>>>> [4] https://github.com/stackforge/networking-portforwarding
>>>>>> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>>>>>>
>>>>>> [6]
>>>>>> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
>>>>>> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
>>>>>> [8]
>>>>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
>>>>>> [9]
>>>>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe:
>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards ,
>>>>
>>>> The G.
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>>
>>
>
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/0c67b620/attachment.html>

From edwin.zhai at intel.com  Tue Sep  8 13:09:39 2015
From: edwin.zhai at intel.com (Zhai, Edwin)
Date: Tue, 8 Sep 2015 21:09:39 +0800 (CST)
Subject: [openstack-dev] [Ceilometer][Aodh] event-alarm fire policy
In-Reply-To: <55EEA258.1090705@126.com>
References: <alpine.DEB.2.10.1509080704210.32581@edwin-gen>
 <55EEA258.1090705@126.com>
Message-ID: <alpine.DEB.2.10.1509082058490.32581@edwin-gen>

Liusheng,
Thanks for your idea. I think it guarantees the alarm action gets called. But
I just want it fired upon each matching event, e.g. for each instance-crash
event.

Have talked with MIBU; repeat-actions can be used for this purpose.

Best,
Edwin

On Tue, 8 Sep 2015, liusheng wrote:

> Just a personal thought: can we add an ACK to the alarm notifier? When an
> event-alarm fires, the alarm state transitions to "alarm". If
> 'alarm_action' has been set, the 'alarm_action' will be triggered and
> notify. For event-alarms, a timeout can be set to wait for the ACK from
> the alarm notifier: if the ACK is received, reset the alarm state to OK;
> if the timeout occurs, set the alarm state to 'UNKNOWN'. If 'alarm_action'
> has not been set, we just need to record the alarm state transition
> history.
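The ACK/timeout life cycle proposed above can be sketched as a tiny state machine. This is purely illustrative, not Aodh code; the state names follow the proposal in the email.

```python
# Illustrative model of the proposed ACK-based event-alarm life cycle:
# fire -> "alarm"; ACK within timeout -> back to "ok" (can fire again);
# timeout with no ACK -> "unknown".
class EventAlarm:
    def __init__(self):
        self.state = "ok"

    def fire(self):
        self.state = "alarm"

    def on_ack(self):
        if self.state == "alarm":
            self.state = "ok"       # notifier acknowledged the action

    def on_timeout(self):
        if self.state == "alarm":
            self.state = "unknown"  # notifier never acknowledged

a = EventAlarm()
a.fire(); a.on_ack()       # normal path: state returns to "ok"
b = EventAlarm()
b.fire(); b.on_timeout()   # lost notification: state becomes "unknown"
```

Resetting to "ok" on ACK is what would allow the alarm to fire again on the next matching event.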
>
> On 2015/9/8 7:26, Zhai, Edwin wrote:
>> All,
>> Currently, event-alarm is one-shot style: it doesn't fire again for the
>> same event.  But threshold-alarm is a limited periodic style:
>> 1. Only one fire for continuous valid datapoints.
>> 2. A new fire occurs if insufficient data is followed by valid
>> datapoints, as we reset the alarm state upon insufficient data.
>>
>> So maybe event-alarm should be periodic as well. But I'm not sure when to
>> reset the alarm state to 'UNKNOWN': after each fire, or when a different
>> event is received.
>>
>> Filed a bug at
>> https://bugs.launchpad.net/aodh/+bug/1493171
>>
>> Best Rgds,
>> Edwin
>>
>
>
>
>

Best Rgds,
Edwin

From me at romcheg.me  Tue Sep  8 13:37:52 2015
From: me at romcheg.me (Roman Prykhodchenko)
Date: Tue, 8 Sep 2015 15:37:52 +0200
Subject: [openstack-dev] [Fuel] python-jobs now vote on Fuel Client
Message-ID: <BLU436-SMTP182D4FD610239F1E4226126AD530@phx.gbl>

Good news folks!

Since the python jobs have worked well on a number of patches, their mode was switched to voting. They were also added to the gate pipeline.


- romcheg
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/246e5bb1/attachment.pgp>

From vladyslav.gridin at nuagenetworks.net  Tue Sep  8 13:39:59 2015
From: vladyslav.gridin at nuagenetworks.net (Vladyslav Gridin)
Date: Tue, 8 Sep 2015 15:39:59 +0200
Subject: [openstack-dev] [nova] pci-passthrough and neutron multi segment
	networks
Message-ID: <CAFL1fMArmK_WwOirD4-b+8ZoYY8jTaZSa49W+iHvixMW7A9JgA@mail.gmail.com>

Hi All,

Is there a way to successfully deploy a VM with an SR-IOV NIC
on both a single-segment VLAN network and a multi-provider network
containing a VLAN segment?
When nova builds the PCI request for the NIC, it looks for
'physical_network' at the network level, but for multi-provider networks
this is set within a segment.

e.g.
RESP BODY: {"network": {"status": "ACTIVE", "subnets":
["3862051f-de55-4bb9-8c88-acd675bb3702"], "name": "sriov",
"admin_state_up": true, "router:external": false, "segments":
[{"provider:segmentation_id": 77, "provider:physical_network": "physnet1",
"provider:network_type": "vlan"}, {"provider:segmentation_id": 35,
"provider:physical_network": null, "provider:network_type": "vxlan"}],
"mtu": 0, "tenant_id": "bd3afb5fac0745faa34713e6cada5a8d", "shared": false,
"id": "53c0e71e-4c9a-4a33-b1a0-69529583e05f"}}


So, if the pci_passthrough_whitelist on my compute node contains a
physical_network, deployment will fail on a multi-segment network, and
vice versa.
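A rough sketch of a lookup that handles both shapes of the network dict shown above. The helper name is hypothetical and this is not nova's actual code path; it only illustrates where 'physical_network' lives in each case.

```python
# Illustrative helper: collect candidate physical_networks from a Neutron
# network dict, whether single-segment (network level) or multi-provider
# (per-segment).
def get_physnets(network):
    if "segments" in network:
        return [s["provider:physical_network"]
                for s in network["segments"]
                if s["provider:physical_network"] is not None]
    phys = network.get("provider:physical_network")
    return [phys] if phys is not None else []

# Multi-provider network, as in the RESP BODY above: one VLAN segment with
# physnet1, one VXLAN segment with no physical_network.
net = {"segments": [
    {"provider:segmentation_id": 77,
     "provider:physical_network": "physnet1",
     "provider:network_type": "vlan"},
    {"provider:segmentation_id": 35,
     "provider:physical_network": None,
     "provider:network_type": "vxlan"}]}
```

A PCI request built from this list would match the whitelist in both the single-segment and the multi-segment case.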

Thanks,
Vlad.

From jason.dobies at redhat.com  Tue Sep  8 13:40:31 2015
From: jason.dobies at redhat.com (Jay Dobies)
Date: Tue, 8 Sep 2015 09:40:31 -0400
Subject: [openstack-dev] [tripleo] Plugin integration and environment
 file naming
In-Reply-To: <55EED2A3.6040206@redhat.com>
References: <20150908084055.GA23526@t430slt.redhat.com>
 <55EECAD6.9060409@redhat.com> <55EED2A3.6040206@redhat.com>
Message-ID: <55EEE54F.8030602@redhat.com>

I like where this is going. I've been asked a number of times where to 
put things and we never had a solid convention. I like the idea of 
having that docced somewhere.

I like either of the proposed solutions. My biggest concern is that they 
don't capture how you actually use them. I know that was the point of 
your e-mail; we don't yet have the Heat constructs in place for the 
templates to convey that information.

What about if we adopt the directory structure model and strongly 
request a README.md file in there? It's similar to the image elements 
model. We could offer a template to fill out or leave it open ended, but 
the purpose would be to specify:

- Installation instructions (e.g. "set the resource registry namespace 
for Blah to point to this file" or "use the corresponding environment 
file foo.yaml")
- Parameters that can/should be specified via parameter_defaults. I'm 
not saying we add a ton of documentation in there that would be 
duplicate of the actual parameter definitions, but perhaps just a list 
of the parameter names. That way, a user can have an idea of what 
specifically to look for in the template parameter list itself.

That should be all of the info that we'd like Heat to eventually provide 
and hold us over until those discussions are finished.

On 09/08/2015 08:20 AM, Jiří Stránský wrote:
> On 8.9.2015 13:47, Jiří Stránský wrote:
>> Apart from "cinder" and "neutron-ml2" directories, we could also have a
>> "combined" (or sth similar) directory for env files which combine
>> multiple other env files. The use case which I see is for extra
>> pre-deployment configs which would be commonly used together. E.g.
>> combining Neutron and Horizon extensions of a single vendor [4].
>
> Ah, I mixed up two things in this paragraph -- env files vs. extraconfig
> nested stacks. Not sure if we want to start namespacing the extraconfig
> bits in a parallel manner. E.g.
> "puppet/extraconfig/pre_deploy/controller/cinder",
> "puppet/extraconfig/pre_deploy/controller/neutron-ml2". It would be
> nice, especially if we're sort of able to map the extraconfig categories
> to env file categories most of the time. OTOH the directory nesting is
> getting quite deep there :)

That was my thought too, that the nesting is getting a bit deep. I also
don't think we should enforce the role in the directory structure, as
we've already seen instances of things that have to happen on both
controller and compute.


> J.
>
>> [4]
>> https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml
>>
>
>


From apevec at gmail.com  Tue Sep  8 13:50:17 2015
From: apevec at gmail.com (Alan Pevec)
Date: Tue, 8 Sep 2015 15:50:17 +0200
Subject: [openstack-dev] [depfreeze] [keystone] Set minimum version for
	passlib
Message-ID: <CAGi==UW6OdXiBxfBdj6QRiMtgOLy+6g=zUY2E4r6h7iss4eF=Q@mail.gmail.com>

Hi all,

according to https://wiki.openstack.org/wiki/DepFreeze I'm requesting a
depfreeze exception for
https://review.openstack.org/221267
This is just a sync with reality, copying Javier's description:

(Keystone) commit a7235fc0511c643a8441efd3d21fc334535066e2 [1] uses
passlib.utils.MAX_PASSWORD_SIZE, which was only introduced to
passlib in version 1.6

Cheers,
Alan

[1] https://review.openstack.org/217449


From vkozhukalov at mirantis.com  Tue Sep  8 13:53:52 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Tue, 8 Sep 2015 16:53:52 +0300
Subject: [openstack-dev]  [Fuel] Remove MOS DEB repo from master node
Message-ID: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>

Dear colleagues,

The idea is to remove MOS DEB repo from the Fuel master node by default and
use online MOS repo instead. Pros of such an approach are:

0) Reduced minimal disk space requirement for the master node
1) There won't be such things as [1] and [2], thus a less complicated
flow, fewer errors, easier to maintain, easier to understand, easier to
troubleshoot
2) If one wants to have a local mirror, the flow is the same as in the
case of upstream repos (fuel-createmirror), which is clearer for a user
to understand.

Many people still associate ISO with MOS





[1]
https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
[2]
https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115


Vladimir Kozhukalov

From doug at doughellmann.com  Tue Sep  8 13:56:29 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 08 Sep 2015 09:56:29 -0400
Subject: [openstack-dev] [ptl][release] flushing unreleased client
	library changes
In-Reply-To: <55EA04F5.2020006@swartzlander.org>
References: <1441384328-sup-9901@lrrr.local>
 <55E9E81E.9060401@swartzlander.org> <1441394380-sup-7270@lrrr.local>
 <55EA04F5.2020006@swartzlander.org>
Message-ID: <1441720324-sup-5616@lrrr.local>

Excerpts from Ben Swartzlander's message of 2015-09-04 16:54:13 -0400:
> 
> On 09/04/2015 03:21 PM, Doug Hellmann wrote:
> > Excerpts from Ben Swartzlander's message of 2015-09-04 14:51:10 -0400:
> >> On 09/04/2015 12:39 PM, Doug Hellmann wrote:
> >>> PTLs,
> >>>
> >>> We have quite a few unreleased client changes pending, and it would
> >>> be good to go ahead and publish them so they can be tested as part
> >>> of the release candidate process. I have the full list of changes for
> >>> each project below, so please find yours and review them and then
> >>> propose a release request to the openstack/releases repository.
> >> Manila had multiple gate-breaking bugs this week and I've extended our
> >> feature freeze to next Tuesday to compensate. As a result our L-3
> >> milestone release is not really representative of Liberty and we'd
> >> rather not do a client release until we reach RC1.
> > Keep in mind that the unreleased changes are not being used to test
> > anything at all in the gate, so there's an integration "penalty" for
> > delaying releases. You can have as many releases as you want, and we can
> > create the stable branch from the last useful release any time after it
> > is created. So, I still recommend releasing early and often unless you
> > anticipate making API or CLI breaking changes between now and RC1.
> 
> There is currently an API breaking change that needs to be fixed. It 
> will be fixed before the RC so that Kilo<->Liberty upgrades go smoothly 
> but the L-3 milestone is broken regarding forward and backward 
> compatibility.
> 
> https://bugs.launchpad.net/manila/+bug/1488624
> 
> I would actually want to release a milestone between L-3 and RC1 after 
> we get to the real Manila FF date but since that's not in line with the 
> official release process I'm okay waiting for RC1. Since there is no 
> official process for client releases (that I know about) I'd rather just 
> wait to do the client until RC1. We'll plan for an early RC1 by 
> aggressively driving the bugs to zero instead of putting time into 
> testing the L-3 milestone.

If master is broken right now, I agree it's not a good idea to
release.  That said, you still don't want to wait any later than
you have to. Gate jobs only install libraries from packages, so no
projects that are co-gating with manila, including manila itself,
are using the source version of the client library. That means when
there's a release, the new package introduces all of the new changes
into the integration tests at the same time.

We want to release clients as often as possible to keep the number
of changes small. This is why we release Oslo libraries weekly --
we still break things once in a while, but when we do we have a
short list of changes to look at to figure out why.

I'll be proposing that we do a weekly client change review for all
managed clients starting next cycle, and release when there are changes
that warrant (probably not just for requirements changes, unless
it's necessary). I haven't worked out the details of how to do the
review without me contacting release liaisons directly, so suggestions
on that are welcome.

Doug


From derekh at redhat.com  Tue Sep  8 13:57:54 2015
From: derekh at redhat.com (Derek Higgins)
Date: Tue, 08 Sep 2015 14:57:54 +0100
Subject: [openstack-dev] [TripleO] Status of CI changes
In-Reply-To: <55E7E9E7.7070808@redhat.com>
References: <55E7E9E7.7070808@redhat.com>
Message-ID: <55EEE962.3000903@redhat.com>



On 03/09/15 07:34, Derek Higgins wrote:
> Hi All,
>
> The patch to reshuffle our CI jobs has merged[1], along with the patch
> to switch the f21-noha job to be instack based[2] (with centos images).
>
> So the current status is that our CI has been removed from most of the
> non-tripleo projects (with the exception of nova/neutron/heat and ironic,
> where it is only available with check experimental until we are sure it's
> reliable).
>
> The last big move is to pull some repositories into the upstream [3]
> gerrit, so until this happens we still have to worry about some projects
> being on gerrithub (the instack-based CI pulls them in from gerrithub
> for now). I'll follow up with a mail once this happens.

This has happened; as of now we should be developing the following
repositories on https://review.openstack.org/#/

http://git.openstack.org/cgit/openstack/instack/
http://git.openstack.org/cgit/openstack/instack-undercloud/
http://git.openstack.org/cgit/openstack/tripleo-docs/
http://git.openstack.org/cgit/openstack/python-tripleoclient/

>
> A lot of CI stuff still needs to be worked on (and improved) e.g.
>   o Add ceph support to the instack based job
>   o Add ha support to the instack based job
>   o Improve the logs exposed
>   o Pull out a lot of workarounds that have gone into the CI job
>   o move out some of the parts we still use in tripleo-incubator
>   o other stuff
>
> Please make yourself known if you're interested in any of the above
>
> thanks,
> Derek.
>
> [1] https://review.openstack.org/#/c/205479/
> [2] https://review.openstack.org/#/c/185151/
> [3] https://review.openstack.org/#/c/215186/
>


From andrew at lascii.com  Tue Sep  8 13:58:44 2015
From: andrew at lascii.com (Andrew Laski)
Date: Tue, 8 Sep 2015 09:58:44 -0400
Subject: [openstack-dev] Scheduler hints, API and Objects
In-Reply-To: <CAA393vitrR+MhU+sLKCTiQtdLvq8qVUi6g5Vo-2bhoPeG50MWA@mail.gmail.com>
References: <20150625142223.GC2646@crypt>
 <CAA393vixHPJ=Ay=79JepDeMA+e+z8x_3FQcnT+8NcQCrvMtYFQ@mail.gmail.com>
 <CAA393vhyeMYeA=6MK9+0LtReud67+OMBu=KcaOzvM_pzL4Ea+g@mail.gmail.com>
 <20150904144538.GI3226@crypt>
 <CAA393vitrR+MhU+sLKCTiQtdLvq8qVUi6g5Vo-2bhoPeG50MWA@mail.gmail.com>
Message-ID: <20150908135844.GJ3226@crypt>

On 09/07/15 at 09:27am, Ken'ichi Ohmichi wrote:
>Hi Andrew,
>
>2015-09-04 23:45 GMT+09:00 Andrew Laski <andrew at lascii.com>:
>>>>
>>>> Now we are discussing this on https://review.openstack.org/#/c/217727/
>>>> for allowing out-of-tree scheduler-hints.
>>>> When we wrote the API schema for scheduler-hints, it was difficult to
>>>> know what API parameters are available for scheduler-hints.
>>>> The current API schema exposes them, and I guess that is useful for API
>>>> users as well.
>>>>
>>>> One idea is that: How about auto-extending scheduler-hint API schema
>>>> based on loaded schedulers?
>>>> Now API schemas of "create/update/resize/rebuild a server" APIs are
>>>> auto-extended based on loaded extensions by using stevedore
>>>> library[1].
>>>> I guess we can apply the same way for scheduler-hints also in long-term.
>>>> Each scheduler needs to implement a method which returns available API
>>>> parameter formats and nova-api tries to get them then extends
>>>> scheduler-hints API schema with them.
>>>> That means out-of-tree schedulers also will be available if they
>>>> implement the method.
>>>> # In short-term, I can see "blocking additionalProperties" validation
>>>> disabled by the way.
>>>
>>>
>>> https://review.openstack.org/#/c/220440 is a prototype for the above idea.
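The auto-extension idea might look roughly like this. The `get_hint_schema` method name and the filter classes are hypothetical, sketched after the stevedore-based extension loading the thread mentions; this is not the actual prototype code.

```python
# Illustrative sketch: each loaded scheduler filter contributes a
# JSON-Schema fragment for the hints it understands, and nova-api merges
# them into the base scheduler-hints schema.
base_schema = {
    "type": "object",
    "properties": {},
    "additionalProperties": False,  # strict: only known hints validate
}

class SameHostFilter:
    @staticmethod
    def get_hint_schema():
        return {"same_host": {"type": "array", "items": {"type": "string"}}}

class GroupFilter:
    @staticmethod
    def get_hint_schema():
        return {"group": {"type": "string"}}

def extend_schema(schema, filters):
    """Merge each filter's hint fragment into the schema's properties."""
    for f in filters:
        schema["properties"].update(f.get_hint_schema())
    return schema

# Out-of-tree filters would be picked up the same way, as long as they
# implement the method.
schema = extend_schema(dict(base_schema, properties={}),
                       [SameHostFilter, GroupFilter])
```

Because the merged schema keeps "additionalProperties: False", unknown hints would still be rejected while loaded filters stay usable.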
>>
>>
>> I like the idea of providing strict API validation for the scheduler hints
>> if it accounts for out of tree extensions like this would do.  I do have a
>> slight concern about how this works in a world where the scheduler does
>> eventually get an HTTP interface that Nova uses and the code isn't
>> necessarily accessible, but that can be worried about later.
>>
>> This does mean that the scheduler hints are not controlled by microversions
>> though, since we don't have a mechanism for out of tree extensions to signal
>> their presence that way.  And even if they could it would still mean that
>> identical microversions on different clouds wouldn't offer the same hints.
>> If we're accepting of that, which isn't really any different than having
>> "additionalProperties: True", then this seems reasonable to me.
>
>In the short term, yes. That is almost the same as "additionalProperties:
>True". But in the long term, no. Each scheduler-hint parameter described
>with JSON-Schema will be useful instead of "additionalProperties: True",
>because API parameters will be exposed in JSON-Schema format via JSON-Home
>or something similar.
>If we allow customization of scheduler-hints, like new filters or
>out-of-tree filters without microversions, API users cannot know the
>available scheduler-hint parameters from the microversion number.
>It will be helpful for API users if nova can provide the available
>parameters via JSON-Home or something similar.

The issue that I still have is that I don't believe that scheduler hints 
belong in the interoperable cloud story, at least not any time soon.  I 
think scheduling is one place that different cloud providers can 
distinguish themselves and I don't think there's anything wrong with 
that.  It's very coupled to the underlying infrastructure that runs the 
cloud and I haven't yet seen the proper abstraction that can properly 
reconcile the differences that happen there between different clouds, at 
least beyond the simple level of host affinity.  Now after saying that I 
would love to find a solution that allows for a strict API around 
scheduling while still providing flexibility to cloud providers.  I 
don't assume it can't be done, I just don't think we're at a place where 
adding strictness adds any real value.

I would compare this to flavor extra specs.  There are a lot of 
proposals to do things with extra specs which we would not want to 
introduce to Nova in that way.  However there are clouds out there that 
have out of tree code that relies on data in flavor extra specs.  And 
discussions that I've been involved in around that have focused on how 
to introduce those concepts into Nova in a standard way that doesn't 
rely on an unversioned key/value store like extra specs.  The solution 
hasn't been to introduce a schema on extra specs and lock them down so 
they share meaning across clouds.  It's been to acknowledge that extra 
specs is a mess that doesn't provide what we want in a manageable way so 
we should deprecate its usage in favor of better methods.  I think the 
same applies to scheduler hints.  Let's acknowledge that they're a mess 
and rather than trying to impose order on them we should focus on other 
improvements around scheduling.  My big fear is still that we introduce 
microversion 2.42 which adds scheduler hint foo which is now a permanent 
part of the Nova API.  And what contortions will we need to go through 
to maintain that if we get to a point where the scheduler is no longer 
in Nova or that hint for some reason no longer makes logical sense.


>
>Thanks
>Ken Ohmichi
>


From vkozhukalov at mirantis.com  Tue Sep  8 13:59:13 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Tue, 8 Sep 2015 16:59:13 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
Message-ID: <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>

Sorry, fat fingers => early sending.

=============
Dear colleagues,

The idea is to remove MOS DEB repo from the Fuel master node by default and
use online MOS repo instead. Pros of such an approach are:

0) Reduced minimal disk space requirement for the master node
1) There won't be such things as [1] and [2], thus a less complicated
flow, fewer errors, easier to maintain, easier to understand, easier to
troubleshoot
2) If one wants to have a local mirror, the flow is the same as in the
case of upstream repos (fuel-createmirror), which is clearer for a user
to understand.

Many people still associate the ISO with MOS, but that is no longer true
with a package-based delivery approach.

It is easy to define the necessary repos during deployment, and thus it is
easy to control exactly what is going to be installed on the slave nodes.

What do you guys think of it?



Vladimir Kozhukalov

On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Dear colleagues,
>
> The idea is to remove MOS DEB repo from the Fuel master node by default
> and use online MOS repo instead. Pros of such an approach are:
>
> 0) Reduced minimal disk space requirement for the master node
> 1) There won't be such things as [1] and [2], thus a less complicated
> flow, fewer errors, easier to maintain, easier to understand, easier to
> troubleshoot
> 2) If one wants to have a local mirror, the flow is the same as in the
> case of upstream repos (fuel-createmirror), which is clearer for a user
> to understand.
>
> Many people still associate ISO with MOS
>
>
>
>
>
> [1]
> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
> [2]
> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>
>
> Vladimir Kozhukalov
>

From doug at doughellmann.com  Tue Sep  8 14:10:47 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 08 Sep 2015 10:10:47 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EED0DA.6090201@dague.net>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
Message-ID: <1441721217-sup-1564@lrrr.local>

Excerpts from Sean Dague's message of 2015-09-08 08:13:14 -0400:
> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> > Hi everyone,
> > 
> > A feature deprecation policy is a standard way to communicate and
> > perform the removal of user-visible behaviors and capabilities. It helps
> > setting user expectations on how much and how long they can rely on a
> > feature being present. It gives them reassurance over the timeframe they
> > have to adapt in such cases.
> > 
> > In OpenStack we always had a feature deprecation policy that would apply
> > to "integrated projects", however it was never written down. It was
> > something like "to remove a feature, you mark it deprecated for n
> > releases, then you can remove it".
> > 
> > We don't have an "integrated release" anymore, but having a base
> > deprecation policy, and knowing which projects are mature enough to
> > follow it, is a great piece of information to communicate to our users.
> > 
> > That's why the next-tags workgroup at the Technical Committee has been
> > working to propose such a base policy as a 'tag' that project teams can
> > opt to apply to their projects when they agree to apply it to one of
> > their deliverables:
> > 
> > https://review.openstack.org/#/c/207467/
> > 
> > Before going through the last stage of this, we want to survey existing
> > projects to see which deprecation policy they currently follow, and
> > verify that our proposed base deprecation policy makes sense. The goal
> > is not to dictate something new from the top, it's to reflect what's
> > generally already applied on the field.
> > 
> > In particular, the current proposal says:
> > 
> > "At the very minimum the feature [...] should be marked deprecated (and
> > still be supported) in the next two coordinated end-of-cycle releases.
> > For example, a feature deprecated during the M development cycle should
> > still appear in the M and N releases and cannot be removed before the
> > beginning of the O development cycle."
> > 
> > That would be a n+2 deprecation policy. Some suggested that this is too
> > far-reaching, and that a n+1 deprecation policy (feature deprecated
> > during the M development cycle can't be removed before the start of the
> > N cycle) would better reflect what's being currently done. Or that
> > config options (which are user-visible things) should have n+1 as long
> > as the underlying feature (or behavior) is not removed.
> > 
> > Please let us know what makes the most sense. In particular between the
> > 3 options (but feel free to suggest something else):
> > 
> > 1. n+2 overall
> > 2. n+2 for features and capabilities, n+1 for config options
> > 3. n+1 overall
> > 
> > Thanks in advance for your input.
> 
> Based on my experience of what OpenStack projects are doing today:
> 
> Configuration options are either N or N+1: either they are just changed,
> or there is a single deprecation cycle (i.e. deprecated by Milestone 3
> of release N, removed before milestone 1 of release N+1). I know a lot
> of projects continue to just change configs based on the number of
> changes we block landing with Grenade.
> 
> An N+1 policy for configuration seems sensible. N+2 ends up pretty
> burdensome because typically removing a config option means dropping a
> code path as well, and an N+2 policy means the person deprecating the
> config may very well not be the one removing the code, leading to debt
> or more bugs.
> 
> For features, this is all over the map. I've seen removals in 0 cycles
> because everyone is convinced that the feature doesn't work anyway (and
> had been broken for some amount of time). I've seen 1 cycle deprecations
> for minor features that are believed to be little used. In Nova we did
> XML deprecation over 2 cycles IIRC. EC2 is going to be 2+ (we're still
> waiting to get field data back on the alternate approach). The API
> version deprecations by lots of projects are measured in years at this
> point.
> 
> I feel like a realistic bit of compromise that won't drive everyone nuts
> would be:
> 
> config options: n+1
> minor features: n+1
> major features: at least n+2 (larger is ok)
> 
> And come up with some fuzzy words around minor / major features.
> 
> I also think that ensuring that any project that gets this tag publishes
> a list of deprecations in release notes would be really good. And that
> gets looked for going forward.

These times seem reasonable to me.

I'd like to come up with some way to express the time other than
N+M because in the middle of a cycle it can be confusing to know
what that means (if I want to deprecate something in August am I
far enough through the current cycle that it doesn't count?).

Also, as we start moving more projects to doing intermediate releases
the notion of a "release" vs. a "cycle" will drift apart, so we
want to talk about "stable releases" not just any old release.

Doug



From thierry at openstack.org  Tue Sep  8 14:15:18 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 8 Sep 2015 16:15:18 +0200
Subject: [openstack-dev] [ptl][release] flushing unreleased client
 library changes
In-Reply-To: <1441720324-sup-5616@lrrr.local>
References: <1441384328-sup-9901@lrrr.local>
 <55E9E81E.9060401@swartzlander.org> <1441394380-sup-7270@lrrr.local>
 <55EA04F5.2020006@swartzlander.org> <1441720324-sup-5616@lrrr.local>
Message-ID: <55EEED76.5080204@openstack.org>

Doug Hellmann wrote:
> Excerpts from Ben Swartzlander's message of 2015-09-04 16:54:13 -0400:
>> [...]
>> I would actually want to release a milestone between L-3 and RC1 after 
>> we get to the real Manila FF date but since that's not in line with the 
>> official release process I'm okay waiting for RC1. Since there is no 
>> official process for client releases (that I know about) I'd rather just 
>> wait to do the client until RC1. We'll plan for an early RC1 by 
>> aggressively driving the bugs to zero instead of putting time into 
>> testing the L-3 milestone.
> 
> If master is broken right now, I agree it's not a good idea to
> release.  That said, you still don't want to wait any later than
> you have to. Gate jobs only install libraries from packages, so no
> projects that are co-gating with manila, including manila itself,
> are using the source version of the client library. That means when
> there's a release, the new package introduces all of the new changes
> into the integration tests at the same time.

Yes, that creates unnecessary risk toward the end of the release cycle.
This is why we want to release near-final versions of libraries this week
(from which we create the library release/stable branches) -- to keep
risk under control and start testing with the latest code ASAP.

> We want to release clients as often as possible to keep the number
> of changes small. This is why we release Oslo libraries weekly --
> we still break things once in a while, but when we do we have a
> short list of changes to look at to figure out why.
> 
> I'll be proposing that we do a weekly client change review for all
> managed clients starting next cycle, and release when there are changes
> that warrant (probably not just for requirements changes, unless
> it's necessary). I haven't worked out the details of how to do the
> review without me contacting release liaisons directly, so suggestions
> on that are welcome.

+1

-- 
Thierry Carrez (ttx)


From Kevin.Fox at pnnl.gov  Tue Sep  8 14:21:12 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 8 Sep 2015 14:21:12 +0000
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <55EA56BC.70408@redhat.com>
References: <55E9A4F2.5030809@inaugust.com>,<55EA56BC.70408@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F266F@EX10MBOX03.pnnl.gov>

+1

________________________________
From: Adam Young
Sent: Friday, September 04, 2015 7:43:08 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] This is what disabled-by-policy should look like to the user

On 09/04/2015 10:04 AM, Monty Taylor wrote:
> mordred at camelot:~$ neutron net-create test-net-mt
> Policy doesn't allow create_network to be performed.
>
> Thank you neutron. Excellent job.
>
> Here's what that looks like at the REST layer:
>
> DEBUG: keystoneclient.session RESP: [403] date: Fri, 04 Sep 2015
> 13:55:47 GMT connection: close content-type: application/json;
> charset=UTF-8 content-length: 130 x-openstack-request-id:
> req-ba05b555-82f4-4aaf-91b2-bae37916498d
> RESP BODY: {"NeutronError": {"message": "Policy doesn't allow
> create_network to be performed.", "type": "PolicyNotAuthorized",
> "detail": ""}}
>
> As a user, I am not confused. I do not think that maybe I made a
> mistake with my credentials. The cloud in question simply does not
> allow user creation of networks. I'm fine with that. (as a user, that
> might make this cloud unusable to me - but that's a choice I can now
> make with solid information easily. Turns out, I don't need to create
> networks for my application, so this actually makes it easier for me
> personally)
>
> In any case- rather than complaining and being a whiny brat about
> something that annoys me - I thought I'd say something nice about
> something that the neutron team has done that especially pleases me.

Then let my Hijack:

Policy is still broken.  We need the pieces of Dynamic policy.

I am going to call for a cross project policy discussion for the
upcoming summit.  Please, please, please all the projects attend. The
operators have made it clear they need better policy support.


> I would love it if this became the experience across the board in
> OpenStack for times when a feature of the API is disabled by local
> policy. It's possible it already is and I just haven't directly
> experienced it - so please don't take this as a backhanded
> condemnation of anyone else.
>
> Monty
>
> __________________________________________________________________________
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/1520aee1/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep  8 14:30:29 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 8 Sep 2015 14:30:29 +0000
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>,
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F269D@EX10MBOX03.pnnl.gov>

No, we already extend the metadata server with our own stuff. See /openstack/ on the metadata server. Cloud-init even supports the extensions. Supporting IPv6 as well as v4 is the same. Why does it matter if AWS doesn't currently support it? They can support it if they want in the future and reuse code, or do their own thing and have to convince cloud-init to support their way too. But why should that hold back the OpenStack metadata server now? Let's lead rather than follow.

Thanks,
Kevin

________________________________
From: Sean M. Collins
Sent: Saturday, September 05, 2015 3:19:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Fox, Kevin M; PAUL CARVER
Subject: OpenStack support for Amazon Concepts - was Re: [openstack-dev] cloud-init IPv6 support

On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
> Right, it depends on your perspective of who 'owns' the API. Is it
> cloud-init or EC2?
>
> At this point I would argue that cloud-init is in control because it would
> be a large undertaking to switch all of the AMI's on Amazon to something
> else. However, I know Sean disagrees with me on this point so I'll let him
> reply here.


Here's my take:

Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
in both the Neutron and Nova projects should implement all the details of
the Metadata API that is documented at:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

This means that this is a compatibility layer that OpenStack has
implemented so that users can use appliances, applications, and
operating system images in both Amazon EC2 and an OpenStack environment.

Yes, we can make changes to cloud-init. However, there is no guarantee
that all users of the Metadata API are exclusively using cloud-init as
their client. It is highly unlikely that people are rolling their own
Metadata API clients, but it's a contract we've made with users. This
includes transport level details like the IP address that the service
listens on.

The Metadata API is an established API that Amazon introduced years ago,
and we shouldn't be "improving" APIs that we don't control. If Amazon
were to introduce IPv6 support to the Metadata API tomorrow, we would
naturally implement it exactly the way they implemented it in EC2. We'd
honor the contract that Amazon made with its users, in our Metadata API,
since it is a compatibility layer.

However, since they haven't defined transport level details of the
Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
solution. It is not our API.

The nice thing about config-drive is that we've created a new mechanism
for bootstrapping instances - by replacing the transport level details
of the API. Rather than being a link-local address that instances access
over HTTP, it's a device that guests can mount and read. The actual
contents of the drive may have a similar schema as the Metadata API, but
I think at this point we've made enough of a differentiation between the
EC2 Metadata API and config-drive that I believe the contents of the
actual drive that the instance mounts can be changed without breaking
user expectations - since config-drive was developed by the OpenStack
community. The point being that we call it "config-drive" in
conversation and our docs. Users understand that config-drive is a
different feature.
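The split described above can be sketched as a tiny client that prefers the static config-drive and falls back to the HTTP transport. This is a hedged illustration, not how cloud-init is actually implemented: the /mnt/config mount point and the fallback order are assumptions; /openstack/latest/meta_data.json is the path OpenStack publishes on both transports.

```python
import json
import os
import urllib.request

# Transport details of the EC2-compatible API: a link-local address, over HTTP.
METADATA_ENDPOINT = "http://169.254.169.254"
# OpenStack's own extension lives under the /openstack namespace.
OPENSTACK_PATH = "/openstack/latest/meta_data.json"


def read_metadata(mount_point="/mnt/config", opener=urllib.request.urlopen):
    """Prefer the static config-drive when it is mounted; otherwise fall
    back to the HTTP metadata API. Both expose the same JSON schema."""
    local = os.path.join(mount_point, "openstack/latest/meta_data.json")
    if os.path.exists(local):
        with open(local) as f:
            return json.load(f)
    with opener(METADATA_ENDPOINT + OPENSTACK_PATH, timeout=10) as resp:
        return json.load(resp)
```

The point of the contrast: only the transport (file read vs. HTTP GET) differs; the contents follow the same schema either way.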

I've had this same conversation about the Security Group API that we
have. We've named it the same thing as the Amazon API, but then went and
made all the fields different, inexplicably. Thankfully, it's just the
names of the fields, rather than being huge conceptual changes.

http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html

Basically, I believe that OpenStack should create APIs that are
community driven and owned, and that we should only emulate
non-community APIs where appropriate, and explicitly state that we only
are emulating them. Putting improvements in APIs that came from
somewhere else, instead of creating new OpenStack branded APIs is a lost
opportunity to differentiate OpenStack from other projects, as well as
Amazon AWS.

Thanks for reading, and have a great holiday.

--
Sean M. Collins
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/89876bee/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep  8 14:39:51 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 8 Sep 2015 14:39:51 +0000
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was
	Re:	cloud-init IPv6 support
In-Reply-To: <782B38CA-9629-48D0-BC4F-79F15322C494@jimrollenhagen.com>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <55EC6D3A.8050604@inaugust.com>,
 <782B38CA-9629-48D0-BC4F-79F15322C494@jimrollenhagen.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F26CD@EX10MBOX03.pnnl.gov>

Yeah, we have been trying to work through how to make instance users work with config-drive, and its static nature makes the problem very difficult. It just trades one problem for another.

Thanks,
Kevin

________________________________
From: Jim Rollenhagen
Sent: Sunday, September 06, 2015 5:02:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support



> On Sep 6, 2015, at 09:43, Monty Taylor <mordred at inaugust.com> wrote:
>
>> On 09/05/2015 06:19 PM, Sean M. Collins wrote:
>>> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
>>> Right, it depends on your perspective of who 'owns' the API. Is it
>>> cloud-init or EC2?
>>>
>>> At this point I would argue that cloud-init is in control because it would
>>> be a large undertaking to switch all of the AMI's on Amazon to something
>>> else. However, I know Sean disagrees with me on this point so I'll let him
>>> reply here.
>>
>>
>> Here's my take:
>>
>> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
>> in both the Neutron and Nova projects should implement all the details of the
>> Metadata API that is documented at:
>>
>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>>
>> This means that this is a compatibility layer that OpenStack has
>> implemented so that users can use appliances, applications, and
>> operating system images in both Amazon EC2 and an OpenStack environment.
>>
>> Yes, we can make changes to cloud-init. However, there is no guarantee
>> that all users of the Metadata API are exclusively using cloud-init as
>> their client. It is highly unlikely that people are rolling their own
>> Metadata API clients, but it's a contract we've made with users. This
>> includes transport level details like the IP address that the service
>> listens on.
>>
>> The Metadata API is an established API that Amazon introduced years ago,
>> and we shouldn't be "improving" APIs that we don't control. If Amazon
>> were to introduce IPv6 support to the Metadata API tomorrow, we would
>> naturally implement it exactly the way they implemented it in EC2. We'd
>> honor the contract that Amazon made with its users, in our Metadata API,
>> since it is a compatibility layer.
>>
>> However, since they haven't defined transport level details of the
>> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
>> solution. It is not our API.
>>
>> The nice thing about config-drive is that we've created a new mechanism
>> for bootstrapping instances - by replacing the transport level details
>> of the API. Rather than being a link-local address that instances access
>> over HTTP, it's a device that guests can mount and read. The actual
>> contents of the drive may have a similar schema as the Metadata API, but
>> I think at this point we've made enough of a differentiation between the
>> EC2 Metadata API and config-drive that I believe the contents of the
>> actual drive that the instance mounts can be changed without breaking
>> user expectations - since config-drive was developed by the OpenStack
>> community. The point being that we call it "config-drive" in
>> conversation and our docs. Users understand that config-drive is a
>> different feature.
>
> Another great part about config-drive is that it's scalable. At infra's application scale, we take pains to disable anything in our images that might want to contact the metadata API because we're essentially a DDoS on it.

So, I tend to think a simple API service like this should never be hard to scale. Put a bunch of hosts behind a load balancer, boom, done. Even 1000 requests/s shouldn't be hard, though it may require many hosts, and that's far beyond what infra would hit today.

The one problem I have with config-drive is that it is static. I'd love for systems like cloud-init, glean, etc., to be able to see changes to mounted disks, attached networks, etc. Attaching things after the fact isn't uncommon, and making the user configure the thing by hand is a terrible experience. :(

// jim

>
> config-drive being local to the hypervisor host makes it MUCH more stable at scale.
>
> cloud-init supports config-drive
>
> If it were up to me, nobody would be enabling the metadata API in new deployments.
>
> I totally agree that we should not make changes in the metadata API.
>
>> I've had this same conversation about the Security Group API that we
>> have. We've named it the same thing as the Amazon API, but then went and
>> made all the fields different, inexplicably. Thankfully, it's just the
>> names of the fields, rather than being huge conceptual changes.
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>>
>> Basically, I believe that OpenStack should create APIs that are
>> community driven and owned, and that we should only emulate
>> non-community APIs where appropriate, and explicitly state that we only
>> are emulating them. Putting improvements in APIs that came from
>> somewhere else, instead of creating new OpenStack branded APIs is a lost
>> opportunity to differentiate OpenStack from other projects, as well as
>> Amazon AWS.
>>
>> Thanks for reading, and have a great holiday.
>
> I could not possibly agree more if our brains were physically fused.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/65bde1bf/attachment.html>

From aadamov at mirantis.com  Tue Sep  8 14:41:33 2015
From: aadamov at mirantis.com (Alexander Adamov)
Date: Tue, 8 Sep 2015 16:41:33 +0200
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAEg2Y8M2HW2QLbNNNga2jMwCm4Z-78wxoZaQCGuW-q_-3PqjwA@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
 <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
 <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>
 <CAHAWLf2apU=0b_xOhEMA=DjKoEKRsSCtys4sGnjyBmQckgXhUA@mail.gmail.com>
 <CAPQe3Ln-Rv2Z-8LyWPo914mFk+xhxHe05Vj=wxR=yuoUd2+PyA@mail.gmail.com>
 <CAEg2Y8M2HW2QLbNNNga2jMwCm4Z-78wxoZaQCGuW-q_-3PqjwA@mail.gmail.com>
Message-ID: <CAK2oe+Jy02oP2Z-mMZsgkXarR_7UYmAt8+jR6_nCsfkS1aNnig@mail.gmail.com>

+1

On Thu, Sep 3, 2015 at 11:41 PM, Dmitry Pyzhov <dpyzhov at mirantis.com> wrote:

> +1
>
> On Thu, Sep 3, 2015 at 10:14 PM, Sergey Vasilenko <svasilenko at mirantis.com
> > wrote:
>
>> +1
>>
>>
>> /sv
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/9de56ae4/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep  8 14:45:08 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 8 Sep 2015 14:45:08 +0000
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>
References: <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <CAO_F6JO+ZnpW61XoipHu-hxsa6TBStiynFO0Kh+GFvMNN8Ni0g@mail.gmail.com>,
 <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F26F1@EX10MBOX03.pnnl.gov>

We have the whole /openstack namespace. We can extend it as far as we like.
Again, why would AWS choosing to go a different way than OpenStack, when OpenStack did something first, be an OpenStack problem? We're not even talking about a big change. Just making the same metadata server available on a second IP.

Thanks,
Kevin

________________________________
From: Sean M. Collins
Sent: Sunday, September 06, 2015 7:34:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support

On Sun, Sep 06, 2015 at 04:25:43PM EDT, Kevin Benton wrote:
> So it's been pointed out that http://169.254.169.254/openstack is completely
> OpenStack-invented. I don't quite understand how that's not violating the
> contract you said we have with end users about EC2 compatibility under the
> restriction of 'no new stuff'.

I think that is a violation. I don't think that breaking the contract once
allows us to make more changes, as though a second infraction were less
significant.

> If we added an IPv6 endpoint that the metadata service listens on, it would
> just be another place that non cloud-init clients don't know how to talk
> to. It's not going to break our compatibility with any clients that connect
> to the IPv4 address.

No, but if Amazon were to make a decision about how to implement IPv6 in
EC2 and how to make the Metadata API service work with IPv6 we'd be
supporting two implementations - the one we came up with and one for
supporting the way Amazon implemented it.

--
Sean M. Collins

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/e7bcaff5/attachment.html>

From zbitter at redhat.com  Tue Sep  8 14:53:07 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Tue, 8 Sep 2015 10:53:07 -0400
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
In-Reply-To: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>
References: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>
Message-ID: <55EEF653.4040909@redhat.com>

On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:
> Hi
>
> Currently in heat we have the ability to deploy a remote stack on a
> different region using OS::Heat::Stack and region_name in the context
>
> My question is regarding multi node , separate keystones, with keystone
> federation.
>
> Is there an option in a HOT template to send a stack to a different
> node, using the keystone federation feature?
>
> For example ,If I have two Nodes (N1 and N2) with separate keystones
> (and keystone federation), I would like to deploy a stack on N1 with a
> nested stack that will deploy on N2, similar to what we have now for regions

Short answer: no.

Long answer: this is something we've wanted to do for a while, and a lot 
of folks have asked for it. We've been calling it multi-cloud (i.e. 
multiple keystones, as opposed to multi-region which is multiple regions 
with one keystone). In principle it's a small extension to the 
multi-region stacks (just add a way to specify the auth_url as well as 
the region), but the tricky part is how to authenticate to the other 
clouds. We don't want to encourage people to put their login credentials 
into a template. I'm not sure to what extent keystone federation could 
solve that - I suspect that it does not allow you to use a single token 
on multiple clouds, just that it allows you to obtain a token on 
multiple clouds using the same credentials? So basically this idea is on 
hold until someone comes up with a safe way to authenticate to the other 
clouds. Ideas/specs welcome.
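For reference, the multi-region form that does work today looks roughly like this (an illustrative sketch; resource and file names are made up, and the context section currently accepts only region_name — a hypothetical auth_url there is exactly the missing piece described above):

```yaml
heat_template_version: 2015-04-30

resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      context:
        # Same keystone, different region. A multi-cloud variant would
        # need a way to point at another cloud's auth endpoint here,
        # plus a safe way to authenticate to it.
        region_name: RegionTwo
      template: { get_file: nested.yaml }
```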

cheers,
Zane.


From smelikyan at mirantis.com  Tue Sep  8 14:55:22 2015
From: smelikyan at mirantis.com (Serg Melikyan)
Date: Tue, 8 Sep 2015 07:55:22 -0700
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAOCoZiZO+4f=+yrbcpH44rzkUe0+h6xZtaG8sm6QT1M1CuVr-g@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
 <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
 <etPan.55e5904c.61791e85.14d@pegasus.local>
 <CAM6FM9T-VRxqTgSbz3gcyEPA+F-+Hs3qCMqF2EpC85KvvwXvhw@mail.gmail.com>
 <CAOCoZiZO+4f=+yrbcpH44rzkUe0+h6xZtaG8sm6QT1M1CuVr-g@mail.gmail.com>
Message-ID: <CAOnDsYOhSjR-e34ifW5TNvj4+n7JNwo2oWff42_Ts-oO6JFCZA@mail.gmail.com>

Nikolai, my congratulations!

On Tue, Sep 8, 2015 at 5:28 AM, Stan Lagun <slagun at mirantis.com> wrote:

> +1
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
> <slagun at mirantis.com>
>
> On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov <ativelkov at mirantis.com
> > wrote:
>
>> +1. Well deserved.
>>
>> --
>> Regards,
>> Alexander Tivelkov
>>
>> On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin <vryzhenkin at mirantis.com
>> > wrote:
>>
>>> +1 from me ;)
>>>
>>> --
>>> Victor Ryzhenkin
>>> Junior QA Engeneer
>>> freerunner on #freenode
>>>
>>> ???????? 1 ???????? 2015 ?. ? 12:18:19, Ekaterina Chernova (
>>> efedorova at mirantis.com) ???????:
>>>
>>> +1
>>>
>>> On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii <ddovbii at mirantis.com>
>>> wrote:
>>>
>>>> +1
>>>>
>>>> 2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:
>>>>
>>>>> +1
>>>>>
>>>>> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <kzaitsev at mirantis.com
>>>>> > wrote:
>>>>>
>>>>>> I'm pleased to nominate Nikolai for Murano core.
>>>>>>
>>>>>> He's been actively participating in the development of murano during
>>>>>> liberty and is among the top 5 contributors during the last 90 days. He's
>>>>>> also leading the CloudFoundry integration initiative.
>>>>>>
>>>>>> Here are some useful links:
>>>>>>
>>>>>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>>>>>> List of reviews:
>>>>>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>>>>> Murano contribution during latest 90 days
>>>>>> http://stackalytics.com/report/contribution/murano/90
>>>>>>
>>>>>> Please vote with +1/-1 for approval/objections
>>>>>>
>>>>>> --
>>>>>> Kirill Zaitsev
>>>>>> Murano team
>>>>>> Software Engineer
>>>>>> Mirantis, Inc
>>>>>>
>>>>>>
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe:
>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>>>>> http://mirantis.com | smelikyan at mirantis.com
>>>>>
>>>>> +7 (495) 640-4904, 0261
>>>>> +7 (903) 156-0836
>>>>>
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelikyan at mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/3e0de8d6/attachment.html>

From pabelanger at redhat.com  Tue Sep  8 14:57:55 2015
From: pabelanger at redhat.com (Paul Belanger)
Date: Tue, 8 Sep 2015 10:57:55 -0400
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?
Message-ID: <20150908145755.GC16241@localhost.localdomain>

Greetings,

I wanted to start a discussion about the future of ansible / ansible roles in
OpenStack. Over the last week or so I've started down the ansible path, starting
my first ansible role; I've started with ansible-role-nodepool[1].

My initial question is simple: now that the big tent is upon us, I would like
some way to include Ansible roles in the OpenStack git workflow.  I first
thought the role might live under openstack-infra, however I am not sure that
is the right place.  My reason is that -infra tends to include modules they
currently run under the -infra namespace, and I don't want to start the effort
of convincing people to migrate.

Another thought might be to reach out to the os-ansible-deployment team and ask
how they see roles in OpenStack moving forward (mostly the reason for this
email).

Either way, I would be interested in feedback on moving forward with this.
Using Travis CI and GitHub works, but the OpenStack workflow is much better.

[1] https://github.com/pabelanger/ansible-role-nodepool


From nik.komawar at gmail.com  Tue Sep  8 15:02:55 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Tue, 8 Sep 2015 11:02:55 -0400
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33A963@fmsmsx117.amr.corp.intel.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com> <55E8784A.4060809@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>
 <55E9CA69.9030003@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33A963@fmsmsx117.amr.corp.intel.com>
Message-ID: <55EEF89F.4040003@gmail.com>

Malini,

Your note on the etherpad [1] went unnoticed, as we had that sync on
Friday outside of our regular meeting, and the weekly meeting agenda
etherpad was not fit for discussion purposes.

It would be nice if you all could update and comment on the spec, referencing
the note, or have someone send a related email here that explains how the
issues raised on the spec and during the Friday sync [2] were addressed.

[1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
[2]
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47

On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
> Thank you Nikhil and Glance team on the FFE consideration.
> We are committed to making the revisions per suggestion and separately seek help from the Flavio, Sabari, and Harsh.
> Regards
> Malini, Kent, and Jakub 
>
>
> -----Original Message-----
> From: Nikhil Komawar [mailto:nik.komawar at gmail.com] 
> Sent: Friday, September 04, 2015 9:44 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>
> Hi Malini et.al.,
>
> We had a sync up earlier today on this topic and a few items were discussed including new comments on the spec and existing code proposal.
> You can find the logs of the conversation here [1].
>
> There are 3 main outcomes of the discussion:
> 1. We hope to get a commitment on the feature (spec and the code) that the comments would be addressed and code would be ready by Sept 18th; after which the RC1 is planned to be cut [2]. Our hope is that the spec is merged way before and implementation to the very least is ready if not merged. The comments on the spec and merge proposal are currently implementation details specific so we were positive on this front.
> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has newer patch sets with major concerns addressed.
> 3. We cannot commit to granting a backport to this feature so, we ask the implementors to consider using the plug-ability and modularity of the taskflow library. You may consult developers who have already worked on adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can then use those scripts and put them back in their Liberty deployments even if it's not in the standard tarball.
>
> Please let me know if you have more questions.
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>
> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Brian!
>>
>> -----Original Message-----
>> From: Nikhil Komawar [mailto:nik.komawar at gmail.com]
>> Sent: Thursday, September 03, 2015 9:42 AM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>> proposal
>>
>> We agreed to hold off on granting it a FFE until tomorrow.
>>
>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast your vote.
>>
>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>>> I added an agenda item for this for today's Glance meeting:
>>>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>>
>>> I'd prefer to hold my vote until after the meeting.
>>>
>>> cheers,
>>> brian
>>>
>>>
>>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>>>
>>>> Malini, all,
>>>>
>>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>>> and implementation.
>>>>
>>>> I'm more than happy to realign my stand after we have the updated spec
>>>> and a) it's agreed to be the approach as of now, and b) we can
>>>> evaluate how much work the implementation needs to meet the revisited spec.
>>>>
>>>> If we end up in the unfortunate situation that this functionality
>>>> does not merge in time for Liberty, I'm confident that this is one 
>>>> of the first things in Mitaka. I really don't think there is too 
>>>> much to go, we just might run out of time.
>>>>
>>>> Thanks for your patience and endless effort to get this done.
>>>>
>>>> Best,
>>>> Erno
>>>>
>>>>> -----Original Message-----
>>>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>>>> Sent: Thursday, September 03, 2015 10:10 AM
>>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>>>> usage
>>>>> questions)
>>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>>> proposal
>>>>>
>>>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>>>> addresses the comments. We would very much appreciate a +1 on the 
>>>>> FFE.
>>>>>
>>>>> Regards
>>>>> Malini
>>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>>>> Sent: Thursday, September 03, 2015 1:52 AM
>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>>> proposal
>>>>>
>>>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I wanted to propose 'Single disk image OVA import' [1] feature 
>>>>>> proposal for exception. This looks like a decently safe proposal 
>>>>>> that should be able to adjust in the extended time period of 
>>>>>> Liberty. It has been discussed at the Vancouver summit during a 
>>>>>> work session and the proposal has been trimmed down as per the 
>>>>>> suggestions then; has been overall accepted by those present 
>>>>>> during the discussions (barring a few changes needed on the spec itself).
>>>>>> Being an addition to the already existing import task, it doesn't
>>>>>> involve an API change or a change to any of the core Image functionality as of now.
>>>>>>
>>>>>> Please give your vote: +1 or -1 .
>>>>>>
>>>>>> [1] https://review.openstack.org/#/c/194868/
>>>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>>>> Unfortunately, I think there are too many open questions in the 
>>>>> spec right now to make this FFE worthy.
>>>>>
>>>>> Could those questions be answered before the EOW?
>>>>>
>>>>> With those questions answered, we'll be able to provide a more
>>>>> realistic vote.
>>>>>
>>>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>>>> and the likelihood of it addressing the concerns/comments in time.
>>>>>
>>>>> For now, it's a -1 from me.
>>>>>
>>>>> Thanks all for working on this, this has been a long time requested 
>>>>> format to have in Glance.
>>>>> Flavio
>>>>>
>>>>> [0] https://review.openstack.org/#/c/214810/
>>>>>
>>>>>
>>>>> --
>>>>> @flaper87
>>>>> Flavio Percoco
>>>>> __________________________________________________________
>>>>> ________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: OpenStack-dev-
>>>>> request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> ____________________________________________________________________
>>>> _ _____ OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: 
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> _____________________________________________________________________
>>> _ ____ OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From ben at swartzlander.org  Tue Sep  8 15:08:00 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Tue, 8 Sep 2015 11:08:00 -0400
Subject: [openstack-dev] [Manila] Feature Freeze
Message-ID: <55EEF9D0.8070004@swartzlander.org>

Manila reached its liberty feature freeze yesterday thanks to the heroic 
work of the last few submitters and a few core reviewers who worked over 
the weekend! All of the features targeted for Liberty have been merged 
and nothing was booted out.

I would like to say that this has been a very painful feature freeze for 
many of us, and some mistakes were made which should not be repeated. I 
have some ideas for changes we can implement in the Mitaka timeframe to 
avoid the need for heroics at the last minute. In particular, large new 
features need a deadline substantially earlier than the ordinary FPF 
deadline, at least for WIP patches to be upstream (this was xyang's idea 
and it makes tons of sense). We can discuss the detail of how we want to 
run Mitaka at future meetings or in Tokyo, but I wanted to acknowledge 
that we didn't do a good job of it this time.

Now that we're past feature freeze we need to drive aggressively to fix 
all the bugs because the RC1 target date has not moved (Sept 17). This 
is all the more important because our L-3 milestone is not really usable 
for testing purposes and we need a release that QA-oriented people can 
hammer on. This also means the client patches related to new features 
all need to get merged and released in the next week too.

Also the CI-system reporting deadline blew by last week during the 
gate-breakage-hell and I haven't had time to go check that all the CI 
systems which should be reporting actually are. That's something I'll be 
doing today and I'll post the driver removal patches for any system not 
reporting.

-Ben Swartzlander



From gord at live.ca  Tue Sep  8 15:24:56 2015
From: gord at live.ca (gord chung)
Date: Tue, 8 Sep 2015 11:24:56 -0400
Subject: [openstack-dev] [Ceilometer] Meters
In-Reply-To: <OF6974FD42.90ACA49D-ON65257EBA.0021633A-65257EBA.0021633C@tcs.com>
References: <OF6974FD42.90ACA49D-ON65257EBA.0021633A-65257EBA.0021633C@tcs.com>
Message-ID: <BLU436-SMTP165661304C1E0F1F19737F6DE530@phx.gbl>

Is this using libvirt? If so, can you verify you have the following
requirements:

  * libvirt 1.1.1+
  * qemu 1.5+
  * guest driver that supports memory balloon stats

Also, please check to see if there are any visible ERRORs in the
ceilometer-agent-compute log.
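For anyone scripting a check of the version minimums above, a small dotted
version comparison is enough; this is a hedged sketch in plain Python (not
ceilometer code, and a real check would query libvirt/qemu for the actual
version strings):

```python
# Hedged sketch (not ceilometer code): compare dotted version strings
# against the hypervisor minimums needed for memory.usage samples
# (libvirt 1.1.1+, qemu 1.5+).
def meets_minimum(found: str, minimum: str) -> bool:
    """Return True if `found` is at least `minimum`, e.g. '1.2.2' >= '1.1.1'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(found) >= as_tuple(minimum)

print(meets_minimum("1.2.2", "1.1.1"))  # libvirt new enough -> True
print(meets_minimum("1.4.0", "1.5"))    # qemu too old -> False
```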


On 08/09/2015 2:04 AM, Abhishek Talwar wrote:
> Hi Folks,
>
>
> I have installed a *Kilo devstack* setup and I am trying to
> get the *memory and disk usage* for my VMs. But on checking the
> *"ceilometer meter-list"* output I can't find memory.usage or disk.usage meters.
>
> I have searched a lot for this and still couldn't find a solution, so
> how do I enable these meters in the meter-list?
>
> I want all these meters in the ceilometer meter-list so that I can use 
> them to monitor my instances.
>
> Currently the output of *ceilometer meter-list* is as follows:
>
> +--------------------------+------------+----------------------------------------------+----------------------------------+----------------------------------+
> | Name                     | Type       | Resource ID                                  | User ID                          | Project ID                       |
> +--------------------------+------------+----------------------------------------------+----------------------------------+----------------------------------+
> | cpu                      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | cpu_util                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | disk.read.bytes          | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | disk.read.requests       | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | disk.write.bytes         | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | disk.write.requests      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image                    | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image                    | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image                    | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.download           | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.serve              | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.size               | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.size               | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.size               | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.update             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | image.upload             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
> | instance                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | instance:m1.small        | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | network.incoming.bytes   | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | network.incoming.packets | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | network.outgoing.bytes   | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> | network.outgoing.packets | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
> +--------------------------+------------+----------------------------------------------+----------------------------------+----------------------------------+
>
> Thanks and Regards
> Abhishek Talwar
>
>
> =====-----=====-----=====
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
gord

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/1abfa8d6/attachment.html>

From derekh at redhat.com  Tue Sep  8 15:36:16 2015
From: derekh at redhat.com (Derek Higgins)
Date: Tue, 08 Sep 2015 16:36:16 +0100
Subject: [openstack-dev] [TripleO] trello
Message-ID: <55EF0070.4040309@redhat.com>

Hi All,

    Some of ye may remember that some time ago we used to organize TripleO
jobs/tasks on a Trello board [1]; at some stage this board fell out of use
(the exact reason I can't put my finger on). This morning I was putting
together a list of things that need to be done in the area of CI and needed
somewhere to keep track of it.

I propose we get back to using this Trello board, with each of us adding
cards at the very least for the things we are working on.

This should give each of us a lot more visibility into what is currently
going on in the TripleO project. Unless I hear any objections, tomorrow
I'll start archiving all cards on the board and removing people no longer
involved in TripleO. We can then start adding items, and anybody who wants
in can be added again.

thanks,
Derek.

[1] - https://trello.com/tripleo


From mriedem at linux.vnet.ibm.com  Tue Sep  8 15:44:26 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Tue, 8 Sep 2015 10:44:26 -0500
Subject: [openstack-dev] [nova] Bug importance
In-Reply-To: <CABib2_obj4rLvESZkPbP4qMSumBpbLyD5Xwh9YoVO1fd3vGi0A@mail.gmail.com>
References: <D21204B9.BBED2%gkotton@vmware.com>
 <CANw6fcHqyza2Z4hDZWdFr9_3V=FbVpbsvBNOLFprW_9F+Ma8ow@mail.gmail.com>
 <D212198D.BBF16%gkotton@vmware.com>
 <CABib2_obj4rLvESZkPbP4qMSumBpbLyD5Xwh9YoVO1fd3vGi0A@mail.gmail.com>
Message-ID: <55EF025A.1010509@linux.vnet.ibm.com>



On 9/7/2015 4:54 AM, John Garbutt wrote:
> I have a feeling launchpad asked me to renew my membership of nova-bug
> recently, and said it would drop me from the list if I didn't do that.
>
> Not sure if that's intentional, to keep the list fresh? It's the first I
> knew about it.
>
> Unsure, but that could be related?
>
> Thanks,
> John
>
> On 6 September 2015 at 14:25, Gary Kotton <gkotton at vmware.com> wrote:
>> That works.
>> Thanks!
>>
>> From: "davanum at gmail.com" <davanum at gmail.com>
>> Reply-To: OpenStack List <openstack-dev at lists.openstack.org>
>> Date: Sunday, September 6, 2015 at 4:10 PM
>> To: OpenStack List <openstack-dev at lists.openstack.org>
>> Subject: Re: [openstack-dev] [nova] Bug importance
>>
>> Gary,
>>
>> Not sure what changed...
>>
>> On this page (https://bugs.launchpad.net/nova/) on the right hand side, do
>> you see "Bug Supervisor" set to "Nova Bug Team"?  I believe "Nova Bug Team"
>> is open and you can add yourself, so if you do not see yourself in that
>> group, can you please add it and try?
>>
>> -- Dims
>>
>> On Sun, Sep 6, 2015 at 4:56 AM, Gary Kotton <gkotton at vmware.com> wrote:
>>>
>>> Hi,
>>> In the past I was able to set the importance of a bug. Now I am unable to
>>> do this? Has the policy changed? Can someone please clarify. If the policy
>>> has changed who is responsible for deciding the priority of a bug?
>>> Thanks
>>> Gary
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

That's probably what happened.  I recently got the same notice about 
being dropped from the cinder bugs team if I didn't renew my membership.
It's not a bad idea since there have been issues where you have a lot
of projects associated with a single bug and launchpad times out 
changing status or making other updates because it's trying to process 
everyone that gets notified.

-- 

Thanks,

Matt Riedemann



From mriedem at linux.vnet.ibm.com  Tue Sep  8 15:48:04 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Tue, 8 Sep 2015 10:48:04 -0500
Subject: [openstack-dev] [nova][i18n] Is there any point in using _()
 inpython-novaclient?
In-Reply-To: <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
Message-ID: <55EF0334.3030606@linux.vnet.ibm.com>



On 9/6/2015 12:18 AM, Steve Martinelli wrote:
> Isn't this just a matter of setting up novaclient for translation? IIRC
> using _() is harmless if there's no translation bits set up for the project.
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
>
> From: Matt Riedemann <mriedem at linux.vnet.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>, openstack-i18n at lists.openstack.org
> Date: 2015/09/04 01:50 PM
> Subject: [openstack-dev] [nova][i18n] Is there any point in using _() in
> python-novaclient?
>
> ------------------------------------------------------------------------
>
>
>
> I noticed this today:
>
> https://review.openstack.org/#/c/219768/
>
> And it got me thinking about something I've wondered before - why do we
> even use _() in python-novaclient?  It doesn't have any .po files for
> babel message translation, it has no babel config, there is nothing in
> setup.cfg about extracting messages and compiling them into .mo's, there
> is nothing on Transifex for python-novaclient, etc.
>
> Is there a way to change your locale and get translated output in nova
> CLIs?  I didn't find anything in docs from a quick google search.
>
> Comparing to python-openstackclient, that does have a babel config and
> some locale po files in tree, at least for de and zh_TW.
>
> So if this doesn't work in python-novaclient, do we need any of the i18n
> code in there?  It doesn't really hurt, but it seems pointless to push
> changes for it or try to keep user-facing messages in mind in the code.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Yeah, doffm is working on enabling i18n for python-novaclient via bug:

https://bugs.launchpad.net/python-novaclient/+bug/1492444
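As a quick sanity check of Steve's point above, that _() is harmless when no
translation bits are set up, here is a minimal sketch using stdlib gettext
(the domain name is illustrative, not the actual novaclient setup):

```python
import gettext

# With no compiled .mo catalogs on disk for this domain, install() falls
# back to NullTranslations, so the installed _() returns its argument
# unchanged: a no-op, exactly the "harmless" behavior described above.
gettext.install("python-novaclient")

print(_("Quota exceeded"))  # -> Quota exceeded (no translation applied)
```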

-- 

Thanks,

Matt Riedemann



From emilien at redhat.com  Tue Sep  8 15:52:27 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 8 Sep 2015 11:52:27 -0400
Subject: [openstack-dev] [puppet] weekly meeting #50
In-Reply-To: <CAN7WfkJMV8_1KWp7kYGjrM6GY4+E3iS1Rk89_Ud2rpHm2_HRRw@mail.gmail.com>
References: <CAN7WfkJMV8_1KWp7kYGjrM6GY4+E3iS1Rk89_Ud2rpHm2_HRRw@mail.gmail.com>
Message-ID: <55EF043B.1000903@redhat.com>



On 09/07/2015 11:54 AM, Emilien Macchi wrote:
> Hello,
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150908
> 
> Please add additional items you'd like to discuss.
> If our schedule allows it, we'll do bug triage during the meeting.
> 

We did our meeting, you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-08-15.00.html

Thanks,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/5d047de7/attachment.pgp>

From Kevin.Fox at pnnl.gov  Tue Sep  8 16:01:53 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 8 Sep 2015 16:01:53 +0000
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
In-Reply-To: <55EEF653.4040909@redhat.com>
References: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>,
 <55EEF653.4040909@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F27EA@EX10MBOX03.pnnl.gov>

I think it lets you take a token from the identity cloud, provide it to the service cloud, and get a token for that cloud, so it might do what we need without storing credentials.

Thanks,
Kevin
________________________________________
From: Zane Bitter [zbitter at redhat.com]
Sent: Tuesday, September 08, 2015 7:53 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Multi Node Stack - keystone federation

On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:
> Hi
>
> Currently in heat we have the ability to deploy a remote stack on a
> different region using OS::Heat::Stack and region_name in the context
>
> My question is regarding multi node , separate keystones, with keystone
> federation.
>
> Is there an option in a HOT template to send a stack to a different
> node, using the keystone federation feature?
>
> For example ,If I have two Nodes (N1 and N2) with separate keystones
> (and keystone federation), I would like to deploy a stack on N1 with a
> nested stack that will deploy on N2, similar to what we have now for regions
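The existing multi-region mechanism referenced in the question looks roughly
like this; a hedged sketch of a HOT fragment (the region name and nested
template are placeholders):

```yaml
heat_template_version: 2015-04-30

resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      # Deploys the nested stack into another region of the *same* cloud
      # (one keystone). There is no analogous context property today for
      # a different keystone/auth_url, which is what federation support
      # would need.
      context:
        region_name: RegionTwo             # placeholder region
      template: { get_file: nested.yaml }  # placeholder nested template
```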

Short answer: no.

Long answer: this is something we've wanted to do for a while, and a lot
of folks have asked for it. We've been calling it multi-cloud (i.e.
multiple keystones, as opposed to multi-region which is multiple regions
with one keystone). In principle it's a small extension to the
multi-region stacks (just add a way to specify the auth_url as well as
the region), but the tricky part is how to authenticate to the other
clouds. We don't want to encourage people to put their login credentials
into a template. I'm not sure to what extent keystone federation could
solve that - I suspect that it does not allow you to use a single token
on multiple clouds, just that it allows you to obtain a token on
multiple clouds using the same credentials? So basically this idea is on
hold until someone comes up with a safe way to authenticate to the other
clouds. Ideas/specs welcome.
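The distinction drawn above, one token valid on multiple clouds versus the
same credentials yielding a separate token per cloud, can be illustrated with
a toy model; this is not the real Keystone API, just a sketch of the trust
relationships:

```python
# Toy model (NOT real Keystone APIs): each cloud's keystone only trusts
# tokens it issued itself, so a single token cannot be reused across
# clouds, while federated credentials can obtain a token from each.
class ToyKeystone:
    def __init__(self, name: str) -> None:
        self.name = name
        self.issued: set = set()

    def get_token(self, credentials: str) -> str:
        # Each keystone records its own tokens independently.
        token = f"{self.name}/{credentials}"
        self.issued.add(token)
        return token

    def validate(self, token: str) -> bool:
        return token in self.issued


n1, n2 = ToyKeystone("N1"), ToyKeystone("N2")
creds = "alice:secret"          # same federated credentials on both clouds

t1 = n1.get_token(creds)
print(n1.validate(t1))          # True: token works where it was issued
print(n2.validate(t1))          # False: the other keystone never issued it

t2 = n2.get_token(creds)        # federation: same creds, per-cloud token
print(n2.validate(t2))          # True
```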

cheers,
Zane.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From jaosorior at gmail.com  Tue Sep  8 16:05:12 2015
From: jaosorior at gmail.com (Juan Antonio Osorio)
Date: Tue, 8 Sep 2015 19:05:12 +0300
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core
Message-ID: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>

I'd like to nominate Dave Mccowan for the Barbican core review team.

He has been an active contributor, both writing relevant pieces of code and
making useful and thorough reviews, so I think he would make a great
addition to the team.

Please bring the +1's :D

Cheers!

-- 
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/6d306ee1/attachment.html>

From douglas.mendizabal at rackspace.com  Tue Sep  8 16:13:19 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Tue, 8 Sep 2015 11:13:19 -0500
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican
 core
In-Reply-To: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
References: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
Message-ID: <55EF091F.9080601@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

+1

Dave has been a great asset to the team, and I think he would make an
excellent core reviewer.

- - Douglas Mendizábal

On 9/8/15 11:05 AM, Juan Antonio Osorio wrote:
> I'd like to nominate Dave Mccowan for the Barbican core review 
> team.
> 
> He has been an active contributor both in doing relevant code 
> pieces and making useful and thorough reviews; And so I think he 
> would make a great addition to the team.
> 
> Please bring the +1's :D
> 
> Cheers!
> 
> -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com 
> <mailto:jaosorior at gmail.com>
> 
> 
> 
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV7wkfAAoJEB7Z2EQgmLX7X+IP/AtYTxcx0u+O6MMLDU1VcGZg
5ksCdn1bosfuqJ/X/QWplHBSG8BzllwciHm7YJxIY94MaAlThk3Zw6UDKKkBMqIt
Qag09Z868LPl9/pll0whR5fVa052zSMq/QYWTnpgwpAgQduKNe4KaR1ZKhtBBbAJ
BvjyKEa2dJLA6LIMXxcxpoCAKSeORM5lce19kHHhWyqq9v5A89U6GHMgwRAa2fGN
7RyYmlOrmxh6TyJQX9Xl+w9y5WPAbxaUqC0MYEkLMpa7VnGf2pEangkN0LUAJO2x
NxwHa73b2LA8K1+4hwTvZO28sRnyMHwjSpqvpGt60FXkgi4dLyyy8gR6gsO49EDB
QOSwpwyFHzA//iuMl72pAD6uMzK0SCECtEu2000l0p3WEXS1i0z7p9VTfw4FySqb
V0S/IeSFfkt09TK2DoOSzXAvBZjsLz9gjRbRIv2dx0QTTmN5JpihOeoUojn24aDV
86AshlhoImJGOX16MwRL+T6LCindkczGe4Faz7WzmBomEJ7SOY6pzDbyEBLYcqzu
crvrLt2D1HmaygFGS37lVCqxlIegwsnZHGIe+Jtr8pDIDSW37ig4LZIDVra2/lj9
E7/fWYCDqbSIUWYG2jMr0/3eQQwZCj4kNvtWaTlNFmTPJZAEYpSN3rBhkfWBgsLv
mqBOM4IeR4EqaqaC2og7
=jL8d
-----END PGP SIGNATURE-----


From alee at redhat.com  Tue Sep  8 16:13:57 2015
From: alee at redhat.com (Ade Lee)
Date: Tue, 08 Sep 2015 12:13:57 -0400
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican
 core
In-Reply-To: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
References: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
Message-ID: <1441728837.14140.0.camel@redhat.com>

Definitely .. +1
On Tue, 2015-09-08 at 19:05 +0300, Juan Antonio Osorio wrote:
> I'd like to nominate Dave Mccowan for the Barbican core review team.
> 
> He has been an active contributor both in doing relevant code pieces
> and making useful and thorough reviews; And so I think he would make
> a great addition to the team.
> 
> Please bring the +1's :D
> 
> Cheers!
> 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/b4e6e23f/attachment.html>

From nstarodubtsev at mirantis.com  Tue Sep  8 16:19:30 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Tue, 8 Sep 2015 19:19:30 +0300
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAOnDsYOhSjR-e34ifW5TNvj4+n7JNwo2oWff42_Ts-oO6JFCZA@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
 <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
 <etPan.55e5904c.61791e85.14d@pegasus.local>
 <CAM6FM9T-VRxqTgSbz3gcyEPA+F-+Hs3qCMqF2EpC85KvvwXvhw@mail.gmail.com>
 <CAOCoZiZO+4f=+yrbcpH44rzkUe0+h6xZtaG8sm6QT1M1CuVr-g@mail.gmail.com>
 <CAOnDsYOhSjR-e34ifW5TNvj4+n7JNwo2oWff42_Ts-oO6JFCZA@mail.gmail.com>
Message-ID: <CAAa8YgCo3jL6MM7tKdbvoG1v=HD-=hiUcpKgiq-0O_zii1+Lcg@mail.gmail.com>

Serg and murano team, thanks. I'll try to do my best for the project.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-08 17:55 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:

> Nikolai, my congratulations!
>
> On Tue, Sep 8, 2015 at 5:28 AM, Stan Lagun <slagun at mirantis.com> wrote:
>
>> +1
>>
>> Sincerely yours,
>> Stan Lagun
>> Principal Software Engineer @ Mirantis
>>
>> <slagun at mirantis.com>
>>
>> On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov <
>> ativelkov at mirantis.com> wrote:
>>
>>> +1. Well deserved.
>>>
>>> --
>>> Regards,
>>> Alexander Tivelkov
>>>
>>> On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin <
>>> vryzhenkin at mirantis.com> wrote:
>>>
>>>> +1 from me ;)
>>>>
>>>> --
>>>> Victor Ryzhenkin
>>>> Junior QA Engeneer
>>>> freerunner on #freenode
>>>>
>>>> On Sep 1, 2015, at 12:18:19, Ekaterina Chernova (
>>>> efedorova at mirantis.com) wrote:
>>>>
>>>> +1
>>>>
>>>> On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii <ddovbii at mirantis.com>
>>>> wrote:
>>>>
>>>>> +1
>>>>>
>>>>> 2015-09-01 2:24 GMT+03:00 Serg Melikyan <smelikyan at mirantis.com>:
>>>>>
>>>>>> +1
>>>>>>
>>>>>> On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev <
>>>>>> kzaitsev at mirantis.com> wrote:
>>>>>>
>>>>>>> I'm pleased to nominate Nikolai for Murano core.
>>>>>>>
>>>>>>> He's been actively participating in the development of murano during
>>>>>>> liberty and is among the top 5 contributors during the last 90 days. He's
>>>>>>> also leading the CloudFoundry integration initiative.
>>>>>>>
>>>>>>> Here are some useful links:
>>>>>>>
>>>>>>> Overall contribution: http://stackalytics.com/?user_id=starodubcevna
>>>>>>> List of reviews:
>>>>>>> https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>>>>>> Murano contribution during the latest 90 days:
>>>>>>> http://stackalytics.com/report/contribution/murano/90
>>>>>>>
>>>>>>> Please vote with +1/-1 for approval/objections
>>>>>>>
>>>>>>> --
>>>>>>> Kirill Zaitsev
>>>>>>> Murano team
>>>>>>> Software Engineer
>>>>>>> Mirantis, Inc
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>>>>>> http://mirantis.com | smelikyan at mirantis.com
>>>>>>
>>>>>> +7 (495) 640-4904, 0261
>>>>>> +7 (903) 156-0836
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> http://mirantis.com | smelikyan at mirantis.com
>
> +7 (495) 640-4904, 0261
> +7 (903) 156-0836
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/5a26e9b3/attachment-0001.html>

From dtroyer at gmail.com  Tue Sep  8 16:20:47 2015
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 8 Sep 2015 11:20:47 -0500
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <1441721217-sup-1564@lrrr.local>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
Message-ID: <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>

On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>
> I'd like to come up with some way to express the time other than
> N+M because in the middle of a cycle it can be confusing to know
> what that means (if I want to deprecate something in August am I
> far enough through the current cycle that it doesn't count?).
>
> Also, as we start moving more projects to doing intermediate releases
> the notion of a "release" vs. a "cycle" will drift apart, so we
> want to talk about "stable releases" not just any old release.
>

I've always thought the appropriate equivalent for projects not following
the (old) integrated release cadence was N == six months.  It sets
approximately the same pace and expectations with users/deployers.

For those deployments tracking trunk, a similar approach can be taken:
deprecating a config option in M3 and then removing it in N1 might be too
quick; rather, wait until at least the same point in the following release
cycle before incrementing 'N'.
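The "N == six months" reading above can be made concrete with a date calculation. This is only an illustration of the policy idea (the dates and the 182-day approximation of six months are mine, not part of any adopted policy):

```python
from datetime import date, timedelta

# If a feature is deprecated on a given date, the earliest removal date
# under an "N == six months" rule is roughly the same point in the
# following six-month cycle. Dates here are illustrative.
deprecated_on = date(2015, 8, 15)
earliest_removal = deprecated_on + timedelta(days=182)  # ~ six months

print(earliest_removal)  # -> 2016-02-13
```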

dt

-- 

Dean Troyer
dtroyer at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/957c41cd/attachment.html>

From graham.hayes at hpe.com  Tue Sep  8 16:19:50 2015
From: graham.hayes at hpe.com (Hayes, Graham)
Date: Tue, 8 Sep 2015 16:19:50 +0000
Subject: [openstack-dev] GSLB
References: <675198506.3158990.1441713098377.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <325F898546FBBF4487D24D6D606A277E18AD1484@G4W3290.americas.hpqcorp.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 08/09/15 12:55, Anik wrote:
> Hello,
> 
> Recently saw some discussions in the Designate mailer archive
> around GSLB and saw some API snippets subsequently. Seems like
> early stages on this project, but highly excited that there is some
> traction now on GSLB.
> 
> I would to find out [1] If there has been discussions around how
> GSLB will work across multiple OpenStack regions and [2] The level
> of integration planned between Designate and GSLB.
> 
> Any pointers in this regard will be helpful.
> 
> Regards, Anik
> 

Hi Anik

Currently we are in very early stages of planning for GSLB.

We do not yet have a good answer for [1] - we need to work this out in
the near future.

My plan for the MVP is a service running in one region - this
allows us to work out the kinks in the API / driver integration for the
service.

For [2], I would like to see us do the following:

For regional load balancers, support the Neutron LBaaS v2 API as the
default in-tree.

For the global side (routing traffic to regions), I would like to
have Designate as the default in-tree.

I think we should have these both as plugins, to allow for other
configurations / technologies, but we should be runnable out of the box
using just other OpenStack open source projects.

I may be slightly biased (I am a member of designate-core), but I think
we should follow the four Opens of OpenStack, and this seems the best way.
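The plugin split described above (pluggable backends with Designate as the default in-tree global driver) could be sketched as below. Every name here is hypothetical, since no GSLB API exists yet:

```python
import abc

# Hypothetical sketch of the proposed plugin architecture: a global
# (DNS-side) driver interface, with a Designate-backed implementation
# as the default. None of these classes exist in any real project.
class GlobalLoadBalancerDriver(abc.ABC):
    @abc.abstractmethod
    def route(self, fqdn, healthy_endpoints):
        """Return the endpoints DNS should answer with for fqdn."""

class DesignateDriver(GlobalLoadBalancerDriver):
    def route(self, fqdn, healthy_endpoints):
        # A real driver would update records through Designate's API;
        # here we just return the healthy endpoints in a stable order.
        return sorted(healthy_endpoints)

driver = DesignateDriver()
print(driver.route("app.example.com", ["r2.example.com", "r1.example.com"]))
# -> ['r1.example.com', 'r2.example.com']
```

The abstract base class is what would let other configurations / technologies plug in alongside the in-tree default.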

We also meet in #openstack-meeting-4 at 16:00 UTC every Tuesday, and
most of the people interested are in #openstack-gslb.

Thanks,

Graham



-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV7wqjAAoJEPRBUqpJBgIicJoH/RztF/0om92Jx01+iGL/9+Wu
ZP9h++7/nZUMtrEY5vIGKR5q2d6wD7AMH9bxYLzjP2u6yIfWhchqH5YWy87PVsdH
7nC/AF7jIR30RiOxD6UlZ7N1414sFCEsO3VcnneV6oSbretmh2kjmH2KRRKBdfaR
VNZXrSvaICoqzhDcfmhhSpFdHVFkPHEQ5DVDJPlF0CrGFG9fp0R7Osra+DC8dpAu
2HF6o7cSZk/t/EoQWnZE43rNXQfFXLmLs964OsklMBK79FJX49qJ90TxgDGJznCt
RuduUMMEZmDJ0a8rDLxkuzkOHsuqPmnXJ+ADcouuWZ/EWb5fMfLkZ4ZPmayDlgg=
=VZl2
-----END PGP SIGNATURE-----


From degorenko at mirantis.com  Tue Sep  8 16:31:59 2015
From: degorenko at mirantis.com (Denis Egorenko)
Date: Tue, 8 Sep 2015 19:31:59 +0300
Subject: [openstack-dev]  [puppet] [sahara]
Message-ID: <CAN2iLr5J8F6RMm80kkkpVUBsCoxKm6o-obiYVxh3Mvn4NuCQRQ@mail.gmail.com>

Hello everyone,

Currently Sahara [1] supports two run modes: stand-alone mode (all-in-one) and
distributed mode. The difference between these modes is that the second one
separates the API and engine processes. Such an architecture
allows the API process to remain relatively free to handle requests while
offloading intensive tasks to the engine processes [2]. The second mode is more
appropriate for big and complex environments, but you can also use stand-alone
mode for simple tasks and tests.
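The API/engine split described above can be illustrated with a minimal sketch. This models the idea with a thread and an in-process queue purely for illustration; Sahara itself dispatches from sahara-api to sahara-engine over an RPC message bus, and the job names below are made up:

```python
import queue
import threading

# Toy model of distributed mode: the "API" enqueues work and returns
# immediately, while a separate "engine" worker does the heavy lifting.
tasks = queue.Queue()
results = []

def engine():
    while True:
        job = tasks.get()
        if job is None:  # shutdown sentinel
            break
        results.append(f"provisioned {job}")  # stand-in for intensive work
        tasks.task_done()

worker = threading.Thread(target=engine)
worker.start()

# The API side stays responsive: it only enqueues requests.
tasks.put("cluster-1")
tasks.put("cluster-2")
tasks.join()

tasks.put(None)
worker.join()
print(results)  # -> ['provisioned cluster-1', 'provisioned cluster-2']
```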

The main issue is that puppet-sahara [3] currently uses only the first,
all-in-one run mode, so I've implemented support for distributed mode in
puppet-sahara [4]. Please review this commit and provide feedback.

I also have some +1s from the Sahara team, including Sahara cores.

[1] https://github.com/openstack/sahara
[2]
http://docs.openstack.org/developer/sahara/userdoc/advanced.configuration.guide.html#distributed-mode-configuration
[3] https://github.com/openstack/sahara
[4] https://review.openstack.org/#/c/192721/

Thanks.

-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/03c693ba/attachment.html>

From degorenko at mirantis.com  Tue Sep  8 16:34:34 2015
From: degorenko at mirantis.com (Denis Egorenko)
Date: Tue, 8 Sep 2015 19:34:34 +0300
Subject: [openstack-dev] [puppet] [sahara]
In-Reply-To: <CAN2iLr5J8F6RMm80kkkpVUBsCoxKm6o-obiYVxh3Mvn4NuCQRQ@mail.gmail.com>
References: <CAN2iLr5J8F6RMm80kkkpVUBsCoxKm6o-obiYVxh3Mvn4NuCQRQ@mail.gmail.com>
Message-ID: <CAN2iLr7hrZLD-e0tBK=LtepZ603NM8fBJBsyMbfM6E6wv7s0ag@mail.gmail.com>

Sorry, wrong link [3]: https://github.com/openstack/puppet-sahara




-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/35ffa064/attachment.html>

From prometheanfire at gentoo.org  Tue Sep  8 16:38:20 2015
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 8 Sep 2015 11:38:20 -0500
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
 tent?
In-Reply-To: <20150908145755.GC16241@localhost.localdomain>
References: <20150908145755.GC16241@localhost.localdomain>
Message-ID: <55EF0EFC.1090008@gentoo.org>

On 09/08/2015 09:57 AM, Paul Belanger wrote:
> Greetings,
> 
> I wanted to start a discussion about the future of ansible / ansible roles in
> OpenStack. Over the last week or so I've started down the ansible path, starting
> my first ansible role; I've started with ansible-role-nodepool[1].
> 
> My initial question is simple: now that the big tent is upon us, I would like
> some way to include ansible roles in the openstack git workflow.  I first
> thought the role might live under openstack-infra, however I am not sure that
> is the right place.  My reason is that -infra tends to include modules they
> currently run under the -infra namespace, and I don't want to start the effort
> to convince people to migrate.
> 
> Another thought might be to reach out to the os-ansible-deployment team and ask
> how they see roles in OpenStack moving forward (mostly the reason for this
> email).
> 
> Either way, I would be interested in feedback on moving forward on this. Using
> travis-ci and github works but OpenStack workflow is much better.
> 
> [1] https://github.com/pabelanger/ansible-role-nodepool
> 
> 

This might be useful in openstack-ansible if we are going to want to use
openstack-ansible for testing. We might want infra's feedback on that,
though; a spec (to openstack-ansible) would also be in order for this.

-- 
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/1409f5d1/attachment.pgp>

From kevin.carter at RACKSPACE.COM  Tue Sep  8 16:45:21 2015
From: kevin.carter at RACKSPACE.COM (Kevin Carter)
Date: Tue, 8 Sep 2015 16:45:21 +0000
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
 tent?
In-Reply-To: <20150908145755.GC16241@localhost.localdomain>
References: <20150908145755.GC16241@localhost.localdomain>
Message-ID: <1441730721118.86258@RACKSPACE.COM>

Hi Paul,

We'd love to collaborate on improving openstack-ansible and getting our OpenStack roles into the general big tent and out of our monolithic repository. We have a proposal in review for moving our roles out of our main repository and into separate repositories [0], making openstack-ansible consume the roles through the use of an Ansible Galaxy interface. We've been holding off on this effort until os-ansible-deployment is moved into the OpenStack namespace, which should be happening sometime on September 11 [1][2].

With that, I'd say join us in the #openstack-ansible channel if you have any questions on the os-ansible-deployment project in general, and check out our twice-weekly meetings [3]. Lastly, many of the core members / deployers of the project will be at the summit, and if you're interested / will be in Tokyo we can schedule some time to work out a path to convergence.

Look forward to talking to you and others about this more soon. 

--

[0] - https://review.openstack.org/#/c/213779
[1] - https://review.openstack.org/#/c/200730
[2] - https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames
[3] - https://wiki.openstack.org/wiki/Meetings/openstack-ansible

Kevin Carter
IRC: cloudnull


________________________________________
From: Paul Belanger <pabelanger at redhat.com>
Sent: Tuesday, September 8, 2015 9:57 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

Greetings,

I wanted to start a discussion about the future of ansible / ansible roles in
OpenStack. Over the last week or so I've started down the ansible path, starting
my first ansible role; I've started with ansible-role-nodepool[1].

My initial question is simple: now that the big tent is upon us, I would like
some way to include ansible roles in the openstack git workflow.  I first
thought the role might live under openstack-infra, however I am not sure that
is the right place.  My reason is that -infra tends to include modules they
currently run under the -infra namespace, and I don't want to start the effort
to convince people to migrate.

Another thought might be to reach out to the os-ansible-deployment team and ask
how they see roles in OpenStack moving forward (mostly the reason for this
email).

Either way, I would be interested in feedback on moving forward on this. Using
travis-ci and github works but OpenStack workflow is much better.

[1] https://github.com/pabelanger/ansible-role-nodepool

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Kaitlin.Farr at jhuapl.edu  Tue Sep  8 16:46:06 2015
From: Kaitlin.Farr at jhuapl.edu (Farr, Kaitlin M.)
Date: Tue, 8 Sep 2015 16:46:06 +0000
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core
In-Reply-To: <mailman.20462.1441729172.14112.openstack-dev@lists.openstack.org>
References: <mailman.20462.1441729172.14112.openstack-dev@lists.openstack.org>
Message-ID: <1441730765897.840@jhuapl.edu>

+1, Dave has been a key contributor, and his code reviews are thoughtful.

Kaitlin
________________________________________
I'd like to nominate Dave Mccowan for the Barbican core review team.

He has been an active contributor both in doing relevant code pieces and
making useful and thorough reviews; And so I think he would make a great
addition to the team.

Please bring the +1's :D

Cheers!

--
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com


From mestery at mestery.com  Tue Sep  8 17:01:14 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Tue, 8 Sep 2015 12:01:14 -0500
Subject: [openstack-dev] [neutron] Mitaka Design Summit ideas
Message-ID: <CAL3VkVwaOUMgmjdKgxfXqpD8GkPMMCLAw2LCBXAoPTWXw5nEUw@mail.gmail.com>

Folks:

It's that time of the cycle again! Lets start collecting ideas for our
design summit in Tokyo at the etherpad located here [1]. We'll discuss
these a bit in some upcoming meetings and ensure we have a solid schedule
to fill our 12 fishbowl slots in Tokyo.

Thanks!
Kyle

[1] https://etherpad.openstack.org/p/neutron-mitaka-designsummit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/2f4b2bdc/attachment.html>

From carl at ecbaldwin.net  Tue Sep  8 17:06:34 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 8 Sep 2015 11:06:34 -0600
Subject: [openstack-dev] [nova] [neutron] [rally] Neutron or nova
	degradation?
In-Reply-To: <CABARBAbBZXqi7QOVs5Cza6mphqk6pfEZi1jburPPGYZNP63zzg@mail.gmail.com>
References: <CAKdBrSdyj-Q3vKY6uFkSQW4G4gqOmOdCm4Vz6MsmPqByihU=+A@mail.gmail.com>
 <CABARBAbBZXqi7QOVs5Cza6mphqk6pfEZi1jburPPGYZNP63zzg@mail.gmail.com>
Message-ID: <CALiLy7qADww2PuUWp5wUUn1BzkaD+KmRK6hf-Xw3p0Szm0cw0Q@mail.gmail.com>

This sounds like a good candidate for a "git bisect" operation [1]
since we already have a pretty tight window where things changed.

Carl

[1] http://git-scm.com/docs/git-bisect
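What `git bisect` automates is a plain binary search over an ordered list of revisions for the first "bad" one. A minimal sketch of that idea (the revision labels and the badness predicate here are illustrative, not the real Neutron commits):

```python
def first_bad(revisions, is_bad):
    """Return the first revision for which is_bad() is True,
    assuming all good revisions precede all bad ones."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid  # culprit is at mid or earlier
        else:
            lo = mid + 1  # culprit is after mid
    return revisions[lo]

# With the Aug 18 run good and the Aug 21 run bad, only the commits in
# between need testing -- O(log n) test runs instead of n.
revs = ["18aug", "19aug", "20aug", "21aug"]
print(first_bad(revs, lambda r: r >= "20aug"))  # -> 20aug
```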

On Thu, Sep 3, 2015 at 7:07 AM, Assaf Muller <amuller at redhat.com> wrote:
>
>
> On Thu, Sep 3, 2015 at 8:43 AM, Andrey Pavlov <andrey.mp at gmail.com> wrote:
>>
>> Hello,
>>
>> We have rally job with fake virt driver. And we run it periodically.
>> This job runs 200 servers and measures 'show' operations.
>>
>> On 18.08 it was run well[1]. But on 21.08 it was failed by timeout[2].
>> I tried to understand what happens.
>> I tried to check this job with 20 servers only[3]. It passed but I see
>> that
>> operations with neutron take more time now (list subnets, list network
>> interfaces).
>> and as result start and show instances take more time also.
>>
>> Maybe anyone knows what happens?
>
>
> Looking at the merged Neutron patches between the 18th and 21st, there's a
> lot of
> candidates, including QoS and work around quotas.
>
> I think the best way to find out would be to run a profiler against Neutron
> from the 18th,
> and Neutron from the 21st while running the Rally tests, and finding out if
> the major
> bottlenecks moved. Last time I profiled Neutron I used GeventProfiler:
> https://pypi.python.org/pypi/GreenletProfiler
>
> Ironically I was having issues with the profiler that comes with Eventlet.
>
>>
>>
>>
>> [1]
>> http://logs.openstack.org/13/211613/6/experimental/ec2-api-rally-dsvm-fakevirt/fac263e/
>> [2]
>> http://logs.openstack.org/74/213074/7/experimental/ec2-api-rally-dsvm-fakevirt/91d0675/
>> [3]
>> http://logs.openstack.org/46/219846/1/experimental/ec2-api-rally-dsvm-fakevirt/dad98f0/
>>
>> --
>> Kind regards,
>> Andrey Pavlov.
>>
>>
>
>
>
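The profile-and-compare approach quoted above can be sketched as follows. The stdlib profiler is used here so the snippet is self-contained (the thread recommends GreenletProfiler for eventlet-based services like Neutron), and the workload is only a stand-in for the real request handling:

```python
import cProfile
import io
import pstats

# Profile a workload, then print the top entries sorted by cumulative
# time -- running this against both revisions and diffing the hotspots
# is the comparison the thread suggests.
def workload():
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
value = workload()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()

print(value)
```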


From anikm99 at yahoo.com  Tue Sep  8 17:04:29 2015
From: anikm99 at yahoo.com (Anik)
Date: Tue, 8 Sep 2015 17:04:29 +0000 (UTC)
Subject: [openstack-dev] GSLB
In-Reply-To: <325F898546FBBF4487D24D6D606A277E18AD1484@G4W3290.americas.hpqcorp.net>
References: <325F898546FBBF4487D24D6D606A277E18AD1484@G4W3290.americas.hpqcorp.net>
Message-ID: <1087912429.3384173.1441731869267.JavaMail.yahoo@mail.yahoo.com>

Hi Graham,

Thanks for getting back.

So am I correct in summarizing that the current plan is that [1] GSLB will have its own API set and not evolve as an extension of Designate, and [2] GSLB will mostly act as a policy definition engine (+ health checks?) with the Designate backend providing the actual DNS resolution?

Regards, Anik

      From: "Hayes, Graham" <graham.hayes at hpe.com>
 To: Anik <anikm99 at yahoo.com>; OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org> 
 Sent: Tuesday, September 8, 2015 9:19 AM
 Subject: Re: [openstack-dev] GSLB
   
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/de181844/attachment.html>

From doug at doughellmann.com  Tue Sep  8 17:07:43 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 08 Sep 2015 13:07:43 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
 <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
Message-ID: <1441731915-sup-1930@lrrr.local>

Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
> >
> > I'd like to come up with some way to express the time other than
> > N+M because in the middle of a cycle it can be confusing to know
> > what that means (if I want to deprecate something in August am I
> > far enough through the current cycle that it doesn't count?).
> >
> > Also, as we start moving more projects to doing intermediate releases
> > the notion of a "release" vs. a "cycle" will drift apart, so we
> > want to talk about "stable releases" not just any old release.
> >
> 
> I've always thought the appropriate equivalent for projects not following
> the (old) integrated release cadence was for N == six months.  It sets
> approx. the same pace and expectation with users/deployers.
> 
> For those deployments tracking trunk, a similar approach can be taken, in
> that deprecating a config option in M3 then removing it in N1 might be too
> quick, but rather wait at least the same point in the following release
> cycle to increment 'N'.
> 
> dt
> 

Making it explicitly date-based would simplify tracking, to be sure.
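To make the tracking concrete: under a purely date-based rule (using Dean's
suggested six-month equivalence, which is an assumption here, not settled
policy), the earliest removal date is simple arithmetic. A minimal sketch:

```python
from datetime import date

def earliest_removal(deprecated_on, months=6):
    """Earliest date a feature deprecated on `deprecated_on` could be
    removed under a 'deprecated for at least `months` months' rule."""
    # Add whole months; clamp the day to 28 to sidestep invalid
    # end-of-month dates (e.g. Aug 31 + 6 months).
    month = deprecated_on.month - 1 + months
    year = deprecated_on.year + month // 12
    month = month % 12 + 1
    return date(year, month, min(deprecated_on.day, 28))

# Deprecating in mid-August answers the "does August count?" question directly:
print(earliest_removal(date(2015, 8, 15)))  # 2016-02-15
```

The point of the date-based form is exactly this: the answer no longer
depends on where in a cycle the deprecation lands.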

Doug


From blak111 at gmail.com  Tue Sep  8 17:07:52 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Tue, 8 Sep 2015 10:07:52 -0700
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>
References: <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <CAO_F6JO+ZnpW61XoipHu-hxsa6TBStiynFO0Kh+GFvMNN8Ni0g@mail.gmail.com>
 <0000014fa5a7fc65-db3a79b7-91fc-4e28-91ff-8e01e14cbbb7-000000@email.amazonses.com>
Message-ID: <CAO_F6JORpZiaZCLPExwEEqtJEkjtxcL4WPJts852-3YDiw=kMQ@mail.gmail.com>

The contract we have is to maintain compatibility. As long as a client
written for the AWS API continues to work, I don't think we are violating
anything. Offering one API isn't a promise not to offer an alternative way
to access the same information.
On Sep 6, 2015 7:37 PM, "Sean M. Collins" <sean at coreitpro.com> wrote:

> On Sun, Sep 06, 2015 at 04:25:43PM EDT, Kevin Benton wrote:
> > So it's been pointed out that http://169.254.169.254/openstack is
> completely
> > OpenStack invented. I don't quite understand how that's not violating the
> > contract you said we have with end users about EC2 compatibility under
> the
> > restriction of 'no new stuff'.
>
> I think that is a violation. I don't think that allows us to make more
> changes, just because we've broken the contract once, so a second
> infraction is less significant.
>
> > If we added an IPv6 endpoint that the metadata service listens on, it
> would
> > just be another place that non cloud-init clients don't know how to talk
> > to. It's not going to break our compatibility with any clients that
> connect
> > to the IPv4 address.
>
> No, but if Amazon were to make a decision about how to implement IPv6 in
> EC2 and how to make the Metadata API service work with IPv6 we'd be
> supporting two implementations - the one we came up with and one for
> supporting the way Amazon implemented it.
>
> --
> Sean M. Collins
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/ea333d00/attachment.html>

From srikanth.vavilapalli at ericsson.com  Tue Sep  8 17:17:01 2015
From: srikanth.vavilapalli at ericsson.com (Srikanth Vavilapalli)
Date: Tue, 8 Sep 2015 17:17:01 +0000
Subject: [openstack-dev] [Ceilometer] Meters
In-Reply-To: <OF6974FD42.90ACA49D-ON65257EBA.0021633A-65257EBA.0021633C@tcs.com>
References: <OF6974FD42.90ACA49D-ON65257EBA.0021633A-65257EBA.0021633C@tcs.com>
Message-ID: <0738F7545DD8EA459B4A21F4B608D0FF2CB3761A@ESESSMB107.ericsson.se>

Hi

The vcpus, memory and disk usage related nova measurements are of "notification" type. So please ensure you have the following configuration settings in your nova.conf file on your compute node, and restart your nova-compute service if you made any changes to that file.

instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver = messagingv2
notification_topics = notifications
notify_on_any_change = True
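As a quick sanity check, the options above can be verified programmatically.
This is only an illustrative Python sketch using configparser against a
sample of the settings listed above; it is not part of Nova or Ceilometer.

```python
import configparser

# Sample [DEFAULT] section mirroring (a subset of) the options above.
NOVA_CONF = """\
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
notification_topics = notifications
"""

def audit_options_enabled(conf_text):
    """Return True if the notification settings needed for the
    vcpus/memory/disk meters appear to be enabled."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    default = parser["DEFAULT"]
    return (default.getboolean("instance_usage_audit", fallback=False)
            and default.get("notify_on_state_change") == "vm_and_task_state"
            and default.get("notification_driver") == "messagingv2")

print(audit_options_enabled(NOVA_CONF))  # True
```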

Thanks
Srikanth



From: Abhishek Talwar [mailto:abhishek.talwar at tcs.com]
Sent: Monday, September 07, 2015 11:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ceilometer] Meters

Hi Folks,


I have installed a Kilo DevStack setup and I am trying to get the memory and disk usage for my VMs. But on checking "ceilometer meter-list" I can't find the memory.usage or disk.usage meters.

I have searched a lot for this and still couldn't find a solution. How can I enable these meters in the meter-list?

I want all these meters in the ceilometer meter-list so that I can use them to monitor my instances.
Currently the output of ceilometer meter-list is as follows:
+--------------------------+------------+----------------------------------------------+----------------------------------+----------------------------------+
| Name                     | Type       | Resource ID                                  | User ID                          | Project ID                       |
+--------------------------+------------+----------------------------------------------+----------------------------------+----------------------------------+
| cpu                      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| cpu_util                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.read.bytes          | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.read.requests       | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.write.bytes         | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| disk.write.requests      | cumulative | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image                    | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.download           | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.serve              | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | acd6beef-13e6-4d64-a83d-9e96beac26ef         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.size               | gauge      | ecefcd31-ae47-4079-bd19-efe07f4c33d3         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.update             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| image.upload             | delta      | 55a0a2c2-8cfb-4882-ad05-01d7c821b1de         |                                  | 22f22fb60bf8496cb60e8498d93d56e8 |
| instance                 | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| instance:m1.small        | gauge      | 5314c72b-a2b4-4b2b-bcb1-4057c3d96f77         | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.incoming.bytes   | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.incoming.packets | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.outgoing.bytes   | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
| network.outgoing.packets | cumulative | nova-instance-instance-00000022-fa163e3bd74e | 92876a1aad3c477398137b702a8467d3 | 22f22fb60bf8496cb60e8498d93d56e8 |
+--------------------------+------------+----------------------------------------------+----------------------------------+----------------------------------+

Thanks and Regards
Abhishek Talwar


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/adf9b79f/attachment.html>

From ben at swartzlander.org  Tue Sep  8 17:32:58 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Tue, 8 Sep 2015 13:32:58 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55E83B84.5000000@openstack.org>
References: <55E83B84.5000000@openstack.org>
Message-ID: <55EF1BCA.8020708@swartzlander.org>

On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> Hi everyone,
>
> A feature deprecation policy is a standard way to communicate and
> perform the removal of user-visible behaviors and capabilities. It helps
> setting user expectations on how much and how long they can rely on a
> feature being present. It gives them reassurance over the timeframe they
> have to adapt in such cases.
>
> In OpenStack we always had a feature deprecation policy that would apply
> to "integrated projects", however it was never written down. It was
> something like "to remove a feature, you mark it deprecated for n
> releases, then you can remove it".
>
> We don't have an "integrated release" anymore, but having a base
> deprecation policy, and knowing which projects are mature enough to
> follow it, is a great piece of information to communicate to our users.
>
> That's why the next-tags workgroup at the Technical Committee has been
> working to propose such a base policy as a 'tag' that project teams can
> opt to apply to their projects when they agree to apply it to one of
> their deliverables:
>
> https://review.openstack.org/#/c/207467/
>
> Before going through the last stage of this, we want to survey existing
> projects to see which deprecation policy they currently follow, and
> verify that our proposed base deprecation policy makes sense. The goal
> is not to dictate something new from the top, it's to reflect what's
> generally already applied on the field.
>
> In particular, the current proposal says:
>
> "At the very minimum the feature [...] should be marked deprecated (and
> still be supported) in the next two coordinated end-of-cycle releases.
> For example, a feature deprecated during the M development cycle should
> still appear in the M and N releases and cannot be removed before the
> beginning of the O development cycle."
>
> That would be a n+2 deprecation policy. Some suggested that this is too
> far-reaching, and that a n+1 deprecation policy (feature deprecated
> during the M development cycle can't be removed before the start of the
> N cycle) would better reflect what's being currently done. Or that
> config options (which are user-visible things) should have n+1 as long
> as the underlying feature (or behavior) is not removed.
>
> Please let us know what makes the most sense. In particular between the
> 3 options (but feel free to suggest something else):
>
> 1. n+2 overall
> 2. n+2 for features and capabilities, n+1 for config options
> 3. n+1 overall
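Since cycles are named alphabetically, the three options reduce to an offset
into the sequence of release names. A toy illustration (the names past
Mitaka are hypothetical placeholders, as in the quoted text):

```python
# Development cycles in order; 'N' and 'O' are the placeholder names
# used in the policy text above, not real releases.
CYCLES = ["Liberty", "Mitaka", "N", "O", "P"]

def earliest_removal_cycle(deprecated_in, n_plus):
    """First cycle in which a feature deprecated during `deprecated_in`
    may be removed under an n+`n_plus` policy."""
    return CYCLES[CYCLES.index(deprecated_in) + n_plus]

print(earliest_removal_cycle("Mitaka", 2))  # option 1 (n+2): O
print(earliest_removal_cycle("Mitaka", 1))  # option 3 (n+1): N
```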

I think any discussion of a deprecation policy needs to be combined with 
a discussion about LTS (long term support) releases. Real customers (not 
devops users -- people who pay money for support) can't deal with 
upgrades every 6 months.

Unavoidably, distros are going to want to support certain releases for 
longer than the normal upstream support window so they can satisfy the 
needs of the aforementioned customers. This will be true whether the 
deprecation policy is N+1, N+2, or N+3.

It makes sense for the community to define LTS releases and coordinate 
making sure all the relevant projects are mutually compatible at that 
release point. Then the job of actually maintaining the LTS release can 
fall on people who care about such things. The major benefit to solving 
the LTS problem, though, is that deprecation will get a lot less painful 
because you could assume upgrades to be one release at a time or 
skipping directly from one LTS to the next, and you can reduce your 
upgrade test matrix accordingly.

-Ben Swartzlander


> Thanks in advance for your input.
>



From pmurray at hp.com  Tue Sep  8 17:32:29 2015
From: pmurray at hp.com (Murray, Paul (HP Cloud))
Date: Tue, 8 Sep 2015 17:32:29 +0000
Subject: [openstack-dev] [Nova] What is the no_device flag for in block
	device mapping?
Message-ID: <39E5672E03A1CB4B93936D1C4AA5E15D1DBC8401@G1W3640.americas.hpqcorp.net>

Hi All,

I'm wondering what the "no_device" flag is used for in the block device mappings. I had a dig around in the code but couldn't figure out why it is there. The name suggests an obvious meaning, but I've learnt not to guess too much from names.

Any pointers welcome.

Thanks
Paul

Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/86855e31/attachment.html>

From graham.hayes at hpe.com  Tue Sep  8 17:48:25 2015
From: graham.hayes at hpe.com (Hayes, Graham)
Date: Tue, 8 Sep 2015 17:48:25 +0000
Subject: [openstack-dev] GSLB
References: <325F898546FBBF4487D24D6D606A277E18AD1484@G4W3290.americas.hpqcorp.net>
 <1087912429.3384173.1441731869267.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <325F898546FBBF4487D24D6D606A277E18AD24F2@G4W3290.americas.hpqcorp.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 08/09/15 18:07, Anik wrote:
> Hi Graham,
> 
> Thanks for getting back.
> 
> So am I correct in summarizing that the current plan is that [1]
> GSLB will have its own API set and not evolve as an extension of
> Designate and [2] GSLB will mostly act as a policy definition
> engine (+ health checks ?) with Designate backend providing the
> actual DNS resolution ?

[1] - Yes. Designate decided a while ago that this was not in scope for
      the project

[2] - Yes, The Kosmos API will be a place to define the endpoints you
      are balancing across, and what checks should be run on them to
      decide on their status.

      There will be built-in checks like TCP / HTTP(S). There will also
      be plugin checks - for example, with a Neutron LBaaS Load
      Balancer, we can query its status API.

      All of this will result in DNS entries in Designate being updated
      (or another Global Load Balancing Plugin)
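For a sense of what a built-in TCP check involves, a bare-bones version is
just a socket connect. This sketch is illustrative only and is not the
actual Kosmos implementation.

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within
    `timeout` seconds -- the simplest possible member health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

An HTTP(S) check follows the same shape but issues a request and inspects
the status code; a plugin check like the Neutron LBaaS one would query the
service's own status API instead.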

> Regards, Anik

<snip>


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV7x9lAAoJEPRBUqpJBgIiYLsH/3hBKyTYVQ3qDRKd4qtlPWeV
zdYzGwKFQ2OauE6ou7p1SpXTu9iJBg2uT0XIIuhHI6clI0ZgZSylZtlyBUyjggvO
6fY5pCzouJxE0Ad3HbGypXqU554WYLxPXmftto0fEB6nvkrc0qeDPgUre2Q4QTdo
P/A1tJaDqlNzQlCpnMRl2Ihdy8gZYSD5sRJVvmTMF2cKD40dJQrCI68IzWr+bfLf
O/961QBUQOxJ8GqldR1gLTNoIzEUSdVFvjU7i1JZpznk74RzB5tDp7O/hiO0Shit
rFXEx4KCLovIvG3hbDaurSpf+A1SQACuZXZJDFtXufN2JPfXMT3xCm/3mNfMowc=
=2Eis
-----END PGP SIGNATURE-----


From doug at doughellmann.com  Tue Sep  8 17:58:29 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 08 Sep 2015 13:58:29 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EF1BCA.8020708@swartzlander.org>
References: <55E83B84.5000000@openstack.org>
 <55EF1BCA.8020708@swartzlander.org>
Message-ID: <1441735065-sup-780@lrrr.local>

Excerpts from Ben Swartzlander's message of 2015-09-08 13:32:58 -0400:
> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> > Hi everyone,
> >
> > A feature deprecation policy is a standard way to communicate and
> > perform the removal of user-visible behaviors and capabilities. It helps
> > setting user expectations on how much and how long they can rely on a
> > feature being present. It gives them reassurance over the timeframe they
> > have to adapt in such cases.
> >
> > In OpenStack we always had a feature deprecation policy that would apply
> > to "integrated projects", however it was never written down. It was
> > something like "to remove a feature, you mark it deprecated for n
> > releases, then you can remove it".
> >
> > We don't have an "integrated release" anymore, but having a base
> > deprecation policy, and knowing which projects are mature enough to
> > follow it, is a great piece of information to communicate to our users.
> >
> > That's why the next-tags workgroup at the Technical Committee has been
> > working to propose such a base policy as a 'tag' that project teams can
> > opt to apply to their projects when they agree to apply it to one of
> > their deliverables:
> >
> > https://review.openstack.org/#/c/207467/
> >
> > Before going through the last stage of this, we want to survey existing
> > projects to see which deprecation policy they currently follow, and
> > verify that our proposed base deprecation policy makes sense. The goal
> > is not to dictate something new from the top, it's to reflect what's
> > generally already applied on the field.
> >
> > In particular, the current proposal says:
> >
> > "At the very minimum the feature [...] should be marked deprecated (and
> > still be supported) in the next two coordinated end-of-cycle releases.
> > For example, a feature deprecated during the M development cycle should
> > still appear in the M and N releases and cannot be removed before the
> > beginning of the O development cycle."
> >
> > That would be a n+2 deprecation policy. Some suggested that this is too
> > far-reaching, and that a n+1 deprecation policy (feature deprecated
> > during the M development cycle can't be removed before the start of the
> > N cycle) would better reflect what's being currently done. Or that
> > config options (which are user-visible things) should have n+1 as long
> > as the underlying feature (or behavior) is not removed.
> >
> > Please let us know what makes the most sense. In particular between the
> > 3 options (but feel free to suggest something else):
> >
> > 1. n+2 overall
> > 2. n+2 for features and capabilities, n+1 for config options
> > 3. n+1 overall
> 
> I think any discussion of a deprecation policy needs to be combined with 
> a discussion about LTS (long term support) releases. Real customers (not 
> devops users -- people who pay money for support) can't deal with 
> upgrades every 6 months.
> 
> Unavoidably, distros are going to want to support certain releases for 
> longer than the normal upstream support window so they can satisfy the 
> needs of the aforementioned customers. This will be true whether the 
> deprecation policy is N+1, N+2, or N+3.
> 
> It makes sense for the community to define LTS releases and coordinate 
> making sure all the relevant projects are mutually compatible at that 
> release point. Then the job of actually maintaining the LTS release can 
> fall on people who care about such things. The major benefit to solving 
> the LTS problem, though, is that deprecation will get a lot less painful 
> because you could assume upgrades to be one release at a time or 
> skipping directly from one LTS to the next, and you can reduce your 
> upgrade test matrix accordingly.

How is this fundamentally different from what we do now with stable
releases, aside from involving a longer period of time?

Doug

> 
> -Ben Swartzlander
> 
> > Thanks in advance for your input.
> >
> 


From bhagyashree.iitg at gmail.com  Tue Sep  8 18:05:11 2015
From: bhagyashree.iitg at gmail.com (Bhagyashree Uday)
Date: Tue, 8 Sep 2015 23:35:11 +0530
Subject: [openstack-dev] Getting Started : OpenStack
In-Reply-To: <CAJ_e2gA81srzjQ4iN+GnXcHzPdHwWoDbE0r1aCo2+pzqM_sd-Q@mail.gmail.com>
References: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>
 <CAJ_e2gA81srzjQ4iN+GnXcHzPdHwWoDbE0r1aCo2+pzqM_sd-Q@mail.gmail.com>
Message-ID: <CAMobWWqmEZxNtv0JUVvPynKUw60i_iZOtjHJLZf0CANtwFfH9Q@mail.gmail.com>

Hi Victoria ,

Thanks for the prompt reply. I go by Bee (IRC nick: bee2502). There
doesn't seem to be much information regarding this project even on the
Ceilometer project page :( I will wait until the next Outreachy applications
open, though, to check for any new developments. Thanks for suggesting the
IRC channel :) Btw, do you happen to know of any other open data analysis
projects in OpenStack?

Bee

On Tue, Sep 8, 2015 at 12:59 AM, Victoria Mart?nez de la Cruz <
victoria at vmartinezdelacruz.com> wrote:

> Hi Bhagyashree,
>
> Welcome!
>
> That project seems to belong to Ceilometer, but I'm not sure about that.
> Ceilometer is the code name for OpenStack telemetry, if you are interested
> about it a good place to start is
> https://wiki.openstack.org/wiki/Ceilometer.
>
> Those internships ideas are from previous Outreachy/Google Summer of Code
> rounds. Outreachy applications will open next September 22nd so there is
> not much information about next round mentors/projects yet.
>
> Call for mentors is going to be launched soon, so keep track of that wiki
> for updates. Feel free to pass by #openstack-opw as well and we can help
> you set your development environment.
>
> Cheers,
>
> Victoria
>
> 2015-09-07 14:59 GMT-03:00 Bhagyashree Uday <bhagyashree.iitg at gmail.com>:
>
>> Hi ,
>>
>> I am Bhagyashree from India(IRC nick : bee2502 ). I have previous
>> experience in data analytics including Machine Learning, NLP, IR and User
>> Experience Research. I am interested in contributing to OpenStack on
>> projects involving data analysis. Also , if these projects could be a
>> part of Outreachy, it would be added bonus. I went through project ideas
>> listed on https://wiki.openstack.org/wiki/Internship_ideas and one of
>> these projects interested me a lot -
>> Understand OpenStack Operations via Insights from Logs and Metrics: A
>> Data Science Perspective
>> However, this project does not have any mentioned mentor and I was hoping
>> you could provide me with some individual contact from OpenStack community
>> who would be interested in mentoring this project or some mailing
>> list/thread/IRC community where I could look for a mentor. Other open data
>> science projects/idea suggestions are also welcome.
>>
>> Regards,
>> Bhagyashree
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/bab3dee9/attachment.html>

From sean at dague.net  Tue Sep  8 18:11:48 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 8 Sep 2015 14:11:48 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <1441731915-sup-1930@lrrr.local>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
 <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
 <1441731915-sup-1930@lrrr.local>
Message-ID: <55EF24E4.4060100@dague.net>

On 09/08/2015 01:07 PM, Doug Hellmann wrote:
> Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
>> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>>>
>>> I'd like to come up with some way to express the time other than
>>> N+M because in the middle of a cycle it can be confusing to know
>>> what that means (if I want to deprecate something in August am I
>>> far enough through the current cycle that it doesn't count?).
>>>
>>> Also, as we start moving more projects to doing intermediate releases
>>> the notion of a "release" vs. a "cycle" will drift apart, so we
>>> want to talk about "stable releases" not just any old release.
>>>
>>
>> I've always thought the appropriate equivalent for projects not following
>> the (old) integrated release cadence was for N == six months.  It sets
>> approx. the same pace and expectation with users/deployers.
>>
>> For those deployments tracking trunk, a similar approach can be taken, in
>> that deprecating a config option in M3 then removing it in N1 might be too
>> quick, but rather wait at least the same point in the following release
>> cycle to increment 'N'.
>>
>> dt
>>
> 
> Making it explicitly date-based would simplify tracking, to be sure.

I would agree that the M3 -> N0 drop can be pretty quick; it can be 6
weeks (which I've seen happen). However, N == six months might make an FFE
deprecation that lands in one release run into the FFE period of the next.
For the CD case my suggestion is > 3 months, because if you aren't CDing in
increments smaller than that, and hence seeing the deprecation, you
aren't really doing the C part of CDing.

	-Sean

-- 
Sean Dague
http://dague.net


From fungi at yuggoth.org  Tue Sep  8 18:24:24 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 8 Sep 2015 18:24:24 +0000
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EF1BCA.8020708@swartzlander.org>
References: <55E83B84.5000000@openstack.org>
 <55EF1BCA.8020708@swartzlander.org>
Message-ID: <20150908182424.GW7955@yuggoth.org>

On 2015-09-08 13:32:58 -0400 (-0400), Ben Swartzlander wrote:
[...]
> It makes sense for the community to define LTS releases and coordinate
> making sure all the relevant projects are mutually compatible at that
> release point.
[...]

This seems premature. The most recent stable branch to reach EOL
(icehouse) made it just past 14 months before we had to give up
because not enough effort was being expended to keep it working and
testable. As a community we've so far struggled to maintain stable
branches as much as one year past release. While there are some
exciting improvements on the way to our requirements standardization
which could prove to help extend this, I really want to see us
demonstrate that we can maintain a release longer before we make
such a long-term commitment to downstream consumers.
-- 
Jeremy Stanley


From tnapierala at mirantis.com  Tue Sep  8 18:27:01 2015
From: tnapierala at mirantis.com (Tomasz Napierala)
Date: Tue, 8 Sep 2015 11:27:01 -0700
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
	Fuel-Library Core
In-Reply-To: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
Message-ID: <104B1A59-FBD9-49B3-AEB4-E2870205D894@mirantis.com>

> On 02 Sep 2015, at 01:31, Sergii Golovatiuk <sgolovatiuk at mirantis.com> wrote:
> 
> Hi,
> 
> I would like to nominate Alex Schultz to the Fuel-Library Core team. He's been doing a great job in writing patches. At the same time, his reviews are solid, with comments for further improvements. He's the #3 reviewer and #1 contributor, with 46 commits over the last 90 days [1]. Additionally, Alex has been very active in IRC, providing great ideas. His 'librarian' blueprint [3] made a big step towards the puppet community.
> 
> Fuel Library, please vote with +1/-1 for approval/objection. Voting will be open until September 9th. This will go forward after voting is closed if there are no objections.  
> 
> Overall contribution:
> [0] http://stackalytics.com/?user_id=alex-schultz
> Fuel library contribution for last 90 days:
> [1] http://stackalytics.com/report/contribution/fuel-library/90
> List of reviews:
> [2] https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
> 'Librarian activities' in the mailing list: 
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html


Definitely well deserved for Alex. Outstanding technical work and really good community skills! My strong +1

Regards,
-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland


From sean at dague.net  Tue Sep  8 18:30:49 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 8 Sep 2015 14:30:49 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <20150908182424.GW7955@yuggoth.org>
References: <55E83B84.5000000@openstack.org>
 <55EF1BCA.8020708@swartzlander.org> <20150908182424.GW7955@yuggoth.org>
Message-ID: <55EF2959.1040209@dague.net>

On 09/08/2015 02:24 PM, Jeremy Stanley wrote:
> On 2015-09-08 13:32:58 -0400 (-0400), Ben Swartzlander wrote:
> [...]
>> It makes sense for the community to define LTS releases and coordinate
>> making sure all the relevant projects are mutually compatible at that
>> release point.
> [...]
> 
> This seems premature. The most recent stable branch to reach EOL
> (icehouse) made it just past 14 months before we had to give up
> because not enough effort was being expended to keep it working and
> testable. As a community we've so far struggled to maintain stable
> branches as much as one year past release. While there are some
> exciting improvements on the way to our requirements standardization
> which could prove to help extend this, I really want to see us
> demonstrate that we can maintain a release longer before we make
> such a long-term commitment to downstream consumers.

And, the LTS question is separate from the feature deprecation question.
They are both pro consumer behaviors that have cost on the development
teams, but they are different things.

We rarely get resolution on one thing by entwining a different thing in
the same question.

	-Sean

-- 
Sean Dague
http://dague.net


From tmckay at redhat.com  Tue Sep  8 19:05:44 2015
From: tmckay at redhat.com (Trevor McKay)
Date: Tue, 08 Sep 2015 15:05:44 -0400
Subject: [openstack-dev] [Openstack] [Horizon] [Sahara] FFE request for
 Sahara unified job interface map UI
In-Reply-To: <508867389.21683555.1441377655696.JavaMail.zimbra@redhat.com>
References: <508867389.21683555.1441377655696.JavaMail.zimbra@redhat.com>
Message-ID: <1441739144.3709.5.camel@redhat.com>

+1 from me as well.  It would be a shame to see this go to the next
cycle.

On Fri, 2015-09-04 at 10:40 -0400, Ethan Gafford wrote:
> Hello all,
> 
> I request a FFE for the change at: https://review.openstack.org/#/c/209683/
> 
> This change enables a significant improvement to UX in Sahara's elastic data processing flow which is already in the server and client layers of Sahara. Because it specifically aims at improving ease of use and comprehensibility, Horizon integration is critical to the success of the feature. The change itself is reasonably modular and thus low-risk; it will have no impact outside Sahara's job template creation and launch flow, and (barring unforeseen issues) no impact on users of the existing flow who choose not to use this feature.
> 
> Thank you,
> Ethan
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




From carl at ecbaldwin.net  Tue Sep  8 19:30:14 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 8 Sep 2015 13:30:14 -0600
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
Message-ID: <CALiLy7qv+c+ozm2Vw-pCZ1yQiJ1WhTW-Rc1GevCu7soz5H7cRg@mail.gmail.com>

On Tue, Sep 1, 2015 at 11:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
> Hello All,
>
> I have searched and found many past efforts to implement port forwarding in
> Neutron.

I have heard people express a desire for this use case a few times in
the past without it gaining much traction.  Your summary here seems to
show that this continues to come up.  I would be interested in seeing
this move forward.

> I have found two incomplete blueprints [1], [2] and an abandoned patch [3].
>
> There is even a project in Stackforge [4], [5] that claims
> to implement this, but the L3 parts in it seem older than current master.

I looked at this stackforge project.  It looks like files copied out
of neutron and modified, as an alternative to proposing a patch set to
neutron.

> I have recently come across this requirement for various use cases, one of
> them being
> feature compliance with the Docker port-mapping feature (for Kuryr),
> and saving floating
> IP space.

I think both of these could be compelling use cases.

> There have been many discussions in the past that require this feature, so I
> assume
> there is demand to make this formal; just a few examples: [6], [7], [8],
> [9]
>
> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on the
> external router
> leg from the public network to internal ports, so user can use one Floating
> IP (the external
> gateway router interface IP) and reach different internal ports depending on
> the port numbers.
> This should happen on the network node (and can also be leveraged for
> security reasons).

I'm sure someone will ask how this works with DVR.  It should be
implemented so that it works with a DVR router but it will be
implemented in the central part of the router.  Ideally, DVR and
legacy routers work the same in this regard and a single bit of code
will implement it for both.  If this isn't the case, I think that is a
problem with our current code structure.

> I think that the POC implementation in the Stackforge project shows that
> this needs to be
> implemented inside the L3 parts of the current reference implementation, it
> will be hard
> to maintain something like that in an external repository.
> (I also think that the API/DB extensions should be close to the current L3
> reference
> implementation)

Agreed.

> I would like to renew the efforts on this feature and propose a RFE and a
> spec for this to the
> next release, any comments/ideas/thoughts are welcome.
> And of course if any of the people interested or any of the people that
> worked on this before
> want to join the effort, you are more than welcome to join and comment.

I have added this to the agenda for the Neutron drivers meeting.  When
the team starts to turn its eye toward Mitaka, we'll discuss it.
Hopefully that will be soon, as I'm starting to think about it already.

I'd like to see how the API for this will look.  I don't think we'll
need more detail than that for now.
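Purely as a strawman for that API discussion (every field name below is hypothetical, not an existing Neutron resource), a port-forwarding rule tied to a router might look something like:

```json
{
  "port_forwarding": {
    "router_id": "a1b2c3d4-0000-4000-8000-000000000001",
    "protocol": "tcp",
    "external_port": 8022,
    "internal_ip_address": "10.0.0.5",
    "internal_port": 22,
    "internal_port_id": "f0e1d2c3-0000-4000-8000-000000000002"
  }
}
```

A POST of such a body against the router could then program a DNAT rule on the central (external) leg, the same way for legacy and DVR routers.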

Carl

> [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> [3] https://review.openstack.org/#/c/60512/
> [4] https://github.com/stackforge/networking-portforwarding
> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>
> [6]
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> [8]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> [9]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>
>


From doug at doughellmann.com  Tue Sep  8 19:32:07 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 08 Sep 2015 15:32:07 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EF24E4.4060100@dague.net>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
 <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
 <1441731915-sup-1930@lrrr.local> <55EF24E4.4060100@dague.net>
Message-ID: <1441740621-sup-126@lrrr.local>

Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
> > Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
> >> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
> >>>
> >>> I'd like to come up with some way to express the time other than
> >>> N+M because in the middle of a cycle it can be confusing to know
> >>> what that means (if I want to deprecate something in August am I
> >>> far enough through the current cycle that it doesn't count?).
> >>>
> >>> Also, as we start moving more projects to doing intermediate releases
> >>> the notion of a "release" vs. a "cycle" will drift apart, so we
> >>> want to talk about "stable releases" not just any old release.
> >>>
> >>
> >> I've always thought the appropriate equivalent for projects not following
> >> the (old) integrated release cadence was for N == six months.  It sets
> >> approx. the same pace and expectation with users/deployers.
> >>
> >> For those deployments tracking trunk, a similar approach can be taken, in
> >> that deprecating a config option in M3 then removing it in N1 might be too
> >> quick, but rather wait at least the same point in the following release
> >> cycle to increment 'N'.
> >>
> >> dt
> >>
> > 
> > Making it explicitly date-based would simplify tracking, to be sure.
> 
> I would agree that the M3 -> N0 drop can be pretty quick; it can be 6
> weeks (which I've seen happen). However, N == six months might make an FFE
> deprecation landing in one release run into the FFE in the next. For the CD
> case my suggestion is > 3 months, because if you aren't CDing in
> increments smaller than that, and hence seeing the deprecation, you
> aren't really doing the C part of CDing.
> 
>     -Sean
> 

Do those 3 months need to span more than one stable release? For
projects doing intermediary releases, there may be several releases
within a 3 month period.

Doug




From zigo at debian.org  Tue Sep  8 19:35:30 2015
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 08 Sep 2015 21:35:30 +0200
Subject: [openstack-dev] [horizon] python-selenium landed in Debian main
 today (in Debian Experimental for the moment)
Message-ID: <55EF3882.2000805@debian.org>

Hi,

I'm very happy to write this message! :)

After the non-free files were removed from the package (following my
request in Debian bug https://bugs.debian.org/770232), Selenium was
uploaded and reached Debian Experimental in main today (i.e., Selenium
is no longer in the non-free section of Debian). \o/

Now, I wonder: can the Horizon team use python-selenium as uploaded to
Debian Experimental today? Can we run the Selenium unit tests, even
without the browser plugins? It is my understanding that it's possible
if we use something like PhantomJS, which is also available in Debian.

So, Horizon guys, could you please have a look and let me know whether I
can run the Selenium tests with what's in Debian now? Does it require
some modification to how we currently run tests in Horizon?

Running the Selenium unit tests at package build time would definitely
improve the Horizon package's quality assurance, so I would love to run
these tests. Please help me do so! :)

Cheers,

Thomas Goirand (zigo)


From baoli at cisco.com  Tue Sep  8 19:43:39 2015
From: baoli at cisco.com (Robert Li (baoli))
Date: Tue, 8 Sep 2015 19:43:39 +0000
Subject: [openstack-dev] [nova] pci-passtrough and neutron multi segment
 networks
In-Reply-To: <CAFL1fMArmK_WwOirD4-b+8ZoYY8jTaZSa49W+iHvixMW7A9JgA@mail.gmail.com>
References: <CAFL1fMArmK_WwOirD4-b+8ZoYY8jTaZSa49W+iHvixMW7A9JgA@mail.gmail.com>
Message-ID: <D214B250.133F49%baoli@cisco.com>

As far as I know, it was discussed but is not supported yet. It requires changes in nova and support in the neutron plugins.

--Robert

On 9/8/15, 9:39 AM, "Vladyslav Gridin" <vladyslav.gridin at nuagenetworks.net<mailto:vladyslav.gridin at nuagenetworks.net>> wrote:

Hi All,

Is there a way to successfully deploy a VM with an SR-IOV NIC
on both a single-segment VLAN network and a multi-provider network
containing a VLAN segment?
When nova builds the PCI request for the NIC it looks for 'physical_network'
at the network level, but for multi-provider networks this is set within a segment.

e.g.
RESP BODY: {"network": {"status": "ACTIVE", "subnets": ["3862051f-de55-4bb9-8c88-acd675bb3702"], "name": "sriov", "admin_state_up": true, "router:external": false, "segments": [{"provider:segmentation_id": 77, "provider:physical_network": "physnet1", "provider:network_type": "vlan"}, {"provider:segmentation_id": 35, "provider:physical_network": null, "provider:network_type": "vxlan"}], "mtu": 0, "tenant_id": "bd3afb5fac0745faa34713e6cada5a8d", "shared": false, "id": "53c0e71e-4c9a-4a33-b1a0-69529583e05f"}}


So, if the pci_passthrough_whitelist on my compute node contains a physical_network,
deployment will fail on a multi-segment network, and vice versa.
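To make the mismatch concrete, here is a small Python sketch (a hypothetical helper, not actual Nova code) of how the PCI request code could look for 'physical_network' at the network level first and fall back to the segments of a multi-provider network:

```python
# Sketch only: resolve the physical_network for a PCI request from
# either a single-segment network or a multi-provider network whose
# segments carry the attribute instead.

def resolve_physnet(network):
    """Return the physical_network for a network dict, or None.

    For multi-provider networks, 'provider:physical_network' lives
    inside each entry of the 'segments' list; pick the first segment
    that defines one (e.g. the vlan segment).
    """
    physnet = network.get("provider:physical_network")
    if physnet is not None:
        return physnet
    for segment in network.get("segments", []):
        physnet = segment.get("provider:physical_network")
        if physnet is not None:
            return physnet
    return None

# Trimmed-down versions of the two network shapes from the example above:
multi = {
    "name": "sriov",
    "segments": [
        {"provider:segmentation_id": 77,
         "provider:physical_network": "physnet1",
         "provider:network_type": "vlan"},
        {"provider:segmentation_id": 35,
         "provider:physical_network": None,
         "provider:network_type": "vxlan"},
    ],
}
single = {"name": "flat-net", "provider:physical_network": "physnet1"}

print(resolve_physnet(multi))   # physnet1
print(resolve_physnet(single))  # physnet1
```

With a fallback like this, the same whitelist entry on the compute node would match both network shapes.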

Thanks,
Vlad.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/cbbb3abe/attachment.html>

From gord at live.ca  Tue Sep  8 19:45:10 2015
From: gord at live.ca (gord chung)
Date: Tue, 8 Sep 2015 15:45:10 -0400
Subject: [openstack-dev] [aodh][ceilometer] (re)introducing Aodh - OpenStack
	Alarming
Message-ID: <BLU436-SMTP32A3A7E84295D52C56310CDE530@phx.gbl>

hi all,

as you may have heard, in an effort to simplify OpenStack Telemetry
(Ceilometer) and streamline its code, the alarming functionality
provided by OpenStack Telemetry has been moved to its own
repository[1]. The new project is called Aodh[2]. the idea is that Aodh
will grow as its own entity, with its own distinct core team, under
the Telemetry umbrella. this way, we will have a focused team
specifically for the alarming aspects of Telemetry. as always, feedback
and contributions are welcomed[3].

in the coming days, we will release a migration/changes document to 
explain the differences between the original alarming code and Aodh. all 
effort was made to maintain compatibility in configurations such that it 
should be possible to take the existing configuration and reuse it for 
Aodh deployment.

some quick notes:
- the existing alarming code will remain consumable for Liberty release 
(but in deprecated state)
- all new functionality (ie. inline/streaming alarm evaluations) will be 
added only to Aodh
- client and api support has been added to common Ceilometer interfaces 
such that if Aodh is enabled, the client can still be used and redirect 
to Aodh.
- mailing list items can be tagged with [aodh]
- irc discussions will remain under #openstack-ceilometer

many thanks for all those who worked on the code split and integration 
testing.

[1] https://github.com/openstack/aodh
[2] http://www.behindthename.com/name/aodh
[3] https://launchpad.net/aodh

cheers,

-- 
gord



From akalambu at cisco.com  Tue Sep  8 19:58:42 2015
From: akalambu at cisco.com (Ajay Kalambur (akalambu))
Date: Tue, 8 Sep 2015 19:58:42 +0000
Subject: [openstack-dev] [neutron] port delete allowed on VM
Message-ID: <D2148C03.253EB%akalambu@cisco.com>

Hi
Today when we create a VM on a port and then delete that port, I don't get a
message saying Port in Use.

Is this expected behavior in neutron, or is there a plan to fix it? If so, is
there a bug tracking this?

Ajay

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/f49d2cba/attachment.html>

From deepti.ramakrishna at intel.com  Tue Sep  8 20:02:27 2015
From: deepti.ramakrishna at intel.com (Ramakrishna, Deepti)
Date: Tue, 8 Sep 2015 20:02:27 +0000
Subject: [openstack-dev] [Glance] New BP required for adding new json file
 to Glance metadefs?
Message-ID: <EEF613A4FA911D48911298B78DC42A53398636D4@ORSMSX109.amr.corp.intel.com>

Hi all,


I have a question. Hoping one of the Glance cores or spec-cores could answer.

As part of some encryption-related work, we would like to propose an additional data-security.json file for the Glance metadefs, which can be used to set encryption requirements for an image. Do we need to propose a BP for this, or can we just submit it as part of a patch with our reasons?
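For illustration only, such a metadef file could follow the general shape of the definition files Glance already ships; the namespace and property names below are hypothetical, not a concrete proposal:

```json
{
  "namespace": "OS::Compute::DataSecurity",
  "display_name": "Data Security",
  "description": "Encryption requirements for an image.",
  "visibility": "public",
  "protected": true,
  "resource_type_associations": [
    {"name": "OS::Glance::Image"}
  ],
  "properties": {
    "encryption_required": {
      "title": "Encryption required",
      "description": "Whether the image data must be encrypted at rest.",
      "type": "boolean"
    }
  }
}
```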



Please let me know.

Thanks,
Deepti

[P.S: I didn't get any response on IRC and hence this email.]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/798ecf7d/attachment.html>

From msm at redhat.com  Tue Sep  8 20:03:59 2015
From: msm at redhat.com (michael mccune)
Date: Tue, 8 Sep 2015 16:03:59 -0400
Subject: [openstack-dev] Getting Started : OpenStack
In-Reply-To: <CAMobWWqmEZxNtv0JUVvPynKUw60i_iZOtjHJLZf0CANtwFfH9Q@mail.gmail.com>
References: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>
 <CAJ_e2gA81srzjQ4iN+GnXcHzPdHwWoDbE0r1aCo2+pzqM_sd-Q@mail.gmail.com>
 <CAMobWWqmEZxNtv0JUVvPynKUw60i_iZOtjHJLZf0CANtwFfH9Q@mail.gmail.com>
Message-ID: <55EF3F2F.3040803@redhat.com>

On 09/08/2015 02:05 PM, Bhagyashree Uday wrote:
> Hi Victoria ,
>
> Thanks for the prompt reply. I go by Bee(IRC nick : bee2502) .There
> doesn't seem to be much information regarding this project even on the
> Ceilometer project page :( I will wait till the next Outreachy
> applications begin though to check out any new developments. Thanks for
> suggesting the IRC channel :) Btw, do you happen to know any other open
> data analysis projects in OpenStack ?
>
> Bee

hi Bee,

you may also be interested in the sahara project, the data processing 
service for openstack[1].

i am a developer with the project, and although we don't deal 
specifically in the analysis of data, we are addressing the issues of 
deploying popular data processing frameworks (Hadoop, Spark, Storm) into 
openstack.

if this sounds interesting, please stop by our channel, 
#openstack-sahara and chat us up. we are always looking for more people 
interested in contributing =)

regards,
mike

(elmiko on irc)

[1]: http://docs.openstack.org/developer/sahara/



From nik.komawar at gmail.com  Tue Sep  8 20:14:26 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Tue, 8 Sep 2015 16:14:26 -0400
Subject: [openstack-dev] [Glance] New BP required for adding new json
 file to Glance metadefs?
In-Reply-To: <EEF613A4FA911D48911298B78DC42A53398636D4@ORSMSX109.amr.corp.intel.com>
References: <EEF613A4FA911D48911298B78DC42A53398636D4@ORSMSX109.amr.corp.intel.com>
Message-ID: <55EF41A2.8010305@gmail.com>

Please create a standard glance-spec [0] and a corresponding BP for
release management. Discussion can then take place on the spec. You can
request that it be discussed during the weekly glance drivers meeting [1].

[0] https://github.com/openstack/glance-specs
[1] http://eavesdrop.openstack.org/#Glance_Drivers_Meeting

On 9/8/15 4:02 PM, Ramakrishna, Deepti wrote:
>
> Hi all,
>
>  
>
> I have a question. Hoping one of the Glance cores or spec-cores could
> answer.
>
> As part of adding some encryption related work, we would like to
> propose an additional data-security.json file to Glance metadefs which
> can be used to set encryption requirements for an image. Should we
> have to propose a BP for this or just submit as part of a patch with
> our reasons for doing the same?
>
>  
>
> Please let me know.
>
>  
>
> Thanks,
>
> Deepti
>
>  
>
> [P.S: I didn't get any response on IRC and hence this email.]
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/25a7be7f/attachment.html>

From dborodaenko at mirantis.com  Tue Sep  8 20:31:22 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Tue, 8 Sep 2015 13:31:22 -0700
Subject: [openstack-dev] [Fuel] Nominate Evgeniy Konstantinov for
 fuel-docs core
In-Reply-To: <CAK2oe+Jy02oP2Z-mMZsgkXarR_7UYmAt8+jR6_nCsfkS1aNnig@mail.gmail.com>
References: <CAFY49iBwxknorBHmVLZSkUWD9zMr4Tc57vKOg_F0=7PEG0_tSA@mail.gmail.com>
 <CAM0pNLOpBAhyQnRCHXK=jL6NTpxdEe880a=h7c-Jvw4GdTuk9w@mail.gmail.com>
 <CAC+XjbZqz-qk1fi+pR=H-KXEgOqW9W0_+0f89xKVSPpiA5otWg@mail.gmail.com>
 <CAHAWLf2apU=0b_xOhEMA=DjKoEKRsSCtys4sGnjyBmQckgXhUA@mail.gmail.com>
 <CAPQe3Ln-Rv2Z-8LyWPo914mFk+xhxHe05Vj=wxR=yuoUd2+PyA@mail.gmail.com>
 <CAEg2Y8M2HW2QLbNNNga2jMwCm4Z-78wxoZaQCGuW-q_-3PqjwA@mail.gmail.com>
 <CAK2oe+Jy02oP2Z-mMZsgkXarR_7UYmAt8+jR6_nCsfkS1aNnig@mail.gmail.com>
Message-ID: <CAM0pNLOeb9_=b8PTnyoCbA+485Sra8cjw1ptz-gDhx6d_hJYKQ@mail.gmail.com>

It's been 6 days and there's an obvious consensus in favor of adding
Evgeny to fuel-docs core reviewers. I've added Evgeny to
fuel-docs-core group:
https://review.openstack.org/#/admin/groups/657,members

Thanks for your contribution so far and please keep up the good work!

On Tue, Sep 8, 2015 at 7:41 AM, Alexander Adamov <aadamov at mirantis.com> wrote:
> +1
>
> On Thu, Sep 3, 2015 at 11:41 PM, Dmitry Pyzhov <dpyzhov at mirantis.com> wrote:
>>
>> +1
>>
>> On Thu, Sep 3, 2015 at 10:14 PM, Sergey Vasilenko
>> <svasilenko at mirantis.com> wrote:
>>>
>>> +1
>>>
>>>
>>> /sv
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Borodaenko


From clint at fewbar.com  Tue Sep  8 20:35:31 2015
From: clint at fewbar.com (Clint Byrum)
Date: Tue, 08 Sep 2015 13:35:31 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
Message-ID: <1441743880-sup-3578@fewbar.com>

Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> AFAIK, the cloud-init metadata service can currently be accessed only by sending a request to http://169.254.169.254, and no IPv6 equivalent is currently implemented. Is anyone working on this, or has anyone tried to address it before?
> 

I'm not sure we'd want to carry the way metadata works forward now that
we have had some time to think about this.

We already have DHCP6 and NDP. Just use one of those, and set the host's
name to a nonce that it can use to lookup the endpoint for instance
differentiation via DNS SRV records. So if you were told you are

d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com

Then you look that up as a SRV record on your configured DNS resolver,
and connect to the host name returned and do something like  GET
/d02a684d-56ea-44bc-9eba-18d997b1d32d

And voilà, metadata returns without any special link-local thing, and
it works like any other dual-stack application on the planet.
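A rough Python sketch of that flow, with the actual DNS SRV query stubbed out (the record name, the resolver, and the URL layout are all assumptions of this sketch, not an existing implementation):

```python
# Sketch of the proposed metadata lookup: the instance's nonce hostname
# is resolved to a metadata endpoint via a DNS SRV record, then queried
# over plain HTTP. The resolver is injected so the DNS part can be
# stubbed here; a real implementation would issue an SRV query
# (e.g. with a DNS library).

def lookup_srv(name, resolver):
    # Stand-in for a DNS SRV query; returns (host, port).
    return resolver(name)

def metadata_url(instance_fqdn, resolver):
    """Build the metadata URL for an instance from its nonce hostname."""
    nonce = instance_fqdn.split(".", 1)[0]
    host, port = lookup_srv("_metadata._tcp." + instance_fqdn, resolver)
    return "http://%s:%d/%s" % (host, port, nonce)

# Fake resolver standing in for the deployment's DNS:
fake_dns = lambda name: ("md.region.cloud.com", 80)

url = metadata_url("d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com",
                   fake_dns)
print(url)
# http://md.region.cloud.com:80/d02a684d-56ea-44bc-9eba-18d997b1d32d
```

The GET against that URL is then ordinary dual-stack HTTP, with no link-local magic address involved.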


From vkozhukalov at mirantis.com  Tue Sep  8 20:41:57 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Tue, 8 Sep 2015 23:41:57 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
	slave nodes
Message-ID: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>

Dear colleagues,

Currently, we install fuel-libraryX.Y package(s) on the master node and
then right before starting actual deployment we rsync [1] puppet modules
(one of installed versions) from the master node to slave nodes. Such a
flow makes things much more complicated than they could be if we installed
puppet modules on slave nodes as rpm/deb packages. Deployment itself is
parameterized by repo urls (upstream + mos) and this pre-deployment task
could be nothing more than just installing fuel-library package from mos
repo defined for a cluster. We would not have several versions of
fuel-library on the master node, we would not need that complicated upgrade
stuff like we currently have for puppet modules.

Please give your opinions on this.


[1]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218

Vladimir Kozhukalov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/d715caea/attachment.html>

From deepti.ramakrishna at intel.com  Tue Sep  8 21:12:31 2015
From: deepti.ramakrishna at intel.com (Ramakrishna, Deepti)
Date: Tue, 8 Sep 2015 21:12:31 +0000
Subject: [openstack-dev] [Glance] New BP required for adding new json
 file to Glance metadefs?
In-Reply-To: <55EF41A2.8010305@gmail.com>
References: <EEF613A4FA911D48911298B78DC42A53398636D4@ORSMSX109.amr.corp.intel.com>
 <55EF41A2.8010305@gmail.com>
Message-ID: <EEF613A4FA911D48911298B78DC42A53398638BF@ORSMSX109.amr.corp.intel.com>

Thanks Nikhil for your response.

Given that the spec deadline for Liberty is over, I assume I would be submitting my spec targeting Mitaka. I noticed there is no Mitaka specs folder yet, so I created one.

Can you please help review it - https://review.openstack.org/#/c/218098/ ? I can then upload my spec to this.

Thanks,
Deepti

From: Nikhil Komawar [mailto:nik.komawar at gmail.com]
Sent: Tuesday, September 08, 2015 1:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Ramakrishna, Deepti
Subject: Re: [openstack-dev] [Glance] New BP required for adding new json file to Glance metadefs?

Please create a standard glance-spec [0] and a corres. BP for rel-mgmt. The discussion can be expected on the spec. You can request it to be discussed during the weekly glance drivers meeting [1].

[0] https://github.com/openstack/glance-specs
[1] http://eavesdrop.openstack.org/#Glance_Drivers_Meeting
On 9/8/15 4:02 PM, Ramakrishna, Deepti wrote:
Hi all,


I have a question. Hoping one of the Glance cores or spec-cores could answer.

As part of adding some encryption related work, we would like to propose an additional data-security.json file to Glance metadefs which can be used to set encryption requirements for an image. Should we have to propose a BP for this or just submit as part of a patch with our reasons for doing the same?



Please let me know.

Thanks,
Deepti

[P.S: I didn't get any response on IRC and hence this email.]




__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--



Thanks,

Nikhil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/91885c2c/attachment.html>

From sean at dague.net  Tue Sep  8 21:29:21 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 8 Sep 2015 17:29:21 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <1441740621-sup-126@lrrr.local>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
 <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
 <1441731915-sup-1930@lrrr.local> <55EF24E4.4060100@dague.net>
 <1441740621-sup-126@lrrr.local>
Message-ID: <55EF5331.7010804@dague.net>

On 09/08/2015 03:32 PM, Doug Hellmann wrote:
> Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
>> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
>>> Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
>>>> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>>>>>
>>>>> I'd like to come up with some way to express the time other than
>>>>> N+M because in the middle of a cycle it can be confusing to know
>>>>> what that means (if I want to deprecate something in August am I
>>>>> far enough through the current cycle that it doesn't count?).
>>>>>
>>>>> Also, as we start moving more projects to doing intermediate releases
>>>>> the notion of a "release" vs. a "cycle" will drift apart, so we
>>>>> want to talk about "stable releases" not just any old release.
>>>>>
>>>>
>>>> I've always thought the appropriate equivalent for projects not following
>>>> the (old) integrated release cadence was for N == six months.  It sets
>>>> approx. the same pace and expectation with users/deployers.
>>>>
>>>> For those deployments tracking trunk, a similar approach can be taken, in
>>>> that deprecating a config option in M3 then removing it in N1 might be too
>>>> quick, but rather wait at least the same point in the following release
>>>> cycle to increment 'N'.
>>>>
>>>> dt
>>>>
>>>
>>> Making it explicitly date-based would simplify tracking, to be sure.
>>
>> I would agree that the M3 -> N0 drop can be pretty quick; it can be 6
>> weeks (which I've seen happen). However, N == six months might make an FFE
>> deprecation landing in one release run into the FFE in the next. For the CD
>> case my suggestion is > 3 months, because if you aren't CDing in
>> increments smaller than that, and hence seeing the deprecation, you
>> aren't really doing the C part of CDing.
>>
>>     -Sean
>>
> 
> Do those 3 months need to span more than one stable release? For
> projects doing intermediary releases, there may be several releases
> within a 3 month period.

Yes. 1 stable release branch AND 3 months linear time is what I'd
consider reasonable.

	-Sean

-- 
Sean Dague
http://dague.net


From stevemar at ca.ibm.com  Tue Sep  8 21:38:49 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Tue, 8 Sep 2015 17:38:49 -0400
Subject: [openstack-dev] [openstackclient] add lin hua cheng to osc-core
Message-ID: <201509082139.t88Lcxjo010538@d03av04.boulder.ibm.com>



Hey everyone,

I would like to nominate Lin Hua Cheng to the OpenStackClient core team.

Lin continues to be an outstanding OpenStack contributor, as noted by his
core status in both Keystone and Horizon. He has somehow found time to also
contribute to OpenStackClient and provide meaningful and high quality
reviews, as well as several timely bug fixes. He knows the code base inside
and out, and his UX background from horizon has been a great asset.

If no one disagrees with this by end of day on Friday, I'll give Lin his
new awesome core power that evening.

Thanks,

Steve Martinelli
OpenStack Keystone Core
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/4965d58b/attachment.html>

From tasleson at redhat.com  Tue Sep  8 21:43:24 2015
From: tasleson at redhat.com (Tony Asleson)
Date: Tue, 08 Sep 2015 16:43:24 -0500
Subject: [openstack-dev] [cinder] Using storage drivers outside of
	openstack/cinder
Message-ID: <55EF567C.5090908@redhat.com>

Openstack/Cinder has a wealth of storage drivers for talking to different
storage subsystems, which is great for users of openstack.  However, it
would be even greater if this same functionality could be leveraged
outside of openstack/cinder, so that other projects don't need to
duplicate the same functionality when talking to hardware.


When looking at cinder and asking around[1] about how one could
potentially do this, I found that there is quite a bit of coupling
with openstack, for example:

* The NFS driver is initialized with knowledge about whether any volumes
exist in the database or not, and if not, can trigger certain behavior
to set permissions, etc.  This means that something other than the
cinder-volume service needs to mimic the right behavior if using this
driver.

* The LVM driver touches the database when creating a backup of a volume
(many drivers do), and when managing a volume (importing an existing
external LV to use as a Cinder volume).

* A few drivers (GPFS, others?) touch the db when managing consistency
groups.

* EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
snapshots.


Am I the only one that thinks this would be useful?  What ideas do
people have for making the cinder drivers stand alone, so that everyone
could benefit from this great body of work?
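One possible direction is plain dependency injection: if a driver received its DB handle instead of reaching for it, a standalone caller could pass in a shim. A toy Python sketch (FakeDB and ExampleDriver are hypothetical stand-ins, not real Cinder classes):

```python
# Illustrative sketch only: the point is that a driver which reaches
# into a database handle directly cannot be reused outside the
# cinder-volume service, while an injected handle can be stubbed out.

class FakeDB(object):
    """No-op database shim a standalone caller could inject."""
    def volume_get(self, context, volume_id):
        # Outside cinder-volume there is no DB; return a minimal record.
        return {"id": volume_id, "status": "available"}

class ExampleDriver(object):
    def __init__(self, db):
        self.db = db  # injected: the real DB inside cinder, a shim outside

    def snapshot(self, context, volume_id):
        # A driver operation that consults the DB before acting.
        vol = self.db.volume_get(context, volume_id)
        return "snap-of-%s" % vol["id"]

driver = ExampleDriver(FakeDB())
print(driver.snapshot(None, "vol-1"))  # snap-of-vol-1
```

The hard part, of course, is cataloguing every place the existing drivers touch the DB (the examples above) and deciding what a sensible standalone behavior would be for each.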

Thanks,
Tony

[1] Special thanks to Eric Harney for the examples of coupling


From dtroyer at gmail.com  Tue Sep  8 21:43:35 2015
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 8 Sep 2015 16:43:35 -0500
Subject: [openstack-dev] [openstackclient] add lin hua cheng to osc-core
In-Reply-To: <201509082139.t88Lcxjo010538@d03av04.boulder.ibm.com>
References: <201509082139.t88Lcxjo010538@d03av04.boulder.ibm.com>
Message-ID: <CAOJFoEv+e9xtg5-0bZQuCQL8oxT1JjFBSvF9fXe+eCoyK+=p5w@mail.gmail.com>

+++  Thanks Lin!

dt

On Tue, Sep 8, 2015 at 4:38 PM, Steve Martinelli <stevemar at ca.ibm.com>
wrote:

> Hey everyone,
>
> I would like to nominate Lin Hua Cheng to the OpenStackClient core team.
>
> Lin continues to be an outstanding OpenStack contributor, as noted by his
> core status in both Keystone and Horizon. He has somehow found time to also
> contribute to OpenStackClient and provide meaningful and high quality
> reviews, as well as several timely bug fixes. He knows the code base inside
> and out, and his UX background from horizon has been a great asset.
>
> If no one disagrees with this by end of day on Friday, I'll give Lin his
> new awesome core power that evening.
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Dean Troyer
dtroyer at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/99b818c2/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep  8 21:44:35 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 8 Sep 2015 21:44:35 +0000
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <1441743880-sup-3578@fewbar.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>,
 <1441743880-sup-3578@fewbar.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F2CEB@EX10MBOX03.pnnl.gov>

How does that work with neutron private networks?

Thanks,
Kevin
________________________________________
From: Clint Byrum [clint at fewbar.com]
Sent: Tuesday, September 08, 2015 1:35 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> AFAIK, the cloud-init metadata service can currently be accessed only by sending a request to http://169.254.169.254, and no IPv6 equivalent is currently implemented. Is anyone working on this, or has anyone tried to address this before?
>

I'm not sure we'd want to carry the way metadata works forward now that
we have had some time to think about this.

We already have DHCP6 and NDP. Just use one of those, and set the host's
name to a nonce that it can use to look up the endpoint for instance
differentiation via DNS SRV records. So if you were told you are

d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com

Then you look that up as an SRV record on your configured DNS resolver,
and connect to the host name returned and do something like  GET
/d02a684d-56ea-44bc-9eba-18d997b1d32d

And voilà, metadata returns without any special link local thing, and
it works like any other dual stack application on the planet.
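As a rough sketch of the lookup flow described above (the record layout, the
port, and the URL path are assumptions, not an implemented spec):

```python
def metadata_url(instance_fqdn, srv_target, srv_port):
    """Build the metadata GET URL for the SRV-based scheme sketched above.

    instance_fqdn -- the nonce hostname the instance was handed via
                     DHCPv6/NDP, e.g. "d02a684d-...region.cloud.com"
    srv_target, srv_port -- host/port returned by the SRV lookup of
                     that FQDN on the configured resolver
    """
    # The leading label is the per-instance nonce used for
    # differentiation; everything after it names the zone.
    nonce = instance_fqdn.split(".", 1)[0]
    return "http://%s:%d/%s" % (srv_target, srv_port, nonce)
```

The actual SRV lookup would be done by the guest's resolver; this only shows
how the pieces would be stitched into the final request.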



From gord at live.ca  Tue Sep  8 22:10:29 2015
From: gord at live.ca (gord chung)
Date: Tue, 8 Sep 2015 18:10:29 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EF5331.7010804@dague.net>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
 <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
 <1441731915-sup-1930@lrrr.local> <55EF24E4.4060100@dague.net>
 <1441740621-sup-126@lrrr.local> <55EF5331.7010804@dague.net>
Message-ID: <BLU436-SMTP196E0F7CFAEBE2D316FB4FEDE530@phx.gbl>



On 08/09/2015 5:29 PM, Sean Dague wrote:
> On 09/08/2015 03:32 PM, Doug Hellmann wrote:
>> Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
>>> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
>>>> Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
>>>>> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>>>>>> I'd like to come up with some way to express the time other than
>>>>>> N+M because in the middle of a cycle it can be confusing to know
>>>>>> what that means (if I want to deprecate something in August am I
>>>>>> far enough through the current cycle that it doesn't count?).
>>>>>>
>>>>>> Also, as we start moving more projects to doing intermediate releases
>>>>>> the notion of a "release" vs. a "cycle" will drift apart, so we
>>>>>> want to talk about "stable releases" not just any old release.
>>>>>>
>>>>> I've always thought the appropriate equivalent for projects not following
>>>>> the (old) integrated release cadence was for N == six months.  It sets
>>>>> approx. the same pace and expectation with users/deployers.
>>>>>
>>>>> For those deployments tracking trunk, a similar approach can be taken, in
>>>>> that deprecating a config option in M3 then removing it in N1 might be too
>>>>> quick, but rather wait at least the same point in the following release
>>>>> cycle to increment 'N'.
>>>>>
>>>>> dt
>>>>>
>>>> Making it explicitly date-based would simplify tracking, to be sure.
>>> I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
>>> weeks (which I've seen happen). However N == six months might make FFE
>>> deprecation lands in one release run into FFE in the next. For the CD
>>> case my suggestion is > 3 months. Because if you aren't CDing in
>>> increments smaller than that, and hence seeing the deprecation, you
>>> aren't really doing the C part of CDing.
>>>
>>>      -Sean
>>>
>> Do those 3 months need to span more than one stable release? For
>> projects doing intermediary releases, there may be several releases
>> within a 3 month period.
> Yes. 1 stable release branch AND 3 months linear time is what I'd
> consider reasonable.
>
> 	-Sean
>
while the pyro in me would like to burn things asap, my fellow 
contributors won't let me so Ceilometer typically has done 
deprecate->deprecate->remove. but agree with sdague, the bare minimum 
should be ^. operators will yell, don't make them yell.

cheers,

-- 
gord



From gord at live.ca  Tue Sep  8 22:16:53 2015
From: gord at live.ca (gord chung)
Date: Tue, 8 Sep 2015 18:16:53 -0400
Subject: [openstack-dev] Getting Started : OpenStack
In-Reply-To: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>
References: <CAMobWWqoYBDGmH+9wo2OjUzwdBUEQLbXH68Hw5dq47SQLPKGHQ@mail.gmail.com>
Message-ID: <BLU437-SMTP56589D9A34EFC6F187E28EDE530@phx.gbl>



On 07/09/2015 1:59 PM, Bhagyashree Uday wrote:
> Hi ,
>
> I am Bhagyashree from India (IRC nick: bee2502). I have previous 
> experience in data analytics including Machine Learning, NLP, IR and 
> User Experience Research. I am interested in contributing to OpenStack 
> on projects involving data analysis. Also , if these projects could be 
> a part of Outreachy, it would be added bonus. I went through project 
> ideas listed on https://wiki.openstack.org/wiki/Internship_ideas and 
> one of these projects interested me a lot -
> Understand OpenStack Operations via Insights from Logs and Metrics: A 
> Data Science Perspective
> However, this project does not have any mentioned mentor and I was 
> hoping you could provide me with some individual contact from 
> OpenStack community who would be interested in mentoring this project 
> or some mailing list/thread/IRC community where I could look for a 
> mentor. Other open data science projects/idea suggestions are also 
> welcome.
>

there was a project proposed a few months back called Cognitive[1] but i 
don't know the status of this project. as for Ceilometer, it doesn't 
encompass data analysis but it does collect data which you might be 
interested in leveraging (ie. resource metrics and system events)

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064195.html

-- 
gord


From emilien at redhat.com  Tue Sep  8 22:50:38 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 8 Sep 2015 18:50:38 -0400
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
 tent?
In-Reply-To: <20150908145755.GC16241@localhost.localdomain>
References: <20150908145755.GC16241@localhost.localdomain>
Message-ID: <55EF663E.8050909@redhat.com>



On 09/08/2015 10:57 AM, Paul Belanger wrote:
> Greetings,
> 
> I wanted to start a discussion about the future of ansible / ansible roles in
> OpenStack. Over the last week or so I've started down the ansible path, starting
> my first ansible role; I've started with ansible-role-nodepool[1].
> 
> My initial question is simple, now that big tent is upon us, I would like
> some way to include ansible roles into the openstack git workflow.  I first
> thought the role might live under openstack-infra however I am not sure that
> is the right place.  My reason is, -infra tends to include modules they
> currently run under the -infra namespace, and I don't want to start the effort
> to convince people to migrate.

I'm wondering what would be the goal of ansible-role-nodepool and what
it would orchestrate exactly. I did not find a README that explains it,
and digging into the code makes me think you are trying to prepare nodepool
images, but I don't exactly see why.

Since we already have puppet-nodepool, I'm curious about the purpose of
this role.
IMHO, if we had to add such a new repo, it would be under
the openstack-infra namespace, to be consistent with other repos
(puppet-nodepool, etc).

> Another thought might be to reach out to the os-ansible-deployment team and ask
> how they see roles in OpenStack moving forward (mostly the reason for this
> email).

os-ansible-deployment aims to set up OpenStack services in containers
(LXC). I don't see the relation between os-ansible-deployment (openstack
deployment related) and ansible-role-nodepool (infra related).

> Either way, I would be interested in feedback on moving forward on this. Using
> travis-ci and github works but OpenStack workflow is much better.
> 
> [1] https://github.com/pabelanger/ansible-role-nodepool
> 

To me, it's unclear how and why we are going to use ansible-role-nodepool.
Could you explain with a use case?

Thanks,
-- 
Emilien Macchi


From melwittt at gmail.com  Tue Sep  8 23:00:43 2015
From: melwittt at gmail.com (melanie witt)
Date: Tue, 8 Sep 2015 16:00:43 -0700
Subject: [openstack-dev] [nova] API v2.1 reference documentation
Message-ID: <A75480C2-CFDD-4574-B0F4-80911FE5877C@gmail.com>

Hi All,

With usage of v2.1 picking up (devstack) I find myself going to the API ref documentation [1] often and find it lacking compared with the similar v2 doc [2]. I refer to this doc whenever I see a novaclient bug where something broke with v2.1 and I'm trying to find out what the valid request parameters are, etc.

The main thing I notice is that in the v2.1 docs, there isn't any request parameter list with descriptions like there is in v2. And I notice "create server" documentation doesn't seem to exist -- there is "Create multiple servers" but it doesn't provide much insight about what the many request parameters are.

I assume the docs are generated from the code somehow, so I'm wondering how we can get this doc improved? Any pointers would be appreciated.

Thanks,
-melanie (irc: melwitt)


[1] http://developer.openstack.org/api-ref-compute-v2.1.html
[2] http://developer.openstack.org/api-ref-compute-v2.html


From walter.boring at hp.com  Tue Sep  8 23:01:05 2015
From: walter.boring at hp.com (Walter A. Boring IV)
Date: Tue, 8 Sep 2015 16:01:05 -0700
Subject: [openstack-dev] [cinder] Using storage drivers outside of
 openstack/cinder
In-Reply-To: <55EF567C.5090908@redhat.com>
References: <55EF567C.5090908@redhat.com>
Message-ID: <55EF68B1.50407@hp.com>

Hey Tony,
   This has been a long running pain point/problem for some of the 
drivers in Cinder.
As a reviewer, I try and -1 drivers that talk directly to the database 
as I don't think
drivers *should* be doing that.   But, for some drivers, unfortunately, 
in order to
implement the features, they currently need to talk to the DB. :(  One 
of the new
features in Cinder, namely consistency groups, has a bug that basically 
requires
drivers to talk to the DB to fetch additional data.  There are plans to 
remedy this
problem in the M release of Cinder.   For other DB calls in drivers,
it's a case-by-case basis for removing the call, and it's not entirely
obvious how to do it at the
current time.   It's a topic that has come up now and again within the 
community,
and I for one, would like to see the DB calls removed as well. Feel free to
help contribute!  It's OpenSource after all. :)

Cheers,
Walt
> Openstack/Cinder has a wealth of storage drivers to talk to different
> storage subsystems, which is great for users of openstack.  However, it
> would be even greater if this same functionality could be leveraged
> outside of openstack/cinder, so that other projects don't need to
> duplicate the same functionality when trying to talk to hardware.
>
>
> When looking at cinder and asking around[1] about how one could
> potentially do this, I found out that there is quite a bit of coupling
> with openstack, like:
>
> * The NFS driver is initialized with knowledge about whether any volumes
> exist in the database or not, and if not, can trigger certain behavior
> to set permissions, etc.  This means that something other than the
> cinder-volume service needs to mimic the right behavior if using this
> driver.
>
> * The LVM driver touches the database when creating a backup of a volume
> (many drivers do), and when managing a volume (importing an existing
> external LV to use as a Cinder volume).
>
> * A few drivers (GPFS, others?) touch the db when managing consistency
> groups.
>
> * EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
> snapshots.
>
>
> Am I the only one that thinks this would be useful?  What ideas do
> people have for making the cinder drivers stand alone, so that everyone
> could benefit from this great body of work?
>
> Thanks,
> Tony
>
> [1] Special thanks to Eric Harney for the examples of coupling
>



From vilobhmeshram.openstack at gmail.com  Tue Sep  8 23:25:00 2015
From: vilobhmeshram.openstack at gmail.com (Vilobh Meshram)
Date: Tue, 8 Sep 2015 16:25:00 -0700
Subject: [openstack-dev] [magnum] Should pod/rc/service 'bay_uuid' be
 foreign key for Bay 'uuid' ?
Message-ID: <CAPJ8RRWZSqH=J4ggztq59ryn_M42=Q-ioU1ZCHTZUGWZqeE=eA@mail.gmail.com>

Hi All,


The K8s resources Pod/RC/Service share the same 'bay_uuid', which they get
from the Bay 'uuid' (which happens to be the primary key for Bay). Wouldn't
it be a good idea to make the pod/rc/service 'bay_uuid' a foreign key to the
Bay 'uuid'? Are there any cons to doing so? Why was it done in this specific
way initially?

Some pros of doing so:

1. It gives a clear indication of whether a Bay exists or not, since the
Pod/RC/Service 'bay_uuid' would be a foreign key for the Bay 'uuid'.
2. No additional lookup against the Bay table is necessary to check for
the existence of a Bay.



Nova already does so [1], and other projects follow the same pattern.


- Vilobh

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L352
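What the proposed constraint buys can be shown with plain SQL (table and
column names here are trimmed-down illustrations, not Magnum's actual
schema):

```python
# Demonstration: with a foreign key, the database itself rejects a pod
# row whose bay_uuid doesn't match an existing Bay, so no separate
# existence lookup against the bay table is needed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only if asked
conn.execute("CREATE TABLE bay (uuid TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE pod (
    id INTEGER PRIMARY KEY,
    bay_uuid TEXT NOT NULL REFERENCES bay(uuid))""")

conn.execute("INSERT INTO bay (uuid) VALUES ('d02a684d')")
conn.execute("INSERT INTO pod (bay_uuid) VALUES ('d02a684d')")  # accepted

try:
    # Points at a Bay that doesn't exist -- the FK constraint blocks it.
    conn.execute("INSERT INTO pod (bay_uuid) VALUES ('no-such-bay')")
    orphaned = True
except sqlite3.IntegrityError:
    orphaned = False
```

The same guarantee is what the ForeignKey in the Nova model linked at [1]
provides at the SQLAlchemy layer.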

From clint at fewbar.com  Wed Sep  9 00:03:07 2015
From: clint at fewbar.com (Clint Byrum)
Date: Tue, 08 Sep 2015 17:03:07 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A2F2CEB@EX10MBOX03.pnnl.gov>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <1441743880-sup-3578@fewbar.com>
 <1A3C52DFCD06494D8528644858247BF01A2F2CEB@EX10MBOX03.pnnl.gov>
Message-ID: <1441756390-sup-9410@fewbar.com>

Neutron would add a soft router that only knows the route to the metadata
service (and any other services you want your neutron private network vms
to be able to reach). This is not unique to the metadata service. Heat,
Trove, etc, all want this as a feature so that one can poke holes out of
these private networks only to the places where the cloud operator has
services running.

Excerpts from Fox, Kevin M's message of 2015-09-08 14:44:35 -0700:
> How does that work with neutron private networks?
> 
> Thanks,
> Kevin
> ________________________________________
> From: Clint Byrum [clint at fewbar.com]
> Sent: Tuesday, September 08, 2015 1:35 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
> 
> Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> > AFAIK, the cloud-init metadata service can currently be accessed only by sending a request to http://169.254.169.254, and no IPv6 equivalent is currently implemented. Is anyone working on this, or has anyone tried to address this before?
> >
> 
> I'm not sure we'd want to carry the way metadata works forward now that
> we have had some time to think about this.
> 
> We already have DHCP6 and NDP. Just use one of those, and set the host's
> name to a nonce that it can use to look up the endpoint for instance
> differentiation via DNS SRV records. So if you were told you are
> 
> d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com
> 
> Then you look that up as an SRV record on your configured DNS resolver,
> and connect to the host name returned and do something like  GET
> /d02a684d-56ea-44bc-9eba-18d997b1d32d
> 
> And voilà, metadata returns without any special link local thing, and
> it works like any other dual stack application on the planet.
> 


From ken1ohmichi at gmail.com  Wed Sep  9 00:15:44 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Wed, 9 Sep 2015 09:15:44 +0900
Subject: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1)
 fixes be applicable for v2.1 too?
In-Reply-To: <55EEBC48.6080200@dague.net>
References: <CACE3TKWnnCtjc-CM408zO4BLfG733Rz4s90ap69PdE2jmvNWmg@mail.gmail.com>
 <55EEBC48.6080200@dague.net>
Message-ID: <CAA393vgzTYYfb-YAvtHrFAjdVr5GL-sX7Qh4OG7S+SwNGvdFSg@mail.gmail.com>

2015-09-08 19:45 GMT+09:00 Sean Dague <sean at dague.net>:
> On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
>> Hi All,
>>
>> As we all know, the api-paste.ini default setting for /v2 was changed to
>> run those on v2.1 (v2.0 on v2.1), which is a really great thing for easy
>> code maintenance in the future (removal of v2 code).
>>
>> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some bugs
>> were found[1] and fixed. But I think we should fix those only for v2
>> compatible mode, not for v2.1.
>>
>> For example, in bug#1491325, 'device' in the volume attachment request is
>> an optional param[2] (which does not mean 'null-able' is allowed), and
>> v2.1 used to detect and error on usage of 'device' as "None". But as
>> 'None' was used by many /v2 users, and so as not to break those, we
>> should allow 'None' in v2 compatible mode also. But we should not
>> allow the same for v2.1.
>>
>> IMO the v2.1 strong input validation feature (which helps to make API
>> usage correct) should not be changed, and for v2 compatible
>> mode we should have another solution that doesn't affect v2.1 behavior,
>> maybe having a different schema for v2 compatible mode and making the
>> necessary fixes there.
>>
>> I'd like to know others' opinions on this, or whether I missed something
>> during any discussion.
>>
>> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325
>>       https://bugs.launchpad.net/nova/+bug/1491511
>>
>> [2]: http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
>
> A lot of these issue need to be a case by case determination.
>
> In this particular case, we had the documentation, the nova code, the
> clients, and the future.
>
> The documentation: device is optional. That means it should be a string
> or not there at all. The schema was set to enforce this on v2.1
>
> The nova code: device = None was accepted previously, because device is
> a mandatory parameter all the way down the call stack. 2 layers in we
> default it to None if it wasn't specified.
>
> The clients: both python-novaclient and ruby fog sent device=None in the
> common case. While only 2 data points, this does demonstrate this is
> more widespread than just our buggy code.
>
> The future: it turns out we really can't honor this parameter in most
> cases anyway, and passing it just means causing bugs. This is an
> artifact of the EC2 API that only works on specific (and possibly
> forked) versions of Xen that Amazon runs. Most hypervisor / guest
> relationships don't allow this to be set. The long term direction is
> going to be removing it from our API.
>
> Given that, it seemed fine to relax this across all APIs. We screwed up
> and didn't test this case correctly, and long term we're going to dump
> it. So we don't want to honor 3 different versions of this API,
> especially as no clients seem to have been written against the documentation,
> but rather against the code in question. If they write to the
> docs, they'll be fine. But the clients that are out in the wild will be
> fine as well.

I think the case-by-case determination is fine, but the current process
for relaxing validation seems wrong.
In Kilo, we required nova-specs for relaxing v2.1 API validation, like
https://review.openstack.org/#/c/126696/
and we had enough discussion and built a consensus about that.
But we merged the above patch in just 2 working days without any
nova-spec, even though we didn't have a consensus on whether a v2.1
validation change requires a microversion bump or not.

If we really need to relax validation thing for v2.0 compatible API,
please consider separating v2.0 API schema from v2.1 API schema.
I have one idea about that like https://review.openstack.org/#/c/221129/

We worked on strict and consistent validation for the v2.1 API for over
2 years, and I don't want to loosen it without enough thought.
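A toy illustration of the split being proposed (separate v2.0-compat and
v2.1 schemas), using the 'device' field discussed above. These fragments
are illustrative, not Nova's actual schemas, and the checker is a stand-in
for real JSON-Schema validation:

```python
# v2.1 strict contract: 'device' is optional, but a string when present.
STRICT_DEVICE = {"type": "string"}
# v2.0-compat variant: additionally tolerates device=None, as the old
# code (and python-novaclient / ruby fog) did.
RELAXED_DEVICE = {"type": ["string", "null"]}

def accepts(schema, value):
    """Minimal stand-in for JSON-Schema type validation of one field."""
    types = schema["type"]
    if isinstance(types, str):
        types = [types]
    checks = {"string": lambda v: isinstance(v, str),
              "null": lambda v: v is None}
    return any(checks[t](value) for t in types)
```

Keeping two schema dicts like these side by side is what "separating v2.0
API schema from v2.1 API schema" would amount to in practice.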

Thanks
Ken Ohmichi


From john.griffith8 at gmail.com  Wed Sep  9 00:32:58 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 8 Sep 2015 18:32:58 -0600
Subject: [openstack-dev] [cinder] Using storage drivers outside of
	openstack/cinder
In-Reply-To: <55EF68B1.50407@hp.com>
References: <55EF567C.5090908@redhat.com>
	<55EF68B1.50407@hp.com>
Message-ID: <CAPWkaSWHm+aLn4D6n7VrmG92D_UmWmqoDradb92BrY4LG+Ru8A@mail.gmail.com>

On Tue, Sep 8, 2015 at 5:01 PM, Walter A. Boring IV <walter.boring at hp.com>
wrote:

> Hey Tony,
>   This has been a long running pain point/problem for some of the drivers
> in Cinder.
> As a reviewer, I try and -1 drivers that talk directly to the database as
> I don't think
> drivers *should* be doing that.   But, for some drivers, unfortunately, in
> order to
> implement the features, they currently need to talk to the DB. :(  One of
> the new
> features in Cinder, namely consistency groups, has a bug that basically
> requires
> drivers to talk to the DB to fetch additional data.  There are plans to
> remedy this
> problem in the M release of Cinder.   For other DB calls in drivers, it's
> a case-by-case basis for removing the call, and it's not entirely obvious
> how to do it at the
> current time.   It's a topic that has come up now and again within the
> community,
> and I for one, would like to see the DB calls removed as well. Feel free to
> help contribute!  It's OpenSource after all. :)
>
> Cheers,
> Walt
>
>> Openstack/Cinder has a wealth of storage drivers to talk to different
>> storage subsystems, which is great for users of openstack.  However, it
>> would be even greater if this same functionality could be leveraged
>> outside of openstack/cinder, so that other projects don't need to
>> duplicate the same functionality when trying to talk to hardware.
>>
>>
>> When looking at cinder and asking around[1] about how one could
>> potentially do this, I found out that there is quite a bit of coupling
>> with openstack, like:
>>
>> * The NFS driver is initialized with knowledge about whether any volumes
>> exist in the database or not, and if not, can trigger certain behavior
>> to set permissions, etc.  This means that something other than the
>> cinder-volume service needs to mimic the right behavior if using this
>> driver.
>>
>> * The LVM driver touches the database when creating a backup of a volume
>> (many drivers do), and when managing a volume (importing an existing
>> external LV to use as a Cinder volume).
>>
>> * A few drivers (GPFS, others?) touch the db when managing consistency
>> groups.
>>
>> * EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
>> snapshots.
>>
>>
>> Am I the only one that thinks this would be useful?  What ideas do
>> people have for making the cinder drivers stand alone, so that everyone
>> could benefit from this great body of work?
>>
>> Thanks,
>> Tony
>>
>> [1] Special thanks to Eric Harney for the examples of coupling
>>

Hey Tony,

Thanks for posting this. I've thought for a while that something like what
you propose would be AWESOME and is in fact the direction we should go.
Initially I'd planned to make Cinder a consumable service outside of
OpenStack, but over time that has become a much harder task, with all the
libraries, dependencies and assumptions that we make with respect to our
environment.

The idea of a more "general" driver that can be consumed is something
that's come up a number of times and been proposed to sit in Cinder as a
driver (ViPR/CoprHD and some others over the years). I think we (Cinder)
could provide that level of abstraction and consumability better than most
of what's been proposed so far.  It would be a good deal of work to make
it happen and would require some buy-in / commitment from almost all the
Cinder contributors, but I think it would be something worth doing.

I'll be curious to see if any other interest is expressed here.  There are
ways to deal with the DB pieces and things like that I think (hackish ways
like config settings for OpenStack vs non-OpenStack env).  Anyway, I'd love
to talk more about it... maybe in Tokyo?
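One of the "hackish ways" mentioned could look roughly like this: hand
drivers a local stub instead of the Cinder database. The method names below
are made up for illustration and do not match the real cinder.db API:

```python
class InMemoryDB:
    """Illustrative stub -- not the real cinder.db API. A standalone
    consumer could hand something like this to a driver so its DB
    calls are answered from local state instead of Cinder's database."""

    def __init__(self):
        self._volumes = {}
        self._snapshots = []

    def volume_get_all(self, context=None):
        # Answers the "are there any volumes yet?" style checks.
        return list(self._volumes.values())

    def volume_create(self, context, values):
        self._volumes[values["id"]] = values
        return values

    def snapshot_create(self, context, values):
        # Record locally; nothing touches a real Cinder DB row.
        self._snapshots.append(values)
        return values
```

The hard part, as noted upthread, is that the set of calls drivers actually
make is neither small nor uniform, so any such stub grows case by case.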

John

From r1chardj0n3s at gmail.com  Wed Sep  9 01:22:43 2015
From: r1chardj0n3s at gmail.com (Richard Jones)
Date: Wed, 9 Sep 2015 11:22:43 +1000
Subject: [openstack-dev] [horizon] python-selenium landed in Debian main
 today (in Debian Experimental for the moment)
In-Reply-To: <55EF3882.2000805@debian.org>
References: <55EF3882.2000805@debian.org>
Message-ID: <CAHrZfZCvaW7Ckry9isBjfjkEa3h7MudyOn2TmX0GL-5QFNA_MQ@mail.gmail.com>

On 9 September 2015 at 05:35, Thomas Goirand <zigo at debian.org> wrote:

> After the non-free files were removed from the package (after I asked
> for it through the Debian bug https://bugs.debian.org/770232), Selenium
> was uploaded and reached Debian Experimental in main today (ie: Selenium
> is not in non-free section of Debian anymore). \o/
>

\o/


Now, I wonder: can the Horizon team use python-selenium as uploaded to
> Debian experimental today? Can we run the Selenium unit tests, even
> without the browser plugins? It is my understanding that it's possible,
> if we use something like PhantomJS, which is also available in Debian.
>

We can't use PhantomJS as a webdriver as a couple of the tests interact
with file inputs and ghostdriver doesn't support those, sadly (and the
developer of ghostdriver is MIA). We are pretty much stuck with just
Firefox as the webdriver.


     Richard

From ken1ohmichi at gmail.com  Wed Sep  9 01:41:53 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Wed, 9 Sep 2015 10:41:53 +0900
Subject: [openstack-dev] [nova] API v2.1 reference documentation
In-Reply-To: <A75480C2-CFDD-4574-B0F4-80911FE5877C@gmail.com>
References: <A75480C2-CFDD-4574-B0F4-80911FE5877C@gmail.com>
Message-ID: <CAA393vg8FeCNGDqgN_NUHUTXSy=U7VTqim4PJ8D00LBwkzBwdA@mail.gmail.com>

Hi Melanie,

2015-09-09 8:00 GMT+09:00 melanie witt <melwittt at gmail.com>:
> Hi All,
>
> With usage of v2.1 picking up (devstack) I find myself going to the API ref documentation [1] often and find it lacking compared with the similar v2 doc [2]. I refer to this doc whenever I see a novaclient bug where something broke with v2.1 and I'm trying to find out what the valid request parameters are, etc.
>
> The main thing I notice is that in the v2.1 docs, there isn't any request parameter list with descriptions like there is in v2. And I notice "create server" documentation doesn't seem to exist -- there is "Create multiple servers" but it doesn't provide much insight about what the many request parameters are.
>
> I assume the docs are generated from the code somehow, so I'm wondering how we can get this doc improved? Any pointers would be appreciated.
>
> Thanks,
> -melanie (irc: melwitt)
>
>
> [1] http://developer.openstack.org/api-ref-compute-v2.1.html
> [2] http://developer.openstack.org/api-ref-compute-v2.html

Nice point.
The "create server" API is the most important one and needs to be
described in the document.

In the short term, we need to describe it from the code by hand, and we
can find the available parameters in the JSON-Schema code.
The base parameters can be found at
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L18
In addition, there are extensions which add more parameters, and we can
learn about them from
https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas
If a module file contains the dict *server_create*, those entries are also
API parameters. For example, the keypairs extension adds the "key_name"
parameter, and we can see it at https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/keypairs.py

In the long term, it would be great to generate this API parameter
documentation from the JSON-Schema directly.
JSON-Schema supports a "description" field, so we can describe the
meaning of each parameter there.
But that is a long-term approach; we need to write them by hand for now.
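A minimal sketch of that long-term idea: emit an API-reference parameter
table straight from a schema carrying "description" fields. The schema
fragment below is illustrative, not Nova's actual servers.py:

```python
# Illustrative schema fragment with "description" fields (not Nova's
# real create-server schema).
CREATE_SERVER = {
    "type": "object",
    "properties": {
        "name": {"type": "string",
                 "description": "Display name of the server."},
        "imageRef": {"type": "string",
                     "description": "UUID of the image to boot from."},
        "key_name": {"type": "string",
                     "description": "Keypair to inject "
                                    "(added by the keypairs extension)."},
    },
    "required": ["name"],
}

def param_rows(schema):
    """Yield (name, type, required, description) rows for an API ref."""
    required = set(schema.get("required", []))
    for name, spec in sorted(schema["properties"].items()):
        yield (name, spec["type"], name in required,
               spec.get("description", ""))
```

A doc build step could render these rows into the parameter tables that
the v2 reference has and the v2.1 reference currently lacks.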

Thanks
Ken Ohmichi


From ben at swartzlander.org  Wed Sep  9 02:22:38 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Tue, 8 Sep 2015 22:22:38 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <1441735065-sup-780@lrrr.local>
References: <55E83B84.5000000@openstack.org>
 <55EF1BCA.8020708@swartzlander.org> <1441735065-sup-780@lrrr.local>
Message-ID: <55EF97EE.9090801@swartzlander.org>

On 09/08/2015 01:58 PM, Doug Hellmann wrote:
> Excerpts from Ben Swartzlander's message of 2015-09-08 13:32:58 -0400:
>> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
>>> Hi everyone,
>>>
>>> A feature deprecation policy is a standard way to communicate and
>>> perform the removal of user-visible behaviors and capabilities. It helps
>>> setting user expectations on how much and how long they can rely on a
>>> feature being present. It gives them reassurance over the timeframe they
>>> have to adapt in such cases.
>>>
>>> In OpenStack we always had a feature deprecation policy that would apply
>>> to "integrated projects", however it was never written down. It was
>>> something like "to remove a feature, you mark it deprecated for n
>>> releases, then you can remove it".
>>>
>>> We don't have an "integrated release" anymore, but having a base
>>> deprecation policy, and knowing which projects are mature enough to
>>> follow it, is a great piece of information to communicate to our users.
>>>
>>> That's why the next-tags workgroup at the Technical Committee has been
>>> working to propose such a base policy as a 'tag' that project teams can
>>> opt to apply to their projects when they agree to apply it to one of
>>> their deliverables:
>>>
>>> https://review.openstack.org/#/c/207467/
>>>
>>> Before going through the last stage of this, we want to survey existing
>>> projects to see which deprecation policy they currently follow, and
>>> verify that our proposed base deprecation policy makes sense. The goal
>>> is not to dictate something new from the top, it's to reflect what's
>>> generally already applied on the field.
>>>
>>> In particular, the current proposal says:
>>>
>>> "At the very minimum the feature [...] should be marked deprecated (and
>>> still be supported) in the next two coordinated end-of-cyle releases.
>>> For example, a feature deprecated during the M development cycle should
>>> still appear in the M and N releases and cannot be removed before the
>>> beginning of the O development cycle."
>>>
>>> That would be a n+2 deprecation policy. Some suggested that this is too
>>> far-reaching, and that a n+1 deprecation policy (feature deprecated
>>> during the M development cycle can't be removed before the start of the
>>> N cycle) would better reflect what's being currently done. Or that
>>> config options (which are user-visible things) should have n+1 as long
>>> as the underlying feature (or behavior) is not removed.
>>>
>>> Please let us know what makes the most sense. In particular between the
>>> 3 options (but feel free to suggest something else):
>>>
>>> 1. n+2 overall
>>> 2. n+2 for features and capabilities, n+1 for config options
>>> 3. n+1 overall
>> I think any discussion of a deprecation policy needs to be combined with
>> a discussion about LTS (long term support) releases. Real customers (not
>> devops users -- people who pay money for support) can't deal with
>> upgrades every 6 months.
>>
>> Unavoidably, distros are going to want to support certain releases for
>> longer than the normal upstream support window so they can satisfy the
>> needs of the aforementioned customers. This will be true whether the
>> deprecation policy is N+1, N+2, or N+3.
>>
>> It makes sense for the community to define LTS releases and coordinate
>> making sure all the relevant projects are mutually compatible at that
>> release point. Then the job of actually maintaining the LTS release can
>> fall on people who care about such things. The major benefit to solving
>> the LTS problem, though, is that deprecation will get a lot less painful
>> because you could assume upgrades to be one release at a time or
>> skipping directly from one LTS to the next, and you can reduce your
>> upgrade test matrix accordingly.
> How is this fundamentally different from what we do now with stable
> releases, aside from involving a longer period of time?

It would be a recognition that most customers don't want to upgrade 
every 6 months -- they want to skip over 3 releases and upgrade every 2 
years. I'm sure there are customers all over the spectrum from those who 
run master, to those who do want a new release every 6 months, to some that 
want to install something and run it forever without upgrading*. My 
intuition is that, for most customers, 2 years is a reasonable amount of 
time to run a release before upgrading. I think major Linux distros 
understand this, as is evidenced by their release and support patterns.

As sdague mentions, the idea of LTS is really a separate goal from the 
deprecation policy, but I see the two becoming related when the 
deprecation policy makes it impossible to cleanly jump 4 releases in a 
single upgrade. I also believe that if you solve the LTS problem, the 
deprecation policy flows naturally from whatever your supported-upgrade 
path is: you simply avoid breaking anyone who does a supported upgrade.

It sounds to me like the current supported upgrade path is: you upgrade 
each release one at a time, never skipping over a release. In this 
model, N+1 deprecation makes perfect sense. I think the same people who 
want longer deprecation periods are the ones who want to skip over 
releases when upgrading, for the reasons I mention.

-Ben

* I'm the kind that never upgrades. I don't fix things that aren't 
broken. Until recently I was running FreeBSD 7 and Ubuntu 8.04. 
Eventually I was forced to upgrade though when support was dropped. I'm 
*still* running CentOS 5 though.

> Doug
>
>> -Ben Swartzlander
>>
>>> Thanks in advance for your input.
>>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From annegentle at justwriteclick.com  Wed Sep  9 02:43:35 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Tue, 8 Sep 2015 21:43:35 -0500
Subject: [openstack-dev] [nova] API v2.1 reference documentation
In-Reply-To: <CAA393vg8FeCNGDqgN_NUHUTXSy=U7VTqim4PJ8D00LBwkzBwdA@mail.gmail.com>
References: <A75480C2-CFDD-4574-B0F4-80911FE5877C@gmail.com>
 <CAA393vg8FeCNGDqgN_NUHUTXSy=U7VTqim4PJ8D00LBwkzBwdA@mail.gmail.com>
Message-ID: <CAD0KtVF+Mg1yNMFoPqtqgCC+t46v9PfwhH7waQBtb_cZ1EYLoQ@mail.gmail.com>

On Tue, Sep 8, 2015 at 8:41 PM, Ken'ichi Ohmichi <ken1ohmichi at gmail.com>
wrote:

> Hi Melanie,
>
> 2015-09-09 8:00 GMT+09:00 melanie witt <melwittt at gmail.com>:
> > Hi All,
> >
> > With usage of v2.1 picking up (devstack) I find myself going to the API
> ref documentation [1] often and find it lacking compared with the similar
> v2 doc [2]. I refer to this doc whenever I see a novaclient bug where
> something broke with v2.1 and I'm trying to find out what the valid request
> parameters are, etc.
> >
> > The main thing I notice is in the v2.1 docs, there isn't any request
> parameter list with descriptions like there is in v2. And I notice "create
> server" documentation doesn't seem to exist -- there is "Create multiple
> servers" but it doesn't provide much nsight about what the many request
> parameters are.
> >
> > I assume the docs are generated from the code somehow, so I'm wondering
> how we can get this doc improved? Any pointers would be appreciated.
>

They are manual, and Alex made a list of how far behind the 2.1 docs
are in a doc bug here:

https://bugs.launchpad.net/openstack-api-site/+bug/1488144

It's great to see Atsushi Sakai working hard on those; please join him
in the patching.

We're still patching WADL for this release, with the hope of adding
Swagger for many services by October 15th -- the WADL-to-Swagger tool
we have now works by migrating existing WADL.

Thanks,
Anne

>
> > Thanks,
> > -melanie (irc: melwitt)
> >
> >
> > [1] http://developer.openstack.org/api-ref-compute-v2.1.html
> > [2] http://developer.openstack.org/api-ref-compute-v2.html
>
> Nice point.
> "create server" API is most important and necessary to be described on
> the document anyway.
>
> In short-term, we need to describe it from the code by hands, and we
> can know available parameters from JSON-Schema code.
> The base parameters can be gotten from
>
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L18
> In addition, there are extensions which add more parameters and we can
> get to know from
>
> https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas
> If module files contain the dict *server_create*, they are also API
> parameters.
> For example, keypairs extension adds "key_name" parameter and we can
> know it from
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/keypairs.py
>
> In long-term, it will be great to generate these API parameter
> document from JSON-Schema directly.
> JSON-Schema supports "description" parameter and we can describe the
> meaning of each parameter.
> But that will be long-term way. We need to write them by hands now.
>
> Thanks
> Ken Ohmichi
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/7ad02a9f/attachment.html>

From ayoung at redhat.com  Wed Sep  9 03:30:46 2015
From: ayoung at redhat.com (Adam Young)
Date: Tue, 8 Sep 2015 23:30:46 -0400
Subject: [openstack-dev] [depfreeze] [keystone] Set minimum version for
 passlib
In-Reply-To: <CAGi==UW6OdXiBxfBdj6QRiMtgOLy+6g=zUY2E4r6h7iss4eF=Q@mail.gmail.com>
References: <CAGi==UW6OdXiBxfBdj6QRiMtgOLy+6g=zUY2E4r6h7iss4eF=Q@mail.gmail.com>
Message-ID: <55EFA7E6.1060505@redhat.com>

On 09/08/2015 09:50 AM, Alan Pevec wrote:
> Hi all,
>
> according to https://wiki.openstack.org/wiki/DepFreeze I'm requesting
> depfreeze exception for
> https://review.openstack.org/221267
> This is just a sync with reality, copying Javier's description:
>
> (Keystone) commit a7235fc0511c643a8441efd3d21fc334535066e2 [1] uses
> passlib.utils.MAX_PASSWORD_SIZE, which was only introduced to
> passlib in version 1.6
>
> Cheers,
> Alan
>
> [1] https://review.openstack.org/217449
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1
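A defensive variant of the import in question can tolerate older passlib
while the minimum-version bump lands (a sketch; the fallback constant and
the helper are hypothetical stand-ins, not keystone's actual fix -- the
real fix is the requirements change above):

```python
# passlib.utils.MAX_PASSWORD_SIZE was only introduced in passlib 1.6,
# so guard the import; the fallback constant below is hypothetical.
try:
    from passlib.utils import MAX_PASSWORD_SIZE
except ImportError:
    MAX_PASSWORD_SIZE = 4096  # hypothetical fallback for passlib < 1.6

def truncate_password(password):
    """Trim passwords beyond the passlib limit (hypothetical policy)."""
    return password[:MAX_PASSWORD_SIZE]
```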


From ayoung at redhat.com  Wed Sep  9 03:36:37 2015
From: ayoung at redhat.com (Adam Young)
Date: Tue, 8 Sep 2015 23:36:37 -0400
Subject: [openstack-dev] This is what disabled-by-policy should look
 like to the user
In-Reply-To: <CAOyZ2aEKoAb9m=qz69u93G9cbkS6toybz7gdv9JR97jv9RVMgA@mail.gmail.com>
References: <55E9A4F2.5030809@inaugust.com> <55EA56BC.70408@redhat.com>
 <CAOyZ2aEKoAb9m=qz69u93G9cbkS6toybz7gdv9JR97jv9RVMgA@mail.gmail.com>
Message-ID: <55EFA945.8080207@redhat.com>

On 09/06/2015 03:31 PM, Duncan Thomas wrote:
>
>
> On 5 Sep 2015 05:47, "Adam Young" <ayoung at redhat.com 
> <mailto:ayoung at redhat.com>> wrote:
>
> > Then let my Hijack:
> >
> > Policy is still broken.  We need the pieces of Dynamic policy.
> >
> > I am going to call for a cross project policy discussion for the 
> upcoming summit.  Please, please, please all the projects attend. The 
> operators have made it clear they need better policy support.
>
> Can you give us a heads up on the perceived shortcomings, please, 
> together with an overview of any proposed changes? Turning up to a 
> session to hear, unprepared, something that can be introduced in 
> advance over email so that people can ruminate on the details and be 
> better prepared to discuss them is probably more productive than 
> expecting tired, jet-lagged people to think on their feet.
>
> In general, I think the practice of introducing new things at design 
> summits, rather than letting people prepare, is slowing us down as a 
> community.
>

I've been harping on this for a while, both at summits and before.

It starts with:

https://bugs.launchpad.net/keystone/+bug/968696

We can't fix that until we have an approach that lets us unstick the 
situations where we need a global admin.

This was the start of it:
https://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/

I submitted this overview spec (which was deemed not implementable
because it was an overview):

https://review.openstack.org/#/c/147651/



and a bunch of supporting specs:

https://review.openstack.org/#/q/status:open+project:openstack/keystone-specs+branch:master+topic:dynamic-policy,n,z

We've made very little progress on this in the 6 months since.

We had a cross-project policy discussion in Vancouver. It was almost
all Keystone folks, with very few people from other projects.



>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/b8154cb4/attachment.html>

From germy.lure at gmail.com  Wed Sep  9 06:43:00 2015
From: germy.lure at gmail.com (Germy Lure)
Date: Wed, 9 Sep 2015 14:43:00 +0800
Subject: [openstack-dev] [Neutron] Port forwarding
In-Reply-To: <CALiLy7qv+c+ozm2Vw-pCZ1yQiJ1WhTW-Rc1GevCu7soz5H7cRg@mail.gmail.com>
References: <CAG9LJa7uv2cn6_xOu1oMUR-AjkT9jP_yxBrUXeNjY_vYzMtOBA@mail.gmail.com>
 <CALiLy7qv+c+ozm2Vw-pCZ1yQiJ1WhTW-Rc1GevCu7soz5H7cRg@mail.gmail.com>
Message-ID: <CAEfdOg0oOz7v_-7Y_P9XRJOiXeAhdQ1iKN62=2uDYtjnu00xyg@mail.gmail.com>

Hi Gal,

Glad that we finally understand each other.

Yes, in bulk. But I don't see that as just an enhancement to the API.
The bulk operation is the more common scenario; it is more useful and
also covers the single port-mapping case.

By the way, a bulk operation may apply to a subnet, a range (IP1 to
IP100), or even all the VMs behind a router. Perhaps we need to make a
choice between them; I prefer "range" because it's more flexible and
easier to use.
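To illustrate, a "range" bulk operation could conceptually expand into
individual port mappings like this (a sketch with hypothetical field names
and port-allocation policy; the real API shape would come out of the
RFE/spec discussion):

```python
import ipaddress

def expand_range(router_ip, first_ip, last_ip, internal_port,
                 base_external_port):
    """Expand an IP range into per-VM port-forwarding rules.

    Maps (router_ip, base_external_port + i) -> (internal ip, internal_port)
    for each address in [first_ip, last_ip]. All names are hypothetical.
    """
    start = ipaddress.ip_address(first_ip)
    stop = ipaddress.ip_address(last_ip)
    rules = []
    for i in range(int(stop) - int(start) + 1):
        rules.append({
            'external_ip': router_ip,
            'external_port': base_external_port + i,
            'internal_ip': str(start + i),
            'internal_port': internal_port,
        })
    return rules

# Forward SSH (port 22) for three VMs behind one gateway address.
rules = expand_range('203.0.113.1', '10.0.0.1', '10.0.0.3', 22, 10022)
```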

Many thanks.
Germy

On Wed, Sep 9, 2015 at 3:30 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:

> On Tue, Sep 1, 2015 at 11:59 PM, Gal Sagie <gal.sagie at gmail.com> wrote:
> > Hello All,
> >
> > I have searched and found many past efforts to implement port forwarding
> in
> > Neutron.
>
> I have heard a few express a desire for this use case a few times in
> the past without gaining much traction.  Your summary here seems to
> show that this continues to come up.  I would be interested in seeing
> this move forward.
>
> > I have found two incomplete blueprints [1], [2] and an abandoned patch
> [3].
> >
> > There is even a project in Stackforge [4], [5] that claims
> > to implement this, but the L3 parts in it seem older than current
> master.
>
> I looked at this stack forge project.  It looks like files copied out
> of neutron and modified as an alternative to proposing a patch set to
> neutron.
>
> > I have recently came across this requirement for various use cases, one
> of
> > them is
> > providing feature compliance with Docker port-mapping feature (for
> Kuryr),
> > and saving floating
> > IP's space.
>
> I think both of these could be compelling use cases.
>
> > There has been many discussions in the past that require this feature,
> so i
> > assume
> > there is a demand to make this formal, just a small examples [6], [7],
> [8],
> > [9]
> >
> > The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
> the
> > external router
> > leg from the public network to internal ports, so user can use one
> Floating
> > IP (the external
> > gateway router interface IP) and reach different internal ports
> depending on
> > the port numbers.
> > This should happen on the network node (and can also be leveraged for
> > security reasons).
>
> I'm sure someone will ask how this works with DVR.  It should be
> implemented so that it works with a DVR router but it will be
> implemented in the central part of the router.  Ideally, DVR and
> legacy routers work the same in this regard and a single bit of code
> will implement it for both.  If this isn't the case, I think that is a
> problem with our current code structure.
>
> > I think that the POC implementation in the Stackforge project shows that
> > this needs to be
> > implemented inside the L3 parts of the current reference implementation,
> it
> > will be hard
> > to maintain something like that in an external repository.
> > (I also think that the API/DB extensions should be close to the current
> L3
> > reference
> > implementation)
>
> Agreed.
>
> > I would like to renew the efforts on this feature and propose a RFE and a
> > spec for this to the
> > next release, any comments/ideas/thoughts are welcome.
> > And of course if any of the people interested or any of the people that
> > worked on this before
> > want to join the effort, you are more than welcome to join and comment.
>
> I have added this to the agenda for the Neutron drivers meeting.  When
> the team starts to turn its eye toward Mitaka, we'll discuss it.
> Hopefully that will be soon, as I'm starting to think about it already.
>
> I'd like to see how the API for this will look.  I don't think we'll
> need more detail than that for now.
>
> Carl
>
> > [1]
> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> > [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> > [3] https://review.openstack.org/#/c/60512/
> > [4] https://github.com/stackforge/networking-portforwarding
> > [5] https://review.openstack.org/#/q/port+forwarding,n,z
> >
> > [6]
> >
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> > [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> > [8]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> > [9]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
> >
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/1512abef/attachment.html>

From sukhdevkapur at gmail.com  Wed Sep  9 06:46:00 2015
From: sukhdevkapur at gmail.com (Sukhdev Kapur)
Date: Tue, 8 Sep 2015 23:46:00 -0700
Subject: [openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint
	announcement
Message-ID: <CA+wZVHSv87sPpEFj2jC1YM9_WtDfFDEdzZF8tHqYCs=110rGSw@mail.gmail.com>

Folks,

We are planning an ML2 coding sprint on October 6 through 8, 2015.
Some are calling it the Liberty late-cycle sprint; others are calling
it the Mitaka early-cycle sprint.

The ML2 team has been discussing issues related to synchronization of
the Neutron DB resources with the back-end drivers. Several issues have
been reported when multiple ML2 drivers are deployed in scaled HA
deployments. The issues surface when either side (Neutron or the
back-end HW/drivers) restarts and the resource views get out of sync.
There is no mechanism in Neutron or the ML2 plugin which ensures
synchronization of state between the front end and the back end. The
drivers either end up implementing their own solutions, or they dump
the issue on the operators to intervene and correct it manually.

We plan on utilizing TaskFlow to implement a framework in the ML2
plugin which can be leveraged by ML2 drivers to achieve synchronization
in a simplified manner.
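At its core, the synchronization such a framework provides is a
reconciliation between two views of the resources. A minimal stdlib sketch
of that diffing step (all names hypothetical, and independent of the actual
TaskFlow design the sprint will produce):

```python
def reconcile(neutron_view, backend_view):
    """Compute what the backend must change to match Neutron's view.

    Both views are dicts mapping resource id -> attribute dict.
    Returns (to_create, to_delete, to_update) id sets. Hypothetical
    helper; a real framework would drive these through TaskFlow tasks
    with retries and failure handling.
    """
    neutron_ids = set(neutron_view)
    backend_ids = set(backend_view)
    to_create = neutron_ids - backend_ids
    to_delete = backend_ids - neutron_ids
    to_update = {rid for rid in neutron_ids & backend_ids
                 if neutron_view[rid] != backend_view[rid]}
    return to_create, to_delete, to_update

# Example: backend restarted and drifted from the Neutron DB.
neutron = {'net-1': {'vlan': 100}, 'net-2': {'vlan': 200}}
backend = {'net-2': {'vlan': 999}, 'net-3': {'vlan': 300}}
create, delete, update = reconcile(neutron, backend)
```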

There are a couple of additional items on the sprint agenda, which are
listed on the etherpad [1]. The details of the venue and schedule are
listed on the etherpad as well. The sprint is hosted by Yahoo Inc.
Anyone interested in the topics listed on the etherpad is welcome to
sign up for the sprint and join us in making this a reality.

Additionally, we will utilize this sprint to formalize the design
proposal(s) for the fishbowl session at the Tokyo summit [2].

For any questions or clarifications, please join us in our weekly ML2
meeting on Wednesdays at 1600 UTC (9 AM Pacific) in #openstack-meeting-alt.

Thanks
-Sukhdev

[1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
[2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150908/211c0b5b/attachment.html>

From ijw.ubuntu at cack.org.uk  Wed Sep  9 07:17:14 2015
From: ijw.ubuntu at cack.org.uk (Ian Wells)
Date: Wed, 9 Sep 2015 00:17:14 -0700
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <1441756390-sup-9410@fewbar.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <1441743880-sup-3578@fewbar.com>
 <1A3C52DFCD06494D8528644858247BF01A2F2CEB@EX10MBOX03.pnnl.gov>
 <1441756390-sup-9410@fewbar.com>
Message-ID: <CAPoubz4_zo3jZwvDQLKpYZVVmF6DN85m_1HNgx1wHi6nxAX+AA@mail.gmail.com>

Neutron already offers a DNS server (within the DHCP namespace, I think).
It does forward non-local queries to an external DNS server, but it
already serves local names for instances; we'd simply have to set one
aside, or perhaps use one in a 'root' but non-local domain
(e.g. metadata.openstack).  In fact, this improves things slightly over
the IPv4 metadata server: IPv4 metadata is usually reached via the
router, whereas with IPv6, if we have a choice of addresses, we can use
a link-local address (and any link-local address will do; it's not an
address that is 'magic' in some way, thanks to the wonder of service
advertisement).

And per previous comments about 'Amazon owns this' - the current metadata
service is a de facto standard, which Amazon initiated but is not owned by
anybody, and it's not the only standard.  If you'd like proof of the
former, I believe our metadata service offers /openstack/ URLs, unlike
Amazon (mirroring the /openstack/ files on the config drive); and on the
latter, config-drive and Amazon-style metadata are only two of quite an
assortment of data providers that cloud-init will query.  If it helps
to think of it differently, think of this as the *OpenStack* IPv6
metadata service, and not the 'will-be-Amazon-one-day-maybe' service.
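The SRV-based scheme Clint describes in the quoted message below is easy to
sketch on the client side: given the nonce hostname and the SRV answer, the
instance just builds the metadata URL (names and URL layout are
hypothetical; a real client would perform the SRV query against its
configured resolver, e.g. with dnspython):

```python
def metadata_url_from_srv(instance_fqdn, srv_target, srv_port):
    """Build the metadata URL for an instance from its SRV answer.

    instance_fqdn is the nonce name handed out via DHCPv6/NDP; the
    srv_target/srv_port pair comes from the SRV lookup on that name.
    The URL layout mirrors the quoted proposal and is hypothetical.
    """
    instance_id = instance_fqdn.split('.', 1)[0]
    return 'http://%s:%d/%s' % (srv_target.rstrip('.'), srv_port, instance_id)

url = metadata_url_from_srv(
    'd02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com',
    'metadata.region.cloud.com.', 80)
```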


On 8 September 2015 at 17:03, Clint Byrum <clint at fewbar.com> wrote:

> Neutron would add a soft router that only knows the route to the metadata
> service (and any other services you want your neutron private network vms
> to be able to reach). This is not unique to the metadata service. Heat,
> Trove, etc, all want this as a feature so that one can poke holes out of
> these private networks only to the places where the cloud operator has
> services running.
>
> Excerpts from Fox, Kevin M's message of 2015-09-08 14:44:35 -0700:
> > How does that work with neutron private networks?
> >
> > Thanks,
> > Kevin
> > ________________________________________
> > From: Clint Byrum [clint at fewbar.com]
> > Sent: Tuesday, September 08, 2015 1:35 PM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
> >
> > Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> > > AFAIK, the cloud-init metadata service can currently be accessed only
> by sending a request to http://169.254.169.254, and no IPv6 equivalent is
> currently implemented. Does anyone working on this or tried to address this
> before?
> > >
> >
> > I'm not sure we'd want to carry the way metadata works forward now that
> > we have had some time to think about this.
> >
> > We already have DHCP6 and NDP. Just use one of those, and set the host's
> > name to a nonce that it can use to lookup the endpoint for instance
> > differentiation via DNS SRV records. So if you were told you are
> >
> > d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com
> >
> > Then you look that up as a SRV record on your configured DNS resolver,
> > and connect to the host name returned and do something like  GET
> > /d02a684d-56ea-44bc-9eba-18d997b1d32d
> >
> > And voila, metadata returns without any special link-local thing, and
> > it works like any other dual stack application on the planet.
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/07686c50/attachment.html>

From eduard.matei at cloudfounders.com  Wed Sep  9 07:31:46 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Wed, 9 Sep 2015 10:31:46 +0300
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party CI
	broken
Message-ID: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>

Hi,

Our Jenkins CI failed consistently last night during the devstack
install:


2015-09-08 21:56:33.585 | Error: Service ceilometer-acentral is not running
2015-09-08 21:56:33.585 | + for service in '$failures'
2015-09-08 21:56:33.586 | ++ basename
/opt/stack/status/stack/ceilometer-acompute.failure
2015-09-08 21:56:33.587 | + service=ceilometer-acompute.failure
2015-09-08 21:56:33.587 | + service=ceilometer-acompute
2015-09-08 21:56:33.587 | + echo 'Error: Service ceilometer-acompute is not
running'
2015-09-08 21:56:33.587 | Error: Service ceilometer-acompute is not running
2015-09-08 21:56:33.587 | + '[' -n
'/opt/stack/status/stack/ceilometer-acentral.failure
2015-09-08 21:56:33.587 |
/opt/stack/status/stack/ceilometer-acompute.failure' ']'
2015-09-08 21:56:33.587 | + die 1467 'More details about the above errors
can be found with screen, with ./rejoin-stack.sh'
2015-09-08 21:56:33.587 | + local exitcode=0


Screen logs for ceilometer show:

stack@d-p-c-local-01-2592:~/devstack$ /usr/local/bin/ceilometer-agent-central --config-file /etc/ceilometer/ceilometer.conf & echo $! >/opt/stack/status/stack/ceilometer-acentral.pid; fg || echo "ceilometer-acentral failed to start" | tee "/opt/stack/status/stack/ceilometer-acentral.failure"
[1] 3837
/usr/local/bin/ceilometer-agent-central --config-file /etc/ceilometer/ceilometer.conf
bash: /usr/local/bin/ceilometer-agent-central: No such file or directory
ceilometer-acentral failed to start
stack@d-p-c-local-01-2592:~/devstack$

Does anyone have any idea how to fix this?

Thanks,
-- 

*Eduard Biceri Matei*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/bd6136fc/attachment.html>

From weiting.chen at intel.com  Wed Sep  9 07:33:56 2015
From: weiting.chen at intel.com (Chen, Weiting)
Date: Wed, 9 Sep 2015 07:33:56 +0000
Subject: [openstack-dev]  [sahara] FFE request for nfs-as-a-data-source
Message-ID: <6EEB8A90CDE31C4680037A635100E8FF953DBC@SHSMSX104.ccr.corp.intel.com>

Hi, all.

I would like to request an FFE for NFS as a data source for Sahara.
This blueprint originally also included a dashboard change for creating
NFS as a data source; I will register that as a separate blueprint and
implement it in the next version.
However, the patches to put the NFS driver into sahara-image-elements
and enable it in the cluster are already done.
With these, users can use the NFS protocol via the command line in the
Liberty release.

Blueprint:
https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source

Spec:
https://review.openstack.org/#/c/210839/

Patch:
https://review.openstack.org/#/c/218637/
https://review.openstack.org/#/c/218638/


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/7d97fa07/attachment.html>

From huan.xie at citrix.com  Wed Sep  9 07:42:44 2015
From: huan.xie at citrix.com (Huan Xie)
Date: Wed, 9 Sep 2015 07:42:44 +0000
Subject: [openstack-dev] [neutron] Fail to get ipv4 address from dhcp
In-Reply-To: <CAENZyq7F6F=icS6Tx4=3MoouKLfzB=-Ti29qX5b3hci8RxDDKw@mail.gmail.com>
References: <27E8119E14BEBA418E5E368E4DF2CA71FCB66B@SINPEX01CL02.citrite.net>
 <CAENZyq7F6F=icS6Tx4=3MoouKLfzB=-Ti29qX5b3hci8RxDDKw@mail.gmail.com>
Message-ID: <27E8119E14BEBA418E5E368E4DF2CA71FCCAB3@SINPEX01CL02.citrite.net>

Hi Zhi,

Thanks very much for your help.
Even turning off "ARP Spoofing" did not work.
But now I have found the cause:
The OVS agent plugin loops to check the OVS status and port status.
In my case, during the loop it could not detect that a new port had
been added, so it failed to add a tag for this port, and thus all
packets were dropped.
Therefore, the DHCP request never reached the DHCP agent, and the VM
could not get an IP.

My current workaround is to set a configuration item in the compute
node's ml2_conf.ini:
[agent]
minimize_polling = False


Although this works, I'm still wondering why the newly added port
cannot be detected even with "minimize_polling = True".
It seems the ovsdb monitor cannot detect that.

BR//Huan

From: zhi [mailto:changzhi1990 at gmail.com]
Sent: Monday, September 07, 2015 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Fail to get ipv4 address from dhcp

hi, if you turn off the "ARP Spoofing" flag and restart the q-agt service. Does vm can get IP successfully?

2015-09-06 17:03 GMT+08:00 Huan Xie <huan.xie at citrix.com<mailto:huan.xie at citrix.com>>:

Hi all,

I'm trying to deploy an OpenStack environment using DevStack with the
latest master code.
I use XenServer + neutron, with the ML2 plugin and VLAN type.

The problem I met is that the instances cannot actually get an IP
address (I use DHCP), although we can see the VM with an IP in Horizon.
I ran tcpdump on both the VM side and the DHCP server side; I can see
the DHCP request packets from the VM side but no request packets on the
DHCP server side.
But after I reboot the q-agt, the VM can get an IP successfully.
Checking the difference before and after the q-agt restart, all I have
seen are the flow rules about ARP spoofing.

These are the q-agt's br-int flow rules (from dom0); the bold parts are newly added:

                NXST_FLOW reply (xid=0x4):
               cookie=0x824d13a352a4e216, duration=163244.088s, table=0, n_packets=93, n_bytes=18140, idle_age=4998, hard_age=65534, priority=0 actions=NORMAL
cookie=0x824d13a352a4e216, duration=163215.062s, table=0, n_packets=7, n_bytes=294, idle_age=33540, hard_age=65534, priority=10,arp,in_port=5 actions=resubmit(,24)
               cookie=0x824d13a352a4e216, duration=163230.050s, table=0, n_packets=25179, n_bytes=2839586, idle_age=5, hard_age=65534, priority=3,in_port=2,dl_vlan=1023 actions=mod_vlan_vid:1,NORMAL
               cookie=0x824d13a352a4e216, duration=163236.775s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=2 actions=drop
               cookie=0x824d13a352a4e216, duration=163243.516s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
               cookie=0x824d13a352a4e216, duration=163242.953s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x824d13a352a4e216, duration=163215.636s, table=24, n_packets=7, n_bytes=294, idle_age=33540, hard_age=65534, priority=2,arp,in_port=5,arp_spa=10.0.0.6 actions=NORMAL

I cannot see any other changes after restarting q-agt, and these rules seem to be only for ARP spoofing; yet afterwards the instance can get an IP from DHCP.
I also googled this problem but failed to solve it.
Has anyone met this problem before, or does anyone have a suggestion on how to debug it?

Thanks a lot

BR//Huan
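The before/after flow comparison described above can be scripted. Below is a minimal sketch (not from the thread) that diffs two "ovs-ofctl dump-flows br-int" outputs while ignoring volatile per-flow counters, so only real rule changes show up:

```python
# Sketch (illustrative only): diff two "ovs-ofctl dump-flows" outputs taken
# before and after restarting q-agt, ignoring volatile per-flow counters
# (cookie, duration, packet/byte counts, ages) so only rule changes remain.
import re

VOLATILE = re.compile(
    r"(?:cookie|duration|n_packets|n_bytes|idle_age|hard_age)=[^,\s]+,?\s*")

def normalize(dump):
    """Return the set of flow rules with volatile counters stripped."""
    rules = set()
    for line in dump.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("NXST_FLOW"):
            continue  # skip the reply header line
        rules.add(VOLATILE.sub("", line).strip(", "))
    return rules

def diff_flows(before, after):
    """Return (added, removed) rules between two flow dumps."""
    b, a = normalize(before), normalize(after)
    return sorted(a - b), sorted(b - a)
```

Run against the dump quoted above taken before and after the agent restart, the "added" list should contain just the two ARP-spoofing rules (tables 0 and 24).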

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/7f5446d0/attachment.html>

From tomer.shtilman at alcatel-lucent.com  Wed Sep  9 08:10:59 2015
From: tomer.shtilman at alcatel-lucent.com (SHTILMAN, Tomer (Tomer))
Date: Wed, 9 Sep 2015 08:10:59 +0000
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
In-Reply-To: <55EEF653.4040909@redhat.com>
References: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>
 <55EEF653.4040909@redhat.com>
Message-ID: <94346481835D244BB7F6486C00E9C1BA2AE20194@FR711WXCHMBA06.zeu.alcatel-lucent.com>



>>On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:
>> Hi
>>
>> Currently in heat we have the ability to deploy a remote stack on a 
>> different region using OS::Heat::Stack and region_name in the context
>>
>> My question is regarding multi node , separate keystones, with 
>> keystone federation.
>>
>> Is there an option in a HOT template to send a stack to a different 
>> node, using the keystone federation feature?
>>
>> For example ,If I have two Nodes (N1 and N2) with separate keystones 
>> (and keystone federation), I would like to deploy a stack on N1 with a 
>> nested stack that will deploy on N2, similar to what we have now for 
>> regions

>Zane wrote:
>Short answer: no.

>Long answer: this is something we've wanted to do for a while, and a lot
>of folks have asked for it. We've been calling it multi-cloud (i.e.
>multiple keystones, as opposed to multi-region which is multiple regions
>with one keystone). In principle it's a small extension to the
>multi-region stacks (just add a way to specify the auth_url as well as
>the region), but the tricky part is how to authenticate to the other
>clouds. We don't want to encourage people to put their login credentials
>into a template. I'm not sure to what extent keystone federation could
>solve that - I suspect that it does not allow you to use a single token
>on multiple clouds, just that it allows you to obtain a token on multiple
>clouds using the same credentials? So basically this idea is on hold
>until someone comes up with a safe way to authenticate to the other
>clouds. Ideas/specs welcome.

>cheers,
>Zane.

Thanks Zane for your reply
My understanding was that with keystone federation, once you have a token issued by one keystone, the other one respects it and there is no need to re-authenticate with the second keystone. My thinking was more along the lines of changing the remote stack resource to include the heat_url of the other node in the context; I am not sure whether credentials are needed here.
We are currently building a multi-cloud setup with keystone federation in our lab, and I will check whether my understanding is correct. I am planning to propose a BP for this once it is clear.
Thanks again
Tomer
 


From chdent at redhat.com  Wed Sep  9 08:13:49 2015
From: chdent at redhat.com (Chris Dent)
Date: Wed, 9 Sep 2015 09:13:49 +0100 (BST)
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party
 CI	broken
In-Reply-To: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
References: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
Message-ID: <alpine.OSX.2.11.1509090909020.18154@seed.local>

On Wed, 9 Sep 2015, Eduard Matei wrote:

> 2015-09-08 21:56:33.585 | Error: Service ceilometer-acentral is not running
> 2015-09-08 21:56:33.585 | + for service in '$failures'
> 2015-09-08 21:56:33.586 | ++ basename
> /opt/stack/status/stack/ceilometer-acompute.failure
> 2015-09-08 21:56:33.587 | + service=ceilometer-acompute.failure
> 2015-09-08 21:56:33.587 | + service=ceilometer-acompute
> 2015-09-08 21:56:33.587 | + echo 'Error: Service ceilometer-acompute is not
> running'
> 2015-09-08 21:56:33.587 | Error: Service ceilometer-acompute is not running
> 2015-09-08 21:56:33.587 | + '[' -n
> '/opt/stack/status/stack/ceilometer-acentral.failure
> 2015-09-08 21:56:33.587 |
> /opt/stack/status/stack/ceilometer-acompute.failure' ']'
> 2015-09-08 21:56:33.587 | + die 1467 'More details about the above errors
> can be found with screen, with ./rejoin-stack.sh'
> 2015-09-08 21:56:33.587 | + local exitcode=0

This is because of a recent commit[1] on ceilometer that changed the
names of some of the agents. While the devstack plugin in ceilometer
is updated to reflect these changes, the ceilometer code that is
still in devstack itself is not. The removal of ceilometer from
devstack itself[2] is pending some infra updates[3].

I'll push up a couple of reviews to fix this, either on the
ceilometer or devstack side and we can choose which one we prefer.

[1] https://review.openstack.org/#/c/212498/
[2] https://review.openstack.org/#/c/196383/
[3] https://review.openstack.org/#/q/topic:bug/1489436,n,z

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From chdent at redhat.com  Wed Sep  9 08:21:12 2015
From: chdent at redhat.com (Chris Dent)
Date: Wed, 9 Sep 2015 09:21:12 +0100 (BST)
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party
 CI broken
In-Reply-To: <alpine.OSX.2.11.1509090909020.18154@seed.local>
References: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
 <alpine.OSX.2.11.1509090909020.18154@seed.local>
Message-ID: <alpine.OSX.2.11.1509090919540.18154@seed.local>


You can work around the problem by replacing lines like:

     enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api

with:

     enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer

in your local.conf

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From zhipengh512 at gmail.com  Wed Sep  9 08:22:50 2015
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 9 Sep 2015 16:22:50 +0800
Subject: [openstack-dev] [tricircle]Weekly Team Meeting 2015.09.09
Message-ID: <CAHZqm+UGTS-aX6evwC3-QoqNR+04COUh15SLRvn9PA1wzjwgRA@mail.gmail.com>

Hi Team,

Let's resume our weekly meeting today. As Eran suggested before, we will
mainly discuss the work we have now, and leave the design session for
another time slot :) See you at 1300 UTC today.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From chdent at redhat.com  Wed Sep  9 08:38:46 2015
From: chdent at redhat.com (Chris Dent)
Date: Wed, 9 Sep 2015 09:38:46 +0100 (BST)
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party
 CI	broken
In-Reply-To: <alpine.OSX.2.11.1509090909020.18154@seed.local>
References: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
 <alpine.OSX.2.11.1509090909020.18154@seed.local>
Message-ID: <alpine.OSX.2.11.1509090937130.18154@seed.local>

On Wed, 9 Sep 2015, Chris Dent wrote:

> I'll push up a couple of reviews to fix this, either on the
> ceilometer or devstack side and we can choose which one we prefer.

Here's the devstack fix: https://review.openstack.org/#/c/221634/

In discussion with other ceilometer cores we decided this was more
effective than reverting the ceilometer change.

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From john at johngarbutt.com  Wed Sep  9 08:41:09 2015
From: john at johngarbutt.com (John Garbutt)
Date: Wed, 9 Sep 2015 09:41:09 +0100
Subject: [openstack-dev] [nova] API v2.1 reference documentation
In-Reply-To: <CAD0KtVF+Mg1yNMFoPqtqgCC+t46v9PfwhH7waQBtb_cZ1EYLoQ@mail.gmail.com>
References: <A75480C2-CFDD-4574-B0F4-80911FE5877C@gmail.com>
 <CAA393vg8FeCNGDqgN_NUHUTXSy=U7VTqim4PJ8D00LBwkzBwdA@mail.gmail.com>
 <CAD0KtVF+Mg1yNMFoPqtqgCC+t46v9PfwhH7waQBtb_cZ1EYLoQ@mail.gmail.com>
Message-ID: <CABib2_ov7Pa5XfwksUjsPvSd=nDNuo6BLyNUCh16xSBHezPnEw@mail.gmail.com>

On 9 September 2015 at 03:43, Anne Gentle <annegentle at justwriteclick.com> wrote:
>
>
> On Tue, Sep 8, 2015 at 8:41 PM, Ken'ichi Ohmichi <ken1ohmichi at gmail.com>
> wrote:
>>
>> Hi Melanie,
>>
>> 2015-09-09 8:00 GMT+09:00 melanie witt <melwittt at gmail.com>:
>> > Hi All,
>> >
>> > With usage of v2.1 picking up (devstack) I find myself going to the API
>> > ref documentation [1] often and find it lacking compared with the similar v2
>> > doc [2]. I refer to this doc whenever I see a novaclient bug where something
>> > broke with v2.1 and I'm trying to find out what the valid request parameters
>> > are, etc.
>> >
>> > The main thing I notice is in the v2.1 docs, there isn't any request
>> > parameter list with descriptions like there is in v2. And I notice "create
>> > server" documentation doesn't seem to exist -- there is "Create multiple
>> > servers" but it doesn't provide much nsight about what the many request
>> > parameters are.
>> >
>> > I assume the docs are generated from the code somehow, so I'm wondering
>> > how we can get this doc improved? Any pointers would be appreciated.
>
>
> They are manual, and Alex made a list of how far behind the 2.1 docs are
> in a doc bug here:
>
> https://bugs.launchpad.net/openstack-api-site/+bug/1488144
>
> It's great to see Atsushi Sakai working hard on those, join him in the
> patching.
>
> We're still patching WADL for this release with the hope of adding Swagger
> for many services by October 15th -- however the WADL to Swagger tool we
> have now migrates WADL.

Mel, thanks for raising this one, it's super important.

As I understand it from the API meeting I have dropped in on, some of
this work is being tracked in here:
https://etherpad.openstack.org/p/nova-v2.1-api-doc

At the summit we agreed the focus was docs and getting v2.1 on by default.

That hasn't quite happened, but now is a good time to try and catch up
on the API docs. All help really appreciated there.

The recent issues with Horizon not working after upgrading
python-novaclient seem to be highlighting the need for docs on how to
use our client with the microversions too.

Thanks,
John

>
>> >
>> > Thanks,
>> > -melanie (irc: melwitt)
>> >
>> >
>> > [1] http://developer.openstack.org/api-ref-compute-v2.1.html
>> > [2] http://developer.openstack.org/api-ref-compute-v2.html
>>
>> Nice point.
>> "create server" API is most important and necessary to be described on
>> the document anyway.
>>
>> In the short term, we need to document it from the code by hand, and we
>> can learn the available parameters from the JSON-Schema code.
>> The base parameters can be found at
>>
>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L18
>> In addition, there are extensions which add more parameters, and we can
>> get to know them from
>>
>> https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas
>> If a module file contains the dict *server_create*, its entries are also
>> API parameters.
>> For example, the keypairs extension adds the "key_name" parameter, and we
>> can see it at
>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/keypairs.py
>>
>> In the long term, it would be great to generate this API parameter
>> documentation from the JSON-Schema directly.
>> JSON-Schema supports a "description" field, and we can use it to describe
>> the meaning of each parameter.
>> But that is the long-term approach; we need to write them by hand for now.
>>
>> Thanks
>> Ken Ohmichi
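The long-term idea Ken'ichi describes (generating parameter documentation directly from the JSON-Schema "description" fields) could look roughly like the sketch below. The schema fragment is invented for illustration; it is not Nova's real servers.py schema:

```python
# Sketch: render API request-parameter rows from a JSON-Schema dict.
# The example schema below is made up for illustration only.

def describe_parameters(schema):
    """Yield (name, type, required, description) for each property."""
    required = set(schema.get("required", []))
    for name, spec in sorted(schema.get("properties", {}).items()):
        yield (name, spec.get("type", "any"),
               name in required, spec.get("description", ""))

example_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "Server name."},
        "key_name": {"type": "string",
                     "description": "Keypair to inject (keypairs extension)."},
    },
    "required": ["name"],
}

for row in describe_parameters(example_schema):
    print(row)
```

A real generator would additionally merge the extension schemas (the *server_create* dicts mentioned above) into the base schema before rendering.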


From mrunge at redhat.com  Wed Sep  9 08:45:52 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Wed, 9 Sep 2015 10:45:52 +0200
Subject: [openstack-dev] [horizon] Concern about XStatic-bootswatch
 imports from fonts.googleapis.com
In-Reply-To: <55E89948.6050803@redhat.com>
References: <55E82E0A.1050507@debian.org> <55E89948.6050803@redhat.com>
Message-ID: <55EFF1C0.5070102@redhat.com>

On 03/09/15 21:02, Matthias Runge wrote:
> On 03/09/15 13:24, Thomas Goirand wrote:
>> Hi,
>>
>> When doing:
>> grep -r fonts.googleapis.com *
>>
>> there's 56 lines of this kind of result:
>> xstatic/pkg/bootswatch/data/cyborg/bootstrap.css:@import
>> url("https://fonts.googleapis.com/css?family=Roboto:400,700");
>>
>> This is wrong because:
I'd like to raise an issue with the Roboto fontface.


The xstatic package points to
https://github.com/choffmeister/roboto-fontface-bower/tree/develop/fonts

It's unclear where those files are coming from. The Roboto font is
apparently coming from Google:
https://www.google.com/fonts/specimen/Roboto

Unfortunately it's not clear where the .eot, .woff, and .svg files are
coming from, or how to recreate them from Google's published .ttf files.

On the other hand, Google's repository doesn't have tags or releases at
all. That makes it hard to detect a newer release (there is no release).




Why do we care about this?

Packaging a software package can be compared to building a car in the
middle of nowhere, where you simply have a plan, maybe some steel, and
some basic tools, but no access to a postal service, no flying in
special tools, etc.
According to the plan, you're building the car. If you need a tool, you
build that tool, too.

Matthias


From chdent at redhat.com  Wed Sep  9 09:00:25 2015
From: chdent at redhat.com (Chris Dent)
Date: Wed, 9 Sep 2015 10:00:25 +0100 (BST)
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party
 CI	broken
In-Reply-To: <alpine.OSX.2.11.1509090937130.18154@seed.local>
References: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
 <alpine.OSX.2.11.1509090909020.18154@seed.local>
 <alpine.OSX.2.11.1509090937130.18154@seed.local>
Message-ID: <alpine.OSX.2.11.1509091000000.18154@seed.local>

On Wed, 9 Sep 2015, Chris Dent wrote:

> On Wed, 9 Sep 2015, Chris Dent wrote:
>
>> I'll push up a couple of reviews to fix this, either on the
>> ceilometer or devstack side and we can choose which one we prefer.
>
> Here's the devstack fix: https://review.openstack.org/#/c/221634/

This is breaking ceilometer in the gate too, not just third party
CI.

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From berrange at redhat.com  Wed Sep  9 09:01:04 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Wed, 9 Sep 2015 10:01:04 +0100
Subject: [openstack-dev] [Nova] What is the no_device flag for in block
 device mapping?
In-Reply-To: <39E5672E03A1CB4B93936D1C4AA5E15D1DBC8401@G1W3640.americas.hpqcorp.net>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBC8401@G1W3640.americas.hpqcorp.net>
Message-ID: <20150909090104.GE7737@redhat.com>

On Tue, Sep 08, 2015 at 05:32:29PM +0000, Murray, Paul (HP Cloud) wrote:
> Hi All,
> 
> I'm wondering what the "no_device" flag is used for in the block device
> mappings. I had a dig around in the code but couldn't figure out why it
> is there. The name suggests an obvious meaning, but I've learnt not to
> guess too much from names.
> 
> Any pointers welcome.

I was going to suggest reading the docs

  http://docs.openstack.org/developer/nova/block_device_mapping.html

but they don't mention 'no_device' at all :-(

When we find out what it actually means we should document it there :-)

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From jordan.pittier at scality.com  Wed Sep  9 09:15:05 2015
From: jordan.pittier at scality.com (Jordan Pittier)
Date: Wed, 9 Sep 2015 11:15:05 +0200
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party
	CI broken
In-Reply-To: <alpine.OSX.2.11.1509091000000.18154@seed.local>
References: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
 <alpine.OSX.2.11.1509090909020.18154@seed.local>
 <alpine.OSX.2.11.1509090937130.18154@seed.local>
 <alpine.OSX.2.11.1509091000000.18154@seed.local>
Message-ID: <CAAKgrcmnFRLbmMYt3AFsqJMvxEqceUGs4=XJPfFCmNfe0UJJTw@mail.gmail.com>

Also, as I believe your CI is for Cinder, I recommend that you disable all
unneeded services (look at how DEVSTACK_LOCAL_CONFIG is used in
devstack-gate to add the proper disable_service lines).

On Wed, Sep 9, 2015 at 11:00 AM, Chris Dent <chdent at redhat.com> wrote:

> On Wed, 9 Sep 2015, Chris Dent wrote:
>
> On Wed, 9 Sep 2015, Chris Dent wrote:
>>
>> I'll push up a couple of reviews to fix this, either on the
>>> ceilometer or devstack side and we can choose which one we prefer.
>>>
>>
>> Here's the devstack fix: https://review.openstack.org/#/c/221634/
>>
>
> This is breaking ceilometer in the gate too, not just third party
> CI.
>
>
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From john at johngarbutt.com  Wed Sep  9 09:59:30 2015
From: john at johngarbutt.com (John Garbutt)
Date: Wed, 9 Sep 2015 10:59:30 +0100
Subject: [openstack-dev] [nova] [all] Updated String Freeze Guidelines
Message-ID: <CABib2_pcxZ=C+n3iEDe7CfdB2o0pkOvVJpK+BeaaS6EnyK5qHQ@mail.gmail.com>

Hi,

I have had quite a few comments from folks about the string freeze
being too strict.

It was noted that:
* users will prefer an untranslated log over a silent failure
* translators don't want existing strings changing while they are
translating them
* translators' tooling can cope OK with moved strings and new strings

After yesterday's cross project meeting, and hanging out in
#openstack-i18n I have come up with these updates to the String Freeze
Guidelines:
https://wiki.openstack.org/wiki/StringFreeze

Basically, we have a Soft String Freeze from Feature Freeze until RC1:
* Translators work through all existing strings during this time
* So avoid changing existing translatable strings
* Additional strings are generally OK

Then post RC1, we have a Hard String Freeze:
* No new strings, and no string changes
* Exceptions need discussion

Then at least 10 working days after RC1:
* we need a new RC candidate to include any updated strings

Is everyone happy with these changes?

Thanks,
John


From dougal at redhat.com  Wed Sep  9 10:15:59 2015
From: dougal at redhat.com (Dougal Matthews)
Date: Wed, 9 Sep 2015 06:15:59 -0400 (EDT)
Subject: [openstack-dev]  [TripleO] Releasing tripleo-common on PyPI
In-Reply-To: <539909743.18328347.1441793474317.JavaMail.zimbra@redhat.com>
Message-ID: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>

Hi,

The tripleo-common library appears to be registered on PyPI but hasn't yet had
a release[1]. I am not familiar with the release process - what do we need to
do to make sure it is regularly released with the other TripleO packages?

We will also want to do something similar with the new python-tripleoclient 
which doesn't seem to be registered on PyPI yet at all.

Thanks,
Dougal

[1]: https://pypi.python.org/pypi/tripleo-common


From tengqim at linux.vnet.ibm.com  Wed Sep  9 10:20:48 2015
From: tengqim at linux.vnet.ibm.com (Qiming Teng)
Date: Wed, 9 Sep 2015 18:20:48 +0800
Subject: [openstack-dev] [aodh][ceilometer] (re)introducing Aodh -
 OpenStack Alarming
In-Reply-To: <BLU436-SMTP32A3A7E84295D52C56310CDE530@phx.gbl>
References: <BLU436-SMTP32A3A7E84295D52C56310CDE530@phx.gbl>
Message-ID: <20150909093949.GA24936@qiming-ThinkCentre-M58p>

Hi, Gord,

Good to know there will be a team dedicated to this alarming service.
After reading your email, I still feel a need for some clarifications.

- According to [1], Aodh will be released as a standalone service,
  am I understanding this correctly?

- What is the official name for this new service when it stands on its
  own feet: "Telemetry Alarming" or just "Alarming" or something else?
  We will need a name for this to support it in the OpenStack SDK.

- Will there be a need to create endpoints for Aodh in keystone? Or is
  it just a 'library' for ceilometer, sharing the same API endpoint and
  the same client as ceilometer?

- The original email mentioned that "the client can still be used and
  redirect to Aodh". Could you please clarify where the redirection will
  happen? Is it a client-side redirection or a ceilometer-server-side
  redirection? I'm asking because we sometimes program against the REST
  APIs directly.

[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n64 


Regards,
  Qiming



From apevec at gmail.com  Wed Sep  9 10:22:29 2015
From: apevec at gmail.com (Alan Pevec)
Date: Wed, 9 Sep 2015 12:22:29 +0200
Subject: [openstack-dev] [requirements] attention requirements-cores,
 please look out for constraints updates
In-Reply-To: <CAJ3HoZ3T1oVxCNXMQONCmJCEq-NCoTOWU4gfMYXhXxYXRKjB2g@mail.gmail.com>
References: <CAJ3HoZ0eBWU0VWZ0-f3JY0ttqf2=O5Z+gGq5veW6V8LfjvKLEQ@mail.gmail.com>
 <559CE6F3.20507@openstack.org>
 <CAJ3HoZ2Mk5e_r=mTx5L=L+uFi3bjWDR9i+F3UBUGvDrCnuXhLw@mail.gmail.com>
 <1436396493-sup-2848@lrrr.local>
 <CAJ3HoZ3T1oVxCNXMQONCmJCEq-NCoTOWU4gfMYXhXxYXRKjB2g@mail.gmail.com>
Message-ID: <CAGi==UUiun7Q3f4YRe+hi5acPiHVG9jFA227LHaAX8x_zVhZsA@mail.gmail.com>

> I'd like to add in a lower-constraints.txt set of pins and actually
> start reporting on whether our lower bounds *work*.

Do you have a spec in progress for lower-constraints.txt?
It should help catch issues like https://review.openstack.org/221267
There are also lots of entries in global-requirements without a minimum
version set, although they should have one:
http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n226

Cheers,
Alan


From spasquier at mirantis.com  Wed Sep  9 10:25:42 2015
From: spasquier at mirantis.com (Simon Pasquier)
Date: Wed, 9 Sep 2015 12:25:42 +0200
Subject: [openstack-dev] [Fuel][Plugins] request for update of
	fuel-plugin-builder on pypi
Message-ID: <CAOq3GZVG--EwhQ5g1PqNrNPL=DvOZWZWNfeUvzQhcd25Y-K+cw@mail.gmail.com>

Hi,
It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
PyPI. We've moved some of the LMA plugins to use the v3 format.
Right now we have to install fpb from source, which unfortunately is hard
to automate in our tests (as already noted by Sergii [1]).
BR,
Simon
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html

From sgolovatiuk at mirantis.com  Wed Sep  9 10:34:17 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Wed, 9 Sep 2015 12:34:17 +0200
Subject: [openstack-dev] [Fuel][Plugins] request for update of
 fuel-plugin-builder on pypi
In-Reply-To: <CAOq3GZVG--EwhQ5g1PqNrNPL=DvOZWZWNfeUvzQhcd25Y-K+cw@mail.gmail.com>
References: <CAOq3GZVG--EwhQ5g1PqNrNPL=DvOZWZWNfeUvzQhcd25Y-K+cw@mail.gmail.com>
Message-ID: <CA+HkNVskdn_Mr_fXdvc866D7aC1UgzaYq9L1QoTp0yMHBQOERw@mail.gmail.com>

+1 to Simon

Also, the structure of fuel-plugin-builder should be refactored to follow
community standards: everything in the 'fuel_plugin_builder' directory
should be moved to the top of the repository.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Sep 9, 2015 at 12:25 PM, Simon Pasquier <spasquier at mirantis.com>
wrote:

> Hi,
> It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
> pypi. We've moved some of the LMA plugins to use the v3 format.
> Right now we have to install fpb from source which is hard to automate in
> our tests unfortunately (as already noted by Sergii [1]).
> BR,
> Simon
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From sean at dague.net  Wed Sep  9 10:53:19 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 9 Sep 2015 06:53:19 -0400
Subject: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1)
 fixes be applicable for v2.1 too?
In-Reply-To: <CAA393vgzTYYfb-YAvtHrFAjdVr5GL-sX7Qh4OG7S+SwNGvdFSg@mail.gmail.com>
References: <CACE3TKWnnCtjc-CM408zO4BLfG733Rz4s90ap69PdE2jmvNWmg@mail.gmail.com>
 <55EEBC48.6080200@dague.net>
 <CAA393vgzTYYfb-YAvtHrFAjdVr5GL-sX7Qh4OG7S+SwNGvdFSg@mail.gmail.com>
Message-ID: <55F00F9F.7000505@dague.net>

On 09/08/2015 08:15 PM, Ken'ichi Ohmichi wrote:
> 2015-09-08 19:45 GMT+09:00 Sean Dague <sean at dague.net>:
>> On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
>>> Hi All,
>>> 
>>> As we all knows, api-paste.ini default setting for /v2 was
>>> changed to run those on v2.1 (v2.0 on v2.1) which is really great
>>> think for easy code maintenance in future (removal of v2 code).
>>> 
>>> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some
>>> bugs were found[1] and fixed. But I think we should fix those
>>> only for v2 compatible mode not for v2.1.
>>> 
>>> For example bug#1491325, 'device' on volume attachment Request
>>> is optional param[2] (which does not mean 'null-able' is allowed)
>>> and v2.1 used to detect and error on usage of 'device' as "None".
>>> But as it was used as 'None' by many /v2 users and not to break
>>> those, we should allow 'None' on v2 compatible mode also. But we
>>> should not allow the same for v2.1.
>>> 
>>> IMO v2.1 strong input validation feature (which helps to make
>>> API usage in correct manner) should not be changed, and for v2
>>> compatible mode we should have another solution without affecting
>>> v2.1 behavior may be having different schema for v2 compatible
>>> mode and do the necessary fixes there.
>>> 
>>> Trying to know other's opinion on this or something I missed
>>> during any discussion.
>>> 
>>> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325 
>>> https://bugs.launchpad.net/nova/+bug/1491511
>>> 
>>> [2]:
>>> http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
>>
>> A lot of these issues need to be a case-by-case determination.
>>
>> In this particular case, we had the documentation, the nova code,
>> the clients, and the future.
>> 
>> The documentation: device is optional. That means it should be a
>> string or not there at all. The schema was set to enforce this on
>> v2.1
>> 
>> The nova code: device = None was accepted previously, because
>> device is a mandatory parameter all the way down the call stack. 2
>> layers in we default it to None if it wasn't specified.
>> 
>> The clients: both python-novaclient and ruby fog sent device=None
>> in the common case. While only 2 data points, this does demonstrate
>> this is more wide spread than just our buggy code.
>> 
>> The future: it turns out we really can't honor this parameter in
>> most cases anyway, and passing it just means causing bugs. This is
>> an artifact of the EC2 API that only works on specific (and
>> possibly forked) versions of Xen that Amazon runs. Most hypervisor
>> / guest relationships don't allow this to be set. The long term
>> direction is going to be removing it from our API.
>> 
>> Given that it seemed fine to relax this across all API. We screwed
>> up and didn't test this case correctly, and long term we're going
>> to dump it. So we don't want to honor 3 different versions of this
>> API, especially as no one seems written to work against the
>> documentation, but were written against the code in question. If
>> they write to the docs, they'll be fine. But the clients that are
>> out in the wild will be fine as well.
> 
> I think the case by case determination is fine, but current change 
> progress of relaxing validation seems wrong. In Kilo, we required
> nova-specs for relaxing v2.1 API validation like 
> https://review.openstack.org/#/c/126696/ and we had much enough
> discussion and we built a consensus about that. But we merged the
> above patch in just 2 working days without any nova-spec even if we
> didn't have a consensus about that v2.1 validation change requires
> microversion bump or not.
> 
> If we really need to relax validation thing for v2.0 compatible API, 
> please consider separating v2.0 API schema from v2.1 API schema. I
> have one idea about that like
> https://review.openstack.org/#/c/221129/
> 
> We worked for strict and consistent validation way on v2.1 API over
> 2 years, and I don't want to make it loose without enough thinking.

There also was no field data about what it broke. The strict schemas
were based on an assumed understanding of how people were interacting
with the API. But that wasn't tested until we merged a change to make
everyone in OpenStack use it.

And we found a couple of bugs in our assumptions. Those issues were
blocking other parts of the OpenStack ecosystem from merging any code.
Which is a pretty big deal. We did a lot of thinking about this one. It
also went to a Nova meeting and got discussed there.

I'm also in favor of the v2.0 schema patch, I just +2ed it. That doesn't
mean that we don't address real issues that will inhibit adoption. The
promise of v2.1 is that it was going to be the same surface as v2.0
except that stuff no one ever should have sent, or would send, would be
rejected on the surface. Which is a win from code complexity and
security. But in this particular case the schema made the wrong call
about how people were actually using this API. So we fixed that.

Being responsive to real users that we end up breaking by accident is
important.
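The separation Ken'ichi proposes (a looser schema used only for the v2.0-compat API) can be illustrated with a toy validator. This is stdlib-only Python invented for illustration, not Nova's real jsonschema-based code:

```python
# Toy illustration of the v2.1-vs-v2.0-compat validation split for the
# volumeAttachment 'device' parameter. Not Nova's real schema code.

V21_DEVICE = {"type": ("string",)}                # strict: string or absent
V20_COMPAT_DEVICE = {"type": ("string", "null")}  # also tolerates None

def validate_device(body, schema):
    """Return True if the optional 'device' field satisfies the schema."""
    if "device" not in body:
        return True  # optional parameter: absent is always fine
    value = body["device"]
    allowed = schema["type"]
    if value is None:
        return "null" in allowed
    return isinstance(value, str) and "string" in allowed
```

With this split, clients such as python-novaclient that send device=None keep working against the v2.0-compat endpoint, while v2.1 retains its strict input validation.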

	-Sean

-- 
Sean Dague
http://dague.net


From dtantsur at redhat.com  Wed Sep  9 10:58:04 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 9 Sep 2015 12:58:04 +0200
Subject: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI
In-Reply-To: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>
References: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>
Message-ID: <55F010BC.4040206@redhat.com>

On 09/09/2015 12:15 PM, Dougal Matthews wrote:
> Hi,
>
> The tripleo-common library appears to be registered on PyPI but hasn't yet had
> a release[1]. I am not familiar with the release process - what do we need to
> do to make sure it is regularly released with other TripleO packages?

I think this is a good start: 
https://github.com/openstack/releases/blob/master/README.rst

>
> We will also want to do something similar with the new python-tripleoclient
> which doesn't seem to be registered on PyPI yet at all.

And instack-undercloud.

>
> Thanks,
> Dougal
>
> [1]: https://pypi.python.org/pypi/tripleo-common
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From eduard.matei at cloudfounders.com  Wed Sep  9 10:58:59 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Wed, 9 Sep 2015 13:58:59 +0300
Subject: [openstack-dev] [Openstack-dev] Devstack broken - third party
	CI broken
In-Reply-To: <CAAKgrcmnFRLbmMYt3AFsqJMvxEqceUGs4=XJPfFCmNfe0UJJTw@mail.gmail.com>
References: <CAEOp6J-=mT7oP1Dhmib63AQpiZa01DBy6hvGOJEHvDAA3R0DsQ@mail.gmail.com>
 <alpine.OSX.2.11.1509090909020.18154@seed.local>
 <alpine.OSX.2.11.1509090937130.18154@seed.local>
 <alpine.OSX.2.11.1509091000000.18154@seed.local>
 <CAAKgrcmnFRLbmMYt3AFsqJMvxEqceUGs4=XJPfFCmNfe0UJJTw@mail.gmail.com>
Message-ID: <CAEOp6J9bn3Y4sbwyt_XDzjVMvYYLwL=4CTODNRT4q8ioXnvOPg@mail.gmail.com>

Line
export DEVSTACK_LOCAL_CONFIG="disable_service ceilometer-acompute
ceilometer-acentral ceilometer-collector ceilometer-api"
in the job config did the trick.
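For context, a sketch of what that setting does, assuming a devstack-gate-style wrapper that appends each statement from DEVSTACK_LOCAL_CONFIG to the localrc section of devstack's local.conf before stack.sh runs (this reconstruction is an assumption, not taken from the job itself):

```python
# Hypothetical reconstruction of how the job variable lands in local.conf.
devstack_local_config = ("disable_service ceilometer-acompute "
                         "ceilometer-acentral ceilometer-collector "
                         "ceilometer-api")

# The wrapper appends the statements under [[local|localrc]], so devstack
# skips starting the ceilometer services that were breaking the CI job.
local_conf = "[[local|localrc]]\n" + devstack_local_config + "\n"
print(local_conf)
```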

Thanks,
Eduard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/a50f2e6c/attachment.html>

From nstarodubtsev at mirantis.com  Wed Sep  9 11:07:42 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Wed, 9 Sep 2015 14:07:42 +0300
Subject: [openstack-dev] [murano][merlin] murano APIv2 and murano future ui
Message-ID: <CAAa8YgBBTiWnZ47ZzMMd_u__qet6j2w1-=hgarpGrkvTMBo=Qg@mail.gmail.com>

Hi all,
Yesterday at the weekly IRC meeting, the murano team decided to start
collecting ideas about murano APIv2 and murano's future UI. We have two
etherpads for this purpose:
1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2 ideas
2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for future
ui ideas

Feel free to write your ideas. If you have any questions you can reach me
in IRC.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/20e7a553/attachment.html>

From scroiset at mirantis.com  Wed Sep  9 11:23:03 2015
From: scroiset at mirantis.com (Swann Croiset)
Date: Wed, 9 Sep 2015 13:23:03 +0200
Subject: [openstack-dev] [Fuel][Plugins] request for update of
 fuel-plugin-builder on pypi
In-Reply-To: <CA+HkNVskdn_Mr_fXdvc866D7aC1UgzaYq9L1QoTp0yMHBQOERw@mail.gmail.com>
References: <CAOq3GZVG--EwhQ5g1PqNrNPL=DvOZWZWNfeUvzQhcd25Y-K+cw@mail.gmail.com>
 <CA+HkNVskdn_Mr_fXdvc866D7aC1UgzaYq9L1QoTp0yMHBQOERw@mail.gmail.com>
Message-ID: <CAOmgvhx58CpBToYDHUeyhu7v_oZkh3nRK+Y2VNoJv28aMqHD0Q@mail.gmail.com>

+2 to Sergii

And btw, let's create dedicated repos for plugin examples.

On Wed, Sep 9, 2015 at 12:34 PM, Sergii Golovatiuk <sgolovatiuk at mirantis.com
> wrote:

> +1 to Simon
>
> Also, the structure of fuel-plugin-builder should be refactored to match
> community standards: everything in the 'fuel_plugin_builder' directory
> should be moved to the top of the repository.
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Wed, Sep 9, 2015 at 12:25 PM, Simon Pasquier <spasquier at mirantis.com>
> wrote:
>
>> Hi,
>> It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
>> pypi. We've moved some of the LMA plugins to use the v3 format.
>> Right now we have to install fpb from source, which is unfortunately hard
>> to automate in our tests (as already noted by Sergii [1]).
>> BR,
>> Simon
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/76b9d284/attachment.html>

From rakhmerov at mirantis.com  Wed Sep  9 11:38:26 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Wed, 9 Sep 2015 17:38:26 +0600
Subject: [openstack-dev] [mistral] Mistral Liberty-3 milestone has been
	released
Message-ID: <4A908435-AB90-49A8-8166-473C5FEE4B6F@mirantis.com>

Hi,

The Mistral Liberty-3 milestone has been released! We've also released Mistral Client 1.0.2, which has some adjustments needed to use the new Mistral server.

Below are corresponding release pages where you can find downloadable artefacts and more detailed information about what has changed:
https://launchpad.net/mistral/liberty/liberty-3 <https://launchpad.net/mistral/liberty/liberty-3>
https://launchpad.net/python-mistralclient/liberty/1.0.2 <https://launchpad.net/python-mistralclient/liberty/1.0.2>

From now until the official Liberty release, let's focus on massive bug fixing, writing documentation, and whatever is left on the UI. The next release (RC1) is scheduled for 25 Sep, so there is not much time to relax.

Many thanks to the team for your hard work!

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/51898ea2/attachment.html>

From efedorova at mirantis.com  Wed Sep  9 11:42:37 2015
From: efedorova at mirantis.com (Ekaterina Chernova)
Date: Wed, 9 Sep 2015 14:42:37 +0300
Subject: [openstack-dev] [murano][merlin] murano APIv2 and murano future
	ui
In-Reply-To: <CAAa8YgBBTiWnZ47ZzMMd_u__qet6j2w1-=hgarpGrkvTMBo=Qg@mail.gmail.com>
References: <CAAa8YgBBTiWnZ47ZzMMd_u__qet6j2w1-=hgarpGrkvTMBo=Qg@mail.gmail.com>
Message-ID: <CAOFFu8b0v+ygZc-eJb-hELtBJsj25SWx7ffsWow6xPG-GTQHsA@mail.gmail.com>

Hi Nikolay!

Thanks for starting this activity! This is a really hot topic.
We also used to have a plan to migrate our API to Pecan [1].
This can also be discussed.

Do we have a blueprint for that? Could you please file one and attach the
etherpads to the new blueprint?

[1] -
https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme

Thanks,
Kate.

On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
nstarodubtsev at mirantis.com> wrote:

> Hi all,
> Yesterday at the weekly IRC meeting, the murano team decided to start
> collecting ideas about murano APIv2 and murano's future UI. We have two
> etherpads for this purpose:
> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2 ideas
> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
> future ui ideas
>
> Feel free to write your ideas. If you have any questions you can reach me
> in IRC.
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/74ab1dec/attachment.html>

From adanin at mirantis.com  Wed Sep  9 11:47:03 2015
From: adanin at mirantis.com (Andrey Danin)
Date: Wed, 9 Sep 2015 14:47:03 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
Message-ID: <CA+vYeFo0B=XgapahfiBAX=GOqHXpb1EXJEmDZNchKBmLfFwbOA@mail.gmail.com>

I disagree from the development point of view. Currently I just change
manifests on the Fuel node and redeploy the cluster to apply those changes.
With your proposal
I'll need to build a new package and add it to a repo every time I change
something.

On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Dear colleagues,
>
> Currently, we install fuel-libraryX.Y package(s) on the master node and
> then right before starting actual deployment we rsync [1] puppet modules
> (one of installed versions) from the master node to slave nodes. Such a
> flow makes things much more complicated than they could be if we installed
> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
> parameterized by repo urls (upstream + mos) and this pre-deployment task
> could be nothing more than just installing fuel-library package from mos
> repo defined for a cluster. We would not have several versions of
> fuel-library on the master node, we would not need that complicated upgrade
> stuff like we currently have for puppet modules.
>
> Please give your opinions on this.
>
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>
> Vladimir Kozhukalov
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
adanin at mirantis.com
skype: gcon.monolake
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/93442606/attachment.html>

From dpyzhov at mirantis.com  Wed Sep  9 11:48:02 2015
From: dpyzhov at mirantis.com (Dmitry Pyzhov)
Date: Wed, 9 Sep 2015 14:48:02 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
Message-ID: <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>

Vladimir,

thanks for bringing this up. It greatly correlates with the idea of
modularity. Everything related to an openstack release should be put in one
place and should be managed as a solid bundle on the master node. Package
repository is the first solution that comes to mind and it looks pretty
good. Puppet modules, openstack.yaml and maybe even serialisers should be
stored in packages in the openstack release repository. And eventually
every other piece of our software should get rid of release-specific logic.

On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Dear colleagues,
>
> Currently, we install fuel-libraryX.Y package(s) on the master node and
> then right before starting actual deployment we rsync [1] puppet modules
> (one of installed versions) from the master node to slave nodes. Such a
> flow makes things much more complicated than they could be if we installed
> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
> parameterized by repo urls (upstream + mos) and this pre-deployment task
> could be nothing more than just installing fuel-library package from mos
> repo defined for a cluster. We would not have several versions of
> fuel-library on the master node, we would not need that complicated upgrade
> stuff like we currently have for puppet modules.
>
> Please give your opinions on this.
>
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>
> Vladimir Kozhukalov
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/83242e2a/attachment.html>

From gal.sagie at gmail.com  Wed Sep  9 11:49:50 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Wed, 9 Sep 2015 14:49:50 +0300
Subject: [openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint
	announcement
In-Reply-To: <CA+wZVHSv87sPpEFj2jC1YM9_WtDfFDEdzZF8tHqYCs=110rGSw@mail.gmail.com>
References: <CA+wZVHSv87sPpEFj2jC1YM9_WtDfFDEdzZF8tHqYCs=110rGSw@mail.gmail.com>
Message-ID: <CAG9LJa7LDVijDkS4TU2JL=KJ1OMrbkmhynXVgMcJVBRmzyuxcQ@mail.gmail.com>

Hi Sukhdev,

The common sync framework is something I have also been thinking about for
some time now. I think it's a very good idea and would love to participate
in the talks (and hopefully the implementation as well).
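The reconciliation Sukhdev describes below boils down to diffing the Neutron DB view against the back-end driver's view after a restart. A minimal sketch of that core step, with hypothetical resource IDs and none of the TaskFlow wiring or HA locking a real framework would need:

```python
def reconcile(neutron_view, backend_view):
    # Both views map resource id -> resource data. Returns what the
    # back-end driver must create and delete to match the Neutron DB.
    to_create = {rid: data for rid, data in neutron_view.items()
                 if rid not in backend_view}
    to_delete = sorted(rid for rid in backend_view
                       if rid not in neutron_view)
    return to_create, to_delete

# After a back-end restart, net-1 is missing and net-9 is stale.
neutron_db = {"net-1": {"name": "a"}, "net-2": {"name": "b"}}
backend = {"net-2": {"name": "b"}, "net-9": {"name": "stale"}}
creates, deletes = reconcile(neutron_db, backend)
print(sorted(creates))  # ['net-1']
print(deletes)          # ['net-9']
```

A real implementation would additionally handle updates, ordering, and concurrent writers, which is where TaskFlow's flow and retry primitives would come in.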

Thanks
Gal.

On Wed, Sep 9, 2015 at 9:46 AM, Sukhdev Kapur <sukhdevkapur at gmail.com>
wrote:

> Folks,
>
> We are planning on having ML2 coding sprint on October 6 through 8, 2015.
> Some are calling it Liberty late-cycle sprint, others are calling it Mitaka
> early-cycle sprint.
>
> ML2 team has been discussing the issues related to synchronization of the
> Neutron DB resources with the back-end drivers. Several issues have been
> reported when multiple ML2 drivers are deployed in scaled HA deployments.
> The issues surface when either side (Neutron or the back-end HW/drivers)
> restarts and the resource view gets out of sync. There is no mechanism in
> Neutron or ML2 plugin which ensures the synchronization of the state
> between the front-end and back-end. The drivers either end up implementing
> their own solutions or they dump the issue on the operators to intervene
> and correct it manually.
>
> We plan on utilizing Task Flow to implement the framework in ML2 plugin
> which can be leveraged by ML2 drivers to achieve synchronization in a
> simplified manner.
>
> There are a couple of additional items on the sprint agenda, which are
> listed on the etherpad [1]. The details of the venue and schedule are
> listed on the etherpad as well. The sprint is hosted by Yahoo Inc.
> Whoever is interested in the topics listed on the etherpad is welcome to
> sign up for the sprint and join us in making this a reality.
>
> Additionally, we will utilize this sprint to formalize the design
> proposal(s) for the fish bowl session at Tokyo summit [2]
>
> Any questions/clarifications, please join us in our weekly ML2 meeting on
> Wednesday at 1600 UTC (9AM pacific time) at #openstack-meeting-alt
>
> Thanks
> -Sukhdev
>
> [1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
> [2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/ca877c51/attachment.html>

From nstarodubtsev at mirantis.com  Wed Sep  9 11:50:34 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Wed, 9 Sep 2015 14:50:34 +0300
Subject: [openstack-dev] [murano][merlin] murano APIv2 and murano future
	ui
In-Reply-To: <CAOFFu8b0v+ygZc-eJb-hELtBJsj25SWx7ffsWow6xPG-GTQHsA@mail.gmail.com>
References: <CAAa8YgBBTiWnZ47ZzMMd_u__qet6j2w1-=hgarpGrkvTMBo=Qg@mail.gmail.com>
 <CAOFFu8b0v+ygZc-eJb-hELtBJsj25SWx7ffsWow6xPG-GTQHsA@mail.gmail.com>
Message-ID: <CAAa8YgDNzxULh18xjS+5qQPybMa7r+-wv82LUhX_Q4zB6rh8Gw@mail.gmail.com>

Kate,
This bp is pretty old, but I think it suits our needs [1]. Yeah, I'll
attach etherpads to it.
The idea about pecan/wsme is really useful.
[1]: https://blueprints.launchpad.net/murano/+spec/api-vnext



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-09 14:42 GMT+03:00 Ekaterina Chernova <efedorova at mirantis.com>:

> Hi Nikolay!
>
> Thanks for starting this activity! This is a really hot topic.
> We also used to have plan to migrate our API to pecan. [1]
> This also can be discussed.
>
> Do we have a blueprint for that? Could you please file it and attach
> etherpad to a new blueprint.
>
> [1] -
> https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme
>
> Thanks,
> Kate.
>
> On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
> nstarodubtsev at mirantis.com> wrote:
>
>> Hi all,
>> Yesterday at the weekly IRC meeting, the murano team decided to start
>> collecting ideas about murano APIv2 and murano's future UI. We have two
>> etherpads for this purpose:
>> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2
>> ideas
>> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
>> future ui ideas
>>
>> Feel free to write your ideas. If you have any questions you can reach me
>> in IRC.
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/0e172c40/attachment.html>

From filip.blaha at hp.com  Wed Sep  9 11:53:40 2015
From: filip.blaha at hp.com (Filip Blaha)
Date: Wed, 9 Sep 2015 13:53:40 +0200
Subject: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core
In-Reply-To: <CAOCoZiZO+4f=+yrbcpH44rzkUe0+h6xZtaG8sm6QT1M1CuVr-g@mail.gmail.com>
References: <etPan.55e4d925.59528236.146@TefMBPr.local>
 <CAOnDsYPpN1XGQ-ZLsbxv36Y2JWi+meuWz4vXXY=u44oaawTTjw@mail.gmail.com>
 <CAKSp79yQfLg0=ZkhfGA895csbubEtBxKyD-jahrEmWrwFykypw@mail.gmail.com>
 <CAOFFu8aNYx-4mhnSA_4M7mDD5ndWNJuXnpQ5s1L0c7tSb7WdaA@mail.gmail.com>
 <etPan.55e5904c.61791e85.14d@pegasus.local>
 <CAM6FM9T-VRxqTgSbz3gcyEPA+F-+Hs3qCMqF2EpC85KvvwXvhw@mail.gmail.com>
 <CAOCoZiZO+4f=+yrbcpH44rzkUe0+h6xZtaG8sm6QT1M1CuVr-g@mail.gmail.com>
Message-ID: <55F01DC4.2050504@hp.com>

+1

On 09/08/2015 02:28 PM, Stan Lagun wrote:
> +1
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
>
> On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov 
> <ativelkov at mirantis.com <mailto:ativelkov at mirantis.com>> wrote:
>
>     +1. Well deserved.
>
>     --
>     Regards,
>     Alexander Tivelkov
>
>     On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin
>     <vryzhenkin at mirantis.com <mailto:vryzhenkin at mirantis.com>> wrote:
>
>         +1 from me ;)
>
>         -- 
>         Victor Ryzhenkin
>         Junior QA Engineer
>         freerunner on #freenode
>
>         On 1 September 2015 at 12:18:19, Ekaterina Chernova
>         (efedorova at mirantis.com <mailto:efedorova at mirantis.com>) wrote:
>
>>         +1
>>
>>         On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii
>>         <ddovbii at mirantis.com <mailto:ddovbii at mirantis.com>> wrote:
>>
>>             +1
>>
>>             2015-09-01 2:24 GMT+03:00 Serg Melikyan
>>             <smelikyan at mirantis.com <mailto:smelikyan at mirantis.com>>:
>>
>>                 +1
>>
>>                 On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev
>>                 <kzaitsev at mirantis.com
>>                 <mailto:kzaitsev at mirantis.com>> wrote:
>>
>>                     I'm pleased to nominate Nikolai for Murano core.
>>
>>                     He's been actively participating in the development
>>                     of murano during Liberty and is among the top 5
>>                     contributors during the last 90 days. He's also
>>                     leading the CloudFoundry integration initiative.
>>
>>                     Here are some useful links:
>>
>>                     Overall contribution:
>>                     http://stackalytics.com/?user_id=starodubcevna
>>                     List of reviews:
>>                     https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
>>                     Murano contribution during latest 90 days
>>                     http://stackalytics.com/report/contribution/murano/90
>>
>>                     Please vote with +1/-1 for approval/objections
>>
>>                     -- 
>>                     Kirill Zaitsev
>>                     Murano team
>>                     Software Engineer
>>                     Mirantis, Inc
>>
>>                     __________________________________________________________________________
>>                     OpenStack Development Mailing List (not for usage
>>                     questions)
>>                     Unsubscribe:
>>                     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>                     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>                     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>                 --
>>                 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
>>                 http://mirantis.com <http://mirantis.com/> |
>>                 smelikyan at mirantis.com <mailto:smelikyan at mirantis.com>
>>
>>                 +7 (495) 640-4904
>>                 <tel:%2B7%20%28495%29%C2%A0640-4904>, 0261
>>                 +7 (903) 156-0836 <tel:%2B7%20%28903%29%20156-0836>
>>
>>                 __________________________________________________________________________
>>                 OpenStack Development Mailing List (not for usage
>>                 questions)
>>                 Unsubscribe:
>>                 OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>                 <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>             __________________________________________________________________________
>>             OpenStack Development Mailing List (not for usage questions)
>>             Unsubscribe:
>>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>             <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>         __________________________________________________________________________
>>
>>         OpenStack Development Mailing List (not for usage questions)
>>         Unsubscribe:
>>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>
>>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/8e97bb79/attachment-0001.html>

From zzelle at gmail.com  Wed Sep  9 11:55:01 2015
From: zzelle at gmail.com (ZZelle)
Date: Wed, 9 Sep 2015 13:55:01 +0200
Subject: [openstack-dev] [Horizon] Feature Freeze Exception: shelving
	commands
Message-ID: <CAMS-DWjVO2wHDhbW=FgPr5UNb-AO6KH3scQgVGUNOwKeUoQ_Ew@mail.gmail.com>

Hi,

I want to propose the horizon-shelving-command feature [1][2] for a feature
freeze exception.

This is a small feature based on existing pause/suspend command
implementations.


[1] https://blueprints.launchpad.net/horizon/+spec/horizon-shelving-command
[2] https://review.openstack.org/220838

Cedric/ZZelle at IRC
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/210e4463/attachment.html>

From stephane.bisinger at gmail.com  Wed Sep  9 12:10:50 2015
From: stephane.bisinger at gmail.com (=?UTF-8?Q?St=C3=A9phane_Bisinger?=)
Date: Wed, 9 Sep 2015 14:10:50 +0200
Subject: [openstack-dev] [murano][merlin] murano APIv2 and murano future
	ui
In-Reply-To: <CAAa8YgDNzxULh18xjS+5qQPybMa7r+-wv82LUhX_Q4zB6rh8Gw@mail.gmail.com>
References: <CAAa8YgBBTiWnZ47ZzMMd_u__qet6j2w1-=hgarpGrkvTMBo=Qg@mail.gmail.com>
 <CAOFFu8b0v+ygZc-eJb-hELtBJsj25SWx7ffsWow6xPG-GTQHsA@mail.gmail.com>
 <CAAa8YgDNzxULh18xjS+5qQPybMa7r+-wv82LUhX_Q4zB6rh8Gw@mail.gmail.com>
Message-ID: <CAJ09J1-UfRVu7N4COaOBCxnO1W9X9a3bL6GiPw+vUBnBCqcV8A@mail.gmail.com>

Please note that some projects already using pecan+WSME are actually
thinking about finding something else, since WSME doesn't have much
activity and has its fair share of issues. If you missed it, check out this
conversation thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-August/073156.html

On Wed, Sep 9, 2015 at 1:50 PM, Nikolay Starodubtsev <
nstarodubtsev at mirantis.com> wrote:

> Kate,
> This bp is pretty old, but I think it suits our needs [1]. Yeah, I'll
> attach etherpads to it.
> The idea about pecan/wsme is really useful.
> [1]: https://blueprints.launchpad.net/murano/+spec/api-vnext
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2015-09-09 14:42 GMT+03:00 Ekaterina Chernova <efedorova at mirantis.com>:
>
>> Hi Nikolay!
>>
>> Thanks for starting this activity! This is a really hot topic.
>> We also used to have plan to migrate our API to pecan. [1]
>> This also can be discussed.
>>
>> Do we have a blueprint for that? Could you please file it and attach
>> etherpad to a new blueprint.
>>
>> [1] -
>> https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme
>>
>> Thanks,
>> Kate.
>>
>> On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
>> nstarodubtsev at mirantis.com> wrote:
>>
>>> Hi all,
>>> Yesterday at the weekly IRC meeting, the murano team decided to start
>>> collecting ideas about murano APIv2 and murano's future UI. We have two
>>> etherpads for this purpose:
>>> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2
>>> ideas
>>> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
>>> future ui ideas
>>>
>>> Feel free to write your ideas. If you have any questions you can reach
>>> me in IRC.
>>>
>>>
>>>
>>> Nikolay Starodubtsev
>>>
>>> Software Engineer
>>>
>>> Mirantis Inc.
>>>
>>>
>>> Skype: dark_harlequine1
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stéphane
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/1cc5eb73/attachment.html>

From nstarodubtsev at mirantis.com  Wed Sep  9 12:34:18 2015
From: nstarodubtsev at mirantis.com (Nikolay Starodubtsev)
Date: Wed, 9 Sep 2015 15:34:18 +0300
Subject: [openstack-dev] [murano][merlin] murano APIv2 and murano future
	ui
In-Reply-To: <CAJ09J1-UfRVu7N4COaOBCxnO1W9X9a3bL6GiPw+vUBnBCqcV8A@mail.gmail.com>
References: <CAAa8YgBBTiWnZ47ZzMMd_u__qet6j2w1-=hgarpGrkvTMBo=Qg@mail.gmail.com>
 <CAOFFu8b0v+ygZc-eJb-hELtBJsj25SWx7ffsWow6xPG-GTQHsA@mail.gmail.com>
 <CAAa8YgDNzxULh18xjS+5qQPybMa7r+-wv82LUhX_Q4zB6rh8Gw@mail.gmail.com>
 <CAJ09J1-UfRVu7N4COaOBCxnO1W9X9a3bL6GiPw+vUBnBCqcV8A@mail.gmail.com>
Message-ID: <CAAa8YgA0k-WznqPq8UPfYBacL_pgGv-9d766LEXC6pfsqM_MyA@mail.gmail.com>

Thanks, Stéphane. That's a good point.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-09 15:10 GMT+03:00 Stéphane Bisinger <stephane.bisinger at gmail.com>:

> Please note that some projects already using pecan+WSME are actually
> thinking about finding something else, since WSME doesn't have much
> activity and has its fair share of issues. If you missed it, check out this
> conversation thread:
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/073156.html
>
> On Wed, Sep 9, 2015 at 1:50 PM, Nikolay Starodubtsev <
> nstarodubtsev at mirantis.com> wrote:
>
>> Kate,
>> This bp is pretty old, but I think it suits our needs [1]. Yeah, I'll
>> attach etherpads to it.
>> The idea about pecan/wsme is really useful.
>> [1]: https://blueprints.launchpad.net/murano/+spec/api-vnext
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> 2015-09-09 14:42 GMT+03:00 Ekaterina Chernova <efedorova at mirantis.com>:
>>
>>> Hi Nikolay!
>>>
>>> Thanks for starting this activity! This is a really hot topic.
>>> We also used to have plan to migrate our API to pecan. [1]
>>> This also can be discussed.
>>>
>>> Do we have a blueprint for that? Could you please file it and attach
>>> etherpad to a new blueprint.
>>>
>>> [1] -
>>> https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme
>>>
>>> Thanks,
>>> Kate.
>>>
>>> On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
>>> nstarodubtsev at mirantis.com> wrote:
>>>
>>>> Hi all,
>>>> Yesterday on IRC weekly meeting murano team decided to start collecting
>>>> ideas about murano APIv2 and murano future ui. We have two etherpads for
>>>> this purpose:
>>>> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2
>>>> ideas
>>>> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
>>>> future ui ideas
>>>>
>>>> Feel free to write your ideas. If you have any questions you can reach
>>>> me in IRC.
>>>>
>>>>
>>>>
>>>> Nikolay Starodubtsev
>>>>
>>>> Software Engineer
>>>>
>>>> Mirantis Inc.
>>>>
>>>>
>>>> Skype: dark_harlequine1
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Stéphane
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/99d17639/attachment.html>

From ebogdanov at mirantis.com  Wed Sep  9 12:35:00 2015
From: ebogdanov at mirantis.com (Eugene Bogdanov)
Date: Wed, 09 Sep 2015 15:35:00 +0300
Subject: [openstack-dev] [Fuel] 7.0 Release - Hard Code Freeze in action
Message-ID: <55F02774.803@mirantis.com>

Hello everyone,

Please be informed that Hard Code Freeze for Fuel 7.0 Release is 
officially in action and the following changes have been applied:

1. Stable/7.0 branch was created for the following repos:

fuel-main
fuel-library
fuel-web
fuel-ostf
fuel-astute
fuel-qa
python-fuelclient
fuel-agent
fuel-nailgun-agent
fuel-mirror

2. Development Focus in LP is now changed to 8.0.
3. 7.0 builds are now switched to stable/7.0 branch and new Jenkins jobs 
are created to make builds from master (8.0) [1]. Note that 8.0 builds 
are based on Liberty release and therefore are highly unstable because 
Liberty packaging is currently in progress.

Bug reporters, please ensure you target both master and 7.0 (stable/7.0) 
milestones from now on when reporting bugs. Also, please remember that all 
fixes for the stable/7.0 branch should first be applied to master (8.0) and 
then cherry-picked to stable/7.0. As always, please ensure that you do 
NOT merge changes to the stable branch first. It always has to be a backport 
with the same Change-Id. Please see more on this at [2].
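
The "same Change-Id" requirement is what lets Gerrit link a backport to its
master change. A minimal sketch of that invariant (hypothetical helper names,
not part of any Fuel tooling; `git cherry-pick -x` keeps the original commit
message, so the footer survives):

```python
import re

def change_id(commit_message):
    """Extract the Gerrit Change-Id footer (an 'I' plus 40 hex digits)."""
    match = re.search(r"^Change-Id: (I[0-9a-f]{40})$",
                      commit_message, re.MULTILINE)
    return match.group(1) if match else None

def is_valid_backport(master_message, stable_message):
    """True only if the stable commit carries the same Change-Id as the
    master commit it was cherry-picked from."""
    cid = change_id(master_message)
    return cid is not None and cid == change_id(stable_message)
```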

-- 
EugeneB

[1] https://ci.fuel-infra.org/view/ISO/
[2] 
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/42b3c8ef/attachment.html>

From ikalnitsky at mirantis.com  Wed Sep  9 12:36:08 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Wed, 9 Sep 2015 15:36:08 +0300
Subject: [openstack-dev] [Fuel][Plugins] request for update of
 fuel-plugin-builder on pypi
In-Reply-To: <CAOmgvhx58CpBToYDHUeyhu7v_oZkh3nRK+Y2VNoJv28aMqHD0Q@mail.gmail.com>
References: <CAOq3GZVG--EwhQ5g1PqNrNPL=DvOZWZWNfeUvzQhcd25Y-K+cw@mail.gmail.com>
 <CA+HkNVskdn_Mr_fXdvc866D7aC1UgzaYq9L1QoTp0yMHBQOERw@mail.gmail.com>
 <CAOmgvhx58CpBToYDHUeyhu7v_oZkh3nRK+Y2VNoJv28aMqHD0Q@mail.gmail.com>
Message-ID: <CACo6NWBO5RVG=imVLFCN7LNHAOPM3gV6hTczz_pNNDNiUqTVtg@mail.gmail.com>

Hi guys,

I'm going to wait for the patch [1] and then make a FPB release.

Regarding repo restructuring.. We do have an issue, and IIRC it's
targeted to 8.0.

[1]: https://review.openstack.org/#/c/221434/

Thanks,
Igor

On Wed, Sep 9, 2015 at 2:23 PM, Swann Croiset <scroiset at mirantis.com> wrote:
> +2 to sergii
>
> and btw create dedicated repos for plugin examples
>
>
> On Wed, Sep 9, 2015 at 12:34 PM, Sergii Golovatiuk
> <sgolovatiuk at mirantis.com> wrote:
>>
>> +1 to Simon
>>
>> Also, the structure of fuel-plugin-builder should be refactored to match
>> community standards: everything in the 'fuel_plugin_builder' directory
>> should be moved to the top of the repository.
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Wed, Sep 9, 2015 at 12:25 PM, Simon Pasquier <spasquier at mirantis.com>
>> wrote:
>>>
>>> Hi,
>>> It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
>>> pypi. We've moved some of the LMA plugins to use the v3 format.
>>> Right now we have to install fpb from source which is hard to automate in
>>> our tests unfortunately (as already noted by Sergii [1]).
>>> BR,
>>> Simon
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From tdurakov at mirantis.com  Wed Sep  9 12:40:29 2015
From: tdurakov at mirantis.com (Timofei Durakov)
Date: Wed, 9 Sep 2015 15:40:29 +0300
Subject: [openstack-dev] [nova] CI for reliable live-migration
In-Reply-To: <6C050B80D65D294AA85ECBB3DB2673FF127F3369@G9W0757.americas.hpqcorp.net>
References: <CAHsr+ix=i3oxZUeP-pe2SaRmww0YiwQ_VvkyLtd_UY7TD0e-6g@mail.gmail.com>
 <55DDD8BD.1090701@linux.vnet.ibm.com>
 <CAHXdxOfdKaqbUnvj6V6P-7Xi7-EWHTP1UGHTNjSRMEMvbzkcEQ@mail.gmail.com>
 <6C050B80D65D294AA85ECBB3DB2673FF127F3369@G9W0757.americas.hpqcorp.net>
Message-ID: <CAHsr+iwfNJ+bOSs8cKE8Rz2X=PDMUKX6bAFVMBb4vVzm=pf_zw@mail.gmail.com>

Hello,
Update for gate-tempest-dsvm-multinode-full job.
Here are the top 12 failing tests over the past week:
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_resize_server_from_manual_to_auto:
14
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_resize_server_from_auto_to_manual:
14
tempest.scenario.test_server_advanced_ops.TestServerAdvancedOps.test_resize_server_confirm:
12
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert:
12
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm:
12
tempest.api.compute.admin.test_live_migration.LiveBlockMigrationTestJSON.test_live_block_migration_paused:
12
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_verify_resize_state:
12
tempest.api.compute.admin.test_migrations.MigrationsAdminTest.test_list_migrations_in_flavor_resize_situation:
12
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped:
12
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern:
10
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern:
10
tempest.api.compute.admin.test_live_migration.LiveBlockMigrationTestJSON.test_live_block_migration:
10


Full list of failing tests: http://xsnippet.org/360947/
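
A list like the one above is just a failure tally per test; for illustration,
here is how such a top-N count could be produced in a few lines of Python (the
`results` list and test names are made-up stand-ins for the real job data,
which comes from the gate logs):

```python
from collections import Counter

# Hypothetical raw data: one (test_name, status) record per run in the period.
results = [
    ("tempest.api.compute.test_resize_manual_to_auto", "FAILURE"),
    ("tempest.api.compute.test_resize_manual_to_auto", "FAILURE"),
    ("tempest.scenario.test_volume_boot_pattern", "FAILURE"),
    ("tempest.scenario.test_volume_boot_pattern", "SUCCESS"),
]

# Count only failed runs, then print the N most frequent offenders.
failures = Counter(name for name, status in results if status == "FAILURE")
for name, count in failures.most_common(12):
    print("%s: %d" % (name, count))
```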


On Fri, Aug 28, 2015 at 12:14 AM, Kraminsky, Arkadiy <
arkadiy.kraminsky at hp.com> wrote:

> Hello,
>
> I'm a new developer on the OpenStack project and am in the process of
> creating live migration CI for HP's 3PAR and Lefthand backends. I noticed
> you guys are looking for someone to pick up Joe Gordon's change for volume
> backed live migration tests and we can sure use something like this. I can
> take a look into the change, and see what I can do. :)
>
> Thanks,
>
> Arkadiy Kraminsky
> ________________________________
> From: Joe Gordon [joe.gordon0 at gmail.com]
> Sent: Wednesday, August 26, 2015 9:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] CI for reliable live-migration
>
>
>
> On Wed, Aug 26, 2015 at 8:18 AM, Matt Riedemann <
> mriedem at linux.vnet.ibm.com<mailto:mriedem at linux.vnet.ibm.com>> wrote:
>
>
> On 8/26/2015 3:21 AM, Timofei Durakov wrote:
> Hello,
>
> Here is the situation: nova has a live-migration feature but doesn't have
> a CI job to cover it with functional tests, only
> gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
> block-migration only.
> The problem here is that live-migration can behave differently depending
> on how the instance was booted (volume-backed/ephemeral) and how the
> environment is configured (a shared instance directory such as NFS, or RBD
> used to store the ephemeral disk, or neither, in which case the user would
> pass the --block-migrate flag). To claim that we have reliable
> live-migration in nova, we should check it at least on envs with RBD or
> NFS, as these are more popular than envs without any shared storage.
> Here are the steps for that:
>
>  1. make  gate-tempest-dsvm-multinode-full voting, as it looks OK for
>     block-migration testing purposes;
>
> When we are ready to make multinode voting we should remove the equivalent
> single node job.
>
>
> If it's been stable for awhile then I'd be OK with making it voting on
> nova changes, I agree it's important to have at least *something* that
> gates on multi-node testing for nova since we seem to break this a few
> times per release.
>
> Last I checked it isn't as stable is single node yet:
> http://jogo.github.io/gate/multinode [0].  The data going into graphite
> is a bit noisy so this may be a red herring, but at the very least it needs
> to be investigated. When I was last looking into this there were at least
> two known bugs:
>
> https://bugs.launchpad.net/nova/+bug/1445569
> <https://bugs.launchpad.net/nova/+bug/1445569>
> https://bugs.launchpad.net/nova/+bug/1462305
>
>
> [0]
> http://graphite.openstack.org/graph/?from=-36hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-full%27),%27orange%27)&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-multinode-full%27),%27brown%27)&title=Check%20Failure%20Rates%20(36%20hours)&_t=0.48646087432280183
> <
> http://graphite.openstack.org/graph/?from=-36hours&height=500&until=now&width=800&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.%7BSUCCESS,FAILURE%7D)),%275hours%27),%20%27gate-tempest-dsvm-full%27),%27orange%27)&target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.%7BSUCCESS,FAILURE%7D)),%275hours%27),%20%27gate-tempest-dsvm-multinode-full%27),%27brown%27)&title=Check%20Failure%20Rates%20(36%20hours)&_t=0.48646087432280183
> >
>
>
>  2. contribute to tempest to cover volume-backed instances live-migration;
>
> jogo has had a patch up for this for awhile:
>
> https://review.openstack.org/#/c/165233/
>
> Since it's not full time on openstack anymore I assume some help there in
> picking up the change would be appreciated.
>
> yes please
>
>
>  3. make another job with rbd for storing ephemerals, it also requires
>     changing tempest config;
>
> We already have a voting ceph job for nova - can we turn that into a
> multi-node testing job and run live migration with shared storage using
> that?
>
>  4. make job with nfs for ephemerals.
>
> Can't we use a multi-node ceph job (#3) for this?
>
>
> These steps should help us to improve current situation with
> live-migration.
>
> --
> Timofey.
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/8cff396f/attachment.html>

From nathan.s.reller at gmail.com  Wed Sep  9 12:40:48 2015
From: nathan.s.reller at gmail.com (Nathan Reller)
Date: Wed, 9 Sep 2015 08:40:48 -0400
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican
	core
In-Reply-To: <55EF091F.9080601@rackspace.com>
References: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
 <55EF091F.9080601@rackspace.com>
Message-ID: <CAMKdHYoq-6A5REiG0NsyBPZ627r9ne7M8HDx-2tci-NC0sOmrg@mail.gmail.com>

+1

Dave is a great member of the team, and I think he has earned it.

-Nate

On Tue, Sep 8, 2015 at 12:13 PM, Douglas Mendizábal <
douglas.mendizabal at rackspace.com> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> +1
>
> Dave has been a great asset to the team, and I think he would make an
> excellent core reviewer.
>
> - - Douglas Mendizábal
>
> On 9/8/15 11:05 AM, Juan Antonio Osorio wrote:
> > I'd like to nominate Dave Mccowan for the Barbican core review
> > team.
> >
> > He has been an active contributor both in doing relevant code
> > pieces and making useful and thorough reviews; And so I think he
> > would make a great addition to the team.
> >
> > Please bring the +1's :D
> >
> > Cheers!
> >
> > -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com
> > <mailto:jaosorior at gmail.com>
> >
> >
> >
> > ______________________________________________________________________
> ____
> >
> >
> >
> OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -----BEGIN PGP SIGNATURE-----
> Comment: GPGTools - https://gpgtools.org
>
> iQIcBAEBCgAGBQJV7wkfAAoJEB7Z2EQgmLX7X+IP/AtYTxcx0u+O6MMLDU1VcGZg
> 5ksCdn1bosfuqJ/X/QWplHBSG8BzllwciHm7YJxIY94MaAlThk3Zw6UDKKkBMqIt
> Qag09Z868LPl9/pll0whR5fVa052zSMq/QYWTnpgwpAgQduKNe4KaR1ZKhtBBbAJ
> BvjyKEa2dJLA6LIMXxcxpoCAKSeORM5lce19kHHhWyqq9v5A89U6GHMgwRAa2fGN
> 7RyYmlOrmxh6TyJQX9Xl+w9y5WPAbxaUqC0MYEkLMpa7VnGf2pEangkN0LUAJO2x
> NxwHa73b2LA8K1+4hwTvZO28sRnyMHwjSpqvpGt60FXkgi4dLyyy8gR6gsO49EDB
> QOSwpwyFHzA//iuMl72pAD6uMzK0SCECtEu2000l0p3WEXS1i0z7p9VTfw4FySqb
> V0S/IeSFfkt09TK2DoOSzXAvBZjsLz9gjRbRIv2dx0QTTmN5JpihOeoUojn24aDV
> 86AshlhoImJGOX16MwRL+T6LCindkczGe4Faz7WzmBomEJ7SOY6pzDbyEBLYcqzu
> crvrLt2D1HmaygFGS37lVCqxlIegwsnZHGIe+Jtr8pDIDSW37ig4LZIDVra2/lj9
> E7/fWYCDqbSIUWYG2jMr0/3eQQwZCj4kNvtWaTlNFmTPJZAEYpSN3rBhkfWBgsLv
> mqBOM4IeR4EqaqaC2og7
> =jL8d
> -----END PGP SIGNATURE-----
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/be72e23d/attachment.html>

From julien at danjou.info  Wed Sep  9 12:42:23 2015
From: julien at danjou.info (Julien Danjou)
Date: Wed, 09 Sep 2015 14:42:23 +0200
Subject: [openstack-dev] [aodh][ceilometer] (re)introducing Aodh -
	OpenStack Alarming
In-Reply-To: <20150909093949.GA24936@qiming-ThinkCentre-M58p> (Qiming Teng's
 message of "Wed, 9 Sep 2015 18:20:48 +0800")
References: <BLU436-SMTP32A3A7E84295D52C56310CDE530@phx.gbl>
 <20150909093949.GA24936@qiming-ThinkCentre-M58p>
Message-ID: <m0zj0vrcgg.fsf@danjou.info>

On Wed, Sep 09 2015, Qiming Teng wrote:

> - According to [1], Aodh will be released as a standalone service,
>   am I understanding this correctly?

Yes.

> - What is the official name for this new serivce when it stands on its
>   own feet: "Telemetry Alarming" or just "Alarming" or something else?
>   We will need a name for this to support in OpenStack SDK.

I think it would be "alarming". Does that sound good enough to
everyone?
I'm not a native speaker and I'm never sure if "alarming" is a good term
here.

> - There will be a need to create endpoints for Aodh in keystone? Or it
>   is just a 'library' for ceilometer, sharing the same API endpoint and
>   the same client with ceilometer?

Yes, you need to create an endpoint in Keystone.

> - The original email mentioned that "the client can still be used and
>   redirect to Aodh". Could you please clarify where the redirection will
>   happen? It is a client side redirection or a ceilometer-server side
>   redirection? I'm asking this because we sometimes program the REST
>   APIs directly.

It's actually both: we do the redirect on the client side, but if people
use the REST API directly there's also a 301 code returned.
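
For anyone programming the REST API directly, the redirect is plain HTTP: on a
301, retry the request at the URL in the Location header. A sketch of that
client-side logic (the `request_fn` interface and example URLs are assumptions
for illustration, not the actual client code):

```python
def follow_alarm_redirect(request_fn, url):
    """Issue a request; if the old Ceilometer endpoint answers 301 (alarms
    moved to Aodh), retry once at the URL given in the Location header.

    request_fn(url) -> (status, headers, body) is an assumed interface.
    """
    status, headers, body = request_fn(url)
    if status == 301:
        status, headers, body = request_fn(headers["Location"])
    return status, body

# Fake transport standing in for a real HTTP client:
def fake_request(url):
    if url == "http://ceilometer.example/v2/alarms":
        return 301, {"Location": "http://aodh.example/v2/alarms"}, ""
    return 200, {}, '[{"alarm_id": "42"}]'
```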

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/075d444d/attachment.pgp>

From vkozhukalov at mirantis.com  Wed Sep  9 12:47:12 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Wed, 9 Sep 2015 15:47:12 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
Message-ID: <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>

Andrey,

This change is going to make things even easier. Currently you don't need
to build the fuel-library package manually; Perestroika is going to do it for
you. It builds the necessary packages within minutes for every review request,
and the packaging CI even tests them for you. You just need to make the
necessary changes not on the master node but on your MacBook using your
favorite editor, then commit the change and send the patch for review. If you
want to test the patch manually, you just need to append the CR repo
(example here [1]) to the list of repos you define for your cluster and
start deployment. Anyway, you still have rsync, mcollective and other plain
old tools to run deployment manually.

[1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/



Vladimir Kozhukalov

On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov <dpyzhov at mirantis.com> wrote:

> Vladimir,
>
> thanks for bringing this up. It greatly correlates with the idea of
> modularity. Everything related to an openstack release should be put in one
> place and should be managed as a solid bundle on the master node. Package
> repository is the first solution that comes to mind and it looks pretty
> good. Puppet modules, openstack.yaml and maybe even serialisers should be
> stored in packages in the openstack release repository. And eventually
> every other piece of our software should get rid of release-specific logic.
>
> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>> then right before starting actual deployment we rsync [1] puppet modules
>> (one of installed versions) from the master node to slave nodes. Such a
>> flow makes things much more complicated than they could be if we installed
>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>> could be nothing more than just installing fuel-library package from mos
>> repo defined for a cluster. We would not have several versions of
>> fuel-library on the master node, we would not need that complicated upgrade
>> stuff like we currently have for puppet modules.
>>
>> Please give your opinions on this.
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>
>> Vladimir Kozhukalov
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/57830d3b/attachment.html>

From derekh at redhat.com  Wed Sep  9 13:09:20 2015
From: derekh at redhat.com (Derek Higgins)
Date: Wed, 09 Sep 2015 14:09:20 +0100
Subject: [openstack-dev] [TripleO] trello
In-Reply-To: <55EF0070.4040309@redhat.com>
References: <55EF0070.4040309@redhat.com>
Message-ID: <55F02F80.50401@redhat.com>



On 08/09/15 16:36, Derek Higgins wrote:
> Hi All,
>
>     Some of ye may remember some time ago we used to organize TripleO
> based jobs/tasks on a trello board[1], at some stage this board fell out
> of use (the exact reason I can't put my finger on). This morning I was
> putting a list of things together that need to be done in the area of CI
> and needed somewhere to keep track of it.
>
> I propose we get back to using this trello board and each of us add
> cards at the very least for the things we are working on.
>
> This should give each of us a lot more visibility into what is going
> on in the tripleo project currently, unless I hear any objections,
> tomorrow I'll start archiving all cards on the boards and removing
> people no longer involved in tripleo. We can then start adding items and
> anybody who wants in can be added again.

This is now done, see
https://trello.com/tripleo

Please ping me on irc if you want to be added.

>
> thanks,
> Derek.
>
> [1] - https://trello.com/tripleo
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From adanin at mirantis.com  Wed Sep  9 13:15:23 2015
From: adanin at mirantis.com (Andrey Danin)
Date: Wed, 9 Sep 2015 16:15:23 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
Message-ID: <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>

I don't think juggling with repos and pull requests is easier than directly
editing files on the Fuel node. Do we have Perestroika installed on the Fuel
node in 7.0?

On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Andrey,
>
> This change is going to make things even easier. Currently you don't need
> to build the fuel-library package manually; Perestroika is going to do it for
> you. It builds the necessary packages within minutes for every review request,
> and the packaging CI even tests them for you. You just need to make the
> necessary changes not on the master node but on your MacBook using your
> favorite editor, then commit the change and send the patch for review. If you
> want to test the patch manually, you just need to append the CR repo
> (example here [1]) to the list of repos you define for your cluster and
> start deployment. Anyway, you still have rsync, mcollective and other plain
> old tools to run deployment manually.
>
> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov <dpyzhov at mirantis.com>
> wrote:
>
>> Vladimir,
>>
>> thanks for bringing this up. It greatly correlates with the idea of
>> modularity. Everything related to an openstack release should be put in one
>> place and should be managed as a solid bundle on the master node. Package
>> repository is the first solution that comes to mind and it looks pretty
>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>> stored in packages in the openstack release repository. And eventually
>> every other piece of our software should get rid of release-specific logic.
>>
>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>> vkozhukalov at mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>>> then right before starting actual deployment we rsync [1] puppet modules
>>> (one of installed versions) from the master node to slave nodes. Such a
>>> flow makes things much more complicated than they could be if we installed
>>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>>> could be nothing more than just installing fuel-library package from mos
>>> repo defined for a cluster. We would not have several versions of
>>> fuel-library on the master node, we would not need that complicated upgrade
>>> stuff like we currently have for puppet modules.
>>>
>>> Please give your opinions on this.
>>>
>>>
>>> [1]
>>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>>
>>> Vladimir Kozhukalov
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
adanin at mirantis.com
skype: gcon.monolake
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/ace39e3c/attachment.html>

From hejie.xu at intel.com  Wed Sep  9 13:28:42 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Wed, 9 Sep 2015 21:28:42 +0800
Subject: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1)
	fixes be applicable for v2.1 too?
In-Reply-To: <55EEBC48.6080200@dague.net>
References: <CACE3TKWnnCtjc-CM408zO4BLfG733Rz4s90ap69PdE2jmvNWmg@mail.gmail.com>
 <55EEBC48.6080200@dague.net>
Message-ID: <1ABB5BE6-B583-4B02-A6BE-928DCF9EB578@intel.com>


> On Sep 8, 2015, at 6:45 PM, Sean Dague <sean at dague.net> wrote:
> 
> On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
>> Hi All,
>> 
>> As we all know, the api-paste.ini default setting for /v2 was changed to
>> run it on v2.1 (v2.0 on v2.1), which is a really great thing for easy
>> code maintenance in the future (removal of the v2 code).
>> 
>> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some bugs
>> were found[1] and fixed. But I think we should fix those only for v2
>> compatible mode not for v2.1.
>> 
>> For example, in bug #1491325, 'device' in the volume attachment request is
>> an optional param[2] (which does not mean 'null' is allowed), and
>> v2.1 used to detect and error on usage of 'device' as "None". But as
>> it was used as 'None' by many /v2 users, so as not to break them, we
>> should also allow 'None' in v2 compatible mode. But we should not
>> allow the same for v2.1.
>> 
>> IMO the v2.1 strong input validation feature (which helps ensure the API
>> is used in the correct manner) should not be changed; for v2 compatible
>> mode we should have another solution that doesn't affect v2.1 behavior,
>> perhaps a different schema for v2 compatible mode with the necessary
>> fixes there.
>> 
>> Trying to know other's opinion on this or something I missed during
>> any discussion.
>> 
>> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325
>>      https://bugs.launchpad.net/nova/+bug/1491511
>> 
>> [2]: http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
> 
> A lot of these issue need to be a case by case determination.
> 

+1 to deciding case by case. At the beginning of this release I really hoped
we could get a guideline with a few rules to explain everything, so I could
use that guideline to stop every argument :) In the end I found I was wrong.
Thanks to Sean for telling me I should think about the clients (even though I
know that, I still need some time to learn to think that way.)

> In this particular case, we had the documentation, the nova code, the
> clients, and the future.
> 
> The documentation: device is optional. That means it should be a string
> or not there at all. The schema was set to enforce this on v2.1
> 
> The nova code: device = None was accepted previously, because device is
> a mandatory parameter all the way down the call stack. Two layers in, we
> default it to None if it wasn't specified.
> 
> The clients: both python-novaclient and ruby fog sent device=None in the
> common case. While these are only two data points, they do demonstrate
> this is more widespread than just our buggy code.
> 
> The future: it turns out we really can't honor this parameter in most
> cases anyway, and passing it just means causing bugs. This is an
> artifact of the EC2 API that only works on specific (and possibly
> forked) versions of Xen that Amazon runs. Most hypervisor / guest
> relationships don't allow this to be set. The long term direction is
> going to be removing it from our API.
> 
> Given that, it seemed fine to relax this across all API versions. We
> screwed up and didn't test this case correctly, and long term we're going
> to dump it. So we don't want to honor 3 different versions of this API,
> especially as the clients don't seem to be written against the
> documentation, but against the code in question. If they write to the
> docs, they'll be fine. But the clients that are out in the wild will be
> fine as well.

> 
> 	-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/10d2a49f/attachment.html>

From vkozhukalov at mirantis.com  Wed Sep  9 13:34:26 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Wed, 9 Sep 2015 16:34:26 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
Message-ID: <CAFLqvG6nqSm4A9mQVwTAP4+tEjSQT=3qSHUrUG_9v=nSJ5oTjQ@mail.gmail.com>

No, Perestroika is not available on the Fuel master node, and it is not
going to be available in the future. But Perestroika is going to be
re-worked so as to make it possible to use it separately from CI. It is
going to be a Python application that makes package building as easy for a
developer/user as possible. Anyway, I think the "it is easier to develop"
argument is not the kind of argument that can prevail when discussing a
production-ready delivery approach.

Vladimir Kozhukalov

On Wed, Sep 9, 2015 at 4:15 PM, Andrey Danin <adanin at mirantis.com> wrote:

> I don't think juggling with repos and pull requests is easier than direct
> editing of files on Fuel node. Do we have Perestroika installed on Fuel
> node in 7.0?
>
> On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Andrey,
>>
>> This change is going to make things even easier. Currently you don't need
>> to build fuel-library package manually, Perestroika is going to do it for
>> you. It builds necessary packages during minutes for every review request
>> and packaging ci even tests it for you. You just need to make necessary
>> changes not on master node but on your MACBOOK using your favorite editor.
>> Then you need to commit this change and send this patch on review. If you
>> want to test this patch manually, you just need to append this CR repo
>> (example is here [1]) to the list of repos you define for your cluster and
>> start deployment. Anyway, you still have rsync, mcollective and other old
>> plain tools to run deployment manually.
>>
>> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov <dpyzhov at mirantis.com>
>> wrote:
>>
>>> Vladimir,
>>>
>>> thanks for bringing this up. It greatly correlates with the idea of
>>> modularity. Everything related to an openstack release should be put in one
>>> place and should be managed as a solid bundle on the master node. Package
>>> repository is the first solution that comes to the mind and it looks pretty
>>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>>> stored in packages in the openstack release repository. And eventually
>>> every other piece of our software should get rid of release-specific logic.
>>>
>>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>>> vkozhukalov at mirantis.com> wrote:
>>>
>>>> Dear colleagues,
>>>>
>>>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>>>> then right before starting actual deployment we rsync [1] puppet modules
>>>> (one of installed versions) from the master node to slave nodes. Such a
>>>> flow makes things much more complicated than they could be if we installed
>>>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>>>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>>>> could be nothing more than just installing fuel-library package from mos
>>>> repo defined for a cluster. We would not have several versions of
>>>> fuel-library on the master node, we would not need that complicated upgrade
>>>> stuff like we currently have for puppet modules.
>>>>
>>>> Please give your opinions on this.
>>>>
>>>>
>>>> [1]
>>>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>>>
>>>> Vladimir Kozhukalov
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From aschultz at mirantis.com  Wed Sep  9 13:39:30 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 9 Sep 2015 08:39:30 -0500
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
Message-ID: <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>

I agree that we shouldn't need to sync as we should be able to just update
the fuel-library package. That being said, I think there might be a few
issues with this method. The first issue is with plugins and how to
properly handle the distribution of the plugins as they may also include
puppet code that needs to be installed on the other nodes for a deployment.
Currently I do not believe we install the plugin packages anywhere except
the master and when they do get installed there may be some post-install
actions that are only valid for the master.  Another issue is being
flexible enough to allow for deployment engineers to make custom changes
for a given environment.  Unless we can provide an improved process to
allow for people to provide in place modifications for an environment, we
can't do away with the rsync.

If we want to go completely down the package route (and we probably
should), we need to make sure that all of the other pieces that currently
go together to make a complete fuel deployment can be updated in the same
way.

-Alex

On Wed, Sep 9, 2015 at 8:15 AM, Andrey Danin <adanin at mirantis.com> wrote:

> I don't think juggling with repos and pull requests is easier than direct
> editing of files on Fuel node. Do we have Perestroika installed on Fuel
> node in 7.0?
>
> On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Andrey,
>>
>> This change is going to make things even easier. Currently you don't need
>> to build fuel-library package manually, Perestroika is going to do it for
>> you. It builds necessary packages during minutes for every review request
>> and packaging ci even tests it for you. You just need to make necessary
>> changes not on master node but on your MACBOOK using your favorite editor.
>> Then you need to commit this change and send this patch on review. If you
>> want to test this patch manually, you just need to append this CR repo
>> (example is here [1]) to the list of repos you define for your cluster and
>> start deployment. Anyway, you still have rsync, mcollective and other old
>> plain tools to run deployment manually.
>>
>> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov <dpyzhov at mirantis.com>
>> wrote:
>>
>>> Vladimir,
>>>
>>> thanks for bringing this up. It greatly correlates with the idea of
>>> modularity. Everything related to an openstack release should be put in one
>>> place and should be managed as a solid bundle on the master node. Package
>>> repository is the first solution that comes to the mind and it looks pretty
>>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>>> stored in packages in the openstack release repository. And eventually
>>> every other piece of our software should get rid of release-specific logic.
>>>
>>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>>> vkozhukalov at mirantis.com> wrote:
>>>
>>>> Dear colleagues,
>>>>
>>>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>>>> then right before starting actual deployment we rsync [1] puppet modules
>>>> (one of installed versions) from the master node to slave nodes. Such a
>>>> flow makes things much more complicated than they could be if we installed
>>>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>>>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>>>> could be nothing more than just installing fuel-library package from mos
>>>> repo defined for a cluster. We would not have several versions of
>>>> fuel-library on the master node, we would not need that complicated upgrade
>>>> stuff like we currently have for puppet modules.
>>>>
>>>> Please give your opinions on this.
>>>>
>>>>
>>>> [1]
>>>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>>>
>>>> Vladimir Kozhukalov
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From rpothier at cisco.com  Wed Sep  9 13:41:00 2015
From: rpothier at cisco.com (Rob Pothier (rpothier))
Date: Wed, 9 Sep 2015 13:41:00 +0000
Subject: [openstack-dev] [tripleo] Plugin integration and environment
 file naming
In-Reply-To: <20150908084055.GA23526@t430slt.redhat.com>
References: <20150908084055.GA23526@t430slt.redhat.com>
Message-ID: <D215ADAA.2DA9A%rpothier@cisco.com>


Looks good to me.

Rob


On 9/8/15, 4:40 AM, "Steven Hardy" <shardy at redhat.com> wrote:

>Hi all,
>
>So, lately we're seeing an increasing number of patches adding integration
>for various third-party plugins, such as different neutron and cinder
>backends.
>
>This is great to see, but it also poses the question of how we organize
>the
>user-visible interfaces to these things long term.
>
>Originally, I was hoping to land some Heat composability improvements[1]
>which would allow for tagging templates as providing a particular
>capability (such as "provides neutron ML2 plugin"), but this has stalled
>on
>some negative review feedback and isn't going to be implemented for
>Liberty.
>
>However, today looking at [2] and [3], (which both add t-h-t integration
>to
>enable neutron ML2 plugins), a simpler interim solution occurred to me,
>which is just to make use of a suggested/mandatory naming convention.
>
>For example:
>
>environments/neutron-ml2-bigswitch.yaml
>environments/neutron-ml2-cisco-nexus-ucsm.yaml
>
>Or via directory structure:
>
>environments/neutron-ml2/bigswitch.yaml
>environments/neutron-ml2/cisco-nexus-ucsm.yaml
>
>This would require enforcement via code-review, but could potentially
>provide a much more intuitive interface for users when they go to create
>their cloud, and particularly it would make life much easier for any UX to
>ask "choose which neutron-ml2 plugin you want", because the available
>options can simply be listed by looking at the available environment
>files?
>
>What do folks think of this, is now a good time to start enforcing such a
>convention?
>
>Steve
>
>[1] https://review.openstack.org/#/c/196656/
>[2] https://review.openstack.org/#/c/213142/
>[3] https://review.openstack.org/#/c/198754/
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
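One concrete payoff of the directory-based convention Steve proposes is that a UI can discover the available plugin choices just by listing the directory. A minimal sketch (the paths mirror the proposed layout; the function name is mine, not existing t-h-t code):

```python
# Sketch: with environment files grouped as
# environments/neutron-ml2/<plugin>.yaml, a UX can enumerate the
# available ML2 plugin options with a simple glob.
from pathlib import Path

def list_ml2_plugins(tht_root):
    """Return the neutron-ml2 plugin names available under a t-h-t tree."""
    env_dir = Path(tht_root) / "environments" / "neutron-ml2"
    return sorted(p.stem for p in env_dir.glob("*.yaml"))
```

With the flat `environments/neutron-ml2-*.yaml` naming, the same listing works with a prefix glob instead of a subdirectory.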



From vkozhukalov at mirantis.com  Wed Sep  9 14:17:52 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Wed, 9 Sep 2015 17:17:52 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
Message-ID: <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>

Alex,

Regarding plugins: plugins are welcome to install specific additional
DEB/RPM repos on the master node, or simply configure the cluster to use
additional online repos where all necessary packages (including
plugin-specific puppet manifests) are available. The current granular
deployment approach makes it easy to append specific pre-deployment tasks
(master/slave does not matter). Correct me if I am wrong.

Regarding flexibility: having several versioned directories with puppet
modules on the master node, and several fuel-libraryX.Y packages installed
there, makes things "exquisitely convoluted" rather than flexible. Like I
said, it is flexible enough to use mcollective, plain rsync, etc. if you
really need to do things manually. But we have a convenient service
(Perestroika) which builds packages in minutes when you need them.
Moreover, in the near future (by 8.0) Perestroika will be available as an
application independent from CI. So, what is wrong with building a
fuel-library package? What if you want to troubleshoot nova (which we
install using packages)? Should we also use rsync for everything else,
like nova, mysql, etc.?
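As a sketch, the pre-deployment step Vladimir describes could become a granular deployment task that installs the package instead of rsyncing the modules. The task id, requires list, and command below are hypothetical, not an existing Fuel task definition:

```yaml
# Hypothetical replacement for the rsync-based pre-deployment task.
- id: install_fuel_library
  type: shell
  role: ['*']
  requires: [pre_deployment_start]
  parameters:
    # Pull fuel-library from the MOS repo defined for the cluster,
    # instead of rsyncing puppet modules from the master node.
    cmd: yum install -y fuel-library || apt-get install -y fuel-library
    timeout: 180
```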

Vladimir Kozhukalov

On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com> wrote:

> I agree that we shouldn't need to sync as we should be able to just update
> the fuel-library package. That being said, I think there might be a few
> issues with this method. The first issue is with plugins and how to
> properly handle the distribution of the plugins as they may also include
> puppet code that needs to be installed on the other nodes for a deployment.
> Currently I do not believe we install the plugin packages anywhere except
> the master and when they do get installed there may be some post-install
> actions that are only valid for the master.  Another issue is being
> flexible enough to allow for deployment engineers to make custom changes
> for a given environment.  Unless we can provide an improved process to
> allow for people to provide in place modifications for an environment, we
> can't do away with the rsync.
>
> If we want to go completely down the package route (and we probably
> should), we need to make sure that all of the other pieces that currently
> go together to make a complete fuel deployment can be updated in the same
> way.
>
> -Alex
>
> On Wed, Sep 9, 2015 at 8:15 AM, Andrey Danin <adanin at mirantis.com> wrote:
>
>> I don't think juggling with repos and pull requests is easier than direct
>> editing of files on Fuel node. Do we have Perestroika installed on Fuel
>> node in 7.0?
>>
>> On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
>> vkozhukalov at mirantis.com> wrote:
>>
>>> Andrey,
>>>
>>> This change is going to make things even easier. Currently you don't
>>> need to build fuel-library package manually, Perestroika is going to do it
>>> for you. It builds necessary packages during minutes for every review
>>> request and packaging ci even tests it for you. You just need to make
>>> necessary changes not on master node but on your MACBOOK using your
>>> favorite editor. Then you need to commit this change and send this patch on
>>> review. If you want to test this patch manually, you just need to append
>>> this CR repo (example is here [1]) to the list of repos you define for your
>>> cluster and start deployment. Anyway, you still have rsync, mcollective and
>>> other old plain tools to run deployment manually.
>>>
>>> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov <dpyzhov at mirantis.com>
>>> wrote:
>>>
>>>> Vladimir,
>>>>
>>>> thanks for bringing this up. It greatly correlates with the idea of
>>>> modularity. Everything related to an openstack release should be put in one
>>>> place and should be managed as a solid bundle on the master node. Package
>>>> repository is the first solution that comes to the mind and it looks pretty
>>>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>>>> stored in packages in the openstack release repository. And eventually
>>>> every other piece of our software should get rid of release-specific logic.
>>>>
>>>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>>>> vkozhukalov at mirantis.com> wrote:
>>>>
>>>>> Dear colleagues,
>>>>>
>>>>> Currently, we install fuel-libraryX.Y package(s) on the master node
>>>>> and then right before starting actual deployment we rsync [1] puppet
>>>>> modules (one of installed versions) from the master node to slave nodes.
>>>>> Such a flow makes things much more complicated than they could be if we
>>>>> installed puppet modules on slave nodes as rpm/deb packages. Deployment
>>>>> itself is parameterized by repo urls (upstream + mos) and this
>>>>> pre-deployment task could be nothing more than just installing fuel-library
>>>>> package from mos repo defined for a cluster. We would not have several
>>>>> versions of fuel-library on the master node, we would not need that
>>>>> complicated upgrade stuff like we currently have for puppet modules.
>>>>>
>>>>> Please give your opinions on this.
>>>>>
>>>>>
>>>>> [1]
>>>>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>>>>
>>>>> Vladimir Kozhukalov
>>>>>
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Andrey Danin
>> adanin at mirantis.com
>> skype: gcon.monolake
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From zhipengh512 at gmail.com  Wed Sep  9 14:22:54 2015
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 9 Sep 2015 22:22:54 +0800
Subject: [openstack-dev] [tricircle]Weekly Team Meeting 2015.09.09
In-Reply-To: <CAHZqm+UGTS-aX6evwC3-QoqNR+04COUh15SLRvn9PA1wzjwgRA@mail.gmail.com>
References: <CAHZqm+UGTS-aX6evwC3-QoqNR+04COUh15SLRvn9PA1wzjwgRA@mail.gmail.com>
Message-ID: <CAHZqm+VG8Fa-xJAxyLJHscr2wmd=-w33D4NospkALjunf-aeVA@mail.gmail.com>

Hi Please find the meetbot log at
http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-09-09-13.01.html
.

A noise-cancelled version of the minutes is also attached.

On Wed, Sep 9, 2015 at 4:22 PM, Zhipeng Huang <zhipengh512 at gmail.com> wrote:

> Hi Team,
>
> Let's resume our weekly meeting today. As Eran suggested before, we will
> mainly discuss the work we have now, and leave the design session for
> another time slot :) See you at UTC 1300 today.
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhipeng at huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipengh at uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/cd5bc900/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tricircle meeting minutes 2015.09.09.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 34970 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/cd5bc900/attachment.docx>

From zbitter at redhat.com  Wed Sep  9 14:28:28 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 9 Sep 2015 10:28:28 -0400
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
In-Reply-To: <94346481835D244BB7F6486C00E9C1BA2AE20194@FR711WXCHMBA06.zeu.alcatel-lucent.com>
References: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>
 <55EEF653.4040909@redhat.com>
 <94346481835D244BB7F6486C00E9C1BA2AE20194@FR711WXCHMBA06.zeu.alcatel-lucent.com>
Message-ID: <55F0420C.1010501@redhat.com>

On 09/09/15 04:10, SHTILMAN, Tomer (Tomer) wrote:
>
>
>>> On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:
>>> Hi
>>>
>>> Currently in heat we have the ability to deploy a remote stack on a
>>> different region using OS::Heat::Stack and region_name in the context
>>>
>>> My question is regarding multi node , separate keystones, with
>>> keystone federation.
>>>
>>> Is there an option in a HOT template to send a stack to a different
>>> node, using the keystone federation feature?
>>>
>>> For example ,If I have two Nodes (N1 and N2) with separate keystones
>>> (and keystone federation), I would like to deploy a stack on N1 with a
>>> nested stack that will deploy on N2, similar to what we have now for
>>> regions
>
>> Zane wrote:
>> Short answer: no.
>
>> Long answer: this is something we've wanted to do for a while, and a lot of folks have asked for it. We've been calling it multi-cloud (i.e.
>> multiple keystones, as opposed to multi-region which is multiple regions with one keystone). In principle it's a small extension to the multi-region stacks (just add a way to specify the auth_url as well as the region), but the tricky part is how to authenticate to the other clouds. We don't want to encourage people to put their login credentials into a template. I'm not sure to what extent keystone federation could solve that - I suspect that it does not allow you to use a single token on multiple clouds, just that it allows you to obtain a token on multiple clouds using the same credentials? So basically this idea is on hold until someone comes up with a safe way to authenticate to the other clouds. Ideas/specs welcome.
>
>> cheers,
>> Zane.
>
> Thanks Zane for your reply
> My understanding was that with keystone federation, once you have a token issued by one keystone, the other one respects it and there is no need to re-authenticate with the second keystone.

OK, that sounds close to what Kevin said as well, which was that you use 
your token from the local keystone to obtain a token from the remote 
keystone that will allow you to access the remote Heat. If that's the 
case we'll need to write some code to grab that other token, but either 
way it all sounds relatively straightforward without any security headaches.

I know there are people who want to do this with clouds that are not 
federated (and even people with custom resources for non-OpenStack 
clouds who want to use this) so we may still need to find a solution for 
the credential thing in the long term, but I see no reason not to start 
now by implementing the federation case - that will solve a big subset 
of the problem and doesn't foreclose any future development paths.

> My thinking was more of changing the remote stack resource to have in the context the heat_url of the other node ,I am not sure if credentials are needed here.

Not the heat_url, but the auth_url - we'll obtain the Heat endpoint from 
the remote keystone catalog, just like we do locally. But other than 
that, exactly - it's just another optional sub-property of the context 
on the remote stack resource.
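Sketched as a HOT fragment, the extension under discussion might look like the following. Note that `auth_url` as a context sub-property is the proposal being discussed here, not an existing Heat feature at the time of writing; all names are illustrative:

```yaml
# Illustrative only: 'auth_url' in the remote stack context is the
# proposed extension, not implemented Heat behavior.
resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionOne
        auth_url: http://keystone.remote.example:5000/v3   # hypothetical
      template: { get_file: nested.yaml }
```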

> We are currently building a multi-cloud setup with keystone federation in our lab, and I will check whether my understanding is correct. I am planning to propose a BP for this once it is clear.

+1

> Thanks again
> Tomer
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From mangelajo at redhat.com  Wed Sep  9 14:35:19 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Wed, 09 Sep 2015 16:35:19 +0200
Subject: [openstack-dev] [neutron][nova] QoS Neutron-Nova integration
Message-ID: <55F043A7.4070202@redhat.com>


    Hi,

      Looking forward to the M cycle,

      I was wondering if we could loop you into next week's Neutron/QoS
meeting on #openstack-meeting-3, around 16:00 CEST on Sept 16th.

      We're thinking about several ways we should integrate QoS between
nova and the new extendable QoS service in neutron, especially regarding
flavor integration and guaranteed limits (to avoid overcommitting compute
nodes and in-node physical interfaces).

     Some details from this last meeting:
     
http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-09-14.07.html

Best regards,
Miguel Ángel.




From dpyzhov at mirantis.com  Wed Sep  9 14:36:35 2015
From: dpyzhov at mirantis.com (Dmitry Pyzhov)
Date: Wed, 9 Sep 2015 17:36:35 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CA+vYeFo0B=XgapahfiBAX=GOqHXpb1EXJEmDZNchKBmLfFwbOA@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CA+vYeFo0B=XgapahfiBAX=GOqHXpb1EXJEmDZNchKBmLfFwbOA@mail.gmail.com>
Message-ID: <CAEg2Y8Pp4EL+=-CE2jaf_KCN2CRQ4jLNo6X+05A7NF3iT=vAtA@mail.gmail.com>

Andrey, you have highlighted an important case. I hope you agree that this
case is not a blocker for the proposal. From the developer's point of view
packages are awful and we should use raw git repos on every node. That would
make developers' lives way easier. But from an architecture perspective it
would be a disaster.

Rsync is just another legacy part of our architecture. We had a puppet master
before. We have rsync now. Let's see what we should use in the future and how
we can make it convenient for developers.
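For illustration, here is a minimal sketch, with entirely hypothetical task shapes, field names, and package names (none of this is Fuel's actual orchestrator API), of how the pre-deployment step discussed in this thread could switch from rsyncing puppet modules to installing a pinned fuel-library package from the cluster repo:

```python
# Hypothetical sketch only: task shapes, field names, and the package name
# are illustrative, not Fuel's actual orchestrator API.

def make_sync_task(node_ids, use_packages=True, version="7.0"):
    """Build a pre-deployment task that delivers puppet modules to slaves.

    With use_packages=True the task is a single idempotent package install
    from the repos already configured for the cluster; otherwise it falls
    back to the legacy rsync-from-master behaviour.
    """
    if use_packages:
        return {
            "type": "shell",
            "uids": list(node_ids),
            "parameters": {
                "cmd": "apt-get install -y fuel-library{0}".format(version),
            },
        }
    return {
        "type": "sync",
        "uids": list(node_ids),
        "parameters": {
            "src": "rsync://master:/puppet/{0}/modules/".format(version),
            "dst": "/etc/puppet/modules",
        },
    }
```

The point of the package branch is that the version delivered to a node is then pinned by the repo definition for the cluster rather than by whatever happens to sit on the master node's filesystem.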

On Wed, Sep 9, 2015 at 2:47 PM, Andrey Danin <adanin at mirantis.com> wrote:

> I disagree from the development point of view. Now I just change manifests
> on the Fuel node and redeploy the cluster to apply those changes. With your proposal
> I'll need to build a new package and add it to a repo every time I change
> something.
>
> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>> then right before starting actual deployment we rsync [1] puppet modules
>> (one of installed versions) from the master node to slave nodes. Such a
>> flow makes things much more complicated than they could be if we installed
>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>> could be nothing more than just installing fuel-library package from mos
>> repo defined for a cluster. We would not have several versions of
>> fuel-library on the master node, we would not need that complicated upgrade
>> stuff like we currently have for puppet modules.
>>
>> Please give your opinions on this.
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>
>> Vladimir Kozhukalov
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/cfe29eea/attachment.html>

From mriedem at linux.vnet.ibm.com  Wed Sep  9 14:39:19 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 9 Sep 2015 09:39:19 -0500
Subject: [openstack-dev] [nova][qa] Microsoft Hyper-V CI removed from
 nova-ci group until fixed
Message-ID: <55F04497.80208@linux.vnet.ibm.com>

I noticed the hyper-v CI reporting a +1 on a change [1] that actually failed 
with a bad-looking merge conflict, so I've removed the hyper-v CI 
account from the nova-ci group in Gerrit [2].  From talking with 
ociuhandu it sounds like Zuul issues, and they are working on them.

Ping me or John when things are fixed and we can add that account back 
to the nova-ci group in Gerrit.

[1] https://review.openstack.org/#/c/214493/
[2] https://review.openstack.org/#/admin/groups/511,members

-- 

Thanks,

Matt Riedemann



From alawson at aqorn.com  Wed Sep  9 14:55:10 2015
From: alawson at aqorn.com (Adam Lawson)
Date: Wed, 9 Sep 2015 07:55:10 -0700
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
Message-ID: <CAJfWK4_Ed3H9sbjpR7C3iQK=sfmOdY7gmFu8w5Jjqt+UKF6hLQ@mail.gmail.com>

We thought about doing this as well and opted for a local repo, at least
for now. If you want to offer an online repo, I think it could be useful to
allow either scenario.

Just a thought from your friendly neighbors here. ; )

/adam
On Sep 8, 2015 7:03 AM, "Vladimir Kozhukalov" <vkozhukalov at mirantis.com>
wrote:

> Sorry, fat fingers => early sending.
>
> =============
> Dear colleagues,
>
> The idea is to remove MOS DEB repo from the Fuel master node by default
> and use online MOS repo instead. Pros of such an approach are:
>
> 0) Reduced requirement for the master node minimal disk space
> 1) There won't be such things as [1] and [2], thus a less complicated
> flow, fewer errors, easier to maintain, easier to understand, easier to
> troubleshoot
> 2) If one wants to have a local mirror, the flow is the same as in the
> case of upstream repos (fuel-createmirror), which is clear for a user to
> understand.
>
> Many people still associate the ISO with MOS, but that is not true when
> using a package-based delivery approach.
>
> It is easy to define necessary repos during deployment and thus it is easy
> to control what exactly is going to be installed on slave nodes.
>
> What do you guys think of it?
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> The idea is to remove MOS DEB repo from the Fuel master node by default
>> and use online MOS repo instead. Pros of such an approach are:
>>
>> 0) Reduced requirement for the master node minimal disk space
>> 1) There won't be such things as [1] and [2], thus a less complicated
>> flow, fewer errors, easier to maintain, easier to understand, easier to
>> troubleshoot
>> 2) If one wants to have a local mirror, the flow is the same as in the
>> case of upstream repos (fuel-createmirror), which is clear for a user to
>> understand.
>>
>> Many people still associate ISO with MOS
>>
>>
>>
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
>> [2]
>> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>>
>>
>> Vladimir Kozhukalov
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/40272723/attachment.html>

From vikschw at gmail.com  Wed Sep  9 14:55:13 2015
From: vikschw at gmail.com (Vikram Choudhary)
Date: Wed, 9 Sep 2015 20:25:13 +0530
Subject: [openstack-dev] [neutron][nova] QoS Neutron-Nova integration
Message-ID: <CAFeBh8s_GPbBX79RhhdrhZhc0eQK2o8i-CsK8JzZU6-N2sVGjw@mail.gmail.com>

Hi Ajo,

I am in.  Thanks for the information.

Thanks
Vikram

On Wed, Sep 9, 2015 at 8:05 PM, Miguel Angel Ajo <mangelajo at redhat.com>
wrote:

>
>    Hi,
>
>      Looking forward to the M cycle,
>
>      I was wondering if we could loop you in our next week Neutron/QoS
> meeting
> on #openstack-meeting-3 around 16:00 CEST Sept 16th.
>
>      We're thinking about several ways we could integrate QoS between
> nova and the new
> extensible QoS service on neutron, especially regarding flavor integration
> and guaranteed
> limits (to avoid compute node and in-node physical interface overcommit).
>
>     Some details from this last meeting:
>
> http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-09-14.07.html
>
> Best regards,
> Miguel Ángel.
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/10438473/attachment.html>

From maishsk at maishsk.com  Wed Sep  9 14:56:31 2015
From: maishsk at maishsk.com (Maish Saidel-Keesing)
Date: Wed, 9 Sep 2015 17:56:31 +0300
Subject: [openstack-dev] [glance] [nova] Verification of glance images
	before boot
Message-ID: <55F0489F.6030506@maishsk.com>

How can I know that the image a new instance is spawned from is 
actually the image that was originally registered in glance, and has 
not been maliciously tampered with in some way?

Is there some kind of verification that is performed against the md5sum 
of the registered image in glance before a new instance is spawned?

Is that done by Nova?
Glance?
Both? Neither?

The reason I ask is some 'paranoid' security (that is their job I 
suppose) people have raised these questions.

I know there is a glance BP already merged for L [1] - but I would like 
to understand the actual flow in a bit more detail.

Thanks.

[1] 
https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support

-- 
Best Regards,
Maish Saidel-Keesing


From aschultz at mirantis.com  Wed Sep  9 14:56:14 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 9 Sep 2015 09:56:14 -0500
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
Message-ID: <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>

Hey Vladimir,



> Regarding plugins: plugins are welcome to install specific additional
> DEB/RPM repos on the master node, or just configure cluster to use
> additional online repos, where all necessary packages (including plugin
> specific puppet manifests) are to be available. Current granular deployment
> approach makes it easy to append specific pre-deployment tasks
> (master/slave does not matter). Correct me if I am wrong.
>
>
Don't get me wrong, I think it would be good to move to a fuel-library
distributed via package only.  I'm bringing these points up to indicate
that there are many other things living in the fuel-library puppet path
besides the fuel-library package itself.  The plugin example is just one
place where we will need to invest in further design and work to move to
package-only distribution.  What I don't want is some partially executed
work that only works for one type of deployment and creates headaches for
the people actually having to use fuel.  The deployment engineers and
customers who actually perform these actions should be asked about
packaging and their comfort level with this type of requirement.  I don't
have a complete understanding of all the things supported today by the
fuel plugin system, so it would be nice to get someone more familiar with
it to weigh in on this idea. Currently plugins are only rpms (no debs), and
I don't think we are building fuel-library debs at this time either.  So
without some work on both sides, we cannot move to just packages.


> Regarding flexibility: having several versioned directories with puppet
> modules on the master node, having several fuel-libraryX.Y packages
> installed on the master node makes things "exquisitely convoluted" rather
> than flexible. Like I said, it is flexible enough to use mcollective, plain
> rsync, etc. if you really need to do things manually. But we have
> convenient service (Perestroika) which builds packages in minutes if you
> need. Moreover, in the near future (by 8.0) Perestroika will be
> available as an application independent from CI. So, what is wrong with
> building fuel-library package? What if you want to troubleshoot nova (we
> install it using packages)? Should we also use rsync for everything else
> like nova, mysql, etc.?
>
>
Yes, we do have a service like Perestroika to build packages for us.  That
doesn't mean everyone else does or has access to do that today.  Setting up
a build system is a major undertaking and making that a hard requirement to
interact with our product may be a bit much for some customers.  In
speaking with some support folks, there are times when files have to be
munged to get around issues because there is no package or things are on
fire so they can't wait for a package to become available for a fix.  We
need to be careful not to impose limits without proper justification and
due diligence.  We already build the fuel-library package, so there's no
reason you couldn't try switching the rsync to install the package if it's
available on a mirror.  I just think you're going to run into the issues I
mentioned which need to be solved before we could just mark it done.

-Alex



> Vladimir Kozhukalov
>
> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
> wrote:
>
>> I agree that we shouldn't need to sync as we should be able to just
>> update the fuel-library package. That being said, I think there might be a
>> few issues with this method. The first issue is with plugins and how to
>> properly handle the distribution of the plugins as they may also include
>> puppet code that needs to be installed on the other nodes for a deployment.
>> Currently I do not believe we install the plugin packages anywhere except
>> the master and when they do get installed there may be some post-install
>> actions that are only valid for the master.  Another issue is being
>> flexible enough to allow for deployment engineers to make custom changes
>> for a given environment.  Unless we can provide an improved process to
>> allow for people to provide in-place modifications for an environment, we
>> can't do away with the rsync.
>>
>> If we want to go completely down the package route (and we probably
>> should), we need to make sure that all of the other pieces that currently
>> go together to make a complete fuel deployment can be updated in the same
>> way.
>>
>> -Alex
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/f80be8f4/attachment.html>

From shardy at redhat.com  Wed Sep  9 14:59:25 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 9 Sep 2015 15:59:25 +0100
Subject: [openstack-dev] [tripleo] Upgrades, Releases & Branches
In-Reply-To: <CAHV77z9sqSz+Qkzit-3kRbEJn2VJ0wy8f6UZz8Q2w0wvQ3TyQQ@mail.gmail.com>
References: <20150817132836.GB11763@t430slt.redhat.com>
 <CAHV77z_AYC8tCeN_tNnAiM9wV0=nTzgX+wYxtxLUsx+sA_v_fQ@mail.gmail.com>
 <20150818181048.GC30010@t430slt.redhat.com>
 <CAHV77z9sqSz+Qkzit-3kRbEJn2VJ0wy8f6UZz8Q2w0wvQ3TyQQ@mail.gmail.com>
Message-ID: <20150909145925.GB12412@t430slt.redhat.com>

On Tue, Aug 18, 2015 at 02:28:39PM -0400, James Slagle wrote:
> On Tue, Aug 18, 2015 at 2:10 PM, Steven Hardy <shardy at redhat.com> wrote:
> > On Mon, Aug 17, 2015 at 03:29:07PM -0400, James Slagle wrote:
> >> On Mon, Aug 17, 2015 at 9:28 AM, Steven Hardy <shardy at redhat.com> wrote:
> >> > Hi all,
> >> >
> >> > Recently I had some discussion with folks around the future strategy for
> >> > TripleO wrt upgrades, releases and branches, specifically:
> >> >
> >> > - How do we support a "stable" TripleO release/branch that enables folks to
> >> >   easily deploy the current stable release of OpenStack
> >> > - Related to the above, how do we allow development of TripleO components
> >> >   (and in particular t-h-t) to proceed without imposing undue constraints
> >> >   on what new features may be used (e.g new-for-liberty Heat features which
> >> >   aren't present in the current released OpenStack version)
> >> > - We're aiming to provide upgrade support, thus from and to which versions?
> >> >
> >> > I know historically TripleO has taken something of a developer and/or
> >> > continuous deployment model for granted, but I'd like to propose that we
> >> > revisit that discussion, such that we move towards something that's more
> >> > consumable by users/operators that are consuming the OpenStack coordinated
> >> > releases.
> >> >
> >> > The obvious answer is a stable branch for certain TripleO components, and
> >> > in particular for t-h-t, but this has disadvantages if we take the
> >> > OpenStack wide "no feature backports" approach - for example "upgrade
> >> > support to liberty" could be considered a feature, and many other TripleO
> >> > "features" are really more about making features of the deployed OpenStack
> >> > services consumable.
> >> >
> >> > I'd like propose we take a somewhat modified "release branch" approach,
> >> > which combines many of the advantages of the stable-branch model, but
> >> > allows for a somewhat more liberal approach to backports, where most things
> >> > are considered valid backports provided they work with the currently
> >> > released OpenStack services (e.g right now, a t-h-t release/kilo branch
> >> > would have to maintain compatibility with a kilo Heat in the undercloud)
> >>
> >> I like the idea, it seems reasonable to me.
> >>
> >> I do think we should clarify if the rule is:
> >>
> >> We *can* backport anything to release/kilo that doesn't break
> >> compatibility with kilo Heat.
> >>
> >> Or:
> >>
> >> We *must* backport anything to release/kilo that doesn't break
> >> compatibility with kilo Heat.
> >
> > I think I was envisaging something closer to the "must", but as Zane said,
> > more a "should", which if automated would become an opt-out thing, e.g
> > through a commit tag "nobackport" or whatever.
> >
> > Ideally, for the upstream branch we should probably be backporting most
> > things which don't break compatibility with the currently released
> > OpenStack services, and don't introduce gratuitous interface changes or
> > other backwards incompatibilities.
> >
> > I know our template "interfaces" are fuzzily defined but here are some
> > ideas of things we might not backport in addition to the "must work with
> > kilo" rule:
> >
> > - Removing parameters or resource types used to hook in external/optional
> >   code (e.g *ExtraConfig etc) - we should advertise these as deprecated via
> >   the descriptions, docs and release notes, then have them removed only
> >   when moving between TripleO releases (same as deprecation policy for most
> >   other projects)
> >
> > - Adding support for new services which either don't exist or weren't
> >   considered stable in the current released version
> >
> >> If it's the former, I think we'd get into a lot of subjective
> >> discussions around if we want certain things backported or not.
> >> Essentially it's the same discussion that happens for stable/*, except
> >> we consider features as well. This could become quite difficult to
> >> manage, and lead to a lot of reviewer-opinionated inconsistency in
> >> what actually ends up getting backported.
> >
> > True, but this decision making ends up happening sometime regardless, e.g
> > what patches do you carry in a downstream package etc?  But you're right
> > defining the process early should help with consistency.
> >
> >>
> >> For instance, there could be a very large and disruptive feature that
> >> doesn't break compatibility at all, but some users may not want to see
> >> it in release/kilo. Or, something like the recent proposed patch to
> >> rename a bunch of templates by dropping the "-puppet". That doesn't
> >> break compatibility with a kilo Heat at all, however it could break
> >> compatibility with someone's scripts or external tooling, and might be
> >> considered an "API"-incompatible change. The consuming downstreams
> >> (RDO) may not want to consume such a change. I know we don't have any
> >> official published "API" for tripleo-heat-templates, I'm just trying
> >> to think about how people consume the templates, and what they might
> >> find surprising if they were to be using release/kilo.
> >
> > Yeah, it's a tricky one, I mean the aim of all this is to avoid having to
> > maintain a fork downstream, and to improve the experience for folks wanting
> > to consume upstream tripleo directly to deploy the coordinated release.
> >
> > So IMO we should consider the requirements of both those groups of users -
> > some degree of stability probably makes sense, e.g not removing parameters
> > during the life of the branch etc.
> >
> > The renaming files patch you reference is a good example to consider;
> > I'm leaning towards saying that would be OK, because all those files are
> > only referenced internally, but it's true we don't really have any way to
> > know it won't impact anyone and we probably shouldn't allow things like
> > renaming files under environments/ which we do know are used as an external
> > "interface" to enable certain features.
> >
> >> The question kind of comes down to if release/kilo is meant to imply
> >> any "stability". Or, if it's just meant to imply that you will always
> >> be able to deploy OpenStack Kilo with release/kilo of
> >> tripleo-heat-templates.
> >>
> >> I think it's important to decide this up front so we can set the
> >> expectation. I'm leaning towards the latter ("must backport") myself,
> >> but then I wonder if the release branch would really solve the
> >> downstream use.
> >
> > I guess I was considering some degree of stability a requirement, as well
> > as meeting the needs for downstream use, otherwise we don't really solve
> > the downstream-fork problem and we've still got another fork to maintain.
> >
> > That said, I think the "no features" stable-maint rule is too strict for
> > the current state of TripleO, so we do need to define some sort of
> > compromise, requires some more thought - perhaps we can start an etherpad
> > or something & try to refine a reasonable definition of the backport rules?
> 
> Sounds like we're all roughly on the same page. I think an etherpad
> would be good to help work out the details. Either that, or propose a
> spec, that way the rules are codified and long lived vs the transient
> nature of etherpad.

Sorry for the delay, I finally got around to proposing a first cut of the
spec - I'm sure there's stuff I've missed, but it's a starting point to
iterate from.  Feedback welcome!

https://review.openstack.org/221811

Steve


From flavio at redhat.com  Wed Sep  9 15:10:05 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Wed, 9 Sep 2015 17:10:05 +0200
Subject: [openstack-dev] [all] Something about being a PTL
Message-ID: <20150909151005.GL6373@redhat.com>

Greetings,

Next week many folks will be running for PTL positions and I thought
about taking the time to dump[0] some thoughts about what being a PTL
means - at least for me - and what one should consider before running.

Since the audience I want to reach is mostly in this mailing list, I
thought about sending it here as well.

[0] http://blog.flaper87.com/post/something-about-being-a-ptl/
Flavio


It's that time of the cycle in OpenStack when projects need to elect
who's going to be the PTL for the next 6 months. People look at the,
hopefully many, candidacies and vote based on the proposals that
resonate most with them. I believe, for the PTL elections, the voting
process has worked decently, which is why this post is not meant for
voters but for the, hopefully many, PTL candidates.

First and foremost, thank you. Thanks for raising your hand and being
willing to take on this role. It's an honor to have you in the
community and I wish you the best of luck in this round. Below are a
few things that I hope will help you in the preparation of your
candidacy, and that I also hope will help make you a better PTL and
community member.


Why do you want to be a PTL?
============================

Before you even start writing your candidacy, please, ask yourself why
you want to be a PTL. What is it that you want to bring to the project
that is good for both the project and the community? You don't need
to get stuck on this question forever, and you don't necessarily need
to bring something new to the project.

In my opinion, a very good answer for the above could be: "I believe
I'll provide the right guidance to the community and the project."

Seriously, one mistake that new PTLs often make is to believe they are
on their own. It turns out that PTLs aren't. The whole point of being a
PTL is to help the community and to improve it. You're not going to do
that if you think you're the one pulling the community. PTLs ought to
work *with* the community, not *for* the community.

This leads me to my next point.

Be part of the community
========================

Being a PTL is more than just going through Launchpad and keeping an
eye on the milestones. That's a lot of work, true. But here's a
secret: being involved with the community of the project you're
serving takes even more time than going through Launchpad.

As a PTL, you have to be around. You have to keep an eye on the
mailing list on a daily basis. You have to talk to the members of the
community you're serving because you have to stay up to date on the
things that are happening in the project and the community. There may
be conflicts in reviews and bugs, and you have to be there to help
resolve them.

Among all the things you'll have to do, the community should be among
your top two priorities. I'm not talking just about the community of
the project you're working on. I'm talking about OpenStack. Does your
project have an impact on other projects? Is your project part of
DefCore? Is your project widely deployed? What are the deprecation
guarantees provided? Does your project consume common libraries? What
can your project contribute back to the rest of the community?

There are *many* things related to the project's community and its
interaction with the rest of the OpenStack community that are
important and that should be taken care of. However, you're not alone,
you have a community. Remember, you'll be serving the community, it's
not the other way around. Working with the community is the best thing
you can do.

As you can imagine, the above is exhausting and it takes time. It
takes a lot of time, which leads me to my next point.

Make sure you'll have time
==========================

There are a few things that are impossible in this world, and
predicting time availability is one of them. Nonetheless, we can get
really close estimates, and you should strive, *before* sending your
candidacy, to get the closest estimate of your upstream availability
for the next 6 months.

Being a PTL is an upstream job; it has nothing - or at the very least
should have nothing - to do with your actual employer. Being a PTL is
an *upstream* job, and you have to be *upstream* to do it correctly.

If you think you won't have time in a couple of months then, please,
don't run for PTL. If you think your manager will be asking you to
focus downstream then, please, don't run for PTL. If you think you'll
have other personal matters to take care of then, please, don't run
for PTL.

What I'm trying to say is that you should sit down and think about what
your next 6 months will look like time-wise. I believe it's safe
enough to say that you'll have to spend 60% to 70% of your time
upstream, assuming the project is a busy one.

The above, though, is not to say that you shouldn't run when in doubt.
Actually, I'd rather have a great PTL for 3 months who then steps
down than have the community led by someone not motivated enough who
was forced to run.

Create new PTLs
===============

Just like in every other leadership position, you should help create
other PTLs. Understand that winning the PTL election puts you in a
position where you have to strive to improve the project and the
community. As part of your responsibilities with regard to the
community, you should encourage folks to run for PTL.

Being a PTL takes a lot of time and energy, and you'll have to step
down[0] eventually. As a PTL, you may want to have folks from the
community ready to take over when you step down. I believe it's
healthy for the community to change PTLs every 2 cycles (if not every
cycle).

Community decides
=================

One of the things I always say to PTLs is that they are not dictators.
Decisions are still supposed to be made by the community at large, not
by the PTL. However, being in a leading position gives you some
extra "trust" that the community may end up following.

Remember that as a PTL, you'll be serving the community and not the
other way around. You should lead based on what is best for the
project and the community rather than based on what's best for your
company or, even worse, based on what will make your manager happy. If
those two things happen to overlap, then AWESOME! Many times they
don't, so you should be ready to make a pragmatic decision that may
not be the best for the company you work for and that, certainly,
won't make your manager happy.

Are you ready to make that call?

Closing
=======

By all means, this post is not meant to discourage you. If anything,
it's meant to encourage you to jump in and be amazing. It's been an
honor for me to serve as a PTL and I'm sure it'll be for you as
well.

Although this is not an exhaustive list, and the role varies from one
project to another, I hope the above provides enough information about
what PTLs are meant to do so that your excitement and desire to serve
as one will grow.

Thanks for considering being a PTL; I look forward to reading your
candidacy.

[0]: Note to existing PTLs: consider stepping down and helping others
become PTLs. It's healthier for the community you're serving to change
PTLs.

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/f2b09363/attachment.pgp>

From stuart.mclaren at hp.com  Wed Sep  9 15:15:41 2015
From: stuart.mclaren at hp.com (stuart.mclaren at hp.com)
Date: Wed, 9 Sep 2015 16:15:41 +0100 (IST)
Subject: [openstack-dev] [glance] [nova] Verification of glance images
	before boot
Message-ID: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>


The glance client (running 'inside' the Nova server) will re-calculate
the checksum as it downloads the image and then compare it against the
expected value. If they don't match, an error is raised.
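The check described above amounts to the following; this is an illustrative sketch, not the actual glanceclient implementation:

```python
# Illustrative sketch (not the actual glanceclient code): recompute the MD5
# of an image as it streams down and compare it with the checksum Glance
# recorded when the image was registered.
import hashlib


def download_and_verify(chunks, expected_md5):
    """Consume an iterable of byte chunks, returning the image data only
    if its MD5 matches the checksum registered in Glance."""
    md5 = hashlib.md5()
    data = bytearray()
    for chunk in chunks:
        md5.update(chunk)
        data.extend(chunk)
    if md5.hexdigest() != expected_md5:
        raise IOError("checksum mismatch: image corrupted or tampered with")
    return bytes(data)
```

Note this only guards against the payload being corrupted or altered in transit; it does not protect against an attacker who can also rewrite the checksum stored in Glance, which is the gap the signing blueprint is meant to close.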

> How can I know that the image that a new instance is spawned from - is
> actually the image that was originally registered in glance - and has
> not been maliciously tampered with in some way?
> 
> Is there some kind of verification that is performed against the md5sum
> of the registered image in glance before a new instance is spawned?
> 
> Is that done by Nova?
> Glance?
> Both? Neither?
> 
> The reason I ask is some 'paranoid' security (that is their job I
> suppose) people have raised these questions.
> 
> I know there is a glance BP already merged for L [1] - but I would like
> to understand the actual flow in a bit more detail.
> 
> Thanks.
> 
> [1]
> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
> 
> -- 
> Best Regards,
> Maish Saidel-Keesing
> 
> 
> 
> ------------------------------
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> End of OpenStack-dev Digest, Vol 41, Issue 22
> *********************************************
>


From aschultz at mirantis.com  Wed Sep  9 15:31:18 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 9 Sep 2015 10:31:18 -0500
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
Message-ID: <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>

Hey Vladimir,


>
> The idea is to remove MOS DEB repo from the Fuel master node by default
> and use online MOS repo instead. Pros of such an approach are:
>
> 0) Reduced requirement for the master node minimal disk space
>

Is this a problem? How much disk space is saved if I have to go create a
local mirror via fuel-createmirror?


> 1) There won't be such things like [1] and [2], thus less complicated
> flow, less errors, easier to maintain, easier to understand, easier to
> troubleshoot
> 2) If one wants to have local mirror, the flow is the same as in case of
> upstream repos (fuel-createmirror), which is clear for a user to
> understand.
>

From the issues I've seen, fuel-createmirror isn't very straightforward
and has some issues that make for a bad UX.


>
> Many people still associate ISO with MOS, but that is not true when using
> a package-based delivery approach.
>
> It is easy to define necessary repos during deployment and thus it is easy
> to control what exactly is going to be installed on slave nodes.
>
> What do you guys think of it?
>
>
>
Reliance on internet connectivity has been an issue since 6.1. For many
large users, complete access to the internet is not available or not
desired.  If we want to continue down this path, we need to improve the
tools to set up the local mirror and properly document what urls/ports/etc
need to be available for the installation of openstack and any mirror
creation process.  The ideal thing is to have an all-in-one CD similar to a
live CD that allows a user to completely try out fuel wherever they want
without further requirements of internet access.  If we don't want to
continue with that, we need to do a better job around providing the tools
for a user to get up and running in a timely fashion.  Perhaps providing a
net-only ISO and an all-included ISO would be a better solution so people
will have their expectations properly set up front?

-Alex


>
> Vladimir Kozhukalov
>
> On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> The idea is to remove MOS DEB repo from the Fuel master node by default
>> and use online MOS repo instead. Pros of such an approach are:
>>
>> 0) Reduced requirement for the master node minimal disk space
>> 1) There won't be such things like [1] and [2], thus less complicated
>> flow, less errors, easier to maintain, easier to understand, easier to
>> troubleshoot
>> 2) If one wants to have local mirror, the flow is the same as in case of
>> upstream repos (fuel-createmirror), which is clear for a user to
>> understand.
>>
>> Many people still associate ISO with MOS
>>
>>
>>
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
>> [2]
>> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>>
>>
>> Vladimir Kozhukalov
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/7413ecb4/attachment.html>

From harlowja at outlook.com  Wed Sep  9 15:33:36 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 9 Sep 2015 08:33:36 -0700
Subject: [openstack-dev] OpenStack support for Amazon Concepts - was Re:
 cloud-init IPv6 support
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A2F269D@EX10MBOX03.pnnl.gov>
References: <CAOnzOC+8YAFqTYS0cHxTTJfFcsh=cr-LwJ3CRFTeo32AX1D1tA@mail.gmail.com>
 <A81F249089B6C7479786834457908C600B72B7A8@MISOUT7MSGUSRDH.ITServices.sbc.com>
 <CFE03DB0.66B46%harlowja@yahoo-inc.com>
 <CFE03EEA.66B51%harlowja@yahoo-inc.com>
 <1589474693.37698090.1441279425029.JavaMail.zimbra@redhat.com>
 <0000014f939d6a39-4b217c74-1e11-47b8-87e3-f51c81a4f65d-000000@email.amazonses.com>
 <CAO_F6JN2deuAg8TwsqyM6u1WyXmM8Q8CWrEzwBTC8Te5ZZKVVQ@mail.gmail.com>
 <1278524526.38111374.1441303603998.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01A2F0CB6@EX10MBOX03.pnnl.gov>
 <CAO_F6JOWwUH+naQdH-1p9pj7o4gme12khfu17qH=nvA4_OYx7g@mail.gmail.com>,
 <0000014f9f9815ff-342e8be8-ffe8-42b8-ae28-83d21fea740f-000000@email.amazonses.com>
 <1A3C52DFCD06494D8528644858247BF01A2F269D@EX10MBOX03.pnnl.gov>
Message-ID: <BLU437-SMTP10490B39AB3AE08A32114DCD8520@phx.gbl>

And here is the code that does this (for cloudinit 0.7.x):

https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py

This same code is used by the config drive datasource (the one that 
makes a disk/iso) in cloudinit and the http endpoint based datasource in 
cloudinit (the one that exports /openstack/ on the metadata server).

https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py#L320 
(for the config drive subclass) and 
https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py#L419 
(for the http endpoint based subclass).

Note that in the following: 
https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceOpenStack.py#L77 
you can already provide a different metadata URL (say, one that uses 
IPv6) that will override the default "http://169.254.169.254" defined in 
there. This can be done in a few different ways, but feel free to jump on 
the #cloud-init IRC channel and ask if interested, or find me on IRC 
somewhere (since I'm the main one who worked on all the above code).
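As a rough illustration of the override Josh mentions (a sketch only, not taken from cloud-init's docs; the filename is invented and the exact key support should be checked against your cloud-init version), such a setting might live in a cloud config file:

```yaml
# Hypothetical /etc/cloud/cloud.cfg.d/99-metadata.cfg
datasource:
  OpenStack:
    # Replace the default http://169.254.169.254 endpoint, e.g. with
    # an IPv6-reachable one.
    metadata_urls:
      - "http://metadata.example.com"
```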

-Josh

Fox, Kevin M wrote:
> No, we already extend the metadata server with our own stuff. See
> /openstack/ on the metadata server. Cloudinit even supports the
> extensions. Supporting ipv6 as well as v4 is the same. Why does it
> matter if aws doesn't currently support it? They can support it if they
> want in the future and reuse code, or do their own thing and have to
> convince cloudinit to support their way too. But why should that hold
> back the openstack metadata server now? Let's lead rather than follow.
>
> Thanks,
> Kevin *
> *
> ------------------------------------------------------------------------
> *From:* Sean M. Collins
> *Sent:* Saturday, September 05, 2015 3:19:48 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Fox, Kevin M; PAUL CARVER
> *Subject:* OpenStack support for Amazon Concepts - was Re:
> [openstack-dev] cloud-init IPv6 support
>
> On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:
>>  Right, it depends on your perspective of who 'owns' the API. Is it
>>  cloud-init or EC2?
>>
>>  At this point I would argue that cloud-init is in control because it would
>>  be a large undertaking to switch all of the AMI's on Amazon to something
>>  else. However, I know Sean disagrees with me on this point so I'll let him
>>  reply here.
>
>
> Here's my take:
>
> Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
> in both the Neutron and Nova projects should implement all the details
> of the Metadata API that is documented at:
>
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
>
> This means that this is a compatibility layer that OpenStack has
> implemented so that users can use appliances, applications, and
> operating system images in both Amazon EC2 and an OpenStack environment.
>
> Yes, we can make changes to cloud-init. However, there is no guarantee
> that all users of the Metadata API are exclusively using cloud-init as
> their client. It is highly unlikely that people are rolling their own
> Metadata API clients, but it's a contract we've made with users. This
> includes transport level details like the IP address that the service
> listens on.
>
> The Metadata API is an established API that Amazon introduced years ago,
> and we shouldn't be "improving" APIs that we don't control. If Amazon
> were to introduce IPv6 support to the Metadata API tomorrow, we would
> naturally implement it exactly the way they implemented it in EC2. We'd
> honor the contract that Amazon made with its users, in our Metadata API,
> since it is a compatibility layer.
>
> However, since they haven't defined transport level details of the
> Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
> solution. It is not our API.
>
> The nice thing about config-drive is that we've created a new mechanism
> for bootstrapping instances - by replacing the transport level details
> of the API. Rather than being a link-local address that instances access
> over HTTP, it's a device that guests can mount and read. The actual
> contents of the drive may have a similar schema as the Metadata API, but
> I think at this point we've made enough of a differentiation between the
> EC2 Metadata API and config-drive that I believe the contents of the
> actual drive that the instance mounts can be changed without breaking
> user expectations - since config-drive was developed by the OpenStack
> community. The point being that we call it "config-drive" in
> conversation and our docs. Users understand that config-drive is a
> different feature.
>
> I've had this same conversation about the Security Group API that we
> have. We've named it the same thing as the Amazon API, but then went and
> made all the fields different, inexplicably. Thankfully, it's just the
> names of the fields, rather than being huge conceptual changes.
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/068319.html
>
> Basically, I believe that OpenStack should create APIs that are
> community driven and owned, and that we should only emulate
> non-community APIs where appropriate, and explicitly state that we only
> are emulating them. Putting improvements in APIs that came from
> somewhere else, instead of creating new OpenStack branded APIs is a lost
> opportunity to differentiate OpenStack from other projects, as well as
> Amazon AWS.
>
> Thanks for reading, and have a great holiday.
>
> --
> Sean M. Collins
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From zbitter at redhat.com  Wed Sep  9 15:34:26 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 9 Sep 2015 11:34:26 -0400
Subject: [openstack-dev] [tripleo] Upgrade plans for RDO Manager -
 Brainstorming
In-Reply-To: <55DB6C8A.7040602@redhat.com>
References: <55DB6C8A.7040602@redhat.com>
Message-ID: <55F05182.4080906@redhat.com>

On 24/08/15 15:12, Emilien Macchi wrote:
> Hi,
>
> So I've been working on OpenStack deployments for 4 years now and so far
> RDO Manager is the second installer -after SpinalStack [1]- I'm working on.
>
> SpinalStack already had interesting features [2] that allowed us to
> upgrade our customer platforms almost every month, with full testing
> and automation.
>
> Now that we have RDO Manager, I would be happy to share my little experience
> on the topic and help to make it possible in the next cycle.
>
> For that, I created an etherpad [3], which is not too long and focused
> on basic topics for now. This is technical and focused on Infrastructure
> upgrade automation.
>
> Feel free to continue discussion on this thread or directly in the etherpad.
>
> [1] http://spinalstack.enovance.com
> [2] http://spinalstack.enovance.com/en/latest/dev/upgrade.html
> [3] https://etherpad.openstack.org/p/rdo-manager-upgrades

I added some notes on the etherpad, but I think this discussion poses a 
larger question: what is TripleO? Why are we using Heat? Because to me 
the major benefit of Heat is that it maintains a record of the current 
state of the system that can be used to manage upgrades. And if we're 
not going to make use of that - if we're going to determine the state of 
the system by introspecting nodes and update it by using Ansible scripts 
without Heat's knowledge, then we probably shouldn't be using Heat at all.

I'm not saying that to close off the option - I think if Heat is not the 
best tool for the job then we should definitely consider other options. 
And right now it really is not the best tool for the job. Adopting 
Puppet (which was a necessary choice IMO) has meant that the 
responsibility for what I call "software orchestration"[1] is split 
awkwardly between Puppet and Heat. For example, the Puppet manifests are 
baked in to images on the servers, so Heat doesn't know when they've 
changed and can't retrigger Puppet to update the configuration when they 
do. We're left trying to reverse-engineer what is supposed to be a 
declarative model from the workflow that we want for things like 
updates/upgrades.

That said, I think there's still some cause for optimism: in a world 
where every service is deployed in a container and every container has 
its own Heat SoftwareDeployment, the boundary between Heat's 
responsibilities and Puppet's would be much clearer. The deployment 
could conceivably fit a declarative model much better, and even offer a 
lot of flexibility in which services run on which nodes. We won't really 
know until we try, but it seems distinctly possible to aspire toward 
Heat actually making things easier rather than just not making them too 
much harder. And there is stuff on the long-term roadmap that could be 
really great if only we had time to devote to it - for example, as I 
mentioned in the etherpad, I'd love to get Heat's user hooks integrated 
with Mistral so that we could have fully-automated, highly-available (in 
a hypothetical future HA undercloud) live migration of workloads off 
compute nodes during updates.

In the meantime, however, I do think that we have all the tools in Heat 
that we need to cobble together what we need to do. In Liberty, Heat 
supports batched rolling updates of ResourceGroups, so we won't need to 
use user hooks to cobble together poor-man's batched update support any 
more. We can use the user hooks for their intended purpose of notifying 
the client when to live-migrate compute workloads off a server that is 
about to be upgraded. The Heat templates should already tell us exactly 
which services are running on which nodes. We can trigger particular 
software deployments on a stack update with a parameter value change (as 
we already do with the yum update deployment). For operations that 
happen in isolation on a single server, we can model them as 
SoftwareDeployment resources within the individual server templates. For 
operations that are synchronised across a group of servers (e.g. 
disabling services on the controller nodes in preparation for a DB 
migration) we can model them as a SoftwareDeploymentGroup resource in 
the parent template. And for chaining multiple sequential operations 
(e.g. disable services, migrate database, enable services), we can chain 
outputs to inputs to handle both ordering and triggering. I'm sure there 
will be many subtleties, but I don't think we *need* Ansible in the mix.
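The output-to-input chaining described above could be sketched in a HOT fragment like the following (illustrative only; the parameter and resource names are invented, and the SoftwareConfig resources themselves are elided):

```yaml
heat_template_version: 2015-04-30

parameters:
  controller_server:
    type: string
  disable_config_id:
    type: string  # hypothetical SoftwareConfig for disabling services
  migrate_config_id:
    type: string  # hypothetical SoftwareConfig for the DB migration

resources:
  disable_services:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_param: disable_config_id}
      server: {get_param: controller_server}

  migrate_database:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_param: migrate_config_id}
      server: {get_param: controller_server}
      input_values:
        # Chaining an output to an input both orders the operations and
        # triggers this deployment only after the previous one completes.
        services_disabled: {get_attr: [disable_services, deploy_stdout]}
```

The `enable services` step would chain off `migrate_database` in the same way, giving ordering and triggering without any external workflow tool.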

So it's really up to the wider TripleO project team to decide which path 
to go down. I am genuinely not bothered whether we choose Heat or 
Ansible. There may even be ways they can work together without 
compromising either model. But I would be pretty uncomfortable with a 
mix where we use Heat for deployment and Ansible for doing upgrades 
behind Heat's back.

cheers,
Zane.


[1] 
http://www.zerobanana.com/archive/2014/05/08#heat-configuration-management


From Kevin.Fox at pnnl.gov  Wed Sep  9 15:42:51 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 9 Sep 2015 15:42:51 +0000
Subject: [openstack-dev] [all] Something about being a PTL
In-Reply-To: <20150909151005.GL6373@redhat.com>
References: <20150909151005.GL6373@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F31A6@EX10MBOX03.pnnl.gov>

Very well said. Thank you for this.

Kevin
________________________________________
From: Flavio Percoco [flavio at redhat.com]
Sent: Wednesday, September 09, 2015 8:10 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [all] Something about being a PTL

Greetings,

Next week many folks will be running for PTL positions and I thought
about taking the time to dump[0] some thoughts about what being a PTL
means - at least for me - and what one should consider before running.

Since the audience I want to reach is mostly in this mailing list, I
thought about sending it here as well.

[0] http://blog.flaper87.com/post/something-about-being-a-ptl/
Flavio


It's that time of the cycle, in OpenStack, when projects need to elect
who's going to be the PTL for the next 6 months. People look at the,
hopefully many, candidacies and vote for the proposals that sound
best to them. I believe, for the PTL elections, the voting
process has worked decently, which is why this post is not meant for
voters but for the, hopefully many, PTL candidates.

First and foremost, thank you. Thanks for raising your hand and being
willing to take on this role. It's an honor to have you in the
community and I wish you the best of luck in this round. Below are a
few things that I hope will help you in the preparation of your
candidacy and that I also hope will help make you a better PTL and
community member.


Why do you want to be a PTL?
============================

Before you even start writing your candidacy, please ask yourself why
you want to be a PTL. What is it that you want to bring to the project
that is good for both the project and the community? You don't need
to get stuck on this question forever, and you don't need to bring
something new to the project.

In my opinion, a very good answer for the above could be: "I believe
I'll provide the right guidance to the community and the project."

Seriously, one mistake that new PTLs often make is to believe they are
on their own. It turns out that PTLs aren't. The whole point of being a
PTL is to help the community and to improve it. You're not going to do
that if you think you're the one pulling the community along. PTLs ought
to work *with* the community, not *for* the community.

This leads me to my next point

Be part of the community
========================

Being a PTL is more than just going through launchpad and keeping an
eye on the milestones. That's a lot of work, true. But here's a
secret: it takes more time to be involved with the community of the
project you're serving than to go through launchpad.

As a PTL, you have to be around. You have to keep an eye on the
mailing list on a daily basis. You have to talk to the members of the
community you're serving because you have to be up-to-date about the
things that are happening in the project and the community. There may
be conflicts in reviews and bugs, and you have to be there to help
solve them.

Among all the things you'll have to do, the community should be in the
top 2 of your priorities. I'm not talking just about the community of
the project you're working on. I'm talking about OpenStack. Does your
project have an impact on other projects? Is your project part of
DefCore? Is your project widely deployed? What are the deprecation
guarantees provided? Does your project consume common libraries? What
can your project contribute back to the rest of the community?

There are *many* things related to the project's community and its
interaction with the rest of the OpenStack community that are
important and that should be taken care of. However, you're not alone,
you have a community. Remember, you'll be serving the community, it's
not the other way around. Working with the community is the best thing
you can do.

As you can imagine, the above is exhausting and it takes time. It
takes a lot of time, which leads me to my next point.

Make sure you'll have time
==========================

There are a few things impossible in this world, predicting time
availability is one of them. Nonetheless, we can get really close
estimates and you should strive, *before* sending your candidacy, to
get the closest estimate of your upstream availability for the next 6
months.

Being a PTL is an upstream job; it has nothing - or at the very least
should have nothing - to do with your actual employer. Being a PTL is an
*upstream* job and you have to be *upstream* to do it correctly.

If you think you won't have time in a couple of months then, please,
don't run for PTL. If you think your manager will be asking you to
focus downstream then, please, don't run for PTL. If you think you'll
have other personal matters to take care of then, please, don't run
for PTL.

What I'm trying to say is that you should sit down and think about what
your next 6 months will look like time-wise. I believe it's safe
enough to say that you'll have to spend 60% to 70% of your time
upstream, assuming the project is a busy one.

The above, though, is not to say that you shouldn't run when in doubt.
Actually, I'd rather have a great PTL for 3 months who then steps
down than have the community led by someone not motivated
enough who was forced to run.

Create new PTLs
===============

Just like in every other leadership position, you should help create
other PTLs. Understand that winning the PTL election puts you in a
position where you have to strive to improve the project and the
community. As part of your responsibilities with regard to the
community, you should encourage folks to run for PTL.

Being a PTL takes a lot of time and energy and you'll have to step
down[0], eventually. As a PTL, you may want to have folks from the
community ready to take over when you step down. I believe it's
healthy for the community to change PTLs every 2 cycles (if not every
cycle).

Community decides
=================

One of the things I always say to PTLs is that they are not dictators.
Decisions are still supposed to be taken by the community at large and
not by the PTL. However, being in a leading position gives you some
extra "trust" that the community may end up following.

Remember that as a PTL, you'll be serving the community and not the
other way around. You should lead based on what is best for the
project and the community rather than based on what's best for your
company or, even worse, based on what will make your manager happy. If
those two things happen to overlap, then AWESOME! Many times they
don't, therefore you should be ready to take a pragmatic decision that
may not be the best for the company you work for and that, certainly,
won't make your manager happy.

Are you ready to make that call?

Closing
=======

By all means, this post is not meant to discourage you. If anything,
it's meant to encourage you to jump in and be amazing. It's been an
honor for me to have served as a PTL and I'm sure it'll be for you as
well.

Although this is not an exhaustive list, and the role varies from one
project to another, I hope the above provides enough information about
what PTLs are meant to do so that your excitement and desire to serve
as one will grow.

Thanks for considering being a PTL; I look forward to reading your
candidacy.

[0]: Note to existing PTLs, consider stepping down and helping others
become PTLs. It's healthier for the community you're serving to change
PTLs

--
@flaper87
Flavio Percoco


From Kevin.Fox at pnnl.gov  Wed Sep  9 15:54:13 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 9 Sep 2015 15:54:13 +0000
Subject: [openstack-dev] [Neutron] cloud-init IPv6 support
In-Reply-To: <CAPoubz4_zo3jZwvDQLKpYZVVmF6DN85m_1HNgx1wHi6nxAX+AA@mail.gmail.com>
References: <1389135050.3355035.1404749709717.JavaMail.zimbra@redhat.com>
 <1441743880-sup-3578@fewbar.com>
 <1A3C52DFCD06494D8528644858247BF01A2F2CEB@EX10MBOX03.pnnl.gov>
 <1441756390-sup-9410@fewbar.com>,
 <CAPoubz4_zo3jZwvDQLKpYZVVmF6DN85m_1HNgx1wHi6nxAX+AA@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F3200@EX10MBOX03.pnnl.gov>

I think the DNS idea is going to run into problems for tenants that want to run their own, or have existing, DNS servers. It may not play nicely with Designate either.

Kevin

________________________________
From: Ian Wells [ijw.ubuntu at cack.org.uk]
Sent: Wednesday, September 09, 2015 12:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

Neutron already offers a DNS server (within the DHCP namespace, I think).  It does forward non-local queries to an external DNS server, but it already serves local names for instances; we'd simply have to set one aside, or perhaps use one in a 'root' but nonlocal domain (metadata.openstack e.g.).  In fact, this improves things slightly over the IPv4 metadata server: IPv4 metadata is usually reached via the router, whereas in IPv6, if we have a choice of addresses, we can use a link local address (and any link local address will do; it's not an address that is 'magic' in some way, thanks to the wonder of service advertisement).

And per previous comments about 'Amazon owns this' - the current metadata service is a de facto standard, which Amazon initiated but is not owned by anybody, and it's not the only standard.  If you'd like proof of the former, I believe our metadata service offers /openstack/ URLs, unlike Amazon (mirroring the /openstack/ files on the config drive); and on the latter, config-drive and Amazon-style metadata are only two of quite an assortment of data providers that cloud-init will query.  If it makes you think of it differently, think of this as the *Openstack* ipv6 metadata service, and not the 'will-be-Amazon-one-day-maybe' service.


On 8 September 2015 at 17:03, Clint Byrum <clint at fewbar.com> wrote:
Neutron would add a soft router that only knows the route to the metadata
service (and any other services you want your neutron private network vms
to be able to reach). This is not unique to the metadata service. Heat,
Trove, etc, all want this as a feature so that one can poke holes out of
these private networks only to the places where the cloud operator has
services running.

Excerpts from Fox, Kevin M's message of 2015-09-08 14:44:35 -0700:
> How does that work with neutron private networks?
>
> Thanks,
> Kevin
> ________________________________________
> From: Clint Byrum [clint at fewbar.com]
> Sent: Tuesday, September 08, 2015 1:35 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
>
> Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> > AFAIK, the cloud-init metadata service can currently be accessed only by sending a request to http://169.254.169.254, and no IPv6 equivalent is currently implemented. Does anyone working on this or tried to address this before?
> >
>
> I'm not sure we'd want to carry the way metadata works forward now that
> we have had some time to think about this.
>
> We already have DHCP6 and NDP. Just use one of those, and set the host's
> name to a nonce that it can use to lookup the endpoint for instance
> differentiation via DNS SRV records. So if you were told you are
>
> d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com
>
> Then you look that up as a SRV record on your configured DNS resolver,
> and connect to the host name returned and do something like  GET
> /d02a684d-56ea-44bc-9eba-18d997b1d32d
>
> And voila, metadata returns without any special link local thing, and
> it works like any other dual stack application on the planet.
>
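Clint's lookup scheme above could be sketched roughly as follows (hypothetical code; the SRV resolver is injected so the example doesn't depend on a real DNS library such as dnspython, and the URL layout is only what the quoted text suggests):

```python
def metadata_url(instance_name, resolve_srv):
    # instance_name is the nonce hostname handed out via DHCP6/NDP,
    # e.g. "<uuid>.region.cloud.com".  resolve_srv looks up the SRV
    # record for that name and returns a (host, port) pair.
    uuid = instance_name.split(".", 1)[0]
    host, port = resolve_srv(instance_name)
    # GET /<uuid> on the returned endpoint serves this instance's metadata.
    return "http://%s:%d/%s" % (host, port, uuid)
```

In a real deployment `resolve_srv` would perform an actual DNS SRV query against the configured resolver; here it is a stand-in to show the flow.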

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/502ef3aa/attachment.html>

From dtyzhnenko at mirantis.com  Wed Sep  9 15:58:16 2015
From: dtyzhnenko at mirantis.com (Dmitry Tyzhnenko)
Date: Wed, 9 Sep 2015 18:58:16 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for
	fuel-ostf core
In-Reply-To: <CAFNR43P3BLkWcvay3KewNwFdLjjVLi16iHY7GVZecsexnkaDNA@mail.gmail.com>
References: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
 <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>
 <CAP2-cGd9ex57e7g+WuqKK=d9kxDZs77hUHR7QJa-n8fjLwHbpw@mail.gmail.com>
 <CAFNR43P3BLkWcvay3KewNwFdLjjVLi16iHY7GVZecsexnkaDNA@mail.gmail.com>
Message-ID: <CAMZD-t8o75frtWUnkL71K+szgLwxHDi28DoHzYnUB36VeKhV4w@mail.gmail.com>

+1
On Sep 8, 2015 at 13:07, "Alexander Kostrikov" <
akostrikov at mirantis.com> wrote:

> +1
>
> On Tue, Sep 8, 2015 at 9:07 AM, Dmitriy Shulyak <dshulyak at mirantis.com>
> wrote:
>
>> +1
>>
>> On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova <
>> aurlapova at mirantis.com> wrote:
>>
>>> +1
>>>
>>> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
>>> tleontovich at mirantis.com> wrote:
>>>
>>>> Fuelers,
>>>>
>>>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>>>> He's been doing a great job writing patches (support for detached
>>>> services).
>>>> Also, his review comments always include a lot of detailed information
>>>> for further improvements.
>>>>
>>>>
>>>> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>>>>
>>>> Please vote with +1/-1 for approval/objection.
>>>>
>>>> Core reviewer approval process definition:
>>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>>
>>>> --
>>>> Best regards,
>>>> Tatyana
>>>>
>>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostrikov at mirantis.com
>
> www.mirantis.com
> www.mirantis.ru
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/9d0ea1e3/attachment.html>

From krotscheck at gmail.com  Wed Sep  9 16:06:04 2015
From: krotscheck at gmail.com (Michael Krotscheck)
Date: Wed, 09 Sep 2015 16:06:04 +0000
Subject: [openstack-dev] [all] Something about being a PTL
In-Reply-To: <20150909151005.GL6373@redhat.com>
References: <20150909151005.GL6373@redhat.com>
Message-ID: <CABM65auOgwWrVcUZG96xDtkFZ9JibguUKXfvhn+vcSp1fqAAQg@mail.gmail.com>

Beautiful summary, Flavio, especially the points about creating new PTLs.
It's the bus-number argument: how many people have to get hit by a bus for
the project to falter? It's best to have a backup.

Also: Being a PTL is a full-time job.

From working with current and former PTLs, I've noticed that it's almost
impossible to split your time between being a PTL and, say, being a member
of the TC, or working on an employer's private
feature/cloud/deployment/etc. For a far more eloquent explanation of why
this is, I defer to Devananda's wonderful non-candidacy email last spring.

http://lists.openstack.org/pipermail/openstack-dev/2015-April/062364.html

Not many people have the privilege of working for a company that supports
that level of upstream commitment. If your employer doesn't, send me your
Resumé ;).

Michael

On Wed, Sep 9, 2015 at 8:15 AM Flavio Percoco <flavio at redhat.com> wrote:

> Greetings,
>
> Next week many folks will be running for PTL positions and I thought
> about taking the time to dump[0] some thoughts about what being a PTL
> means - at least for me - and what one should consider before running.
>
> Since the audience I want to reach is mostly in this mailing list, I
> thought about sending it here as well.
>
> [0] http://blog.flaper87.com/post/something-about-being-a-ptl/
> Flavio
>
>
> It's that time of the cycle in OpenStack when projects need to elect
> who's going to be the PTL for the next 6 months. People look at the,
> hopefully many, candidacies and vote based on the proposals that
> sound best to them. I believe, for the PTL elections, the voting
> process has worked decently, which is why this post is not meant for
> voters but for the, hopefully many, PTL candidates.
>
> First and foremost, thank you. Thanks for raising your hand and
> being willing to take on this role. It's an honor to have you in the
> community and I wish you the best of luck in this round. Below are a
> few things that I hope will help you in the preparation of your
> candidacy and that I also hope will help make you a better PTL and
> community member.
>
>
> Why do you want to be a PTL?
> ============================
>
> Before you even start writing your candidacy, please ask yourself why
> you want to be a PTL. What is it that you want to bring to the project
> that is good for both the project and the community? You don't really
> need to get stuck on this question forever, and you don't really need
> to bring something new to the project.
>
> In my opinion, a very good answer for the above could be: "I believe
> I'll provide the right guidance to the community and the project."
>
> Seriously, one mistake that new PTLs often make is to believe they are
> on their own. Turns out that PTLs aren't. The whole point of being a
> PTL is to help the community and to improve it. You're not going to do
> that if you think you're the one pulling the community. PTLs ought to
> work *with* the community, not *for* the community.
>
> This leads me to my next point.
>
> Be part of the community
> ========================
>
> Being a PTL is more than just going through Launchpad and keeping an
> eye on the milestones. That's a lot of work, true. But here's a
> secret: it takes more time to be involved with the community of the
> project you're serving than to go through Launchpad.
>
> As a PTL, you have to be around. You have to keep an eye on the
> mailing list on a daily basis. You have to talk to the members of the
> community you're serving, because you have to be up-to-date about the
> things that are happening in the project and the community. There may
> be conflicts in reviews and bugs, and you have to be there to help
> solve them.
>
> Of all the things you'll have to do, the community should be among
> your top two priorities. I'm not talking just about the community of
> the project you're working on. I'm talking about OpenStack. Does your
> project have an impact on other projects? Is your project part of
> DefCore? Is your project widely deployed? What are the deprecation
> guarantees provided? Does your project consume common libraries? What
> can your project contribute back to the rest of the community?
>
> There are *many* things related to the project's community and its
> interaction with the rest of the OpenStack community that are
> important and that should be taken care of. However, you're not alone,
> you have a community. Remember, you'll be serving the community, it's
> not the other way around. Working with the community is the best thing
> you can do.
>
> As you can imagine, the above is exhausting and it takes time. It
> takes a lot of time, which leads me to my next point.
>
> Make sure you'll have time
> ==========================
>
> There are a few things that are impossible in this world; predicting
> time availability is one of them. Nonetheless, we can get really close
> estimates, and you should strive, *before* sending your candidacy, to
> get the closest estimate of your upstream availability for the next 6
> months.
>
> Being a PTL is an upstream job; it has nothing - or at the very least
> should have nothing - to do with your actual employer. Being a PTL is
> an *upstream* job and you have to be *upstream* to do it correctly.
>
> If you think you won't have time in a couple of months then, please,
> don't run for PTL. If you think your manager will be asking you to
> focus downstream then, please, don't run for PTL. If you think you'll
> have other personal matters to take care of then, please, don't run
> for PTL.
>
> What I'm trying to say is that you should sit down and think about
> what your next 6 months will look like time-wise. I believe it's safe
> enough to say that you'll have to spend 60% to 70% of your time
> upstream, assuming the project is a busy one.
>
> The above, though, is not to say that you shouldn't run when in doubt.
> Actually, I'd rather have a great PTL for 3 months who'll then step
> down than have the community led by someone not motivated enough who
> was forced to run.
>
> Create new PTLs
> ===============
>
> Just like in every other leadership position, you should help create
> other PTLs. Understand that winning the PTL election puts you in a
> position where you have to strive to improve the project and the
> community. As part of your responsibilities with regard to the
> community, you should encourage folks to run for PTL.
>
> Being a PTL takes a lot of time and energy, and you'll have to step
> down[0] eventually. As a PTL, you may want to have folks from the
> community ready to take over when you step down. I believe it's
> healthy for the community to change PTLs every 2 cycles (if not every
> cycle).
>
> Community decides
> =================
>
> One of the things I always say to PTLs is that they are not dictators.
> Decisions are still supposed to be taken by the community at large and
> not by the PTL. However, being in a leading position gives you some
> extra "trust" that the community may end up following.
>
> Remember that as a PTL, you'll be serving the community and not the
> other way around. You should lead based on what is best for the
> project and the community rather than based on what's best for your
> company or, even worse, based on what will make your manager happy. If
> those two things happen to overlap, then AWESOME! Many times they
> don't, therefore you should be ready to take a pragmatic decision that
> may not be the best for the company you work for and that, certainly,
> won't make your manager happy.
>
> Are you ready to make that call?
>
> Closing
> =======
>
> By all means, this post is not meant to discourage you. If anything,
> it's meant to encourage you to jump in and be amazing. It's been an
> honor for me to have served as a PTL, and I'm sure it'll be for you as
> well.
>
> Although this is not an exhaustive list, and the role varies from one
> project to another, I hope the above provides enough information about
> what PTLs are meant to do so that your excitement and desire to serve
> as one will grow.
>
> Thanks for considering being a PTL; I look forward to reading your
> candidacy.
>
> [0]: Note to existing PTLs: consider stepping down and helping others
> become PTLs. It's healthier for the community you're serving to change
> PTLs.
>
> --
> @flaper87
> Flavio Percoco
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/903ca29f/attachment.html>

From msm at redhat.com  Wed Sep  9 16:09:32 2015
From: msm at redhat.com (michael mccune)
Date: Wed, 9 Sep 2015 12:09:32 -0400
Subject: [openstack-dev] [Sahara] FFE request for improved secret storage
Message-ID: <55F059BC.9050609@redhat.com>

hi all,

i am requesting an FFE for the improved secret storage feature.

this change will allow operators to utilize the key manager service for 
offloading the passwords stored by sahara. this change does not 
implement mandatory usage of barbican, and defaults to a backward 
compatible behavior that requires no change to a stack.

there is currently 1 review up which addresses the main thrust of this
change; there will be 1 additional review which will include more
passwords being migrated to use the offloading mechanisms.

i expect this work to be complete by sept. 25.

review
https://review.openstack.org/#/c/220680/

blueprint
https://blueprints.launchpad.net/sahara/+spec/improved-secret-storage

spec
http://specs.openstack.org/openstack/sahara-specs/specs/liberty/improved-secret-storage.html

thanks,
mike


From nik.komawar at gmail.com  Wed Sep  9 16:16:57 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Wed, 9 Sep 2015 12:16:57 -0400
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
Message-ID: <55F05B79.1050508@gmail.com>

That's correct.

The size and the checksum are to be verified outside of Glance, in this
case Nova. However, note that not all Nova virt drivers necessarily use
py-glanceclient, so you will want to check the download-specific code
in the virt driver your Nova deployment is using.

Having said that, essentially the flow seems appropriate. An error must
be raised on mismatch.

The signing BP was to help prevent a compromised Glance from changing
the checksum and image blob at the same time. Using a digital signature,
you can prevent the download of compromised data. However, the feature
has only just been implemented in Glance, and users may take time to
adopt it.
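
The client-side check described here amounts to recomputing the checksum
while the image streams down and comparing it to the value Glance
reported. A minimal sketch of that idea (the helper name is hypothetical,
not actual glanceclient code):

```python
import hashlib

def download_and_verify(chunks, expected_md5):
    """Recompute the MD5 checksum while consuming image chunks and
    compare it to the checksum Glance reported for the image."""
    md5 = hashlib.md5()
    data = bytearray()
    for chunk in chunks:
        md5.update(chunk)
        data.extend(chunk)
    if md5.hexdigest() != expected_md5:
        # Mirrors the client behaviour: raise rather than return bad data.
        raise IOError("checksum mismatch: image may be corrupt")
    return bytes(data)
```

This only detects accidental corruption; as noted above, defeating a
compromised Glance requires the separate signature mechanism.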



On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>
> The glance client (running 'inside' the Nova server) will re-calculate
> the checksum as it downloads the image and then compare it against the
> expected value. If they don't match an error will be raised.
>
>> How can I know that the image that a new instance is spawned from - is
>> actually the image that was originally registered in glance - and has
>> not been maliciously tampered with in some way?
>>
>> Is there some kind of verification that is performed against the md5sum
>> of the registered image in glance before a new instance is spawned?
>>
>> Is that done by Nova?
>> Glance?
>> Both? Neither?
>>
>> The reason I ask is some 'paranoid' security (that is their job I
>> suppose) people have raised these questions.
>>
>> I know there is a glance BP already merged for L [1] - but I would like
>> to understand the actual flow in a bit more detail.
>>
>> Thanks.
>>
>> [1]
>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>
>>
>> -- 
>> Best Regards,
>> Maish Saidel-Keesing
>>
>>
>>
>> ------------------------------
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> End of OpenStack-dev Digest, Vol 41, Issue 22
>> *********************************************
>>
>
> __________________________________________________________________________
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From msm at redhat.com  Wed Sep  9 16:17:47 2015
From: msm at redhat.com (michael mccune)
Date: Wed, 9 Sep 2015 12:17:47 -0400
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican
 core
In-Reply-To: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
References: <CAG=EsMOY+QBt4Hw4YdYyDk-v0yfmKKEf1BiHPxBi_enjvHZCYw@mail.gmail.com>
Message-ID: <55F05BAB.2050901@redhat.com>

i'm not a core, but +1 from me. Dave has made solid contributions and 
would be a great addition to the core team.

mike

On 09/08/2015 12:05 PM, Juan Antonio Osorio wrote:
> I'd like to nominate Dave Mccowan for the Barbican core review team.
>
> He has been an active contributor both in doing relevant code pieces and
> making useful and thorough reviews; And so I think he would make a great
> addition to the team.
>
> Please bring the +1's :D
>
> Cheers!
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosorior at gmail.com <mailto:jaosorior at gmail.com>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From john.wood at RACKSPACE.COM  Wed Sep  9 16:33:11 2015
From: john.wood at RACKSPACE.COM (John Wood)
Date: Wed, 9 Sep 2015 16:33:11 +0000
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican
 core
In-Reply-To: <55F05BAB.2050901@redhat.com>
Message-ID: <D215C963.3F504%john.wood@rackspace.com>

Agreed, +1

On 9/9/15, 11:17 AM, "michael mccune" <msm at redhat.com> wrote:

>i'm not a core, but +1 from me. Dave has made solid contributions and
>would be a great addition to the core team.
>
>mike
>
>On 09/08/2015 12:05 PM, Juan Antonio Osorio wrote:
>> I'd like to nominate Dave Mccowan for the Barbican core review team.
>>
>> He has been an active contributor both in doing relevant code pieces and
>> making useful and thorough reviews; And so I think he would make a great
>> addition to the team.
>>
>> Please bring the +1's :D
>>
>> Cheers!
>>
>> --
>> Juan Antonio Osorio R.
>> e-mail: jaosorior at gmail.com <mailto:jaosorior at gmail.com>
>>
>>
>>
>> 
>>_________________________________________________________________________
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From mestery at mestery.com  Wed Sep  9 16:41:19 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Wed, 9 Sep 2015 11:41:19 -0500
Subject: [openstack-dev] [all] Something about being a PTL
In-Reply-To: <20150909151005.GL6373@redhat.com>
References: <20150909151005.GL6373@redhat.com>
Message-ID: <CAL3VkVyA+QNZ8SsEFcfFurAFP9mCR-hChv_BTDn9j1H7WoKqqg@mail.gmail.com>

Flavio, thanks for sending this out. I agree with everything you've written
below. Having served as PTL for 3 cycles now, I can say it's very
rewarding, but it's also very exhausting and takes an incredibly thick skin.

Before jumping in and throwing your hat into the ring (especially for a
large OpenStack project), please read Flavio's post carefully below. You
owe it to the project you're running for, the broader OpenStack ecosystem,
and yourself.

Thanks,
Kyle

On Wed, Sep 9, 2015 at 10:10 AM, Flavio Percoco <flavio at redhat.com> wrote:

> Greetings,
>
> Next week many folks will be running for PTL positions and I thought
> about taking the time to dump[0] some thoughts about what being a PTL
> means - at least for me - and what one should consider before running.
>
> Since the audience I want to reach is mostly in this mailing list, I
> thought about sending it here as well.
>
> [0] http://blog.flaper87.com/post/something-about-being-a-ptl/
> Flavio
>
>
> It's that time of the cycle in OpenStack when projects need to elect
> who's going to be the PTL for the next 6 months. People look at the,
> hopefully many, candidacies and vote based on the proposals that
> sound best to them. I believe, for the PTL elections, the voting
> process has worked decently, which is why this post is not meant for
> voters but for the, hopefully many, PTL candidates.
>
> First and foremost, thank you. Thanks for raising your hand and
> being willing to take on this role. It's an honor to have you in the
> community and I wish you the best of luck in this round. Below are a
> few things that I hope will help you in the preparation of your
> candidacy and that I also hope will help make you a better PTL and
> community member.
>
>
> Why do you want to be a PTL?
> ============================
>
> Before you even start writing your candidacy, please ask yourself why
> you want to be a PTL. What is it that you want to bring to the project
> that is good for both the project and the community? You don't really
> need to get stuck on this question forever, and you don't really need
> to bring something new to the project.
>
> In my opinion, a very good answer for the above could be: "I believe
> I'll provide the right guidance to the community and the project."
>
> Seriously, one mistake that new PTLs often make is to believe they are
> on their own. Turns out that PTLs aren't. The whole point of being a
> PTL is to help the community and to improve it. You're not going to do
> that if you think you're the one pulling the community. PTLs ought to
> work *with* the community, not *for* the community.
>
> This leads me to my next point.
>
> Be part of the community
> ========================
>
> Being a PTL is more than just going through Launchpad and keeping an
> eye on the milestones. That's a lot of work, true. But here's a
> secret: it takes more time to be involved with the community of the
> project you're serving than to go through Launchpad.
>
> As a PTL, you have to be around. You have to keep an eye on the
> mailing list on a daily basis. You have to talk to the members of the
> community you're serving, because you have to be up-to-date about the
> things that are happening in the project and the community. There may
> be conflicts in reviews and bugs, and you have to be there to help
> solve them.
>
> Of all the things you'll have to do, the community should be among
> your top two priorities. I'm not talking just about the community of
> the project you're working on. I'm talking about OpenStack. Does your
> project have an impact on other projects? Is your project part of
> DefCore? Is your project widely deployed? What are the deprecation
> guarantees provided? Does your project consume common libraries? What
> can your project contribute back to the rest of the community?
>
> There are *many* things related to the project's community and its
> interaction with the rest of the OpenStack community that are
> important and that should be taken care of. However, you're not alone,
> you have a community. Remember, you'll be serving the community, it's
> not the other way around. Working with the community is the best thing
> you can do.
>
> As you can imagine, the above is exhausting and it takes time. It
> takes a lot of time, which leads me to my next point.
>
> Make sure you'll have time
> ==========================
>
> There are a few things that are impossible in this world; predicting
> time availability is one of them. Nonetheless, we can get really close
> estimates, and you should strive, *before* sending your candidacy, to
> get the closest estimate of your upstream availability for the next 6
> months.
>
> Being a PTL is an upstream job; it has nothing - or at the very least
> should have nothing - to do with your actual employer. Being a PTL is
> an *upstream* job and you have to be *upstream* to do it correctly.
> If you think you won't have time in a couple of months then, please,
> don't run for PTL. If you think your manager will be asking you to
> focus downstream then, please, don't run for PTL. If you think you'll
> have other personal matters to take care of then, please, don't run
> for PTL.
>
> What I'm trying to say is that you should sit down and think about
> what your next 6 months will look like time-wise. I believe it's safe
> enough to say that you'll have to spend 60% to 70% of your time
> upstream, assuming the project is a busy one.
>
> The above, though, is not to say that you shouldn't run when in doubt.
> Actually, I'd rather have a great PTL for 3 months who'll then step
> down than have the community led by someone not motivated enough who
> was forced to run.
>
> Create new PTLs
> ===============
>
> Just like in every other leadership position, you should help create
> other PTLs. Understand that winning the PTL election puts you in a
> position where you have to strive to improve the project and the
> community. As part of your responsibilities with regard to the
> community, you should encourage folks to run for PTL.
> Being a PTL takes a lot of time and energy, and you'll have to step
> down[0] eventually. As a PTL, you may want to have folks from the
> community ready to take over when you step down. I believe it's
> healthy for the community to change PTLs every 2 cycles (if not every
> cycle).
>
> Community decides
> =================
>
> One of the things I always say to PTLs is that they are not dictators.
> Decisions are still supposed to be taken by the community at large and
> not by the PTL. However, being in a leading position gives you some
> extra "trust" that the community may end up following.
>
> Remember that as a PTL, you'll be serving the community and not the
> other way around. You should lead based on what is best for the
> project and the community rather than based on what's best for your
> company or, even worse, based on what will make your manager happy. If
> those two things happen to overlap, then AWESOME! Many times they
> don't, therefore you should be ready to take a pragmatic decision that
> may not be the best for the company you work for and that, certainly,
> won't make your manager happy.
>
> Are you ready to make that call?
>
> Closing
> =======
>
> By all means, this post is not meant to discourage you. If anything,
> it's meant to encourage you to jump in and be amazing. It's been an
> honor for me to have served as a PTL, and I'm sure it'll be for you as
> well.
>
> Although this is not an exhaustive list, and the role varies from one
> project to another, I hope the above provides enough information about
> what PTLs are meant to do so that your excitement and desire to serve
> as one will grow.
>
> Thanks for considering being a PTL; I look forward to reading your
> candidacy.
>
> [0]: Note to existing PTLs: consider stepping down and helping others
> become PTLs. It's healthier for the community you're serving to change
> PTLs.
>
> --
> @flaper87
> Flavio Percoco
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/29c35dae/attachment.html>

From jim at jimrollenhagen.com  Wed Sep  9 16:48:20 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 9 Sep 2015 09:48:20 -0700
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com>
 <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
Message-ID: <20150909164820.GE21846@jimrollenhagen.com>

On Tue, Sep 01, 2015 at 03:47:03PM -0500, Dean Troyer wrote:
> [late catch-up]
> 
> On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann <doug at doughellmann.com>
> wrote:
> 
> > Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
> > > On 24/08/15 18:19 +0000, Tim Bell wrote:
> > > >
> > > >From a user perspective, where bare metal and VMs are just different
> > flavors (with varying capabilities), can we not use the same commands
> > (server create/rebuild/...) ? Containers will create the same conceptual
> > problems.
> > > >
> > > >OSC can provide a converged interface but if we just replace '$ ironic
> > XXXX' by '$ openstack baremetal XXXX', this seems to be a missed
> > opportunity to hide the complexity from the end user.
> > > >
> > > >Can we re-use the existing server structures ?
> >
> 
> I've wondered about how users would see doing this, we've done it already
> with the quota and limits commands (blurring the distinction between
> project APIs).  At some level I am sure users really do not care about some
> of our project distinctions.
> 
> 
> > To my knowledge, overriding or enhancing existing commands like that
> >
> > is not possible.
> >
> > You would have to do it in tree, by making the existing commands
> > smart enough to talk to both nova and ironic, first to find the
> > server (which service knows about something with UUID XYZ?) and
> > then to take the appropriate action on that server using the right
> > client. So it could be done, but it might lose some of the nuance
> > between the server types by munging them into the same command. I
> > don't know what sorts of operations are different, but it would be
> > worth doing the analysis to see.
> >
> 
> I do have an experimental plugin that hooks the server create command to
> add some options and change its behaviour so it is possible, but right now
> I wouldn't call it supported at all.  That might be something that we could
> consider doing though for things like this.
> 
> The current model for commands calling multiple project APIs is to put them
> in openstackclient.common, so yes, in-tree.
> 
> Overall, though, to stay consistent with OSC you would map operations into
> the current verbs as much as possible.  It is best to think in terms of how
> the CLI user is thinking and what she wants to do, and not how the REST or
> Python API is written.  In this case, 'baremetal' is a type of server, a
> set of attributes of a server, etc.  As mentioned earlier, containers will
> also have a similar paradigm to consider.

Disclaimer: I don't know much about OSC or its syntax, command
structure, etc. These may not be well-formed thoughts. :)

While it would be *really* cool to support the same command to do things
to nova servers or do things to ironic servers, I don't know that it's
reasonable to do so.

Ironic is an admin-only API that supports running standalone or behind
a Nova installation with the Nova virt driver. The API is primarily used
by Nova, or by admins for management. In a standalone
configuration, an admin can use the Ironic API to deploy a server,
though the recommended approach is to use Bifrost[0] to simplify that.
When Ironic runs behind Nova, users are expected to boot baremetal
servers through Nova, as indicated by a flavor.

So, many of the nova commands (openstack server foo) don't make sense in
an Ironic context, and vice versa. It would also be difficult to
determine if the commands should go through Nova or through Ironic.
The path could be something like: check that Ironic exists, see if user
has access, hence standalone mode (oh wait, operators probably have
access to manage Ironic *and* deploy baremetal through Nova, what do?).

I think we should think of "openstack baremetal foo" as commands to
manage the baremetal service (Ironic), as that is what the API is
primarily intended for. Then "openstack server foo" just does what it
does today, and if the flavor happens to be a baremetal flavor, the user
gets a baremetal server.

// jim

[0] https://github.com/openstack/bifrost
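
The split proposed above - keep "openstack baremetal foo" for managing
the Ironic service, and let "openstack server foo" stay with Nova - can
be pictured as a plain namespace-to-service dispatch. The handler names
below are hypothetical stand-ins, not real OSC plugin code:

```python
# Hypothetical handlers standing in for real OSC plugin commands.
def nova_server_create(name, flavor):
    # Nova decides where the instance lands; a baremetal flavor
    # simply results in a baremetal server.
    return ("nova", name, flavor)

def ironic_node_list():
    # Admin-facing management of the baremetal service itself.
    return ("ironic", "node-list")

# "openstack <namespace> <verb>" maps cleanly when each namespace
# targets exactly one service, avoiding the ambiguity of routing a
# single command to either Nova or Ironic.
COMMANDS = {
    ("server", "create"): nova_server_create,
    ("baremetal", "list"): ironic_node_list,
}

def dispatch(namespace, verb, *args):
    return COMMANDS[(namespace, verb)](*args)
```

The point of the sketch is only that one namespace per service keeps
command routing unambiguous; merging the namespaces would force the CLI
to guess which service a given UUID or name belongs to.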


From Brianna.Poulos at jhuapl.edu  Wed Sep  9 16:53:36 2015
From: Brianna.Poulos at jhuapl.edu (Poulos, Brianna L.)
Date: Wed, 9 Sep 2015 16:53:36 +0000
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <55F05B79.1050508@gmail.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com>
Message-ID: <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>

Stuart is right about what will currently happen in Nova when an image is
downloaded, which protects against unintentional modifications to the
image data.

What is currently being worked on is adding the ability to verify a
signature of the checksum.  The flow of this is as follows:
1. The user creates a signature of the "checksum hash" (currently MD5) of
the image data offline.
2. The user uploads a public key certificate, which can be used to verify
the signature to a key manager (currently Barbican).
3. The user creates an image in glance, with signature metadata properties.
4. The user uploads the image data to glance.
5. If the signature metadata properties exist, glance verifies the
signature of the "checksum hash", including retrieving the certificate
from the key manager.
6. If the signature verification fails, glance moves the image to a killed
state, and returns an error message to the user.
7. If the signature verification succeeds, a log message indicates that it
succeeded, and the image upload finishes successfully.

8. Nova requests the image from glance, along with the image properties,
in order to boot it.
9. Nova uses the signature metadata properties to verify the signature (if
a configuration option is set).
10. If the signature verification fails, nova does not boot the image, but
errors out.
11. If the signature verification succeeds, nova boots the image, and a
log message notes that the verification succeeded.
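
The Nova-side portion of the flow above (steps 8-11) is essentially a
gate before boot. A rough sketch of that control flow, where the
property names and the verification callable are illustrative stand-ins
rather than the actual Nova/Glance code:

```python
import hashlib

# Illustrative property names; the real image metadata keys differ.
SIGNATURE_PROPS = ("signature", "signature_key_id", "signature_hash_method")

def check_image_before_boot(image_meta, image_data, verify_sig,
                            verify_signature_fn):
    """Return True if the image may be booted; raise ValueError otherwise.

    verify_signature_fn is a stand-in for the real signature check,
    which would retrieve the certificate from the key manager
    (Barbican) and verify the signature of the checksum hash.
    """
    # Recompute and compare the checksum (protects against
    # unintentional corruption, as in the current download path).
    if hashlib.md5(image_data).hexdigest() != image_meta["checksum"]:
        raise ValueError("checksum mismatch")
    # Verify the signature only if the operator enabled it and the
    # image carries the signature metadata properties (step 9).
    if verify_sig and all(p in image_meta for p in SIGNATURE_PROPS):
        if not verify_signature_fn(image_meta, image_data):
            raise ValueError("signature verification failed")  # step 10
    return True  # step 11: safe to boot
```

The glance-side flow (steps 1-7) is analogous, except that a failed
verification moves the image to a killed state instead of blocking boot.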

Regarding what is currently in Liberty, the blueprint mentioned [1] has
merged, and code [2] has also been merged in glance, which handles steps
1-7 of the flow above.

For steps 8-11, there is currently a nova blueprint [3], along with code
[4], which are proposed for Mitaka.

Note that we are in the process of adding official documentation, with
examples of creating the signature as well as the properties that need to
be added for the image before upload.  In the meantime, there's an
etherpad that describes how to test the signature verification
functionality in Glance [5].

Also note that this is the initial approach, and there are some
limitations.  For example, ideally the signature would be based on a
cryptographically secure (i.e. not MD5) hash of the image.  There is a
spec in glance to allow this hash to be configurable [6].

[1] https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
[2] https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e
[3] https://review.openstack.org/#/c/188874/
[4] https://review.openstack.org/#/c/189843/
[5] https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
[6] https://review.openstack.org/#/c/191542/


Thanks,
~Brianna




On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com> wrote:

>That's correct.
>
>The size and the checksum are to be verified outside of Glance, in this
>case Nova. However, you may want to note that not all Nova virt drivers
>necessarily use py-glanceclient, so you would want to check the
>download-specific code in the virt driver your Nova deployment is using.
>
>Having said that, essentially the flow seems appropriate. An error must be
>raised on mismatch.
>
>The signing BP was to help prevent a compromised Glance from changing
>the checksum and image blob at the same time. Using a digital signature,
>you can prevent download of compromised data. However, the feature has
>just been implemented in Glance; Glance users may take time to adopt it.
>
>
>
>On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>
>> The glance client (running 'inside' the Nova server) will re-calculate
>> the checksum as it downloads the image and then compare it against the
>> expected value. If they don't match an error will be raised.
>>
>>> How can I know that the image that a new instance is spawned from - is
>>> actually the image that was originally registered in glance - and has
>>> not been maliciously tampered with in some way?
>>>
>>> Is there some kind of verification that is performed against the md5sum
>>> of the registered image in glance before a new instance is spawned?
>>>
>>> Is that done by Nova?
>>> Glance?
>>> Both? Neither?
>>>
>>> The reason I ask is some 'paranoid' security (that is their job I
>>> suppose) people have raised these questions.
>>>
>>> I know there is a glance BP already merged for L [1] - but I would like
>>> to understand the actual flow in a bit more detail.
>>>
>>> Thanks.
>>>
>>> [1]
>>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>>
>>>
>>> -- 
>>> Best Regards,
>>> Maish Saidel-Keesing
>>>
>>>
>>>
>>> ------------------------------
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> End of OpenStack-dev Digest, Vol 41, Issue 22
>>> *********************************************
>>>
>>
>> 
>
>-- 
>
>Thanks,
>Nikhil
>
>



From jrist at redhat.com  Wed Sep  9 17:03:55 2015
From: jrist at redhat.com (Jason Rist)
Date: Wed, 9 Sep 2015 11:03:55 -0600
Subject: [openstack-dev] [TripleO] trello
In-Reply-To: <55F02F80.50401@redhat.com>
References: <55EF0070.4040309@redhat.com> <55F02F80.50401@redhat.com>
Message-ID: <55F0667B.7020503@redhat.com>

On 09/09/2015 07:09 AM, Derek Higgins wrote:
>
>
> On 08/09/15 16:36, Derek Higgins wrote:
> > Hi All,
> >
> >     Some of ye may remember some time ago we used to organize TripleO
> > based jobs/tasks on a trello board[1]; at some stage this board fell out
> > of use (the exact reason for which I can't put my finger on). This morning I was
> > putting a list of things together that need to be done in the area of CI
> > and needed somewhere to keep track of it.
> >
> > I propose we get back to using this trello board and each of us add
> > cards at the very least for the things we are working on.
> >
> > This should give each of us a lot more visibility into what is ongoing
> > in the tripleo project currently. Unless I hear any objections,
> > tomorrow I'll start archiving all cards on the boards and removing
> > people no longer involved in tripleo. We can then start adding items and
> > anybody who wants in can be added again.
>
> This is now done, see
> https://trello.com/tripleo
>
> Please ping me on irc if you want to be added.
>
> >
> > thanks,
> > Derek.
> >
> > [1] - https://trello.com/tripleo
> >
>
Derek - you weren't online today when I went to ping you. Can you please add me so I can track it for RHCI purposes?

Thanks!

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Infrastructure Integration
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen


From jim at jimrollenhagen.com  Wed Sep  9 17:04:22 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 9 Sep 2015 10:04:22 -0700
Subject: [openstack-dev] [Ironic] Introducing Ironic 4.1.0
Message-ID: <20150909170422.GF21846@jimrollenhagen.com>

Hi all,

I'm proud to announce the release of Ironic 4.1.0! You may be asking yourself,
"what happened to 4.0.0?" I'll get to that in a minute.

This is an intermediate point release - we plan to release 4.2.0 in a few
weeks as the basis for the coordinated Liberty release.

This brings some bug fixes and small features on top of Ironic 4.0.0.
Major changes are listed below and at http://docs.openstack.org/developer/ironic/releasenotes/
and full release details are available on Launchpad: https://launchpad.net/ironic/liberty/4.1.0

* Added CORS support
* Removed deprecated 'admin_api' policy rule
* Deprecated the 'parallel' option to periodic task decorator

Ironic 4.0.0 was released on August 24, as the first semver release of
Ironic. This marks a pivot in our versioning schema from date-based
versioning; the previous released version was 2015.1. Full release
details are available on Launchpad:
https://launchpad.net/ironic/liberty/4.0.0.

* Raised API version to 1.11

 - v1.7 exposes a new 'clean_step' property on the Node resource.
 - v1.8 and v1.9 improve query and filter support
 - v1.10 fixes Node logical names to support all `RFC 3986`_ unreserved
   characters
 - v1.11 changes the default state of newly created Nodes from AVAILABLE to
   ENROLL

* Support for the new ENROLL workflow during Node creation

  Previously, all Nodes were created in the "available" provision state - before
  management credentials were validated, hardware was burned in, etc. This could
  lead to workloads being scheduled to Nodes that were not yet ready for them.

  Beginning with API v1.11, newly created Nodes begin in the ENROLL state,
  and must be "managed" and "provided" before they are made available for
  provisioning. API clients must be updated to handle the new workflow when they
  begin sending the X-OpenStack-Ironic-API-Version header with a value >= 1.11.
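
For illustration, opting into the new workflow is just a matter of the
version header on the node-create request.  The endpoint and driver below are
placeholders, not a real deployment:

```python
from urllib.request import Request

# Hypothetical node-create request against a placeholder Ironic endpoint.
req = Request(
    'http://ironic.example.com:6385/v1/nodes',
    data=b'{"driver": "agent_ipmitool"}',
    headers={
        'Content-Type': 'application/json',
        # Requesting >= 1.11 means the new node starts in ENROLL,
        # not AVAILABLE, and must be managed/provided before deployment.
        'X-OpenStack-Ironic-API-Version': '1.11',
    },
    method='POST',
)
```

Clients that omit the header keep the old behavior for as long as their
negotiated version stays below 1.11.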

* Migrations from Nova "baremetal" have been removed

  After a deprecation period, the scripts and support for migrating from
  the old Nova "baremetal" driver to the new Nova "ironic" driver have
  been removed from Ironic's tree.

* Removal of deprecated vendor driver methods

  A new @passthru decorator was introduced to the driver API in a previous
  release. In this release, support for vendor_passthru and
  driver_vendor_passthru methods has been removed. All in-tree drivers have
  been updated. Any out of tree drivers which did not update to the
  @passthru decorator during the previous release will need to do so to be
  compatible with this release.

* Introduce new BootInterface to the Driver API

  Drivers may optionally add a new BootInterface. This is merely a
  refactoring of the Driver API to support future improvements.

* Several hardware drivers have been added or enhanced

 - Add OCS Driver
 - Add UCS Driver
 - Add Wake-On-Lan Power Driver
 - ipmitool driver supports IPMI v1.5
 - Add support to SNMP driver for "APC MasterSwitchPlus" series PDUs
 - pxe_ilo driver now supports UEFI Secure Boot (previous releases of the
   iLO driver only supported this for agent_ilo and iscsi_ilo)
 - Add Virtual Media support to iRMC Driver
 - Add BIOS config to DRAC Driver
 - PXE drivers now support GRUB2

As always, questions/comments/concerns welcome.

// jim + devananda


From jim at jimrollenhagen.com  Wed Sep  9 17:13:36 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 9 Sep 2015 10:13:36 -0700
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
 multiple compute drivers?
In-Reply-To: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
References: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
Message-ID: <20150909171336.GG21846@jimrollenhagen.com>

On Wed, Sep 02, 2015 at 03:42:20PM -0400, Jeff Peeler wrote:
> Hi folks,
> 
> I'm currently looking at supporting Ironic in the Kolla project [1], but
> was unsure if it would be possible to run separate instances of nova
> compute and controller (and scheduler too?) to enable both baremetal and
> libvirt type deployments. I found this mailing list post from two years ago
> [2], asking the same question. The last response in the thread seemed to
> indicate work was being done on the scheduler to support multiple
> configurations, but the review [3] ended up abandoned.
> 
> Are the current requirements the same? Perhaps using two availability zones
> would work, but I'm not clear if that works on the same host.

At Rackspace we run Ironic in its own cell, and use cells filters to
direct builds to the right place.

The other option that supposedly works is host aggregates. I'm not sure
host aggregates support running two scheduler instances (because you'll
want different filters), but maybe they do?

// jim



From pabelanger at redhat.com  Wed Sep  9 17:22:05 2015
From: pabelanger at redhat.com (Paul Belanger)
Date: Wed, 9 Sep 2015 13:22:05 -0400
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
 tent?
In-Reply-To: <55EF663E.8050909@redhat.com>
References: <20150908145755.GC16241@localhost.localdomain>
 <55EF663E.8050909@redhat.com>
Message-ID: <20150909172205.GA13717@localhost.localdomain>

On Tue, Sep 08, 2015 at 06:50:38PM -0400, Emilien Macchi wrote:
> 
> 
> On 09/08/2015 10:57 AM, Paul Belanger wrote:
> > Greetings,
> > 
> > I wanted to start a discussion about the future of ansible / ansible roles in
> > OpenStack. Over the last week or so I've started down the ansible path, starting
> > my first ansible role; I've started with ansible-role-nodepool[1].
> > 
> > My initial question is simple, now that big tent is upon us, I would like
> > some way to include ansible roles into the openstack git workflow.  I first
> > thought the role might live under openstack-infra; however, I am not sure that
> > is the right place.  My reason is, -infra tends to include modules they
> > currently run under the -infra namespace, and I don't want to start the effort
> > to convince people to migrate.
> 
> I'm wondering what would be the goal of ansible-role-nodepool and what
> it would orchestrate exactly. I did not find a README that explains it,
> and digging into the code makes me think you try to prepare nodepool
> images but I don't exactly see why.
> 
> Since we already have puppet-nodepool, I'm curious about the purpose of
> this role.
> IMHO, if we had to add such a new repo, it would be under
> openstack-infra namespace, to be consistent with other repos
> (puppet-nodepool, etc).
> 
> > Another thought might be to reach out to the os-ansible-deployment team and ask
> > how they see roles in OpenStack moving forward (mostly the reason for this
> > email).
> 
> os-ansible-deployment aims to setup OpenStack services in containers
> (LXC). I don't see relation between os-ansible-deployment (openstack
> deployment related) and ansible-role-nodepool (infra related).
> 
> > Either way, I would be interested in feedback on moving forward on this. Using
> > travis-ci and github works but OpenStack workflow is much better.
> > 
> > [1] https://github.com/pabelanger/ansible-role-nodepool
> > 
> 
> To me, it's unclear how and why we are going to use ansible-role-nodepool.
> Could you explain with use-case?
> 
The most basic use case is managing nodepool using ansible, for the purpose of
CI.  Basically, rewrite puppet-nodepool using ansible.  I won't go into the
reasoning for that, except to say people do not want to use puppet.

Regarding os-ansible-deployment, the two are only related in that both use
ansible. I don't see os-ansible-deployment using the module; however, I would
hope to learn best practices and code reviews from the team.

Wherever the module lives, I would hope people interested in ansible
development would be grouped somehow.

> Thanks,
> -- 
> Emilien Macchi
> 



From dolph.mathews at gmail.com  Wed Sep  9 17:27:23 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Wed, 9 Sep 2015 12:27:23 -0500
Subject: [openstack-dev] [all] Something about being a PTL
In-Reply-To: <20150909151005.GL6373@redhat.com>
References: <20150909151005.GL6373@redhat.com>
Message-ID: <CAC=h7gVLgfPLjqnG3rQBi-Ji7oAzzAPtFsCjYiGdfu8WXkb8hw@mail.gmail.com>

+1 Fantastically well said. I'd encourage all current and potential PTLs to
take these words to heart.

> I believe it's safe enough to say that you'll have to spend 60% to 70% of
your time upstream, assuming the project is a busy one.

The busier the project, the closer to 100% this becomes. For keystone, I
would expect nothing less than 100%. It's truly a full time job.

Also note that there is no mention of writing code! :( Your job is to serve
the community, and writing code is simply not the most efficient way to
accomplish that.

And as Michael Krotscheck mentioned, if you believe you'd be a great PTL
but your employer would not fully support the time commitment, then for
both the health of OpenStack and your success, I'd encourage you to change
employers (look for the employers who have a history of supporting multiple
PTLs that aren't quickly burning out).

On Wed, Sep 9, 2015 at 10:10 AM, Flavio Percoco <flavio at redhat.com> wrote:

> Greetings,
>
> Next week many folks will be running for PTL positions and I thought
> about taking the time to dump[0] some thoughts about what being a PTL
> means - at least for me - and what one should consider before running.
>
> Since the audience I want to reach is mostly in this mailing list, I
> thought about sending it here as well.
>
> [0] http://blog.flaper87.com/post/something-about-being-a-ptl/
> Flavio
>
>
> It's that time of the cycle, in OpenStack, when projects need to elect
> who's going to be the PTL for the next 6 months. People look at the,
> hopefully many, candidacies and vote based on the proposals that are
> more sound to them. I believe, for the PTL elections, the voting
> process has worked decently, which is why this post is not meant for
> voters but for the, hopefully many, PTL candidates.
>
> First and foremost, thank you. Thanks for raising your hand and being
> willing to take on this role. It's an honor to have you in the
> community and I wish you the best of luck in this round. Below are a
> few things that I hope will help you in the preparation of your
> candidacy and that I also hope will help make you a better PTL and
> community member.
>
>
> Why do you want to be a PTL?
> ============================
>
> Before you even start writing your candidacy, please ask yourself why you
> want to be a PTL. What is it that you want to bring to the project
> that is good for both the project and the community? You don't really
> need to get stuck on this question forever, and you don't really need to
> bring something new to the project.
>
> In my opinion, a very good answer for the above could be: "I believe
> I'll provide the right guidance to the community and the project."
>
> Seriously, one mistake that new PTLs often make is to believe they are
> on their own. Turns out that PTLs aren't. The whole point about being a
> PTL is to help the community and to improve it. You're not going to do
> that if you think you're the one pulling the community. PTLs ought to
> work *with* the community, not *for* the community.
>
> This leads me to my next point
>
> Be part of the community
> ========================
>
> Being a PTL is more than just going through launchpad and keeping an
> eye on the milestones. That's a lot of work, true. But here's a
> secret, it takes more time to be involved with the community of the
> project you're serving than going through launchpad.
>
> As a PTL, you have to be around. You have to keep an eye on the
> mailing list on a daily basis. You have to talk to the members of the
> community you're serving because you have to be up-to-date about the
> things that are happening in the project and the community. There may
> be conflicts in reviews and bugs, and you have to be there to help solve
> those.
>
> Among all the things you'll have to do, the community should be in the
> top 2 of your priorities. I'm not talking just about the community of
> the project you're working on. I'm talking about OpenStack. Does your
> project have an impact on other projects? Is your project part of
> DefCore? Is your project widely deployed? What are the deprecation
> guarantees provided? Does your project consume common libraries? What
> can your project contribute back to the rest of the community?
>
> There are *many* things related to the project's community and its
> interaction with the rest of the OpenStack community that are
> important and that should be taken care of. However, you're not alone,
> you have a community. Remember, you'll be serving the community, it's
> not the other way around. Working with the community is the best thing
> you can do.
>
> As you can imagine, the above is exhausting and it takes time. It
> takes a lot of time, which leads me to my next point.
>
> Make sure you'll have time
> ==========================
>
> There are a few things impossible in this world, predicting time
> availability is one of them. Nonetheless, we can get really close
> estimates and you should strive, *before* sending your candidacy, to
> get the closest estimate of your upstream availability for the next 6
> months.
>
> Being a PTL is an upstream job; it has nothing (at the very least, it
> shouldn't have anything) to do with your actual employer. Being a PTL is an
> *upstream* job and you have to be *upstream* to do it correctly.
> If you think you won't have time in a couple of months then, please,
> don't run for PTL. If you think your manager will be asking you to
> focus downstream then, please, don't run for PTL. If you think you'll
> have other personal matters to take care of then, please, don't run
> for PTL.
>
> What I'm trying to say is that you should sit down and think of what
> your next 6 months will look like time-wise. I believe it's safe
> enough to say that you'll have to spend 60% to 70% of your time
> upstream, assuming the project is a busy one.
>
> The above, though, is not to say that you shouldn't run when in doubt.
> Actually, I'd rather have a great PTL for 3 months who'll then step
> down than have the community led by someone not motivated
> enough who was forced to run.
>
> Create new PTLs
> ===============
>
> Just like in every other leadership position, you should help create
> other PTLs. Understand that winning the PTL election puts you in a
> position where you have to strive to improve the project and the
> community. As part of your responsibilities with regards to the
> community, you should encourage folks to run for PTL.
> Being a PTL takes a lot of time and energy and you'll have to step
> down[0], eventually. As a PTL, you may want to have folks from the
> community ready to take over when you step down. I believe it's
> healthy for the community to change PTLs every 2 cycles (if not every
> cycle).
>
> Community decides
> =================
>
> One of the things I always say to PTLs is that they are not dictators.
> Decisions are still supposed to be taken by the community at large and
> not by the PTL. However, being in a leading position gives you some
> extra "trust" that the community may end up following.
>
> Remember that as a PTL, you'll be serving the community and not the
> other way around. You should lead based on what is best for the
> project and the community rather than based on what's best for your
> company or, even worse, based on what will make your manager happy. If
> those two things happen to overlap, then AWESOME! Many times they
> don't, therefore you should be ready to take a pragmatic decision that
> may not be the best for the company you work for and that, certainly,
> won't make your manager happy.
>
> Are you ready to make that call?
>
> Closing
> =======
>
> By all means, this post is not meant to discourage you. If anything,
> it's meant to encourage you to jump in and be amazing. It's been an
> honor for me to have served as a PTL and I'm sure it'll be for you as
> well.
>
> Despite it not being an exhaustive list and the role experiences
> varying from one project to another, I hope the above will provide
> enough information about what PTLs are meant to do so that your
> excitement and desire to serve as one will grow.
>
> Thanks for considering being a PTL; I look forward to reading your
> candidacy.
>
> [0]: Note to existing PTLs, consider stepping down and helping others
> become PTLs. It's healthier for the community you're serving to change
> PTLs
>
> --
> @flaper87
> Flavio Percoco
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/3c6e992f/attachment.html>

From sean at dague.net  Wed Sep  9 17:36:37 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 9 Sep 2015 13:36:37 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
Message-ID: <55F06E25.5000906@dague.net>

We've got a new pattern emerging where some of the key functionality in
services is moving into libraries that can be called from different
services. A good instance of this is os-brick, which has the setup /
config functionality for devices that sometimes need to be called by
cinder and sometimes need to be called by nova when setting up a guest.
Many of these actions need root access, so require rootwrap filters.

The point of putting this logic into a library is that it's self
contained, and that it can be an upgrade unit that is distinct from the
upgrade unit of either nova or cinder.

The problem.... rootwrap.conf. Projects ship an example rootwrap.conf
which specifies filter files like such:

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap

however, we'd really like to be loading those filters from files the
library controls, so they are in sync with the library functionality.
Knowing where those files are going to be turns out to be a really
interesting guessing game. And, for security reasons, having a super
large set of paths rootwrap is guessing at seems really unwise.

It seems like what we really want is something more like this:

[filters]
nova=compute,network
os-brick=os-brick

Which would translate into a symbolic look up via:

filter_file = resource_filename(project, '%s.filters' % filter)
... read the filter file

So that rootwrap would reference things symbolically instead of by
static file paths, which will differ depending on the install method.
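
The symbolic lookup could be sketched like this, assuming each project ships
its filter files as package data (the function name and the shape of the
`[filters]` section are illustrative, not an agreed design):

```python
from pkg_resources import resource_filename  # provided by setuptools


def resolve_filter_files(filters_section):
    """Map '[filters]' entries like {'nova': 'compute,network'} to the
    .filters files shipped inside each project's installed package."""
    paths = []
    for project, names in filters_section.items():
        for name in names.split(','):
            # e.g. the 'compute.filters' file inside the installed nova tree
            paths.append(resource_filename(project, '%s.filters' % name))
    return paths
```

The point is that the path is derived from the installed package location,
so it stays correct across pip, distro-package, and virtualenv installs.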


For liberty we just hacked around it and put the os-brick rules into
Nova and Cinder. It's late in the release, and a clear better path
forward wasn't out there. It does mean the upgrade of the two components
is a bit coupled in the Fibre Channel case, but it was the best we could do.

I'd like to get the discussion rolling about the proposed solution
above. It emerged from #openstack-cinder this morning as we attempted to
get some kind of workable solution and figure out what was next. We
should definitely do a summit session on this one to nail down the
details and the path forward.

	-Sean

-- 
Sean Dague
http://dague.net


From chris.friesen at windriver.com  Wed Sep  9 17:40:55 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 9 Sep 2015 11:40:55 -0600
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com> <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
Message-ID: <55F06F27.5060202@windriver.com>

On 09/09/2015 10:53 AM, Poulos, Brianna L. wrote:
> Stuart is right about what will currently happen in Nova when an image is
> downloaded, which protects against unintentional modifications to the
> image data.
>
> What is currently being worked on is adding the ability to verify a
> signature of the checksum.

It should be noted that this does not protect against a compromised compute node.

For an end-user that cares about this case, I think you'd pretty much need 
self-checking within the guest to ensure that its running system matches a 
downloaded manifest (or something like that).

Chris


From doug at doughellmann.com  Wed Sep  9 17:54:08 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 09 Sep 2015 13:54:08 -0400
Subject: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI
In-Reply-To: <55F010BC.4040206@redhat.com>
References: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>
 <55F010BC.4040206@redhat.com>
Message-ID: <1441821189-sup-984@lrrr.local>

Excerpts from Dmitry Tantsur's message of 2015-09-09 12:58:04 +0200:
> On 09/09/2015 12:15 PM, Dougal Matthews wrote:
> > Hi,
> >
> > The tripleo-common library appears to be registered on PyPI but hasn't yet had
> > a release[1]. I am not familiar with the release process - what do we need to
> > do to make sure it is regularly released with other TripleO packages?
> 
> I think this is a good start: 
> https://github.com/openstack/releases/blob/master/README.rst

That repo isn't managed by the release team, so you don't need to submit
a release request as described there. You can, however, use the tools to
tag a release yourself. Drop by #openstack-relmgr-office if you have
questions about the tools or process, and I'll be happy to offer
whatever guidance I can.

Doug

> 
> >
> > We will also want to do something similar with the new python-tripleoclient
> > which doesn't seem to be registered on PyPI yet at all.
> 
> And instack-undercloud.
> 
> >
> > Thanks,
> > Dougal
> >
> > [1]: https://pypi.python.org/pypi/tripleo-common
> >
> >
> 


From doug at doughellmann.com  Wed Sep  9 18:04:49 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 09 Sep 2015 14:04:49 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F06E25.5000906@dague.net>
References: <55F06E25.5000906@dague.net>
Message-ID: <1441821533-sup-4318@lrrr.local>

Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:
> We've got a new pattern emerging where some of the key functionality in
> services is moving into libraries that can be called from different
> services. A good instance of this is os-brick, which has the setup /
> config functionality for devices that sometimes need to be called by
> cinder and sometimes need to be called by nova when setting up a guest.
> Many of these actions need root access, so require rootwrap filters.
> 
> The point of putting this logic into a library is that it's self
> contained, and that it can be an upgrade unit that is distinct from the
> upgrade unit of either nova or cinder.
> 
> The problem.... rootwrap.conf. Projects ship an example rootwrap.conf
> which specifies filter files like such:
> 
> [DEFAULT]
> # List of directories to load filter definitions from (separated by ',').
> # These directories MUST all be only writeable by root !
> filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
> 
> however, we'd really like to be loading those filters from files the
> library controls, so they are in sync with the library functionality.
> Knowing where those files are going to be turns out to be a really
> interesting guessing game. And, for security reasons, having a super
> large set of paths rootwrap is guessing at seems really unwise.
> 
> It seems like what we really want is something more like this:
> 
> [filters]
> nova=compute,network
> os-brick=os-brick
> 
> Which would translate into a symbolic look up via:
> 
> filter_file = resource_filename(project, '%s.filters' % filter)
> ... read the filter file

Right now rootwrap takes as input an oslo.config file, which it reads to
find the filter_path config value, which it interprets as a directory
containing a bunch of other INI files, which it then reads and merges
together into a single set of filters. I'm not sure the symbolic lookup
you're proposing is going to support that use of multiple files. Maybe
it shouldn't?

What about allowing filter_path to contain more than one directory
to scan? That would let projects using os-brick pass their own path and
os-brick's path, if it's different.
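
That multi-directory variant could look roughly like this.  A sketch only:
real oslo.rootwrap parses `[Filters]` entries into filter objects with more
structure than the flat dict shown here.

```python
import configparser
import glob
import os


def load_filters(filters_path):
    """Merge [Filters] entries from every .filters file found in each
    comma-separated directory, later directories overriding earlier ones."""
    merged = {}
    for directory in filters_path.split(','):
        for path in sorted(glob.glob(os.path.join(directory, '*.filters'))):
            config = configparser.RawConfigParser()
            config.read(path)
            if config.has_section('Filters'):
                merged.update(config.items('Filters'))
    return merged
```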

Doug

> 
> So that rootwrap would be referencing things symbolically instead of
> static / filebased which is going to be different depending on install
> method.
> 
> 
> For liberty we just hacked around it and put the os-brick rules into
> Nova and Cinder. It's late in the release, and a clear better path
> forward wasn't out there. It does mean the upgrade of the two components
> is a bit coupled in the Fibre Channel case. But it was the best we could do.
> 
> I'd like to get the discussion rolling about the proposed solution
> above. It emerged from #openstack-cinder this morning as we attempted to
> get some kind of workable solution and figure out what was next. We
> should definitely do a summit session on this one to nail down the
> details and the path forward.
> 
>     -Sean
> 


From vkozhukalov at mirantis.com  Wed Sep  9 18:09:48 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Wed, 9 Sep 2015 21:09:48 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
Message-ID: <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>

Alex,

>> The idea is to remove MOS DEB repo from the Fuel master node by default and
>> use online MOS repo instead. Pros of such an approach are:
>>
>> 0) Reduced requirement for the master node minimal disk space
>>
>
> Is this a problem? How much disk space is saved If I have to go create a
> local mirror via fuel-createmirror?
>

It is not a problem at all, but it is still a pro of the proposal.


> 1) There won't be such things like [1] and [2], thus less complicated
>> flow, less errors, easier to maintain, easier to understand, easier to
>> troubleshoot
>> 2) If one wants to have local mirror, the flow is the same as in case of
>> upstream repos (fuel-createmirror), which is clear for a user to
>> understand.
>>
>
> From the issues I've seen, fuel-createmirror isn't very straightforward
> and has some issues making it a bad UX.
>

I'd say the whole approach of having such a tool as fuel-createmirror is
way too naive. A reliable internet connection is a matter of network
engineering rather than deployment. Even using a proxy is much better than
creating a local mirror. But this discussion is totally out of the scope of
this letter. Currently, we have fuel-createmirror and it is pretty
straightforward (installed as an rpm, it has just a couple of command line
options). The quality of this script is also out of the scope of this
thread. BTW, we have plans to improve it.


>
>> Many people still associate ISO with MOS, but it is not true when using
>> package based delivery approach.
>>
>> It is easy to define necessary repos during deployment and thus it is
>> easy to control what exactly is going to be installed on slave nodes.
>>
>> What do you guys think of it?
>>
>>
>>
> Reliance on internet connectivity has been an issue since 6.1. For many
> large users, complete access to the internet is not available or not
> desired.  If we want to continue down this path, we need to improve the
> tools to setup the local mirror and properly document what urls/ports/etc
> need to be available for the installation of openstack and any mirror
> creation process.  The ideal thing is to have an all-in-one CD similar to a
> live cd that allows a user to completely try out fuel wherever they want
> with out further requirements of internet access.  If we don't want to
> continue with that, we need to do a better job around providing the tools
> for a user to get up and running in a timely fashion.  Perhaps providing an
> net-only iso and an all-included iso would be a better solution so people
> will have their expectations properly set up front?
>

Let me explain why I think having a local MOS mirror by default is bad:
1) I don't see any reason why we should treat the MOS repo any differently
from all the other online repos. A user sees on the settings tab a list of
repos, one of which is local by default while the others are online. It can
make the user a little confused, can't it? A user can also be confused by
the fact that some of the repos can be cloned locally by fuel-createmirror
while others can't. That is not straightforward, and it is not a
fuel-createmirror UX issue.
2) Having a local MOS mirror by default makes things much more convoluted.
We are forced to have several directories with predefined names, and we are
forced to manage these directories in nailgun, in the upgrade script, etc.
Why?
3) When putting a MOS mirror on the ISO, we make people think that the ISO
is equal to MOS, which is not true. It is possible to implement a really
flexible delivery scheme, but we need to think of these things as
independent.
For large users it is easy to build a custom ISO and put into it what they
need, but first we need to have a simple working scheme that is clear to
everyone. I think dealing with all repos the same way is what is going to
make things simpler.

This thread is not about internet connectivity; it is about aligning things.



> -Alex
>
>
>>
>> Vladimir Kozhukalov
>>
>> On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
>> vkozhukalov at mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> The idea is to remove MOS DEB repo from the Fuel master node by default
>>> and use online MOS repo instead. Pros of such an approach are:
>>>
>>> 0) Reduced requirement for the master node minimal disk space
>>> 1) There won't be such things like [1] and [2], thus less complicated
>>> flow, less errors, easier to maintain, easier to understand, easier to
>>> troubleshoot
>>> 2) If one wants to have local mirror, the flow is the same as in case of
>>> upstream repos (fuel-createmirror), which is clear for a user to
>>> understand.
>>>
>>> Many people still associate ISO with MOS
>>>
>>>
>>>
>>>
>>>
>>> [1]
>>> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
>>> [2]
>>> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/21ae6d53/attachment.html>

From doug at doughellmann.com  Wed Sep  9 18:18:21 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 09 Sep 2015 14:18:21 -0400
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EF97EE.9090801@swartzlander.org>
References: <55E83B84.5000000@openstack.org>
 <55EF1BCA.8020708@swartzlander.org> <1441735065-sup-780@lrrr.local>
 <55EF97EE.9090801@swartzlander.org>
Message-ID: <1441822516-sup-8235@lrrr.local>

Excerpts from Ben Swartzlander's message of 2015-09-08 22:22:38 -0400:
> On 09/08/2015 01:58 PM, Doug Hellmann wrote:
> > Excerpts from Ben Swartzlander's message of 2015-09-08 13:32:58 -0400:
> >> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> >>> Hi everyone,
> >>>
> >>> A feature deprecation policy is a standard way to communicate and
> >>> perform the removal of user-visible behaviors and capabilities. It helps
> >>> setting user expectations on how much and how long they can rely on a
> >>> feature being present. It gives them reassurance over the timeframe they
> >>> have to adapt in such cases.
> >>>
> >>> In OpenStack we always had a feature deprecation policy that would apply
> >>> to "integrated projects", however it was never written down. It was
> >>> something like "to remove a feature, you mark it deprecated for n
> >>> releases, then you can remove it".
> >>>
> >>> We don't have an "integrated release" anymore, but having a base
> >>> deprecation policy, and knowing which projects are mature enough to
> >>> follow it, is a great piece of information to communicate to our users.
> >>>
> >>> That's why the next-tags workgroup at the Technical Committee has been
> >>> working to propose such a base policy as a 'tag' that project teams can
> >>> opt to apply to their projects when they agree to apply it to one of
> >>> their deliverables:
> >>>
> >>> https://review.openstack.org/#/c/207467/
> >>>
> >>> Before going through the last stage of this, we want to survey existing
> >>> projects to see which deprecation policy they currently follow, and
> >>> verify that our proposed base deprecation policy makes sense. The goal
> >>> is not to dictate something new from the top, it's to reflect what's
> >>> generally already applied on the field.
> >>>
> >>> In particular, the current proposal says:
> >>>
> >>> "At the very minimum the feature [...] should be marked deprecated (and
> >>> still be supported) in the next two coordinated end-of-cyle releases.
> >>> For example, a feature deprecated during the M development cycle should
> >>> still appear in the M and N releases and cannot be removed before the
> >>> beginning of the O development cycle."
> >>>
> >>> That would be a n+2 deprecation policy. Some suggested that this is too
> >>> far-reaching, and that a n+1 deprecation policy (feature deprecated
> >>> during the M development cycle can't be removed before the start of the
> >>> N cycle) would better reflect what's being currently done. Or that
> >>> config options (which are user-visible things) should have n+1 as long
> >>> as the underlying feature (or behavior) is not removed.
> >>>
> >>> Please let us know what makes the most sense. In particular between the
> >>> 3 options (but feel free to suggest something else):
> >>>
> >>> 1. n+2 overall
> >>> 2. n+2 for features and capabilities, n+1 for config options
> >>> 3. n+1 overall
> >> I think any discussion of a deprecation policy needs to be combined with
> >> a discussion about LTS (long term support) releases. Real customers (not
> >> devops users -- people who pay money for support) can't deal with
> >> upgrades every 6 months.
> >>
> >> Unavoidably, distros are going to want to support certain releases for
> >> longer than the normal upstream support window so they can satisfy the
> >> needs of the aforementioned customers. This will be true whether the
> >> deprecation policy is N+1, N+2, or N+3.
> >>
> >> It makes sense for the community to define LTS releases and coordinate
> >> making sure all the relevant projects are mutually compatible at that
> >> release point. Then the job of actually maintaining the LTS release can
> >> fall on people who care about such things. The major benefit to solving
> >> the LTS problem, though, is that deprecation will get a lot less painful
> >> because you could assume upgrades to be one release at a time or
> >> skipping directly from one LTS to the next, and you can reduce your
> >> upgrade test matrix accordingly.
> > How is this fundamentally different from what we do now with stable
> > releases, aside from involving a longer period of time?
> 
> It would be a recognition that most customers don't want to upgrade 
> every 6 months -- they want to skip over 3 releases and upgrade every 2 
> years. I'm sure there are customers all over the spectrum from those who 
> run master to those to do want a new release every 6 month, to some that 
> want to install something and run it forever without upgrading*. My 
> intuition is that, for most customers, 2 years is a reasonable amount of 
> time to run a release before upgrading. I think major Linux distros 
> understand this, as is evidenced by their release and support patterns.

As Jeremy pointed out, we have quite a bit of trouble now maintaining
stable branches. Given that, it just doesn't seem realistic in this
community right now to expect support for a 2-year LTS period, so I see
no real reason to base other policies around the idea that we might do
it in the future. If we *do* end up finding more interest in
stable/LTS support, then we can revisit the deprecation period at that
point.

Doug

> 
> As sdague mentions, the idea of LTS is really a separate goal from the 
> deprecation policy, but I see the two becoming related when the 
> deprecation policy makes it impossible to cleanly jump 4 releases in a 
> single upgrade. I also believe that if you solve the LTS problem, the 
> deprecation policy flows naturally from whatever your supported-upgrade 
> path is: you simply avoid breaking anyone who does a supported upgrade.
> 
> It sounds to me like the current supported upgrade path is: you upgrade 
> each release one at a time, never skipping over a release. In this 
> model, N+1 deprecation makes perfect sense. I think the same people who 
> want longer deprecation periods are the ones who want to skip over 
> releases when upgrading, for the reasons I mention.
> 
> -Ben
> 
> * I'm the kind that never upgrades. I don't fix things that aren't 
> broken. Until recently I was running FreeBSD 7 and Ubuntu 8.04. 
> Eventually I was forced to upgrade though when support was dropped. I'm 
> *still* running CentOS 5 though.
> 
> > Doug
> >
> >> -Ben Swartzlander
> >>
> >>> Thanks in advance for your input.
> >>>
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From mriedem at linux.vnet.ibm.com  Wed Sep  9 18:45:29 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 9 Sep 2015 13:45:29 -0500
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <1441821533-sup-4318@lrrr.local>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
Message-ID: <55F07E49.1010202@linux.vnet.ibm.com>



On 9/9/2015 1:04 PM, Doug Hellmann wrote:
> Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:
>> We've got a new pattern emerging where some of the key functionality in
>> services is moving into libraries that can be called from different
>> services. A good instance of this is os-brick, which has the setup /
>> config functionality for devices that sometimes need to be called by
>> cinder and sometimes need to be called by nova when setting up a guest.
>> Many of these actions need root access, so require rootwrap filters.
>>
>> The point of putting this logic into a library is that it's self
>> contained, and that it can be an upgrade unit that is distinct from the
>> upgrade unit of either nova or cinder.
>>
>> The problem.... rootwrap.conf. Projects ship an example rootwrap.conf
>> which specifies filter files like such:
>>
>> [DEFAULT]
>> # List of directories to load filter definitions from (separated by ',').
>> # These directories MUST all be only writeable by root !
>> filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
>>
>> however, we'd really like to be loading those filters from files the
>> library controls, so they are in sync with the library functionality.
>> Knowing where those files are going to be turns out to be a really
>> interesting guessing game. And, for security reasons, having a super
>> large set of paths rootwrap is guessing at seems really unwise.
>>
>> It seems like what we really want is something more like this:
>>
>> [filters]
>> nova=compute,network
>> os-brick=os-brick
>>
>> Which would translate into a symbolic look up via:
>>
>> filter_file = resource_filename(project, '%s.filters' % filter)
>> ... read the filter file
>
> Right now rootwrap takes as input an oslo.config file, which it reads to
> find the filter_path config value, which it interprets as a directory
> containing a bunch of other INI files, which it then reads and merges
> together into a single set of filters. I'm not sure the symbolic lookup
> you're proposing is going to support that use of multiple files. Maybe
> it shouldn't?
>
> What about allowing filter_path to contain more than one directory
> to scan? That would let projects using os-brick pass their own path and
> os-brick's path, if it's different.
>
> Doug
>
>>
>> So that rootwrap would be referencing things symbolically instead of
>> static / filebased which is going to be different depending on install
>> method.
>>
>>
>> For liberty we just hacked around it and put the os-brick rules into
>> Nova and Cinder. It's late in the release, and a clear better path
>> forward wasn't out there. It does mean the upgrade of the two components
>> is a bit coupled in the Fibre Channel case. But it was the best we could do.
>>
>> I'd like to get the discussion rolling about the proposed solution
>> above. It emerged from #openstack-cinder this morning as we attempted to
>> get some kind of workable solution and figure out what was next. We
>> should definitely do a summit session on this one to nail down the
>> details and the path forward.
>>
>>      -Sean
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

The problem with the static file paths in rootwrap.conf is that we don't 
know where those other library filter files are going to end up on the 
system when the library is installed.  We could hard-code nova's 
rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d", but then 
the deploy/config management tooling that is installing this stuff 
needs to copy that directory structure from the os-brick install 
location (which we're finding non-deterministic, at least when using 
data_files with pbr) to the target location that rootwrap.conf cares about.

That's why we were proposing adding things to rootwrap.conf that 
oslo.rootwrap can parse and process dynamically using the resource 
access stuff in pkg_resources, so we just say 'I want you to load the 
os-brick.filters file from the os-brick project, thanks.'.
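The pkg_resources lookup Matt refers to could look roughly like this (a hedged sketch: the helper name and the convention that a library ships a `<name>.filters` file as package data are assumptions for illustration, not the actual oslo.rootwrap implementation):

```python
# Resolve a filter file shipped inside an installed Python package via
# pkg_resources, instead of guessing at filesystem paths.  This assumes
# the library packages its '<name>.filters' file alongside its code.
from pkg_resources import resource_filename


def lookup_filter_file(project, filter_name):
    """Return the on-disk path of '<filter_name>.filters' inside project."""
    return resource_filename(project, '%s.filters' % filter_name)

# e.g. lookup_filter_file('os_brick', 'os-brick') would resolve to
# wherever pip or distro packaging actually installed os-brick.
```

The point is that the path is derived from the package metadata at runtime, so it stays correct regardless of install method.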

-- 

Thanks,

Matt Riedemann



From robertc at robertcollins.net  Wed Sep  9 18:55:17 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Thu, 10 Sep 2015 06:55:17 +1200
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F07E49.1010202@linux.vnet.ibm.com>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com>
Message-ID: <CAJ3HoZ3=R5WN4-EBDxN1j6OP6k2+X7QCgpCkNqhHbHf3ZfeBCw@mail.gmail.com>

On 10 September 2015 at 06:45, Matt Riedemann
<mriedem at linux.vnet.ibm.com> wrote:
>

> The problem with the static file paths in rootwrap.conf is that we don't
> know where those other library filter files are going to end up on the
> system when the library is installed.  We could hard-code nova's
> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d", but then
> that means the deploy/config management tooling that installing this stuff
> needs to copy that directory structure from the os-brick install location
> (which we're finding non-deterministic, at least when using data_files with
> pbr) to the target location that rootwrap.conf cares about.
>
> That's why we were proposing adding things to rootwrap.conf that
> oslo.rootwrap can parse and process dynamically using the resource access
> stuff in pkg_resources, so we just say 'I want you to load the
> os-brick.filters file from the os-brick project, thanks.'.

So, I realise that's a bit sucky. My suggestion would be to just take
the tactical approach of syncing things into each consuming tree - and
dogpile onto the privsep daemon asap.

privsep is the outcome of Gus' experiments with having a Python API to
talk a richer language than shell command lines to a privileged
daemon, with one (or more) dedicated daemon processes per server
process. It avoids all of the churn and difficulties in mapping
complex things through the command line (none of our rootwrap files
are actually secure). And it's massively lower latency and better
performing.

 https://review.openstack.org/#/c/204073/
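Robert's point about "a richer language than shell command lines" can be illustrated with a toy sketch. This is not the privsep API (every name here is invented), and a thread stands in for what would really be a separate root-privileged daemon process; the shape of the idea is what matters:

```python
# Toy sketch: the unprivileged side sends a named operation with
# structured, typed arguments; a dedicated worker executes it from an
# explicit allow-list and returns the result.  Compare with rootwrap,
# which must pattern-match a flat shell command line against filters.
import queue
import threading


def _set_mtu(device, mtu):
    # Stand-in for a privileged operation; a real daemon would perform
    # this as root, holding only the capabilities it needs.
    return 'set mtu of %s to %d' % (device, mtu)


# The moral equivalent of a filter file, keyed by function name rather
# than by shell command prefix.
ALLOWED = {'set_mtu': _set_mtu}


def _worker(calls, results):
    for name, args, kwargs in iter(calls.get, None):
        results.put(ALLOWED[name](*args, **kwargs))


class PrivChannel:
    """Unprivileged side: send structured calls, receive results."""

    def __init__(self):
        self._calls = queue.Queue()
        self._results = queue.Queue()
        self._thread = threading.Thread(
            target=_worker, args=(self._calls, self._results), daemon=True)
        self._thread.start()

    def call(self, name, *args, **kwargs):
        self._calls.put((name, args, kwargs))
        return self._results.get()

    def close(self):
        self._calls.put(None)
        self._thread.join()
```

Because arguments arrive as typed Python values rather than a string to be re-parsed, there is no quoting/escaping surface to get wrong, which is the class of problem Robert says the filter files cannot actually close.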

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From srics.r at gmail.com  Wed Sep  9 19:07:27 2015
From: srics.r at gmail.com (Sridhar Ramaswamy)
Date: Wed, 9 Sep 2015 12:07:27 -0700
Subject: [openstack-dev] [Tacker][NFV] Proposal for experimental features
Message-ID: <CAK6Sh4DrACOk_H2-M7gFwnF-H4YQtF66K_OLcagRP+sQiQXung@mail.gmail.com>

Folks:

We are gathering momentum in our activities towards building a VNFM / NFVO
in OpenStack Tacker project [1]. As discussed in the last week's irc
meeting [2], I'd like to propose few "experimental features" within Tacker.

It is a well-known practice to use an experimental tag to introduce
bleeding-edge features in OpenStack. We realize some of these experimental
features have various unknowns - sometimes in architectural clarity (e.g.
the specific roles of NFVO subcomponents) and other times in a specific
downstream dependency (like ODL-SFC support). The experimental feature tag
will allow a safe place to iterate in these areas, perhaps fail fast, and
eventually reach our goal. Experimental features will be marked as such in
the tacker docs (coming soon). Once an experimental feature's usage is
vetted, the "experimental" tag will be removed.

The following two features are identified as the initial candidates,

1) VNF Forwarding Graph using SFC API

    SFC efforts introduced by Tim Rozet will form the basis of this track.
The wider Tacker team can pitch in to carry this forward into a functional
VNF Forwarding Graph feature.

2) Basic NSD support

     Basic Network Service Descriptor (NSD) support to instantiate a
sequence of VNFs (described in VNFD). When (1) above becomes available this
track can build on a combined NSD + VNFFD for a fully orchestrated network
service chain.

As always comments and inputs are welcome.

- Sridhar

[1] https://wiki.openstack.org/wiki/Tacker
[2]
http://eavesdrop.openstack.org/meetings/tacker/2015/tacker.2015-09-03-16.03.log.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/8f18362a/attachment.html>

From aschultz at mirantis.com  Wed Sep  9 19:29:25 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 9 Sep 2015 14:29:25 -0500
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
Message-ID: <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>

Hey Vladimir,


>
>
>> 1) There won't be such things like [1] and [2], thus less complicated
>>> flow, less errors, easier to maintain, easier to understand, easier to
>>> troubleshoot
>>> 2) If one wants to have local mirror, the flow is the same as in case of
>>> upstream repos (fuel-createmirror), which is clear for a user to
>>> understand.
>>>
>>
>> From the issues I've seen, fuel-createmirror isn't very straightforward
>> and has some issues making it a bad UX.
>>
>
> I'd say the whole approach of having such tool as fuel-createmirror is a
> way too naive. Reliable internet connection is totally up to network
> engineering rather than deployment. Even using proxy is much better that
> creating local mirror. But this discussion is totally out of the scope of
> this letter. Currently,  we have fuel-createmirror and it is pretty
> straightforward (installed as rpm, has just a couple of command line
> options). The quality of this script is also out of the scope of this
> thread. BTW we have plans to improve it.
>


Fair enough, I just wanted to raise the UX issues around these types of
things, as they should go into the decision-making process.



>
>>
>>> Many people still associate ISO with MOS, but it is not true when using
>>> package based delivery approach.
>>>
>>> It is easy to define necessary repos during deployment and thus it is
>>> easy to control what exactly is going to be installed on slave nodes.
>>>
>>> What do you guys think of it?
>>>
>>>
>>>
>> Reliance on internet connectivity has been an issue since 6.1. For many
>> large users, complete access to the internet is not available or not
>> desired.  If we want to continue down this path, we need to improve the
>> tools to setup the local mirror and properly document what urls/ports/etc
>> need to be available for the installation of openstack and any mirror
>> creation process.  The ideal thing is to have an all-in-one CD similar to a
>> live cd that allows a user to completely try out fuel wherever they want
>> with out further requirements of internet access.  If we don't want to
>> continue with that, we need to do a better job around providing the tools
>> for a user to get up and running in a timely fashion.  Perhaps providing an
>> net-only iso and an all-included iso would be a better solution so people
>> will have their expectations properly set up front?
>>
>
> Let me explain why I think having local MOS mirror by default is bad:
> 1) I don't see any reason why we should treat MOS  repo other way than all
> other online repos. A user sees on the settings tab the list of repos one
> of which is local by default while others are online. It can make user a
> little bit confused, can't it? A user can be also confused by the fact,
> that some of the repos can be cloned locally by fuel-createmirror while
> others can't. That is not straightforward, NOT fuel-createmirror UX.
>


I agree. The process should be the same, and it should be just another
repo. That doesn't mean we can't include a version on an ISO as part of a
release. Would it be better to provide the mirror on the ISO but not have
it enabled by default for a release, so that we can gather user feedback on
this? This would include improved documentation, and possibly allowing a
user to choose their preference so that we can collect metrics.


2) Having local MOS mirror by default makes things much more convoluted. We
> are forced to have several directories with predefined names and we are
> forced to manage these directories in nailgun, in upgrade script, etc. Why?
> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
> to MOS, which is not true. It is possible to implement really flexible
> delivery scheme, but we need to think of these things as they are
> independent.
>


I'm not sure what you mean by this. Including a point-in-time copy on an
ISO as a release is a common method of distributing software. Is this a
messaging thing that needs to be addressed? Perhaps I'm not familiar with
people referring to the ISO as being MOS.


For large users it is easy to build custom ISO and put there what they need
> but first we need to have simple working scheme clear for everyone. I think
> dealing with all repos the same way is what is gonna makes things simpler.
>
>

Who is going to build a custom ISO? How does one request that? What
resources are consumed by custom ISO creation process/request? Does this
scale?



> This thread is not about internet connectivity, it is about aligning
> things.
>
>
You are correct in that this thread is not explicitly about internet
connectivity, but they are related. Any change to remove a local
repository and only provide an internet-based solution makes internet
connectivity something that needs to be included in the discussion. I just
want to make sure that we properly evaluate this decision based on end-user
feedback, not because we don't want to manage this from a developer
standpoint.

-Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/317f98ab/attachment.html>

From ricardo.carrillo.cruz at gmail.com  Wed Sep  9 19:31:57 2015
From: ricardo.carrillo.cruz at gmail.com (Ricardo Carrillo Cruz)
Date: Wed, 9 Sep 2015 21:31:57 +0200
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
	tent?
In-Reply-To: <20150909172205.GA13717@localhost.localdomain>
References: <20150908145755.GC16241@localhost.localdomain>
 <55EF663E.8050909@redhat.com>
 <20150909172205.GA13717@localhost.localdomain>
Message-ID: <CADe0dKC6YkSx_yjp4-gg+GqAoYvdjbLpzAXJi9NqYZMx4trm2g@mail.gmail.com>

I'm interested in ansible roles for openstack-infra, but as there is
overlap in functionality with the current openstack-infra puppet roles,
I'm not sure what the stance is from the openstack-infra core members and
PTL.

I think they should go to openstack-infra, since Nodepool/Zuul/etc are
very specific to the OpenStack CI.

The question is whether we should have a subgroup within the
openstack-infra namespace for 'stuff that is not used by OpenStack CI but
interesting from a CI perspective and/or used by other downstream groups'.

Regards

2015-09-09 19:22 GMT+02:00 Paul Belanger <pabelanger at redhat.com>:

> On Tue, Sep 08, 2015 at 06:50:38PM -0400, Emilien Macchi wrote:
> >
> >
> > On 09/08/2015 10:57 AM, Paul Belanger wrote:
> > > Greetings,
> > >
> > > I wanted to start a discussion about the future of ansible / ansible
> roles in
> > > OpenStack. Over the last week or so I've started down the ansible
> path, starting
> > > my first ansible role; I've started with ansible-role-nodepool[1].
> > >
> > > My initial question is simple, now that big tent is upon us, I would
> like
> > > some way to include ansible roles into the opentack git workflow.  I
> first
> > > thought the role might live under openstack-infra however I am not
> sure that
> > > is the right place.  My reason is, -infra tends to include modules they
> > > currently run under the -infra namespace, and I don't want to start
> the effort
> > > to convince people to migrate.
> >
> > I'm wondering what would be the goal of ansible-role-nodepool and what
> > it would orchestrate exactly. I did not find README that explains it,
> > and digging into the code makes me think you try to prepare nodepool
> > images but I don't exactly see why.
> >
> > Since we already have puppet-nodepool, I'm curious about the purpose of
> > this role.
> > IMHO, if we had to add such a new repo, it would be under
> > openstack-infra namespace, to be consistent with other repos
> > (puppet-nodepool, etc).
> >
> > > Another thought might be to reach out to the os-ansible-deployment
> team and ask
> > > how they see roles in OpenStack moving forward (mostly the reason for
> this
> > > email).
> >
> > os-ansible-deployment aims to setup OpenStack services in containers
> > (LXC). I don't see relation between os-ansible-deployment (openstack
> > deployment related) and ansible-role-nodepool (infra related).
> >
> > > Either way, I would be interested in feedback on moving forward on
> this. Using
> > > travis-ci and github works but OpenStack workflow is much better.
> > >
> > > [1] https://github.com/pabelanger/ansible-role-nodepool
> > >
> >
> > To me, it's unclear how and why we are going to use
> ansible-role-nodepool.
> > Could you explain with use-case?
> >
> The most basic use case is managing nodepool using ansible, for the
> purpose of
> CI.  Basically, rewrite puppet-nodepool using ansible.  I won't go into the
> reasoning for that, except to say people do not want to use puppet.
>
> Regarding os-ansible-deployment, they are only related due to both using
> ansible. I wouldn't see os-ansible-deployment using the module, however I
> would
> hope to learn best practices and code reviews from the team.
>
> Where ever the module lives, I would hope people interested in ansible
> development would be group somehow.
>
> > Thanks,
> > --
> > Emilien Macchi
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/d2f01edf/attachment-0001.html>

From sean at dague.net  Wed Sep  9 19:33:36 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 9 Sep 2015 15:33:36 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <CAJ3HoZ3=R5WN4-EBDxN1j6OP6k2+X7QCgpCkNqhHbHf3ZfeBCw@mail.gmail.com>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com>
 <CAJ3HoZ3=R5WN4-EBDxN1j6OP6k2+X7QCgpCkNqhHbHf3ZfeBCw@mail.gmail.com>
Message-ID: <55F08990.8000605@dague.net>

On 09/09/2015 02:55 PM, Robert Collins wrote:
> On 10 September 2015 at 06:45, Matt Riedemann
> <mriedem at linux.vnet.ibm.com> wrote:
>>
> 
>> The problem with the static file paths in rootwrap.conf is that we don't
>> know where those other library filter files are going to end up on the
>> system when the library is installed.  We could hard-code nova's
>> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d", but then
>> that means the deploy/config management tooling that installs this stuff
>> needs to copy that directory structure from the os-brick install location
>> (which we're finding non-deterministic, at least when using data_files with
>> pbr) to the target location that rootwrap.conf cares about.
>>
>> That's why we were proposing adding things to rootwrap.conf that
>> oslo.rootwrap can parse and process dynamically using the resource access
>> stuff in pkg_resources, so we just say 'I want you to load the
>> os-brick.filters file from the os-brick project, thanks.'.
> 
> So, I realise that's a bit sucky. My suggestion would be to just take
> the tactical approach of syncing things into each consuming tree - and
> dogpile onto the privsep daemon asap.

syncing things to the consuming tree means that you've now coupled
upgrade of os-brick, cinder, and nova to be at the same time. Because
the code to use the filters is in os-brick, but the filters are in
cinder and nova.

That's exactly the opposite direction from where we'd like to move. We
did that workaround for Liberty, but it nearly completely makes
os-brick pointless if it now means cinder and nova must be in lockstep
all the time.

> privsep is the outcome of Gus' experiments with having a Python API to
> talk a richer language than shell command lines to a privileged
> daemon, with one (or more) dedicated daemon processes per server
> process. It avoids all of the churn and difficulties in mapping
> complex things through the command line (none of our rootwrap files
> are actually secure). And it's massively lower latency and better
> performing.
> 
>  https://review.openstack.org/#/c/204073/

If someone else wants to go down this path that's fine. I feel like
that's not an answer to the question at all, because it says that
instead of moving forward with a decoupling mechanism (and we want to do
this kind of library approach in the nova/neutron interaction next
cycle, so this isn't a one off) we have to go into a holding pattern and
completely tear up a piece of core infrastructure for N cycles.

I'm no particular fan of rootwrap, but this suggestion seems pretty
unproductive in solving the problems in front of us, especially given how
deeply the calling interface is embedded in our programs today.

	-Sean

-- 
Sean Dague
http://dague.net


From sgolovatiuk at mirantis.com  Wed Sep  9 19:38:58 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Wed, 9 Sep 2015 21:38:58 +0200
Subject: [openstack-dev] [Fuel][Fuel-Library] Nominating Alex Schultz to
 Fuel-Library Core
In-Reply-To: <104B1A59-FBD9-49B3-AEB4-E2870205D894@mirantis.com>
References: <CA+HkNVsnZL5K_zTZYX7me7zA2k-wHppjMJigjPNkYhe84sz-2g@mail.gmail.com>
 <104B1A59-FBD9-49B3-AEB4-E2870205D894@mirantis.com>
Message-ID: <CA+HkNVtqU9fnMkP4Ng1avfaHZavhqPTUVkr-9R0rFTA1GLsQHQ@mail.gmail.com>

Hi,

My congratulations to Alex!

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Sep 8, 2015 at 8:27 PM, Tomasz Napierala <tnapierala at mirantis.com>
wrote:

> > On 02 Sep 2015, at 01:31, Sergii Golovatiuk <sgolovatiuk at mirantis.com>
> wrote:
> >
> > Hi,
> >
> > I would like to nominate Alex Schultz to the Fuel-Library Core team.
> > He's been doing a great job writing patches. At the same time, his
> > reviews are solid, with comments for further improvements. He's the #3
> > reviewer and #1 contributor, with 46 commits over the last 90 days [1].
> > Additionally, Alex has been very active on IRC, providing great ideas.
> > His "librarian" blueprint [3] was a big step toward the puppet
> > community.
> >
> > Fuel Library, please vote with +1/-1 for approval/objection. Voting
> > will be open until September 9th. This will go forward after voting is
> > closed if there are no objections.
> >
> > Overall contribution:
> > [0] http://stackalytics.com/?user_id=alex-schultz
> > Fuel library contribution for last 90 days:
> > [1] http://stackalytics.com/report/contribution/fuel-library/90
> > List of reviews:
> > [2]
> https://review.openstack.org/#/q/reviewer:%22Alex+Schultz%22+status:merged,n,z
> > "Librarian activities" in the mailing list:
> > [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/071058.html
>
>
> Definitely well deserved for Alex. Outstanding technical work and really
> good community skills! My strong +1
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Product Engineering - Poland
>
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/76a28395/attachment.html>

From sean.mcginnis at gmx.com  Wed Sep  9 20:10:29 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 9 Sep 2015 15:10:29 -0500
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
Message-ID: <20150909201029.GA14695@gmx.com>



> Sent: Wednesday, September 09, 2015 at 2:33 PM
> From: "Sean Dague" <sean at dague.net>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
>
> On 09/09/2015 02:55 PM, Robert Collins wrote:
> > On 10 September 2015 at 06:45, Matt Riedemann
> > <mriedem at linux.vnet.ibm.com> wrote:
> >>
> > 
> >> The problem with the static file paths in rootwrap.conf is that we don't
> >> know where those other library filter files are going to end up on the
> >> system when the library is installed.  We could hard-code nova's
> >> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then
> >> that means the deploy/config management tooling that installing this stuff
> >> needs to copy that directory structure from the os-brick install location
> >> (which we're finding non-deterministic, at least when using data_files with
> >> pbr) to the target location that rootwrap.conf cares about.
> >>
> >> That's why we were proposing adding things to rootwrap.conf that
> >> oslo.rootwrap can parse and process dynamically using the resource access
> >> stuff in pkg_resources, so we just say 'I want you to load the
> >> os-brick.filters file from the os-brick project, thanks.'.
> > 
> > So, I realise thats a bit sucky. My suggestion would be to just take
> > the tactical approach of syncing things into each consuming tree - and
> > dogpile onto the privsep daemon asap.
> 
> syncing things to the consuming tree means that you've now coupled
> upgrade of os-brick, cinder, and nova to be at the same time. Because
> the code to use the filters is in os-brick, but the filters are in
> cinder and nova.
> 
> That's exactly the opposite direction from where we'd like to move. We
> did that work around for Liberty, but that nearly completely makes
> os-brick pointless if it now means cinder and nova must be in lockstep
> all the time.
> 
> > privsep is the outcome of Gus' experiments with having a Python API to
> > talk a richer language than shell command lines to a privileged
> > daemon, with one (or more) dedicated daemon processes per server
> > process. It avoids all of the churn and difficulties in mapping
> > complex things through the command line (none of our rootwrap files
> > are actually secure). And its massively lower latency and better
> > performing.
> > 
> >  https://review.openstack.org/#/c/204073/
> 
> If someone else wants to go down this path that's fine. I feel like
> that's not an answer to the question at all, because it says that
> instead of moving forward with a decoupling mechanism (and we want to do
> this kind of library approach in the nova/neutron interaction next
> cycle, so this isn't a one off) we have to go into a holding pattern and
> completely tear up a piece of core infrastructure for N cycles.
> 
> I'm no particular fan of rootwrap, but this suggestion seems pretty non
> productive in solving the problems in front of us. Especially given how
> deeply the calling interface is embedded in our programs today.
> 
> 	-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From robertc at robertcollins.net  Wed Sep  9 20:13:13 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Thu, 10 Sep 2015 08:13:13 +1200
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F08990.8000605@dague.net>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com>
 <CAJ3HoZ3=R5WN4-EBDxN1j6OP6k2+X7QCgpCkNqhHbHf3ZfeBCw@mail.gmail.com>
 <55F08990.8000605@dague.net>
Message-ID: <CAJ3HoZ2GozXV80MmSSsczjcawV4yYZQ-yUy_kTwzKercgj5odg@mail.gmail.com>

On 10 September 2015 at 07:33, Sean Dague <sean at dague.net> wrote:
> On 09/09/2015 02:55 PM, Robert Collins wrote:
>> On 10 September 2015 at 06:45, Matt Riedemann
>> <mriedem at linux.vnet.ibm.com> wrote:
>>>
>>
>>> The problem with the static file paths in rootwrap.conf is that we don't
>>> know where those other library filter files are going to end up on the
>>> system when the library is installed.  We could hard-code nova's
>>> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then
>>> that means the deploy/config management tooling that installing this stuff
>>> needs to copy that directory structure from the os-brick install location
>>> (which we're finding non-deterministic, at least when using data_files with
>>> pbr) to the target location that rootwrap.conf cares about.
>>>
>>> That's why we were proposing adding things to rootwrap.conf that
>>> oslo.rootwrap can parse and process dynamically using the resource access
>>> stuff in pkg_resources, so we just say 'I want you to load the
>>> os-brick.filters file from the os-brick project, thanks.'.
>>
>> So, I realise thats a bit sucky. My suggestion would be to just take
>> the tactical approach of syncing things into each consuming tree - and
>> dogpile onto the privsep daemon asap.
>
> syncing things to the consuming tree means that you've now coupled
> upgrade of os-brick, cinder, and nova to be at the same time. Because
> the code to use the filters is in os-brick, but the filters are in
> cinder and nova.
>
> That's exactly the opposite direction from where we'd like to move. We
> did that work around for Liberty, but that nearly completely makes
> os-brick pointless if it now means cinder and nova must be in lockstep
> all the time.
>
>> privsep is the outcome of Gus' experiments with having a Python API to
>> talk a richer language than shell command lines to a privileged
>> daemon, with one (or more) dedicated daemon processes per server
>> process. It avoids all of the churn and difficulties in mapping
>> complex things through the command line (none of our rootwrap files
>> are actually secure). And its massively lower latency and better
>> performing.
>>
>>  https://review.openstack.org/#/c/204073/
>
> If someone else wants to go down this path that's fine. I feel like
> that's not an answer to the question at all, because it says that
> instead of moving forward with a decoupling mechanism (and we want to do
> this kind of library approach in the nova/neutron interaction next
> cycle, so this isn't a one off) we have to go into a holding pattern and
> completely tear up a piece of core infrastructure for N cycles.

If it's not helpful, it's not helpful - sorry. My perspective was that we
want privsep in deployments for M, so if the brick code can move to
privsep during M, there's only one cycle during which rootwrap filters
may need to be adjusted.

> I'm no particular fan of rootwrap, but this suggestion seems pretty non
> productive in solving the problems in front of us. Especially given how
> deeply the calling interface is embedded in our programs today.

AIUI the case here is that there is a library, and you call that.
Which means the calling interface of rootwrap isn't broadly at issue -
we only need to migrate the calls w/in brick to solve the issue.

Anyhow, as you say, it may not be helpful - so I'll not kibitz, EOF
:). If I do have thoughts on the rootwrap thing specifically, I'll
write a separate reply.

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From sean.mcginnis at gmx.com  Wed Sep  9 20:14:24 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 9 Sep 2015 15:14:24 -0500
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F08990.8000605@dague.net>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com>
 <CAJ3HoZ3=R5WN4-EBDxN1j6OP6k2+X7QCgpCkNqhHbHf3ZfeBCw@mail.gmail.com>
 <55F08990.8000605@dague.net>
Message-ID: <20150909201424.GA14814@gmx.com>

On Wed, Sep 09, 2015 at 03:33:36PM -0400, Sean Dague wrote:
> On 09/09/2015 02:55 PM, Robert Collins wrote:
> > On 10 September 2015 at 06:45, Matt Riedemann
> > <mriedem at linux.vnet.ibm.com> wrote:
> >>
> > So, I realise thats a bit sucky. My suggestion would be to just take
> > the tactical approach of syncing things into each consuming tree - and
> > dogpile onto the privsep daemon asap.

This does look interesting, but I would be very hesitant to change
everything right away to move from rootwrap to privsep, even assuming
privsep lands and is stable enough to use in time.

> 
> syncing things to the consuming tree means that you've now coupled
> upgrade of os-brick, cinder, and nova to be at the same time. Because
> the code to use the filters is in os-brick, but the filters are in
> cinder and nova.
> 
> That's exactly the opposite direction from where we'd like to move. We
> did that work around for Liberty, but that nearly completely makes
> os-brick pointless if it now means cinder and nova must be in lockstep
> all the time.

Agreed. I would like to see a clean separation of these. The reason this
is even a big issue right now is that a command was added to os-brick's
rootwrap filters that was not picked up by Nova and Cinder. It only
affected fibre channel attached storage, so we didn't realize there was
an issue until the third-party CIs of FC drivers all started failing.

I do like the proposed approach of passing the library in to rootwrap
and letting rootwrap take care of loading its filters. It does bring
up some security questions, but as a consumer of a library I think it
makes sense to tell rootwrap: hey, I'm using this library over there;
do what it says it needs to do.
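For what it's worth, a rough sketch of what that symbolic lookup might look like (this is not actual oslo.rootwrap code; the mapping shape and names are purely illustrative):

```python
# Illustrative sketch only: resolve a {project: 'filter1,filter2'}
# mapping to concrete filter-file paths using pkg_resources, so the
# filters always come from wherever the library itself is installed.
from pkg_resources import resource_filename


def load_filter_paths(filters_section):
    """Turn e.g. {'os_brick': 'os-brick'} into filter file paths."""
    paths = []
    for project, names in sorted(filters_section.items()):
        for name in names.split(','):
            # Symbolic lookup: the path tracks wherever the package
            # actually got installed, with no guessing at /etc locations.
            paths.append(resource_filename(project, '%s.filters' % name))
    return paths
```

How the resulting files then get merged (the way filters_path directories are merged today) would still be an open question.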

Sean
(smcginnis)

PS - pardon the mail client SNAFU just sent prior to this. Oops.


From murali.allada at RACKSPACE.COM  Wed Sep  9 20:41:40 2015
From: murali.allada at RACKSPACE.COM (Murali Allada)
Date: Wed, 9 Sep 2015 20:41:40 +0000
Subject: [openstack-dev] [magnum]  keystone pluggable model
Message-ID: <1441831299970.62471@RACKSPACE.COM>

Hi All,


In the IRC meeting yesterday, I brought up this new blueprint I opened.


https://blueprints.launchpad.net/magnum/+spec/pluggable-keystone-model


The goal of this blueprint is to allow magnum operators to integrate with their version of keystone easily with downstream patches.


The goal is NOT to implement support for keystone version 2 upstream, but to make it easy for operators to integrate with V2 if they need to.


Most of the work required for this is already done in this patch.


https://review.openstack.org/#/c/218699


However, we didn't want to address this change in the same review.


We just need to refactor the code a little further and isolate all version-specific keystone code in one file.


See my comments in the following review for details on what this change entails.


https://review.openstack.org/#/c/218699/5/magnum/common/clients.py


https://review.openstack.org/#/c/218699/5/magnum/common/keystone.py
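As a rough illustration of the shape that refactoring could take (hypothetical names, not Magnum's actual code), the idea is that a downstream v2 patch only has to register one extra class:

```python
# Hypothetical sketch: all keystone-version-specific logic lives in
# one module behind a tiny registry; an operator patches in their own
# implementation downstream without touching the rest of the tree.
class KeystoneV3(object):
    version = '3'

    def auth_url(self, base):
        # v3-specific URL shape
        return base.rstrip('/') + '/v3'


_BACKENDS = {'3': KeystoneV3}


def register_backend(version, cls):
    """Downstream hook: add e.g. a v2 implementation here."""
    _BACKENDS[version] = cls


def get_keystone(version='3'):
    try:
        return _BACKENDS[version]()
    except KeyError:
        raise ValueError('unsupported keystone version: %s' % version)
```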


Thanks,

Murali
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/425c7043/attachment.html>

From zbitter at redhat.com  Wed Sep  9 21:01:14 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 9 Sep 2015 17:01:14 -0400
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
In-Reply-To: <94346481835D244BB7F6486C00E9C1BA2AE20194@FR711WXCHMBA06.zeu.alcatel-lucent.com>
References: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>
 <55EEF653.4040909@redhat.com>
 <94346481835D244BB7F6486C00E9C1BA2AE20194@FR711WXCHMBA06.zeu.alcatel-lucent.com>
Message-ID: <55F09E1A.9050508@redhat.com>

On 09/09/15 04:10, SHTILMAN, Tomer (Tomer) wrote:
> We are currently building a multi-cloud setup with keystone federation in our lab, and I will check whether my understanding is correct. I am planning to propose a BP for this once things are clear.

There was further interest in this at the IRC meeting today (from Daniel 
Gonzalez), so I raised this blueprint:

https://blueprints.launchpad.net/heat/+spec/multi-cloud-federation

I left the Drafter and Assignee fields blank, so whoever starts working 
on the spec and the code, respectively, should put their names in those 
fields. If you see someone else's name there, you should co-ordinate 
with them to avoid double-handling.

cheers,
Zane.


From msm at redhat.com  Wed Sep  9 21:33:01 2015
From: msm at redhat.com (michael mccune)
Date: Wed, 9 Sep 2015 17:33:01 -0400
Subject: [openstack-dev] [sahara] FFE request for nfs-as-a-data-source
In-Reply-To: <6EEB8A90CDE31C4680037A635100E8FF953DBC@SHSMSX104.ccr.corp.intel.com>
References: <6EEB8A90CDE31C4680037A635100E8FF953DBC@SHSMSX104.ccr.corp.intel.com>
Message-ID: <55F0A58D.7000205@redhat.com>

i'm +1 for this feature as long as we're talking about just the sahara 
controller and saharaclient. i agree we probably cannot get the horizon 
changes in before the final release.

mike

On 09/09/2015 03:33 AM, Chen, Weiting wrote:
> Hi, all.
>
> I would like to request FFE for nfs as a data source for sahara.
>
> This bp was originally supposed to include a dashboard change to create
> nfs as a data source.
>
> I will register that as another bp and implement it in the next version.
>
> However, these patches have already been done to put the nfs driver into
> sahara-image-elements and enable it in the cluster.
>
> This way, the user can use the nfs protocol via the command line in the
> Liberty release.
>
> Blueprint:
>
> https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
>
> Spec:
>
> https://review.openstack.org/#/c/210839/
>
> Patch:
>
> https://review.openstack.org/#/c/218637/
>
> https://review.openstack.org/#/c/218638/
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From robertc at robertcollins.net  Wed Sep  9 22:22:36 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Thu, 10 Sep 2015 10:22:36 +1200
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <1441822516-sup-8235@lrrr.local>
References: <55E83B84.5000000@openstack.org>
 <55EF1BCA.8020708@swartzlander.org> <1441735065-sup-780@lrrr.local>
 <55EF97EE.9090801@swartzlander.org>
 <1441822516-sup-8235@lrrr.local>
Message-ID: <CAJ3HoZ0=5U_Tts3bbSbrOs0b=_6dBbcg9ZGxXLSFhnZnS3wTEg@mail.gmail.com>

On 10 September 2015 at 06:18, Doug Hellmann <doug at doughellmann.com> wrote:
> Excerpts from Ben Swartzlander's message of 2015-09-08 22:22:38 -0400:

>> It would be a recognition that most customers don't want to upgrade
>> every 6 months -- they want to skip over 3 releases and upgrade every 2
>> years. I'm sure there are customers all over the spectrum, from those
>> who run master, to those who do want a new release every 6 months, to
>> some who want to install something and run it forever without
>> upgrading*. My intuition is that, for most customers, 2 years is a
>> reasonable amount of time to run a release before upgrading. I think
>> major Linux distros understand this, as is evidenced by their release
>> and support patterns.
>
> As Jeremy pointed out, we have quite a bit of trouble maintaining
> stable branches now. Given that, it just doesn't seem realistic in
> this community right now to expect support for a 2-year-long LTS
> period, and I see no real reason to base other policies around the
> idea that we might do it in the future. If we *do* end up finding
> more interest in stable/LTS support, then we can revisit the
> deprecation period at that point.

Also, there are a number of things bundled up into this one thing today:
 - schema migrations
 - cross-version compatibility (or lack thereof) of deps [and
co-installability too]
 - RPC compatibility
 - config file compatibility

All of which impact the ability to do rolling upgrades. Non-rolling
upgrades without config changes are a bit of a special case, but also
important.

So any LTS discussion needs to cover:
 - resourcing the maintenance of the LTS branch
 - solving the technical problems in keeping the branch running as the
platform it was developed on ages underneath it
 - solving the technical problems in dealing with libraries that are
moving on while it stays static (and please no, do not suggest LTS
branches for oslo libraries - they are only a subset of the problem)
 - dealing with having compat code hang around in-tree for much longer

So the backwards compatibility aspect of this discussion is really
just the tip of the iceberg.

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From robertc at robertcollins.net  Wed Sep  9 22:34:02 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Thu, 10 Sep 2015 10:34:02 +1200
Subject: [openstack-dev] [requirements] attention requirements-cores,
 please look out for constraints updates
In-Reply-To: <CAGi==UUiun7Q3f4YRe+hi5acPiHVG9jFA227LHaAX8x_zVhZsA@mail.gmail.com>
References: <CAJ3HoZ0eBWU0VWZ0-f3JY0ttqf2=O5Z+gGq5veW6V8LfjvKLEQ@mail.gmail.com>
 <559CE6F3.20507@openstack.org>
 <CAJ3HoZ2Mk5e_r=mTx5L=L+uFi3bjWDR9i+F3UBUGvDrCnuXhLw@mail.gmail.com>
 <1436396493-sup-2848@lrrr.local>
 <CAJ3HoZ3T1oVxCNXMQONCmJCEq-NCoTOWU4gfMYXhXxYXRKjB2g@mail.gmail.com>
 <CAGi==UUiun7Q3f4YRe+hi5acPiHVG9jFA227LHaAX8x_zVhZsA@mail.gmail.com>
Message-ID: <CAJ3HoZ3w9xs4SMGT-_=roz7E36FXxsYRAqZfh5MnKOqnFO9joQ@mail.gmail.com>

On 9 September 2015 at 22:22, Alan Pevec <apevec at gmail.com> wrote:
>> I'd like to add in a lower-constraints.txt set of pins and actually
>> start reporting on whether our lower bounds *work*.
>
> Do you have a spec in progress for lower-constraints.txt?
> It should help catch issues like https://review.openstack.org/221267
> There are also lots of entries in global-requirements without minimum
> version set while they should:
> http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n226

Not yet. Got some other fish to fry first :) - but I'd certainly be
happy to review a spec if someone else wants to work on
lower-constraints testing.

On the bare library thing - I think that advice is overly
prescriptive. Certainly with something like testtools, a bare version
is bad - it's a mature library and older versions certainly aren't
relevant today. OTOH, with something brand new like os-brick, all
versions may well be ok.
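To illustrate what a lower-bounds job would check (hypothetical package and versions): it installs the declared minimums instead of the newest allowed versions, e.g.

```text
# global-requirements.txt: the allowed range, with an explicit minimum
testtools>=1.4.0

# lower-constraints.txt: pin every dependency at its declared minimum,
# so CI actually proves the lower bound still works
testtools==1.4.0
```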

Cheers,
Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From jpeeler at redhat.com  Wed Sep  9 22:50:25 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Wed, 9 Sep 2015 18:50:25 -0400
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
 multiple compute drivers?
In-Reply-To: <20150909171336.GG21846@jimrollenhagen.com>
References: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
 <20150909171336.GG21846@jimrollenhagen.com>
Message-ID: <CALesnTyuK17bUpYuA=9q+_L5TU7xxAF=tdsQmwtPtr+Z1vmt1w@mail.gmail.com>

I'd greatly prefer using availability zones/host aggregates, as I'm
trying to keep the footprint as small as possible. It does appear from
the section "configure scheduler to support host aggregates" [1] that I
can configure filtering using just one scheduler (right?). Perhaps more
importantly, though, I'm now unsure whether, given the network
configuration changes required for Ironic, deploying normal instances
alongside baremetal servers is even possible.

[1]
http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
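As a sketch of the host-aggregate route (Kilo-era option names from that reference; the aggregate metadata key and flavor names below are purely illustrative):

```ini
# nova.conf fragment (illustrative): enable the aggregate filter so
# flavors with matching extra_specs land only on the baremetal hosts
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter
```

Then something along the lines of `nova aggregate-create baremetal`, `nova aggregate-set-metadata baremetal baremetal=true`, and `nova flavor-key <bm-flavor> set aggregate_instance_extra_specs:baremetal=true` would steer baremetal flavors to the Ironic compute host - though whether that coexists cleanly with Ironic's network requirements is exactly the open question above.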

On Wed, Sep 9, 2015 at 1:13 PM, Jim Rollenhagen <jim at jimrollenhagen.com>
wrote:

> On Wed, Sep 02, 2015 at 03:42:20PM -0400, Jeff Peeler wrote:
> > Hi folks,
> >
> > I'm currently looking at supporting Ironic in the Kolla project [1], but
> > was unsure if it would be possible to run separate instances of nova
> > compute and controller (and scheduler too?) to enable both baremetal and
> > libvirt type deployments. I found this mailing list post from two years
> ago
> > [2], asking the same question. The last response in the thread seemed to
> > indicate work was being done on the scheduler to support multiple
> > configurations, but the review [3] ended up abandoned.
> >
> > Are the current requirements the same? Perhaps using two availability
> zones
> > would work, but I'm not clear if that works on the same host.
>
> At Rackspace we run Ironic in its own cell, and use cells filters to
> direct builds to the right place.
>
> The other option that supposedly works is host aggregates. I'm not sure
> host aggregates support running two scheduler instances (because you'll
> want different filters), but maybe they do?
>
> // jim
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/b32224e2/attachment.html>

From openstack at nemebean.com  Wed Sep  9 22:56:43 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Wed, 09 Sep 2015 17:56:43 -0500
Subject: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI
In-Reply-To: <1441821189-sup-984@lrrr.local>
References: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>
 <55F010BC.4040206@redhat.com> <1441821189-sup-984@lrrr.local>
Message-ID: <55F0B92B.2000002@nemebean.com>

On 09/09/2015 12:54 PM, Doug Hellmann wrote:
> Excerpts from Dmitry Tantsur's message of 2015-09-09 12:58:04 +0200:
>> On 09/09/2015 12:15 PM, Dougal Matthews wrote:
>>> Hi,
>>>
>>> The tripleo-common library appears to be registered on PyPI but hasn't
>>> yet had a release [1]. I am not familiar with the release process -
>>> what do we need to do to make sure it is regularly released with other
>>> TripleO packages?
>>
>> I think this is a good start: 
>> https://github.com/openstack/releases/blob/master/README.rst
> 
> That repo isn't managed by the release team, so you don't need to submit
> a release request as described there. You can, however, use the tools to
> tag a release yourself. Drop by #openstack-relmgr-office if you have
> questions about the tools or process, and I'll be happy to offer
> whatever guidance I can.

We have tripleo-specific docs for doing releases on the wiki:
https://wiki.openstack.org/wiki/TripleO/ReleaseManagement

Someday when I stop dropping the ball on getting the tripleo launchpad
permissions fixed, we'll be able to move to the same release model as
everyone else...

> 
> Doug
> 
>>
>>>
>>> We will also want to do something similar with the new python-tripleoclient
>>> which doesn't seem to be registered on PyPI yet at all.
>>
>> And instack-undercloud.
>>
>>>
>>> Thanks,
>>> Dougal
>>>
>>> [1]: https://pypi.python.org/pypi/tripleo-common
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From doug at doughellmann.com  Wed Sep  9 23:16:44 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 09 Sep 2015 19:16:44 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F07E49.1010202@linux.vnet.ibm.com>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com>
Message-ID: <1441840459-sup-9283@lrrr.local>

Excerpts from Matt Riedemann's message of 2015-09-09 13:45:29 -0500:
> 
> On 9/9/2015 1:04 PM, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:
> >> We've got a new pattern emerging where some of the key functionality in
> >> services is moving into libraries that can be called from different
> >> services. A good instance of this is os-brick, which has the setup /
> >> config functionality for devices that sometimes need to be called by
> >> cinder and sometimes need to be called by nova when setting up a guest.
> >> Many of these actions need root access, so require rootwrap filters.
> >>
> >> The point of putting this logic into a library is that it's self
> >> contained, and that it can be an upgrade unit that is distinct from the
> >> upgrade unit of either nova or cinder.
> >>
> >> The problem.... rootwrap.conf. Projects ship an example rootwrap.conf
> >> which specifies filter files like such:
> >>
> >> [DEFAULT]
> >> # List of directories to load filter definitions from (separated by ',').
> >> # These directories MUST all be only writeable by root !
> >> filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
> >>
> >> however, we'd really like to be loading those filters from files the
> >> library controls, so they are in sync with the library functionality.
> >> Knowing where those files are going to be turns out to be a really
> >> interesting guessing game. And, for security reasons, having a super
> >> large set of paths rootwrap is guessing at seems really unwise.
> >>
> >> It seems like what we really want is something more like this:
> >>
> >> [filters]
> >> nova=compute,network
> >> os-brick=os-brick
> >>
> >> Which would translate into a symbolic look up via:
> >>
> >> filter_file = resource_filename(project, '%s.filters' % filter)
> >> ... read the filter file
> >
> > Right now rootwrap takes as input an oslo.config file, which it reads to
> > find the filter_path config value, which it interprets as a directory
> > containing a bunch of other INI files, which it then reads and merges
> > together into a single set of filters. I'm not sure the symbolic lookup
> > you're proposing is going to support that use of multiple files. Maybe
> > it shouldn't?
> >
> > What about allowing filter_path to contain more than one directory
> > to scan? That would let projects using os-brick pass their own path and
> > os-brick's path, if it's different.
> >
> > Doug
> >
> >>
> >> So that rootwrap would be referencing things symbolically instead of
> >> static / filebased which is going to be different depending on install
> >> method.
> >>
> >>
> >> For liberty we just hacked around it and put the os-brick rules into
> >> Nova and Cinder. It's late in the release, and a clear better path
> >> forward wasn't out there. It does mean the upgrade of the two components
> >> is a bit coupled in the fiber channel case. But it was the best we could do.
> >>
> >> I'd like to get the discussion rolling about the proposed solution
> >> above. It emerged from #openstack-cinder this morning as we attempted to
> >> get some kind of workable solution and figure out what was next. We
> >> should definitely do a summit session on this one to nail down the
> >> details and the path forward.
> >>
> >>      -Sean
> >>
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> The problem with the static file paths in rootwrap.conf is that we don't 
> know where those other library filter files are going to end up on the 
> system when the library is installed.  We could hard-code nova's 
> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then 

I thought the configuration file passed to rootwrap was something the
deployer could change, which would let them fix the paths on their
system. Did I misunderstand what the argument was?

> that means the deploy/config management tooling that installing this 
> stuff needs to copy that directory structure from the os-brick install 
> location (which we're finding non-deterministic, at least when using 
> data_files with pbr) to the target location that rootwrap.conf cares about.
> 
> That's why we were proposing adding things to rootwrap.conf that 
> oslo.rootwrap can parse and process dynamically using the resource 
> access stuff in pkg_resources, so we just say 'I want you to load the 
> os-brick.filters file from the os-brick project, thanks.'.
> 

Doesn't that put the rootwrap config file for os-brick in a place the
deployer can't change it? Maybe they're not supposed to? If they're not,
then I agree that burying the actual file inside the library and using
something like pkgtools to get its contents makes more sense.

Doug
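The symbolic lookup Sean proposes above can be sketched in a few lines. This is an illustrative model only: the sample config, the `filter_files` helper, and the stub resolver are invented for the sketch, with the resolver standing in for `pkg_resources.resource_filename(project, '%s.filters' % name)`; it makes no claim about oslo.rootwrap's real internals.

```python
# Sketch of the proposed symbolic filter lookup: a [filters] section maps
# a project name to the filter files shipped inside that project, and a
# resolver (pkg_resources.resource_filename in the real proposal) turns
# each symbolic name into an on-disk path.
import configparser

SAMPLE_CONF = """
[filters]
nova = compute,network
os-brick = os-brick
"""

def filter_files(config_text, resolve):
    """Yield (project, resolved filter file) for every configured filter."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    for project, names in cfg.items('filters'):
        for name in names.split(','):
            yield project, resolve(project, '%s.filters' % name.strip())

# A dummy resolver that just echoes the lookup it was asked to perform:
pairs = list(filter_files(SAMPLE_CONF, lambda proj, fn: '%s:%s' % (proj, fn)))
# pairs holds one entry per filter file, e.g. ('nova', 'nova:compute.filters')
```

Each resolved file would then be read and merged the same way rootwrap merges the per-directory INI files today, so the library's filters stay in lockstep with the library's code.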


From blak111 at gmail.com  Wed Sep  9 23:33:38 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Wed, 9 Sep 2015 16:33:38 -0700
Subject: [openstack-dev] [neutron] port delete allowed on VM
In-Reply-To: <D2148C03.253EB%akalambu@cisco.com>
References: <D2148C03.253EB%akalambu@cisco.com>
Message-ID: <CAO_F6JN1RuaCK0i5JN+8fd85zzCQswpyV+UWG3zUA75unnev+w@mail.gmail.com>

This is expected behavior.

On Tue, Sep 8, 2015 at 12:58 PM, Ajay Kalambur (akalambu) <
akalambu at cisco.com> wrote:

> Hi,
> Today when we create a VM on a port and delete that port, I don't get a
> message saying "Port in Use".
>
> Is there a plan to fix this, or is this expected behavior in Neutron?
>
> Is there a plan to fix this and, if so, is there a bug tracking this?
>
> Ajay
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/97a92f36/attachment.html>

From westlifezs at gmail.com  Wed Sep  9 23:46:52 2015
From: westlifezs at gmail.com (Su Zhang)
Date: Wed, 9 Sep 2015 16:46:52 -0700
Subject: [openstack-dev] [glance] differences between def detail() and def
	index() in glance/registry/api/v1/images.py
Message-ID: <CAPjtyqsQdh_-RW9pv6Y48RKVSN3zF_Ny5hPdndrbCKuUfbzPYw@mail.gmail.com>

Hello,

I am hitting an error and its trace passes def index ()
in glance/registry/api/v1/images.py.

I assume def index() is called by glance image-list. However, while testing
glance image-list I realized that def detail() is called
under glance/registry/api/v1/images.py instead of def index().

Could someone let me know what's the difference between the two functions?
How can I test out def index() under glance/registry/api/v1/images.py
through CLI or API?

Thanks,

-- 
Su Zhang
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/07b4898b/attachment.html>

From jj at chef.io  Wed Sep  9 23:53:53 2015
From: jj at chef.io (JJ Asghar)
Date: Wed, 9 Sep 2015 18:53:53 -0500
Subject: [openstack-dev] [openstack-operators][chef] Pre-release of new
 kitchen-openstack driver, Windows and 1.4 test-kitchen support
Message-ID: <55F0C691.3010804@chef.io>


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Hi Everyone!

I'd like to announce a pre-release[1] of 2.0.0.dev of the
kitchen-openstack[2] driver gem.  I've been able to test this with
Windows 2012R2 and all the flavors of Linux I have on my OpenStack
Cloud, but would love some more feedback.[3]

If you could give this pre-release a shot and post any issues, positive or
negative, I'd like to get this pushed out to a real release late next
week (Sept. 17 or 18).

If you don't know, you can install the gem via: `gem install
kitchen-openstack --pre`

Please take note: with test-kitchen 1.4 there is now a transport option, so
you'll most likely need to add a transport section to your .kitchen.yml.
I've made note of this here[4].

[1]: https://rubygems.org/gems/kitchen-openstack/versions/2.0.0.dev
[2]: https://github.com/test-kitchen/kitchen-openstack
[3]: https://github.com/test-kitchen/kitchen-openstack/issues
[4]: https://github.com/test-kitchen/kitchen-openstack#usage

- -- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV8MaQAAoJEDZbxzMH0+jTCYUP/iUEqNG3WOZ6VJjuZCBO+4Wt
5h2DSvg7Tc7gB+IIUyvz8G++ymatvGyY9zRNhxCJcpQuwndxrfYJuVFB+2KiJ8hS
oVfHcggVms0/DlmGUn8Lr/8GdCCawU2qwjkYg1STJorXCwH6phh6dIhWcPSjus8r
f2/JKStmawFqQ7MW/hI5qvJ2o46AvfHEbzyPChD9YYNMffdZzrUfKVNL9JcCpl4+
N9VU+Y2e2oo1yjKro68tM7JR0qE5gF5k0BgRXcxWkSzPVLB+ilD+mAqCwoaaRmkr
yuxAgWV7kwFWXQnK8O/OJEEX4/EQx6QqC7oR36DrPLGafxW9Jk9Jyj6eh12mt7G9
/uEAuKcket5F6CNLYikGH3Lm7ZhaFD75Of/9ourVWZy4aTl3zX7PaC7SwSb6Yx9B
Flt4O4hf4Nl7PgPzf3kyuWaR+39HmEpF4WwCNQ+NdA92IKebDcsR6SgdcwxxkiOl
5wXhSs8vr+fgEGBYp4ZoEHmGUMWghd/fcoH5yDVt+neM1FB9wJwQAjMUV0z3kCJ1
AEyzwyNHTtflsnL3613//zwWTjKE9U3cHhY7KaiBrL2jO+rDsfi1cAD34usYI1G0
T76D9IAkxZz4TycGWgVzSVTY1ESqJFIxE2BLGeCDHZq/8fQsa7ZlrKif02V/eO+o
jMSXjkVcXHATaMqFIMNe
=f+U2
-----END PGP SIGNATURE-----



From feilong at catalyst.net.nz  Thu Sep 10 00:04:01 2015
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Thu, 10 Sep 2015 12:04:01 +1200
Subject: [openstack-dev] [glance] differences between def detail() and
 def index() in glance/registry/api/v1/images.py
In-Reply-To: <CAPjtyqsQdh_-RW9pv6Y48RKVSN3zF_Ny5hPdndrbCKuUfbzPYw@mail.gmail.com>
References: <CAPjtyqsQdh_-RW9pv6Y48RKVSN3zF_Ny5hPdndrbCKuUfbzPYw@mail.gmail.com>
Message-ID: <55F0C8F1.5080503@catalyst.net.nz>

I assume you're using the Glance client. If so, by default, when you issue
the command 'glance image-list', it calls /v1/images/detail instead of
/v1/images; you can use curl or any HTTP client to see the
difference. Basically, as the endpoint name suggests, /v1/images/detail
gives you more details. See below for the difference in their responses.

Response from /v1/images/detail
{
     "images": [
         {
             "status": "active",
             "deleted_at": null,
             "name": "fedora-21-atomic-3",
             "deleted": false,
             "container_format": "bare",
             "created_at": "2015-09-03T22:56:37.000000",
             "disk_format": "qcow2",
             "updated_at": "2015-09-03T23:00:15.000000",
             "min_disk": 0,
             "protected": false,
             "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
             "min_ram": 0,
             "checksum": "d3b3da0e07743805dcc852785c7fc258",
             "owner": "5f290ac4b100440b8b4c83fce78c2db7",
             "is_public": true,
             "virtual_size": null,
             "properties": {
                 "os_distro": "fedora-atomic"
             },
             "size": 770179072
         }
     ]
}

Response with /v1/images
{
     "images": [
         {
             "name": "fedora-21-atomic-3",
             "container_format": "bare",
             "disk_format": "qcow2",
             "checksum": "d3b3da0e07743805dcc852785c7fc258",
             "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
             "size": 770179072
         }
     ]
}
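The two endpoints differ only in path. A minimal sketch of constructing either request, using only the standard library; the endpoint URL and token below are illustrative placeholders, not real values:

```python
# Build the request 'glance image-list' would issue: the client defaults
# to /v1/images/detail, while plain /v1/images returns the terse listing
# shown above. Endpoint and token are placeholders.
import urllib.request

def image_list_request(endpoint, token, detailed=True):
    path = '/v1/images/detail' if detailed else '/v1/images'
    return urllib.request.Request(endpoint.rstrip('/') + path,
                                  headers={'X-Auth-Token': token})

req = image_list_request('http://glance.example.com:9292', 'TOKEN',
                         detailed=False)
# req.full_url -> 'http://glance.example.com:9292/v1/images'
```

Sending the `detailed=False` request is the easiest way to exercise index() rather than detail() from outside the client.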


On 10/09/15 11:46, Su Zhang wrote:
>
> Hello,
>
> I am hitting an error and its trace passes def index () 
> in glance/registry/api/v1/images.py.
>
> I assume def index() is called by glance image-list. However, while 
> testing glance image-list I realized that def detail() is called 
> under glance/registry/api/v1/images.py instead of def index().
>
> Could someone let me know what's the difference between the two 
> functions? How can I test out def index() under 
> glance/registry/api/v1/images.py through CLI or API?
>
> Thanks,
>
> -- 
> Su Zhang
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (???)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/ba170fe2/attachment.html>

From ytang at mirantis.com  Thu Sep 10 00:58:01 2015
From: ytang at mirantis.com (Yaguang Tang)
Date: Thu, 10 Sep 2015 08:58:01 +0800
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
Message-ID: <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>

On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com> wrote:

>
> Hey Vladimir,
>
>
>>
>>
>>> 1) There won't be such things as in [1] and [2], thus a less complicated
>>>> flow, fewer errors, easier to maintain, easier to understand, easier to
>>>> troubleshoot
>>>> 2) If one wants to have a local mirror, the flow is the same as in the
>>>> case of upstream repos (fuel-createmirror), which is clear for a user to
>>>> understand.
>>>>
>>>
>>> From the issues I've seen, fuel-createmirror isn't very straightforward
>>> and has some issues that make for a bad UX.
>>>
>>
>> I'd say the whole approach of having such a tool as fuel-createmirror is
>> way too naive. A reliable internet connection is totally up to network
>> engineering rather than deployment. Even using a proxy is much better than
>> creating a local mirror. But this discussion is totally out of the scope of
>> this letter. Currently, we have fuel-createmirror and it is pretty
>> straightforward (installed as an rpm, has just a couple of command line
>> options). The quality of this script is also out of the scope of this
>> thread. BTW, we have plans to improve it.
>>
>
>
> Fair enough, I just wanted to raise the UX issues around these types of
> things as they should go into the decision making process.
>
>
>
>>
>>>
>>>> Many people still associate ISO with MOS, but it is not true when using
>>>> package based delivery approach.
>>>>
>>>> It is easy to define necessary repos during deployment and thus it is
>>>> easy to control what exactly is going to be installed on slave nodes.
>>>>
>>>> What do you guys think of it?
>>>>
>>>>
>>>>
>>> Reliance on internet connectivity has been an issue since 6.1. For many
>>> large users, complete access to the internet is not available or not
>>> desired.  If we want to continue down this path, we need to improve the
>>> tools to setup the local mirror and properly document what urls/ports/etc
>>> need to be available for the installation of openstack and any mirror
>>> creation process.  The ideal thing is to have an all-in-one CD similar to a
>>> live cd that allows a user to completely try out fuel wherever they want
>>> with out further requirements of internet access.  If we don't want to
>>> continue with that, we need to do a better job around providing the tools
>>> for a user to get up and running in a timely fashion.  Perhaps providing an
>>> net-only iso and an all-included iso would be a better solution so people
>>> will have their expectations properly set up front?
>>>
>>
>> Let me explain why I think having a local MOS mirror by default is bad:
>> 1) I don't see any reason why we should treat the MOS repo any differently
>> from all the other online repos. A user sees on the settings tab a list of
>> repos, one of which is local by default while the others are online. It can
>> make the user a little bit confused, can't it? A user can also be confused
>> by the fact that some of the repos can be cloned locally by
>> fuel-createmirror while others can't. That is not straightforward, and not
>> good UX.
>>
>
>
> I agree. The process should be the same and it should be just another
> repo. It doesn't mean we can't include a version on an ISO as part of a
> release.  Would it be better to provide the mirror on the ISO but not have
> it enabled by default for a release so that we can gather user feedback on
> this? This would include improved documentation and possibly allowing a
> user to choose their preference so we can collect metrics?
>
>
> 2) Having a local MOS mirror by default makes things much more convoluted.
>> We are forced to have several directories with predefined names, and we are
>> forced to manage these directories in nailgun, in the upgrade script, etc. Why?
>> 3) When putting the MOS mirror on the ISO, we make people think that the
>> ISO is equal to MOS, which is not true. It is possible to implement a
>> really flexible delivery scheme, but we need to think of these things as
>> being independent.
>>
>
>
> I'm not sure what you mean by this. Including a point in time copy on an
> ISO as a release is a common method of distributing software. Is this a
> messaging thing that needs to be addressed? Perhaps I'm not familiar with
> people referring to the ISO as being MOS.
>
>
> For large users it is easy to build a custom ISO and put in it what they
>> need, but first we need to have a simple working scheme that is clear for
>> everyone. I think dealing with all repos the same way is what is going to
>> make things simpler.
>>
>>
>
> Who is going to build a custom ISO? How does one request that? What
> resources are consumed by custom ISO creation process/request? Does this
> scale?
>
>
>
>> This thread is not about internet connectivity, it is about aligning
>> things.
>>
>>
> You are correct in that this thread is not explicitly about internet
> connectivity, but they are related. Any changes to remove a local
> repository and only provide an internet based solution makes internet
> connectivity something that needs to be included in the discussion.  I just
> want to make sure that we properly evaluate this decision based on end user
> feedback not because we don't want to manage this from a developer
> standpoint.
>


 +1, whatever the change is, please keep Fuel as a tool that can deploy
without Internet access; this is part of the reason people like it and why
it's better than other tools.

>
> -Alex
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yaguang Tang
Technical Support, Mirantis China

*Phone*: +86 15210946968
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/7c15f7df/attachment.html>

From jamielennox at redhat.com  Thu Sep 10 00:58:30 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Wed, 9 Sep 2015 20:58:30 -0400 (EDT)
Subject: [openstack-dev] [magnum]  keystone pluggable model
In-Reply-To: <1441831299970.62471@RACKSPACE.COM>
References: <1441831299970.62471@RACKSPACE.COM>
Message-ID: <1016132573.19616535.1441846710883.JavaMail.zimbra@redhat.com>



----- Original Message -----
> From: "Murali Allada" <murali.allada at RACKSPACE.COM>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Thursday, 10 September, 2015 6:41:40 AM
> Subject: [openstack-dev] [magnum]  keystone pluggable model
> 
> 
> 
> Hi All,
>
> In the IRC meeting yesterday, I brought up this new blueprint I opened.
>
> https://blueprints.launchpad.net/magnum/+spec/pluggable-keystone-model
>
> The goal of this blueprint is to allow magnum operators to integrate with
> their version of keystone easily with downstream patches.
>
> The goal is NOT to implement support for keystone version 2 upstream, but to
> make it easy for operators to integrate with V2 if they need to.
>
> Most of the work required for this is already done in this patch.
>
> https://review.openstack.org/#/c/218699
>
> However, we didn't want to address this change in the same review.
>
> We just need to refactor the code a little further and isolate all
> version-specific keystone code to one file.
>
> See my comments in the following review for details on what this change
> entails.
>
> https://review.openstack.org/#/c/218699/5/magnum/common/clients.py
> https://review.openstack.org/#/c/218699/5/magnum/common/keystone.py
>
> Thanks,
>
> Murali

Hi, 

My keystone filter picked this up from the title, so I don't really know anything specifically about magnum here, but can you explain what you are looking for in terms of abstraction a little more?

Looking at the review the only thing that magnum is doing with the keystone API (not auth) is trust creation - and this is a v3 only feature so there's not much value to a v2 client there. I know this is a problem that heat has run into and done a similar solution where there is a contrib v2 module that short circuits some functions and leaves things somewhat broken. I don't think they would recommend it.

The other thing is auth. A version-independent auth mechanism is something that keystoneclient has supplied for a while now. Here are two blog posts that show how to use sessions and auth plugins[1][2] from keystoneclient, such that the type (service passwords must die) or version of authentication used is a completely deployment-configuration choice. All clients I know of, with the exception of swift, support sessions and plugins, so this would seem like an ideal time for magnum to adopt them rather than reinvent auth version abstraction; you'll get some wins like not having to hack already-authenticated tokens into each client.

From the general design around client management it looks like you've taken some inspiration from heat, so you might be interested in the recently merged patches that convert heat to using auth plugins.

If you need any help with this please ping me on IRC. 


Jamie


[1] http://www.jamielennox.net/blog/2014/09/15/how-to-use-keystoneclient-sessions/
[2] http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/
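The pluggable pattern Jamie describes can be modeled in a few lines. This is a simplified illustration of the idea only; the class and method names below are invented for the sketch and are not keystoneclient's real API (see the blog posts above for that).

```python
# Minimal model of the session/plugin pattern: service code talks only to
# a Session, and the auth version is purely a choice of which plugin the
# deployer configures. Tokens here are fake strings; a real plugin would
# call the identity API.
import abc

class AuthPlugin(abc.ABC):
    """Version-specific auth lives behind this interface."""
    @abc.abstractmethod
    def get_token(self):
        """Return a token however this auth version obtains one."""

class V2Password(AuthPlugin):
    def __init__(self, username, password):
        self.username, self.password = username, password

    def get_token(self):
        # A real plugin would POST to /v2.0/tokens here.
        return 'v2-token-for-%s' % self.username

class V3Password(AuthPlugin):
    def __init__(self, username, password, user_domain_id):
        self.username = username
        self.password = password
        self.user_domain_id = user_domain_id

    def get_token(self):
        # A real plugin would POST to /v3/auth/tokens here.
        return 'v3-token-for-%s' % self.username

class Session:
    """Service code depends only on this; it never sees the auth version."""
    def __init__(self, auth):
        self.auth = auth

    def request_headers(self):
        return {'X-Auth-Token': self.auth.get_token()}

# Which plugin to construct becomes a deployment configuration choice:
sess = Session(V3Password('magnum', 's3cret', 'default'))
```

Swapping `V3Password` for `V2Password` changes nothing in the service code, which is the point of the abstraction.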
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From tdecacqu at redhat.com  Thu Sep 10 01:07:09 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Thu, 10 Sep 2015 01:07:09 +0000
Subject: [openstack-dev] [all] Election Season,
	PTL and TC September/October 2015
Message-ID: <55F0D7BD.6090508@redhat.com>

PTL Election details:
  https://wiki.openstack.org/wiki/PTL_Elections_September_2015
TC Election details:
  https://wiki.openstack.org/wiki/TC_Elections_September/October_2015

Please read the stipulations and timelines for candidates and electorate
contained in these wikipages.

There will be an announcement email opening nominations as well as an
announcement email opening the polls.

Please note that election's workflow is now based on gerrit through the
new openstack/election repository. All candidacies must be submitted as
a text file to the openstack/election repository. Please check the
instructions on the wiki documentation.

Be aware, in the PTL elections if the program only has one candidate,
that candidate is acclaimed and there will be no poll. There will only
be a poll if there is more than one candidate stepping forward for a
program's PTL position.

There will be further announcements posted to the mailing list as action
is required from the electorate or candidates. This email is for
information purposes only.

If you have any questions which you feel affect others, please reply to
this email thread. If you have any questions that you wish to discuss
in private, please email both myself, Tristan Cacqueray (tristanC), at
tdecacqu at redhat dot com, and Tony Breeds (tonyb), at tony at
bakeyournoodle dot com, so that we may address your concerns.

Thank you,
Tristan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/22095ba8/attachment.pgp>

From jim at jimrollenhagen.com  Thu Sep 10 01:29:13 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 9 Sep 2015 18:29:13 -0700
Subject: [openstack-dev] [Ironic] Summit session brainstorming
Message-ID: <20150910012913.GI21846@jimrollenhagen.com>

Hi all,

I don't have an exact schedule or anything, but I wanted to start
brainstorming topics for the summit. I've started an etherpad:
https://etherpad.openstack.org/p/mitaka-ironic-design-summit-ideas

Keep in mind these should be topics that we need to sort things out on -
let's be sure not to rehash things where we already have consensus on
the path forward, etc.

Thanks!

// jim


From sgordon at redhat.com  Thu Sep 10 02:25:55 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 9 Sep 2015 22:25:55 -0400 (EDT)
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
 multiple compute drivers?
In-Reply-To: <CALesnTyuK17bUpYuA=9q+_L5TU7xxAF=tdsQmwtPtr+Z1vmt1w@mail.gmail.com>
References: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
 <20150909171336.GG21846@jimrollenhagen.com>
 <CALesnTyuK17bUpYuA=9q+_L5TU7xxAF=tdsQmwtPtr+Z1vmt1w@mail.gmail.com>
Message-ID: <1474881269.44702768.1441851955747.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Jeff Peeler" <jpeeler at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> 
> I'd greatly prefer using availability zones/host aggregates as I'm trying
> to keep the footprint as small as possible. It does appear that in the
> section "configure scheduler to support host aggregates" [1], that I can
> configure filtering using just one scheduler (right?). However, perhaps
> more importantly, I'm now unsure with the network configuration changes
> required for Ironic that deploying normal instances along with baremetal
> servers is possible.
> 
> [1]
> http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html

Hi Jeff,

I assume your need for a second scheduler is spurred by wanting to enable different filters for baremetal vs. virt, rather than influencing scheduling using the same filters via image properties, extra specs, and boot parameters (hints)?

I ask because if not you should be able to use the hypervisor_type image property to ensure that images intended for baremetal are directed there and those intended for kvm etc. are directed to those hypervisors. The documentation [1] doesn't list ironic as a valid value for this property but I looked into the code for this a while ago and it seemed like it should work... Apologies if you had already considered this.

Thanks,

Steve

[1] http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
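The hypervisor_type behavior described above boils down to a per-host predicate. A reduced sketch of ImagePropertiesFilter-style matching, with made-up host names and hypervisor strings for illustration:

```python
# Reduced sketch of ImagePropertiesFilter-style matching: a host is
# eligible only if the image's hypervisor_type property (when set)
# matches the hypervisor that host reports. Hosts and values are
# illustrative, not real nova data structures.
def host_passes(host_hypervisor, image_properties):
    wanted = image_properties.get('hypervisor_type')
    return wanted is None or wanted == host_hypervisor

hosts = {'kvm-node': 'qemu', 'bm-node': 'ironic'}
baremetal_image = {'hypervisor_type': 'ironic'}
eligible = [h for h, hv in hosts.items() if host_passes(hv, baremetal_image)]
# eligible -> ['bm-node']
```

Images without the property pass every host, which is why tagging only the baremetal images can be enough to keep them off the virt hypervisors.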

> On Wed, Sep 9, 2015 at 1:13 PM, Jim Rollenhagen <jim at jimrollenhagen.com>
> wrote:
> 
> > On Wed, Sep 02, 2015 at 03:42:20PM -0400, Jeff Peeler wrote:
> > > Hi folks,
> > >
> > > I'm currently looking at supporting Ironic in the Kolla project [1], but
> > > was unsure if it would be possible to run separate instances of nova
> > > compute and controller (and scheduler too?) to enable both baremetal and
> > > libvirt type deployments. I found this mailing list post from two years
> > ago
> > > [2], asking the same question. The last response in the thread seemed to
> > > indicate work was being done on the scheduler to support multiple
> > > configurations, but the review [3] ended up abandoned.
> > >
> > > Are the current requirements the same? Perhaps using two availability
> > zones
> > > would work, but I'm not clear if that works on the same host.
> >
> > At Rackspace we run Ironic in its own cell, and use cells filters to
> > direct builds to the right place.
> >
> > The other option that supposedly works is host aggregates. I'm not sure
> > host aggregates supports running two scheduler instances (because you'll
> > want different filters), but maybe it does?
> >
> > // jim
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform


From jj19931006 at 163.com  Thu Sep 10 02:53:36 2015
From: jj19931006 at 163.com (=?GBK?B?va+8qg==?=)
Date: Thu, 10 Sep 2015 10:53:36 +0800 (CST)
Subject: [openstack-dev] SOS
Message-ID: <254dc482.7202.14fb52c300c.Coremail.jj19931006@163.com>

Hi:
    I need a CoreOS image that supports Kubernetes; without it, my boss will kick my arse.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/b457d3c4/attachment.html>

From nik.komawar at gmail.com  Thu Sep 10 03:06:21 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Wed, 9 Sep 2015 23:06:21 -0400
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <55EEF89F.4040003@gmail.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com> <55E8784A.4060809@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>
 <55E9CA69.9030003@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33A963@fmsmsx117.amr.corp.intel.com>
 <55EEF89F.4040003@gmail.com>
Message-ID: <55F0F3AD.3020209@gmail.com>

FYI, this was granted FFE.

On 9/8/15 11:02 AM, Nikhil Komawar wrote:
> Malini,
>
> Your note on the etherpad [1] went unnoticed as we had that sync on
> Friday outside of our regular meeting and weekly meeting agenda etherpad
> was not fit for discussion purposes.
>
> It would be nice if you all can update & comment on the spec, ref. the
> note or have someone send a relative email here that explains the
> redressal of the issues raised on the spec and during Friday sync [2].
>
> [1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>
> On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Glance team on the FFE consideration.
>> We are committed to making the revisions per suggestion and separately seek help from the Flavio, Sabari, and Harsh.
>> Regards
>> Malini, Kent, and Jakub 
>>
>>
>> -----Original Message-----
>> From: Nikhil Komawar [mailto:nik.komawar at gmail.com] 
>> Sent: Friday, September 04, 2015 9:44 AM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>>
>> Hi Malini et al.,
>>
>> We had a sync-up earlier today on this topic and a few items were discussed, including new comments on the spec and the existing code proposal.
>> You can find the logs of the conversation here [1].
>>
>> There are 3 main outcomes of the discussion:
>> 1. We hope to get a commitment on the feature (spec and the code) that the comments would be addressed and the code would be ready by Sept 18th, after which RC1 is planned to be cut [2]. Our hope is that the spec is merged well before then and the implementation is at the very least ready, if not merged. The comments on the spec and merge proposal are currently specific to implementation details, so we were positive on this front.
>> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has newer patch sets with major concerns addressed.
>> 3. We cannot commit to granting a backport of this feature, so we ask the implementers to consider using the pluggability and modularity of the taskflow library. You may consult developers who have already worked on adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can then use those scripts and put them back in their Liberty deployments even if it's not in the standard tarball.
>>
>> Please let me know if you have more questions.
>>
>> [1]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>>
>> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>>> Thank you Nikhil and Brian!
>>>
>>> -----Original Message-----
>>> From: Nikhil Komawar [mailto:nik.komawar at gmail.com]
>>> Sent: Thursday, September 03, 2015 9:42 AM
>>> To: openstack-dev at lists.openstack.org
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> We agreed to hold off on granting it a FFE until tomorrow.
>>>
>>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast your vote.
>>>
>>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>>>> I added an agenda item for this for today's Glance meeting:
>>>>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>>>
>>>> I'd prefer to hold my vote until after the meeting.
>>>>
>>>> cheers,
>>>> brian
>>>>
>>>>
>>>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>>>>
>>>>> Malini, all,
>>>>>
>>>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>>>> and implementation.
>>>>>
>>>>> I'm more than happy to realign my stand after we have updated spec 
>>>>> and a) it's agreed to be the approach as of now and b) we can 
>>>>> evaluate how much work the implementation needs to meet with the revisited spec.
>>>>>
>>>>> If we end up in the unfortunate situation that this functionality 
>>>>> does not merge in time for Liberty, I'm confident that this is one 
>>>>> of the first things in Mitaka. I really don't think there is too 
>>>>> much left to do; we just might run out of time.
>>>>>
>>>>> Thanks for your patience and endless effort to get this done.
>>>>>
>>>>> Best,
>>>>> Erno
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>>>>> Sent: Thursday, September 03, 2015 10:10 AM
>>>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>>>>> usage
>>>>>> questions)
>>>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>>>> proposal
>>>>>>
>>>>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>>>>> addresses the comments. We would very much appreciate a +1 on the 
>>>>>> FFE.
>>>>>>
>>>>>> Regards
>>>>>> Malini
>>>>>>
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>>>>> Sent: Thursday, September 03, 2015 1:52 AM
>>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>>>> proposal
>>>>>>
>>>>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I wanted to propose the 'Single disk image OVA import' [1] feature 
>>>>>>> for exception. This looks like a decently safe proposal 
>>>>>>> that should be able to fit in the extended time period of 
>>>>>>> Liberty. It was discussed at the Vancouver summit during a 
>>>>>>> work session, the proposal was trimmed down as per the 
>>>>>>> suggestions then, and it was overall accepted by those present 
>>>>>>> during the discussions (barring a few changes needed on the spec itself).
>>>>>>> Being an addition to the already existing import task, it doesn't 
>>>>>>> involve an API change or changes to any of the core Image functionality as of now.
>>>>>>>
>>>>>>> Please give your vote: +1 or -1 .
>>>>>>>
>>>>>>> [1] https://review.openstack.org/#/c/194868/
>>>>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>>>>> Unfortunately, I think there are too many open questions in the 
>>>>>> spec right now to make this FFE worthy.
>>>>>>
>>>>>> Could those questions be answered before the EOW?
>>>>>>
>>>>>> With those questions answered, we'll be able to provide a more 
>>>>>> realistic vote.
>>>>>>
>>>>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>>>>> and the likelihood of it addressing the concerns/comments in time.
>>>>>>
>>>>>> For now, it's a -1 from me.
>>>>>>
>>>>>> Thanks all for working on this; this has been a long-requested 
>>>>>> format to have in Glance.
>>>>>> Flavio
>>>>>>
>>>>>> [0] https://review.openstack.org/#/c/214810/
>>>>>>
>>>>>>
>>>>>> --
>>>>>> @flaper87
>>>>>> Flavio Percoco
>>>>>> __________________________________________________________
>>>>>> ________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe: OpenStack-dev-
>>>>>> request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> ____________________________________________________________________
>>>>> _ _____ OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: 
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> _____________________________________________________________________
>>>> _ ____ OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: 
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From emilien at redhat.com  Thu Sep 10 03:35:59 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 9 Sep 2015 23:35:59 -0400
Subject: [openstack-dev] [puppet]Fwd: Delorean with CBS Liberty deps repos
In-Reply-To: <CAGi==UWYiP5yJP2ibhj2_s86F7i-_hM1pnwb68WaPjzqE-yY-g@mail.gmail.com>
References: <CAGi==UWYiP5yJP2ibhj2_s86F7i-_hM1pnwb68WaPjzqE-yY-g@mail.gmail.com>
Message-ID: <55F0FA9F.1070105@redhat.com>

I'll make the changes this week and test if our CI pass.


-------- Forwarded Message --------
Subject: Delorean with CBS Liberty deps repos
Date: Thu, 10 Sep 2015 02:22:24 +0200
From: Alan Pevec <apevec at gmail.com>
To: Emilien Macchi <emacchi at redhat.com>
CC: Matthias Runge <mrunge at redhat.com>, Haikel Guemar
<hguemar at redhat.com>, Javier Pena <jpena at redhat.com>

Hi Emilien,

I've updated RDO status in
https://etherpad.openstack.org/p/puppet-liberty-blocker

Testing repos to be used with Delorean repo without RDO Kilo and EPEL7
enabled are:

http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
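For reference, these could be wired onto a host as a yum repo file — a minimal sketch assuming standard CentOS 7 / yum conventions (the filename and repo names here are illustrative; gpgcheck is disabled only because these are testing repos):

```ini
# /etc/yum.repos.d/cloud7-openstack-testing.repo (hypothetical filename)
[cloud7-openstack-liberty-testing]
name=CentOS-7 - OpenStack Liberty testing
baseurl=http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
enabled=1
gpgcheck=0

[cloud7-openstack-common-testing]
name=CentOS-7 - OpenStack common testing
baseurl=http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
enabled=1
gpgcheck=0
```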

Haikel, Javier, please test those if I didn't miss something and add
your notes in https://etherpad.openstack.org/p/RDO-Liberty
A known issue is the python-hacking deps, but those should only be needed
for unit testing; the priority is to clear all runtime deps first.

Cheers,
Alan



-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150909/7904e82d/attachment.pgp>

From gong_ys2004 at aliyun.com  Thu Sep 10 03:51:33 2015
From: gong_ys2004 at aliyun.com (Gongys)
Date: Thu, 10 Sep 2015 11:51:33 +0800
Subject: [openstack-dev] SOS
In-Reply-To: <254dc482.7202.14fb52c300c.Coremail.jj19931006@163.com>
References: <254dc482.7202.14fb52c300c.Coremail.jj19931006@163.com>
Message-ID: <12607C27-9E5D-4966-87B3-EA5DE3F419B8@aliyun.com>

It should be easy: google for a CoreOS raw or qcow2 image, then import it into OpenStack.

Sent from my iPhone

> On Sep 10, 2015, at 10:53, ?? <jj19931006 at 163.com> wrote:
> 
> Hi:
>     I need a coreos image that supports kubernetes , without it , my boss will kick my arse.
> 
> 
>  
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From xyzjerry at gmail.com  Thu Sep 10 03:57:12 2015
From: xyzjerry at gmail.com (Jerry Zhao)
Date: Wed, 9 Sep 2015 20:57:12 -0700
Subject: [openstack-dev] SOS
In-Reply-To: <254dc482.7202.14fb52c300c.Coremail.jj19931006@163.com>
References: <254dc482.7202.14fb52c300c.Coremail.jj19931006@163.com>
Message-ID: <55F0FF98.2060509@gmail.com>

I haven't tried it out, but hope this blog post could save your arse.

https://www.cloudssky.com/en/blog/Kubernetes-on-CoreOS-with-OpenStack/

http://stable.release.core-os.net/amd64-usr/367.1.0/coreos_production_openstack_image.img.bz2
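Building on the pointers above, a rough sketch of fetching that image and registering it with Glance using the 2015-era glance CLI (the image name and flags are illustrative assumptions, not tested commands — the CoreOS OpenStack image is distributed as qcow2):

```shell
# Download and decompress the CoreOS OpenStack image
wget http://stable.release.core-os.net/amd64-usr/367.1.0/coreos_production_openstack_image.img.bz2
bunzip2 coreos_production_openstack_image.img.bz2

# Register it with Glance so it can be booted from Nova
glance image-create --name coreos \
  --container-format bare \
  --disk-format qcow2 \
  --file coreos_production_openstack_image.img
```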



On 09/09/2015 07:53 PM, ?? wrote:
> Hi:
>     I need a coreos image that supports kubernetes , without it , my 
> boss will kick my arse.
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From joehuang at huawei.com  Thu Sep 10 04:00:32 2015
From: joehuang at huawei.com (joehuang)
Date: Thu, 10 Sep 2015 04:00:32 +0000
Subject: [openstack-dev] [tricircle]Weekly Team Meeting 2015.09.09
In-Reply-To: <CAHZqm+VG8Fa-xJAxyLJHscr2wmd=-w33D4NospkALjunf-aeVA@mail.gmail.com>
References: <CAHZqm+UGTS-aX6evwC3-QoqNR+04COUh15SLRvn9PA1wzjwgRA@mail.gmail.com>
 <CAHZqm+VG8Fa-xJAxyLJHscr2wmd=-w33D4NospkALjunf-aeVA@mail.gmail.com>
Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF542F70C6@szxema505-mbx.china.huawei.com>

Let's also start the PTL election according to the OpenStack guide:  https://wiki.openstack.org/wiki/PTL_Elections_September_2015

Please submit your PTL candidacy according to: https://wiki.openstack.org/wiki/PTL_Elections_September_2015#How_to_submit_your_candidacy

Best Regards
Chaoyi Huang ( Joe Huang )

From: Zhipeng Huang [mailto:zhipengh512 at gmail.com]
Sent: Wednesday, September 09, 2015 10:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: joehuang; caizhiyuan (A); Eran Gampel; Saggi Mizrahi; Irena Berezovsky
Subject: Re: [openstack-dev][tricircle]Weekly Team Meeting 2015.09.09

Hi, please find the meetbot log at http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-09-09-13.01.html.

Noise-cancelled minutes are also in the attachment.

On Wed, Sep 9, 2015 at 4:22 PM, Zhipeng Huang <zhipengh512 at gmail.com<mailto:zhipengh512 at gmail.com>> wrote:
Hi Team,

Let's resume our weekly meeting today. As Eran suggested before, we will mainly discuss the work we have now and leave the design session for another time slot :) See you at 1300 UTC today.

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhipeng at huawei.com<mailto:huangzhipeng at huawei.com>
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu<mailto:zhipengh at uci.edu>
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhipeng at huawei.com<mailto:huangzhipeng at huawei.com>
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu<mailto:zhipengh at uci.edu>
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From jamesd at catalyst.net.nz  Thu Sep 10 05:34:11 2015
From: jamesd at catalyst.net.nz (James Dempsey)
Date: Thu, 10 Sep 2015 17:34:11 +1200
Subject: [openstack-dev] [neutron] RFE process question
Message-ID: <55F11653.8080509@catalyst.net.nz>

Greetings Devs,

I'm very excited about the new RFE process and thought I'd test it by
requesting a feature that is very often requested by my users[1].

There are some great docs out there about how to submit an RFE, but I
don't know what should happen after the submission to launchpad. My RFE
bug seems to have been untouched for a month and I'm unsure if I've done
something wrong. So, here are a few questions that I have.


1. Should I be following up on the dev list to ask for someone to look
at my RFE bug?
2. How long should I expect it to take to have my RFE acknowledged?
3. As an operator, I'm a bit ignorant as to whether or not there are
times during the release cycle when there simply won't be bandwidth to
consider RFE bugs.
4. Should I be doing anything else?

Would love some guidance.

Cheers,
James

[1] https://bugs.launchpad.net/neutron/+bug/1483480

-- 
James Dempsey
Senior Cloud Engineer
Catalyst IT Limited
+64 4 803 2264
--


From ikalnitsky at mirantis.com  Thu Sep 10 05:52:31 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Thu, 10 Sep 2015 08:52:31 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
Message-ID: <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>

Hello,

I agree with Vladimir - the idea of online repos is the right way to
go. In 2015 I believe we can ignore this "poor Internet connection"
reason and simplify both Fuel and UX. Moreover, take a look at Linux
distributions - most of them fetch the needed packages from the
Internet during installation, not from CD/DVD. Netboot installers are
popular; I can't even remember when I last installed my Debian from
DVD-1 - I've used the netboot installer for years.

Thanks,
Igor


On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
>
>
> On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com> wrote:
>>
>>
>> Hey Vladimir,
>>
>>>
>>>
>>>>>
>>>>> 1) There won't be such things in like [1] and [2], thus less
>>>>> complicated flow, less errors, easier to maintain, easier to understand,
>>>>> easier to troubleshoot
>>>>> 2) If one wants to have local mirror, the flow is the same as in case
>>>>> of upstream repos (fuel-createmirror), which is clear for a user to
>>>>> understand.
>>>>
>>>>
>>>> From the issues I've seen, fuel-createmirror isn't very
>>>> straightforward and has some issues making it a bad UX.
>>>
>>>
>>> I'd say the whole approach of having such a tool as fuel-createmirror is
>>> way too naive. A reliable internet connection is totally up to network
>>> engineering rather than deployment. Even using a proxy is much better than
>>> creating a local mirror. But this discussion is totally out of the scope of
>>> this letter. Currently, we have fuel-createmirror and it is pretty
>>> straightforward (installed as rpm, has just a couple of command line
>>> options). The quality of this script is also out of the scope of this
>>> thread. BTW we have plans to improve it.
>>
>>
>>
>> Fair enough, I just wanted to raise the UX issues around these types of
>> things as they should go into the decision making process.
>>
>>
>>>
>>>>>
>>>>>
>>>>> Many people still associate ISO with MOS, but it is not true when using
>>>>> package based delivery approach.
>>>>>
>>>>> It is easy to define necessary repos during deployment and thus it is
>>>>> easy to control what exactly is going to be installed on slave nodes.
>>>>>
>>>>> What do you guys think of it?
>>>>>
>>>>>
>>>>
>>>> Reliance on internet connectivity has been an issue since 6.1. For many
>>>> large users, complete access to the internet is not available or not
>>>> desired.  If we want to continue down this path, we need to improve the
>>>> tools to setup the local mirror and properly document what urls/ports/etc
>>>> need to be available for the installation of openstack and any mirror
>>>> creation process.  The ideal thing is to have an all-in-one CD similar to a
>>>> live cd that allows a user to completely try out fuel wherever they want
>>>> with out further requirements of internet access.  If we don't want to
>>>> continue with that, we need to do a better job around providing the tools
>>>> for a user to get up and running in a timely fashion.  Perhaps providing an
>>>> net-only iso and an all-included iso would be a better solution so people
>>>> will have their expectations properly set up front?
>>>
>>>
>>> Let me explain why I think having local MOS mirror by default is bad:
>>> 1) I don't see any reason why we should treat MOS  repo other way than
>>> all other online repos. A user sees on the settings tab the list of repos
>>> one of which is local by default while others are online. It can make user a
>>> little bit confused, can't it? A user can be also confused by the fact, that
>>> some of the repos can be cloned locally by fuel-createmirror while others
>>> can't. That is not straightforward, NOT fuel-createmirror UX.
>>
>>
>>
>> I agree. The process should be the same and it should be just another
>> repo. It doesn't mean we can't include a version on an ISO as part of a
>> release.  Would it be better to provide the mirror on the ISO but not have
>> it enabled by default for a release so that we can gather user feedback on
>> this? This would include improved documentation and possibly allowing a user
>> to choose their preference so we can collect metrics?
>>
>>
>>> 2) Having local MOS mirror by default makes things much more convoluted.
>>> We are forced to have several directories with predefined names and we are
>>> forced to manage these directories in nailgun, in upgrade script, etc. Why?
>>> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
>>> to MOS, which is not true. It is possible to implement really flexible
>>> delivery scheme, but we need to think of these things as they are
>>> independent.
>>
>>
>>
>> I'm not sure what you mean by this. Including a point in time copy on an
>> ISO as a release is a common method of distributing software. Is this a
>> messaging thing that needs to be addressed? Perhaps I'm not familiar with
>> people referring to the ISO as being MOS.
>>
>>
>>> For large users it is easy to build custom ISO and put there what they
>>> need but first we need to have simple working scheme clear for everyone. I
>>> think dealing with all repos the same way is what is going to make things
>>> simpler.
>>>
>>
>>
>> Who is going to build a custom ISO? How does one request that? What
>> resources are consumed by custom ISO creation process/request? Does this
>> scale?
>>
>>
>>>
>>> This thread is not about internet connectivity, it is about aligning
>>> things.
>>>
>>
>> You are correct in that this thread is not explicitly about internet
>> connectivity, but they are related. Any changes to remove a local repository
>> and only provide an internet based solution makes internet connectivity
>> something that needs to be included in the discussion.  I just want to make
>> sure that we properly evaluate this decision based on end user feedback not
>> because we don't want to manage this from a developer standpoint.
>
>
>
>  +1, whatever the changes are, please keep Fuel as a tool that can deploy
> without Internet access; this is part of the reason people like it and why
> it's better than other tools.
>>
>>
>> -Alex
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yaguang Tang
> Technical Support, Mirantis China
>
> Phone: +86 15210946968
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From armamig at gmail.com  Thu Sep 10 05:59:28 2015
From: armamig at gmail.com (Armando M.)
Date: Thu, 10 Sep 2015 11:29:28 +0530
Subject: [openstack-dev] [neutron] RFE process question
In-Reply-To: <55F11653.8080509@catalyst.net.nz>
References: <55F11653.8080509@catalyst.net.nz>
Message-ID: <CAK+RQeY_SjXNyApy9gfp7NOUk0V7qwhnSay+jU1LBr=aDXAG0A@mail.gmail.com>

On 10 September 2015 at 11:04, James Dempsey <jamesd at catalyst.net.nz> wrote:

> Greetings Devs,
>
> I'm very excited about the new RFE process and thought I'd test it by
> requesting a feature that is very often requested by my users[1].
>
> There are some great docs out there about how to submit an RFE, but I
> don't know what should happen after the submission to launchpad. My RFE
> bug seems to have been untouched for a month and I'm unsure if I've done
> something wrong. So, here are a few questions that I have.
>
>
> 1. Should I be following up on the dev list to ask for someone to look
> at my RFE bug?
> 2. How long should I expect it to take to have my RFE acknowledged?
> 3. As an operator, I'm a bit ignorant as to whether or not there are
> times during the release cycle during which there simply won't be
> bandwidth to consider RFE bugs.
> 4. Should I be doing anything else?
>
> Would love some guidance.
>

You did nothing wrong; the team was simply busy going through the existing
schedule. Having said that, you could have spared a few more words on the
use case and what you mean by annotations.

I'll follow up on the RFE for more questions.

Cheers,
Armando


>
> Cheers,
> James
>
> [1] https://bugs.launchpad.net/neutron/+bug/1483480
>
> --
> James Dempsey
> Senior Cloud Engineer
> Catalyst IT Limited
> +64 4 803 2264
> --
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From mscherbakov at mirantis.com  Thu Sep 10 06:44:47 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Thu, 10 Sep 2015 06:44:47 +0000
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
Message-ID: <CAKYN3rM5j1Wt67z7+55WZ57arSnX0xRGQ7TfGeVj7FTtkf_xQA@mail.gmail.com>

I've heard very strong requirements for having an all-in-one ISO that can
install with no Internet access. Why can't we make this optional in the
build system? It should be easy to implement, shouldn't it?

We can then consider to have this as a default option.

Igor, unfortunately
>  In 2015 I believe we can ignore this "poor Internet connection"
is still not exactly true for some large enterprises. Due to all the
security, etc., there are sometimes VPNs / proxies / firewalls with very
low throughput.

I found two related bugs, but I remember there were more. I'll ask guys to
provide some exact measurements they have done as well.
https://bugs.launchpad.net/fuel/+bug/1456805
https://bugs.launchpad.net/fuel/+bug/1459089 - Ryan, can you share your
network throughput measurement / estimate?

PS. I'm all for having a way in Fuel to use other package repos, such as
CloudArchive, etc. I just want it to be flexible.
Thanks,

On Wed, Sep 9, 2015 at 10:53 PM Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is a right way to
> move. In 2015 I believe we can ignore this "poor Internet connection"
> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> distributives - most of them fetch needed packages from the Internet
> during installation, not from CD/DVD. The netboot installers are
> popular, I can't even remember when was the last time I install my
> Debian from the DVD-1 - I use netboot installer for years.
>
> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >>>>>
> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>>>> complicated flow, less errors, easier to maintain, easier to
> understand,
> >>>>> easier to troubleshoot
> >>>>> 2) If one wants to have local mirror, the flow is the same as in case
> >>>>> of upstream repos (fuel-createmirror), which is clear for a user to
> >>>>> understand.
> >>>>
> >>>>
> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
> >>>> forward and has some issues making it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such tool as fuel-createmirror is
> a
> >>> way too naive. Reliable internet connection is totally up to network
> >>> engineering rather than deployment. Even using proxy is much better
> that
> >>> creating local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >>>>>
> >>>>>
> >>>>> Many people still associate ISO with MOS, but it is not true when
> using
> >>>>> package based delivery approach.
> >>>>>
> >>>>> It is easy to define necessary repos during deployment and thus it is
> >>>>> easy to control what exactly is going to be installed on slave nodes.
> >>>>>
> >>>>> What do you guys think of it?
> >>>>>
> >>>>>
> >>>>
> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> many
> >>>> large users, complete access to the internet is not available or not
> >>>> desired.  If we want to continue down this path, we need to improve
> the
> >>>> tools to setup the local mirror and properly document what
> urls/ports/etc
> >>>> need to be available for the installation of openstack and any mirror
> >>>> creation process.  The ideal thing is to have an all-in-one CD
> similar to a
> >>>> live cd that allows a user to completely try out fuel wherever they
> want
> >>>> with out further requirements of internet access.  If we don't want to
> >>>> continue with that, we need to do a better job around providing the
> tools
> >>>> for a user to get up and running in a timely fashion.  Perhaps
> providing an
> >>>> net-only iso and an all-included iso would be a better solution so
> people
> >>>> will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat MOS  repo other way than
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default while others are online. It can make
> user a
> >>> little bit confused, can't it? A user can be also confused by the
> fact, that
> >>> some of the repos can be cloned locally by fuel-createmirror while
> others
> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>
> >>
> >>
> >> I agree. The process should be the same and it should be just another
> >> repo. It doesn't mean we can't include a version on an ISO as part of a
> >> release.  Would it be better to provide the mirror on the ISO but not
> have
> >> it enabled by default for a release so that we can gather user feedback
> on
> >> this? This would include improved documentation and possibly allowing a
> user
> >> to choose their preference so we can collect metrics?
> >>
> >>
> >>> 2) Having local MOS mirror by default makes things much more
> convoluted.
> >>> We are forced to have several directories with predefined names and we
> are
> >>> forced to manage these directories in nailgun, in upgrade script, etc.
> Why?
> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> equal
> >>> to MOS, which is not true. It is possible to implement really flexible
> >>> delivery scheme, but we need to think of these things as they are
> >>> independent.
> >>
> >>
> >>
> >> I'm not sure what you mean by this. Including a point in time copy on an
> >> ISO as a release is a common method of distributing software. Is this a
> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
> with
> >> people referring to the ISO as being MOS.
> >>
> >>
> >>> For large users it is easy to build custom ISO and put there what they
> >>> need but first we need to have simple working scheme clear for
> everyone. I
> >>> think dealing with all repos the same way is what is gonna makes things
> >>> simpler.
> >>>
> >>
> >>
> >> Who is going to build a custom ISO? How does one request that? What
> >> resources are consumed by custom ISO creation process/request? Does this
> >> scale?
> >>
> >>
> >>>
> >>> This thread is not about internet connectivity, it is about aligning
> >>> things.
> >>>
> >>
> >> You are correct in that this thread is not explicitly about internet
> >> connectivity, but they are related. Any changes to remove a local
> repository
> >> and only provide an internet based solution makes internet connectivity
> >> something that needs to be included in the discussion.  I just want to
> make
> >> sure that we properly evaluate this decision based on end user feedback
> not
> >> because we don't want to manage this from a developer standpoint.
> >
> >
> >
> >  +1, whatever the changes is, please keep Fuel as a tool that can deploy
> > without Internet access, this is part of reason that people like it and
> it's
> > better that other tools.
> >>
> >>
> >> -Alex
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Yaguang Tang
> > Technical Support, Mirantis China
> >
> > Phone: +86 15210946968
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/4a3bd366/attachment.html>

From ytang at mirantis.com  Thu Sep 10 06:53:33 2015
From: ytang at mirantis.com (Yaguang Tang)
Date: Thu, 10 Sep 2015 14:53:33 +0800
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
Message-ID: <CAGi6=NCBVX3Dg3JQudzrTtz6YhCqLhBnmizB2Jmxm+e8R+jNdw@mail.gmail.com>

On Thu, Sep 10, 2015 at 1:52 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is a right way to
> move. In 2015 I believe we can ignore this "poor Internet connection"
> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> distributives - most of them fetch needed packages from the Internet
> during installation, not from CD/DVD. The netboot installers are
> popular, I can't even remember when was the last time I install my
> Debian from the DVD-1 - I use netboot installer for years.
>

You are thinking like developers, but the fact is that Fuel is widely
used by various enterprises worldwide, and many enterprises have security
policies that forbid internet connections.



> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >>>>>
> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>>>> complicated flow, less errors, easier to maintain, easier to
> understand,
> >>>>> easier to troubleshoot
> >>>>> 2) If one wants to have local mirror, the flow is the same as in case
> >>>>> of upstream repos (fuel-createmirror), which is clrear for a user to
> >>>>> understand.
> >>>>
> >>>>
> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
> >>>> forward and has some issues making it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such tool as fuel-createmirror is
> a
> >>> way too naive. Reliable internet connection is totally up to network
> >>> engineering rather than deployment. Even using proxy is much better
> that
> >>> creating local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >>>>>
> >>>>>
> >>>>> Many people still associate ISO with MOS, but it is not true when
> using
> >>>>> package based delivery approach.
> >>>>>
> >>>>> It is easy to define necessary repos during deployment and thus it is
> >>>>> easy to control what exactly is going to be installed on slave nodes.
> >>>>>
> >>>>> What do you guys think of it?
> >>>>>
> >>>>>
> >>>>
> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> many
> >>>> large users, complete access to the internet is not available or not
> >>>> desired.  If we want to continue down this path, we need to improve
> the
> >>>> tools to setup the local mirror and properly document what
> urls/ports/etc
> >>>> need to be available for the installation of openstack and any mirror
> >>>> creation process.  The ideal thing is to have an all-in-one CD
> similar to a
> >>>> live cd that allows a user to completely try out fuel wherever they
> want
> >>>> with out further requirements of internet access.  If we don't want to
> >>>> continue with that, we need to do a better job around providing the
> tools
> >>>> for a user to get up and running in a timely fashion.  Perhaps
> providing an
> >>>> net-only iso and an all-included iso would be a better solution so
> people
> >>>> will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat MOS  repo other way than
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default while others are online. It can make
> user a
> >>> little bit confused, can't it? A user can be also confused by the
> fact, that
> >>> some of the repos can be cloned locally by fuel-createmirror while
> others
> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>
> >>
> >>
> >> I agree. The process should be the same and it should be just another
> >> repo. It doesn't mean we can't include a version on an ISO as part of a
> >> release.  Would it be better to provide the mirror on the ISO but not
> have
> >> it enabled by default for a release so that we can gather user feedback
> on
> >> this? This would include improved documentation and possibly allowing a
> user
> >> to choose their preference so we can collect metrics?
> >>
> >>
> >>> 2) Having local MOS mirror by default makes things much more
> convoluted.
> >>> We are forced to have several directories with predefined names and we
> are
> >>> forced to manage these directories in nailgun, in upgrade script, etc.
> Why?
> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> equal
> >>> to MOS, which is not true. It is possible to implement really flexible
> >>> delivery scheme, but we need to think of these things as they are
> >>> independent.
> >>
> >>
> >>
> >> I'm not sure what you mean by this. Including a point in time copy on an
> >> ISO as a release is a common method of distributing software. Is this a
> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
> with
> >> people referring to the ISO as being MOS.
> >>
> >>
> >>> For large users it is easy to build custom ISO and put there what they
> >>> need but first we need to have simple working scheme clear for
> everyone. I
> >>> think dealing with all repos the same way is what is gonna makes things
> >>> simpler.
> >>>
> >>
> >>
> >> Who is going to build a custom ISO? How does one request that? What
> >> resources are consumed by custom ISO creation process/request? Does this
> >> scale?
> >>
> >>
> >>>
> >>> This thread is not about internet connectivity, it is about aligning
> >>> things.
> >>>
> >>
> >> You are correct in that this thread is not explicitly about internet
> >> connectivity, but they are related. Any changes to remove a local
> repository
> >> and only provide an internet based solution makes internet connectivity
> >> something that needs to be included in the discussion.  I just want to
> make
> >> sure that we properly evaluate this decision based on end user feedback
> not
> >> because we don't want to manage this from a developer standpoint.
> >
> >
> >
> >  +1, whatever the changes is, please keep Fuel as a tool that can deploy
> > without Internet access, this is part of reason that people like it and
> it's
> > better that other tools.
> >>
> >>
> >> -Alex
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Yaguang Tang
> > Technical Support, Mirantis China
> >
> > Phone: +86 15210946968
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yaguang Tang
Technical Support, Mirantis China

*Phone*: +86 15210946968
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/32b2d4c3/attachment.html>

From mscherbakov at mirantis.com  Thu Sep 10 06:58:29 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Thu, 10 Sep 2015 06:58:29 +0000
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
 <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
Message-ID: <CAKYN3rOPFvwuES3f4nRNy6YSUAjj3KPKvtRmt3L-Opk=eG=BQA@mail.gmail.com>

+1 to Alex & Andrey. Let's just be very careful, and consider all the use
cases before making a change.
If we have answers to all the use cases, then we are good to go.

The important thing we need to fix now is to enable an easy UX for making
changes to environments after deployment, like standard configuration
management allows you to do. Namely, being able to:
a) modify params on the settings tab
b) modify templates / puppet manifests
and apply changes easily to nodes.

Today we can do b) easily: just click the Deploy button or run two or
three commands [1]. a) requires changes in Nailgun code to allow
post-deployment modification of settings (we currently lock the settings
tab after deployment).
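
To make b) concrete, here is a minimal sketch of that few-commands flow.
The node names and module path are illustrative assumptions, not actual
Fuel defaults, and DRY_RUN makes the script print each command instead of
executing it so the flow can be inspected safely:

```shell
#!/bin/sh
# Sketch: push locally edited puppet manifests to target nodes and
# re-apply them.  Node names and paths are placeholders; DRY_RUN=echo
# prints the commands rather than running them.
DRY_RUN=echo
NODES="node-1 node-2"
MODULES=/etc/puppet/modules

for node in $NODES; do
  # 1) sync the edited manifests to the node
  $DRY_RUN rsync -avz "$MODULES/" "root@$node:$MODULES/"
  # 2) re-run the deployment manifest on that node
  $DRY_RUN ssh "root@$node" puppet apply /etc/puppet/manifests/site.pp
done
```

Dropping DRY_RUN (given SSH access to the nodes) turns this into the real
two-command-per-node flow.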

If we switch to package installation and this workflow (change the
manifests + 2-3 commands to rsync/run puppet on the nodes) becomes a
nightmare, then we'll need to figure out something else. It has to be easy
to do development and to use Fuel as a configuration management tool.

[1] https://bugs.launchpad.net/fuel/+bug/1385615

On Wed, Sep 9, 2015 at 8:01 AM Alex Schultz <aschultz at mirantis.com> wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure cluster to use
>> additional onl?ne repos, where all necessary packages (including plugin
>> specific puppet manifests) are to be available. Current granular deployment
>> approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there is many other things that live in the fuel library puppet path
> than just the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only valid for the master.  Another issue is being
>>> flexible enough to allow for deployment engineers to make custom changes
>>> for a given environment.  Unless we can provide an improved process to
>>> allow for people to provide in place modifications for an environment, we
>>> can't do away with the rsync.
>>>
>>> If we want to go completely down the package route (and we probably
>>> should), we need to make sure that all of the other pieces that currently
>>> go together to make a complete fuel deployment can be updated in the same
>>> way.
>>>
>>> -Alex
>>>
>>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/c6e6466b/attachment.html>

From tomer.shtilman at alcatel-lucent.com  Thu Sep 10 07:02:49 2015
From: tomer.shtilman at alcatel-lucent.com (SHTILMAN, Tomer (Tomer))
Date: Thu, 10 Sep 2015 07:02:49 +0000
Subject: [openstack-dev] [Heat] Multi Node Stack - keystone federation
In-Reply-To: <55F09E1A.9050508@redhat.com>
References: <94346481835D244BB7F6486C00E9C1BA2AE1E553@FR711WXCHMBA06.zeu.alcatel-lucent.com>
 <55EEF653.4040909@redhat.com>
 <94346481835D244BB7F6486C00E9C1BA2AE20194@FR711WXCHMBA06.zeu.alcatel-lucent.com>
 <55F09E1A.9050508@redhat.com>
Message-ID: <94346481835D244BB7F6486C00E9C1BA2AE20F5A@FR711WXCHMBA06.zeu.alcatel-lucent.com>

>>On 09/09/15 04:10, SHTILMAN, Tomer (Tomer) wrote:
>> We are currently building in our lab multi cloud setup with keystone 
>> federation and I will check if my understating is correct, I am 
>> planning for propose a BP for this once will be clear
> On 09/09/15 Zane wrote:
>There was further interest in this at the IRC meeting today (from Daniel Gonzalez), so I raised this blueprint:
>
>https://blueprints.launchpad.net/heat/+spec/multi-cloud-federation
>
>I left the Drafter and Assignee fields blank, so whoever starts working on the spec and the code, respectively, should put their names in those fields. If you see someone else's name there, you should co-ordinate with them to avoid double-handling.
>
>cheers,
>Zane.
>
Hi Zane,
For some reason I couldn't change the assignee and the drafter on this;
can you please assign me to this BP?


From mscherbakov at mirantis.com  Thu Sep 10 07:09:09 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Thu, 10 Sep 2015 07:09:09 +0000
Subject: [openstack-dev] [Fuel] IRC meeting today
Message-ID: <CAKYN3rNBv77xVA3A7FdSJ-NRt+0yfNa5A9JU1=LDVbr-dUtNxg@mail.gmail.com>

Hi folks,
please add topics to the agenda before the meeting:
https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda

It would be great to discuss:

   - Critical bugs, and the pipeline of bugs which may become Critical,
   with the larger audience.
   - Current status of builds from master
   - Updates on progress in other areas not related to bugs

Thanks,


-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/1ac3170a/attachment.html>

From aheczko at mirantis.com  Thu Sep 10 07:10:25 2015
From: aheczko at mirantis.com (Adam Heczko)
Date: Thu, 10 Sep 2015 09:10:25 +0200
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
Message-ID: <CAJciqMzDvY69KKtG6O-2DAz_+OaC_TFBvioOJiCona_xFUk85g@mail.gmail.com>

Folks, what I can see is that most of you represent the 'engineering'
point of view.
How Fuel installs OpenStack is not an 'engineering' decision; it is a
political and business-related decision.
I believe that the possibility of getting a fully working ISO that does
not depend on the internet (or an ISO plus some additional, say, QCOW2
disk image holding everything required to deploy OpenStack) is a crucial
determining factor for most enterprise customers.
This is not about internet connectivity as such, and it is not a technical
question; we are touching on a philosophical approach to software
distribution and on how all kinds of C-level managers understand these
things.
For OpenStack to succeed, a 'no internet connectivity' approach is a
must-have.
If not directly from within the ISO, then we have to provide easy-to-use,
easy-to-understand guidance on how to deploy OpenStack without any
internet connectivity, e.g. fuel-createmirror or other similar scripts.
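
For illustration, once such a local mirror exists, pointing a deployment
at it is just one apt source entry; the host, port, and suite names below
are placeholders, not real Fuel defaults:

```
# /etc/apt/sources.list.d/local-mirror.list  (illustrative values)
deb http://10.20.0.2:8080/ubuntu-mirror trusty main
```

As long as the target nodes can reach that host, no internet access is
needed.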

Regards,

A.


On Thu, Sep 10, 2015 at 7:52 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is a right way to
> move. In 2015 I believe we can ignore this "poor Internet connection"
> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> distributives - most of them fetch needed packages from the Internet
> during installation, not from CD/DVD. The netboot installers are
> popular, I can't even remember when was the last time I install my
> Debian from the DVD-1 - I use netboot installer for years.
>
> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >>>>>
> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>>>> complicated flow, less errors, easier to maintain, easier to
> understand,
> >>>>> easier to troubleshoot
> >>>>> 2) If one wants to have local mirror, the flow is the same as in case
> >>>>> of upstream repos (fuel-createmirror), which is clrear for a user to
> >>>>> understand.
> >>>>
> >>>>
> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
> >>>> forward and has some issues making it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such tool as fuel-createmirror is
> a
> >>> way too naive. Reliable internet connection is totally up to network
> >>> engineering rather than deployment. Even using proxy is much better
> that
> >>> creating local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >>>>>
> >>>>>
> >>>>> Many people still associate ISO with MOS, but it is not true when
> using
> >>>>> package based delivery approach.
> >>>>>
> >>>>> It is easy to define necessary repos during deployment and thus it is
> >>>>> easy to control what exactly is going to be installed on slave nodes.
> >>>>>
> >>>>> What do you guys think of it?
> >>>>>
> >>>>>
> >>>>
> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> many
> >>>> large users, complete access to the internet is not available or not
> >>>> desired.  If we want to continue down this path, we need to improve
> the
> >>>> tools to setup the local mirror and properly document what
> urls/ports/etc
> >>>> need to be available for the installation of openstack and any mirror
> >>>> creation process.  The ideal thing is to have an all-in-one CD
> similar to a
> >>>> live cd that allows a user to completely try out fuel wherever they
> want
> >>>> with out further requirements of internet access.  If we don't want to
> >>>> continue with that, we need to do a better job around providing the
> tools
> >>>> for a user to get up and running in a timely fashion.  Perhaps
> providing an
> >>>> net-only iso and an all-included iso would be a better solution so
> people
> >>>> will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat MOS  repo other way than
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default while others are online. It can make
> user a
> >>> little bit confused, can't it? A user can be also confused by the
> fact, that
> >>> some of the repos can be cloned locally by fuel-createmirror while
> others
> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>
> >>
> >>
> >> I agree. The process should be the same and it should be just another
> >> repo. It doesn't mean we can't include a version on an ISO as part of a
> >> release.  Would it be better to provide the mirror on the ISO but not
> have
> >> it enabled by default for a release so that we can gather user feedback
> on
> >> this? This would include improved documentation and possibly allowing a
> user
> >> to choose their preference so we can collect metrics?
> >>
> >>
> >>> 2) Having local MOS mirror by default makes things much more
> convoluted.
> >>> We are forced to have several directories with predefined names and we
> are
> >>> forced to manage these directories in nailgun, in upgrade script, etc.
> Why?
> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> equal
> >>> to MOS, which is not true. It is possible to implement really flexible
> >>> delivery scheme, but we need to think of these things as they are
> >>> independent.
> >>
> >>
> >>
> >> I'm not sure what you mean by this. Including a point in time copy on an
> >> ISO as a release is a common method of distributing software. Is this a
> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
> with
> >> people referring to the ISO as being MOS.
> >>
> >>
> >>> For large users it is easy to build custom ISO and put there what they
> >>> need but first we need to have simple working scheme clear for
> everyone. I
> >>> think dealing with all repos the same way is what is gonna makes things
> >>> simpler.
> >>>
> >>
> >>
> >> Who is going to build a custom ISO? How does one request that? What
> >> resources are consumed by custom ISO creation process/request? Does this
> >> scale?
> >>
> >>
> >>>
> >>> This thread is not about internet connectivity, it is about aligning
> >>> things.
> >>>
> >>
> >> You are correct in that this thread is not explicitly about internet
> >> connectivity, but they are related. Any changes to remove a local
> repository
> >> and only provide an internet based solution makes internet connectivity
> >> something that needs to be included in the discussion.  I just want to
> make
> >> sure that we properly evaluate this decision based on end user feedback
> not
> >> because we don't want to manage this from a developer standpoint.
> >
> >
> >
> >  +1, whatever the changes is, please keep Fuel as a tool that can deploy
> > without Internet access, this is part of reason that people like it and
> it's
> > better that other tools.
> >>
> >>
> >> -Alex
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Yaguang Tang
> > Technical Support, Mirantis China
> >
> > Phone: +86 15210946968
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/2f96ce6c/attachment-0001.html>

From aheczko at mirantis.com  Thu Sep 10 07:16:53 2015
From: aheczko at mirantis.com (Adam Heczko)
Date: Thu, 10 Sep 2015 09:16:53 +0200
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAGi6=NCBVX3Dg3JQudzrTtz6YhCqLhBnmizB2Jmxm+e8R+jNdw@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAGi6=NCBVX3Dg3JQudzrTtz6YhCqLhBnmizB2Jmxm+e8R+jNdw@mail.gmail.com>
Message-ID: <CAJciqMyVL9PvRs+ohd9shin0oXBzJsWa0vaWh1hqV8kbhUr6vA@mail.gmail.com>

Agreed. We should understand that this is not an engineering decision but
rather a political/business one, and a fully functional ISO (or ISO plus
some QCOW2 or similar image) is a strong requirement.

regards,

A.

On Thu, Sep 10, 2015 at 8:53 AM, Yaguang Tang <ytang at mirantis.com> wrote:

>
>
> On Thu, Sep 10, 2015 at 1:52 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
>
>> Hello,
>>
>> I agree with Vladimir - the idea of online repos is a right way to
>> move. In 2015 I believe we can ignore this "poor Internet connection"
>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>> distributives - most of them fetch needed packages from the Internet
>> during installation, not from CD/DVD. The netboot installers are
>> popular, I can't even remember when was the last time I install my
>> Debian from the DVD-1 - I use netboot installer for years.
>>
>
> You are thinking like developers, but the fact is that Fuel is widely
> used by various enterprises worldwide, and many enterprises have security
> policies that forbid internet connections.
>
>
>
>> Thanks,
>> Igor
>>
>>
>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
>> >
>> >
>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
>> wrote:
>> >>
>> >>
>> >> Hey Vladimir,
>> >>
>> >>>
>> >>>
>> >>>>>
>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>> >>>>> complicated flow, less errors, easier to maintain, easier to
>> understand,
>> >>>>> easier to troubleshoot
>> >>>>> 2) If one wants to have local mirror, the flow is the same as in
>> case
>> >>>>> of upstream repos (fuel-createmirror), which is clear for a user to
>> >>>>> understand.
>> >>>>
>> >>>>
>> >>>> From the issues I've seen, fuel-createmirror isn't very
>> >>>> straightforward and has some issues that make it a bad UX.
>> >>>
>> >>>
>> >>> I'd say the whole approach of having such a tool as fuel-createmirror
>> is
>> >>> way too naive. A reliable internet connection is a matter of network
>> >>> engineering rather than deployment. Even using a proxy is much better
>> than
>> >>> creating a local mirror. But this discussion is totally out of the
>> scope of
>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
>> >>> straightforward (installed as rpm, has just a couple of command line
>> >>> options). The quality of this script is also out of the scope of this
>> >>> thread. BTW we have plans to improve it.
>> >>
>> >>
>> >>
>> >> Fair enough, I just wanted to raise the UX issues around these types of
>> >> things as they should go into the decision making process.
>> >>
>> >>
>> >>>
>> >>>>>
>> >>>>>
>> >>>>> Many people still associate ISO with MOS, but it is not true when
>> using
>> >>>>> package based delivery approach.
>> >>>>>
>> >>>>> It is easy to define necessary repos during deployment and thus it
>> is
>> >>>>> easy to control what exactly is going to be installed on slave
>> nodes.
>> >>>>>
>> >>>>> What do you guys think of it?
>> >>>>>
>> >>>>>
>> >>>>
>> >>>> Reliance on internet connectivity has been an issue since 6.1. For
>> many
>> >>>> large users, complete access to the internet is not available or not
>> >>>> desired.  If we want to continue down this path, we need to improve
>> the
>> >>>> tools to setup the local mirror and properly document what
>> urls/ports/etc
>> >>>> need to be available for the installation of openstack and any mirror
>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>> similar to a
>> >>>> live cd that allows a user to completely try out fuel wherever they
>> want
>> >>>> with out further requirements of internet access.  If we don't want
>> to
>> >>>> continue with that, we need to do a better job around providing the
>> tools
>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>> providing an
>> >>>> net-only iso and an all-included iso would be a better solution so
>> people
>> >>>> will have their expectations properly set up front?
>> >>>
>> >>>
>> >>> Let me explain why I think having local MOS mirror by default is bad:
>> >>> 1) I don't see any reason why we should treat the MOS repo differently from
>> >>> all other online repos. A user sees on the settings tab the list of
>> repos
>> >>> one of which is local by default while others are online. It can make
>> user a
>> >>> little bit confused, can't it? A user can be also confused by the
>> fact, that
>> >>> some of the repos can be cloned locally by fuel-createmirror while
>> others
>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>> >>
>> >>
>> >>
>> >> I agree. The process should be the same and it should be just another
>> >> repo. It doesn't mean we can't include a version on an ISO as part of a
>> >> release.  Would it be better to provide the mirror on the ISO but not
>> have
>> >> it enabled by default for a release so that we can gather user
>> feedback on
>> >> this? This would include improved documentation and possibly allowing
>> a user
>> >> to choose their preference so we can collect metrics?
>> >>
>> >>
>> >>> 2) Having local MOS mirror by default makes things much more
>> convoluted.
>> >>> We are forced to have several directories with predefined names and
>> we are
>> >>> forced to manage these directories in nailgun, in upgrade script,
>> etc. Why?
>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
>> equal
>> >>> to MOS, which is not true. It is possible to implement really flexible
>> >>> delivery scheme, but we need to think of these things as they are
>> >>> independent.
>> >>
>> >>
>> >>
>> >> I'm not sure what you mean by this. Including a point in time copy on
>> an
>> >> ISO as a release is a common method of distributing software. Is this a
>> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
>> with
>> >> people referring to the ISO as being MOS.
>> >>
>> >>
>> >>> For large users it is easy to build custom ISO and put there what they
>> >>> need but first we need to have simple working scheme clear for
>> everyone. I
>> >>> think dealing with all repos the same way is what is going to make
>> things
>> >>> simpler.
>> >>>
>> >>
>> >>
>> >> Who is going to build a custom ISO? How does one request that? What
>> >> resources are consumed by custom ISO creation process/request? Does
>> this
>> >> scale?
>> >>
>> >>
>> >>>
>> >>> This thread is not about internet connectivity, it is about aligning
>> >>> things.
>> >>>
>> >>
>> >> You are correct in that this thread is not explicitly about internet
>> >> connectivity, but they are related. Any changes to remove a local
>> repository
>> >> and only provide an internet based solution makes internet connectivity
>> >> something that needs to be included in the discussion.  I just want to
>> make
>> >> sure that we properly evaluate this decision based on end user
>> feedback not
>> >> because we don't want to manage this from a developer standpoint.
>> >
>> >
>> >
>> >  +1, whatever the changes are, please keep Fuel as a tool that can deploy
>> > without Internet access; this is part of the reason people like it and why
>> it's
>> > better than other tools.
>> >>
>> >>
>> >> -Alex
>> >>
>> >>
>> >>
>> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> > --
>> > Yaguang Tang
>> > Technical Support, Mirantis China
>> >
>> > Phone: +86 15210946968
>> >
>> >
>> >
>> >
>> >
>>
>>
>
>
>
> --
> Yaguang Tang
> Technical Support, Mirantis China
>
> *Phone*: +86 15210946968
>
>
>
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/e044493b/attachment.html>

From yolanda.robla-mota at hpe.com  Thu Sep 10 07:31:47 2015
From: yolanda.robla-mota at hpe.com (Yolanda Robla Mota)
Date: Thu, 10 Sep 2015 09:31:47 +0200
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
 tent?
In-Reply-To: <CADe0dKC6YkSx_yjp4-gg+GqAoYvdjbLpzAXJi9NqYZMx4trm2g@mail.gmail.com>
References: <20150908145755.GC16241@localhost.localdomain>
 <55EF663E.8050909@redhat.com> <20150909172205.GA13717@localhost.localdomain>
 <CADe0dKC6YkSx_yjp4-gg+GqAoYvdjbLpzAXJi9NqYZMx4trm2g@mail.gmail.com>
Message-ID: <55F131E3.7050909@hpe.com>

Hi,
I would be interested as well. Having these playbooks in ansible can also
be useful
for integrating with the infra-ansible project.
I really see that collection as a valid alternative to the puppet modules,
with the advantages
that ansible can provide, but of course moving infra internally from puppet
to ansible
is something that cannot be done easily and needs a wider discussion.
If we limit the scope of the ansible playbooks to infra components only,
I think that the infra
namespace is the way to go, with an independent group of reviewers.

Best
Yolanda


On 09/09/15 at 21:31, Ricardo Carrillo Cruz wrote:
> I'm interested in ansible roles for openstack-infra, but as there is 
> overlap in functionality
> with the current openstack-infra puppet roles I'm not sure what's the 
> stance from the
> openstack-infra core members and PTL.
>
> I think they should go to openstack-infra, since Nodepool/Zuul/etc are 
> very specific
> to the OpenStack CI.
>
> Question is if we should have a subgroup within openstack-infra 
> namespace for
> 'stuff that is not used by OpenStack CI but interesting from CI 
> perspective and/or
> used by other downstream groups'.
>
> Regards
>
> 2015-09-09 19:22 GMT+02:00 Paul Belanger <pabelanger at redhat.com 
> <mailto:pabelanger at redhat.com>>:
>
>     On Tue, Sep 08, 2015 at 06:50:38PM -0400, Emilien Macchi wrote:
>     >
>     >
>     > On 09/08/2015 10:57 AM, Paul Belanger wrote:
>     > > Greetings,
>     > >
>     > > I wanted to start a discussion about the future of ansible /
>     ansible roles in
>     > > OpenStack. Over the last week or so I've started down the
>     ansible path, starting
>     > > my first ansible role; I've started with ansible-role-nodepool[1].
>     > >
>     > > My initial question is simple, now that big tent is upon us, I
>     would like
>     > > some way to include ansible roles into the openstack git
>     workflow.  I first
>     > > thought the role might live under openstack-infra however I am
>     not sure that
>     > > is the right place.  My reason is, -infra tends to include
>     modules they
>     > > currently run under the -infra namespace, and I don't want to
>     start the effort
>     > > to convince people to migrate.
>     >
>     > I'm wondering what would be the goal of ansible-role-nodepool
>     and what
>     > it would orchestrate exactly. I did not find README that
>     explains it,
>     > and digging into the code makes me think you try to prepare nodepool
>     > images but I don't exactly see why.
>     >
>     > Since we already have puppet-nodepool, I'm curious about the
>     purpose of
>     > this role.
>     > IMHO, if we had to add such a new repo, it would be under
>     > openstack-infra namespace, to be consistent with other repos
>     > (puppet-nodepool, etc).
>     >
>     > > Another thought might be to reach out to the
>     os-ansible-deployment team and ask
>     > > how they see roles in OpenStack moving foward (mostly the
>     reason for this
>     > > email).
>     >
>     > os-ansible-deployment aims to setup OpenStack services in containers
>     > (LXC). I don't see relation between os-ansible-deployment (openstack
>     > deployment related) and ansible-role-nodepool (infra related).
>     >
>     > > Either way, I would be interested in feedback on moving
>     forward on this. Using
>     > > travis-ci and github works but OpenStack workflow is much better.
>     > >
>     > > [1] https://github.com/pabelanger/ansible-role-nodepool
>     > >
>     >
>     > To me, it's unclear how and why we are going to use
>     ansible-role-nodepool.
>     > Could you explain with use-case?
>     >
>     The most basic use case is managing nodepool using ansible, for
>     the purpose of
>     CI.  Basically, rewrite puppet-nodepool using ansible.  I won't go
>     into the
>     reasoning for that, except to say people do not want to use puppet.
>
>     Regarding os-ansible-deployment, they are only related due to both
>     using
>     ansible. I wouldn't see os-ansible-deployment using the module,
>     however I would
>     hope to learn best practices and code reviews from the team.
>
>     Wherever the module lives, I would hope people interested in ansible
>     development would be grouped somehow.
>
>     > Thanks,
>     > --
>     > Emilien Macchi
>     >
>     >
>
>
>
>
>
>

-- 
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-mota at hp.com


From ogelbukh at mirantis.com  Thu Sep 10 07:33:12 2015
From: ogelbukh at mirantis.com (Oleg Gelbukh)
Date: Thu, 10 Sep 2015 10:33:12 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
Message-ID: <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>

The reason people want the offline deployment feature is not poor
connectivity, but rather enterprise intranets, where getting a subnet with
external access is sometimes a real pain in various body parts.
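For illustration only, the trade-off being debated here (prefer online repos, but fall back to a local mirror on isolated intranets) boils down to a simple selection rule. This is a hypothetical sketch: the `pick_repo` helper and both URLs are made up for the example, not part of Fuel.

```python
def pick_repo(upstream, local_mirror, is_reachable):
    """Prefer the online upstream repo; fall back to the local mirror
    when the deployment network has no external access.

    The reachability probe is injected so the logic is testable offline."""
    return upstream if is_reachable(upstream) else local_mirror

# On an enterprise intranet with no external access, the probe fails and
# the deployment transparently uses the local mirror instead:
repo = pick_repo(
    "http://mirror.example.org/mos-repos/ubuntu/",  # illustrative upstream URL
    "http://10.20.0.2:8080/ubuntu/x86_64/",         # illustrative local mirror
    is_reachable=lambda url: False,                 # simulate: no Internet
)
```

With a rule like this both camps are served: connected sites get the online repos by default, while air-gapped sites keep working from the mirror.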

--
Best regards,
Oleg Gelbukh

On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is the right way to
> go. In 2015 I believe we can ignore this "poor Internet connection"
> reason and simplify both Fuel and the UX. Moreover, take a look at Linux
> distributions - most of them fetch the needed packages from the Internet
> during installation, not from CD/DVD. Netboot installers are
> popular; I can't even remember the last time I installed my
> Debian from DVD-1 - I have used the netboot installer for years.
>
> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >>>>>
> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>>>> complicated flow, less errors, easier to maintain, easier to
> understand,
> >>>>> easier to troubleshoot
> >>>>> 2) If one wants to have local mirror, the flow is the same as in case
> >>>>> of upstream repos (fuel-createmirror), which is clear for a user to
> >>>>> understand.
> >>>>
> >>>>
> >>>> From the issues I've seen, fuel-createmirror isn't very
> >>>> straightforward and has some issues that make it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such a tool as fuel-createmirror is
> >>> way too naive. A reliable internet connection is a matter of network
> >>> engineering rather than deployment. Even using a proxy is much better
> than
> >>> creating a local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >>>>>
> >>>>>
> >>>>> Many people still associate ISO with MOS, but it is not true when
> using
> >>>>> package based delivery approach.
> >>>>>
> >>>>> It is easy to define necessary repos during deployment and thus it is
> >>>>> easy to control what exactly is going to be installed on slave nodes.
> >>>>>
> >>>>> What do you guys think of it?
> >>>>>
> >>>>>
> >>>>
> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> many
> >>>> large users, complete access to the internet is not available or not
> >>>> desired.  If we want to continue down this path, we need to improve
> the
> >>>> tools to setup the local mirror and properly document what
> urls/ports/etc
> >>>> need to be available for the installation of openstack and any mirror
> >>>> creation process.  The ideal thing is to have an all-in-one CD
> similar to a
> >>>> live cd that allows a user to completely try out fuel wherever they
> want
> >>>> with out further requirements of internet access.  If we don't want to
> >>>> continue with that, we need to do a better job around providing the
> tools
> >>>> for a user to get up and running in a timely fashion.  Perhaps
> providing an
> >>>> net-only iso and an all-included iso would be a better solution so
> people
> >>>> will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat the MOS repo differently from
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default while others are online. It can make
> user a
> >>> little bit confused, can't it? A user can be also confused by the
> fact, that
> >>> some of the repos can be cloned locally by fuel-createmirror while
> others
> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>
> >>
> >>
> >> I agree. The process should be the same and it should be just another
> >> repo. It doesn't mean we can't include a version on an ISO as part of a
> >> release.  Would it be better to provide the mirror on the ISO but not
> have
> >> it enabled by default for a release so that we can gather user feedback
> on
> >> this? This would include improved documentation and possibly allowing a
> user
> >> to choose their preference so we can collect metrics?
> >>
> >>
> >>> 2) Having local MOS mirror by default makes things much more
> convoluted.
> >>> We are forced to have several directories with predefined names and we
> are
> >>> forced to manage these directories in nailgun, in upgrade script, etc.
> Why?
> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> equal
> >>> to MOS, which is not true. It is possible to implement really flexible
> >>> delivery scheme, but we need to think of these things as they are
> >>> independent.
> >>
> >>
> >>
> >> I'm not sure what you mean by this. Including a point in time copy on an
> >> ISO as a release is a common method of distributing software. Is this a
> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
> with
> >> people referring to the ISO as being MOS.
> >>
> >>
> >>> For large users it is easy to build custom ISO and put there what they
> >>> need but first we need to have simple working scheme clear for
> everyone. I
> >>> think dealing with all repos the same way is what is going to make things
> >>> simpler.
> >>>
> >>
> >>
> >> Who is going to build a custom ISO? How does one request that? What
> >> resources are consumed by custom ISO creation process/request? Does this
> >> scale?
> >>
> >>
> >>>
> >>> This thread is not about internet connectivity, it is about aligning
> >>> things.
> >>>
> >>
> >> You are correct in that this thread is not explicitly about internet
> >> connectivity, but they are related. Any changes to remove a local
> repository
> >> and only provide an internet based solution makes internet connectivity
> >> something that needs to be included in the discussion.  I just want to
> make
> >> sure that we properly evaluate this decision based on end user feedback
> not
> >> because we don't want to manage this from a developer standpoint.
> >
> >
> >
> >  +1, whatever the changes are, please keep Fuel as a tool that can deploy
> > without Internet access; this is part of the reason people like it and why
> it's
> > better than other tools.
> >>
> >>
> >> -Alex
> >>
> >>
> >>
> >>
> >
> >
> >
> > --
> > Yaguang Tang
> > Technical Support, Mirantis China
> >
> > Phone: +86 15210946968
> >
> >
> >
> >
> >
>
>

From ogelbukh at mirantis.com  Thu Sep 10 07:33:40 2015
From: ogelbukh at mirantis.com (Oleg Gelbukh)
Date: Thu, 10 Sep 2015 10:33:40 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
 <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
Message-ID: <CAFkLEwoXR-C-etHpS-KDZokYkP8CfS8q9UXjKYF0eNYo6yOpcQ@mail.gmail.com>

Alex,

I absolutely understand the point you are making about the need for
deployment engineers to modify things 'on the fly' in customer environments.
It makes things really flexible and lowers the entry barrier, for sure.

However, I would like to note that in my opinion this kind of 'monkey
patching' is actually a bad practice for any environment other than a dev
one. It immediately leads to the emergence of unsupportable frankenclouds. I
would welcome any modification to the workflow that discourages people
from doing that.

--
Best regards,
Oleg Gelbukh

On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz <aschultz at mirantis.com> wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure cluster to use
>> additional online repos, where all necessary packages (including plugin
>> specific puppet manifests) are to be available. Current granular deployment
>> approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there are many other things that live in the fuel-library puppet path
> than just the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only valid for the master.  Another issue is being
>>> flexible enough to allow for deployment engineers to make custom changes
>>> for a given environment.  Unless we can provide an improved process to
>>> allow for people to provide in place modifications for an environment, we
>>> can't do away with the rsync.
>>>
>>> If we want to go completely down the package route (and we probably
>>> should), we need to make sure that all of the other pieces that currently
>>> go together to make a complete fuel deployment can be updated in the same
>>> way.
>>>
>>> -Alex
>>>
>>>
>
>

From gal.sagie at gmail.com  Thu Sep 10 07:39:38 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Thu, 10 Sep 2015 10:39:38 +0300
Subject: [openstack-dev] [neutron] RFE process question
In-Reply-To: <CAK+RQeY_SjXNyApy9gfp7NOUk0V7qwhnSay+jU1LBr=aDXAG0A@mail.gmail.com>
References: <55F11653.8080509@catalyst.net.nz>
 <CAK+RQeY_SjXNyApy9gfp7NOUk0V7qwhnSay+jU1LBr=aDXAG0A@mail.gmail.com>
Message-ID: <CAG9LJa79xEhGTvq5pEYEvDTvP1y9Z9PcJ0qxKEEkPeE0MoqBkg@mail.gmail.com>

Hi James,

I think that https://review.openstack.org/#/c/216021/ might be what you are
looking for.
Please review it and see whether it fits your requirement.
Hopefully this gets approved for the next release so I can start working on
it; if you (or anyone on your team) would like to join
and contribute, I would love any help with that.

Thanks
Gal.

On Thu, Sep 10, 2015 at 8:59 AM, Armando M. <armamig at gmail.com> wrote:

>
> On 10 September 2015 at 11:04, James Dempsey <jamesd at catalyst.net.nz>
> wrote:
>
>> Greetings Devs,
>>
>> I'm very excited about the new RFE process and thought I'd test it by
>> requesting a feature that is very often requested by my users[1].
>>
>> There are some great docs out there about how to submit an RFE, but I
>> don't know what should happen after the submission to launchpad. My RFE
>> bug seems to have been untouched for a month and I'm unsure if I've done
>> something wrong. So, here are a few questions that I have.
>>
>>
>> 1. Should I be following up on the dev list to ask for someone to look
>> at my RFE bug?
>> 2. How long should I expect it to take to have my RFE acknowledged?
>> 3. As an operator, I'm a bit ignorant as to whether or not there are
>> times during the release cycle during which there simply won't be
>> bandwidth to consider RFE bugs.
>> 4. Should I be doing anything else?
>>
>> Would love some guidance.
>>
>
> you did nothing wrong, the team was simply busy going through the existing
> schedule. Having said that, you could have spared a few more words on the
> use case and what you mean by annotations.
>
> I'll follow up on the RFE for more questions.
>
> Cheers,
> Armando
>
>
>>
>> Cheers,
>> James
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1483480
>>
>> --
>> James Dempsey
>> Senior Cloud Engineer
>> Catalyst IT Limited
>> +64 4 803 2264
>> --
>>
>>
>
>
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/ddd36354/attachment.html>

From liuhoug at cn.ibm.com  Thu Sep 10 07:52:03 2015
From: liuhoug at cn.ibm.com (Hou Gang HG Liu)
Date: Thu, 10 Sep 2015 15:52:03 +0800
Subject: [openstack-dev] questions about nova compute monitors extensions
Message-ID: <OF8D69D44A.5AC308A3-ON48257EBC.0029CD2B-48257EBC.002B55F8@cn.ibm.com>

Hi all,

I notice that the nova compute monitor now only tries to load monitors from 
the namespace "nova.compute.monitors.cpu", and that only one monitor per 
namespace can be enabled (
https://review.openstack.org/#/c/209499/6/nova/compute/monitors/__init__.py
).

Is there a plan to make MonitorHandler.NAMESPACES configurable, or will it 
stay a hard-coded constraint as it is now? And how can compute monitors 
support user-defined monitors, as they did before?
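The one-monitor-per-namespace constraint can be sketched in a few lines of plain Python (the names mirror the Nova code, but this is an illustration, not the actual implementation):

```python
# Hypothetical sketch of the constraint discussed above: monitor plugins are
# grouped by namespace, and at most one monitor per namespace may be enabled.
NAMESPACES = ['nova.compute.monitors.cpu']  # hard-coded today


class MonitorHandler:
    def __init__(self, enabled):
        # enabled: list like ['cpu.virt_driver'] from CONF.compute_monitors
        self.monitors = {}
        for namespace in NAMESPACES:
            short = namespace.rsplit('.', 1)[-1]  # e.g. 'cpu'
            matches = [m for m in enabled if m.startswith(short + '.')]
            if len(matches) > 1:
                raise ValueError(
                    'only one monitor per namespace may be enabled: %s'
                    % matches)
            if matches:
                self.monitors[namespace] = matches[0]


handler = MonitorHandler(['cpu.virt_driver'])
```

Making NAMESPACES a configuration option (rather than a module constant, as above) would be the natural place to restore user-defined monitors.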

Thanks!
B.R

Hougang Liu
Developer - IBM Platform Resource Scheduler
Systems and Technology Group

Mobile: 86-13519121974 | Phone: 86-29-68797023 | Tie-Line: 87023
E-mail: liuhoug at cn.ibm.com
Xian, Shaanxi Province 710075, China
 


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/505e7d3f/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 360 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/505e7d3f/attachment.gif>

From paul.carlton2 at hp.com  Thu Sep 10 07:53:29 2015
From: paul.carlton2 at hp.com (Paul Carlton)
Date: Thu, 10 Sep 2015 08:53:29 +0100
Subject: [openstack-dev] Question about generating an oslo.utils release
Message-ID: <55F136F9.9060700@hp.com>

Hi

I have an oslo.utils change merged (220620 
<https://review.openstack.org/#/c/220620/>).  A nova change (220622 
<https://review.openstack.org/#/c/220622/>) depends on it.  What is 
the process for creating a new version of oslo.utils?  Is this performed 
periodically by a release manager, or do I need to do something myself?

Incidentally, despite including a Depends-On tag in my nova change's 
commit message, my tests that depend on the oslo.utils change failed in 
CI. I thought the use of Depends-On would cause it to load oslo.utils 
from the referenced development commit?
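For reference, a cross-repository dependency is declared with a footer line in the commit message; a sketch (the Change-Id below is a placeholder, not the real one):

```
Make nova use the new oslo.utils helper

Depends-On: I0123456789abcdef0123456789abcdef01234567
```

Note that Depends-On controls which changes Zuul tests together; whether a job actually installs the library from that commit depends on the job's configuration (for example, whether it installs libraries from source), so a job may still end up using the released oslo.utils.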

Thanks

-- 
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:    +44 (0)7768 994283
Email:    mailto:paul.carlton2 at hp.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error, you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated you should consider this message and attachments as "HP CONFIDENTIAL".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/2574fa32/attachment.html>

From dtantsur at redhat.com  Thu Sep 10 07:55:47 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 10 Sep 2015 09:55:47 +0200
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <20150909164820.GE21846@jimrollenhagen.com>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com> <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
 <20150909164820.GE21846@jimrollenhagen.com>
Message-ID: <55F13783.9020702@redhat.com>

On 09/09/2015 06:48 PM, Jim Rollenhagen wrote:
> On Tue, Sep 01, 2015 at 03:47:03PM -0500, Dean Troyer wrote:
>> [late catch-up]
>>
>> On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann <doug at doughellmann.com>
>> wrote:
>>
>>> Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:
>>>> On 24/08/15 18:19 +0000, Tim Bell wrote:
>>>>>
>>>> >From a user perspective, where bare metal and VMs are just different
>>> flavors (with varying capabilities), can we not use the same commands
>>> (server create/rebuild/...) ? Containers will create the same conceptual
>>> problems.
>>>>>
>>>>> OSC can provide a converged interface but if we just replace '$ ironic
>>> XXXX' by '$ openstack baremetal XXXX', this seems to be a missed
>>> opportunity to hide the complexity from the end user.
>>>>>
>>>>> Can we re-use the existing server structures ?
>>>
>>
>> I've wondered about how users would see doing this, we've done it already
>> with the quota and limits commands (blurring the distinction between
>> project APIs).  At some level I am sure users really do not care about some
>> of our project distinctions.
>>
>>
>>> To my knowledge, overriding or enhancing existing commands like that
>>>
>>> is not possible.
>>>
>>> You would have to do it in tree, by making the existing commands
>>> smart enough to talk to both nova and ironic, first to find the
>>> server (which service knows about something with UUID XYZ?) and
>>> then to take the appropriate action on that server using the right
>>> client. So it could be done, but it might lose some of the nuance
>>> between the server types by munging them into the same command. I
>>> don't know what sorts of operations are different, but it would be
>>> worth doing the analysis to see.
>>>
>>
>> I do have an experimental plugin that hooks the server create command to
>> add some options and change its behaviour so it is possible, but right now
>> I wouldn't call it supported at all.  That might be something that we could
>> consider doing though for things like this.
>>
>> The current model for commands calling multiple project APIs is to put them
>> in openstackclient.common, so yes, in-tree.
>>
>> Overall, though, to stay consistent with OSC you would map operations into
>> the current verbs as much as possible.  It is best to think in terms of how
>> the CLI user is thinking and what she wants to do, and not how the REST or
>> Python API is written.  In this case, 'baremetal' is a type of server, a
>> set of attributes of a server, etc.  As mentioned earlier, containers will
>> also have a similar paradigm to consider.
>
> Disclaimer: I don't know much about OSC or its syntax, command
> structure, etc. These may not be well-formed thoughts. :)

With the same disclaimer applied...

>
> While it would be *really* cool to support the same command to do things
> to nova servers or do things to ironic servers, I don't know that it's
> reasonable to do so.
>
> Ironic is an admin-only API, that supports running standalone or behind
> a Nova installation with the Nova virt driver. The API is primarily used
> by Nova, or by admins for management. In the case of a standalone
> configuration, an admin can use the Ironic API to deploy a server,
> though the recommended approach is to use Bifrost[0] to simplify that.
> In the case of Ironic behind Nova, users are expected to boot baremetal
> servers through Nova, as indicated by a flavor.
>
> So, many of the nova commands (openstack server foo) don't make sense in
> an Ironic context, and vice versa. It would also be difficult to
> determine if the commands should go through Nova or through Ironic.
> The path could be something like: check that Ironic exists, see if user
> has access, hence standalone mode (oh wait, operators probably have
> access to manage Ironic *and* deploy baremetal through Nova, what do?).

I second this. I'd also like to add that in the case of Ironic, "server 
create" may actually involve several complex actions that do not map to 
'nova boot'. First we create a node record in the database, second we 
check its power credentials, third we do properties inspection, and finally 
we do cleaning. None of these makes any sense in a virtual environment.
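As a rough illustration of why these steps don't collapse into a single "boot" call, the bare-metal lifecycle can be modelled as an ordered sequence of transitions (a toy sketch, not Ironic's real state machine):

```python
# Hypothetical ordered steps from the paragraph above; each one is a distinct
# operation with its own failure modes, unlike a single 'nova boot'.
STEPS = ['enroll', 'verify-credentials', 'inspect', 'clean', 'available']


class Node:
    def __init__(self, name):
        self.name = name
        self.done = []

    def advance(self):
        # Perform the next lifecycle step in order.
        self.done.append(STEPS[len(self.done)])
        return self.done[-1]


node = Node('bm-01')
while node.done[-1:] != ['available']:
    node.advance()
```

Only once the node reaches the final state is it available for deployment; a virtual machine skips all of these steps.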

>
> I think we should think of "openstack baremetal foo" as commands to
> manage the baremetal service (Ironic), as that is what the API is
> primarily intended for. Then "openstack server foo" just does what it
> does today, and if the flavor happens to be a baremetal flavor, the user
> gets a baremetal server.
>
> // jim
>
> [0] https://github.com/openstack/bifrost
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From xiexs at cn.fujitsu.com  Thu Sep 10 08:30:23 2015
From: xiexs at cn.fujitsu.com (Xie, Xianshan)
Date: Thu, 10 Sep 2015 08:30:23 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit
 server
Message-ID: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>

Hi, all,
   In my CI environment, after I submit a patch to openstack-dev/sandbox,
the Jenkins job is launched automatically, and its result message is posted to the Gerrit server successfully.
Everything seems fine.

But in the "Verified" column there is no verified vote, such as +1 or -1.
(patch url: https://review.openstack.org/#/c/222049/,
CI name: Fnst OpenStackTest CI)

Although I have already added the "verified" label to layout.yaml under the check pipeline, it does not work yet.

My configuration is set as follows:
Layout.yaml
-------------------------------------------
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
    ...
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

jobs:
  - name: noop-check-communication
    parameter-function: reusable_node

projects:
  - name: openstack-dev/sandbox
    check:
      - noop-check-communication
-------------------------------------------


And the projects.yaml of Jenkins job:
-------------------------------------------
- project:
    name: sandbox
    jobs:
      - noop-check-communication:
          node: 'devstack_slave || devstack-precise-check || d-p-c'
    ...
-------------------------------------------
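One quick way to catch indentation mistakes like the ones above is to parse the layout with PyYAML before restarting Zuul (a minimal sketch; requires the PyYAML package):

```python
# Parse a layout snippet and check the structure Zuul expects. The snippet is
# a trimmed-down example, not the full configuration above.
import yaml

layout = """
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1
"""

cfg = yaml.safe_load(layout)  # raises yaml.YAMLError on bad indentation
check = cfg['pipelines'][0]
```

If `safe_load` succeeds but the vote still does not appear, the problem is likely elsewhere (for example, the CI account's permissions on the Verified label in Gerrit) rather than in the YAML itself.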

Could anyone help me? Thanks in advance.

Xiexs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/5c1c7181/attachment.html>

From kuvaja at hpe.com  Thu Sep 10 08:36:06 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Thu, 10 Sep 2015 08:36:06 +0000
Subject: [openstack-dev] [glance] differences between def detail() and
 def index() in glance/registry/api/v1/images.py
References: <CAPjtyqsQdh_-RW9pv6Y48RKVSN3zF_Ny5hPdndrbCKuUfbzPYw@mail.gmail.com>
 <55F0C8F1.5080503@catalyst.net.nz> 
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4C4218@G4W3202.americas.hpqcorp.net>

This was the case until about two weeks ago.

Since the 1.0.0 release we have defaulted to Images API v2 instead of v1 [0].

If you want to exercise the v1 functionality from the CLI client, you need to either set the environment variable OS_IMAGE_API_VERSION=1 or use the command line option --os-image-api-version 1. In either case, --debug can be used with glanceclient to provide detailed information about where the request is being sent and what the responses are.

If you haven't moved to the latest client yet, ignore the above apart from the --debug part.

[0] https://github.com/openstack/python-glanceclient/blob/master/doc/source/index.rst
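The version-selection precedence can be sketched as follows (a simplified illustration; the real client handles more options than this):

```python
import os

# Precedence sketch: the --os-image-api-version CLI option beats the
# OS_IMAGE_API_VERSION environment variable, which beats the default ('2'
# since glanceclient 1.0.0). This mirrors the behaviour described above but
# is not the client's actual code.
def image_api_version(cli_opt=None, default='2'):
    return cli_opt or os.environ.get('OS_IMAGE_API_VERSION') or default


os.environ.pop('OS_IMAGE_API_VERSION', None)
assert image_api_version() == '2'          # new default: Images API v2

os.environ['OS_IMAGE_API_VERSION'] = '1'   # opt back into v1
```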


-          Erno

From: Fei Long Wang [mailto:feilong at catalyst.net.nz]
Sent: Thursday, September 10, 2015 1:04 AM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [glance] differences between def detail() and def index() in glance/registry/api/v1/images.py

I assume you're using the Glance client. If so, by default the command 'glance image-list' calls /v1/images/detail instead of /v1/images; you can use curl or any HTTP client to see the difference. As the endpoint names suggest, /v1/images/detail gives you more details. See the difference in their responses below.

Response from /v1/images/detail
{
    "images": [
        {
            "status": "active",
            "deleted_at": null,
            "name": "fedora-21-atomic-3",
            "deleted": false,
            "container_format": "bare",
            "created_at": "2015-09-03T22:56:37.000000",
            "disk_format": "qcow2",
            "updated_at": "2015-09-03T23:00:15.000000",
            "min_disk": 0,
            "protected": false,
            "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
            "min_ram": 0,
            "checksum": "d3b3da0e07743805dcc852785c7fc258",
            "owner": "5f290ac4b100440b8b4c83fce78c2db7",
            "is_public": true,
            "virtual_size": null,
            "properties": {
                "os_distro": "fedora-atomic"
            },
            "size": 770179072
        }
    ]
}

Response with /v1/images
{
    "images": [
        {
            "name": "fedora-21-atomic-3",
            "container_format": "bare",
            "disk_format": "qcow2",
            "checksum": "d3b3da0e07743805dcc852785c7fc258",
            "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
            "size": 770179072
        }
    ]
}
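Comparing the two sample responses above, /v1/images returns a strict subset of the fields that /v1/images/detail returns; in Python terms:

```python
# Field sets taken from the two sample responses above.
index_fields = {
    "name", "container_format", "disk_format", "checksum", "id", "size",
}
detail_fields = index_fields | {
    "status", "deleted_at", "deleted", "created_at", "updated_at",
    "min_disk", "protected", "min_ram", "owner", "is_public",
    "virtual_size", "properties",
}

# /v1/images is a strict subset of /v1/images/detail for each image record.
assert index_fields < detail_fields
```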
On 10/09/15 11:46, Su Zhang wrote:

Hello,

I am hitting an error whose trace passes through def index() in glance/registry/api/v1/images.py.

I assumed def index() is called by 'glance image-list'. However, while testing 'glance image-list' I realized that def detail() in glance/registry/api/v1/images.py is called instead of def index().

Could someone let me know the difference between the two functions? How can I exercise def index() in glance/registry/api/v1/images.py through the CLI or API?

Thanks,

--
Su Zhang



__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Cheers & Best regards,

Fei Long Wang

--------------------------------------------------------------------------

Senior Cloud Software Engineer

Tel: +64-48032246

Email: flwang at catalyst.net.nz<mailto:flwang at catalyst.net.nz>

Catalyst IT Limited

Level 6, Catalyst House, 150 Willis Street, Wellington

--------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/ddd9e0af/attachment.html>

From sgolovatiuk at mirantis.com  Thu Sep 10 08:50:08 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Thu, 10 Sep 2015 10:50:08 +0200
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAFkLEwoXR-C-etHpS-KDZokYkP8CfS8q9UXjKYF0eNYo6yOpcQ@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
 <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
 <CAFkLEwoXR-C-etHpS-KDZokYkP8CfS8q9UXjKYF0eNYo6yOpcQ@mail.gmail.com>
Message-ID: <CA+HkNVsj5m4CduyZ9duTgCsp-BKP-Dwt85iZ01=txPBxVasANg@mail.gmail.com>

Oleg,

Alex gave a perfect example regarding support folks who need to fix
something really quickly. It is the client's choice what to patch or not. You may
like it or not, but it is the client's choice.

On 10 Sep 2015, at 09:33, Oleg Gelbukh <ogelbukh at mirantis.com> wrote:

Alex,

I absolutely understand the point you are making about the need for deployment
engineers to modify things 'on the fly' in customer environments. It makes
things really flexible and lowers the entry barrier, for sure.

However, I would like to note that in my opinion this kind of 'monkey
patching' is actually a bad practice for any environment other than a dev
one. It immediately leads to the emergence of unsupportable frankenclouds. I
would welcome any modification to the workflow that discourages people
from doing that.

--
Best regards,
Oleg Gelbukh

On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz <aschultz at mirantis.com> wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure the cluster to use
>> additional online repos, where all necessary packages (including
>> plugin-specific puppet manifests) are to be available. The current granular
>> deployment approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there are many other things that live in the fuel-library puppet path
> besides the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only valid for the master.  Another issue is being
>>> flexible enough to allow for deployment engineers to make custom changes
>>> for a given environment.  Unless we can provide an improved process to
>>> allow for people to provide in place modifications for an environment, we
>>> can't do away with the rsync.
>>>
>>> If we want to go completely down the package route (and we probably
>>> should), we need to make sure that all of the other pieces that currently
>>> go together to make a complete fuel deployment can be updated in the same
>>> way.
>>>
>>> -Alex
>>>
>>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/0a8c5a7a/attachment.html>

From malini.k.bhandaru at intel.com  Thu Sep 10 09:04:14 2015
From: malini.k.bhandaru at intel.com (Bhandaru, Malini K)
Date: Thu, 10 Sep 2015 09:04:14 +0000
Subject: [openstack-dev] [Glance] Feature Freeze Exception proposal
In-Reply-To: <55F0F3AD.3020209@gmail.com>
References: <55E7AC5C.9010504@gmail.com> <20150903085224.GD30997@redhat.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B339376@fmsmsx117.amr.corp.intel.com>
 <EA70533067B8F34F801E964ABCA4C4410F4C1D0D@G4W3202.americas.hpqcorp.net>
 <D20DBFD8.210FE%brian.rosmaita@rackspace.com> <55E8784A.4060809@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B3397CA@fmsmsx117.amr.corp.intel.com>
 <55E9CA69.9030003@gmail.com>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33A963@fmsmsx117.amr.corp.intel.com>
 <55EEF89F.4040003@gmail.com> <55F0F3AD.3020209@gmail.com>
Message-ID: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D119@fmsmsx117.amr.corp.intel.com>

Thank you! -- Malini

-----Original Message-----
From: Nikhil Komawar [mailto:nik.komawar at gmail.com] 
Sent: Wednesday, September 09, 2015 8:06 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

FYI, this was granted FFE.

On 9/8/15 11:02 AM, Nikhil Komawar wrote:
> Malini,
>
> Your note on the etherpad [1] went unnoticed, as we had that sync on
> Friday outside of our regular meeting, and the weekly meeting agenda
> etherpad was not fit for discussion purposes.
>
> It would be nice if you all can update & comment on the spec, ref. the 
> note or have someone send a relative email here that explains the 
> redressal of the issues raised on the spec and during Friday sync [2].
>
> [1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstac
> k-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>
> On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Glance team on the FFE consideration.
>> We are committed to making the revisions per suggestion and separately seek help from the Flavio, Sabari, and Harsh.
>> Regards
>> Malini, Kent, and Jakub
>>
>>
>> -----Original Message-----
>> From: Nikhil Komawar [mailto:nik.komawar at gmail.com]
>> Sent: Friday, September 04, 2015 9:44 AM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>> proposal
>>
>> Hi Malini et.al.,
>>
>> We had a sync up earlier today on this topic and a few items were discussed including new comments on the spec and existing code proposal.
>> You can find the logs of the conversation here [1].
>>
>> There are 3 main outcomes of the discussion:
>> 1. We hope to get a commitment on the feature (spec and the code) that the comments would be addressed and code would be ready by Sept 18th; after which the RC1 is planned to be cut [2]. Our hope is that the spec is merged way before and implementation to the very least is ready if not merged. The comments on the spec and merge proposal are currently implementation details specific so we were positive on this front.
>> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has newer patch sets with major concerns addressed.
>> 3. We cannot commit to granting a backport to this feature so, we ask the implementors to consider using the plug-ability and modularity of the taskflow library. You may consult developers who have already worked on adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can then use those scripts and put them back in their Liberty deployments even if it's not in the standard tarball.
>>
>> Please let me know if you have more questions.
>>
>> [1]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23opensta
>> ck-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>>
>> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>>> Thank you Nikhil and Brian!
>>>
>>> -----Original Message-----
>>> From: Nikhil Komawar [mailto:nik.komawar at gmail.com]
>>> Sent: Thursday, September 03, 2015 9:42 AM
>>> To: openstack-dev at lists.openstack.org
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> We agreed to hold off on granting it a FFE until tomorrow.
>>>
>>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast your vote.
>>>
>>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>>>> I added an agenda item for this for today's Glance meeting:
>>>>    https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>>>
>>>> I'd prefer to hold my vote until after the meeting.
>>>>
>>>> cheers,
>>>> brian
>>>>
>>>>
>>>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuvaja at hp.com> wrote:
>>>>
>>>>> Malini, all,
>>>>>
>>>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>>>> and implementation.
>>>>>
>>>>> I'm more than happy to realign my stand after we have updated spec 
>>>>> and a) it's agreed to be the approach as of now and b) we can 
>>>>> evaluate how much work the implementation needs to meet with the revisited spec.
>>>>>
>>>>> If we end up to the unfortunate situation that this functionality 
>>>>> does not merge in time for Liberty, I'm confident that this is one 
>>>>> of the first things in Mitaka. I really don't think there is too 
>>>>> much to go, we just might run out of time.
>>>>>
>>>>> Thanks for your patience and endless effort to get this done.
>>>>>
>>>>> Best,
>>>>> Erno
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Bhandaru, Malini K [mailto:malini.k.bhandaru at intel.com]
>>>>>> Sent: Thursday, September 03, 2015 10:10 AM
>>>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>>>>> usage
>>>>>> questions)
>>>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>>>> proposal
>>>>>>
>>>>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>>>>> addresses the comments. We would very much appreciate a +1 on the 
>>>>>> FFE.
>>>>>>
>>>>>> Regards
>>>>>> Malini
>>>>>>
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Flavio Percoco [mailto:flavio at redhat.com]
>>>>>> Sent: Thursday, September 03, 2015 1:52 AM
>>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>>>> proposal
>>>>>>
>>>>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I wanted to propose 'Single disk image OVA import' [1] feature 
>>>>>>> proposal for exception. This looks like a decently safe proposal 
>>>>>>> that should be able to adjust in the extended time period of 
>>>>>>> Liberty. It has been discussed at the Vancouver summit during a 
>>>>>>> work session and the proposal has been trimmed down as per the 
>>>>>>> suggestions then; has been overall accepted by those present 
>>>>>>> during the discussions (barring a few changes needed on the spec itself).
>>>>>>> It being an addition to the already existing import task, it doesn't
>>>>>>> involve an API change or a change to any of the core Image functionality as of now.
>>>>>>>
>>>>>>> Please give your vote: +1 or -1 .
>>>>>>>
>>>>>>> [1] https://review.openstack.org/#/c/194868/
>>>>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>>>>> Unfortunately, I think there are too many open questions in the 
>>>>>> spec right now to make this FFE worthy.
>>>>>>
>>>>>> Could those questions be answered to before the EOW?
>>>>>>
>>>>>> With those questions answered, we'll be able to provide a more, 
>>>>>> realistic, vote.
>>>>>>
>>>>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>>>>> and the likelihood of it addressing the concerns/comments in time.
>>>>>>
>>>>>> For now, it's a -1 from me.
>>>>>>
>>>>>> Thanks all for working on this, this has been a long time 
>>>>>> requested format to have in Glance.
>>>>>> Flavio
>>>>>>
>>>>>> [0] https://review.openstack.org/#/c/214810/
>>>>>>
>>>>>>
>>>>>> --
>>>>>> @flaper87
>>>>>> Flavio Percoco
>>>>>> __________________________________________________________
>>>>>> ________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe: OpenStack-dev-
>>>>>> request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> __________________________________________________________________
>>>>> __ _ _____ OpenStack Development Mailing List (not for usage 
>>>>> questions)
>>>>> Unsubscribe: 
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> ___________________________________________________________________
>>>> __ _ ____ OpenStack Development Mailing List (not for usage 
>>>> questions)
>>>> Unsubscribe: 
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From malini.k.bhandaru at intel.com  Thu Sep 10 09:10:17 2015
From: malini.k.bhandaru at intel.com (Bhandaru, Malini K)
Date: Thu, 10 Sep 2015 09:10:17 +0000
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com> <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
Message-ID: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>

Brianna, I can imagine a denial of service attack by uploading images whose signature is invalid, if we allow them to reside in Glance
in a "killed" state. This would be less of an issue if "killed" images did not consume storage quota until actually deleted.
Also, given that MD5 is less secure, why not have the default hash be SHA-1 or SHA-2?
Regards
Malini

-----Original Message-----
From: Poulos, Brianna L. [mailto:Brianna.Poulos at jhuapl.edu] 
Sent: Wednesday, September 09, 2015 9:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: stuart.mclaren at hp.com
Subject: Re: [openstack-dev] [glance] [nova] Verification of glance images before boot

Stuart is right about what will currently happen in Nova when an image is downloaded, which protects against unintentional modifications to the image data.

What is currently being worked on is adding the ability to verify a signature of the checksum.  The flow of this is as follows:
1. The user creates a signature of the "checksum hash" (currently MD5) of the image data offline.
2. The user uploads a public key certificate, which can be used to verify the signature, to a key manager (currently Barbican).
3. The user creates an image in glance, with signature metadata properties.
4. The user uploads the image data to glance.
5. If the signature metadata properties exist, glance verifies the signature of the "checksum hash", including retrieving the certificate from the key manager.
6. If the signature verification fails, glance moves the image to a killed state, and returns an error message to the user.
7. If the signature verification succeeds, a log message indicates that it succeeded, and the image upload finishes successfully.

8. Nova requests the image from glance, along with the image properties, in order to boot it.
9. Nova uses the signature metadata properties to verify the signature (if a configuration option is set).
10. If the signature verification fails, nova does not boot the image, but errors out.
11. If the signature verification succeeds, nova boots the image, and a log message notes that the verification succeeded.
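To make the upload-side portion of the flow (steps 4-7) concrete, here is a minimal Python sketch. It uses only hashlib, and a caller-supplied `verify` callable stands in for the real certificate-based signature check against the key manager; all function and field names here are illustrative, not Glance's actual code:

```python
import hashlib

def upload_image(image_data, signature=None, verify=None):
    """Sketch of the glance-side upload flow (steps 4-7 above)."""
    # Step 5: compute the "checksum hash" (currently MD5) over the image data.
    checksum = hashlib.md5(image_data).hexdigest()
    if signature is not None:
        # In Glance, `verify` would retrieve the certificate from the key
        # manager (Barbican) and cryptographically check the signature.
        if not verify(checksum, signature):
            # Step 6: verification failed -> image moved to a killed state.
            return {'status': 'killed', 'checksum': checksum}
    # Step 7: verification succeeded (or no signature metadata present).
    return {'status': 'active', 'checksum': checksum}
```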

Regarding what is currently in Liberty, the blueprint mentioned [1] has
merged, and code [2] handling steps 1-7 of the flow above has also been
merged in glance.

For steps 8-11, there is currently a nova blueprint [3], along with code [4], both of which are proposed for Mitaka.

Note that we are in the process of adding official documentation, with examples of creating the signature as well as the properties that need to be added for the image before upload.  In the meantime, there's an etherpad that describes how to test the signature verification functionality in Glance [5].

Also note that this is the initial approach, and there are some limitations.  For example, ideally the signature would be based on a cryptographically secure (i.e. not MD5) hash of the image.  There is a spec in glance to allow this hash to be configurable [6].
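At the checksum level, swapping MD5 for a stronger hash is just a different `hashlib` constructor; a streaming computation (as a downloading client would perform it chunk by chunk) is otherwise identical either way. A sketch, with illustrative function names only:

```python
import hashlib

def stream_checksum(chunks, algo='md5'):
    """Compute a checksum incrementally over an iterable of byte chunks,
    as a client would while downloading image data."""
    h = hashlib.new(algo)  # 'md5' today; e.g. 'sha256' once configurable
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def verify_download(chunks, expected_checksum, algo='md5'):
    """Raise if the recomputed checksum does not match the expected one."""
    actual = stream_checksum(chunks, algo)
    if actual != expected_checksum:
        raise ValueError('checksum mismatch: %s != %s'
                         % (actual, expected_checksum))
    return actual
```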

[1] https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
[2] https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e
[3] https://review.openstack.org/#/c/188874/
[4] https://review.openstack.org/#/c/189843/
[5] https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
[6] https://review.openstack.org/#/c/191542/


Thanks,
~Brianna




On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com> wrote:

>That's correct.
>
>The size and the checksum are to be verified outside of Glance, in this 
>case Nova. However, you may want to note that it's not necessary that 
>all Nova virt drivers would use py-glanceclient so you would want to 
>check the download specific code in the virt driver your Nova 
>deployment is using.
>
>Having said that, essentially the flow seems appropriate. An error must
>be raised on mismatch.
>
>The signing BP was to help prevent a compromised Glance from changing
>the checksum and image blob at the same time. Using a digital
>signature, you can prevent the download of compromised data. However, the
>feature has only just been implemented in Glance; it may take time for Glance users to adopt it.
>
>
>
>On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>
>> The glance client (running 'inside' the Nova server) will 
>> re-calculate the checksum as it downloads the image and then compare 
>> it against the expected value. If they don't match an error will be raised.
>>
>>> How can I know that the image that a new instance is spawned from - 
>>> is actually the image that was originally registered in glance - and 
>>> has not been maliciously tampered with in some way?
>>>
>>> Is there some kind of verification that is performed against the 
>>> md5sum of the registered image in glance before a new instance is spawned?
>>>
>>> Is that done by Nova?
>>> Glance?
>>> Both? Neither?
>>>
>>> The reason I ask is some 'paranoid' security (that is their job I
>>> suppose) people have raised these questions.
>>>
>>> I know there is a glance BP already merged for L [1] - but I would 
>>> like to understand the actual flow in a bit more detail.
>>>
>>> Thanks.
>>>
>>> [1]
>>> 
>>>https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>>
>>>
>>> --
>>> Best Regards,
>>> Maish Saidel-Keesing
>>>
>>>
>>>
>>
>
>--
>
>Thanks,
>Nikhil
>
>


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From bharath at brocade.com  Thu Sep 10 09:22:36 2015
From: bharath at brocade.com (bharath)
Date: Thu, 10 Sep 2015 14:52:36 +0530
Subject: [openstack-dev] Fwd: Re:  [neutron][L3][dvr][fwaas] FWaaS
In-Reply-To: <55E3CA0D.9030206@brocade.com>
References: <55E3CA0D.9030206@brocade.com>
Message-ID: <55F14BDC.5050709@brocade.com>

Hi,

Instance creation has been failing with the error below for the last 4 days.

2015-09-10 02:14:00.583 WARNING neutron.plugins.ml2.drivers.mech_agent [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 24109c82ae76465c8fb20562cce67a4f] Attempting to bind with dead agent: {'binary': u'neutron-openvswitch-agent', 'description': None, 'admin_state_up': True, 'heartbeat_timestamp': datetime.datetime(2015, 9, 10, 9, 6, 57), 'alive': False, 'topic': u'N/A', 'host': u'ci-jslave-base', 'agent_type': u'Open vSwitch agent', 'created_at': datetime.datetime(2015, 9, 10, 9, 4, 57), 'started_at': datetime.datetime(2015, 9, 10, 9, 6, 57), 'id': u'aa9098fe-c412-449e-b979-1f5ab46c3c1d', 'configurations': {u'in_distributed_mode': False, u'arp_responder_enabled': False, u'tunneling_ip': u'192.168.30.41', u'devices': 0, u'log_agent_heartbeats': False, u'l2_population': False, u'tunnel_types': [u'vxlan'], u'enable_distributed_routing': False, u'bridge_mappings': {u'ext': u'br-ext', u'mng': u'br-mng'}}}
2015-09-10 02:14:00.583 DEBUG neutron.plugins.ml2.drivers.mech_agent [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 24109c82ae76465c8fb20562cce67a4f] Attempting to bind port 6733610d-e7dc-4ecd-a810-b2b791af9b97 on network c6fb26cc-961e-4f38-bf40-bfc72cc59f67 from (pid=25516) bind_port /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_agent.py:60
2015-09-10 02:14:00.588 ERROR neutron.plugins.ml2.managers [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 24109c82ae76465c8fb20562cce67a4f] Failed to bind port 6733610d-e7dc-4ecd-a810-b2b791af9b97 on host ci-jslave-base
2015-09-10 02:14:00.588 ERROR neutron.plugins.ml2.managers [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 24109c82ae76465c8fb20562cce67a4f] Failed to bind port 6733610d-e7dc-4ecd-a810-b2b791af9b97 on host ci-jslave-base
2015-09-10 02:14:00.608 DEBUG neutron.plugins.ml2.db [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 24109c82ae76465c8fb20562cce67a4f] For port 6733610d-e7dc-4ecd-a810-b2b791af9b97, host ci-jslave-base, cleared binding levels from (pid=25516) clear_binding_levels /opt/stack/neutron/neutron/plugins/ml2/db.py:189
2015-09-10 02:14:00.608 DEBUG neutron.plugins.ml2.db [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 24109c82ae76465c8fb20562cce67a4f] Attempted to set empty binding levels from (pid=25516) set_binding_levels /opt/stack/neutron/neutron/plugins/ml2/db.py:164
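For context on the "dead agent" warning: as far as I understand, neutron considers an agent dead when its last heartbeat is older than the configured agent_down_time (75 seconds by default). A rough sketch of that check, with simplified names that are not the actual neutron API:

```python
from datetime import datetime, timedelta

AGENT_DOWN_TIME = 75  # seconds; neutron's default for agent_down_time

def is_agent_dead(heartbeat_timestamp, now, down_time=AGENT_DOWN_TIME):
    """An agent is treated as dead when its heartbeat is too old.

    In the log above the heartbeat_timestamp is 09:06:57 (UTC), while the
    bind attempt happens several minutes later, well past the 75-second
    window, hence 'alive': False.
    """
    return now - heartbeat_timestamp > timedelta(seconds=down_time)
```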


A recent commit seems to have broken this.


During stacking I am getting the error below, but I don't know whether
it's related to the above issue or not:

2015-09-09 15:18:48.658 | ERROR: openstack 'module' object has no attribute 'UpdateDataSource'


I would love some help with this issue.

Thanks,
bharath
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/6aeeafed/attachment.html>

From lucasagomes at gmail.com  Thu Sep 10 09:23:17 2015
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Thu, 10 Sep 2015 10:23:17 +0100
Subject: [openstack-dev] [Ironic] Command structure for OSC plugin
In-Reply-To: <20150909164820.GE21846@jimrollenhagen.com>
References: <20150824150341.GB13126@redhat.com> <55DB3B46.6000503@gmail.com>
 <55DB3EB4.5000105@redhat.com> <20150824172520.GD13126@redhat.com>
 <55DB54E6.1090408@redhat.com>
 <5D7F9996EA547448BC6C54C8C5AAF4E5010A3877B7@CERNXCHG44.cern.ch>
 <20150824193559.GF13126@redhat.com> <1440446092-sup-2361@lrrr.local>
 <CAOJFoEu_1MetjjFgD5k5OH=k_Ov54huWfHi0m130C2apezXEMw@mail.gmail.com>
 <20150909164820.GE21846@jimrollenhagen.com>
Message-ID: <CAB1EZBroOv+6Sba4+NFBhW8Tef9k+oeX8DSF4u9UvpYR4MM3dw@mail.gmail.com>

Hi,

> Disclaimer: I don't know much about OSC or its syntax, command
> structure, etc. These may not be well-formed thoughts. :)
>

Same here, I don't know much about OSC in general.

> So, many of the nova commands (openstack server foo) don't make sense in
> an Ironic context, and vice versa. It would also be difficult to
> determine if the commands should go through Nova or through Ironic.
> The path could be something like: check that Ironic exists, see if user
> has access, hence standalone mode (oh wait, operators probably have
> access to manage Ironic *and* deploy baremetal through Nova, what do?).
>

I was looking at the list of OSC commands [1]; some that I think could
be mapped to Ironic functions are:

* openstack server create
* openstack server delete
* openstack server list
* openstack server show
* openstack server reboot
* openstack server rebuild

But when I get into the specifics, I find it hard to map all the
parameters each command supports into the Ironic context. E.g. the
"openstack server list" command [2] supports parameters such as
"--flavor" or "--instance-name"; searching by flavor or instance name
wouldn't be possible to implement in Ironic at present (we don't keep
such information registered with the deployed nodes), and the same goes
for "--ip", "--ip6", etc.

So I think it may be worth doing more research on those commands and
their parameters to see what can be reused in the Ironic context. But
at first glance it also seems to me that having generic commands for
different services is going to cause more confusion around the usage of
the CLI than actually help.
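To make the mapping problem concrete: each generic filter would need a translation to something Ironic actually stores on a node. A toy sketch of such a translation table (the field names are illustrative, not the real Ironic API):

```python
# Maps generic "openstack server list" filters to Ironic node fields,
# where such a mapping exists at all.
OSC_TO_IRONIC_FILTERS = {
    '--status': 'provision_state',   # a roughly comparable concept
    # '--flavor': no equivalent - not stored on nodes
    # '--instance-name': no equivalent - not stored on nodes
    # '--ip' / '--ip6': no equivalent - not tracked by Ironic
}

def translate_filters(osc_filters):
    """Translate supported filters; reject ones Ironic cannot serve."""
    translated = {}
    for name, value in osc_filters.items():
        if name not in OSC_TO_IRONIC_FILTERS:
            raise ValueError(
                '%s cannot be mapped to an Ironic node field' % name)
        translated[OSC_TO_IRONIC_FILTERS[name]] = value
    return translated
```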

[1] http://docs.openstack.org/cli-reference/content/openstackclient_commands.html
[2] http://docs.openstack.org/cli-reference/content/openstackclient_commands.html#openstackclient_subcommand_server_list

Cheers,
Lucas


From vkuklin at mirantis.com  Thu Sep 10 09:25:31 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 10 Sep 2015 12:25:31 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CA+HkNVsj5m4CduyZ9duTgCsp-BKP-Dwt85iZ01=txPBxVasANg@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
 <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
 <CAFkLEwoXR-C-etHpS-KDZokYkP8CfS8q9UXjKYF0eNYo6yOpcQ@mail.gmail.com>
 <CA+HkNVsj5m4CduyZ9duTgCsp-BKP-Dwt85iZ01=txPBxVasANg@mail.gmail.com>
Message-ID: <CAHAWLf1SaOK_6RSSdgPkjUMdf+gPxRewQ-ygzLTXNiEN+oWqRg@mail.gmail.com>

Folks

I have a strong +1 for the proposal to decouple the master node and slave
nodes. Here are the strengths of this approach:
1) We can always decide which particular node runs which particular set of
manifests. This will allow us to apply/roll back changes node-by-node,
which is very important from an operations perspective.
2) We can decouple master and slave node manifests and not drag a new
library version onto the master node when it is not needed. This decreases
the probability of regressions.
3) This makes life easier for the user - you just run 'apt-get/yum install'
instead of some difficult-to-digest `mco` command.

The only weakness that I see here is the one mentioned by Andrey. I think
we can fix it by giving developers a clean and simple way of building the
library package on the fly. This will make developers' lives easy enough
to work with the proposed approach.

Also, we need to provide ways for better UX, like provide one button/api
call for:

1) update all manifests on particular nodes (e.g. all, or only a part of
the nodes of the cluster) to a particular version
2) revert all manifests back to the version provided by a particular GA
release
3) <more things we need to think of>

So far I would mark the need for a simple package-building system for
developers as a dependency of the proposed change, but I do not see any
other way than proceeding with it.



On Thu, Sep 10, 2015 at 11:50 AM, Sergii Golovatiuk <
sgolovatiuk at mirantis.com> wrote:

> Oleg,
>
> Alex gave a perfect example regarding support folks when they need to fix
> something really quick. It's client's choice what to patch or not. You may
> like it or not, but it's client's choice.
>
> On 10 Sep 2015, at 09:33, Oleg Gelbukh <ogelbukh at mirantis.com> wrote:
>
> Alex,
>
> I absolutely understand the point you are making about need for deployment
> engineers to modify things 'on the fly' in customer environment. It's makes
> things really flexible and lowers the entry barrier for sure.
>
> However, I would like to note that in my opinion this kind of 'monkey
> patching' is actually a bad practice for any environments other than dev
> ones. It immediately leads to the emergence of unsupportable
> frankenclouds. I would welcome any modification to the workflow that
> discourages people from doing that.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz <aschultz at mirantis.com>
> wrote:
>
>> Hey Vladimir,
>>
>>
>>
>>> Regarding plugins: plugins are welcome to install specific additional
>>> DEB/RPM repos on the master node, or just configure cluster to use
>>> additional online repos, where all necessary packages (including plugin
>>> specific puppet manifests) are to be available. Current granular deployment
>>> approach makes it easy to append specific pre-deployment tasks
>>> (master/slave does not matter). Correct me if I am wrong.
>>>
>>>
>> Don't get me wrong, I think it would be good to move to a fuel-library
>> distributed via package only.  I'm bringing these points up to indicate
>> that there are many other things living in the fuel-library puppet path
>> besides just the fuel-library package.  The plugin example is just one place
>> that we will need to invest in further design and work to move to the
>> package only distribution.  What I don't want is some partially executed
>> work that only works for one type of deployment and creates headaches for
>> the people actually having to use fuel.  The deployment engineers and
>> customers who actually perform these actions should be asked about
>> packaging and their comfort level with this type of requirements.  I don't
>> have a complete understanding of the all the things supported today by the
>> fuel plugin system so it would be nice to get someone who is more familiar
>> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
>> don't think we are building fuel-library debs at this time either.  So
>> without some work on both sides, we cannot move to just packages.
>>
>>
>>> Regarding flexibility: having several versioned directories with puppet
>>> modules on the master node, having several fuel-libraryX.Y packages
>>> installed on the master node makes things "exquisitely convoluted" rather
>>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>>> rsync, etc. if you really need to do things manually. But we have
>>> convenient service (Perestroika) which builds packages in minutes if you
>>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>>> available as an application independent from CI. So, what is wrong with
>>> building fuel-library package? What if you want to troubleshoot nova (we
>>> install it using packages)? Should we also use rsync for everything else
>>> like nova, mysql, etc.?
>>>
>>>
>> Yes, we do have a service like Perestroika to build packages for us.
>> That doesn't mean everyone else does or has access to do that today.
>> Setting up a build system is a major undertaking and making that a hard
>> requirement to interact with our product may be a bit much for some
>> customers.  In speaking with some support folks, there are times when files
>> have to be munged to get around issues because there is no package or
>> things are on fire so they can't wait for a package to become available for
>> a fix.  We need to be careful not to impose limits without proper
>> justification and due diligence.  We already build the fuel-library
>> package, so there's no reason you couldn't try switching the rsync to
>> install the package if it's available on a mirror.  I just think you're
>> going to run into the issues I mentioned which need to be solved before we
>> could just mark it done.
>>
>> -Alex
>>
>>
>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
>>> wrote:
>>>
>>>> I agree that we shouldn't need to sync as we should be able to just
>>>> update the fuel-library package. That being said, I think there might be a
>>>> few issues with this method. The first issue is with plugins and how to
>>>> properly handle the distribution of the plugins as they may also include
>>>> puppet code that needs to be installed on the other nodes for a deployment.
>>>> Currently I do not believe we install the plugin packages anywhere except
>>>> the master and when they do get installed there may be some post-install
>>>> actions that are only valid for the master.  Another issue is being
>>>> flexible enough to allow for deployment engineers to make custom changes
>>>> for a given environment.  Unless we can provide an improved process to
>>>> allow for people to provide in place modifications for an environment, we
>>>> can't do away with the rsync.
>>>>
>>>> If we want to go completely down the package route (and we probably
>>>> should), we need to make sure that all of the other pieces that currently
>>>> go together to make a complete fuel deployment can be updated in the same
>>>> way.
>>>>
>>>> -Alex
>>>>
>>>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/23f6c76a/attachment.html>

From vkuklin at mirantis.com  Thu Sep 10 09:30:54 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 10 Sep 2015 12:30:54 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
Message-ID: <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>

Folks

I think Mike is completely right here - we need an option to build an
all-in-one ISO which can be tried out/deployed unattended, without
internet access. Let's let users choose what they want, not push them
into an embarrassing situation. We still have many parts of Fuel that
make choices for the user that cannot be overridden. Let's not pretend
that we know more about the user's environment than the user does.

On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
wrote:

> The reason people want the offline deployment feature is not poor
> connectivity, but rather the enterprise intranets where getting a subnet
> with external access is sometimes a real pain in various body parts.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
>
>> Hello,
>>
>> I agree with Vladimir - the idea of online repos is the right way to
>> go. In 2015 I believe we can ignore this "poor Internet connection"
>> reason and simplify both Fuel and the UX. Moreover, take a look at
>> Linux distributions - most of them fetch the needed packages from the
>> Internet during installation, not from CD/DVD. Netboot installers are
>> popular; I can't even remember the last time I installed my Debian
>> from DVD-1 - I have used the netboot installer for years.
>>
>> Thanks,
>> Igor
>>
>>
>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
>> >
>> >
>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
>> wrote:
>> >>
>> >>
>> >> Hey Vladimir,
>> >>
>> >>>
>> >>>
>> >>>>>
>> >>>>> 1) There won't be such things as [1] and [2], thus a less
>> >>>>> complicated flow, fewer errors, easier to maintain, easier to
>> >>>>> understand, easier to troubleshoot
>> >>>>> 2) If one wants to have a local mirror, the flow is the same as
>> >>>>> in the case of upstream repos (fuel-createmirror), which is clear
>> >>>>> for a user to understand.
>> >>>>
>> >>>>
>> >>>> From the issues I've seen, fuel-createmirror isn't very
>> >>>> straightforward and has some issues, making it a bad UX.
>> >>>
>> >>>
>> >>> I'd say the whole approach of having a tool such as fuel-createmirror
>> >>> is way too naive. A reliable internet connection is a matter of
>> >>> network engineering rather than deployment. Even using a proxy is
>> >>> much better than creating a local mirror. But this discussion is
>> >>> totally out of the scope of this letter. Currently, we have
>> >>> fuel-createmirror and it is pretty straightforward (installed as an
>> >>> rpm, has just a couple of command line options). The quality of this
>> >>> script is also out of the scope of this thread. BTW we have plans to
>> >>> improve it.
>> >>
>> >>
>> >>
>> >> Fair enough, I just wanted to raise the UX issues around these types of
>> >> things as they should go into the decision making process.
>> >>
>> >>
>> >>>
>> >>>>>
>> >>>>>
>> >>>>> Many people still associate the ISO with MOS, but that is not
>> >>>>> true when using a package-based delivery approach.
>> >>>>>
>> >>>>> It is easy to define the necessary repos during deployment, and
>> >>>>> thus easy to control what exactly is going to be installed on
>> >>>>> slave nodes.
>> >>>>>
>> >>>>> What do you guys think of it?
>> >>>>>
>> >>>>>
>> >>>>
>> >>>> Reliance on internet connectivity has been an issue since 6.1. For
>> many
>> >>>> large users, complete access to the internet is not available or not
>> >>>> desired.  If we want to continue down this path, we need to improve
>> the
>> >>>> tools to setup the local mirror and properly document what
>> urls/ports/etc
>> >>>> need to be available for the installation of openstack and any mirror
>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>> similar to a
>> >>>> live cd that allows a user to completely try out fuel wherever they
>> want
>> >>>> with out further requirements of internet access.  If we don't want
>> to
>> >>>> continue with that, we need to do a better job around providing the
>> tools
>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>> providing an
>> >>>> net-only iso and an all-included iso would be a better solution so
>> people
>> >>>> will have their expectations properly set up front?
>> >>>
>> >>>
>> >>> Let me explain why I think having local MOS mirror by default is bad:
>> >>> 1) I don't see any reason why we should treat MOS  repo other way than
>> >>> all other online repos. A user sees on the settings tab the list of
>> repos
>> >>> one of which is local by default while others are online. It can make
>> user a
>> >>> little bit confused, can't it? A user can also be confused by the
>> >>> fact that some of the repos can be cloned locally by
>> >>> fuel-createmirror while others can't. That is not straightforward;
>> >>> not good UX.
>> >>
>> >>
>> >>
>> >> I agree. The process should be the same and it should be just another
>> >> repo. It doesn't mean we can't include a version on an ISO as part of a
>> >> release.  Would it be better to provide the mirror on the ISO but not
>> have
>> >> it enabled by default for a release so that we can gather user
>> feedback on
>> >> this? This would include improved documentation and possibly allowing
>> a user
>> >> to choose their preference so we can collect metrics?
>> >>
>> >>
>> >>> 2) Having local MOS mirror by default makes things much more
>> convoluted.
>> >>> We are forced to have several directories with predefined names and
>> we are
>> >>> forced to manage these directories in nailgun, in upgrade script,
>> etc. Why?
>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
>> equal
>> >>> to MOS, which is not true. It is possible to implement really flexible
>> >>> delivery scheme, but we need to think of these things as they are
>> >>> independent.
>> >>
>> >>
>> >>
>> >> I'm not sure what you mean by this. Including a point in time copy on
>> an
>> >> ISO as a release is a common method of distributing software. Is this a
>> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
>> with
>> >> people referring to the ISO as being MOS.
>> >>
>> >>
>> >>> For large users it is easy to build a custom ISO and put what they
>> >>> need there, but first we need a simple working scheme that is clear
>> >>> for everyone. I think dealing with all repos the same way is what
>> >>> is going to make things simpler.
>> >>>
>> >>
>> >>
>> >> Who is going to build a custom ISO? How does one request that? What
>> >> resources are consumed by custom ISO creation process/request? Does
>> this
>> >> scale?
>> >>
>> >>
>> >>>
>> >>> This thread is not about internet connectivity, it is about aligning
>> >>> things.
>> >>>
>> >>
>> >> You are correct in that this thread is not explicitly about internet
>> >> connectivity, but they are related. Any changes to remove a local
>> repository
>> >> and only provide an internet based solution makes internet connectivity
>> >> something that needs to be included in the discussion.  I just want to
>> make
>> >> sure that we properly evaluate this decision based on end user
>> feedback not
>> >> because we don't want to manage this from a developer standpoint.
>> >
>> >
>> >
>> >  +1, whatever the changes are, please keep Fuel as a tool that can
>> > deploy without Internet access; this is part of the reason people
>> > like it, and why it's better than other tools.
>> >>
>> >>
>> >> -Alex
>> >>
>> >>
>> >>
>> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> > --
>> > Yaguang Tang
>> > Technical Support, Mirantis China
>> >
>> > Phone: +86 15210946968
>> >
>> >
>> >
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/afd846d2/attachment.html>

From derekh at redhat.com  Thu Sep 10 09:37:07 2015
From: derekh at redhat.com (Derek Higgins)
Date: Thu, 10 Sep 2015 10:37:07 +0100
Subject: [openstack-dev] [TripleO] trello
In-Reply-To: <55F0667B.7020503@redhat.com>
References: <55EF0070.4040309@redhat.com> <55F02F80.50401@redhat.com>
 <55F0667B.7020503@redhat.com>
Message-ID: <55F14F43.8080904@redhat.com>



On 09/09/15 18:03, Jason Rist wrote:
> On 09/09/2015 07:09 AM, Derek Higgins wrote:
>>
>>
>> On 08/09/15 16:36, Derek Higgins wrote:
>>> Hi All,
>>>
>>>      Some of ye may remember some time ago we used to organize TripleO
>>> based jobs/tasks on a trello board[1], at some stage this board fell out
>>> of use (the exact reason I can't put my finger on). This morning I was
>>> putting a list of things together that need to be done in the area of CI
>>> and needed somewhere to keep track of it.
>>>
>>> I propose we get back to using this trello board and each of us add
>>> cards at the very least for the things we are working on.
>>>
>>> This should give each of us a lot more visibility into what is going
>>> on in the tripleo project currently. Unless I hear any objections,
>>> tomorrow I'll start archiving all cards on the boards and removing
>>> people no longer involved in tripleo. We can then start adding items and
>>> anybody who wants in can be added again.
>>
>> This is now done, see
>> https://trello.com/tripleo
>>
>> Please ping me on irc if you want to be added.
>>
>>>
>>> thanks,
>>> Derek.
>>>
>>> [1] - https://trello.com/tripleo
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Derek - you weren't on today when I went to ping you, can you please add me so I can track it for RHCI purposes?

Done

>
> Thanks!
>


From jgalvin at servecentric.com  Thu Sep 10 09:39:57 2015
From: jgalvin at servecentric.com (James Galvin)
Date: Thu, 10 Sep 2015 09:39:57 +0000
Subject: [openstack-dev] Instance STOPS/STARTS while taking snapshot
Message-ID: <e6e9795b5568400ba62df1ccd030ca42@SC-EX2013.centric.local>

OpenStack Kilo release, Ceph storage backend.

I have run into an issue lately: when I create a snapshot of a running instance, while the snapshot state is "image pending upload" and the image is "queued" in the image service, I can't access my instance.

I can't access it via the console or SSH, and a continuous ping to the floating IP times out, but the instance still shows as running on the dashboard.

As soon as the instance state is "image uploading" and the image service shows "saving", the instance becomes available again.

Is this a known issue? All of this is done via the Horizon dashboard.

I can see the following in the logs on the compute:

2015-09-09 14:33:39.265 23261 INFO nova.compute.manager [req-0252f823-73f5-4c37-aa86-efbe6536e4f6 d2b1cc9566d44a909de46689569118e3 3b5e03b8a83e44dd9a7140d868d28a9e - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] instance snapshotting
2015-09-09 14:33:39.877 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] VM Paused (Lifecycle Event)
2015-09-09 14:33:40.049 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has a pending task (image_snapshot). Skip.
2015-09-09 14:33:50.756 23261 INFO nova.virt.libvirt.driver [req-0252f823-73f5-4c37-aa86-efbe6536e4f6 d2b1cc9566d44a909de46689569118e3 3b5e03b8a83e44dd9a7140d868d28a9e - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] Beginning cold snapshot process
2015-09-09 14:33:50.759 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] VM Stopped (Lifecycle Event)
2015-09-09 14:33:50.939 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has a pending task (image_snapshot). Skip.
2015-09-09 14:35:57.831 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] VM Started (Lifecycle Event)
2015-09-09 14:35:57.990 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has a pending task (image_pending_upload). Skip.
2015-09-09 14:35:57.991 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] VM Resumed (Lifecycle Event)
2015-09-09 14:35:58.151 23261 INFO nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has a pending task (image_pending_upload). Skip.
2015-09-09 14:35:58.498 23261 INFO nova.virt.libvirt.driver [req-0252f823-73f5-4c37-aa86-efbe6536e4f6 d2b1cc9566d44a909de46689569118e3 3b5e03b8a83e44dd9a7140d868d28a9e - - -] [instance: a0e855e5-c205-4d61-bd48-99384d6310f5] Snapshot extracted, beginning image upload
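For anyone trying to correlate these lifecycle events while debugging, a small parser over such nova-compute log lines can help. This is just an illustrative sketch; the regex is inferred from the log excerpt above, not from any nova API:

```python
import re

# Matches fragments like:
#   "[instance: a0e855e5-...] VM Paused (Lifecycle Event)"
EVENT_RE = re.compile(r'\[instance: ([0-9a-f-]+)\] VM (\w+) \(Lifecycle Event\)')

def lifecycle_events(lines):
    """Return (instance_id, event) pairs found in nova-compute log lines."""
    events = []
    for line in lines:
        for match in EVENT_RE.finditer(line):
            events.append((match.group(1), match.group(2)))
    return events
```

Feeding it the lines above would yield the Paused/Stopped/Started/Resumed sequence in order, which makes the pause-during-cold-snapshot window easy to see.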

Any help with this would be appreciated :)

Thanks
James
This e-mail contains confidential information or information belonging to Servecentric Ltd and is intended solely for the addressee(s). The unauthorized disclosure, use, dissemination or copy (either in whole or in part) of this e-mail, or any information it contains, is prohibited. Any views or opinions presented are solely those of the author and do not necessarily represent those of Servecentric Ltd. E-mails are susceptible to alteration and their integrity cannot be guaranteed. Servecentric shall not be liable for the contents of this e-mail if modified or falsified. If you are not the intended recipient of this e-mail, please delete it immediately from your system and notify the sender of the wrong delivery and of the email's deletion.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/9923a1c3/attachment.html>

From davanum at gmail.com  Thu Sep 10 09:56:37 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 10 Sep 2015 05:56:37 -0400
Subject: [openstack-dev] Question about generating an oslo.utils release
In-Reply-To: <55F136F9.9060700@hp.com>
References: <55F136F9.9060700@hp.com>
Message-ID: <CANw6fcHOTX31NKYYnVY75370_xDDBgbo9yUU4hPmnRnHVbMwSg@mail.gmail.com>

Paul,

Usually there are releases every week from the oslo team. At the moment,
oslo.* releases are frozen until the stable/liberty branches are cut. You
can also request a new oslo library release by filing a review against the
openstack/releases repository.

The Depends-On tag works for things installed from git, NOT for libraries
from PyPI, hence the failure. If you want to try the change locally, you
can use the LIBS_FROM_GIT config option in devstack's configuration files
to specify the library in question. However, you would do that once your
patch has been merged into the master branch. There are additional toggles
in devstack's local.conf to test against a pending review as well, if you
really want to try it.
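A minimal local.conf fragment for this (only LIBS_FROM_GIT is taken from the message; the comment about per-library overrides is an assumption worth checking against the devstack docs for your release):

```ini
[[local|localrc]]
# Install oslo.utils from git master instead of the PyPI release
LIBS_FROM_GIT=oslo.utils
# devstack also supports per-library repo/branch overrides if you need to
# point at a specific review; the exact variable names vary by release.
```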

-- Dims

On Thu, Sep 10, 2015 at 3:53 AM, Paul Carlton <paul.carlton2 at hp.com> wrote:

> Hi
>
> I have an olso.utils change merged (220620
> <https://review.openstack.org/#/c/220620/>).  A nova change (220622
> <https://review.openstack.org/#/c/220622/>) depends on this.  What is the
> process for creating a new version of olso.utils?  Is this performed
> periodically by a release manager or do I need to do something myself?
>
> Incidentally, despite including a depends-on tag in my nova change's
> commit message my tests that depend on the oslo.utils change failed in CI,
> I thought the use of depends-on would cause it to load olso.utils using the
> referenced development commit?
>
> Thanks
>
> --
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
>
> Mobile:    +44 (0)7768 994283
> Email:    mailto:paul.carlton2 at hp.com <paul.carlton2 at hp.com>
> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England.
> The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error, you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated you should consider this message and attachments as "HP CONFIDENTIAL".
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/bdd93530/attachment.html>

From jprovazn at redhat.com  Thu Sep 10 10:01:49 2015
From: jprovazn at redhat.com (Jan Provaznik)
Date: Thu, 10 Sep 2015 12:01:49 +0200
Subject: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI
In-Reply-To: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>
References: <1793280466.18329104.1441793759084.JavaMail.zimbra@redhat.com>
Message-ID: <55F1550D.3070005@redhat.com>

On 09/09/2015 12:15 PM, Dougal Matthews wrote:
> Hi,
>
> The tripleo-common library appears to be registered on PyPI but hasn't yet had
> a release[1]. I am not familiar with the release process - what do we need to
> do to make sure it is regularly released with other TripleO packages?
>
> We will also want to do something similar with the new python-tripleoclient
> which doesn't seem to be registered on PyPI yet at all.
>
> Thanks,
> Dougal
>
> [1]: https://pypi.python.org/pypi/tripleo-common
>

Hi Dougal,
thanks for moving this forward. I've never finished the release process 
upstream; there was no interest in or consumer of this lib upstream, as 
the UI/CLI decided to use the midstream version. I'm excited to see this 
is changing now.

Jan


From ikalnitsky at mirantis.com  Thu Sep 10 10:18:07 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Thu, 10 Sep 2015 13:18:07 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
Message-ID: <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>

Mike,

> still not exactly true for some large enterprises. Due to all the security, etc.,
> there are sometimes VPNs / proxies / firewalls with very low throughput.

It's their problem, and their policies. We can't and shouldn't handle
all possible cases. If some enterprise has a "no Internet" policy, I bet
it won't be a problem for their IT guys to create an intranet mirror
for MOS packages. Moreover, I also bet they already have a mirror for
Ubuntu or another Linux distribution. So it's basically a question of how
to consume our mirrors.

On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com> wrote:
> Folks
>
> I think, Mike is completely right here - we need an option to build
> all-in-one ISO which can be tried-out/deployed unattendedly without internet
> access. Let's let a user make the choice he wants, not push him into
> an embarrassing situation. We still have many parts of Fuel which make choices
> for the user that cannot be overridden. Let's not pretend that we know more than
> the user does about his environment.
>
> On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
> wrote:
>>
>> The reason people want offline deployment feature is not because of poor
>> connection, but rather the enterprise intranets where getting subnet with
>> external access sometimes is a real pain in various body parts.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
>> wrote:
>>>
>>> Hello,
>>>
>>> I agree with Vladimir - the idea of online repos is a right way to
>>> move. In 2015 I believe we can ignore this "poor Internet connection"
>>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>>> distributives - most of them fetch needed packages from the Internet
>>> during installation, not from CD/DVD. The netboot installers are
>>> popular, I can't even remember when was the last time I install my
>>> Debian from the DVD-1 - I use netboot installer for years.
>>>
>>> Thanks,
>>> Igor
>>>
>>>
>>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com> wrote:
>>> >
>>> >
>>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com>
>>> > wrote:
>>> >>
>>> >>
>>> >> Hey Vladimir,
>>> >>
>>> >>>
>>> >>>
>>> >>>>>
>>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>>> >>>>> complicated flow, less errors, easier to maintain, easier to
>>> >>>>> understand,
>>> >>>>> easier to troubleshoot
>>> >>>>> 2) If one wants to have local mirror, the flow is the same as in
>>> >>>>> case
>>> >>>>> of upstream repos (fuel-createmirror), which is clrear for a user
>>> >>>>> to
>>> >>>>> understand.
>>> >>>>
>>> >>>>
>>> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
>>> >>>> forward and has some issues making it a bad UX.
>>> >>>
>>> >>>
>>> >>> I'd say the whole approach of having such tool as fuel-createmirror
>>> >>> is a
>>> >>> way too naive. Reliable internet connection is totally up to network
>>> >>> engineering rather than deployment. Even using proxy is much better
>>> >>> that
>>> >>> creating local mirror. But this discussion is totally out of the
>>> >>> scope of
>>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
>>> >>> straightforward (installed as rpm, has just a couple of command line
>>> >>> options). The quality of this script is also out of the scope of this
>>> >>> thread. BTW we have plans to improve it.
>>> >>
>>> >>
>>> >>
>>> >> Fair enough, I just wanted to raise the UX issues around these types
>>> >> of
>>> >> things as they should go into the decision making process.
>>> >>
>>> >>
>>> >>>
>>> >>>>>
>>> >>>>>
>>> >>>>> Many people still associate ISO with MOS, but it is not true when
>>> >>>>> using
>>> >>>>> package based delivery approach.
>>> >>>>>
>>> >>>>> It is easy to define necessary repos during deployment and thus it
>>> >>>>> is
>>> >>>>> easy to control what exactly is going to be installed on slave
>>> >>>>> nodes.
>>> >>>>>
>>> >>>>> What do you guys think of it?
>>> >>>>>
>>> >>>>>
>>> >>>>
>>> >>>> Reliance on internet connectivity has been an issue since 6.1. For
>>> >>>> many
>>> >>>> large users, complete access to the internet is not available or not
>>> >>>> desired.  If we want to continue down this path, we need to improve
>>> >>>> the
>>> >>>> tools to setup the local mirror and properly document what
>>> >>>> urls/ports/etc
>>> >>>> need to be available for the installation of openstack and any
>>> >>>> mirror
>>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>>> >>>> similar to a
>>> >>>> live cd that allows a user to completely try out fuel wherever they
>>> >>>> want
>>> >>>> with out further requirements of internet access.  If we don't want
>>> >>>> to
>>> >>>> continue with that, we need to do a better job around providing the
>>> >>>> tools
>>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>>> >>>> providing an
>>> >>>> net-only iso and an all-included iso would be a better solution so
>>> >>>> people
>>> >>>> will have their expectations properly set up front?
>>> >>>
>>> >>>
>>> >>> Let me explain why I think having local MOS mirror by default is bad:
>>> >>> 1) I don't see any reason why we should treat MOS  repo other way
>>> >>> than
>>> >>> all other online repos. A user sees on the settings tab the list of
>>> >>> repos
>>> >>> one of which is local by default while others are online. It can make
>>> >>> user a
>>> >>> little bit confused, can't it? A user can be also confused by the
>>> >>> fact, that
>>> >>> some of the repos can be cloned locally by fuel-createmirror while
>>> >>> others
>>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>>> >>
>>> >>
>>> >>
>>> >> I agree. The process should be the same and it should be just another
>>> >> repo. It doesn't mean we can't include a version on an ISO as part of
>>> >> a
>>> >> release.  Would it be better to provide the mirror on the ISO but not
>>> >> have
>>> >> it enabled by default for a release so that we can gather user
>>> >> feedback on
>>> >> this? This would include improved documentation and possibly allowing
>>> >> a user
>>> >> to choose their preference so we can collect metrics?
>>> >>
>>> >>
>>> >>> 2) Having local MOS mirror by default makes things much more
>>> >>> convoluted.
>>> >>> We are forced to have several directories with predefined names and
>>> >>> we are
>>> >>> forced to manage these directories in nailgun, in upgrade script,
>>> >>> etc. Why?
>>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
>>> >>> equal
>>> >>> to MOS, which is not true. It is possible to implement really
>>> >>> flexible
>>> >>> delivery scheme, but we need to think of these things as they are
>>> >>> independent.
>>> >>
>>> >>
>>> >>
>>> >> I'm not sure what you mean by this. Including a point in time copy on
>>> >> an
>>> >> ISO as a release is a common method of distributing software. Is this
>>> >> a
>>> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
>>> >> with
>>> >> people referring to the ISO as being MOS.
>>> >>
>>> >>
>>> >>> For large users it is easy to build custom ISO and put there what
>>> >>> they
>>> >>> need but first we need to have simple working scheme clear for
>>> >>> everyone. I
>>> >>> think dealing with all repos the same way is what is gonna makes
>>> >>> things
>>> >>> simpler.
>>> >>>
>>> >>
>>> >>
>>> >> Who is going to build a custom ISO? How does one request that? What
>>> >> resources are consumed by custom ISO creation process/request? Does
>>> >> this
>>> >> scale?
>>> >>
>>> >>
>>> >>>
>>> >>> This thread is not about internet connectivity, it is about aligning
>>> >>> things.
>>> >>>
>>> >>
>>> >> You are correct in that this thread is not explicitly about internet
>>> >> connectivity, but they are related. Any changes to remove a local
>>> >> repository
>>> >> and only provide an internet based solution makes internet
>>> >> connectivity
>>> >> something that needs to be included in the discussion.  I just want to
>>> >> make
>>> >> sure that we properly evaluate this decision based on end user
>>> >> feedback not
>>> >> because we don't want to manage this from a developer standpoint.
>>> >
>>> >
>>> >
>>> >  +1, whatever the changes is, please keep Fuel as a tool that can
>>> > deploy
>>> > without Internet access, this is part of reason that people like it and
>>> > it's
>>> > better than other tools.
>>> >>
>>> >>
>>> >> -Alex
>>> >>
>>> >>
>>> >>
>>> >> __________________________________________________________________________
>>> >> OpenStack Development Mailing List (not for usage questions)
>>> >> Unsubscribe:
>>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Yaguang Tang
>>> > Technical Support, Mirantis China
>>> >
>>> > Phone: +86 15210946968
>>> >
>>> >
>>> >
>>> >
>>> > __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From sean at dague.net  Thu Sep 10 10:39:28 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 10 Sep 2015 06:39:28 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <1441840459-sup-9283@lrrr.local>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
Message-ID: <55F15DE0.7040804@dague.net>

On 09/09/2015 07:16 PM, Doug Hellmann wrote:
> Excerpts from Matt Riedemann's message of 2015-09-09 13:45:29 -0500:
>>
>> On 9/9/2015 1:04 PM, Doug Hellmann wrote:
>>> Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:
<snip>
>> The problem with the static file paths in rootwrap.conf is that we don't 
>> know where those other library filter files are going to end up on the 
>> system when the library is installed.  We could hard-code nova's 
>> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then 
> 
> I thought the configuration file passed to rootwrap was something the
> deployer could change, which would let them fix the paths on their
> system. Did I misunderstand what the argument was?
> 
>> that means the deploy/config management tooling that installing this 
>> stuff needs to copy that directory structure from the os-brick install 
>> location (which we're finding non-deterministic, at least when using 
>> data_files with pbr) to the target location that rootwrap.conf cares about.
>>
>> That's why we were proposing adding things to rootwrap.conf that 
>> oslo.rootwrap can parse and process dynamically using the resource 
>> access stuff in pkg_resources, so we just say 'I want you to load the 
>> os-brick.filters file from the os-brick project, thanks.'.
>>
> 
> Doesn't that put the rootwrap config file for os-brick in a place the
> deployer can't change it? Maybe they're not supposed to? If they're not,
> then I agree that burying the actual file inside the library and using
> something like pkgtools to get its contents makes more sense.

Right now, they are all a bunch of files, they can be anywhere. And then
you have other files that have to reference these files by path, which
can be anywhere. We could just punt in that part and say "punt! every
installer and configuration management install needs to solve this on
their own." I'm not convinced that's a good answer. The os-brick filters
aren't really config. If you change them, all that happens is
terribleness. Stuff stops working, and you don't know why. They are data
to exchange with another process about how to function. Honestly, they
should probably be python code that's imported by rootwrap.

Much like the issues around clouds failing when you try to GET /v2 on
the Nova API (because we have a bunch of knobs you have to align for SSL
termination, and a bunch of deployers didn't), I don't think we should
be satisfied with "there's a config for that!" when all that config
means is that someone can break their configuration if they don't get it
exactly right.
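The pkg_resources-based lookup Matt describes could be sketched roughly like this. Note this is illustrative only: the 'package:resource' spec syntax and the function name are made up here, not oslo.rootwrap's actual config format.

```python
import pkg_resources

def find_filter_files(specs):
    """Resolve hypothetical 'package:resource' specs to absolute paths.

    With something like this, a filter file ships inside the library and
    rootwrap finds it wherever pip installed the package, instead of
    relying on a hand-copied path under /etc.
    """
    paths = []
    for spec in specs:
        package, resource = spec.split(':', 1)
        # resource_filename locates the resource within the installed package
        paths.append(pkg_resources.resource_filename(package, resource))
    return paths
```

So a config entry like `os-brick:os-brick.filters` would resolve without the deployer ever needing to know where os-brick landed on disk.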

	-Sean

-- 
Sean Dague
http://dague.net


From eantyshev at virtuozzo.com  Thu Sep 10 10:50:39 2015
From: eantyshev at virtuozzo.com (Evgeny Antyshev)
Date: Thu, 10 Sep 2015 13:50:39 +0300
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
Message-ID: <55F1607F.9060509@virtuozzo.com>



On 10.09.2015 11:30, Xie, Xianshan wrote:
>
> Hi, all,
>
>    In my CI environment, after submitting a patch into 
> openstack-dev/sandbox,
>
> the Jenkins Job can be launched automatically, and the result message 
> of the job also can be posted into the gerrit server successfully.
>
> Everything seems fine.
>
> But in the "Verified" column, there is no verified vote, such as +1 or -1.
>
You will be able to vote once your CI account is added to the "Third-Party 
CI" group on review.openstack.org:
https://review.openstack.org/#/admin/groups/270,members
I advise you to ask for this permission in an IRC meeting for 
third-party CI maintainers:
https://wiki.openstack.org/wiki/Meetings/ThirdParty
But you still won't be able to vote on other projects, except the sandbox.

> (patch url: https://review.openstack.org/#/c/222049/,
>
> CI name:  Fnst OpenStackTest CI)
>
> Although I have already added the "verified" label into the 
> layout.yaml , under the check pipeline, it does not work yet.
>
> And my configuration info is setted as follows:
>
> Layout.yaml
>
> -------------------------------------------
>
> pipelines:
>
>   - name: check
>
>    trigger:
>
>      gerrit:
>
>       - event: patchset-created
>
>       - event: change-restored
>
>       - event: comment-added
>
> ...
>
>    success:
>
>     gerrit:
>
>       verified: 1
>
>    failure:
>
>     gerrit:
>
>       verified: -1
>
> jobs:
>
>    - name: noop-check-communication
>
>       parameter-function: reusable_node
>
> projects:
>
> - name: openstack-dev/sandbox
>
>    - noop-check-communication
>
> -------------------------------------------
>
> And the projects.yaml of Jenkins job:
>
> -------------------------------------------
>
> - project:
>
> name: sandbox
>
> jobs:
>
>       - noop-check-communication:
>
>          node: 'devstack_slave || devstack-precise-check || d-p-c'
>
> ...
>
> -------------------------------------------
>
> Could anyone help me? Thanks in advance.
>
> Xiexs
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/9f60bf7f/attachment.html>

From abhishek.talwar at tcs.com  Thu Sep 10 10:54:45 2015
From: abhishek.talwar at tcs.com (Abhishek Talwar)
Date: Thu, 10 Sep 2015 16:24:45 +0530
Subject: [openstack-dev]  [kilo-devstack] [disk-usage]
Message-ID: <OF5B7AC239.4AF0398B-ON65257EBC.003BF1DB-65257EBC.003BF1E0@tcs.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/1f20b0fb/attachment.html>

From tsufiev at mirantis.com  Thu Sep 10 10:57:27 2015
From: tsufiev at mirantis.com (Timur Sufiev)
Date: Thu, 10 Sep 2015 10:57:27 +0000
Subject: [openstack-dev] [horizon] [keystone] [docs] Two kinds of
 'region' entity: finding better names for them
In-Reply-To: <OFDC4C8096.937FD1E7-ON86257E7D.007C7576-86257E7D.007D3E21@us.ibm.com>
References: <559D2EBE.5050102@gmail.com>
 <CAEHC1zsL7LGk5H32iFRfu5npoPNcfCNAF8buH7NsPcj2LyLGaA@mail.gmail.com>
 <559D271A.8060506@gmail.com>
 <CAEHC1zs-h5pgwFxfOYc+G7OVp2qsEOsxsDjTN2b8_OJc+kxD5w@mail.gmail.com>
 <OF5536F19A.8FD40F45-ON87257E7C.00636A61-87257E7C.00636A7D@us.ibm.com>
 <OFDC4C8096.937FD1E7-ON86257E7D.007C7576-86257E7D.007D3E21@us.ibm.com>
Message-ID: <CAEHC1zuw7U0swmsU+bYBRMNVkvsExvHcRnCXqAoPqo8ENDzZGw@mail.gmail.com>

I went forward and filed a bug for this issue (since we agreed that it
should be fixed): https://bugs.launchpad.net/horizon/+bug/1494251
The code is already in gerrit (see links in bug), feel free to review.
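For context, the setting at issue lives in Horizon's local_settings.py and maps Keystone endpoint URLs to display names. A minimal sketch, with the endpoint URLs being the hypothetical Europe/Asia ones from the quoted thread below:

```python
# Each entry is (keystone_endpoint_url, display_name). This is the setting
# the thread proposes renaming to AVAILABLE_KEYSTONE_ENDPOINTS, since it
# lists Keystone endpoints, not catalog regions.
AVAILABLE_REGIONS = [
    ('http://keystone.europe:5000/v2.0', 'Europe'),
    ('http://keystone.asia:5000/v2.0', 'Asia'),
]
```

Each endpoint then serves its own catalog, in which service endpoints carry their own, unrelated 'RegionOne'/'RegionTwo' labels — which is exactly the naming collision being discussed.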

On Fri, Jul 10, 2015 at 1:51 AM Douglas Fish <drfish at us.ibm.com> wrote:

> I think another important question is how to represent this to the user on
> the login screen. "Keystone Endpoint:" matches the setting, but seems like
> a weird choice to me. Is there a better terminology to use for the label
> for this on the login screen?
>
> I see the related selector has no label at all in the header. Maybe using
> the same label there would be a good idea.
>
> Doug
>
> Thai Q Tran/Silicon Valley/IBM at IBMUS wrote on 07/08/2015 01:05:53 PM:
>
> > From: Thai Q Tran/Silicon Valley/IBM at IBMUS
> > To: "OpenStack Development Mailing List \(not for usage questions\)"
> > <openstack-dev at lists.openstack.org>
> > Date: 07/09/2015 01:17 PM
> > Subject: Re: [openstack-dev] [horizon] [keystone] [docs] Two kinds
> > of 'region' entity: finding better names for them
> >
> > Had the same issue when I worked on the context selection menu for
> > switching domain and project. I think it make sense to rename it to
> > AVAILABLE_KEYSTONE_ENDPOINTS. Since it is local_settings.py, its
> > going to affect some folks (maybe even break) until they also update
> > their setting, something that would have to be done manually.
> >
> > -----Jay Pipes <jaypipes at gmail.com> wrote: -----
> > To: openstack-dev at lists.openstack.org
> > From: Jay Pipes <jaypipes at gmail.com>
> > Date: 07/08/2015 07:14AM
> > Subject: Re: [openstack-dev] [horizon] [keystone] [docs] Two kinds
> > of 'region' entity: finding better names for them
>
> > Got it, thanks for the excellent explanation, Timur! Yeah, I think
> > renaming to AVAILABLE_KEYSTONE_ENDPOINTS would be a good solution.
> >
> > Best,
> > -jay
> >
> > On 07/08/2015 09:53 AM, Timur Sufiev wrote:
> > > Hi, Jay!
> > >
> > > As Doug said, Horizon regions are just different Keystone endpoints
> that
> > > Horizon could use to authorize against (and retrieve the whole catalog
> > > from any of them afterwards).
> > >
> > > Another example of how complicated things could be: imagine that
> Horizon
> > > config has two Keystone endpoints inside AVAILABLE_REGIONS setting,
> > > http://keystone.europe and http://keystone.asia, each of them hosting
> a
> > > different catalog with service endpoint pointing to Europe/Asia located
> > > services. For European Keystone all Europe-based services are marked as
> > > 'RegionOne', for Asian Keystone all its Asia-based services are marked
> > > as 'RegionOne'. Then, imagine that each Keystone also has 'RegionTwo'
> > > region, for European Keystone the Asian services are marked so, for
> > > Asian Keystone the opposite is true. One of our customers did roughly
> > > the same thing (with both Keystones using a common LDAP backend), and
> > > understanding what exactly in Horizon didn't work well was a puzzling
> > > experience.
> > >
> > > On Wed, Jul 8, 2015 at 4:37 PM Jay Pipes <jaypipes at gmail.com
> > > <mailto:jaypipes at gmail.com>> wrote:
> > >
> > >     On 07/08/2015 08:50 AM, Timur Sufiev wrote:
> > >      > Hello, folks!
> > >      >
> > >      > Somehow it happened that we have 2 different kinds of regions:
> the
> > >      > service regions inside Keystone catalog and AVAILABLE_REGIONS
> setting
> > >      > inside Horizon, yet use the same name 'regions' for both of
> them.
> > >     That
> > >      > creates a lot of confusion when solving some
> region-related issues at
> > >      > the Horizon/Keystone junction, even explaining what is exactly
> being
> > >      > broken poses a serious challenge when our common language has
> > >     such a flaw!
> > >      >
> > >      > I propose to invent 2 distinct terms for these entities, so at
> > >     least we
> > >      > won't be terminologically challenged when fixing the related
> bugs.
> > >
> > >     Hi!
> > >
> > >     I understand what the Keystone region represents: a simple,
> > >     non-geographically-connotated division of the entire OpenStack
> > >     deployment.
> > >
> > >     Unfortunately, I don't know what the Horizon regions represent.
> Could
> > >     you explain?
> > >
> > >     Best,
> > >     -jay
> > >
> > >
> >
> __________________________________________________________________________
> > >     OpenStack Development Mailing List (not for usage questions)
> > >     Unsubscribe:
> > >     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >
> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> > >     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/403b1548/attachment.html>

From ramy.asselin at hp.com  Thu Sep 10 10:59:35 2015
From: ramy.asselin at hp.com (Asselin, Ramy)
Date: Thu, 10 Sep 2015 10:59:35 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <55F1607F.9060509@virtuozzo.com>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>

I added Fnst OpenStackTest CI <https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+status:open,n,z> to the third-party CI group.
Ramy

From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
Sent: Thursday, September 10, 2015 3:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server


On 10.09.2015 11:30, Xie, Xianshan wrote:
Hi, all,
   In my CI environment, after submitting a patch into openstack-dev/sandbox,
the Jenkins Job can be launched automatically, and the result message of the job also can be posted into the gerrit server successfully.
Everything seems fine.

But in the "Verified" column, there is no verified vote, such as +1 or -1.
You will be able to vote once your CI account is added to the "Third-Party CI" group on review.openstack.org:
https://review.openstack.org/#/admin/groups/270,members
I advise you to ask for this permission in an IRC meeting of the third-party CI maintainers:
https://wiki.openstack.org/wiki/Meetings/ThirdParty
But you still won't be able to vote on any project other than the sandbox.


(patch url: https://review.openstack.org/#/c/222049/,
CI name:  Fnst OpenStackTest CI)

Although I have already added the "verified" label to layout.yaml under the check pipeline, it does not work yet.

My configuration is set as follows:
layout.yaml
-------------------------------------------
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
...
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

jobs:
  - name: noop-check-communication
    parameter-function: reusable_node

projects:
  - name: openstack-dev/sandbox
    check:
      - noop-check-communication
-------------------------------------------


And the projects.yaml of Jenkins job:
-------------------------------------------
- project:
    name: sandbox
    jobs:
      - noop-check-communication:
          node: 'devstack_slave || devstack-precise-check || d-p-c'
...
-------------------------------------------
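As Evgeny's reply notes, the layout above is only half of the story: Zuul composes the vote, but Gerrit only accepts it once the account has permission on the label. A rough Python sketch (not Zuul's actual reporter code) of how a pipeline result maps onto a Gerrit label vote:

```python
def review_vote(result, label="verified"):
    """Sketch of how a Zuul-like reporter turns a pipeline outcome into a
    Gerrit label vote, following the success/failure sections of the
    layout above. Note: Gerrit still rejects the vote server-side unless
    the CI account is allowed to vote on the label (here, membership in
    the Third-Party CI group grants that)."""
    vote = {"SUCCESS": 1, "FAILURE": -1}[result]
    return "--label %s=%+d" % (label, vote)
```

So even with a correct layout, a missing group membership yields exactly the symptom described: comments post fine, but no +1/-1 appears in the Verified column.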

Could anyone help me? Thanks in advance.

Xiexs





__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/d7bdb5de/attachment.html>

From sean at dague.net  Thu Sep 10 11:05:23 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 10 Sep 2015 07:05:23 -0400
Subject: [openstack-dev] [kilo-devstack] [disk-usage]
In-Reply-To: <OF5B7AC239.4AF0398B-ON65257EBC.003BF1DB-65257EBC.003BF1E0@tcs.com>
References: <OF5B7AC239.4AF0398B-ON65257EBC.003BF1DB-65257EBC.003BF1E0@tcs.com>
Message-ID: <55F163F3.6020400@dague.net>

disk = 0 does not mean there is no disk. It means the disk image won't
be expanded to a larger size. The disk used will be whatever your image
size is.
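The behavior described above can be sketched as follows (illustrative only, not Nova's actual code path; the function name is made up):

```python
def effective_root_disk_gb(flavor_disk_gb, image_virtual_size_gb):
    # Illustrative sketch, not Nova's code: a flavor disk of 0 means
    # "don't resize the root disk", so the guest ends up with the image's
    # own virtual size, which is why disk.usage can still be greater than 0.
    if flavor_disk_gb == 0:
        return image_virtual_size_gb
    return flavor_disk_gb
```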

	-Sean

On 09/10/2015 06:54 AM, Abhishek Talwar wrote:
> Hi Folks,
> 
> 
> I have installed the Kilo version of devstack and created an
> instance on it with flavor *m1.nano*, which gives a disk of 0 to the
> instance.
> 
> But while checking the disk usage of the instance using the *disk.usage*
> meter, it reports a value greater than 0. How is that possible?
> 
> 
> stack at abhishek:/opt/stack/ceilometer$ nova show e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa
> 
> +--------------------------------------+----------------------------------------------------------------+
> | Property | Value |
> +--------------------------------------+----------------------------------------------------------------+
> | OS-DCF:diskConfig | AUTO |
> | OS-EXT-AZ:availability_zone | nova |
> | OS-EXT-SRV-ATTR:host | tcs-HP-Compaq-Elite-8300-SFF |
> | OS-EXT-SRV-ATTR:hypervisor_hostname | tcs-HP-Compaq-Elite-8300-SFF |
> | OS-EXT-SRV-ATTR:instance_name | instance-00000002 |
> | OS-EXT-STS:power_state | 1 |
> | OS-EXT-STS:task_state | - |
> | OS-EXT-STS:vm_state | active |
> | OS-SRV-USG:launched_at | 2015-09-10T05:24:19.000000 |
> | OS-SRV-USG:terminated_at | - |
> | accessIPv4 | |
> | accessIPv6 | |
> | config_drive | True |
> | created | 2015-09-10T05:24:10Z |
> | flavor | m1.nano (42) |
> | hostId | 4a3e03e0a89fbf3790a1b1cd59b1b10acbaad6aa31a4361996d52440 |
> | id | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa |
> | image | cirros-0.3.2-x86_64-uec (221c46b3-9619-485e-8f60-0e1a363fc0e5) |
> | key_name | - |
> | metadata | {} |
> | name | vmssasa |
> | os-extended-volumes:volumes_attached | [] |
> | progress | 0 |
> | public network | 172.24.4.4 |
> | security_groups | default |
> | status | ACTIVE |
> | tenant_id | 5f4f5ee531a441d7bb3830529e611c7d |
> | updated | 2015-09-10T05:24:19Z |
> | user_id | d04b218204414a1891646735befd449c |
> +--------------------------------------+----------------------------------------------------------------+
> 
> 
> 
> 
> 
> stack at abhishek:/opt/stack/ceilometer$ nova flavor-show m1.nano
> 
> +----------------------------+---------+
> | Property | Value |
> +----------------------------+---------+
> | OS-FLV-DISABLED:disabled | False |
> | OS-FLV-EXT-DATA:ephemeral | 0 |
> | disk | 0 |
> | extra_specs | {} |
> | id | 42 |
> | name | m1.nano |
> | os-flavor-access:is_public | True |
> | ram | 64 |
> | rxtx_factor | 1.0 |
> | swap | |
> | vcpus | 1 |
> +----------------------------+---------+
> 
> 
> stack at abhishek:/opt/stack/ceilometer$ ceilometer sample-list -m 'cpu_util' -q "resource_id=e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa"
> 
> +--------------------------------------+------------+-------+------------+------+---------------------+
> | Resource ID | Name | Type | Volume | Unit | Timestamp |
> +--------------------------------------+------------+-------+------------+------+---------------------+
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T10:30:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T10:20:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T10:10:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T10:00:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T09:48:25 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T09:38:25 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T09:21:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T09:11:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T09:01:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T08:51:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T08:41:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T08:31:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 11112448.0 | B | 2015-09-10T08:21:42 |
> +--------------------------------------+------------+-------+------------+------+---------------------+
> 
> 
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net


From ipovolotskaya at mirantis.com  Thu Sep 10 11:52:32 2015
From: ipovolotskaya at mirantis.com (Irina Povolotskaya)
Date: Thu, 10 Sep 2015 14:52:32 +0300
Subject: [openstack-dev] [Fuel][Plugins] SDK is updated with the latest
	information
Message-ID: <CAFY49iD+HsqjJVECnGEHjp+5M0ZS6CDQ-V2b1wcKsOOjTexnhA@mail.gmail.com>

Hi to all,

Please be informed that the Fuel Plugin SDK now has a set of useful
instructions that cover the following topics:
- how to create a new project for Fuel Plugins in the /openstack namespace [1];
- how to add your plugin to DriverLog [2];
- how to write documentation for your plugin [3].

If you think any topics are still missing, please let me know.

Thanks.


[1] https://wiki.openstack.org/wiki/Fuel/Plugins#How_to_create_a_project
[2]
https://wiki.openstack.org/wiki/Fuel/Plugins#Add_your_plugin_to_DriverLog
[3]
https://wiki.openstack.org/wiki/Fuel/Plugins#Creating_documentation_for_Fuel_Plugins
-- 
Best regards,

Irina

*Business Analyst*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/20937ca8/attachment.html>

From yorik.sar at gmail.com  Thu Sep 10 11:53:06 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Thu, 10 Sep 2015 11:53:06 +0000
Subject: [openstack-dev] [Fuel] Let's change the way we distribute Fuel
 (was: [Fuel] Remove MOS DEB repo from master node)
In-Reply-To: <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
Message-ID: <CABocrW50Wh7TwbWkwEfJvfkghMFw5qBWE5ZT4Fc7SWyJFi4yoQ@mail.gmail.com>

Hello, thread!

First let me address some of the very good points Alex raised in his email.

On Wed, Sep 9, 2015 at 10:33 PM Alex Schultz <aschultz at mirantis.com> wrote:

> Fair enough, I just wanted to raise the UX issues around these types of
> things as they should go into the decision making process.
>

UX issues are something we definitely should address, even for ourselves:
the number of things that need to happen to deploy master with just one
small change is enormous.


> Let me explain why I think having local MOS mirror by default is bad:
>> 1) I don't see any reason why we should treat the MOS repo differently
>> from all the other online repos. A user sees on the settings tab a list
>> of repos, one of which is local by default while the others are online.
>> It can make the user a little confused, can't it? A user can also be
>> confused by the fact that some of the repos can be cloned locally by
>> fuel-createmirror while others can't. That is not straightforward UX.
>>
>
> I agree. The process should be the same and it should be just another
> repo. It doesn't mean we can't include a version on an ISO as part of a
> release.  Would it be better to provide the mirror on the ISO but not have
> it enabled by default for a release so that we can gather user feedback on
> this? This would include improved documentation and possibly allowing a
> user to choose their preference so we can collect metrics?
>

I think instead of relying on the average user of a spherical Fuel, we
should let the user decide what goes onto the ISO.

2) Having local MOS mirror by default makes things much more convoluted. We
>> are forced to have several directories with predefined names and we are
>> forced to manage these directories in nailgun, in upgrade script, etc. Why?
>> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
>> to MOS, which is not true. It is possible to implement really flexible
>> delivery scheme, but we need to think of these things as they are
>> independent.
>>
>
> I'm not sure what you mean by this. Including a point in time copy on an
> ISO as a release is a common method of distributing software. Is this a
> messaging thing that needs to be addressed? Perhaps I'm not familiar with
> people referring to the ISO as being MOS.
>

It is so common that some people think it's very broken. But we can fix
that.

>> For large users it is easy to build a custom ISO and put what they need
>> there, but first we need a simple working scheme that is clear to everyone.
>> I think treating all repos the same way is what is going to make things
>> simpler.
>>
>
> Who is going to build a custom ISO? How does one request that? What
> resources are consumed by custom ISO creation process/request? Does this
> scale?
>

How about the user building the ISO on their own workstation?

This thread is not about internet connectivity, it is about aligning things.
>>
>
> You are correct in that this thread is not explicitly about internet
> connectivity, but they are related. Any changes to remove a local
> repository and only provide an internet based solution makes internet
> connectivity something that needs to be included in the discussion.  I just
> want to make sure that we properly evaluate this decision based on end user
> feedback not because we don't want to manage this from a developer
> standpoint.
>

We can use Internet connectivity not only in the target DC.

Now what do I mean by all that? Let's make a Fuel distribution that's
easier to develop and distribute while being more comfortable to use in
the process.

As Alex pointed out, the common way to distribute an OS is to put some
number of packages from a snapshot of the golden repo on an ISO and let
the user install that. Let's call it the DVD way (although there was a
time an OS could fit on a CD). The other, less common way of distributing
an OS is a small minimal ISO that uses an online repo to install
everything. Let's call it the MiniCD way.

Fuel currently uses the DVD way: we put everything the user will ever need
on an ISO and hand it over. Vladimir's proposal was to use something
similar to the MiniCD way: put only Fuel on the ISO and keep an online
repo running.

Note that I'll speak of Fuel as an installer people put on MiniCD. It's a
bit bigger, but it deploys clouds, not just separate machines. Packages and
OS then translate to everything needed to deploy OpenStack: packages and
deploy scripts (puppet manifests, could be packaged as well). We could
apply the same logic to distribution of Fuel itself though, but let's not
get into it right now.

Let's compare these ways from the distributor's (D) and user's (U) points of view.

DVD way.
Pros:
- (D) a single piece to deliver to user;
- (D,U) a snapshot of repo put on ISO is easier to cover with QA and so
it's better tested;
- (U) one-time download for everything;
- (U) no need for Internet connectivity when you're installing OS;
- (U) you can store ISO and reuse it any number of times.
Cons:
- (D) you still have to maintain online repo for updates;
- (D,U) it's hard to create a custom ISO if a customer needs it, so one of
them has to shoulder the burden (usually, as in our case, it's the user);
- (U) huge download that pulls everything even if you'll never need it;
- (U) after installation one has outdated versions of everything, so
you'll have to go online and upgrade everything.

MiniCD way.
Pros:
- (D) ISO creation is simple as it doesn't depend on current state of repos;
- (D,U) small one-time download;
- (U) one can customize everything, install only what is needed, use
different repos;
- (U) up-to-date software is installed straight away.
Cons:
- (D) installer has to deal with any state of online repo;
- (U) you have to have Internet connection wherever you want to install;
- (U) download time is added to install process;
- (U) you have to download all packages for each install.

How about we define another way that combines these pros and overcomes cons?

From the user's point of view we have two stages: downloading and
installing. In the MiniCD way, part of the downloading stage is melted
into the installing stage. The installing stage can be configured: you can
use different repos, and even opt in to using an online repo when
installing from DVD. The downloading stage is fixed unless you're going to
build your own ISO, in which case you just don't use the upstream
distribution; you create your own.

Let's shift customization to download stage, so user will have this
workflow:
- download some archive (ISO, USB stick image or even just some archive);
- unpack it;
- run a script from that archive that downloads all software user is going
to install, including MOS from our repos or one's custom repo;
- run another script (unless the previous one does this) that bundles
everything to one image (or stores it on USB stick);
- go to datacenter (via iKVM or via iLegs) and deploy Fuel and MOS from
your image.

Note that all steps (except the last one) can be done on one's workstation
with shiny Internet access and comfortable chair available and datacenter
doesn't require any access to Internet or custom mirrors.

The ideal UX would be like this (replace USB stick with empty disc or just
ISO image if you like):
- plug in USB stick;
- download some app, run it;
- fill some forms;
- wait a bit;
- take this USB stick to datacenter.
App should download and put all necessary components on USB stick.

Now let's break down pros and cons for this approach.
Pros:
- (D) creation of source archive is simple as it doesn't depend on current
state of repos;
- (U) download only what you need and from where you're free to download
anything;
- (D,U) customization is built into the process;
- (U) no need for Internet access from datacenter;
- (U) but if you have one, you can freshen packages on Fuel master
afterwards without need to download everything;
- (U) all packages are as fresh as they are when you're preparing the image;
- (U) image can be reused on as many clouds as you like.
Cons:
- (D) the installer has to deal with any state of the repo (although with
customization it's the user's fault if they use bad repos);
- (U) you have to do some preparation steps on your workstation (although
if you don't need any customization it would take the same amount of time
as downloading MiniCD and "burning" it);
- I'm out of ideas.

Note that this customization phase is where developers can inject their
modified packages.

PS: It's not my idea, I've read/heard it somewhere.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/d6c82870/attachment.html>

From bharath at brocade.com  Thu Sep 10 12:11:41 2015
From: bharath at brocade.com (bharath)
Date: Thu, 10 Sep 2015 17:41:41 +0530
Subject: [openstack-dev]  Fwd: Re:  [neutron][L3][dvr][fwaas] FWaaS
In-Reply-To: <55F16D7B.6020802@brocade.com>
References: <55F16D7B.6020802@brocade.com>
Message-ID: <55F1737D.3090100@brocade.com>


Hi,

The neutron-openvswitch-agent is crashing with the error below:

2015-09-10 04:39:36.675 DEBUG neutron.agent.linux.utils 
[req-a6c70c4e-aa40-44e4-bd09-493e82bfe43c None None]
Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', 
'--', '--columns=name,other_config,tag', 'list', 'Port', u'tap8e259da4-e8']
Exit code: 0
  from (pid=26026) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:157
2015-09-10 04:39:36.675 ERROR
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-a6c70c4e-aa40-44e4-bd09-493e82bfe43c None None] invalid literal for
int() with base 10: 'None' Agent terminated!
2015-09-10 04:39:36.677 INFO oslo_rootwrap.client
[req-a6c70c4e-aa40-44e4-bd09-493e82bfe43c None None] Stopping rootwrap
daemon process with pid=26080
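The "invalid literal" failure itself is plain Python: int() received the four-character string 'None' rather than the None object, possibly from an unset OVS port tag serialized to a string along the way. A minimal reproduction (the variable name is illustrative):

```python
# Minimal reproduction of the failure mode in the traceback: int() is
# handed the string 'None', not the None object.
tag = "None"  # e.g. an unset value that was serialized to a string
try:
    int(tag)
except ValueError as exc:
    print(exc)  # prints: invalid literal for int() with base 10: 'None'
```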
I suspect the commit "Implement external physical bridge mapping in
linuxbridge" is causing the breakage [commit-id:
bd734811753a99d61e30998c734e465a8d507b8f].

When I set the branch back to commit
b6d780a83cd9a811e8a91db77eb24bb65fa0b075, the issue is not seen.

I am raising a defect for this.

Thanks,
bharath





On Thursday 10 September 2015 02:52 PM, bharath wrote:
> Hi ,
>
> Instance creation has been failing with the error below for the last 4 days.
>
> 2015-09-10 02:14:00.583 WARNING 
> neutron.plugins.ml2.drivers.mech_agent 
> [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
> 24109c82ae76465c8fb20562cce67a4f] Attempting to bind with dead agent: 
> {'binary': u'neutron-openvswitch-agent', 'des
> cription': None, 'admin_state_up': True, 'heartbeat_timestamp': 
> datetime.datetime(2015, 9, 10, 9, 6, 57), 'alive': False, 'topic': 
> u'N/A', 'host': u'ci-jslave-base', 'agent_type': u'Open vSwitch 
> agent', 'created_at': datetime.datetime(2
> 015, 9, 10, 9, 4, 57), 'started_at': datetime.datetime(2015, 9, 10, 9, 
> 6, 57), 'id': u'aa9098fe-c412-449e-b979-1f5ab46c3c1d', 
> 'configurations': {u'in_distributed_mode': False, 
> u'arp_responder_enabled': False, u'tunneling_ip': u'192.168.
> 30.41', u'devices': 0, u'log_agent_heartbeats': False, 
> u'l2_population': False, u'tunnel_types': [u'vxlan'], 
> u'enable_distributed_routing': False, u'bridge_mappings': {u'ext': 
> u'br-ext', u'mng': u'br-mng'}}}
> 2015-09-10 02:14:00.583 DEBUG neutron.plugins.ml2.drivers.mech_agent 
> [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
> 24109c82ae76465c8fb20562cce67a4f] Attempting to bind port 
> 6733610d-e7dc-4ecd-a810-b2b791af9b97 on network c6fb26cc-96
> 1e-4f38-bf40-bfc72cc59f67 from (pid=25516) bind_port 
> /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_agent.py:60
> 2015-09-10 02:14:00.588 ERROR neutron.plugins.ml2.managers 
> [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
> 24109c82ae76465c8fb20562cce67a4f] Failed to bind port 
> 6733610d-e7dc-4ecd-a810-b2b791af9b97 on host ci-jslave-base
> 2015-09-10 02:14:00.588 ERROR neutron.plugins.ml2.managers 
> [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
> 24109c82ae76465c8fb20562cce67a4f] Failed to bind port 
> 6733610d-e7dc-4ecd-a810-b2b791af9b97 on host ci-jslave-base
> 2015-09-10 02:14:00.608 DEBUG neutron.plugins.ml2.db 
> [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
> 24109c82ae76465c8fb20562cce67a4f] For port 
> 6733610d-e7dc-4ecd-a810-b2b791af9b97, host ci-jslave-base, cleared 
> binding levels from (pi
> d=25516) clear_binding_levels 
> /opt/stack/neutron/neutron/plugins/ml2/db.py:189
> 2015-09-10 02:14:00.608 DEBUG neutron.plugins.ml2.db 
> [req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
> 24109c82ae76465c8fb20562cce67a4f] Attempted to set empty binding 
> levels from (pid=25516) set_binding_levels /opt/stack/neutron/neutro
> n/plugins/ml2/db.py:164
>
>
> A recent commit seems to have broken this.
>
>
> During stacking I am getting the error below, but I don't know whether
> it's related to the above issue or not:
> 2015-09-09 15:18:48.658 | ERROR: openstack 'module' object has no attribute 'UpdateDataSource'
>
> I would love some help on this issue.
>
> Thanks,
> bharath
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/c246924d/attachment.html>

From thierry at openstack.org  Thu Sep 10 12:23:34 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 10 Sep 2015 14:23:34 +0200
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F15DE0.7040804@dague.net>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net>
Message-ID: <55F17646.4000203@openstack.org>

Sean Dague wrote:
> Right now, they are all a bunch of files, they can be anywhere. And then
> you have other files that have to reference these files by path, which
> can be anywhere. We could just punt in that part and say "punt! every
> installer and configuration management install needs to solve this on
> their own." I'm not convinced that's a good answer. The os-brick filters
> aren't really config. If you change them all that happens is
> terribleness. Stuff stops working, and you don't know why. They are data
> to exchange with another process about how to function. Honestly, they
> should probably be python code that's imported by rootwrap.
> 
> Much like the issues around clouds failing when you try to GET /v2 on
> the Nova API (because we have a bunch of knobs you have to align for SSL
> termination, and a bunch of deployers didn't), I don't think we should
> be satisfied with "there's a config for that!" when all that config
> means is that someone can break their configuration if they don't get it
> exactly right.

My quick 2cents on this. Rootwrap was designed as a generic solution to
wrap privileged calls. That's why filter files are part of its
"configuration". The problem is, OpenStack needs a pretty precise set of
those filters to be "configured" to run properly. So it's configuration
for rootwrap, but not "configuration" for OpenStack.

The way it was supposed to work out was that you would have a single
rootwrap on nodes and every component on that node needing filters would
drop them in some unique location. A library is just another component
needing filters, so os-brick could just deploy a few more filters on
nodes where it's installed.

The trick is, to increase "security" we promoted usage of per-project
directories (so that Nova only has access to Nova privileged commands),
which translated into using a specific config file for Nova rootwrap
pointing to Nova filters. Now if we are willing to sacrifice that, we
could have a single directory per-node (/usr/share/rootwrap instead of
/usr/share/*/rootwrap) that makes most of the interpolation you're
describing unnecessary.

Alternatively you could keep project-specific directories and have
os-brick drop symbolic links to its filters into both nova and
cinder-specific directories. It's slightly less flexible (since the lib
now has to know what consumes it) but keeps you from sacrificing "security".

Now another problem you're describing is that there is no single place
where those filters end up, depending on the way the projects (or libs)
are packaged and installed. And it's up to the distros to "fix" the
filters_path in the configuration file so that it points to every single
place where those end up. It's a problem (especially when you start to
install things using multiple concurrent packaging systems), but it's
not exactly new -- it's just that libraries shipping filters files are
even more likely to ship their filters somewhere weird. So maybe we can
continue to live with that problem we always had, until the privsep
system completely replaces rootwrap?
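To make the mechanics concrete, here is a hedged sketch of how the pieces fit together (the paths and the os-brick filter entries below are illustrative assumptions, not taken from the actual packages):

```ini
# /etc/nova/rootwrap.conf -- the consumer's rootwrap configuration.
# Distros adjust filters_path to match where their packaging puts things.
[DEFAULT]
# Every directory listed here is scanned for *.filters files.
# A shared per-node layout would list a single /usr/share/rootwrap instead.
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap

# /usr/share/nova/rootwrap/os-brick.filters -- a filter file a library
# could drop (or symlink) into each consumer's directory; the command
# filters below are invented for the example.
[Filters]
scsi_id: CommandFilter, scsi_id, root
multipath: CommandFilter, multipath, root
```

With the symlink variant described above, os-brick would install one canonical copy of its .filters file and link it into both the nova and cinder rootwrap directories.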

-- 
Thierry Carrez (ttx)


From vkuklin at mirantis.com  Thu Sep 10 12:25:01 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 10 Sep 2015 15:25:01 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
Message-ID: <CAHAWLf0j6mSJxV-1rKC5m+92w-LXbEJ5pGoHP-WpFu_mmJ_5kA@mail.gmail.com>

Igor

This is not about telling users whether they are stupid or not. We are
working for our users, not vice versa.

On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Mike,
>
> > still not exactly true for some large enterprises. Due to all the
> security, etc.,
> > there are sometimes VPNs / proxies / firewalls with very low throughput.
>
> It's their problem, and their policies. We can't and shouldn't handle
> all possible cases. If some enterprise has "no Internet" policy, I bet
> it won't be a problem for their IT guys to create an intranet mirror
> for MOS packages. Moreover, I also bet they do have a mirror for
> Ubuntu or another Linux distribution. So it's basically about the
> approach to consuming our mirrors.
>
> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
> > Folks
> >
> > I think, Mike is completely right here - we need an option to build
> > all-in-one ISO which can be tried-out/deployed unattendedly without
> internet
> > access. Let's let a user make a choice what he wants, not push him into
> > embarrassing situation. We still have many parts of Fuel which make
> choices
> > for user that cannot be overridden. Let's not pretend that we know more
> than
> > user does about his environment.
> >
> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
> > wrote:
> >>
> >> The reason people want offline deployment feature is not because of poor
> >> connection, but rather the enterprise intranets where getting subnet
> with
> >> external access sometimes is a real pain in various body parts.
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
> ikalnitsky at mirantis.com>
> >> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I agree with Vladimir - the idea of online repos is a right way to
> >>> move. In 2015 I believe we can ignore this "poor Internet connection"
> >>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> >>> distributions - most of them fetch needed packages from the Internet
> >>> during installation, not from CD/DVD. The netboot installers are
> >>> popular; I can't even remember the last time I installed my
> >>> Debian from DVD-1 - I have used the netboot installer for years.
> >>>
> >>> Thanks,
> >>> Igor
> >>>
> >>>
> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
> wrote:
> >>> >
> >>> >
> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com
> >
> >>> > wrote:
> >>> >>
> >>> >>
> >>> >> Hey Vladimir,
> >>> >>
> >>> >>>
> >>> >>>
> >>> >>>>>
> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
> >>> >>>>> understand,
> >>> >>>>> easier to troubleshoot
> >>> >>>>> 2) If one wants to have local mirror, the flow is the same as in
> >>> >>>>> case
> >>> >>>>> of upstream repos (fuel-createmirror), which is clear for a user
> >>> >>>>> to
> >>> >>>>> understand.
> >>> >>>>
> >>> >>>>
> >>> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
> >>> >>>> forward and has some issues making it a bad UX.
> >>> >>>
> >>> >>>
> >>> >>> I'd say the whole approach of having such tool as fuel-createmirror
> >>> >>> is
> >>> >>> way too naive. Reliable internet connection is totally up to
> network
> >>> >>> engineering rather than deployment. Even using proxy is much better
> >>> >>> than
> >>> >>> creating local mirror. But this discussion is totally out of the
> >>> >>> scope of
> >>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> >>> straightforward (installed as rpm, has just a couple of command
> line
> >>> >>> options). The quality of this script is also out of the scope of
> this
> >>> >>> thread. BTW we have plans to improve it.
> >>> >>
> >>> >>
> >>> >>
> >>> >> Fair enough, I just wanted to raise the UX issues around these types
> >>> >> of
> >>> >> things as they should go into the decision making process.
> >>> >>
> >>> >>
> >>> >>>
> >>> >>>>>
> >>> >>>>>
> >>> >>>>> Many people still associate ISO with MOS, but it is not true when
> >>> >>>>> using
> >>> >>>>> package based delivery approach.
> >>> >>>>>
> >>> >>>>> It is easy to define necessary repos during deployment and thus
> it
> >>> >>>>> is
> >>> >>>>> easy to control what exactly is going to be installed on slave
> >>> >>>>> nodes.
> >>> >>>>>
> >>> >>>>> What do you guys think of it?
> >>> >>>>>
> >>> >>>>>
> >>> >>>>
> >>> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> >>> >>>> many
> >>> >>>> large users, complete access to the internet is not available or
> not
> >>> >>>> desired.  If we want to continue down this path, we need to
> improve
> >>> >>>> the
> >>> >>>> tools to setup the local mirror and properly document what
> >>> >>>> urls/ports/etc
> >>> >>>> need to be available for the installation of openstack and any
> >>> >>>> mirror
> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
> >>> >>>> similar to a
> >>> >>>> live cd that allows a user to completely try out fuel wherever
> they
> >>> >>>> want
> >>> >>>> without further requirements of internet access.  If we don't
> want
> >>> >>>> to
> >>> >>>> continue with that, we need to do a better job around providing
> the
> >>> >>>> tools
> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
> >>> >>>> providing an
> >>> >>>> net-only iso and an all-included iso would be a better solution so
> >>> >>>> people
> >>> >>>> will have their expectations properly set up front?
> >>> >>>
> >>> >>>
> >>> >>> Let me explain why I think having local MOS mirror by default is
> bad:
> >>> >>> 1) I don't see any reason why we should treat MOS  repo other way
> >>> >>> than
> >>> >>> all other online repos. A user sees on the settings tab the list of
> >>> >>> repos
> >>> >>> one of which is local by default while others are online. It can
> make
> >>> >>> user a
> >>> >>> little bit confused, can't it? A user can be also confused by the
> >>> >>> fact, that
> >>> >>> some of the repos can be cloned locally by fuel-createmirror while
> >>> >>> others
> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>> >>
> >>> >>
> >>> >>
> >>> >> I agree. The process should be the same and it should be just
> another
> >>> >> repo. It doesn't mean we can't include a version on an ISO as part
> of
> >>> >> a
> >>> >> release.  Would it be better to provide the mirror on the ISO but
> not
> >>> >> have
> >>> >> it enabled by default for a release so that we can gather user
> >>> >> feedback on
> >>> >> this? This would include improved documentation and possibly
> allowing
> >>> >> a user
> >>> >> to choose their preference so we can collect metrics?
> >>> >>
> >>> >>
> >>> >>> 2) Having local MOS mirror by default makes things much more
> >>> >>> convoluted.
> >>> >>> We are forced to have several directories with predefined names and
> >>> >>> we are
> >>> >>> forced to manage these directories in nailgun, in upgrade script,
> >>> >>> etc. Why?
> >>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> >>> >>> equal
> >>> >>> to MOS, which is not true. It is possible to implement really
> >>> >>> flexible
> >>> >>> delivery scheme, but we need to think of these things as they are
> >>> >>> independent.
> >>> >>
> >>> >>
> >>> >>
> >>> >> I'm not sure what you mean by this. Including a point in time copy
> on
> >>> >> an
> >>> >> ISO as a release is a common method of distributing software. Is
> this
> >>> >> a
> >>> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
> >>> >> with
> >>> >> people referring to the ISO as being MOS.
> >>> >>
> >>> >>
> >>> >>> For large users it is easy to build custom ISO and put there what
> >>> >>> they
> >>> >>> need but first we need to have simple working scheme clear for
> >>> >>> everyone. I
> >>> >>> think dealing with all repos the same way is what is gonna make
> >>> >>> things
> >>> >>> simpler.
> >>> >>>
> >>> >>
> >>> >>
> >>> >> Who is going to build a custom ISO? How does one request that? What
> >>> >> resources are consumed by custom ISO creation process/request? Does
> >>> >> this
> >>> >> scale?
> >>> >>
> >>> >>
> >>> >>>
> >>> >>> This thread is not about internet connectivity, it is about
> aligning
> >>> >>> things.
> >>> >>>
> >>> >>
> >>> >> You are correct in that this thread is not explicitly about internet
> >>> >> connectivity, but they are related. Any changes to remove a local
> >>> >> repository
> >>> >> and only provide an internet based solution makes internet
> >>> >> connectivity
> >>> >> something that needs to be included in the discussion.  I just want
> to
> >>> >> make
> >>> >> sure that we properly evaluate this decision based on end user
> >>> >> feedback not
> >>> >> because we don't want to manage this from a developer standpoint.
> >>> >
> >>> >
> >>> >
> >>> >  +1, whatever the changes are, please keep Fuel as a tool that can
> >>> > deploy
> >>> > without Internet access; this is part of the reason that people like it
> and
> >>> > it's
> >>> > better than other tools.
> >>> >>
> >>> >>
> >>> >> -Alex
> >>> >>
> >>> >>
> >>> >>
> >>> >>
> __________________________________________________________________________
> >>> >> OpenStack Development Mailing List (not for usage questions)
> >>> >> Unsubscribe:
> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >>
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Yaguang Tang
> >>> > Technical Support, Mirantis China
> >>> >
> >>> > Phone: +86 15210946968
> >>> >
> >>> >
> >>> >
> >>> >
> >>> >
> __________________________________________________________________________
> >>> > OpenStack Development Mailing List (not for usage questions)
> >>> > Unsubscribe:
> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>>
> >>>
> __________________________________________________________________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuklin at mirantis.com
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/980dd266/attachment.html>

From xiexs at cn.fujitsu.com  Thu Sep 10 12:32:02 2015
From: xiexs at cn.fujitsu.com (Xie, Xianshan)
Date: Thu, 10 Sep 2015 12:32:02 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
Message-ID: <9086590602E58741A4119DC210CF893AA92C572E@G08CNEXMBPEKD01.g08.fujitsu.local>

Hi Ramy & Evgeny,

Yes, it does work.
Thanks a lot.

I had thought there was no need to add the CI account to the group for the sandbox project.


Xiexs

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
Sent: Thursday, September 10, 2015 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server

I added Fnst OpenStackTest CI (https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+status:open,n,z) to the third-party CI group.
Ramy

From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
Sent: Thursday, September 10, 2015 3:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server


On 10.09.2015 11:30, Xie, Xianshan wrote:
Hi, all,
   In my CI environment, after submitting a patch into openstack-dev/sandbox,
the Jenkins Job can be launched automatically, and the result message of the job also can be posted into the gerrit server successfully.
Everything seems fine.

But in the "Verified" column, there is no verified vote, such as +1 or -1.
You will be able to once your CI account is added to the "Third-Party CI" group on review.openstack.org:
https://review.openstack.org/#/admin/groups/270,members
I advise you to ask for such a permission in an IRC meeting for third-party CI maintainers:
https://wiki.openstack.org/wiki/Meetings/ThirdParty
But you still won't be able to vote on other projects, except the sandbox.

(patch url: https://review.openstack.org/#/c/222049/,
CI name:  Fnst OpenStackTest CI)

Although I have already added the "verified" label into the layout.yaml, under the check pipeline, it does not work yet.

And my configuration info is set as follows:
Layout.yaml
-------------------------------------------
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
    ...
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

jobs:
  - name: noop-check-communication
    parameter-function: reusable_node

projects:
  - name: openstack-dev/sandbox
    check:
      - noop-check-communication
-------------------------------------------


And the projects.yaml of Jenkins job:
-------------------------------------------
- project:
    name: sandbox
    jobs:
      - noop-check-communication:
          node: 'devstack_slave || devstack-precise-check || d-p-c'
    ...
-------------------------------------------

Could anyone help me? Thanks in advance.

Xiexs




__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/bdb38d5b/attachment.html>

From ikalnitsky at mirantis.com  Thu Sep 10 12:44:22 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Thu, 10 Sep 2015 15:44:22 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAHAWLf0j6mSJxV-1rKC5m+92w-LXbEJ5pGoHP-WpFu_mmJ_5kA@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAHAWLf0j6mSJxV-1rKC5m+92w-LXbEJ5pGoHP-WpFu_mmJ_5kA@mail.gmail.com>
Message-ID: <CACo6NWCxoKWnzEP57-fycpk6cwnBsjSw9b6j48QA_Kt3zUn6fQ@mail.gmail.com>

Vladimir,

Different users have different requirements. If we start covering all of
them, the project will become complex, buggy and unsupportable.

It's a common practice to choose just one path and follow it.
Obviously, some "recipes" for how to achieve something could be provided,
but I believe we should focus on the most important feature of Fuel - how
to deploy a reliable OpenStack environment - and this has nothing to do
with where and how packages are fetched.

Thanks,
Igor



On Thu, Sep 10, 2015 at 3:25 PM, Vladimir Kuklin <vkuklin at mirantis.com> wrote:
> Igor
>
> This is not about telling users whether they are stupid or not. We are
> working for our users, not vice versa.
>
> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
>>
>> Mike,
>>
>> > still not exactly true for some large enterprises. Due to all the
>> > security, etc.,
>> > there are sometimes VPNs / proxies / firewalls with very low throughput.
>>
>> It's their problem, and their policies. We can't and shouldn't handle
>> all possible cases. If some enterprise has "no Internet" policy, I bet
>> it won't be a problem for their IT guys to create an intranet mirror
>> for MOS packages. Moreover, I also bet they do have a mirror for
>> Ubuntu or another Linux distribution. So it's basically about the
>> approach to consuming our mirrors.
>>
>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>> wrote:
>> > Folks
>> >
>> > I think, Mike is completely right here - we need an option to build
>> > all-in-one ISO which can be tried-out/deployed unattendedly without
>> > internet
>> > access. Let's let a user make a choice what he wants, not push him into
>> > embarrassing situation. We still have many parts of Fuel which make
>> > choices
>> > for user that cannot be overridden. Let's not pretend that we know more
>> > than
>> > user does about his environment.
>> >
>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
>> > wrote:
>> >>
>> >> The reason people want offline deployment feature is not because of
>> >> poor
>> >> connection, but rather the enterprise intranets where getting subnet
>> >> with
>> >> external access sometimes is a real pain in various body parts.
>> >>
>> >> --
>> >> Best regards,
>> >> Oleg Gelbukh
>> >>
>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky
>> >> <ikalnitsky at mirantis.com>
>> >> wrote:
>> >>>
>> >>> Hello,
>> >>>
>> >>> I agree with Vladimir - the idea of online repos is a right way to
>> >>> move. In 2015 I believe we can ignore this "poor Internet connection"
>> >>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>> >>> distributives - most of them fetch needed packages from the Internet
>> >>> during installation, not from CD/DVD. The netboot installers are
>> >>> popular; I can't even remember the last time I installed my
>> >>> Debian from DVD-1 - I have used the netboot installer for years.
>> >>>
>> >>> Thanks,
>> >>> Igor
>> >>>
>> >>>
>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>> >>> wrote:
>> >>> >
>> >>> >
>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz
>> >>> > <aschultz at mirantis.com>
>> >>> > wrote:
>> >>> >>
>> >>> >>
>> >>> >> Hey Vladimir,
>> >>> >>
>> >>> >>>
>> >>> >>>
>> >>> >>>>>
>> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
>> >>> >>>>> understand,
>> >>> >>>>> easier to troubleshoot
>> >>> >>>>> 2) If one wants to have local mirror, the flow is the same as in
>> >>> >>>>> case
>> >>> >>>>> of upstream repos (fuel-createmirror), which is clear for a
>> >>> >>>>> user
>> >>> >>>>> to
>> >>> >>>>> understand.
>> >>> >>>>
>> >>> >>>>
>> >>> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
>> >>> >>>> forward and has some issues making it a bad UX.
>> >>> >>>
>> >>> >>>
>> >>> >>> I'd say the whole approach of having such tool as
>> >>> >>> fuel-createmirror
>> >>> >>> is
>> >>> >>> way too naive. Reliable internet connection is totally up to
>> >>> >>> network
>> >>> >>> engineering rather than deployment. Even using proxy is much
>> >>> >>> better
>> >>> >>> than
>> >>> >>> creating local mirror. But this discussion is totally out of the
>> >>> >>> scope of
>> >>> >>> this letter. Currently,  we have fuel-createmirror and it is
>> >>> >>> pretty
>> >>> >>> straightforward (installed as rpm, has just a couple of command
>> >>> >>> line
>> >>> >>> options). The quality of this script is also out of the scope of
>> >>> >>> this
>> >>> >>> thread. BTW we have plans to improve it.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>> >>> >> types
>> >>> >> of
>> >>> >> things as they should go into the decision making process.
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>> Many people still associate ISO with MOS, but it is not true
>> >>> >>>>> when
>> >>> >>>>> using
>> >>> >>>>> package based delivery approach.
>> >>> >>>>>
>> >>> >>>>> It is easy to define necessary repos during deployment and thus
>> >>> >>>>> it
>> >>> >>>>> is
>> >>> >>>>> easy to control what exactly is going to be installed on slave
>> >>> >>>>> nodes.
>> >>> >>>>>
>> >>> >>>>> What do you guys think of it?
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>
>> >>> >>>> Reliance on internet connectivity has been an issue since 6.1.
>> >>> >>>> For
>> >>> >>>> many
>> >>> >>>> large users, complete access to the internet is not available or
>> >>> >>>> not
>> >>> >>>> desired.  If we want to continue down this path, we need to
>> >>> >>>> improve
>> >>> >>>> the
>> >>> >>>> tools to setup the local mirror and properly document what
>> >>> >>>> urls/ports/etc
>> >>> >>>> need to be available for the installation of openstack and any
>> >>> >>>> mirror
>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>> >>> >>>> similar to a
>> >>> >>>> live cd that allows a user to completely try out fuel wherever
>> >>> >>>> they
>> >>> >>>> want
>> >>> >>>> without further requirements of internet access.  If we don't
>> >>> >>>> want
>> >>> >>>> to
>> >>> >>>> continue with that, we need to do a better job around providing
>> >>> >>>> the
>> >>> >>>> tools
>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>> >>> >>>> providing an
>> >>> >>>> net-only iso and an all-included iso would be a better solution
>> >>> >>>> so
>> >>> >>>> people
>> >>> >>>> will have their expectations properly set up front?
>> >>> >>>
>> >>> >>>
>> >>> >>> Let me explain why I think having local MOS mirror by default is
>> >>> >>> bad:
>> >>> >>> 1) I don't see any reason why we should treat MOS  repo other way
>> >>> >>> than
>> >>> >>> all other online repos. A user sees on the settings tab the list
>> >>> >>> of
>> >>> >>> repos
>> >>> >>> one of which is local by default while others are online. It can
>> >>> >>> make
>> >>> >>> user a
>> >>> >>> little bit confused, can't it? A user can be also confused by the
>> >>> >>> fact, that
>> >>> >>> some of the repos can be cloned locally by fuel-createmirror while
>> >>> >>> others
>> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> I agree. The process should be the same and it should be just
>> >>> >> another
>> >>> >> repo. It doesn't mean we can't include a version on an ISO as part
>> >>> >> of
>> >>> >> a
>> >>> >> release.  Would it be better to provide the mirror on the ISO but
>> >>> >> not
>> >>> >> have
>> >>> >> it enabled by default for a release so that we can gather user
>> >>> >> feedback on
>> >>> >> this? This would include improved documentation and possibly
>> >>> >> allowing
>> >>> >> a user
>> >>> >> to choose their preference so we can collect metrics?
>> >>> >>
>> >>> >>
>> >>> >>> 2) Having local MOS mirror by default makes things much more
>> >>> >>> convoluted.
>> >>> >>> We are forced to have several directories with predefined names
>> >>> >>> and
>> >>> >>> we are
>> >>> >>> forced to manage these directories in nailgun, in upgrade script,
>> >>> >>> etc. Why?
>> >>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO
>> >>> >>> is
>> >>> >>> equal
>> >>> >>> to MOS, which is not true. It is possible to implement really
>> >>> >>> flexible
>> >>> >>> delivery scheme, but we need to think of these things as they are
>> >>> >>> independent.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> I'm not sure what you mean by this. Including a point in time copy
>> >>> >> on
>> >>> >> an
>> >>> >> ISO as a release is a common method of distributing software. Is
>> >>> >> this
>> >>> >> a
>> >>> >> messaging thing that needs to be addressed? Perhaps I'm not
>> >>> >> familiar
>> >>> >> with
>> >>> >> people referring to the ISO as being MOS.
>> >>> >>
>> >>> >>
>> >>> >>> For large users it is easy to build custom ISO and put there what
>> >>> >>> they
>> >>> >>> need but first we need to have simple working scheme clear for
>> >>> >>> everyone. I
>> >>> >>> think dealing with all repos the same way is what is gonna make
>> >>> >>> things
>> >>> >>> simpler.
>> >>> >>>
>> >>> >>
>> >>> >>
>> >>> >> Who is going to build a custom ISO? How does one request that? What
>> >>> >> resources are consumed by custom ISO creation process/request? Does
>> >>> >> this
>> >>> >> scale?
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>> This thread is not about internet connectivity, it is about
>> >>> >>> aligning
>> >>> >>> things.
>> >>> >>>
>> >>> >>
>> >>> >> You are correct in that this thread is not explicitly about
>> >>> >> internet
>> >>> >> connectivity, but they are related. Any changes to remove a local
>> >>> >> repository
>> >>> >> and only provide an internet based solution makes internet
>> >>> >> connectivity
>> >>> >> something that needs to be included in the discussion.  I just want
>> >>> >> to
>> >>> >> make
>> >>> >> sure that we properly evaluate this decision based on end user
>> >>> >> feedback not
>> >>> >> because we don't want to manage this from a developer standpoint.
>> >>> >
>> >>> >
>> >>> >
>> >>> >  +1, whatever the changes are, please keep Fuel as a tool that can
>> >>> > deploy
>> >>> > without Internet access; this is part of the reason that people like it
>> >>> > and
>> >>> > it's
>> >>> > better than other tools.
>> >>> >>
>> >>> >>
>> >>> >> -Alex
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> __________________________________________________________________________
>> >>> >> OpenStack Development Mailing List (not for usage questions)
>> >>> >> Unsubscribe:
>> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Yaguang Tang
>> >>> > Technical Support, Mirantis China
>> >>> >
>> >>> > Phone: +86 15210946968
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> > __________________________________________________________________________
>> >>> > OpenStack Development Mailing List (not for usage questions)
>> >>> > Unsubscribe:
>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>> __________________________________________________________________________
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >>
>> >> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> > --
>> > Yours Faithfully,
>> > Vladimir Kuklin,
>> > Fuel Library Tech Lead,
>> > Mirantis, Inc.
>> > +7 (495) 640-49-04
>> > +7 (926) 702-39-68
>> > Skype kuklinvv
>> > 35bk3, Vorontsovskaya Str.
>> > Moscow, Russia,
>> > www.mirantis.com
>> > www.mirantis.ru
>> > vkuklin at mirantis.com
>> >
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From lajos.katona at ericsson.com  Thu Sep 10 12:56:31 2015
From: lajos.katona at ericsson.com (Lajos Katona)
Date: Thu, 10 Sep 2015 14:56:31 +0200
Subject: [openstack-dev] [tempest] Is there a sandbox project how to use
 tempest test plugin interface?
Message-ID: <55F17DFF.4000602@ericsson.com>

Hi,

I just noticed that as of tag 6 the test plugin interface is considered 
ready, and I am eager to start using it.
I have some questions:

If I understand correctly, in the future the plugin interface will be 
moved to tempest-lib, but for now I have to import module(s) from tempest 
to use the interface.
Is there a plan for this, i.e. a timeline for when the whole interface 
will be moved to tempest-lib?

If I start creating a test plugin now (from tag 6), what would be the 
best way to do it?
I thought of creating a repo for my plugin and adding it as a subrepo to 
my local tempest repo; then I can easily import things from tempest while 
keeping my test code separate from the other parts of tempest.
Is there a better way of doing this?

If there were an example plugin somewhere, that would probably be the 
most helpful.
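Until an official example exists, a minimal plugin sketch might look like 
the following (assuming the tempest.test_discover.plugins interface as of 
tag 6; the my_plugin/MyTempestPlugin names are purely illustrative):

```python
# Minimal tempest test plugin sketch. The plugin repo's setup.cfg would
# register the class under the 'tempest.test_plugins' entry point, e.g.:
#
#   [entry_points]
#   tempest.test_plugins =
#       my_plugin = my_plugin.plugin:MyTempestPlugin
#
# All names here (my_plugin, MyTempestPlugin) are illustrative.
import os

try:
    from tempest.test_discover import plugins
    _Base = plugins.TempestPlugin
except ImportError:
    # Lets the sketch be read/run without tempest installed.
    _Base = object


class MyTempestPlugin(_Base):
    def load_tests(self):
        # Return (full path to the tests dir, base path of the repo).
        here = globals().get("__file__", "plugin.py")
        base_path = os.path.split(os.path.dirname(
            os.path.abspath(here)))[0]
        full_test_dir = os.path.join(base_path, "my_plugin/tests")
        return full_test_dir, base_path

    def register_opts(self, conf):
        # Register plugin-specific config options here, if any.
        pass

    def get_opt_lists(self):
        return []
```

If the interface works the way other stevedore-based plugin systems do, 
simply pip-installing the plugin package should be enough for tempest to 
discover it, which would avoid the subrepo arrangement entirely.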

Thanks in advance for the help.

Regards
Lajos


From vkozhukalov at mirantis.com  Thu Sep 10 13:06:23 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Thu, 10 Sep 2015 16:06:23 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
Message-ID: <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>

Guys,

I really appreciate your opinions on whether Fuel should be all inclusive
or not. But the original topic of this thread is different. I personally
think that in 2015 it is not a big deal to make the master node able to
access any online host (even taking into account paranoid security
policies). It is just a matter of network engineering. But it is completely
out of the scope. What I am suggesting is to align the way we treat
different repos, whether upstream or MOS. What I am working on right now
is making the Fuel build and delivery approach really flexible. That
means we need as few non-standard ways/hacks/approaches/options as
possible.

> Why can't we make this optional in the build system? It should be easy to
implement, is not it?

That is exactly what I am trying to do (make it optional). But I don't want
it to be yet another boolean variable for this particular thing (the MOS
repo). We have a working approach for dealing with repos: they can be
either online or local mirrors. We have a tool for making local mirrors
(fuel-createmirror). Even if we put MOS on the ISO, a user still cannot
deploy OpenStack, because he/she still needs upstream to be available.
Either way, the user is forced to take some additional actions. Again, we
have plans to improve the quality and UX of fuel-createmirror.

Yet another thing I don't want to be on the master node is a bunch of MOS
repos directories named like
/var/www/nailgun/2015.1-7.0
/var/www/nailgun/2014.4-6.1
with links like
/var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
What does this link mean? Even Fuel developers can be confused; it is scary
to imagine what users think of it :-) Why should Nailgun and the upgrade
script manage that kind of storage in this exact format? A long time ago
people invented RPM/DEB repositories, tools to manage them, and a structure
for versioning them. We have Perestroika for that, and we have plans to put
all package/mirror related tools in one place (
github.com/stackforge/fuel-mirror) and make them available outside of Fuel
CI. Users will then be able to easily build their own packages, clone
necessary repos, and manage them in the way that is standard in the
industry. However, that is out of the scope of this letter.

I also don't like the idea of putting the MOS repo on the ISO by default,
because it encourages people to think that the ISO is the way of
distributing MOS. The ISO should be nothing more than a way of installing
Fuel from scratch. MOS should be distributed via the MOS repos. Fuel
itself is available as an RPM package in the RPM MOS repo.

Vladimir Kozhukalov

On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> Mike,
>
> > still not exactly true for some large enterprises. Due to all the
> security, etc.,
> > there are sometimes VPNs / proxies / firewalls with very low throughput.
>
> It's their problem, and their policies. We can't and shouldn't handle
> all possible cases. If some enterprise has "no Internet" policy, I bet
> it won't be a problem for their IT guys to create an intranet mirror
> for MOS packages. Moreover, I also bet they do have a mirror for
> Ubuntu or other Linux distributive. So it basically about approach how
> to consume our mirrors.
>
> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
> > Folks
> >
> > I think, Mike is completely right here - we need an option to build
> > all-in-one ISO which can be tried-out/deployed unattendedly without
> internet
> > access. Let's let a user make a choice what he wants, not push him into
> > embarassing situation. We still have many parts of Fuel which make
> choices
> > for user that cannot be overriden. Let's not pretend that we know more
> than
> > user does about his environment.
> >
> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
> > wrote:
> >>
> >> The reason people want offline deployment feature is not because of poor
> >> connection, but rather the enterprise intranets where getting subnet
> with
> >> external access sometimes is a real pain in various body parts.
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
> ikalnitsky at mirantis.com>
> >> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I agree with Vladimir - the idea of online repos is a right way to
> >>> move. In 2015 I believe we can ignore this "poor Internet connection"
> >>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> >>> distributives - most of them fetch needed packages from the Internet
> >>> during installation, not from CD/DVD. The netboot installers are
> >>> popular, I can't even remember when was the last time I install my
> >>> Debian from the DVD-1 - I use netboot installer for years.
> >>>
> >>> Thanks,
> >>> Igor
> >>>
> >>>
> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
> wrote:
> >>> >
> >>> >
> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com
> >
> >>> > wrote:
> >>> >>
> >>> >>
> >>> >> Hey Vladimir,
> >>> >>
> >>> >>>
> >>> >>>
> >>> >>>>>
> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
> >>> >>>>> understand,
> >>> >>>>> easier to troubleshoot
> >>> >>>>> 2) If one wants to have local mirror, the flow is the same as in
> >>> >>>>> case
> >>> >>>>> of upstream repos (fuel-createmirror), which is clrear for a user
> >>> >>>>> to
> >>> >>>>> understand.
> >>> >>>>
> >>> >>>>
> >>> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
> >>> >>>> forward and has some issues making it a bad UX.
> >>> >>>
> >>> >>>
> >>> >>> I'd say the whole approach of having such tool as fuel-createmirror
> >>> >>> is a
> >>> >>> way too naive. Reliable internet connection is totally up to
> network
> >>> >>> engineering rather than deployment. Even using proxy is much better
> >>> >>> that
> >>> >>> creating local mirror. But this discussion is totally out of the
> >>> >>> scope of
> >>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> >>> straightforward (installed as rpm, has just a couple of command
> line
> >>> >>> options). The quality of this script is also out of the scope of
> this
> >>> >>> thread. BTW we have plans to improve it.
> >>> >>
> >>> >>
> >>> >>
> >>> >> Fair enough, I just wanted to raise the UX issues around these types
> >>> >> of
> >>> >> things as they should go into the decision making process.
> >>> >>
> >>> >>
> >>> >>>
> >>> >>>>>
> >>> >>>>>
> >>> >>>>> Many people still associate ISO with MOS, but it is not true when
> >>> >>>>> using
> >>> >>>>> package based delivery approach.
> >>> >>>>>
> >>> >>>>> It is easy to define necessary repos during deployment and thus
> it
> >>> >>>>> is
> >>> >>>>> easy to control what exactly is going to be installed on slave
> >>> >>>>> nodes.
> >>> >>>>>
> >>> >>>>> What do you guys think of it?
> >>> >>>>>
> >>> >>>>>
> >>> >>>>
> >>> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> >>> >>>> many
> >>> >>>> large users, complete access to the internet is not available or
> not
> >>> >>>> desired.  If we want to continue down this path, we need to
> improve
> >>> >>>> the
> >>> >>>> tools to setup the local mirror and properly document what
> >>> >>>> urls/ports/etc
> >>> >>>> need to be available for the installation of openstack and any
> >>> >>>> mirror
> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
> >>> >>>> similar to a
> >>> >>>> live cd that allows a user to completely try out fuel wherever
> they
> >>> >>>> want
> >>> >>>> with out further requirements of internet access.  If we don't
> want
> >>> >>>> to
> >>> >>>> continue with that, we need to do a better job around providing
> the
> >>> >>>> tools
> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
> >>> >>>> providing an
> >>> >>>> net-only iso and an all-included iso would be a better solution so
> >>> >>>> people
> >>> >>>> will have their expectations properly set up front?
> >>> >>>
> >>> >>>
> >>> >>> Let me explain why I think having local MOS mirror by default is
> bad:
> >>> >>> 1) I don't see any reason why we should treat MOS  repo other way
> >>> >>> than
> >>> >>> all other online repos. A user sees on the settings tab the list of
> >>> >>> repos
> >>> >>> one of which is local by default while others are online. It can
> make
> >>> >>> user a
> >>> >>> little bit confused, can't it? A user can be also confused by the
> >>> >>> fact, that
> >>> >>> some of the repos can be cloned locally by fuel-createmirror while
> >>> >>> others
> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>> >>
> >>> >>
> >>> >>
> >>> >> I agree. The process should be the same and it should be just
> another
> >>> >> repo. It doesn't mean we can't include a version on an ISO as part
> of
> >>> >> a
> >>> >> release.  Would it be better to provide the mirror on the ISO but
> not
> >>> >> have
> >>> >> it enabled by default for a release so that we can gather user
> >>> >> feedback on
> >>> >> this? This would include improved documentation and possibly
> allowing
> >>> >> a user
> >>> >> to choose their preference so we can collect metrics?
> >>> >>
> >>> >>
> >>> >>> 2) Having local MOS mirror by default makes things much more
> >>> >>> convoluted.
> >>> >>> We are forced to have several directories with predefined names and
> >>> >>> we are
> >>> >>> forced to manage these directories in nailgun, in upgrade script,
> >>> >>> etc. Why?
> >>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> >>> >>> equal
> >>> >>> to MOS, which is not true. It is possible to implement really
> >>> >>> flexible
> >>> >>> delivery scheme, but we need to think of these things as they are
> >>> >>> independent.
> >>> >>
> >>> >>
> >>> >>
> >>> >> I'm not sure what you mean by this. Including a point in time copy
> on
> >>> >> an
> >>> >> ISO as a release is a common method of distributing software. Is
> this
> >>> >> a
> >>> >> messaging thing that needs to be addressed? Perhaps I'm not familiar
> >>> >> with
> >>> >> people referring to the ISO as being MOS.
> >>> >>
> >>> >>
> >>> >>> For large users it is easy to build custom ISO and put there what
> >>> >>> they
> >>> >>> need but first we need to have simple working scheme clear for
> >>> >>> everyone. I
> >>> >>> think dealing with all repos the same way is what is gonna makes
> >>> >>> things
> >>> >>> simpler.
> >>> >>>
> >>> >>
> >>> >>
> >>> >> Who is going to build a custom ISO? How does one request that? What
> >>> >> resources are consumed by custom ISO creation process/request? Does
> >>> >> this
> >>> >> scale?
> >>> >>
> >>> >>
> >>> >>>
> >>> >>> This thread is not about internet connectivity, it is about
> aligning
> >>> >>> things.
> >>> >>>
> >>> >>
> >>> >> You are correct in that this thread is not explicitly about internet
> >>> >> connectivity, but they are related. Any changes to remove a local
> >>> >> repository
> >>> >> and only provide an internet based solution makes internet
> >>> >> connectivity
> >>> >> something that needs to be included in the discussion.  I just want
> to
> >>> >> make
> >>> >> sure that we properly evaluate this decision based on end user
> >>> >> feedback not
> >>> >> because we don't want to manage this from a developer standpoint.
> >>> >
> >>> >
> >>> >
> >>> >  +1, whatever the changes is, please keep Fuel as a tool that can
> >>> > deploy
> >>> > without Internet access, this is part of reason that people like it
> and
> >>> > it's
> >>> > better that other tools.
> >>> >>
> >>> >>
> >>> >> -Alex
> >>> >>
> >>> >>
> >>> >>
> >>> >>
> __________________________________________________________________________
> >>> >> OpenStack Development Mailing List (not for usage questions)
> >>> >> Unsubscribe:
> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >>
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Yaguang Tang
> >>> > Technical Support, Mirantis China
> >>> >
> >>> > Phone: +86 15210946968
> >>> >
> >>> >
> >>> >
> >>> >
> >>> >
> __________________________________________________________________________
> >>> > OpenStack Development Mailing List (not for usage questions)
> >>> > Unsubscribe:
> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>>
> >>>
> __________________________________________________________________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuklin at mirantis.com
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/177be423/attachment.html>

From sean at dague.net  Thu Sep 10 13:16:31 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 10 Sep 2015 09:16:31 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F17646.4000203@openstack.org>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
Message-ID: <55F182AF.2040003@dague.net>

On 09/10/2015 08:23 AM, Thierry Carrez wrote:
> Sean Dague wrote:
>> Right now, they are all a bunch of files, they can be anywhere. And then
>> you have other files that have to reference these files by path, which
>> can be anywhere. We could just punt in that part and say "punt! every
>> installer and configuration management install needs to solve this on
>> their own." I'm not convinced that's a good answer. The os-brick filters
>> aren't really config. If you change them all that happens is
>> terribleness. Stuff stops working, and you don't know why. They are data
>> to exchange with another process about how to function. Honestly, they
>> should probably be python code that's imported by rootwrap.
>>
>> Much like the issues around clouds failing when you try to GET /v2 on
>> the Nova API (because we have a bunch of knobs you have to align for SSL
>> termination, and a bunch of deployers didn't), I don't think we should
>> be satisfied with "there's a config for that!" when all that config
>> means is that someone can break their configuration if they don't get it
>> exactly right.
> 
> My quick 2cents on this. Rootwrap was designed as a generic solution to
> wrap privileged calls. That's why filter files are part of its
> "configuration". The problem is, OpenStack needs a pretty precise set of
> those filters to be "configured" to run properly. So it's configuration
> for rootwrap, but not "configuration" for OpenStack.
> 
> The way it was supposed to work out was that you would have a single
> rootwrap on nodes and every component on that node needing filters would
> drop them in some unique location. A library is just another component
> needing filters, so os-brick could just deploy a few more filters on
> nodes where it's installed.
> 
> The trick is, to increase "security" we promoted usage of per-project
> directories (so that Nova only has access to Nova privileged commands),
> which translated into using a specific config file for Nova rootwrap
> pointing to Nova filters. Now if we are willing to sacrifice that, we
> could have a single directory per-node (/usr/share/rootwrap instead of
> /usr/share/*/rootwrap) that makes most of the interpolation you're
> describing unnecessary.
> 
> Alternatively you could keep project-specific directories and have
> os-brick drop symbolic links to its filters into both nova and
> cinder-specific directories. It's slightly less flexible (since the lib
> now has to know what consumes it) but keeps you from sacrificing "security".
> 
> Now another problem you're describing is that there is no single place
> where those filters end up, depending on the way the projects (or libs)
> are packaged and installed. And it's up to the distros to "fix" the
> filters_path in the configuration file so that it points to every single
> place where those end up. It's a problem (especially when you start to
> install things using multiple concurrent packaging systems), but it's
> not exactly new -- it's just that libraries shipping filters files are
> even more likely to ship their filters somewhere weird. So maybe we can
> continue to live with that problem we always had, until the privsep
> system completely replaces rootwrap ?
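For concreteness, the wiring under discussion looks roughly like this 
(the paths and the os-brick filter entries are illustrative sketches, 
not the actual shipped files):

```ini
; /etc/nova/rootwrap.conf -- consulted by nova-rootwrap; filters_path
; lists every directory that may contain .filters files.
[DEFAULT]
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap

; /usr/share/os-brick/rootwrap/os_brick.filters -- a filters file the
; library would drop (or symlink into the nova- and cinder-specific
; directories).
[Filters]
; name: FilterClass, command, run-as user
blockdev: CommandFilter, blockdev, root
scsi_id: CommandFilter, /lib/udev/scsi_id, root
```

With a single shared directory (/usr/share/rootwrap), the filters_path 
line collapses to one entry, at the cost of the per-project privilege 
separation described above.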

I do get this is where we came from. I feel like this doesn't really
address or understand that things are actually quite different when it
comes to libraries doing rootwrap. We've spent weeks attempting various
workarounds, and for Liberty just punted and said "os-brick, cinder,
and nova all must upgrade exactly at the same time", because that's the
only solution that doesn't require pages of documentation that
installers will sometimes get wrong.

I don't feel like that's an acceptable solution. And it also means that
"living" with it means that next cycle we're going to have to say "nova,
neutron, cinder, os-brick, and vif library must all upgrade at exactly
the same time". Which is clearly not a thing we want. Had we figured out
this rootwrap limitation early, os-brick would never have been put into
Nova because it makes the upgrade process demonstrably worse and more
fragile.

	-Sean

-- 
Sean Dague
http://dague.net


From vkuklin at mirantis.com  Thu Sep 10 13:17:41 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 10 Sep 2015 16:17:41 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
Message-ID: <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>

Igor

Having poor access to the internet is a regular use case which we must
support. This is not a crazy requirement. Not having a full ISO makes
cloud setup harder to complete. Even more, a hard requirement to create a
mirror will deter newcomers. I can say that if I were a user and saw a
requirement to create a mirror, I would not try the product, compared to
the case where I can get a full ISO with all the stuff I need.

On Thu, Sep 10, 2015 at 4:06 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Guys,
>
> I really appreciate your opinions on whether Fuel should be all inclusive
> or not. But the original topic of this thread is different. I personally
> think that in 2015 it is not a big deal to make the master node able to
> access any online host (even taking into account paranoid security
> policies). It is just a matter of network engineering. But it is completely
> out of the scope. What I am suggesting is to align the way how we treat
> different repos, whether upstream or MOS. What I am working on right now is
> I am trying to make Fuel build and delivery approach really flexible. That
> means we need to have as little non-standard ways/hacks/approaches/options
> as possible.
>
> > Why can't we make this optional in the build system? It should be easy
> to implement, is not it?
>
> That is exactly what I am trying to do (make it optional). But I don't
> want it to be yet another boolean variable for this particular thing (MOS
> repo). We have working approach for dealing with repos. Repos can either
> online or local mirrors. We have a tool for making local mirrors
> (fuel-createmirror). Even if we put MOS on the ISO, a user still can not
> deploy OpenStack, because he/she still needs upstream to be available.
> Anyway, the user is still forced to do some additional actions. Again, we
> have plans to improve the quality and UX of fuel-createmirror.
>
> Yet another thing I don't want to be on the master node is a bunch of MOS
> repos directories named like
> /var/www/nailgun/2015.1-7.0
> /var/www/nailgun/2014.4-6.1
> with links like
> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
> What does this link mean? Even Fuel developers can be confused. It is
> scary to imagine what users think of it :-) Why should Nailgun and upgrade
> script manage that kind of storage in this exact kind of format? A long
> time ago people invented RPM/DEB repositories, tools to manage them and
> structure for versioning them. We have Perestoika for that and we have
> plans to put all package/mirror related tools in one place (
> github.com/stackforge/fuel-mirror) and make all these tools available out
> of Fuel CI. So, users will be able to easily build their own packages,
> clone necessary repos and manage them in the way which is standard in the
> industry. However, it is out of the scope of the letter.
>
> I also don't like the idea of putting MOS repo on the ISO by default
> because it encourages people thing that ISO is the way of distributing MOS.
> ISO should be nothing more than just a way of installing Fuel from scratch.
> MOS should be distributed via MOS repos. Fuel is available as RPM package
> in RPM MOS repo.
>
>
>
>
>
>
>
> Vladimir Kozhukalov
>
> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
>
>> Mike,
>>
>> > still not exactly true for some large enterprises. Due to all the
>> security, etc.,
>> > there are sometimes VPNs / proxies / firewalls with very low throughput.
>>
>> It's their problem, and their policies. We can't and shouldn't handle
>> all possible cases. If some enterprise has "no Internet" policy, I bet
>> it won't be a problem for their IT guys to create an intranet mirror
>> for MOS packages. Moreover, I also bet they do have a mirror for
>> Ubuntu or other Linux distributive. So it basically about approach how
>> to consume our mirrors.
>>
>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>> wrote:
>> > Folks
>> >
>> > I think, Mike is completely right here - we need an option to build
>> > all-in-one ISO which can be tried-out/deployed unattendedly without
>> internet
>> > access. Let's let a user make a choice what he wants, not push him into
>> > embarassing situation. We still have many parts of Fuel which make
>> choices
>> > for user that cannot be overriden. Let's not pretend that we know more
>> than
>> > user does about his environment.
>> >
>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
>> > wrote:
>> >>
>> >> The reason people want offline deployment feature is not because of
>> poor
>> >> connection, but rather the enterprise intranets where getting subnet
>> with
>> >> external access sometimes is a real pain in various body parts.
>> >>
>> >> --
>> >> Best regards,
>> >> Oleg Gelbukh
>> >>
>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
>> ikalnitsky at mirantis.com>
>> >> wrote:
>> >>>
>> >>> Hello,
>> >>>
>> >>> I agree with Vladimir - the idea of online repos is a right way to
>> >>> move. In 2015 I believe we can ignore this "poor Internet connection"
>> >>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>> >>> distributives - most of them fetch needed packages from the Internet
>> >>> during installation, not from CD/DVD. The netboot installers are
>> >>> popular, I can't even remember when was the last time I install my
>> >>> Debian from the DVD-1 - I use netboot installer for years.
>> >>>
>> >>> Thanks,
>> >>> Igor
>> >>>
>> >>>
>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>> wrote:
>> >>> >
>> >>> >
>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <
>> aschultz at mirantis.com>
>> >>> > wrote:
>> >>> >>
>> >>> >>
>> >>> >> Hey Vladimir,
>> >>> >>
>> >>> >>>
>> >>> >>>
>> >>> >>>>>
>> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
>> >>> >>>>> understand,
>> >>> >>>>> easier to troubleshoot
>> >>> >>>>> 2) If one wants to have local mirror, the flow is the same as in
>> >>> >>>>> case
>> >>> >>>>> of upstream repos (fuel-createmirror), which is clrear for a
>> user
>> >>> >>>>> to
>> >>> >>>>> understand.
>> >>> >>>>
>> >>> >>>>
>> >>> >>>> From the issues I've seen,  fuel-createmirror isn't very straight
>> >>> >>>> forward and has some issues making it a bad UX.
>> >>> >>>
>> >>> >>>
>> >>> >>> I'd say the whole approach of having such tool as
>> fuel-createmirror
>> >>> >>> is a
>> >>> >>> way too naive. Reliable internet connection is totally up to
>> network
>> >>> >>> engineering rather than deployment. Even using proxy is much
>> better
>> >>> >>> that
>> >>> >>> creating local mirror. But this discussion is totally out of the
>> >>> >>> scope of
>> >>> >>> this letter. Currently,  we have fuel-createmirror and it is
>> pretty
>> >>> >>> straightforward (installed as rpm, has just a couple of command
>> line
>> >>> >>> options). The quality of this script is also out of the scope of
>> this
>> >>> >>> thread. BTW we have plans to improve it.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>> types
>> >>> >> of
>> >>> >> things as they should go into the decision making process.
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>> Many people still associate ISO with MOS, but it is not true
>> when
>> >>> >>>>> using
>> >>> >>>>> package based delivery approach.
>> >>> >>>>>
>> >>> >>>>> It is easy to define necessary repos during deployment and thus
>> it
>> >>> >>>>> is
>> >>> >>>>> easy to control what exactly is going to be installed on slave
>> >>> >>>>> nodes.
>> >>> >>>>>
>> >>> >>>>> What do you guys think of it?
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>
>> >>> >>>> Reliance on internet connectivity has been an issue since 6.1.
>> For
>> >>> >>>> many
>> >>> >>>> large users, complete access to the internet is not available or
>> not
>> >>> >>>> desired.  If we want to continue down this path, we need to
>> improve
>> >>> >>>> the
>> >>> >>>> tools to setup the local mirror and properly document what
>> >>> >>>> urls/ports/etc
>> >>> >>>> need to be available for the installation of openstack and any
>> >>> >>>> mirror
>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>> >>> >>>> similar to a
>> >>> >>>> live cd that allows a user to completely try out fuel wherever
>> they
>> >>> >>>> want
>> >>> >>>> with out further requirements of internet access.  If we don't
>> want
>> >>> >>>> to
>> >>> >>>> continue with that, we need to do a better job around providing
>> the
>> >>> >>>> tools
>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>> >>> >>>> providing an
>> >>> >>>> net-only iso and an all-included iso would be a better solution
>> so
>> >>> >>>> people
>> >>> >>>> will have their expectations properly set up front?
>> >>> >>>
>> >>> >>>
>> >>> >>> Let me explain why I think having local MOS mirror by default is
>> bad:
>> >>> >>> 1) I don't see any reason why we should treat MOS  repo other way
>> >>> >>> than
>> >>> >>> all other online repos. A user sees on the settings tab the list
>> of
>> >>> >>> repos
>> >>> >>> one of which is local by default while others are online. It can
>> make
>> >>> >>> user a
>> >>> >>> little bit confused, can't it? A user can be also confused by the
>> >>> >>> fact, that
>> >>> >>> some of the repos can be cloned locally by fuel-createmirror while
>> >>> >>> others
>> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> I agree. The process should be the same and it should be just
>> >>> >> another repo. It doesn't mean we can't include a version on an
>> >>> >> ISO as part of a release.  Would it be better to provide the
>> >>> >> mirror on the ISO but not have it enabled by default for a
>> >>> >> release so that we can gather user feedback on this? This would
>> >>> >> include improved documentation and possibly allowing a user to
>> >>> >> choose their preference so we can collect metrics?
>> >>> >>
>> >>> >>
>> >>> >>> 2) Having a local MOS mirror by default makes things much
>> >>> >>> more convoluted. We are forced to have several directories
>> >>> >>> with predefined names, and we are forced to manage these
>> >>> >>> directories in nailgun, in the upgrade script, etc. Why?
>> >>> >>> 3) When putting the MOS mirror on the ISO, we make people
>> >>> >>> think that the ISO is equal to MOS, which is not true. It is
>> >>> >>> possible to implement a really flexible delivery scheme, but
>> >>> >>> we need to think of these things as independent.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> I'm not sure what you mean by this. Including a point-in-time
>> >>> >> copy on an ISO as a release is a common method of distributing
>> >>> >> software. Is this a messaging thing that needs to be addressed?
>> >>> >> Perhaps I'm not familiar with people referring to the ISO as
>> >>> >> being MOS.
>> >>> >>
>> >>> >>
>> >>> >>> For large users it is easy to build a custom ISO and put
>> >>> >>> there what they need, but first we need to have a simple
>> >>> >>> working scheme that is clear for everyone. I think dealing
>> >>> >>> with all repos the same way is what is going to make things
>> >>> >>> simpler.
>> >>> >>>
>> >>> >>
>> >>> >>
>> >>> >> Who is going to build a custom ISO? How does one request that? What
>> >>> >> resources are consumed by custom ISO creation process/request? Does
>> >>> >> this
>> >>> >> scale?
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>> This thread is not about internet connectivity, it is about
>> >>> >>> aligning things.
>> >>> >>>
>> >>> >>
>> >>> >> You are correct in that this thread is not explicitly about
>> >>> >> internet connectivity, but they are related. Any change to
>> >>> >> remove a local repository and only provide an internet-based
>> >>> >> solution makes internet connectivity something that needs to be
>> >>> >> included in the discussion.  I just want to make sure that we
>> >>> >> properly evaluate this decision based on end-user feedback, not
>> >>> >> because we don't want to manage this from a developer
>> >>> >> standpoint.
>> >>> >
>> >>> >
>> >>> >
>> >>> > +1. Whatever the changes are, please keep Fuel as a tool that
>> >>> > can deploy without Internet access; this is part of the reason
>> >>> > people like it, and why it's better than other tools.
>> >>> >>
>> >>> >>
>> >>> >> -Alex
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> __________________________________________________________________________
>> >>> >> OpenStack Development Mailing List (not for usage questions)
>> >>> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Yaguang Tang
>> >>> > Technical Support, Mirantis China
>> >>> >
>> >>> > Phone: +86 15210946968
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> > --
>> > Yours Faithfully,
>> > Vladimir Kuklin,
>> > Fuel Library Tech Lead,
>> > Mirantis, Inc.
>> > +7 (495) 640-49-04
>> > +7 (926) 702-39-68
>> > Skype kuklinvv
>> > 35bk3, Vorontsovskaya Str.
>> > Moscow, Russia,
>> > www.mirantis.com
>> > www.mirantis.ru
>> > vkuklin at mirantis.com
>> >
>> >
>> >
>>
>>
>
>
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.com/>
www.mirantis.ru
vkuklin at mirantis.com

From vkozhukalov at mirantis.com  Thu Sep 10 13:35:41 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Thu, 10 Sep 2015 16:35:41 +0300
Subject: [openstack-dev] [Fuel] Let's change the way we distribute Fuel
 (was: [Fuel] Remove MOS DEB repo from master node)
In-Reply-To: <CABocrW50Wh7TwbWkwEfJvfkghMFw5qBWE5ZT4Fc7SWyJFi4yoQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CABocrW50Wh7TwbWkwEfJvfkghMFw5qBWE5ZT4Fc7SWyJFi4yoQ@mail.gmail.com>
Message-ID: <CAFLqvG5e0JXFYkwYuip-jrpJfQGPb-AoCxmQzVnEVEd4hDoHNg@mail.gmail.com>

> Vladimir's proposal was to use smth similar to MiniCD

Just to clarify: my proposal is to remove the DEB MOS repo from the master
node by default, and thus from the ISO. That is it.
My proposal does not assume having an internet connection while installing
the master node. Fuel RPM packages, together with their dependencies, are
still on the ISO, so the master node can be installed without an internet
connection. The cloud/OpenStack cannot be deployed out of the box anyway,
because we don't put the Ubuntu upstream repo on the ISO. A user is in any
case forced to make an Ubuntu upstream mirror available on the master node
(by cloning it locally or via an internet connection).

IMO, Fuel in this case is like a browser or a BitTorrent client: packages
are available on Linux DVDs, but it makes little sense to use them without
an internet connection.


Vladimir Kozhukalov

On Thu, Sep 10, 2015 at 2:53 PM, Yuriy Taraday <yorik.sar at gmail.com> wrote:

> Hello, thread!
>
> First let me address some of the very good points Alex raised in his email.
>
> On Wed, Sep 9, 2015 at 10:33 PM Alex Schultz <aschultz at mirantis.com>
> wrote:
>
>> Fair enough, I just wanted to raise the UX issues around these types of
>> things as they should go into the decision making process.
>>
>
> UX issues are something we should definitely address, even for
> ourselves: the number of things that need to happen to deploy Master
> with just one small change is enormous.
>
>
>>> Let me explain why I think having a local MOS mirror by default is
>>> bad:
>>> 1) I don't see any reason why we should treat the MOS repo
>>> differently than all other online repos. A user sees on the settings
>>> tab the list of repos, one of which is local by default while the
>>> others are online. It can make the user a little bit confused, can't
>>> it? A user can also be confused by the fact that some of the repos
>>> can be cloned locally by fuel-createmirror while others can't. That
>>> is not straightforward, and it is not good fuel-createmirror UX.
>>>
>>
>> I agree. The process should be the same and it should be just another
>> repo. It doesn't mean we can't include a version on an ISO as part of a
>> release.  Would it be better to provide the mirror on the ISO but not have
>> it enabled by default for a release so that we can gather user feedback on
>> this? This would include improved documentation and possibly allowing a
>> user to choose their preference so we can collect metrics?
>>
>
> I think instead of relying on the average user of a spherical Fuel, we
> should let the user decide what goes onto the ISO.
>
>>> 2) Having a local MOS mirror by default makes things much more
>>> convoluted. We are forced to have several directories with predefined
>>> names, and we are forced to manage these directories in nailgun, in
>>> the upgrade script, etc. Why?
>>> 3) When putting the MOS mirror on the ISO, we make people think that
>>> the ISO is equal to MOS, which is not true. It is possible to
>>> implement a really flexible delivery scheme, but we need to think of
>>> these things as independent.
>>>
>>
>> I'm not sure what you mean by this. Including a point in time copy on an
>> ISO as a release is a common method of distributing software. Is this a
>> messaging thing that needs to be addressed? Perhaps I'm not familiar with
>> people referring to the ISO as being MOS.
>>
>
> It is so common that some people think it's very broken. But we can fix
> that.
>
>>> For large users it is easy to build a custom ISO and put there what
>>> they need, but first we need to have a simple working scheme that is
>>> clear for everyone. I think dealing with all repos the same way is
>>> what is going to make things simpler.
>>>
>>
>> Who is going to build a custom ISO? How does one request that? What
>> resources are consumed by custom ISO creation process/request? Does this
>> scale?
>>
>
> How about the user building the ISO on their own workstation?
>
>>> This thread is not about internet connectivity, it is about aligning
>>> things.
>>>
>>
>> You are correct in that this thread is not explicitly about internet
>> connectivity, but they are related. Any change to remove a local
>> repository and only provide an internet-based solution makes internet
>> connectivity something that needs to be included in the discussion.  I
>> just want to make sure that we properly evaluate this decision based
>> on end-user feedback, not because we don't want to manage this from a
>> developer standpoint.
>>
>
> We can use Internet connectivity in places other than the target DC.
>
> Now what do I mean by all that? Let's make a Fuel distribution that's
> easier to develop and distribute while making it more comfortable to
> use in the process.
>
> As Alex pointed out, the common way to distribute an OS is to put some
> number of packages from a snapshot of a golden repo on an ISO and let
> the user install that. Let's say it's the DVD way (although there was a
> time when an OS could fit on a CD). The other, less common way of
> distributing an OS is a small minimal ISO that uses an online repo to
> install everything. Let's say it's the MiniCD way.
>
> Fuel is now using the DVD way: we put everything a user will ever need
> onto an ISO and give it to the user. Vladimir's proposal was to use
> something similar to the MiniCD way: put only Fuel on the ISO and keep
> an online repo running.
>
> Note that I'll speak of Fuel as an installer people put on MiniCD. It's a
> bit bigger, but it deploys clouds, not just separate machines. Packages and
> OS then translate to everything needed to deploy OpenStack: packages and
> deploy scripts (puppet manifests, could be packaged as well). We could
> apply the same logic to distribution of Fuel itself though, but let's not
> get into it right now.
>
> Let's compare these ways from the distributor (D) and user (U) points
> of view.
>
> DVD way.
> Pros:
> - (D) a single piece to deliver to user;
> - (D,U) a snapshot of repo put on ISO is easier to cover with QA and so
> it's better tested;
> - (U) one-time download for everything;
> - (U) no need for Internet connectivity when you're installing OS;
> - (U) you can store ISO and reuse it any number of times.
> Cons:
> - (D) you still have to maintain an online repo for updates;
> - (D,U) it's hard to create a custom ISO if a customer needs it, so one
> of them has to take on this burden (usually, and in our case, it's the
> user);
> - (U) huge download that pulls in everything even if you'll never need
> it;
> - (U) after installation one has outdated versions of everything, so
> you'll have to go online and upgrade everything anyway.
>
> MiniCD way.
> Pros:
> - (D) ISO creation is simple as it doesn't depend on current state of
> repos;
> - (D,U) small one-time download;
> - (U) one can customize everything, install only what is needed, use
> different repos;
> - (U) up-to-date software is installed straight away.
> Cons:
> - (D) the installer has to deal with any state of the online repo;
> - (U) you have to have an Internet connection wherever you want to
> install;
> - (U) download time is added to the install process;
> - (U) you have to download all packages for each install.
>
> How about we define another way that combines these pros and overcomes
> cons?
>
> From the user's point of view there are two stages: downloading and
> installing. In the MiniCD way, part of the downloading stage is folded
> into the installing stage. The installing stage can be configured: you
> can use different repos, and you can even opt in to using an online
> repo when installing from a DVD. The downloading stage is fixed unless
> you're going to build your own ISO, in which case you just don't use
> the upstream distribution; you create your own.
>
> Let's shift customization to the download stage, so the user will have
> this workflow:
> - download some archive (ISO, USB stick image or even just some
> archive);
> - unpack it;
> - run a script from that archive that downloads all the software the
> user is going to install, including MOS from our repos or from a custom
> repo;
> - run another script (unless the previous one does this) that bundles
> everything into one image (or stores it on a USB stick);
> - go to the datacenter (via iKVM or via iLegs) and deploy Fuel and MOS
> from your image.
>
> Note that all steps (except the last one) can be done on one's
> workstation, with shiny Internet access and a comfortable chair
> available, and the datacenter doesn't require any access to the
> Internet or custom mirrors.
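To make the shape of that workflow concrete, here is a toy sketch of the two scripts; everything in it (the repo URLs, the manifest format, the bundling step) is hypothetical and only illustrates the download-then-bundle flow, not any real Fuel tooling.

```python
import json
import os
import tempfile


def fetch_components(repo_urls, dest):
    """Download step: fetch all the software the user is going to
    install (Fuel itself plus MOS or a custom repo).

    This sketch only records which repos would be pulled in a
    manifest file; a real script would actually mirror them.
    """
    os.makedirs(dest, exist_ok=True)
    manifest = {"repos": list(repo_urls)}
    with open(os.path.join(dest, "manifest.json"), "w") as f:
        json.dump(manifest, f)
    return manifest


def bundle_image(workdir, image_path):
    """Bundle step: pack everything that was fetched into one image
    (or write it out to a USB stick)."""
    with open(image_path, "w") as f:
        f.write("bundle of: %s\n" % workdir)
    return image_path


if __name__ == "__main__":
    work = tempfile.mkdtemp()
    repos = [
        "http://mirror.example.com/mos",     # hypothetical MOS repo
        "http://mirror.example.com/ubuntu",  # hypothetical Ubuntu mirror
    ]
    fetch_components(repos, work)
    image = bundle_image(work, os.path.join(work, "fuel-bundle.img"))
    print(os.path.exists(image))  # → True
```

The resulting image is then what you carry into the datacenter; the download and bundle steps both run on the workstation, which is where the Internet access lives.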
>
> The ideal UX would be like this (replace the USB stick with an empty
> disc or just an ISO image if you like):
> - plug in a USB stick;
> - download some app, run it;
> - fill in some forms;
> - wait a bit;
> - take the USB stick to the datacenter.
> The app should download and put all necessary components on the USB
> stick.
>
> Now let's break down pros and cons for this approach.
> Pros:
> - (D) creation of the source archive is simple, as it doesn't depend on
> the current state of the repos;
> - (U) you download only what you need, from wherever you like;
> - (D,U) customization is built into the process;
> - (U) no need for Internet access from the datacenter;
> - (U) but if you have it, you can freshen packages on the Fuel master
> afterwards without needing to download everything;
> - (U) all packages are as fresh as they were when you prepared the
> image;
> - (U) the image can be reused on as many clouds as you like.
> Cons:
> - (D) the installer has to deal with any state of the repo (although
> with customization it's the user's fault if one uses bad repos);
> - (U) you have to do some preparation steps on your workstation
> (although if you don't need any customization it takes about the same
> amount of time as downloading a MiniCD and "burning" it);
> - I'm out of ideas.
>
> Note that this customization phase is where developers can inject their
> modified packages.
>
> PS: It's not my idea, I've read/heard it somewhere.
>
>
>

From james.slagle at gmail.com  Thu Sep 10 14:06:31 2015
From: james.slagle at gmail.com (James Slagle)
Date: Thu, 10 Sep 2015 10:06:31 -0400
Subject: [openstack-dev] [TripleO] Core reviewers for python-tripleoclient
	and tripleo-common
Message-ID: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>

TripleO has added a few new repositories, one of which is
python-tripleoclient[1], the former python-rdomanager-oscplugin.

With the additional repositories, there is an additional review burden
on our core reviewers. There is also the fact that folks who have been
working on the client code for a while when it was only part of RDO
are not TripleO core reviewers.

I think we could help with the additional burden of reviews if we made
two of those people core on python-tripleoclient and tripleo-common
now.

Specifically, the folks I'm proposing are:
Brad P. Crochet <brad at redhat.com>
Dougal Matthews <dougal at redhat.com>

The options I see are:
- keep just 1 tripleo acl, and add additional folks there, with a good
faith agreement not to +/-2,+A code that is not from the 2 client
repos.
- create a new gerrit acl in project-config for just these 2 client
repos, and add folks there as needed. the new acl would also contain
the existing acl for tripleo core reviewers
- neither of the above options - don't add these individuals to any
TripleO core team at this time.

The first is what was more or less done when Tuskar was brought under
the TripleO umbrella to avoid splitting the core teams, and it's the
option I'd prefer.

TripleO cores, please reply here with your vote from the above
options. Or, if you have other ideas, you can share those as well :)

[1] https://review.openstack.org/#/c/215186/

-- 
-- James Slagle
--


From yorik.sar at gmail.com  Thu Sep 10 14:08:58 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Thu, 10 Sep 2015 14:08:58 +0000
Subject: [openstack-dev] [Fuel] Let's change the way we distribute Fuel
 (was: [Fuel] Remove MOS DEB repo from master node)
In-Reply-To: <CAFLqvG5e0JXFYkwYuip-jrpJfQGPb-AoCxmQzVnEVEd4hDoHNg@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CABocrW50Wh7TwbWkwEfJvfkghMFw5qBWE5ZT4Fc7SWyJFi4yoQ@mail.gmail.com>
 <CAFLqvG5e0JXFYkwYuip-jrpJfQGPb-AoCxmQzVnEVEd4hDoHNg@mail.gmail.com>
Message-ID: <CABocrW68LdVHUMteyvmxOLMC-rcEUj-o_+LopuGtzpJoOGkWzg@mail.gmail.com>

On Thu, Sep 10, 2015 at 4:43 PM Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> > Vladimir's proposal was to use smth similar to MiniCD
>
> Just to clarify: my proposal is to remove the DEB MOS repo from the
> master node by default, and thus from the ISO. That is it.
> My proposal does not assume having an internet connection while
> installing the master node. Fuel RPM packages, together with their
> dependencies, are still on the ISO, so the master node can be installed
> without an internet connection. The cloud/OpenStack cannot be deployed
> out of the box anyway, because we don't put the Ubuntu upstream repo on
> the ISO. A user is in any case forced to make an Ubuntu upstream mirror
> available on the master node (by cloning it locally or via an internet
> connection).
>
> IMO, Fuel in this case is like a browser or a BitTorrent client:
> packages are available on Linux DVDs, but it makes little sense to use
> them without an internet connection.
>
>
> Vladimir Kozhukalov
>
> On Thu, Sep 10, 2015 at 2:53 PM, Yuriy Taraday <yorik.sar at gmail.com>
> wrote:
>
>> Note that I'll speak of Fuel as an installer people put on MiniCD. It's a
>> bit bigger, but it deploys clouds, not just separate machines. Packages and
>> OS then translate to everything needed to deploy OpenStack: packages and
>> deploy scripts (puppet manifests, could be packaged as well). We could
>> apply the same logic to distribution of Fuel itself though, but let's not
>> get into it right now.
>>
>
As I've mentioned later in the initial mail (see above), I'm not talking
about using this approach to deploy Fuel (although it'd be great if we do).
I'm talking about using it to deploy Fuel and then MOS. We can download
some fixed part of the image that contains everything needed to deploy Fuel
and add all necessary repos and manifests to it, for example.

So to repeat the analogy: Fuel is like the deb-installer that is present
on any Debian-based MiniCD, and MOS (packages + manifests) is like the
packages present on a DVD (and downloaded in the MiniCD case). You don't
want to dig into the deb-installer, but you might want to install
different software from different sources. Just as you don't want to
mess with Fuel itself, you might still want to install a customized MOS
from a local repo (or from the resulting image).

From jason.dobies at redhat.com  Thu Sep 10 14:10:58 2015
From: jason.dobies at redhat.com (Jay Dobies)
Date: Thu, 10 Sep 2015 10:10:58 -0400
Subject: [openstack-dev] [TripleO] Core reviewers for
 python-tripleoclient and tripleo-common
In-Reply-To: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
References: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
Message-ID: <55F18F72.8050805@redhat.com>

On 09/10/2015 10:06 AM, James Slagle wrote:
> TripleO has added a few new repositories, one of which is
> python-tripleoclient[1], the former python-rdomanager-oscplugin.
>
> With the additional repositories, there is an additional review burden
> on our core reviewers. There is also the fact that folks who have been
> working on the client code for a while when it was only part of RDO
> are not TripleO core reviewers.
>
> I think we could help with the additional burden of reviews if we made
> two of those people core on python-tripleoclient and tripleo-common
> now.
>
> Specifically, the folks I'm proposing are:
> Brad P. Crochet <brad at redhat.com>
> Dougal Matthews <dougal at redhat.com>

+1 to both. I've seen a lot of Dougal's reviews and his Python knowledge 
is excellent.

> The options I see are:
> - keep just 1 tripleo acl, and add additional folks there, with a good
> faith agreement not to +/-2,+A code that is not from the 2 client
> repos.

+1 to this. I feel like it encourages cross pollination into other 
tripleo repos (we could use the eyes on THT) without having to jump 
through extra hoops as their involvement with them increases.

> - create a new gerrit acl in project-config for just these 2 client
> repos, and add folks there as needed. the new acl would also contain
> the existing acl for tripleo core reviewers
> - neither of the above options - don't add these individuals to any
> TripleO core team at this time.
>
> The first is what was more or less done when Tuskar was brought under
> the TripleO umbrella to avoid splitting the core teams, and it's the
> option I'd prefer.
>
> TripleO cores, please reply here with your vote from the above
> options. Or, if you have other ideas, you can share those as well :)
>
> [1] https://review.openstack.org/#/c/215186/
>


From derekh at redhat.com  Thu Sep 10 14:12:39 2015
From: derekh at redhat.com (Derek Higgins)
Date: Thu, 10 Sep 2015 15:12:39 +0100
Subject: [openstack-dev] [TripleO] Current meeting timeslot
Message-ID: <55F18FD7.7070305@redhat.com>

Hi All,

The current meeting slot for TripleO is every second Tuesday @ 1900 UTC.
Since that time slot was chosen, a lot of people have joined the team and
others have moved on, so I'd like to revisit the timeslot to see if we
can accommodate more people at the meeting (myself included).

Sticking with Tuesday, I see two other slots available that I think will
accommodate more people currently working on TripleO.

Here is the etherpad[1]; please add your name under the time slots that
would suit you, so we can get a good idea of how a change would affect
people.

thanks,
Derek.


[1] - https://etherpad.openstack.org/p/SocOjvLr6o


From mtreinish at kortar.org  Thu Sep 10 14:13:02 2015
From: mtreinish at kortar.org (Matthew Treinish)
Date: Thu, 10 Sep 2015 10:13:02 -0400
Subject: [openstack-dev] [tempest] Is there a sandbox project how to use
 tempest test plugin interface?
In-Reply-To: <55F17DFF.4000602@ericsson.com>
References: <55F17DFF.4000602@ericsson.com>
Message-ID: <20150910141302.GA2037@sazabi.kortar.org>

On Thu, Sep 10, 2015 at 02:56:31PM +0200, Lajos Katona wrote:
> Hi,
> 
> I just noticed that from tag 6 the test plugin interface is considered
> ready, and I am eager to start using it.
> I have some questions:
> 
> If I understand correctly, in the future the plugin interface will be
> moved to tempest-lib, but for now I have to import module(s) from
> tempest to start using the interface.
> Is there a plan for this? I mean, when will the whole interface be
> moved to tempest-lib?

The only thing which will eventually move to tempest-lib is the abstract
class that defines the expected methods of a plugin class. [1] The other
pieces will remain in tempest. Honestly, this likely won't happen until
sometime during Mitaka. Also, when it does move to tempest-lib, we'll
deprecate the tempest version and keep it around to allow for a graceful
switchover.

The rationale behind this is we really don't provide any stability guarantees
on tempest internals (except for a couple of places which are documented, like
this plugin class) and we want any code from tempest that's useful to external
consumers to really live in tempest-lib.

> 
> If I start to create a test plugin now (from tag 6), what would be the
> best way to do this?
> I thought to create a repo for my plugin and add it as a subrepo to my
> local tempest repo; then I can easily import stuff from tempest while
> keeping my test code separated from other parts of tempest.
> Is there a better way of doing this?

To start I'd take a look at the documentation for tempest plugins:

http://docs.openstack.org/developer/tempest/plugin.html

From tempest's point of view a plugin is really just an entry point that points
to a class that exposes certain methods. So the Tempest plugin can live anywhere
as long as it's installed as an entry point in the proper namespace. Personally
I feel like including it as a subrepo in a local tempest tree is a bit strange,
but I don't think it'll cause any issues if you do that.
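To make the shape of the interface concrete, here is a minimal sketch of a plugin class. The abstract base is reproduced locally so the snippet runs on its own; in a real plugin you would subclass `tempest.test_discover.plugins.TempestPlugin` instead, and the package name `my_service_tempest_tests` is hypothetical. Treat the exact method signatures as illustrative and check the plugin docs linked above for the current interface.

```python
import abc
import os


class TempestPlugin(abc.ABC):
    """Local stand-in mirroring the shape of
    tempest.test_discover.plugins.TempestPlugin; a real plugin
    subclasses the tempest class, not this one."""

    @abc.abstractmethod
    def load_tests(self):
        """Return (full_test_dir, base_path) for test discovery."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register any plugin-specific config options."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return a list of (group_name, [options]) pairs."""


class MyServicePlugin(TempestPlugin):
    """Hypothetical plugin exposing tests shipped in a separate
    my_service_tempest_tests package."""

    def load_tests(self):
        # A real plugin derives base_path from its own __file__; the
        # fallback keeps this sketch runnable anywhere.
        here = globals().get("__file__", os.getcwd())
        base_path = os.path.dirname(os.path.abspath(here))
        full_test_dir = os.path.join(
            base_path, "my_service_tempest_tests/tests")
        return full_test_dir, base_path

    def register_opts(self, conf):
        pass  # no extra options in this sketch

    def get_opt_lists(self):
        return []


plugin = MyServicePlugin()
full_dir, base = plugin.load_tests()
print(full_dir.endswith("my_service_tempest_tests/tests"))  # → True
```

The class is then advertised as an entry point (the plugin docs describe the `tempest.test_plugins` namespace), so tempest can discover it no matter where the package is installed — which is exactly why the plugin can live anywhere.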

> 
> If there were an example plugin somewhere, that would be the most
> helpful, maybe.

There is a cookiecutter repo in progress. [2] Once that's ready it'll let you
create a blank plugin dir that'll be ready for you to populate. (similar to the
devstack plugin cookiecutter that already exists)

For current examples the only project I know of that's using a plugin interface
is manila [3] so maybe take a look at what they're doing.

-Matt Treinish

[1] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/plugins.py#n26
[2] https://review.openstack.org/208389
[3] https://review.openstack.org/#/c/201955

From pabelanger at redhat.com  Thu Sep 10 14:28:14 2015
From: pabelanger at redhat.com (Paul Belanger)
Date: Thu, 10 Sep 2015 10:28:14 -0400
Subject: [openstack-dev] [Ansible][Infra] Moving ansible roles into big
 tent?
In-Reply-To: <55F131E3.7050909@hpe.com>
References: <20150908145755.GC16241@localhost.localdomain>
 <55EF663E.8050909@redhat.com>
 <20150909172205.GA13717@localhost.localdomain>
 <CADe0dKC6YkSx_yjp4-gg+GqAoYvdjbLpzAXJi9NqYZMx4trm2g@mail.gmail.com>
 <55F131E3.7050909@hpe.com>
Message-ID: <20150910142814.GB31061@localhost.localdomain>

On Thu, Sep 10, 2015 at 09:31:47AM +0200, Yolanda Robla Mota wrote:
> Hi,
> I would be interested as well. Having these playbooks in ansible can
> also be useful in order to integrate with the infra-ansible project.
> I really see that collection as a valid alternative to the puppet
> modules, with the advantages that ansible can provide; but of course,
> moving from puppet to ansible internally in infra is something that
> cannot be done easily and needs a wider discussion.
> If we limit the scope of the ansible playbooks to infra components
> only, I think that the infra namespace is the way to go, with an
> independent group of reviewers.
> 
Right, I don't want to go down the path of having openstack-infra consume
ansible. I believe puppet will be the default for a while to come. So, if
both can live under the openstack-infra namespace, that works for me.

> Best
> Yolanda
> 
> 
> On 09/09/15 at 21:31, Ricardo Carrillo Cruz wrote:
> >I'm interested in ansible roles for openstack-infra, but as there is
> >overlap in functionality with the current openstack-infra puppet
> >roles, I'm not sure what the stance is from the openstack-infra core
> >members and PTL.
> >
> >I think they should go to openstack-infra, since Nodepool/Zuul/etc.
> >are very specific to the OpenStack CI.
> >
> >The question is whether we should have a subgroup within the
> >openstack-infra namespace for 'stuff that is not used by OpenStack CI
> >but interesting from a CI perspective and/or used by other downstream
> >groups'.
> >
> >Regards
> >
> >2015-09-09 19:22 GMT+02:00 Paul Belanger <pabelanger at redhat.com
> ><mailto:pabelanger at redhat.com>>:
> >
> >    On Tue, Sep 08, 2015 at 06:50:38PM -0400, Emilien Macchi wrote:
> >    >
> >    >
> >    > On 09/08/2015 10:57 AM, Paul Belanger wrote:
> >    > > Greetings,
> >    > >
> >    > > I wanted to start a discussion about the future of ansible /
> >    ansible roles in
> >    > > OpenStack. Over the last week or so I've started down the
> >    ansible path, starting
> >    > > my first ansible role; I've started with ansible-role-nodepool[1].
> >    > >
> >    > > My initial question is simple: now that big tent is upon us,
> >    > > I would like some way to include ansible roles into the
> >    > > openstack git workflow.  I first thought the role might live
> >    > > under openstack-infra, however I am not sure that is the
> >    > > right place.  My reason is, -infra tends to include modules
> >    > > they currently run under the -infra namespace, and I don't
> >    > > want to start the effort to convince people to migrate.
> >    >
> >    > I'm wondering what the goal of ansible-role-nodepool would be and
> >    > what it would orchestrate exactly. I did not find a README that
> >    > explains it, and digging into the code makes me think you are trying
> >    > to prepare nodepool images, but I don't exactly see why.
> >    >
> >    > Since we already have puppet-nodepool, I'm curious about the
> >    purpose of
> >    > this role.
> >    > IMHO, if we had to add such a new repo, it would be under
> >    > openstack-infra namespace, to be consistent with other repos
> >    > (puppet-nodepool, etc).
> >    >
> >    > > Another thought might be to reach out to the
> >    os-ansible-deployment team and ask
> >    > > how they see roles in OpenStack moving forward (mostly the
> >    reason for this
> >    > > email).
> >    >
> >    > os-ansible-deployment aims to setup OpenStack services in containers
> >    > (LXC). I don't see the relation between os-ansible-deployment (openstack
> >    > deployment related) and ansible-role-nodepool (infra related).
> >    >
> >    > > Either way, I would be interested in feedback on moving
> >    forward on this. Using
> >    > > travis-ci and github work, but the OpenStack workflow is much better.
> >    > >
> >    > > [1] https://github.com/pabelanger/ansible-role-nodepool
> >    > >
> >    >
> >    > To me, it's unclear how and why we are going to use
> >    ansible-role-nodepool.
> >    > Could you explain with a use-case?
> >    >
> >    The most basic use case is managing nodepool using ansible, for
> >    the purpose of
> >    CI.  Basically, rewrite puppet-nodepool using ansible.  I won't go
> >    into the
> >    reasoning for that, except to say people do not want to use puppet.
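[For illustration only, a minimal sketch of what a tasks file in such a role might look like. The package names, service user, and install source below are assumptions for the sake of the example, not taken from the actual ansible-role-nodepool repo:]

```yaml
# Hypothetical tasks/main.yml sketch for an ansible-role-nodepool.
# Package names, user, and install source are illustrative assumptions.
- name: Install build dependencies
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - python-dev
    - python-pip

- name: Create the nodepool service user
  user:
    name: nodepool
    home: /var/lib/nodepool

- name: Install nodepool from source
  pip:
    name: "git+https://git.openstack.org/openstack-infra/nodepool"
```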
> >
> >    Regarding os-ansible-deployment, they are only related in that both
> >    use ansible. I wouldn't see os-ansible-deployment using the module;
> >    however, I would hope to learn best practices and get code reviews
> >    from the team.
> >
> >    Wherever the module lives, I would hope people interested in ansible
> >    development would be grouped together somehow.
> >
> >    > Thanks,
> >    > --
> >    > Emilien Macchi
> >    >
> >    >
> >    __________________________________________________________________________
> >    > OpenStack Development Mailing List (not for usage questions)
> >    > Unsubscribe:
> >    OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >    <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >    > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >    __________________________________________________________________________
> >    OpenStack Development Mailing List (not for usage questions)
> >    Unsubscribe:
> >    OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >    <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> -- 
> Yolanda Robla Mota
> Cloud Automation and Distribution Engineer
> +34 605641639
> yolanda.robla-mota at hp.com
> 

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From sgolovatiuk at mirantis.com  Thu Sep 10 14:31:40 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Thu, 10 Sep 2015 16:31:40 +0200
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
Message-ID: <CA+HkNVu4T+sFFXFPg+1jVZDhg7pLPi5x3+PtUzsvinB8y=ohxg@mail.gmail.com>

Vladimir,

I give my +1. We must have different repos for MOS and the master node. As
an example, consider the case where we need to implement CentOS 7 support
while the master node remains on CentOS 6.



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Thu, Sep 10, 2015 at 3:06 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Guys,
>
> I really appreciate your opinions on whether Fuel should be all inclusive
> or not. But the original topic of this thread is different. I personally
> think that in 2015 it is not a big deal to make the master node able to
> access any online host (even taking into account paranoid security
> policies). It is just a matter of network engineering. But it is completely
> out of the scope. What I am suggesting is to align the way we treat
> different repos, whether upstream or MOS. What I am working on right now
> is trying to make the Fuel build and delivery approach really flexible.
> That means we need to have as few non-standard
> ways/hacks/approaches/options as possible.
>
> > Why can't we make this optional in the build system? It should be easy
> to implement, shouldn't it?
>
> That is exactly what I am trying to do (make it optional). But I don't
> want it to be yet another boolean variable for this particular thing (the
> MOS repo). We have a working approach for dealing with repos: repos can
> be either online or local mirrors. We have a tool for making local
> mirrors (fuel-createmirror). Even if we put MOS on the ISO, a user still
> cannot deploy OpenStack, because he/she still needs upstream to be
> available. Either way, the user is forced to take some additional
> actions. Again, we have plans to improve the quality and UX of
> fuel-createmirror.
>
> Yet another thing I don't want to be on the master node is a bunch of MOS
> repos directories named like
> /var/www/nailgun/2015.1-7.0
> /var/www/nailgun/2014.4-6.1
> with links like
> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
> What does this link mean? Even Fuel developers can be confused. It is
> scary to imagine what users think of it :-) Why should Nailgun and the
> upgrade script manage that kind of storage in this exact kind of format?
> A long time ago people invented RPM/DEB repositories, tools to manage
> them, and a structure for versioning them. We have Perestroika for that,
> and we have plans to put all package/mirror related tools in one place (
> github.com/stackforge/fuel-mirror) and make all these tools available
> out of Fuel CI. So, users will be able to easily build their own
> packages, clone necessary repos, and manage them in the way that is
> standard in the industry. However, that is out of the scope of this
> letter.
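[As a side note, the confusing alias scheme described above can be reproduced in a few lines of shell. The paths are shortened to a scratch directory; this is purely illustrative, not the actual master-node layout:]

```shell
# Recreate the described layout in a scratch directory (illustrative only)
base=$(mktemp -d)
mkdir -p "$base/2015.1-7.0" "$base/2014.4-6.1"
# 'ubuntu' is an alias that silently points at one specific release...
ln -s "$base/2015.1-7.0" "$base/ubuntu"
# ...and nothing in the alias name says which release that is:
readlink "$base/ubuntu"
```

Nothing about the name `ubuntu` tells a user (or an upgrade script) which version it resolves to, which is exactly the objection being raised.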
>
> I also don't like the idea of putting the MOS repo on the ISO by default
> because it encourages people to think that the ISO is the way of
> distributing MOS. The ISO should be nothing more than a way of installing
> Fuel from scratch. MOS should be distributed via MOS repos. Fuel is
> available as an RPM package in the RPM MOS repo.
>
> Vladimir Kozhukalov
>
> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
> wrote:
>
>> Mike,
>>
>> > still not exactly true for some large enterprises. Due to all the
>> security, etc.,
>> > there are sometimes VPNs / proxies / firewalls with very low throughput.
>>
>> It's their problem, and their policies. We can't and shouldn't handle
>> all possible cases. If some enterprise has a "no Internet" policy, I bet
>> it won't be a problem for their IT guys to create an intranet mirror
>> for MOS packages. Moreover, I also bet they already have a mirror for
>> Ubuntu or another Linux distribution. So it is basically about the
>> approach to consuming our mirrors.
>>
>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>> wrote:
>> > Folks
>> >
>> > I think Mike is completely right here - we need an option to build an
>> > all-in-one ISO which can be tried out/deployed unattended without
>> > internet access. Let's let a user make the choice about what he wants,
>> > not push him into an embarrassing situation. We still have many parts
>> > of Fuel which make choices for the user that cannot be overridden.
>> > Let's not pretend that we know more than the user does about his
>> > environment.
>> >
>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
>> > wrote:
>> >>
>> >> The reason people want offline deployment feature is not because of
>> poor
>> >> connection, but rather the enterprise intranets where getting subnet
>> with
>> >> external access sometimes is a real pain in various body parts.
>> >>
>> >> --
>> >> Best regards,
>> >> Oleg Gelbukh
>> >>
>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
>> ikalnitsky at mirantis.com>
>> >> wrote:
>> >>>
>> >>> Hello,
>> >>>
>> >>> I agree with Vladimir - the idea of online repos is the right way
>> >>> to go. In 2015 I believe we can ignore this "poor Internet
>> >>> connection" reason, and simplify both Fuel and the UX. Moreover,
>> >>> take a look at Linux distributions - most of them fetch needed
>> >>> packages from the Internet during installation, not from CD/DVD.
>> >>> Netboot installers are popular; I can't even remember the last time
>> >>> I installed my Debian from DVD-1 - I have used the netboot installer
>> >>> for years.
>> >>>
>> >>> Thanks,
>> >>> Igor
>> >>>
>> >>>
>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>> wrote:
>> >>> >
>> >>> >
>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <
>> aschultz at mirantis.com>
>> >>> > wrote:
>> >>> >>
>> >>> >>
>> >>> >> Hey Vladimir,
>> >>> >>
>> >>> >>>
>> >>> >>>
>> >>> >>>>>
>> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
>> >>> >>>>> understand,
>> >>> >>>>> easier to troubleshoot
>> >>> >>>>> 2) If one wants to have a local mirror, the flow is the same
>> >>> >>>>> as in the case of upstream repos (fuel-createmirror), which
>> >>> >>>>> is clear for a user to understand.
>> >>> >>>>
>> >>> >>>>
>> >>> >>>> From the issues I've seen, fuel-createmirror isn't very
>> >>> >>>> straightforward and has some issues, making it a bad UX.
>> >>> >>>
>> >>> >>>
>> >>> >>> I'd say the whole approach of having such a tool as
>> >>> >>> fuel-createmirror is way too naive. A reliable internet
>> >>> >>> connection is totally a matter of network engineering rather
>> >>> >>> than deployment. Even using a proxy is much better than creating
>> >>> >>> a local mirror. But this discussion is totally out of the scope
>> >>> >>> of this letter. Currently, we have fuel-createmirror and it is
>> >>> >>> pretty straightforward (installed as an rpm, with just a couple
>> >>> >>> of command line options). The quality of this script is also out
>> >>> >>> of the scope of this thread. BTW, we have plans to improve it.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>> types
>> >>> >> of
>> >>> >> things as they should go into the decision making process.
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>> Many people still associate ISO with MOS, but it is not true
>> when
>> >>> >>>>> using
>> >>> >>>>> package based delivery approach.
>> >>> >>>>>
>> >>> >>>>> It is easy to define necessary repos during deployment and thus
>> it
>> >>> >>>>> is
>> >>> >>>>> easy to control what exactly is going to be installed on slave
>> >>> >>>>> nodes.
>> >>> >>>>>
>> >>> >>>>> What do you guys think of it?
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>
>> >>> >>>> Reliance on internet connectivity has been an issue since 6.1.
>> For
>> >>> >>>> many
>> >>> >>>> large users, complete access to the internet is not available or
>> not
>> >>> >>>> desired.  If we want to continue down this path, we need to
>> improve
>> >>> >>>> the
>> >>> >>>> tools to setup the local mirror and properly document what
>> >>> >>>> urls/ports/etc
>> >>> >>>> need to be available for the installation of openstack and any
>> >>> >>>> mirror
>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>> >>> >>>> similar to a
>> >>> >>>> live cd that allows a user to completely try out fuel wherever
>> they
>> >>> >>>> want
>> >>> >>>> without further requirements of internet access.  If we don't
>> want
>> >>> >>>> to
>> >>> >>>> continue with that, we need to do a better job around providing
>> the
>> >>> >>>> tools
>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>> >>> >>>> providing a net-only iso and an all-included iso would be a
>> >>> >>>> better solution, so people will have their expectations
>> >>> >>>> properly set up front?
>> >>> >>>
>> >>> >>>
>> >>> >>> Let me explain why I think having a local MOS mirror by default
>> >>> >>> is bad:
>> >>> >>> 1) I don't see any reason why we should treat the MOS repo any
>> >>> >>> differently than all the other online repos. A user sees on the
>> >>> >>> settings tab a list of repos, one of which is local by default
>> >>> >>> while the others are online. It can make a user a little bit
>> >>> >>> confused, can't it? A user can also be confused by the fact that
>> >>> >>> some of the repos can be cloned locally by fuel-createmirror
>> >>> >>> while others can't. That is not straightforward, NOT
>> >>> >>> fuel-createmirror UX.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> I agree. The process should be the same and it should be just
>> another
>> >>> >> repo. It doesn't mean we can't include a version on an ISO as part
>> of
>> >>> >> a
>> >>> >> release.  Would it be better to provide the mirror on the ISO but
>> not
>> >>> >> have
>> >>> >> it enabled by default for a release so that we can gather user
>> >>> >> feedback on
>> >>> >> this? This would include improved documentation and possibly
>> allowing
>> >>> >> a user
>> >>> >> to choose their preference so we can collect metrics?
>> >>> >>
>> >>> >>
>> >>> >>> 2) Having local MOS mirror by default makes things much more
>> >>> >>> convoluted.
>> >>> >>> We are forced to have several directories with predefined names
>> and
>> >>> >>> we are
>> >>> >>> forced to manage these directories in nailgun, in upgrade script,
>> >>> >>> etc. Why?
>> >>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO
>> is
>> >>> >>> equal
>> >>> >>> to MOS, which is not true. It is possible to implement really
>> >>> >>> flexible
>> >>> >>> delivery scheme, but we need to think of these things as they are
>> >>> >>> independent.
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> I'm not sure what you mean by this. Including a point in time copy
>> on
>> >>> >> an
>> >>> >> ISO as a release is a common method of distributing software. Is
>> this
>> >>> >> a
>> >>> >> messaging thing that needs to be addressed? Perhaps I'm not
>> familiar
>> >>> >> with
>> >>> >> people referring to the ISO as being MOS.
>> >>> >>
>> >>> >>
>> >>> >>> For large users it is easy to build custom ISO and put there what
>> >>> >>> they
>> >>> >>> need but first we need to have simple working scheme clear for
>> >>> >>> everyone. I
>> >>> >>> think dealing with all repos the same way is what is gonna make
>> >>> >>> things
>> >>> >>> simpler.
>> >>> >>>
>> >>> >>
>> >>> >>
>> >>> >> Who is going to build a custom ISO? How does one request that? What
>> >>> >> resources are consumed by custom ISO creation process/request? Does
>> >>> >> this
>> >>> >> scale?
>> >>> >>
>> >>> >>
>> >>> >>>
>> >>> >>> This thread is not about internet connectivity, it is about
>> aligning
>> >>> >>> things.
>> >>> >>>
>> >>> >>
>> >>> >> You are correct in that this thread is not explicitly about
>> internet
>> >>> >> connectivity, but they are related. Any changes to remove a local
>> >>> >> repository
>> >>> >> and only provide an internet based solution makes internet
>> >>> >> connectivity
>> >>> >> something that needs to be included in the discussion.  I just
>> want to
>> >>> >> make
>> >>> >> sure that we properly evaluate this decision based on end user
>> >>> >> feedback not
>> >>> >> because we don't want to manage this from a developer standpoint.
>> >>> >
>> >>> >
>> >>> >
>> >>> >  +1, whatever the changes are, please keep Fuel as a tool that can
>> >>> > deploy without Internet access. This is part of the reason that
>> >>> > people like it and why it's better than other tools.
>> >>> >>
>> >>> >>
>> >>> >> -Alex
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >>
>> __________________________________________________________________________
>> >>> >> OpenStack Development Mailing List (not for usage questions)
>> >>> >> Unsubscribe:
>> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > Yaguang Tang
>> >>> > Technical Support, Mirantis China
>> >>> >
>> >>> > Phone: +86 15210946968
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> __________________________________________________________________________
>> >>> > OpenStack Development Mailing List (not for usage questions)
>> >>> > Unsubscribe:
>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>> >
>> >>>
>> >>>
>> >>>
>> __________________________________________________________________________
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >>
>> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> > --
>> > Yours Faithfully,
>> > Vladimir Kuklin,
>> > Fuel Library Tech Lead,
>> > Mirantis, Inc.
>> > +7 (495) 640-49-04
>> > +7 (926) 702-39-68
>> > Skype kuklinvv
>> > 35bk3, Vorontsovskaya Str.
>> > Moscow, Russia,
>> > www.mirantis.com
>> > www.mirantis.ru
>> > vkuklin at mirantis.com
>> >
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/ffc5da2a/attachment.html>

From vkozhukalov at mirantis.com  Thu Sep 10 14:40:45 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Thu, 10 Sep 2015 17:40:45 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
 <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>
Message-ID: <CAFLqvG6gF-OT5a-3Px8nHy-w=Y_18shOVJ0fgap8LFMqk7pZ3g@mail.gmail.com>

Vladimir,

* We don't have a full ISO anyway
* We don't require creating a mirror. When you launch your browser, do you
need to have a mirror of the Internet locally? Probably not. The same
applies here. An Internet connection is a common requirement nowadays, but
if you don't have one, you definitely need some kind of local copy.

Vladimir Kozhukalov

On Thu, Sep 10, 2015 at 4:17 PM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> Igor
>
> Having poor access to the internet is a regular use case which we must
> support. This is not a crazy requirement. Not having a full ISO makes
> cloud setup harder to complete. Even more, a hard requirement to create
> a mirror will deter newcomers. I can say that if I were a user and saw a
> requirement to create a mirror, I would not try the product, compared to
> the case when I can get a full ISO with all the stuff I need.
>
> On Thu, Sep 10, 2015 at 4:06 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Guys,
>>
>> I really appreciate your opinions on whether Fuel should be all inclusive
>> or not. But the original topic of this thread is different. I personally
>> think that in 2015 it is not a big deal to make the master node able to
>> access any online host (even taking into account paranoid security
>> policies). It is just a matter of network engineering. But it is completely
>> out of the scope. What I am suggesting is to align the way we treat
>> different repos, whether upstream or MOS. What I am working on right now
>> is trying to make the Fuel build and delivery approach really flexible.
>> That means we need to have as few non-standard
>> ways/hacks/approaches/options as possible.
>>
>> > Why can't we make this optional in the build system? It should be easy
>> to implement, shouldn't it?
>>
>> That is exactly what I am trying to do (make it optional). But I don't
>> want it to be yet another boolean variable for this particular thing
>> (the MOS repo). We have a working approach for dealing with repos: repos
>> can be either online or local mirrors. We have a tool for making local
>> mirrors (fuel-createmirror). Even if we put MOS on the ISO, a user still
>> cannot deploy OpenStack, because he/she still needs upstream to be
>> available. Either way, the user is forced to take some additional
>> actions. Again, we have plans to improve the quality and UX of
>> fuel-createmirror.
>>
>> Yet another thing I don't want to be on the master node is a bunch of MOS
>> repos directories named like
>> /var/www/nailgun/2015.1-7.0
>> /var/www/nailgun/2014.4-6.1
>> with links like
>> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
>> What does this link mean? Even Fuel developers can be confused. It is
>> scary to imagine what users think of it :-) Why should Nailgun and the
>> upgrade script manage that kind of storage in this exact kind of format?
>> A long time ago people invented RPM/DEB repositories, tools to manage
>> them, and a structure for versioning them. We have Perestroika for that,
>> and we have plans to put all package/mirror related tools in one place (
>> github.com/stackforge/fuel-mirror) and make all these tools available
>> out of Fuel CI. So, users will be able to easily build their own
>> packages, clone necessary repos, and manage them in the way that is
>> standard in the industry. However, that is out of the scope of this
>> letter.
>>
>> I also don't like the idea of putting the MOS repo on the ISO by default
>> because it encourages people to think that the ISO is the way of
>> distributing MOS. The ISO should be nothing more than a way of
>> installing Fuel from scratch. MOS should be distributed via MOS repos.
>> Fuel is available as an RPM package in the RPM MOS repo.
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
>> wrote:
>>
>>> Mike,
>>>
>>> > still not exactly true for some large enterprises. Due to all the
>>> security, etc.,
>>> > there are sometimes VPNs / proxies / firewalls with very low
>>> throughput.
>>>
>>> It's their problem, and their policies. We can't and shouldn't handle
>>> all possible cases. If some enterprise has a "no Internet" policy, I
>>> bet it won't be a problem for their IT guys to create an intranet
>>> mirror for MOS packages. Moreover, I also bet they already have a
>>> mirror for Ubuntu or another Linux distribution. So it is basically
>>> about the approach to consuming our mirrors.
>>>
>>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>>> wrote:
>>> > Folks
>>> >
>>> > I think Mike is completely right here - we need an option to build an
>>> > all-in-one ISO which can be tried out/deployed unattended without
>>> > internet access. Let's let a user make the choice about what he
>>> > wants, not push him into an embarrassing situation. We still have
>>> > many parts of Fuel which make choices for the user that cannot be
>>> > overridden. Let's not pretend that we know more than the user does
>>> > about his environment.
>>> >
>>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com>
>>> > wrote:
>>> >>
>>> >> The reason people want offline deployment feature is not because of
>>> poor
>>> >> connection, but rather the enterprise intranets where getting subnet
>>> with
>>> >> external access sometimes is a real pain in various body parts.
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Oleg Gelbukh
>>> >>
>>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
>>> ikalnitsky at mirantis.com>
>>> >> wrote:
>>> >>>
>>> >>> Hello,
>>> >>>
>>> >>> I agree with Vladimir - the idea of online repos is the right way
>>> >>> to go. In 2015 I believe we can ignore this "poor Internet
>>> >>> connection" reason, and simplify both Fuel and the UX. Moreover,
>>> >>> take a look at Linux distributions - most of them fetch needed
>>> >>> packages from the Internet during installation, not from CD/DVD.
>>> >>> Netboot installers are popular; I can't even remember the last time
>>> >>> I installed my Debian from DVD-1 - I have used the netboot
>>> >>> installer for years.
>>> >>>
>>> >>> Thanks,
>>> >>> Igor
>>> >>>
>>> >>>
>>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>>> wrote:
>>> >>> >
>>> >>> >
>>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <
>>> aschultz at mirantis.com>
>>> >>> > wrote:
>>> >>> >>
>>> >>> >>
>>> >>> >> Hey Vladimir,
>>> >>> >>
>>> >>> >>>
>>> >>> >>>
>>> >>> >>>>>
>>> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>>> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
>>> >>> >>>>> understand,
>>> >>> >>>>> easier to troubleshoot
>>> >>> >>>>> 2) If one wants to have a local mirror, the flow is the same
>>> >>> >>>>> as in the case of upstream repos (fuel-createmirror), which
>>> >>> >>>>> is clear for a user to understand.
>>> >>> >>>>
>>> >>> >>>>
>>> >>> >>>> From the issues I've seen, fuel-createmirror isn't very
>>> >>> >>>> straightforward and has some issues, making it a bad UX.
>>> >>> >>>
>>> >>> >>>
>>> >>> >>> I'd say the whole approach of having such a tool as
>>> >>> >>> fuel-createmirror is way too naive. A reliable internet
>>> >>> >>> connection is totally a matter of network engineering rather
>>> >>> >>> than deployment. Even using a proxy is much better than
>>> >>> >>> creating a local mirror. But this discussion is totally out of
>>> >>> >>> the scope of this letter. Currently, we have fuel-createmirror
>>> >>> >>> and it is pretty straightforward (installed as an rpm, with
>>> >>> >>> just a couple of command line options). The quality of this
>>> >>> >>> script is also out of the scope of this thread. BTW, we have
>>> >>> >>> plans to improve it.
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>>> types
>>> >>> >> of
>>> >>> >> things as they should go into the decision making process.
>>> >>> >>
>>> >>> >>
>>> >>> >>>
>>> >>> >>>>>
>>> >>> >>>>>
>>> >>> >>>>> Many people still associate ISO with MOS, but it is not true
>>> when
>>> >>> >>>>> using
>>> >>> >>>>> package based delivery approach.
>>> >>> >>>>>
>>> >>> >>>>> It is easy to define necessary repos during deployment and
>>> thus it
>>> >>> >>>>> is
>>> >>> >>>>> easy to control what exactly is going to be installed on slave
>>> >>> >>>>> nodes.
>>> >>> >>>>>
>>> >>> >>>>> What do you guys think of it?
>>> >>> >>>>>
>>> >>> >>>>>
>>> >>> >>>>
>>> >>> >>>> Reliance on internet connectivity has been an issue since 6.1.
>>> For
>>> >>> >>>> many
>>> >>> >>>> large users, complete access to the internet is not available
>>> or not
>>> >>> >>>> desired.  If we want to continue down this path, we need to
>>> improve
>>> >>> >>>> the
>>> >>> >>>> tools to setup the local mirror and properly document what
>>> >>> >>>> urls/ports/etc
>>> >>> >>>> need to be available for the installation of openstack and any
>>> >>> >>>> mirror
>>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>>> >>> >>>> similar to a
>>> >>> >>>> live cd that allows a user to completely try out fuel wherever
>>> they
>>> >>> >>>> want
>>> >>> >>>> without further requirements of internet access.  If we don't
>>> want
>>> >>> >>>> to
>>> >>> >>>> continue with that, we need to do a better job around providing
>>> the
>>> >>> >>>> tools
>>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>>> >>> >>>> providing a net-only iso and an all-included iso would be a
>>> >>> >>>> better solution, so people will have their expectations
>>> >>> >>>> properly set up front?
>>> >>> >>>
>>> >>> >>>
>>> >>> >>> Let me explain why I think having local MOS mirror by default is
>>> bad:
>>> >>> >>> 1) I don't see any reason why we should treat MOS  repo other way
>>> >>> >>> than
>>> >>> >>> all other online repos. A user sees on the settings tab the list
>>> of
>>> >>> >>> repos
>>> >>> >>> one of which is local by default while others are online. It can
>>> make
>>> >>> >>> user a
>>> >>> >>> little bit confused, can't it? A user can be also confused by the
>>> >>> >>> fact, that
>>> >>> >>> some of the repos can be cloned locally by fuel-createmirror
>>> while
>>> >>> >>> others
>>> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> I agree. The process should be the same and it should be just
>>> another
>>> >>> >> repo. It doesn't mean we can't include a version on an ISO as
>>> part of
>>> >>> >> a
>>> >>> >> release.  Would it be better to provide the mirror on the ISO but
>>> not
>>> >>> >> have
>>> >>> >> it enabled by default for a release so that we can gather user
>>> >>> >> feedback on
>>> >>> >> this? This would include improved documentation and possibly
>>> allowing
>>> >>> >> a user
>>> >>> >> to choose their preference so we can collect metrics?
>>> >>> >>
>>> >>> >>
>>> >>> >>> 2) Having local MOS mirror by default makes things much more
>>> >>> >>> convoluted.
>>> >>> >>> We are forced to have several directories with predefined names
>>> and
>>> >>> >>> we are
>>> >>> >>> forced to manage these directories in nailgun, in upgrade script,
>>> >>> >>> etc. Why?
>>> >>> >>> 3) When putting MOS mirror on ISO, we make people think that ISO
>>> is
>>> >>> >>> equal
>>> >>> >>> to MOS, which is not true. It is possible to implement really
>>> >>> >>> flexible
>>> >>> >>> delivery scheme, but we need to think of these things as they are
>>> >>> >>> independent.
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> I'm not sure what you mean by this. Including a point in time
>>> copy on
>>> >>> >> an
>>> >>> >> ISO as a release is a common method of distributing software. Is
>>> this
>>> >>> >> a
>>> >>> >> messaging thing that needs to be addressed? Perhaps I'm not
>>> familiar
>>> >>> >> with
>>> >>> >> people referring to the ISO as being MOS.
>>> >>> >>
>>> >>> >>
>>> >>> >>> For large users it is easy to build custom ISO and put there what
>>> >>> >>> they
>>> >>> >>> need, but first we need to have a simple working scheme that is
>>> >>> >>> clear for everyone. I think dealing with all repos the same way is
>>> >>> >>> what is going to make things simpler.
>>> >>> >>>
>>> >>> >>
>>> >>> >>
>>> >>> >> Who is going to build a custom ISO? How does one request that?
>>> What
>>> >>> >> resources are consumed by custom ISO creation process/request?
>>> Does
>>> >>> >> this
>>> >>> >> scale?
>>> >>> >>
>>> >>> >>
>>> >>> >>>
>>> >>> >>> This thread is not about internet connectivity, it is about
>>> aligning
>>> >>> >>> things.
>>> >>> >>>
>>> >>> >>
>>> >>> >> You are correct in that this thread is not explicitly about
>>> internet
>>> >>> >> connectivity, but they are related. Any changes to remove a local
>>> >>> >> repository
>>> >>> >> and only provide an internet based solution makes internet
>>> >>> >> connectivity
>>> >>> >> something that needs to be included in the discussion.  I just
>>> want to
>>> >>> >> make
>>> >>> >> sure that we properly evaluate this decision based on end user
>>> >>> >> feedback not
>>> >>> >> because we don't want to manage this from a developer standpoint.
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> >  +1. Whatever the changes are, please keep Fuel as a tool that can
>>> >>> > deploy without Internet access; this is part of the reason people
>>> >>> > like it, and why it's better than other tools.
>>> >>> >>
>>> >>> >>
>>> >>> >> -Alex
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> __________________________________________________________________________
>>> >>> >> OpenStack Development Mailing List (not for usage questions)
>>> >>> >> Unsubscribe:
>>> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>> >>
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> > --
>>> >>> > Yaguang Tang
>>> >>> > Technical Support, Mirantis China
>>> >>> >
>>> >>> > Phone: +86 15210946968
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> >
>>> __________________________________________________________________________
>>> >>> > OpenStack Development Mailing List (not for usage questions)
>>> >>> > Unsubscribe:
>>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>> >
>>> >>>
>>> >>>
>>> >>>
>>> __________________________________________________________________________
>>> >>> OpenStack Development Mailing List (not for usage questions)
>>> >>> Unsubscribe:
>>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >>
>>> >>
>>> >>
>>> __________________________________________________________________________
>>> >> OpenStack Development Mailing List (not for usage questions)
>>> >> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Yours Faithfully,
>>> > Vladimir Kuklin,
>>> > Fuel Library Tech Lead,
>>> > Mirantis, Inc.
>>> > +7 (495) 640-49-04
>>> > +7 (926) 702-39-68
>>> > Skype kuklinvv
>>> > 35bk3, Vorontsovskaya Str.
>>> > Moscow, Russia,
>>> > www.mirantis.com
>>> > www.mirantis.ru
>>> > vkuklin at mirantis.com
>>> >
>>> >
>>> __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/e5e98450/attachment-0001.html>

From sean at dague.net  Thu Sep 10 14:44:20 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 10 Sep 2015 10:44:20 -0400
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
Message-ID: <55F19744.6020503@dague.net>

The pyeclib 1.0.9 release has broken the gate because Swift is in the
default grenade upgrade jobs, and Swift stable/kilo allows 1.0.9 (which
doesn't compile correctly with a pip install).

We're working to pin requirements in kilo/juno right now, but anything
that has a grenade job is going to fail until these land.
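For anyone blocked locally in the meantime, the fix amounts to capping the version so pip never picks up 1.0.9. A rough sketch of what a stable-branch requirements cap looks like (the exact specifier is whatever the pending pin reviews land with, so treat this as illustrative only):

```
# stable requirements sketch (illustrative - see the pending reviews for the real text)
pyeclib>=1.0.7,!=1.0.9
```

Locally, installing with a constraint like 'pyeclib<1.0.9' works around the broken sdist until the pins merge.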

	-Sean

-- 
Sean Dague
http://dague.net


From major at mhtx.net  Thu Sep 10 14:54:20 2015
From: major at mhtx.net (Major Hayden)
Date: Thu, 10 Sep 2015 09:54:20 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
Message-ID: <55F1999C.4020509@mhtx.net>

Hey there,

I've been looking for some ways to harden the systems that are deployed by os-ansible-deployment (soon to be openstack-ansible?) and I've been using the Center for Internet Security (CIS)[1] benchmarks as a potential pathway for that.  There are benchmarks available for various operating systems and applications there.

Many of the items shown there fall into a few different categories:

  1) things OSAD should configure
  2) things deployers should configure
  3) things nobody should configure (they break the deployment, for example)

#3 is often quite obvious, but #1 and #2 are a bit more nebulous.  For example, I opened a ticket[2] about getting auditd installed by default with openstack-ansible.  My gut says that many deployers could use auditd since it collects denials from AppArmor and that can help with troubleshooting broken policies.

Also, I opened another ticket[3] for compressing all logs by default.  This affects availability (part of the information security CIA triad[4]) in a fairly critical way in the long term.
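For a concrete sense of scale, both changes are small. A sketch, assuming Ubuntu hosts (the package name and logrotate fragment are illustrative, not necessarily what the patches will merge):

```
# /etc/logrotate.conf fragment - compress rotated logs (bug sketch):
compress
delaycompress

# Install auditd so AppArmor denials are captured (bug sketch):
#   apt-get install -y auditd
```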

My question is this: How should I go about determining which security changes should go upstream and which should go into documentation as things deployers should do locally?


[1] https://benchmarks.cisecurity.org/
[2] https://bugs.launchpad.net/openstack-ansible/+bug/1491915
[3] https://bugs.launchpad.net/openstack-ansible/+bug/1493981
[4] https://en.wikipedia.org/wiki/Information_security#Key_concepts

--
Major Hayden


From mriedem at linux.vnet.ibm.com  Thu Sep 10 14:54:06 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 10 Sep 2015 09:54:06 -0500
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <55F19744.6020503@dague.net>
References: <55F19744.6020503@dague.net>
Message-ID: <55F1998E.9060402@linux.vnet.ibm.com>



On 9/10/2015 9:44 AM, Sean Dague wrote:
> The pyeclib 1.0.9 release has broken the gate because Swift is in the
> default grenade upgrade jobs, and Swift stable/kilo allows 1.0.9 (which
> doesn't compile correctly with a pip install).
>
> We're working to pin requirements in kilo/juno right now, but anything
> that has a grenade job is going to fail until these land.
>
> 	-Sean
>

This is the LP bug we're tracking:

https://bugs.launchpad.net/openstack-gate/+bug/1494347

These are the immediate fixes:

Juno g-r cap: https://review.openstack.org/#/c/222221/

Kilo g-r pin: https://review.openstack.org/#/c/222218/

Upstream bug reported:

https://bitbucket.org/kmgreen2/pyeclib/issues/76/pyeclib-109-fails-to-install-if-not

-- 

Thanks,

Matt Riedemann



From vkuklin at mirantis.com  Thu Sep 10 15:18:29 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 10 Sep 2015 18:18:29 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAFLqvG6gF-OT5a-3Px8nHy-w=Y_18shOVJ0fgap8LFMqk7pZ3g@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
 <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>
 <CAFLqvG6gF-OT5a-3Px8nHy-w=Y_18shOVJ0fgap8LFMqk7pZ3g@mail.gmail.com>
Message-ID: <CAHAWLf2OdSt-gjM5iHqUZeid7f+Wo6TZEjJoFosLCGWQO2qvHQ@mail.gmail.com>

Folks

I guess I need to get you on-site to deploy something at one of our users'
datacenters. I do want to be able to download an ISO which contains all
the packages. This may not be the primary artifact of our software suite, but
we need the ability to build a full ISO with ALL components.
Please do not narrow down our feature set just by assuming that users do
not need something because we are reluctant to implement it. Just believe
me - users need this capability in a lot of deployment cases. It is not
hard to implement. We do not need to make this the default option, but
we need to have it. That is my point.

On Thu, Sep 10, 2015 at 5:40 PM, Vladimir Kozhukalov <
vkozhukalov at mirantis.com> wrote:

> Vladimir,
>
> * We don't have a full ISO anyway.
> * We don't require creating a mirror. When you launch your browser, do you
> need a local mirror of the Internet? Probably not. The same applies
> here. An Internet connection is a common requirement nowadays, but if you
> don't have one, you definitely need to have a kind of local copy.
>
> Vladimir Kozhukalov
>
> On Thu, Sep 10, 2015 at 4:17 PM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
>
>> Igor
>>
>> Having poor access to the internet is a regular use case which we must
>> support. This is not a crazy requirement. Not having a full ISO makes cloud
>> setup harder to complete. Moreover, a hard requirement to create
>> a mirror will deter newcomers. I can say that if I were a user and saw a
>> requirement to create a mirror, I would not try the product, compared to
>> the case when I can get a full ISO with all the stuff I need.
>>
>> On Thu, Sep 10, 2015 at 4:06 PM, Vladimir Kozhukalov <
>> vkozhukalov at mirantis.com> wrote:
>>
>>> Guys,
>>>
>>> I really appreciate your opinions on whether Fuel should be all
>>> inclusive or not. But the original topic of this thread is different. I
>>> personally think that in 2015 it is not a big deal to make the master node
>>> able to access any online host (even taking into account paranoid security
>>> policies). It is just a matter of network engineering. But it is completely
>>> out of the scope. What I am suggesting is to align the way how we treat
>>> different repos, whether upstream or MOS. What I am working on right now is
>>> I am trying to make Fuel build and delivery approach really flexible. That
>>> means we need to have as little non-standard ways/hacks/approaches/options
>>> as possible.
>>>
>>> > Why can't we make this optional in the build system? It should be easy
>>> to implement, isn't it?
>>>
>>> That is exactly what I am trying to do (make it optional). But I don't
>>> want it to be yet another boolean variable for this particular thing (MOS
>>> repo). We have a working approach for dealing with repos. Repos can be
>>> either online or local mirrors. We have a tool for making local mirrors
>>> (fuel-createmirror). Even if we put MOS on the ISO, a user still cannot
>>> deploy OpenStack, because the upstream repos still need to be available.
>>> Anyway, the user is still forced to do some additional actions. Again, we
>>> have plans to improve the quality and UX of fuel-createmirror.
>>>
>>> Yet another thing I don't want to be on the master node is a bunch of
>>> MOS repos directories named like
>>> /var/www/nailgun/2015.1-7.0
>>> /var/www/nailgun/2014.4-6.1
>>> with links like
>>> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
>>> What does this link mean? Even Fuel developers can be confused. It is
>>> scary to imagine what users think of it :-) Why should Nailgun and upgrade
>>> script manage that kind of storage in this exact kind of format? A long
>>> time ago people invented RPM/DEB repositories, tools to manage them and
>>> structure for versioning them. We have Perestoika for that and we have
>>> plans to put all package/mirror related tools in one place (
>>> github.com/stackforge/fuel-mirror) and make all these tools available
>>> out of Fuel CI. So, users will be able to easily build their own packages,
>>> clone necessary repos and manage them in the way which is standard in the
>>> industry. However, it is out of the scope of the letter.
>>>
>>> I also don't like the idea of putting MOS repo on the ISO by default
>>> because it encourages people to think that the ISO is the way of distributing MOS.
>>> ISO should be nothing more than just a way of installing Fuel from scratch.
>>> MOS should be distributed via MOS repos. Fuel is available as RPM package
>>> in RPM MOS repo.
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com
>>> > wrote:
>>>
>>>> Mike,
>>>>
>>>> > still not exactly true for some large enterprises. Due to all the
>>>> security, etc.,
>>>> > there are sometimes VPNs / proxies / firewalls with very low
>>>> throughput.
>>>>
>>>> It's their problem, and their policies. We can't and shouldn't handle
>>>> all possible cases. If some enterprise has a "no Internet" policy, I bet
>>>> it won't be a problem for their IT guys to create an intranet mirror
>>>> for MOS packages. Moreover, I also bet they already have a mirror for
>>>> Ubuntu or another Linux distribution. So it's basically a question of how
>>>> to consume our mirrors.
>>>>
>>>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>>>> wrote:
>>>> > Folks
>>>> >
>>>> > I think Mike is completely right here - we need an option to build an
>>>> > all-in-one ISO which can be tried out / deployed unattended without
>>>> > internet access. Let's let a user choose what he wants, not push him
>>>> > into an embarrassing situation. We still have many parts of Fuel which
>>>> > make choices for the user that cannot be overridden. Let's not pretend
>>>> > that we know more than the user does about his environment.
>>>> >
>>>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com
>>>> >
>>>> > wrote:
>>>> >>
>>>> >> The reason people want an offline deployment feature is not a poor
>>>> >> connection, but rather the enterprise intranets where getting a subnet
>>>> >> with external access is sometimes a real pain in various body parts.
>>>> >>
>>>> >> --
>>>> >> Best regards,
>>>> >> Oleg Gelbukh
>>>> >>
>>>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
>>>> ikalnitsky at mirantis.com>
>>>> >> wrote:
>>>> >>>
>>>> >>> Hello,
>>>> >>>
>>>> >>> I agree with Vladimir - online repos are the right way to go.
>>>> >>> In 2015 I believe we can ignore this "poor Internet connection"
>>>> >>> reason, and simplify both Fuel and the UX. Moreover, take a look at
>>>> >>> Linux distributions - most of them fetch the needed packages from the
>>>> >>> Internet during installation, not from a CD/DVD. Netboot installers
>>>> >>> are popular; I can't even remember the last time I installed my
>>>> >>> Debian from DVD-1 - I have used the netboot installer for years.
>>>> >>>
>>>> >>> Thanks,
>>>> >>> Igor
>>>> >>>
>>>> >>>
>>>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>>>> wrote:
>>>> >>> >
>>>> >>> >
>>>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <
>>>> aschultz at mirantis.com>
>>>> >>> > wrote:
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> Hey Vladimir,
>>>> >>> >>
>>>> >>> >>>
>>>> >>> >>>
>>>> >>> >>>>>
>>>> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>>>> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
>>>> >>> >>>>> understand,
>>>> >>> >>>>> easier to troubleshoot
>>>> >>> >>>>> 2) If one wants to have local mirror, the flow is the same as
>>>> in
>>>> >>> >>>>> case
>>>> >>> >>>>> of upstream repos (fuel-createmirror), which is clear for a
>>>> user
>>>> >>> >>>>> to
>>>> >>> >>>>> understand.
>>>> >>> >>>>
>>>> >>> >>>>
>>>> >>> >>>> From the issues I've seen,  fuel-createmirror isn't very
>>>> straight
>>>> >>> >>>> forward and has some issues making it a bad UX.
>>>> >>> >>>
>>>> >>> >>>
>>>> >>> >>> I'd say the whole approach of having such tool as
>>>> fuel-createmirror
>>>> >>> >>> is a
>>>> >>> >>> way too naive. Reliable internet connection is totally up to
>>>> network
>>>> >>> >>> engineering rather than deployment. Even using proxy is much
>>>> better
>>>> >>> >>> that
>>>> >>> >>> creating local mirror. But this discussion is totally out of the
>>>> >>> >>> scope of
>>>> >>> >>> this letter. Currently,  we have fuel-createmirror and it is
>>>> pretty
>>>> >>> >>> straightforward (installed as rpm, has just a couple of command
>>>> line
>>>> >>> >>> options). The quality of this script is also out of the scope
>>>> of this
>>>> >>> >>> thread. BTW we have plans to improve it.
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>>>> types
>>>> >>> >> of
>>>> >>> >> things as they should go into the decision making process.
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>>
>>>> >>> >>>>>
>>>> >>> >>>>>
>>>> >>> >>>>> Many people still associate ISO with MOS, but it is not true
>>>> when
>>>> >>> >>>>> using
>>>> >>> >>>>> package based delivery approach.
>>>> >>> >>>>>
>>>> >>> >>>>> It is easy to define necessary repos during deployment and
>>>> thus it
>>>> >>> >>>>> is
>>>> >>> >>>>> easy to control what exactly is going to be installed on slave
>>>> >>> >>>>> nodes.
>>>> >>> >>>>>
>>>> >>> >>>>> What do you guys think of it?
>>>> >>> >>>>>
>>>> >>> >>>>>
>>>> >>> >>>>
>>>> >>> >>>> Reliance on internet connectivity has been an issue since 6.1.
>>>> For
>>>> >>> >>>> many
>>>> >>> >>>> large users, complete access to the internet is not available
>>>> or not
>>>> >>> >>>> desired.  If we want to continue down this path, we need to
>>>> improve
>>>> >>> >>>> the
>>>> >>> >>>> tools to setup the local mirror and properly document what
>>>> >>> >>>> urls/ports/etc
>>>> >>> >>>> need to be available for the installation of openstack and any
>>>> >>> >>>> mirror
>>>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>>>> >>> >>>> similar to a
>>>> >>> >>>> live cd that allows a user to completely try out fuel wherever
>>>> they
>>>> >>> >>>> want
>>>> >>> >>>> without further requirements of internet access.  If we don't
>>>> want
>>>> >>> >>>> to
>>>> >>> >>>> continue with that, we need to do a better job around
>>>> providing the
>>>> >>> >>>> tools
>>>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>>>> >>> >>>> providing a
>>>> >>> >>>> net-only iso and an all-included iso would be a better
>>>> solution so
>>>> >>> >>>> people
>>>> >>> >>>> will have their expectations properly set up front?
>>>> >>> >>>
>>>> >>> >>>
>>>> >>> >>> Let me explain why I think having local MOS mirror by default
>>>> is bad:
>>>> >>> >>> 1) I don't see any reason why we should treat the MOS repo any
>>>> >>> >>> differently than
>>>> >>> >>> all other online repos. A user sees on the settings tab the
>>>> list of
>>>> >>> >>> repos
>>>> >>> >>> one of which is local by default while others are online. It
>>>> can make
>>>> >>> >>> user a
>>>> >>> >>> little bit confused, can't it? A user can be also confused by
>>>> the
>>>> >>> >>> fact, that
>>>> >>> >>> some of the repos can be cloned locally by fuel-createmirror
>>>> while
>>>> >>> >>> others
>>>> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> I agree. The process should be the same and it should be just
>>>> another
>>>> >>> >> repo. It doesn't mean we can't include a version on an ISO as
>>>> part of
>>>> >>> >> a
>>>> >>> >> release.  Would it be better to provide the mirror on the ISO
>>>> but not
>>>> >>> >> have
>>>> >>> >> it enabled by default for a release so that we can gather user
>>>> >>> >> feedback on
>>>> >>> >> this? This would include improved documentation and possibly
>>>> allowing
>>>> >>> >> a user
>>>> >>> >> to choose their preference so we can collect metrics?
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>> 2) Having local MOS mirror by default makes things much more
>>>> >>> >>> convoluted.
>>>> >>> >>> We are forced to have several directories with predefined names
>>>> and
>>>> >>> >>> we are
>>>> >>> >>> forced to manage these directories in nailgun, in upgrade
>>>> script,
>>>> >>> >>> etc. Why?
>>>> >>> >>> 3) When putting MOS mirror on ISO, we make people think that
>>>> ISO is
>>>> >>> >>> equal
>>>> >>> >>> to MOS, which is not true. It is possible to implement really
>>>> >>> >>> flexible
>>>> >>> >>> delivery scheme, but we need to think of these things as they
>>>> are
>>>> >>> >>> independent.
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> I'm not sure what you mean by this. Including a point in time
>>>> copy on
>>>> >>> >> an
>>>> >>> >> ISO as a release is a common method of distributing software. Is
>>>> this
>>>> >>> >> a
>>>> >>> >> messaging thing that needs to be addressed? Perhaps I'm not
>>>> familiar
>>>> >>> >> with
>>>> >>> >> people referring to the ISO as being MOS.
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>> For large users it is easy to build custom ISO and put there
>>>> what
>>>> >>> >>> they
>>>> >>> >>> need, but first we need to have a simple working scheme that is
>>>> >>> >>> clear for everyone. I think dealing with all repos the same way is
>>>> >>> >>> what is going to make things simpler.
>>>> >>> >>>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> Who is going to build a custom ISO? How does one request that?
>>>> What
>>>> >>> >> resources are consumed by custom ISO creation process/request?
>>>> Does
>>>> >>> >> this
>>>> >>> >> scale?
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>>
>>>> >>> >>> This thread is not about internet connectivity, it is about
>>>> aligning
>>>> >>> >>> things.
>>>> >>> >>>
>>>> >>> >>
>>>> >>> >> You are correct in that this thread is not explicitly about
>>>> internet
>>>> >>> >> connectivity, but they are related. Any changes to remove a local
>>>> >>> >> repository
>>>> >>> >> and only provide an internet based solution makes internet
>>>> >>> >> connectivity
>>>> >>> >> something that needs to be included in the discussion.  I just
>>>> want to
>>>> >>> >> make
>>>> >>> >> sure that we properly evaluate this decision based on end user
>>>> >>> >> feedback not
>>>> >>> >> because we don't want to manage this from a developer standpoint.
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> >  +1. Whatever the changes are, please keep Fuel as a tool that can
>>>> >>> > deploy without Internet access; this is part of the reason people
>>>> >>> > like it, and why it's better than other tools.
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> -Alex
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>
>>>> __________________________________________________________________________
>>>> >>> >> OpenStack Development Mailing List (not for usage questions)
>>>> >>> >> Unsubscribe:
>>>> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> >>> >>
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >>> >>
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> > --
>>>> >>> > Yaguang Tang
>>>> >>> > Technical Support, Mirantis China
>>>> >>> >
>>>> >>> > Phone: +86 15210946968
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> __________________________________________________________________________
>>>> >>> > OpenStack Development Mailing List (not for usage questions)
>>>> >>> > Unsubscribe:
>>>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >>> >
>>>> >>>
>>>> >>>
>>>> >>>
>>>> __________________________________________________________________________
>>>> >>> OpenStack Development Mailing List (not for usage questions)
>>>> >>> Unsubscribe:
>>>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>>> __________________________________________________________________________
>>>> >> OpenStack Development Mailing List (not for usage questions)
>>>> >> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >>
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > Yours Faithfully,
>>>> > Vladimir Kuklin,
>>>> > Fuel Library Tech Lead,
>>>> > Mirantis, Inc.
>>>> > +7 (495) 640-49-04
>>>> > +7 (926) 702-39-68
>>>> > Skype kuklinvv
>>>> > 35bk3, Vorontsovskaya Str.
>>>> > Moscow, Russia,
>>>> > www.mirantis.com
>>>> > www.mirantis.ru
>>>> > vkuklin at mirantis.com
>>>> >
>>>> >
>>>> __________________________________________________________________________
>>>> > OpenStack Development Mailing List (not for usage questions)
>>>> > Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com <http://www.mirantis.ru/>
>> www.mirantis.ru
>> vkuklin at mirantis.com
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/326af48c/attachment.html>

From openstack at nemebean.com  Thu Sep 10 15:37:10 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 10 Sep 2015 10:37:10 -0500
Subject: [openstack-dev] [TripleO] Core reviewers for
 python-tripleoclient and tripleo-common
In-Reply-To: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
References: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
Message-ID: <55F1A3A6.9070300@nemebean.com>

On 09/10/2015 09:06 AM, James Slagle wrote:
> TripleO has added a few new repositories, one of which is
> python-tripleoclient[1], the former python-rdomanager-oscplugin.
> 
> With the additional repositories, there is an additional review burden
> on our core reviewers. There is also the fact that folks who have been
> working on the client code for a while when it was only part of RDO
> are not TripleO core reviewers.
> 
> I think we could help with the additional burden of reviews if we made
> two of those people core on python-tripleoclient and tripleo-common
> now.
> 
> Specifically, the folks I'm proposing are:
> Brad P. Crochet <brad at redhat.com>
> Dougal Matthews <dougal at redhat.com> 

+1 to both

> 
> The options I see are:
> - keep just 1 tripleo acl, and add additional folks there, with a good
> faith agreement not to +/-2,+A code that is not from the 2 client
> repos.

+1 to this.

Personally I would hope that anyone who is a core has the necessary
judgment to not +2 things they don't understand, regardless of project
(and vice versa; Brad and Dougal obviously understand TripleO from their
work on the client, so if they +2 a simple patch in another project I'm
not inclined to take them to the woodshed :-).  "TripleO" is a broad
enough thing that there are areas where all of the cores are going to be
weaker or stronger.  I'd rather not have to maintain half a dozen
separate ACLs just to enforce something that should be common sense.

> - create a new gerrit acl in project-config for just these 2 client
> repos, and add folks there as needed. the new acl would also contain
> the existing acl for tripleo core reviewers
> - neither of the above options - don't add these individuals to any
> TripleO core team at this time.
> 
> The first is what was more or less done when Tuskar was brought under
> the TripleO umbrella to avoid splitting the core teams, and it's the
> option I'd prefer.
> 
> TripleO cores, please reply here with your vote from the above
> options. Or, if you have other ideas, you can share those as well :)
> 
> [1] https://review.openstack.org/#/c/215186/
> 



From tim at styra.com  Thu Sep 10 15:41:19 2015
From: tim at styra.com (Tim Hinrichs)
Date: Thu, 10 Sep 2015 15:41:19 +0000
Subject: [openstack-dev] [Congress] Ending feature freeze
Message-ID: <CAJjxPADktX9fM=9KvqkjQ7jYqK3ARpmCsnXSKkkTLGo6MzDv3g@mail.gmail.com>

Hi all,

We're now finished with feature freeze.  We have our first release
candidate and the stable/liberty branch.  So master is once again open for
new features.  A couple of things to note:

1. Documentation.  We should also look through the docs and update them.
Documentation is really important.  There's one doc patch not yet merged,
so be sure to pull that down before editing.  That patch officially
deprecates a number of API calls that don't make sense for the new
distributed architecture.  If you find places where we don't mention the
deprecation, please fix that.

https://review.openstack.org/#/c/220707/

2. Bugs.  We should still all be manually testing, looking for bugs, and
fixing them.  This will be true especially as other projects change their
clients, which as we've seen can break our datasource drivers.

All bug fixes first go into master, and then we cherry-pick to
stable/liberty.  Once you've patched a bug on master and it's been merged,
you'll create another change for your bug-fix and push it to review.  Then
one of the cores will +2/+1 it (usually without needing another formal
round of reviews).  Here's the procedure.

// pull down the latest changes for master
$ git checkout master
$ git pull

// create a local branch for stable/liberty and switch to it
$ git checkout origin/stable/liberty -b stable/liberty

// cherry-pick your change from master onto the local stable/liberty
// The -x records the original <sha1 from master> in the commit msg
$ git cherry-pick -x <sha1 from master>

// Push to review and specify the stable/liberty branch.
// Notice in gerrit that the branch is stable/liberty, not master
$ git review stable/liberty

// Once your change to stable/liberty gets merged, fetch all the new
// changes.

// switch to local version of stable/liberty
$ git checkout stable/liberty

// fetch all the new changes to all the branches
$ git fetch origin

// update your local branch
$ git rebase origin/stable/liberty

Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/fa1bd247/attachment.html>

From Kevin.Fox at pnnl.gov  Thu Sep 10 15:58:53 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 10 Sep 2015 15:58:53 +0000
Subject: [openstack-dev] SOS
In-Reply-To: <12607C27-9E5D-4966-87B3-EA5DE3F419B8@aliyun.com>
References: <254dc482.7202.14fb52c300c.Coremail.jj19931006@163.com>,
 <12607C27-9E5D-4966-87B3-EA5DE3F419B8@aliyun.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F3C9E@EX10MBOX03.pnnl.gov>

Shameless plug:
http://apps.openstack.org

:)

Thanks,
Kevin
________________________________
From: Gongys [gong_ys2004 at aliyun.com]
Sent: Wednesday, September 09, 2015 8:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] SOS

It should be easy: Google for a CoreOS raw or qcow2 image, then import it into OpenStack.

Sent from my iPhone

On Sep 10, 2015, at 10:53, <jj19931006 at 163.com<mailto:jj19931006 at 163.com>> wrote:

Hi:
    I need a CoreOS image that supports Kubernetes; without it, my boss will kick my arse.





__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/0f21327a/attachment.html>

From jpeeler at redhat.com  Thu Sep 10 16:00:25 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Thu, 10 Sep 2015 12:00:25 -0400
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
 multiple compute drivers?
In-Reply-To: <1474881269.44702768.1441851955747.JavaMail.zimbra@redhat.com>
References: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
 <20150909171336.GG21846@jimrollenhagen.com>
 <CALesnTyuK17bUpYuA=9q+_L5TU7xxAF=tdsQmwtPtr+Z1vmt1w@mail.gmail.com>
 <1474881269.44702768.1441851955747.JavaMail.zimbra@redhat.com>
Message-ID: <CALesnTzOP77F6qyvP6KK_evRRKuOw6iZr7jfNcdFJ6btJjhRqg@mail.gmail.com>

On Wed, Sep 9, 2015 at 10:25 PM, Steve Gordon <sgordon at redhat.com> wrote:

> ----- Original Message -----
> > From: "Jeff Peeler" <jpeeler at redhat.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> >
> > I'd greatly prefer using availability zones/host aggregates as I'm trying
> > to keep the footprint as small as possible. It does appear that in the
> > section "configure scheduler to support host aggregates" [1], that I can
> > configure filtering using just one scheduler (right?). However, perhaps
> > more importantly, I'm now unsure with the network configuration changes
> > required for Ironic that deploying normal instances along with baremetal
> > servers is possible.
> >
> > [1]
> >
> http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
>
> Hi Jeff,
>
> I assume your need for a second scheduler is spurred by wanting to enable
> different filters for baremetal vs virt (rather than influencing scheduling
> using the same filters via image properties, extra specs, and boot
> parameters (hints)?
>
> I ask because if not you should be able to use the hypervisor_type image
> property to ensure that images intended for baremetal are directed there
> and those intended for kvm etc. are directed to those hypervisors. The
> documentation [1] doesn't list ironic as a valid value for this property
> but I looked into the code for this a while ago and it seemed like it
> should work... Apologies if you had already considered this.
>
> Thanks,
>
> Steve
>
> [1]
> http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html


I hadn't considered that, thanks. It's still unclear to me, though, whether
a separate compute service is required. And if it is required, how much
segregation is needed to make that work.

Not being a networking guru, I'm also unsure whether the flat network in
the Ironic setup instructions is a requirement or just one sample of a
possible configuration. In a brief out-of-band conversation I had, it did
sound like Ironic can be configured to use linuxbridge too, which I didn't
know was possible.
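For the scheduling half of the question, the matching Steve describes can be sketched outside Nova: a single scheduler keeps both host types, and an image-property check steers each boot request. The snippet below is purely illustrative (it is not Nova's actual ImagePropertiesFilter, and all names are made up), assuming hosts report a hypervisor type and images may carry a hypervisor_type property:

```python
# Minimal sketch (NOT Nova's actual ImagePropertiesFilter) of how an
# image-property-based scheduler filter can separate baremetal from virt
# hosts within a single scheduler. All names are illustrative.

def host_passes(host_hypervisor_type, image_properties):
    """Accept a host only if the image's hypervisor_type matches it.

    Images with no hypervisor_type property are allowed anywhere,
    mirroring the usual "no property, no constraint" filter behavior.
    """
    wanted = image_properties.get("hypervisor_type")
    if wanted is None:
        return True
    return wanted == host_hypervisor_type


# One scheduler sees both host types (hypothetical inventory).
hosts = {"bm-node-1": "ironic", "virt-node-1": "qemu"}

ironic_image = {"hypervisor_type": "ironic"}

# Only the baremetal node survives filtering for the ironic-tagged image.
candidates = [h for h, hv in hosts.items() if host_passes(hv, ironic_image)]
print(candidates)  # ['bm-node-1']
```

With this kind of filtering, untagged images remain schedulable anywhere, which is why tagging both the baremetal and the kvm images explicitly is the safer pattern.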
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/360b5360/attachment.html>

From derekh at redhat.com  Thu Sep 10 16:09:03 2015
From: derekh at redhat.com (Derek Higgins)
Date: Thu, 10 Sep 2015 17:09:03 +0100
Subject: [openstack-dev] [TripleO] Core reviewers for
 python-tripleoclient and tripleo-common
In-Reply-To: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
References: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
Message-ID: <55F1AB1F.7030900@redhat.com>



On 10/09/15 15:06, James Slagle wrote:
> TripleO has added a few new repositories, one of which is
> python-tripleoclient[1], the former python-rdomanager-oscplugin.
>
> With the additional repositories, there is an additional review burden
> on our core reviewers. There is also the fact that folks who have been
> working on the client code for a while when it was only part of RDO
> are not TripleO core reviewers.
>
> I think we could help with the additional burden of reviews if we made
> two of those people core on python-tripleoclient and tripleo-common
> now.
>
> Specifically, the folks I'm proposing are:
> Brad P. Crochet <brad at redhat.com>
> Dougal Matthews <dougal at redhat.com>
>
> The options I see are:
> - keep just 1 tripleo acl, and add additional folks there, with a good
> faith agreement not to +/-2,+A code that is not from the 2 client
> repos.

+1 to doing this, but I would reword the good faith agreement to "not 
to +/-2,+A code that they are not comfortable/familiar with", in other 
words the same agreement I would expect from any other core. In the same 
way I'll not be adding +2 on tripleoclient code until (if ever) I know 
with reasonable confidence I'm not doing something stupid.


> - create a new gerrit acl in project-config for just these 2 client
> repos, and add folks there as needed. the new acl would also contain
> the existing acl for tripleo core reviewers
> - neither of the above options - don't add these individuals to any
> TripleO core team at this time.
>
> The first is what was more or less done when Tuskar was brought under
> the TripleO umbrella to avoid splitting the core teams, and it's the
> option I'd prefer.
>
> TripleO cores, please reply here with your vote from the above
> options. Or, if you have other ideas, you can share those as well :)
>
> [1] https://review.openstack.org/#/c/215186/
>


From prometheanfire at gentoo.org  Thu Sep 10 16:22:24 2015
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Thu, 10 Sep 2015 11:22:24 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <55F1999C.4020509@mhtx.net>
References: <55F1999C.4020509@mhtx.net>
Message-ID: <55F1AE40.5020009@gentoo.org>

On 09/10/2015 09:54 AM, Major Hayden wrote:
> Hey there,
> 
> I've been looking for some ways to harden the systems that are deployed by os-ansible-deployment (soon to be openstack-ansible?) and I've been using the Center for Internet Security (CIS)[1] benchmarks as a potential pathway for that.  There are benchmarks available for various operating systems and applications there.
> 
> Many of the items shown there fall into a few different categories:
> 
>   1) things OSAD should configure
>   2) things deployers should configure
>   3) things nobody should configure (they break the deployment, for example)
> 
> #3 is often quite obvious, but #1 and #2 are a bit more nebulous.  For example, I opened a ticket[2] about getting auditd installed by default with openstack-ansible.  My gut says that many deployers could use auditd since it collects denials from AppArmor and that can help with troubleshooting broken policies.
> 
> Also, I opened another ticket[3] for compressing all logs by default.  This affects availability (part of the information security CIA triad[4]) in a fairly critical way in the long term.
> 
> My question is this: How should I go about determining which security changes should go upstream and which should go into documentation as things deployers should do locally?
> 
> 
> [1] https://benchmarks.cisecurity.org/
> [2] https://bugs.launchpad.net/openstack-ansible/+bug/1491915
> [3] https://bugs.launchpad.net/openstack-ansible/+bug/1493981
> [4] https://en.wikipedia.org/wiki/Information_security#Key_concepts
> 
> --
> Major Hayden
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Sane defaults can't be used?  The two bugs you listed look fine to me as
default things to do.

-- 
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/1bd5c480/attachment.pgp>

From major at mhtx.net  Thu Sep 10 16:33:27 2015
From: major at mhtx.net (Major Hayden)
Date: Thu, 10 Sep 2015 11:33:27 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <55F1AE40.5020009@gentoo.org>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
Message-ID: <55F1B0D7.8070404@mhtx.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 09/10/2015 11:22 AM, Matthew Thode wrote:
> Sane defaults can't be used?  The two bugs you listed look fine to me as
> default things to do.

Thanks, Matthew.  I tend to agree.

I'm wondering if it would be best to make a "punch list" of CIS benchmarks and try to tag them with one of the following:

  * Do this in OSAD
  * Tell deployers how to do this (in docs)
  * Tell deployers not to do this (in docs)

That could be lumped in with a spec/blueprint of some sort.  Would that be beneficial?

- --
Major Hayden
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJV8bDUAAoJEHNwUeDBAR+xEc0P/3S4qQ/U/TET0ag3hwzN0JBv
G3bbcUHhalnGYq12+nX49rF3f0aa3HOOFcWi/hgFFoS4RXQg9vrxnX6AHE5dWP0u
UgqBM8y24CaYXqPoXnfQC/sNbs7gngduKbpTLFqIUXIS+Rye1uOQSnnHuX4iN0US
u1cTQA6Dv04wORvRIAKlKN1kpGoS/WfKUzVQ7bmorCJjm+azBw0amj50Z7KW9w4X
Ci0VBraKSqojUbhvWgD0so7gcfLwrq9eT5pz67xo26df8cpic/LX1UJ9TBBmS97W
YeDxFcubvPviWxHTwxJEnOjHAN4UF2J4sEn8ExwC5UhfG6vOLt97Je6Bt8inTBh4
tTXgfpLrh50B3xk6l1jFEjglaVaSIMLMhirUUALIaJgMcUsWt5F5utcnvp+4+A41
+MKYn/EhGQIHDe/JPa5Yd37TZTwkTW2jthDWb2lkn72sBfC43L/hnIYcKPq7sLVZ
VOP2hSkoMHVT+My8zUBY/m/gcdVJgR9dHDnTPhAts54P4mZg7iOBlRjk+i4YLmSL
0HA4lDiBbpX1wbDIueeDlSDAnQl0PENRrM8fUiJpI0pJC4AOflqQr2r5Bsb6Cz0V
2q/uPgmv0FRup5efjSF2tGTMGAVarijWlqsPSzkGHBt8KVeR0qlgq1Da8qojesdN
gcW2nS0sHcS6Z90t62dJ
=rN2a
-----END PGP SIGNATURE-----


From shardy at redhat.com  Thu Sep 10 16:34:15 2015
From: shardy at redhat.com (Steven Hardy)
Date: Thu, 10 Sep 2015 17:34:15 +0100
Subject: [openstack-dev] [TripleO] Core reviewers for
 python-tripleoclient and tripleo-common
In-Reply-To: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
References: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
Message-ID: <20150910163414.GA16252@t430slt.redhat.com>

On Thu, Sep 10, 2015 at 10:06:31AM -0400, James Slagle wrote:
> TripleO has added a few new repositories, one of which is
> python-tripleoclient[1], the former python-rdomanager-oscplugin.
> 
> With the additional repositories, there is an additional review burden
> on our core reviewers. There is also the fact that folks who have been
> working on the client code for a while when it was only part of RDO
> are not TripleO core reviewers.
> 
> I think we could help with the additional burden of reviews if we made
> two of those people core on python-tripleoclient and tripleo-common
> now.
> 
> Specifically, the folks I'm proposing are:
> Brad P. Crochet <brad at redhat.com>
> Dougal Matthews <dougal at redhat.com>

+1 to both!

> The options I see are:
> - keep just 1 tripleo acl, and add additional folks there, with a good
> faith agreement not to +/-2,+A code that is not from the 2 client
> repos.

+1, as others have mentioned I think a common-sense agreement here will be fine :)

Steve


From nik.komawar at gmail.com  Thu Sep 10 16:39:20 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 10 Sep 2015 12:39:20 -0400
Subject: [openstack-dev]  [Glance] FFE request for parallel scrubbing
Message-ID: <55F1B238.6080707@gmail.com>

Hi,

We had a request for this small spec [1] to be in Liberty. It addresses
concerns around parallel scrubbing performance. The reason for requesting
a spec was that parallel scrubbing can actually hurt performance in some
cases, and a config option helps operators fine-tune that. This will let
operators run the scrubber as their requirements dictate. It is a
borderline bug/spec for those who wish to look into the details.

So, please vote with your thoughts on an FFE for this feature. We will
decide its fate by tomorrow (Friday) EOD.

Thanks for putting it together quickly, Hemanth!

[1] https://review.openstack.org/#/c/222284/

-- 

Thanks,
Nikhil



From sukhdevkapur at gmail.com  Thu Sep 10 16:49:16 2015
From: sukhdevkapur at gmail.com (Sukhdev Kapur)
Date: Thu, 10 Sep 2015 09:49:16 -0700
Subject: [openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint
	announcement
In-Reply-To: <CAG9LJa7LDVijDkS4TU2JL=KJ1OMrbkmhynXVgMcJVBRmzyuxcQ@mail.gmail.com>
References: <CA+wZVHSv87sPpEFj2jC1YM9_WtDfFDEdzZF8tHqYCs=110rGSw@mail.gmail.com>
 <CAG9LJa7LDVijDkS4TU2JL=KJ1OMrbkmhynXVgMcJVBRmzyuxcQ@mail.gmail.com>
Message-ID: <CA+wZVHTWV+NznN2p8tGJeh6byatSDOhcCppXB3-t6MtpW9p7bA@mail.gmail.com>

Hi Gal,

I was hoping you would join us in yesterday's ML2 meeting to discuss this
further. Anyhow, we would love your participation in this activity. Please
check the etherpad and see if you could join us for the sprint. If yes,
please sign up so that the host can make appropriate arrangements.
Regardless, please review the document and provide feedback. Also, see if
you can join next week's meeting.

Thanks
Sukhdev
On Sep 9, 2015 4:50 AM, "Gal Sagie" <gal.sagie at gmail.com> wrote:

> Hi Sukhdev,
>
> The common sync framework is something i was also thinking about for some
> time now.
> I think its a very good idea and would love if i could participate in the
> talks (and hopefully the implementation as well)
>
> Thanks
> Gal.
>
> On Wed, Sep 9, 2015 at 9:46 AM, Sukhdev Kapur <sukhdevkapur at gmail.com>
> wrote:
>
>> Folks,
>>
>> We are planning on having ML2 coding sprint on October 6 through 8, 2015.
>> Some are calling it Liberty late-cycle sprint, others are calling it Mitaka
>> early-cycle sprint.
>>
>> ML2 team has been discussing the issues related to synchronization of the
>> Neutron DB resources with the back-end drivers. Several issues have been
>> reported when multiple ML2 drivers are deployed in scaled HA deployments.
>> The issues surface when either side (Neutron or back-end HW/drivers)
>> restart and resource view gets out of sync. There is no mechanism in
>> Neutron or ML2 plugin which ensures the synchronization of the state
>> between the front-end and back-end. The drivers either end up implementing
>> their own solutions or they dump the issue on the operators to intervene
>> and correct it manually.
>>
>> We plan on utilizing Task Flow to implement the framework in ML2 plugin
>> which can be leveraged by ML2 drivers to achieve synchronization in a
>> simplified manner.
>>
>> There are a couple of additional items on the sprint agenda, which are
>> listed on the etherpad [1]. The details of the venue and schedule are
>> listed on the etherpad as well. The sprint is hosted by Yahoo Inc.
>> Whoever is interested in the topics listed on the etherpad, is welcome to
>> sign up for the sprint and join us in making this reality.
>>
>> Additionally, we will utilize this sprint to formalize the design
>> proposal(s) for the fish bowl session at Tokyo summit [2]
>>
>> Any questions/clarifications, please join us in our weekly ML2 meeting on
>> Wednesday at 1600 UTC (9AM pacific time) at #openstack-meeting-alt
>>
>> Thanks
>> -Sukhdev
>>
>> [1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
>> [2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
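Whatever framework ends up carrying it (TaskFlow or otherwise), the core of the synchronization described in the quoted mail is a reconcile step: diff the Neutron DB's view of resources against the back-end driver's view after a restart and derive the fix-up actions. A minimal, purely illustrative sketch (all names invented, not the actual framework design):

```python
# Sketch of the reconciliation an ML2 sync framework would perform after
# either side restarts: compare resource IDs known to the Neutron DB with
# those present on the back-end, and compute what to create and delete.
# Illustrative only - the real framework must also handle attribute drift.

def reconcile(neutron_view, backend_view):
    """Return (to_create, to_delete) sets of resource IDs."""
    db_ids = set(neutron_view)
    be_ids = set(backend_view)
    to_create = db_ids - be_ids   # in the Neutron DB, missing on the back-end
    to_delete = be_ids - db_ids   # stale on the back-end, gone from Neutron
    return to_create, to_delete


neutron_networks = {"net-1", "net-2"}
backend_networks = {"net-2", "net-3"}   # drifted while the driver was down

create, delete = reconcile(neutron_networks, backend_networks)
print(sorted(create), sorted(delete))  # ['net-1'] ['net-3']
```

The hard part the sprint is tackling is not this diff but running it safely in scaled HA deployments, where multiple drivers and servers may reconcile concurrently.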
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/21d25932/attachment.html>

From shardy at redhat.com  Thu Sep 10 16:53:23 2015
From: shardy at redhat.com (Steven Hardy)
Date: Thu, 10 Sep 2015 17:53:23 +0100
Subject: [openstack-dev] [heat] Backup resources and properties in the
	delete-path
Message-ID: <20150910165322.GB16252@t430slt.redhat.com>

Hi all,

So, I've been battling with $subject for the last few days ref [1][2].

The problem I have is that our TestResource references several properties
in the delete (check_delete_complete) path[4], which it turns out doesn't
work very well if those properties refer to parameters via get_param, and
the parameter in the template is added/removed between updates which
fail[3].

Essentially, the confusing dance we do on update with backup stacks and
backup resources bites us, because the backed-up resource ends up referring
to a parameter which doesn't exist (either in
stack.Stack._delete_backup_stack on stack-delete, or in
update.StackUpdate._remove_backup_resource on stack-update.)

As far as I can tell, referencing properties in the delete path is the main
problem here, and it's something we don't do at all AFAICS in any other
resources - the normal pattern is only to refer to the resource_id in the
delete path, and possibly the resource_data (which will work OK after [5]
lands)

So the question is, is it *ever* valid to reference self.properties
in the delete path?  If the answer is no, can we fix TestResource by e.g
storing the properties in resource_data instead?

If we do expect to allow/support referring to properties in the delete path,
the question becomes how do we make it work with the backup resource update
mangling we do atm?  I've posted a hacky workaround for the delete path in
[2], but I don't yet have a solution for the failure on update in
_remove_backup_resource, is it worth expending the effort to work that out
or is TestResource basically doing the wrong thing?

Any ideas much appreciated, as I'd like to clarify the best path forward
before burning a bunch more time on this :)

Thanks!

Steve

[1] https://review.openstack.org/#/c/205754/
[2] https://review.openstack.org/#/c/222176/
[3] https://bugs.launchpad.net/heat/+bug/1494260
[4] https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/heat/test_resource.py#L209
[5] https://review.openstack.org/#/c/220986/
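For what it's worth, the "store the properties in resource_data" option can be sketched in isolation. This is not Heat code — the data_set/data helpers below only loosely mirror Heat's resource_data API, and all names are illustrative — but it shows how snapshotting property values at create time keeps the delete path independent of template parameters:

```python
# Standalone sketch of stashing property values into per-resource data at
# create time so the delete path never resolves self.properties (and thus
# never hits a removed get_param). NOT Heat code; names are illustrative.

class TestResourceSketch:
    def __init__(self, properties):
        self.properties = properties        # may reference template params
        self._data = {}                     # persisted per-resource data

    def data_set(self, key, value):
        self._data[key] = value

    def data(self):
        return dict(self._data)

    def handle_create(self):
        # Snapshot everything the delete path will need *now*, while the
        # parameters backing self.properties still exist.
        self.data_set("action_wait_secs",
                      self.properties.get("action_wait_secs", 0))

    def check_delete_complete(self):
        # Reads only resource_data, never self.properties, so a parameter
        # removed between updates cannot break deletion.
        return self.data()["action_wait_secs"] == 0


r = TestResourceSketch({"action_wait_secs": 0})
r.handle_create()
r.properties = None  # simulate the backing parameter disappearing
print(r.check_delete_complete())  # True
```

The trade-off is that property changes on update would also have to refresh the stored data, but that happens while the parameters are still resolvable, which is exactly the point.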


From prometheanfire at gentoo.org  Thu Sep 10 16:57:14 2015
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Thu, 10 Sep 2015 11:57:14 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <55F1B0D7.8070404@mhtx.net>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net>
Message-ID: <55F1B66A.7040303@gentoo.org>

On 09/10/2015 11:33 AM, Major Hayden wrote:
> On 09/10/2015 11:22 AM, Matthew Thode wrote:
>> Sane defaults can't be used?  The two bugs you listed look fine to me as
>> default things to do.
> 
> Thanks, Matthew.  I tend to agree.
> 
> I'm wondering if it would be best to make a "punch list" of CIS benchmarks and try to tag them with one of the following:
> 
>   * Do this in OSAD
>   * Tell deployers how to do this (in docs)
>   * Tell deployers not to do this (in docs)
> 
> That could be lumped in with a spec/blueprint of some sort.  Would that be beneficial?
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

I think that'd work; it'd also allow discussion of whether something
should be in each section as well.

-- 
Matthew Thode


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/a2ccf80e/attachment.pgp>

From dpyzhov at mirantis.com  Thu Sep 10 16:58:06 2015
From: dpyzhov at mirantis.com (Dmitry Pyzhov)
Date: Thu, 10 Sep 2015 19:58:06 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAHAWLf2OdSt-gjM5iHqUZeid7f+Wo6TZEjJoFosLCGWQO2qvHQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
 <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>
 <CAFLqvG6gF-OT5a-3Px8nHy-w=Y_18shOVJ0fgap8LFMqk7pZ3g@mail.gmail.com>
 <CAHAWLf2OdSt-gjM5iHqUZeid7f+Wo6TZEjJoFosLCGWQO2qvHQ@mail.gmail.com>
Message-ID: <3002A0DF-8E57-4652-B095-126E03EF05E7@mirantis.com>

Guys,

looks like you've started to talk about different things. As I see it, the original proposal was: stop treating the MOS DEB repo as a special case and use the same flow for all repos. Your use case does not contradict it.

Moreover, it requires a standard flow for all repos. The "put everything on the ISO" use case should be implemented as a new feature. It is a matter of running the fuel-createmirror script during the ISO build and using its output during master node deployment. It should definitely treat the mirror as a single object, and that object should be compatible with the output of fuel-createmirror.

> On 10 Sep 2015, at 18:18, Vladimir Kuklin <vkuklin at mirantis.com> wrote:
> 
> Folks
> 
> I guess I need to get you on-site to deploy something at our user's datacenter. I do want to be able to download an ISO which contains all packages. This may not be the primary artifact of our software suite, but we need the ability to build a full ISO with ALL components. Please do not narrow down our feature set just by assuming that users do not need something because we are reluctant to implement it. Just believe me - users need this option in a lot of deployment cases. It is not hard to implement. We do not need to make this the default option, but we need to have it. That is my point.
> 
> On Thu, Sep 10, 2015 at 5:40 PM, Vladimir Kozhukalov <vkozhukalov at mirantis.com <mailto:vkozhukalov at mirantis.com>> wrote:
> Vladimir,
> 
> * We don't have full ISO anyway
> * We don't require you to create a mirror. When you launch your browser, do you need a mirror of the Internet locally? Probably not. The same applies here. An Internet connection is a common requirement nowadays, but if you don't have one, you definitely need some kind of local copy.
> 
> Vladimir Kozhukalov
> 
> On Thu, Sep 10, 2015 at 4:17 PM, Vladimir Kuklin <vkuklin at mirantis.com <mailto:vkuklin at mirantis.com>> wrote:
> Igor
> 
> Having poor access to the Internet is a regular use case which we must support. This is not a crazy requirement. Not having a full ISO makes cloud setup harder to complete. Even more, a hard requirement to create a mirror will deter newcomers. I can say that if I were a user and saw a requirement to create a mirror, I would not try the product, compared to the case where I can get a full ISO with all the stuff I need.
> 
> On Thu, Sep 10, 2015 at 4:06 PM, Vladimir Kozhukalov <vkozhukalov at mirantis.com <mailto:vkozhukalov at mirantis.com>> wrote:
> Guys, 
> 
> I really appreciate your opinions on whether Fuel should be all-inclusive or not. But the original topic of this thread is different. I personally think that in 2015 it is not a big deal to make the master node able to access any online host (even taking into account paranoid security policies). It is just a matter of network engineering. But that is completely out of scope here. What I am suggesting is to align the way we treat different repos, whether upstream or MOS. What I am working on right now is making the Fuel build and delivery approach really flexible. That means we need as few non-standard ways/hacks/approaches/options as possible.
> 
> > Why can't we make this optional in the build system? It should be easy to implement, is not it?
> 
> That is exactly what I am trying to do (make it optional). But I don't want it to be yet another boolean variable for this particular thing (the MOS repo). We have a working approach for dealing with repos: they can be either online repos or local mirrors. We have a tool for making local mirrors (fuel-createmirror). Even if we put MOS on the ISO, a user still cannot deploy OpenStack, because he/she still needs upstream to be available. Either way, the user is forced to take some additional actions. Again, we have plans to improve the quality and UX of fuel-createmirror.
> 
> Yet another thing I don't want on the master node is a bunch of MOS repo directories named like 
> /var/www/nailgun/2015.1-7.0 
> /var/www/nailgun/2014.4-6.1 
> with links like
> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
> What does this link mean? Even Fuel developers can be confused. It is scary to imagine what users think of it :-) Why should Nailgun and the upgrade script manage that kind of storage in this exact format? A long time ago people invented RPM/DEB repositories, tools to manage them, and structures for versioning them. We have Perestroika for that, and we have plans to put all package/mirror related tools in one place (github.com/stackforge/fuel-mirror <http://github.com/stackforge/fuel-mirror>) and make all these tools available outside of Fuel CI. So, users will be able to easily build their own packages, clone the necessary repos and manage them in the way that is standard in the industry. However, that is out of the scope of this letter. 
> 
> I also don't like the idea of putting the MOS repo on the ISO by default because it encourages people to think that the ISO is the way of distributing MOS. The ISO should be nothing more than a way of installing Fuel from scratch. MOS should be distributed via the MOS repos. Fuel is available as an RPM package in the RPM MOS repo.
> 
> 
> 
> 
> 
>  
> 
> Vladimir Kozhukalov
> 
> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <ikalnitsky at mirantis.com <mailto:ikalnitsky at mirantis.com>> wrote:
> Mike,
> 
> > still not exactly true for some large enterprises. Due to all the security, etc.,
> > there are sometimes VPNs / proxies / firewalls with very low throughput.
> 
> It's their problem, and their policies. We can't and shouldn't handle
> all possible cases. If some enterprise has a "no Internet" policy, I bet
> it won't be a problem for their IT guys to create an intranet mirror
> for MOS packages. Moreover, I also bet they already have a mirror for
> Ubuntu or another Linux distribution. So it is basically a question of
> how to consume our mirrors.
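[Editorial note: consuming such an intranet mirror on a node amounts to a single sources-list entry. The host name and suite below are invented for illustration; they are not real MOS repo coordinates.]

```shell
# Hypothetical apt source pointing at an internal MOS mirror.
# mirror.corp.example and the "mos7.0" suite are illustrative names only.
cat > /tmp/mos-demo.list <<'EOF'
deb http://mirror.corp.example/mos/ubuntu mos7.0 main restricted
EOF
grep -c '^deb ' /tmp/mos-demo.list   # prints 1
```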
> 
> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <vkuklin at mirantis.com <mailto:vkuklin at mirantis.com>> wrote:
> > Folks
> >
> > I think Mike is completely right here - we need an option to build an
> > all-in-one ISO which can be tried out/deployed unattended without internet
> > access. Let's let a user choose what he wants, not push him into an
> > embarrassing situation. We still have many parts of Fuel which make choices
> > for the user that cannot be overridden. Let's not pretend that we know more than
> > the user does about his environment.
> >
> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <ogelbukh at mirantis.com <mailto:ogelbukh at mirantis.com>>
> > wrote:
> >>
> >> The reason people want an offline deployment feature is not because of a poor
> >> connection, but rather the enterprise intranets where getting a subnet with
> >> external access is sometimes a real pain in various body parts.
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <ikalnitsky at mirantis.com <mailto:ikalnitsky at mirantis.com>>
> >> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I agree with Vladimir - the idea of online repos is the right way to
> >>> go. In 2015 I believe we can ignore this "poor Internet connection"
> >>> argument, and simplify both Fuel and the UX. Moreover, take a look at Linux
> >>> distributions - most of them fetch the needed packages from the Internet
> >>> during installation, not from a CD/DVD. Netboot installers are
> >>> popular; I can't even remember the last time I installed my
> >>> Debian from DVD-1 - I have used the netboot installer for years.
> >>>
> >>> Thanks,
> >>> Igor
> >>>
> >>>
> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com <mailto:ytang at mirantis.com>> wrote:
> >>> >
> >>> >
> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <aschultz at mirantis.com <mailto:aschultz at mirantis.com>>
> >>> > wrote:
> >>> >>
> >>> >>
> >>> >> Hey Vladimir,
> >>> >>
> >>> >>>
> >>> >>>
> >>> >>>>>
> >>> >>>>> 1) There won't be such things as [1] and [2], thus a less
> >>> >>>>> complicated flow, fewer errors, easier to maintain, easier to
> >>> >>>>> understand,
> >>> >>>>> easier to troubleshoot
> >>> >>>>> 2) If one wants to have a local mirror, the flow is the same as in
> >>> >>>>> the case
> >>> >>>>> of upstream repos (fuel-createmirror), which is clear for a user
> >>> >>>>> to
> >>> >>>>> understand.
> >>> >>>>
> >>> >>>>
> >>> >>>> From the issues I've seen, fuel-createmirror isn't very
> >>> >>>> straightforward and has some issues that make for a bad UX.
> >>> >>>
> >>> >>>
> >>> >>> I'd say the whole approach of having a tool such as fuel-createmirror
> >>> >>> is
> >>> >>> way too naive. A reliable internet connection is a matter of network
> >>> >>> engineering rather than deployment. Even using a proxy is much better
> >>> >>> than
> >>> >>> creating a local mirror. But this discussion is totally out of the
> >>> >>> scope of
> >>> >>> this letter. Currently, we have fuel-createmirror and it is pretty
> >>> >>> straightforward (installed as an rpm, it has just a couple of command line
> >>> >>> options). The quality of this script is also out of the scope of this
> >>> >>> thread. BTW we have plans to improve it.
> >>> >>
> >>> >>
> >>> >>
> >>> >> Fair enough, I just wanted to raise the UX issues around these types
> >>> >> of
> >>> >> things as they should go into the decision making process.
> >>> >>
> >>> >>
> >>> >>>
> >>> >>>>>
> >>> >>>>>
> >>> >>>>> Many people still associate the ISO with MOS, but that is not true when
> >>> >>>>> using a
> >>> >>>>> package-based delivery approach.
> >>> >>>>>
> >>> >>>>> It is easy to define necessary repos during deployment and thus it
> >>> >>>>> is
> >>> >>>>> easy to control what exactly is going to be installed on slave
> >>> >>>>> nodes.
> >>> >>>>>
> >>> >>>>> What do you guys think of it?
> >>> >>>>>
> >>> >>>>>
> >>> >>>>
> >>> >>>> Reliance on internet connectivity has been an issue since 6.1. For
> >>> >>>> many
> >>> >>>> large users, complete access to the internet is not available or not
> >>> >>>> desired.  If we want to continue down this path, we need to improve
> >>> >>>> the
> >>> >>>> tools to set up the local mirror and properly document which
> >>> >>>> urls/ports/etc
> >>> >>>> need to be available for the installation of OpenStack and any
> >>> >>>> mirror
> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD,
> >>> >>>> similar to a
> >>> >>>> live CD, that allows a user to completely try out Fuel wherever they
> >>> >>>> want
> >>> >>>> without further requirements of internet access.  If we don't want
> >>> >>>> to
> >>> >>>> continue with that, we need to do a better job of providing the
> >>> >>>> tools
> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
> >>> >>>> providing a
> >>> >>>> net-only ISO and an all-included ISO would be a better solution, so
> >>> >>>> people
> >>> >>>> will have their expectations properly set up front?
> >>> >>>
> >>> >>>
> >>> >>> Let me explain why I think having a local MOS mirror by default is bad:
> >>> >>> 1) I don't see any reason why we should treat the MOS repo differently
> >>> >>> from
> >>> >>> all the other online repos. A user sees on the settings tab a list of
> >>> >>> repos,
> >>> >>> one of which is local by default while the others are online. It can make
> >>> >>> the user a
> >>> >>> little bit confused, can't it? A user can also be confused by the
> >>> >>> fact that
> >>> >>> some of the repos can be cloned locally by fuel-createmirror while
> >>> >>> others
> >>> >>> can't. That is not straightforward, and it is not a good UX.
> >>> >>
> >>> >>
> >>> >>
> >>> >> I agree. The process should be the same and it should be just another
> >>> >> repo. It doesn't mean we can't include a version on an ISO as part of
> >>> >> a
> >>> >> release.  Would it be better to provide the mirror on the ISO but not
> >>> >> have
> >>> >> it enabled by default for a release so that we can gather user
> >>> >> feedback on
> >>> >> this? This would include improved documentation and possibly allowing
> >>> >> a user
> >>> >> to choose their preference so that we can collect metrics.
> >>> >>
> >>> >>
> >>> >>> 2) Having a local MOS mirror by default makes things much more
> >>> >>> convoluted.
> >>> >>> We are forced to have several directories with predefined names, and
> >>> >>> we are
> >>> >>> forced to manage these directories in Nailgun, in the upgrade script,
> >>> >>> etc. Why?
> >>> >>> 3) When putting a MOS mirror on the ISO, we make people think that ISO is
> >>> >>> equal
> >>> >>> to MOS, which is not true. It is possible to implement a really
> >>> >>> flexible
> >>> >>> delivery scheme, but we need to think of these things as
> >>> >>> independent.
> >>> >>
> >>> >>
> >>> >>
> >>> >> I'm not sure what you mean by this. Including a point-in-time copy on
> >>> >> an
> >>> >> ISO as a release is a common method of distributing software. Is this
> >>> >> a
> >>> >> messaging thing that needs to be addressed? Perhaps I'm just not familiar
> >>> >> with
> >>> >> people referring to the ISO as being MOS.
> >>> >>
> >>> >>
> >>> >>> For large users it is easy to build a custom ISO and put there what
> >>> >>> they
> >>> >>> need, but first we need to have a simple working scheme that is clear to
> >>> >>> everyone. I
> >>> >>> think dealing with all repos the same way is what is going to make
> >>> >>> things
> >>> >>> simpler.
> >>> >>>
> >>> >>
> >>> >>
> >>> >> Who is going to build a custom ISO? How does one request that? What
> >>> >> resources are consumed by custom ISO creation process/request? Does
> >>> >> this
> >>> >> scale?
> >>> >>
> >>> >>
> >>> >>>
> >>> >>> This thread is not about internet connectivity, it is about aligning
> >>> >>> things.
> >>> >>>
> >>> >>
> >>> >> You are correct in that this thread is not explicitly about internet
> >>> >> connectivity, but they are related. Any change that removes a local
> >>> >> repository
> >>> >> and only provides an internet-based solution makes internet
> >>> >> connectivity
> >>> >> something that needs to be included in the discussion.  I just want to
> >>> >> make
> >>> >> sure that we properly evaluate this decision based on end-user
> >>> >> feedback, not
> >>> >> because we don't want to manage this from a developer standpoint.
> >>> >
> >>> >
> >>> >
> >>> >  +1, whatever the changes are, please keep Fuel as a tool that can
> >>> > deploy
> >>> > without Internet access; this is part of the reason that people like it and
> >>> > why it's
> >>> > better than other tools.
> >>> >>
> >>> >>
> >>> >> -Alex
> >>> >>
> >>> >>
> >>> >>
> >>> >> __________________________________________________________________________
> >>> >> OpenStack Development Mailing List (not for usage questions)
> >>> >> Unsubscribe:
> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >>
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Yaguang Tang
> >>> > Technical Support, Mirantis China
> >>> >
> >>> > Phone: +86 15210946968
> >>> >
> >>> >
> >>> >
> >>> >
> >>> >
> >>>
> >>>
> >>
> >>
> >>
> >>
> >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com <http://www.mirantis.com/>
> > www.mirantis.ru <http://www.mirantis.ru/>
> > vkuklin at mirantis.com <mailto:vkuklin at mirantis.com>
> >
> >
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/d9901f7b/attachment.html>

From vkozhukalov at mirantis.com  Thu Sep 10 17:01:52 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Thu, 10 Sep 2015 20:01:52 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <CAHAWLf2OdSt-gjM5iHqUZeid7f+Wo6TZEjJoFosLCGWQO2qvHQ@mail.gmail.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
 <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>
 <CAFLqvG6gF-OT5a-3Px8nHy-w=Y_18shOVJ0fgap8LFMqk7pZ3g@mail.gmail.com>
 <CAHAWLf2OdSt-gjM5iHqUZeid7f+Wo6TZEjJoFosLCGWQO2qvHQ@mail.gmail.com>
Message-ID: <CAFLqvG6BxyqcKr7yMYG1_ZLzcFV-zWZCevJgQ_10y3fVdoadyA@mail.gmail.com>

Vladimir,

* We cannot put upstream Ubuntu on the ISO by default and publish it. We just
cannot, and thus we won't do that.
* If a user wants an all-inclusive ISO, he/she can build it on his/her
own, on his/her own machine, probably using the publicly available Fuel tools,
but not using Fuel CI/CD.

I am suggesting nothing more than making things simpler, so that I am able
to implement tools that could be used to easily build an all-inclusive/
not-inclusive/empty/whatever ISO if one would like to do it.
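[Editorial sketch of what "treating all repos the same way" could mean in practice. The repo names, URLs, and mirror address below are invented for illustration; this is not actual Fuel code. The point: if every repo, upstream or MOS, is just a name/URL pair, switching to a local mirror is a uniform URL rewrite rather than a MOS-specific boolean.]

```shell
# Invented, uniform repo list: the MOS repo is just another entry.
repos='ubuntu=http://archive.ubuntu.com/ubuntu
mos=http://mirror.fuel-infra.org/mos'
mirror_base='http://10.20.0.2:8080/mirrors'
# Localizing every repo is the same one-line rewrite, with no special case.
printf '%s\n' "$repos" | cut -d= -f1 | while read -r name; do
    printf '%s=%s/%s\n' "$name" "$mirror_base" "$name"
done
# prints: ubuntu=http://10.20.0.2:8080/mirrors/ubuntu (and likewise for mos)
```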


Vladimir Kozhukalov

On Thu, Sep 10, 2015 at 6:18 PM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> Folks
>
> I guess I need to get you on-site to deploy something at one of our users'
> datacenters. I do want to be able to download an ISO which contains all
> packages. This may not be the primary artifact of our software suite, but
> we need the opportunity to build a full ISO with ALL components.
> Please, do not narrow down our feature set just by assuming that users do
> not need something because we are reluctant to implement it. Just believe
> me - users need this opportunity in a lot of deployment cases. It is not
> hard to implement. We do not need to make this the default option, but
> we need to have it. That is my point.
>
> On Thu, Sep 10, 2015 at 5:40 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Vladimir,
>>
>> * We don't have a full ISO anyway.
>> * We don't require creating a mirror. When you launch your browser, do you
>> need a mirror of the Internet locally? Probably not. The same applies
>> here. An Internet connection is a common requirement nowadays, but if you
>> don't have one, you definitely need some kind of local copy.
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Sep 10, 2015 at 4:17 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>> wrote:
>>
>>> Igor
>>>
>>> Having poor access to the internet is a regular use case which we must
>>> support. This is not a crazy requirement. Not having a full ISO makes cloud
>>> setup harder to complete. Even more, a hard requirement to create
>>> a mirror will deter newcomers. I can say that if I were a user and saw a
>>> requirement to create a mirror, I would not try the product, compared to
>>> the case when I can get a full ISO with all the stuff I need.
>>>
>>> On Thu, Sep 10, 2015 at 4:06 PM, Vladimir Kozhukalov <
>>> vkozhukalov at mirantis.com> wrote:
>>>
>>>> Guys,
>>>>
>>>> I really appreciate your opinions on whether Fuel should be all-inclusive
>>>> or not, but the original topic of this thread is different. I
>>>> personally think that in 2015 it is not a big deal to make the master node
>>>> able to access any online host (even taking into account paranoid security
>>>> policies). It is just a matter of network engineering, but that is completely
>>>> out of scope here. What I am suggesting is to align the way we treat
>>>> different repos, whether upstream or MOS. What I am working on right now is
>>>> making the Fuel build and delivery approach really flexible. That
>>>> means we need to have as few non-standard ways/hacks/approaches/options
>>>> as possible.
>>>>
>>>> > Why can't we make this optional in the build system? It should be
>>>> easy to implement, shouldn't it?
>>>>
>>>> That is exactly what I am trying to do (make it optional). But I don't
>>>> want it to be yet another boolean variable for this particular thing (the MOS
>>>> repo). We have a working approach for dealing with repos: repos can be either
>>>> online or local mirrors. We have a tool for making local mirrors
>>>> (fuel-createmirror). Even if we put MOS on the ISO, a user still cannot
>>>> deploy OpenStack, because he/she still needs the upstream repos to be available.
>>>> Either way, the user is forced to take some additional actions. Again, we
>>>> have plans to improve the quality and UX of fuel-createmirror.
>>>>
>>>> Yet another thing I don't want to be on the master node is a bunch of
>>>> MOS repos directories named like
>>>> /var/www/nailgun/2015.1-7.0
>>>> /var/www/nailgun/2014.4-6.1
>>>> with links like
>>>> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
>>>> What does this link mean? Even Fuel developers can be confused. It is
>>>> scary to imagine what users think of it :-) Why should Nailgun and upgrade
>>>> script manage that kind of storage in this exact kind of format? A long
>>>> time ago people invented RPM/DEB repositories, tools to manage them and
>>>> structure for versioning them. We have Perestroika for that and we have
>>>> plans to put all package/mirror related tools in one place (
>>>> github.com/stackforge/fuel-mirror) and make all these tools available
>>>> out of Fuel CI. So, users will be able to easily build their own packages,
>>>> clone necessary repos and manage them in the way which is standard in the
>>>> industry. However, it is out of the scope of the letter.
>>>>
>>>> I also don't like the idea of putting the MOS repo on the ISO by default
>>>> because it encourages people to think that the ISO is the way of distributing MOS.
>>>> The ISO should be nothing more than a way of installing Fuel from scratch.
>>>> MOS should be distributed via the MOS repos; Fuel is available as an RPM package
>>>> in the RPM MOS repo.
>>>>
>>>> Vladimir Kozhukalov
>>>>
>>>> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <
>>>> ikalnitsky at mirantis.com> wrote:
>>>>
>>>>> Mike,
>>>>>
>>>>> > still not exactly true for some large enterprises. Due to all the
>>>>> security, etc.,
>>>>> > there are sometimes VPNs / proxies / firewalls with very low
>>>>> throughput.
>>>>>
>>>>> It's their problem, and their policies. We can't and shouldn't handle
>>>>> all possible cases. If some enterprise has "no Internet" policy, I bet
>>>>> it won't be a problem for their IT guys to create an intranet mirror
>>>>> for MOS packages. Moreover, I also bet they already have a mirror for
>>>>> Ubuntu or another Linux distribution. So it is basically a question of
>>>>> how to consume our mirrors.
>>>>>
>>>>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <
>>>>> vkuklin at mirantis.com> wrote:
>>>>> > Folks
>>>>> >
>>>>> > I think Mike is completely right here - we need an option to build an
>>>>> > all-in-one ISO which can be tried out/deployed unattended without
>>>>> internet
>>>>> > access. Let's let a user choose what he wants, not push him
>>>>> into
>>>>> > an embarrassing situation. We still have many parts of Fuel which make
>>>>> choices
>>>>> > for the user that cannot be overridden. Let's not pretend that we know
>>>>> more than
>>>>> > the user does about his environment.
>>>>> >
>>>>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <
>>>>> ogelbukh at mirantis.com>
>>>>> > wrote:
>>>>> >>
>>>>> >> The reason people want an offline deployment feature is not because of
>>>>> a poor
>>>>> >> connection, but rather the enterprise intranets where getting
>>>>> a subnet with
>>>>> >> external access is sometimes a real pain in various body parts.
>>>>> >>
>>>>> >> --
>>>>> >> Best regards,
>>>>> >> Oleg Gelbukh
>>>>> >>
>>>>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
>>>>> ikalnitsky at mirantis.com>
>>>>> >> wrote:
>>>>> >>>
>>>>> >>> Hello,
>>>>> >>>
>>>>> >>> I agree with Vladimir - the idea of online repos is the right way to
>>>>> >>> go. In 2015 I believe we can ignore this "poor Internet
>>>>> connection"
>>>>> >>> argument, and simplify both Fuel and the UX. Moreover, take a look at
>>>>> Linux
>>>>> >>> distributions - most of them fetch the needed packages from the
>>>>> Internet
>>>>> >>> during installation, not from a CD/DVD. Netboot installers are
>>>>> >>> popular; I can't even remember the last time I installed my
>>>>> >>> Debian from DVD-1 - I have used the netboot installer for years.
>>>>> >>>
>>>>> >>> Thanks,
>>>>> >>> Igor
>>>>> >>>
>>>>> >>>
>>>>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>>>>> wrote:
>>>>> >>> >
>>>>> >>> >
>>>>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <
>>>>> aschultz at mirantis.com>
>>>>> >>> > wrote:
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> Hey Vladimir,
>>>>> >>> >>
>>>>> >>> >>>
>>>>> >>> >>>
>>>>> >>> >>>>>
>>>>> >>> >>>>> 1) There won't be such things as [1] and [2], thus a less
>>>>> >>> >>>>> complicated flow, fewer errors, easier to maintain, easier to
>>>>> >>> >>>>> understand,
>>>>> >>> >>>>> easier to troubleshoot
>>>>> >>> >>>>> 2) If one wants to have a local mirror, the flow is the same
>>>>> as in
>>>>> >>> >>>>> the case
>>>>> >>> >>>>> of upstream repos (fuel-createmirror), which is clear for a
>>>>> user
>>>>> >>> >>>>> to
>>>>> >>> >>>>> understand.
>>>>> >>> >>>>
>>>>> >>> >>>>
>>>>> >>> >>>> From the issues I've seen, fuel-createmirror isn't very
>>>>> straightforward
>>>>> >>> >>>> and has some issues that make for a bad UX.
>>>>> >>> >>>
>>>>> >>> >>>
>>>>> >>> >>> I'd say the whole approach of having a tool such as
>>>>> fuel-createmirror
>>>>> >>> >>> is
>>>>> >>> >>> way too naive. A reliable internet connection is a matter of
>>>>> network
>>>>> >>> >>> engineering rather than deployment. Even using a proxy is much
>>>>> better
>>>>> >>> >>> than
>>>>> >>> >>> creating a local mirror. But this discussion is totally out of
>>>>> the
>>>>> >>> >>> scope of
>>>>> >>> >>> this letter. Currently, we have fuel-createmirror and it is
>>>>> pretty
>>>>> >>> >>> straightforward (installed as an rpm, it has just a couple of
>>>>> command line
>>>>> >>> >>> options). The quality of this script is also out of the scope
>>>>> of this
>>>>> >>> >>> thread. BTW we have plans to improve it.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>>>>> types
>>>>> >>> >> of
>>>>> >>> >> things as they should go into the decision making process.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>>
>>>>> >>> >>>>>
>>>>> >>> >>>>>
>>>>> >>> >>>>> Many people still associate the ISO with MOS, but that is not true
>>>>> when
>>>>> >>> >>>>> using a
>>>>> >>> >>>>> package-based delivery approach.
>>>>> >>> >>>>>
>>>>> >>> >>>>> It is easy to define necessary repos during deployment and
>>>>> thus it
>>>>> >>> >>>>> is
>>>>> >>> >>>>> easy to control what exactly is going to be installed on
>>>>> slave
>>>>> >>> >>>>> nodes.
>>>>> >>> >>>>>
>>>>> >>> >>>>> What do you guys think of it?
>>>>> >>> >>>>>
>>>>> >>> >>>>>
>>>>> >>> >>>>
>>>>> >>> >>>> Reliance on internet connectivity has been an issue since
>>>>> 6.1. For
>>>>> >>> >>>> many
>>>>> >>> >>>> large users, complete access to the internet is not available
>>>>> or not
>>>>> >>> >>>> desired.  If we want to continue down this path, we need to
>>>>> improve
>>>>> >>> >>>> the
>>>>> >>> >>>> tools to set up the local mirror and properly document which
>>>>> >>> >>>> urls/ports/etc
>>>>> >>> >>>> need to be available for the installation of OpenStack and any
>>>>> >>> >>>> mirror
>>>>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>>>>> >>> >>>> similar to a
>>>>> >>> >>>> live cd that allows a user to completely try out fuel
>>>>> wherever they
>>>>> >>> >>>> want
>>>>> >>> >>>> without further requirements of internet access.  If we
>>>>> don't want
>>>>> >>> >>>> to
>>>>> >>> >>>> continue with that, we need to do a better job around
>>>>> providing the
>>>>> >>> >>>> tools
>>>>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>>>>> >>> >>>> providing a
>>>>> >>> >>>> net-only ISO and an all-included ISO would be a better
>>>>> solution so
>>>>> >>> >>>> people
>>>>> >>> >>>> will have their expectations properly set up front?
>>>>> >>> >>>
>>>>> >>> >>>
>>>>> >>> >>> Let me explain why I think having local MOS mirror by default
>>>>> is bad:
>>>>> >>> >>> 1) I don't see any reason why we should treat the MOS repo
>>>>> differently
>>>>> >>> >>> from
>>>>> >>> >>> all other online repos. A user sees on the settings tab the
>>>>> list of
>>>>> >>> >>> repos
>>>>> >>> >>> one of which is local by default while others are online. It
>>>>> can make
>>>>> >>> >>> user a
>>>>> >>> >>> little bit confused, can't it? A user can be also confused by
>>>>> the
>>>>> >>> >>> fact, that
>>>>> >>> >>> some of the repos can be cloned locally by fuel-createmirror
>>>>> while
>>>>> >>> >>> others
>>>>> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> I agree. The process should be the same and it should be just
>>>>> another
>>>>> >>> >> repo. It doesn't mean we can't include a version on an ISO as
>>>>> part of
>>>>> >>> >> a
>>>>> >>> >> release.  Would it be better to provide the mirror on the ISO
>>>>> but not
>>>>> >>> >> have
>>>>> >>> >> it enabled by default for a release so that we can gather user
>>>>> >>> >> feedback on
>>>>> >>> >> this? This would include improved documentation and possibly
>>>>> allowing
>>>>> >>> >> a user
>>>>> >>> >> to choose their preference so we can collect metrics?
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>> 2) Having local MOS mirror by default makes things much more
>>>>> >>> >>> convoluted.
>>>>> >>> >>> We are forced to have several directories with predefined
>>>>> names and
>>>>> >>> >>> we are
>>>>> >>> >>> forced to manage these directories in nailgun, in upgrade
>>>>> script,
>>>>> >>> >>> etc. Why?
>>>>> >>> >>> 3) When putting MOS mirror on ISO, we make people think that
>>>>> ISO is
>>>>> >>> >>> equal
>>>>> >>> >>> to MOS, which is not true. It is possible to implement really
>>>>> >>> >>> flexible
>>>>> >>> >>> delivery scheme, but we need to think of these things as they
>>>>> are
>>>>> >>> >>> independent.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> I'm not sure what you mean by this. Including a point in time
>>>>> copy on
>>>>> >>> >> an
>>>>> >>> >> ISO as a release is a common method of distributing software.
>>>>> Is this
>>>>> >>> >> a
>>>>> >>> >> messaging thing that needs to be addressed? Perhaps I'm not
>>>>> familiar
>>>>> >>> >> with
>>>>> >>> >> people referring to the ISO as being MOS.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>> For large users it is easy to build custom ISO and put there
>>>>> what
>>>>> >>> >>> they
>>>>> >>> >>> need but first we need to have simple working scheme clear for
>>>>> >>> >>> everyone. I
>>>>> >>> >>> think dealing with all repos the same way is what is going to
>>>>> make
>>>>> >>> >>> things
>>>>> >>> >>> simpler.
>>>>> >>> >>>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> Who is going to build a custom ISO? How does one request that?
>>>>> What
>>>>> >>> >> resources are consumed by custom ISO creation process/request?
>>>>> Does
>>>>> >>> >> this
>>>>> >>> >> scale?
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>>
>>>>> >>> >>> This thread is not about internet connectivity, it is about
>>>>> aligning
>>>>> >>> >>> things.
>>>>> >>> >>>
>>>>> >>> >>
>>>>> >>> >> You are correct in that this thread is not explicitly about
>>>>> internet
>>>>> >>> >> connectivity, but they are related. Any changes to remove a
>>>>> local
>>>>> >>> >> repository
>>>>> >>> >> and only provide an internet based solution makes internet
>>>>> >>> >> connectivity
>>>>> >>> >> something that needs to be included in the discussion.  I just
>>>>> want to
>>>>> >>> >> make
>>>>> >>> >> sure that we properly evaluate this decision based on end user
>>>>> >>> >> feedback not
>>>>> >>> >> because we don't want to manage this from a developer
>>>>> standpoint.
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >  +1, whatever the changes are, please keep Fuel as a tool that can
>>>>> >>> > deploy
>>>>> >>> > without Internet access; this is part of the reason that people like
>>>>> it and
>>>>> >>> > why it's
>>>>> >>> > better than other tools.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> -Alex
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> > --
>>>>> >>> > Yaguang Tang
>>>>> >>> > Technical Support, Mirantis China
>>>>> >>> >
>>>>> >>> > Phone: +86 15210946968
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> __________________________________________________________________________
>>>>> >>> > OpenStack Development Mailing List (not for usage questions)
>>>>> >>> > Unsubscribe:
>>>>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> >>> >
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>> >
>>>>> >>>
>>>>> >>>
>>>>> >>>
>>>>> __________________________________________________________________________
>>>>> >>> OpenStack Development Mailing List (not for usage questions)
>>>>> >>> Unsubscribe:
>>>>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> __________________________________________________________________________
>>>>> >> OpenStack Development Mailing List (not for usage questions)
>>>>> >> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>
>>>>> >
>>>>> >
>>>>> >
>>>>> > --
>>>>> > Yours Faithfully,
>>>>> > Vladimir Kuklin,
>>>>> > Fuel Library Tech Lead,
>>>>> > Mirantis, Inc.
>>>>> > +7 (495) 640-49-04
>>>>> > +7 (926) 702-39-68
>>>>> > Skype kuklinvv
>>>>> > 35bk3, Vorontsovskaya Str.
>>>>> > Moscow, Russia,
>>>>> > www.mirantis.com
>>>>> > www.mirantis.ru
>>>>> > vkuklin at mirantis.com
>>>>> >
>>>>> >
>>>>> __________________________________________________________________________
>>>>> > OpenStack Development Mailing List (not for usage questions)
>>>>> > Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >
>>>>>
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 35bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com <http://www.mirantis.ru/>
>>> www.mirantis.ru
>>> vkuklin at mirantis.com
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/44e633c9/attachment.html>

From zbitter at redhat.com  Thu Sep 10 17:02:42 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Thu, 10 Sep 2015 13:02:42 -0400
Subject: [openstack-dev] [heat] Backup resources and properties in the
 delete-path
In-Reply-To: <20150910165322.GB16252@t430slt.redhat.com>
References: <20150910165322.GB16252@t430slt.redhat.com>
Message-ID: <55F1B7B2.9010607@redhat.com>

On 10/09/15 12:53, Steven Hardy wrote:
> Hi all,
>
> So, I've been battling with $subject for the last few days ref [1][2].
>
> The problem I have is that our TestResource references several properties
> in the delete (check_delete_complete) path[4], which it turns out doesn't
> work very well if those properties refer to parameters via get_param, and
> the parameter in the template is added/removed between updates which
> fail[3].
>
> Essentially, the confusing dance we do on update with backup stacks and
> backup resources bites us, because the backed-up resource ends up referring
> to a parameter which doesn't exist (either in
> stack.Stack._delete_backup_stack on stack-delete, or in
> update.StackUpdate._remove_backup_resource on stack-update.)
>
> As far as I can tell, referencing properties in the delete path is the main
> problem here, and it's something we don't do at all AFAICS in any other
> resources - the normal pattern is only to refer to the resource_id in the
> delete path, and possibly the resource_data (which will work OK after [5]
> lands)
>
> So the question is, is it *ever* valid to reference self.properties
> in the delete path?

I think it's fine to say 'no'.

> If the answer is no, can we fix TestResource by e.g
> storing the properties in resource_data instead?

They're already stored as self._stored_properties_data; you could just 
reference that instead. (The 'right' way would probably be to use 
"self.frozen_definition().properties(self.properties_schema, 
self.context)", but this is a test resource we're talking about.)
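A minimal sketch of that pattern, for anyone following along: freeze the resolved properties at create time and consult only the frozen copy in the delete path. Every name below is invented for illustration; this is not Heat's actual Resource API.

```python
# Toy model of the pattern above: snapshot the resolved properties when
# the resource is created, and read only that snapshot in the delete
# path. All class/attribute names here are invented for illustration.
class ToyResource(object):
    def __init__(self, properties):
        self._live_properties = properties   # may resolve get_param etc.
        self._stored_properties_data = None  # frozen at create time

    def handle_create(self):
        # persist whatever check_delete_complete will need later
        self._stored_properties_data = dict(self._live_properties)

    def check_delete_complete(self):
        # The template parameter backing a live property may have been
        # removed by a later (failed) update, so never touch
        # _live_properties here -- only the frozen copy.
        props = self._stored_properties_data or {}
        return props.get("action_wait_secs", 0) == 0


r = ToyResource({"action_wait_secs": 0})
r.handle_create()
r._live_properties = None  # simulate the backing parameter disappearing
print(r.check_delete_complete())  # True
```

The point is simply that the delete path never reaches back into live property resolution, so a parameter added or removed between updates can't break it.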

> If we do expect to allow/support refering to properties in the delete path,
> the question becomes how to we make it work with the backup resource update
> mangling we do atm?  I've posted a hacky workaround for the delete path in
> [2], but I don't yet have a solution for the failure on update in
> _remove_backup_resource, is it worth expending the effort to work that out
> or is TestResource basically doing the wrong thing?
>
> Any ideas much appreciated, as I'd like to clarify the best path forward
> before burning a bunch more time on this :)
>
> Thanks!
>
> Steve
>
> [1] https://review.openstack.org/#/c/205754/
> [2] https://review.openstack.org/#/c/222176/
> [3] https://bugs.launchpad.net/heat/+bug/1494260
> [4] https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/heat/test_resource.py#L209
> [5] https://review.openstack.org/#/c/220986/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From doug at doughellmann.com  Thu Sep 10 17:05:22 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 10 Sep 2015 13:05:22 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F17646.4000203@openstack.org>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
Message-ID: <1441904554-sup-3119@lrrr>

Excerpts from Thierry Carrez's message of 2015-09-10 14:23:34 +0200:
> Sean Dague wrote:
> > Right now, they are all a bunch of files, they can be anywhere. And then
> > you have other files that have to reference these files by path, which
> > can be anywhere. We could just punt in that part and say "punt! every
> > installer and configuration management install needs to solve this on
> > their own." I'm not convinced that's a good answer. The os-brick filters
> > aren't really config. If you change them all that happens is
> > terribleness. Stuff stops working, and you don't know why. They are data
> > to exchange with another process about how to function. Honestly, they
> > should probably be python code that's imported by rootwrap.
> > 
> > Much like the issues around clouds failing when you try to GET /v2 on
> > the Nova API (because we have a bunch of knobs you have to align for SSL
> > termination, and a bunch of deployers didn't), I don't think we should
> > be satisfied with "there's a config for that!" when all that config
> > means is that someone can break their configuration if they don't get it
> > exactly right.
> 
> My quick 2cents on this. Rootwrap was designed as a generic solution to
> wrap privileged calls. That's why filter files are part of its
> "configuration". The problem is, OpenStack needs a pretty precise set of
> those filters to be "configured" to run properly. So it's configuration
> for rootwrap, but not "configuration" for OpenStack.

That makes them sound like data, not configuration. If that's the case,
Python's pkgutil module is an existing API for putting a data file
inside a library and then accessing it. Maybe we should look at moving
the files to a place that lets us use that, instead of requiring any
deployer-based configuration at all. Combining that with the "symbolic"
suggestion from Sean, we would use the package name as the symbol and
there would be a well-defined resource name within that package to use
with pkgutil.get_data() [1].
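A rough sketch of how that lookup would work (the package name and filter contents below are invented for the sketch, not the real os-brick layout):

```python
import os
import pkgutil
import sys
import tempfile

# Build a throwaway package containing a rootwrap-style filter file as
# package data, then read it back with pkgutil.get_data(). The package
# name "osbrick_demo" and the filter contents are invented here; they
# are not the real os-brick layout.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "osbrick_demo")
os.mkdir(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "rootwrap.filters"), "w") as f:
    f.write("[Filters]\nmount: CommandFilter, mount, root\n")

sys.path.insert(0, tmp)
# The "symbol" is just the package name; the resource name within the
# package would be well-defined by convention.
data = pkgutil.get_data("osbrick_demo", "rootwrap.filters")
print(data.decode().splitlines()[0])  # [Filters]
```

So rootwrap would only need a list of package names, and no filesystem paths would appear in deployer configuration at all.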

Doug

[1] https://docs.python.org/2.7/library/pkgutil.html#pkgutil.get_data

> 
> The way it was supposed to work out was that you would have a single
> rootwrap on nodes and every component on that node needing filters would
> drop them in some unique location. A library is just another component
> needing filters, so os-brick could just deploy a few more filters on
> nodes where it's installed.
> 
> The trick is, to increase "security" we promoted usage of per-project
> directories (so that Nova only has access to Nova privileged commands),
> which translated into using a specific config file for Nova rootwrap
> pointing to Nova filters. Now if we are willing to sacrifice that, we
> could have a single directory per-node (/usr/share/rootwrap instead of
> /usr/share/*/rootwrap) that makes most of the interpolation you're
> describing unnecessary.
> 
> Alternatively you could keep project-specific directories and have
> os-brick drop symbolic links to its filters into both nova and
> cinder-specific directories. It's slightly less flexible (since the lib
> now has to know what consumes it) but keeps you from sacrificing "security".
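(Sketched in code, the symlink idea would look roughly like this; all paths are throwaway temp paths invented for the sketch, not the real packaging layout, and a real package would create the links at install time:)

```python
import os
import tempfile

# Toy sketch of the symlink idea above: a library ships one canonical
# filters file and drops symlinks into each consumer's rootwrap
# directory. The consumer list is hard-coded -- the lib has to know
# who consumes it, as noted above.
root = tempfile.mkdtemp()
lib_dir = os.path.join(root, "usr", "share", "os-brick", "rootwrap")
os.makedirs(lib_dir)
canonical = os.path.join(lib_dir, "os-brick.filters")
with open(canonical, "w") as f:
    f.write("[Filters]\n")

for consumer in ("nova", "cinder"):
    consumer_dir = os.path.join(root, "usr", "share", consumer, "rootwrap")
    os.makedirs(consumer_dir)
    os.symlink(canonical, os.path.join(consumer_dir, "os-brick.filters"))

link = os.path.join(root, "usr", "share", "nova", "rootwrap",
                    "os-brick.filters")
print(os.path.islink(link))  # True
```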
> 
> Now another problem you're describing is that there is no single place
> where those filters end up, depending on the way the projects (or libs)
> are packaged and installed. And it's up to the distros to "fix" the
> filters_path in the configuration file so that it points to every single
> place where those end up. It's a problem (especially when you start to
> install things using multiple concurrent packaging systems), but it's
> not exactly new -- it's just that libraries shipping filters files are
> even more likely to ship their filters somewhere weird. So maybe we can
> continue to live with that problem we always had, until the privsep
> system completely replaces rootwrap ?
> 


From vkozhukalov at mirantis.com  Thu Sep 10 17:09:52 2015
From: vkozhukalov at mirantis.com (Vladimir Kozhukalov)
Date: Thu, 10 Sep 2015 20:09:52 +0300
Subject: [openstack-dev] [Fuel] Remove MOS DEB repo from master node
In-Reply-To: <3002A0DF-8E57-4652-B095-126E03EF05E7@mirantis.com>
References: <CAFLqvG5Vt=PNfWCGHxsEjmm6ouqxb33YdV-PS7H32agC6wx75w@mail.gmail.com>
 <CAFLqvG6CCpVJ+_q3mCP8oyTEAoVN6tz8v5OTM8VuAnV4MP+szw@mail.gmail.com>
 <CABzFt8Pk1HqNkUhtGFzGkrJ55ektwusm3TwAHccVRMZznvQ1gQ@mail.gmail.com>
 <CAFLqvG4B1eBpQ4wQE_NUYQPik3wF9P6-4CpSR3=ZPhyRoT_Mcg@mail.gmail.com>
 <CABzFt8PLMNKZH=nb5Uw0ekTtCdfORB14Jx-oNCCypX_H7tyitQ@mail.gmail.com>
 <CAGi6=NDk+EvDimxC5Qycf7eu0KeJE-nUBMpqeTpYxZqdpeen9A@mail.gmail.com>
 <CACo6NWAA57CHmv5+WqgFnrZ0d=95ao4BN0ON4e7pt9V3zA7Xzg@mail.gmail.com>
 <CAFkLEwo4cci5VKqRpcEfQoqJxkZuAMcPWk8=37cPsGf2CiSWyQ@mail.gmail.com>
 <CAHAWLf0K1XXH3kGXk7pvskBJ7PN01HuN2mVbtmApO8zYtoMNHQ@mail.gmail.com>
 <CACo6NWAGFU=cGYAAisLhqw7uVnTZbWmtFRbEbNoh0ZsrM71acw@mail.gmail.com>
 <CAFLqvG5bF0gtKSidwbNJP61hzqkFTDaMEvv-6hkH+KbFqXc75Q@mail.gmail.com>
 <CAHAWLf0_J+8n=R_btK=peHXMT_hVT-0Qiq7JgBkwXKR7Lvm4=Q@mail.gmail.com>
 <CAFLqvG6gF-OT5a-3Px8nHy-w=Y_18shOVJ0fgap8LFMqk7pZ3g@mail.gmail.com>
 <CAHAWLf2OdSt-gjM5iHqUZeid7f+Wo6TZEjJoFosLCGWQO2qvHQ@mail.gmail.com>
 <3002A0DF-8E57-4652-B095-126E03EF05E7@mirantis.com>
Message-ID: <CAFLqvG7ARjx0zmeedxkM4fCfFPhL5xyvWZeuox21haS28dPFsw@mail.gmail.com>

Dmitry,

Thanks a lot. That is exactly what I mean. fuel-createmirror can be run
while building the ISO, after deployment, never, or whenever the user
wants: the same standard approach as for all other repos. That is it. We
just don't need to put the DEB MOS repo on the ISO by default, because it
makes little sense while the upstream repos are still online.



Vladimir Kozhukalov

On Thu, Sep 10, 2015 at 7:58 PM, Dmitry Pyzhov <dpyzhov at mirantis.com> wrote:

> Guys,
>
> looks like you've started to talk about different things. As I see it, the
> original proposal was: stop treating the MOS DEB repo as a special case and
> use the same flow for all repos. Your use case does not contradict it.
>
> Moreover, it requires the standard flow for all repos. The "Put everything on
> the ISO" use case should be implemented as a new feature. It is a matter of
> running the fuel-createmirror script during the ISO build and using it during
> master node deployment. It should definitely treat the mirror as a single
> object, and this object should be compatible with the result of running the
> fuel-createmirror script.
>
> On 10 Sep 2015, at 18:18, Vladimir Kuklin <vkuklin at mirantis.com> wrote:
>
> Folks
>
> I guess I need to get you on-site to deploy something at our user's
> datacenter. I do want to be able to download an ISO which contains all
> packages. This may not be the primary artifact of our software suite, but
> we need to have this opportunity to build full ISO with ALL components.
> Please, do not narrow down our feature set just by thinking that users do
> not need something because we are reluctant to implement this. Just believe
> me - users need this opportunity in a lot of deployment cases. It is not
> hard to implement it. We do not need to set this as a default option, but
> we need to have it. That is my point.
>
> On Thu, Sep 10, 2015 at 5:40 PM, Vladimir Kozhukalov <
> vkozhukalov at mirantis.com> wrote:
>
>> Vladimir,
>>
>> * We don't have full ISO anyway
>> * We don't require to create mirror. When you launch your browser, do you
>> mean to have mirror of the Internet locally? Probably, no. The same is
>> here. Internet connection is the common requirement nowadays, but if you
>> don't have one, you definitely need to have a kind of local copy.
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Sep 10, 2015 at 4:17 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>> wrote:
>>
>>> Igor
>>>
>>> Having poor access to the internet is a regular use case which we must
>>> support. This is not a crazy requirement. Not having full ISO makes cloud
>>> setup harder to complete. Even more, a hard requirement to create
>>> a mirror will deter newcomers. I can say that if I were a user and saw a
>>> requirement to create a mirror, I would not try the product, compared to
>>> the case when I can get a full ISO with all the stuff I need.
>>>
>>> On Thu, Sep 10, 2015 at 4:06 PM, Vladimir Kozhukalov <
>>> vkozhukalov at mirantis.com> wrote:
>>>
>>>> Guys,
>>>>
>>>> I really appreciate your opinions on whether Fuel should be all
>>>> inclusive or not. But the original topic of this thread is different. I
>>>> personally think that in 2015 it is not a big deal to make the master node
>>>> able to access any online host (even taking into account paranoid security
>>>> policies). It is just a matter of network engineering. But it is completely
>>>> out of the scope. What I am suggesting is to align the way how we treat
>>>> different repos, whether upstream or MOS. What I am working on right now is
>>>> I am trying to make Fuel's build and delivery approach really flexible.
>>>> That means we need to have as few non-standard
>>>> ways/hacks/approaches/options as possible.
>>>>
>>>> > Why can't we make this optional in the build system? It should be
>>>> easy to implement, is not it?
>>>>
>>>> That is exactly what I am trying to do (make it optional). But I don't
>>>> want it to be yet another boolean variable for this particular thing (MOS
>>>> repo). We have working approach for dealing with repos. Repos can either
>>>> online or local mirrors. We have a tool for making local mirrors
>>>> (fuel-createmirror). Even if we put MOS on the ISO, a user still can not
>>>> deploy OpenStack, because he/she still needs upstream to be available.
>>>> Anyway, the user is still forced to do some additional actions. Again, we
>>>> have plans to improve the quality and UX of fuel-createmirror.
>>>>
>>>> Yet another thing I don't want to be on the master node is a bunch of
>>>> MOS repos directories named like
>>>> /var/www/nailgun/2015.1-7.0
>>>> /var/www/nailgun/2014.4-6.1
>>>> with links like
>>>> /var/www/nailgun/ubuntu -> /var/www/nailgun/2015.1-7.0
>>>> What does this link mean? Even Fuel developers can be confused. It is
>>>> scary to imagine what users think of it :-) Why should Nailgun and upgrade
>>>> script manage that kind of storage in this exact kind of format? A long
>>>> time ago people invented RPM/DEB repositories, tools to manage them and
>>>> structure for versioning them. We have Perestoika for that and we have
>>>> plans to put all package/mirror related tools in one place (
>>>> github.com/stackforge/fuel-mirror) and make all these tools available
>>>> out of Fuel CI. So, users will be able to easily build their own packages,
>>>> clone necessary repos and manage them in the way which is standard in the
>>>> industry. However, it is out of the scope of the letter.
>>>>
>>>> I also don't like the idea of putting MOS repo on the ISO by default
>>>> because it encourages people to think that the ISO is the way of distributing MOS.
>>>> ISO should be nothing more than just a way of installing Fuel from scratch.
>>>> MOS should be distributed via MOS repos. Fuel is available as RPM package
>>>> in RPM MOS repo.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Vladimir Kozhukalov
>>>>
>>>> On Thu, Sep 10, 2015 at 1:18 PM, Igor Kalnitsky <
>>>> ikalnitsky at mirantis.com> wrote:
>>>>
>>>>> Mike,
>>>>>
>>>>> > still not exactly true for some large enterprises. Due to all the
>>>>> security, etc.,
>>>>> > there are sometimes VPNs / proxies / firewalls with very low
>>>>> throughput.
>>>>>
>>>>> It's their problem, and their policies. We can't and shouldn't handle
>>>>> all possible cases. If some enterprise has "no Internet" policy, I bet
>>>>> it won't be a problem for their IT guys to create an intranet mirror
>>>>> for MOS packages. Moreover, I also bet they do have a mirror for
>>>>> Ubuntu or another Linux distribution. So it is basically a question of
>>>>> how to consume our mirrors.
>>>>>
>>>>> On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin <
>>>>> vkuklin at mirantis.com> wrote:
>>>>> > Folks
>>>>> >
>>>>> > I think, Mike is completely right here - we need an option to build
>>>>> > all-in-one ISO which can be tried-out/deployed unattendedly without
>>>>> internet
>>>>> > access. Let's let a user make a choice what he wants, not push him
>>>>> into
>>>>> > embarassing situation. We still have many parts of Fuel which make
>>>>> choices
>>>>> > for user that cannot be overriden. Let's not pretend that we know
>>>>> more than
>>>>> > user does about his environment.
>>>>> >
>>>>> > On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh <
>>>>> ogelbukh at mirantis.com>
>>>>> > wrote:
>>>>> >>
>>>>> >> The reason people want offline deployment feature is not because of
>>>>> poor
>>>>> >> connection, but rather the enterprise intranets where getting
>>>>> subnet with
>>>>> >> external access sometimes is a real pain in various body parts.
>>>>> >>
>>>>> >> --
>>>>> >> Best regards,
>>>>> >> Oleg Gelbukh
>>>>> >>
>>>>> >> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky <
>>>>> ikalnitsky at mirantis.com>
>>>>> >> wrote:
>>>>> >>>
>>>>> >>> Hello,
>>>>> >>>
>>>>> >>> I agree with Vladimir - the idea of online repos is a right way to
>>>>> >>> move. In 2015 I believe we can ignore this "poor Internet
>>>>> connection"
>>>>> >>> reason, and simplify both Fuel and UX. Moreover, take a look at
>>>>> Linux
>>>>> >>> distributions - most of them fetch needed packages from the
>>>>> Internet
>>>>> >>> during installation, not from CD/DVD. The netboot installers are
>>>>> >>> popular; I can't even remember the last time I installed my
>>>>> >>> Debian from DVD-1 - I've used the netboot installer for years.
>>>>> >>>
>>>>> >>> Thanks,
>>>>> >>> Igor
>>>>> >>>
>>>>> >>>
>>>>> >>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang <ytang at mirantis.com>
>>>>> wrote:
>>>>> >>> >
>>>>> >>> >
>>>>> >>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz <
>>>>> aschultz at mirantis.com>
>>>>> >>> > wrote:
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> Hey Vladimir,
>>>>> >>> >>
>>>>> >>> >>>
>>>>> >>> >>>
>>>>> >>> >>>>>
>>>>> >>> >>>>> 1) There won't be such things in like [1] and [2], thus less
>>>>> >>> >>>>> complicated flow, less errors, easier to maintain, easier to
>>>>> >>> >>>>> understand,
>>>>> >>> >>>>> easier to troubleshoot
>>>>> >>> >>>>> 2) If one wants to have local mirror, the flow is the same
>>>>> as in
>>>>> >>> >>>>> case
>>>>> >>> >>>>> of upstream repos (fuel-createmirror), which is clrear for a
>>>>> user
>>>>> >>> >>>>> to
>>>>> >>> >>>>> understand.
>>>>> >>> >>>>
>>>>> >>> >>>>
>>>>> >>> >>>> From the issues I've seen, fuel-createmirror isn't very
>>>>> >>> >>>> straightforward and has some issues, making it a bad UX.
>>>>> >>> >>>
>>>>> >>> >>>
>>>>> >>> >>> I'd say the whole approach of having such tool as
>>>>> fuel-createmirror
>>>>> >>> >>> is a
>>>>> >>> >>> way too naive. Reliable internet connection is totally up to
>>>>> network
>>>>> >>> >>> engineering rather than deployment. Even using proxy is much
>>>>> better
>>>>> >>> >>> than
>>>>> >>> >>> creating a local mirror. But this discussion is totally out of
>>>>> the
>>>>> >>> >>> scope of
>>>>> >>> >>> this letter. Currently,  we have fuel-createmirror and it is
>>>>> pretty
>>>>> >>> >>> straightforward (installed as rpm, has just a couple of
>>>>> command line
>>>>> >>> >>> options). The quality of this script is also out of the scope
>>>>> of this
>>>>> >>> >>> thread. BTW we have plans to improve it.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> Fair enough, I just wanted to raise the UX issues around these
>>>>> types
>>>>> >>> >> of
>>>>> >>> >> things as they should go into the decision making process.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>>
>>>>> >>> >>>>>
>>>>> >>> >>>>>
>>>>> >>> >>>>> Many people still associate ISO with MOS, but it is not true
>>>>> when
>>>>> >>> >>>>> using
>>>>> >>> >>>>> package based delivery approach.
>>>>> >>> >>>>>
>>>>> >>> >>>>> It is easy to define necessary repos during deployment and
>>>>> thus it
>>>>> >>> >>>>> is
>>>>> >>> >>>>> easy to control what exactly is going to be installed on
>>>>> slave
>>>>> >>> >>>>> nodes.
>>>>> >>> >>>>>
>>>>> >>> >>>>> What do you guys think of it?
>>>>> >>> >>>>>
>>>>> >>> >>>>>
>>>>> >>> >>>>
>>>>> >>> >>>> Reliance on internet connectivity has been an issue since
>>>>> 6.1. For
>>>>> >>> >>>> many
>>>>> >>> >>>> large users, complete access to the internet is not available
>>>>> or not
>>>>> >>> >>>> desired.  If we want to continue down this path, we need to
>>>>> improve
>>>>> >>> >>>> the
>>>>> >>> >>>> tools to setup the local mirror and properly document what
>>>>> >>> >>>> urls/ports/etc
>>>>> >>> >>>> need to be available for the installation of openstack and any
>>>>> >>> >>>> mirror
>>>>> >>> >>>> creation process.  The ideal thing is to have an all-in-one CD
>>>>> >>> >>>> similar to a
>>>>> >>> >>>> live cd that allows a user to completely try out fuel
>>>>> wherever they
>>>>> >>> >>>> want
>>>>> >>> >>>> with out further requirements of internet access.  If we
>>>>> don't want
>>>>> >>> >>>> to
>>>>> >>> >>>> continue with that, we need to do a better job around
>>>>> providing the
>>>>> >>> >>>> tools
>>>>> >>> >>>> for a user to get up and running in a timely fashion.  Perhaps
>>>>> >>> >>>> providing an
>>>>> >>> >>>> net-only iso and an all-included iso would be a better
>>>>> solution so
>>>>> >>> >>>> people
>>>>> >>> >>>> will have their expectations properly set up front?
>>>>> >>> >>>
>>>>> >>> >>>
>>>>> >>> >>> Let me explain why I think having local MOS mirror by default
>>>>> is bad:
>>>>> >>> >>> 1) I don't see any reason why we should treat MOS  repo other
>>>>> way
>>>>> >>> >>> than
>>>>> >>> >>> all other online repos. A user sees on the settings tab the
>>>>> list of
>>>>> >>> >>> repos
>>>>> >>> >>> one of which is local by default while others are online. It
>>>>> can make
>>>>> >>> >>> user a
>>>>> >>> >>> little bit confused, can't it? A user can be also confused by
>>>>> the
>>>>> >>> >>> fact, that
>>>>> >>> >>> some of the repos can be cloned locally by fuel-createmirror
>>>>> while
>>>>> >>> >>> others
>>>>> >>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> I agree. The process should be the same and it should be just
>>>>> another
>>>>> >>> >> repo. It doesn't mean we can't include a version on an ISO as
>>>>> part of
>>>>> >>> >> a
>>>>> >>> >> release.  Would it be better to provide the mirror on the ISO
>>>>> but not
>>>>> >>> >> have
>>>>> >>> >> it enabled by default for a release so that we can gather user
>>>>> >>> >> feedback on
>>>>> >>> >> this? This would include improved documentation and possibly
>>>>> allowing
>>>>> >>> >> a user
>>>>> >>> >> to choose their preference so we can collect metrics?
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>> 2) Having local MOS mirror by default makes things much more
>>>>> >>> >>> convoluted.
>>>>> >>> >>> We are forced to have several directories with predefined
>>>>> names and
>>>>> >>> >>> we are
>>>>> >>> >>> forced to manage these directories in nailgun, in upgrade
>>>>> script,
>>>>> >>> >>> etc. Why?
>>>>> >>> >>> 3) When putting MOS mirror on ISO, we make people think that
>>>>> ISO is
>>>>> >>> >>> equal
>>>>> >>> >>> to MOS, which is not true. It is possible to implement really
>>>>> >>> >>> flexible
>>>>> >>> >>> delivery scheme, but we need to think of these things as they
>>>>> are
>>>>> >>> >>> independent.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> I'm not sure what you mean by this. Including a point in time
>>>>> copy on
>>>>> >>> >> an
>>>>> >>> >> ISO as a release is a common method of distributing software.
>>>>> Is this
>>>>> >>> >> a
>>>>> >>> >> messaging thing that needs to be addressed? Perhaps I'm not
>>>>> familiar
>>>>> >>> >> with
>>>>> >>> >> people referring to the ISO as being MOS.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>> For large users it is easy to build custom ISO and put there
>>>>> what
>>>>> >>> >>> they
>>>>> >>> >>> need but first we need to have simple working scheme clear for
>>>>> >>> >>> everyone. I
>>>>> >>> >>> think dealing with all repos the same way is what is gonna
>>>>> make
>>>>> >>> >>> things
>>>>> >>> >>> simpler.
>>>>> >>> >>>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> Who is going to build a custom ISO? How does one request that?
>>>>> What
>>>>> >>> >> resources are consumed by custom ISO creation process/request?
>>>>> Does
>>>>> >>> >> this
>>>>> >>> >> scale?
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>>
>>>>> >>> >>> This thread is not about internet connectivity, it is about
>>>>> aligning
>>>>> >>> >>> things.
>>>>> >>> >>>
>>>>> >>> >>
>>>>> >>> >> You are correct in that this thread is not explicitly about
>>>>> internet
>>>>> >>> >> connectivity, but they are related. Any changes to remove a
>>>>> local
>>>>> >>> >> repository
>>>>> >>> >> and only provide an internet based solution makes internet
>>>>> >>> >> connectivity
>>>>> >>> >> something that needs to be included in the discussion.  I just
>>>>> want to
>>>>> >>> >> make
>>>>> >>> >> sure that we properly evaluate this decision based on end user
>>>>> >>> >> feedback not
>>>>> >>> >> because we don't want to manage this from a developer
>>>>> standpoint.
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >  +1, whatever the changes are, please keep Fuel as a tool that can
>>>>> >>> > deploy
>>>>> >>> > without Internet access; this is part of the reason that people like
>>>>> it and
>>>>> >>> > it's
>>>>> >>> > better than other tools.
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >> -Alex
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> >>> >>
>>>>> __________________________________________________________________________
>>>>> >>> >> OpenStack Development Mailing List (not for usage questions)
>>>>> >>> >> Unsubscribe:
>>>>> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>>>> >>> >>
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>> >>
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> > --
>>>>> >>> > Yaguang Tang
>>>>> >>> > Technical Support, Mirantis China
>>>>> >>> >
>>>>> >>> > Phone: +86 15210946968
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> >>> >
>>>>> __________________________________________________________________________
>>>>> >>> > OpenStack Development Mailing List (not for usage questions)
>>>>> >>> > Unsubscribe:
>>>>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>>>> >>> >
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>> >
>>>>> >>>
>>>>> >>>
>>>>> >>>
>>>>> __________________________________________________________________________
>>>>> >>> OpenStack Development Mailing List (not for usage questions)
>>>>> >>> Unsubscribe:
>>>>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> __________________________________________________________________________
>>>>> >> OpenStack Development Mailing List (not for usage questions)
>>>>> >> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>> >>
>>>>> >
>>>>> >
>>>>> >
>>>>> > --
>>>>> > Yours Faithfully,
>>>>> > Vladimir Kuklin,
>>>>> > Fuel Library Tech Lead,
>>>>> > Mirantis, Inc.
>>>>> > +7 (495) 640-49-04
>>>>> > +7 (926) 702-39-68
>>>>> > Skype kuklinvv
>>>>> > 35bk3, Vorontsovskaya Str.
>>>>> > Moscow, Russia,
>>>>> > www.mirantis.com
>>>>> > www.mirantis.ru
>>>>> > vkuklin at mirantis.com
>>>>> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/b3d04137/attachment-0001.html>

From emilien at redhat.com  Thu Sep 10 17:13:55 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 10 Sep 2015 13:13:55 -0400
Subject: [openstack-dev] [puppet] [ceilometer] Puppetize OpenStack Aodh
Message-ID: <55F1BA53.3080708@redhat.com>

Hi,

I'm working [1] on a new Puppet module for AODH [2].

The good news is that we don't need AODH to have alarming working in
Liberty, so people using Puppet to deploy Ceilometer will be able to use
alarming with the future stable/liberty.

I'll of course use puppet-openstack-cookiecutter to create our new
module, and I'll ask the Ceilometer team to have a look once we have the
basic structure in place. I have no idea yet how to deploy it :-)
I guess looking at how devstack deploys it will help us understand
the resources that need to be created.

Thanks,

[1] https://review.openstack.org/#/q/topic:puppet/aodh,n,z
[2] https://github.com/openstack/aodh
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/d88e345c/attachment.pgp>

From major at mhtx.net  Thu Sep 10 17:17:51 2015
From: major at mhtx.net (Major Hayden)
Date: Thu, 10 Sep 2015 12:17:51 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <55F1B66A.7040303@gentoo.org>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <55F1B66A.7040303@gentoo.org>
Message-ID: <55F1BB3F.9000108@mhtx.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 09/10/2015 11:57 AM, Matthew Thode wrote:
> I think that'd work, it'd also allow discussion on if something should
> be in each section as well.

I'll start assembling a spec so we can throw some darts at it.

- --
Major Hayden
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJV8bs7AAoJEHNwUeDBAR+xEt8P/02RVxk8C2lDoqDxO+cRrFu9
+LJgLfukX1o//GQI2h9v1X1g2eYXLtRQVGYgneiwesquvr8A3pW2w4WSiag8eSlS
NKwKjn39jBj/qMfRRUm4joPfpoJp36aKXL1Gfw6scISZ8ecEUkV8AS1+lR6kWoW6
m2X5pW1hhSMFmYUE8bBHl6umVeryEqSiw7vY3WYDQEUzAzY+8+lcX7x9bKwNILFb
d8dfUlcIJZnCq8w/ChcG+X2aA2G64acVFX5SVP3AFZE7/yiDfOsEi///7rWhdarv
LAS/eqpWZpFNSduIntY/KPvwkW9vnARgGf9H2r40lVyny/1HJKnnxcSRRkSxRyV5
JrX3mQORi+4zWiZubCzdEr9CgY/xS0apqIiXmUB/Z2vGaHqbnKyoOtzg7rlsFS8B
c/6zPu8PzslfaTjTXReMiyCK+ifqAovmH/8H1PNqbsFVwjOyrFdorsY8DPJx7Tcf
6s6oeXaIK0DZ73TB/sDth5QiEE3W5PRoqoqPGC9RCG/8YtJ5sdrSLM/UdymAkVZX
aXoaXbmW+6cSTK9WiGy4LtJSzR/qTg9kExyxm8NO+eHbCx83cbs7YaBDMRTBncxm
Nssp8JWByGYOSXyfcl/p2EAHQkko9Enq3Xuzvyf9C9HcFzIFs4az42aHE4qgl2wE
rzLM8PUhG0irbm8VTV/h
=sxI8
-----END PGP SIGNATURE-----


From Kevin.Fox at pnnl.gov  Thu Sep 10 17:27:34 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 10 Sep 2015 17:27:34 +0000
Subject: [openstack-dev] [ALL] Instance Users
Message-ID: <1A3C52DFCD06494D8528644858247BF01A2F4E0F@EX10MBOX03.pnnl.gov>

Many OpenStack projects share a fairly common problem: the lack of a well-defined way for OpenStack instances to talk to the OpenStack cloud they are running under.

Each OpenStack project has been working around it on its own in one form or another, including Heat, Sahara, Trove, Magnum, and others.

Instances need credentials to talk to Zaqar and Barbican as well, and currently there is no good solution.

Developers who build on top of Heat, or who do orchestration by other means, don't have a workaround at present either.

I've written up a proposal to fix this situation here:
https://review.openstack.org/#/c/222293/

It is a difficult problem to solve, but solving it would be hugely beneficial to many projects.

Please review the spec and add feedback. The more we can work together to solve this difficult problem, rather than hacking around it individually in our own projects, the better off we will all be in the end.

Thanks,
Kevin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/d08b6100/attachment.html>

From thierry at openstack.org  Thu Sep 10 17:31:25 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 10 Sep 2015 19:31:25 +0200
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F182AF.2040003@dague.net>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
 <55F182AF.2040003@dague.net>
Message-ID: <55F1BE6D.80704@openstack.org>

Sean Dague wrote:
> On 09/10/2015 08:23 AM, Thierry Carrez wrote:
>> Now another problem you're describing is that there is no single place
>> where those filters end up, depending on the way the projects (or libs)
>> are packaged and installed. And it's up to the distros to "fix" the
>> filters_path in the configuration file so that it points to every single
>> place where those end up. It's a problem (especially when you start to
>> install things using multiple concurrent packaging systems), but it's
>> not exactly new -- it's just that libraries shipping fliters file are
>> even more likely to ship their filters somewhere weird. So maybe we can
>> continue to live with that problem we always had, until the privsep
>> system completely replaces rootwrap ?
> 
> I do get this is where we came from. I feel like this doesn't really
> address or understand that things are actually quite different when it
> comes to libraries doing rootwrap. We've spent weeks attempting various
> work arounds, and for Liberty just punted and said "os-brick, cinder,
> and nova all must upgrade exactly at the same time". Because that's the
> only solution that doesn't require pages of documentation that
> installers will get wrong some times.
> 
> I don't feel like that's an acceptable solution. And it also means that
> "living" with it means that next cycle we're going to have to say "nova,
> neutron, cinder, os-brick, and vif library must all upgrade at exactly
> the same time". Which is clearly not a thing we want. Had we figured out
> this rootwrap limitation early, os-brick would never have been put into
> Nova because it makes the upgrade process demonstrably worse and more
> fragile.

Right. The only reason why packagers got it right the first time is
because things were a lot simpler back then (Ubuntu was almost the only
game in town) and I personally got involved and made sure they got it
right. I suspect you're right when you assume that if we rely on them to
push the right things at the right places at this stage in the cycle,
that would probably go wrong in a lot of places.

I also agree that os-brick is not worth it if the cost is crappy
upgrades. I'm just unsure adding a layer of config on top will solve
things. I kinda like the idea of not relying on the filesystem at all
(as proposed by Doug) since the filesystem is so deployment-dependent.

-- 
Thierry Carrez (ttx)


From thierry at openstack.org  Thu Sep 10 17:35:21 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 10 Sep 2015 19:35:21 +0200
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <1441904554-sup-3119@lrrr>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
 <1441904554-sup-3119@lrrr>
Message-ID: <55F1BF59.2000906@openstack.org>

Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2015-09-10 14:23:34 +0200:
>> My quick 2cents on this. Rootwrap was designed as a generic solution to
>> wrap privileged calls. That's why filter files are part of its
>> "configuration". The problem is, OpenStack needs a pretty precise set of
>> those filters to be "configured" to run properly. So it's configuration
>> for rootwrap, but not "configuration" for OpenStack.
> 
> That makes them sound like data, not configuration. If that's the case,
> Python's pkgutil module is an existing API for putting a data file
> inside a library and then accessing it. Maybe we should look at moving
> the files to a place that lets us use that, instead of requiring any
> deployer-based configuration at all. Combining that with the "symbolic"
> suggestion from Sean, we would use the package name as the symbol and
> there would be a well-defined resource name within that package to use
> with pkgutil.get_data() [1].

That sounds promising. One trick is that it's the consuming application
that needs to ship the filters, not the library itself (so rootwrap
would have to look into nova resources, not rootwrap resources). Another
trick is that it should require root rights (not nova rights) to change
those resources, otherwise the security model is broken (the whole idea
of rootwrap being to restrict what a compromised nova user can do to the
system).

-- 
Thierry Carrez (ttx)


From harlowja at outlook.com  Thu Sep 10 17:54:51 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Thu, 10 Sep 2015 10:54:51 -0700
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F1BF59.2000906@openstack.org>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
 <1441904554-sup-3119@lrrr> <55F1BF59.2000906@openstack.org>
Message-ID: <BLU437-SMTP12D6C0179E76113F6FA66AD8510@phx.gbl>

Just out of curiosity (not 100% related to this thread): other 
applications also bundle configuration files (for example, the heat 
templates at https://github.com/openstack/magnum/tree/master/magnum/templates).

Should there be some guidelines on how these config files are packaged 
and distributed for those projects as well (i.e., also use pkgutil)? 
It's probably a good idea to have one, so that 
distributors/installers/packaging folk don't have to do something custom 
when packaging these applications/libraries.

-Josh

Thierry Carrez wrote:
> Doug Hellmann wrote:
>> Excerpts from Thierry Carrez's message of 2015-09-10 14:23:34 +0200:
>>> My quick 2cents on this. Rootwrap was designed as a generic solution to
>>> wrap privileged calls. That's why filter files are part of its
>>> "configuration". The problem is, OpenStack needs a pretty precise set of
>>> those filters to be "configured" to run properly. So it's configuration
>>> for rootwrap, but not "configuration" for OpenStack.
>> That makes them sound like data, not configuration. If that's the case,
>> Python's pkgutil module is an existing API for putting a data file
>> inside a library and then accessing it. Maybe we should look at moving
>> the files to a place that lets us use that, instead of requiring any
>> deployer-based configuration at all. Combining that with the "symbolic"
>> suggestion from Sean, we would use the package name as the symbol and
>> there would be a well-defined resource name within that package to use
>> with pkgutil.get_data() [1].
>
> That sounds promising. One trick is that it's the consuming application
> that needs to ship the filters, not the library itself (so rootwrap
> would have to look into nova resources, not rootwrap resources). Another
> trick is that it should require root rights (not nova rights) to change
> those resources, otherwise the security model is broken (the whole idea
> of rootwrap being to restrict what a compromised nova user can do to the
> system).
>


From gord at live.ca  Thu Sep 10 18:03:56 2015
From: gord at live.ca (gord chung)
Date: Thu, 10 Sep 2015 14:03:56 -0400
Subject: [openstack-dev] [ceilometer][aodh][gnocchi] Tokyo design session
	planning
Message-ID: <BLU436-SMTP154A26551ED2808F2A17C96DE510@phx.gbl>

hi,

as mentioned during today's meeting, since we have our slots for design 
summit, we'll start accepting proposals for the telemetry-related topics 
for the Tokyo summit.

similar to previous design summits, anyone is welcome to propose topics 
related to ceilometer, aodh, gnocchi, or any other 
monitoring/metering/alarming related project.

official proposals can be submitted here: 
https://docs.google.com/spreadsheets/d/1ea_P2k1kNy_SILEnC-5Zl61IFhqEOrlZ5L8VY4-xNB0/edit?usp=sharing

we've also created an etherpad for those wishing to brainstorm ideas 
before adding formal proposal: 
https://etherpad.openstack.org/p/tokyo-ceilometer-design-summit

we have tentatively set submission deadline for October 5, 2015, which 
will be followed by a public vote.

cheers,

-- 
gord

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/7243eca2/attachment.html>

From sean at dague.net  Thu Sep 10 18:11:20 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 10 Sep 2015 14:11:20 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <1441904554-sup-3119@lrrr>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
 <1441904554-sup-3119@lrrr>
Message-ID: <55F1C7C8.8040902@dague.net>

On 09/10/2015 01:05 PM, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2015-09-10 14:23:34 +0200:
>> Sean Dague wrote:
>>> Right now, they are all a bunch of files, they can be anywhere. And then
>>> you have other files that have to reference these files by path, which
>>> can be anywhere. We could just punt in that part and say "punt! every
>>> installer and configuration management install needs to solve this on
>>> their own." I'm not convinced that's a good answer. The os-brick filters
>>> aren't really config. If you change them all that happens is
>>> terribleness. Stuff stops working, and you don't know why. They are data
>>> to exchange with another process about how to function. Honestly, they
>>> should probably be python code that's imported by rootwrap.
>>>
>>> Much like the issues around clouds failing when you try to GET /v2 on
>>> the Nova API (because we have a bunch of knobs you have to align for SSL
>>> termination, and a bunch of deployers didn't), I don't think we should
>>> be satisfied with "there's a config for that!" when all that config
>>> means is that someone can break their configuration if they don't get it
>>> exactly right.
>>
>> My quick 2cents on this. Rootwrap was designed as a generic solution to
>> wrap privileged calls. That's why filter files are part of its
>> "configuration". The problem is, OpenStack needs a pretty precise set of
>> those filters to be "configured" to run properly. So it's configuration
>> for rootwrap, but not "configuration" for OpenStack.
> 
> That makes them sound like data, not configuration. If that's the case,
> Python's pkgutil module is an existing API for putting a data file
> inside a library and then accessing it. Maybe we should look at moving
> the files to a place that lets us use that, instead of requiring any
> deployer-based configuration at all. Combining that with the "symbolic"
> suggestion from Sean, we would use the package name as the symbol and
> there would be a well-defined resource name within that package to use
> with pkgutil.get_data() [1].
> 
> Doug
> 
> [1] https://docs.python.org/2.7/library/pkgutil.html#pkgutil.get_data

That sounds reasonable as well. Monty and dstuft had sent us down the
get_resource direction instead using setuptools -
https://pythonhosted.org/setuptools/pkg_resources.html. If you think
that get_data is a better option, this would apply just as well there.

The important thing is: filters are tied to code, so they should be able
to be looked up symbolically based on the code level that's installed.
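For illustration, the symbolic lookup being discussed could look something like the sketch below. This is only a hypothetical shape, assuming filters ship as package data under an invented resource name ("rootwrap.filters"); the actual resource name would be fixed by the spec.

```python
import pkgutil


def load_rootwrap_filters(package_name):
    """Look up a package's bundled rootwrap filters symbolically.

    Hypothetical sketch: rootwrap would be handed the package name (the
    "symbol") and read a well-known resource from it, instead of being
    pointed at a filesystem path by deployer configuration.
    """
    data = pkgutil.get_data(package_name, "rootwrap.filters")
    if data is None:
        raise LookupError("%s does not ship rootwrap filters" % package_name)
    return data.decode("utf-8")
```

Because the lookup goes through the import system, the filters found always match the installed code level, which is the property being argued for here.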

	-Sean

-- 
Sean Dague
http://dague.net


From clint at fewbar.com  Thu Sep 10 18:21:47 2015
From: clint at fewbar.com (Clint Byrum)
Date: Thu, 10 Sep 2015 11:21:47 -0700
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <55F1B0D7.8070404@mhtx.net>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net>
Message-ID: <1441909133-sup-2320@fewbar.com>

Excerpts from Major Hayden's message of 2015-09-10 09:33:27 -0700:
> Hash: SHA256
> 
> On 09/10/2015 11:22 AM, Matthew Thode wrote:
> > Sane defaults can't be used?  The two bugs you listed look fine to me as
> > default things to do.
> 
> Thanks, Matthew.  I tend to agree.
> 
> I'm wondering if it would be best to make a "punch list" of CIS benchmarks and try to tag them with one of the following:
> 
>   * Do this in OSAD
>   * Tell deployers how to do this (in docs)

Just a thought from somebody outside of this. If OSAD can provide the
automation, turned off by default as a convenience, and run a bank of
tests with all of these turned on to make sure they do actually work with
the stock configuration, you'll get more traction this way. Docs should
be the focus of this effort, but the effort should be on explaining how
it fits into the system so operators who are customizing know when they
will have to choose a less secure path. One should be able to have code
do the "turn it on" "turn it off" mechanics.

>   * Tell deployers not to do this (in docs)
> 
> That could be lumped in with a spec/blueprint of some sort.  Would that be beneficial?
> 
> 


From tushar.gohad at intel.com  Thu Sep 10 18:31:46 2015
From: tushar.gohad at intel.com (Gohad, Tushar)
Date: Thu, 10 Sep 2015 18:31:46 +0000
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <55F19744.6020503@dague.net>
References: <55F19744.6020503@dague.net>
Message-ID: <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>

Hi Sean,


To start with, "pip install PyECLib==1.0.9" works fine on a clean trusty instance.


Looking at the gate log - http://logs.openstack.org/92/115092/11/check/gate-grenade-dsvm/ecfb1f5/logs/grenade.sh.txt.gz#_2015-09-10_13_54_32_069:

-[snip]-

2015-09-10 13:55:52.335 | Collecting PyECLib===1.0.7 (from -c /opt/stack/new/requirements/upper-constraints.txt (line 15))
2015-09-10 13:55:52.393 |   Downloading http://pypi.region-b.geo-1.openstack.org/packages/source/P/PyECLib/PyECLib-1.0.7.tar.gz (8.4MB)

-[snip]-


If requirements.txt has "PyECLib >= 1.0.7", and latest on PyPI is PyECLib 1.0.9, I wonder why the slave is trying to pull 1.0.7 .. 

Also, how is "upper-constraints.txt" used?


Thanks
Tushar



-----Original Message-----
From: Sean Dague [mailto:sean at dague.net] 
Sent: Thursday, September 10, 2015 7:44 AM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release

The pyeclib 1.0.9 release has broken the gate because Swift is in the default grenade upgrade jobs, and Swift stable/kilo allows 1.0.9 (which doesn't compile correctly with a pip install).

We're working to pin requirements in kilo/juno right now, but anything that has a grenade job is going to fail until these land.

	-Sean

--
Sean Dague
http://dague.net

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From mriedem at linux.vnet.ibm.com  Thu Sep 10 18:41:49 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 10 Sep 2015 13:41:49 -0500
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>
References: <55F19744.6020503@dague.net>
 <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>
Message-ID: <55F1CEED.8040304@linux.vnet.ibm.com>



On 9/10/2015 1:31 PM, Gohad, Tushar wrote:
> Hi Sean,
>
>
> To start with, "pip install PyECLib==1.0.9" works fine on a clean trusty instance.
>
>
> Looking at the gate log - http://logs.openstack.org/92/115092/11/check/gate-grenade-dsvm/ecfb1f5/logs/grenade.sh.txt.gz#_2015-09-10_13_54_32_069:
>
> -[snip]-
>
> 2015-09-10 13:55:52.335 | Collecting PyECLib===1.0.7 (from -c /opt/stack/new/requirements/upper-constraints.txt (line 15))
> 2015-09-10 13:55:52.393 |   Downloading http://pypi.region-b.geo-1.openstack.org/packages/source/P/PyECLib/PyECLib-1.0.7.tar.gz (8.4MB)
>
> -[snip]-
>
>
> If requirements.txt has "PyECLib >= 1.0.7", and latest on PyPI is PyECLib 1.0.9, I wonder why the slave is trying to pull 1.0.7 ..
>
> Also how is "upper-constaints.txt" used?
>
>
> Thanks
> Tushar
>
>
>
> -----Original Message-----
> From: Sean Dague [mailto:sean at dague.net]
> Sent: Thursday, September 10, 2015 7:44 AM
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
>
> The pyeclib 1.0.9 release has broken the gate because Swift is in the default grenade upgrade jobs, and Swift stable/kilo allows 1.0.9 (which doesn't compile correctly with a pip install).
>
> We're working to pin requirements in kilo/juno right now, but anything that has a grenade job is going to fail until these land.
>
> 	-Sean
>
> --
> Sean Dague
> http://dague.net
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

The grenade job is first installing 1.0.9 for kilo since pyeclib is 
uncapped there (well, it was uncapped until today).

Then it goes to install the 'new' side of grenade, which is liberty 
(master) and pyeclib is pinned to 1.0.7 there:

https://github.com/openstack/requirements/blob/master/global-requirements.txt#L125

That's where it's trying to downgrade from 1.0.9 to 1.0.7 and blows up.
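The interaction can be modeled in miniature. The sketch below is a toy, not pip's actual resolver, and `resolve` is an invented helper: without a constraints pin, the newest release satisfying the requirement wins; with `-c upper-constraints.txt`, the pin wins even when a newer release is already installed, forcing the downgrade described above.

```python
def _parse(version):
    # "1.0.9" -> (1, 0, 9), so versions compare numerically
    return tuple(int(part) for part in version.split("."))


def resolve(available, minimum, pin=None):
    """Toy model of pip's selection under a requirement and an optional pin."""
    if pin is not None:
        return pin  # upper-constraints pins the exact version
    return max((v for v in available if _parse(v) >= _parse(minimum)),
               key=_parse)


# kilo side, PyECLib uncapped:
#   resolve(["1.0.7", "1.0.9"], "1.0.7")               -> "1.0.9"
# liberty side, pinned via upper-constraints:
#   resolve(["1.0.7", "1.0.9"], "1.0.7", pin="1.0.7")  -> "1.0.7"
```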

-- 

Thanks,

Matt Riedemann



From jpeeler at redhat.com  Thu Sep 10 18:55:29 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Thu, 10 Sep 2015 14:55:29 -0400
Subject: [openstack-dev] Fwd: [kolla] Planned maintenance for Delorean
	instance - September 14-15
Message-ID: <CALesnTwWP3OmKF_KgwFctmJqfXPiSobFBr2GXyQQ8W28q_Z45w@mail.gmail.com>

FYI

---------- Forwarded message ----------
From: Javier Pena <javier.pena at redhat.com>
Date: Thu, Sep 10, 2015 at 11:50 AM
Subject: [Rdo-list] [delorean] Planned maintenance for Delorean instance -
September 14-15
To: rdo-list <rdo-list at redhat.com>

Dear all,

Due to a planned maintenance of the infrastructure supporting the Delorean
instance (trunk.rdoproject.org), it is expected to be offline between
September 14 (~ 9PM EDT) and September 15 (~ 9PM EDT).

We will be sending updates to the list if there is any additional
information or change in the plans, and keep you updated on the status.

Please let us know if you have any questions or concerns.

Regards,
Javier

----
Javier Peña, RHCA                    email: javier.pena at redhat.com
Senior Software Engineer             phone: +34 914148872
EMEA OpenStack Engineering
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/1d6010d7/attachment.html>

From robertc at robertcollins.net  Thu Sep 10 19:23:00 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 11 Sep 2015 07:23:00 +1200
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <55F1CEED.8040304@linux.vnet.ibm.com>
References: <55F19744.6020503@dague.net>
 <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>
 <55F1CEED.8040304@linux.vnet.ibm.com>
Message-ID: <CAJ3HoZ18XE3+VJLTznau0PjEftS+DLBi63TrQ2vCAEWAqkakVw@mail.gmail.com>

Note that master is pinned:

commit aca1a74909d7a2841cd9805b7f57c867a1f74b73
Author: Tushar Gohad <tushar.gohad at intel.com>
Date:   Tue Aug 18 07:55:18 2015 +0000

    Restrict PyECLib version to 1.0.7

    v1.0.9 rev of PyECLib replaces Jerasure with a native EC
    implementation (liberasurecode_rs_vand) as the default
    EC scheme.  Going forward, Jerasure will not be bundled
    with PyPI version of PyECLib as it used to be, until
    v1.0.7.

    This is an interim change to Global/Swift requirements
    until we get v1.0.9 PyECLib released and included in
    global-requirements and ready patches that change Swift
    default ec_type (for doc, config samples and unit tests)
    from "jerasure_rs_vand" to "liberasurecode_rs_vand."

    Without this change, Swift unit tests will break at gate
    as soon as PyECLib v1.0.9 lands on PyPI

    * Swift is the only user of PyECLib at the moment

    Change-Id: I52180355b95679cbcddd497bbdd9be8e7167a3c7


But it appears a matching change was not done to j/k - and the pin
hasn't been removed from master.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From Brianna.Poulos at jhuapl.edu  Thu Sep 10 19:28:02 2015
From: Brianna.Poulos at jhuapl.edu (Poulos, Brianna L.)
Date: Thu, 10 Sep 2015 19:28:02 +0000
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com> <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
Message-ID: <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu>

Malini,

Thank you for bringing up the "killed" state as it relates to quota.  We
opted to move the image to a killed state since that is what occurs when
an upload fails, and the signature verification failure would occur during
an upload.  But we should keep in mind the potential to take up space and
yet not take up quota when signature verification fails.

Regarding the MD5 hash, there is currently a glance spec [1] to allow the
hash method used for the checksum to be configurable; currently it is
hardcoded in glance.  After making it configurable, the default would
transition from MD5 to something more secure (like SHA-256).

[1] https://review.openstack.org/#/c/191542/
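As background, the plain checksum check that already happens on download (independent of the new signature work) amounts to something like the sketch below. `verify_download` is an invented helper name, and MD5 mirrors the currently hardcoded default discussed above.

```python
import hashlib


def verify_download(image_bytes, glance_checksum):
    # Recompute the digest of the downloaded bytes and compare it with
    # the checksum Glance reported; a mismatch indicates the image data
    # was (unintentionally) modified in transit.
    digest = hashlib.md5(image_bytes).hexdigest()
    if digest != glance_checksum:
        raise ValueError("checksum mismatch: got %s" % digest)
    return True
```

Note this only detects accidental corruption; the signature verification flow exists precisely because a checksum alone cannot detect a deliberate tamper that also replaces the checksum.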

Thanks,
~Brianna




On 9/10/15, 5:10 , "Bhandaru, Malini K" <malini.k.bhandaru at intel.com>
wrote:

>Brianna, I can imagine a denial of service attack by uploading images
>whose signature is invalid if we allow them to reside in Glance
>in a "killed" state. This would be less of an issue if "killed" images still
>consume storage quota until actually deleted.
>Also, given MD5 is less secure, why not have the default hash be SHA-1 or 2?
>Regards
>Malini
>
>-----Original Message-----
>From: Poulos, Brianna L. [mailto:Brianna.Poulos at jhuapl.edu]
>Sent: Wednesday, September 09, 2015 9:54 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Cc: stuart.mclaren at hp.com
>Subject: Re: [openstack-dev] [glance] [nova] Verification of glance
>images before boot
>
>Stuart is right about what will currently happen in Nova when an image is
>downloaded, which protects against unintentional modifications to the
>image data.
>
>What is currently being worked on is adding the ability to verify a
>signature of the checksum.  The flow of this is as follows:
>1. The user creates a signature of the "checksum hash" (currently MD5) of
>the image data offline.
>2. The user uploads a public key certificate, which can be used to verify
>the signature to a key manager (currently Barbican).
>3. The user creates an image in glance, with signature metadata
>properties.
>4. The user uploads the image data to glance.
>5. If the signature metadata properties exist, glance verifies the
>signature of the "checksum hash", including retrieving the certificate
>from the key manager.
>6. If the signature verification fails, glance moves the image to a
>killed state, and returns an error message to the user.
>7. If the signature verification succeeds, a log message indicates that
>it succeeded, and the image upload finishes successfully.
>
>8. Nova requests the image from glance, along with the image properties,
>in order to boot it.
>9. Nova uses the signature metadata properties to verify the signature
>(if a configuration option is set).
>10. If the signature verification fails, nova does not boot the image,
>but errors out.
>11. If the signature verification succeeds, nova boots the image, and a
>log message notes that the verification succeeded.
>
>Regarding what is currently in Liberty, the blueprint mentioned [1] has
>merged, and code [2] has also been merged in glance, which handles steps
>1-7 of the flow above.
>
>For steps 8-11, there is currently a nova blueprint [3], along with code
>[4], which are proposed for Mitaka.
>
>Note that we are in the process of adding official documentation, with
>examples of creating the signature as well as the properties that need to
>be added for the image before upload.  In the meantime, there's an
>etherpad that describes how to test the signature verification
>functionality in Glance [5].
>
>Also note that this is the initial approach, and there are some
>limitations.  For example, ideally the signature would be based on a
>cryptographically secure (i.e. not MD5) hash of the image.  There is a
>spec in glance to allow this hash to be configurable [6].
>
>[1]
>https://blueprints.launchpad.net/glance/+spec/image-signing-and-verificati
>o
>n-support
>[2]
>https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107dd
>b
>4b04ff6e
>[3] https://review.openstack.org/#/c/188874/
>[4] https://review.openstack.org/#/c/189843/
>[5]
>https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>[6] https://review.openstack.org/#/c/191542/
>
>
>Thanks,
>~Brianna
>
>
>
>
>On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com> wrote:
>
>>That's correct.
>>
>>The size and the checksum are to be verified outside of Glance, in this
>>case Nova. However, you may want to note that it's not necessary that
>>all Nova virt drivers would use py-glanceclient so you would want to
>>check the download specific code in the virt driver your Nova
>>deployment is using.
>>
>>Having said that, essentially the flow seems appropriate. An error must
>>be raised on mismatch.
>>
>>The signing BP was to help prevent the compromised Glance from changing
>>the checksum and image blob at the same time. Using a digital
>>signature, you can prevent download of compromised data. However, the
>>feature has just been implemented in Glance; Glance users may take time
>>to adopt.
>>
>>
>>
>>On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>>
>>> The glance client (running 'inside' the Nova server) will
>>> re-calculate the checksum as it downloads the image and then compare
>>> it against the expected value. If they don't match an error will be
>>>raised.
>>>
>>>> How can I know that the image that a new instance is spawned from -
>>>> is actually the image that was originally registered in glance - and
>>>> has not been maliciously tampered with in some way?
>>>>
>>>> Is there some kind of verification that is performed against the
>>>> md5sum of the registered image in glance before a new instance is
>>>>spawned?
>>>>
>>>> Is that done by Nova?
>>>> Glance?
>>>> Both? Neither?
>>>>
>>>> The reason I ask is some 'paranoid' security (that is their job I
>>>> suppose) people have raised these questions.
>>>>
>>>> I know there is a glance BP already merged for L [1] - but I would
>>>> like to understand the actual flow in a bit more detail.
>>>>
>>>> Thanks.
>>>>
>>>> [1]
>>>> 
>>>>https://blueprints.launchpad.net/glance/+spec/image-signing-and-verif
>>>>ica
>>>>tion-support
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Maish Saidel-Keesing
>>>>
>>>>
>>>>
>>>> ------------------------------
>>>>
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>> End of OpenStack-dev Digest, Vol 41, Issue 22
>>>> *********************************************
>>>>
>>>
>>> 
>>>______________________________________________________________________
>>>___
>>>_
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>--
>>
>>Thanks,
>>Nikhil
>>
>>
>>_______________________________________________________________________
>>___ OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: 
>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From julien at danjou.info  Thu Sep 10 19:29:03 2015
From: julien at danjou.info (Julien Danjou)
Date: Thu, 10 Sep 2015 21:29:03 +0200
Subject: [openstack-dev] [puppet] [ceilometer] Puppetize OpenStack Aodh
In-Reply-To: <55F1BA53.3080708@redhat.com> (Emilien Macchi's message of "Thu, 
 10 Sep 2015 13:13:55 -0400")
References: <55F1BA53.3080708@redhat.com>
Message-ID: <m0r3m6oyyo.fsf@danjou.info>

On Thu, Sep 10 2015, Emilien Macchi wrote:

> I'll of course use puppet-openstack-cookiecutter to create our new
> module, I'll ask to Ceilometer team to have a look once we have the
> basic structure in place, I have now no idea how to deploy it :-)
> I guess looking at how devstack deploys it will help us to understand
> the resources that need to be created.

It's exactly the same pattern that you use right now in Puppet for the
Ceilometer alarming daemon. Consider Aodh as just a code move and a
rename for now. :)

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/4073554e/attachment.pgp>

From nik.komawar at gmail.com  Thu Sep 10 19:36:38 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 10 Sep 2015 15:36:38 -0400
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com> <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
 <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu>
Message-ID: <55F1DBC6.2000904@gmail.com>

The solution to this problem is to improve the scrubber to clean up the
garbage data left behind in the backend store during such failed uploads.

Currently, the scrubber cleans up images in pending_delete status;
extending that to images in killed status would avoid such a situation.
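A minimal sketch of that extension (hypothetical names and data shapes; the real scrubber's selection logic lives in Glance itself and is not this simple):

```python
# Sketch: widen the scrubber's selection to cover 'killed' images so the
# garbage data left behind by failed uploads is reclaimed. The statuses,
# image dicts, and `store` mapping here are invented for illustration.
SCRUBBABLE_STATUSES = ('pending_delete', 'killed')

def scrub(images, store):
    """Delete backend data for images whose status makes them scrubbable."""
    scrubbed = []
    for image in images:
        if image['status'] in SCRUBBABLE_STATUSES:
            store.pop(image['id'], None)   # drop the orphaned image data
            scrubbed.append(image['id'])
    return scrubbed
```

With a toy in-memory store, only the active image's data survives a scrub pass.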

On 9/10/15 3:28 PM, Poulos, Brianna L. wrote:
> Malini,
>
> Thank you for bringing up the "killed" state as it relates to quota.  We
> opted to move the image to a killed state since that is what occurs when
> an upload fails, and the signature verification failure would occur during
> an upload.  But we should keep in mind the potential to take up space and
> yet not take up quota when signature verification fails.
>
> Regarding the MD5 hash, there is currently a glance spec [1] to allow the
> hash method used for the checksum to be configurable -- currently it is
> hardcoded in glance.  After making it configurable, the default would
> transition from MD5 to something more secure (like SHA-256).
>
> [1] https://review.openstack.org/#/c/191542/
>
> Thanks,
> ~Brianna
>
>
>
>
> On 9/10/15, 5:10 , "Bhandaru, Malini K" <malini.k.bhandaru at intel.com>
> wrote:
>
>> Brianna, I can imagine a denial of service attack by uploading images
>> whose signature is invalid if we allow them to reside in Glance
>> in a "killed" state. This would be less of an issue if "killed" images still
>> consume storage quota until actually deleted.
>> Also, given MD5 is less secure, why not have the default hash be SHA-1 or SHA-2?
>> Regards
>> Malini
>>
>> -----Original Message-----
>> From: Poulos, Brianna L. [mailto:Brianna.Poulos at jhuapl.edu]
>> Sent: Wednesday, September 09, 2015 9:54 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: stuart.mclaren at hp.com
>> Subject: Re: [openstack-dev] [glance] [nova] Verification of glance
>> images before boot
>>
>> Stuart is right about what will currently happen in Nova when an image is
>> downloaded, which protects against unintentional modifications to the
>> image data.
>>
>> What is currently being worked on is adding the ability to verify a
>> signature of the checksum.  The flow of this is as follows:
>> 1. The user creates a signature of the "checksum hash" (currently MD5) of
>> the image data offline.
>> 2. The user uploads a public key certificate, which can be used to verify
>> the signature to a key manager (currently Barbican).
>> 3. The user creates an image in glance, with signature metadata
>> properties.
>> 4. The user uploads the image data to glance.
>> 5. If the signature metadata properties exist, glance verifies the
>> signature of the "checksum hash", including retrieving the certificate
>> from the key manager.
>> 6. If the signature verification fails, glance moves the image to a
>> killed state, and returns an error message to the user.
>> 7. If the signature verification succeeds, a log message indicates that
>> it succeeded, and the image upload finishes successfully.
>>
>> 8. Nova requests the image from glance, along with the image properties,
>> in order to boot it.
>> 9. Nova uses the signature metadata properties to verify the signature
>> (if a configuration option is set).
>> 10. If the signature verification fails, nova does not boot the image,
>> but errors out.
>> 11. If the signature verification succeeds, nova boots the image, and a
>> log message notes that the verification succeeded.
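The glance-side gate in steps 5-7 can be sketched roughly like this (a toy model, not Glance's actual code: `finish_upload` and `toy_verify` are hypothetical stand-ins for the real Barbican-backed certificate verification):

```python
import hashlib

def finish_upload(image, data, verify_signature):
    """Sketch of steps 5-7: gate the end of an upload on verification.

    `verify_signature` stands in for the real check, which retrieves the
    certificate from the key manager and verifies the signature of the
    "checksum hash" (currently MD5).
    """
    checksum = hashlib.md5(data).hexdigest()
    image['checksum'] = checksum
    if 'signature' in image.get('properties', {}):
        if not verify_signature(image['properties'], checksum):
            image['status'] = 'killed'          # step 6: move to killed state
            raise ValueError('signature verification failed')
    image['status'] = 'active'                  # step 7: upload succeeds
    return image

# Toy "verifier": the pre-computed signature is just the expected checksum.
toy_verify = lambda props, checksum: props['signature'] == checksum
```

With a matching signature the image goes active; with a bad one it lands in the killed state and the caller gets an error, mirroring steps 6 and 7.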
>>
>> Regarding what is currently in Liberty, the blueprint mentioned [1] has
>> merged, and code [2] has also been merged in glance, which handles steps
>> 1-7 of the flow above.
>>
>> For steps 8-11, there is currently a nova blueprint [3], along with code
>> [4], which are proposed for Mitaka.
>>
>> Note that we are in the process of adding official documentation, with
>> examples of creating the signature as well as the properties that need to
>> be added for the image before upload.  In the meantime, there's an
>> etherpad that describes how to test the signature verification
>> functionality in Glance [5].
>>
>> Also note that this is the initial approach, and there are some
>> limitations.  For example, ideally the signature would be based on a
>> cryptographically secure (i.e. not MD5) hash of the image.  There is a
>> spec in glance to allow this hash to be configurable [6].
>>
>> [1]
>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verificati
>> o
>> n-support
>> [2]
>> https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107dd
>> b
>> 4b04ff6e
>> [3] https://review.openstack.org/#/c/188874/
>> [4] https://review.openstack.org/#/c/189843/
>> [5]
>> https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>> [6] https://review.openstack.org/#/c/191542/
>>
>>
>> Thanks,
>> ~Brianna
>>
>>
>>
>>
>> On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com> wrote:
>>
>>> That's correct.
>>>
>>> The size and the checksum are to be verified outside of Glance, in this
>>> case Nova. However, you may want to note that it's not necessary that
>>> all Nova virt drivers would use py-glanceclient so you would want to
>>> check the download specific code in the virt driver your Nova
>>> deployment is using.
>>>
>>> Having said that, essentially the flow seems appropriate. An error must
>>> be raised on mismatch.
>>>
>>> The signing BP was to help prevent the compromised Glance from changing
>>> the checksum and image blob at the same time. Using a digital
>>> signature, you can prevent download of compromised data. However, the
>>> feature has just been implemented in Glance; Glance users may take time
>>> to adopt.
>>>
>>>
>>>
>>> On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>>> The glance client (running 'inside' the Nova server) will
>>>> re-calculate the checksum as it downloads the image and then compare
>>>> it against the expected value. If they don't match an error will be
>>>> raised.
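What that client-side re-calculation amounts to, as a rough sketch (a hypothetical helper, not glanceclient's actual implementation):

```python
import hashlib

def download_and_verify(chunks, expected_md5):
    """Re-calculate the checksum while consuming the download stream and
    compare it against the expected value at the end, raising on mismatch
    (sketch of what the glance client does for Nova)."""
    md5 = hashlib.md5()
    body = b''
    for chunk in chunks:
        md5.update(chunk)
        body += chunk
    if md5.hexdigest() != expected_md5:
        raise IOError('checksum %s does not match expected %s'
                      % (md5.hexdigest(), expected_md5))
    return body
```

Note this only protects against unintentional corruption in transit; a compromised Glance could serve both a tampered image and a matching checksum, which is what the signature work addresses.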
>>>>
>>>>> How can I know that the image that a new instance is spawned from -
>>>>> is actually the image that was originally registered in glance - and
>>>>> has not been maliciously tampered with in some way?
>>>>>
>>>>> Is there some kind of verification that is performed against the
>>>>> md5sum of the registered image in glance before a new instance is
>>>>> spawned?
>>>>>
>>>>> Is that done by Nova?
>>>>> Glance?
>>>>> Both? Neither?
>>>>>
>>>>> The reason I ask is some 'paranoid' security (that is their job I
>>>>> suppose) people have raised these questions.
>>>>>
>>>>> I know there is a glance BP already merged for L [1] - but I would
>>>>> like to understand the actual flow in a bit more detail.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> [1]
>>>>>
>>>>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verif
>>>>> ica
>>>>> tion-support
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards,
>>>>> Maish Saidel-Keesing
>>>>>
>>>>>
>>>>>
>>>>> ------------------------------
>>>>>
>>>>> _______________________________________________
>>>>> OpenStack-dev mailing list
>>>>> OpenStack-dev at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>> End of OpenStack-dev Digest, Vol 41, Issue 22
>>>>> *********************************************
>>>>>
>>>>
>>>> ______________________________________________________________________
>>>> ___
>>>> _
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> --
>>>
>>> Thanks,
>>> Nikhil
>>>
>>>
>>> _______________________________________________________________________
>>> ___ OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From thierry at openstack.org  Thu Sep 10 19:37:14 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 10 Sep 2015 21:37:14 +0200
Subject: [openstack-dev] [nova] [all] Updated String Freeze Guidelines
In-Reply-To: <CABib2_pcxZ=C+n3iEDe7CfdB2o0pkOvVJpK+BeaaS6EnyK5qHQ@mail.gmail.com>
References: <CABib2_pcxZ=C+n3iEDe7CfdB2o0pkOvVJpK+BeaaS6EnyK5qHQ@mail.gmail.com>
Message-ID: <55F1DBEA.3020904@openstack.org>

John Garbutt wrote:
> [...]
> After yesterday's cross project meeting, and hanging out in
> #openstack-i18n I have come up with these updates to the String Freeze
> Guidelines:
> https://wiki.openstack.org/wiki/StringFreeze
> 
> Basically, we have a Soft String Freeze from Feature Freeze until RC1:
> * Translators work through all existing strings during this time
> * So avoid changing existing translatable strings
> * Additional strings are generally OK
> 
> Then post RC1, we have a Hard String Freeze:
> * No new strings, and no string changes
> * Exceptions need discussion
> 
> Then at least 10 working days after RC1:
> * we need a new RC candidate to include any updated strings
> 
> Is everyone happy with these changes?

That sounds pretty good.


-- 
Thierry Carrez (ttx)


From robertc at robertcollins.net  Thu Sep 10 19:45:19 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 11 Sep 2015 07:45:19 +1200
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <CAJ3HoZ18XE3+VJLTznau0PjEftS+DLBi63TrQ2vCAEWAqkakVw@mail.gmail.com>
References: <55F19744.6020503@dague.net>
 <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>
 <55F1CEED.8040304@linux.vnet.ibm.com>
 <CAJ3HoZ18XE3+VJLTznau0PjEftS+DLBi63TrQ2vCAEWAqkakVw@mail.gmail.com>
Message-ID: <CAJ3HoZ2VvwcVsYYXQhw=EtazZvmTR7ubbqDOWV6drUKk9Qr+6w@mail.gmail.com>

On 11 September 2015 at 07:23, Robert Collins <robertc at robertcollins.net> wrote:
> Note that master is pinned:
>
> commit aca1a74909d7a2841cd9805b7f57c867a1f74b73
> Author: Tushar Gohad <tushar.gohad at intel.com>
> Date:   Tue Aug 18 07:55:18 2015 +0000
>
>     Restrict PyECLib version to 1.0.7
>
>     v1.0.9 rev of PyECLib replaces Jerasure with a native EC
>     implementation (liberasurecode_rs_vand) as the default
>     EC scheme.  Going forward, Jerasure will not be bundled
>     with PyPI version of PyECLib as it used to be, until
>     v1.0.7.
>
>     This is an interim change to Global/Swift requirements
>     until we get v1.0.9 PyECLib released and included in
>     global-requirements and ready patches that change Swift
>     default ec_type (for doc, config samples and unit tests)
>     from "jerasure_rs_vand" to "liberasurecode_rs_vand."
>
>     Without this change, Swift unit tests will break at gate
>     as soon as PyECLib v1.0.9 lands on PyPI
>
>     * Swift is the only user of PyECLib at the moment
>
>     Change-Id: I52180355b95679cbcddd497bbdd9be8e7167a3c7
>
>
> But it appears a matching change was not done to j/k - and the pin
> hasn't been removed from master.

I'm going to propose another manual review rule I think: we should not
permit lower releases to use higher versions of libraries -
approximately no one tests downgrades of their thing [and while it only
matters for packages with weird installs / state management things,
it's a glaring hole in our reliability story].

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From nik.komawar at gmail.com  Thu Sep 10 19:48:11 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 10 Sep 2015 15:48:11 -0400
Subject: [openstack-dev] [Glance] python-glanceclient 1.0.x back compat for
 v2 proposal.
Message-ID: <55F1DE7B.6030205@gmail.com>

Hi all,

We have a review [1] that suggests adding backward compatibility to the
existing CLI, that is currently defaulting to v2, to be able to support
some v1 like parameters. We considered this option by consulting in
Cross Project Meeting, operators list, multiple developer meetings and
irc conversations. Here are some of the thoughts gathered from the
discussions and feedback:

1. It's been close to 10 days since this library was released, and not
many requests have come in to add such back-compat changes.
2. The number of operators who have requested such changes is small,
based on the feedback received on the operators' ML. Many of them have
offered alternative suggestions for making your scripts work with the
CLI.
3. python-glanceclient upgrades need to be done in a staged manner, by
cross-checking the release notes or, preferably, commit messages if your
deployment is fragile to upgrades. A CI/CD pipeline shouldn't need such
back-compat changes for a long period of time.
4. Setting environment variables or adding the version number to the
CLI parameters can help avoid breakage.
5. Working with the documentation team to help script writers change
their scripts, and documenting why the change was necessary, would help
give a clearer picture. This is just a possibility and hasn't fanned out
into a concrete plan yet.
6. CLI 1.x.x onwards needs to be changed carefully, and having to
backport all those changes and maintain them, possibly for a long period
of time, is not a feasible option for the existing stable community.
Also, having them there gives the wrong idea to test writers targeting
specific versions.
7. We need to convey the message to the community that v2 is our
standard API (be it of server, client or CLI).
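For point 4, pinning the version explicitly can look like this (assuming the standard `OS_IMAGE_API_VERSION` environment variable and `--os-image-api-version` flag of python-glanceclient; verify against your installed client):

```shell
# Pin the image API version for scripts instead of relying on the
# client's default (which moved from v1 to v2 in the 1.0.x CLI):
export OS_IMAGE_API_VERSION=1

# ...or per invocation (commented out: requires a reachable cloud):
# glance --os-image-api-version 1 image-list

echo "pinned to image API v${OS_IMAGE_API_VERSION}"
```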

Although there were a good number of comments arguing against the
addition of such functionality, we want to be empathetic towards our
users. So, all in all, it was decided that we will delay the addition of
such commits at least until after stable/liberty. If script writers are
using the latest version, the CLI can gain that behavior at a later
point in time, and such a change doesn't need to go into last-minute
Liberty changes.

The thoughts belong to Flavio, Erno and Nikhil. Please direct any
specific questions to us.

[1] https://review.openstack.org/#/c/219802

-- 

Thanks,
Nikhil



From tushar.gohad at intel.com  Thu Sep 10 20:07:07 2015
From: tushar.gohad at intel.com (Gohad, Tushar)
Date: Thu, 10 Sep 2015 20:07:07 +0000
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <CAJ3HoZ2VvwcVsYYXQhw=EtazZvmTR7ubbqDOWV6drUKk9Qr+6w@mail.gmail.com>
References: <55F19744.6020503@dague.net>
 <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>
 <55F1CEED.8040304@linux.vnet.ibm.com>
 <CAJ3HoZ18XE3+VJLTznau0PjEftS+DLBi63TrQ2vCAEWAqkakVw@mail.gmail.com>,
 <CAJ3HoZ2VvwcVsYYXQhw=EtazZvmTR7ubbqDOWV6drUKk9Qr+6w@mail.gmail.com>
Message-ID: <2FE3ACCB-EE17-47DA-9FE0-5F2ECC4C556B@intel.com>

+Kevin

> On 10 Sep 2015, at 12:49, Robert Collins <robertc at robertcollins.net> wrote:
> 
>> On 11 September 2015 at 07:23, Robert Collins <robertc at robertcollins.net> wrote:
>> Note that master is pinned:
>> 
>> commit aca1a74909d7a2841cd9805b7f57c867a1f74b73
>> Author: Tushar Gohad <tushar.gohad at intel.com>
>> Date:   Tue Aug 18 07:55:18 2015 +0000
>> 
>>    Restrict PyECLib version to 1.0.7
>> 
>>    v1.0.9 rev of PyECLib replaces Jerasure with a native EC
>>    implementation (liberasurecode_rs_vand) as the default
>>    EC scheme.  Going forward, Jerasure will not be bundled
>>    with PyPI version of PyECLib as it used to be, until
>>    v1.0.7.
>> 
>>    This is an interim change to Global/Swift requirements
>>    until we get v1.0.9 PyECLib released and included in
>>    global-requirements and ready patches that change Swift
>>    default ec_type (for doc, config samples and unit tests)
>>    from "jerasure_rs_vand" to "liberasurecode_rs_vand."
>> 
>>    Without this change, Swift unit tests will break at gate
>>    as soon as PyECLib v1.0.9 lands on PyPI
>> 
>>    * Swift is the only user of PyECLib at the moment
>> 
>>    Change-Id: I52180355b95679cbcddd497bbdd9be8e7167a3c7
>> 
>> 
>> But it appears a matching change was not done to j/k - and the pin
>> hasn't been removed from master.
> 
> I'm going to propose another manual review rule I think: we should not
> permit lower releases to use higher versions of libraries -
> approximately no one tests downgrades of their thing [and while it only
> matters for packages with weird installs / state management things,
> it's a glaring hole in our reliability story].
> 
> -Rob
> 
> -- 
> Robert Collins <rbtcollins at hp.com>
> Distinguished Technologist
> HP Converged Cloud
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From robertc at robertcollins.net  Thu Sep 10 20:08:48 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 11 Sep 2015 08:08:48 +1200
Subject: [openstack-dev] [Glance] python-glanceclient 1.0.x back compat
 for v2 proposal.
In-Reply-To: <55F1DE7B.6030205@gmail.com>
References: <55F1DE7B.6030205@gmail.com>
Message-ID: <CAJ3HoZ0vHCktFvU=5Xrro-0JAVaTFO3D_t=YJniH8a4KuEr-hQ@mail.gmail.com>

On 11 September 2015 at 07:48, Nikhil Komawar <nik.komawar at gmail.com> wrote:
> Hi all,
>
...
> 3. python-glanceclient upgrades need to be done in staged manner and by
> cross-checking the rel-notes or preferably commit messages if your
> deployment is fragile to upgrades. A CI/CD pipeline shouldn't need such
> back compat changes for long period of time.

I wanted to ask more about this one. This is, AIUI, the opposite of the
advice we've previously given, well, everyone. My understanding was
that for all clients, all versions of the client work with all versions
of OpenStack that are currently supported upstream, always.

That suggests that installing the current version is always safe! Has
that changed? What should users that use many different clouds do now?

Cheers,
Rob


From corvus at inaugust.com  Thu Sep 10 20:27:53 2015
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 10 Sep 2015 13:27:53 -0700
Subject: [openstack-dev] [infra] PTL non-candidacy
Message-ID: <87io7im33q.fsf@meyer.lemoncheese.net>

Hi,

I've been the Infrastructure PTL for some time now and I've been
fortunate to serve during a time when we have not only grown the
OpenStack project to a scale that we only hoped we would attain, but
also we have grown the Infrastructure project itself into truly
uncharted territory.

Serving as a PTL is a very rewarding experience that takes a good deal
of time and attention.  I would like to focus my time and energy on
diving deeper into technical projects, including quite a bit of work
that I would like to accomplish on Zuul, so I do not plan to run for PTL
in the next cycle.

Fortunately there are people in our community that have broad
involvement with all aspects of the Infrastructure project and we have
no shortage of folks who like interacting with and supporting others in
their work.  I wish whoever follows the best of luck while I look
forward to writing some code.

-Jim


From egafford at redhat.com  Thu Sep 10 20:52:54 2015
From: egafford at redhat.com (Ethan Gafford)
Date: Thu, 10 Sep 2015 16:52:54 -0400 (EDT)
Subject: [openstack-dev] [sahara] FFE request for heat wait condition
 support
In-Reply-To: <55E9C3D8.2080606@redhat.com>
References: <CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A@mail.gmail.com>
 <CA+O3VAhA2Xi_hKCaCB2PoWr8jUM0bQhwnSUAGx2gOGB0ksii6w@mail.gmail.com>
 <55E9C3D8.2080606@redhat.com>
Message-ID: <2108366802.24676246.1441918374134.JavaMail.zimbra@redhat.com>

Seems reasonable; +1.

-Ethan

>----- Original Message -----
>From: "michael mccune" <msm at redhat.com>
>To: openstack-dev at lists.openstack.org
>Sent: Friday, September 4, 2015 12:16:24 PM
>Subject: Re: [openstack-dev] [sahara] FFE request for heat wait condition support
>
>makes sense to me, +1
>
>mike
>
>On 09/04/2015 06:37 AM, Vitaly Gridnev wrote:
>> +1 for FFE, because of
>>   1. Low risk of issues, fully covered with current scenario tests;
>>   2. Implementation already on review
>>
>> On Fri, Sep 4, 2015 at 12:54 PM, Sergey Reshetnyak
>> <sreshetniak at mirantis.com <mailto:sreshetniak at mirantis.com>> wrote:
>>
>>     Hi,
>>
>>     I would like to request FFE for wait condition support for Heat engine.
>>     Wait condition reports signal about booting instance.
>>
>>     Blueprint:
>>     https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions
>>
>>     Spec:
>>     https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst
>>
>>     Patch:
>>     https://review.openstack.org/#/c/169338/
>>
>>     Thanks,
>>     Sergey Reshetnyak
>>
>>     __________________________________________________________________________
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe:
>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Best Regards,
>> Vitaly Gridnev
>> Mirantis, Inc
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From doug at doughellmann.com  Thu Sep 10 20:54:39 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 10 Sep 2015 16:54:39 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F1C7C8.8040902@dague.net>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
 <1441904554-sup-3119@lrrr> <55F1C7C8.8040902@dague.net>
Message-ID: <1441918354-sup-1749@lrrr.local>

Excerpts from Sean Dague's message of 2015-09-10 14:11:20 -0400:
> On 09/10/2015 01:05 PM, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2015-09-10 14:23:34 +0200:
> >> Sean Dague wrote:
> >>> Right now, they are all a bunch of files, they can be anywhere. And then
> >>> you have other files that have to reference these files by path, which
> >>> can be anywhere. We could just punt in that part and say "punt! every
> >>> installer and configuration management install needs to solve this on
> >>> their own." I'm not convinced that's a good answer. The os-brick filters
> >>> aren't really config. If you change them all that happens is
> >>> terribleness. Stuff stops working, and you don't know why. They are data
> >>> to exchange with another process about how to function. Honestly, they
> >>> should probably be python code that's imported by rootwrap.
> >>>
> >>> Much like the issues around clouds failing when you try to GET /v2 on
> >>> the Nova API (because we have a bunch of knobs you have to align for SSL
> >>> termination, and a bunch of deployers didn't), I don't think we should
> >>> be satisfied with "there's a config for that!" when all that config
> >>> means is that someone can break their configuration if they don't get it
> >>> exactly right.
> >>
> >> My quick 2cents on this. Rootwrap was designed as a generic solution to
> >> wrap privileged calls. That's why filter files are part of its
> >> "configuration". The problem is, OpenStack needs a pretty precise set of
> >> those filters to be "configured" to run properly. So it's configuration
> >> for rootwrap, but not "configuration" for OpenStack.
> > 
> > That makes them sound like data, not configuration. If that's the case,
> > Python's pkgutil module is an existing API for putting a data file
> > inside a library and then accessing it. Maybe we should look at moving
> > the files to a place that lets us use that, instead of requiring any
> > deployer-based configuration at all. Combining that with the "symbolic"
> > suggestion from Sean, we would use the package name as the symbol and
> > there would be a well-defined resource name within that package to use
> > with pkgutil.get_data() [1].
> > 
> > Doug
> > 
> > [1] https://docs.python.org/2.7/library/pkgutil.html#pkgutil.get_data
> 
> That sounds reasonable as well. Monty and dstuft had sent us down the
> get_resource direction instead using setuptools -
> https://pythonhosted.org/setuptools/pkg_resources.html. If you think
> that get_data is a better option, this would apply just as well there.

We rely heavily on setuptools, so using that API is probably just as
good. Frankly I don't know the implementation differences. The point is
to use something that exists, though, rather than inventing one, and so
either would satisfy me in that regard.
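As a concrete illustration of the data-file approach (package and file names below are made up for the demo, not the real os-brick layout), a filter file shipped as package data can be read back with pkgutil.get_data():

```python
import os
import pkgutil
import sys
import tempfile

# Build a throwaway package that ships a rootwrap filter file as
# package data, then read it back by (package, resource) name.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "mypkg")
os.makedirs(os.path.join(pkg_dir, "rootwrap.d"))
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "rootwrap.d", "osbrick.filters"), "w") as f:
    f.write("[Filters]\nsginfo: CommandFilter, sginfo, root\n")

# Make the package importable, then look the data up symbolically --
# no deployer-configured filesystem path involved.
sys.path.insert(0, tmp)
data = pkgutil.get_data("mypkg", "rootwrap.d/osbrick.filters")
print(data.decode("utf-8"))
```

Because the file is installed alongside the code, it picks up the same ownership and permissions as the library itself.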

> The important thing is: filters are tied to code, so should be able to
> be looked up symbolically based on the code level that's installed.

Right, we still need to tell rootwrap what package names to look at, as
Thierry pointed out. So the existing path option probably needs to be
deprecated in favor of a new way of telling the daemon what packages to
consult instead.

Doug

> 
>     -Sean
> 


From doug at doughellmann.com  Thu Sep 10 20:56:16 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 10 Sep 2015 16:56:16 -0400
Subject: [openstack-dev] [rootwrap] rootwrap and libraries - RFC
In-Reply-To: <55F1BF59.2000906@openstack.org>
References: <55F06E25.5000906@dague.net> <1441821533-sup-4318@lrrr.local>
 <55F07E49.1010202@linux.vnet.ibm.com> <1441840459-sup-9283@lrrr.local>
 <55F15DE0.7040804@dague.net> <55F17646.4000203@openstack.org>
 <1441904554-sup-3119@lrrr> <55F1BF59.2000906@openstack.org>
Message-ID: <1441918487-sup-5483@lrrr.local>

Excerpts from Thierry Carrez's message of 2015-09-10 19:35:21 +0200:
> Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2015-09-10 14:23:34 +0200:
> >> My quick 2cents on this. Rootwrap was designed as a generic solution to
> >> wrap privileged calls. That's why filter files are part of its
> >> "configuration". The problem is, OpenStack needs a pretty precise set of
> >> those filters to be "configured" to run properly. So it's configuration
> >> for rootwrap, but not "configuration" for OpenStack.
> > 
> > That makes them sound like data, not configuration. If that's the case,
> > Python's pkgutil module is an existing API for putting a data file
> > inside a library and then accessing it. Maybe we should look at moving
> > the files to a place that lets us use that, instead of requiring any
> > deployer-based configuration at all. Combining that with the "symbolic"
> > suggestion from Sean, we would use the package name as the symbol and
> > there would be a well-defined resource name within that package to use
> > with pkgutil.get_data() [1].
> 
> That sounds promising. One trick is that it's the consuming application
> that needs to ship the filters, not the library itself (so rootwrap
> would have to look into nova resources, not rootwrap resources). Another
> trick is that it should require root rights (not nova rights) to change
> those resources, otherwise the security model is broken (the whole idea
> of rootwrap being to restrict what a compromised nova user can do to the
> system).

If we put the data file inside the library, it will be installed where
the code lives, so it should have the permission protection we need. The
symbol names passed to rootwrap would be hard-coded inside the
application making the call, so that would also be protected.
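A toy sketch of that hard-coding idea (the symbol names, package name, and resource path here are hypothetical): the caller ships a fixed map of known symbols, so the daemon never consults an arbitrary path supplied at runtime.

```python
import os
import pkgutil
import sys
import tempfile

# Hypothetical whitelist, hard-coded in the calling application: a
# compromised service user cannot point the daemon at arbitrary
# filter files on disk, only at these (package, resource) symbols.
KNOWN_FILTER_SOURCES = {
    "compute": ("fakenova", "rootwrap.d/compute.filters"),
}

def load_filters(symbol):
    try:
        package, resource = KNOWN_FILTER_SOURCES[symbol]
    except KeyError:
        raise ValueError("unknown filter symbol: %s" % symbol)
    return pkgutil.get_data(package, resource)

# Build a stand-in "fakenova" package shipping its filters as data.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "fakenova")
os.makedirs(os.path.join(pkg, "rootwrap.d"))
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "rootwrap.d", "compute.filters"), "w") as f:
    f.write("[Filters]\nkpartx: CommandFilter, kpartx, root\n")
sys.path.insert(0, tmp)

print(load_filters("compute").decode("utf-8"))
```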

Doug


From fungi at yuggoth.org  Thu Sep 10 20:56:58 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 10 Sep 2015 20:56:58 +0000
Subject: [openstack-dev] [infra] PTL non-candidacy
In-Reply-To: <87io7im33q.fsf@meyer.lemoncheese.net>
References: <87io7im33q.fsf@meyer.lemoncheese.net>
Message-ID: <20150910205658.GZ7955@yuggoth.org>

On 2015-09-10 13:27:53 -0700 (-0700), James E. Blair wrote:
[...]
> I do not plan to run for PTL in the next cycle.
[...]

Thanks for the awesome job you did as PTL these last cycles. I hope
you enjoy a much-deserved break from the post, and I'm looking
forward to the new Zuul! ;)
-- 
Jeremy Stanley


From egafford at redhat.com  Thu Sep 10 21:06:55 2015
From: egafford at redhat.com (Ethan Gafford)
Date: Thu, 10 Sep 2015 17:06:55 -0400 (EDT)
Subject: [openstack-dev] [sahara] FFE request for scheduler and suspend
 EDP job for sahara
In-Reply-To: <CAMfz_LO6P0=VF-QSJNJeo_8XfNBShLUH3q6Nt=ecL48YEkebCQ@mail.gmail.com>
References: <CAMfz_LOZmrju2eRrQ1ASCK2yjyxa-UhuV=uv3SRrtKkoPDP-bQ@mail.gmail.com>
 <CA+O3VAivX4To4MQnAwP-fj2iX2hb3dsUj7rP3zMh9dRyPHFeHA@mail.gmail.com>
 <CAMfz_LO6P0=VF-QSJNJeo_8XfNBShLUH3q6Nt=ecL48YEkebCQ@mail.gmail.com>
Message-ID: <473417310.24682096.1441919215000.JavaMail.zimbra@redhat.com>

Sadly, in reviewing the client code, Vitaly is right; the current client will not support this feature, which would make it usable only via direct REST calls. Given that, I am uncertain it is worth the risk. I'd really love to have seen this go in; it's a great feature and a lot of work has gone into it, but perhaps it should be a great feature in M.

If we agree to cut a new client point release, I could see this, but for now I fear I'm -1. 

Thanks, 
Ethan 

>From: "lu jander" <juvenboy1987 at gmail.com> 
>To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org> 
>Sent: Monday, September 7, 2015 1:26:49 PM 
>Subject: Re: [openstack-dev] [sahara] FFE request for scheduler and suspend EDP job for sahara 
> 
>Hi Vitaly, 
>The enable-scheduled-edp-jobs patch has gone through 34 patch sets of review: https://review.openstack.org/#/c/182310/ . It has no impact on the other work-in-progress patch.
> 
>2015-09-07 18:48 GMT+08:00 Vitaly Gridnev <vgridnev at mirantis.com>: 
> 
> Hey! 
> 
> From my point of view, we definitely should not give an FFE for the add-suspend-resume-ability-for-edp-jobs spec, because the client side for this change is not included in the official liberty release.
> 
> By the way, I am not sure about an FFE for enable-scheduled-edp-jobs, because it's not clear what progress this blueprint has made. Its implementation consists of 2 patch sets, and one of them is marked as Work In Progress.
> 
> 
> On Sun, Sep 6, 2015 at 7:18 PM, lu jander <juvenboy1987 at gmail.com> wrote: 
> 
> Hi, Guys 
> 
> I would like to request an FFE for the scheduler EDP job and suspend EDP job for sahara. These patches have been under review for a long time, with lots of patch sets.
> 
> Blueprint: 
> 
> (1) https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs 
> (2) https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs 
> 
> Spec: 
> 
> (1) https://review.openstack.org/#/c/175719/ 
> (2) https://review.openstack.org/#/c/198264/ 
> 
> 
> Patch: 
> 
> (1) https://review.openstack.org/#/c/182310/ 
> (2) https://review.openstack.org/#/c/201448/ 
> 
> __________________________________________________________________________ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> -- 
> Best Regards, 
> Vitaly Gridnev 
> Mirantis, Inc 
> 
> __________________________________________________________________________ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
>__________________________________________________________________________ 
>OpenStack Development Mailing List (not for usage questions) 
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/05577868/attachment.html>

From pabelanger at redhat.com  Thu Sep 10 21:13:05 2015
From: pabelanger at redhat.com (Paul Belanger)
Date: Thu, 10 Sep 2015 17:13:05 -0400
Subject: [openstack-dev] [infra] PTL non-candidacy
In-Reply-To: <20150910205658.GZ7955@yuggoth.org>
References: <87io7im33q.fsf@meyer.lemoncheese.net>
 <20150910205658.GZ7955@yuggoth.org>
Message-ID: <20150910211305.GD31061@localhost.localdomain>

On Thu, Sep 10, 2015 at 08:56:58PM +0000, Jeremy Stanley wrote:
> On 2015-09-10 13:27:53 -0700 (-0700), James E. Blair wrote:
> [...]
> > I do not plan to run for PTL in the next cycle.
> [...]
> 
> Thanks for the awesome job you did as PTL these last cycles. I hope
> you enjoy a much-deserved break from the post, and I'm looking
> forward to the new Zuul! ;)

Agreed! Much thanks for everything you've done. And also looking forward to
Zuul v3. Exciting times ahead.


From egafford at redhat.com  Thu Sep 10 21:15:46 2015
From: egafford at redhat.com (Ethan Gafford)
Date: Thu, 10 Sep 2015 17:15:46 -0400 (EDT)
Subject: [openstack-dev] [sahara] FFE request for nfs-as-a-data-source
In-Reply-To: <55F0A58D.7000205@redhat.com>
References: <6EEB8A90CDE31C4680037A635100E8FF953DBC@SHSMSX104.ccr.corp.intel.com>
 <55F0A58D.7000205@redhat.com>
Message-ID: <604925892.24685758.1441919746934.JavaMail.zimbra@redhat.com>

This seems like a sanely scoped exception, and wholly agreed about
spinning off the UI into a separate bp. +1.

-Ethan

>----- Original Message -----
>From: "michael mccune" <msm at redhat.com>
>To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
>Sent: Wednesday, September 9, 2015 5:33:01 PM
>Subject: Re: [openstack-dev] [sahara] FFE request for nfs-as-a-data-source
>
>i'm +1 for this feature as long as we're talking about just the sahara
>controller and saharaclient. i agree we probably cannot get the horizon
>changes in before the final release.
>
>mike
>
>On 09/09/2015 03:33 AM, Chen, Weiting wrote:
>> Hi, all.
>>
>> I would like to request FFE for nfs as a data source for sahara.
>>
>> This bp was originally supposed to include a dashboard change to create nfs as a
>> data source.
>>
>> I will register it as another bp and implement it in next version.
>>
>> However, these patches have already been completed to put the nfs-driver into
>> sahara-image-elements and enable it in the cluster.
>>
>> This way, the user can use the nfs protocol via the command line in the
>> Liberty release.
>>
>> Blueprint:
>>
>> https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
>>
>> Spec:
>>
>> https://review.openstack.org/#/c/210839/
>>
>> Patch:
>>
>> https://review.openstack.org/#/c/218637/
>>
>> https://review.openstack.org/#/c/218638/
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From nik.komawar at gmail.com  Thu Sep 10 21:16:15 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 10 Sep 2015 17:16:15 -0400
Subject: [openstack-dev] [Glance] python-glanceclient 1.0.x back compat
 for v2 proposal.
In-Reply-To: <CAJ3HoZ0vHCktFvU=5Xrro-0JAVaTFO3D_t=YJniH8a4KuEr-hQ@mail.gmail.com>
References: <55F1DE7B.6030205@gmail.com>
 <CAJ3HoZ0vHCktFvU=5Xrro-0JAVaTFO3D_t=YJniH8a4KuEr-hQ@mail.gmail.com>
Message-ID: <55F1F31F.9020501@gmail.com>

The client does support v1, just not by default on the CLI. Users will
need to specify --os-image-api-version as 1 to be able to use v1. The
change in the 1.x.x series is that the default has moved from v1 to v2,
and the patch proposes mocking the v2 CLI into v1-like CLI calls, which
can be cumbersome.

I may not be reading your comment completely, so please elaborate if the
above doesn't explain it.

On 9/10/15 4:08 PM, Robert Collins wrote:
> On 11 September 2015 at 07:48, Nikhil Komawar <nik.komawar at gmail.com> wrote:
>> Hi all,
>>
> ...
>> 3. python-glanceclient upgrades need to be done in a staged manner, by
>> cross-checking the rel-notes or preferably commit messages, if your
>> deployment is fragile to upgrades. A CI/CD pipeline shouldn't need such
>> back compat changes for a long period of time.
> I wanted to ask more about this one. This is, AIUI, the opposite of the
> advice we've previously given, well, everyone. My understanding was
> that for all clients, all versions of the client work with all
> versions of OpenStack that are currently supported upstream, always.
>
> That suggests that installing the current version is always safe! Has
> that changed? What should users that use many different clouds do now?
>
> Cheers,
> Rob
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From major at mhtx.net  Thu Sep 10 21:20:31 2015
From: major at mhtx.net (Major Hayden)
Date: Thu, 10 Sep 2015 16:20:31 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <1441909133-sup-2320@fewbar.com>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <1441909133-sup-2320@fewbar.com>
Message-ID: <55F1F41F.4020103@mhtx.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 09/10/2015 01:21 PM, Clint Byrum wrote:
> Just a thought from somebody outside of this. If OSAD can provide the
> automation, turned off by default as a convenience, and run a bank of
> tests with all of these turned on to make sure they do actually work with
> the stock configuration, you'll get more traction this way. Docs should
> be the focus of this effort, but the effort should be on explaining how
> it fits into the system so operators who are customizing know when they
> will have to choose a less secure path. One should be able to have code
> do the "turn it on" "turn it off" mechanics.

I'm completely in agreement on this one. ;)

- --
Major Hayden
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJV8fQcAAoJEHNwUeDBAR+x1BIP/jkq0Gd2SuPcWbMU53xADj1W
ml8VtfkJwT/gs1v8Kfd/6OWzUG6DKG+Qk3HR/uAjOUxyYAWYjLcV7aRc3EhDfHqs
8OxoXqy85hPZCiMDFy6apQOsr//5WDYnFig1RPMtHjsEPto+ewJTTMhD8cD2SNC/
/TtCWYEP7pC2pxu8kJCGtJnptGEeg0SL78/MXLjMaKhXV++yrIIyFiqFlKIb2XW9
i4wgm9TAPd6Rs27AY9ew3GFS6UVCm/9nmcw2fqSN639ukTYoLent+NGXNvNEEQ2M
6Gn1hStPS08MYj97dvYrA5aFpVz7q6Jp1GM5DAk3wPAYnrFriU4WNOTdQtzAdlQz
crs4OevM+r/3y1AMcLNmCBMIdOCMwAftjr01sGbU7fHHr2jeUM2p9q/OZkWp56CZ
0YI5Gse9KugNuKNAN8rITM4QgslJOESQu4PxNOhoKCrP+hjQ6bh6K1vVGADNplTM
qG0zD5Jhdbmv42Eud/lQrkth5+aAvZKz+gOsRauTWjbwcRLaX8qT5yBPRJEq3OO9
GhdgN3IRszlUu+TTNXv96ENROOA6DERnx2HmJtBm8EpV1vOIo5tGgsE5IUuPjqnr
BEWkosgMz+ooaLQd6hnnHm88a6WVMcbbZ4pPOywXsgDwEBzCn+rVQ+3sIVrebEdI
pxYuv/fx8oR8G3YAMsLn
=Y66T
-----END PGP SIGNATURE-----


From beagles at redhat.com  Thu Sep 10 21:23:06 2015
From: beagles at redhat.com (Brent Eagles)
Date: Thu, 10 Sep 2015 18:53:06 -0230
Subject: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and
	shifting PCI addresses
Message-ID: <20150910212306.GA11628@b3ntpin.localdomain>

Hi,

I was recently informed of a situation that came up when an engineer
added an SR-IOV nic to a compute node that was hosting some guests that
had VFs attached. Unfortunately, adding the card shuffled the PCI
addresses causing some degree of havoc. Basically, the PCI addresses
associated with the previously allocated VFs were no longer valid. 

I tend to consider this a non-issue. The expectation that hosts have
relatively static hardware configuration (and kernel/driver configs for
that matter) is the price you pay for having pets with direct hardware
access. That being said, this did come as a surprise to some of those
involved and I don't think we have any messaging around this or advice
on how to deal with situations like this.

So what should we do? I can't quite see altering OpenStack to deal with
this situation (or even how that could work). Has anyone done any
research into this problem, even if it is how to recover or extricate
a guest that is no longer valid? It seems that at the very least we
could use some stern warnings in the docs.

Cheers,

Brent
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/aec9f9a8/attachment.pgp>

From msm at redhat.com  Thu Sep 10 21:28:39 2015
From: msm at redhat.com (michael mccune)
Date: Thu, 10 Sep 2015 17:28:39 -0400
Subject: [openstack-dev] [sahara] mitaka summit session ideas
Message-ID: <55F1F607.6060306@redhat.com>

hey all,

i started an etherpad for us to collect ideas about our session for the 
mitaka summit.

https://etherpad.openstack.org/p/mitaka-sahara-session-plans

please drop any thoughts or suggestions about the summit there.

thanks,
mike


From morgan.fainberg at gmail.com  Thu Sep 10 21:40:53 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Thu, 10 Sep 2015 14:40:53 -0700
Subject: [openstack-dev] [keystone] PTL non-candidacy
Message-ID: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>

As I outlined (briefly) in my recent announcement of changes (
https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
) I will not be running for PTL of Keystone this next cycle (Mitaka). The
role of PTL is a difficult but extremely rewarding job. It has been amazing
to see both Keystone and OpenStack grow.

I am very pleased with the accomplishments of the Keystone development team
over the last year. We have seen improvements with Federation,
Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
releasing a dedicated authentication library, cross-project initiatives
around improving the Service Catalog, and much, much more. I want to thank
each and every contributor for the hard work that was put into Keystone and
its associated projects.

While I will be changing my focus to spend more time on the general needs
of OpenStack and working on the Public Cloud story, I am confident in those
who can, and will, step up to the challenges of leading development of
Keystone and the associated projects. I may be working across more
projects, but you can be assured I will be continuing to work hard to see
the initiatives I helped start through. I wish the best of luck to the next
PTL.

I guess this is where I get to write a lot more code soon!

See you all (in person) in Tokyo!
--Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/0926e3e1/attachment.html>

From sbaker at redhat.com  Thu Sep 10 21:44:30 2015
From: sbaker at redhat.com (Steve Baker)
Date: Fri, 11 Sep 2015 09:44:30 +1200
Subject: [openstack-dev] [heat] Backup resources and properties in the
 delete-path
In-Reply-To: <55F1B7B2.9010607@redhat.com>
References: <20150910165322.GB16252@t430slt.redhat.com>
 <55F1B7B2.9010607@redhat.com>
Message-ID: <55F1F9BE.8060007@redhat.com>

On 11/09/15 05:02, Zane Bitter wrote:
> On 10/09/15 12:53, Steven Hardy wrote:
>> Hi all,
>>
>> So, I've been battling with $subject for the last few days ref [1][2].
>>
>> The problem I have is that out TestResource references several 
>> properties
>> in the delete (check_delete_complete) path[4], which it turns out 
>> doesn't
>> work very well if those properties refer to parameters via get_param, 
>> and
>> the parameter in the template is added/removed between updates which
>> fail[3].
>>
>> Essentially, the confusing dance we do on update with backup stacks and
>> backup resources bites us, because the backed-up resource ends up 
>> referring
>> to a parameter which doesn't exist (either in
>> stack.Stack._delete_backup_stack on stack-delete, or in
>> update.StackUpdate._remove_backup_resource on stack-update.)
>>
>> As far as I can tell, referencing properties in the delete path is 
>> the main
>> problem here, and it's something we don't do at all AFAICS in any other
>> resources - the normal pattern is only to refer to the resource_id in 
>> the
>> delete path, and possibly the resource_data (which will work OK after 
>> [5]
>> lands)
>>
>> So the question is, is it *ever* valid to reference self.properties
>> in the delete path?
>
> I think it's fine to say 'no'.
>
I know of a case where the answer is not 'no'.

For a SoftwareDeployment which has work to do during DELETE, the 
properties need to be accessed to get the config containing the DELETE 
work. There were convergence functional tests failing because the 
properties were not populated during delete:
https://bugs.launchpad.net/heat/+bug/1483946
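A self-contained toy of the stored-properties pattern this thread circles around (class and attribute names are illustrative only, not the real Heat Resource API): snapshot the resolved properties at create time, and have the delete path read the snapshot rather than re-resolving template parameters that may have changed or disappeared.

```python
class FakeResource:
    """Toy resource: snapshot properties at create time so the delete
    path never re-resolves template parameters (illustrative only)."""

    def __init__(self, properties):
        self.properties = properties
        self._stored_properties_data = None

    def handle_create(self):
        # Snapshot the resolved properties; this stays readable even
        # if the template and its parameters later change.
        self._stored_properties_data = dict(self.properties)

    def handle_delete(self):
        # Read the snapshot, not self.properties, in the delete path.
        props = self._stored_properties_data or {}
        return props.get("action_wait_secs", 0)


r = FakeResource({"action_wait_secs": 3})
r.handle_create()
r.properties = None  # simulate the parameter no longer resolving
print(r.handle_delete())
```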

>> If the answer is no, can we fix TestResource by e.g
>> storing the properties in resource_data instead?
>
> They're already stored as self._stored_properties_data; you could just 
> reference that instead. (The 'right' way would probably be to use 
> "self.frozen_definition().properties(self.properties_schema, 
> self.context)", but this is a test resource we're talking about.)
>
>> If we do expect to allow/support refering to properties in the delete 
>> path,
>> the question becomes how to we make it work with the backup resource 
>> update
>> mangling we do atm?  I've posted a hacky workaround for the delete 
>> path in
>> [2], but I don't yet have a solution for the failure on update in
>> _remove_backup_resource, is it worth expending the effort to work 
>> that out
>> or is TestResource basically doing the wrong thing?
>>
>> Any ideas much appreciated, as I'd like to clarify the best path forward
>> before burning a bunch more time on this :)
>>
>> Thanks!
>>
>> Steve
>>
>> [1] https://review.openstack.org/#/c/205754/
>> [2] https://review.openstack.org/#/c/222176/
>> [3] https://bugs.launchpad.net/heat/+bug/1494260
>> [4] 
>> https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/heat/test_resource.py#L209
>> [5] https://review.openstack.org/#/c/220986/
>>
>> __________________________________________________________________________ 
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________ 
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From drfish at us.ibm.com  Thu Sep 10 22:26:18 2015
From: drfish at us.ibm.com (Douglas Fish)
Date: Thu, 10 Sep 2015 17:26:18 -0500
Subject: [openstack-dev] [Horizon]Let's take care of our integration tests
Message-ID: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>



It looks like we've reached the point where our Horizon integration tests
are functional again.  Thanks for your work on this Timur! (Offer for
beer/hug at the next summit still stands)

I'd like to have these tests voting again ASAP, but I understand that might
be a bit risky at this point. We haven't yet proven that these tests will
be stable over the long term.

I encourage all of the reviewers to keep the integration tests in mind as
we are reviewing code. Keep an eye on the status of the
gate-horizon-dsvm-integration test. Its failure would be a great reason to
hand out a -1!

Doug
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/e478bd6c/attachment.html>

From edgar.magana at workday.com  Thu Sep 10 22:28:56 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Thu, 10 Sep 2015 22:28:56 +0000
Subject: [openstack-dev] [OpenStack-Infra] [infra] PTL non-candidacy
In-Reply-To: <87io7im33q.fsf@meyer.lemoncheese.net>
References: <87io7im33q.fsf@meyer.lemoncheese.net>
Message-ID: <4AEC9370-1D14-4A5B-9DF1-ABED157D9821@workday.com>

Thanks James for the great work done on the team!

I am sure we are going to hear a lot more from you in the community  ;-)

Edgar




On 9/10/15, 1:27 PM, "James E. Blair" <corvus at inaugust.com> wrote:

>Hi,
>
>I've been the Infrastructure PTL for some time now and I've been
>fortunate to serve during a time when we have not only grown the
>OpenStack project to a scale that we only hoped we would attain, but
>also we have grown the Infrastructure project itself into truly
>uncharted territory.
>
>Serving as a PTL is a very rewarding experience that takes a good deal
>of time and attention.  I would like to focus my time and energy on
>diving deeper into technical projects, including quite a bit of work
>that I would like to accomplish on Zuul, so I do not plan to run for PTL
>in the next cycle.
>
>Fortunately there are people in our community that have broad
>involvement with all aspects of the Infrastructure project and we have
>no shortage of folks who like interacting with and supporting others in
>their work.  I wish whoever follows the best of luck while I look
>forward to writing some code.
>
>-Jim
>
>_______________________________________________
>OpenStack-Infra mailing list
>OpenStack-Infra at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

From dolph.mathews at gmail.com  Thu Sep 10 22:28:41 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Thu, 10 Sep 2015 17:28:41 -0500
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
Message-ID: <CAC=h7gVgBvGBxrpTtMU1h8MuNXCh=bdQLXVMkJ-s+T2=XZaNZA@mail.gmail.com>

Thank you for all your work, Morgan! Good luck with the opportunity to
write some code again :)

On Thu, Sep 10, 2015 at 4:40 PM, Morgan Fainberg <morgan.fainberg at gmail.com>
wrote:

> As I outlined (briefly) in my recent announcement of changes (
> https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
> ) I will not be running for PTL of Keystone this next cycle (Mitaka). The
> role of PTL is a difficult but extremely rewarding job. It has been amazing
> to see both Keystone and OpenStack grow.
>
> I am very pleased with the accomplishments of the Keystone development
> team over the last year. We have seen improvements with Federation,
> Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
> releasing a dedicated authentication library, cross-project initiatives
> around improving the Service Catalog, and much, much more. I want to thank
> each and every contributor for the hard work that was put into Keystone and
> its associated projects.
>
> While I will be changing my focus to spend more time on the general needs
> of OpenStack and working on the Public Cloud story, I am confident in those
> who can, and will, step up to the challenges of leading development of
> Keystone and the associated projects. I may be working across more
> projects, but you can be assured I will be continuing to work hard to see
> the initiatives I helped start through. I wish the best of luck to the next
> PTL.
>
> I guess this is where I get to write a lot more code soon!
>
> See you all (in person) in Tokyo!
> --Morgan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/b8dfb0c4/attachment.html>

From hongbin.lu at huawei.com  Thu Sep 10 22:32:07 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Thu, 10 Sep 2015 22:32:07 +0000
Subject: [openstack-dev] [magnum] Vote for our weekly meeting schedule
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCD1E28@SZXEMI503-MBS.china.huawei.com>

Hi team,

Currently, the magnum weekly team meeting is scheduled at Tuesday UTC 1600 and UTC 2200. As our team grows, contributors from different timezones have joined and actively participated. I worry that our current meeting schedule (which was decided a long time ago) might not be up to date with respect to the convenience of our contributors. Therefore, a doodle poll was created:

http://doodle.com/poll/76ix26i2pdz89vz4

The proposed time slots were made according to an estimate of the current geographic distribution of active contributors. Please feel free to ping me if you want to propose additional time slots. Finally, please vote for your preferred meeting time. The team will decide whether to adjust the current schedule according to the results. Thanks.

Best regards,
Hongbin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/9d7f7bb2/attachment.html>

From jamesd at catalyst.net.nz  Thu Sep 10 22:44:08 2015
From: jamesd at catalyst.net.nz (James Dempsey)
Date: Fri, 11 Sep 2015 10:44:08 +1200
Subject: [openstack-dev] [neutron] RFE process question
In-Reply-To: <CAG9LJa79xEhGTvq5pEYEvDTvP1y9Z9PcJ0qxKEEkPeE0MoqBkg@mail.gmail.com>
References: <55F11653.8080509@catalyst.net.nz>
 <CAK+RQeY_SjXNyApy9gfp7NOUk0V7qwhnSay+jU1LBr=aDXAG0A@mail.gmail.com>
 <CAG9LJa79xEhGTvq5pEYEvDTvP1y9Z9PcJ0qxKEEkPeE0MoqBkg@mail.gmail.com>
Message-ID: <55F207B8.6030300@catalyst.net.nz>

On 10/09/15 19:39, Gal Sagie wrote:
> Hi James,
> 
> I think that https://review.openstack.org/#/c/216021/ might be what you are
> looking for.
> Please review and see whether it fits your requirement.
> Hopefully this gets approved for the next release so I can start working on
> it. If you (or anyone on your team) would like to join and contribute, I
> would love any help with that.

Thanks for the link; looks like a great feature. I'm not sure it is the
best fit for my use case though.

My user story is: Users can see human-readable security group
descriptions in Horizon because the security group model contains a
description field, but no such field exists for security group rules.
This makes it very confusing for users who have to manage complex
security groups.

I agree that one could encode descriptions as tags, but the problem I
see is that API consumers (Horizon, users) would have to agree on some
common encoding.  For example, to expose a security group rule
description in Horizon, Horizon would have to apply and read tags like
'description:SSH Access for Mallory'.

With a tags-based implementation, if a user wants the description for a
security group rule via the API, they have to get the security group,
then filter the tags according to whatever format Horizon chose to
encode the description as.

This is in contrast to getting the description of a security group: Get
the security group and access the description attribute.
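
The contrast can be sketched in a few lines of Python. This is a hypothetical illustration: the "description:" tag prefix and the dict shapes are assumptions made for the example, not an actual Neutron or Horizon API.

```python
# Hypothetical sketch of the two lookup styles. The "description:" prefix
# and the dict shapes are illustrative assumptions, not real Neutron objects.

def description_from_tags(rule, prefix="description:"):
    """Tag-based workaround: recover a description encoded in tags."""
    for tag in rule.get("tags", []):
        if tag.startswith(prefix):
            return tag[len(prefix):]
    return None

def description_from_attribute(group):
    """First-class attribute: what already exists for security groups."""
    return group.get("description")

rule_with_tags = {"tags": ["env:prod", "description:SSH Access for Mallory"]}
group_with_attr = {"description": "SSH Access for Mallory"}

print(description_from_tags(rule_with_tags))        # SSH Access for Mallory
print(description_from_attribute(group_with_attr))  # SSH Access for Mallory
```

Every API consumer that wants the tag-based description has to hard-code the same prefix convention, which is exactly the coordination problem described above.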

I think that resource tags are great, but this seems like a
non-intuitive workaround for a specific data model problem: Security
Groups have descriptions, but Security Group Rules do not.

Am I making sense?

Cheers,
James


> 
> Thanks
> Gal.
> 
> On Thu, Sep 10, 2015 at 8:59 AM, Armando M. <armamig at gmail.com> wrote:
> 
>>
>> On 10 September 2015 at 11:04, James Dempsey <jamesd at catalyst.net.nz>
>> wrote:
>>
>>> Greetings Devs,
>>>
>>> I'm very excited about the new RFE process and thought I'd test it by
>>> requesting a feature that is very often requested by my users[1].
>>>
>>> There are some great docs out there about how to submit an RFE, but I
>>> don't know what should happen after the submission to launchpad. My RFE
>>> bug seems to have been untouched for a month and I'm unsure if I've done
>>> something wrong. So, here are a few questions that I have.
>>>
>>>
>>> 1. Should I be following up on the dev list to ask for someone to look
>>> at my RFE bug?
>>> 2. How long should I expect it to take to have my RFE acknowledged?
>>> 3. As an operator, I'm a bit ignorant as to whether or not there are
>>> times during the release cycle during which there simply won't be
>>> bandwidth to consider RFE bugs.
>>> 4. Should I be doing anything else?
>>>
>>> Would love some guidance.
>>>
>>
>> You did nothing wrong; the team was simply busy going through the existing
>> schedule. Having said that, you could have spared a few more words on the
>> use case and what you mean by annotations.
>>
>> I'll follow up on the RFE for more questions.
>>
>> Cheers,
>> Armando
>>
>>
>>>
>>> Cheers,
>>> James
>>>
>>> [1] https://bugs.launchpad.net/neutron/+bug/1483480
>>>
>>> --
>>> James Dempsey
>>> Senior Cloud Engineer
>>> Catalyst IT Limited
>>> +64 4 803 2264
>>> --


From blak111 at gmail.com  Thu Sep 10 22:53:08 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 10 Sep 2015 15:53:08 -0700
Subject: [openstack-dev] [neutron][nova] - removing "INVALID drop" iptables
	rule
Message-ID: <CAO_F6JOEm3ijWnCsyJOQtm7hjPEf7ZCTN9sH-1XsRwOfbkHf_w@mail.gmail.com>

Hi,

I have a patch out in which I want to make sure any allow rules are
processed before the rule that drops packets conntrack deems INVALID [1].
This rule interferes with setups where conntrack might not see the first
part of a TCP handshake because of encapsulation in a load-balancer
direct-server-return setup.

What I would like to know is why the rule was added in the first place and
if there are any concerns with not processing it before the allow rules.
The only thing I can see that it's really stopping is SYN-ACK probing to
ports the security groups are configured to allow, in which case a SYN
probe would likely work just as well.
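
For anyone who wants to play with the ordering question without touching a live firewall, here is a toy first-match model of an iptables chain. The rule objects and packet fields are invented for illustration; real Neutron security group chains are considerably more involved.

```python
# Toy first-match model of an iptables filter chain. Rules and packet
# fields here are illustrative inventions, not real iptables objects.

def filter_packet(packet, chain, default="DROP"):
    """Return the action of the first matching rule, else the default policy."""
    for match, action in chain:
        if match(packet):
            return action
    return default

invalid_drop = (lambda p: p["ctstate"] == "INVALID", "DROP")
allow_tcp_80 = (lambda p: p["dport"] == 80, "ACCEPT")

# A mid-stream packet in a DSR setup: conntrack never saw the handshake,
# so the state is INVALID even though the port is allowed.
pkt = {"ctstate": "INVALID", "dport": 80}

print(filter_packet(pkt, [invalid_drop, allow_tcp_80]))  # current order: DROP
print(filter_packet(pkt, [allow_tcp_80, invalid_drop]))  # proposed order: ACCEPT
```

The patch under review amounts to swapping the two rules in the first call, so traffic that an allow rule explicitly permits is accepted even when conntrack lacks state for it.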

Any feedback here or directly on the patch would be great.

1. https://review.openstack.org/#/c/218517/


Cheers
-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/9e29ce07/attachment.html>

From dklyle0 at gmail.com  Thu Sep 10 22:54:26 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Thu, 10 Sep 2015 16:54:26 -0600
Subject: [openstack-dev] [Horizon]Let's take care of our integration
	tests
In-Reply-To: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
References: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
Message-ID: <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>

I completely agree about monitoring for integration test failures and
blocking until the failure is corrected.

The hope is to make sure we've stabilized the integration testing
framework a bit before re-enabling voting.

Thanks Timur, I know this has been a considerable undertaking.

David

On Thu, Sep 10, 2015 at 4:26 PM, Douglas Fish <drfish at us.ibm.com> wrote:
> It looks like we've reached the point where our Horizon integration tests
> are functional again.  Thanks for your work on this Timur! (Offer for
> beer/hug at the next summit still stands)
>
> I'd like to have these tests voting again ASAP, but I understand that might
> be a bit risky at this point. We haven't yet proven that these tests will be
> stable over the long term.
>
> I encourage all of the reviewers to keep the integration tests in mind as we
> are reviewing code. Keep an eye on the status of the
> gate-horizon-dsvm-integration test. Its failure would be a great reason to
> hand out a -1!
>
> Doug
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From tony at bakeyournoodle.com  Thu Sep 10 23:26:28 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 11 Sep 2015 09:26:28 +1000
Subject: [openstack-dev] [ceilometerclient] Updating global-requirements
	caps.
Message-ID: <20150910232628.GB11827@thor.bakeyournoodle.com>

Hi all,
    In trying to fix a few stable/juno issues we need to release a new version
of ceilometerclient for stable/juno.  This email is to raise awareness so that
if the proposal is bonkers [1] we can come up with something better.

Releasing such a version isn't currently possible due to the current caps in
juno and kilo.

The proposed fix is to:

. update g-r in master (liberty): python-ceilometerclient>=1.2
  https://review.openstack.org/#/c/222386/
. update g-r in stable/kilo: python-ceilometerclient>=1.1.1,<1.2 
. release a sync of stable/kilo g-r to stable/kilo python-ceilometerclient as 1.1.1
. update g-r in stable/juno: python-ceilometerclient<1.1.0,!=1.0.13,!=1.0.14 
. release 1.0.15 with a sync of stable/juno g-r

The point is, leave 1.0.x for juno, 1.1.x for kilo and >=1.2 for liberty
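
As a sanity check on that partition, here is a throwaway sketch with a deliberately naive version comparison (integer tuples, nothing like pip's real resolver) showing that each release series lands in exactly one branch's range:

```python
# Naive sanity check that the proposed caps partition releases by branch.
# Version comparison here is simple integer tuples, not pip's resolver.

def parse(v):
    return tuple(int(x) for x in v.split("."))

def in_juno(v):     # <1.1.0, excluding the two bad releases
    return parse(v) < (1, 1, 0) and v not in ("1.0.13", "1.0.14")

def in_kilo(v):     # >=1.1.1,<1.2
    return (1, 1, 1) <= parse(v) < (1, 2)

def in_liberty(v):  # >=1.2
    return parse(v) >= (1, 2)

for v in ("1.0.15", "1.1.1", "1.2.0"):
    print(v, in_juno(v), in_kilo(v), in_liberty(v))
# 1.0.15 matches only juno, 1.1.1 only kilo, 1.2.0 only liberty
```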

This is being tracked as: https://bugs.launchpad.net/python-ceilometerclient/+bug/1494516

There is a secondary issue of getting the (juno) gate into a shape where we
can actually do all of that.

Yours Tony.
[1] Bonkers is a recognized technical term right?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/8d3438af/attachment.pgp>

From joshua.hesketh at gmail.com  Thu Sep 10 23:32:16 2015
From: joshua.hesketh at gmail.com (Joshua Hesketh)
Date: Fri, 11 Sep 2015 09:32:16 +1000
Subject: [openstack-dev] [OpenStack-Infra]  [infra] PTL non-candidacy
In-Reply-To: <20150910205658.GZ7955@yuggoth.org>
References: <87io7im33q.fsf@meyer.lemoncheese.net>
 <20150910205658.GZ7955@yuggoth.org>
Message-ID: <CA+DTi5yy_8FWyiFs9Qf7TDOLhUbuW9sH4kO-eNEM0rFt+8vW0A@mail.gmail.com>

On Fri, Sep 11, 2015 at 6:56 AM, Jeremy Stanley <fungi at yuggoth.org> wrote:

> On 2015-09-10 13:27:53 -0700 (-0700), James E. Blair wrote:
> [...]
> > I do not plan to run for PTL in the next cycle.
> [...]
>
> Thanks for the awesome job you did as PTL these last cycles. I hope
> you enjoy a much-deserved break from the post, and I'm looking
> forward to the new Zuul! ;)
>

Indeed! Your efforts on infra have been both tireless and amazing. Thanks
for all you do for open source!

Cheers,
Josh
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/24483a2a/attachment.html>

From tony at bakeyournoodle.com  Thu Sep 10 23:50:49 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 11 Sep 2015 09:50:49 +1000
Subject: [openstack-dev] [oslo][release] stable/juno branch creation
In-Reply-To: <1440616274-sup-7044@lrrr.local>
References: <20150824095748.GA74505@thor.bakeyournoodle.com>
 <1440616274-sup-7044@lrrr.local>
Message-ID: <20150910235049.GC11827@thor.bakeyournoodle.com>

On Wed, Aug 26, 2015 at 03:11:56PM -0400, Doug Hellmann wrote:
> Tony,
> 
> Thanks for digging into this!
> 
> I should be able to help, but right now we're ramping up for the L3
> feature freeze and there are a lot of release-related activities going
> on. Can this wait a few weeks for things to settle down again?

Hi Doug,
    Just following up on this.

When you have time can you please:

* create a new stable/juno branch in oslo.utils based on 1.4.0
   - https://bugs.launchpad.net/oslo.utils/+bug/1488746/comments/3
* create a new stable/juno branch in oslotest based on 1.3.0
   - https://bugs.launchpad.net/oslotest/+bug/1488752/comments/3
   - https://bugs.launchpad.net/oslotest/+bug/1488752/comments/4

Once they're there I'll upload the g-r sync patches and then Robert has
promised to help me with the release side of things.  I have to admit I'm a
little terrified ;P

I'll keep an eye on these stable branches and try to keep them clean in the
gate.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/61a1232c/attachment.pgp>

From davanum at gmail.com  Thu Sep 10 23:52:01 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 10 Sep 2015 19:52:01 -0400
Subject: [openstack-dev] [oslo][release] stable/juno branch creation
In-Reply-To: <20150910235049.GC11827@thor.bakeyournoodle.com>
References: <20150824095748.GA74505@thor.bakeyournoodle.com>
 <1440616274-sup-7044@lrrr.local>
 <20150910235049.GC11827@thor.bakeyournoodle.com>
Message-ID: <CANw6fcGfzTw+znHBRMhUvMN=3pULGf4wLEURzY3B3fVP+Lh6Gw@mail.gmail.com>

+1 from me!

Thanks,
Dims

On Thu, Sep 10, 2015 at 7:50 PM, Tony Breeds <tony at bakeyournoodle.com>
wrote:

> On Wed, Aug 26, 2015 at 03:11:56PM -0400, Doug Hellmann wrote:
> > Tony,
> >
> > Thanks for digging into this!
> >
> > I should be able to help, but right now we're ramping up for the L3
> > feature freeze and there are a lot of release-related activities going
> > on. Can this wait a few weeks for things to settle down again?
>
> Hi Doug,
>     Just following up on this.
>
> When you have time can you please:
>
> * create a new stable/juno branch in oslo.utils based on 1.4.0
>    - https://bugs.launchpad.net/oslo.utils/+bug/1488746/comments/3
> * create a new stable/juno branch in oslotest based on 1.3.0
>    - https://bugs.launchpad.net/oslotest/+bug/1488752/comments/3
>    - https://bugs.launchpad.net/oslotest/+bug/1488752/comments/4
>
> Once they're there I'll upload the g-r sync patches and then Robert has
> promised to help me with the release side of things.  I have to admit I'm a
> little terrified ;P
>
> I'll keep an eye on these stable branches and try to keep them clean in the
> gate.
>
> Yours Tony.
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/c5d1061a/attachment.html>

From shi-hoshino at yk.jp.nec.com  Fri Sep 11 00:14:10 2015
From: shi-hoshino at yk.jp.nec.com (Shinya Hoshino)
Date: Fri, 11 Sep 2015 00:14:10 +0000
Subject: [openstack-dev] [Ironic] There is a function to display the VGA
 emulation screen of BMC in the baremetal node on the Horizon?
Message-ID: <870B6C6CD434FA459C5BF1377C680282023A7160@BPXM20GP.gisp.nec.co.jp>

Hello,

We are investigating how to display, in Horizon, the VGA
emulation screen of the BMC for a bare metal node that has been
deployed by Ironic.
If this were already implemented, I would expect the connection
information of a VNC or SPICE server (converted if necessary)
for the BMC's VGA emulation screen to be returned as the stdout
of "nova get-*-console".
However, while investigating how to configure Ironic, we could
not find a way to do this. I also searched the Ironic source
code for such an implementation, but did not find one.

My understanding is that the current Ironic does not implement
such a feature. Is this correct?

Best regards,

-- 
/* -------------------------------------------------------------
   Shinn'ya Hoshino            mailto:shi-hoshino at yk.jp.nec.com
   NEC Solution Innovators, Ltd.       Tel.:    +81-11-746-6432
    1st Platform Software Division     Domestic:   011-746-6432
    Platform Solution Business Unit    (Extension:  8-417-3840)
------------------------------------------------------------- */


From tony at bakeyournoodle.com  Fri Sep 11 00:24:17 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 11 Sep 2015 10:24:17 +1000
Subject: [openstack-dev] [all][Elections] Nominations for OpenStack PTLs
 (Program Technical Leads) are now open
Message-ID: <20150911002416.GD11827@thor.bakeyournoodle.com>

Nominations for OpenStack PTLs (Program Technical Leads) are now open and will
remain open until September 17, 05:59 UTC.

All candidacies must be submitted as a text file to the openstack/election
repository as explained on the wiki[0].

In order to be an eligible candidate (and be allowed to vote) in a given PTL
election, you need to have contributed an accepted patch to one of the
corresponding program's projects[1] during the Kilo-Liberty timeframe
(September 18, 2014 06:00 UTC to September 18, 2015 05:59 UTC)

Additional information about the nomination process can be found here:
https://wiki.openstack.org/wiki/PTL_Elections_September_2015

As Tony and I approve candidates, we will update the list of confirmed
candidates on the above wiki page.

Elections will begin on September 18, 2015 and run until after 13:00 UTC
September 24, 2015.

The electorate is requested to confirm their email address in gerrit,
review.openstack.org > Settings > Contact Information >  Preferred Email, prior
to September 17, 2015 so that the emailed ballots are mailed to the correct
email address.

Yours Tony.

[0] https://wiki.openstack.org/wiki/PTL_Elections_September_2015#How_to_submit_your_candidacy
[1] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/698ca63c/attachment.pgp>

From lbragstad at gmail.com  Fri Sep 11 00:55:46 2015
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 10 Sep 2015 19:55:46 -0500
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAC=h7gVgBvGBxrpTtMU1h8MuNXCh=bdQLXVMkJ-s+T2=XZaNZA@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
 <CAC=h7gVgBvGBxrpTtMU1h8MuNXCh=bdQLXVMkJ-s+T2=XZaNZA@mail.gmail.com>
Message-ID: <CAE6oFcHGhn6oZqfG9v+poudy6_kNp5LaPwY1D5CfvhMShcZTiQ@mail.gmail.com>

Best of luck in your new adventures, and thanks for all your hard work!

On Thu, Sep 10, 2015 at 5:28 PM, Dolph Mathews <dolph.mathews at gmail.com>
wrote:

> Thank you for all your work, Morgan! Good luck with the opportunity to
> write some code again :)
>
> On Thu, Sep 10, 2015 at 4:40 PM, Morgan Fainberg <
> morgan.fainberg at gmail.com> wrote:
>
>> As I outlined (briefly) in my recent announcement of changes (
>> https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
>> ) I will not be running for PTL of Keystone this next cycle (Mitaka). The
>> role of PTL is a difficult but extremely rewarding job. It has been amazing
>> to see both Keystone and OpenStack grow.
>>
>> I am very pleased with the accomplishments of the Keystone development
>> team over the last year. We have seen improvements with Federation,
>> Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
>> releasing a dedicated authentication library, cross-project initiatives
>> around improving the Service Catalog, and much, much more. I want to thank
>> each and every contributor for the hard work that was put into Keystone and
>> its associated projects.
>>
>> While I will be changing my focus to spend more time on the general needs
>> of OpenStack and working on the Public Cloud story, I am confident in those
>> who can, and will, step up to the challenges of leading development of
>> Keystone and the associated projects. I may be working across more
>> projects, but you can be assured I will be continuing to work hard to see
>> the initiatives I helped start through. I wish the best of luck to the next
>> PTL.
>>
>> I guess this is where I get to write a lot more code soon!
>>
>> See you all (in person) in Tokyo!
>> --Morgan
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/90475732/attachment.html>

From tony at bakeyournoodle.com  Fri Sep 11 01:08:26 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 11 Sep 2015 11:08:26 +1000
Subject: [openstack-dev] [all][Elections] Nominations for OpenStack PTLs
 (Program Technical Leads) are now open
In-Reply-To: <20150911002416.GD11827@thor.bakeyournoodle.com>
References: <20150911002416.GD11827@thor.bakeyournoodle.com>
Message-ID: <20150911010826.GE11827@thor.bakeyournoodle.com>

On Fri, Sep 11, 2015 at 10:24:17AM +1000, Tony Breeds wrote:
> Nominations for OpenStack PTLs (Program Technical Leads) are now open and will
> remain open until September 17, 05:59 UTC.
> 
> All candidacies must be submitted as a text file to the openstack/election
> repository as explained on the wiki[0].
> 
> In order to be an eligible candidate (and be allowed to vote) in a given PTL
> election, you need to have contributed an accepted patch to one of the
> corresponding program's projects[1] during the Kilo-Liberty timeframe
> (September 18, 2014 06:00 UTC to September 18, 2015 05:59 UTC)
> 
> Additional information about the nomination process can be found here:
> https://wiki.openstack.org/wiki/PTL_Elections_September_2015
> 
> As Tony and I approve candidates, we will update the list of confirmed

Of course s/Tony/Tristan/  Sorry.

> candidates on the above wiki page.
> 
> Elections will begin on September 18, 2015 and run until after 13:00 UTC
> September 24, 2015.
> 
> The electorate is requested to confirm their email address in gerrit,
> review.openstack.org > Settings > Contact Information >  Preferred Email, prior
> to September 17, 2015 so that the emailed ballots are mailed to the correct
> email address.
> 
> Yours Tony.
> 
> [0] https://wiki.openstack.org/wiki/PTL_Elections_September_2015#How_to_submit_your_candidacy
> [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections



> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/57dc8da7/attachment.pgp>

From fungi at yuggoth.org  Fri Sep 11 01:40:10 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 11 Sep 2015 01:40:10 +0000
Subject: [openstack-dev]  [infra] PTL candidacy
Message-ID: <20150911014010.GA7955@yuggoth.org>

It's time to toss my hat into the ring for Infrastructure PTL, if
you'll have me. I wasn't around at the beginning like my illustrious
predecessors, but I've been a core reviewer and root sysadmin for
OpenStack's community-maintained project infrastructure these past
three years. In that time it's been my pleasure to help further the
tremendous growth we've experienced as a team and within the
OpenStack community as a whole.

    https://wiki.openstack.org/user:fungi

As a free software idealist I'm proud not only of what we accomplish
but how we manage to do so without compromising on our standards of
transparency and openness, even when it may inconveniently highlight
new problems we also need to solve. The Infrastructure team serves
as a shining example of how communities can effectively collaborate
and produce while not relying on crutches of proprietary, commercial
tools. Expect me to continue encouraging our community to make the
hard choice in favor of free tools, of being a helpful downstream
for the communities of the tools we use, and of acting as a
responsible upstream to those who wish to reuse the tools we've
written to make our own work possible.

Hurtling now headlong into the Big Tent, our team is facing new and
interesting scaling challenges. I won't claim to possess easy
answers to the dilemmas awaiting us, but am confident in our ability
to solve them and will do everything I can to support you all to
that end. Great strides have already been made toward reorganizing,
delegating and distributing our decision making, and I expect us to
continue in those efforts as well as whatever new solutions we will
inevitably identify together.

The PTL's role is that of a communicator, facilitator, coordinator
and mentor; if these are the ways you'd like to see me spend the
next six months, then I'd appreciate your vote.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 949 bytes
Desc: Digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/312796d4/attachment.pgp>

From sbalukoff at bluebox.net  Fri Sep 11 01:53:43 2015
From: sbalukoff at bluebox.net (Stephen Balukoff)
Date: Thu, 10 Sep 2015 18:53:43 -0700
Subject: [openstack-dev] custom lbaas driver
In-Reply-To: <CAHPHmHedh+zuMVJnyQN1SHgWZVXgW2GMge1CeHzw14rGDX=1NQ@mail.gmail.com>
References: <CAHPHmHcGEHMHpus-0pphrPviShoe--PcRik9HHY+iU0x1-Q74g@mail.gmail.com>
 <CAHPHmHedh+zuMVJnyQN1SHgWZVXgW2GMge1CeHzw14rGDX=1NQ@mail.gmail.com>
Message-ID: <CAAGw+Zozt793hM13CVCfOp6YSd_cF4=nmfHA2rJjev3y9XbcmA@mail.gmail.com>

Srikumar--

I'm not aware of any particular write-up. The best advice I have is to
start with the logging noop driver and refer to the reference haproxy
namespace driver to see what these are doing to implement the functionality
of LBaaS v2.

Sorry! I wish I had better news for you.

Stephen

On Fri, Sep 4, 2015 at 10:36 AM, Srikumar Chari <srikumar at appcito.net>
wrote:

> Hello,
>
> I am trying to write a custom LBaaS v2.0 driver and was wondering if there's
> a document on how to go about that. I see a number of implementations as
> part of the source code, but they all seem to be different. For instance,
> HAProxy is completely different when compared to the other vendors. I am
> assuming that HAProxy is the "standard" as it is the default load balancer
> for OpenStack. Is there a design doc or some kind of write-up?
>
> This thread was the closest I could get to a write-up:
> https://openstack.nimeyo.com/21628/openstack-dev-neutron-lbaas-need-help-with-lbaas-drivers
> .
>
> I guess I could reverse engineer the HAProxy namespace driver, but I am
> probably going to miss some of the design elements. Any help/pointers/links
> would be great.
>
> thanks
> Sri
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbalukoff at blueboxcloud.com
206-607-0660 x807
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/985392e7/attachment.html>

From ayoung at redhat.com  Fri Sep 11 02:22:54 2015
From: ayoung at redhat.com (Adam Young)
Date: Thu, 10 Sep 2015 22:22:54 -0400
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
Message-ID: <55F23AFE.1080101@redhat.com>

Confirming that Morgan is eligible for non-candidacy.

You've done a great job.  Thanks.


On 09/10/2015 05:40 PM, Morgan Fainberg wrote:
> As I outlined (briefly) in my recent announcement of changes ( 
> https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/ 
> ) I will not be running for PTL of Keystone this next cycle (Mitaka). 
> The role of PTL is a difficult but extremely rewarding job. It has 
> been amazing to see both Keystone and OpenStack grow.
>
> I am very pleased with the accomplishments of the Keystone development 
> team over the last year. We have seen improvements with Federation, 
> Keystone-to-Keystone Federation, Fernet Tokens, improvements of 
> testing, releasing a dedicated authentication library, cross-project 
> initiatives around improving the Service Catalog, and much, much more. 
> I want to thank each and every contributor for the hard work that was 
> put into Keystone and its associated projects.
>
> While I will be changing my focus to spend more time on the general 
> needs of OpenStack and working on the Public Cloud story, I am 
> confident in those who can, and will, step up to the challenges of 
> leading development of Keystone and the associated projects. I may be 
> working across more projects, but you can be assured I will be 
> continuing to work hard to see the initiatives I helped start through. 
> I wish the best of luck to the next PTL.
>
> I guess this is where I get to write a lot more code soon!
>
> See you all (in person) in Tokyo!
> --Morgan
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/cf09b992/attachment.html>

From wei.d.chen at intel.com  Fri Sep 11 02:23:51 2015
From: wei.d.chen at intel.com (Chen, Wei D)
Date: Fri, 11 Sep 2015 02:23:51 +0000
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAE6oFcHGhn6oZqfG9v+poudy6_kNp5LaPwY1D5CfvhMShcZTiQ@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
 <CAC=h7gVgBvGBxrpTtMU1h8MuNXCh=bdQLXVMkJ-s+T2=XZaNZA@mail.gmail.com>
 <CAE6oFcHGhn6oZqfG9v+poudy6_kNp5LaPwY1D5CfvhMShcZTiQ@mail.gmail.com>
Message-ID: <C5A0092C63E939488005F15F736A81120A8BAD6F@SHSMSX103.ccr.corp.intel.com>

Morgan, thanks for your great leadership. All the best in your new journey.





Best Regards,

Dave Chen



From: Lance Bragstad [mailto:lbragstad at gmail.com]
Sent: Friday, September 11, 2015 8:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] PTL non-candidacy



Best of luck in your new adventures, and thanks for all your hard work!



On Thu, Sep 10, 2015 at 5:28 PM, Dolph Mathews <dolph.mathews at gmail.com> wrote:

Thank you for all your work, Morgan! Good luck with the opportunity to write some code again :)



On Thu, Sep 10, 2015 at 4:40 PM, Morgan Fainberg <morgan.fainberg at gmail.com> wrote:

As I outlined (briefly) in my recent announcement of changes ( 
https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/ ) I will not be running for PTL of Keystone this next 
cycle (Mitaka). The role of PTL is a difficult but extremely rewarding job. It has been amazing to see both Keystone and OpenStack 
grow.



I am very pleased with the accomplishments of the Keystone development team over the last year. We have seen improvements with 
Federation, Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing, releasing a dedicated authentication library, 
cross-project initiatives around improving the Service Catalog, and much, much more. I want to thank each and every contributor for 
the hard work that was put into Keystone and its associated projects.



While I will be changing my focus to spend more time on the general needs of OpenStack and working on the Public Cloud story, I am 
confident in those who can, and will, step up to the challenges of leading development of Keystone and the associated projects. I may 
be working across more projects, but you can be assured I will be continuing to work hard to see the initiatives I helped start 
through. I wish the best of luck to the next PTL.



I guess this is where I get to write a lot more code soon!



See you all (in person) in Tokyo!

--Morgan



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/6850f772/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6648 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/6850f772/attachment.bin>

From os.lcheng at gmail.com  Fri Sep 11 02:46:09 2015
From: os.lcheng at gmail.com (Lin Hua Cheng)
Date: Thu, 10 Sep 2015 19:46:09 -0700
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <55F23AFE.1080101@redhat.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
 <55F23AFE.1080101@redhat.com>
Message-ID: <CABtBEBWpth_6VM8xQ7o5MnrEOQOK-SZ8E8UzWGp0rNrx05Fchw@mail.gmail.com>

Thanks for doing a great job, Morgan!

Good luck on your next adventure.


On Thu, Sep 10, 2015 at 7:22 PM, Adam Young <ayoung at redhat.com> wrote:

> Confirming that Morgan is eligible for non-candidacy.
>
> You've done a great job.  Thanks.
>
>
>
> On 09/10/2015 05:40 PM, Morgan Fainberg wrote:
>
> As I outlined (briefly) in my recent announcement of changes (
> https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
> ) I will not be running for PTL of Keystone this next cycle (Mitaka). The
> role of PTL is a difficult but extremely rewarding job. It has been amazing
> to see both Keystone and OpenStack grow.
>
> I am very pleased with the accomplishments of the Keystone development
> team over the last year. We have seen improvements with Federation,
> Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
> releasing a dedicated authentication library, cross-project initiatives
> around improving the Service Catalog, and much, much more. I want to thank
> each and every contributor for the hard work that was put into Keystone and
> its associated projects.
>
> While I will be changing my focus to spend more time on the general needs
> of OpenStack and working on the Public Cloud story, I am confident in those
> who can, and will, step up to the challenges of leading development of
> Keystone and the associated projects. I may be working across more
> projects, but you can be assured I will be continuing to work hard to see
> the initiatives I helped start through. I wish the best of luck to the next
> PTL.
>
> I guess this is where I get to write a lot more code soon!
>
> See you all (in person) in Tokyo!
> --Morgan
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/db5102fd/attachment-0001.html>

From watanabe_isao at jp.fujitsu.com  Fri Sep 11 03:00:10 2015
From: watanabe_isao at jp.fujitsu.com (Watanabe, Isao)
Date: Fri, 11 Sep 2015 03:00:10 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
Message-ID: <AC0F94DB49C0C2439892181E6CDA6E6C171F186D@G01JPEXMBYT05>

Hello, Ramy

Could you please add the following CI to the third-party ci group, too.

Fujitsu ETERNUS CI

We are preparing this CI test system and are going to use it to test Cinder.
The wiki of this CI:
<https://wiki.openstack.org/wiki/ThirdPartySystems/Fujitsu_ETERNUS_CI>

Thank you very much.

Best regards,
Watanabe.isao



> -----Original Message-----
> From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> Sent: Thursday, September 10, 2015 8:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
> gerrit server
> 
> I added Fnst OpenStackTest CI
> <https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+status
> :open,n,z>  to the third-party ci group.
> 
> Ramy
> 
> 
> 
> From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
> Sent: Thursday, September 10, 2015 3:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
> gerrit server
> 
> 
> 
> 
> 
> On 10.09.2015 11:30, Xie, Xianshan wrote:
> 
> 	Hi, all,
> 
> 	   In my CI environment, after submitting a patch into
> openstack-dev/sandbox,
> 
> 	the Jenkins Job can be launched automatically, and the result message
> of the job also can be posted into the gerrit server successfully.
> 
> 	Everything seems fine.
> 
> 
> 
> 	But in the "Verified" column, there is no verified vote, such as +1
> or -1.
> 
You will be able to vote once your CI account is added to the "Third-Party CI" group on
review.openstack.org:
https://review.openstack.org/#/admin/groups/270,members
I advise you to ask for that permission in an IRC meeting for third-party
CI maintainers:
https://wiki.openstack.org/wiki/Meetings/ThirdParty
But you still won't be able to vote on other projects, except the sandbox.
> 
> 
> 
> 
> 	(patch url: https://review.openstack.org/#/c/222049/
> <https://review.openstack.org/#/c/222049/> ,
> 
> 	CI name:  Fnst OpenStackTest CI)
> 
> 
> 
> 	Although I have already added the "verified" label into the
> layout.yaml, under the check pipeline, it does not work yet.
> 
> 
> 
> 	And my configuration info is set as follows:
> 
> 	Layout.yaml
> 
> 	-------------------------------------------
> 
> 	pipelines:
> 
> 	  - name: check
> 
> 	   trigger:
> 
> 	     gerrit:
> 
> 	      - event: patchset-created
> 
> 	      - event: change-restored
> 
> 	      - event: comment-added
> 
> 	...
> 
> 	   success:
> 
> 	    gerrit:
> 
> 	      verified: 1
> 
> 	   failure:
> 
> 	    gerrit:
> 
> 	      verified: -1
> 
> 
> 
> 	jobs:
> 
> 	   - name: noop-check-communication
> 
> 	      parameter-function: reusable_node
> 
> 	projects:
> 
> 	- name: openstack-dev/sandbox
> 
> 	   - noop-check-communication
> 
> 	-------------------------------------------
> 
> 
> 
> 
> 
> 	And the projects.yaml of Jenkins job:
> 
> 	-------------------------------------------
> 
> 	- project:
> 
> 	name: sandbox
> 
> 	jobs:
> 
> 	      - noop-check-communication:
> 
> 	         node: 'devstack_slave || devstack-precise-check || d-p-c'
> 
> 	...
> 
> 	-------------------------------------------
> 
> 
> 
> 	Could anyone help me? Thanks in advance.
> 
> 
> 
> 	Xiexs
> 
> 
> 
> 
> 
> 
> 
> 
> 	________________________________________________________________
> __________
> 	OpenStack Development Mailing List (not for usage questions)
> 	Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> 	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de
> v
> 
> 

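For reference, Evgeny's point is that the layout itself is fine and the missing vote is purely a Gerrit permission issue. The quoted layout (whose indentation got mangled in the mail) amounts to the structure below, sketched here as plain Python data so the resulting votes can be checked; the pipeline, job and project names are the ones from Xie's mail, everything else is illustrative:

```python
# Sketch of the zuul layout described above, as a plain data structure.
# This mirrors layout.yaml's shape; it is illustrative, not a drop-in file.
LAYOUT = {
    "pipelines": [{
        "name": "check",
        "trigger": {"gerrit": [
            {"event": "patchset-created"},
            {"event": "change-restored"},
            {"event": "comment-added"},
        ]},
        "success": {"gerrit": {"verified": 1}},
        "failure": {"gerrit": {"verified": -1}},
    }],
    "jobs": [{"name": "noop-check-communication",
              "parameter-function": "reusable_node"}],
    "projects": [{"name": "openstack-dev/sandbox",
                  "check": ["noop-check-communication"]}],
}

def pipeline_votes(layout, pipeline_name):
    """Return the (success, failure) verified votes a pipeline would leave."""
    for pipeline in layout["pipelines"]:
        if pipeline["name"] == pipeline_name:
            return (pipeline["success"]["gerrit"]["verified"],
                    pipeline["failure"]["gerrit"]["verified"])
    raise KeyError(pipeline_name)

print(pipeline_votes(LAYOUT, "check"))  # (1, -1)
```

With this configuration zuul would try to leave verified +1/-1; whether Gerrit accepts the vote depends on the account's group membership, which is why adding the account to the Third-Party CI group is the actual fix.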


From ramy.asselin at hp.com  Fri Sep 11 03:07:26 2015
From: ramy.asselin at hp.com (Asselin, Ramy)
Date: Fri, 11 Sep 2015 03:07:26 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <AC0F94DB49C0C2439892181E6CDA6E6C171F186D@G01JPEXMBYT05>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
 <AC0F94DB49C0C2439892181E6CDA6E6C171F186D@G01JPEXMBYT05>
Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF5DF7@G4W3223.americas.hpqcorp.net>

Done. Thank you for adding your CI system to the wiki.

Ramy

-----Original Message-----
From: Watanabe, Isao [mailto:watanabe_isao at jp.fujitsu.com] 
Sent: Thursday, September 10, 2015 8:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server

Hello, Ramy

Could you please add the following CI to the third-party ci group, too.

Fujitsu ETERNUS CI

We are preparing this CI test system and are going to use it to test Cinder.
The wiki of this CI:
<https://wiki.openstack.org/wiki/ThirdPartySystems/Fujitsu_ETERNUS_CI>

Thank you very much.

Best regards,
Watanabe.isao



> -----Original Message-----
> From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> Sent: Thursday, September 10, 2015 8:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified 
> into gerrit server
> 
> I added Fnst OpenStackTest CI
> <https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+stat
> us :open,n,z>  to the third-party ci group.
> 
> Ramy
> 
> 
> 
> From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
> Sent: Thursday, September 10, 2015 3:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified 
> into gerrit server
> 
> 
> 
> 
> 
> On 10.09.2015 11:30, Xie, Xianshan wrote:
> 
> 	Hi, all,
> 
> 	   In my CI environment, after submitting a patch into 
> openstack-dev/sandbox,
> 
> 	the Jenkins Job can be launched automatically, and the result message 
> of the job also can be posted into the gerrit server successfully.
> 
> 	Everything seems fine.
> 
> 
> 
> 	But in the "Verified" column, there is no verified vote, such as +1 
> or -1.
> 
> You will be able to vote once your CI account is added to the "Third-Party CI"
> group on review.openstack.org:
> https://review.openstack.org/#/admin/groups/270,members
> I advise you to ask for that permission in an IRC meeting for
> third-party CI maintainers:
> https://wiki.openstack.org/wiki/Meetings/ThirdParty
> But you still won't be able to vote on other projects, except the sandbox.
> 
> 
> 
> 
> 	(patch url: https://review.openstack.org/#/c/222049/
> <https://review.openstack.org/#/c/222049/> ,
> 
> 	CI name:  Fnst OpenStackTest CI)
> 
> 
> 
> 	Although I have already added the "verified" label into the 
> layout.yaml , under the check pipeline, it does not work yet.
> 
> 
> 
> 	And my configuration info is set as follows:
> 
> 	Layout.yaml
> 
> 	-------------------------------------------
> 
> 	pipelines:
> 
> 	  - name: check
> 
> 	   trigger:
> 
> 	     gerrit:
> 
> 	      - event: patchset-created
> 
> 	      - event: change-restored
> 
> 	      - event: comment-added
> 
> 	...
> 
> 	   success:
> 
> 	    gerrit:
> 
> 	      verified: 1
> 
> 	   failure:
> 
> 	    gerrit:
> 
> 	      verified: -1
> 
> 
> 
> 	jobs:
> 
> 	   - name: noop-check-communication
> 
> 	      parameter-function: reusable_node
> 
> 	projects:
> 
> 	- name: openstack-dev/sandbox
> 
> 	   - noop-check-communication
> 
> 	-------------------------------------------
> 
> 
> 
> 
> 
> 	And the projects.yaml of Jenkins job:
> 
> 	-------------------------------------------
> 
> 	- project:
> 
> 	name: sandbox
> 
> 	jobs:
> 
> 	      - noop-check-communication:
> 
> 	         node: 'devstack_slave || devstack-precise-check || d-p-c'
> 
> 	...
> 
> 	-------------------------------------------
> 
> 
> 
> 	Could anyone help me? Thanks in advance.
> 
> 
> 
> 	Xiexs
> 
> 
> 
> 
> 
> 
> 
> 
> 	________________________________________________________________
> __________
> 	OpenStack Development Mailing List (not for usage questions)
> 	Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> 	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de
> v
> 
> 


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From dougwig at parksidesoftware.com  Fri Sep 11 03:24:22 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Thu, 10 Sep 2015 21:24:22 -0600
Subject: [openstack-dev] custom lbaas driver
In-Reply-To: <CAAGw+Zozt793hM13CVCfOp6YSd_cF4=nmfHA2rJjev3y9XbcmA@mail.gmail.com>
References: <CAHPHmHcGEHMHpus-0pphrPviShoe--PcRik9HHY+iU0x1-Q74g@mail.gmail.com>
 <CAHPHmHedh+zuMVJnyQN1SHgWZVXgW2GMge1CeHzw14rGDX=1NQ@mail.gmail.com>
 <CAAGw+Zozt793hM13CVCfOp6YSd_cF4=nmfHA2rJjev3y9XbcmA@mail.gmail.com>
Message-ID: <5F981BDA-D2FB-4ACD-8F9A-8B3F296FD4BC@parksidesoftware.com>

Just to add a little flavor and a slightly different twist from what Stephen wrote:

- The logging noop driver is indeed a good first place to start.

- The haproxy reference driver is the *only* driver that utilizes the lbaas-agent, and it's being deprecated soon, so...

Most vendor implementations have some sort of outside agent/daemon of their own that can be interacted with directly. So unless you need to make async driver calls and you have no mechanism to do that yourself, I would avoid doing things the way the current haproxy driver is doing them.

For a driver that transforms neutron objects into REST calls and then converts the responses, I'd suggest taking a look at the octavia driver after the logging noop driver.
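To make the "transforms neutron objects into REST calls" shape concrete, here is a deliberately minimal, hypothetical sketch. The class and method names are simplified stand-ins, not the actual neutron-lbaas driver_base interface, and the fake client records calls instead of doing real HTTP:

```python
import json

class FakeRestClient:
    """Stand-in for a vendor appliance client; records calls instead of HTTP."""
    def __init__(self):
        self.calls = []

    def post(self, path, body):
        self.calls.append(("POST", path, body))
        return {"status": "ACTIVE"}

class LoadBalancerManager:
    """Hypothetical manager: turns a neutron-side dict into a backend call."""
    def __init__(self, client):
        self.client = client

    def create(self, context, lb):
        # Transform the neutron-side object into the backend's REST payload.
        payload = {"id": lb["id"], "vip": lb["vip_address"]}
        resp = self.client.post("/loadbalancers", json.dumps(payload))
        return resp["status"]

client = FakeRestClient()
mgr = LoadBalancerManager(client)
status = mgr.create(None, {"id": "lb-1", "vip_address": "10.0.0.5"})
print(status)  # ACTIVE
```

The real interface has one manager per object type (load balancer, listener, pool, member, health monitor), but the transform-then-call pattern above is the core of it.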

Please don't hesitate to hop into #openstack-lbaas with any questions, in addition to here.

Thanks,
doug



> On Sep 10, 2015, at 7:53 PM, Stephen Balukoff <sbalukoff at bluebox.net> wrote:
> 
> Srikumar--
> 
> I'm not aware of any particular write-up. The best advice I have is to start with the logging noop driver and refer to the reference haproxy namespace driver to see what these are doing to implement the functionality of LBaaS v2.
> 
> Sorry! I wish I had better news for you.
> 
> Stephen
> 
> On Fri, Sep 4, 2015 at 10:36 AM, Srikumar Chari <srikumar at appcito.net <mailto:srikumar at appcito.net>> wrote:
> Hello,
> 
> I am trying to write a custom lbaas v2.0 driver. Was wondering if there's a document on how to go about that. I see a number of implementations as part of the source code but they all seem to be different. For instance HAProxy is completely different when compared to the other vendors. I am assuming that HAProxy is the "standard" as it is the default load balancer for OpenStack. Is there a design doc or some kind of write-up?
> 
> This thread was the closest I could get to a write-up: https://openstack.nimeyo.com/21628/openstack-dev-neutron-lbaas-need-help-with-lbaas-drivers <https://openstack.nimeyo.com/21628/openstack-dev-neutron-lbaas-need-help-with-lbaas-drivers>.
> 
> I guess I could reverse engineer the HAProxy namespace driver but I am probably going to miss some of the design elements. Any help/pointers/links would be great.
> 
> thanks
> Sri
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> 
> 
> -- 
> Stephen Balukoff
> Principal Technologist
> Blue Box, An IBM Company
> www.blueboxcloud.com <http://www.blueboxcloud.com/>
> sbalukoff at blueboxcloud.com <mailto:sbalukoff at blueboxcloud.com>
> 206-607-0660 x807
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150910/5feba995/attachment.html>

From HoangCX at vn.fujitsu.com  Fri Sep 11 03:32:40 2015
From: HoangCX at vn.fujitsu.com (HoangCX at vn.fujitsu.com)
Date: Fri, 11 Sep 2015 03:32:40 +0000
Subject: [openstack-dev] [Neutron] Targeting "Logging API for SG and FW
 rules" feature to L-3 milestone
Message-ID: <26330608400942dd91140586f6450479@G07SGEXCMSGPS01.g07.fujitsu.local>

Good day Germy,

>I have reviewed the specification linked above. Thank you for introducing
>such an interesting and important feature. But as I commented inline, I
>think it still needs some further work, such as how those logs get
>stored? For an admin and a tenant, I think it's different.
>And the performance impact: if tenantA turns on logs, will tenantB on the same
>host be impacted?

Thank you so much for reviewing the specification.
That was really helpful. I have just updated the spec based on your comments.
Could you please take a look at it?
https://review.openstack.org/#/c/203509/


Best regards,

Cao Xuan Hoang



From btopol at us.ibm.com  Fri Sep 11 04:08:44 2015
From: btopol at us.ibm.com (Brad Topol)
Date: Fri, 11 Sep 2015 00:08:44 -0400
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
Message-ID: <201509110409.t8B490Nq004034@d03av01.boulder.ibm.com>


Thank you Morgan for your outstanding leadership, tremendous effort, and
your dedication to OpenStack and Keystone in particular.   It has been an
absolute pleasure getting to work with you these past few years.  And  I am
looking forward to working with you in your new role!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  btopol at us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:	Morgan Fainberg <morgan.fainberg at gmail.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	09/10/2015 05:44 PM
Subject:	[openstack-dev] [keystone] PTL non-candidacy



As I outlined (briefly) in my recent announcement of changes (
https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
 ) I will not be running for PTL of Keystone this next cycle (Mitaka). The
role of PTL is a difficult but extremely rewarding job. It has been amazing
to see both Keystone and OpenStack grow.

I am very pleased with the accomplishments of the Keystone development team
over the last year. We have seen improvements with Federation,
Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
releasing a dedicated authentication library, cross-project initiatives
around improving the Service Catalog, and much, much more. I want to thank
each and every contributor for the hard work that was put into Keystone and
its associated projects.

While I will be changing my focus to spend more time on the general needs
of OpenStack and working on the Public Cloud story, I am confident in those
who can, and will, step up to the challenges of leading development of
Keystone and the associated projects. I may be working across more
projects, but you can be assured I will be continuing to work hard to see
the initiatives I helped start through. I wish the best of luck to the next
PTL.

I guess this is where I get to write a lot more code soon!

See you all (in person) in Tokyo!
--Morgan
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/502c1639/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/502c1639/attachment.gif>

From lajos.katona at ericsson.com  Fri Sep 11 06:50:36 2015
From: lajos.katona at ericsson.com (Lajos Katona)
Date: Fri, 11 Sep 2015 08:50:36 +0200
Subject: [openstack-dev] [tempest] Is there a sandbox project how to use
 tempest test plugin interface?
In-Reply-To: <20150910141302.GA2037@sazabi.kortar.org>
References: <55F17DFF.4000602@ericsson.com>
 <20150910141302.GA2037@sazabi.kortar.org>
Message-ID: <55F279BC.60400@ericsson.com>

Hi Matthew,

Thanks for the help, this helped a lot to start the work.

regards
Lajos

On 09/10/2015 04:13 PM, Matthew Treinish wrote:
> On Thu, Sep 10, 2015 at 02:56:31PM +0200, Lajos Katona wrote:
>> Hi,
>>
>> I just noticed that from tag 6, the test plugin interface is considered ready,
>> and I am eager to start to use it.
>> I have some questions:
>>
>> If I understand correctly, in the future the plugin interface will be moved to
>> tempest-lib, but for now I have to import module(s) from tempest to start using
>> the interface.
>> Is there a plan for this, i.e. when will the whole interface be moved to
>> tempest-lib?
> The only thing which will eventually move to tempest-lib is the abstract class
> that defines the expected methods of a plugin class [1] The other pieces will
> remain in tempest. Honestly this won't likely happen until sometime during
> Mitaka. Also when it does move to tempest-lib we'll deprecate the tempest
> version and keep it around to allow for a graceful switchover.
>
> The rationale behind this is we really don't provide any stability guarantees
> on tempest internals (except for a couple of places which are documented, like
> this plugin class) and we want any code from tempest that's useful to external
> consumers to really live in tempest-lib.
>
>> If I start to create a test plugin now (from tag 6), what should be the best
>> solution to do this?
>> I thought to create a repo for my plugin and add that as a subrepo to my
>> local tempest repo, and then I can easily import stuff from tempest, but I
>> can keep my test code separated from other parts of tempest.
>> Is there a better way of doing this?
> To start I'd take a look at the documentation for tempest plugins:
>
> http://docs.openstack.org/developer/tempest/plugin.html
>
>  From tempest's point of view a plugin is really just an entry point that points
> to a class that exposes certain methods. So the Tempest plugin can live anywhere
> as long as it's installed as an entry point in the proper namespace. Personally
> I feel like including it as a subrepo in a local tempest tree is a bit strange,
> but I don't think it'll cause any issues if you do that.
>
>> If there would be an example plugin somewhere, that would be the most
>> preferable maybe.
> There is a cookiecutter repo in progress. [2] Once that's ready it'll let you
> create a blank plugin dir that'll be ready for you to populate. (similar to the
> devstack plugin cookiecutter that already exists)
>
> For current examples the only project I know of that's using a plugin interface
> is manila [3] so maybe take a look at what they're doing.
>
> -Matt Treinish
>
> [1] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/plugins.py#n26
> [2] https://review.openstack.org/208389
> [3] https://review.openstack.org/#/c/201955
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
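For anyone else starting a plugin: the interface Matthew describes really is just a class with a few well-known methods, wired up via an entry point. The sketch below mimics the shape of the plugin class from the docs at the time (load_tests / register_opts / get_opt_lists); the base class here is a local stand-in rather than the real tempest code, and `my_plugin` is a hypothetical package name:

```python
import abc

class TempestPlugin(abc.ABC):
    """Stand-in for tempest's abstract plugin class [1]; sketch only."""

    @abc.abstractmethod
    def load_tests(self):
        """Return (top-level test dir, module import path) for discovery."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register any plugin-specific config options on conf."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return [(group, opts), ...] so config generation can find them."""

class MyServicePlugin(TempestPlugin):
    def load_tests(self):
        return ("my_plugin/tests", "my_plugin.tests")

    def register_opts(self, conf):
        # Real plugins register oslo.config opts; a dict stands in here.
        conf.setdefault("my-service", {"catalog_type": "myservice"})

    def get_opt_lists(self):
        return [("my-service", ["catalog_type"])]

# The entry point in setup.cfg would then point at this class, e.g.:
# tempest.test_plugins =
#     my_plugin = my_plugin.plugin:MyServicePlugin
plugin = MyServicePlugin()
print(plugin.load_tests())  # ('my_plugin/tests', 'my_plugin.tests')
```

Since discovery only needs the entry point, the plugin package can live in its own repo as Matthew suggests, with no subrepo gymnastics required.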

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/6814bdf7/attachment.html>

From gilles at redhat.com  Fri Sep 11 07:03:19 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Fri, 11 Sep 2015 17:03:19 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
Message-ID: <55F27CB7.2040101@redhat.com>

Hi,

Today in the #openstack-puppet channel a discussion about the pros and
cons of using a domain parameter for Keystone V3 was left open.

The context
------------
Domain names are needed in OpenStack Keystone V3 for identifying users
or groups (of users) within different projects (tenants).
Users and groups are uniquely identified within a domain (or a realm, as
opposed to project domains).
Projects also have their own domain, so users or groups can be assigned
to them through roles.

In Kilo, Keystone V3 was introduced as an experimental feature.
Puppet providers such as keystone_tenant, keystone_user and
keystone_role_user have been adapted to support it.
Also, new ones have appeared (keystone_domain) or are on their way
(keystone_group, keystone_trust).
And to be backward compatible with V2, the default domain is used when
no domain is provided.

In existing providers such as keystone_tenant, the domain can be either
part of the name or provided as a parameter:

A. The 'composite namevar' approach:

   keystone_tenant {'projectX::domainY': ... }

B. The 'meaningless name' approach:

  keystone_tenant {'myproject': name => 'projectX', domain => 'domainY', ...}

Notes:
 - Actually, using both combined should work too, with the domain
parameter supposedly overriding the domain part of the name.
 - Please look at [1] for some background on the two approaches.

The question
-------------
Decide which of the two approaches we would like to retain for
puppet-keystone.

Why does it matter?
---------------
1. Domain names are mandatory for every user, group or project, apart
from the backward-compatibility period mentioned earlier, where no
domain means using the default one.
2. Long-term impact.
3. The two approaches are not completely equivalent, with different
consequences for future usage.
4. Being consistent.
5. Therefore it is for the community to decide.

The two approaches are not technically equivalent, and it also depends
on what a user might expect from a resource title.
See some of the examples below.

Because OpenStack DB tables use IDs to uniquely identify objects, the
database can hold several objects of the same family with the same name.
This has made it difficult for Puppet resources to guarantee
idempotency by having unique resources.
In the context of Keystone V3 domains, hopefully this is not the case
for users, groups or projects, but unfortunately it is still the case
for trusts.

Pros/Cons
----------
A.
  Pros
    - Easier names
  Cons
    - Titles have no meaning!
    - Cases where 2 or more resources could exist
    - More difficult to debug
    - Titles mismatch when listing the resources (self.instances)

B.
  Pros
    - Unique titles guaranteed
    - No ambiguity between the resources found and their titles
  Cons
    - More complicated titles

Examples
----------
= Meaningless name example 1=
Puppet run:
  keystone_tenant {'myproject': name => 'project_A', domain => 'domain_1', ...}

Second run:
  keystone_tenant {'myproject': name => 'project_A', domain => 'domain_2', ...}

Result/Listing:

  keystone_tenant { 'project_A::domain_1':
    ensure  => 'present',
    domain  => 'domain_1',
    enabled => 'true',
    id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
  }
   keystone_tenant { 'project_A::domain_2':
    ensure  => 'present',
    domain  => 'domain_2',
    enabled => 'true',
    id      => '4b8255591949484781da5d86f2c47be7',
  }

= Composite name example 1  =
Puppet run:
  keystone_tenant {'project_A::domain_1': ...}

Second run:
  keystone_tenant {'project_A::domain_2': ...}

# Result/Listing
  keystone_tenant { 'project_A::domain_1':
    ensure  => 'present',
    domain  => 'domain_1',
    enabled => 'true',
    id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
   }
  keystone_tenant { 'project_A::domain_2':
    ensure  => 'present',
    domain  => 'domain_2',
    enabled => 'true',
    id      => '4b8255591949484781da5d86f2c47be7',
   }

= Meaningless name example 2  =
Puppet run:
  keystone_tenant {'myproject1': name => 'project_A', domain => 'domain_1', ...}
  keystone_tenant {'myproject2': name => 'project_A', domain => 'domain_1',
description => 'blah', ...}

Result: project_A in domain_1 has a description

= Composite name example 2  =
Puppet run:
  keystone_tenant {'project_A::domain_1': ...}
  keystone_tenant {'project_A::domain_1': description => 'blah', ...}

Result: Error because the resource must be unique within a catalog
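The difference between the two title styles ultimately comes down to how a provider would split the composite title. Roughly, in illustrative Python (the real providers are Ruby, and using 'Default' as the fallback domain name is an assumption here):

```python
DEFAULT_DOMAIN = "Default"

def split_title(title):
    """Split a composite title 'name::domain' into (name, domain).

    Falls back to the default domain when no '::' is present, mirroring
    the V2 backward-compatibility behaviour described above.
    """
    name, sep, domain = title.rpartition("::")
    if not sep:
        return title, DEFAULT_DOMAIN
    return name, domain

# The catalog is keyed by title, so with composite titles the key itself
# encodes (name, domain): project_A can exist in two domains without
# clashing, while declaring 'project_A::domain_1' twice would be the
# duplicate-resource error from example 2.
catalog = {}
for title in ("project_A::domain_1", "project_A::domain_2"):
    catalog[title] = split_title(title)

print(split_title("project_A::domain_1"))  # ('project_A', 'domain_1')
print(split_title("project_A"))            # ('project_A', 'Default')
```

With the meaningless-name approach no such split exists, so uniqueness of (name, domain) has to be enforced by the provider itself rather than by the catalog.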

My vote
--------
I would love to have approach A for its easier names.
But I've seen the challenge of maintaining the providers behind the
curtains and the confusion it creates with name/titles and when not sure
about the domain we're dealing with.
Also, I believe that supporting self.instances consistently with
meaningful names is saner.
Therefore I vote B

Finally
------
Thanks for reading that far!
To choose, please provide feedback with more pros/cons, examples and
your vote.

Thanks,
Gilles


PS:
[1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc


From henryn at linux.vnet.ibm.com  Fri Sep 11 07:00:30 2015
From: henryn at linux.vnet.ibm.com (Henry Nash)
Date: Fri, 11 Sep 2015 08:00:30 +0100
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <201509110409.t8B490Nq004034@d03av01.boulder.ibm.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
 <201509110409.t8B490Nq004034@d03av01.boulder.ibm.com>
Message-ID: <21212068-499F-4A36-BCD7-9AB59D7BA731@linux.vnet.ibm.com>

Gotta add my thanks as well - as I'm sure Dolph will attest, it's a tough job - and we've been lucky to have people who have been prepared to put in the really significant effort that is required to make both the role and the project successful!

To infinity and beyond...

Henry
> On 11 Sep 2015, at 05:08, Brad Topol <btopol at us.ibm.com> wrote:
> 
> Thank you Morgan for your outstanding leadership, tremendous effort, and your dedication to OpenStack and Keystone in particular. It has been an absolute pleasure getting to work with you these past few years. And I am looking forward to working with you in your new role!!!
> 
> --Brad
> 
> 
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: btopol at us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
> 
> <graycol.gif>Morgan Fainberg ---09/10/2015 05:44:52 PM---As I outlined (briefly) in my recent announcement of changes ( https://www.morganfainberg.com/blog/2 <https://www.morganfainberg.com/blog/2>
> 
> From: Morgan Fainberg <morgan.fainberg at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Date: 09/10/2015 05:44 PM
> Subject: [openstack-dev] [keystone] PTL non-candidacy
> 
> 
> 
> 
> As I outlined (briefly) in my recent announcement of changes ( https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/ <https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/> ) I will not be running for PTL of Keystone this next cycle (Mitaka). The role of PTL is a difficult but extremely rewarding job. It has been amazing to see both Keystone and OpenStack grow.
> 
> I am very pleased with the accomplishments of the Keystone development team over the last year. We have seen improvements with Federation, Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing, releasing a dedicated authentication library, cross-project initiatives around improving the Service Catalog, and much, much more. I want to thank each and every contributor for the hard work that was put into Keystone and its associated projects.
> 
> While I will be changing my focus to spend more time on the general needs of OpenStack and working on the Public Cloud story, I am confident in those who can, and will, step up to the challenges of leading development of Keystone and the associated projects. I may be working across more projects, but you can be assured I will be continuing to work hard to see the initiatives I helped start through. I wish the best of luck to the next PTL.
> 
> I guess this is where I get to write a lot more code soon!
> 
> See you all (in person) in Tokyo!
> --Morgan
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/7af481b6/attachment.html>

From thierry at openstack.org  Fri Sep 11 07:23:29 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 11 Sep 2015 09:23:29 +0200
Subject: [openstack-dev] [all][Elections] Nominations for OpenStack PTLs
 (Program Technical Leads) are now open
In-Reply-To: <20150911002416.GD11827@thor.bakeyournoodle.com>
References: <20150911002416.GD11827@thor.bakeyournoodle.com>
Message-ID: <55F28171.3080705@openstack.org>

Tony Breeds wrote:
> Nominations for OpenStack PTLs (Program Technical Leads) are now open and will
> remain open until September 17, 05:59 UTC.

Programs are called Project Teams (since the Big Tent reform), and we
recognize that the PTL leading duties are not just technical, so PTL is
now actually an acronym for "Project Team Leads".

Cheers,

-- 
Thierry Carrez (ttx)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/62f5c08e/attachment.pgp>

From daniel.depaoli at create-net.org  Fri Sep 11 07:24:34 2015
From: daniel.depaoli at create-net.org (Daniel Depaoli)
Date: Fri, 11 Sep 2015 09:24:34 +0200
Subject: [openstack-dev] [fuel][swift] Separate roles for Swift nodes
Message-ID: <CAGqpfRWr-z-YoLESX8buwamSu_fhYfRXMqX_MWRGu=zXLZv6DQ@mail.gmail.com>

Hi all!
I'm starting to investigate some improvements for swift installation in
fuel, in particular a way to dedicate a node to it. I found this blueprint
https://blueprints.launchpad.net/fuel/+spec/swift-separate-role which seems
to be what I'm looking for.
The blueprint was accepted but not yet started. Can someone tell me
more about it? I'm interested in working on it.

Best regards,

-- 
========================================================
Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer
========================================================

From viktor.tikkanen at nokia.com  Fri Sep 11 07:25:37 2015
From: viktor.tikkanen at nokia.com (Tikkanen, Viktor (Nokia - FI/Espoo))
Date: Fri, 11 Sep 2015 07:25:37 +0000
Subject: [openstack-dev] [neutron][tempest] iptables-based security groups /
 accepting ingress ICMP
Message-ID: <3E4E0D3116F17747BB7E466168535A742721980E@DEMUMBX002.nsn-intra.net>

Hi!

We have a scenario tempest test case (test_cross_tenant_traffic) which
assumes that an instance should be able to receive icmp echo responses
even when no ingress security rules are defined for that instance.

I don't take a stand on iptables-based security group implementation
details (this was discussed e.g. here:
http://lists.openstack.org/pipermail/openstack-dev/2015-April/060989.html
) but rather on tempest logic.

Do we have some requirement(s) that incoming packets with ESTABLISHED
state should be accepted regardless of security rules? If so, does it
really concern also ICMP packets?

And if there are no such requirements, should we e.g. parameterize the
test case so that it will be skipped when no iptables-based firewall
drivers are used?

-Viktor


From robertc at robertcollins.net  Fri Sep 11 07:35:14 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 11 Sep 2015 19:35:14 +1200
Subject: [openstack-dev] [ceilometerclient] Updating global-requirements
	caps.
In-Reply-To: <20150910232628.GB11827@thor.bakeyournoodle.com>
References: <20150910232628.GB11827@thor.bakeyournoodle.com>
Message-ID: <CAJ3HoZ0nHROFMyYdOTaxmi+oObVyH9N_TU5Len_aTONo6i8zVg@mail.gmail.com>

On 11 September 2015 at 11:26, Tony Breeds <tony at bakeyournoodle.com> wrote:
> Hi all,
>     In trying to fix a few stable/juno issues we need to release a new version
> of ceilometerclient for stable/juno.  This email is to try and raise awareness
> so that if the proposal is bonkers [1] we can come up with something better.
>
> This isn't currently possible due to the current caps in juno and kilo.
>
> The proposed fix is to:
>
> . update g-r in master (liberty): python-ceilometerclient>=1.2
>   https://review.openstack.org/#/c/222386/
> . update g-r in stable/kilo: python-ceilometerclient>=1.1.1,<1.2
> . release a sync of stable/kilo g-r to stable/kilo python-ceilometerclient as 1.1.1
> . update g-r in stable/juno: python-ceilometerclient<1.1.0,!=1.0.13,!=1.0.14
> . release 1.0.15 with a sync of stable/juno g-r
>
> The point is, leave 1.0.x for juno, 1.1.x for kilo and >=1.2 for liberty
>
> This is being tracked as: https://bugs.launchpad.net/python-ceilometerclient/+bug/1494516
>
> There is a secondary issue of getting the (juno) gate into a shape where we can
> actually do all of that.
>
> Yours Tony.
> [1] Bonkers is a recognized technical term right?

That seems like it will work [resists urge to kibitz on the capping thing].

Perhaps you'd want to add the caps first across all trees, then do the
minimum raising in master/liberty. Seems like J might be a little
unhappy otherwise.
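The proposed ranges can be sanity-checked mechanically. Below is a toy sketch, not pip's actual resolver: version parsing is simplified to dotted integers, which is enough for 1.0.15-style versions like the ones quoted above.

```python
import operator

# Clause operators; "<=" and ">=" must be tried before "<" and ">".
_OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq,
        "!=": operator.ne, "<": operator.lt, ">": operator.gt}

def _parse(version):
    # Simplified: dotted integers only, e.g. "1.0.15" -> (1, 0, 15).
    return tuple(int(part) for part in version.split("."))

def satisfies(version, spec):
    """Return True if `version` matches every comma-separated clause
    in `spec`, e.g. satisfies("1.0.15", "<1.1.0,!=1.0.13,!=1.0.14")."""
    v = _parse(version)
    for clause in spec.split(","):
        for op in ("<=", ">=", "==", "!=", "<", ">"):
            if clause.startswith(op):
                if not _OPS[op](v, _parse(clause[len(op):])):
                    return False
                break
    return True

# The juno/kilo/liberty split from the proposal:
assert satisfies("1.0.15", "<1.1.0,!=1.0.13,!=1.0.14")      # juno ok
assert not satisfies("1.0.13", "<1.1.0,!=1.0.13,!=1.0.14")  # excluded
assert satisfies("1.1.1", ">=1.1.1,<1.2")                   # kilo ok
assert not satisfies("1.2.0", ">=1.1.1,<1.2")               # liberty-only
```

This confirms the stated intent: 1.0.x stays in juno, 1.1.x in kilo, and >=1.2 in liberty, with the two broken 1.0.x releases excluded.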

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From zhengzhenyulixi at gmail.com  Fri Sep 11 07:41:20 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Fri, 11 Sep 2015 15:41:20 +0800
Subject: [openstack-dev] [nova] Nova currently handles list with limit=0
 quite different for different objects.
Message-ID: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>

Hi, I found out that nova currently handles list with limit=0 quite
differently for different objects.

Especially when list servers:

According to the code:
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206

when limit = 0, it should be applied as max_limit, but currently, in:
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930

we directly return [], which contradicts the comment in the API code.


I checked other objects:

when listing security groups and server groups, the API returns results as
if no limit had been set, while for flavors it returns []. I will continue
to try out other APIs if needed.

I think we should settle on one rule for all objects, and at least fix the
servers case so that the API and DB code agree.

I have reported a bug in launchpad:

https://bugs.launchpad.net/nova/+bug/1494617

Any suggestions?
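One consistent rule would be to normalize the limit once, in one place. A minimal sketch of the semantics the API comment describes (limit=0 or None falls back to the configured maximum; the function and parameter names here are illustrative, not nova's actual code):

```python
def limited(items, limit, max_limit=1000):
    """Apply a pagination limit the way the API comment describes:
    limit=None or limit=0 means 'no explicit limit', so fall back to
    max_limit instead of returning an empty list."""
    if not limit:          # covers both None and 0
        limit = max_limit
    return items[:limit]

rows = list(range(10))
assert limited(rows, 0) == rows        # limit=0 -> max_limit, not []
assert limited(rows, 3) == [0, 1, 2]   # normal paging still works
```

With one helper like this shared by all list endpoints, the servers/flavors discrepancy described above could not arise.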

Best Regards,

Zheng

From blak111 at gmail.com  Fri Sep 11 07:50:47 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Fri, 11 Sep 2015 00:50:47 -0700
Subject: [openstack-dev] [neutron][tempest] iptables-based security
 groups / accepting ingress ICMP
In-Reply-To: <3E4E0D3116F17747BB7E466168535A742721980E@DEMUMBX002.nsn-intra.net>
References: <3E4E0D3116F17747BB7E466168535A742721980E@DEMUMBX002.nsn-intra.net>
Message-ID: <CAO_F6JM+Ldwie32LYM=pZU3YKnPXbt5o2F7Rbe+OGqmrCdJuJg@mail.gmail.com>

Neutron security groups are stateful. A response should be able to come
back without ingress rules regardless of the use of iptables.
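The statefulness Kevin describes comes from a conntrack match early in the per-port ingress chain; a rule along these lines accepts return traffic before any security-group rules are evaluated. This is illustrative only - the chain name is a placeholder, and the real chains are generated per port by the hybrid iptables driver:

```shell
# Accept packets belonging to an already-tracked outbound connection,
# e.g. the echo reply to a ping the instance itself sent.
# (Chain name is a placeholder; the driver generates one per port.)
iptables -A neutron-openvswi-iPORTID \
    -m state --state RELATED,ESTABLISHED -j RETURN
```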

On Fri, Sep 11, 2015 at 12:25 AM, Tikkanen, Viktor (Nokia - FI/Espoo) <
viktor.tikkanen at nokia.com> wrote:

> Hi!
>
> We have a scenario tempest test case (test_cross_tenant_traffic) which
> assumes that an instance should be able to receive icmp echo responses
> even when no ingress security rules are defined for that instance.
>
> I don't take a stand on iptables-based security group implementation
> details (this was discussed e.g. here:
> http://lists.openstack.org/pipermail/openstack-dev/2015-April/060989.html
> ) but rather on tempest logic.
>
> Do we have some requirement(s) that incoming packets with ESTABLISHED
> state should be accepted regardless of security rules? If so, does it
> really concern also ICMP packets?
>
> And if there are no such requirements, should we e.g. parameterize the
> test case so that it will be skipped when no iptables-based firewall
> drivers are used?
>
> -Viktor
>
>



-- 
Kevin Benton

From flavio at redhat.com  Fri Sep 11 08:32:03 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 10:32:03 +0200
Subject: [openstack-dev] [Glance] python-glanceclient 1.0.x back compat
 for v2 proposal.
In-Reply-To: <CAJ3HoZ0vHCktFvU=5Xrro-0JAVaTFO3D_t=YJniH8a4KuEr-hQ@mail.gmail.com>
References: <55F1DE7B.6030205@gmail.com>
 <CAJ3HoZ0vHCktFvU=5Xrro-0JAVaTFO3D_t=YJniH8a4KuEr-hQ@mail.gmail.com>
Message-ID: <20150911083203.GQ6373@redhat.com>

On 11/09/15 08:08 +1200, Robert Collins wrote:
>On 11 September 2015 at 07:48, Nikhil Komawar <nik.komawar at gmail.com> wrote:
>> Hi all,
>>
>...
>> 3. python-glanceclient upgrades need to be done in staged manner and by
>> cross-checking the rel-notes or preferably commit messages if your
>> deployment is fragile to upgrades. A CI/CD pipeline shouldn't need such
>> back compat changes for long period of time.
>
>I wanted to ask more about this one. This is, AIUI, the opposite of the
>advice we've previously given, well, everyone. My understanding was
>that for all clients, all versions of the client work with all
>versions of OpenStack that are currently supported upstream, always.
>
>That suggests that installing the current version is always safe! Has
>that changed? What should users that use many different clouds do now?

I believe the above is still true. At least, that's what I hope, for
the sake of interoperability and compatibility.

However, let me expand a bit what Nikhil said in his emails on this
thread:

1) The CLI and the library are still compatible with v1 of the
API. Unfortunately, the two API versions are not 100% compatible with
each other, and that's what is causing the issues mentioned in the
previous email.

2) While I think these incompatibilities are unfortunate, I also
think these changes are required to clean up the CLI from old
properties. This is the first time that Glance does this - not going
to get into the details of why this is the case - but I believe it's
good for the project at this point.

Now, the real reason why I think we're having this conversation now
and we didn't have it before is because glanceclient keeps depending
on an explicit/default API version being passed through the CLI rather
than using the information returned by the identity service.

In addition to the above, glanceclient - and I think many other
clients too - keeps mapping the CLI 1:1 to the server's API. I think
this makes maintaining the CLI more difficult and it's also
not user-friendly. Many things that are exposed through the CLI
- and especially the way they are exposed: json, I'm looking at you -
are simply not required or not important for the user who just wants
to "use the cloud".

To conclude, this release doesn't mean the team is dead set on the
current CLI. It just means that the CLI has been cleaned up and
Liberty's stable branch for glanceclient will be cut from the latest
version. If there are things users want to have, they'll be
reconsidered and added back if necessary. These things would be
released in a minor version during Mitaka and users will be able to
consume them anyway. In other words, I'd love to see more interaction
between the glance community and the broader users community. The lack
of communication between these two has caused most of the problems
we've seen lately. I hope enough feedback will come back and thorough
tests of this new version will also be done, which is not to say that
these breaking changes are meant to increase the communication, just
to be clear :)

Cheers,
Flavio

P.S: Also, in this specific case, I'd rather make v1's CLI v2
compatible than the other way around, TBH. That'd help people migrate
their scripts and still be able to use them to talk to v1 servers
without any changes. Can we talk about this?


>Cheers,
>Rob
>

-- 
@flaper87
Flavio Percoco

From flavio at redhat.com  Fri Sep 11 08:42:36 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 10:42:36 +0200
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <55F1DBC6.2000904@gmail.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com>
 <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
 <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu>
 <55F1DBC6.2000904@gmail.com>
Message-ID: <20150911084236.GR6373@redhat.com>

On 10/09/15 15:36 -0400, Nikhil Komawar wrote:
>The solution to this problem is to improve the scrubber to clean up the
>garbage data left behind in the backend store during such failed uploads.
>
>Currently, the scrubber cleans up images in pending_delete; extending
>that to images in killed status would avoid such a situation.

While the above would certainly help, I think it's not the right
solution. Images in status "killed" should not have data to begin
with.

I'd rather find a way to clean that data as soon as the image is
moved to a "killed" state instead of extending the scrubber.
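In pseudocode, the suggestion is to hook the cleanup into the status transition itself rather than into a periodic scrubber pass. A toy sketch only - plain dicts and an in-memory store stand in for glance's real image and store objects:

```python
class InMemoryStore:
    """Stand-in for a glance backend store (swift, filesystem, ...)."""
    def __init__(self):
        self.blobs = {}

    def delete(self, image_id):
        # Drop whatever data was written for this image, if any.
        self.blobs.pop(image_id, None)

def kill_image(image, store):
    """On a failed upload, move the image to 'killed' and reclaim the
    partially written data immediately, so 'killed' images never hold
    backend space while waiting for a scrubber pass."""
    image["status"] = "killed"
    store.delete(image["id"])
    return image

store = InMemoryStore()
store.blobs["img-1"] = b"partial upload data"
image = kill_image({"id": "img-1", "status": "saving"}, store)
assert image["status"] == "killed"
assert "img-1" not in store.blobs   # no garbage left behind
```

The design point is that the invariant "killed images have no data" is enforced at the single place where the status changes, instead of being eventually restored by a background job.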

Cheers,
Flavio

>On 9/10/15 3:28 PM, Poulos, Brianna L. wrote:
>> Malini,
>>
>> Thank you for bringing up the "killed" state as it relates to quota.  We
>> opted to move the image to a killed state since that is what occurs when
>> an upload fails, and the signature verification failure would occur during
>> an upload.  But we should keep in mind the potential to take up space and
>> yet not take up quota when signature verification fails.
>>
>> Regarding the MD5 hash, there is currently a glance spec [1] to allow the
>> hash method used for the checksum to be configurable - currently it is
>> hardcoded in glance.  After making it configurable, the default would
>> transition from MD5 to something more secure (like SHA-256).
>>
>> [1] https://review.openstack.org/#/c/191542/
>>
>> Thanks,
>> ~Brianna
>>
>>
>>
>>
>> On 9/10/15, 5:10 , "Bhandaru, Malini K" <malini.k.bhandaru at intel.com>
>> wrote:
>>
>>> Brianna, I can imagine a denial of service attack by uploading images
>>> whose signature is invalid if we allow them to reside in Glance
>>> in a "killed" state. This would be less of an issue if "killed" images still
>>> consume storage quota until actually deleted.
>>> Also, given MD5 is less secure, why not have the default hash be SHA-1 or SHA-2?
>>> Regards
>>> Malini
>>>
>>> -----Original Message-----
>>> From: Poulos, Brianna L. [mailto:Brianna.Poulos at jhuapl.edu]
>>> Sent: Wednesday, September 09, 2015 9:54 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: stuart.mclaren at hp.com
>>> Subject: Re: [openstack-dev] [glance] [nova] Verification of glance
>>> images before boot
>>>
>>> Stuart is right about what will currently happen in Nova when an image is
>>> downloaded, which protects against unintentional modifications to the
>>> image data.
>>>
>>> What is currently being worked on is adding the ability to verify a
>>> signature of the checksum.  The flow of this is as follows:
>>> 1. The user creates a signature of the "checksum hash" (currently MD5) of
>>> the image data offline.
>>> 2. The user uploads a public key certificate, which can be used to verify
>>> the signature to a key manager (currently Barbican).
>>> 3. The user creates an image in glance, with signature metadata
>>> properties.
>>> 4. The user uploads the image data to glance.
>>> 5. If the signature metadata properties exist, glance verifies the
>>> signature of the "checksum hash", including retrieving the certificate
>>> from the key manager.
>>> 6. If the signature verification fails, glance moves the image to a
>>> killed state, and returns an error message to the user.
>>> 7. If the signature verification succeeds, a log message indicates that
>>> it succeeded, and the image upload finishes successfully.
>>>
>>> 8. Nova requests the image from glance, along with the image properties,
>>> in order to boot it.
>>> 9. Nova uses the signature metadata properties to verify the signature
>>> (if a configuration option is set).
>>> 10. If the signature verification fails, nova does not boot the image,
>>> but errors out.
>>> 11. If the signature verification succeeds, nova boots the image, and a
>>> log message notes that the verification succeeded.
>>>
>>> Regarding what is currently in Liberty, the blueprint mentioned [1] has
>>> merged, and code [2] has also been merged in glance, which handles steps
>>> 1-7 of the flow above.
>>>
>>> For steps 7-11, there is currently a nova blueprint [3], along with code
>>> [4], which are proposed for Mitaka.
>>>
>>> Note that we are in the process of adding official documentation, with
>>> examples of creating the signature as well as the properties that need to
>>> be added for the image before upload.  In the meantime, there's an
>>> etherpad that describes how to test the signature verification
>>> functionality in Glance [5].
>>>
>>> Also note that this is the initial approach, and there are some
>>> limitations.  For example, ideally the signature would be based on a
>>> cryptographically secure (i.e. not MD5) hash of the image.  There is a
>>> spec in glance to allow this hash to be configurable [6].
>>>
>>> [1]
>>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verificati
>>> o
>>> n-support
>>> [2]
>>> https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107dd
>>> b
>>> 4b04ff6e
>>> [3] https://review.openstack.org/#/c/188874/
>>> [4] https://review.openstack.org/#/c/189843/
>>> [5]
>>> https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>>> [6] https://review.openstack.org/#/c/191542/
>>>
>>>
>>> Thanks,
>>> ~Brianna
>>>
>>>
>>>
>>>
>>> On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com> wrote:
>>>
>>>> That's correct.
>>>>
>>>> The size and the checksum are to be verified outside of Glance, in this
>>>> case Nova. However, you may want to note that it's not necessary that
>>>> all Nova virt drivers would use py-glanceclient so you would want to
>>>> check the download specific code in the virt driver your Nova
>>>> deployment is using.
>>>>
>>>> Having said that, essentially the flow seems appropriate. An error must
>>>> be raised on mismatch.
>>>>
>>>> The signing BP was to help prevent the compromised Glance from changing
>>>> the checksum and image blob at the same time. Using a digital
>>>> signature, you can prevent download of compromised data. However, the
>>>> feature has just been implemented in Glance; Glance users may take time
>>>> to adopt.
>>>>
>>>>
>>>>
>>>> On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>>>> The glance client (running 'inside' the Nova server) will
>>>>> re-calculate the checksum as it downloads the image and then compare
>>>>> it against the expected value. If they don't match an error will be
>>>>> raised.
>>>>>
>>>>>> How can I know that the image that a new instance is spawned from -
>>>>>> is actually the image that was originally registered in glance - and
>>>>>> has not been maliciously tampered with in some way?
>>>>>>
>>>>>> Is there some kind of verification that is performed against the
>>>>>> md5sum of the registered image in glance before a new instance is
>>>>>> spawned?
>>>>>>
>>>>>> Is that done by Nova?
>>>>>> Glance?
>>>>>> Both? Neither?
>>>>>>
>>>>>> The reason I ask is some 'paranoid' security (that is their job I
>>>>>> suppose) people have raised these questions.
>>>>>>
>>>>>> I know there is a glance BP already merged for L [1] - but I would
>>>>>> like to understand the actual flow in a bit more detail.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> [1]
>>>>>>
>>>>>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verif
>>>>>> ica
>>>>>> tion-support
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards,
>>>>>> Maish Saidel-Keesing
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>> --
>>>>
>>>> Thanks,
>>>> Nikhil
>>>>
>>>>
>>>
>>>
>>
>
>-- 
>
>Thanks,
>Nikhil
>
>

-- 
@flaper87
Flavio Percoco

From lucasagomes at gmail.com  Fri Sep 11 08:48:52 2015
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Fri, 11 Sep 2015 09:48:52 +0100
Subject: [openstack-dev] [Ironic] There is a function to display the VGA
 emulation screen of BMC in the baremetal node on the Horizon?
In-Reply-To: <870B6C6CD434FA459C5BF1377C680282023A7160@BPXM20GP.gisp.nec.co.jp>
References: <870B6C6CD434FA459C5BF1377C680282023A7160@BPXM20GP.gisp.nec.co.jp>
Message-ID: <CAB1EZBoxqM=OOQ-RH3okW9AAoT2pAgTeWB_DDRQkiat=_870Ew@mail.gmail.com>

Hi,

> We are investigating how to display, on Horizon, the VGA
> emulation screen of the BMC for a bare metal node that has been
> deployed by Ironic.
> If this were already implemented, I would expect the connection
> information of a VNC or SPICE server (converted if necessary)
> for the BMC's VGA emulation screen to be returned as the stdout of
> "nova get-*-console".
> However, while investigating how to configure Ironic and so
> on, we could not find a way to do this.
> I also searched roughly for such an implementation
> in the Ironic source code, but found nothing.
>
> So I believe the current Ironic does not implement such a feature.
> Is this correct?
>

A couple of drivers in Ironic support a web console (shellinabox); you
can take a look at the docs to see how to enable and use it:
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-node-web-console

Hope that helps,
Lucas


From dtantsur at redhat.com  Fri Sep 11 08:56:07 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 11 Sep 2015 10:56:07 +0200
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
Message-ID: <55F29727.7070800@redhat.com>

Hi all!

Our install guide is huge, and I've just approved even more text for it.
WDYT about splitting it into a "Basic Install Guide", which will contain
the bare minimum for running ironic and deploying instances, and an
"Advanced Install Guide", which will cover the following things:
1. Using Bare Metal service as a standalone service
2. Enabling the configuration drive (configdrive)
3. Inspection
4. Trusted boot
5. UEFI

Opinions?


From lgy181 at foxmail.com  Fri Sep 11 09:17:07 2015
From: lgy181 at foxmail.com (=?ISO-8859-1?B?THVvIEdhbmd5aQ==?=)
Date: Fri, 11 Sep 2015 17:17:07 +0800
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with
	combined resource-id ?
Message-ID: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>

Hi devs,


I am trying Ceilometer with gnocchi. 


I find that gnocchi cannot deal with a combined resource-id such as instance-xxxxxx-tapxxxxxx or instance-xxxx-vda. I'm not sure whether this is a configuration problem on my side or a bug.


And if such combined resource-ids can be processed correctly, what about their metadata (also called attributes)? In the current design, gnocchi seems to treat instance-aaa, instance-aaa-tap111, and instance-aaa-tap222 as equals, although they have a parent-child relationship and share many attributes.


Does anyone else have the same problem and concern?



------------------
Luo Gangyi
luogangyi at chinamobile.com

From watanabe_isao at jp.fujitsu.com  Fri Sep 11 09:26:26 2015
From: watanabe_isao at jp.fujitsu.com (Watanabe, Isao)
Date: Fri, 11 Sep 2015 09:26:26 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF5DF7@G4W3223.americas.hpqcorp.net>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
 <AC0F94DB49C0C2439892181E6CDA6E6C171F186D@G01JPEXMBYT05>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF5DF7@G4W3223.americas.hpqcorp.net>
Message-ID: <AC0F94DB49C0C2439892181E6CDA6E6C171F1A0D@G01JPEXMBYT05>

Hello, Ramy

Thank you for your help.
Could you do me another favor, please?
I need to move our CI from the sandbox to cinder later.
Do I need to register the CI anywhere so that it can test new patch sets in the cinder project?

Best regards,
Watanabe.isao



> -----Original Message-----
> From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> Sent: Friday, September 11, 2015 12:07 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
> gerrit server
> 
> Done. Thank you for adding your CI system to the wiki.
> 
> Ramy
> 
> -----Original Message-----
> From: Watanabe, Isao [mailto:watanabe_isao at jp.fujitsu.com]
> Sent: Thursday, September 10, 2015 8:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
> gerrit server
> 
> Hello, Ramy
> 
> Could you please add the following CI to the third-party ci group, too.
> 
> Fujitsu ETERNUS CI
> 
> We are preparing this CI test system, and are going to use it to test
> Cinder.
> The wiki page for this CI:
> <https://wiki.openstack.org/wiki/ThirdPartySystems/Fujitsu_ETERNUS_CI>
> 
> Thank you very much.
> 
> Best regards,
> Watanabe.isao
> 
> 
> 
> > -----Original Message-----
> > From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> > Sent: Thursday, September 10, 2015 8:00 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified
> > into gerrit server
> >
> > I added Fnst OpenStackTest CI
> > <https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+stat
> > us :open,n,z>  to the third-party ci group.
> >
> > Ramy
> >
> >
> >
> > From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
> > Sent: Thursday, September 10, 2015 3:51 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified
> > into gerrit server
> >
> >
> >
> >
> >
> > On 10.09.2015 11:30, Xie, Xianshan wrote:
> >
> > 	Hi, all,
> >
> > 	   In my CI environment, after submitting a patch into
> > openstack-dev/sandbox,
> >
> > 	the Jenkins Job can be launched automatically, and the result message
> > of the job also can be posted into the gerrit server successfully.
> >
> > 	Everything seems fine.
> >
> >
> >
> > 	But in the "Verified" column, there is no verified vote, such as +1
> > or -1.
> >
> > You will be able to vote once your CI account is added to the "Third-Party CI"
> > group on review.openstack.org:
> > https://review.openstack.org/#/admin/groups/270,members
> > I advise you to ask for this permission in an IRC meeting for
> > third-party CI maintainers:
> > https://wiki.openstack.org/wiki/Meetings/ThirdParty
> > But you still won't be able to vote on other projects, except the sandbox.
> >
> >
> >
> >
> > 	(patch url: https://review.openstack.org/#/c/222049/
> > <https://review.openstack.org/#/c/222049/> ,
> >
> > 	CI name:  Fnst OpenStackTest CI)
> >
> >
> >
> > 	Although I have already added the "verified" label to the
> > layout.yaml under the check pipeline, it does not work yet.
> >
> >
> >
> > And my configuration info is set as follows:
> > 
> > layout.yaml
> > -------------------------------------------
> > pipelines:
> >   - name: check
> >     trigger:
> >       gerrit:
> >         - event: patchset-created
> >         - event: change-restored
> >         - event: comment-added
> > ...
> >     success:
> >       gerrit:
> >         verified: 1
> >     failure:
> >       gerrit:
> >         verified: -1
> > 
> > jobs:
> >   - name: noop-check-communication
> >     parameter-function: reusable_node
> > 
> > projects:
> >   - name: openstack-dev/sandbox
> >     - noop-check-communication
> > -------------------------------------------
> >
> >
> >
> >
> >
> > And the projects.yaml of the Jenkins job:
> > -------------------------------------------
> > - project:
> >     name: sandbox
> >     jobs:
> >       - noop-check-communication:
> >           node: 'devstack_slave || devstack-precise-check || d-p-c'
> > ...
> > -------------------------------------------
> >
> >
> >
> > 	Could anyone help me? Thanks in advance.
> >
> >
> >
> > 	Xiexs
> >
> >
> >
> >
> >
> >
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> 


From mbooth at redhat.com  Fri Sep 11 09:41:47 2015
From: mbooth at redhat.com (Matthew Booth)
Date: Fri, 11 Sep 2015 10:41:47 +0100
Subject: [openstack-dev] [nova] Fine-grained error reporting via the
	external API
Message-ID: <55F2A1DB.9030203@redhat.com>

I've recently been writing a tool which uses Nova's external API. This
is my first time consuming this API, so it has involved a certain amount
of discovery. The tool is here for the curious:

  https://gist.github.com/mdbooth/163f5fdf47ab45d7addd

I have felt hamstrung by the general inability to distinguish between
different types of error. For example, if a live migration failed is it
because:

1. The compute driver doesn't support it.

2. This instance requires block storage migration.

3. Something ephemeral.

These 3 errors all require different responses:

1. Quit and don't try again.

2. Try again immediately with the block migration argument.[1]

3. Try again in a bit.

However, all I have is that I made a BadRequest. I could potentially
grep the human readable error message, but the text of that message
doesn't form part of the API, and it may be translated in any case. As
an API consumer, it seems I can't really tell anything other than 'it
didn't work'. More than that requires guesswork, heuristics and inference.

I don't think I've missed some source of additional wisdom, but it would
obviously be great if I have. Has there ever been any effort to define
some contract around more fine-grained error reporting?

Thanks,

Matt

[1] Incidentally, this suggests to me that live migrate should just do
this anyway.
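The guesswork described above can be sketched as code. This is a toy model,
not python-novaclient: BadRequest and live_migrate below are stand-ins for
the real API, and the retry policy is a heuristic of my own, not an
established contract.

```python
import time


class BadRequest(Exception):
    """Stand-in for the catch-all 400 the external API returns."""


def live_migrate(server, host, block_migration):
    # Toy server-side behaviour: this particular instance happens to
    # need block storage migration, but the error gives no hint of that.
    if not block_migration:
        raise BadRequest("Migration pre-check error")  # opaque message


def migrate_with_guesswork(server, host, attempts=3):
    """Retry heuristically, since cases 1-3 are indistinguishable."""
    for attempt in range(attempts):
        try:
            live_migrate(server, host, block_migration=False)
            return "plain"
        except BadRequest:
            pass
        try:
            # Maybe it was case 2? Retry with the block migration flag.
            live_migrate(server, host, block_migration=True)
            return "block"
        except BadRequest:
            # Maybe case 3: back off and loop (0s delay for the demo).
            time.sleep(0)
    return "gave-up"
```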
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490


From vgridnev at mirantis.com  Fri Sep 11 09:52:38 2015
From: vgridnev at mirantis.com (Vitaly Gridnev)
Date: Fri, 11 Sep 2015 12:52:38 +0300
Subject: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver
	support in Devstack
Message-ID: <CA+O3VAiBdCGZhtfEAysdmSyXfFJhYO0RKKFmxAEjbdtpYUoHDQ@mail.gmail.com>

Hello folks!

Cinder has supported BlockDeviceDriver [1] for a while, but the driver
cannot be selected as a backend during devstack installation. This
driver solves several possible I/O performance issues, which is really
important for Sahara clusters. Also, devstack support for this driver is
required for its CI in Cinder.

I uploaded change [0] adding support for this backend to devstack, but
the change has largely been missed by reviewers. It has got a bunch of
+1's, and I would like to request extra reviews for it.

Thanks!

[0] Change on review: https://review.openstack.org/#/c/214194/
[1] https://wiki.openstack.org/wiki/BlockDeviceDriver
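For the curious, enabling such a backend from local.conf would presumably
follow devstack's existing <type>:<name> pattern for Cinder backends. This
fragment is a hypothetical sketch only: the actual backend type token and
variable value are defined by the change under review [0], not by me.

```ini
[[local|localrc]]
# Hypothetical sketch -- devstack's existing convention for Cinder
# backends is CINDER_ENABLED_BACKENDS=<type>:<name>; the "bdd" type
# token below is assumed, pending the change under review.
CINDER_ENABLED_BACKENDS=bdd:blockdevice
```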

Best Regards,
Vitaly Gridnev
Mirantis, Inc

From d.w.chadwick at kent.ac.uk  Fri Sep 11 10:17:37 2015
From: d.w.chadwick at kent.ac.uk (David Chadwick)
Date: Fri, 11 Sep 2015 11:17:37 +0100
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F27CB7.2040101@redhat.com>
References: <55F27CB7.2040101@redhat.com>
Message-ID: <55F2AA41.9070206@kent.ac.uk>

Whichever approach is adopted, you need to consider the future and the
longer-term objective of moving to fully hierarchical names. I believe
the current Keystone approach is only an interim one, as it only
supports partial hierarchies. Fully hierarchical names have been
discussed in the Keystone group, but I believe this has been
shelved until later in order to get a quick fix released now.

regards

David

On 11/09/2015 08:03, Gilles Dubreuil wrote:
> Hi,
> 
> Today in the #openstack-puppet channel a discussion about the pros and
> cons of using the domain parameter for Keystone V3 has been left open.
> 
> The context
> ------------
> Domain names are needed in Openstack Keystone V3 for identifying users
> or groups (of users) within different projects (tenant).
> Users and groups are uniquely identified within a domain (or a realm as
> opposed to project domains).
> Then projects have their own domain so users or groups can be assigned
> to them through roles.
> 
> In Kilo, Keystone V3 was introduced as an experimental feature.
> Puppet providers such as keystone_tenant, keystone_user, and
> keystone_role_user have been adapted to support it.
> Also, new ones have appeared (keystone_domain) or are on their way
> (keystone_group, keystone_trust).
> And to be backward compatible with V2, the default domain is used when
> no domain is provided.
> 
> In existing providers such as keystone_tenant, the domain can be either
> part of the name or provided as a parameter:
> 
> A. The 'composite namevar' approach:
> 
>    keystone_tenant {'projectX::domainY': ... }
>
> B. The 'meaningless name' approach:
> 
>   keystone_tenant {'myproject': name='projectX', domain=>'domainY', ...}
> 
> Notes:
>  - Actually, using both combined should work too, with the domain
> parameter supposedly overriding the domain part of the name.
>  - Please look at [1] this for some background between the two approaches:
> 
> The question
> -------------
> Decide, between the two approaches, which one we would like to retain
> for puppet-keystone.
> 
> Why it matters?
> ---------------
> 1. Domain names are mandatory for every user, group or project, apart
> from the backward-compatibility period mentioned earlier, where no
> domain means using the default one.
> 2. Long term impact
> 3. Both approaches are not completely equivalent, with different
> consequences for future usage.
> 4. Being consistent
> 5. Therefore it's for the community to decide
> 
> The two approaches are not technically equivalent, and it also depends
> on what a user might expect from a resource title.
> See some of the examples below.
> 
> Because OpenStack DB tables use IDs to uniquely identify objects, there
> can be several objects of the same family with the same name.
> This has made it difficult for Puppet resources to guarantee
> idempotency through unique resources.
> In the context of Keystone V3 domain, hopefully this is not the case for
> the users, groups or projects but unfortunately this is still the case
> for trusts.
> 
> Pros/Cons
> ----------
> A.
>   Pros
>     - Easier names
>   Cons
>     - Titles have no meaning!
>     - Cases where 2 or more resources could exist
>     - More difficult to debug
>     - Titles mismatch when listing the resources (self.instances)
> 
> B.
>   Pros
>     - Unique titles guaranteed
>     - No ambiguity between resource found and their title
>   Cons
>     - More complicated titles
> 
> Examples
> ----------
> = Meaningless name example 1=
> Puppet run:
>   keystone_tenant {'myproject': name='project_A', domain=>'domain_1', ...}
> 
> Second run:
>   keystone_tenant {'myproject': name='project_A', domain=>'domain_2', ...}
> 
> Result/Listing:
> 
>   keystone_tenant { 'project_A::domain_1':
>     ensure  => 'present',
>     domain  => 'domain_1',
>     enabled => 'true',
>     id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>   }
>    keystone_tenant { 'project_A::domain_2':
>     ensure  => 'present',
>     domain  => 'domain_2',
>     enabled => 'true',
>     id      => '4b8255591949484781da5d86f2c47be7',
>   }
> 
> = Composite name example 1  =
> Puppet run:
>   keystone_tenant {'project_A::domain_1', ...}
> 
> Second run:
>   keystone_tenant {'project_A::domain_2', ...}
> 
> # Result/Listing
>   keystone_tenant { 'project_A::domain_1':
>     ensure  => 'present',
>     domain  => 'domain_1',
>     enabled => 'true',
>     id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>    }
>   keystone_tenant { 'project_A::domain_2':
>     ensure  => 'present',
>     domain  => 'domain_2',
>     enabled => 'true',
>     id      => '4b8255591949484781da5d86f2c47be7',
>    }
> 
> = Meaningless name example 2  =
> Puppet run:
>   keystone_tenant {'myproject1': name='project_A', domain=>'domain_1', ...}
>   keystone_tenant {'myproject2': name='project_A', domain=>'domain_1',
> description=>'blah'...}
> 
> Result: project_A in domain_1 has a description
> 
> = Composite name example 2  =
> Puppet run:
>   keystone_tenant {'project_A::domain_1', ...}
>   keystone_tenant {'project_A::domain_1', description => 'blah', ...}
> 
> Result: Error because the resource must be unique within a catalog
> 
> My vote
> --------
> I would love to have approach A for its easier names.
> But I've seen the challenge of maintaining the providers behind the
> curtains, and the confusion it creates between names and titles when we
> are not sure which domain we're dealing with.
> Also, I believe that supporting self.instances consistently with
> meaningful names is saner.
> Therefore I vote B
> 
> Finally
> ------
> Thanks for reading that far!
> To choose, please provide feedback with more pros/cons, examples and
> your vote.
> 
> Thanks,
> Gilles
> 
> 
> PS:
> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
> 
> 


From d.w.chadwick at kent.ac.uk  Fri Sep 11 10:20:53 2015
From: d.w.chadwick at kent.ac.uk (David Chadwick)
Date: Fri, 11 Sep 2015 11:20:53 +0100
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
Message-ID: <55F2AB05.5050207@kent.ac.uk>

Hi Morgan

I think you have been an excellent PTL, and I wish you all the best in
your future roles with OpenStack

regards

David


On 10/09/2015 22:40, Morgan Fainberg wrote:
> As I outlined (briefly) in my recent announcement of changes (
> https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
> ) I will not be running for PTL of Keystone this next cycle (Mitaka).
> The role of PTL is a difficult but extremely rewarding job. It has been
> amazing to see both Keystone and OpenStack grow.
> 
> I am very pleased with the accomplishments of the Keystone development
> team over the last year. We have seen improvements with Federation,
> Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
> releasing a dedicated authentication library, cross-project initiatives
> around improving the Service Catalog, and much, much more. I want to
> thank each and every contributor for the hard work that was put into
> Keystone and its associated projects.
> 
> While I will be changing my focus to spend more time on the general
> needs of OpenStack and working on the Public Cloud story, I am confident
> in those who can, and will, step up to the challenges of leading
> development of Keystone and the associated projects. I may be working
> across more projects, but you can be assured I will be continuing to
> work hard to see the initiatives I helped start through. I wish the best
> of luck to the next PTL.
> 
> I guess this is where I get to write a lot more code soon!
> 
> See you all (in person) in Tokyo!
> --Morgan
> 
> 
> 


From chdent at redhat.com  Fri Sep 11 10:31:07 2015
From: chdent at redhat.com (Chris Dent)
Date: Fri, 11 Sep 2015 11:31:07 +0100 (BST)
Subject: [openstack-dev] [ceilometer] using entry_points for configuration
 considered harmful
Message-ID: <alpine.OSX.2.11.1509111106140.69163@seed.local>


Several weeks ago I made a little tool called pollman[1] that demonstrates
pollster plugins living outside the ceilometer python namespace. I
was going to use it in my portion of a summit talk to show just how
incredibly easy it is to create custom pollsters. After getting the
basics working and testing it out, I forgot all about it until I was
reminded of it in a way that I think demonstrates a fairly significant
problem in the way that ceilometer (and perhaps other OpenStack
projects) manage extensions.

Ceilometer is now frozen for Liberty so I've been doing a lot of
devstack runs to find and fix bugs. And what do I spy with my little
cli but:

     $ ceilometer meter-list
     [...]
     | weather.temperature  | gauge      | C  | 2172797  | pollman | pollman |

     $ ceilometer sample-list -q resource_id=2172797 --limit 1
     [...]
     | 0b667812-586a-11e5-9568-3417ebd4f75d | 2172797     | weather.temperature | gauge | 18.62  | C    | 2015-09-11T09:46:58.571000 |

It's 18.62 C in Newquay today. Good to know.

I have not configured Ceilometer to do this. Pollman registered its
entry_points months ago and they are being used today.

This is really weird to me. I know why it is happening; it is the
designed-in behavior that ceilometer pollsters will activate and run all
available pollster plugins (even when no resources are available),
but goodness me, that's not very explicit.
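A minimal sketch of the mechanism at work: setuptools entry points are
enumerated per namespace across *all installed distributions*, so
installation alone is activation. The namespace string below is an
assumption meant to resemble ceilometer's pollster namespaces; the
enumeration itself is the standard pkg_resources API that stevedore
builds on.

```python
import pkg_resources


def discover_pollsters(namespace="ceilometer.poll.central"):
    """Return the names of every plugin any installed distribution
    registered under this namespace -- no service config consulted."""
    return sorted(ep.name for ep in
                  pkg_resources.iter_entry_points(namespace))


# On a clean interpreter this list is simply empty; on a node with
# pollman installed it would include the weather pollster, whether or
# not the operator ever asked for it.
pollsters = discover_pollsters()
```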

If I want this to stop the most direct thing to do is uninstall pollman.

Sure, I can disable the meters associated with pollman in
pipeline.yaml, but to do so I've got to go find out what they are. One
way to do that is to look at the entry_points, but if I've got
multiple external plugins, those entry_points are scattered all over
the place. Not very friendly.

I like entry_points, especially in the way that they allow external
extensions (let's have more of them please!), but we shouldn't be
using them as service configuration[2].

What ideas do people have for being more explicit?

Now that the pollsters are only using the first half of the
pipeline.yaml file it probably makes sense to explore (in Mitaka) a more
explicit polling configuration file, separate from the transformations
described by the pipeline.

[1] https://github.com/cdent/pollman
It gets the temperature for configured locations and saves it as a
sample. It's unfinished because I promptly forgot about it until
reminded as described above.

[2] Yes, I'm conscious of the fact that this problem goes away if I
always use clean installs or virtualenvs but is that something we
always want to require? It would be handy to be able to have a
generic vm image which is "the ceilometer polling machine" and has
all the pollsters installed but not all started by default.

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From julien at danjou.info  Fri Sep 11 10:31:38 2015
From: julien at danjou.info (Julien Danjou)
Date: Fri, 11 Sep 2015 12:31:38 +0200
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with
	combined resource-id ?
In-Reply-To: <tencent_5535D7DC7A5CE2DE5702951F@qq.com> (Luo Gangyi's message
 of "Fri, 11 Sep 2015 17:17:07 +0800")
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
Message-ID: <m0a8stp7qt.fsf@danjou.info>

On Fri, Sep 11 2015, Luo Gangyi wrote:

Hi Luo,

> I find that gnocchi cannot deal with combined resource-ids such as
> instance-xxxxxx-tapxxxxxx or instance-xxxx-vda. I'm not sure whether it
> is my configuration problem or just a bug.

Which version are you testing? The master branch has no support for
resource ID that are not UUID.

> And if such combined resource-ids can be processed correctly, what about
> their metadata (also called attributes)? In the current design, gnocchi
> seems to treat instance-aaa, instance-aaa-tap111, and instance-aaa-tap222
> as equals although they have a parent-child relationship and share many
> attributes.

We just merged support for those resources. We do not store any
attribute other than the name and parent instance AFAICS. What do you
miss as an attribute exactly?
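For reference, one common way to fit non-UUID ids like these into a
UUID-keyed store is a deterministic UUID5 derivation. This is only a sketch
of that idea: the namespace constant below is made up, not whatever gnocchi
actually merged.

```python
import uuid

# Made-up namespace constant -- illustrative only.
NAMESPACE = uuid.UUID("0a7a15ff-aa13-4ac2-897c-9bdf30ce175b")


def resource_uuid(resource_id):
    """Map an arbitrary resource id to a stable UUID."""
    try:
        return uuid.UUID(resource_id)       # already a UUID: use as-is
    except ValueError:
        # Deterministic: the same combined id always yields the same
        # UUID, but the parent/child link between "instance-aaa" and
        # "instance-aaa-tap111" is lost unless stored as an attribute.
        return uuid.uuid5(NAMESPACE, resource_id)
```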

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info

From julien at danjou.info  Fri Sep 11 11:06:04 2015
From: julien at danjou.info (Julien Danjou)
Date: Fri, 11 Sep 2015 13:06:04 +0200
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with
	combined resource-id ?
In-Reply-To: <m0a8stp7qt.fsf@danjou.info> (Julien Danjou's message of "Fri, 11
 Sep 2015 12:31:38 +0200")
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info>
Message-ID: <m04mj1p65f.fsf@danjou.info>

On Fri, Sep 11 2015, Julien Danjou wrote:

> Which version are you testing? The master branch has no support for

s/no/now/

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info

From sean at dague.net  Fri Sep 11 11:12:45 2015
From: sean at dague.net (Sean Dague)
Date: Fri, 11 Sep 2015 07:12:45 -0400
Subject: [openstack-dev] [gate] broken by pyeclib 1.0.9 release
In-Reply-To: <CAJ3HoZ2VvwcVsYYXQhw=EtazZvmTR7ubbqDOWV6drUKk9Qr+6w@mail.gmail.com>
References: <55F19744.6020503@dague.net>
 <82115773F5CCE44DA5B6CE8B3878B6F028F51E0E@fmsmsx115.amr.corp.intel.com>
 <55F1CEED.8040304@linux.vnet.ibm.com>
 <CAJ3HoZ18XE3+VJLTznau0PjEftS+DLBi63TrQ2vCAEWAqkakVw@mail.gmail.com>
 <CAJ3HoZ2VvwcVsYYXQhw=EtazZvmTR7ubbqDOWV6drUKk9Qr+6w@mail.gmail.com>
Message-ID: <55F2B72D.6090505@dague.net>

On 09/10/2015 03:45 PM, Robert Collins wrote:
> On 11 September 2015 at 07:23, Robert Collins <robertc at robertcollins.net> wrote:
>> Note that master is pinned:
>>
>> commit aca1a74909d7a2841cd9805b7f57c867a1f74b73
>> Author: Tushar Gohad <tushar.gohad at intel.com>
>> Date:   Tue Aug 18 07:55:18 2015 +0000
>>
>>     Restrict PyECLib version to 1.0.7
>>
>>     v1.0.9 rev of PyECLib replaces Jerasure with a native EC
>>     implementation (liberasurecode_rs_vand) as the default
>>     EC scheme.  Going forward, Jerasure will not be bundled
>>     with PyPI version of PyECLib as it used to be, until
>>     v1.0.7.
>>
>>     This is an interim change to Global/Swift requirements
>>     until we get v1.0.9 PyECLib released and included in
>>     global-requirements and ready patches that change Swift
>>     default ec_type (for doc, config samples and unit tests)
>>     from "jerasure_rs_vand" to "liberasurecode_rs_vand."
>>
>>     Without this change, Swift unit tests will break at gate
>>     as soon as PyECLib v1.0.9 lands on PyPI
>>
>>     * Swift is the only user of PyECLib at the moment
>>
>>     Change-Id: I52180355b95679cbcddd497bbdd9be8e7167a3c7
>>
>>
>> But it appears a matching change was not done to j/k - and the pin
>> hasn't been removed from master.
> 
> I'm going to propose another manual review rule, I think: we should not
> permit lower releases to use higher versions of libraries -
> approximately no one tests downgrades of their thing [and while it only
> matters for packages with weird installs / state management, it's a
> glaring hole in our reliability story].

I feel like that's a bad thing to assume of people's systems. What is
the expected behavior of an installer if it discovers installing
OpenStack requires downgrading a library? Halt and catch fire?

It also means we're back to having to pin requirements in stable
branches, because that's the only way we can guarantee this for people.
And that's a thing we specifically wanted to get out of the business of
doing, because it led to all kinds of problems.

	-Sean

-- 
Sean Dague
http://dague.net


From sean at dague.net  Fri Sep 11 11:19:57 2015
From: sean at dague.net (Sean Dague)
Date: Fri, 11 Sep 2015 07:19:57 -0400
Subject: [openstack-dev] [nova] Fine-grained error reporting via the
 external API
In-Reply-To: <55F2A1DB.9030203@redhat.com>
References: <55F2A1DB.9030203@redhat.com>
Message-ID: <55F2B8DD.6070802@dague.net>

On 09/11/2015 05:41 AM, Matthew Booth wrote:
> [... full message quoted above ...]
>
> I don't think I've missed some source of additional wisdom, but it would
> obviously be great if I have. Has there ever been any effort to define
> some contract around more fine-grained error reporting?

This is an API working group recommendation evolving here. The crux of
which is going to be a structured json error return document that will
contain more info. https://review.openstack.org/#/c/167793/

	-Sean

-- 
Sean Dague
http://dague.net


From gilles at redhat.com  Fri Sep 11 11:25:52 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Fri, 11 Sep 2015 21:25:52 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F2AA41.9070206@kent.ac.uk>
References: <55F27CB7.2040101@redhat.com> <55F2AA41.9070206@kent.ac.uk>
Message-ID: <55F2BA40.5010108@redhat.com>



On 11/09/15 20:17, David Chadwick wrote:
> Whichever approach is adopted you need to consider the future and the
> longer term objective of moving to fully hierarchical names. I believe
> the current Keystone approach is only an interim one, as it only
> supports partial hierarchies. Fully hierarchical names has been
> discussed in the Keystone group, but I believe that this has been
> shelved until later in order to get a quick fix released now.
> 
> regards
> 
> David
> 

Thanks David,

That's interesting.
So sub-projects push the issue further down.
And maybe one day sub-domains and sub-users?

keystone_role_user {
'user.subuser::domain1 at project.subproject.subsubproject::domain2':
roles => [...]
}

or

keystone_role_user {'user.subuser':
  user_domain => 'domain1',
  tenant => 'project.subproject',
  tenant_domain => 'domain2',
  roles => [...]
}

I tend to think the domain must stick with the name it's associated
with, otherwise we have to say 'here the domain for this and that, etc'.





From ftersin at hotmail.com  Fri Sep 11 11:28:11 2015
From: ftersin at hotmail.com (Feodor Tersin)
Date: Fri, 11 Sep 2015 14:28:11 +0300
Subject: [openstack-dev] [nova] Fine-grained error reporting via the
 external API
In-Reply-To: <55F2A1DB.9030203@redhat.com>
References: <55F2A1DB.9030203@redhat.com>
Message-ID: <COL130-W857C1BA6C5F0358A20282DBE500@phx.gbl>

> From: mbooth at redhat.com
> To: openstack-dev at lists.openstack.org
> Date: Fri, 11 Sep 2015 10:41:47 +0100
> Subject: [openstack-dev] [nova] Fine-grained error reporting via the	external API
> 
> However, all I have is that I made a BadRequest. I could potentially
> grep the human readable error message, but the text of that message
> doesn't form part of the API, and it may be translated in any case. As
> an API consumer, it seems I can't really tell anything other than 'it
> didn't work'. More than that requires guesswork, heuristics and inference.
> 
> I don't think I've missed some source of additional wisdom, but it would
> obviously be great if I have. Has there ever been any effort to define
> some contract around more fine-grained error reporting?

Matt, iiuc this has been discussed earlier [1]. There is a review [2] mentioned at the end of that thread.

[1] http://markmail.org/message/6l6szrm6ox7w2cxk
[2] https://review.openstack.org/#/c/167793/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/8ebbc81c/attachment.html>

From kzaitsev at mirantis.com  Fri Sep 11 11:29:15 2015
From: kzaitsev at mirantis.com (Kirill Zaitsev)
Date: Fri, 11 Sep 2015 14:29:15 +0300
Subject: [openstack-dev] [murano] [dashboard] Remove the owner filter
 from "Package Definitions" page
In-Reply-To: <CAOFFu8Zo5SRVPUytGk7kj4UgNN5KJ5m39d9NeJpKoB427FbzfA@mail.gmail.com>
References: <CAKSp79y8cCU7z0S-Pzgy2k1TNJZZMsyVYXk-bEtSj6ByoB4JZQ@mail.gmail.com>
 <CAM6FM9S47YmJsTYGVNoPc7L2JGjBpCB+-s-HTd=d+HK939GEEg@mail.gmail.com>
 <CAOFFu8Zo5SRVPUytGk7kj4UgNN5KJ5m39d9NeJpKoB427FbzfA@mail.gmail.com>
Message-ID: <etPan.55f2bb0b.31e50771.146@TefMBPr.local>

I believe pagination is not broken there, since it works the same way pagination works on the glance images page in horizon (where the filter comes from).
Nevertheless, +1 from me on the idea of replacing the filter with a text-based filter that searches by name. AFAIK it should be able to paginate based on filtered API calls.
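For what it's worth, a name filter that still paginates could look roughly like the sketch below; the `client.packages.list` call and its parameters are assumptions for illustration, not the actual muranoclient API:

```python
def iter_packages_by_name(client, name, page_size=20):
    """Yield packages matching a name filter, paginating with a marker.

    Assumes a hypothetical client whose list() accepts server-side
    filters, a page limit, and a marker (the id of the last item seen).
    """
    marker = None
    while True:
        page = client.packages.list(filters={"name": name},
                                    limit=page_size, marker=marker)
        if not page:
            return
        for pkg in page:
            yield pkg
        # continue from the last package of this page
        marker = page[-1].id
```

Because the filtering happens server-side, each page request stays consistent with the filter, which is what makes marker-based pagination workable here.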

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 4 Sep 2015 at 14:49:26, Ekaterina Chernova (efedorova at mirantis.com) wrote:

Agreed.

Currently, pagination is broken on the "Package Definitions" page, so removing that filter
will fix it. Also, the 'Other' tab looks unhelpful; the admin should be shown which tenant
each package belongs to. This improvement will be added later.

Regards,
Kate.

On Fri, Sep 4, 2015 at 1:06 PM, Alexander Tivelkov <ativelkov at mirantis.com> wrote:
+1 on this.

Filtering by ownership makes sense only in the Catalog view (i.e. on the page of usable apps), but not in an admin-like console such as the list of package definitions.

--
Regards,
Alexander Tivelkov

On Fri, Sep 4, 2015 at 12:36 PM, Dmitro Dovbii <ddovbii at mirantis.com> wrote:
Hi folks!

I want to suggest deleting the owner filter (3 tabs) from the Package Definitions page. Previously this filter was available to all users, and we agreed that it is useless. Now it is available only to admins, but I think this still doesn't improve the UX. Moreover, this filter prevents the implementation of search by name, because the two filters can work inconsistently.
So, please express your opinion on this issue. If you agree, I will remove this filter ASAP.

Best regards,
Dmytro Dovbii

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/00f70e68/attachment.html>

From mbooth at redhat.com  Fri Sep 11 11:31:41 2015
From: mbooth at redhat.com (Matthew Booth)
Date: Fri, 11 Sep 2015 12:31:41 +0100
Subject: [openstack-dev] [nova] Fine-grained error reporting via the
 external API
In-Reply-To: <55F2B8DD.6070802@dague.net>
References: <55F2A1DB.9030203@redhat.com> <55F2B8DD.6070802@dague.net>
Message-ID: <55F2BB9D.5040809@redhat.com>

On 11/09/15 12:19, Sean Dague wrote:
> On 09/11/2015 05:41 AM, Matthew Booth wrote:
>> I've recently been writing a tool which uses Nova's external API. This
>> is my first time consuming this API, so it has involved a certain amount
>> of discovery. The tool is here for the curious:
>>
>>   https://gist.github.com/mdbooth/163f5fdf47ab45d7addd
>>
>> I have felt hamstrung by the general inability to distinguish between
>> different types of error. For example, if a live migration failed is it
>> because:
>>
>> 1. The compute driver doesn't support support it.
>>
>> 2. This instance requires block storage migration.
>>
>> 3. Something ephemeral.
>>
>> These 3 errors all require different responses:
>>
>> 1. Quit and don't try again.
>>
>> 2. Try again immediately with the block migration argument.[1]
>>
>> 3. Try again in a bit.
>>
>> However, all I have is that I made a BadRequest. I could potentially
>> grep the human readable error message, but the text of that message
>> doesn't form part of the API, and it may be translated in any case. As
>> an API consumer, it seems I can't really tell anything other than 'it
>> didn't work'. More than that requires guesswork, heuristics and inference.
>>
>> I don't think I've missed some source of additional wisdom, but it would
>> obviously be great if I have. Has there ever been any effort to define
>> some contract around more fine-grained error reporting?
>>
>> Thanks,
>>
>> Matt
>>
>> [1] Incidentally, this suggests to me that live migrate should just do
>> this anyway.
> 
> This is an API working group recommendation evolving here. The crux of
> which is going to be a structured json error return document that will
> contain more info. https://review.openstack.org/#/c/167793/

Thanks, Sean, that's exactly what I was looking for. I'll continue this
discussion in that review.
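As a sketch of what consuming such a structured error document could look like from the client side, mapped onto the three live-migration cases from the original mail; the field names and error codes below are purely hypothetical, not the actual API-WG proposal:

```python
import json

def classify_live_migration_error(body):
    """Map a hypothetical structured error document to a client action."""
    err = json.loads(body)["errors"][0]
    code = err.get("code", "")
    if code.endswith("migration-unsupported"):
        return "give-up"                    # driver can't do it: quit
    if code.endswith("block-migration-required"):
        return "retry-with-block-migration"
    return "retry-later"                    # assume a transient failure

# illustrative payload of the kind such a document might carry
doc = json.dumps({"errors": [{"code": "compute.block-migration-required",
                              "detail": "instance has local disks"}]})
```

The point is that the dispatch keys off a stable machine-readable code rather than the translated human-readable message.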

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490


From flavio at redhat.com  Fri Sep 11 11:51:03 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 13:51:03 +0200
Subject: [openstack-dev] [zaqar][all] PTL No-Candidacy
Message-ID: <20150911115103.GA8182@redhat.com>

Greetings,

I'm sending this email to announce that I won't be running for Zaqar's
PTL position this cycle.

I've been Zaqar's PTL for two cycles and I believe it is time for me
to move on. More importantly, I believe it's time for this great,
still small, project to be led by someone else. The reasons behind
this belief have nothing to do with either the previous state of the
project or its current success story. If anything, my decision not to
run has everything to do with the project's current growth.

As many of you know, Zaqar (formerly known as Marconi) went through
many ups and downs, from great discussions and growth attempts to
almost being shut down[0]. This has taught me a lot but, more
importantly, it has made the team stronger and clarified the team's
goals and path. To prove that, let me share some of the success
stories the team has had since Vancouver:

3 great milestones
==================

Let me start by sharing the progress the project has made code-wise.
While it may not be the most important thing for many people, I
believe it's extremely valuable for the project. The reason is that
not a single member of this team is a full-time Zaqar developer. That
means every member of this team has a different full-time
responsibility, and every contribution made to the project has been
made in their spare working (or free) time. From amazing Outreachy
mentees (we've mentored participants of the Outreachy program since
cycle 1) to great contributors from other projects in OpenStack.

In milestone #1[1], we closed several bugs while we discussed the
features that we wanted to work on during Liberty. In milestone #2[2],
some of the features we wanted to have in Liberty started to land and
several bugs were fixed as well. In milestone #3[3], many bugs were
fixed thanks to a heavy testing session. But it doesn't end there. In
RC1[4], 3 FFEs were granted - not carelessly, FWIW - to complete all
the work we had planned for Liberty and, of course, more bug fixes
landed.

We now even have a websocket example in the code base... ZOMG!

In addition to the above, the client library has kept moving forward
and is being aligned with the current, maintained API. This progress
just makes me happier and happier. Keep reading and you'll know why.

Adoption by other projects
==========================

If you read the call for adoption thread[0], you probably know how
important that was for the project to move forward. After many
discussions in Vancouver, on IRC, conferences, mailing lists, pigeons,
telegrams, etc. projects started to see[5] the different use-cases for
Zaqar and we started talking about implementations and steps forward.
One good example of this is Heat's use of Zaqar for
software-config[6], which was worked on and implemented.

Things didn't stop there on this front. Other projects, like Sahara,
are also considering using Zaqar to communicate with guests agents.
While this is under discussion on Sahara's side, the required features
for it to happen and be more secure have been implemented in Zaqar[7].
Other interesting discussions are also on-going that might help with
Zaqar's adoption[8].

That said, one of the efforts I'm most excited about right now is the
puppet-zaqar project, which will make it simpler for puppet-based
deployments to, well, deploy zaqar[9].

Community Growth
================

None of the above would have been possible without a great community
and especially without growing it. I'm not talking about the growth of
the core reviewers team - although we did have an addition[10] - but
the growth of the community across OpenStack. Folks from other teams -
OpenStack Puppet, Sahara, Heat, Trove, cross-project efforts - have
joined the effort of pushing Zaqar forward in different ways (like the
ones I've mentioned before).

Therefore, I owe a huge THANK YOU to each and every one of the people
who helped make this progress possible.

Oh God, please, stop talking
============================

Sure, fine! But before I do that, let me share why I've said all the
above.

The above is not meant to show off what the team has accomplished,
and it's definitely not to take any credit whatsoever. It's to show
exactly why the team needs a new PTL.

I believe PTLs should rotate every 2 cycles (if not every cycle). I've
been the PTL for 2 cycles (or probably even more) and it's time for
the vision and efforts of other folks to jump in. It's time for folks
with more ops knowledge than me to help make Zaqar more
"maintainable". It's time for new technical issues to come up and for
us as a community to work together on solving them. More
cross-project collaboration, more API improvements, and more user
stories are what Zaqar needs right now, and I believe there are very
capable folks on Zaqar's team who would be perfect for this task.

One thing I'd like the whole team to put some effort into, regardless
of what technical decisions are taken, is increasing the diversity of
the project. Zaqar is not as diverse[11] (company-wise) as I'd like,
and that worries me A LOT. Growth will, hopefully, bring more people,
and reaching out to other communities remains important.

It's been an honor to serve as Zaqar's PTL, and it'll be an honor to
contribute to the next PTL's plans and leadership.

Sincerely,
Flavio

P.S: #openstack-zaqar remains the funniest channel ever, just sayin'.

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061967.html
[1] https://launchpad.net/zaqar/+milestone/liberty-1
[2] https://launchpad.net/zaqar/+milestone/liberty-2
[3] https://launchpad.net/zaqar/+milestone/liberty-3
[4] https://launchpad.net/zaqar/+milestone/liberty-rc1
[5] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064739.html
[6] https://github.com/openstack/heat-specs/blob/master/specs/kilo/software-config-zaqar.rst
[7] http://specs.openstack.org/openstack/zaqar-specs/specs/liberty/pre-signed-url.html
[8] https://review.openstack.org/#/c/185822/
[9] https://github.com/openstack/puppet-zaqar
[10] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072191.html
[11] http://stackalytics.com/?module=zaqar-group

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/527acc74/attachment.pgp>

From sean at dague.net  Fri Sep 11 11:55:08 2015
From: sean at dague.net (Sean Dague)
Date: Fri, 11 Sep 2015 07:55:08 -0400
Subject: [openstack-dev] [nova] changes in our tempest/devstack scenarios
Message-ID: <55F2C11C.4080505@dague.net>

This is mostly an FYI for folks about how the test jobs have changed a
bit going into the Liberty release because people might not have kept on
top of it.

On a Liberty devstack the service catalogue now looks as follows:

compute => /v2.1/
compute_legacy => /v2

This means that all dsvm jobs are now using /v2.1/ as their base for
testing. This change was supposed to be seamless, and it mostly was;
a few compat issues were exposed during the flip, which we've since
fixed.

The /v2 provided by devstack is v2.0 on v2.1 (as that's the Nova
default), i.e. the "relaxed validation" version of the v2.1 code
stack, which should actually be indistinguishable from v2.0.
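For an API consumer choosing between these two catalogue entries, the selection logic amounts to something like the following sketch; the catalogue is simplified here to a list of type/url pairs, which is an assumption for illustration rather than the exact Keystone format:

```python
def pick_compute_endpoint(catalog):
    """Prefer the 'compute' (v2.1) entry; fall back to 'compute_legacy' (/v2).

    `catalog` is assumed to be a list of {"type": ..., "url": ...} dicts,
    a simplification of the real Keystone service catalogue.
    """
    for wanted in ("compute", "compute_legacy"):
        for entry in catalog:
            if entry["type"] == wanted:
                return entry["url"]
    raise LookupError("no compute endpoint in catalogue")
```

A client written this way keeps working on older clouds that only publish the legacy entry, while automatically picking up /v2.1/ where available.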

In order to make sure that the /v2 bits of Nova don't regress, there are
now 2 compat jobs up and running:

gate-tempest-dsvm-nova-v20-api - this is running Tempest Nova API tests
against the v2.0 on v2.1 stack. This is going to be around for a long time.

gate-tempest-dsvm-nova-v20-api-legacy - this is running Tempest Nova API
tests against v2.0 on the old v2.0 code base. This will go away once we
remove the v2.0 code stack.

Their success rates are looking pretty reasonable compared to a baseline:

http://tinyurl.com/oergv7q

And as such I've proposed making them voting in the Nova check queue -
https://review.openstack.org/222573


	-Sean

-- 
Sean Dague
http://dague.net



From ramy.asselin at hp.com  Fri Sep 11 11:56:18 2015
From: ramy.asselin at hp.com (Asselin, Ramy)
Date: Fri, 11 Sep 2015 11:56:18 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <AC0F94DB49C0C2439892181E6CDA6E6C171F1A0D@G01JPEXMBYT05>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
 <AC0F94DB49C0C2439892181E6CDA6E6C171F186D@G01JPEXMBYT05>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF5DF7@G4W3223.americas.hpqcorp.net>
 <AC0F94DB49C0C2439892181E6CDA6E6C171F1A0D@G01JPEXMBYT05>
Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF7166@G4W3223.americas.hpqcorp.net>

Follow these instructions to get permission from cinder: [1]
Ramy

[1] http://docs.openstack.org/infra/system-config/third_party.html#permissions-on-your-third-party-system

-----Original Message-----
From: Watanabe, Isao [mailto:watanabe_isao at jp.fujitsu.com] 
Sent: Friday, September 11, 2015 2:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server

Hello, Ramy

Thank you for your help.
Could you do me another favor, please?
I need to move our CI from the sandbox to cinder later.
Do I need to register the CI anywhere so that it can test new patch sets in the cinder project?

Best regards,
Watanabe.isao



> -----Original Message-----
> From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> Sent: Friday, September 11, 2015 12:07 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified 
> into gerrit server
> 
> Done. Thank you for adding your CI system to the wiki.
> 
> Ramy
> 
> -----Original Message-----
> From: Watanabe, Isao [mailto:watanabe_isao at jp.fujitsu.com]
> Sent: Thursday, September 10, 2015 8:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified 
> into gerrit server
> 
> Hello, Ramy
> 
> Could you please add the following CI to the third-party ci group, too.
> 
> Fujitsu ETERNUS CI
> 
> We are preparing this CI test system, and going to use this CI system 
> to test Cinder.
> The wiki of this CI:
> <https://wiki.openstack.org/wiki/ThirdPartySystems/Fujitsu_ETERNUS_CI>
> 
> Thank you very much.
> 
> Best regards,
> Watanabe.isao
> 
> 
> 
> > -----Original Message-----
> > From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> > Sent: Thursday, September 10, 2015 8:00 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified 
> > into gerrit server
> >
> > I added Fnst OpenStackTest CI
> > <https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+status:open,n,z>
> > to the third-party ci group.
> >
> > Ramy
> >
> >
> >
> > From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
> > Sent: Thursday, September 10, 2015 3:51 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified 
> > into gerrit server
> >
> >
> >
> >
> >
> > On 10.09.2015 11:30, Xie, Xianshan wrote:
> >
> > 	Hi, all,
> >
> > 	   In my CI environment, after submitting a patch into 
> > openstack-dev/sandbox,
> >
> > 	the Jenkins Job can be launched automatically, and the result 
> > message of the job also can be posted into the gerrit server successfully.
> >
> > 	Everything seems fine.
> >
> >
> >
> > 	But in the "Verified" column, there is no verified vote, such as +1 
> > or -1.
> >
> > You will be able to vote once your CI account is added to the "Third-Party CI"
> > group on review.openstack.org:
> > https://review.openstack.org/#/admin/groups/270,members
> > I advise you to ask for such permission in an IRC meeting of the
> > third-party CI maintainers:
> > https://wiki.openstack.org/wiki/Meetings/ThirdParty
> > But you still won't be able to vote on other projects, except the sandbox.
> >
> >
> >
> >
> > 	(patch url: https://review.openstack.org/#/c/222049/
> > <https://review.openstack.org/#/c/222049/> ,
> >
> > 	CI name:  Fnst OpenStackTest CI)
> >
> >
> >
> > 	Although I have already added the "verified" label into the 
> > layout.yaml , under the check pipeline, it does not work yet.
> >
> >
> >
> > 	And my configuration info is set as follows:
> >
> > 	Layout.yaml
> > 	-------------------------------------------
> > 	pipelines:
> > 	  - name: check
> > 	    trigger:
> > 	      gerrit:
> > 	        - event: patchset-created
> > 	        - event: change-restored
> > 	        - event: comment-added
> > 	    ...
> > 	    success:
> > 	      gerrit:
> > 	        verified: 1
> > 	    failure:
> > 	      gerrit:
> > 	        verified: -1
> >
> > 	jobs:
> > 	  - name: noop-check-communication
> > 	    parameter-function: reusable_node
> >
> > 	projects:
> > 	  - name: openstack-dev/sandbox
> > 	    - noop-check-communication
> > 	-------------------------------------------
> >
> > 	And the projects.yaml of Jenkins job:
> > 	-------------------------------------------
> > 	- project:
> > 	    name: sandbox
> > 	    jobs:
> > 	      - noop-check-communication:
> > 	          node: 'devstack_slave || devstack-precise-check || d-p-c'
> > 	...
> > 	-------------------------------------------
> >
> >
> >
> > 	Could anyone help me? Thanks in advance.
> >
> >
> >
> > 	Xiexs
> >
> >
> >
> >
> >
> >
> >
> >
> >
> 
> 

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From nik.komawar at gmail.com  Fri Sep 11 11:58:54 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 11 Sep 2015 07:58:54 -0400
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <20150911084236.GR6373@redhat.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com> <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
 <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu> <55F1DBC6.2000904@gmail.com>
 <20150911084236.GR6373@redhat.com>
Message-ID: <55F2C1FE.6080504@gmail.com>

You are right in the sense that that's the ideal scenario.

(Implementation-wise) However, even today we do not guarantee that
behavior. If someone were to propose a new driver, a change to a
driver's capabilities, or anything of that order, images in the killed
status wouldn't be guaranteed to have had their garbage data removed.
The driver may not choose to be resilient enough, or may not take
responsibility for removing the data synchronously on failures.

Taking that fact into account, I considered Brianna's patch to be okay.
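Roughly, the two cleanup strategies under discussion (extending the scrubber vs. deleting data as soon as an image is killed) could be combined as in this sketch; the `image`/`store` interfaces here are illustrative assumptions, not Glance's actual internals:

```python
def kill_image(image, store):
    """Try to drop backing data synchronously when an image moves to
    'killed', falling back to asynchronous scrubbing if the store
    delete fails (e.g. a driver that isn't resilient to failures)."""
    image.status = "killed"
    try:
        store.delete(image.location)
        image.location = None
    except Exception:
        # leave the data for the scrubber, which would need to be
        # extended to also process images in 'killed' status
        image.pending_scrub = True
```

This reflects the compromise in the thread: synchronous cleanup as the ideal, with the scrubber extension as the safety net when a driver can't guarantee it.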

On 9/11/15 4:42 AM, Flavio Percoco wrote:
> On 10/09/15 15:36 -0400, Nikhil Komawar wrote:
>> The solution to this problem is to improve the scrubber to clean up the
>> garbage data left behind in the backend store during such failed
>> uploads.
>>
>> Currently, scrubber cleans up images in pending_delete and extending
>> that to images in killed status would avoid such a situation.
>
> While the above would certainly help, I think it's not the right
> solution. Images in status "killed" should not have data to begin
> with.
>
> I'd rather find a way to clean that data as soon as the image is
> moved to a "killed" state instead of extending the scrubber.
>
> Cheers,
> Flavio
>
>> On 9/10/15 3:28 PM, Poulos, Brianna L. wrote:
>>> Malini,
>>>
>>> Thank you for bringing up the "killed" state as it relates to
>>> quota.  We
>>> opted to move the image to a killed state since that is what occurs
>>> when
>>> an upload fails, and the signature verification failure would occur
>>> during
>>> an upload.  But we should keep in mind the potential to take up
>>> space and
>>> yet not take up quota when signature verification fails.
>>>
>>> Regarding the MD5 hash, there is currently a glance spec [1] to
>>> allow the
>>> hash method used for the checksum to be configurable; currently it is
>>> hardcoded in glance.  After making it configurable, the default would
>>> transition from MD5 to something more secure (like SHA-256).
>>>
>>> [1] https://review.openstack.org/#/c/191542/
>>>
>>> Thanks,
>>> ~Brianna
>>>
>>>
>>>
>>>
>>> On 9/10/15, 5:10 , "Bhandaru, Malini K" <malini.k.bhandaru at intel.com>
>>> wrote:
>>>
>>>> Brianna, I can imagine a denial of service attack by uploading images
>>>> whose signature is invalid if we allow them to reside in Glance
>>>> in a "killed" state. This would be less of an issue if "killed" images
>>>> still consumed storage quota until actually deleted.
>>>> Also, given that MD5 is less secure, why not have the default hash be
>>>> SHA-1 or SHA-2?
>>>> Regards
>>>> Malini
>>>>
>>>> -----Original Message-----
>>>> From: Poulos, Brianna L. [mailto:Brianna.Poulos at jhuapl.edu]
>>>> Sent: Wednesday, September 09, 2015 9:54 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Cc: stuart.mclaren at hp.com
>>>> Subject: Re: [openstack-dev] [glance] [nova] Verification of glance
>>>> images before boot
>>>>
>>>> Stuart is right about what will currently happen in Nova when an
>>>> image is
>>>> downloaded, which protects against unintentional modifications to the
>>>> image data.
>>>>
>>>> What is currently being worked on is adding the ability to verify a
>>>> signature of the checksum.  The flow of this is as follows:
>>>> 1. The user creates a signature of the "checksum hash" (currently
>>>> MD5) of
>>>> the image data offline.
>>>> 2. The user uploads a public key certificate, which can be used to
>>>> verify
>>>> the signature to a key manager (currently Barbican).
>>>> 3. The user creates an image in glance, with signature metadata
>>>> properties.
>>>> 4. The user uploads the image data to glance.
>>>> 5. If the signature metadata properties exist, glance verifies the
>>>> signature of the "checksum hash", including retrieving the certificate
>>>> from the key manager.
>>>> 6. If the signature verification fails, glance moves the image to a
>>>> killed state, and returns an error message to the user.
>>>> 7. If the signature verification succeeds, a log message indicates
>>>> that
>>>> it succeeded, and the image upload finishes successfully.
>>>>
>>>> 8. Nova requests the image from glance, along with the image
>>>> properties,
>>>> in order to boot it.
>>>> 9. Nova uses the signature metadata properties to verify the signature
>>>> (if a configuration option is set).
>>>> 10. If the signature verification fails, nova does not boot the image,
>>>> but errors out.
>>>> 11. If the signature verification succeeds, nova boots the image,
>>>> and a
>>>> log message notes that the verification succeeded.
>>>>
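As an illustration of step 1, the "checksum hash" that gets signed offline can be computed as below; the signing of that hash with the user's private key would then be done with an external crypto library or tool, and the file name is purely illustrative:

```python
import hashlib

def checksum_hash(path, algo="md5"):
    """Compute the image-data checksum that gets signed offline in step 1.

    MD5 is the current hardcoded default per the thread; the glance spec
    in [6] would make the algorithm configurable (e.g. "sha256").
    Reads in chunks so arbitrarily large images fit in constant memory.
    """
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```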
>>>> Regarding what is currently in Liberty, the blueprint mentioned [1]
>>>> has
>>>> merged, and code [2] has also been merged in glance, which handles
>>>> steps
>>>> 1-7 of the flow above.
>>>>
>>>> For steps 7-11, there is currently a nova blueprint [3], along with
>>>> code
>>>> [4], which are proposed for Mitaka.
>>>>
>>>> Note that we are in the process of adding official documentation, with
>>>> examples of creating the signature as well as the properties that
>>>> need to
>>>> be added for the image before upload.  In the meantime, there's an
>>>> etherpad that describes how to test the signature verification
>>>> functionality in Glance [5].
>>>>
>>>> Also note that this is the initial approach, and there are some
>>>> limitations.  For example, ideally the signature would be based on a
>>>> cryptographically secure (i.e. not MD5) hash of the image.  There is a
>>>> spec in glance to allow this hash to be configurable [6].
>>>>
>>>> [1]
>>>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>>> [2]
>>>> https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e
>>>> [3] https://review.openstack.org/#/c/188874/
>>>> [4] https://review.openstack.org/#/c/189843/
>>>> [5]
>>>> https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>>>>
>>>> [6] https://review.openstack.org/#/c/191542/
>>>>
>>>>
>>>> Thanks,
>>>> ~Brianna
>>>>
>>>>
>>>>
>>>>
>>>> On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com> wrote:
>>>>
>>>>> That's correct.
>>>>>
>>>>> The size and the checksum are to be verified outside of Glance, in
>>>>> this
>>>>> case Nova. However, you may want to note that it's not necessary that
>>>>> all Nova virt drivers would use py-glanceclient so you would want to
>>>>> check the download specific code in the virt driver your Nova
>>>>> deployment is using.
>>>>>
>>>>> Having said that, essentially the flow seems appropriate. An error
>>>>> must be raised on mismatch.
>>>>>
>>>>> The signing BP was to help prevent a compromised Glance from changing
>>>>> the checksum and image blob at the same time. Using a digital
>>>>> signature, you can prevent the download of compromised data. However,
>>>>> the feature has just been implemented in Glance; Glance users may take
>>>>> time to adopt it.
>>>>>
>>>>>
>>>>>
>>>>> On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>>>>> The glance client (running 'inside' the Nova server) will
>>>>>> re-calculate the checksum as it downloads the image and then compare
>>>>>> it against the expected value. If they don't match an error will be
>>>>>> raised.
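That client-side check amounts to something like this sketch (simplified, not the actual glanceclient code):

```python
import hashlib

def read_verified(chunks, expected_md5):
    """Re-calculate the checksum while consuming the image stream and
    fail if it does not match the value Glance advertised."""
    h = hashlib.md5()
    data = bytearray()
    for chunk in chunks:
        h.update(chunk)
        data.extend(chunk)
    if h.hexdigest() != expected_md5:
        raise IOError("image checksum mismatch: %s" % h.hexdigest())
    return bytes(data)
```

Note this only protects against accidental corruption in transit; as discussed above, a compromised Glance could alter the checksum and the blob together, which is what the signing blueprint addresses.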
>>>>>>
>>>>>>> How can I know that the image that a new instance is spawned from -
>>>>>>> is actually the image that was originally registered in glance -
>>>>>>> and
>>>>>>> has not been maliciously tampered with in some way?
>>>>>>>
>>>>>>> Is there some kind of verification that is performed against the
>>>>>>> md5sum of the registered image in glance before a new instance is
>>>>>>> spawned?
>>>>>>>
>>>>>>> Is that done by Nova?
>>>>>>> Glance?
>>>>>>> Both? Neither?
>>>>>>>
>>>>>>> The reason I ask is some 'paranoid' security (that is their job I
>>>>>>> suppose) people have raised these questions.
>>>>>>>
>>>>>>> I know there is a glance BP already merged for L [1] - but I would
>>>>>>> like to understand the actual flow in a bit more detail.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> [1]
>>>>>>>
>>>>>>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>>>>>>
>>>>>>>
>>>>>>> -- 
>>>>>>> Best Regards,
>>>>>>> Maish Saidel-Keesing
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> End of OpenStack-dev Digest, Vol 41, Issue 22
>>>>>>> *********************************************
>>>>>>>
>>>>>>
>>>>> -- 
>>>>>
>>>>> Thanks,
>>>>> Nikhil
>>>>>
>>>>>
>>>>
>>>> __________________________________________________________________________
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> -- 
>>
>> Thanks,
>> Nikhil
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/1058f538/attachment.html>

From doug at doughellmann.com  Fri Sep 11 12:19:52 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 11 Sep 2015 08:19:52 -0400
Subject: [openstack-dev] [ceilometer] using entry_points for
	configuration considered harmful
In-Reply-To: <alpine.OSX.2.11.1509111106140.69163@seed.local>
References: <alpine.OSX.2.11.1509111106140.69163@seed.local>
Message-ID: <1441973605-sup-2270@lrrr.local>

Excerpts from Chris Dent's message of 2015-09-11 11:31:07 +0100:
> 
> Several weeks ago I made a little tool called pollman[1] that demonstrates
> pollster plugins that are outside the ceilometer python namespace. I
> was going to use it in my portion of a summit talk to show just how
> incredibly easy it is to create custom pollsters. After getting the
> basics working and testing it out I forgot all about it until reminded
> about it in a way that I think demonstrates a fairly significant problem
> in the way that ceilometer (and perhaps other OpenStack projects) manage
> extensions.
> 
> Ceilometer is now frozen for Liberty so I've been doing a lot of
> devstack runs to find and fix bugs. And what do I spy with my little
> cli but:
> 
>      $ ceilometer meter-list
>      [...]
>      | weather.temperature  | gauge      | C  | 2172797  | pollman | pollman |
> 
>      $ ceilometer sample-list -q resource_id=2172797 --limit 1
>      [...]
>      | 0b667812-586a-11e5-9568-3417ebd4f75d | 2172797     | weather.temperature | gauge | 18.62  | C    | 2015-09-11T09:46:58.571000 |
> 
> It's 18.62 C in Newquay today. Good to know.
> 
> I have not configured Ceilometer to do this. Pollman set
> entry_points on ceilometer months ago and they are being used today.
> 
> This is really weird to me. I know why it is happening; it is the
> designed-in behavior that ceilometer pollsters will activate and run all
> available pollster plugins (even when there are no resources available)
> but goodness me that's not very explicit.
> 
> If I want this to stop the most direct thing to do is uninstall pollman.
> 
> Sure, I can disable the meters associated with pollman in
> pipeline.yaml but to do that I've got to go find out what they are. One
> way to do that is to go look at the entry_points, but if I've got
> multiple external plugins those entry_points are all over the place.
> Not very friendly.

How about making a tool to discover them? Something like
https://pypi.python.org/pypi/entry_point_inspector but more specific to
ceilometer plugins.
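A first cut at such a tool is small; here is a sketch using the standard entry-point machinery (the group name below is illustrative, not necessarily the one ceilometer actually registers):

```python
from importlib.metadata import entry_points

def discover_plugins(group):
    """Return {name: "module:attr"} for every entry point in a group."""
    eps = entry_points()
    try:
        # Python 3.10+ interface
        selected = eps.select(group=group)
    except AttributeError:
        # Older interface: a plain {group: [EntryPoint, ...]} mapping
        selected = eps.get(group, [])
    return {ep.name: ep.value for ep in selected}

# Every installed package that registered a pollster under this group
# shows up here, whether or not the deployer ever configured it.
for name, target in sorted(discover_plugins("ceilometer.poll.central").items()):
    print(name, "->", target)
```

Run against a real deployment, this would have surfaced pollman's weather pollster before it ever produced a sample.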

> 
> I like entry_points, especially in the way that they allow external
> extensions (let's have more of them please!), but we shouldn't be
> using them as service configuration[2].
> 
> What ideas do people have for being more explicit?

Are the plugins grouped in any way or named to allow wildcards for
partial names? For example, is it easy to say "I want all of the
compute pollsters, but I don't know their names" by putting
something like 'compute:*' in the configuration file?
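If pollster names follow a group.metric convention, wildcard selection is cheap to sketch (the names below are made up for illustration):

```python
import fnmatch

def select_pollsters(available, patterns):
    """Keep only pollster names that match at least one configured pattern."""
    return sorted(
        name for name in available
        if any(fnmatch.fnmatch(name, pattern) for pattern in patterns)
    )

available = ["compute.cpu", "compute.memory", "weather.temperature"]
assert select_pollsters(available, ["compute.*"]) == ["compute.cpu", "compute.memory"]
assert select_pollsters(available, ["*"]) == sorted(available)
```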

> 
> Now that the pollsters are only using the first half of the
> pipeline.yaml file it probably makes sense to explore (in Mitaka) a more
> explicit polling configuration file, separate from the transformations
> described by the pipeline.

That may make sense.

> 
> [1] https://github.com/cdent/pollman
> It gets the temperature for configured locations and saves it as a
> sample. It's unfinished because I promptly forgot about it until
> reminded as described above.
> 
> [2] Yes, I'm conscious of the fact that this problem goes away if I
> always use clean installs or virtualenvs but is that something we
> always want to require? It would be handy to be able to have a
> generic vm image which is "the ceilometer polling machine" and has
> all the pollsters installed but not all started by default.

I think the decision was based on the idea that we didn't expect real
deployments to install packages they weren't using. A dev environment is
obviously a different case.

Doug


From christian at berendt.io  Fri Sep 11 12:26:20 2015
From: christian at berendt.io (Christian Berendt)
Date: Fri, 11 Sep 2015 14:26:20 +0200
Subject: [openstack-dev] [keystone] creating new users with invalid mail
	addresses possible
Message-ID: <55F2C86C.5070705@berendt.io>

At the moment it is possible to create new users with invalid mail 
addresses. I pasted the output of my test at 
http://paste.openstack.org/show/456642/. (the listing of invalid mail 
addresses is available at 
http://codefool.tumblr.com/post/15288874550/list-of-valid-and-invalid-email-addresses).

Is it intended that addresses are not validated?

Does it make sense to validate addresses (e.g. with 
https://github.com/mailgun/flanker)?

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537


From gord at live.ca  Fri Sep 11 12:52:26 2015
From: gord at live.ca (gord chung)
Date: Fri, 11 Sep 2015 08:52:26 -0400
Subject: [openstack-dev] [ceilometerclient] Updating global-requirements
 caps.
In-Reply-To: <20150910232628.GB11827@thor.bakeyournoodle.com>
References: <20150910232628.GB11827@thor.bakeyournoodle.com>
Message-ID: <BLU436-SMTP221E17BE0B161D0D6E29C32DE500@phx.gbl>



On 10/09/2015 7:26 PM, Tony Breeds wrote:
> Hi all,
>      In trying to fix a few stable/juno issues we need to release a new version
> of ceilometerclient for stable/juno.  This email is to try and raise awareness
> so that if the proposal is bonkers [1] we can come up with something better.
>
> This isn't currently possible due to the current caps in juno and kilo.
>
> The proposed fix is to:
>
> . update g-r in master (liberty): python-ceilometerclient>=1.2
>    https://review.openstack.org/#/c/222386/
> . update g-r in stable/kilo: python-ceilometerclient>=1.1.1,<1.2
> . release a sync of stable/kilo g-r to stable/kilo python-ceilometerclient as 1.1.1
> . update g-r in stable/juno: python-ceilometerclient<1.1.0,!=1.0.13,!=1.0.14
> . release 1.0.15 with a sync of stable/juno g-r
>
> The point is, leave 1.0.x for juno, 1.1.x for kilo and >=1.2 for liberty
>
> This is being tracked as: https://bugs.launchpad.net/python-ceilometerclient/+bug/1494516
>
> There is a secondary issue of getting the (juno) gate into a shape where we can
> actually do all of that.
>
> Yours Tony.
> [1] Bonkers is a recognized technical term right?

i commented on the patch already but to reiterate, this sounds sane to me. 
we tagged stuff improperly during the juno/kilo timespan so our versioning 
became an issue[1] and it looks like it caught up to us.

as it stands, version 1.1.0 is the rough equivalent of 1.0.14 (but with 
requirement updates).  this seems to solve all the requirements issues 
so i'm content with the solution. thanks to both of you for figuring out 
the requirements logistics.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-ceilometer/%23openstack-ceilometer.2015-04-14.log.html#t2015-04-14T21:50:49
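for what it's worth, tony's proposed partition of the version space can be sanity-checked mechanically. a rough sketch (real requirement parsing should use the packaging library's SpecifierSet, not hand-rolled tuples):

```python
# Branch caps from the proposal, expressed over (major, minor, patch) tuples.
JUNO_EXCLUDED = {(1, 0, 13), (1, 0, 14)}

def juno_ok(v):
    # python-ceilometerclient<1.1.0,!=1.0.13,!=1.0.14
    return v < (1, 1, 0) and v not in JUNO_EXCLUDED

def kilo_ok(v):
    # python-ceilometerclient>=1.1.1,<1.2
    return (1, 1, 1) <= v < (1, 2, 0)

def liberty_ok(v):
    # python-ceilometerclient>=1.2
    return v >= (1, 2, 0)

# The planned 1.0.15 and 1.1.1 releases each land in exactly one branch.
assert juno_ok((1, 0, 15)) and not kilo_ok((1, 0, 15)) and not liberty_ok((1, 0, 15))
assert kilo_ok((1, 1, 1)) and not juno_ok((1, 1, 1)) and not liberty_ok((1, 1, 1))
assert liberty_ok((1, 2, 0)) and not kilo_ok((1, 2, 0))
```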

cheers,

-- 
gord



From dstanek at dstanek.com  Fri Sep 11 13:04:12 2015
From: dstanek at dstanek.com (David Stanek)
Date: Fri, 11 Sep 2015 09:04:12 -0400
Subject: [openstack-dev] [keystone] creating new users with invalid mail
 addresses possible
In-Reply-To: <55F2C86C.5070705@berendt.io>
References: <55F2C86C.5070705@berendt.io>
Message-ID: <CAO69NdnM7X9O57s6K3r=vSaTko3U+ugsSZ8dFhTjreEAC8f6Jg@mail.gmail.com>

On Fri, Sep 11, 2015 at 8:26 AM, Christian Berendt <christian at berendt.io>
wrote:

> At the moment it is possible to create new users with invalid mail
> addresses. I pasted the output of my test at
> http://paste.openstack.org/show/456642/. (the listing of invalid mail
> addresses is available at
> http://codefool.tumblr.com/post/15288874550/list-of-valid-and-invalid-email-addresses
> ).
>
> Is it intended that addresses are not be validated?
>
> Does it makes sense to validate addresses (e.g. with
> https://github.com/mailgun/flanker)?
>

I don't know the complete history of this (I'm sure others can chime in
later), but since Keystone doesn't use the email address for anything it
was never really considered a first class attribute. It is just something
we accept and return through the API. It doesn't even have its own column
in the database.

I don't like this for a variety of reasons and we do have a bug[1] for
fixing this. Last Thursday several of us were discussing making a database
column for the email address as part of the fix for that bug.

1. https://bugs.launchpad.net/keystone/+bug/1218682
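For context, the "no column of its own" handling looks roughly like this (a sketch; the attribute split and column names are illustrative, not keystone's actual schema):

```python
import json

# Attributes the API models explicitly; everything else is packed into
# an "extra" JSON blob column on the user row.
KNOWN_ATTRS = {"name", "domain_id", "enabled", "password"}

def to_row(attrs):
    row = {k: v for k, v in attrs.items() if k in KNOWN_ATTRS}
    row["extra"] = json.dumps(
        {k: v for k, v in attrs.items() if k not in KNOWN_ATTRS})
    return row

row = to_row({"name": "alice", "enabled": True, "email": "alice@example.com"})
assert "email" not in row                        # no dedicated column
assert json.loads(row["extra"])["email"] == "alice@example.com"
```

Nothing on this path ever inspects the value, which is why an invalid address sails straight through.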

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/388ecc9b/attachment.html>

From rmeggins at redhat.com  Fri Sep 11 13:32:10 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Fri, 11 Sep 2015 07:32:10 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F2AA41.9070206@kent.ac.uk>
References: <55F27CB7.2040101@redhat.com> <55F2AA41.9070206@kent.ac.uk>
Message-ID: <55F2D7DA.6010206@redhat.com>

On 09/11/2015 04:17 AM, David Chadwick wrote:
> Whichever approach is adopted you need to consider the future and the
> longer term objective of moving to fully hierarchical names. I believe
> the current Keystone approach is only an interim one, as it only
> supports partial hierarchies. Fully hierarchical names have been
> discussed in the Keystone group, but I believe that this has been
> shelved until later in order to get a quick fix released now.

Can you explain more about "fully hierarchical names"?  What is the 
string representation?
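For comparison, today's flat composite titles split mechanically into (name, domain); a fully hierarchical scheme would need a recursive representation instead. A minimal sketch of the current-style split (assuming '::' as the separator and a V2-style default domain):

```python
def parse_title(title, default_domain="Default"):
    """Split a 'name::domain' composite title; fall back to the default domain."""
    name, sep, domain = title.rpartition("::")
    if not sep:
        return title, default_domain
    return name, domain

assert parse_title("project_A::domain_1") == ("project_A", "domain_1")
assert parse_title("project_A") == ("project_A", "Default")
```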

>
> regards
>
> David
>
> On 11/09/2015 08:03, Gilles Dubreuil wrote:
>> Hi,
>>
>> Today in the #openstack-puppet channel a discussion about the pro and
>> cons of using domain parameter for Keystone V3 has been left opened.
>>
>> The context
>> ------------
>> Domain names are needed in OpenStack Keystone V3 for identifying users
>> or groups (of users) within different projects (tenants).
>> Users and groups are uniquely identified within a domain (or a realm as
>> opposed to project domains).
>> Then projects have their own domain so users or groups can be assigned
>> to them through roles.
>>
>> In Kilo, Keystone V3 has been introduced as an experimental feature.
>> Puppet providers such as keystone_tenant, keystone_user,
>> keystone_role_user have been adapted to support it.
>> Also new ones have appeared (keystone_domain) or are on their way
>> (keystone_group, keystone_trust).
>> And to be backward compatible with V2, the default domain is used when
>> no domain is provided.
>>
>> In existing providers such as keystone_tenant, the domain can be either
>> part of the name or provided as a parameter:
>>
>> A. The 'meaningless name' approach:
>>
>>    keystone_tenant {'myproject': name='projectX', domain=>'domainY', ...}
>>
>> B. The 'composite namevar' approach:
>>
>>    keystone_tenant {'projectX::domainY': ... }
>>
>> Notes:
>>   - Actually using both combined should work too, with the domain
>> parameter overriding the domain part of the name.
>>   - Please look at [1] for some background on the two approaches.
>>
>> The question
>> -------------
>> Decide between the two approaches, the one we would like to retain for
>> puppet-keystone.
>>
>> Why it matters?
>> ---------------
>> 1. Domain names are mandatory for every user, group or project, except
>> during the backward-compatibility period mentioned earlier, where no
>> domain means using the default one.
>> 2. Long term impact
>> 3. The two approaches are not completely equivalent, with different
>> consequences for future usage.
>> 4. Being consistent
>> 5. Therefore it is for the community to decide
>>
>> The two approaches are not technically equivalent and it also depends
>> what a user might expect from a resource title.
>> See some of the examples below.
>>
>> Because OpenStack DB tables use IDs to uniquely identify objects, they
>> can hold several objects of the same family with the same name.
>> This has made it difficult for Puppet resources to guarantee
>> idempotency by having unique resources.
>> In the context of Keystone V3 domain, hopefully this is not the case for
>> the users, groups or projects but unfortunately this is still the case
>> for trusts.
>>
>> Pros/Cons
>> ----------
>> A.
>>    Pros
>>      - Easier names
>>    Cons
>>      - Titles have no meaning!
>>      - Cases where 2 or more resources could exist
>>      - More difficult to debug
>>      - Titles mismatch when listing the resources (self.instances)
>>
>> B.
>>    Pros
>>      - Unique titles guaranteed
>>      - No ambiguity between resource found and their title
>>    Cons
>>      - More complicated titles
>>
>> Examples
>> ----------
>> = Meaningless name example 1=
>> Puppet run:
>>    keystone_tenant {'myproject': name='project_A', domain=>'domain_1', ...}
>>
>> Second run:
>>    keystone_tenant {'myproject': name='project_A', domain=>'domain_2', ...}
>>
>> Result/Listing:
>>
>>    keystone_tenant { 'project_A::domain_1':
>>      ensure  => 'present',
>>      domain  => 'domain_1',
>>      enabled => 'true',
>>      id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>>    }
>>     keystone_tenant { 'project_A::domain_2':
>>      ensure  => 'present',
>>      domain  => 'domain_2',
>>      enabled => 'true',
>>      id      => '4b8255591949484781da5d86f2c47be7',
>>    }
>>
>> = Composite name example 1  =
>> Puppet run:
>>    keystone_tenant {'project_A::domain_1', ...}
>>
>> Second run:
>>    keystone_tenant {'project_A::domain_2', ...}
>>
>> # Result/Listing
>>    keystone_tenant { 'project_A::domain_1':
>>      ensure  => 'present',
>>      domain  => 'domain_1',
>>      enabled => 'true',
>>      id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>>     }
>>    keystone_tenant { 'project_A::domain_2':
>>      ensure  => 'present',
>>      domain  => 'domain_2',
>>      enabled => 'true',
>>      id      => '4b8255591949484781da5d86f2c47be7',
>>     }
>>
>> = Meaningless name example 2  =
>> Puppet run:
>>    keystone_tenant {'myproject1': name='project_A', domain=>'domain_1', ...}
>>    keystone_tenant {'myproject2': name='project_A', domain=>'domain_1',
>> description=>'blah'...}
>>
>> Result: project_A in domain_1 has a description
>>
>> = Composite name example 2  =
>> Puppet run:
>>    keystone_tenant {'project_A::domain_1', ...}
>>    keystone_tenant {'project_A::domain_1', description => 'blah', ...}
>>
>> Result: Error because the resource must be unique within a catalog
>>
>> My vote
>> --------
>> I would love to have approach A for its easier names.
>> But I've seen the challenge of maintaining the providers behind the
>> curtains and the confusion it creates with names/titles, especially when
>> not sure about the domain we're dealing with.
>> Also I believe that supporting self.instances consistently with
>> meaningful names is saner.
>> Therefore I vote B.
>>
>> Finally
>> ------
>> Thanks for reading that far!
>> To choose, please provide feedback with more pros/cons, examples and
>> your vote.
>>
>> Thanks,
>> Gilles
>>
>>
>> PS:
>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From dstanek at dstanek.com  Fri Sep 11 13:40:52 2015
From: dstanek at dstanek.com (David Stanek)
Date: Fri, 11 Sep 2015 09:40:52 -0400
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
Message-ID: <CAO69Ndm=Z--QieMYbYEsUkYJrnXK8nDZmknMS=PUwupKP6917g@mail.gmail.com>

On Thu, Sep 10, 2015 at 5:40 PM, Morgan Fainberg <morgan.fainberg at gmail.com>
wrote:

> While I will be changing my focus to spend more time on the general needs
> of OpenStack and working on the Public Cloud story, I am confident in those
> who can, and will, step up to the challenges of leading development of
> Keystone and the associated projects. I may be working across more
> projects, but you can be assured I will be continuing to work hard to see
> the initiatives I helped start through. I wish the best of luck to the next
> PTL.


It's been an honor and a privilege to work with you. You've done a great
job and I'm sorry to see you go. Fortunately you're not going too far! Good
luck with your future OpenStack adventures!


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/0a498d62/attachment.html>

From jim at jimrollenhagen.com  Fri Sep 11 13:44:58 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 11 Sep 2015 06:44:58 -0700
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <55F29727.7070800@redhat.com>
References: <55F29727.7070800@redhat.com>
Message-ID: <20150911134458.GK21846@jimrollenhagen.com>

On Fri, Sep 11, 2015 at 10:56:07AM +0200, Dmitry Tantsur wrote:
> Hi all!
> 
> Our install guide is huge, and I've just approved even more text for it.
> WDYT about splitting it into a "Basic Install Guide", which will contain
> the bare minimum for running ironic and deploying instances, and an
> "Advanced Install Guide", which will cover the following:
> 1. Using Bare Metal service as a standalone service
> 2. Enabling the configuration drive (configdrive)
> 3. Inspection
> 4. Trusted boot
> 5. UEFI
> 
> Opinions?

+1, our guide is impossibly long. I like the idea of splitting out the
optional bits so folks don't think that it's all required.

// jim


From lbragstad at gmail.com  Fri Sep 11 13:55:17 2015
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 11 Sep 2015 08:55:17 -0500
Subject: [openstack-dev] [keystone] creating new users with invalid mail
 addresses possible
In-Reply-To: <CAO69NdnM7X9O57s6K3r=vSaTko3U+ugsSZ8dFhTjreEAC8f6Jg@mail.gmail.com>
References: <55F2C86C.5070705@berendt.io>
 <CAO69NdnM7X9O57s6K3r=vSaTko3U+ugsSZ8dFhTjreEAC8f6Jg@mail.gmail.com>
Message-ID: <CAE6oFcHwkSaHvQrtvRr1wdfGqW-1Rr2NT1y4XxFxxR1DRFy0kA@mail.gmail.com>

On Fri, Sep 11, 2015 at 8:04 AM, David Stanek <dstanek at dstanek.com> wrote:

> On Fri, Sep 11, 2015 at 8:26 AM, Christian Berendt <christian at berendt.io>
> wrote:
>
>> At the moment it is possible to create new users with invalid mail
>> addresses. I pasted the output of my test at
>> http://paste.openstack.org/show/456642/. (the listing of invalid mail
>> addresses is available at
>> http://codefool.tumblr.com/post/15288874550/list-of-valid-and-invalid-email-addresses
>> ).
>>
>> Is it intended that addresses are not be validated?
>>
>> Does it makes sense to validate addresses (e.g. with
>> https://github.com/mailgun/flanker)?
>>
>
> I don't know the complete history of this (I'm sure others can chime in
> later), but since Keystone doesn't use the email address for anything it
> was never really considered a first class attribute. It is just something
> we accept and return through the API. It doesn't even have its own column
> in the database.
>

Correct, I believe this is the reason why we don't actually tie the email
address attribute validation into jsonschema [0]. The email address
attribute is just something that is grouped into the 'extra' attributes of
a create user request, so it's treated similarly with jsonschema [1]. I
remember having a few discussions around this with various people, probably
in code review somewhere [2].

I think jsonschema has built-in support that would allow us to validate
email addresses [3]. I think that would plug in pretty naturally to what's
already in keystone.

[0]
https://github.com/openstack/keystone/blob/aa8dc5c9c529c2678933c9b211b4640600e55e3a/keystone/identity/schema.py#L24-L33
[1]
https://github.com/openstack/keystone/blob/aa8dc5c9c529c2678933c9b211b4640600e55e3a/keystone/identity/schema.py#L39

[2] https://review.openstack.org/#/c/132122/6/keystone/identity/schema.py
[3]
http://python-jsonschema.readthedocs.org/en/latest/validate/#validating-formats
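As a rough illustration of what such a format check amounts to, here is a stdlib-only sketch (deliberately simple; jsonschema's built-in "email" checker is similarly permissive, mostly testing for the presence of an "@"):

```python
import re

# Minimal check: one "@", no whitespace, a dot somewhere in the domain part.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value):
    return bool(EMAIL_RE.match(value))

assert is_valid_email("user@example.com")
assert not is_valid_email("plainaddress")
assert not is_valid_email("a@b@c@example.com")
```

Real validation (flanker, or jsonschema's FormatChecker wired into the schemas above) is stricter, but even this would reject most of the pasted invalid addresses.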



> I don't like this for a variety of reasons and we do have a bug[1] for
> fixing this. Last Thursday several of us were discussing making a database
> column for the email address as part of the fix for that bug.
>
> 1. https://bugs.launchpad.net/keystone/+bug/1218682
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> www: http://dstanek.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/08916efd/attachment.html>

From aschultz at mirantis.com  Fri Sep 11 14:11:35 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Fri, 11 Sep 2015 09:11:35 -0500
Subject: [openstack-dev] [fuel][swift] Separate roles for Swift nodes
In-Reply-To: <CAGqpfRWr-z-YoLESX8buwamSu_fhYfRXMqX_MWRGu=zXLZv6DQ@mail.gmail.com>
References: <CAGqpfRWr-z-YoLESX8buwamSu_fhYfRXMqX_MWRGu=zXLZv6DQ@mail.gmail.com>
Message-ID: <CABzFt8OvYvcM-_r8vUttT4SjoWC4_VMU6noMco1w2=wUTa7JMA@mail.gmail.com>

Hey Daniel,

So as part of the 7.0 work we added support for plugins to create roles
and to separate roles from the existing system. I think swift would be a
good candidate for this.  I know we also added in some support for an
external swift configuration that will be helpful if you choose to go
down the plugin route.  As examples of plugins where we've separated
roles from the controller (I believe swift currently lives as part of
the controller role), you can take a look at our keystone, database
and rabbitmq plugins:

https://github.com/stackforge/fuel-plugin-detach-keystone
https://github.com/stackforge/fuel-plugin-detach-database
https://github.com/stackforge/fuel-plugin-detach-rabbitmq

-Alex

On Fri, Sep 11, 2015 at 2:24 AM, Daniel Depaoli <
daniel.depaoli at create-net.org> wrote:

> Hi all!
> I'm starting to investigate some improvements for swift installation in
> fuel, in particular a way to dedicate a node to it. I found this blueprint
> https://blueprints.launchpad.net/fuel/+spec/swift-separate-role that
> seems to be what i'm looking for.
> The blueprint was accepted but not yet started. So, can someone tell me
> more about this blueprint? I'm interested in working for it.
>
> Best regards,
>
> --
> ========================================================
> Daniel Depaoli
> CREATE-NET Research Center
> Smart Infrastructures Area
> Junior Research Engineer
> ========================================================
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/11b33d29/attachment.html>

From flavio at redhat.com  Fri Sep 11 14:19:40 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 16:19:40 +0200
Subject: [openstack-dev] [Glance] PTL Candidacy
Message-ID: <20150911141940.GB8182@redhat.com>

Greetings,

I'd like to raise my hand and send in my candidacy for Glance's PTL
position. If you've heard my name, it's most likely in connection with
my role as Zaqar's PTL or my current seat on the TC.

Before I go forward with my candidacy, allow me to say that I won't be
running for Zaqar's PTL position and that it's my intention to be
entirely focused on Glance[0]. I take being a PTL seriously[1] and I
strongly believe in the importance of focus when it comes to making
progress, which is what I'd like to base my candidacy on.

Glance may seem like a simple service, but it's not as simple as many
folks think. It's small compared to many other projects, but it's still
exciting to work on. Perhaps more important than how exciting it
is, we should all remember the impact this project has on the rest of
the community and how it impacts all the OpenStack deployments out
there. In addition to this, Glance remains one of the projects that
are considered part of the starting kit and it serves a very important
task in a cloud deployment.

The reason I'm bringing this up is that I believe we've drifted a bit
from that, and that we should bring focus back to Glance's most
important task: serving cloud images in a resilient, consistent and
reliable way. This is not to say that innovation should be stopped, but
it certainly shouldn't affect this mission.

I believe these are some of the most important topics that our team
should focus on (non exhaustive list):

- Collaboration with other projects to migrate OpenStack to use
  Glance's V2 API

- Collaborate with the defcore team to define a clear view and support
  of what Glance provides and is.

- Increase V2 stability and awareness.

- Improve Glance's gate story to make sure we cover enough scenarios
  that are representative of existing deployments and use cases.

By reading the above four points, it's clear that I think a lot of
our work should go into cross-project collaboration. However, I'd like
to extend that to cross-team collaboration. One of the things that I
believe has affected Glance a lot is the lack of interaction with
other teams, especially with OPs, when it came down to working on new
features that had an impact on users. Therefore, I'd like to help
increase cross-team collaboration in Glance's team.

As far as my involvement in Glance goes, I've been part of the team
since I started working on OpenStack. For the upcoming cycle, I'm
ready to dedicate myself entirely to the project's PTL duties, focused
on what's needed to bring back focus and keep the project growing but
stable.

As some of you know already, I'm also part of the Technical Committee
and I'll be up for election next April, which means I'll be part of
the TC for at least 6 more months. Being part of the TC takes time
and this is something you should keep in mind (as much as I do).
However, if I were to be elected as Glance's PTL, I'd dedicate all my
efforts and time - consider that my current job allows me to be full
time upstream - to just these two roles.

It'd be a great honor for me to work together with the Glance team and
to keep sharing knowledge, experiences and work as I've been doing so
far but this time as a PTL.

Thanks for reading thus far and for considering this candidacy,
Sincerely,
Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074212.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/36547e45/attachment.pgp>

From nik.komawar at gmail.com  Fri Sep 11 14:23:53 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 11 Sep 2015 10:23:53 -0400
Subject: [openstack-dev]  [Glance] glance core rotation part 1
Message-ID: <55F2E3F9.1000907@gmail.com>

Hi,

I would like to propose the following removals from glance-core, based on
the simple criterion of inactivity or limited activity over a long period
of time (2 cycles or more):

Alex Meade
Arnaud Legendre
Mark Washenberger
Iccha Sethi
Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)

Please vote +1 or -1 and we will decide by Monday EOD PT.

-- 

Thanks,
Nikhil



From morgan.fainberg at gmail.com  Fri Sep 11 14:29:28 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 11 Sep 2015 07:29:28 -0700
Subject: [openstack-dev] [keystone] creating new users with invalid mail
	addresses possible
In-Reply-To: <CAE6oFcHwkSaHvQrtvRr1wdfGqW-1Rr2NT1y4XxFxxR1DRFy0kA@mail.gmail.com>
References: <55F2C86C.5070705@berendt.io>
 <CAO69NdnM7X9O57s6K3r=vSaTko3U+ugsSZ8dFhTjreEAC8f6Jg@mail.gmail.com>
 <CAE6oFcHwkSaHvQrtvRr1wdfGqW-1Rr2NT1y4XxFxxR1DRFy0kA@mail.gmail.com>
Message-ID: <A8F46157-ACCC-4E0F-81E6-48B7E2F94C7B@gmail.com>

We don't utilize the email address for anything. It is not meant to be a top-level column. We've had a lot of discussions on this, and the main result is that we decided Keystone should get out of the PII game as much as possible.

I am against making email a top-level attribute. Instead we should be de-emphasizing email (for PII reasons, as keystone does not have a way to securely store it - even as a top-level column) unless email is used as a username. As I recall, "email address" was meant to be removed from most/all of our API examples for these reasons. Unless OpenStack or Keystone starts making real use of the email address and needs that PII in the keystone store, it doesn't make sense to treat it as a first-class attribute. Keystone is not a CRM tool. 

As a side note, I have proposed a way (it needs further work and would be a Mitaka target) to add validation to the extra attributes on a case-by-case basis for a given deployment. [1]

[1] https://review.openstack.org/#/c/190532/

Sent via mobile

> On Sep 11, 2015, at 06:55, Lance Bragstad <lbragstad at gmail.com> wrote:
> 
> 
> 
>> On Fri, Sep 11, 2015 at 8:04 AM, David Stanek <dstanek at dstanek.com> wrote:
>>> On Fri, Sep 11, 2015 at 8:26 AM, Christian Berendt <christian at berendt.io> wrote:
>>> At the moment it is possible to create new users with invalid mail addresses. I pasted the output of my test at http://paste.openstack.org/show/456642/. (the listing of invalid mail addresses is available at http://codefool.tumblr.com/post/15288874550/list-of-valid-and-invalid-email-addresses).
>>> 
>>> Is it intended that addresses are not validated?
>>> 
>>> Does it make sense to validate addresses (e.g. with https://github.com/mailgun/flanker)?
>> 
>> 
>> I don't know the complete history of this (I'm sure others can chime in later), but since Keystone doesn't use the email address for anything it was never really considered a first class attribute. It is just something we accept and return through the API. It doesn't even have its own column in the database.
> 
> Correct, I believe this is the reason why we don't actually tie the email address attribute validation into jsonschema [0]. The email address attribute is just something that is grouped into the 'extra' attributes of a create user request, so it's treated similarly with jsonschema [1]. I remember having a few discussions around this with various people, probably in code review somewhere [2]. 
> 
> I think jsonschema has built-in support that would allow us to validate email addresses [3]. I think that would plug in pretty naturally to what's already in keystone.
> 
> [0] https://github.com/openstack/keystone/blob/aa8dc5c9c529c2678933c9b211b4640600e55e3a/keystone/identity/schema.py#L24-L33
> [1] https://github.com/openstack/keystone/blob/aa8dc5c9c529c2678933c9b211b4640600e55e3a/keystone/identity/schema.py#L39 
> [2] https://review.openstack.org/#/c/132122/6/keystone/identity/schema.py
> [3] http://python-jsonschema.readthedocs.org/en/latest/validate/#validating-formats
> 
> 
>> 
>> I don't like this for a variety of reasons and we do have a bug[1] for fixing this. Last Thursday several of us were discussing making a database column for the email address as part of the fix for that bug.
>> 
>> 1. https://bugs.launchpad.net/keystone/+bug/1218682
>> 
>> -- 
>> David
>> blog: http://www.traceback.org
>> twitter: http://twitter.com/dstanek
>> www: http://dstanek.com
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
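[As an editorial aside: the jsonschema "email" format support Lance points to in [3] can be wired up roughly as below. This is a minimal sketch; the schema is invented for the example and is not keystone's actual identity schema. Note also that jsonschema's built-in email check is deliberately shallow (it essentially only checks for an "@"), which is part of why a stricter library like flanker comes up in this thread.]

```python
# Sketch: validating an "email" field with jsonschema's built-in
# format checker. Schema and field names are illustrative only.
import jsonschema

user_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
    },
    "required": ["name"],
}

# Format keywords are only enforced when a FormatChecker is supplied.
validator = jsonschema.Draft4Validator(
    user_schema, format_checker=jsonschema.FormatChecker())


def is_valid(user):
    """Return True if the user document passes schema + format checks."""
    return not list(validator.iter_errors(user))
```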
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/e5575c8d/attachment.html>

From flavio at redhat.com  Fri Sep 11 14:29:45 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 16:29:45 +0200
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <55F2E3F9.1000907@gmail.com>
References: <55F2E3F9.1000907@gmail.com>
Message-ID: <20150911142945.GC8182@redhat.com>

On 11/09/15 10:23 -0400, Nikhil Komawar wrote:
>Hi,
>
>I would like to propose the following removals from glance-core based on
>the simple criterion of inactivity/limited activity for a long period (2
>cycles or more) of time:
>
>Alex Meade
>Arnaud Legendre
>Mark Washenberger
>Iccha Sethi
>Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)

+1 from me. Glad we're finally doing this.

I'd like to thank them all for having contributed so much to Glance
and I hope to see them back someday.

Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/1f290bec/attachment.pgp>

From flavio at redhat.com  Fri Sep 11 14:40:23 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 16:40:23 +0200
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <55F2C1FE.6080504@gmail.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com>
 <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
 <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu>
 <55F1DBC6.2000904@gmail.com> <20150911084236.GR6373@redhat.com>
 <55F2C1FE.6080504@gmail.com>
Message-ID: <20150911144023.GD8182@redhat.com>

On 11/09/15 07:58 -0400, Nikhil Komawar wrote:
>You are right in the sense that's the ideal scenario.
>
>(Impl-wise) However, even today we do not guarantee that behavior. If someone
>were to propose a new driver or a change driver capability or any thing of such
>order, images in status killed won't be guaranteed to have removed the garbage
>data. The driver may not choose to be resilient enough or would not take the
>responsibility of data removal synchronously on failures.

I think it's glance's responsibility to make sure the driver deletes
the image data. If the API is not strong enough to guarantee this,
then we should change that.

>Taking that fact in account, I have thought of Brianna's patch to be okay.

Oh sure, I'm not trying to say it was a wrong choice. Sorry if it
sounded like that. I was replying to the thought of extending scrubber
(unless there's a patch that does this that I might have missed).

Cheers,
Flavio

>
>On 9/11/15 4:42 AM, Flavio Percoco wrote:
>
>    On 10/09/15 15:36 -0400, Nikhil Komawar wrote:
>
>        The solution to this problem is to improve the scrubber to clean up the
>        garbage data left behind in the backend store during such failed
>        uploads.
>
>        Currently, scrubber cleans up images in pending_delete and extending
>        that to images in killed status would avoid such a situation.
>
>
>    While the above would certainly help, I think it's not the right
>    solution. Images in status "killed" should not have data to begin
>    with.
>
>    I'd rather find a way to clean that data as soon as the image is
>    moved to a "killed" state instead of extending the scrubber.
>
>    Cheers,
>    Flavio
>
>
>        On 9/10/15 3:28 PM, Poulos, Brianna L. wrote:
>
>            Malini,
>
>            Thank you for bringing up the "killed" state as it relates to
>            quota. We opted to move the image to a killed state since that
>            is what occurs when an upload fails, and the signature
>            verification failure would occur during an upload. But we
>            should keep in mind the potential to take up space and yet not
>            take up quota when signature verification fails.
>
>            Regarding the MD5 hash, there is currently a glance spec [1] to
>            allow the hash method used for the checksum to be configurable
>            (currently it is hardcoded in glance). After making it
>            configurable, the default would transition from MD5 to
>            something more secure (like SHA-256).
>
>            [1] https://review.openstack.org/#/c/191542/
>
>            Thanks,
>            ~Brianna
>
>
>
>
>            On 9/10/15, 5:10 , "Bhandaru, Malini K"
>            <malini.k.bhandaru at intel.com>
>            wrote:
>
>
>                Brianna, I can imagine a denial-of-service attack by
>                uploading images whose signature is invalid if we allow
>                them to reside in Glance in a "killed" state. This would be
>                less of an issue if "killed" images still consume storage
>                quota until actually deleted.
>                Also, given MD5 is less secure, why not have the default
>                hash be SHA-1 or SHA-2?
>                Regards
>                Malini
>
>                -----Original Message-----
>                From: Poulos, Brianna L. [mailto:Brianna.Poulos at jhuapl.edu]
>                Sent: Wednesday, September 09, 2015 9:54 AM
>                To: OpenStack Development Mailing List (not for usage
>                questions)
>                Cc: stuart.mclaren at hp.com
>                Subject: Re: [openstack-dev] [glance] [nova] Verification of
>                glance
>                images before boot
>
>                Stuart is right about what will currently happen in Nova when
>                an image is
>                downloaded, which protects against unintentional modifications
>                to the
>                image data.
>
>                What is currently being worked on is adding the ability to
>                verify a
>                signature of the checksum. The flow of this is as follows:
>                1. The user creates a signature of the "checksum hash"
>                (currently MD5) of
>                the image data offline.
>                2. The user uploads a public key certificate, which can be used
>                to verify
>                the signature to a key manager (currently Barbican).
>                3. The user creates an image in glance, with signature metadata
>                properties.
>                4. The user uploads the image data to glance.
>                5. If the signature metadata properties exist, glance verifies
>                the
>                signature of the "checksum hash", including retrieving the
>                certificate
>
>            >from the key manager.
>
>                6. If the signature verification fails, glance moves the image
>                to a
>                killed state, and returns an error message to the user.
>                7. If the signature verification succeeds, a log message
>                indicates that
>                it succeeded, and the image upload finishes successfully.
>
>                8. Nova requests the image from glance, along with the image
>                properties,
>                in order to boot it.
>                9. Nova uses the signature metadata properties to verify the
>                signature
>                (if a configuration option is set).
>                10. If the signature verification fails, nova does not boot the
>                image,
>                but errors out.
>                11. If the signature verification succeeds, nova boots the
>                image, and a
>                log message notes that the verification succeeded.
>
>                Regarding what is currently in Liberty, the blueprint mentioned
>                [1] has
>                merged, and code [2] has also been merged in glance, which
>                handles steps
>                1-7 of the flow above.
>
>                For steps 7-11, there is currently a nova blueprint [3], along
>                with code
>                [4], which are proposed for Mitaka.
>
>                Note that we are in the process of adding official
>                documentation, with
>                examples of creating the signature as well as the properties
>                that need to
>                be added for the image before upload. In the meantime, there's
>                an
>                etherpad that describes how to test the signature verification
>                functionality in Glance [5].
>
>                Also note that this is the initial approach, and there are some
>                limitations. For example, ideally the signature would be
>                based on a cryptographically secure (i.e. not MD5) hash
>                of the image. There is a
>                spec in glance to allow this hash to be configurable [6].
>
>                [1] https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>                [2] https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e
>                [3] https://review.openstack.org/#/c/188874/
>                [4] https://review.openstack.org/#/c/189843/
>                [5] https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>                [6] https://review.openstack.org/#/c/191542/
>
>
>                Thanks,
>                ~Brianna
>
>
>
>
>                On 9/9/15, 12:16 , "Nikhil Komawar" <nik.komawar at gmail.com>
>                wrote:
>
>
>                    That's correct.
>
>                    The size and the checksum are to be verified outside of
>                    Glance, in this
>                    case Nova. However, you may want to note that it's not
>                    necessary that
>                    all Nova virt drivers would use py-glanceclient so you
>                    would want to
>                    check the download specific code in the virt driver your
>                    Nova
>                    deployment is using.
>
>                    Having said that, essentially the flow seems appropriate.
>                    An error must be raised on mismatch.
>
>                    The signing BP was to help prevent the compromised Glance
>                    from changing
>                    the checksum and image blob at the same time. Using a
>                    digital
>                    signature, you can prevent download of compromised data.
>                    However, the
>                    feature has just been implemented in Glance; Glance users
>                    may take time
>                    to adopt.
>
>
>
>                    On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>
>                        The glance client (running 'inside' the Nova server)
>                        will
>                        re-calculate the checksum as it downloads the image and
>                        then compare
>                        it against the expected value. If they don't match an
>                        error will be
>                        raised.
>
>
>                            How can I know that the image that a new instance
>                            is spawned from -
>                            is actually the image that was originally
>                            registered in glance - and
>                            has not been maliciously tampered with in some way?
>
>                            Is there some kind of verification that is
>                            performed against the
>                            md5sum of the registered image in glance before a
>                            new instance is
>                            spawned?
>
>                            Is that done by Nova?
>                            Glance?
>                            Both? Neither?
>
>                            The reason I ask is some 'paranoid' security (that
>                            is their job I
>                            suppose) people have raised these questions.
>
>                            I know there is a glance BP already merged for L
>                            [1] - but I would
>                            like to understand the actual flow in a bit more
>                            detail.
>
>                            Thanks.
>
>                            [1]
>
>                            https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>
>
>                            --
>                            Best Regards,
>                            Maish Saidel-Keesing
>
>
>
>                            ------------------------------
>
>                            _______________________________________________
>                            OpenStack-dev mailing list
>                            OpenStack-dev at lists.openstack.org
>                            http://lists.openstack.org/cgi-bin/mailman/listinfo
>                            /openstack-dev
>
>
>                            End of OpenStack-dev Digest, Vol 41, Issue 22
>                            *********************************************
>
>
>
>                        ______________________________________________________________________
>
>                        ___
>                        _
>
>                        OpenStack Development Mailing List (not for usage
>                        questions)
>                        Unsubscribe:
>                        OpenStack-dev-request at lists.openstack.org?
>                        subject:unsubscribe
>                        http://lists.openstack.org/cgi-bin/mailman/listinfo/
>                        openstack-dev
>
>                    --
>
>                    Thanks,
>                    Nikhil
>
>
>                    _______________________________________________________________________
>
>                    ___ OpenStack Development Mailing List (not for usage
>                    questions)
>                    Unsubscribe:
>                    OpenStack-dev-request at lists.openstack.org?
>                    subject:unsubscribe
>                    http://lists.openstack.org/cgi-bin/mailman/listinfo/
>                    openstack-dev
>
>
>                __________________________________________________________________________
>
>                OpenStack Development Mailing List (not for usage questions)
>                Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>                subject:unsubscribe
>                http://lists.openstack.org/cgi-bin/mailman/listinfo/
>                openstack-dev
>
>                __________________________________________________________________________
>
>                OpenStack Development Mailing List (not for usage questions)
>                Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>                subject:unsubscribe
>                http://lists.openstack.org/cgi-bin/mailman/listinfo/
>                openstack-dev
>
>
>            __________________________________________________________________________
>
>            OpenStack Development Mailing List (not for usage questions)
>            Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>            subject:unsubscribe
>            http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>        --
>
>        Thanks,
>        Nikhil
>
>
>        __________________________________________________________________________
>
>        OpenStack Development Mailing List (not for usage questions)
>        Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>        subject:unsubscribe
>        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>   
>    __________________________________________________________________________
>    OpenStack Development Mailing List (not for usage questions)
>    Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>--
>
>Thanks,
>Nikhil
>

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/e4a28d87/attachment.pgp>
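[Editorial note: the signature-over-checksum flow described in steps 1-11 of the quoted thread can be sketched roughly as follows, using the cryptography library. This is an illustration only -- key handling via Barbican, the exact padding scheme, and the glance/nova property names belong to the implementation in the referenced reviews and may differ.]

```python
# Sketch of signing and verifying the image "checksum hash".
# Offline, the image owner signs the checksum of the image data
# (MD5 in the initial implementation; configurable per the glance spec).
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
image_data = b"...image bytes..."
checksum = hashlib.md5(image_data).hexdigest().encode()

signature = private_key.sign(
    checksum,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# On upload (glance) or before boot (nova), the service recomputes the
# checksum from the data it received and verifies the signature using
# the public key from the uploaded certificate.
public_key = private_key.public_key()
public_key.verify(  # raises InvalidSignature on mismatch
    signature,
    hashlib.md5(image_data).hexdigest().encode(),
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```

Because verification recomputes the checksum from the received bytes, a compromised store cannot swap both the blob and the checksum without also forging the signature, which is the property the blueprint is after.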

From jpeeler at redhat.com  Fri Sep 11 14:47:51 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Fri, 11 Sep 2015 10:47:51 -0400
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <20150911134458.GK21846@jimrollenhagen.com>
References: <55F29727.7070800@redhat.com>
 <20150911134458.GK21846@jimrollenhagen.com>
Message-ID: <CALesnTybR_fASxPPvNWKYWAg8xDzCzVEzLTrLL15J_VF1YEdcA@mail.gmail.com>

On Fri, Sep 11, 2015 at 9:44 AM, Jim Rollenhagen <jim at jimrollenhagen.com>
wrote:

> On Fri, Sep 11, 2015 at 10:56:07AM +0200, Dmitry Tantsur wrote:
> > Hi all!
> >
> > Our install guide is huge, and I've just approved even more text for it.
> > WDYT about splitting it into "Basic Install Guide", which will contain
> bare
> > minimum for running ironic and deploying instances, and "Advanced Install
> > Guide", which will contain the following things:
> > 1. Using Bare Metal service as a standalone service
> > 2. Enabling the configuration drive (configdrive)
> > 3. Inspection
> > 4. Trusted boot
> > 5. UEFI
> >
> > Opinions?
>
> +1, our guide is impossibly long. I like the idea of splitting out the
> optional bits so folks don't think that it's all required.
>

I agree as well! As somebody who is currently going through the install
guide, I might add that changing the ordering would be nice:
specifically, putting the image requirements and flavor creation last,
since they sort of interrupt the rest of the server setup instructions.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/eedab28c/attachment.html>

From nik.komawar at gmail.com  Fri Sep 11 14:49:25 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 11 Sep 2015 10:49:25 -0400
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <20150911144023.GD8182@redhat.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com> <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
 <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu> <55F1DBC6.2000904@gmail.com>
 <20150911084236.GR6373@redhat.com> <55F2C1FE.6080504@gmail.com>
 <20150911144023.GD8182@redhat.com>
Message-ID: <55F2E9F5.4030007@gmail.com>



On 9/11/15 10:40 AM, Flavio Percoco wrote:
> On 11/09/15 07:58 -0400, Nikhil Komawar wrote:
>> You are right in the sense that's the ideal scenario.
>>
>> (Impl-wise) However, even today we do not guarantee that behavior. If
>> someone
>> were to propose a new driver or a change driver capability or any
>> thing of such
>> order, images in status killed won't be guaranteed to have removed
>> the garbage
>> data. The driver may not choose to be resilient enough or would not
>> take the
>> responsibility of data removal synchronously on failures.
>
> I think it's glance's responsibility to make sure the driver deletes
> the image data. If the API is not strong enough to guarantee this,
> then we should change that.
>

Sounds like a good direction, but I still think we cannot commit to it.
It really depends on the driver, and guaranteeing this across new
drivers or upgrades to existing ones (if not their capabilities) is not
a good idea. This will result in backward incompatibility in many cases,
and there are enough corner cases (when we think about v1, v2, and
tasks) to make addressing them at the Glance API level hard.

I personally prefer the image-data delete to be asynchronous and
fault-tolerant (on DELETE of the image record). However, even that
cannot match the performance of a scrubber-like service.

So, in certain cases we need to advise running the scrubber if the data
has a tendency to be left behind.
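[Editorial note: for concreteness, a scrubber-style pass over killed images, of the kind discussed above, might look roughly like the sketch below. The db/store interfaces here are invented for illustration and do not match glance's real internal APIs.]

```python
# Rough sketch of a fault-tolerant, asynchronous cleanup pass for data
# left behind by images in "killed" status. Interfaces are hypothetical.
import logging

LOG = logging.getLogger(__name__)


def scrub_killed_images(db, store):
    """Delete leftover data for killed images, tolerating failures."""
    scrubbed = []
    for image in db.image_get_all(filters={"status": "killed"}):
        ok = True
        for location in image.get("locations", []):
            try:
                store.delete(location)
            except Exception:
                # Best effort: log and move on; a later scrubber run
                # retries whatever is still left behind.
                LOG.exception("Failed to scrub image %s", image["id"])
                ok = False
        if ok:
            scrubbed.append(image["id"])
    return scrubbed
```

The per-location try/except is the "fault tolerant" part: one misbehaving driver or backend does not abort the whole pass, which is what makes an out-of-band scrubber more robust than synchronous deletion on the API path.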

>> Taking that fact in account, I have thought of Brianna's patch to be
>> okay.
>
> Oh sure, I'm not trying to say it was a wrong choice. Sorry if it
> sounded like that. I was replying to the thought of extending scrubber
> (unless there's a patch that does this that I might have missed).
>
> Cheers,
> Flavio
>
>>
>> On 9/11/15 4:42 AM, Flavio Percoco wrote:
>>
>>    On 10/09/15 15:36 -0400, Nikhil Komawar wrote:
>>
>>        The solution to this problem is to improve the scrubber to
>> clean up the
>>        garbage data left behind in the backend store during such failed
>>        uploads.
>>
>>        Currently, scrubber cleans up images in pending_delete and
>> extending
>>        that to images in killed status would avoid such a situation.
>>
>>
>>    While the above would certainly help, I think it's not the right
>>    solution. Images in status "killed" should not have data to begin
>>    with.
>>
>>    I'd rather find a way to clean that data as soon as the image is
>>    moved to a "killed" state instead of extending the scrubber.
>>
>>    Cheers,
>>    Flavio
>>
>>
>>        On 9/10/15 3:28 PM, Poulos, Brianna L. wrote:
>>
>>            Malini,
>>
>>            Thank you for bringing up the "killed" state as it relates
>>            to quota. We opted to move the image to a killed state since
>>            that is what occurs when an upload fails, and the signature
>>            verification failure would occur during an upload. But we
>>            should keep in mind the potential to take up space and yet
>>            not take up quota when signature verification fails.
>>
>>            Regarding the MD5 hash, there is currently a glance spec [1]
>>            to allow the hash method used for the checksum to be
>>            configurable (currently it is hardcoded in glance). After
>>            making it configurable, the default would transition from
>>            MD5 to something more secure (like SHA-256).
>>
>>            [1] https://review.openstack.org/#/c/191542/
>>
>>            Thanks,
>>            ~Brianna
>>
>>
>>
>>
>>            On 9/10/15, 5:10 , "Bhandaru, Malini K"
>>            <malini.k.bhandaru at intel.com>
>>            wrote:
>>
>>
>>                Brianna, I can imagine a denial-of-service attack by
>>                uploading images whose signature is invalid if we allow
>>                them to reside in Glance in a "killed" state. This would
>>                be less of an issue if "killed" images still consume
>>                storage quota until actually deleted.
>>                Also, given MD5 is less secure, why not have the default
>>                hash be SHA-1 or SHA-2?
>>                Regards
>>                Malini
>>
>>                -----Original Message-----
>>                From: Poulos, Brianna L.
>> [mailto:Brianna.Poulos at jhuapl.edu]
>>                Sent: Wednesday, September 09, 2015 9:54 AM
>>                To: OpenStack Development Mailing List (not for usage
>>                questions)
>>                Cc: stuart.mclaren at hp.com
>>                Subject: Re: [openstack-dev] [glance] [nova]
>> Verification of
>>                glance
>>                images before boot
>>
>>                Stuart is right about what will currently happen in
>> Nova when
>>                an image is
>>                downloaded, which protects against unintentional
>> modifications
>>                to the
>>                image data.
>>
>>                What is currently being worked on is adding the
>> ability to
>>                verify a
>>                signature of the checksum.  The flow of this is as
>> follows:
>>                1. The user creates a signature of the "checksum hash"
>>                (currently MD5) of
>>                the image data offline.
>>                2. The user uploads a public key certificate, which
>> can be used
>>                to verify
>>                the signature to a key manager (currently Barbican).
>>                3. The user creates an image in glance, with signature
>> metadata
>>                properties.
>>                4. The user uploads the image data to glance.
>>                5. If the signature metadata properties exist, glance
>> verifies
>>                the
>>                signature of the "checksum hash", including retrieving
>> the
>>                certificate
>>
>>            >from the key manager.
>>
>>                6. If the signature verification fails, glance moves
>> the image
>>                to a
>>                killed state, and returns an error message to the user.
>>                7. If the signature verification succeeds, a log message
>>                indicates that
>>                it succeeded, and the image upload finishes successfully.
>>
>>                8. Nova requests the image from glance, along with the
>> image
>>                properties,
>>                in order to boot it.
>>                9. Nova uses the signature metadata properties to
>> verify the
>>                signature
>>                (if a configuration option is set).
>>                10. If the signature verification fails, nova does not
>> boot the
>>                image,
>>                but errors out.
>>                11. If the signature verification succeeds, nova boots
>> the
>>                image, and a
>>                log message notes that the verification succeeded.
>>
>>                Regarding what is currently in Liberty, the blueprint
>> mentioned
>>                [1] has
>>                merged, and code [2] has also been merged in glance,
>> which
>>                handles steps
>>                1-7 of the flow above.
>>
>>                For steps 7-11, there is currently a nova blueprint
>> [3], along
>>                with code
>>                [4], which are proposed for Mitaka.
>>
>>                Note that we are in the process of adding official
>>                documentation, with
>>                examples of creating the signature as well as the
>> properties
>>                that need to
>>                be added for the image before upload.  In the
>> meantime, there's
>>                an
>>                etherpad that describes how to test the signature
>> verification
>>                functionality in Glance [5].
>>
>>                Also note that this is the initial approach, and there
>> are some
>>                limitations.  For example, ideally the signature would
>> be based
>>                on a
>>                cryptographically secure (i.e. not MD5) hash of the
>> image. 
>>                There is a
>>                spec in glance to allow this hash to be configurable [6].
>>
>>                [1] https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>                [2] https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e
>>                [3] https://review.openstack.org/#/c/188874/
>>                [4] https://review.openstack.org/#/c/189843/
>>                [5] https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
>>                [6] https://review.openstack.org/#/c/191542/
>>
>>
>>                Thanks,
>>                ~Brianna
>>
>>
>>
>>
>>                On 9/9/15, 12:16 , "Nikhil Komawar"
>> <nik.komawar at gmail.com>
>>                wrote:
>>
>>
>>                    That's correct.
>>
>>                    The size and the checksum are to be verified
>> outside of
>>                    Glance, in this
>>                    case Nova. However, you may want to note that it's
>> not
>>                    necessary that
>>                    all Nova virt drivers would use py-glanceclient so
>> you
>>                    would want to
>>                    check the download specific code in the virt
>> driver your
>>                    Nova
>>                    deployment is using.
>>
>>                    Having said that, essentially the flow seems
>> appropriate.
>>                    Error must be
>>                    raised on mismatch.
>>
>>                    The signing BP was to help prevent the compromised
>> Glance
>>                    from changing
>>                    the checksum and image blob at the same time. Using a
>>                    digital
>>                    signature, you can prevent download of compromised
>> data.
>>                    However, the
>>                    feature has just been implemented in Glance;
>> Glance users
>>                    may take time
>>                    to adopt.
>>
>>
>>
>>                    On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>
>>                        The glance client (running 'inside' the Nova
>> server)
>>                        will
>>                        re-calculate the checksum as it downloads the
>> image and
>>                        then compare
>>                        it against the expected value. If they don't
>> match an
>>                        error will be
>>                        raised.
>>
>>
>>                            How can I know that the image that a new
>> instance
>>                            is spawned from -
>>                            is actually the image that was originally
>>                            registered in glance - and
>>                            has not been maliciously tampered with in
>> some way?
>>
>>                            Is there some kind of verification that is
>>                            performed against the
>>                            md5sum of the registered image in glance
>> before a
>>                            new instance is
>>                            spawned?
>>
>>                            Is that done by Nova?
>>                            Glance?
>>                            Both? Neither?
>>
>>                            The reason I ask is some 'paranoid'
>> security (that
>>                            is their job I
>>                            suppose) people have raised these questions.
>>
>>                            I know there is a glance BP already merged
>> for L
>>                            [1] - but I would
>>                            like to understand the actual flow in a
>> bit more
>>                            detail.
>>
>>                            Thanks.
>>
>>                            [1]
>>
>>                           
>> https://blueprints.launchpad.net/glance/+spec/
>>                            image-signing-and-verif
>>                            ica
>>                            tion-support
>>
>>
>>                            --
>>                            Best Regards,
>>                            Maish Saidel-Keesing
>>
>>
>>
>>                            ------------------------------
>>
>>                           
>> _______________________________________________
>>                            OpenStack-dev mailing list
>>                            OpenStack-dev at lists.openstack.org
>>                           
>> http://lists.openstack.org/cgi-bin/mailman/listinfo
>>                            /openstack-dev
>>
>>
>>                            End of OpenStack-dev Digest, Vol 41, Issue 22
>>                            *********************************************
>>
>>
>>
>>                       
>> ______________________________________________________________________
>>
>>                        ___
>>                        _
>>
>>                        OpenStack Development Mailing List (not for usage
>>                        questions)
>>                        Unsubscribe:
>>                        OpenStack-dev-request at lists.openstack.org?
>>                        subject:unsubscribe
>>                       
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/
>>                        openstack-dev
>>
>>                    --
>>
>>                    Thanks,
>>                    Nikhil
>>
>>
>>                   
>> _______________________________________________________________________
>>
>>                    ___ OpenStack Development Mailing List (not for usage
>>                    questions)
>>                    Unsubscribe:
>>                    OpenStack-dev-request at lists.openstack.org?
>>                    subject:unsubscribe
>>                    http://lists.openstack.org/cgi-bin/mailman/listinfo/
>>                    openstack-dev
>>
>>
>>               
>> __________________________________________________________________________
>>
>>                OpenStack Development Mailing List (not for usage
>> questions)
>>                Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>>                subject:unsubscribe
>>                http://lists.openstack.org/cgi-bin/mailman/listinfo/
>>                openstack-dev
>>
>>
>>           
>> __________________________________________________________________________
>>
>>            OpenStack Development Mailing List (not for usage questions)
>>            Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>>            subject:unsubscribe
>>           
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>        -- 
>>
>>        Thanks,
>>        Nikhil
>>
>>
>>       
>> __________________________________________________________________________
>>
>>        OpenStack Development Mailing List (not for usage questions)
>>        Unsubscribe: OpenStack-dev-request at lists.openstack.org?
>>        subject:unsubscribe
>>        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>     
>> __________________________________________________________________________
>>    OpenStack Development Mailing List (not for usage questions)
>>    Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> -- 
>>
>> Thanks,
>> Nikhil
>>
>

-- 

Thanks,
Nikhil



From morgan.fainberg at gmail.com  Fri Sep 11 14:52:05 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 11 Sep 2015 07:52:05 -0700
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F2BA40.5010108@redhat.com>
References: <55F27CB7.2040101@redhat.com> <55F2AA41.9070206@kent.ac.uk>
 <55F2BA40.5010108@redhat.com>
Message-ID: <CAGnj6as7_LKmLV=7H_U+mns8KSkCBfvCpJoqKy-3NbCUzW4rSA@mail.gmail.com>

On Fri, Sep 11, 2015 at 4:25 AM, Gilles Dubreuil <gilles at redhat.com> wrote:

>
>
> On 11/09/15 20:17, David Chadwick wrote:
> > Whichever approach is adopted you need to consider the future and the
> > longer term objective of moving to fully hierarchical names. I believe
> > the current Keystone approach is only an interim one, as it only
> > supports partial hierarchies. Fully hierarchical names have been
> > discussed in the Keystone group, but I believe that this has been
> > shelved until later in order to get a quick fix released now.
> >
> > regards
> >
> > David
> >
>
> Thanks David,
>
> That's interesting.
> So sub projects are pushing the issue further down.
> And maybe one day sub domains and sub users?
>
> keystone_role_user {
> 'user.subuser::domain1 at project.subproject.subsubproject::domain2':
> roles => [...]
> }
>
> or
>
> keystone_role_user {'user.subuser':
>   user_domain => 'domain1',
>   tenant => 'project.subproject',
>   tenant_domain => 'domain2',
>   roles => [...]
> }
>
> I tend to think the domain must stick with the name it's associated
> with, otherwise we have to say 'here the domain for this and that, etc'.
>
>
>
> > On 11/09/2015 08:03, Gilles Dubreuil wrote:
> >> Hi,
> >>
> >> Today in the #openstack-puppet channel, a discussion about the pros and
> >> cons of using a domain parameter for Keystone V3 was left open.
> >>
> >> The context
> >> ------------
> >> Domain names are needed in Openstack Keystone V3 for identifying users
> >> or groups (of users) within different projects (tenant).
> >> Users and groups are uniquely identified within a domain (or a realm as
> >> opposed to project domains).
> >> Then projects have their own domain so users or groups can be assigned
> >> to them through roles.
> >>
> >> In Kilo, Keystone V3 was introduced as an experimental feature.
> >> Puppet providers such as keystone_tenant, keystone_user, and
> >> keystone_role_user have been adapted to support it.
> >> New ones have also appeared (keystone_domain) or are on their way
> >> (keystone_group, keystone_trust).
> >> And to be backward compatible with V2, the default domain is used when
> >> no domain is provided.
> >>
> >> In existing providers such as keystone_tenant, the domain can be either
> >> part of the name or provided as a parameter:
> >>
> >> A. The 'composite namevar' approach:
> >>
> >>    keystone_tenant {'projectX::domainY': ... }
> >>  B. The 'meaningless name' approach:
> >>
> >>   keystone_tenant {'myproject': name='projectX', domain=>'domainY', ...}
> >>
> >> Notes:
> >>  - Actually, using both combined should work too, with the domain
> >> parameter supposedly overriding the domain part of the name.
> >>  - Please look at [1] for some background on the two approaches.
> >>
> >> The question
> >> -------------
> >> Decide which of the two approaches we would like to retain for
> >> puppet-keystone.
> >>
> >> Why it matters?
> >> ---------------
> >> 1. Domain names are mandatory for every user, group or project, except
> >> during the backward-compatibility period mentioned earlier, where no
> >> domain means using the default one.
> >> 2. Long-term impact.
> >> 3. The two approaches are not completely equivalent, with different
> >> consequences for future usage.
> >> 4. Being consistent.
> >> 5. Therefore it is for the community to decide.
> >>
> >> The two approaches are not technically equivalent and it also depends
> >> what a user might expect from a resource title.
> >> See some of the examples below.
> >>
> >> Because OpenStack DB tables have IDs to uniquely identify objects, it
> >> can have several objects of a same family with the same name.
> >> This has made things difficult for Puppet resources to guarantee
> >> idem-potency of having unique resources.
> >> In the context of Keystone V3 domain, hopefully this is not the case for
> >> the users, groups or projects but unfortunately this is still the case
> >> for trusts.
> >>
> >> Pros/Cons
> >> ----------
> >> A.
> >>   Pros
> >>     - Easier names
> >>   Cons
> >>     - Titles have no meaning!
> >>     - Cases where 2 or more resources could exists
> >>     - More difficult to debug
> >>     - Titles mismatch when listing the resources (self.instances)
> >>
> >> B.
> >>   Pros
> >>     - Unique titles guaranteed
> >>     - No ambiguity between resource found and their title
> >>   Cons
> >>     - More complicated titles
> >>
> >> Examples
> >> ----------
> >> = Meaningless name example 1=
> >> Puppet run:
> >>   keystone_tenant {'myproject': name='project_A', domain=>'domain_1',
> ...}
> >>
> >> Second run:
> >>   keystone_tenant {'myproject': name='project_A', domain=>'domain_2',
> ...}
> >>
> >> Result/Listing:
> >>
> >>   keystone_tenant { 'project_A::domain_1':
> >>     ensure  => 'present',
> >>     domain  => 'domain_1',
> >>     enabled => 'true',
> >>     id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
> >>   }
> >>    keystone_tenant { 'project_A::domain_2':
> >>     ensure  => 'present',
> >>     domain  => 'domain_2',
> >>     enabled => 'true',
> >>     id      => '4b8255591949484781da5d86f2c47be7',
> >>   }
> >>
> >> = Composite name example 1  =
> >> Puppet run:
> >>   keystone_tenant {'project_A::domain_1', ...}
> >>
> >> Second run:
> >>   keystone_tenant {'project_A::domain_2', ...}
> >>
> >> # Result/Listing
> >>   keystone_tenant { 'project_A::domain_1':
> >>     ensure  => 'present',
> >>     domain  => 'domain_1',
> >>     enabled => 'true',
> >>     id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
> >>    }
> >>   keystone_tenant { 'project_A::domain_2':
> >>     ensure  => 'present',
> >>     domain  => 'domain_2',
> >>     enabled => 'true',
> >>     id      => '4b8255591949484781da5d86f2c47be7',
> >>    }
> >>
> >> = Meaningless name example 2  =
> >> Puppet run:
> >>   keystone_tenant {'myproject1': name='project_A', domain=>'domain_1',
> ...}
> >>   keystone_tenant {'myproject2': name='project_A', domain=>'domain_1',
> >> description=>'blah'...}
> >>
> >> Result: project_A in domain_1 has a description
> >>
> >> = Composite name example 2  =
> >> Puppet run:
> >>   keystone_tenant {'project_A::domain_1', ...}
> >>   keystone_tenant {'project_A::domain_1', description => 'blah', ...}
> >>
> >> Result: Error because the resource must be unique within a catalog
> >>
> >> My vote
> >> --------
> >> I would love to have approach A for its easier names.
> >> But I've seen the challenge of maintaining the providers behind the
> >> curtains, and the confusion it creates between names/titles when we're
> >> not sure which domain we're dealing with.
> >> Also, I believe that supporting self.instances consistently with
> >> meaningful names is saner.
> >> Therefore I vote B.
> >>
> >> Finally
> >> ------
> >> Thanks for reading that far!
> >> To choose, please provide feedback with more pros/cons, examples and
> >> your vote.
> >>
> >> Thanks,
> >> Gilles
>

Please keep in mind that there are no "reserved" characters in project
and/or domain names. It is possible to have "::" or ":" (or any other
random characters) in both, which could make the composite namevar less
desirable in some cases. I expect this to be an edge case (as using ::
in a domain and/or project in a place where it would impact the split
would be somewhat odd); it can likely also be stated that using puppet
requires avoiding '::' or ':' in this manner.
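To illustrate the ambiguity described above, here is a minimal Python sketch (purely illustrative; the real providers are Puppet/Ruby code, and the helper name is invented) of splitting a composite 'name::domain' title:

```python
def split_namevar(title):
    # Split on the LAST '::' to recover (name, domain); hypothetical
    # helper, not the actual puppet-keystone implementation.
    name, sep, domain = title.rpartition("::")
    if not sep:
        return title, None  # no domain given; fall back to the default
    return name, domain

# The intended case is unambiguous:
assert split_namevar("project_A::domain_1") == ("project_A", "domain_1")
# But a project literally named 'a::b' with no explicit domain is
# misread as project 'a' in domain 'b':
assert split_namevar("a::b") == ("a", "b")
```

With approach B the domain arrives as its own parameter, so no such parsing is needed at all.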

Personally, I am always a fan of explicit variables. They are "more
complex", but they are also explicit, which simply eliminates the ambiguity.

--Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/c105d5fa/attachment.html>

From nik.komawar at gmail.com  Fri Sep 11 14:53:17 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 11 Sep 2015 10:53:17 -0400
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <20150911142945.GC8182@redhat.com>
References: <55F2E3F9.1000907@gmail.com> <20150911142945.GC8182@redhat.com>
Message-ID: <55F2EADD.1010607@gmail.com>

Something I missed adding before was:

The Glance community would like to extend a fast-track re-addition to
the group to any of these (or other former) core members who wish to
rejoin the development community and become active again in the
foreseeable future.

On 9/11/15 10:29 AM, Flavio Percoco wrote:
> On 11/09/15 10:23 -0400, Nikhil Komawar wrote:
>> Hi,
>>
>> I would like to propose the following removals from glance-core based on
>> the simple criterion of inactivity/limited activity for a long period (2
>> cycles or more) of time:
>>
>> Alex Meade
>> Arnaud Legendre
>> Mark Washenberger
>> Iccha Sethi
>> Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)
>
> +1 from me. Glad we're finally doing this.
>
> I'd like to thank them all for having contributed so much to Glance
> and I hope to see them back someday.
>
> Flavio
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/dc8ef714/attachment.html>

From Bruno.Cornec at hp.com  Fri Sep 11 14:57:39 2015
From: Bruno.Cornec at hp.com (Bruno Cornec)
Date: Fri, 11 Sep 2015 16:57:39 +0200
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <55F29727.7070800@redhat.com>
References: <55F29727.7070800@redhat.com>
Message-ID: <20150911145739.GA6187@morley.fra.hp.com>

Hello,

Dmitry Tantsur said on Fri, Sep 11, 2015 at 10:56:07AM +0200:
>Our install guide is huge, and I've just approved even more text for
>it. WDYT about splitting it into a "Basic Install Guide", which will
>contain the bare minimum for running ironic and deploying instances, and
>an "Advanced Install Guide", which will cover the following things:
>1. Using Bare Metal service as a standalone service
>2. Enabling the configuration drive (configdrive)
>3. Inspection
>4. Trusted boot
>5. UEFI

As a recent reader, I'd like to keep the UEFI part in the main doc, as
more and more servers will be UEFI by default. The rest seems good to go
in a separate one.

Bruno.
-- 
Open Source Profession, Linux Community Lead WW  http://opensource.hp.com
HP EMEA EG Open Source Technology Strategist         http://hpintelco.net
FLOSS projects:     http://mondorescue.org     http://project-builder.org 
Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org


From anteaya at anteaya.info  Fri Sep 11 15:09:49 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Fri, 11 Sep 2015 11:09:49 -0400
Subject: [openstack-dev] [OpenStack-Infra] Gerrit downtime on Friday
 2015-09-11 at 23:00 UTC
In-Reply-To: <55F2EE1E.4050309@anteaya.info>
References: <87pp287jze.fsf@meyer.lemoncheese.net>
 <55F2EE1E.4050309@anteaya.info>
Message-ID: <55F2EEBD.5030503@anteaya.info>

My reply all missed a list, sorry.

On 09/11/2015 11:07 AM, Anita Kuno wrote:
> On 08/27/2015 12:51 PM, James E. Blair wrote:
>> On Friday, September 11 at 23:00 UTC Gerrit will be unavailable for
>> about 30 minutes while we rename some projects.
>>
>> Existing reviews, project watches, etc, should all be carried
>> over. Currently, we plan on renaming the following projects:
>>
>>   stackforge/os-ansible-deployment -> openstack/openstack-ansible
>>   stackforge/os-ansible-specs -> openstack/openstack-ansible-specs
>>
>>   stackforge/solum -> openstack/solum
>>   stackforge/python-solumclient -> openstack/python-solumclient
>>   stackforge/solum-specs -> openstack/solum-specs
>>   stackforge/solum-dashboard -> openstack/solum-dashboard
>>   stackforge/solum-infra-guestagent -> openstack/solum-infra-guestagent
>>
>>   stackforge/magnetodb -> openstack/magnetodb
>>   stackforge/python-magnetodbclient -> openstack/python-magnetodbclient
>>   stackforge/magnetodb-specs -> openstack/magnetodb-specs
>>
>>   stackforge/kolla -> openstack/kolla
>>   stackforge/neutron-powervm -> openstack/networking-powervm
>>
>> This list is subject to change.
>>
>> The projects in this list have recently become official OpenStack
>> projects and many of them have been waiting patiently for some time to
>> be moved from stackforge/ to openstack/.  This is likely to be the last
>> of the so-called "big-tent" moves as we plan on retiring the stackforge/
>> namespace and moving most of the remaining projects into openstack/ [1].
>>
>> If you have any questions about the maintenance, please reply here or
>> contact us in #openstack-infra on Freenode.
>>
>> -Jim
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>>
>> _______________________________________________
>> OpenStack-Infra mailing list
>> OpenStack-Infra at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>>
> 
> Reminder that gerrit downtime starts in about 8 hours.
> 
> stackforge/refstack -> openstack/refstack
> stackforge/refstack-client -> openstack/refstack-client
> 
> will also be added to the rename list.
> 
> If you are an author of any of these rename patches it never hurts to be
> standing by in the #openstack-infra irc channel in case of any last
> minute questions.
> 
> For core reviewers of project-config, it would be great if you could
> stop approving project-config changes at 2000 utc ensuring that last
> minute rebases are avoided during the rename workflow. Thanks for your
> understanding.
> 
> As always, if you have any questions we are available in
> #openstack-infra to do our best to answer them.
> 
> Thank you,
> Anita.
> 



From flavio at redhat.com  Fri Sep 11 15:10:19 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 11 Sep 2015 17:10:19 +0200
Subject: [openstack-dev] [glance] [nova] Verification of glance images
 before boot
In-Reply-To: <55F2E9F5.4030007@gmail.com>
References: <alpine.DEB.2.11.1509091614200.15455@tc-unix2.emea.hpqcorp.net>
 <55F05B79.1050508@gmail.com>
 <D215DAE3.40BF7%Brianna.Poulos@jhuapl.edu>
 <EE6FFF4F6C34C84C8C98DD2414EEA47E7B33D128@fmsmsx117.amr.corp.intel.com>
 <D2174C2D.40D45%Brianna.Poulos@jhuapl.edu>
 <55F1DBC6.2000904@gmail.com> <20150911084236.GR6373@redhat.com>
 <55F2C1FE.6080504@gmail.com> <20150911144023.GD8182@redhat.com>
 <55F2E9F5.4030007@gmail.com>
Message-ID: <20150911151019.GE8182@redhat.com>

On 11/09/15 10:49 -0400, Nikhil Komawar wrote:
>
>
>On 9/11/15 10:40 AM, Flavio Percoco wrote:
>> On 11/09/15 07:58 -0400, Nikhil Komawar wrote:
>>> You are right in the sense that's the ideal scenario.
>>>
>>> (Impl-wise) However, even today we do not guarantee that behavior. If
>>> someone were to propose a new driver, change a driver capability, or
>>> anything of that sort, images in status 'killed' wouldn't be
>>> guaranteed to have the garbage data removed. The driver may not
>>> choose to be resilient enough, or may not take responsibility for
>>> removing data synchronously on failures.
>>
>> I think it's glance's responsibility to make sure the driver deletes
>> the image data. If the API is not strong enough to guarantee this,
>> then we should change that.
>>
>
Sounds like a good direction, but I still think we cannot commit to it.
It really depends on the driver, and forcing the introduction of new
drivers or upgrades to existing ones (if not their capabilities) is not
a good idea. This would be backward incompatible in many cases, and
there are enough corner cases, when we think about v1, v2 and tasks, to
make addressing them at the Glance API level hard.

mmh, I guess my reply was confusing again. I meant the glance_store
API and not Glance's API. What I'm trying to say is that just
depending on the scrubber (which not everyone deploys) won't be enough
to fix this issue entirely.

>
>I personally prefer the image-data delete to be asynchronous and fault
>tolerant (on DELETE of image record). However, even that cannot match
>the performance of a scrubber-like service.

Yes, I also prefer the data to be deleted asynchronously when it comes
to not blocking glance-api threads. However, that doesn't make this
step completely fault-tolerant.

What I'm saying is that depending only on an asynchronous deletion,
when there's also the possibility of having a synchronous one, is not
good.

>So, in certain cases we need to advise running the scrubber if the data
>has a tendency to be left behind.

This is where we differ. I understand where you're coming from, but I
think we shouldn't have those cases at all: the fact that there's data
being left behind is the real issue.

Having async deletions is fine and I agree they are useful.
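The trade-off under discussion can be sketched like this (toy code with invented names; real Glance uses glance_store drivers and a separate scrubber service):

```python
images = {}  # image_id -> {"status": ..., "data": ...}

def fail_delete(rec):
    # Simulated flaky driver: the synchronous cleanup attempt fails.
    raise RuntimeError("backend unavailable")

def store_upload(image_id, data, verify_ok, driver_delete):
    images[image_id] = {"status": "active" if verify_ok else "killed",
                        "data": data}
    if not verify_ok:
        try:
            driver_delete(images[image_id])  # preferred: immediate cleanup
            images[image_id]["data"] = None
        except Exception:
            pass  # data left behind; only a scrubber pass will reclaim it

def scrub():
    # Periodic scrubber sweep over killed/pending_delete records.
    for rec in images.values():
        if rec["status"] in ("killed", "pending_delete"):
            rec["data"] = None

store_upload("img1", b"blob", verify_ok=False, driver_delete=fail_delete)
assert images["img1"]["data"] == b"blob"  # garbage left behind
scrub()
assert images["img1"]["data"] is None     # scrubber reclaimed it
```

The point of contention is whether the scrubber sweep is a safety net for driver failures or the only cleanup path.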

Cheers,
Flavio

>
>>> Taking that fact in account, I have thought of Brianna's patch to be
>>> okay.
>>
>> Oh sure, I'm not trying to say it was a wrong choice. Sorry if it
>> sounded like that. I was replying to the thought of extending scrubber
>> (unless there's a patch that does this that I might have missed).
>>
>> Cheers,
>> Flavio
>>
>>>
>>> On 9/11/15 4:42 AM, Flavio Percoco wrote:
>>>
>>>    On 10/09/15 15:36 -0400, Nikhil Komawar wrote:
>>>
>>>        The solution to this problem is to improve the scrubber to
>>> clean up the
>>>        garbage data left behind in the backend store during such failed
>>>        uploads.
>>>
>>>        Currently, scrubber cleans up images in pending_delete and
>>> extending
>>>        that to images in killed status would avoid such a situation.
>>>
>>>
>>>    While the above would certainly help, I think it's not the right
>>>    solution. Images in status "killed" should not have data to begin
>>>    with.
>>>
>>>    I'd rather find a way to clean that data as soon as the image is
>>>    moved to a "killed" state instead of extending the scrubber.
>>>
>>>    Cheers,
>>>    Flavio
>>>
>>>
>>>        On 9/10/15 3:28 PM, Poulos, Brianna L. wrote:
>>>
>>>            Malini,
>>>
>>>            Thank you for bringing up the 'killed' state as it
>>> relates to
>>>            quota.  We
>>>            opted to move the image to a killed state since that is
>>> what occurs
>>>            when
>>>            an upload fails, and the signature verification failure
>>> would occur
>>>            during
>>>            an upload.  But we should keep in mind the potential to
>>> take up
>>>            space and
>>>            yet not take up quota when signature verification fails.
>>>
>>>            Regarding the MD5 hash, there is currently a glance spec
>>> [1] to
>>>            allow the
>>>            hash method used for the checksum to be
>>> configurable; currently it
>>>            is
>>>            hardcoded in glance.  After making it configurable, the
>>> default
>>>            would
>>>            transition from MD5 to something more secure (like SHA-256).
>>>
>>>            [1] https://review.openstack.org/#/c/191542/
>>>
>>>            Thanks,
>>>            ~Brianna
>>>
>>>
>>>
>>>
>>>            On 9/10/15, 5:10 , "Bhandaru, Malini K"
>>>            <malini.k.bhandaru at intel.com>
>>>            wrote:
>>>
>>>
>>>                Brianna, I can imagine a denial of service attack by
>>> uploading
>>>                images
>>>                whose signature is invalid if we allow them to reside
>>> in Glance
>>>                in a "killed" state. This would be less of an issue if
>>> "killed"
>>>                images still
>>>                consumed storage quota until actually deleted.
>>>                Also, given MD5 is less secure, why not have the
>>> default hash be
>>>                SHA-1 or SHA-2?
>>>                Regards
>>>                Malini
>>>
>>>                -----Original Message-----
>>>                From: Poulos, Brianna L.
>>> [mailto:Brianna.Poulos at jhuapl.edu]
>>>                Sent: Wednesday, September 09, 2015 9:54 AM
>>>                To: OpenStack Development Mailing List (not for usage
>>>                questions)
>>>                Cc: stuart.mclaren at hp.com
>>>                Subject: Re: [openstack-dev] [glance] [nova]
>>> Verification of
>>>                glance
>>>                images before boot
>>>
>>>                Stuart is right about what will currently happen in
>>> Nova when
>>>                an image is
>>>                downloaded, which protects against unintentional
>>> modifications
>>>                to the
>>>                image data.
>>>
>>>                What is currently being worked on is adding the
>>> ability to
>>>                verify a
>>>                signature of the checksum.  The flow of this is as
>>> follows:
>>>                1. The user creates a signature of the "checksum hash"
>>>                (currently MD5) of
>>>                the image data offline.
>>>                2. The user uploads a public key certificate, which
>>> can be used
>>>                to verify
>>>                the signature to a key manager (currently Barbican).
>>>                3. The user creates an image in glance, with signature
>>> metadata
>>>                properties.
>>>                4. The user uploads the image data to glance.
>>>                5. If the signature metadata properties exist, glance
>>> verifies
>>>                the
>>>                signature of the "checksum hash", including retrieving
>>> the
>>>                certificate
>>>
>>>                from the key manager.
>>>
>>>                6. If the signature verification fails, glance moves
>>> the image
>>>                to a
>>>                killed state, and returns an error message to the user.
>>>                7. If the signature verification succeeds, a log message
>>>                indicates that
>>>                it succeeded, and the image upload finishes successfully.
>>>
>>>                8. Nova requests the image from glance, along with the
>>> image
>>>                properties,
>>>                in order to boot it.
>>>                9. Nova uses the signature metadata properties to
>>> verify the
>>>                signature
>>>                (if a configuration option is set).
>>>                10. If the signature verification fails, nova does not
>>> boot the
>>>                image,
>>>                but errors out.
>>>                11. If the signature verification succeeds, nova boots
>>> the
>>>                image, and a
>>>                log message notes that the verification succeeded.
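As a rough, self-contained sketch of the verify-a-signature-of-the-checksum idea in the numbered flow above (using an HMAC purely as a stand-in for the real asymmetric signature and Barbican-held certificate; every name here is invented for illustration):

```python
import hashlib
import hmac

def checksum(image_data):
    # Glance's checksum hash is currently hardcoded to MD5; spec [6]
    # proposes making it configurable.
    return hashlib.md5(image_data).hexdigest()

def sign(image_data, key):
    # Stand-in for the user's offline signing of the checksum (step 1).
    msg = checksum(image_data).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(image_data, signature, key):
    # Glance/Nova-side check (steps 5 and 9): recompute and compare.
    return hmac.compare_digest(sign(image_data, key), signature)

key = b"demo-key"
sig = sign(b"image-bytes", key)
assert verify(b"image-bytes", sig, key)          # upload/boot proceeds
assert not verify(b"tampered-bytes", sig, key)   # image goes to 'killed'
```

Note the signature covers the checksum, not the image bytes directly, which is why a stronger-than-MD5 checksum matters.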
>>>
>>>                Regarding what is currently in Liberty, the blueprint
>>> mentioned
>>>                [1] has
>>>                merged, and code [2] has also been merged in glance,
>>> which
>>>                handles steps
>>>                1-7 of the flow above.
>>>
>>>                For steps 7-11, there is currently a nova blueprint
>>> [3], along
>>>                with code
>>>                [4], which are proposed for Mitaka.
>>>
>>>                Note that we are in the process of adding official
>>>                documentation, with
>>>                examples of creating the signature as well as the
>>> properties
>>>                that need to
>>>                be added for the image before upload.  In the
>>> meantime, there's
>>>                an
>>>                etherpad that describes how to test the signature
>>> verification
>>>                functionality in Glance [5].
>>>
>>>                Also note that this is the initial approach, and there
>>> are some
>>>                limitations.  For example, ideally the signature would
>>> be based
>>>                on a
>>>                cryptographically secure (i.e. not MD5) hash of the
>>> image.
>>>                There is a
>>>                spec in glance to allow this hash to be configurable [6].
>>>
>>>                [1]
>>>                https://blueprints.launchpad.net/glance/+spec/
>>>                image-signing-and-verificati
>>>                o
>>>                n-support
>>>                [2]
>>>                https://github.com/openstack/glance/commit/
>>>                484ef1b40b738c87adb203bba6107dd
>>>                b
>>>                4b04ff6e
>>>                [3] https://review.openstack.org/#/c/188874/
>>>                [4] https://review.openstack.org/#/c/189843/
>>>                [5]
>>>                https://etherpad.openstack.org/p/
>>>                liberty-glance-image-signing-instructions
>>>                [6] https://review.openstack.org/#/c/191542/
>>>
>>>
>>>                Thanks,
>>>                ~Brianna
>>>
>>>
>>>
>>>
>>>                On 9/9/15, 12:16 , "Nikhil Komawar"
>>> <nik.komawar at gmail.com>
>>>                wrote:
>>>
>>>
>>>     That's correct.
>>>
>>>     The size and the checksum are to be verified outside of Glance, in
>>>     this case Nova. However, you may want to note that it's not
>>>     necessary that all Nova virt drivers would use py-glanceclient, so
>>>     you would want to check the download-specific code in the virt
>>>     driver your Nova deployment is using.
>>>
>>>     Having said that, essentially the flow seems appropriate. An error
>>>     must be raised on mismatch.
>>>
>>>     The signing BP was to help prevent a compromised Glance from
>>>     changing the checksum and image blob at the same time. Using a
>>>     digital signature, you can prevent download of compromised data.
>>>     However, the feature has just been implemented in Glance; Glance
>>>     users may take time to adopt it.
>>>
>>>
>>>
>>>     On 9/9/15 11:15 AM, stuart.mclaren at hp.com wrote:
>>>
>>>         The glance client (running 'inside' the Nova server) will
>>>         re-calculate the checksum as it downloads the image and then
>>>         compare it against the expected value. If they don't match, an
>>>         error will be raised.
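That flow can be sketched in a few lines of Python (a simplified stand-in, not the actual glanceclient code; the chunked download is simulated with an iterable of byte strings):

```python
import hashlib

def download_and_verify(chunks, expected_md5):
    """Accumulate the image while re-calculating its checksum, then
    compare against the expected value and raise on mismatch."""
    md5 = hashlib.md5()
    data = bytearray()
    for chunk in chunks:
        md5.update(chunk)
        data.extend(chunk)
    if md5.hexdigest() != expected_md5:
        raise IOError("checksum mismatch: possible corruption or tampering")
    return bytes(data)

expected = hashlib.md5(b"image-bytes").hexdigest()
image = download_and_verify([b"image-", b"bytes"], expected)
```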
>>>
>>>
>>>         How can I know that the image that a new instance is spawned
>>>         from is actually the image that was originally registered in
>>>         glance, and has not been maliciously tampered with in some way?
>>>
>>>         Is there some kind of verification that is performed against
>>>         the md5sum of the registered image in glance before a new
>>>         instance is spawned?
>>>
>>>         Is that done by Nova? Glance? Both? Neither?
>>>
>>>         The reason I ask is that some 'paranoid' security people (that
>>>         is their job, I suppose) have raised these questions.
>>>
>>>         I know there is a glance BP already merged for L [1], but I
>>>         would like to understand the actual flow in a bit more detail.
>>>
>>>         Thanks.
>>>
>>>         [1] https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>>
>>>
>>>                            --
>>>                            Best Regards,
>>>                            Maish Saidel-Keesing
>>>
>>>
>>>
>>>         ------------------------------
>>>
>>>         _______________________________________________
>>>         OpenStack-dev mailing list
>>>         OpenStack-dev at lists.openstack.org
>>>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>         End of OpenStack-dev Digest, Vol 41, Issue 22
>>>         *********************************************
>>>
>>>
>>>                    --
>>>
>>>                    Thanks,
>>>                    Nikhil
>>>
>>>
>>>
>>>
>>>
>>
>
>-- 
>
>Thanks,
>Nikhil
>

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/7b6bffa3/attachment.pgp>

From dolph.mathews at gmail.com  Fri Sep 11 15:17:38 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Fri, 11 Sep 2015 10:17:38 -0500
Subject: [openstack-dev] [keystone] creating new users with invalid mail
 addresses possible
In-Reply-To: <A8F46157-ACCC-4E0F-81E6-48B7E2F94C7B@gmail.com>
References: <55F2C86C.5070705@berendt.io>
 <CAO69NdnM7X9O57s6K3r=vSaTko3U+ugsSZ8dFhTjreEAC8f6Jg@mail.gmail.com>
 <CAE6oFcHwkSaHvQrtvRr1wdfGqW-1Rr2NT1y4XxFxxR1DRFy0kA@mail.gmail.com>
 <A8F46157-ACCC-4E0F-81E6-48B7E2F94C7B@gmail.com>
Message-ID: <CAC=h7gW-igGgu8akkg-3BO3db-ioECZvyE5Hy3V16GUtgeB20A@mail.gmail.com>

On Fri, Sep 11, 2015 at 9:29 AM, Morgan Fainberg <morgan.fainberg at gmail.com>
wrote:

> We don't utilize email address for anything. It is not meant to be a
> top-level column. We've had a lot of discussions on this. The main result
> is we decided that Keystone should be getting out of the PII game as much
> as possible.
>
> I am against making email a top-level attribute. Instead we should be
> de-emphasizing adding in email (for PII reasons, as keystone does not have
> a way to securely store them - even as a top-level column) unless email is
> used as a username. As I recall "email address" was meant to be removed
> from most/all of our API examples for these reasons. Unless OpenStack or
> Keystone starts making real use of the email address and needs that PII in
> the keystone store, it doesn't make sense to treat it as a first class
> attribute. Keystone is not a CRM tool.
>
>
+1


> As a side note, I have proposed a way (it needs further work and would be
> a Mitaka target) to add validation to the extra attributes on a
> case-by-case basis for a given deployment. [1]
>
> [1] https://review.openstack.org/#/c/190532/
>
> Sent via mobile
>
> On Sep 11, 2015, at 06:55, Lance Bragstad <lbragstad at gmail.com> wrote:
>
>
>
> On Fri, Sep 11, 2015 at 8:04 AM, David Stanek <dstanek at dstanek.com> wrote:
>
>> On Fri, Sep 11, 2015 at 8:26 AM, Christian Berendt <christian at berendt.io>
>> wrote:
>>
>>> At the moment it is possible to create new users with invalid mail
>>> addresses. I pasted the output of my test at
>>> http://paste.openstack.org/show/456642/. (the listing of invalid mail
>>> addresses is available at
>>> http://codefool.tumblr.com/post/15288874550/list-of-valid-and-invalid-email-addresses
>>> ).
>>>
>>> Is it intended that addresses are not validated?
>>>
>>> Does it make sense to validate addresses (e.g. with
>>> https://github.com/mailgun/flanker)?
>>>
>>
>> I don't know the complete history of this (I'm sure others can chime in
>> later), but since Keystone doesn't use the email address for anything it
>> was never really considered a first class attribute. It is just something
>> we accept and return through the API. It doesn't even have its own column
>> in the database.
>>
>
> Correct, I believe this is the reason why we don't actually tie the email
> address attribute validation into jsonschema [0]. The email address
> attribute is just something that is grouped into the 'extra' attributes of
> a create user request, so it's treated similarly with jsonschema [1]. I
> remember having a few discussions around this with various people, probably
> in code review somewhere [2].
>
> I think jsonschema has built-in support that would allow us to validate
> email addresses [3]. I think that would plug in pretty naturally to what's
> already in keystone.
>
> [0]
> https://github.com/openstack/keystone/blob/aa8dc5c9c529c2678933c9b211b4640600e55e3a/keystone/identity/schema.py#L24-L33
> [1]
> https://github.com/openstack/keystone/blob/aa8dc5c9c529c2678933c9b211b4640600e55e3a/keystone/identity/schema.py#L39
>
> [2] https://review.openstack.org/#/c/132122/6/keystone/identity/schema.py
> [3]
> http://python-jsonschema.readthedocs.org/en/latest/validate/#validating-formats
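For illustration only, here is a minimal validator of the kind such a format check might perform. The deliberately simple regex below is my own stand-in, not Keystone's or jsonschema's actual rule; real address grammar (RFC 5322) is far messier:

```python
import re

# Deliberately minimal: one '@', a non-empty local part, and a domain
# containing a dot. This rejects obvious garbage but is far stricter
# than the full RFC 5322 grammar.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(value):
    """Return True if value superficially resembles an email address."""
    return bool(EMAIL_RE.match(value))
```

Plugging a checker like this into jsonschema's FormatChecker (or relying on its built-in "email" format) would let the existing schemas reject the invalid addresses from the paste above.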
>
>
>
>> I don't like this for a variety of reasons and we do have a bug[1] for
>> fixing this. Last Thursday several of us were discussing making a database
>> column for the email address as part of the fix for that bug.
>>
>> 1. https://bugs.launchpad.net/keystone/+bug/1218682
>>
>> --
>> David
>> blog: http://www.traceback.org
>> twitter: http://twitter.com/dstanek
>> www: http://dstanek.com
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/3d2ee4a1/attachment.html>

From blk at acm.org  Fri Sep 11 15:25:09 2015
From: blk at acm.org (Brant Knudson)
Date: Fri, 11 Sep 2015 10:25:09 -0500
Subject: [openstack-dev] [keystone] creating new users with invalid mail
 addresses possible
In-Reply-To: <55F2C86C.5070705@berendt.io>
References: <55F2C86C.5070705@berendt.io>
Message-ID: <CAHjeE=RVAGnwnE18+78pepyDgrocdcb0-C469iRLYZtTOxULNQ@mail.gmail.com>

On Fri, Sep 11, 2015 at 7:26 AM, Christian Berendt <christian at berendt.io>
wrote:

> At the moment it is possible to create new users with invalid mail
> addresses. I pasted the output of my test at
> http://paste.openstack.org/show/456642/. (the listing of invalid mail
> addresses is available at
> http://codefool.tumblr.com/post/15288874550/list-of-valid-and-invalid-email-addresses
> ).
>
> Is it intended that addresses are not validated?
>
> Does it make sense to validate addresses (e.g. with
> https://github.com/mailgun/flanker)?
>
> Christian.
>
> --
> Christian Berendt
> Cloud Solution Architect
> Mail: berendt at b1-systems.de
>
> B1 Systems GmbH
> Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
> GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

If you back keystone users with ldap and map the keystone email address
property to an attribute that the ldap server validates then you'll get
email validation. (I haven't tried it but that's the theory.)

- Brant
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/4d3d9600/attachment.html>

From sabeen.syed at RACKSPACE.COM  Fri Sep 11 15:46:41 2015
From: sabeen.syed at RACKSPACE.COM (Sabeen Syed)
Date: Fri, 11 Sep 2015 15:46:41 +0000
Subject: [openstack-dev]  [Heat] Integration Test Questions
Message-ID: <D218618E.ADE81%sabeen.syed@rackspace.com>

Hi All,

My coworker and I would like to start filling out some gaps in api coverage that we see in the functional integration tests. We have one patch up for review (https://review.openstack.org/#/c/219025/). We got a comment saying that any new stack creation will prolong the testing cycle. We agree with that and it got us thinking about a few things -

  1.  We are planning on adding tests for the following APIs: event APIs, template APIs, software config APIs, cancel stack update, check stack resources, and show resource data. These are the APIs that we saw aren't covered by the current integration tests. Please let us know if you feel we need tests for these upstream, if we're missing something, or if it's already covered somewhere.
  2.  To conserve the creation of stacks, would it make sense to add one test and then, under that, call sub-methods that run their checks against the same stack? Something like this:

      def _test_template_apis(self, stack_id):
          ...

      def _test_softwareconfig_apis(self, stack_id):
          ...

      def _test_event_apis(self, stack_id):
          ...

      def test_event_template_softwareconfig_apis(self):
          stack_id = self.stack_create(...)
          self._test_template_apis(stack_id)
          self._test_event_apis(stack_id)
          self._test_softwareconfig_apis(stack_id)

  3.  The current tests are divided into two folders - scenario and functional. To help with organization - under the functional folder, would it make sense to add an 'api' folder, 'resource' folder and 'misc' folder? Here is what we're thinking about where each test can be put:
     *   API folder - test_create_update.py, test_preview.py
     *   Resource folder - test_autoscaling.py, test_aws_stack.py, test_conditional_exposure.py, test_create_update_neutron_port.py, test_encryption_vol_type.py, test_heat_autoscaling.py, test_instance_group.py, test_resource_group.py, test_software_config.py, test_swiftsignal_update.py
     *   Misc folder - test_default_parameters.py, test_encrypted_parameter.py, test_hooks.py, test_notifications.py, test_reload_on_sighup.py, test_remote_stack.py, test_stack_tags.py, test_template_resource.py, test_validation.py

  4.  Should we add to our README? For example, I see that we use TestResource as a resource in some of our tests but we don't have an explanation of how to set that up. I'd also like to add explanations about the pre-testhook and post-testhook files: how they work, and what each line does/what test it's attached to.
  5.  For the tests that we're working on, should we be adding a blueprint or task somewhere to let everybody know that we're working on it so there is no overlap?
  6.  From our observations, we think it would be beneficial to add more comments to the existing tests.  For example, we could have a minimum of a short blurb for each method.  Comments?
  7.  Should we add a 'high level coverage' summary in our README?  It could help all of us know at a high level where we are at in terms of which resources we have tests for and which api's, etc.

Let us know what you all think!

Thank you!
Sabeen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/5657e83a/attachment.html>

From lgy181 at foxmail.com  Fri Sep 11 15:46:57 2015
From: lgy181 at foxmail.com (=?ISO-8859-1?B?THVvIEdhbmd5aQ==?=)
Date: Fri, 11 Sep 2015 23:46:57 +0800
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with
	combined resource-id ?
In-Reply-To: <m0a8stp7qt.fsf@danjou.info>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info>
Message-ID: <tencent_21E29496537979275896B7B0@qq.com>

Hi, Julien

I am using the master branch and the newest code for testing.

For the purpose of learning the structure of gnocchi, I changed the default UUID type of mysql from binary to char, so I can easily link the resource id (I mean in the database), the metric id and the directory name used for storing measures.

When I did that, I found that all the metrics whose resource id is combined (here, I mean in Ceilometer, such as instance-xxx-tapxxxx) have no measures stored.

The log in the Ceilometer collector records this:
"
2015-09-11 07:55:55.097 10636 ERROR ceilometer.dispatcher.gnocchi [-] Resource instance-00000001-4641f59e-994c-4255-b0ec-43a276d1c19c-tap8aadb7ad-d7 creation failed with status: 400: <html>
<head>
 <title>400 Bad Request</title>
</head>
<body>
 <h1>400 Bad Request</h1>
 The server could not comply with the request since it is either malformed or otherwise incorrect.<br /><br />
Invalid input: required key not provided @ data['display_name']
 </body>
</html>
"
So I wonder whether gnocchi cannot deal with such combined resource-id metrics, or whether it is because I changed the UUID type, or something else.

And another question is how to query measures for those metrics whose resource id is combined.

For example, I want to query the network traffic of a vm. I know the instance uuid 1111-2222 and the metric name 'network.incoming.byte.rate', but I do not know the exact resource_id and metric id. What procedure should I follow?
------------------
Luo Gangyi   luogangyi at cmss.chinamobile.com



  
  

 

 ------------------ Original ------------------
  From:  "Julien Danjou";<julien at danjou.info>;
 Date:  Fri, Sep 11, 2015 06:31 PM
 To:  "Luo Gangyi"<lgy181 at foxmail.com>; 
 Cc:  "OpenStack Development Mailing L"<openstack-dev at lists.openstack.org>; 
 Subject:  Re: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with combined resource-id ?

 

On Fri, Sep 11 2015, Luo Gangyi wrote:

Hi Luo,

> I find that gnocchi cannot deal with combined resource-ids such as
> instance-xxxxxx-tapxxxxxx or instance-xxxx-vda. I'm not sure whether it is my
> configuration problem or just a bug.

Which version are you testing? The master branch has no support for
resource ID that are not UUID.

> And if such combined resource-id can be processed correctly, what about its
> metadata(or called attributes)? In current design, gnocchi seems treat
> instance-aaa, instance-aaa-tap111, instance-aaa-tap222 as equal although they
> have parent-child relationship and share many attributes.

We just merged support for those resources. We do not store any
attribute other than the name and parent instance AFAICS. What do you
miss as an attribute exactly?

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/f26b7928/attachment.html>

From julien at danjou.info  Fri Sep 11 16:01:46 2015
From: julien at danjou.info (Julien Danjou)
Date: Fri, 11 Sep 2015 18:01:46 +0200
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with
	combined resource-id ?
In-Reply-To: <tencent_21E29496537979275896B7B0@qq.com> (Luo Gangyi's message
 of "Fri, 11 Sep 2015 23:46:57 +0800")
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
Message-ID: <m07fnxndw5.fsf@danjou.info>

On Fri, Sep 11 2015, Luo Gangyi wrote:

Hi Gangyi,

>  I am using master branch and newest code for testing.

Cool.

>  For the purpose for learning the structure of gnocchi, I changed the
>  default UUID type of mysql from binary to char, so I can easily link
>  the resource-id(I mean in database), metric id and directory name of
>  storing measures.

Bah, don't do that; rather use PostgreSQL, it's the recommended
backend. :)

>  When I did that, I found all the metrics where their resource id is
>  combined(here, I mean in Ceilometer, such as instance-xxx-tapxxxx)
>  have no measures stored.


>  Log in Ceilometer collector records this:
>  "
>  2015-09-11 07:55:55.097 10636 ERROR ceilometer.dispatcher.gnocchi [-] Resource instance-00000001-4641f59e-994c-4255-b0ec-43a276d1c19c-tap8aadb7ad-d7 creation failed with status: 400: <html>
>  <head>
>   <title>400 Bad Request</title>
>  </head>
>  <body>
>   <h1>400 Bad Request</h1>
>   The server could not comply with the request since it is either malformed or otherwise incorrect.<br /><br />
> Invalid input: required key not provided @ data['display_name']
>   </body>
> </html>
>
>  "
>  So I wonder whether gnocchi cannot deal with such combined resource-id metrics, or whether it is because I changed the UUID type, or something else.

Yes, it can, but the problem is more likely in the Ceilometer collector
dispatcher code that is sending the data. From the error you have, it
seems it tries to create an instance but it has no value for
display_name, so it is denied by Gnocchi. If this is from a standard
devstack installation, I'd suggest opening a bug on Ceilometer.

>  And another question is how to query measures for those metrics whose
>  resource id is combined.

They are resources in their own right, so if you know their id you can just
access the metrics at /v1/resources/<type>/<id>/metric/<name>/measures.

>  For example, I want to query the network traffic of a vm, I know the
>  instance uuid 1111-2222 and the metric name
>  'network.incoming.byte.rate', but I do not know the exact resource_id
>  and metric id. What procedure should I follow?

You need to know the ID of the resource and then ask for its metric on
the REST interface. If you don't know the ID of the resource, you can
search for it by instance id.
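Concretely, fetching measures then comes down to a GET on a path like the one built below. The resource type and IDs here are made-up placeholders (check the Gnocchi REST docs for the exact type names in your deployment):

```python
# Illustrative only: composing the Gnocchi REST path for a known resource.
resource_type = "instance_network_interface"  # assumed type name
resource_id = "4641f59e-994c-4255-b0ec-43a276d1c19c"  # placeholder id
metric_name = "network.incoming.bytes.rate"

measures_path = "/v1/resources/%s/%s/metric/%s/measures" % (
    resource_type, resource_id, metric_name)
print(measures_path)
```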

But I don't think Ceilometer has this metric posted to Gnocchi yet, the
code is a bit young and not finished on the Ceilometer side. If you
check gnocchi_resources.yaml, it's still marked "ignored" for now.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/84362b3f/attachment.pgp>

From zbitter at redhat.com  Fri Sep 11 16:03:30 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Fri, 11 Sep 2015 12:03:30 -0400
Subject: [openstack-dev] [Heat] Scattered thoughts on the PTL election
Message-ID: <55F2FB52.4060706@redhat.com>

The Heat project pioneered the concept of rotating the PTL for every 
development cycle, and so far all of the early (before 2013) developers 
who are still involved have served as PTL. I think this has been a 
*tremendous* success for the project, and a testament to the sheer depth 
of leadership talent that we are fortunate to have (as well as, it must 
be said, to Thierry and the release management team and their ability to 
quickly bring new people up to speed every cycle). We're already seeing 
a lot of other projects moving toward the PTL role having a shorter time 
horizon, and I suspect the main reason they're not moving more rapidly 
in that direction is that it takes time to build up the expectation of 
rotating succession and make sure that the leaders within each project 
are preparing to take their turn. So I like to think that we've been a 
good influence on the whole OpenStack community in this respect :)

(I'd also note that this expectation is very helpful in terms of 
spreading the workload around and reducing the amount of work that falls 
on a single person. To the extent that it is possible to be the PTL of 
the Heat project and still get some real work done, not just clicking on 
things in Launchpad - though, be warned, there is still quite a bit of 
that involved.)

However, there is one area in which we have not yet been as successful: 
so far all of the PTLs have been folks that were early developers of the 
project. IMHO it's time for that to change: we have built an amazing 
team of folks since then who are great leaders in the community and who 
now have the experience to step up. I can think of at least 4 excellent 
potential candidates just off the top of my head.

Obviously there is a time commitment involved - in fact Flavio's entire 
blog post[1] is great and you should definitely read that first - but if 
you are already devoting a majority of your time to the upstream Heat 
project and you think this is likely to be sustainable for the next 6 
months, then please run for PTL!

(You may safely infer from this that I won't be running this time.)

cheers,
Zane.

[1] http://blog.flaper87.com/post/something-about-being-a-ptl/


From kevin.mitchell at rackspace.com  Fri Sep 11 16:07:35 2015
From: kevin.mitchell at rackspace.com (Kevin L. Mitchell)
Date: Fri, 11 Sep 2015 11:07:35 -0500
Subject: [openstack-dev] [nova] [api] Nova currently handles list with
 limit=0 quite different for different objects.
In-Reply-To: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>
References: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>
Message-ID: <1441987655.14645.36.camel@einstein.kev>

On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
> Hi, I found out that nova currently handles list with limit=0 quite
> different for different objects.
> 
> Especially when list servers:
> 
> According to the code:
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
> 
> when limit = 0, it should apply as max_limit, but currently, in:
> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
> 
> we directly return [], which is quite different from the comment in the
> api code.
> 
> 
> I checked other objects:
> 
> when list security groups and server groups, it will return as no
> limit has been set. And for flavors it returns []. I will continue to
> try out other APIs if needed.
> 
> I think maybe we should make a rule for all objects, at least fix the
> servers so the api and db code behave the same.
> 
> I have reported a bug in launchpad:
> 
> https://bugs.launchpad.net/nova/+bug/1494617
> 
> 
> Any suggestions?

After seeing the test failures that showed up on your proposed fix, I'm
thinking that the proposed change reads like an API change, requiring a
microversion bump.  That said, I approve of increased consistency across
the API, and perhaps the behavior on limit=0 is something the API group
needs to discuss a guideline for?
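The intended semantics from common.py are easy to state in a sketch (a hypothetical helper, not Nova's actual code): limit=0 should fall back to the configured maximum rather than short-circuit to an empty list, which is where the db layer currently diverges.

```python
def effective_limit(requested, max_limit=1000):
    """Hypothetical helper mirroring the documented API behavior:
    a limit of 0 (or no limit at all) means 'use the configured
    maximum', never 'return an empty list'."""
    if requested is None:
        return max_limit
    limit = int(requested)
    if limit < 0:
        raise ValueError("limit must be >= 0")
    # min() caps oversized requests; 'or' maps 0 back to max_limit.
    return min(limit, max_limit) or max_limit
```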
-- 
Kevin L. Mitchell <kevin.mitchell at rackspace.com>
Rackspace



From rajatv at thoughtworks.com  Fri Sep 11 16:11:14 2015
From: rajatv at thoughtworks.com (Rajat Vig)
Date: Fri, 11 Sep 2015 09:11:14 -0700
Subject: [openstack-dev] [Horizon]Let's take care of our integration
	tests
In-Reply-To: <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>
References: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
 <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>
Message-ID: <CA+JPr9EYuuqudN3k-JuOB9mdzq9vLko=85N_BEtG3vhRz8Or3Q@mail.gmail.com>

Is there any documentation to run the tests locally?
Doing ./run_tests.sh --only-selenium skips a lot of tests. Is that the
recommended way?

-Rajat

On Thu, Sep 10, 2015 at 3:54 PM, David Lyle <dklyle0 at gmail.com> wrote:

> I completely agree about monitoring for integration test failures and
> blocking until the failure is corrected.
>
> The hope is to make sure we've stabilized the integration testing
> framework a bit before re-enabling voting.
>
> Thanks Timur, I know this has been a considerable undertaking.
>
> David
>
> On Thu, Sep 10, 2015 at 4:26 PM, Douglas Fish <drfish at us.ibm.com> wrote:
> > It looks like we've reached the point where our Horizon integration tests
> > are functional again.  Thanks for your work on this Timur! (Offer for
> > beer/hug at the next summit still stands)
> >
> > I'd like to have these tests voting again ASAP, but I understand that
> might
> > be a bit risky at this point. We haven't yet proven that these tests
> will be
> > stable over the long term.
> >
> > I encourage all of the reviewers to keep the integration tests in mind
> as we
> > are reviewing code. Keep an eye on the status of the
> > gate-horizon-dsvm-integration test. Its failure would be a great reason
> > to hand out a -1!
> >
> > Doug
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/1ada3099/attachment.html>

From dklyle0 at gmail.com  Fri Sep 11 16:23:38 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Fri, 11 Sep 2015 10:23:38 -0600
Subject: [openstack-dev] [Horizon]Let's take care of our integration
	tests
In-Reply-To: <CA+JPr9EYuuqudN3k-JuOB9mdzq9vLko=85N_BEtG3vhRz8Or3Q@mail.gmail.com>
References: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
 <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>
 <CA+JPr9EYuuqudN3k-JuOB9mdzq9vLko=85N_BEtG3vhRz8Or3Q@mail.gmail.com>
Message-ID: <CAFFhzB4QqNeAFbpeFEWkRmPGf4OECOQ7W_6L1v4neJ8qZEpHng@mail.gmail.com>

raj

On Fri, Sep 11, 2015 at 10:11 AM, Rajat Vig <rajatv at thoughtworks.com> wrote:
> Is there any documentation to run the tests locally?
> Doing ./run_tests.sh --only-selenium skips a lot of tests. Is that the
> recommended way?
>
> -Rajat
>
> On Thu, Sep 10, 2015 at 3:54 PM, David Lyle <dklyle0 at gmail.com> wrote:
>>
>> I completely agree about monitoring for integration test failures and
>> blocking until the failure is corrected.
>>
>> The hope is to make sure we've stabilized the integration testing
>> framework a bit before reenabling to vote.
>>
>> Thanks Timur, I know this has been a considerable undertaking.
>>
>> David
>>
>> On Thu, Sep 10, 2015 at 4:26 PM, Douglas Fish <drfish at us.ibm.com> wrote:
>> > It looks like we've reached the point where our Horizon integration
>> > tests
>> > are functional again.  Thanks for your work on this Timur! (Offer for
>> > beer/hug at the next summit still stands)
>> >
>> > I'd like to have these tests voting again ASAP, but I understand that
>> > might
>> > be a bit risky at this point. We haven't yet proven that these tests
>> > will be
>> > stable over the long term.
>> >
>> > I encourage all of the reviewers to keep the integration tests in mind
>> > as we
>> > are reviewing code. Keep an eye on the status of the
>> > gate-horizon-dsvm-integration test. Its failure would be a great reason
>> > to hand out a -1!
>> >
>> > Doug
>> >
>> >
>> >
>> >
>>
>
>
>
>


From dklyle0 at gmail.com  Fri Sep 11 16:25:58 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Fri, 11 Sep 2015 10:25:58 -0600
Subject: [openstack-dev] [Horizon]Let's take care of our integration
	tests
In-Reply-To: <CAFFhzB4QqNeAFbpeFEWkRmPGf4OECOQ7W_6L1v4neJ8qZEpHng@mail.gmail.com>
References: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
 <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>
 <CA+JPr9EYuuqudN3k-JuOB9mdzq9vLko=85N_BEtG3vhRz8Or3Q@mail.gmail.com>
 <CAFFhzB4QqNeAFbpeFEWkRmPGf4OECOQ7W_6L1v4neJ8qZEpHng@mail.gmail.com>
Message-ID: <CAFFhzB60YQdLOQnRGr2yaxdhLz_1DByzj=J=V5uh4ZVYcKKQLQ@mail.gmail.com>

Oops!

./run_tests.sh --only-selenium skips only the non-Selenium tests; it
works as intended. ./run_tests.sh --with-selenium will run all tests.
Both are documented in the help output of run_tests.sh.

David

On Fri, Sep 11, 2015 at 10:23 AM, David Lyle <dklyle0 at gmail.com> wrote:
> raj
>
> On Fri, Sep 11, 2015 at 10:11 AM, Rajat Vig <rajatv at thoughtworks.com> wrote:
>> Is there any documentation to run the tests locally?
>> Doing ./run_tests.sh --only-selenium skips a lot of tests. Is that the
>> recommended way?
>>
>> -Rajat
>>
>> On Thu, Sep 10, 2015 at 3:54 PM, David Lyle <dklyle0 at gmail.com> wrote:
>>>
>>> I completely agree about monitoring for integration test failures and
>>> blocking until the failure is corrected.
>>>
>>> The hope is to make sure we've stabilized the integration testing
>>> framework a bit before reenabling to vote.
>>>
>>> Thanks Timur, I know this has been a considerable undertaking.
>>>
>>> David
>>>
>>> On Thu, Sep 10, 2015 at 4:26 PM, Douglas Fish <drfish at us.ibm.com> wrote:
>>> > It looks like we've reached the point where our Horizon integration
>>> > tests
>>> > are functional again.  Thanks for your work on this Timur! (Offer for
>>> > beer/hug at the next summit still stands)
>>> >
>>> > I'd like to have these tests voting again ASAP, but I understand that
>>> > might
>>> > be a bit risky at this point. We haven't yet proven that these tests
>>> > will be
>>> > stable over the long term.
>>> >
>>> > I encourage all of the reviewers to keep the integration tests in mind
>>> > as we
>>> > are reviewing code. Keep an eye on the status of the
>>> > gate-horizon-dsvm-integration test. Its failure would be a great reason
>>> > to hand out a -1!
>>> >
>>> > Doug
>>> >
>>> >
>>> >
>>> >
>>>
>>
>>
>>
>>


From dklyle0 at gmail.com  Fri Sep 11 16:28:04 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Fri, 11 Sep 2015 10:28:04 -0600
Subject: [openstack-dev] [Horizon]Let's take care of our integration
	tests
In-Reply-To: <CAFFhzB60YQdLOQnRGr2yaxdhLz_1DByzj=J=V5uh4ZVYcKKQLQ@mail.gmail.com>
References: <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
 <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>
 <CA+JPr9EYuuqudN3k-JuOB9mdzq9vLko=85N_BEtG3vhRz8Or3Q@mail.gmail.com>
 <CAFFhzB4QqNeAFbpeFEWkRmPGf4OECOQ7W_6L1v4neJ8qZEpHng@mail.gmail.com>
 <CAFFhzB60YQdLOQnRGr2yaxdhLz_1DByzj=J=V5uh4ZVYcKKQLQ@mail.gmail.com>
Message-ID: <CAFFhzB5vPnwdMsgjpxYbZVtqM141=DAhJg5Ze_QV7y7FQNowKg@mail.gmail.com>

Of course, your question confused me, since you had switched to asking about the Selenium tests.

To run integration tests,
./run_tests.sh --integration

which is also documented in ./run_tests.sh --help

David

On Fri, Sep 11, 2015 at 10:25 AM, David Lyle <dklyle0 at gmail.com> wrote:
> Oops!
>
> ./run_tests.sh --only-selenium   only skips the non-selenium tests. It
> works as intended.  ./run_tests.sh --with-selenium will run all tests.
> Both of which are documented in the help of run_tests.sh
>
> David
>
> On Fri, Sep 11, 2015 at 10:23 AM, David Lyle <dklyle0 at gmail.com> wrote:
>> raj
>>
>> On Fri, Sep 11, 2015 at 10:11 AM, Rajat Vig <rajatv at thoughtworks.com> wrote:
>>> Is there any documentation to run the tests locally?
>>> Doing ./run_tests.sh --only-selenium skips a lot of tests. Is that the
>>> recommended way?
>>>
>>> -Rajat
>>>
>>> On Thu, Sep 10, 2015 at 3:54 PM, David Lyle <dklyle0 at gmail.com> wrote:
>>>>
>>>> I completely agree about monitoring for integration test failures and
>>>> blocking until the failure is corrected.
>>>>
>>>> The hope is to make sure we've stabilized the integration testing
>>>> framework a bit before reenabling to vote.
>>>>
>>>> Thanks Timur, I know this has been a considerable undertaking.
>>>>
>>>> David
>>>>
>>>> On Thu, Sep 10, 2015 at 4:26 PM, Douglas Fish <drfish at us.ibm.com> wrote:
>>>> > It looks like we've reached the point where our Horizon integration
>>>> > tests
>>>> > are functional again.  Thanks for your work on this Timur! (Offer for
>>>> > beer/hug at the next summit still stands)
>>>> >
>>>> > I'd like to have these tests voting again ASAP, but I understand that
>>>> > might
>>>> > be a bit risky at this point. We haven't yet proven that these tests
>>>> > will be
>>>> > stable over the long term.
>>>> >
>>>> > I encourage all of the reviewers to keep the integration tests in mind
>>>> > as we
>>>> > are reviewing code. Keep an eye on the status of the
>>>> > gate-horizon-dsvm-integration test. Its failure would be a great reason
>>>> > to hand out a -1!
>>>> >
>>>> > Doug
>>>> >
>>>> >
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>>


From lgy181 at foxmail.com  Fri Sep 11 16:30:43 2015
From: lgy181 at foxmail.com (Luo Gangyi)
Date: Sat, 12 Sep 2015 00:30:43 +0800
Subject: [openstack-dev] Re: [Ceilometer][Gnocchi] Gnocchi cannot deal
 with combined resource-id ?
In-Reply-To: <m07fnxndw5.fsf@danjou.info>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
 <m07fnxndw5.fsf@danjou.info>
Message-ID: <tencent_0B963CB97CD845581696926B@qq.com>

Thanks Julien :)
  
 >  But I don't think Ceilometer has this metric posted to Gnocchi yet, the
>  code is a bit young and not finished on the Ceilometer side. If you
>  check gnocchi_resources.yaml, it's still marked "ignored" for now.
  
 I checked it again; no "ignored" is marked, so it seems to be a devstack bug ;(
  
 And it's OK that Gnocchi is not perfect yet, but I still have some worries about how Gnocchi deals with, or is going to deal with, the instance-xxxx-tapxxx case.
 I see that 'network.incoming.bytes' belongs to resource type 'instance', but no attribute of 'instance' can store the tap name. Although I can look up
 all metric ids from the resource id (instance uuid), how do I distinguish metrics belonging to different taps of an instance?
  ------------------
 Luo Gangyi   luogangyi at cmss.chinamobile.com



  
  

 

 ------------------ Original Message ------------------
  From: "Julien Danjou" <julien at danjou.info>;
 Sent: Saturday, Sep 12, 2015, 00:01
 To: "Luo Gangyi" <lgy181 at foxmail.com>; 
 Cc: "OpenStack Development Mailing L" <openstack-dev at lists.openstack.org>; 
 Subject: Re: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with combined resource-id ?

 

On Fri, Sep 11 2015, Luo Gangyi wrote:

Hi Gangyi,

>  I am using master branch and newest code for testing.

Cool.

>  For the purpose for learning the structure of gnocchi, I changed the
>  default UUID type of mysql from binary to char, so I can easily link
>  the resource-id(I mean in database), metric id and directory name of
>  storing measures.

Bah, don't do that, and rather use PostgreSQL; it's the recommended
backend. :)

>  When I did that, I found all the metrics where their resource id is
>  combined(here, I mean in Ceilometer, such as instance-xxx-tapxxxx)
>  have no measures stored.


>  Log in Ceilometer collector records this:
>  "
>  2015-09-11 07:55:55.097 10636 ERROR ceilometer.dispatcher.gnocchi [-] Resource instance-00000001-4641f59e-994c-4255-b0ec-43a276d1c19c-tap8aadb7ad-d7 creation failed with status: 400: <html>
>  <head>
>   <title>400 Bad Request</title>
>  </head>
>  <body>
>   <h1>400 Bad Request</h1>
>   The server could not comply with the request since it is either malformed or otherwise incorrect.<br /><br />
> Invalid input: required key not provided @ data['display_name']
>   </body>
> </html>
>
>  "
>  So I wonder whether gnocchi cannot deal with such combined resource-id metrics, or whether it is because I changed the UUID type, or whatever.

Yes, it can, but the problem is more likely in the Ceilometer collector
dispatcher code that is sending the data. From the error you have, it
seems it tries to create an instance but it has no value for
display_name, so it is denied by Gnocchi. If this is from a standard
devstack installation, I'd suggest opening a bug against Ceilometer.

>  And another question is how to query measures for those metrics whose
>  resource id is combined.

They are resource on their own so if you know their id you can just
access the metrics at /v1/resources/<type>/<id>/metric/<name>/measures.
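To make that concrete, here is a small sketch of building such a request (the endpoint value is an assumption for a local Gnocchi API; the path is the one given above):

```python
# Assumed endpoint for a local Gnocchi API; adjust for your deployment.
GNOCCHI = "http://localhost:8041"

def measures_url(resource_type, resource_id, metric_name):
    # The path described above:
    # /v1/resources/<type>/<id>/metric/<name>/measures
    return "%s/v1/resources/%s/%s/metric/%s/measures" % (
        GNOCCHI, resource_type, resource_id, metric_name)

url = measures_url("instance", "1111-2222", "network.incoming.bytes")
# A real request would also need an auth token, e.g. an X-Auth-Token header.
```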

>  For example, I want to query the network traffic of an vm, I know the
>  instance uuid 1111-2222, I know metric name
>  'network.incoming.byte.rate' but I do not know the exact resouce_id
>  and metric id. What procedure should I do ?

You need to know the ID of the resource and then ask for its metric on
the REST interface. If you don't know the ID of the resource, you can
search for it by instance id.

But I don't think Ceilometer has this metric posted to Gnocchi yet, the
code is a bit young and not finished on the Ceilometer side. If you
check gnocchi_resources.yaml, it's still marked "ignored" for now.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/b78b4f1c/attachment.html>

From lyz at princessleia.com  Fri Sep 11 16:32:51 2015
From: lyz at princessleia.com (Elizabeth K. Joseph)
Date: Fri, 11 Sep 2015 09:32:51 -0700
Subject: [openstack-dev] [OpenStack-Infra]  [infra] PTL non-candidacy
In-Reply-To: <20150910205658.GZ7955@yuggoth.org>
References: <87io7im33q.fsf@meyer.lemoncheese.net>
 <20150910205658.GZ7955@yuggoth.org>
Message-ID: <CABesOu09zK+NObLkk7dWKxr9hC5tpXAYK1qW4=6=VHs68RFtyA@mail.gmail.com>

On Thu, Sep 10, 2015 at 1:56 PM, Jeremy Stanley <fungi at yuggoth.org> wrote:
> On 2015-09-10 13:27:53 -0700 (-0700), James E. Blair wrote:
> [...]
>> I do not plan to run for PTL in the next cycle.
> [...]
>
> Thanks for the awesome job you did as PTL these last cycles. I hope
> you enjoy a much-deserved break from the post, and I'm looking
> forward to the new Zuul! ;)

Indeed! Thanks for your excellent work as PTL for so long, it was
certainly a pleasure working on a team where you were our fearless
leader :)

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2


From davanum at gmail.com  Fri Sep 11 16:45:48 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Fri, 11 Sep 2015 12:45:48 -0400
Subject: [openstack-dev] [devstack][keystone][ironic] Use only Keystone v3
	API in DevStack
Message-ID: <CANw6fcEH2hvooB_pagE+BPbpnATCBKzgoD+n5i1RWqsCEaQZEw@mail.gmail.com>

Hi,

Short story/question:
Is keystone /v3 support important to the ironic team? For Mitaka, I guess?

Long story:
The previous discussion - guidance from keystone team on magnum (
http://markmail.org/message/jchf2vj752jdzfet) motivated me to dig into the
experimental job we have in devstack for full keystone v3 api and ended up
with this review.

https://review.openstack.org/#/c/221300/

So essentially that rips out the v2 keystone pipeline *except* for ironic jobs,
as ironic has some hard-coded dependencies on the keystone /v2 API. I've logged
a bug here:
https://bugs.launchpad.net/ironic/+bug/1494776

Note that review above depends on Jamie's tempest patch which had some hard
coded /v2 dependency as well (https://review.openstack.org/#/c/214987/)
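In case it helps anyone auditing for similar hard-coded dependencies, the structural difference between v2 and v3 password auth is roughly the following (all field values here are placeholders, not taken from the reviews above):

```python
# Rough sketch of the two password-auth request bodies; all values are
# placeholders. v2 posts to <keystone>/v2.0/tokens:
v2_body = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}

# v3 posts to <keystone>/v3/auth/tokens; domains are explicit and scope is
# a separate, structured field:
v3_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret",
                }
            },
        },
        "scope": {"project": {"name": "demo", "domain": {"id": "default"}}},
    }
}
```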

follow up question:
Does anyone know of anything else that does not work with /v3?

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/893b98aa/attachment.html>

From chdent at redhat.com  Fri Sep 11 16:47:20 2015
From: chdent at redhat.com (Chris Dent)
Date: Fri, 11 Sep 2015 17:47:20 +0100 (BST)
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal with
 combined resource-id ?
In-Reply-To: <tencent_21E29496537979275896B7B0@qq.com>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
Message-ID: <alpine.OSX.2.11.1509111738110.74088@seed.local>

On Fri, 11 Sep 2015, Luo Gangyi wrote:

> I am using master branch and newest code for testing.
>
> For the purpose for learning the structure of gnocchi, I changed the
> default UUID type of mysql from binary to char, so I can easily link
> the resource-id(I mean in database), metric id and directory name of
> storing measures.
>
> When I did that, I found all the metrics where their resource id is
> combined(here, I mean in Ceilometer, such as instance-xxx-tapxxxx)
> have no measures stored.

In addition to the things that Julien has said, one thing that is
non-obvious about how gnocchi stores resources is that if the
incoming ID is _not_ a UUID then a uuid5 hash is created based on the id
provided. So if your resource has a Ceilometer-side id of 'instance-xxx-
tapxxxx' it will be saved in the database in a form like 5B0A4989-44C3-46D1-A1ED-
4705062F51A2. The code review for that change is here:
https://review.openstack.org/#/c/216390/

With that change in place you can still use the 'instance-xxx-tapxxxx' ID
in the <id> part of /v1/metric/resource/<type>/<id> URLs and in
search queries.
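A minimal sketch of that mapping (the namespace UUID below is a made-up placeholder to show the mechanism; Gnocchi uses its own fixed namespace internally):

```python
import uuid

# Placeholder namespace for illustration only; Gnocchi's real namespace UUID
# is defined in its own source tree.
NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")

def to_gnocchi_id(resource_id):
    """Pass real UUIDs through unchanged; hash anything else with uuid5."""
    try:
        return uuid.UUID(resource_id)
    except ValueError:
        return uuid.uuid5(NAMESPACE, resource_id)

hashed = to_gnocchi_id("instance-xxx-tapxxxx")
# The mapping is deterministic, which is why the original Ceilometer id can
# still be used to look the resource up later.
assert hashed == to_gnocchi_id("instance-xxx-tapxxxx")
```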

But as pointed out elsewhere in the thread at least some of your
resources are missing because of the 'display_name' attribute being
dropped.

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From drfish at us.ibm.com  Fri Sep 11 16:44:12 2015
From: drfish at us.ibm.com (Douglas Fish)
Date: Fri, 11 Sep 2015 16:44:12 +0000
Subject: [openstack-dev] [Horizon]Let's take care of our integrationtests
In-Reply-To: <CAFFhzB5vPnwdMsgjpxYbZVtqM141=DAhJg5Ze_QV7y7FQNowKg@mail.gmail.com>
References: <CAFFhzB5vPnwdMsgjpxYbZVtqM141=DAhJg5Ze_QV7y7FQNowKg@mail.gmail.com>,
 <OF57A4B719.91C38F6B-ON85257EBC.007606AE-86257EBC.007B43A0@us.ibm.com>
 <CAFFhzB4Nc8f2BOmtzNkNSidSLYYx_97VmgHMFTjc6kH0kq-atw@mail.gmail.com>
 <CA+JPr9EYuuqudN3k-JuOB9mdzq9vLko=85N_BEtG3vhRz8Or3Q@mail.gmail.com>
 <CAFFhzB4QqNeAFbpeFEWkRmPGf4OECOQ7W_6L1v4neJ8qZEpHng@mail.gmail.com>
 <CAFFhzB60YQdLOQnRGr2yaxdhLz_1DByzj=J=V5uh4ZVYcKKQLQ@mail.gmail.com>
Message-ID: <201509111644.t8BGiSB9014140@d03av01.boulder.ibm.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/0b3edf5c/attachment-0001.html>

From devananda.vdv at gmail.com  Fri Sep 11 17:13:55 2015
From: devananda.vdv at gmail.com (Devananda van der Veen)
Date: Fri, 11 Sep 2015 10:13:55 -0700
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <20150911145739.GA6187@morley.fra.hp.com>
References: <55F29727.7070800@redhat.com>
 <20150911145739.GA6187@morley.fra.hp.com>
Message-ID: <CAExZKErub_Hixm6fva6pOdeia7CUwPfjy2_Jiq6q+Zj7ht6krA@mail.gmail.com>

I agree that it's far too long right now and should be split up.

I would suggest splitting the section on standalone usage into its own
guide, since many of the advanced topics can apply to using Ironic
with and without other services. So perhaps more like this:

- Basic Install Guide with OpenStack
- Basic Install Guide for stand-alone usage
- Advanced topics
-- working with UEFI
-- using configdrive instead of cloud-init
-- hardware inspection
-- trusted and secure boot
- Driver references
-- one section for each driver's specific guide



On Fri, Sep 11, 2015 at 7:57 AM, Bruno Cornec <Bruno.Cornec at hp.com> wrote:
> Hello,
>
> Dmitry Tantsur said on Fri, Sep 11, 2015 at 10:56:07AM +0200:
>>
>> Our install guide is huge, and I've just approved even more text for it.
>> WDYT about splitting it into "Basic Install Guide", which will contain bare
>> minimum for running ironic and deploying instances, and "Advanced Install
>> Guide", which will the following things:
>> 1. Using Bare Metal service as a standalone service
>> 2. Enabling the configuration drive (configdrive)
>> 3. Inspection
>> 4. Trusted boot
>> 5. UEFI
>
>
> As a recent reader, I'd like to keep the UEFI part in the main doc as
> more and more server will be UEFI by default. The rest seems good to go
> in a separate one.
>
> Bruno.
> --
> Open Source Profession, Linux Community Lead WW  http://opensource.hp.com
> HP EMEA EG Open Source Technology Strategist         http://hpintelco.net
> FLOSS projects:     http://mondorescue.org     http://project-builder.org
> Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org
>
>


From john at johngarbutt.com  Fri Sep 11 17:15:44 2015
From: john at johngarbutt.com (John Garbutt)
Date: Fri, 11 Sep 2015 18:15:44 +0100
Subject: [openstack-dev] [Nova] Design Summit Topics for Nova
Message-ID: <CABib2_pVxCtF=0hCGtZzg18OmMRv1LNXeHwmdow9vWx+Sw7HMg@mail.gmail.com>

Hi,

Its time to start thinking about what things you want to get discussed
at the Design Summit, this is specifically the Nova developer design
summit track. We have about 14 fishbowl sessions.

To make it easier to know who submitted what, we are going to try out
google forms for the submissions:
http://goo.gl/forms/D2Qk8XGhZ6

If that does not work for you, let me know, and I can see what can be done.

Note: the form does encourage linking to a spec, as that can be useful
for setting the context of the idea you want to discuss. There will
likely be more news on nova-spec reviews, and setting priorities on
specs, once we have RC1 out the door.

Thanks,
johnthetubaguy

PS
We can sort out links to etherpads for the Friday meet up and
unconference slots (if we do those) nearer the time.


From devananda.vdv at gmail.com  Fri Sep 11 17:24:40 2015
From: devananda.vdv at gmail.com (Devananda van der Veen)
Date: Fri, 11 Sep 2015 10:24:40 -0700
Subject: [openstack-dev] [devstack][keystone][ironic] Use only Keystone
 v3 API in DevStack
In-Reply-To: <CANw6fcEH2hvooB_pagE+BPbpnATCBKzgoD+n5i1RWqsCEaQZEw@mail.gmail.com>
References: <CANw6fcEH2hvooB_pagE+BPbpnATCBKzgoD+n5i1RWqsCEaQZEw@mail.gmail.com>
Message-ID: <CAExZKEq7yiNWqsJUiNbyb7mofB_U_qXnGbkVRSZSyVRQnPqttA@mail.gmail.com>

We (the Ironic team) have talked a couple times about keystone /v3
support and about improving the granularity of policy support within
Ironic. No one stepped up to work on these specifically, and they
weren't prioritized during Liberty ... but I think everyone agreed
that we should get on with the keystone v3 relatively soon.

If Ironic is the only integrated project that doesn't support v3 yet,
then yea, we should get on that as soon as M opens.

-Devananda

On Fri, Sep 11, 2015 at 9:45 AM, Davanum Srinivas <davanum at gmail.com> wrote:
> Hi,
>
> Short story/question:
> Is keystone /v3 support important to the ironic team? For Mitaka i guess?
>
> Long story:
> The previous discussion - guidance from keystone team on magnum
> (http://markmail.org/message/jchf2vj752jdzfet) motivated me to dig into the
> experimental job we have in devstack for full keystone v3 api and ended up
> with this review.
>
> https://review.openstack.org/#/c/221300/
>
> So essentially that rips out v2 keystone pipeline *except* for ironic jobs.
> as ironic has some hard-coded dependencies to keystone /v2 api. I've logged
> a bug here:
> https://bugs.launchpad.net/ironic/+bug/1494776
>
> Note that review above depends on Jamie's tempest patch which had some hard
> coded /v2 dependency as well (https://review.openstack.org/#/c/214987/)
>
> follow up question:
> Does anyone know of anything else that does not work with /v3?
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
>


From lgy181 at foxmail.com  Fri Sep 11 17:29:17 2015
From: lgy181 at foxmail.com (Luo Gangyi)
Date: Sat, 12 Sep 2015 01:29:17 +0800
Subject: [openstack-dev] [Ceilometer][Gnocchi] Gnocchi cannot deal
	withcombined resource-id ?
In-Reply-To: <alpine.OSX.2.11.1509111738110.74088@seed.local>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
 <alpine.OSX.2.11.1509111738110.74088@seed.local>
Message-ID: <tencent_4169FBB506502CF121ACB980@qq.com>

Hi, Chris
 
> With that change in place you can still use the 'instance-xxx-tapxxxx' ID
> in the <id> part of /v1/metric/resource/<type>/<id> URLs and in
> search queries.
  
 But we cannot learn the tap name through the Nova or Neutron APIs.
 In general, there are conditions where we cannot know the exact resource id in advance, and using a hash of the resource id also makes fuzzy search impossible. What we do know are relationships between resources: for example, a VM has several taps, and the resources instance-111 and instance-111-tap111 have an inborn connection.
 So I suggest that we add some attribute to describe such relations.
  
  ------------------
 Luo Gangyi   luogangyi at cmss.chinamobile.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/4e900786/attachment.html>

From ipovolotskaya at mirantis.com  Fri Sep 11 18:07:58 2015
From: ipovolotskaya at mirantis.com (Irina Povolotskaya)
Date: Fri, 11 Sep 2015 21:07:58 +0300
Subject: [openstack-dev] [Fuel] Nominate Olga Gusarenko for fuel-docs core
Message-ID: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>

Fuelers,

I'd like to nominate Olga Gusarenko for the fuel-docs-core.

She has been doing great work and made a great contribution
into Fuel documentation:

http://stackalytics.com/?user_id=ogusarenko&release=all&project_type=all&module=fuel-docs

It's high time to grant her core reviewer's rights in fuel-docs.

Core reviewer approval process definition:
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

-- 
Best regards,

Irina
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/47fd7f8b/attachment.html>

From dborodaenko at mirantis.com  Fri Sep 11 18:19:51 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Fri, 11 Sep 2015 18:19:51 +0000
Subject: [openstack-dev] [Fuel] Nominate Olga Gusarenko for fuel-docs
	core
In-Reply-To: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>
References: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>
Message-ID: <CAM0pNLPiuyycwSU+572wz0ycEr3jbR3wnTUn2k=dAorhfDvA0w@mail.gmail.com>

+1

Great work Olga!

On Fri, Sep 11, 2015, 11:09 Irina Povolotskaya <ipovolotskaya at mirantis.com>
wrote:

> Fuelers,
>
> I'd like to nominate Olga Gusarenko for the fuel-docs-core.
>
> She has been doing great work and made a great contribution
> into Fuel documentation:
>
>
> http://stackalytics.com/?user_id=ogusarenko&release=all&project_type=all&module=fuel-docs
>
> It's high time to grant her core reviewer's rights in fuel-docs.
>
> Core reviewer approval process definition:
> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> --
> Best regards,
>
> Irina
>
>
>
>
>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/932bbcf0/attachment.html>

From ian.cordasco at RACKSPACE.COM  Fri Sep 11 18:33:44 2015
From: ian.cordasco at RACKSPACE.COM (Ian Cordasco)
Date: Fri, 11 Sep 2015 18:33:44 +0000
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <55F2E3F9.1000907@gmail.com>
References: <55F2E3F9.1000907@gmail.com>
Message-ID: <etPan.55f31e93.35ed68df.5cd@MMR298FD58>

?

-----Original Message-----
From:?Nikhil Komawar <nik.komawar at gmail.com>
Reply:?OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Date:?September 11, 2015 at 09:30:23
To:?openstack-dev at lists.openstack.org <openstack-dev at lists.openstack.org>
Subject:? [openstack-dev] [Glance] glance core rotation part 1

> Hi,
>  
> I would like to propose the following removals from glance-core based on
> the simple criterion of inactivity/limited activity for a long period (2
> cycles or more) of time:
>  
> Alex Meade
> Arnaud Legendre
> Mark Washenberger
> Iccha Sethi

I think these are overdue

> Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)

Sad to see Zhi Yan Liu's activity drop off.

> Please vote +1 or -1 and we will decide by Monday EOD PT.

+1

--  
Ian Cordasco

From mr.alex.meade at gmail.com  Fri Sep 11 18:36:54 2015
From: mr.alex.meade at gmail.com (Alex Meade)
Date: Fri, 11 Sep 2015 14:36:54 -0400
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <etPan.55f31e93.35ed68df.5cd@MMR298FD58>
References: <55F2E3F9.1000907@gmail.com>
 <etPan.55f31e93.35ed68df.5cd@MMR298FD58>
Message-ID: <CABdthUSsbSxn1Vb5nTyGnLO__8VYn7KvGHUV8MeBD6ZERtD8ew@mail.gmail.com>

+1

On Fri, Sep 11, 2015 at 2:33 PM, Ian Cordasco <ian.cordasco at rackspace.com>
wrote:

>
>
> -----Original Message-----
> From: Nikhil Komawar <nik.komawar at gmail.com>
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev at lists.openstack.org>
> Date: September 11, 2015 at 09:30:23
> To: openstack-dev at lists.openstack.org <openstack-dev at lists.openstack.org>
> Subject:  [openstack-dev] [Glance] glance core rotation part 1
>
> > Hi,
> >
> > I would like to propose the following removals from glance-core based on
> > the simple criterion of inactivity/limited activity for a long period (2
> > cycles or more) of time:
> >
> > Alex Meade
> > Arnaud Legendre
> > Mark Washenberger
> > Iccha Sethi
>
> I think these are overdue
>
> > Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)
>
> Sad to see Zhi Yan Liu's activity drop off.
>
> > Please vote +1 or -1 and we will decide by Monday EOD PT.
>
> +1
>
> --
> Ian Cordasco
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/65d99b28/attachment.html>

From jon at jonproulx.com  Fri Sep 11 19:04:04 2015
From: jon at jonproulx.com (Jonathan Proulx)
Date: Fri, 11 Sep 2015 15:04:04 -0400
Subject: [openstack-dev] [Neutron] Allow for per-subnet dhcp options
Message-ID: <CABZB-sjDmijUExd_xA4+vwhZ8jV5qQLxTwYF7mMdWSAvBGcChg@mail.gmail.com>

I'm hurt that this blueprint has seen no love in 18 months:
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet

I need different MTUs and different domains on different subnets.  It
appears there is still no way to do this other than running a network
node (or two if I want HA) for each subnet.

Please someone tell me I'm a fool and there's an easy way to do this
that I failed to find (again)...

-Jon
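[Editorial note: Neutron does support DHCP options per *port* via
extra_dhcp_opts, which can approximate per-subnet options by updating every
port on the subnet. A minimal sketch, assuming the 2015-era
python-neutronclient v2.0 API; the helper name and connection details are
illustrative, not from this thread:]

```python
def extra_dhcp_opts_body(opts):
    """Build a Neutron port-update body carrying per-port DHCP options
    (extra_dhcp_opts), the closest available per-subnet workaround."""
    return {'port': {'extra_dhcp_opts': [
        {'opt_name': name, 'opt_value': str(value)}
        for name, value in sorted(opts.items())]}}

# Applying it to every port on a network with python-neutronclient
# (third-party; credentials and endpoint are placeholders):
#
#   from neutronclient.v2_0 import client
#   neutron = client.Client(username='admin', password='...',
#                           tenant_name='admin',
#                           auth_url='http://controller:5000/v2.0')
#   body = extra_dhcp_opts_body({'mtu': 1454,
#                                'domain-name': 'subnet-a.example.org'})
#   for port in neutron.list_ports(network_id=net_id)['ports']:
#       neutron.update_port(port['id'], body)
```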


From sarbajitc at gmail.com  Fri Sep 11 19:18:02 2015
From: sarbajitc at gmail.com (Sarbajit Chatterjee)
Date: Sat, 12 Sep 2015 00:48:02 +0530
Subject: [openstack-dev] [nova] How to get notification for new compute node?
Message-ID: <CAN20VngVY=rbHaX+2Ght-9s3oJ6N4s38BPCs-4kdkqVmkPX0BA@mail.gmail.com>

Hi,

I wanted to know how I can get a notification event when a compute node
is added in OpenStack. I can see a new hypervisor entry get added after
every compute node is added, but I can't find the exchange where I can get
this notification (I can get notifications for VM creation, deletion, etc.).
I tried listening to the nova exchange but did not receive this event.

Please help.

Thanks,
Sarbajit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/4ca449be/attachment.html>
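[Editorial note: nova of this era did not emit a dedicated "compute node
added" notification, so one workaround is to bind a queue to the `nova`
topic exchange on the `notifications.info` routing key and watch for a
previously unseen `compute.<host>` publisher. A sketch under those
assumptions; the broker URL, queue name, and heuristic are illustrative:]

```python
import json

def is_new_compute_host(payload, known_hosts):
    """Return the newly seen compute host name if this notification came
    from a compute publisher we have not recorded yet, else None."""
    publisher = payload.get('publisher_id', '')
    if not publisher.startswith('compute.'):
        return None
    host = publisher.split('.', 1)[1]
    if host in known_hosts:
        return None
    known_hosts.add(host)
    return host

def run_listener(broker_url='amqp://guest:guest@controller:5672//'):
    """Consume nova notifications and report unseen compute hosts.
    Requires the third-party 'kombu' package and a reachable broker."""
    from kombu import Connection, Exchange, Queue  # third-party
    nova = Exchange('nova', type='topic', durable=False)
    queue = Queue('compute-watcher', exchange=nova,
                  routing_key='notifications.info', durable=False)
    known = set()

    def on_message(body, message):
        payload = body if isinstance(body, dict) else json.loads(body)
        host = is_new_compute_host(payload, known)
        if host:
            print('new compute host:', host)
        message.ack()

    with Connection(broker_url) as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                conn.drain_events()
```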

From doc at aedo.net  Fri Sep 11 19:23:28 2015
From: doc at aedo.net (Christopher Aedo)
Date: Fri, 11 Sep 2015 12:23:28 -0700
Subject: [openstack-dev] [app-catalog] PTL Candidacy
Message-ID: <CA+odVQF6KtsewLHvCNBungC=XQ=hGjnBRPRgtOLThdve66iGwA@mail.gmail.com>

It's time for the Community App Catalog to go through an official
election cycle, and I'm putting my name in for PTL.  I've been filling
that role provisionally since before the App Catalog was launched at
the Vancouver summit, and I would like to continue serving as PTL
officially.  Now that we've joined the Big Tent[1], having a committed
leader is more important than ever :) .

I believe the App Catalog has tremendous potential for helping the
end-users of OpenStack clouds find and share things they can deploy on
those clouds.  To that end, I've been working with folks on extending
the types of assets that can live in the catalog and also trying to
make finding and consuming those assets easier.

Since we announced the Community App Catalog I've done everything I
could to deliver on the "community" part.  With the help of the
OpenStack Infra team, we moved the site to OpenStack infrastructure as
quickly as possible.  All planning and coordination efforts have
happened on IRC (#openstack-app-catalog), the dev and operators
mailing list, and during the weekly IRC meetings[2].  I've also been
working to get more people engaged and involved with the Community App
Catalog project while attempting to raise the profile and exposure
whenever possible.

Speaking of community, I know being part of the OpenStack community at
a broad level is one of the most important things for a PTL.  On that
front I'm active and always available on IRC (docaedo), and do my best
to stay on top of all the traffic on the dev mailing list.  I also
work with Richard Raseley to organize the OpenStack meetup in Portland
in order to reach, educate (and entertain) people who want to learn
more about OpenStack.

The next big thing we will do for the Community App Catalog is to
build out the backend so it becomes a more engaging experience for the
users, as well as makes it easier for other projects to contribute and
consume the assets.  In addition to the Horizon plugin[3][4] (check it
out with devstack, it's pretty cool!) we are thinking through the API
side of this and will eventually contribute the code to search, fetch
and push from the OpenStack Client.

All of this is to say that I'm eager and proud to serve as the
Community App Catalog PTL for the next six months if you'll have me!

[1] https://review.openstack.org/#/c/217957/
[2] https://wiki.openstack.org/wiki/Meetings/app-catalog
[3] https://github.com/stackforge/apps-catalog-ui
[4] https://github.com/openstack/app-catalog-ui


From harlowja at outlook.com  Fri Sep 11 19:26:53 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Fri, 11 Sep 2015 12:26:53 -0700
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
Message-ID: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>

Hi all,

I was reading over the TC IRC logs for this week (my weekly reading) and 
I just wanted to let my thoughts and comments be known on:

http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309

I feel it's very important to send a positive note for new/upcoming 
projects and libraries... (and for everyone to remember that most 
projects do start off with a small set of backers). So I just wanted to 
try to ensure that we send a positive note with any tag like this that 
gets created and applied, and that we all (especially the TC) really, 
really consider the negative connotations of applying that tag to a 
project (it may effectively ~kill~ that project).

I would really appreciate that instead of just applying this tag (or 
other similarly named tag to projects) that instead the TC try to 
actually help out projects with those potential tags in the first place 
(say perhaps by actively listing projects that may need more 
contributors from a variety of companies on the openstack blog under say 
a 'HELP WANTED' page or something). I'd much rather have that vs. any 
said tags, because the latter actually tries to help projects, vs just 
stamping them with a 'you are bad, figure out how to fix yourself, 
because you are not diverse' tag.

I believe it is the TC job (in part) to help make the community better, 
and not via tags like this that IMHO actually make it worse; I really 
hope that folks on the TC can look back at their own projects they may 
have created and ask how would their own project have turned out if they 
were stamped with a similar tag...

- Josh


From Vijay.Venkatachalam at citrix.com  Fri Sep 11 19:35:43 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Fri, 11 Sep 2015 19:35:43 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E8925C2@SINPEX01CL02.citrite.net>

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying 'could not locate/find container'.
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = "admin" and tenant_name = "admin".
              This means the lbaas plugin is looking for the tenant's certificate in the "admin" tenant, which it will never find.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the "admin" user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username='admin' and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/75fe0c5b/attachment.html>
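[Editorial note: one pattern is to have the plugin authenticate with its
service credentials but scoped to the listener's tenant, then fetch the
container by its href. A rough sketch; the client calls follow
python-barbicanclient/python-keystoneclient as generally documented at the
time, so treat the exact signatures as assumptions:]

```python
def container_uuid(container_ref):
    """Extract the container UUID from a Barbican href such as
    'http://barbican:9311/v1/containers/<uuid>'."""
    return container_ref.rstrip('/').rsplit('/', 1)[-1]

# Fetching the tenant's container (third-party clients, illustrative):
#
#   from keystoneclient.auth.identity import v2
#   from keystoneclient import session
#   from barbicanclient import client as barbican_client
#
#   auth = v2.Password(auth_url='http://controller:5000/v2.0',
#                      username='admin', password='...',
#                      tenant_name=listener_tenant_name)  # listener's
#                                                         # tenant, not 'admin'
#   sess = session.Session(auth=auth)
#   barbican = barbican_client.Client(session=sess)
#   container = barbican.containers.get(container_ref)
```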

From mestery at mestery.com  Fri Sep 11 19:43:24 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Fri, 11 Sep 2015 14:43:24 -0500
Subject: [openstack-dev] [Neutron] Allow for per-subnet dhcp options
In-Reply-To: <CABZB-sjDmijUExd_xA4+vwhZ8jV5qQLxTwYF7mMdWSAvBGcChg@mail.gmail.com>
References: <CABZB-sjDmijUExd_xA4+vwhZ8jV5qQLxTwYF7mMdWSAvBGcChg@mail.gmail.com>
Message-ID: <CAL3VkVy-yHRHuzXf8bu-DH0qwODzy84yWqE4Ck1kJbs02cgUZg@mail.gmail.com>

On Fri, Sep 11, 2015 at 2:04 PM, Jonathan Proulx <jon at jonproulx.com> wrote:

> I'm hurt that this blueprint has seen no love in 18 months:
> https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
>
>
This BP has no RFE bug or spec filed for it, so it's hard to be on anyone's
radar when it's not following the submission guidelines Neutron has for new
work [1]. I'm sorry this has flown under the radar so far, hopefully it can
rise up with an RFE bug.

[1] http://docs.openstack.org/developer/neutron/policies/blueprints.html


> I need different MTUs and different domains on different subnets.  It
> appears there is still no way to do this other than running a network
> node (or two if I want HA) for each subnet.
>
> Please someone tell me I'm a fool and there's an easy way to do this
> that I failed to find (again)...
>
> -Jon
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/14f16ddc/attachment.html>

From itzshamail at gmail.com  Fri Sep 11 19:45:31 2015
From: itzshamail at gmail.com (Shamail Tahir)
Date: Fri, 11 Sep 2015 15:45:31 -0400
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
In-Reply-To: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
Message-ID: <CALrdpTVmT1-=33WF=-EZO4A-on2ds6u69i0mpJj6UPHwq+P5vw@mail.gmail.com>

On Fri, Sep 11, 2015 at 3:26 PM, Joshua Harlow <harlowja at outlook.com> wrote:

> Hi all,
>
> I was reading over the TC IRC logs for this week (my weekly reading) and I
> just wanted to let my thoughts and comments be known on:
>
>
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>
> I feel it's very important to send a positive note for new/upcoming
> projects and libraries... (and for everyone to remember that most projects
> do start off with a small set of backers). So I just wanted to try to
> ensure that we send a positive note with any tag like this that gets
> created and applied and that we all (especially the TC) really really
> considers the negative connotations of applying that tag to a project (it
> may effectively ~kill~ that project).
>
> I would really appreciate that instead of just applying this tag (or other
> similarly named tag to projects) that instead the TC try to actually help
> out projects with those potential tags in the first place (say perhaps by
> actively listing projects that may need more contributors from a variety of
> companies on the openstack blog under say a 'HELP WANTED' page or
> something). I'd much rather have that vs. any said tags, because the latter
> actually tries to help projects, vs just stamping them with a 'you are bad,
> figure out how to fix yourself, because you are not diverse' tag.
>
> I believe it is the TC job (in part) to help make the community better,
> and not via tags like this that IMHO actually make it worse; I really hope
> that folks on the TC can look back at their own projects they may have
> created and ask how would their own project have turned out if they were
> stamped with a similar tag...
>

I agree with Josh and, furthermore, maybe a similar "warning" could be
implicitly made by helping the community understand why the
"diverse-affiliation" tag matters.  If we (through education on tags in
general) stated that the reason diverse-affiliation matters, amongst other
things, is because it shows that the project can potentially survive a
single contributor changing their involvement then wouldn't that achieve
the same purpose of showing stability/mindshare/collaboration for projects
with diverse-affiliation tag (versus those that don't have it) and make
them more "preferred" in a sense?

Thanks,
Shamail


> - Josh
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/985f6cef/attachment.html>

From vipuls at gmail.com  Fri Sep 11 19:51:10 2015
From: vipuls at gmail.com (Vipul Sabhaya)
Date: Fri, 11 Sep 2015 12:51:10 -0700
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
In-Reply-To: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
Message-ID: <CAHC46juNMGDAxbPiZbyPcP9aMLp9ofbHPSx0kZQV6Kd5_3gv+Q@mail.gmail.com>

Thanks for starting this thread Josh.

On Fri, Sep 11, 2015 at 12:26 PM, Joshua Harlow <harlowja at outlook.com>
wrote:

> Hi all,
>
> I was reading over the TC IRC logs for this week (my weekly reading) and I
> just wanted to let my thoughts and comments be known on:
>
>
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>
> I feel it's very important to send a positive note for new/upcoming
> projects and libraries... (and for everyone to remember that most projects
> do start off with a small set of backers). So I just wanted to try to
> ensure that we send a positive note with any tag like this that gets
> created and applied and that we all (especially the TC) really really
> considers the negative connotations of applying that tag to a project (it
> may effectively ~kill~ that project).
>
> Completely agree. Projects that don't automatically fit into the
"starter-kit" type of tag (e.g. Cue) are going to take longer to really
build a community.  It doesn't mean that the project isn't active, or that
the team is not willing to fix bugs, or that operators should be afraid to
run it.


> I would really appreciate that instead of just applying this tag (or other
> similarly named tag to projects) that instead the TC try to actually help
> out projects with those potential tags in the first place (say perhaps by
> actively listing projects that may need more contributors from a variety of
> companies on the openstack blog under say a 'HELP WANTED' page or
> something). I'd much rather have that vs. any said tags, because the latter
> actually tries to help projects, vs just stamping them with a 'you are bad,
> figure out how to fix yourself, because you are not diverse' tag.
>
>
+1.  If the TC can play a role in helping projects build their community, a
lot more of the smaller projects would be much more successful.


> I believe it is the TC job (in part) to help make the community better,
> and not via tags like this that IMHO actually make it worse; I really hope
> that folks on the TC can look back at their own projects they may have
> created and ask how would their own project have turned out if they were
> stamped with a similar tag...
>
> - Josh
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/7ebd8481/attachment.html>

From guang.yee at hpe.com  Fri Sep 11 19:52:27 2015
From: guang.yee at hpe.com (Yee, Guang)
Date: Fri, 11 Sep 2015 19:52:27 +0000
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
Message-ID: <C73B35E94B12854596094F2CA866854964269C0C@G4W3292.americas.hpqcorp.net>

Morgan, thanks for all your hard work. It's been an honor to have you as our PTL.

"All the world's a stage,"

Now sit back, relax, grab a drink, and enjoy the show.


Guang


From: Morgan Fainberg [mailto:morgan.fainberg at gmail.com]
Sent: Thursday, September 10, 2015 2:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone] PTL non-candidacy

As I outlined (briefly) in my recent announcement of changes ( https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/ ) I will not be running for PTL of Keystone this next cycle (Mitaka). The role of PTL is a difficult but extremely rewarding job. It has been amazing to see both Keystone and OpenStack grow.

I am very pleased with the accomplishments of the Keystone development team over the last year. We have seen improvements with Federation, Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing, releasing a dedicated authentication library, cross-project initiatives around improving the Service Catalog, and much, much more. I want to thank each and every contributor for the hard work that was put into Keystone and its associated projects.

While I will be changing my focus to spend more time on the general needs of OpenStack and working on the Public Cloud story, I am confident in those who can, and will, step up to the challenges of leading development of Keystone and the associated projects. I may be working across more projects, but you can be assured I will be continuing to work hard to see the initiatives I helped start through. I wish the best of luck to the next PTL.

I guess this is where I get to write a lot more code soon!

See you all (in person) in Tokyo!
--Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/bec5dbef/attachment.html>

From d.w.chadwick at kent.ac.uk  Fri Sep 11 19:54:02 2015
From: d.w.chadwick at kent.ac.uk (David Chadwick)
Date: Fri, 11 Sep 2015 20:54:02 +0100
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F2D7DA.6010206@redhat.com>
References: <55F27CB7.2040101@redhat.com> <55F2AA41.9070206@kent.ac.uk>
 <55F2D7DA.6010206@redhat.com>
Message-ID: <55F3315A.4070206@kent.ac.uk>



On 11/09/2015 14:32, Rich Megginson wrote:
> On 09/11/2015 04:17 AM, David Chadwick wrote:
>> Whichever approach is adopted you need to consider the future and the
>> longer term objective of moving to fully hierarchical names. I believe
>> the current Keystone approach is only an interim one, as it only
>> supports partial hierarchies. Fully hierarchical names has been
>> discussed in the Keystone group, but I believe that this has been
>> shelved until later in order to get a quick fix released now.
> 
> Can you explain more about "fully hierarchical names"?  What is the
> string representation?

sorry no because there was no agreement on it
david

> 
>>
>> regards
>>
>> David
>>
>> On 11/09/2015 08:03, Gilles Dubreuil wrote:
>>> Hi,
>>>
>>> Today in the #openstack-puppet channel a discussion about the pro and
>>> cons of using domain parameter for Keystone V3 has been left opened.
>>>
>>> The context
>>> ------------
>>> Domain names are needed in OpenStack Keystone V3 to identify users
>>> or groups (of users) within different projects (tenants).
>>> Users and groups are uniquely identified within a domain (a realm, as
>>> opposed to project domains).
>>> Projects also have their own domain, so users or groups can be assigned
>>> to them through roles.
>>>
>>> In Kilo, Keystone V3 was introduced as an experimental feature.
>>> Puppet providers such as keystone_tenant, keystone_user, and
>>> keystone_role_user have been adapted to support it.
>>> New ones have also appeared (keystone_domain) or are on their way
>>> (keystone_group, keystone_trust).
>>> And to be backward compatible with V2, the default domain is used when
>>> no domain is provided.
>>>
>>> In existing providers such as keystone_tenant, the domain can be either
>>> part of the name or provided as a parameter:
>>>
>>> A. The 'composite namevar' approach:
>>>
>>>     keystone_tenant {'projectX::domainY': ... }
>>>   B. The 'meaningless name' approach:
>>>
>>>    keystone_tenant {'myproject': name='projectX', domain=>'domainY',
>>> ...}
>>>
>>> Notes:
>>>   - Actually using both combined should work too, with the domain
>>> parameter supposedly overriding the domain part of the name.
>>>   - Please look at [1] for some background on the two
>>> approaches:
>>>
>>> The question
>>> -------------
>>> Decide between the two approaches, the one we would like to retain for
>>> puppet-keystone.
>>>
>>> Why it matters?
>>> ---------------
>>> 1. Domain names are mandatory for every user, group or project (apart
>>> from the backward-compatibility period mentioned earlier, where no
>>> domain means using the default one).
>>> 2. Long term impact
>>> 3. The two approaches are not completely equivalent, with different
>>> consequences for future usage.
>>> 4. Being consistent
>>> 5. Therefore the community should decide
>>>
>>> The two approaches are not technically equivalent, and it also depends
>>> on what a user might expect from a resource title.
>>> See some of the examples below.
>>>
>>> Because OpenStack DB tables use IDs to uniquely identify objects, they
>>> can hold several objects of the same family with the same name.
>>> This has made it difficult for Puppet resources to guarantee
>>> idempotency through unique resources.
>>> With Keystone V3 domains, this is fortunately not the case for
>>> users, groups or projects, but unfortunately it is still the case
>>> for trusts.
>>>
>>> Pros/Cons
>>> ----------
>>> A.
>>>    Pros
>>>      - Easier names
>>>    Cons
>>>      - Titles have no meaning!
>>>      - Cases where 2 or more resources could exists
>>>      - More difficult to debug
>>>      - Titles mismatch when listing the resources (self.instances)
>>>
>>> B.
>>>    Pros
>>>      - Unique titles guaranteed
>>>      - No ambiguity between resource found and their title
>>>    Cons
>>>      - More complicated titles
>>>
>>> Examples
>>> ----------
>>> = Meaningless name example 1=
>>> Puppet run:
>>>    keystone_tenant {'myproject': name='project_A',
>>> domain=>'domain_1', ...}
>>>
>>> Second run:
>>>    keystone_tenant {'myproject': name='project_A',
>>> domain=>'domain_2', ...}
>>>
>>> Result/Listing:
>>>
>>>    keystone_tenant { 'project_A::domain_1':
>>>      ensure  => 'present',
>>>      domain  => 'domain_1',
>>>      enabled => 'true',
>>>      id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>>>    }
>>>     keystone_tenant { 'project_A::domain_2':
>>>      ensure  => 'present',
>>>      domain  => 'domain_2',
>>>      enabled => 'true',
>>>      id      => '4b8255591949484781da5d86f2c47be7',
>>>    }
>>>
>>> = Composite name example 1  =
>>> Puppet run:
>>>    keystone_tenant {'project_A::domain_1', ...}
>>>
>>> Second run:
>>>    keystone_tenant {'project_A::domain_2', ...}
>>>
>>> # Result/Listing
>>>    keystone_tenant { 'project_A::domain_1':
>>>      ensure  => 'present',
>>>      domain  => 'domain_1',
>>>      enabled => 'true',
>>>      id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>>>     }
>>>    keystone_tenant { 'project_A::domain_2':
>>>      ensure  => 'present',
>>>      domain  => 'domain_2',
>>>      enabled => 'true',
>>>      id      => '4b8255591949484781da5d86f2c47be7',
>>>     }
>>>
>>> = Meaningless name example 2  =
>>> Puppet run:
>>>    keystone_tenant {'myproject1': name='project_A',
>>> domain=>'domain_1', ...}
>>>    keystone_tenant {'myproject2': name='project_A', domain=>'domain_1',
>>> description=>'blah'...}
>>>
>>> Result: project_A in domain_1 has a description
>>>
>>> = Composite name example 2  =
>>> Puppet run:
>>>    keystone_tenant {'project_A::domain_1', ...}
>>>    keystone_tenant {'project_A::domain_1', description => 'blah', ...}
>>>
>>> Result: Error because the resource must be unique within a catalog
>>>
>>> My vote
>>> --------
>>> I would love to have approach A for its easier names.
>>> But I've seen the challenge of maintaining the providers behind the
>>> curtains, and the confusion it creates with names/titles when we're not
>>> sure which domain we're dealing with.
>>> Also, I believe that supporting self.instances consistently with
>>> meaningful names is saner.
>>> Therefore I vote B
>>>
>>> Finally
>>> ------
>>> Thanks for reading that far!
>>> To choose, please provide feedback with more pros/cons, examples and
>>> your vote.
>>>
>>> Thanks,
>>> Gilles
>>>
>>>
>>> PS:
>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
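[Editorial note: the parsing burden behind approach A can be illustrated
outside Puppet: a composite title must be split back into name and domain,
with a bare V2-style name falling back to the default domain. A sketch in
Python that mirrors the behavior described above; the function itself is
illustrative, not from puppet-keystone:]

```python
def split_composite_title(title, default_domain='Default'):
    """Split a composite resource title 'project::domain' into its
    (name, domain) parts; bare V2-style titles get the default domain."""
    name, sep, domain = title.rpartition('::')
    if not sep:
        # No '::' separator: a V2-style bare name.
        return title, default_domain
    return name, domain
```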


From guang.yee at hpe.com  Fri Sep 11 19:55:58 2015
From: guang.yee at hpe.com (Yee, Guang)
Date: Fri, 11 Sep 2015 19:55:58 +0000
Subject: [openstack-dev] [devstack][keystone][ironic] Use only Keystone
 v3 API in DevStack
In-Reply-To: <CAExZKEq7yiNWqsJUiNbyb7mofB_U_qXnGbkVRSZSyVRQnPqttA@mail.gmail.com>
References: <CANw6fcEH2hvooB_pagE+BPbpnATCBKzgoD+n5i1RWqsCEaQZEw@mail.gmail.com>
 <CAExZKEq7yiNWqsJUiNbyb7mofB_U_qXnGbkVRSZSyVRQnPqttA@mail.gmail.com>
Message-ID: <C73B35E94B12854596094F2CA866854964269C33@G4W3292.americas.hpqcorp.net>

Can you please elaborate on "granularity of policy support within Ironic."? Is there a blueprint/etherpad we can take a look?


Guang


-----Original Message-----
From: Devananda van der Veen [mailto:devananda.vdv at gmail.com] 
Sent: Friday, September 11, 2015 10:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack][keystone][ironic] Use only Keystone v3 API in DevStack

We (the Ironic team) have talked a couple times about keystone /v3 support and about improving the granularity of policy support within Ironic. No one stepped up to work on these specifically, and they weren't prioritized during Liberty ... but I think everyone agreed that we should get on with the keystone v3 relatively soon.

If Ironic is the only integrated project that doesn't support v3 yet, then yea, we should get on that as soon as M opens.

-Devananda

On Fri, Sep 11, 2015 at 9:45 AM, Davanum Srinivas <davanum at gmail.com> wrote:
> Hi,
>
> Short story/question:
> Is keystone /v3 support important to the ironic team? For Mitaka i guess?
>
> Long story:
> The previous discussion - guidance from keystone team on magnum
> (http://markmail.org/message/jchf2vj752jdzfet) motivated me to dig 
> into the experimental job we have in devstack for full keystone v3 api 
> and ended up with this review.
>
> https://review.openstack.org/#/c/221300/
>
> So essentially that rips out v2 keystone pipeline *except* for ironic jobs.
> as ironic has some hard-coded dependencies to keystone /v2 api. I've 
> logged a bug here:
> https://bugs.launchpad.net/ironic/+bug/1494776
>
> Note that review above depends on Jamie's tempest patch which had some 
> hard coded /v2 dependency as well 
> (https://review.openstack.org/#/c/214987/)
>
> follow up question:
> Does anyone know of anything else that does not work with /v3?
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> ______________________________________________________________________
> ____ OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
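[Editorial note: porting service code from v2.0 to v3 auth mostly means
adding the two domain arguments and switching the endpoint. A hedged sketch
of that mapping; the keystoneauth1 call in the comments follows its
documented v3 password plugin, but treat the exact kwargs as assumptions:]

```python
def v2_to_v3_auth_kwargs(creds):
    """Map legacy Keystone v2.0 credential kwargs to their v3
    equivalents, defaulting both domains to 'default'."""
    return {
        'auth_url': creds['auth_url'].replace('/v2.0', '/v3'),
        'username': creds['username'],
        'password': creds['password'],
        'project_name': creds.get('tenant_name'),
        'user_domain_id': 'default',
        'project_domain_id': 'default',
    }

# With keystoneauth1 (third-party), the resulting kwargs feed straight
# into the v3 password plugin:
#
#   from keystoneauth1.identity import v3
#   from keystoneauth1 import session
#   auth = v3.Password(**v2_to_v3_auth_kwargs({
#       'auth_url': 'http://controller:5000/v2.0',
#       'username': 'admin', 'password': 's3cret',
#       'tenant_name': 'admin'}))
#   sess = session.Session(auth=auth)
```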


From thingee at gmail.com  Fri Sep 11 19:58:31 2015
From: thingee at gmail.com (Mike Perez)
Date: Fri, 11 Sep 2015 12:58:31 -0700
Subject: [openstack-dev] [cinder] Design Summit Topics
Message-ID: <CAHcn5b1BoXPT=aok0G+vDB97DQCvyu--jC0cOF5osVTjBHzrqw@mail.gmail.com>

Propose your topics:

https://etherpad.openstack.org/p/cinder-mitaka-summit-topics

Next Cinder meeting, we'll discuss them:

https://wiki.openstack.org/wiki/CinderMeetings

--
Mike Perez


From adanin at mirantis.com  Fri Sep 11 19:59:40 2015
From: adanin at mirantis.com (Andrey Danin)
Date: Fri, 11 Sep 2015 22:59:40 +0300
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CAHAWLf1SaOK_6RSSdgPkjUMdf+gPxRewQ-ygzLTXNiEN+oWqRg@mail.gmail.com>
References: <CAFLqvG4eoc7V4rr4pNaVog+BN1aFCLmSvGtLjXKd7XO3G1sppg@mail.gmail.com>
 <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
 <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
 <CAFkLEwoXR-C-etHpS-KDZokYkP8CfS8q9UXjKYF0eNYo6yOpcQ@mail.gmail.com>
 <CA+HkNVsj5m4CduyZ9duTgCsp-BKP-Dwt85iZ01=txPBxVasANg@mail.gmail.com>
 <CAHAWLf1SaOK_6RSSdgPkjUMdf+gPxRewQ-ygzLTXNiEN+oWqRg@mail.gmail.com>
Message-ID: <CA+vYeFoP_dGpP3xr04Mr-iM00jG7AjnXMx+-f5OTED-BVeB7-g@mail.gmail.com>

I support this proposal but I just wanted to mention that we'll lose an
easy way to develop manifests. I agree that manifests in this case are no
different from Neutron code, for instance. But anyway I +1 this,
especially with Vova Kuklin's additions.

On Thu, Sep 10, 2015 at 12:25 PM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> Folks
>
> I have a strong +1 for the proposal to decouple master node and slave
> nodes.
> Here are the strengths of this approach:
> 1) We can always decide which particular node runs which particular set of
> manifests. This will allow us to apply/roll back changes
> node-by-node. This is very important from an operations perspective.
> 2) We can decouple master and slave nodes manifests and not drag new
> library version onto the master node when it is not needed. This allows us
> to decrease probability of regressions
> 3) This makes life easier for the user - you just run 'apt-get/yum
> install' instead of some difficult to digest `mco` command.
>
> The only weakness that I see here is on mentioned by Andrey. I think we
> can fix it by providing developers with clean and simple way of building
> library package on the fly. This will make developers life easier enough to
> work with proposed approach.
>
> Also, we need to provide ways for better UX, like provide one button/api
> call for:
>
> 1) update all manifests on particular nodes (e.g. all or only a part of
> nodes of the cluster) to particular version
> 2)  revert all manifests back to the version which is provided by the
> particular GA release
> 3) <more things we need to think of>
>
> So far I would mark need for simple package-building system for developer
> as a dependency for the proposed change, but I do not see any other way
> than proceeding with it.
>
>
>
> On Thu, Sep 10, 2015 at 11:50 AM, Sergii Golovatiuk <
> sgolovatiuk at mirantis.com> wrote:
>
>> Oleg,
>>
>> Alex gave a perfect example regarding support folks when they need to fix
>> something really quick. It's client's choice what to patch or not. You may
>> like it or not, but it's client's choice.
>>
>> On 10 Sep 2015, at 09:33, Oleg Gelbukh <ogelbukh at mirantis.com> wrote:
>>
>> Alex,
>>
>> I absolutely understand the point you are making about need for
>> deployment engineers to modify things 'on the fly' in customer environment.
>> It makes things really flexible and lowers the entry barrier for sure.
>>
>> However, I would like to note that in my opinion this kind of 'monkey
>> patching' is actually a bad practice for any environments other than dev
>> ones. It immediately leads to emergence of unsupportable frankenclouds. I
>> would greet any modification to the workflow that will discourage people
>> from doing that.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz <aschultz at mirantis.com>
>> wrote:
>>
>>> Hey Vladimir,
>>>
>>>
>>>
>>>> Regarding plugins: plugins are welcome to install specific additional
>>>> DEB/RPM repos on the master node, or just configure cluster to use
>>>> additional online repos, where all necessary packages (including plugin
>>>> specific puppet manifests) are to be available. Current granular deployment
>>>> approach makes it easy to append specific pre-deployment tasks
>>>> (master/slave does not matter). Correct me if I am wrong.
>>>>
>>>>
>>> Don't get me wrong, I think it would be good to move to a fuel-library
>>> distributed via package only.  I'm bringing these points up to indicate
>>> that there are many other things that live in the fuel-library puppet path
>>> than just the fuel-library package.  The plugin example is just one place
>>> that we will need to invest in further design and work to move to the
>>> package only distribution.  What I don't want is some partially executed
>>> work that only works for one type of deployment and creates headaches for
>>> the people actually having to use fuel.  The deployment engineers and
>>> customers who actually perform these actions should be asked about
>>> packaging and their comfort level with this type of requirements.  I don't
>>> have a complete understanding of the all the things supported today by the
>>> fuel plugin system so it would be nice to get someone who is more familiar
>>> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
>>> don't think we are building fuel-library debs at this time either.  So
>>> without some work on both sides, we cannot move to just packages.
>>>
>>>
>>>> Regarding flexibility: having several versioned directories with puppet
>>>> modules on the master node, having several fuel-libraryX.Y packages
>>>> installed on the master node makes things "exquisitely convoluted" rather
>>>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>>>> rsync, etc. if you really need to do things manually. But we have
>>>> convenient service (Perestroika) which builds packages in minutes if you
>>>> need. Moreover, in the near future (by 8.0) Perestroika will be
>>>> available as an application independent from CI. So, what is wrong with
>>>> building fuel-library package? What if you want to troubleshoot nova (we
>>>> install it using packages)? Should we also use rsync for everything else
>>>> like nova, mysql, etc.?
>>>>
>>>>
>>> Yes, we do have a service like Perestroika to build packages for us.
>>> That doesn't mean everyone else does or has access to do that today.
>>> Setting up a build system is a major undertaking and making that a hard
>>> requirement to interact with our product may be a bit much for some
>>> customers.  In speaking with some support folks, there are times when files
>>> have to be munged to get around issues because there is no package, or
>>> things are on fire and they can't wait for a package to become available
>>> with a fix.  We need to be careful not to impose limits without proper
>>> justification and due diligence.  We already build the fuel-library
>>> package, so there's no reason you couldn't try switching the rsync to
>>> install the package if it's available on a mirror.  I just think you're
>>> going to run into the issues I mentioned which need to be solved before we
>>> could just mark it done.
>>>
>>> -Alex
>>>
>>>
>>>
>>>> Vladimir Kozhukalov
>>>>
>>>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
>>>> wrote:
>>>>
>>>>> I agree that we shouldn't need to sync as we should be able to just
>>>>> update the fuel-library package. That being said, I think there might be a
>>>>> few issues with this method. The first issue is with plugins and how to
>>>>> properly handle the distribution of the plugins as they may also include
>>>>> puppet code that needs to be installed on the other nodes for a deployment.
>>>>> Currently I do not believe we install the plugin packages anywhere except
>>>>> the master and when they do get installed there may be some post-install
>>>>> actions that are only valid for the master.  Another issue is being
>>>>> flexible enough to allow for deployment engineers to make custom changes
>>>>> for a given environment.  Unless we can provide an improved process to
>>>>> allow for people to provide in place modifications for an environment, we
>>>>> can't do away with the rsync.
>>>>>
>>>>> If we want to go completely down the package route (and we probably
>>>>> should), we need to make sure that all of the other pieces that currently
>>>>> go together to make a complete fuel deployment can be updated in the same
>>>>> way.
>>>>>
>>>>> -Alex
>>>>>
>>>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>>
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuklin at mirantis.com
>
>
>


-- 
Andrey Danin
adanin at mirantis.com
skype: gcon.monolake
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/dae2946c/attachment.html>

From dolph.mathews at gmail.com  Fri Sep 11 20:01:02 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Fri, 11 Sep 2015 15:01:02 -0500
Subject: [openstack-dev] [devstack][keystone][ironic] Use only Keystone
 v3 API in DevStack
In-Reply-To: <C73B35E94B12854596094F2CA866854964269C33@G4W3292.americas.hpqcorp.net>
References: <CANw6fcEH2hvooB_pagE+BPbpnATCBKzgoD+n5i1RWqsCEaQZEw@mail.gmail.com>
 <CAExZKEq7yiNWqsJUiNbyb7mofB_U_qXnGbkVRSZSyVRQnPqttA@mail.gmail.com>
 <C73B35E94B12854596094F2CA866854964269C33@G4W3292.americas.hpqcorp.net>
Message-ID: <CAC=h7gWPsgE3GkKvEqPAsK2=9TkLneSuqN9LCCLPMxO2zdEk5A@mail.gmail.com>

On Fri, Sep 11, 2015 at 2:55 PM, Yee, Guang <guang.yee at hpe.com> wrote:

> Can you please elaborate on "granularity of policy support within
> Ironic."? Is there a blueprint/etherpad we can take a look?
>

See the lack of granularity expressed by Ironic's current policy file:


https://github.com/openstack/ironic/blob/5671e7c2df455f97ef996c47c9c4f461a82e1c38/etc/ironic/policy.json
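For readers not following the link: at that commit the policy file amounted
to a handful of blanket rules, roughly of this shape (paraphrased, not a
verbatim copy of the file):

```json
{
    "admin_api": "role:admin or role:administrator",
    "public_api": "",
    "trusted_call": "rule:admin_api",
    "default": "rule:trusted_call"
}
```

In other words, every request effectively reduces to an admin/non-admin
check, which is the lack of granularity being discussed.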


>
>
> Guang
>
>
> -----Original Message-----
> From: Devananda van der Veen [mailto:devananda.vdv at gmail.com]
> Sent: Friday, September 11, 2015 10:25 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [devstack][keystone][ironic] Use only
> Keystone v3 API in DevStack
>
> We (the Ironic team) have talked a couple times about keystone /v3 support
> and about improving the granularity of policy support within Ironic. No one
> stepped up to work on these specifically, and they weren't prioritized
> during Liberty ... but I think everyone agreed that we should get on with
> the keystone v3 relatively soon.
>
> If Ironic is the only integrated project that doesn't support v3 yet, then
> yea, we should get on that as soon as M opens.
>
> -Devananda
>
> On Fri, Sep 11, 2015 at 9:45 AM, Davanum Srinivas <davanum at gmail.com>
> wrote:
> > Hi,
> >
> > Short story/question:
> > Is keystone /v3 support important to the ironic team? For Mitaka i guess?
> >
> > Long story:
> > The previous discussion - guidance from keystone team on magnum
> > (http://markmail.org/message/jchf2vj752jdzfet) motivated me to dig
> > into the experimental job we have in devstack for full keystone v3 api
> > and ended up with this review.
> >
> > https://review.openstack.org/#/c/221300/
> >
> > So essentially that rips out v2 keystone pipeline *except* for ironic
> jobs.
> > as ironic has some hard-coded dependencies on the keystone /v2 api. I've
> > logged a bug here:
> > https://bugs.launchpad.net/ironic/+bug/1494776
> >
> > Note that review above depends on Jamie's tempest patch which had some
> > hard coded /v2 dependency as well
> > (https://review.openstack.org/#/c/214987/)
> >
> > follow up question:
> > Does anyone know of anything else that does not work with /v3?
> >
> > Thanks,
> > Dims
> >
> > --
> > Davanum Srinivas :: https://twitter.com/dims
> >
> >
>
>
>
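For context on what "hard-coded /v2 dependencies" usually means in
practice: the token-request bodies of the two Identity APIs are
structurally different, so code that builds a v2 payload by hand cannot
simply be pointed at a /v3 endpoint. A minimal sketch of the two shapes
(field names per the Identity v2.0 and v3 APIs; values illustrative):

```python
# Token request body for POST /v2.0/tokens (Identity v2.0).
def v2_auth_payload(username, password, tenant):
    return {"auth": {"passwordCredentials": {"username": username,
                                             "password": password},
                     "tenantName": tenant}}


# Token request body for POST /v3/auth/tokens (Identity v3):
# users and projects are now scoped by domain.
def v3_auth_payload(username, password, project, domain="Default"):
    return {"auth": {
        "identity": {"methods": ["password"],
                     "password": {"user": {"name": username,
                                           "domain": {"name": domain},
                                           "password": password}}},
        "scope": {"project": {"name": project,
                              "domain": {"name": domain}}}}}
```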
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/f68d466d/attachment.html>

From devananda.vdv at gmail.com  Fri Sep 11 20:21:38 2015
From: devananda.vdv at gmail.com (Devananda van der Veen)
Date: Fri, 11 Sep 2015 13:21:38 -0700
Subject: [openstack-dev] [devstack][keystone][ironic] Use only Keystone
 v3 API in DevStack
In-Reply-To: <CAC=h7gWPsgE3GkKvEqPAsK2=9TkLneSuqN9LCCLPMxO2zdEk5A@mail.gmail.com>
References: <CANw6fcEH2hvooB_pagE+BPbpnATCBKzgoD+n5i1RWqsCEaQZEw@mail.gmail.com>
 <CAExZKEq7yiNWqsJUiNbyb7mofB_U_qXnGbkVRSZSyVRQnPqttA@mail.gmail.com>
 <C73B35E94B12854596094F2CA866854964269C33@G4W3292.americas.hpqcorp.net>
 <CAC=h7gWPsgE3GkKvEqPAsK2=9TkLneSuqN9LCCLPMxO2zdEk5A@mail.gmail.com>
Message-ID: <CAExZKErvB66y3M0Fz+82B2g-dck6tZWb=2mt3fBY1_REozea=g@mail.gmail.com>

There have been informal discussions at various times around how differently
privileged users might use Ironic for different things. It would be great
if our API supported policy settings that corresponded to, let's say, a
junior support engineer's read-only access, or a DC technician's need to
perform maintenance on a server without granting them admin access to the
whole cloud. Things like that... but nothing formal has been written yet.
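As a purely hypothetical illustration of what such granularity could look
like in oslo.policy terms (rule and role names invented here; nothing like
this existed in Ironic's policy file at the time):

```json
{
    "is_admin": "role:admin",
    "baremetal:node:get": "rule:is_admin or role:support_readonly",
    "baremetal:node:set_power_state": "rule:is_admin or role:dc_technician",
    "baremetal:node:set_maintenance": "rule:is_admin or role:dc_technician",
    "baremetal:node:create": "rule:is_admin",
    "baremetal:node:delete": "rule:is_admin"
}
```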

On Fri, Sep 11, 2015 at 1:01 PM, Dolph Mathews <dolph.mathews at gmail.com>
wrote:

>
> On Fri, Sep 11, 2015 at 2:55 PM, Yee, Guang <guang.yee at hpe.com> wrote:
>
>> Can you please elaborate on "granularity of policy support within
>> Ironic."? Is there a blueprint/etherpad we can take a look?
>>
>
> See the lack of granularity expressed by Ironic's current policy file:
>
>
> https://github.com/openstack/ironic/blob/5671e7c2df455f97ef996c47c9c4f461a82e1c38/etc/ironic/policy.json
>
>
>>
>>
>> Guang
>>
>>
>> -----Original Message-----
>> From: Devananda van der Veen [mailto:devananda.vdv at gmail.com]
>> Sent: Friday, September 11, 2015 10:25 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [devstack][keystone][ironic] Use only
>> Keystone v3 API in DevStack
>>
>> We (the Ironic team) have talked a couple times about keystone /v3
>> support and about improving the granularity of policy support within
>> Ironic. No one stepped up to work on these specifically, and they weren't
>> prioritized during Liberty ... but I think everyone agreed that we should
>> get on with the keystone v3 relatively soon.
>>
>> If Ironic is the only integrated project that doesn't support v3 yet,
>> then yea, we should get on that as soon as M opens.
>>
>> -Devananda
>>
>> On Fri, Sep 11, 2015 at 9:45 AM, Davanum Srinivas <davanum at gmail.com>
>> wrote:
>> > Hi,
>> >
>> > Short story/question:
>> > Is keystone /v3 support important to the ironic team? For Mitaka i
>> guess?
>> >
>> > Long story:
>> > The previous discussion - guidance from keystone team on magnum
>> > (http://markmail.org/message/jchf2vj752jdzfet) motivated me to dig
>> > into the experimental job we have in devstack for full keystone v3 api
>> > and ended up with this review.
>> >
>> > https://review.openstack.org/#/c/221300/
>> >
>> > So essentially that rips out v2 keystone pipeline *except* for ironic
>> jobs.
>> > as ironic has some hard-coded dependencies on the keystone /v2 api. I've
>> > logged a bug here:
>> > https://bugs.launchpad.net/ironic/+bug/1494776
>> >
>> > Note that review above depends on Jamie's tempest patch which had some
>> > hard coded /v2 dependency as well
>> > (https://review.openstack.org/#/c/214987/)
>> >
>> > follow up question:
>> > Does anyone know of anything else that does not work with /v3?
>> >
>> > Thanks,
>> > Dims
>> >
>> > --
>> > Davanum Srinivas :: https://twitter.com/dims
>> >
>> >
>>
>>
>>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/96f585a6/attachment.html>

From zabolzadeh at gmail.com  Fri Sep 11 20:25:42 2015
From: zabolzadeh at gmail.com (Hossein Zabolzadeh)
Date: Sat, 12 Sep 2015 00:55:42 +0430
Subject: [openstack-dev] [UX] Creating account at Invision
Message-ID: <CAMadfcx2aDZyZJ0en03jfWdUknyqRfP2DEiehRJQ_MSjfzJESg@mail.gmail.com>

Hi,
I want to have an account at Invision.
Thanks in advance to anyone with the right privileges who can create a new
account for me there.
I sent a request to the Horizon IRC channel, but it yielded no result.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/eeda7301/attachment.html>

From rmeggins at redhat.com  Fri Sep 11 20:26:02 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Fri, 11 Sep 2015 14:26:02 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F27CB7.2040101@redhat.com>
References: <55F27CB7.2040101@redhat.com>
Message-ID: <55F338DA.2040004@redhat.com>

On 09/11/2015 01:03 AM, Gilles Dubreuil wrote:
> Hi,
>
> Today in the #openstack-puppet channel a discussion about the pro and
> cons of using domain parameter for Keystone V3 has been left opened.
>
> The context
> ------------
> Domain names are needed in OpenStack Keystone V3 for identifying users
> or groups (of users) within different projects (tenant).
> Users and groups are uniquely identified within a domain (or a realm as
> opposed to project domains).
> Then projects have their own domain so users or groups can be assigned
> to them through roles.
>
> In Kilo, Keystone V3 was introduced as an experimental feature.
> Puppet providers such as keystone_tenant, keystone_user, and
> keystone_role_user have been adapted to support it.
> Also new ones have appeared (keystone_domain) or are on their way
> (keystone_group, keystone_trust).
> And to be backward compatible with V2, the default domain is used when
> no domain is provided.
>
> In existing providers such as keystone_tenant, the domain can be either
> part of the name or provided as a parameter:
>
> A. The 'composite namevar' approach:
>
>     keystone_tenant {'projectX::domainY': ... }
>   B. The 'meaningless name' approach:
>
>    keystone_tenant {'myproject': name => 'projectX', domain => 'domainY', ...}
>
> Notes:
>   - Actually using both combined should work too with the domain
> supposedly overriding the name part of the domain.
>   - Please look at [1] this for some background between the two approaches:
>
> The question
> -------------
> Decide between the two approaches, the one we would like to retain for
> puppet-keystone.
>
> Why does it matter?
> ---------------
> 1. Domain names are mandatory in every user, group or project. Besides
> the backward compatibility period mentioned earlier, where no domain
> means using the default one.
> 2. Long term impact
> 3. The two approaches are not completely equivalent, with different
> consequences for future usage.
> 4. Being consistent
> 5. Therefore the community to decide
>
> The two approaches are not technically equivalent and it also depends
> what a user might expect from a resource title.
> See some of the examples below.
>
> Because OpenStack DB tables have IDs to uniquely identify objects, the DB
> can hold several objects of the same family with the same name.
> This has made it difficult for Puppet resources to guarantee the
> idempotency of unique resources.
> In the context of Keystone V3 domain, hopefully this is not the case for
> the users, groups or projects but unfortunately this is still the case
> for trusts.
>
> Pros/Cons
> ----------
> A.
>    Pros
>      - Easier names
>    Cons
>      - Titles have no meaning!
>      - Cases where 2 or more resources could exist
>      - More difficult to debug
>      - Titles mismatch when listing the resources (self.instances)
>
> B.
>    Pros
>      - Unique titles guaranteed
>      - No ambiguity between resource found and their title
>    Cons
>      - More complicated titles
>
> Examples
> ----------
> = Meaningless name example 1=
> Puppet run:
>    keystone_tenant {'myproject': name => 'project_A', domain => 'domain_1', ...}
>
> Second run:
>    keystone_tenant {'myproject': name => 'project_A', domain => 'domain_2', ...}
>
> Result/Listing:
>
>    keystone_tenant { 'project_A::domain_1':
>      ensure  => 'present',
>      domain  => 'domain_1',
>      enabled => 'true',
>      id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>    }
>     keystone_tenant { 'project_A::domain_2':
>      ensure  => 'present',
>      domain  => 'domain_2',
>      enabled => 'true',
>      id      => '4b8255591949484781da5d86f2c47be7',
>    }
>
> = Composite name example 1  =
> Puppet run:
>    keystone_tenant {'project_A::domain_1': ...}
>
> Second run:
>    keystone_tenant {'project_A::domain_2': ...}
>
> # Result/Listing
>    keystone_tenant { 'project_A::domain_1':
>      ensure  => 'present',
>      domain  => 'domain_1',
>      enabled => 'true',
>      id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>     }
>    keystone_tenant { 'project_A::domain_2':
>      ensure  => 'present',
>      domain  => 'domain_2',
>      enabled => 'true',
>      id      => '4b8255591949484781da5d86f2c47be7',
>     }
>
> = Meaningless name example 2  =
> Puppet run:
>    keystone_tenant {'myproject1': name => 'project_A', domain => 'domain_1', ...}
>    keystone_tenant {'myproject2': name => 'project_A', domain => 'domain_1',
> description => 'blah', ...}
>
> Result: project_A in domain_1 has a description
>
> = Composite name example 2  =
> Puppet run:
>    keystone_tenant {'project_A::domain_1': ...}
>    keystone_tenant {'project_A::domain_1': description => 'blah', ...}
>
> Result: Error because the resource must be unique within a catalog
>
> My vote
> --------
> I would love to have approach A for its easier names.
> But I've seen the challenge of maintaining the providers behind the
> curtains, and the confusion it creates around names/titles when we're not
> sure which domain we're dealing with.
> Also I believe that supporting self.instances consistently with
> meaningful names is saner.
> Therefore I vote B

+1

Although, in my limited testing, I have not been able to get this to 
work with Puppet 3.8.  I've been following the link below to create a 
keystone_tenant provider with multiple namevars (name and domain).  I 
still can't figure out how to get puppet to think that

   keystone_tenant {'myproject1': name => 'project_A', domain => 'domain_1', ...}
   keystone_tenant {'myproject2': name => 'project_A', domain => 'domain_2', ...}

are different, distinct, unique projects.

I think it's going to take someone with a _lot_ of Puppet/Ruby 
experience and a lot of time to develop something that will work for a 
wide variety of scenarios.
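For what it's worth, the composite-title half of the problem is mechanical;
the genuinely hard part is what Rich describes, namely making Puppet treat
(name, domain) pairs as distinct resources. Here is a sketch of just the
title-splitting step, in plain Python for illustration — the real provider
would do this in Ruby, hooked into the type's self.title_patterns, and this
is not the actual puppet-keystone code:

```python
# Parse a composite title like 'project_A::domain_1' into its parts.
# A bare title falls back to the default domain, mirroring the V2
# backward-compatibility behavior described earlier in the thread.
def parse_tenant_title(title, default_domain="Default"):
    name, sep, domain = title.partition("::")
    return {"name": name, "domain": domain if sep else default_domain}
```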

>
> Finally
> ------
> Thanks for reading that far!
> To choose, please provide feedback with more pros/cons, examples and
> your vote.
>
> Thanks,
> Gilles
>
>
> PS:
> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc



From stevemar at ca.ibm.com  Fri Sep 11 20:54:34 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Fri, 11 Sep 2015 16:54:34 -0400
Subject: [openstack-dev] [keystone][ptl] PTL Candidacy
Message-ID: <201509112054.t8BKsg7m005133@d03av05.boulder.ibm.com>


Hey everyone,

After contributing consistently to Keystone since the Grizzly release, I'd
like to run for the Keystone PTL position for the Mitaka release cycle.

I've been a core contributor to Keystone since Icehouse and have largely
been focused on improving Keystone's ability to integrate with enterprise
environments. I was a significant contributor to Keystone's federation
support, adding capabilities such as SAML and OpenID Connect enablement and
Keystone-to-Keystone federation support for hybrid clouds, and I have had
the pleasure of collaborating with folks from CERN, Rackspace, HP, Red Hat,
and U. Kent on all these initiatives. In addition I have added cloud
auditing support to Keystone. In my spare time, I have served as a core
contributor to OpenStack Client, Oslo Policy, Oslo Cache and pyCADF. I've
also contributed small patches to various other OpenStack projects, such as
Docs, Horizon, Oslo, Infra, DevStack and whatever else was needed.

All of this would be for naught if it weren't for the exceptional Keystone
core team and its extended team. They are truly fantastic folks, and I've
been honored to serve under Morgan and Dolph for the last two and a half
years. Thanks to their mentoring I feel this is the right time for me to
serve as PTL. I am fortunate enough to work for an employer that would
allow me to focus 100% of my time on the role of PTL.

I've also helped many new developers contribute to OpenStack and Keystone,
and have always tried to be available to other OpenStack teams to ensure
the other projects have the support from Keystone they need in order to
succeed.

Some of my goals for the Mitaka cycle are:
  - Continue our track record of striving to be an extremely stable project
  - Strive to make v3 the go-to API and finally deprecate v2.0!
  - Improvements on the federated identity use-cases
  - Continue the work being done in Hierarchical multi-tenancy and Reseller
  - Release a version of keystoneclient that no longer includes the
session/auth/CLI code
  - General cleanup and paying down technical debt:
    - Deprecate PKI tokens in favor of Fernet tokens
    - Remove the concept of extensions, and instead mark features as
experimental or stable
    - Create a functional test suite for more advanced Keystone
configurations

Finally, I think it's important that, as PTL, time is spent on non-technical
duties such as improving the growth and vitality of the OpenStack community
in the following ways:
  - Actively seek out the input and feedback from operators and deployers
  - Mentoring others to become future core contributors and project team
leads
  - Ensuring I act as a point of contact for other OpenStack projects
  - Continue to foster the healthy environment we have created in the
Keystone team and OpenStack as a whole

Thanks,

Steve Martinelli
OpenStack Keystone Core
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/7c086e01/attachment.html>

From emilien at redhat.com  Fri Sep 11 21:01:43 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 11 Sep 2015 17:01:43 -0400
Subject: [openstack-dev] [puppet] PTL Candidacy
Message-ID: <55F34137.8050706@redhat.com>

( Also posted on https://review.openstack.org/222767 )

Moving Puppet OpenStack modules under the big tent was an amazing
opportunity for us to make sure our project remains Open [1].
Liberty was our first cycle where we were part of OpenStack and we can
be proud of what we achieved together [2].

We've built a community coming from two different worlds: developers and
operators. We need to keep that, because I think that's what is making
the difference today when you're deploying OpenStack: you need a short
loop feedback between both worlds.

Being part of OpenStack is not easy, and we have great challenges
in front of us.

* Documentation
I would like to put emphasis on having more documentation
so we can more easily welcome new contributors and hopefully get more
adoption.
I truly believe more documentation will help our contributors
to get quickly involved and eventually give a chance to scale-up our team.
Our users deserve more guidance to understand best practices when using
our modules, that should also be part of this effort.

* Continuous Integration
We did a lot of work on CI, on both beaker & integration jobs. I would
like to continue integration work and test more OpenStack modules.
I would like to continue collaboration with Tempest team and keep using
it for testing. I'm also interested in multi-node and upgrade testing,
which would strengthen how we develop the modules.

* Release management
I would like to reach a better release velocity by trying to stay close
to OpenStack releases (especially from packaging).
As soon as major distributions release stable packaging, I think we
should provide a release.

* Community
I would like to continue the collaboration with other projects, mostly
OpenStack Infrastructure (for Continuous Integration work), TripleO,
Fuel, Kolla (container integration), Documentation, Tempest (for
puppet-openstack-integration work) and packagers (Ubuntu Cloud Archive
and RDO teams).
This collaboration is making OpenStack better and is the reason for our
success today. We need to continue that way by coordinating groups and
maintaining good communications.


I had the immense pleasure to lead our team during the last few months
and I would like to continue my role of PTL for the next cycle.
Thank you for your time and consideration,


[1] https://wiki.openstack.org/wiki/Open
[2] http://my1.fr/blog/liberty-cycle-retrospective-in-puppet-openstack/

-- 
Emilien Macchi
https://wiki.openstack.org/wiki/User:Emilienm

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/f6532470/attachment.pgp>

From mestery at mestery.com  Fri Sep 11 21:12:36 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Fri, 11 Sep 2015 16:12:36 -0500
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
Message-ID: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>

I'm writing to let everyone know that I do not plan to run for Neutron PTL
for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
recently put it in his non-candidacy email [1]. But it goes further than
that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
full time job. In the case of Neutron, it's more than a full time job; it's
literally an always-on job.

I've tried really hard over my three cycles as PTL to build a stronger web
of trust so the project can grow, and I feel that's been accomplished. We
have a strong bench of future PTLs and leaders ready to go, I'm excited to
watch them lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered
the concept of rotating PTL duties with each cycle, I'd like to highly
encourage Neutron and other projects to do the same. Having a deep bench of
leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/918d9e01/attachment.html>

From rodrigodsousa at gmail.com  Fri Sep 11 21:29:43 2015
From: rodrigodsousa at gmail.com (Rodrigo Duarte)
Date: Fri, 11 Sep 2015 18:29:43 -0300
Subject: [openstack-dev] [keystone] PTL non-candidacy
In-Reply-To: <C73B35E94B12854596094F2CA866854964269C0C@G4W3292.americas.hpqcorp.net>
References: <CAGnj6auS+0qFq0EvdtUxdLYFVm_ihc17K9q2sZ1zT63SJV83Hw@mail.gmail.com>
 <C73B35E94B12854596094F2CA866854964269C0C@G4W3292.americas.hpqcorp.net>
Message-ID: <CAAJsUKJGaKx3+aLSWFzR77byA2ZxqjO5w0pcUSz4UjeA1uYN-A@mail.gmail.com>

Thanks Morgan, it was a pleasure to have you contributing as PTL while
working with Keystone.

On Fri, Sep 11, 2015 at 4:52 PM, Yee, Guang <guang.yee at hpe.com> wrote:

> Morgan, thanks for all your hard work. It's been an honor to have you as
> our PTL.
>
>
>
> "All the world's a stage,?
>
>
>
> Now sit back, relax, grab a drink, and enjoy the show. :)
>
>
>
>
>
> Guang
>
>
>
>
>
> *From:* Morgan Fainberg [mailto:morgan.fainberg at gmail.com]
> *Sent:* Thursday, September 10, 2015 2:41 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [keystone] PTL non-candidacy
>
>
>
> As I outlined (briefly) in my recent announcement of changes (
> https://www.morganfainberg.com/blog/2015/09/09/openstack-career-act-3-scene-1/
> ) I will not be running for PTL of Keystone this next cycle (Mitaka). The
> role of PTL is a difficult but extremely rewarding job. It has been amazing
> to see both Keystone and OpenStack grow.
>
>
>
> I am very pleased with the accomplishments of the Keystone development
> team over the last year. We have seen improvements with Federation,
> Keystone-to-Keystone Federation, Fernet Tokens, improvements of testing,
> releasing a dedicated authentication library, cross-project initiatives
> around improving the Service Catalog, and much, much more. I want to thank
> each and every contributor for the hard work that was put into Keystone and
> its associated projects.
>
>
>
> While I will be changing my focus to spend more time on the general needs
> of OpenStack and working on the Public Cloud story, I am confident in those
> who can, and will, step up to the challenges of leading development of
> Keystone and the associated projects. I may be working across more
> projects, but you can be assured I will be continuing to work hard to see
> the initiatives I helped start through. I wish the best of luck to the next
> PTL.
>
>
>
> I guess this is where I get to write a lot more code soon!
>
>
>
> See you all (in person) in Tokyo!
>
> --Morgan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
MSc in Computer Science
http://rodrigods.com <http://lsd.ufcg.edu.br/%7Erodrigods>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/ce6d6ab5/attachment.html>

From jim at geekdaily.org  Fri Sep 11 21:30:35 2015
From: jim at geekdaily.org (Jim Meyer)
Date: Fri, 11 Sep 2015 14:30:35 -0700
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
In-Reply-To: <CALrdpTVmT1-=33WF=-EZO4A-on2ds6u69i0mpJj6UPHwq+P5vw@mail.gmail.com>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
 <CALrdpTVmT1-=33WF=-EZO4A-on2ds6u69i0mpJj6UPHwq+P5vw@mail.gmail.com>
Message-ID: <3D683933-0978-49EE-82AE-3CADC677B7EE@geekdaily.org>

On Sep 11, 2015, at 12:45 PM, Shamail Tahir <itzshamail at gmail.com> wrote:
> On Fri, Sep 11, 2015 at 3:26 PM, Joshua Harlow <harlowja at outlook.com <mailto:harlowja at outlook.com>> wrote:
> Hi all,
> 
> I was reading over the TC IRC logs for this week (my weekly reading) and I just wanted to let my thoughts and comments be known on:
> 
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309 <http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309>
> 
> I feel it's very important to send a positive note for new/upcoming projects and libraries... (and for everyone to remember that most projects do start off with a small set of backers). So I just wanted to try to ensure that we send a positive note with any tag like this that gets created and applied and that we all (especially the TC) really really considers the negative connotations of applying that tag to a project (it may effectively ~kill~ that project).
> 
> I would really appreciate that instead of just applying this tag (or other similarly named tag to projects) that instead the TC try to actually help out projects with those potential tags in the first place (say perhaps by actively listing projects that may need more contributors from a variety of companies on the openstack blog under say a 'HELP WANTED' page or something). I'd much rather have that vs. any said tags, because the latter actually tries to help projects, vs just stamping them with a 'you are bad, figure out how to fix yourself, because you are not diverse' tag.
> 
> I believe it is the TC job (in part) to help make the community better, and not via tags like this that IMHO actually make it worse; I really hope that folks on the TC can look back at their own projects they may have created and ask how would their own project have turned out if they were stamped with a similar tag?

First, strongly agree:

Tags should be positive attributes or encouragement, not negative or discouraging. I think they should also be as objectively true as possible, which Monty Taylor said later[1] in the discussion and Jay Pipes reiterated[2].

> I agree with Josh and, furthermore, maybe a similar "warning" could be implicitly made by helping the community understand why the "diverse-affiliation" tag matters.  If we (through education on tags in general) stated that the reason diverse-affiliation matters, amongst other things, is because it shows that the project can potentially survive a single contributor changing their involvement then wouldn't that achieve the same purpose of showing stability/mindshare/collaboration for projects with diverse-affiliation tag (versus those that don't have it) and make them more "preferred" in a sense?

I think I agree with others, most notably Doug Hellman[3] in the TC discussion; we need a marker of the other end of the spectrum. The absence of information is only significant if you know what's missing and its importance.

Separately, I agree that more education around tags and their importance is needed.

I understand the concern is that we want to highlight the need for diversity, and I believe that instead of "danger-not-diverse" we'd be better served by "increase-diversity" or "needs-diversity" as the other end of the spectrum from "diverse-affiliation." And I'll go rant on the review now[4]. =]

-j

[1] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-378 <http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-378>
[2] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-422 <http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-422>
[3] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-330 <http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-330>
[4] https://review.openstack.org/#/c/218725/ <https://review.openstack.org/#/c/218725/>


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/7b2c5b44/attachment.html>

From brandon.logan at RACKSPACE.COM  Fri Sep 11 21:31:33 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Fri, 11 Sep 2015 21:31:33 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <1442007096.14240.2.camel@localhost>

Kyle,
This news saddens me, but I completely understand.  You've been a great
PTL and I appreciate everything you have done for Neutron.  Enjoy your
newfound free time after this.

Thanks,
Brandon
On Fri, 2015-09-11 at 16:12 -0500, Kyle Mestery wrote:
> I'm writing to let everyone know that I do not plan to run for Neutron
> PTL for a fourth cycle. Being a PTL is a rewarding but difficult job,
> as Morgan recently put it in his non-candidacy email [1]. But it goes
> further than that for me. As Flavio put it in his post about "Being a
> PTL" [2], it's a full time job. In the case of Neutron, it's more than
> a full time job, it's literally an always on job.
> 
> I've tried really hard over my three cycles as PTL to build a stronger
> web of trust so the project can grow, and I feel that's been
> accomplished. We have a strong bench of future PTLs and leaders ready
> to go, I'm excited to watch them lead and help them in any way I can.
> 
> 
> As was said by Zane in a recent email [3], while Heat may have
> pioneered the concept of rotating PTL duties with each cycle, I'd like
> to highly encourage Neutron and other projects to do the same. Having
> a deep bench of leaders supporting each other is important for the
> future of all projects.
> 
> 
> See you all in Tokyo!
> 
> Kyle
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From blak111 at gmail.com  Fri Sep 11 21:35:40 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Fri, 11 Sep 2015 14:35:40 -0700
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CAO_F6JMQLpBKLdeATswrYb1ZsAZ8QxFHLV15jVccd96eD0NYcg@mail.gmail.com>

This has the words "PTL" and "Candidacy" in the subject. I think that's
enough to make it on the ballot!

On Fri, Sep 11, 2015 at 2:12 PM, Kyle Mestery <mestery at mestery.com> wrote:

> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in any way I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/1acf1eec/attachment.html>

From gord at live.ca  Fri Sep 11 21:36:59 2015
From: gord at live.ca (gord chung)
Date: Fri, 11 Sep 2015 17:36:59 -0400
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
 concerns
In-Reply-To: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
Message-ID: <BLU437-SMTP17784BED2B7522355E9473DE500@phx.gbl>



On 11/09/2015 3:26 PM, Joshua Harlow wrote:
> Hi all,
>
> I was reading over the TC IRC logs for this week (my weekly reading) 
> and I just wanted to let my thoughts and comments be known on:
>
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309 
>
>
> I feel it's very important to send a positive note for new/upcoming 
> projects and libraries... (and for everyone to remember that most 
> projects do start off with a small set of backers). So I just wanted 
> to try to ensure that we send a positive note with any tag like this 
> that gets created and applied and that we all (especially the TC) 
> really really considers the negative connotations of applying that tag 
> to a project (it may effectively ~kill~ that project).
>
> I would really appreciate that instead of just applying this tag (or 
> other similarly named tag to projects) that instead the TC try to 
> actually help out projects with those potential tags in the first 
> place (say perhaps by actively listing projects that may need more 
> contributors from a variety of companies on the openstack blog under 
> say a 'HELP WANTED' page or something). I'd much rather have that vs. 
> any said tags, because the latter actually tries to help projects, vs 
> just stamping them with a 'you are bad, figure out how to fix 
> yourself, because you are not diverse' tag.
>
> I believe it is the TC job (in part) to help make the community 
> better, and not via tags like this that IMHO actually make it worse; I 
> really hope that folks on the TC can look back at their own projects 
> they may have created and ask how would their own project have turned 
> out if they were stamped with a similar tag...
>
completely agree with everything here... i made a comment on the 
patch[1] regarding this and was told that the purpose of the tag was to 
note the potential fragility of a project if the leading company were to 
decide to pull out. this seems like a valid item to track, but with that 
said, the existing wording of the proposal does not convey that.

[1] https://review.openstack.org/#/c/218725/

cheers,

-- 
gord



From mark at mcclain.xyz  Fri Sep 11 21:55:38 2015
From: mark at mcclain.xyz (Mark McClain)
Date: Fri, 11 Sep 2015 17:55:38 -0400
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <B0BCEDBE-41ED-4D78-91AA-454C41A2B5DA@mcclain.xyz>


> On Sep 11, 2015, at 5:12 PM, Kyle Mestery <mestery at mestery.com> wrote:
> 
> I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.
> 
> I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, I'm excited to watch them lead and help them in any way I can.
> 
> As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.
> 
> See you all in Tokyo!
> Kyle
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html <http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html>
> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html <http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html>
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html <http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html>

Thanks for leading the Neutron team!  The Neutron PTL role is truly demanding, and you've filled it exceptionally over the last 3 cycles.  Now enjoy some good gin in your newfound free time.

mark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/359413fa/attachment.html>

From german.eichberger at hpe.com  Fri Sep 11 22:09:38 2015
From: german.eichberger at hpe.com (Eichberger, German)
Date: Fri, 11 Sep 2015 22:09:38 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAO_F6JMQLpBKLdeATswrYb1ZsAZ8QxFHLV15jVccd96eD0NYcg@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <CAO_F6JMQLpBKLdeATswrYb1ZsAZ8QxFHLV15jVccd96eD0NYcg@mail.gmail.com>
Message-ID: <D2189F2D.174E8%german.eichberger@hpe.com>

I am with Kevin; we will just write you into the ballot!

Kyle, you rock! Thanks for all the support and help, and hit me up if you are short on gin :-)

German


From: Kevin Benton <blak111 at gmail.com<mailto:blak111 at gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron] PTL Non-Candidacy

This has the words "PTL" and "Candidacy" in the subject. I think that's enough to make it on the ballot!

On Fri, Sep 11, 2015 at 2:12 PM, Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>> wrote:
I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.

I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, I'm excited to watch them lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


From dklyle0 at gmail.com  Fri Sep 11 22:21:27 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Fri, 11 Sep 2015 16:21:27 -0600
Subject: [openstack-dev] [UX] Creating acount at Invision
In-Reply-To: <CAMadfcx2aDZyZJ0en03jfWdUknyqRfP2DEiehRJQ_MSjfzJESg@mail.gmail.com>
References: <CAMadfcx2aDZyZJ0en03jfWdUknyqRfP2DEiehRJQ_MSjfzJESg@mail.gmail.com>
Message-ID: <CAFFhzB5W-MQeqw2fybKULm278N6OYjrdeG+E3dR4q5NqXX3GPg@mail.gmail.com>

Invite sent.

On Fri, Sep 11, 2015 at 2:25 PM, Hossein Zabolzadeh
<zabolzadeh at gmail.com> wrote:
> Hi,
> I want to have an account at Invision.
> Thanks to anyone with the right privilege who can create a new account for
> me there.
> I sent a request to the Horizon IRC channel, but it yielded no result.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From ayoung at redhat.com  Fri Sep 11 22:27:20 2015
From: ayoung at redhat.com (Adam Young)
Date: Fri, 11 Sep 2015 18:27:20 -0400
Subject: [openstack-dev] [Keystone] PTL Candidacy: Adam Young
Message-ID: <55F35548.5050803@redhat.com>

My name is Adam Young and I am running for Keystone Project Technical Lead.


Why I am running:  I've been part of this project since it was in 
incubation.  During that time, I've received the benefit of several 
dedicated PTLs.  It is time for me to offer to do the hard work 
necessary to make Keystone successful.

The Keystone project gets its name from the block in an arch that is put 
into position to make the whole structure self-supporting. Remove any 
piece of the arch and the whole thing collapses.  What is notable about 
a keystone is that it bears the greatest pressure of any block in 
the arch.

The Keystone project fills that same role in OpenStack.  It provides the 
means to make OpenStack capable of architectural feats not previously 
possible.  But it also is the highest risk piece from a security 
perspective, and it is here that, as PTL, I will focus my attentions.

My goals for Keystone:

1.  Removing the bearer aspects of tokens
2.  Better delegation mechanisms to scale the management of OpenStack.
3.  Improving stability, scale, and performance.
4.  Simplify integration with external identity sources

As a member of the team, I have been frustrated by our inability to make 
progress on some of these key aspects due to workflow constraints.  As 
PTL, I will look to restructuring the code approval process to increase 
development throughput while increasing the emphasis on quality.

I've been blogging since before I started on Keystone.  It has been one 
of the key ways that I have communicated the design, criticism, and 
techniques essential for Keystone's continued success.  As PTL, I will 
continue to communicate, and to aid the other team members to 
communicate.  Keystone needs to work with the rest of the OpenStack 
project teams.

Here's the most important part: I'm going to do all of this anyway. It 
does not matter whether I am PTL or not; this is how I will work.

I'm hoping by running that I inspire other Keystone Developers to run as 
well.  We've got a great crew, and I am looking forward to being part of 
it in this upcoming release.


From pc at michali.net  Fri Sep 11 23:13:50 2015
From: pc at michali.net (Paul Michali)
Date: Fri, 11 Sep 2015 23:13:50 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <D2189F2D.174E8%german.eichberger@hpe.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <CAO_F6JMQLpBKLdeATswrYb1ZsAZ8QxFHLV15jVccd96eD0NYcg@mail.gmail.com>
 <D2189F2D.174E8%german.eichberger@hpe.com>
Message-ID: <CA+ikoRNFi-XSa4gJeFLZds+GyMLTSEsrCodz3bL_PqDSaz-SbA@mail.gmail.com>

You've done (are doing) a great job as PTL Kyle! Many thanks for all your
hard work in leaving the camp-site in better shape than when you got there
:)



On Fri, Sep 11, 2015 at 6:12 PM Eichberger, German <
german.eichberger at hpe.com> wrote:

> I am with Kevin ? we will just write you into the ballot!
>
> Kyle, you rock! Thanks for all the support and help ? and hit me up if you
> are short on gin :-)
>
> German
>
>
> From: Kevin Benton <blak111 at gmail.com<mailto:blak111 at gmail.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> Date: Friday, September 11, 2015 at 2:35 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> Subject: Re: [openstack-dev] [neutron] PTL Non-Candidacy
>
> This has the words "PTL" and "Candidacy" in the subject. I think that's
> enough to make it on the ballot!
>
> On Fri, Sep 11, 2015 at 2:12 PM, Kyle Mestery <mestery at mestery.com<mailto:
> mestery at mestery.com>> wrote:
> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in any way I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kevin Benton
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/de0b8022/attachment.html>

From amuller at redhat.com  Sat Sep 12 00:02:43 2015
From: amuller at redhat.com (Assaf Muller)
Date: Fri, 11 Sep 2015 20:02:43 -0400
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <-7968523114263657745@unknownmsgid>

Kyle, you've really done a fantastic job during your time. The community is
now much more welcoming, and I think that working on Neutron is now much
easier. We've grown to be a very positive and constructive community and
that's not always been the case. I distinctly remember many conversations
with a wide range of people about this exact topic over the last year and a
half, all praising the changes you've been leading. Kudos.

On 11 September 2015, at 17:13, Kyle Mestery <mestery at mestery.com> wrote:

I'm writing to let everyone know that I do not plan to run for Neutron PTL
for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
recently put it in his non-candidacy email [1]. But it goes further than
that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
full time job. In the case of Neutron, it's more than a full time job, it's
literally an always on job.

I've tried really hard over my three cycles as PTL to build a stronger web
of trust so the project can grow, and I feel that's been accomplished. We
have a strong bench of future PTLs and leaders ready to go, I'm excited to
watch them lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered
the concept of rotating PTL duties with each cycle, I'd like to highly
encourage Neutron and other projects to do the same. Having a deep bench of
leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/c51ed086/attachment.html>

From jim at jimrollenhagen.com  Sat Sep 12 00:12:11 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 11 Sep 2015 17:12:11 -0700
Subject: [openstack-dev] [Ironic] PTL Candidacy
Message-ID: <20150912001211.GL21846@jimrollenhagen.com>

Hi friends,

I'd like to throw in my hats (yes, all of them, I'll get to that shortly) for
the Ironic PTL election. In case you don't immediately recognize me,
I'm 'jroll' on IRC, where you can always find me.

I've been working on the Ironic project for a year and a half now, as an
architect of the first public cloud deployment of Ironic. It's
amazing to see how far we've come. When I joined the project, it could boot a
server. Now we have a laundry list of hardware drivers, support for cleaning
a node after teardown, ironic-python-agent, Bifrost, UEFI support, and so on
and so on, with plenty more on the way.

Over the last cycle or so, I've helped lead the charge on moving the project
even faster. It took much debate, but we now have fine-grained API versioning
that helps us advance our API faster and signal changes to users. We're also
beginning to embark on our new release model, which I have great hopes for:
we shipped 23 bug fixes in the 16 days between 4.0 and 4.1. I'd like to
continue serving our users by releasing frequently.
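
The fine-grained versioning mentioned above is the "microversion" scheme: a client asks for a specific API version on each request, and the server either honors it or rejects versions outside its supported range. A minimal, purely illustrative sketch of that negotiation logic (this is not Ironic's actual code, and the version numbers here are hypothetical):

```python
# Illustrative microversion negotiation, loosely modeled on the scheme
# Ironic adopted. MIN/MAX values below are hypothetical examples.

MIN_VERSION = (1, 1)   # oldest microversion the server still supports
MAX_VERSION = (1, 22)  # newest microversion the server implements


def parse_version(header):
    """Parse an 'X.Y' version string into a (major, minor) tuple."""
    major, minor = header.split(".")
    return int(major), int(minor)


def negotiate(requested):
    """Return the version to serve, or raise if it is unsupported."""
    if requested == "latest":
        return MAX_VERSION
    version = parse_version(requested)
    if not (MIN_VERSION <= version <= MAX_VERSION):
        raise ValueError(
            "Version %s.%s is not supported; supported range is "
            "%s.%s to %s.%s" % (version + MIN_VERSION + MAX_VERSION))
    return version


print(negotiate("latest"))  # newest version the server implements
print(negotiate("1.5"))     # an in-range request is honored as-is
```

The point of the scheme is that old clients keep pinning an old version while the API keeps moving, which is what lets the team "advance the API faster and signal changes to users."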

Back to the hats: I've worn many while working on Ironic, so I think I have
a unique perspective on the different aspects of the project.

With my deployer/operator hat on:

* Let's make deployments easier. Deploying Ironic with the rest of OpenStack
  is complex, and we need to improve the docs and tools around that. We should
  also point people at Bifrost more often, when they just need the Cobbler use
  case of spinning up a bunch of metal.

* Let's make operators' lives better. Ironic has cases where things can get
  into bad states. Let's document those (or better yet, fix them!). We should
  also make it easier for operators to get an insight into their environment.
  Some of our logs are vague or hard to find; we need to improve that. We
  should also work on the metrics spec to give ops insight into Ironic's
  performance.

With my developer hat on:

* Let's build a good devref, similar to Nova's. It's difficult for new or
  casual contributors to find their way around the project, because we don't
  have docs on how we develop code and the conventions around doing so. For
  example, when does a patch need to bump the API version? Cores know this;
  is it documented?

* I'd also love to see us grow our core reviewer team. We mostly do a good
  job at keeping reviews timely. However, our reviewers are also fantastic
  developers, and it would be great if they had more time to write code. I
  think we should evaluate giving more people +2 power, and trusting them
  not to land code if they aren't familiar with that part of the system.

With my leader hat on:

* Let's collaborate better with Nova. In the past, we haven't done a good
  job of this, and there was even some animosity between the projects. We now
  have two Nova liaisons who help out by bringing patches to nova-core's
  attention. It's a good start, but I think we need people working on Ironic
  that follow development in Nova that could impact us; truly collaborating
  to make sure Nova doesn't break us, and vice-versa. These folks should be
  actively reviewing Nova specs and code to ensure it doesn't break our model,
  and working to bring our model back in line with what Nova expects.

* Let's also start paying better attention to cross-project initiatives in
  OpenStack; collaborating with the API working group, cross-project specs,
  etc. I've started doing this, but the community needs more people doing it.
  We should be reviewing specs from these teams and making sure we implement
  those initiatives in a timely fashion. Most recent example: Keystone v3.
  We're way behind on getting that work done.

There are other things I'd like to accomplish as well, but those are the main
pieces. As for my downstream hat, it won't be getting as much use if I'm
elected. You can look at my history as one of the most active Ironic
developers on IRC, helping people solve problems, reviewing code, and helping
design large features. I've confirmed with my employer that if I'm elected
PTL, nearly 100% of my focus will be upstream, and you'll be seeing more of
me, for better or worse. :)

Regardless of the outcome, I'm looking forward to serving the Ironic community
during Mitaka.

Thanks for reading,

// jim


From travis.tripp at hp.com  Sat Sep 12 00:22:40 2015
From: travis.tripp at hp.com (Tripp, Travis S)
Date: Sat, 12 Sep 2015 00:22:40 +0000
Subject: [openstack-dev]  [horizon] Patterns for Angular Panels
Message-ID: <98017DD2-36D5-4883-B100-9C8E6FB46B64@hp.com>


"A pattern, apart from the term's use to mean Template is a discernible regularity in the world or in a manmade design. As such, the elements of a   pattern repeat in a predictable manner." -https://en.wikipedia.org/wiki/Pattern


Hello horizon-eers,

We have made some progress towards angular development in horizon, but much of it is still invisible. During Kilo, enough effort was put forth on angular work to help us recognize the deficiencies of the existing horizon angular code structure, style, localization, and even directory layout. 

A lot of effort went into Liberty to improve all of those areas, which has now enabled a much more serious discussion on producing angular based panels for horizon. And we actually have quite a few panels pretty far along in the patch process, but pretty much stuck in a holding pattern. Why? Primarily because there isn't agreement on the coding pattern to be used for the panels.

Everybody seems to agree that we want a good enough pattern to base all the panels on. And most people would like a pattern that provides enough reusable widgets / services / etc that replicating the pattern requires a minimal amount of code with a maximum amount of flexibility.

However, one problem is that no single panel on its own constitutes a pattern. And within any line of patches for a particular panel, the attempts to extract out reusable pieces into separate patches often get blocked because they are only used in a single panel. This creates an impasse where the ability to effectively work on panels stagnates.

So, right now, the most recognizable pattern for angular panels is release after release of horizon having zero angular panels implemented.

That is a pattern that I believe must be broken.

So what can we do about it? Here are a few options:

1) Formalize a status of "experimental" and allow a limited number of disabled panels to merge with refactoring allowed
2) Immediately create a relatively short lived "Angular Panel" feature branch for some cross panel work.
3) Establish a new angular repo with additional cores for angular based features with a separate release mechanism


One argument says that merging in code that is initially disabled (panel disabled, workflow disabled) at least provides some real examples to draw from and actually can better enable external plugin developers, such as the app catalog work being done. It also can help to identify bugs and usability problems that may not otherwise be discovered (such as hard coded static urls and webroots) because deployers will have access to the feature. If a particular deployer wants to use it, they can enable it, potentially at their own risk. If another deployer does not want to use it until the feature has more time to bake, they do not have to use it and don't have to block other deployers that do want to use it.

A counter argument is that allowing the merge of disabled code allows undesirable patterns to replicate quickly, causing way too much time to be wasted with having to refactor everything.

The idea of a feature branch has been brought up before, but I think it was not accepted for a number of reasons: the scope and goal of such a feature branch were not clear (too narrow or too broad), and there was little belief that there would be a reasonable timeline for acceptance back to master.

We could also just create a separate repo for the angular based work (framework, dashboards, panels) and perhaps provide that as its own xstatic package (synced up to the main horizon release). A deployer desiring the angular work would deploy that package along with the base horizon release and still be able to selectively enable / disable the angular features they want. The argument against this is that it is more complicated to manage and even more likely that we could break things.


In my opinion the most effective route forward is something like this:

 1) Immediately create a feature branch for Angular Panel Pattern Establishment
 2) Allow 3 - 5 panels and their patches to be eagerly merged
 3) Use the panels to establish cross panel patterns and to find ways to simplify code re-use
 4) Extract out patches to be proposed to master as we see fit
 5) Set a goal of Mitaka M1 for at least a few panels to be merged back to master
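
Steps 1 and 4 above could be sketched roughly as follows. Note that real
OpenStack feature branches are created by the infra team in Gerrit, so the
repository and branch names below are purely illustrative.

```shell
# Hypothetical sketch of steps 1 and 4; names are illustrative only.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "master baseline"
base=$(git symbolic-ref --short HEAD)   # master (or main)
# Step 1: cut the feature branch
git checkout -q -b feature/angular-panels
git commit -q --allow-empty -m "eagerly merged panel patch"
# Step 4: extract a patch from the branch and propose it back to master
git checkout -q "$base"
git cherry-pick --allow-empty feature/angular-panels
git log --oneline
```

The point of the cherry-pick step is that only patches meeting master's
stricter review standards get proposed back, while the branch itself stays
permissive enough for pattern experimentation.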

While on the feature branch, the goal is to promote co-existence and pattern development allowing for easier collaboration between developers. This means allowing incomplete features on the branch. When merged back to master, the reviews would enforce the more stringent standards for merge guidelines, but could still allow for panels to be merged and still initially be disabled if desired.

I believe that this would create a pattern of visible progress.

Remember, perfection is the enemy of... http://pasteboard.co/zq44Y8f.png

-Travis

From corvus at inaugust.com  Sat Sep 12 00:39:18 2015
From: corvus at inaugust.com (James E. Blair)
Date: Fri, 11 Sep 2015 17:39:18 -0700
Subject: [openstack-dev] Gerrit downtime on Friday 2015-09-11 at 23:00
	UTC
In-Reply-To: <87pp287jze.fsf@meyer.lemoncheese.net> (James E. Blair's message
 of "Thu, 27 Aug 2015 09:51:33 -0700")
References: <87pp287jze.fsf@meyer.lemoncheese.net>
Message-ID: <87si6kh3nt.fsf@meyer.lemoncheese.net>

corvus at inaugust.com (James E. Blair) writes:

> On Friday, September 11 at 23:00 UTC Gerrit will be unavailable for
> about 30 minutes while we rename some projects.
>
> Existing reviews, project watches, etc, should all be carried
> over.

This has been completed without incident.

>  Currently, we plan on renaming the following projects:
>
>   stackforge/os-ansible-deployment -> openstack/openstack-ansible
>   stackforge/os-ansible-specs -> openstack/openstack-ansible-specs
>
>   stackforge/solum -> openstack/solum
>   stackforge/python-solumclient -> openstack/python-solumclient
>   stackforge/solum-specs -> openstack/solum-specs
>   stackforge/solum-dashboard -> openstack/solum-dashboard
>   stackforge/solum-infra-guestagent -> openstack/solum-infra-guestagent
>
>   stackforge/magnetodb -> openstack/magnetodb
>   stackforge/python-magnetodbclient -> openstack/python-magnetodbclient
>   stackforge/magnetodb-specs -> openstack/magnetodb-specs
>
>   stackforge/kolla -> openstack/kolla
>   stackforge/neutron-powervm -> openstack/networking-powervm

And we also moved these:

    stackforge/os-ansible-deployment -> openstack/openstack-ansible
    stackforge/os-ansible-deployment-specs -> openstack/openstack-ansible-specs

    stackforge/refstack -> openstack/refstack
    stackforge/refstack-client -> openstack/refstack-client

Thanks to everyone who pitched in to help the move go smoothly.

As a reminder, we expect this to be the last move of projects from
stackforge into openstack before we retire the stackforge/ namespace as
previously announced [1].

-Jim

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html


From vikschw at gmail.com  Sat Sep 12 01:27:40 2015
From: vikschw at gmail.com (Vikram Choudhary)
Date: Sat, 12 Sep 2015 06:57:40 +0530
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <-7968523114263657745@unknownmsgid>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <-7968523114263657745@unknownmsgid>
Message-ID: <CAFeBh8uKiibN0UTmS5P2Si3PCmRg_u76iMzr65NbeNsNzTyWqA@mail.gmail.com>

I was a little upset to hear this. You are a true leader, Kyle. It was nice
working with you.
On Sep 12, 2015 5:33 AM, "Assaf Muller" <amuller at redhat.com> wrote:

> Kyle, you've really done a fantastic job during your time. The community
> is now much more welcoming, and I think that working on Neutron is now much
> easier. We've grown to be a very positive and constructive community and
> that's not always been the case. I distinctly remember many conversations
> with a wide range of people about this exact topic over the last year and a
> half, all praising the changes you've been leading. Kudos.
>
> On 11 September 2015, at 17:13, Kyle Mestery <mestery at mestery.com> wrote:
>
> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in anyway I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/438b2033/attachment.html>

From sukhdevkapur at gmail.com  Sat Sep 12 01:28:28 2015
From: sukhdevkapur at gmail.com (Sukhdev Kapur)
Date: Fri, 11 Sep 2015 18:28:28 -0700
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CA+wZVHQD2X67odg0jnwCEY-TEfDPhNnDwP9UQZEH87Fu8UYipQ@mail.gmail.com>

Hi Kyle,

You have done wonders for the Neutron project. I hate to see you go, but I
fully understand your position. We will miss you.

Best of luck
-Sukhdev


On Fri, Sep 11, 2015 at 2:12 PM, Kyle Mestery <mestery at mestery.com> wrote:

> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in anyway I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/2f242a3a/attachment.html>

From joe.gordon0 at gmail.com  Sat Sep 12 02:21:45 2015
From: joe.gordon0 at gmail.com (Joe Gordon)
Date: Fri, 11 Sep 2015 19:21:45 -0700
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
In-Reply-To: <3D683933-0978-49EE-82AE-3CADC677B7EE@geekdaily.org>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
 <CALrdpTVmT1-=33WF=-EZO4A-on2ds6u69i0mpJj6UPHwq+P5vw@mail.gmail.com>
 <3D683933-0978-49EE-82AE-3CADC677B7EE@geekdaily.org>
Message-ID: <CAHXdxOfQPnEjhiy3OUewu5H-0PnK7h6SqMA0i8Tn601+q7QvdA@mail.gmail.com>

On Fri, Sep 11, 2015 at 2:30 PM, Jim Meyer <jim at geekdaily.org> wrote:

> On Sep 11, 2015, at 12:45 PM, Shamail Tahir <itzshamail at gmail.com> wrote:
>
> On Fri, Sep 11, 2015 at 3:26 PM, Joshua Harlow <harlowja at outlook.com>
> wrote:
>
>> Hi all,
>>
>> I was reading over the TC IRC logs for this week (my weekly reading) and
>> I just wanted to let my thoughts and comments be known on:
>>
>>
>> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>>
>> I feel it's very important to send a positive note for new/upcoming
>> projects and libraries... (and for everyone to remember that most projects
>> do start off with a small set of backers). So I just wanted to try to
>> ensure that we send a positive note with any tag like this that gets
>> created and applied and that we all (especially the TC) really really
>> considers the negative connotations of applying that tag to a project (it
>> may effectively ~kill~ that project).
>>
>> I would really appreciate that instead of just applying this tag (or
>> other similarly named tag to projects) that instead the TC try to actually
>> help out projects with those potential tags in the first place (say perhaps
>> by actively listing projects that may need more contributors from a variety
>> of companies on the openstack blog under say a 'HELP WANTED' page or
>> something). I'd much rather have that vs. any said tags, because the latter
>> actually tries to help projects, vs just stamping them with a 'you are bad,
>> figure out how to fix yourself, because you are not diverse' tag.
>>
>> I believe it is the TC's job (in part) to help make the community better,
>> and not via tags like this that IMHO actually make it worse; I really hope
>> that folks on the TC can look back at their own projects they may have
>> created and ask how would their own project have turned out if they were
>> stamped with a similar tag?
>
>
> First, strongly agree:
>
> *Tags should be positive attributes or encouragement, not negative or
> discouraging. *I think they should also be as objectively true as
> possible. Which Monty Taylor said later[1] in the discussion and Jay Pipes
> reiterated[2].
>
> I agree with Josh and, furthermore, maybe a similar "warning" could be
> implicitly made by helping the community understand why the
> "diverse-affiliation" tag matters.  If we (through education on tags in
> general) stated that the reason diverse-affiliation matters, amongst other
> things, is because it shows that the project can potentially survive a
> single contributor changing their involvement then wouldn't that achieve
> the same purpose of showing stability/mindshare/collaboration for projects
> with diverse-affiliation tag (versus those that don't have it) and make
> them more "preferred" in a sense?
>
>
> I think I agree with others, most notably Doug Hellman[3] in the TC
> discussion; we need a marker of the other end of the spectrum. The absence
> of information is only significant if you know what's missing and its
> importance.
>
> Separately, I agree that more education around tags and their importance
> is needed.
>
> I understand the concern is that we want to highlight the need for
> diversity, and I believe that instead of "danger-not-diverse" we'd be
> better served by "increase-diversity" or "needs-diversity" as the other end
> of the spectrum from "diverse-affiliation." And I'll go rant on the review
> now[4]. =]
>

Thank you for actually providing a review of the patch. I will respond to
the feedback in gerrit.



>
> -j
>
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-378
> [2]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-422
> [3]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-330
> [4] https://review.openstack.org/#/c/218725/
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150911/73d4ce16/attachment.html>

From ichihara.hirofumi at gmail.com  Sat Sep 12 03:04:25 2015
From: ichihara.hirofumi at gmail.com (Ichihara Hirofumi)
Date: Sat, 12 Sep 2015 12:04:25 +0900
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CAKrd4DN=H3DDwM6bSQJb8YuCEgQGo5Zo9kE4Y+gNzUvWPgCw7A@mail.gmail.com>

I'm stunned by the news. The Neutron team is losing a great PTL, and I know
how hard you worked. Your devotion made Neutron grow. Enjoy your new-found
free time.

Hirofumi

2015-09-12 6:12 GMT+09:00 Kyle Mestery <mestery at mestery.com>:

> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in anyway I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/a9cd4afc/attachment.html>

From skinjo at redhat.com  Sat Sep 12 03:08:01 2015
From: skinjo at redhat.com (Shinobu Kinjo)
Date: Fri, 11 Sep 2015 23:08:01 -0400 (EDT)
Subject: [openstack-dev] [Keystone] PTL Candidacy: Adam Young
In-Reply-To: <1880948951.15807544.1442026644345.JavaMail.zimbra@redhat.com>
Message-ID: <1878800368.15810934.1442027281799.JavaMail.zimbra@redhat.com>

I'm not sure whether I should reply, since I am a developer of another
project, not this one.

But I would like to let you know what I really think Keystone should be:

    Just a HUB

I know that this description may confuse you and the developers subscribed to this list.
Anyhow, here are my thoughts.

>    1.  Removing the bearer aspects of tokens

  Yes, that is what I am thinking of.

>    2.  Better delegation mechanisms to scale the management of OpenStack.

  Yes, that is what I am thinking of.

>    3.  Improving stability, scale, and performance.

  Plus security and isolation, if it is to work as the security hub for OpenStack.

>    4.  Simplify integration with external identity sources

    Yes, that is what I am thinking of.

Shinobu


From gal.sagie at gmail.com  Sat Sep 12 09:17:47 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Sat, 12 Sep 2015 12:17:47 +0300
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAKrd4DN=H3DDwM6bSQJb8YuCEgQGo5Zo9kE4Y+gNzUvWPgCw7A@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <CAKrd4DN=H3DDwM6bSQJb8YuCEgQGo5Zo9kE4Y+gNzUvWPgCw7A@mail.gmail.com>
Message-ID: <CAG9LJa61xmvyYoW2wmrQjsFAtZSPKAkHA1Zq+jYpud4hmLRZAQ@mail.gmail.com>

Kyle,

Thank you for all the great time as PTL; you set a high bar for the next
person.
I think the Neutron community was lucky to have you in this position. Your
devotion and passion for OpenStack, Neutron, and open source are inspiring,
and one of the reasons I love this community.
In my eyes, and I am sure others will agree, you were always available and
ready to help experienced contributors and newcomers alike; your fingerprints
will hopefully remain on this community for a long time.

I personally think this deserves a drinking party in Tokyo...  Salvatore?




On Sat, Sep 12, 2015 at 6:04 AM, Ichihara Hirofumi <
ichihara.hirofumi at gmail.com> wrote:

> I'm stunned by the news. Neutron team doesn't lose great PTL but I know
> your hard work. Your devotion made Neutron grow more. Enjoy your new
> found free time
>
> Hirofumi
>
> 2015-09-12 6:12 GMT+09:00 Kyle Mestery <mestery at mestery.com>:
>
>> I'm writing to let everyone know that I do not plan to run for Neutron
>> PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as
>> Morgan recently put it in his non-candidacy email [1]. But it goes further
>> than that for me. As Flavio put it in his post about "Being a PTL" [2],
>> it's a full time job. In the case of Neutron, it's more than a full time
>> job, it's literally an always on job.
>>
>> I've tried really hard over my three cycles as PTL to build a stronger
>> web of trust so the project can grow, and I feel that's been accomplished.
>> We have a strong bench of future PTLs and leaders ready to go, I'm excited
>> to watch them lead and help them in anyway I can.
>>
>> As was said by Zane in a recent email [3], while Heat may have pioneered
>> the concept of rotating PTL duties with each cycle, I'd like to highly
>> encourage Neutron and other projects to do the same. Having a deep bench of
>> leaders supporting each other is important for the future of all projects.
>>
>> See you all in Tokyo!
>> Kyle
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
>> [3]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/c6b40a1f/attachment.html>

From pshchelokovskyy at mirantis.com  Sat Sep 12 12:08:16 2015
From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy)
Date: Sat, 12 Sep 2015 15:08:16 +0300
Subject: [openstack-dev] [Heat] Integration Test Questions
In-Reply-To: <D218618E.ADE81%sabeen.syed@rackspace.com>
References: <D218618E.ADE81%sabeen.syed@rackspace.com>
Message-ID: <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>

Hi Sabeen,

thank you for the effort :) More tests are always better than fewer, but
unfortunately we are limited by the power of the VMs and the time
available for gate jobs. This is why we do not do exhaustive functional
testing of all resource plugin APIs: every time a test goes out and
makes an async API call to OpenStack, like creating a server, it
consumes time, and often consumes resources of the VM that runs other
tests of the same test suite (we do run them in parallel), making those
other tests slower as well. Also, even for non-async/lightweight
resources (e.g. SaharaNodeGroupTemplate), testing all of them requires
running the corresponding OpenStack service on the gate job, which
consumes its resources even further.

Below are my thoughts and comments inline:

On Fri, Sep 11, 2015 at 6:46 PM, Sabeen Syed <sabeen.syed at rackspace.com> wrote:
> Hi All,
>
> My coworker and I would like to start filling out some gaps in api coverage
> that we see in the functional integration tests. We have one patch up for
> review (https://review.openstack.org/#/c/219025/). We got a comment saying
> that any new stack creation will prolong the testing cycle. We agree with
> that and it got us thinking about a few things -

this test should use the TestResource (or even RandomString if you do
not need to ensure a particular order of events), as there is no point
in using an actual server for the assertions this test makes on
stack/resource events.

>
> We are planning on adding tests for the following api's: event api's,
> template api's, software config api's, cancel stack updates, check stack
> resources and show resource data. These are the api's that we saw aren't
> covered in our current integration tests. Please let us know if you feel we
> need tests for these upstream, if we're missing something or if it's already
> covered somewhere.

Just make sure all of them (ideally) use TestResource/RandomString.
You might still have to tweak it a bit to support a successful/failed
check, though. There is a test for SC/SD in functional (and I actually
wonder what it is doing in functional rather than scenario); is it not
enough?

> To conserve the creation of stacks would it make sense to add one test and
> then under that we could call sub methods that will run tests against that
> stack. So something like this:
>
> def _test_template_apis()
>
> def _test_softwareconfig_apis()
>
> def _test_event_apis()
>
> def test_event_template_softwareconfig_apis(self):
>
> stack_id = self.stack_create(?)
>
> self._test_template_apis(stack_id)
>
> self._test_event_apis(stack_id)
>
> self._test_softwareconfig_apis(stack_id)

If you use TestResource and the like, the time to create a new stack
for each test is not that long. And it is much better to have API
tests separated like actual unit tests; otherwise a failure in one API
will fail the whole test, which only leaves the developer wondering
"what was that?" and makes it harder to find the root cause.

>
> The current tests are divided into two folders ? scenario and functional. To
> help with organization - under the functional folder, would it make sense to
> add an 'api' folder, 'resource' folder and 'misc folder? Here is what we're
> thinking about where each test can be put:
>
> API folder - test_create_update.py, test_preview.py
>
> Resource folder ? test_autoscaling.py, test_aws_stack.py,
> test_conditional_exposure.py, test_create_update_neutron_port.py,
> test_encryption_vol_type.py, test_heat_autoscaling.py,
> test_instance_group.py, test_resource_group.py, test_software_config.py,
> test_swiftsignal_update.py
>
> Misc folder - test_default_parameters.py, test_encrypted_parameter.py,
> test_hooks.py, test_notifications.py, test_reload_on_sighup.py,
> test_remote_stack.py, test_stack_tags.py, test_template_resource.py,
> test_validation.py
>
> Should we add to our README? For example, I see that we use TestResource as
> a resource in some of our tests but we don't have an explanation of how to
> set that up. I'd also like add explanations about the pre-testhook and
> post-testhook file and how that works and what each line does/what test it's
> attached to.

By all means :) If it flattens the learning curve for new Heat
contributors, it's even better.

> For the tests that we're working on, should we be be adding a blueprint or
> task somewhere to let everybody know that we're working on it so there is no
> overlap?

File a bug against Heat, make it a wishlist priority, and tag it
'functional-tests'. Assign it to yourself at will :) but please check out
what we already have filed:

https://bugs.launchpad.net/heat/+bugs?field.tag=functional-tests

> From our observations, we think it would be beneficial to add more comments
> to the existing tests.  For example, we could have a minimum of a short
> blurb for each method.  Comments?

A (multi-line) docstring per module/test method would suffice. For
longer scenario tests we already do this, describing the scenario the
test aims to pass through.

> Should we add a 'high level coverage' summary in our README?  It could help
> all of us know at a high level where we are at in terms of which resources
> we have tests for and which api's, etc.

As for APIs - I believe we could use some functional test coverage
tool. I am not sure if there is a common thing already settled on in
the community though. It might be a good cross-project topic to
discuss during the summit with the Tempest community; they might
already have something in the works.

As for resources - we do try to exercise the native Heat ones that are
there to provide the functionality of Heat itself (ASGs, RGs etc.), but
AFAIK we have no plans to deep-test all the other resources in a
functional way.

>
> Let us know what you all think!

Thanks again for bringing this up. "If it is not tested - it does not work" :)

Best regards,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


From stdake at cisco.com  Sat Sep 12 14:43:46 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sat, 12 Sep 2015 14:43:46 +0000
Subject: [openstack-dev] [kolla][ptl] Kolla PTL Candidacy
Message-ID: <D21987CE.1249A%stdake@cisco.com>

My Peers,

Liberty for Kolla was fantastically successful!  We held our first in-person mid-cycle and entered the Big Tent governance process.  We successfully released 3 milestone releases within 3 days of deadline with increasing functionality.  I believe our Kolla release of rc1 on September 25th will deliver on the promises I made in my PTL candidacy, which was to deliver a container deployment system that could deploy a 100 node OpenStack cluster with controller, compute, storage and networking roles.  I wouldn't have been able to do it alone, or even with the small team we had heading into Liberty.  A big thank you goes out to our community for your strong commitment to our success!

A leader's job is to take the community on the trip they want to go on.  I've done a good job of balancing the various interests in Kolla to lead us to where I think people wanted us to be at the end of the cycle.  I think we are a bit ahead on all measures except functional testing.

For Mitaka I wish to directly contribute to or facilitate the following actions:

  *   Continue to encourage diversity in our Community.
  *   Obtain real-world production deployments using Kolla.
  *   Grow our community of developers, reviewers, and operators.
  *   Execute functional testing of Kolla deployment by using a baby-steps strategy, first executing AIO, then two nodes, growing to ~5 node multi-node deployments.
  *   Containerize and deliver all Big Tent server projects with our Ansible deployment tooling.
  *   Radically improve our documentation to match the standards set by mature OpenStack Big Tent projects.
  *   Make the image building and functional gating jobs voting (!) by delivering mirrors of our upstream software dependencies internally in OpenStack Infrastructure.
  *   Continue to provide excellent project management and improve our processes so that we may grow to use the release_managed[2] tag in the N release.

I would be pleased to accept your vote and to serve as your PTL for the Mitaka release cycle.  As a Community I am certain we can make Mitaka as successful as Liberty!

Regards
-steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/058930.html
[2] https://github.com/openstack/governance/blob/master/reference/tags/release_managed.rst
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/2a8bbbd8/attachment.html>

From julien at danjou.info  Sat Sep 12 14:54:26 2015
From: julien at danjou.info (Julien Danjou)
Date: Sat, 12 Sep 2015 16:54:26 +0200
Subject: [openstack-dev] =?utf-8?b?5Zue5aSN77yaICBbQ2VpbG9tZXRlcl1bR25v?=
 =?utf-8?q?cchi=5D_Gnocchi_cannot_deal_with_combined_resource-id_=3F?=
In-Reply-To: <tencent_0B963CB97CD845581696926B@qq.com> (Luo Gangyi's message
 of "Sat, 12 Sep 2015 00:30:43 +0800")
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
 <m07fnxndw5.fsf@danjou.info> <tencent_0B963CB97CD845581696926B@qq.com>
Message-ID: <m0vbbfn0wt.fsf@danjou.info>

On Sat, Sep 12 2015, Luo Gangyi wrote:

>  I checked it again, no "ignored" is marked; seems to be a bug in devstack ;(

I was talking about that:

  https://git.openstack.org/cgit/openstack/ceilometer/tree/etc/ceilometer/gnocchi_resources.yaml#n67

>  And it's OK that gnocchi is not perfect now, but I still have some worries about how gnocchi deals with, or is going to deal with, the instance-xxxx-tapxxx case.
>  I see 'network.incoming.bytes' belongs to resource type 'instance'.
>  But no attribute of instance can store the information of the tap name.
>  Although I can search
>  all metric ids from the resource id (instance uuid), how do I distinguish them among different taps of an instance?

Where do you see network.incoming.bytes as being linked to an instance?
Reading gnocchi_resources.yaml I don't see that.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/ed36deb5/attachment.pgp>

From gkotton at vmware.com  Sat Sep 12 16:38:50 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Sat, 12 Sep 2015 16:38:50 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <D21A2EED.BD34E%gkotton@vmware.com>

Thanks! You did a great job. Looking back you made some very tough and healthy decisions. Neutron has a new lease on life!
It is tradition that the exiting PTL buy drinks for the community :)

From: "mestery at mestery.com<mailto:mestery at mestery.com>" <mestery at mestery.com<mailto:mestery at mestery.com>>
Reply-To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Saturday, September 12, 2015 at 12:12 AM
To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron] PTL Non-Candidacy

I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.

I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, I'm excited to watch them lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/77dd1097/attachment.html>

From armamig at gmail.com  Sat Sep 12 17:14:51 2015
From: armamig at gmail.com (Armando M.)
Date: Sat, 12 Sep 2015 19:14:51 +0200
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <D21A2EED.BD34E%gkotton@vmware.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <D21A2EED.BD34E%gkotton@vmware.com>
Message-ID: <CAK+RQeZC2-82NP2Uysm19jpKa=tsPWRXTL6PGMhGgw8PitWijg@mail.gmail.com>

On 12 September 2015 at 18:38, Gary Kotton <gkotton at vmware.com> wrote:

> Thanks! You did a great job. Looking back you made some very tough and
> healthy decisions. Neutron has a new lease on life!
> It is tradition that the exiting PTL buy drinks for the community :)
>

OK, so none of these kind words will make you change your mind? This project
needs you!


>
> From: "mestery at mestery.com" <mestery at mestery.com>
> Reply-To: OpenStack List <openstack-dev at lists.openstack.org>
> Date: Saturday, September 12, 2015 at 12:12 AM
> To: OpenStack List <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [neutron] PTL Non-Candidacy
>
> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in any way I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/27b06760/attachment.html>

From blp at nicira.com  Sat Sep 12 17:32:21 2015
From: blp at nicira.com (Ben Pfaff)
Date: Sat, 12 Sep 2015 10:32:21 -0700
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CACjuMbxy3AnJcEU02oNdx+4s3tr8EDYoHg_rMtufWNa1zep4Vw@mail.gmail.com>

Are you planning to remain involved with OpenStack?

On Fri, Sep 11, 2015 at 2:12 PM, Kyle Mestery <mestery at mestery.com> wrote:
> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in any way I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered the
> concept of rotating PTL duties with each cycle, I'd like to highly encourage
> Neutron and other projects to do the same. Having a deep bench of leaders
> supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
"I don't normally do acked-by's.  I think it's my way of avoiding
getting blamed when it all blows up."               Andrew Morton


From sharis at Brocade.com  Sat Sep 12 20:21:43 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Sat, 12 Sep 2015 20:21:43 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <D219D685.6857A%sharis@brocade.com>


Thank you very much for the great job you did. The way you kept your cool while doing a tough job was noteworthy.
You will be missed.

-Shiv


From: Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>>
Reply-To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:12 PM
To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron] PTL Non-Candidacy

I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.

I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, I'm excited to watch them lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150912/06bd98f5/attachment.html>

From jon at jonproulx.com  Sun Sep 13 03:55:39 2015
From: jon at jonproulx.com (Jonathan Proulx)
Date: Sat, 12 Sep 2015 23:55:39 -0400
Subject: [openstack-dev] [Neutron] Allow for per-subnet dhcp options
In-Reply-To: <CAL3VkVy-yHRHuzXf8bu-DH0qwODzy84yWqE4Ck1kJbs02cgUZg@mail.gmail.com>
References: <CABZB-sjDmijUExd_xA4+vwhZ8jV5qQLxTwYF7mMdWSAvBGcChg@mail.gmail.com>
 <CAL3VkVy-yHRHuzXf8bu-DH0qwODzy84yWqE4Ck1kJbs02cgUZg@mail.gmail.com>
Message-ID: <CABZB-sj3gdKfkH4s0_kDJ9nJsLzJQTvJT57-rLA4xETKacB59Q@mail.gmail.com>

On Fri, Sep 11, 2015 at 3:43 PM, Kyle Mestery <mestery at mestery.com> wrote:
> On Fri, Sep 11, 2015 at 2:04 PM, Jonathan Proulx <jon at jonproulx.com> wrote:
>>
>> I'm hurt that this blue print has seen no love in 18 months:
>> https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
>>
>
> This BP has no RFE bug or spec filed for it, so it's hard to be on anyone's
> radar when it's not following the submission guidelines Neutron has for new
> work [1]. I'm sorry this has flown under the radar so far, hopefully it can
> rise up with an RFE bug.
>
> [1] http://docs.openstack.org/developer/neutron/policies/blueprints.html

Fair.  It does look like there was never anything behind this BP, so it's
my own fault for not looking deeper 18 months ago and noticing that.  I
like the RFE bug tag, though that part of the process is news to me (good
news).

Thanks,
-Jon


From irenab.dev at gmail.com  Sun Sep 13 05:27:44 2015
From: irenab.dev at gmail.com (Irena Berezovsky)
Date: Sun, 13 Sep 2015 07:27:44 +0200
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CALqgCCp4JRf-+Tktb5iC52ztGtWZh_UK1e5hhiosfVVDe+JvZg@mail.gmail.com>

Kyle,
Thank you for the hard work you did making the neutron project and the
neutron community better!
You have been open and very supportive as a neutron community lead.
Hope you will stay involved.


On Fri, Sep 11, 2015 at 11:12 PM, Kyle Mestery <mestery at mestery.com> wrote:

> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in any way I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/27dfe951/attachment.html>

From stdake at cisco.com  Sun Sep 13 05:39:31 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 13 Sep 2015 05:39:31 +0000
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating to
	RHOS + RDO types
Message-ID: <D21A5A21.124FA%stdake@cisco.com>

Hey folks,

Sam had asked a reasonable set of questions regarding a patchset:
https://review.openstack.org/#/c/222893/

The purpose of the patchset is to enable both RDO and RHOS as binary choices on RHEL platforms.  I suspect over time, from-source deployments have the potential to become the norm, but the business logistics of such a change are going to take some significant time to sort out.

Red Hat has two distros of OpenStack neither of which are from source.  One is free called RDO and the other is paid called RHOS.  In order to obtain support for RHEL VMs running in an OpenStack cloud, you must be running on RHOS RPM binaries.  You must also be running on RHEL.  It remains to be seen whether Red Hat will actively support Kolla deployments with a RHEL+RHOS set of packaging in containers, but my hunch says they will.  It is in Kolla's best interest to implement this model and not make it hard on Operators since many of them do indeed want Red Hat's support structure for their OpenStack deployments.

Now to Sam's questions:
"Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do we add? What's our policy on adding a new type?"

I'm not immediately clear on how binary fits in.  We could make binary synonymous with the community supported version (RDO) while still implementing the binary RHOS version.  Note Kolla does not "support" any distribution or deployment of OpenStack; Operators will have to look to their vendors for support.

As such the implied second question "How many more do we add?" sort of sounds like "how many do we support?".  The answer to the second question is none; again the Kolla community does not support any deployment of OpenStack.  To the question as posed, how many we add, the answer is that it is really up to community members willing to implement and maintain the work.  In this case, I have personally stepped up to implement RHOS and maintain it going forward.

Our policy on adding a new type could be simple or onerous.  I prefer simple.  If someone is willing to write the code and maintain it so that it stays in good working order, I see no harm in it remaining in tree.  I don't suspect there will be a lot of people interested in adding multiple distributions for a particular operating system.  To my knowledge, and I could be incorrect, Red Hat is the only OpenStack company with a paid and community version available of OpenStack simultaneously and the paid version is only available on RHEL.  I think the risk of RPM based distributions plus their type count spiraling out of manageability is low.  Even if the risk were high, I'd prefer to keep an open mind to facilitate an increase in diversity in our community (which is already fantastically diverse, btw ;)

I am open to questions, comments or concerns.  Please feel free to voice them.

Regards,
-steve

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/994e775d/attachment.html>

From cropper.joe at gmail.com  Sun Sep 13 05:45:49 2015
From: cropper.joe at gmail.com (Joe Cropper)
Date: Sun, 13 Sep 2015 00:45:49 -0500
Subject: [openstack-dev] questions about nova compute monitors extensions
In-Reply-To: <OF8D69D44A.5AC308A3-ON48257EBC.0029CD2B-48257EBC.002B55F8@cn.ibm.com>
References: <OF8D69D44A.5AC308A3-ON48257EBC.0029CD2B-48257EBC.002B55F8@cn.ibm.com>
Message-ID: <6F05A6D3-24ED-4FB4-8B6B-A92E203A5B6F@gmail.com>

The new framework does indeed support user-defined monitors.  You just extend whatever monitor you'd like (e.g., nova.compute.monitors.cpu.virt_driver.Monitor) and add your customized logic.  And since the new framework uses stevedore-based extension points, you just need to add the appropriate entry to your project's setup.py file (or entry_points.txt in your egg) so that stevedore can load them properly.
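As a rough illustration of the stevedore-style loading described above, here is a plain-dict stand-in for the real ExtensionManager. The class, metric, and entry-point names are hypothetical, and the base class merely mimics the shape of nova's monitor class:

```python
# Simulates how stevedore maps a namespace to plugin classes. In a real
# project the mapping comes from setup.py entry_points, e.g. (illustrative):
#   'nova.compute.monitors.cpu': ['mycpu = mypkg.monitors:MyCpuMonitor']

class CPUMonitorBase:
    """Stand-in for nova.compute.monitors.cpu.virt_driver.Monitor."""
    def get_metric_names(self):
        raise NotImplementedError

class MyCpuMonitor(CPUMonitorBase):
    """A user-defined monitor carrying customized logic."""
    def get_metric_names(self):
        return {"cpu.percent"}

# stevedore would build this registry by scanning installed entry points
# under the requested namespace.
REGISTRY = {"nova.compute.monitors.cpu": {"mycpu": MyCpuMonitor}}

def load_monitor(namespace, name):
    # ExtensionManager analogue: look up the plugin and instantiate it.
    return REGISTRY[namespace][name]()
```

The key point is that nova never imports your class directly; it only asks the namespace for whatever plugins are registered there.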

Hope this helps!

Thanks,
Joe
> On Sep 10, 2015, at 2:52 AM, Hou Gang HG Liu <liuhoug at cn.ibm.com> wrote:
> 
> Hi all, 
> 
> I notice the nova compute monitor now only tries to load monitors with namespace "nova.compute.monitors.cpu", and only one monitor per namespace can be enabled (
> https://review.openstack.org/#/c/209499/6/nova/compute/monitors/__init__.py <https://review.openstack.org/#/c/209499/6/nova/compute/monitors/__init__.py>).
> 
> Is there a plan to make MonitorHandler.NAMESPACES configurable, or will it remain a hard-coded constraint as it is now? And how can the compute monitor support user-defined monitors as it did before?
> 
> Thanks! 
> B.R 
> 
> Hougang Liu
> Developer - IBM Platform Resource Scheduler
> Systems and Technology Group
>
> Mobile: 86-13519121974 | Phone: 86-29-68797023 | Tie-Line: 87023
> E-mail: liuhoug at cn.ibm.com | Xian, Shaanxi Province 710075, China
>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/b72fe50b/attachment.html>

From samuel at yaple.net  Sun Sep 13 06:01:46 2015
From: samuel at yaple.net (Sam Yaple)
Date: Sun, 13 Sep 2015 01:01:46 -0500
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
	to RHOS + RDO types
In-Reply-To: <D21A5A21.124FA%stdake@cisco.com>
References: <D21A5A21.124FA%stdake@cisco.com>
Message-ID: <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>

On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

> Hey folks,
>
> Sam had asked a reasonable set of questions regarding a patchset:
> https://review.openstack.org/#/c/222893/
>
> The purpose of the patchset is to enable both RDO and RHOS as binary
> choices on RHEL platforms.  I suspect over time, from-source deployments
> have the potential to become the norm, but the business logistics of such a
> change are going to take some significant time to sort out.
>
> Red Hat has two distros of OpenStack neither of which are from source.
> One is free called RDO and the other is paid called RHOS.  In order to
> obtain support for RHEL VMs running in an OpenStack cloud, you must be
> running on RHOS RPM binaries.  You must also be running on RHEL.  It
> remains to be seen whether Red Hat will actively support Kolla deployments
> with a RHEL+RHOS set of packaging in containers, but my hunch says they
> will.  It is in Kolla's best interest to implement this model and not make
> it hard on Operators since many of them do indeed want Red Hat's support
> structure for their OpenStack deployments.
>
> Now to Sam's questions:
> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do
> we add? What's our policy on adding a new type?"
>
> I'm not immediately clear on how binary fits in.  We could make
> binary synonymous with the community supported version (RDO) while still
> implementing the binary RHOS version.  Note Kolla does not "support" any
> distribution or deployment of OpenStack; Operators will have to look to
> their vendors for support.
>

If everything between centos+rdo and rhel+rhos is mostly the same then I
would think it would make more sense to just use the base ('rhel' in this
case) to branch off any differences in the templates. This would also allow
for the least amount of change and most generic implementation of this
vendor specific packaging. This would also match what we do with
oraclelinux, we do not have a special type for that and any specifics would
be handled by an if statement around 'oraclelinux' and not some special
type.

Since we implement multiple bases, some of which are not RPM based, it
doesn't make much sense to me to have rhel and rdo as a type which is why
we removed rdo in the first place in favor of the more generic 'binary'.
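The base-conditional approach described above might look something like the following in a kolla Dockerfile template. This is a hedged sketch only; the `base_distro` variable and the package names are assumptions about kolla's template conventions, not verified against the tree:

```jinja
{# Hypothetical Dockerfile.j2 fragment: branch on the base distro instead
   of introducing a vendor-specific install type. #}
{% if base_distro in ['centos', 'rhel', 'oraclelinux'] %}
RUN yum -y install openstack-nova-compute \
    && yum clean all
{% elif base_distro in ['ubuntu'] %}
RUN apt-get -y install nova-compute \
    && apt-get clean
{% endif %}
```

The point of the sketch is that one generic 'binary' type can cover every RPM-based base, with distro quirks handled by template conditionals rather than new types.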


>
> As such the implied second question "How many more do we add?" sort of
> sounds like "how many do we support?".  The answer to the second question
> is none; again the Kolla community does not support any deployment of
> OpenStack.  To the question as posed, how many we add, the answer is that
> it is really up to community members willing to implement and maintain the
> work.  In this case, I have personally stepped up to implement RHOS and
> maintain it going forward.
>
> Our policy on adding a new type could be simple or onerous.  I prefer
> simple.  If someone is willing to write the code and maintain it so that it
> stays in good working order, I see no harm in it remaining in tree.  I
> don't suspect there will be a lot of people interested in adding multiple
> distributions for a particular operating system.  To my knowledge, and I
> could be incorrect, Red Hat is the only OpenStack company with a paid and
> community version available of OpenStack simultaneously and the paid
> version is only available on RHEL.  I think the risk of RPM based
> distributions plus their type count spiraling out of manageability is low.
> Even if the risk were high, I'd prefer to keep an open mind to facilitate
> an increase in diversity in our community (which is already fantastically
> diverse, btw ;)
>
> I am open to questions, comments or concerns.  Please feel free to voice
> them.
>
> Regards,
> -steve
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/e43fa967/attachment.html>

From stdake at cisco.com  Sun Sep 13 06:15:32 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 13 Sep 2015 06:15:32 +0000
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
Message-ID: <D21A60E8.12504%stdake@cisco.com>



From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
Date: Saturday, September 12, 2015 at 11:01 PM
To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types


On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
Hey folks,

Sam had asked a reasonable set of questions regarding a patchset:
https://review.openstack.org/#/c/222893/

The purpose of the patchset is to enable both RDO and RHOS as binary choices on RHEL platforms.  I suspect over time, from-source deployments have the potential to become the norm, but the business logistics of such a change are going to take some significant time to sort out.

Red Hat has two distros of OpenStack neither of which are from source.  One is free called RDO and the other is paid called RHOS.  In order to obtain support for RHEL VMs running in an OpenStack cloud, you must be running on RHOS RPM binaries.  You must also be running on RHEL.  It remains to be seen whether Red Hat will actively support Kolla deployments with a RHEL+RHOS set of packaging in containers, but my hunch says they will.  It is in Kolla?s best interest to implement this model and not make it hard on Operators since many of them do indeed want Red Hat?s support structure for their OpenStack deployments.

Now to Sam's questions:
"Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do we add? What's our policy on adding a new type?"

I'm not immediately clear on how binary fits in.  We could make binary synonymous with the community supported version (RDO) while still implementing the binary RHOS version.  Note Kolla does not "support" any distribution or deployment of OpenStack; Operators will have to look to their vendors for support.

If everything between centos+rdo and rhel+rhos is mostly the same then I would think it would make more sense to just use the base ('rhel' in this case) to branch off any differences in the templates. This would also allow for the least amount of change and the most generic implementation of this vendor-specific packaging. This would also match what we do with oraclelinux: we do not have a special type for that, and any specifics would be handled by an if statement around 'oraclelinux' and not some special type.

I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also runs on RHEL.  I want to enable Red Hat customers to make a choice to have a supported operating system but not a supported Cloud environment.  The answer here is RHEL + RDO.  This leads to full support down the road, if the Operator chooses to pay Red Hat for it, by an easy transition to RHOS.
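To make the combinations under discussion concrete, here is a minimal sketch of the base/type matrix as a lookup table. The names and descriptions are my own illustration, not Kolla's actual build code or configuration:

```python
# Hypothetical sketch of the base/type combinations discussed above.
# These names and strings are illustrative only; they do not come from
# the real Kolla code base.

SUPPORTED_COMBOS = {
    ("centos", "rdo"): "community RDO packages",
    ("rhel", "rdo"): "community RDO packages on a supported OS",
    ("rhel", "rhos"): "paid RHOS packages (Red Hat subscription required)",
    ("oraclelinux", "rdo"): "community RDO packages",
}


def resolve_package_source(base, install_type):
    """Describe where packages would come from for a base/type pair,
    raising ValueError for combinations that are not implemented."""
    try:
        return SUPPORTED_COMBOS[(base, install_type)]
    except KeyError:
        raise ValueError(
            "unsupported base/type combination: %s + %s" % (base, install_type))
```

Viewed this way, the "easy transition" argument is that moving from RHEL+RDO to RHEL+RHOS changes only the type, never the base.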

For oracle linux, I'd like to keep RDO for oracle linux and from source on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the patch set needs some later work here to address this point in more detail, but as is, 'binary' covers oracle linux.

Perhaps what we should do is get rid of the binary type entirely.  Ubuntu doesn't really have a binary type, they have a cloudarchive type, so binary doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two distributions of OpenStack, the same logic wouldn't apply to providing a full support onramp for Ubuntu customers.  Oracle doesn't provide a binary type either; their binary type is really RDO.

FWIW I never liked the transition away from rdo in the repo names to binary.  I guess I should have -1'ed those reviews back then, but I think it's time to either revisit the decision or compromise that binary and rdo mean the same thing in a centos and rhel world.

Regards
-steve


Since we implement multiple bases, some of which are not RPM based, it doesn't make much sense to me to have rhel and rdo as a type which is why we removed rdo in the first place in favor of the more generic 'binary'.


As such, the implied second question "How many more do we add?" sort of sounds like "how many do we support?".  The answer to the second question is none; again the Kolla community does not support any deployment of OpenStack.  To the question as posed, how many we add, the answer is it is really up to community members willing to implement and maintain the work.  In this case, I have personally stepped up to implement RHOS and maintain it going forward.

Our policy on adding a new type could be simple or onerous.  I prefer simple.  If someone is willing to write the code and maintain it so that it stays in good working order, I see no harm in it remaining in tree.  I don't suspect there will be a lot of people interested in adding multiple distributions for a particular operating system.  To my knowledge, and I could be incorrect, Red Hat is the only OpenStack company with a paid and a community version of OpenStack available simultaneously, and the paid version is only available on RHEL.  I think the risk of RPM based distributions plus their type count spiraling out of manageability is low.  Even if the risk were high, I'd prefer to keep an open mind to facilitate an increase in diversity in our community (which is already fantastically diverse, btw ;)

I am open to questions, comments or concerns.  Please feel free to voice them.

Regards,
-steve


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/52e4a6c1/attachment.html>

From samuel at yaple.net  Sun Sep 13 06:34:10 2015
From: samuel at yaple.net (Sam Yaple)
Date: Sun, 13 Sep 2015 01:34:10 -0500
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
	to RHOS + RDO types
In-Reply-To: <D21A60E8.12504%stdake@cisco.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
Message-ID: <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>

Sam Yaple

On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

>
>
> From: Sam Yaple <samuel at yaple.net>
> Reply-To: "sam at yaple.net" <sam at yaple.net>
> Date: Saturday, September 12, 2015 at 11:01 PM
> To: Steven Dake <stdake at cisco.com>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
> types
>
>
> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com>
> wrote:
>
>> Hey folks,
>>
>> Sam had asked a reasonable set of questions regarding a patchset:
>> https://review.openstack.org/#/c/222893/
>>
>> The purpose of the patchset is to enable both RDO and RHOS as binary
>> choices on RHEL platforms.  I suspect over time, from-source deployments
>> have the potential to become the norm, but the business logistics of such a
>> change are going to take some significant time to sort out.
>>
>> Red Hat has two distros of OpenStack neither of which are from source.
>> One is free called RDO and the other is paid called RHOS.  In order to
>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>> remains to be seen whether Red Hat will actively support Kolla deployments
>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>> will.  It is in Kolla's best interest to implement this model and not make
>> it hard on Operators since many of them do indeed want Red Hat's support
>> structure for their OpenStack deployments.
>>
>> Now to Sam's questions:
>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do
>> we add? What's our policy on adding a new type?"
>>
>> I'm not immediately clear on how binary fits in.  We could make
>> binary synonymous with the community supported version (RDO) while still
>> implementing the binary RHOS version.  Note Kolla does not "support" any
>> distribution or deployment of OpenStack; Operators will have to look to
>> their vendors for support.
>>
>
> If everything between centos+rdo and rhel+rhos is mostly the same then I
> would think it would make more sense to just use the base ('rhel' in this
> case) to branch off any differences in the templates. This would also allow
> for the least amount of change and most generic implementation of this
> vendor specific packaging. This would also match what we do with
> oraclelinux, we do not have a special type for that and any specifics would
> be handled by an if statement around 'oraclelinux' and not some special
> type.
>
>
> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also
> runs on RHEL.  I want to enable Red Hat customers to make a choice to have
> a supported operating system but not a supported Cloud environment.  The
> answer here is RHEL + RDO.  This leads to full support down the road if the
> Operator chooses to pay Red Hat for it by an easy transition to RHOS.
>

I am against including vendor specific things like RHOS in Kolla outright
like you are proposing. Suppose another vendor comes along with a new base
and new packages. They are willing to maintain it, but it's something that
no one but their customers with their licensing can use. This is not
something that belongs in Kolla and I am unsure that it is even appropriate
to belong in OpenStack as a whole. Unless RHEL+RHOS can be used by those
that do not have a license for it, I do not agree with adding it at all.


> For oracle linux, I'd like to keep RDO for oracle linux and from source on
> oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the patch
> set needs some later work here to address this point in more detail, but as
> is 'binary' covers oracle linux.
>

> Perhaps what we should do is get rid of the binary type entirely.  Ubuntu
> doesn't really have a binary type, they have a cloudarchive type, so binary
> doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two
> distributions of OpenStack the same logic wouldn't apply to providing a
> full support onramp for Ubuntu customers.  Oracle doesn't provide a binary
> type either, their binary type is really RDO.
>

The binary packages for Ubuntu are _packaged_ by the cloudarchive team. But
when OpenStack collides with an LTS release (Icehouse and 14.04 was the
last one) you do not add a new repo, because the packages are already in
the main Ubuntu repo.

Debian provides its own packages as well. I do not want a type name per
distro. 'binary' catches all packaged OpenStack things by a distro.


>
> FWIW I never liked the transition away from rdo in the repo names to
> binary.  I guess I should have -1'ed those reviews back then, but I think
> it's time to either revisit the decision or compromise that binary and rdo
> mean the same thing in a centos and rhel world.
>
> Regards
> -steve
>
>
> Since we implement multiple bases, some of which are not RPM based, it
> doesn't make much sense to me to have rhel and rdo as a type which is why
> we removed rdo in the first place in favor of the more generic 'binary'.
>
>
>>
>> As such, the implied second question "How many more do we add?" sort of
>> sounds like "how many do we support?".  The answer to the second question
>> is none; again the Kolla community does not support any deployment of
>> OpenStack.  To the question as posed, how many we add, the answer is it is
>> really up to community members willing to implement and maintain the
>> work.  In this case, I have personally stepped up to implement RHOS and
>> maintain it going forward.
>>
>> Our policy on adding a new type could be simple or onerous.  I prefer
>> simple.  If someone is willing to write the code and maintain it so that it
>> stays in good working order, I see no harm in it remaining in tree.  I
>> don't suspect there will be a lot of people interested in adding multiple
>> distributions for a particular operating system.  To my knowledge, and I
>> could be incorrect, Red Hat is the only OpenStack company with a paid and
>> community version available of OpenStack simultaneously and the paid
>> version is only available on RHEL.  I think the risk of RPM based
>> distributions plus their type count spiraling out of manageability is low.
>> Even if the risk were high, I'd prefer to keep an open mind to facilitate
>> an increase in diversity in our community (which is already fantastically
>> diverse, btw ;)
>>
>> I am open to questions, comments or concerns.  Please feel free to voice
>> them.
>>
>> Regards,
>> -steve
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/bf229023/attachment.html>

From feilong at catalyst.net.nz  Sun Sep 13 07:12:10 2015
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Sun, 13 Sep 2015 19:12:10 +1200
Subject: [openstack-dev] [zaqar][all] PTL No-Candidacy
In-Reply-To: <20150911115103.GA8182@redhat.com>
References: <20150911115103.GA8182@redhat.com>
Message-ID: <55F521CA.30300@catalyst.net.nz>

Thanks, Flavio. You did a great job. Many thanks for all your hard work
to make all of this happen.

On 11/09/15 23:51, Flavio Percoco wrote:
> Greetings,
>
> I'm sending this email to announce that I won't be running for Zaqar's
> PTL position this cycle.
>
> I've been Zaqar's PTL for two cycles and I believe it is time for me
> to move on. More importantly, I believe it's time for this great,
> still small, project to be led by someone else. The reasons behind
> this belief have nothing to do with either the previous state of the
> project or its current success story. If anything, my current
> decision of not running has everything to do with the project's
> current growth.
>
> As many of you know, Zaqar (formerly known as Marconi) went through
> many ups and downs. From great discussions and growth attempts to
> almost being shutdown[0]. This has taught me a lot but, more
> importantly, it's made the team stronger and clarified the team's
> goals and path. And to prove that, let me share some of the success
> stories the team has had since Vancouver:
>
> 3 great milestones
> ==================
>
> Let me start by sharing the progress the project has made code-wise.
> While it may not be the most important for many people, I believe it's
> extremely valuable for the project. The reason for this is that
> every single member of this team is not a full-time Zaqar developer.
> That means, every single member of this team has a different full-time
> responsibility and every contribution made to the project has been
> made in their spare working (or free) time. From amazing Outreachy
> mentees (we've mentored participants of the Outreachy program since
> cycle 1) to great contributors from other projects in OpenStack.
>
> In milestone #1[1], we closed several bugs while we discussed the
> features that we wanted to work on during Liberty. In milestone #2[2],
> some of the features we wanted to have in Liberty started to land and
> several bugs were fixed as well. In milestone #3[3], many bugs were fixed
> due to a heavy testing session. But it doesn't end there. In RC1[4], 3
> FFEs were granted - not carelessly, FWIW - to complete all the work
> we've planned for Liberty and, of course, more bug fixes.
>
> We now even have a websocket example in the code base... ZOMG!
>
> In addition to the above, the client library has kept moving forward
> and it's being aligned with the current, maintained, API. This
> progress just makes me happy and happier. Keep reading and you'll know
> why.
>
> Adoption by other projects
> ==========================
>
> If you read the call for adoption thread[0], you probably know how
> important that was for the project to move forward. After many
> discussions in Vancouver, on IRC, conferences, mailing lists, pigeons,
> telegrams, etc. projects started to see[5] the different use-cases for
> Zaqar and we started talking about implementations and steps forward.
> One good example of this is Heat's use of Zaqar for
> software-config[6], which was worked on and implemented.
>
> Things didn't stop there on this front. Other projects, like Sahara,
> are also considering using Zaqar to communicate with guests agents.
> While this is under discussion on Sahara's side, the required features
> for it to happen and be more secure have been implemented in Zaqar[7].
> Other interesting discussions are also on-going that might help with
> Zaqar's adoption[8].
>
> That said, I believe one of the works I'm most excited about right now
> is the puppet-zaqar project, which will make it simpler for
> deployments based on puppet to, well, deploy zaqar[9].
>
> Community Growth
> ================
>
> None of the above would have been possible without a great community
> and especially without growing it. I'm not talking about the core
> reviewers team growth - although we did have an addition[10] - but the
> growth of the community across OpenStack. Folks from other teams -
> OpenStack Puppet, Sahara, Heat, Trove, cross-project efforts - have
> joined the efforts of pushing Zaqar forward in different ways (like
> the ones I've mentioned before).
>
> Therefore, I owe a huge THANK YOU to each and one of these people that
> helped making this progress possible.
>
> Oh God, please, stop talking
> ============================
>
> Sure, fine! But before I do that, let me share why I've said all the
> above.
>
> The above is not to show off what the team has accomplished. It's
> definitely not to take any credits whatsoever. It's to show exactly
> why the team needs a new PTL.
>
> I believe PTLs should rotate every 2 cycles (if not every cycle). I've
> been the PTL for 2 cycles (or probably even more) and it's time for
> the vision and efforts of other folks to jump in. It's time for folks
> with more OPs knowledge than me to help make Zaqar more
> "maintainable". It's time for new technical issues to come up and for
> us as a community to work together on achieving those. More
> cross-project collaboration, more API improvements, and more user stories
> are what Zaqar needs right now, and I believe there are very capable
> folks in Zaqar's team that would be perfect for this task.
>
> One thing I'd like the whole team to put some efforts on, regardless
> what technical decisions will be taken, is on increasing the diversity
> of the project. Zaqar is not as diverse[11] (company wise) as I'd like,
> and that worries me A LOT. Growth will, hopefully, bring more people, and
> reaching out to other communities remains important.
>
> It's been an honor to serve as Zaqar's PTL and it'll be an honor for
> me to contribute to the next PTL's future plans and leadership.
>
> Sincerely,
> Flavio
>
> P.S: #openstack-zaqar remains the funniest channel ever, just sayin'.
>
> [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-April/061967.html
> [1] https://launchpad.net/zaqar/+milestone/liberty-1
> [2] https://launchpad.net/zaqar/+milestone/liberty-2
> [3] https://launchpad.net/zaqar/+milestone/liberty-3
> [4] https://launchpad.net/zaqar/+milestone/liberty-rc1
> [5] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-May/064739.html
> [6] 
> https://github.com/openstack/heat-specs/blob/master/specs/kilo/software-config-zaqar.rst
> [7] 
> http://specs.openstack.org/openstack/zaqar-specs/specs/liberty/pre-signed-url.html
> [8] https://review.openstack.org/#/c/185822/
> [9] https://github.com/openstack/puppet-zaqar
> [10] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072191.html
> [11] http://stackalytics.com/?module=zaqar-group
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/5283f069/attachment.html>

From stdake at cisco.com  Sun Sep 13 08:01:15 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 13 Sep 2015 08:01:15 +0000
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
Message-ID: <D21A6FA1.12519%stdake@cisco.com>

Response inline.

From: Sam Yaple <samuel at yaple.net>
Reply-To: "sam at yaple.net" <sam at yaple.net>
Date: Saturday, September 12, 2015 at 11:34 PM
To: Steven Dake <stdake at cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types



Sam Yaple

On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com> wrote:


From: Sam Yaple <samuel at yaple.net>
Reply-To: "sam at yaple.net" <sam at yaple.net>
Date: Saturday, September 12, 2015 at 11:01 PM
To: Steven Dake <stdake at cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types


On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com> wrote:
Hey folks,

Sam had asked a reasonable set of questions regarding a patchset:
https://review.openstack.org/#/c/222893/

The purpose of the patchset is to enable both RDO and RHOS as binary choices on RHEL platforms.  I suspect over time, from-source deployments have the potential to become the norm, but the business logistics of such a change are going to take some significant time to sort out.

Red Hat has two distros of OpenStack, neither of which is from source.  One is free, called RDO, and the other is paid, called RHOS.  In order to obtain support for RHEL VMs running in an OpenStack cloud, you must be running on RHOS RPM binaries.  You must also be running on RHEL.  It remains to be seen whether Red Hat will actively support Kolla deployments with a RHEL+RHOS set of packaging in containers, but my hunch says they will.  It is in Kolla's best interest to implement this model and not make it hard on Operators, since many of them do indeed want Red Hat's support structure for their OpenStack deployments.

Now to Sam's questions:
"Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do we add? What's our policy on adding a new type?"

I'm not immediately clear on how binary fits in.  We could make binary synonymous with the community supported version (RDO) while still implementing the binary RHOS version.  Note Kolla does not "support" any distribution or deployment of OpenStack; Operators will have to look to their vendors for support.

If everything between centos+rdo and rhel+rhos is mostly the same then I would think it would make more sense to just use the base ('rhel' in this case) to branch off any differences in the templates. This would also allow for the least amount of change and the most generic implementation of this vendor-specific packaging. This would also match what we do with oraclelinux: we do not have a special type for that, and any specifics would be handled by an if statement around 'oraclelinux' and not some special type.

I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also runs on RHEL.  I want to enable Red Hat customers to make a choice to have a supported operating system but not a supported Cloud environment.  The answer here is RHEL + RDO.  This leads to full support down the road, if the Operator chooses to pay Red Hat for it, by an easy transition to RHOS.

I am against including vendor specific things like RHOS in Kolla outright like you are proposing. Suppose another vendor comes along with a new base and new packages. They are willing to maintain it, but it's something that no one but their customers with their licensing can use. This is not something that belongs in Kolla and I am unsure that it is even appropriate to belong in OpenStack as a whole. Unless RHEL+RHOS can be used by those that do not have a license for it, I do not agree with adding it at all.

Sam,

Someone stepping up to maintain a completely independent set of docker images hasn't happened.  To date nobody has done that.  If someone were to make that offer, and it was a significant change, I think the community as a whole would have to evaluate such a drastic change.  That would certainly increase our implementation and maintenance burden, which we don't want to do.  I don't think what you propose would be in the best interest of the Kolla project, but I'd have to see the patch set to evaluate the scenario appropriately.

What we are talking about is 5 additional lines to enable RHEL+RHOS specific repositories, which is not very onerous.
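For a sense of scale, a change of that kind might look something like the following fragment. This is purely illustrative: the variable names and the repo id are made-up placeholders, and the actual lines are in the patch set under review, not reproduced here.

```shell
# Hypothetical build-time fragment: enable RHOS repositories only for
# the rhel base combined with the rhos install type.  The repo id below
# is a placeholder, not Red Hat's actual channel name.
if [ "$BASE" = "rhel" ] && [ "$INSTALL_TYPE" = "rhos" ]; then
    subscription-manager repos --enable=example-rhos-rpms
fi
```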

The fact that you can't use it directly has little bearing on whether it's valid technology for OpenStack.  There are already two well-defined historical precedents in OpenStack for integration that is unusable without a license.  Cinder has 55 [1] Volume drivers which they SUPPORT.  At least 80% of them are completely proprietary hardware, which in reality is mostly just software that is impossible to use without a license.  There are 41 [2] Neutron drivers registered on the Neutron driver page; almost the entirety require proprietary licenses for what amounts to integration to access proprietary software.  The OpenStack preferred license is ASL for a reason: to be business friendly.  Licensed software has a place in the world of OpenStack, even if it only serves as an integration point, which the proposed patch does.  We are consistent with community values on this point or I wouldn't have bothered proposing the patch.

We want to encourage people to use Kolla for proprietary solutions if they so choose.  This is how support manifests, which increases the strength of the Kolla project.  The presence of support increases the likelihood that Kolla will be adopted by Operators.  If you're asking the Operators to maintain a fork for those 5 RHOS repo lines, that seems unreasonable.

I'd like to hear other Core Reviewer opinions on this matter and will hold a majority vote on this thread as to whether we will facilitate integration with third party software such as the Cinder Block Drivers, the Neutron Network drivers, and various for-pay versions of OpenStack such as RHOS.  I'd like all core reviewers to weigh in, please.  Without a complete vote it will be hard to gauge what the Kolla community really wants.

Core reviewers:
Please vote +1 if you ARE satisfied with integration with third party software that is unusable without a license, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes.
Please vote -1 if you ARE NOT satisfied with integration with third party software that is unusable without a license, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes.

A bit of explanation on your vote might be helpful.

My vote is +1.  I have already provided my rationale.

Regards,
-steve

[1] https://wiki.openstack.org/wiki/CinderSupportMatrix
[2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers



For oracle linux, I'd like to keep RDO for oracle linux and from source on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the patch set needs some later work here to address this point in more detail, but as is, 'binary' covers oracle linux.

Perhaps what we should do is get rid of the binary type entirely.  Ubuntu doesn't really have a binary type, they have a cloudarchive type, so binary doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two distributions of OpenStack, the same logic wouldn't apply to providing a full support onramp for Ubuntu customers.  Oracle doesn't provide a binary type either; their binary type is really RDO.

The binary packages for Ubuntu are _packaged_ by the cloudarchive team. But when OpenStack collides with an LTS release (Icehouse and 14.04 was the last one) you do not add a new repo, because the packages are already in the main Ubuntu repo.

Debian provides its own packages as well. I do not want a type name per distro. 'binary' catches all packaged OpenStack things by a distro.


FWIW I never liked the transition away from rdo in the repo names to binary.  I guess I should have -1'ed those reviews back then, but I think it's time to either revisit the decision or compromise that binary and rdo mean the same thing in a centos and rhel world.

Regards
-steve


Since we implement multiple bases, some of which are not RPM based, it doesn't make much sense to me to have rhel and rdo as a type which is why we removed rdo in the first place in favor of the more generic 'binary'.


As such, the implied second question "How many more do we add?" sort of sounds like "how many do we support?".  The answer to the second question is none; again the Kolla community does not support any deployment of OpenStack.  To the question as posed, how many we add, the answer is it is really up to community members willing to implement and maintain the work.  In this case, I have personally stepped up to implement RHOS and maintain it going forward.

Our policy on adding a new type could be simple or onerous.  I prefer simple.  If someone is willing to write the code and maintain it so that it stays in good working order, I see no harm in it remaining in tree.  I don't suspect there will be a lot of people interested in adding multiple distributions for a particular operating system.  To my knowledge, and I could be incorrect, Red Hat is the only OpenStack company with a paid and a community version of OpenStack available simultaneously, and the paid version is only available on RHEL.  I think the risk of RPM based distributions plus their type count spiraling out of manageability is low.  Even if the risk were high, I'd prefer to keep an open mind to facilitate an increase in diversity in our community (which is already fantastically diverse, btw ;)

I am open to questions, comments or concerns.  Please feel free to voice them.

Regards,
-steve



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/28f91b21/attachment.html>

From samuel at yaple.net  Sun Sep 13 08:35:30 2015
From: samuel at yaple.net (Sam Yaple)
Date: Sun, 13 Sep 2015 03:35:30 -0500
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
	to RHOS + RDO types
In-Reply-To: <D21A6FA1.12519%stdake@cisco.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
Message-ID: <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>

On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

> Response inline.
>
> From: Sam Yaple <samuel at yaple.net>
> Reply-To: "sam at yaple.net" <sam at yaple.net>
> Date: Saturday, September 12, 2015 at 11:34 PM
> To: Steven Dake <stdake at cisco.com>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
> types
>
>
>
> Sam Yaple
>
> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com>
> wrote:
>
>>
>>
>> From: Sam Yaple <samuel at yaple.net>
>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>> Date: Saturday, September 12, 2015 at 11:01 PM
>> To: Steven Dake <stdake at cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev at lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>>
>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com>
>> wrote:
>>
>>> Hey folks,
>>>
>>> Sam had asked a reasonable set of questions regarding a patchset:
>>> https://review.openstack.org/#/c/222893/
>>>
>>> The purpose of the patchset is to enable both RDO and RHOS as binary
>>> choices on RHEL platforms.  I suspect over time, from-source deployments
>>> have the potential to become the norm, but the business logistics of such a
>>> change are going to take some significant time to sort out.
>>>
>>> Red Hat has two distros of OpenStack, neither of which is from source.
>>> One, RDO, is free; the other, RHOS, is paid.  In order to
>>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>>> remains to be seen whether Red Hat will actively support Kolla deployments
>>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>>> will.  It is in Kolla's best interest to implement this model and not make
>>> it hard on Operators since many of them do indeed want Red Hat's support
>>> structure for their OpenStack deployments.
>>>
>>> Now to Sam's questions:
>>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more
>>> do we add? What's our policy on adding a new type?"
>>>
>>> I'm not immediately clear on how binary fits in.  We could make
>>> binary synonymous with the community supported version (RDO) while still
>>> implementing the binary RHOS version.  Note Kolla does not "support" any
>>> distribution or deployment of OpenStack; Operators will have to look to
>>> their vendors for support.
>>>
>>
>> If everything between centos+rdo and rhel+rhos is mostly the same, then I
>> would think it would make more sense to just use the base ('rhel' in this
>> case) and branch off for any differences in the templates. This would also
>> allow for the least amount of change and the most generic implementation of
>> this vendor-specific packaging. This would also match what we do with
>> oraclelinux: we do not have a special type for that, and any specifics are
>> handled by an if statement around 'oraclelinux' rather than some special
>> type.
>>
>>
>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also
>> runs on RHEL.  I want to enable Red Hat customers to make a choice to have
>> a supported operating system but not a supported Cloud environment.  The
>> answer here is RHEL + RDO, which gives an easy transition to RHOS down the
>> road if the Operator chooses to pay Red Hat for full support.
>>
>
> I am against including vendor specific things like RHOS in Kolla outright
> like you are proposing. Suppose another vendor comes along with a new base
> and new packages. They are willing to maintain it, but it's something that
> no one but their customers with their licensing can use. This is not
> something that belongs in Kolla, and I am unsure that it is even appropriate
> for OpenStack as a whole. Unless RHEL+RHOS can be used by those
> that do not have a license for it, I do not agree with adding it at all.
>
>
> Sam,
>
> Someone stepping up to maintain a completely independent set of docker
> images hasn't happened.  To date nobody has done that.  If someone were to
> make that offer, and it was a significant change, I think the community as
> a whole would have to evaluate such a drastic change.  That would certainly
> increase our implementation and maintenance burden, which we don't want to
> do.  I don't think what you propose would be in the best interest of the
> Kolla project, but I'd have to see the patch set to evaluate the scenario
> appropriately.
>
> What we are talking about is 5 additional lines to enable RHEL+RHOS
> specific repositories, which is not very onerous.
>
> The fact that you can't use it directly has little bearing on whether it's
> valid technology for OpenStack.  There are already two well-defined
> historical precedents for non-licensed unusable integration in OpenStack.
> Cinder has 55 [1] Volume drivers which they SUPPORT.  At least 80% of
> them are completely proprietary hardware, which in reality is mostly just
> software that is impossible to use without a license.  There
> are 41 [2] Neutron drivers registered on the Neutron driver page; almost
> all of them require proprietary licenses for what amounts to integration to
> access proprietary software.  The OpenStack preferred license is ASL for a
> reason: to be business friendly.  Licensed software has a place in the
> world of OpenStack, even if it only serves as an integration point as the
> proposed patch does.  We are consistent with community values on this point
> or I wouldn't have bothered proposing the patch.
>
> We want to encourage people to use Kolla for proprietary solutions if they
> so choose.  This is how support manifests, which increases the strength of
> the Kolla project.  The presence of support increases the likelihood that
> Kolla will be adopted by Operators.  If you're asking the Operators to
> maintain a fork for those 5 RHOS repo lines, that seems unreasonable.
>
> I'd like to hear other *Core Reviewer* opinions on this matter and will
> hold a majority vote on this thread as to whether we will facilitate
> integration with third party software such as the Cinder Block Drivers, the
> Neutron Network drivers, and various for-pay versions of OpenStack such as
> RHOS.  I'd like all core reviewers to weigh in please.  Without a complete
> vote it will be hard to gauge what the Kolla community really wants.
>
> *Core reviewers:*
> Please vote +1 if you *ARE* satisfied with integration with third party
> software that is unusable without a license, specifically Cinder volume
> drivers, Neutron network drivers, and various for-pay distributions of
> OpenStack and container runtimes.
> Please vote -1 if you *ARE NOT* satisfied with integration with third
> party software that is unusable without a license, specifically Cinder
> volume drivers, Neutron network drivers, and various for-pay distributions
> of OpenStack and container runtimes.
>
> A bit of explanation on your vote might be helpful.
>
> My vote is +1.  I have already provided my rationale.
>
> Regards,
> -steve
>
> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix
> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>
>
I appreciate you calling a vote so early, but my questions haven't been
answered well enough yet for me to even vote on the matter at hand.

In this situation, the closest thing we have to a plugin system like
Cinder's or Neutron's is our header/footer system. What you are proposing
is integrating a proprietary solution into the core of Kolla. Those Cinder
and Neutron plugins have external components and those external components
are not baked into the project.

What happens if and when the RHOS packages require different tweaks in the
various containers? What if it requires changes to the Ansible playbooks?
It begins to balloon out past 5 lines of code.

Unfortunately, the community _won't_ get to vote on whether or not to
implement those changes because RHOS is already in place. That's why I am
asking the questions now, as this _right_ _now_ is the significant change
you are talking about, regardless of the lines of code.

So the question is not whether we are going to integrate 3rd party plugins,
but whether we are going to allow companies to build proprietary products
in the Kolla repo. If we allow RHEL+RHOS then we would need to allow any
other distro+company packaging combination, plus the potential Ansible
tweaks to get it to work for them.

If you really want to do what Cinder and Neutron do, we need a better
system for injecting code. That would be much closer to the plugins that
the other projects have.

I'd like to have a discussion about this rather than immediately call for a
vote which is why I asked you to raise this question in a public forum in
the first place.
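As an aside, the "if statement around the base" approach quoted earlier (the
oraclelinux example) can be sketched in plain Python. This is only an
illustration of the base/type matrix under discussion: the function name, the
repo-setup commands, and the (base, install_type) pairs are hypothetical, and
Kolla's real build system uses Jinja2-templated Dockerfiles rather than
Python:

```python
# Hypothetical sketch of the base/install_type matrix being debated.
# The repo commands below are illustrative, not Kolla's actual templates.

def repo_setup_commands(base, install_type):
    """Return the package-repo setup steps for a given image base."""
    if install_type == "source":
        return []  # from-source builds need no distro OpenStack repo
    if install_type == "binary":
        if base in ("centos", "oraclelinux", "rhel"):
            # community RPM packaging (RDO) covers all RPM bases
            return ["yum install -y rdo-release"]
        if base == "ubuntu":
            return ["add-apt-repository cloud-archive:liberty"]
    if install_type == "rhos" and base == "rhel":
        # vendor packaging enabled only on its own base
        return ["subscription-manager repos --enable "
                "rhel-7-server-openstack-rpms"]
    raise ValueError("unsupported combination: %s + %s"
                     % (base, install_type))
```

Under this shape, "binary" stays the generic catch-all and the vendor type is
a single extra branch, which is the five-line scale of change the thread is
arguing about.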


>
>
>> For oracle linux, I'd like to keep RDO for oracle linux and from source
>> on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the
>> patch set needs some later work here to address this point in more detail,
>> but as is "binary" covers oracle linux.
>>
>
>> Perhaps what we should do is get rid of the binary type entirely.  Ubuntu
>> doesn't really have a binary type, they have a cloudarchive type, so binary
>> doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two
>> distributions of OpenStack the same logic wouldn't apply to providing a
>> full support onramp for Ubuntu customers.  Oracle doesn't provide a binary
>> type either, their binary type is really RDO.
>>
>
> The binary packages for Ubuntu are _packaged_ by the cloudarchive team.
> But when an OpenStack release coincides with an LTS release (Icehouse
> and 14.04 was the last case), you do not add a new repo because the
> packages are in the main Ubuntu repo.
>
> Debian provides its own packages as well. I do not want a type name per
> distro. 'binary' catches all packaged OpenStack things by a distro.
>
>
>>
>> FWIW I never liked the transition away from rdo in the repo names to
>> binary.  I guess I should have -1'ed those reviews back then, but I think
>> it's time to either revisit the decision or compromise that binary and rdo
>> mean the same thing in a centos and rhel world.
>>
>> Regards
>> -steve
>>
>>
>> Since we implement multiple bases, some of which are not RPM based, it
>> doesn't make much sense to me to have rhel and rdo as a type which is why
>> we removed rdo in the first place in favor of the more generic 'binary'.
>>
>>
>>>
>>> As such the implied second question "How many more do we add?" sort of
>>> sounds like "how many do we support?".  The answer to the second question
>>> is none; again the Kolla community does not support any deployment of
>>> OpenStack.  To the question as posed, how many we add, the answer is it is
>>> really up to community members willing to implement and maintain the
>>> work.  In this case, I have personally stepped up to implement RHOS and
>>> maintain it going forward.
>>>
>>> Our policy on adding a new type could be simple or onerous.  I prefer
>>> simple.  If someone is willing to write the code and maintain it so that
>>> it stays in good working order, I see no harm in it remaining in tree.  I
>>> don't suspect there will be a lot of people interested in adding multiple
>>> distributions for a particular operating system.  To my knowledge, and I
>>> could be incorrect, Red Hat is the only OpenStack company with a paid and
>>> community version of OpenStack available simultaneously, and the paid
>>> version is only available on RHEL.  I think the risk of RPM based
>>> distributions plus their type count spiraling out of manageability is low.
>>> Even if the risk were high, I'd prefer to keep an open mind to facilitate
>>> an increase in diversity in our community (which is already fantastically
>>> diverse, btw ;)
>>>
>>> I am open to questions, comments or concerns.  Please feel free to voice
>>> them.
>>>
>>> Regards,
>>> -steve
>>>
>>>
>>
>

From skraynev at mirantis.com  Sun Sep 13 11:56:25 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Sun, 13 Sep 2015 14:56:25 +0300
Subject: [openstack-dev] [Tripleo][Heat][Nova][Ironic] Rich-network stuff
	and ironic hypervisor
Message-ID: <CAAbQNR=J6RqQn_ibeJtK3E-ad+cAeqb1FptcPe=QkQ15v6Hx9g@mail.gmail.com>

Hi folks,

While implementing the rich-network bp [1] (spec: [2]) we hit an issue on
TripleO [3]. As a temporary solution, patch [4] was reverted.

According to the traceback in the bug description, the issue is related to
the MAC addresses that must be used for a specific hypervisor [5] [6].
Previously in TripleO, when we created a VM without a 'port-id' in the
networks parameters, it was handled by Nova [7], so the new port got its
MAC address from the list of allowed addresses.

According to the rich-network BP, we want to boot the VM with a pre-created
port (which we create directly in Heat code). Unfortunately, in this case
the validation mentioned above fails due to mismatched MAC addresses (for
the port and for the hypervisor).

We discussed it with Derek, and for TripleO it looks like too much overhead
to obtain such MAC addresses and pass them into the Heat template. I also
personally think this is not a user-side issue, i.e. we should solve it
inside the Heat code ourselves. So we probably need to ask the Nova Ironic
driver (because we cannot ask Ironic directly from Heat) for this
information - the list of allowed MAC addresses - and then use it when
creating the port.

I have investigated the novaclient code but did not find any way to do this,
except calling to_dict() on the Hypervisor object, and I am not sure the
information would be present in that output.

So I'd like to ask the Nova folks for suggestions.
Also any thoughts are welcome.
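The validation referred to above lives in the Nova code cited in [5] and [6];
in essence it rejects a boot request whose pre-created port carries a MAC
address the hypervisor does not expose. A minimal sketch of that check (the
helper name and signature are made up for illustration, not Nova's actual
API):

```python
def validate_port_macs(port_macs, allowed_macs):
    """Reject a boot request if any pre-created port uses a MAC the
    hypervisor does not expose (mirrors the Nova check cited in [5]).

    allowed_macs=None means the hypervisor imposes no MAC restriction,
    which is why the non-Ironic path never trips over this.
    """
    if allowed_macs is None:
        return
    allowed = {m.lower() for m in allowed_macs}
    for mac in port_macs:
        if mac.lower() not in allowed:
            raise ValueError(
                "port MAC %s not in hypervisor's allowed set" % mac)
```

So the fix under discussion amounts to Heat fetching `allowed_macs` (via the
Nova Ironic driver) and creating the port with one of those addresses, so
this check passes.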


[1] https://blueprints.launchpad.net/heat/+spec/rich-network-prop
[2] https://review.openstack.org/#/c/130093
[3] https://bugs.launchpad.net/tripleo/+bug/1494747
[4] https://review.openstack.org/#/c/217753/
[5]
https://github.com/openstack/nova/blob/309301381039b162588e5f2d348b5b666c96bd3a/nova/network/neutronv2/api.py#L477-L488
[6]
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L662-L678
[7]
https://github.com/openstack/nova/blob/309301381039b162588e5f2d348b5b666c96bd3a/nova/network/neutronv2/api.py#L278

Regards,
Sergey.

From lgy181 at foxmail.com  Sun Sep 13 12:29:44 2015
From: lgy181 at foxmail.com (=?gb18030?B?THVvIEdhbmd5aQ==?=)
Date: Sun, 13 Sep 2015 20:29:44 +0800
Subject: [openstack-dev] [Ceilometer][Gnocchi]Gnocchi cannot deal with
	combined resource-id ?
In-Reply-To: <m0vbbfn0wt.fsf@danjou.info>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
 <m07fnxndw5.fsf@danjou.info> <tencent_0B963CB97CD845581696926B@qq.com>
 <m0vbbfn0wt.fsf@danjou.info>
Message-ID: <tencent_0ED4A670187C712B44B10FF5@qq.com>

>  I was talking about that:

>   https://git.openstack.org/cgit/openstack/ceilometer/tree/etc/ceilometer/gnocchi_resources.yaml#n67

 I checked it again; it seems gnocchi_resources.yaml was updated on Sep 1,
 2015, and my Ceilometer code is not up to date.
 Thanks, Julien.
  ------------------
 Luo Gangyi   luogangyi at cmss.chinamobile.com


 ------------------ Original ------------------
  From:  "Julien Danjou";<julien at danjou.info>;
 Date:  Sat, Sep 12, 2015 10:54 PM
 To:  "Luo Gangyi"<lgy181 at foxmail.com>; 
 Cc:  "OpenStack Development Mailing L"<openstack-dev at lists.openstack.org>; 
 Subject:  Re: ??? [openstack-dev] [Ceilometer][Gnocchi]Gnocchi cannot deal with combined resource-id ?

 

On Sat, Sep 12 2015, Luo Gangyi wrote:

>  I checked it again; no "ignored" is marked, seems like a devstack bug ;(

I was talking about that:

  https://git.openstack.org/cgit/openstack/ceilometer/tree/etc/ceilometer/gnocchi_resources.yaml#n67

>  And it's OK that gnocchi is not perfect now, but I still have some worries
> about how gnocchi deals with, or is going to deal with, the
> instance-xxxx-tapxxx case.
>  I see 'network.incoming.bytes' belongs to resource type 'instance'.
>  But no attribute of 'instance' can hold the tap name.
>  Although I can search all metric ids from the resource id (instance uuid),
> how do I distinguish metrics from different taps of an instance?

Where do you see network.incoming.bytes as being linked to an instance?
Reading gnocchi_resources.yaml I don't see that.
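For context on the thread subject: the "combined resource-id" concatenates
the instance UUID and the tap device name for per-interface network metrics.
Assuming an id shaped like `<instance-id>-tapXXXX` (illustrative only; the
exact format depends on the Ceilometer release and dispatcher), the parts can
be recovered like this:

```python
def split_instance_tap_id(resource_id):
    """Split an assumed combined '<instance-id>-tapXXXX' resource id into
    (instance_part, tap_name).  The format is an illustration, not a
    documented Gnocchi/Ceilometer contract."""
    idx = resource_id.find("-tap")
    if idx == -1:
        raise ValueError("no tap suffix in %r" % resource_id)
    return resource_id[:idx], resource_id[idx + 1:]
```

This is the kind of convention Luo is asking about: without some agreed
encoding (or a dedicated resource type with a tap attribute), the taps of one
instance cannot be told apart.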

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info

From skraynev at mirantis.com  Sun Sep 13 13:10:54 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Sun, 13 Sep 2015 16:10:54 +0300
Subject: [openstack-dev] [Heat] Integration Test Questions
In-Reply-To: <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
References: <D218618E.ADE81%sabeen.syed@rackspace.com>
 <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
Message-ID: <CAAbQNRmsNmFCuEp0jzdRRUCfaGphY=6giTR0xujQPZ5ziLmCQQ@mail.gmail.com>

Hi Sabeen,

I think that Pavlo described the whole picture really nicely.
So I'd just like to add a couple of my thoughts below:


Regards,
Sergey.

On 12 September 2015 at 15:08, Pavlo Shchelokovskyy <
pshchelokovskyy at mirantis.com> wrote:

> Hi Sabeen,
>
> thank you for the effort :) More tests are always better than fewer, but
> unfortunately we are limited by the power of the VM and the time available
> for gate jobs. This is why we do no exhaustive functional testing of all
> resource plugins' APIs, as every time a test goes out and makes an
> async API call to OpenStack, like creating a server, it always consumes
> time, and often consumes resources of the VM that runs other tests of the
> same test suite as well (we do run them in parallel), making other tests
> also slower to some degree. Also, even for not-async/lightweight
> resources (e.g. SaharaNodeGroupTemplate), testing all of them requires
> running a corresponding OpenStack service on the gate job, which will
> consume its resources even further.
>

Moreover, new additional services make the devstack installation longer, so
as a result we have less time for running tests.
Note that we also should be careful when adding new tests because, as
Pavlo mentioned, we run them in parallel. I suppose that everyone tries
to use unique names for stacks and for internal resources, or uses only
random ids, but I want to remind everyone to do it carefully anyway ;)
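The unique-naming caution above can be followed mechanically, e.g. by
suffixing a random token (the helper below is a made-up sketch, not part of
heat_integrationtests):

```python
import uuid

def unique_stack_name(prefix="test-stack"):
    """Generate a collision-free stack name so parallel test workers
    never race on the same stack (per the caution above)."""
    return "%s-%s" % (prefix, uuid.uuid4().hex[:12])
```

Resource names inside the template can be suffixed the same way when they
must be globally unique (e.g. Swift containers or Nova keypairs).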


>
> Below are my thoughts and comments inline:
>
> On Fri, Sep 11, 2015 at 6:46 PM, Sabeen Syed <sabeen.syed at rackspace.com>
> wrote:
> > Hi All,
> >
> > My coworker and I would like to start filling out some gaps in api
> coverage
> > that we see in the functional integration tests. We have one patch up for
> > review (https://review.openstack.org/#/c/219025/). We got a comment
> saying
> > that any new stack creation will prolong the testing cycle. We agree with
> > that and it got us thinking about a few things -
>
> this test should use the TestResource (or even RandomString if you do
> not need to ensure a particular order of events), as there is no point
> of using an actual server for the assertions this test makes on
> stack/resource events.
>

I personally prefer TestResource, because it's more flexible and



>
> >
> > We are planning on adding tests for the following api's: event api's,
> > template api's, software config api's, cancel stack updates, check stack
> > resources and show resource data. These are the api's that we saw aren't
> > covered in our current integration tests. Please let us know if you feel
> we
> > need tests for these upstream, if we're missing something or if it's
> already
> > covered somewhere.
>
> Just make sure all (ideally) of them use TestResource/RandomStrings.
> You might still have to tweak it a bit to support a successful/failed
> check though. There is a test for SC/SD in functional (and I actually
> wonder what it is doing in functional rather than scenario); is it not
> enough?
>
> > To conserve the creation of stacks would it make sense to add one test
> and
> > then under that we could call sub methods that will run tests against
> that
> > stack. So something like this:
> >
> > def _test_template_apis()
> >
> > def _test_softwareconfig_apis()
> >
> > def _test_event_apis()
> >
> > def test_event_template_softwareconfig_apis(self):
> >
> > stack_id = self.stack_create(...)
> >
> > self._test_template_apis(stack_id)
> >
> > self._test_event_apis(stack_id)
> >
> > self._test_softwareconfig_apis(stack_id)
>
> If you use TestResource and the like, the time to create a new stack
> for each test is not that long. And it is much better to have API
> tests separated as actual unit tests, otherwise failure in one API
> will fail the whole test which only leaves the developer wondering
> "what was that?" and makes it harder to find the root cause.
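The point about separate API tests can be sketched with plain unittest; the
class, the helpers, and their return values below are hypothetical stand-ins,
not the actual heat_integrationtests base classes:

```python
import unittest

# Hypothetical stand-ins for individual Heat API calls; with a cheap
# TestResource/RandomString stack, each test can afford its own stack.
def list_events(stack_id):
    return [{"stack": stack_id, "event": "CREATE_COMPLETE"}]

def get_template(stack_id):
    return {"resources": {"random": {"type": "OS::Heat::RandomString"}}}

class StackApiTest(unittest.TestCase):
    """One cheap stack per test, one API per test: a failure then points
    at exactly one API instead of aborting a combined mega-test midway."""

    def setUp(self):
        # stand-in for stack_create() with a lightweight template
        self.stack_id = "stack-%s" % id(self)

    def test_event_api(self):
        events = list_events(self.stack_id)
        self.assertEqual("CREATE_COMPLETE", events[0]["event"])

    def test_template_api(self):
        self.assertIn("resources", get_template(self.stack_id))
```

The combined `test_event_template_softwareconfig_apis` variant proposed
earlier saves stack creations but loses exactly this isolation.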
>
> >
> > The current tests are divided into two folders - scenario and
> functional. To
> > help with organization - under the functional folder, would it make
> sense to
> > add an 'api' folder, 'resource' folder and 'misc folder? Here is what
> we're
> > thinking about where each test can be put:
> >
> > API folder - test_create_update.py, test_preview.py
> >
> > Resource folder - test_autoscaling.py, test_aws_stack.py,
> > test_conditional_exposure.py, test_create_update_neutron_port.py,
> > test_encryption_vol_type.py, test_heat_autoscaling.py,
> > test_instance_group.py, test_resource_group.py, test_software_config.py,
> > test_swiftsignal_update.py
> >
> > Misc folder - test_default_parameters.py, test_encrypted_parameter.py,
> > test_hooks.py, test_notifications.py, test_reload_on_sighup.py,
> > test_remote_stack.py, test_stack_tags.py, test_template_resource.py,
> > test_validation.py
> >
> > Should we add to our README? For example, I see that we use TestResource
> as
> > a resource in some of our tests but we don't have an explanation of how
> to
> > set that up. I'd also like to add explanations about the pre-testhook and
> > post-testhook file and how that works and what each line does/what test
> it's
> > attached to.
>
> By all means :) If it flattens the learning curve for new Heat
> contributors, it's even better.
>
> > For the tests that we're working on, should we be adding a blueprint
> or
> > task somewhere to let everybody know that we're working on it so there
> is no
> > overlap?
>
> File a bug against Heat, make it a wishlist priority, and tag it
> 'functional-tests'. Assign it to yourself at will :) but please check out
> what we already have filed:
>
> https://bugs.launchpad.net/heat/+bugs?field.tag=functional-tests
>
> > From our observations, we think it would be beneficial to add more
> comments
> > to the existing tests.  For example, we could have a minimum of a short
> > blurb for each method.  Comments?
>
> A (multi-line) doc string for module/test method would suffice. For
> longer scenario tests we already do this, describing the scenario the
> test aims to pass through.
>
> > Should we add a 'high level coverage' summary in our README?  It could
> help
> > all of us know at a high level where we are at in terms of which
> resources
> > we have tests for and which api's, etc.
>
> As for APIs - I believe we could use some functional test coverage
> tool. I am not sure if there is a common thing already settled for in
> the community though. It might be a good cross-project topic to
> discuss during summit with Tempest community, they might already have
> something in the works.
>
> As for resources - we do try to exercise the native Heat ones that are
> there to provide the functionality of Heat itself (ASGs, RGs etc), but
> AFAIK we have no plans on deep testing all the other resources in a
> functional way.
>
> >
> > Let us know what you all think!
>
> Thanks again for bringing this up. "If it is not tested, it does not
> work" :)
>
> Best regards,
>
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From skraynev at mirantis.com  Sun Sep 13 13:18:55 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Sun, 13 Sep 2015 16:18:55 +0300
Subject: [openstack-dev] [Heat] Integration Test Questions
In-Reply-To: <CAAbQNRmsNmFCuEp0jzdRRUCfaGphY=6giTR0xujQPZ5ziLmCQQ@mail.gmail.com>
References: <D218618E.ADE81%sabeen.syed@rackspace.com>
 <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
 <CAAbQNRmsNmFCuEp0jzdRRUCfaGphY=6giTR0xujQPZ5ziLmCQQ@mail.gmail.com>
Message-ID: <CAAbQNRnFL+WVj5ehuqWbq_vB0SU5+8gKOTvz1Qb=M7AkPCq1+A@mail.gmail.com>

sorry, pressed send too early ;)

Regards,
Sergey.

On 13 September 2015 at 16:10, Sergey Kraynev <skraynev at mirantis.com> wrote:

> Hi Sabeen,
>
> I think that Pavlo described the whole picture really nicely.
> So I'd just like to add a couple of my thoughts below:
>
>
> Regards,
> Sergey.
>
> On 12 September 2015 at 15:08, Pavlo Shchelokovskyy <
> pshchelokovskyy at mirantis.com> wrote:
>
>> Hi Sabeen,
>>
>> thank you for the effort :) More tests are always better than fewer, but
>> unfortunately we are limited by the power of the VM and the time available
>> for gate jobs. This is why we do no exhaustive functional testing of all
>> resource plugins' APIs, as every time a test goes out and makes an
>> async API call to OpenStack, like creating a server, it always consumes
>> time, and often consumes resources of the VM that runs other tests of the
>> same test suite as well (we do run them in parallel), making other tests
>> also slower to some degree. Also, even for not-async/lightweight
>> resources (e.g. SaharaNodeGroupTemplate), testing all of them requires
>> running a corresponding OpenStack service on the gate job, which will
>> consume its resources even further.
>>
>
> Moreover, new additional services make the devstack installation longer, so
> as a result we have less time for running tests.
> Note that we also should be careful when adding new tests because, as
> Pavlo mentioned, we run them in parallel. I suppose that everyone tries
> to use unique names for stacks and for internal resources, or uses only
> random ids, but I want to remind everyone to do it carefully anyway ;)
>
>
>>
>> Below are my thoughts and comments inline:
>>
>> On Fri, Sep 11, 2015 at 6:46 PM, Sabeen Syed <sabeen.syed at rackspace.com>
>> wrote:
>> > Hi All,
>> >
>> > My coworker and I would like to start filling out some gaps in api
>> coverage
>> > that we see in the functional integration tests. We have one patch up
>> for
>> > review (https://review.openstack.org/#/c/219025/). We got a comment
>> saying
>> > that any new stack creation will prolong the testing cycle. We agree
>> with
>> > that and it got us thinking about a few things -
>>
>> this test should use the TestResource (or even RandomString if you do
>> not need to ensure a particular order of events), as there is no point
>> of using an actual server for the assertions this test makes on
>> stack/resource events.
>>
>
> I personally prefer TestResource, because it's more flexible and
>
    probably easier for understanding and for writing templates.

>
>
>
>>
>> >
>> > We are planning on adding tests for the following api's: event api's,
>> > template api's, software config api's, cancel stack updates, check stack
>> > resources and show resource data. These are the api's that we saw aren't
>> > covered in our current integration tests. Please let us know if you
>> feel we
>> > need tests for these upstream, if we're missing something or if it's
>> already
>> > covered somewhere.
>>
>> Just make sure all (ideally) of them use TestResource/RandomStrings.
>> You might still have to tweak it a bit to support a successful/failed
>> check though. There is a test for SC/SD in functional (and I actually
>> wonder what it is doing in functional rather than scenario); is it not
>> enough?
>>
>> > To conserve the creation of stacks would it make sense to add one test
>> and
>> > then under that we could call sub methods that will run tests against
>> that
>> > stack. So something like this:
>> >
>> > def _test_template_apis()
>> >
>> > def _test_softwareconfig_apis()
>> >
>> > def _test_event_apis()
>> >
>> > def test_event_template_softwareconfig_apis(self):
>> >
>> > stack_id = self.stack_create(...)
>> >
>> > self._test_template_apis(stack_id)
>> >
>> > self._test_event_apis(stack_id)
>> >
>> > self._test_softwareconfig_apis(stack_id)
>>
>> If you use TestResource and the like, the time to create a new stack
>> for each test is not that long. And it is much better to have API
>> tests separated as actual unit tests, otherwise failure in one API
>> will fail the whole test which only leaves the developer wondering
>> "what was that?" and makes it harder to find the root cause.
>>
>> >
>> > The current tests are divided into two folders - scenario and
>> functional. To
>> > help with organization - under the functional folder, would it make
>> sense to
>> > add an 'api' folder, 'resource' folder and 'misc folder? Here is what
>> we're
>> > thinking about where each test can be put:
>> >
>> > API folder - test_create_update.py, test_preview.py
>> >
>> > Resource folder - test_autoscaling.py, test_aws_stack.py,
>> > test_conditional_exposure.py, test_create_update_neutron_port.py,
>> > test_encryption_vol_type.py, test_heat_autoscaling.py,
>> > test_instance_group.py, test_resource_group.py, test_software_config.py,
>> > test_swiftsignal_update.py
>> >
>> > Misc folder - test_default_parameters.py, test_encrypted_parameter.py,
>> > test_hooks.py, test_notifications.py, test_reload_on_sighup.py,
>> > test_remote_stack.py, test_stack_tags.py, test_template_resource.py,
>> > test_validation.py
>> >
>> > Should we add to our README? For example, I see that we use
>> TestResource as
>> > a resource in some of our tests but we don't have an explanation of how
>> to
>> > set that up. I'd also like to add explanations about the pre-testhook and
>> > post-testhook file and how that works and what each line does/what test
>> it's
>> > attached to.
>>
>> By all means :) If it flattens the learning curve for new Heat
>> contributors, it's even better.
>>
>
        Agreed. More documentation and orderliness are welcome.

>
>> > For the tests that we're working on, should we be adding a blueprint
>> or
>> > task somewhere to let everybody know that we're working on it so there
>> is no
>> > overlap?
>>
>> File a bug against Heat, make it a wishlist priority, and tag it
>> 'functional-tests'. Assign it to yourself at will :) but please check out
>> what we already have filed:
>>
>> https://bugs.launchpad.net/heat/+bugs?field.tag=functional-tests
>
>
Also, you may take some of the assigned bugs which have not had progress for
     a long time, but please ask the assigned person about it first.

>
>>
>> > From our observations, we think it would be beneficial to add more
>> comments
>> > to the existing tests.  For example, we could have a minimum of a short
>> > blurb for each method.  Comments?
>
>
>> A (multi-line) doc string for module/test method would suffice. For
>> longer scenario tests we already do this, describing the scenario the
>> test aims to pass through.
>>
>
      + 1

>
>> > Should we add a 'high level coverage' summary in our README?  It could
>> help
>> > all of us know at a high level where we are at in terms of which
>> resources
>> > we have tests for and which api's, etc.
>>
>> As for APIs - I believe we could use some functional test coverage
>> tool. I am not sure if there is a common thing already settled for in
>> the community though. It might be a good cross-project topic to
>> discuss during summit with Tempest community, they might already have
>> something in the works.
>>
>
      +100% for this idea. It will be really useful for the whole community.

>
>> As for resources - we do try to exercise the native Heat ones that are
>> there to provide the functionality of Heat itself (ASGs, RGs etc), but
>> AFAIK we have no plans on deep testing all the other resources in a
>> functional way.
>>
>> >
>> > Let us know what you all think!
>>
>> Thanks again for bringing this up. "If it is not tested, it does not
>> work" :)
>>
>> Best regards,
>>
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com
>>
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/edd0f92a/attachment.html>

From stdake at cisco.com  Sun Sep 13 17:34:51 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Sun, 13 Sep 2015 17:34:51 +0000
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
Message-ID: <D21AFAE0.12587%stdake@cisco.com>

Response inline.

From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
Date: Sunday, September 13, 2015 at 1:35 AM
To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types

On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
Response inline.

From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
Date: Saturday, September 12, 2015 at 11:34 PM
To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types



Sam Yaple

On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:


From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
Date: Saturday, September 12, 2015 at 11:01 PM
To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types


On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
Hey folks,

Sam had asked a reasonable set of questions regarding a patchset:
https://review.openstack.org/#/c/222893/

The purpose of the patchset is to enable both RDO and RHOS as binary choices on RHEL platforms.  I suspect over time, from-source deployments have the potential to become the norm, but the business logistics of such a change are going to take some significant time to sort out.

Red Hat has two distros of OpenStack, neither of which is from source.  One is free, called RDO, and the other is paid, called RHOS.  In order to obtain support for RHEL VMs running in an OpenStack cloud, you must be running on RHOS RPM binaries.  You must also be running on RHEL.  It remains to be seen whether Red Hat will actively support Kolla deployments with a RHEL+RHOS set of packaging in containers, but my hunch says they will.  It is in Kolla's best interest to implement this model and not make it hard on Operators, since many of them do indeed want Red Hat's support structure for their OpenStack deployments.

Now to Sam's questions:
"Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do we add? What's our policy on adding a new type?"

I'm not immediately clear on how binary fits in.  We could make binary synonymous with the community supported version (RDO) while still implementing the binary RHOS version.  Note Kolla does not "support" any distribution or deployment of OpenStack; Operators will have to look to their vendors for support.

If everything between centos+rdo and rhel+rhos is mostly the same, then I would think it would make more sense to just use the base ('rhel' in this case) to branch off any differences in the templates. This would also allow for the least amount of change and the most generic implementation of this vendor-specific packaging. This would also match what we do with oraclelinux: we do not have a special type for that, and any specifics are handled by an if statement around 'oraclelinux' and not some special type.

I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also runs on RHEL.  I want to enable Red Hat customers to make a choice to have a supported operating system but not a supported Cloud environment.  The answer here is RHEL + RDO.  This leads to full support down the road, if the Operator chooses to pay Red Hat for it, via an easy transition to RHOS.

I am against including vendor-specific things like RHOS in Kolla outright like you are proposing. Suppose another vendor comes along with a new base and new packages. They are willing to maintain it, but it's something that no one but their customers, with their licensing, can use. This is not something that belongs in Kolla, and I am unsure that it is even appropriate for it to belong in OpenStack as a whole. Unless RHEL+RHOS can be used by those that do not have a license for it, I do not agree with adding it at all.

Sam,

Someone stepping up to maintain a completely independent set of docker images hasn't happened.  To date nobody has done that.  If someone were to make that offer, and it was a significant change, I think the community as a whole would have to evaluate such a drastic change.  That would certainly increase our implementation and maintenance burden, which we don't want to do.  I don't think what you propose would be in the best interest of the Kolla project, but I'd have to see the patch set to evaluate the scenario appropriately.

What we are talking about is 5 additional lines to enable RHEL+RHOS specific repositories, which is not very onerous.

The fact that you can't use it directly has little bearing on whether it's valid technology for OpenStack.  There are already two well-defined historical precedents for non-licensed unusable integration in OpenStack.  Cinder has 55 [1] volume drivers which they SUPPORT.  At least 80% of them are completely proprietary hardware, which in reality is mostly just software that is impossible to use without a license.  There are 41 [2] Neutron drivers registered on the Neutron driver page; almost the entirety require proprietary licenses for what amounts to integration to access proprietary software.  The OpenStack preferred license is ASL for a reason: to be business friendly.  Licensed software has a place in the world of OpenStack, even if it only serves as an integration point, which the proposed patch does.  We are consistent with community values on this point or I wouldn't have bothered proposing the patch.

We want to encourage people to use Kolla for proprietary solutions if they so choose.  This is how support manifests, which increases the strength of the Kolla project.  The presence of support increases the likelihood that Kolla will be adopted by Operators.  If you're asking the Operators to maintain a fork for those 5 RHOS repo lines, that seems unreasonable.

I'd like to hear other Core Reviewer opinions on this matter and will hold a majority vote on this thread as to whether we will facilitate integration with third party software such as the Cinder block drivers, the Neutron network drivers, and various for-pay versions of OpenStack such as RHOS.  I'd like all core reviewers to weigh in, please.  Without a complete vote it will be hard to gauge what the Kolla community really wants.

Core reviewers:
Please vote +1 if you ARE satisfied with integration with third party software that is unusable without a license, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes.
Please vote -1 if you ARE NOT satisfied with integration with third party software that is unusable without a license, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes.

A bit of explanation on your vote might be helpful.

My vote is +1.  I have already provided my rationale.

Regards,
-steve

[1] https://wiki.openstack.org/wiki/CinderSupportMatrix
[2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers


I appreciate you calling a vote so early. But I haven't had my questions answered well enough yet to even vote on the matter at hand.

In this situation the closest thing we have to a plugin system like Cinder's or Neutron's is our header/footer system. What you are proposing is integrating a proprietary solution into the core of Kolla. Those Cinder and Neutron plugins have external components, and those external components are not baked into the project.

What happens if and when the RHOS packages require different tweaks in the various containers? What if it requires changes to the Ansible playbooks? It begins to balloon out past 5 lines of code.

Unfortunately, the community _won't_ get to vote on whether or not to implement those changes because RHOS is already in place. That's why I am asking the questions now, as this _right_ _now_ is the significant change you are talking about, regardless of the lines of code.

So the question is not whether we are going to integrate 3rd party plugins, but whether we are going to allow companies to build proprietary products in the Kolla repo. If we allow RHEL+RHOS then we would need to allow another distro+company packaging and potential Ansible tweaks to get it to work for them.

If you really want to do what Cinder and Neutron do, we need a better system for injecting code. That would be much closer to the plugins that the other projects have.

I'd like to have a discussion about this rather than immediately call for a vote which is why I asked you to raise this question in a public forum in the first place.


Sam,

While a true code injection system might be interesting and would be more parallel with the plugin model used in cinder and neutron (and to some degree nova), those various systems didn't begin that way.  Their driver code at one point was completely integrated.  Only after 2-3 years was the code broken into a fully injectable state.  I think that is an awfully high bar to set to sort out the design ahead of time.  One of the reasons Neutron has taken so long to mature is that the Neutron community attempted to do plugins at too early a stage, which created big gaps in unit and functional tests.  A more appropriate design would be for that pattern to emerge from the system over time as people begin to adapt various distro tech to Kolla.  If you look at the patch in gerrit, there is one clear pattern, "Setup distro repos", which at some point in the future could be made injectable, much as headers and footers are today.

As for building proprietary products in the Kolla repository, the license is ASL, which means it is inherently not proprietary.  I am fine with the code base integrating with proprietary software as long as the license terms are met; someone has to pay the mortgages of the thousands of OpenStack developers.  We should encourage growth of OpenStack, and one of the ways for that to happen is to be business friendly.  This translates into first knowing the world is increasingly adopting open source methodologies and facilitating that transition, and second accepting the world has a whole slew of proprietary software that already exists today that requires integration.

Nonetheless, we have a difference of opinion on this matter, and I want this work to merge prior to rc1.  Since this is a project policy decision and not a technical issue, it makes sense to put it to a wider vote to either unblock or kill the work.  It would be a shame if we rejected all driver and supported-distro integration because we as a community take an anti-business stance in our policies, but I'll live by what the community decides.  This is not a decision either you or I may dictate, which is why it has been put to a vote.

Regards
-steve



For oracle linux, I'd like to keep RDO for oracle linux and from-source on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the patch set needs some later work here to address this point in more detail, but as is, 'binary' covers oracle linux.

Perhaps what we should do is get rid of the binary type entirely.  Ubuntu doesn't really have a binary type, they have a cloudarchive type, so binary doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two distributions of OpenStack, the same logic wouldn't apply to providing a full support onramp for Ubuntu customers.  Oracle doesn't provide a binary type either; their binary type is really RDO.

The binary packages for Ubuntu are _packaged_ by the cloudarchive team. But when an OpenStack release coincides with an LTS release (Icehouse and 14.04 was the last one), you do not add a new repo, because the packages are in the main Ubuntu repo.

Debian provides its own packages as well. I do not want a type name per distro. 'binary' catches all OpenStack things packaged by a distro.


FWIW I never liked the transition away from rdo in the repo names to binary.  I guess I should have -1'ed those reviews back then, but I think it's time to either revisit the decision or compromise that binary and rdo mean the same thing in a centos and rhel world.

Regards
-steve


Since we implement multiple bases, some of which are not RPM based, it doesn't make much sense to me to have rhel and rdo as a type, which is why we removed rdo in the first place in favor of the more generic 'binary'.


As such, the implied second question, "How many more do we add?", sort of sounds like "how many do we support?".  The answer to the second question is none; again, the Kolla community does not support any deployment of OpenStack.  To the question as posed, how many we add, the answer is that it is really up to community members willing to implement and maintain the work.  In this case, I have personally stepped up to implement RHOS and maintain it going forward.

Our policy on adding a new type could be simple or onerous.  I prefer simple.  If someone is willing to write the code and maintain it so that it stays in good working order, I see no harm in it remaining in tree.  I don't suspect there will be a lot of people interested in adding multiple distributions for a particular operating system.  To my knowledge, and I could be incorrect, Red Hat is the only OpenStack company with a paid and a community version of OpenStack available simultaneously, and the paid version is only available on RHEL.  I think the risk of RPM-based distributions plus their type count spiraling out of manageability is low.  Even if the risk were high, I'd prefer to keep an open mind to facilitate an increase in diversity in our community (which is already fantastically diverse, btw ;)

I am open to questions, comments or concerns.  Please feel free to voice them.

Regards,
-steve




-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/b17361a0/attachment.html>

From mestery at mestery.com  Sun Sep 13 21:25:38 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Sun, 13 Sep 2015 16:25:38 -0500
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CACjuMbxy3AnJcEU02oNdx+4s3tr8EDYoHg_rMtufWNa1zep4Vw@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <CACjuMbxy3AnJcEU02oNdx+4s3tr8EDYoHg_rMtufWNa1zep4Vw@mail.gmail.com>
Message-ID: <CAL3VkVxasb2sec1bTaxkmYD1HxmDedtj3cMScnGK3rWxHwJwuw@mail.gmail.com>

On Sat, Sep 12, 2015 at 12:32 PM, Ben Pfaff <blp at nicira.com> wrote:

> Are you planning to remain involved with OpenStack?
>
>
Yes, that's the plan!


> On Fri, Sep 11, 2015 at 2:12 PM, Kyle Mestery <mestery at mestery.com> wrote:
> > I'm writing to let everyone know that I do not plan to run for Neutron
> PTL
> > for a fourth cycle. Being a PTL is a rewarding but difficult job, as
> Morgan
> > recently put it in his non-candidacy email [1]. But it goes further than
> > that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> > full time job. In the case of Neutron, it's more than a full time job,
> it's
> > literally an always on job.
> >
> > I've tried really hard over my three cycles as PTL to build a stronger
> web
> > of trust so the project can grow, and I feel that's been accomplished. We
> > have a strong bench of future PTLs and leaders ready to go, I'm excited
> to
> > watch them lead and help them in anyway I can.
> >
> > As was said by Zane in a recent email [3], while Heat may have pioneered
> the
> > concept of rotating PTL duties with each cycle, I'd like to highly
> encourage
> > Neutron and other projects to do the same. Having a deep bench of leaders
> > supporting each other is important for the future of all projects.
> >
> > See you all in Tokyo!
> > Kyle
> >
> > [1]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> > [2]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> > [3]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
> >
> >
> >
>
>
>
> --
> "I don't normally do acked-by's.  I think it's my way of avoiding
> getting blamed when it all blows up."               Andrew Morton
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150913/dc8be975/attachment.html>

From sbaker at redhat.com  Sun Sep 13 23:33:06 2015
From: sbaker at redhat.com (Steve Baker)
Date: Mon, 14 Sep 2015 11:33:06 +1200
Subject: [openstack-dev] [Heat] Scattered thoughts on the PTL election
In-Reply-To: <55F2FB52.4060706@redhat.com>
References: <55F2FB52.4060706@redhat.com>
Message-ID: <55F607B2.4090701@redhat.com>

On 12/09/15 04:03, Zane Bitter wrote:
> The Heat project pioneered the concept of rotating the PTL for every 
> development cycle, and so far all of the early (before 2013) 
> developers who are still involved have served as PTL. I think this has 
> been a *tremendous* success for the project, and a testament to the 
> sheer depth of leadership talent that we are fortunate to have (as 
> well as, it must be said, to Thierry and the release management team 
> and their ability to quickly bring new people up to speed every 
> cycle). We're already seeing a lot of other projects moving toward the 
> PTL role having a shorter time horizon, and I suspect the main reason 
> they're not moving more rapidly in that direction is that it takes 
> time to build up the expectation of rotating succession and make sure 
> that the leaders within each project are preparing to take their turn. 
> So I like to think that we've been a good influence on the whole 
> OpenStack community in this respect :)
>
> (I'd also note that this expectation is very helpful in terms of 
> spreading the workload around and reducing the amount of work that 
> falls on a single person. To the extent that it is possible to be the 
> PTL of the Heat project and still get some real work done, not just 
> clicking on things in Launchpad - though, be warned, there is still 
> quite a bit of that involved.)
>
> However, there is one area in which we have not yet been as 
> successful: so far all of the PTLs have been folks that were early 
> developers of the project. IMHO it's time for that to change: we have 
> built an amazing team of folks since then who are great leaders in the 
> community and who now have the experience to step up. I can think of 
> at least 4 excellent potential candidates just off the top of my head.
>

Zane is absolutely correct; I only became PTL again because we needed to 
prime the pump for successors. I too can think of many potential 
candidates who would be more than capable of taking this on for Mitaka 
and beyond.

One thing about being PTL is that the mindset and habits never leave 
you. Ongoing tasks such as bug triage, stable backports, and keeping the 
gate healthy continue into PTL retirement. In this sense I see a health 
metric of the culture of a project as being how many ex-PTLs continue to 
be engaged with it (leaving aside the many legitimate reasons people may 
have for moving on to other projects).

> Obviously there is a time commitment involved - in fact Flavio's 
> entire blog post[1] is great and you should definitely read that first 
> - but if you are already devoting a majority of your time to the 
> upstream Heat project and you think this is likely to be sustainable 
> for the next 6 months, then please run for PTL!
>
> (You may safely infer from this that I won't be running this time.)
>
(And neither will I :)


From gilles at redhat.com  Mon Sep 14 01:26:55 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Mon, 14 Sep 2015 11:26:55 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <CAGnj6as7_LKmLV=7H_U+mns8KSkCBfvCpJoqKy-3NbCUzW4rSA@mail.gmail.com>
References: <55F27CB7.2040101@redhat.com> <55F2AA41.9070206@kent.ac.uk>
 <55F2BA40.5010108@redhat.com>
 <CAGnj6as7_LKmLV=7H_U+mns8KSkCBfvCpJoqKy-3NbCUzW4rSA@mail.gmail.com>
Message-ID: <55F6225F.7080002@redhat.com>



On 12/09/15 00:52, Morgan Fainberg wrote:
> On Fri, Sep 11, 2015 at 4:25 AM, Gilles Dubreuil <gilles at redhat.com
> <mailto:gilles at redhat.com>> wrote:
> 
> 
> 
>     On 11/09/15 20:17, David Chadwick wrote:
>     > Whichever approach is adopted you need to consider the future and the
>     > longer term objective of moving to fully hierarchical names. I believe
>     > the current Keystone approach is only an interim one, as it only
>     > supports partial hierarchies. Fully hierarchical names has been
>     > discussed in the Keystone group, but I believe that this has been
>     > shelved until later in order to get a quick fix released now.
>     >
>     > regards
>     >
>     > David
>     >
> 
>     Thanks David,
> 
>     That's interesting.
>     So sub projects are pushing the issue further down.
>     And maybe one day sub domains and sub users?
> 
>     keystone_role_user {
>     'user.subuser::domain1 at project.subproject.subsubproject::domain2':
>     roles => [...]
>     }
> 
>     or
> 
>     keystone_role_user {'user.subuser':
>       user_domain => 'domain1',
>       tenant => 'project.subproject',
>       tenant_domain => 'domain2',
>       roles => [...]
>     }
> 
>     I tend to think the domain must stick with the name it's associated
>     with, otherwise we have to say 'here the domain for this and that, etc'.
> 
> 
> 
>     > On 11/09/2015 08:03, Gilles Dubreuil wrote:
>     >> Hi,
>     >>
>     >> Today in the #openstack-puppet channel a discussion about the pro and
>     >> cons of using domain parameter for Keystone V3 has been left opened.
>     >>
>     >> The context
>     >> ------------
>     >> Domain names are needed in Openstack Keystone V3 for identifying
>     users
>     >> or groups (of users) within different projects (tenant).
>     >> Users and groups are uniquely identified within a domain (or a
>     realm as
>     >> opposed to project domains).
>     >> Then projects have their own domain so users or groups can be
>     assigned
>     >> to them through roles.
>     >>
>     >> In Kilo, Keystone V3 has been introduced as an experimental feature.
>     >> Puppet providers such as keystone_tenant, keystone_user,
>     >> keystone_role_user have been adapted to support it.
>     >> Also new ones have appeared (keystone_domain) or are on their way
>     >> (keystone_group, keystone_trust).
>     >> And to be backward compatible with V2, the default domain is used
>     when
>     >> no domain is provided.
>     >>
>     >> In existing providers such as keystone_tenant, the domain can be
>     either
>     >> part of the name or provided as a parameter:
>     >>
>     >> A. The 'composite namevar' approach:
>     >>
>     >>    keystone_tenant {'projectX::domainY': ... }


>     >> B. The 'meaningless name' approach:
>     >>
>     >>   keystone_tenant {'myproject': name='projectX',
>     domain=>'domainY', ...}
>     >>

Just for the sake of the discussion, I should have mentioned the
'classic' approach using a meaningful title (where name = title) which
doesn't work for the domain scope:

A first project comes with:
keystone_tenant {'projectX': domain => 'domainY', ...}

Another tenant with same name in a different domain cannot be created:

keystone_tenant {'projectX': domain => 'domainZ', ...}

>     >> Notes:
>     >>  - Actually using both combined should work too with the domain
>     >> supposedly overriding the name part of the domain.
>     >>  - Please look at [1] this for some background between the two
>     approaches:
>     >>
>     >> The question
>     >> -------------
>     >> Decide between the two approaches, the one we would like to
>     retain for
>     >> puppet-keystone.
>     >>
>     >> Why it matters?
>     >> ---------------
>     >> 1. Domain names are mandatory in every user, group or project.
>     Besides
>     >> the backward compatibility period mentioned earlier, where no domain
>     >> means using the default one.
>     >> 2. Long term impact
>     >> 3. Both approaches are not completely equivalent which different
>     >> consequences on the future usage.
>     >> 4. Being consistent
>     >> 5. Therefore the community to decide
>     >>
>     >> The two approaches are not technically equivalent and it also depends
>     >> what a user might expect from a resource title.
>     >> See some of the examples below.
>     >>
>     >> Because OpenStack DB tables have IDs to uniquely identify objects, it
>     >> can have several objects of a same family with the same name.
>     >> This has made things difficult for Puppet resources to guarantee
>     >> idempotency of having unique resources.
>     >> In the context of Keystone V3 domain, hopefully this is not the
>     case for
>     >> the users, groups or projects but unfortunately this is still the
>     case
>     >> for trusts.

I'm going back about this because I think it's important.

When developing OpenStack providers, we grew the habit of thinking in
terms of having multiple objects of the same family (multiple rows in a
table, the ID being the primary key).

Because of such experience, the reflex is to think of conflicts between
resources (discovery and self.instances). This has been addressed by
using the first resource found (after ordering for consistency). This is
not ideal, but it works for bootstrapping; more advanced use is to be done
directly with the openstack clients.
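
As an illustration (hypothetical helper, not actual provider code), the
"first resource found after ordering" idea amounts to picking
deterministically among objects that share a name:

```python
# Hypothetical sketch: when several objects share the same name (as trusts
# can), pick one deterministically by ordering on the ID before choosing.
def pick_one(candidates):
    """Return the first candidate after sorting by id, for consistency."""
    return sorted(candidates, key=lambda obj: obj['id'])[0]

trusts = [
    {'id': '7f0a2b670f48437ba1204b17b7e3e9e9', 'trustor': 'alice'},
    {'id': '4b8255591949484781da5d86f2c47be7', 'trustor': 'alice'},
]
# The same trust is chosen on every run, regardless of input order.
print(pick_one(trusts)['id'])  # 4b8255591949484781da5d86f2c47be7
```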

Meanwhile there is no such need for users, groups and projects because
they are unique within a given domain. This could be a good reason to
safely use 'meaningless names'.

The exception is trusts, which have to be treated separately: a trust can
be created many times with the same values, the ID again being the only
thing to distinguish them.



>     >>
>     >> Pros/Cons
>     >> ----------
>     >> A.
>     >>   Pros
>     >>     - Easier names
>     >>   Cons
>     >>     - Titles have no meaning!
>     >>     - Cases where 2 or more resources could exists
>     >>     - More difficult to debug
>     >>     - Titles mismatch when listing the resources (self.instances)
>     >>
>     >> B.
>     >>   Pros
>     >>     - Unique titles guaranteed
>     >>     - No ambiguity between resource found and their title
>     >>   Cons
>     >>     - More complicated titles
>     >>
>     >> Examples
>     >> ----------
>     >> = Meaningless name example 1=
>     >> Puppet run:
>     >>   keystone_tenant {'myproject': name='project_A',
>     domain=>'domain_1', ...}
>     >>
>     >> Second run:
>     >>   keystone_tenant {'myproject': name='project_A',
>     domain=>'domain_2', ...}
>     >>
>     >> Result/Listing:
>     >>
>     >>   keystone_tenant { 'project_A::domain_1':
>     >>     ensure  => 'present',
>     >>     domain  => 'domain_1',
>     >>     enabled => 'true',
>     >>     id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>     >>   }
>     >>    keystone_tenant { 'project_A::domain_2':
>     >>     ensure  => 'present',
>     >>     domain  => 'domain_2',
>     >>     enabled => 'true',
>     >>     id      => '4b8255591949484781da5d86f2c47be7',
>     >>   }
>     >>
>     >> = Composite name example 1  =
>     >> Puppet run:
>     >>   keystone_tenant {'project_A::domain_1', ...}
>     >>
>     >> Second run:
>     >>   keystone_tenant {'project_A::domain_2', ...}
>     >>
>     >> # Result/Listing
>     >>   keystone_tenant { 'project_A::domain_1':
>     >>     ensure  => 'present',
>     >>     domain  => 'domain_1',
>     >>     enabled => 'true',
>     >>     id      => '7f0a2b670f48437ba1204b17b7e3e9e9',
>     >>    }
>     >>   keystone_tenant { 'project_A::domain_2':
>     >>     ensure  => 'present',
>     >>     domain  => 'domain_2',
>     >>     enabled => 'true',
>     >>     id      => '4b8255591949484781da5d86f2c47be7',
>     >>    }
>     >>
>     >> = Meaningless name example 2  =
>     >> Puppet run:
>     >>   keystone_tenant {'myproject1': name='project_A',
>     domain=>'domain_1', ...}
>     >>   keystone_tenant {'myproject2': name='project_A',
>     domain=>'domain_1',
>     >> description=>'blah'...}
>     >>
>     >> Result: project_A in domain_1 has a description
>     >>
>     >> = Composite name example 2  =
>     >> Puppet run:
>     >>   keystone_tenant {'project_A::domain_1', ...}
>     >>   keystone_tenant {'project_A::domain_1', description => 'blah', ...}
>     >>
>     >> Result: Error because the resource must be unique within a catalog
>     >>
>     >> My vote
>     >> --------
>     >> I would love to have the approach A for easier name.
>     >> But I've seen the challenge of maintaining the providers behind the
>     >> curtains and the confusion it creates with name/titles and when
>     not sure
>     >> about the domain we're dealing with.
>     >> Also I believe that supporting self.instances consistently with
>     >> meaningful name is saner.
>     >> Therefore I vote B
>     >>

In the light of the evolution of the discussion, I now vote A!

>     >> Finally
>     >> ------
>     >> Thanks for reading that far!
>     >> To choose, please provide feedback with more pros/cons, examples and
>     >> your vote.
>     >>
>     >> Thanks,
>     >> Gilles
> 
> 
> Please keep in mind that there are no "reserved" characters in projects
> and/or domains. It is possible to have "::" in both, or ":" (or any other
> random entries), which could make the composite namevar less desirable
> in some cases. I expect this to be a somewhat edge case (as using :: in
> a domain and/or project in places where it would impact the split would
> be somewhat odd); it can likely also be stated that using puppet
> requires avoiding the '::' or ':' in this manner.
> 


I believe '::' was chosen because it is less likely to cause a syntax
conflict.

But if there is any risk of a conflict in the future, that would force us
to stay away from composite names.
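The ambiguity around '::' discussed above can be sketched with a small
illustration (Python, purely hypothetical; the actual providers are Puppet
types written in Ruby): a rightmost-split rule keeps 'name::domain' titles
parseable, but silently mis-splits whenever the project name itself
contains '::'.

```python
def split_title(title, default_domain="Default"):
    # Hypothetical rightmost-split rule for 'name::domain' titles.
    # '::' is not reserved in Keystone, which is exactly the ambiguity
    # discussed above: a project literally named 'a::b' is mis-split.
    name, sep, domain = title.rpartition("::")
    if not sep:
        # No separator: fall back to the default domain.
        return title, default_domain
    return name, domain

print(split_title("project_A::domain_1"))  # ('project_A', 'domain_1')
print(split_title("a::b::domain_1"))       # ('a::b', 'domain_1')
print(split_title("project_A"))            # ('project_A', 'Default')
```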

> I am always a fan of explicit variables personally. It is "more complex"
> but it is also explicit. This just eliminates ambiguity.
> 

+1

I think this is why some Keystone developers have recommended explicitly
using the domain name.

We have to keep in mind that we are still in a transition period between
V2 and V3. For backward-compatibility reasons the domain is not
mandatory, which implicitly means the default domain is used. Meanwhile,
as discussed in a recent weekly meeting, the domain is to be made
mandatory.

This is fine with either approach A or B; in both cases the domain will
be made mandatory.


> --Morgan
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From edgar.magana at workday.com  Mon Sep 14 01:28:08 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Mon, 14 Sep 2015 01:28:08 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <07AF2ED9-627D-4693-9DBA-878E3229F4F5@workday.com>

Let me join the rest of the Neutron team in thanking you for the great effort leading this amazing group of contributors.

I do believe it is great to have a rotational leadership and I do encourage the next Neutron PTL to do the same.

Cheers!

Edgar

From: Kyle Mestery
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, September 11, 2015 at 2:12 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [neutron] PTL Non-Candidacy

I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full-time job. In the case of Neutron, it's more than a full-time job; it's literally an always-on job.

I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go; I'm excited to watch them lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/8cb82c6e/attachment.html>

From sbaker at redhat.com  Mon Sep 14 01:51:25 2015
From: sbaker at redhat.com (Steve Baker)
Date: Mon, 14 Sep 2015 13:51:25 +1200
Subject: [openstack-dev] [Tripleo][Heat][Nova][Ironic] Rich-network
 stuff and ironic hypervisor
In-Reply-To: <CAAbQNR=J6RqQn_ibeJtK3E-ad+cAeqb1FptcPe=QkQ15v6Hx9g@mail.gmail.com>
References: <CAAbQNR=J6RqQn_ibeJtK3E-ad+cAeqb1FptcPe=QkQ15v6Hx9g@mail.gmail.com>
Message-ID: <55F6281D.7000408@redhat.com>

On 13/09/15 23:56, Sergey Kraynev wrote:
> Hi folks,
>
> Currently, during the implementation of the rich-network BP [1] (spec
> [2]), we hit an issue on TripleO [3]. As a temporary solution, patch [4]
> was reverted.
>
> According to the traceback mentioned in the bug description, the current
> issue is related to the MAC addresses that should be used for a specific
> hypervisor [5] [6].
> Previously in TripleO, when we created a VM without a 'port-id' in the
> networks parameters, it was handled by Nova [7], so the new port got its
> MAC address from the list of allowed addresses.
>
> According to the rich-network BP, we want to use a pre-created port
> (which we create directly in Heat code) when booting the VM.
> Unfortunately, in this case the validation mentioned above fails due to
> differing MAC addresses (for the port and for the hypervisor).
>
> We discussed it with Derek, and it looks like for TripleO it is too much
> overhead to obtain such MAC addresses and pass them in the Heat
> template. I also personally think it is not a user-side issue, i.e. we
> should solve it inside the Heat code ourselves. So we probably need to
> ask the Nova Ironic driver (because we cannot ask Ironic directly from
> Heat) for the list of allowed MAC addresses, and then use it when
> creating the port.
>
> I have investigated the novaclient code but did not find any way to do
> this, except calling to_dict() on the Hypervisor object, and I am not
> sure the information will be present in that output.
>
> So I'd like to ask the Nova folks for suggestions.
> Also, any thoughts are welcome.
>
>
> [1] https://blueprints.launchpad.net/heat/+spec/rich-network-prop
> [2] https://review.openstack.org/#/c/130093
> [3] https://bugs.launchpad.net/tripleo/+bug/1494747
> [4] https://review.openstack.org/#/c/217753/
> [5] 
> https://github.com/openstack/nova/blob/309301381039b162588e5f2d348b5b666c96bd3a/nova/network/neutronv2/api.py#L477-L488
> [6] 
> https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L662-L678
> [7] 
> https://github.com/openstack/nova/blob/309301381039b162588e5f2d348b5b666c96bd3a/nova/network/neutronv2/api.py#L278
>
We may need to reconsider always pre-creating the port, given the above
scenario plus comments like this [8].

One option would be to only pre-create the port if the template specifies
an explicit subnet or port_extra_properties, and otherwise let nova
create the port on the specified network.

This would have implications for handling replace and rollback[9]. 
Either the server resource also needs to build resource data 
corresponding to the external_ports as well as the internal_ports, or 
the prepare_ports_for_replace needs to discover external ports too with 
a nova server get.

[8] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070648.html
[9] https://review.openstack.org/#/c/221032
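The pre-create rule suggested above could be sketched as follows (a
hypothetical helper, assuming the template's network entry is a plain dict
using the property names mentioned in this thread; not actual Heat code):

```python
def needs_precreated_port(network):
    # Pre-create the Neutron port only when the template asks for
    # port-level detail; otherwise let Nova create the port, so the
    # Ironic allowed-MAC validation is handled by Nova as before.
    return bool(network.get('subnet') or network.get('port_extra_properties'))

print(needs_precreated_port({'network': 'private'}))                  # False
print(needs_precreated_port({'network': 'private', 'subnet': 's1'}))  # True
```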


From liusheng1175 at 126.com  Mon Sep 14 02:09:29 2015
From: liusheng1175 at 126.com (liusheng)
Date: Mon, 14 Sep 2015 10:09:29 +0800
Subject: [openstack-dev]  [Ceilometer]heavy time cost of event-list
In-Reply-To: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
Message-ID: <55F62C59.7050903@126.com>

Hi folks,

With the master Ceilometer installed and the MySQL backend in use, the
event-list command has a heavy time cost. If we have a fairly large
number of events stored, it is easy for the event list API request to
time out. The Rally people told me that this issue has broken the
gate-rally-dsvm-rally job, see [1]; there is a bug they reported to
track this [3]. As we know, admin-role users can only query their own
events (where the project trait value matches the project id) and events
without a project_id trait; this was implemented in the event-RBAC
feature [2]. Before that change, the job was OK. Maybe we have a mistake
in the SQL query? I will dig more.

FYI, some testing information from my devstack environment:

root at szxbzci0007:/opt/stack/ceilometer# time ceilometer event-list |wc -l
109

real    0m51.780s
user    0m0.354s
sys    0m0.060s


mysql> select count(*) from event;
+----------+
| count(*) |
+----------+
|     1540 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from trait_text;
+----------+
| count(*) |
+----------+
|     3097 |
+----------+
1 row in set (0.01 sec)
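One generic way to dig into a slow listing query is to check whether it is
doing full table scans. Below is a self-contained illustration using
SQLite's EXPLAIN QUERY PLAN; the table and column names are stand-ins
loosely modeled on the tables above, not Ceilometer's actual schema or
backend.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE trait_text (event_id INTEGER, key TEXT, value TEXT)")
conn.executemany("INSERT INTO event VALUES (?)", [(i,) for i in range(1000)])
conn.executemany("INSERT INTO trait_text VALUES (?, ?, ?)",
                 [(i, "project_id", "p%d" % (i % 10)) for i in range(1000)])

query = ("SELECT e.id FROM event e JOIN trait_text t ON t.event_id = e.id "
         "WHERE t.key = 'project_id' AND t.value = 'p1'")

# Without an index on (key, value), the trait lookup scans the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX ix_trait ON trait_text (key, value, event_id)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)  # the plan should now use ix_trait

rows = conn.execute(query).fetchall()
```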


[1]
http://logs.openstack.org/35/222435/1/check/gate-rally-dsvm-rally/aa38d0f/rally-plot/results.html.gz#/CeilometerEvents.create_user_and_get_event/failures
[2] https://review.openstack.org/#/c/218706/
[3] https://bugs.launchpad.net/ceilometer/+bug/1494440

Best regards
Liu sheng




From rick.chen at prophetstor.com  Mon Sep 14 02:30:56 2015
From: rick.chen at prophetstor.com (Rick Chen)
Date: Mon, 14 Sep 2015 10:30:56 +0800
Subject: [openstack-dev] FW: [cinder] [third-party] ProphetStor CI account
In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965A965C1@G9W0753.americas.hpqcorp.net>
References: <001001d0db4b$e6fb3f80$b4f1be80$@prophetstor.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16958728E75@G4W3223.americas.hpqcorp.net>
 <003201d0dbc4$ed3634d0$c7a29e70$@prophetstor.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D1695872D881@G4W3223.americas.hpqcorp.net>
 <000001d0de4b$31f04dd0$95d0e970$@prophetstor.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965A8D392@G9W0753.americas.hpqcorp.net>
 <000901d0ded1$406eaab0$c14c0010$@prophetstor.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965A93780@G9W0753.americas.hpqcorp.net>
 <000201d0ded7$d4a9d820$7dfd8860$@prophetstor.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965A93AA9@G9W0753.americas.hpqcorp.net>
 <000001d0deeb$8d59d560$a80d8020$@prophetstor.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965A965C1@G9W0753.americas.hpqcorp.net>
Message-ID: <002201d0ee95$62833c50$2789b4f0$@prophetstor.com>

HI Mike:

        Thanks to Ramy for helping us build up our CI environment; it is
now ready for the Cinder third-party CI testing.

        Can you re-enable the "prophetstor-ci" gerrit account to join the
Cinder review testing?

        If our CI system is missing any condition or requirement for Cinder
CI testing, please let me know.

 

Many thanks.

Rick

 

From: Asselin, Ramy [mailto:ramy.asselin at hp.com] 
Sent: Wednesday, August 26, 2015 12:43 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
<openstack-dev at lists.openstack.org>
Cc: Rick Chen <rick.chen at prophetstor.com>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Looks good to me. Thanks!

Ramy

 

From: Rick Chen [mailto:rick.chen at prophetstor.com]
Sent: Monday, August 24, 2015 9:07 PM
To: Asselin, Ramy <ramy.asselin at hp.com>; 'OpenStack Development Mailing
List (not for usage questions)' <openstack-dev at lists.openstack.org>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Hi Ramy:

        I have already fixed this important problem. Thanks.

        Does our CI system have any other missed configuration or problems?

        Console log:

http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html

CI Review result:

http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/logs/

Many thanks. 

 

 

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
Sent: Tuesday, August 25, 2015 10:54 AM
To: Rick Chen <rick.chen at prophetstor.com>; 'OpenStack Development Mailing
List (not for usage questions)' <openstack-dev at lists.openstack.org>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Other than that, everything looks fine to me. But that is important to fix.

 

Thanks,

Ramy

 

From: Rick Chen [mailto:rick.chen at prophetstor.com]
Sent: Monday, August 24, 2015 6:46 PM
To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage
questions)'
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

HI Ramy:

        We use the Apache proxy pass option to redirect the public link to
my internal CI server. Maybe I missed some configuration? I will try to
find a solution for it, but it should not affect my OpenStack third-party
CI system.

        Is our CI system ready for the account to be re-enabled?

        

 

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
Sent: Tuesday, August 25, 2015 9:14 AM
To: Rick Chen <rick.chen at prophetstor.com>; 'OpenStack Development Mailing
List (not for usage questions)' <openstack-dev at lists.openstack.org>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Rick,

 

It's strange: I can navigate using the link you provided, but not via the
"Parent Directory" link.

 

This is what it links to, which is missing the prophetstor_ci portion:

 
http://download.prophetstor.com/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/

 

Ramy

 

From: Rick Chen [mailto:rick.chen at prophetstor.com]
Sent: Monday, August 24, 2015 5:59 PM
To: Asselin, Ramy <ramy.asselin at hp.com>; 'OpenStack Development Mailing
List (not for usage questions)' <openstack-dev at lists.openstack.org>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

HI Ramy:

        My console file is console.html as below:

 
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/console.html

 

 
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/

 

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
Sent: Monday, August 24, 2015 11:03 PM
To: 'OpenStack Development Mailing List (not for usage questions)'
<openstack-dev at lists.openstack.org>
Cc: Rick Chen <rick.chen at prophetstor.com>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Great. 

 

Somehow you lost your console.log file. Or did I miss it?

 

Ramy

 

From: Rick Chen [mailto:rick.chen at prophetstor.com]
Sent: Monday, August 24, 2015 2:00 AM
To: Asselin, Ramy; 'OpenStack Development Mailing List (not for usage
questions)'
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

HI Ramy:

        I have completed changing the zuul.conf zuul_url to my zuul server
"zuul.rjenkins.prophetstor.com".

 

2015-08-24 16:21:48.349 | + git_fetch_at_ref openstack/cinder
refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b

2015-08-24 16:21:48.350 | + local project=openstack/cinder

2015-08-24 16:21:48.351 | + local
ref=refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b

2015-08-24 16:21:48.352 | + '['
refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b '!=' '' ']'

2015-08-24 16:21:48.353 | + git fetch
http://zuul.rjenkins.prophetstor.com/p/openstack/cinder
refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b

2015-08-24 16:21:49.264 | From
http://zuul.rjenkins.prophetstor.com/p/openstack/cinder

2015-08-24 16:21:49.265 |  * branch
refs/zuul/master/Z1a7ecbae61cc4aa090d02620bef4076b -> FETCH_HEAD

        

        ProphetStor CI review result:

 
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5141/

 

 

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
Sent: Saturday, August 22, 2015 6:40 AM
To: Rick Chen <rick.chen at prophetstor.com>; OpenStack Development Mailing
List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

HI Rick,

 

Let's keep this on the list so that others can benefit or chip in ideas.

If you cannot subscribe to the list, ask the folks on freenode irc in
#openstack-infra.

 

You should set in zuul.conf the [merger] zuul_url to your local zuul's url.
[1] 

E.g. 

[merger]

zuul_url=http://<your_ip_or fqdn>/p/

 

Please use export GIT_BASE=https://git.openstack.org. This will reduce the
load on OpenStack's gerrit server and point you to a more stable git farm
that can better handle the CI load. This will help your CI's success rate
(by avoiding timeouts and intermittent errors) and reduce your CI test
setup time.

 

Ramy

 

[1] http://docs.openstack.org/infra/zuul/zuul.html#merger

 

 

 

 

 

From: Rick Chen [mailto:rick.chen at prophetstor.com]
Sent: Thursday, August 20, 2015 8:53 PM
To: Asselin, Ramy <ramy.asselin at hp.com>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Hi Ramy:

        Can you provide detailed information on how to solve it?

        Yes, I use zuul. My zuul.conf uses the default zuul_url value.

        I pull this patch with a build shell script. I had added
"export GIT_BASE=https://review.openstack.org/p" in the build shell script.

        Is that wrong?

 

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
Sent: Thursday, August 20, 2015 11:12 PM
To: Rick Chen <rick.chen at prophetstor.com>
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Hi Rick,

 

Thank you for adding this: "Triggered by:
https://review.openstack.org/203895 patchset 3"

Where do you pull down this patch in the log files?

Normally it gets pulled down during the setup-workspace script, but here
you're referencing openstack's servers which is not correct. [1]

Are you using zuul? If so, your zuul url should be there.

If not, there should be some other place in your scripts that pulls down
the patch.

 

Ramy

 

[1]
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5117/logs/devstack-gate-setup-workspace-new.txt.gz#_2015-08-20_12_03_24_813

 

From: Rick Chen [mailto:rick.chen at prophetstor.com]
Sent: Thursday, August 20, 2015 6:27 AM
To: Asselin, Ramy
Subject: RE: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

HI Ramy:

        Sorry, I was not sure the mail was sent, because I did not receive
my own mail from the openstack-dev mailing list. So I sent mail directly
to your private mail account.

        Thank you for your guidance. I followed your suggestion.

        Please reference below link:

 
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5117/console.html

 

        my gerrit account:          prophetstor-ci

              gerrit account email:        prophetstor.ci at prophetstor.com

 

 

-----Original Message-----

From: Asselin, Ramy [mailto:ramy.asselin at hp.com]

Sent: Wednesday, August 19, 2015 10:10 PM

To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev at lists.openstack.org>; 'Mike Perez' <thingee at gmail.com>

Subject: Re: [openstack-dev] [cinder] [third-party] ProphetStor CI account

 

Hi Rick,

 

Huge improvement. Log server is looking great! Thanks!

 

Next question is what (cinder) patch set is that job running?

It seems to be cinder master [1]. 

Is that intended? That's fine to validate general functionality, but
eventually it needs to run the actual cinder patch set under test.

 

It's helpful to include a link to the patch that invoked the job at the top
of the console.log file, e.g. [2].

 

Ramy

 

[1]
http://download.prophetstor.com/prophetstor_ci/203895/3/check/prophetstor-dsvm-tempest-cinder-ci/5111/logs/devstack-gate-setup-workspace-new.txt.gz#_2015-08-19_18_27_38_953

[2]
https://github.com/rasselin/os-ext-testing/blob/master/puppet/modules/os_ext_testing/templates/jenkins_job_builder/config/macros.yaml.erb#L93

 

 

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/a4c36d24/attachment.html>

From openstack at lanabrindley.com  Mon Sep 14 02:31:51 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Mon, 14 Sep 2015 12:31:51 +1000
Subject: [openstack-dev] [docs][ptl] Docs PTL Candidacy
Message-ID: <55F63197.9090606@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi everyone,

I am seeking re-election as Documentation PTL for Mitaka.

I had a great time as incoming PTL for Liberty, and would love another
term, if you'll have me.

Liberty was a great release for documentation. Our main achievement was
converting four more books to RST, with the Install Guide happening
almost completely over a 48 hour period. This has had some great
feedback from the community, with a lot of developers who haven't
contributed to documentation before now doing so. For Mitaka, I would
like to continue the RST conversion, with plans to migrate the
Operations and Architecture Guides next.

One of the things I wanted to focus on during the Liberty cycle was the
information architecture of our books, to ensure we're delivering
content that our readers can really use. To this end, I'm happy to
report that we completed an overhaul of the User Guides during the
Liberty release. We also made some significant changes to the
Installation Guide, including changing the way we document networking
scenarios, one of the trickiest parts of the installation.

Since the Kilo release in April, the documentation team has released
fixes for 493 bugs. We've also focused on squashing some old bugs.
Through a concerted effort we've managed to kill all bugs older than a
year (with a few wishlist exceptions), and closed a lot of the year old
ones. I've also been working with Foundation on an organised effort to
continue this process of paying down technical debt and improve on the
work we've done through Liberty. I'll be making some exciting
announcements about this at Summit.

I was very excited to meet so many of the documentation team in
Vancouver, and had the privilege of welcoming a couple of new Docs ATCs
during the Summit. I can't wait to see you all again in Tokyo, and hope
to meet some new faces there.

I'd love to have your support for the PTL role for Mitaka, and I'm
looking forward to continuing to grow the documentation team.

Thanks,
Lana (@Loquacities)

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV9jGWAAoJELppzVb4+KUyJWYH/jlW+nEq9sxi5xuj8Ot1v7OA
OOIpBBjpfxJ9Y+s1LTXJ5Iu71QAhRAfNkV8Ia9/h/9fq4ts/ZQrMrbJ7mNN0xlQa
LUYEIOwbzP1ZqqUCtOCvHk5Id1IAcup8yth40DB2YjkX5WWIUmQx9YbtyriS0KuA
FL/WjjcffXTbL0r6bJS3nVkkHdC2UZbmCF74fRdAgg15e2kdJwXT+ZadW4j4tT1I
/y8e4HdBEJfyyPOrT/6oCeQYbPxg2wBB2jWzMjd/GA8le9lhikOqbYK7345gZ1Tc
sdQSMd7pfuNgK7jUa2dDgSQ04iD69iQIFz0pHqf1vA91H+p3Kx1/XGBlX9iz2hk=
=5iqn
-----END PGP SIGNATURE-----
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0xF8F8A532.asc
Type: application/pgp-keys
Size: 8166 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/c7613227/attachment.key>

From shi-hoshino at yk.jp.nec.com  Mon Sep 14 03:48:57 2015
From: shi-hoshino at yk.jp.nec.com (Shinya Hoshino)
Date: Mon, 14 Sep 2015 03:48:57 +0000
Subject: [openstack-dev] [Ironic] There is a function to display the VGA
 emulation screen of BMC in the baremetal node on the Horizon?
References: <870B6C6CD434FA459C5BF1377C680282023A7160@BPXM20GP.gisp.nec.co.jp>
 <CAB1EZBoxqM=OOQ-RH3okW9AAoT2pAgTeWB_DDRQkiat=_870Ew@mail.gmail.com>
Message-ID: <870B6C6CD434FA459C5BF1377C680282023A7B57@BPXM20GP.gisp.nec.co.jp>

Thanks Lucas,

However, I'd like to know whether or not it is possible to get the *VGA
emulation* screen of the BMC, rather than the serial console.

I also know that very few features are available only in the VGA console
now, and I am able to get `ironic node-get-console <node-uuid>`.
Still, I'd like to know whether such a screen can be displayed on
Horizon.

Best regards,

On 2015/09/11 17:50, Lucas Alvares Gomes wrote:
> Hi,
>
>> We are investigating how to display on the Horizon a VGA
>> emulation screen of BMC in the bare metal node that has been
>> deployed by Ironic.
>> If it was already implemented, I thought that the connection
>> information of a VNC or SPICE server (converted if necessary)
>> for a VGA emulation screen of BMC is returned as the stdout of
>> the "nova get-*-console".
>> However, we were investigating how to configure Ironic and so
>> on, but we could not find a way to do so.
>> I tried to search roughly the implementation of such a process
>> in the source code of Ironic, but it was not found.
>>
>> The current Ironic, I think such features are not implemented.
>> However, is this correct?
>>
>
> A couple of drivers in Ironic supports web console (shellinabox), you
> can take a look docs to see how to enable and use it:
> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-node-web-console
>
> Hope that helps,
> Lucas
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
/* -------------------------------------------------------------
   Shinn'ya Hoshino            mailto:shi-hoshino at yk.jp.nec.com
------------------------------------------------------------- */


From zhengzhenyulixi at gmail.com  Mon Sep 14 03:53:38 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Mon, 14 Sep 2015 11:53:38 +0800
Subject: [openstack-dev] [nova] [api] Nova currently handles list with
 limit=0 quite different for different objects.
In-Reply-To: <1441987655.14645.36.camel@einstein.kev>
References: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>
 <1441987655.14645.36.camel@einstein.kev>
Message-ID: <CAO0b__9z7sjS3Gqm1uj2z=6X9Sz9uG9V1eUYQWSeCpkchxroEQ@mail.gmail.com>

Hi, thanks for your reply. After checking again, I agree with you. I think
we should come to a conclusion about how to treat limit=0 across nova,
which is also why I sent out this mail. I will register this topic in the
API meeting open discussion section; maybe a BP in M to fix this.
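One possible consistent rule, sketched here as an assumption for the
discussion rather than nova's current behavior, is to normalize limit=0 to
the configured maximum, as the api code comment quoted below suggests:

```python
def normalize_limit(limit, max_limit=1000):
    # Hypothetical normalization: limit=0 (or absent) means
    # "use the configured maximum", never "return nothing".
    if limit is None or limit == 0:
        return max_limit
    return min(limit, max_limit)

print(normalize_limit(0))     # 1000
print(normalize_limit(50))    # 50
print(normalize_limit(5000))  # 1000
```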

BR,

Zheng

On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <
kevin.mitchell at rackspace.com> wrote:

> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
> > Hi, I found out that nova currently handles list with limit=0 quite
> > different for different objects.
> >
> > Especially when list servers:
> >
> > According to the code:
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
> >
> > when limit = 0, it should apply as max_limit, but currently, in:
> >
> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
> >
> > we directly return [], this is quite different with comment in the api
> > code.
> >
> >
> > I checked other objects:
> >
> > when list security groups and server groups, it will return as no
> > limit has been set. And for flavors it returns []. I will continue to
> > try out other APIs if needed.
> >
> > I think maybe we should make a rule for all objects, at least fix the
> > servers to make it same in api and db code.
> >
> > I have reported a bug in launchpad:
> >
> > https://bugs.launchpad.net/nova/+bug/1494617
> >
> >
> > Any suggestions?
>
> After seeing the test failures that showed up on your proposed fix, I'm
> thinking that the proposed change reads like an API change, requiring a
> microversion bump.  That said, I approve of increased consistency across
> the API, and perhaps the behavior on limit=0 is something the API group
> needs to discuss a guideline for?
> --
> Kevin L. Mitchell <kevin.mitchell at rackspace.com>
> Rackspace
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/1d012c2a/attachment.html>

From openstack at lanabrindley.com  Mon Sep 14 04:14:48 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Mon, 14 Sep 2015 14:14:48 +1000
Subject: [openstack-dev] What Up, Doc? 11 September, 2015
Message-ID: <55F649B8.60001@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi everyone,

Things are really starting to heat up now! Summit preparations are well
underway, PTL elections just opened, and testing is proceeding apace on
the Install Guide. As well as preparing my candidacy for Mitaka Docs
PTL, I've been continuing to sort out the outstanding blueprints, and
also planning for some announcements in Tokyo. One of the interesting
things I learned while compiling a list of Liberty achievements is that
we have released fixes for nearly 500 bugs since Kilo was released. What
an amazing achievement!

== Progress towards Liberty ==

33 days to go

496 bugs closed so far.

* RST conversion:
** Done.

* User Guides information architecture overhaul
** This is now well underway.

* Greater focus on helping out devs with docs in their repo
** A certain amount of progress has been made here, and some wrinkles
sorted out which will improve this process for the future.

* Improve how we communicate with and support our corporate contributors
** I'm still trying to come up with great ideas for this, please let me
know what you think.

* Improve communication with Docs Liaisons
** I'm very pleased to see liaisons getting more involved in our bugs
and reviews. Keep up the good work!

* Clearing out old bugs
** We've had some solid progress on last week's bugs, with two assigned,
and one of those in progress. Three new bugs this week.

== Mitaka Summit Prep ==

The schedule has now been released, congratulations to everyone who had
a talk accepted this time around:
https://www.openstack.org/summit/tokyo-2015/schedule/

All ATCs should have received their pass by now, so now is the time to
be booking your travel and accommodation:
https://www.openstack.org/summit/tokyo-2015/tokyo-and-travel/

Docs have been given their design summit session allocation, so start
thinking about what you would like to see discussed. I'll send out an
etherpad to start planning in the next week or two.

== Conventions ==

A new governance patch has landed which changes the way we capitalise
service names (I know almost exactly 50% of you will be happy about
this!):
https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
Please be aware of this when editing files, and remember that the
'source of truth' for these things is the projects.yaml file:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

== Doc team meeting ==

The US meeting was held this week. The minutes are here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-09-09

The next meetings are:
APAC: Wednesday 16 September, 00:30:00 UTC
US: Wednesday 23 September, 14:00:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

== Spotlight bugs for this week ==

Let's show these three how much we care:

https://bugs.launchpad.net/openstack-manuals/+bug/1284301 Document
Neutron SSL-VPN for cloudadmin

https://bugs.launchpad.net/openstack-manuals/+bug/1288044 Add user
defined extra capabilities

https://bugs.launchpad.net/openstack-manuals/+bug/1292327 Make cors work
better

--

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openstack at lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com


From smelikyan at mirantis.com  Mon Sep 14 04:40:46 2015
From: smelikyan at mirantis.com (Serg Melikyan)
Date: Sun, 13 Sep 2015 21:40:46 -0700
Subject: [openstack-dev] [murano] PTL Candidacy
Message-ID: <CAOnDsYOrBhJ+ztRXGZoty1gauGFTQ11HkfdLrTQEqHtTS5T2hQ@mail.gmail.com>

I'd like to continue handling PTL responsibilities for Murano [1] in
the Mitaka cycle.

I would like to apologize for not being able to focus on my
responsibilities full-time during Liberty. In Mitaka, being PTL is
going to be my full-time job, and I feel that it will allow me to
finally achieve my dream of building a great community around our
project.

In Liberty we increased community diversity [2] [3], but this is only
the beginning of a journey that I would like to carry on with your
support.

[1] http://wiki.openstack.org/wiki/Murano
[2] http://stackalytics.com/?release=kilo&module=murano-group&metric=commits
[3] http://stackalytics.com/?release=liberty&module=murano-group&metric=commits

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelikyan at mirantis.com


From donald.d.dugger at intel.com  Mon Sep 14 05:51:14 2015
From: donald.d.dugger at intel.com (Dugger, Donald D)
Date: Mon, 14 Sep 2015 05:51:14 +0000
Subject: [openstack-dev] [nova-scheduler] Scheduler sub-group meeting -
	Agenda 9/14
Message-ID: <6AF484C0160C61439DE06F17668F3BCB53FE61BE@ORSMSX114.amr.corp.intel.com>

Meeting on #openstack-meeting-alt at 1400 UTC (8:00AM MDT)

1) Liberty patches - https://etherpad.openstack.org/p/liberty-nova-priorities-tracking
2) Mitaka planning
3) Opens


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



From choudharyvikas16 at gmail.com  Mon Sep 14 06:39:28 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Mon, 14 Sep 2015 12:09:28 +0530
Subject: [openstack-dev] [magnum] Is magnum db going to be removed for k8s
	resources?
Message-ID: <CABJxuZpQU=Pfvft9JQ_JAX6HcV=aUg+mWUvdJCa7FGmQA3P5qw@mail.gmail.com>

Hi Team,

As per object-from-bay blueprint implementation [1], all calls to magnum db
are being skipped for example pod.create() etc.

Are not we going to use magnum db at all for pods/services/rc ?


Thanks
Vikas Choudhary


[1] https://review.openstack.org/#/c/213368/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/7896b8a5/attachment.html>

From feilong at catalyst.net.nz  Mon Sep 14 06:43:24 2015
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Mon, 14 Sep 2015 18:43:24 +1200
Subject: [openstack-dev]  [zaqar] Zaqar PTL Candidacy
Message-ID: <55F66C8C.7050705@catalyst.net.nz>

Greetings,

I'd like to announce my candidacy for Zaqar PTL for the Mitaka cycle.

I've been working on and contributing to Zaqar as a core developer
since the Icehouse release. As a result, I really understand the ups
and downs that Zaqar has been through, and I know the blueprints,
direction, and pains of this project.

For the Mitaka release, there are some items that I feel are important
for us to focus on:

1. Collaboration with other OpenStack projects
     We did a great job in Liberty on the integration of Heat and
Zaqar, and I believe other projects could benefit from Zaqar as well:
for example, the notification middleware of Swift, Zaqar's websocket
transport for Horizon, and communicating with guest agents via Zaqar
for Sahara/Trove.

2. Real-world deployment
     puppet-zaqar is on the way. It could be a great deployment tool
for operators who want to deploy Zaqar, so in the next release we will
continue to support the project until it gets a stable release ASAP. We
will also interlock with, and provide more support for, potential cloud
providers who have shown interest in Zaqar.

3. API improvement
     We have released three API versions, and the API for key functions
is becoming more and more stable. But given the bug fixes and new
features, it's time to review the overall v2 API to make it more
stable.

4. Encourage diversity in our community
     We're still a small team, and obviously we need new blood to join
it. We would also like to see newcomers from different organizations so
that we can get feedback from a wide range of users and industries: not
only developers, but also reviewers and operators.

It's a fantastic experience working with this amazing team, and I know
that without the dedication and hard work of everyone who has
contributed to Zaqar, we could not have made the success stories of
Liberty happen. I believe the PTL of such a smart team is above all a
facilitator, coordinator, and mentor. I would be pleased to serve as
PTL for Zaqar for the Mitaka cycle, and I'd appreciate your vote.

Thanks for your consideration.

-- 
Cheers & Best regards,
Fei Long Wang
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------



From flavio at redhat.com  Mon Sep 14 07:37:27 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Mon, 14 Sep 2015 09:37:27 +0200
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
 concerns
In-Reply-To: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
Message-ID: <20150914073727.GA10859@redhat.com>

On 11/09/15 12:26 -0700, Joshua Harlow wrote:
>Hi all,
>
>I was reading over the TC IRC logs for this week (my weekly reading) 
>and I just wanted to let my thoughts and comments be known on:
>
>http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>
>I feel it's very important to send a positive note for new/upcoming 
>projects and libraries... (and for everyone to remember that most 
>projects do start off with a small set of backers). So I just wanted 
>to try to ensure that we send a positive note with any tag like this 
>that gets created and applied and that we all (especially the TC) 
>really really considers the negative connotations of applying that tag 
>to a project (it may effectively ~kill~ that project).
>
>I would really appreciate that instead of just applying this tag (or 
>other similarly named tag to projects) that instead the TC try to 
>actually help out projects with those potential tags in the first 
>place (say perhaps by actively listing projects that may need more 
>contributors from a variety of companies on the openstack blog under 
>say a 'HELP WANTED' page or something). I'd much rather have that vs. 
>any said tags, because the latter actually tries to help projects, vs 
>just stamping them with a 'you are bad, figure out how to fix 
>yourself, because you are not diverse' tag.
>
>I believe it is the TC job (in part) to help make the community 
>better, and not via tags like this that IMHO actually make it worse; I 
>really hope that folks on the TC can look back at their own projects 
>they may have created and ask how would their own project have turned 
>out if they were stamped with a similar tag...

While I agree the wording might not be the best, I also think it
isn't, by any means, trying to send a message such as "Stay away from
this project".

The issue with diversity in projects is real, and it affects not only
the project but also its consumers. Tags ought to be objective, and we
should provide as much relevant information as possible for both the
developer community and the user community.

You mentioned that this tag may kill a project, but I'd argue that it
could also help one. One of the issues I believe we're facing with the
big tent is that everyone wants to be part of the show - it was true
even before the big tent, it's just that it's easier to get in now -
but one of the things we're lacking is good information about where
contributions should go. Having projects with diversity issues tagged
may actually help identify places where newcomers may want to go and
contribute.

In the Big Tent we don't just need new acts, we also need people
willing to participate in existing ones.

FWIW, Zaqar is one of the projects that would fall into the category
of those that would be tagged, and I honestly think that'll be good
for the project: it's becoming a more relevant piece for other
projects, and the tag may raise awareness of the issues the community
currently has.

To your point about helping projects grow and improve: I fully agree,
but it's also important to note that the TC members are not sitting on
their hands. One thing that came out of Liberty is the
project-team-guide[0], which is not just a guide to "how to be part of
OpenStack" but rather a guide that will, hopefully, help projects build
a better and healthier community that can grow.

In summary, I agree that the tag could, perhaps, use a more positive
name, but I disagree with your view that it's a negative tag that will
just harm projects. I also agree the TC should help projects grow as
much as possible, but it'd be unfair to say nothing has been done.

Again, tags ought to be objective and honest w.r.t *both* communities,
the users' and devs'. I'm happy you brought this up.

Cheers,
Flavio

[0] http://docs.openstack.org/project-team-guide/

>
>- Josh
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/0ea3f320/attachment.pgp>

From jay.lau.513 at gmail.com  Mon Sep 14 07:57:44 2015
From: jay.lau.513 at gmail.com (Jay Lau)
Date: Mon, 14 Sep 2015 15:57:44 +0800
Subject: [openstack-dev] [magnum] Is magnum db going to be removed for
	k8s resources?
In-Reply-To: <CABJxuZpQU=Pfvft9JQ_JAX6HcV=aUg+mWUvdJCa7FGmQA3P5qw@mail.gmail.com>
References: <CABJxuZpQU=Pfvft9JQ_JAX6HcV=aUg+mWUvdJCa7FGmQA3P5qw@mail.gmail.com>
Message-ID: <CAFyztAEb_kG1-dciG=M4tifii17hKTjK3oWSVvLrvaq4cyZ1vw@mail.gmail.com>

Hi Vikas,

Thanks for starting this thread. Here are some of my comments.

There are two reasons why Magnum wants to get k8s resources via the
k8s API:
1) Native client support
2) With the current implementation, we cannot get the pods for a
replication controller, because Magnum DB only persists the replication
controller info.

With the objects-from-bay bp, Magnum will always call the k8s API to
get all pod/service/rc objects. Can you please share your concerns
about why we would need to persist those objects in the Magnum DB? If
we persisted two copies of the objects, we would need to sync the
Magnum DB with k8s periodically.
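To illustrate why k8s is the natural source of truth here, a simplified
sketch (the dicts below are hypothetical stand-ins for k8s API objects;
this is not magnum or kubernetes-client code): an rc selects its pods by
label selector, so the pod list for an rc can always be recovered from
live k8s data, without a second copy in the Magnum DB.

```python
# A replication controller owns the pods whose labels satisfy its selector.
# Recomputing that membership from live data avoids a DB/k8s sync problem.

def pods_for_rc(rc, pods):
    """Return the pods whose labels match every key/value in the rc's selector."""
    selector = rc["spec"]["selector"]
    return [
        pod for pod in pods
        if all(pod["metadata"]["labels"].get(key) == value
               for key, value in selector.items())
    ]


rc = {"spec": {"selector": {"app": "web"}}}
pods = [
    {"metadata": {"name": "web-1", "labels": {"app": "web"}}},
    {"metadata": {"name": "db-1", "labels": {"app": "db"}}},
]
matching = pods_for_rc(rc, pods)  # only web-1 matches the selector
```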

Thanks!

<https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>

2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvikas16 at gmail.com>:

> Hi Team,
>
> As per object-from-bay blueprint implementation [1], all calls to magnum db
> are being skipped for example pod.create() etc.
>
> Are not we going to use magnum db at all for pods/services/rc ?
>
>
> Thanks
> Vikas Choudhary
>
>
> [1] https://review.openstack.org/#/c/213368/
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/5ed308dc/attachment.html>

From lajos.katona at ericsson.com  Mon Sep 14 08:01:20 2015
From: lajos.katona at ericsson.com (Lajos Katona)
Date: Mon, 14 Sep 2015 10:01:20 +0200
Subject: [openstack-dev] [tempest] Is there a sandbox project how to use
 tempest test plugin interface?
In-Reply-To: <55F279BC.60400@ericsson.com>
References: <55F17DFF.4000602@ericsson.com>
 <20150910141302.GA2037@sazabi.kortar.org> <55F279BC.60400@ericsson.com>
Message-ID: <55F67ED0.5090409@ericsson.com>

Hi Matthew,

I finally got it working, so now I have a dummy plugin.

A few questions and remarks:
- For me it was a little hard to merge my weak knowledge of Python
packaging with the documentation for tempest plugins. Do you mind if I
push an example to GitHub and add a link to it to the documentation?
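In case it helps others getting started, here is a stdlib-only sketch of
the shape a plugin takes. The method and entry-point names follow the
tempest plugin documentation linked in the thread, but everything below
is a placeholder example, not tempest code:

```python
# Sketch of the tempest plugin shape. The real abstract class lives in
# tempest/test_discover/plugins.py; "TempestPluginShape" and "DummyPlugin"
# are made-up names for illustration.
import abc
import os


class TempestPluginShape(abc.ABC):
    """Stand-in mirroring the methods a tempest plugin is expected to expose."""

    @abc.abstractmethod
    def load_tests(self):
        """Return (test_dir, top_level_dir) so tempest can discover the tests."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register the plugin's config options on the tempest config object."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return (group_name, options) pairs for sample-config generation."""


class DummyPlugin(TempestPluginShape):
    def load_tests(self):
        # A real plugin derives absolute paths from its own __file__.
        base = os.getcwd()
        return os.path.join(base, "tests"), base

    def register_opts(self, conf):
        pass  # this dummy plugin defines no options

    def get_opt_lists(self):
        return []


# The plugin is wired up through an entry point in the plugin repo's
# setup.cfg, along these lines:
# [entry_points]
# tempest.test_plugins =
#     dummy_plugin = dummyplugin.plugin:DummyPlugin
```

With the entry point installed (e.g. `pip install -e .` on the plugin
repo), tempest discovers the plugin automatically, which is why the code
can live in any repository.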

- From this point, the generation of the idempotent id is not clear to
me. I was able to use check_uuid.py, but as I used a virtualenv, the
script edited the
/.tox/venv/local/lib/python2.7/site-packages/dummyplugin// file.
It would be good to add an extra path option to check_uuid.py, to make
it possible to edit the real source files in cases like this rather
than the ones in the venv.
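For anyone following along, the idea behind the idempotent id can be
sketched in a few lines of stdlib-only Python. This is not tempest's
actual implementation (the real decorator lives in tempest, and
check_uuid.py rewrites source files to add it); the helper below is just
an illustration:

```python
# Stdlib-only illustration of the idempotent-id concept: each test carries a
# stable UUID so it can be tracked across renames and moves. The decorator
# name mirrors tempest's, but this is not tempest code.
import functools
import uuid


def idempotent_id(test_id):
    """Attach a stable, validated UUID to a test function."""
    uuid.UUID(test_id)  # raises ValueError for a malformed UUID

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.idempotent_id = test_id
        return wrapper
    return decorator


@idempotent_id("12345678-1234-5678-1234-567812345678")
def test_dummy_plugin_loads():
    return True
```

check_uuid.py essentially generates such a UUID for every test that
lacks one and edits the source file to add the decorator, which is why
the path it edits matters.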

Regards
Lajos

On 09/11/2015 08:50 AM, Lajos Katona wrote:
> Hi Matthew,
>
> Thanks for the help, this helped a lot a start the work.
>
> regards
> Lajos
>
> On 09/10/2015 04:13 PM, Matthew Treinish wrote:
>> On Thu, Sep 10, 2015 at 02:56:31PM +0200, Lajos Katona wrote:
>>> Hi,
>>>
>>> I just noticed that from tag 6, the test plugin interface considered ready,
>>> and I am eager to start to use it.
>>> I have some questions:
>>>
>>> If I understand well in the future the plugin interface will be moved to
>>> tempest-lib, but now I have to import module(s) from tempest to start to use
>>> the interface.
>>> Is there a plan for this, I mean when the whole interface will be moved to
>>> tempest-lib?
>> The only thing which will eventually move to tempest-lib is the abstract class
>> that defines the expected methods of a plugin class [1] The other pieces will
>> remain in tempest. Honestly this won't likely happen until sometime during
>> Mitaka. Also when it does move to tempest-lib we'll deprecate the tempest
>> version and keep it around to allow for a graceful switchover.
>>
>> The rationale behind this is we really don't provide any stability guarantees
>> on tempest internals (except for a couple of places which are documented, like
>> this plugin class) and we want any code from tempest that's useful to external
>> consumers to really live in tempest-lib.
>>
>>> If I start to create a test plugin now (from tag 6), what should be the best
>>> solution to do this?
>>> I thought to create a repo for my plugin and add that as a subrepo to my
>>> local tempest repo, and than I can easily import stuff from tempest, but I
>>> can keep my test code separated from other parts of tempest.
>>> Is there a better way of doing this?
>> To start I'd take a look at the documentation for tempest plugins:
>>
>> http://docs.openstack.org/developer/tempest/plugin.html
>>
>> >From tempest's point of view a plugin is really just an entry point that points
>> to a class that exposes certain methods. So the Tempest plugin can live anywhere
>> as long as it's installed as an entry point in the proper namespace. Personally
>> I feel like including it as a subrepo in a local tempest tree is a bit strange,
>> but I don't think it'll cause any issues if you do that.
>>
>>> If there would be an example plugin somewhere, that would be the most
>>> preferable maybe.
>> There is a cookiecutter repo in progress. [2] Once that's ready it'll let you
>> create a blank plugin dir that'll be ready for you to populate. (similar to the
>> devstack plugin cookiecutter that already exists)
>>
>> For current examples the only project I know of that's using a plugin interface
>> is manila [3] so maybe take a look at what they're doing.
>>
>> -Matt Treinish
>>
>> [1]http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/plugins.py#n26
>> [2]https://review.openstack.org/208389
>> [3]https://review.openstack.org/#/c/201955
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/11f4a857/attachment.html>

From thierry at openstack.org  Mon Sep 14 08:02:48 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 14 Sep 2015 10:02:48 +0200
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
 concerns
In-Reply-To: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
Message-ID: <55F67F28.70506@openstack.org>

Joshua Harlow wrote:
> I believe it is the TC job (in part) to help make the community better,
> and not via tags like this that IMHO actually make it worse;

I think it's important to see the intent of the tag, rather than only
judge on its current proposed name. The big tent is vast, and there are
all kinds of projects, more or less mature, in it. The tag system is
there to help our ecosystem navigate the big tent by providing specific
bits of information about them.

One such important bit of information is how risky it is to invest in a
given project, and how likely it is to still be around tomorrow. Some
projects are so dependent on a single organization that they may,
literally, disappear in one day when a single person (the CEO of that
organization) decides so. I think our ecosystem should know about that,
without having to analyze stackalytics data. This is why I support
creating a tag describing project teams that are *extremely* fragile, at
the other end of the spectrum from projects that are "healthily diverse".

> I really
> hope that folks on the TC can look back at their own projects they may
> have created and ask how would their own project have turned out if they
> were stamped with a similar tag...

The thing is, one of the requirements to become an official OpenStack
project in the "integrated release" model was to reach a given level of
diversity in contributors. So "our" OpenStack projects simply could not
have officially existed if they had been stamped with a similar tag.

The big tent is more inclusive, as we no longer consider diversity
before we approve a project. The tag is the other side of the coin: we
still need to inform our ecosystem that some projects are less mature or
more fragile than others. The tag doesn't prevent the project from
existing in OpenStack; it just informs our users that there is a level
of risk associated with it.

Or are you suggesting it is preferable to hide that risk from our
operators/users in order to protect that project team's developers?

-- 
Thierry Carrez (ttx)


From kuvaja at hpe.com  Mon Sep 14 08:23:14 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Mon, 14 Sep 2015 08:23:14 +0000
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <CABdthUSsbSxn1Vb5nTyGnLO__8VYn7KvGHUV8MeBD6ZERtD8ew@mail.gmail.com>
References: <55F2E3F9.1000907@gmail.com>
 <etPan.55f31e93.35ed68df.5cd@MMR298FD58>
 <CABdthUSsbSxn1Vb5nTyGnLO__8VYn7KvGHUV8MeBD6ZERtD8ew@mail.gmail.com>
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4D3E96@G9W0745.americas.hpqcorp.net>

+1

From: Alex Meade [mailto:mr.alex.meade at gmail.com]
Sent: Friday, September 11, 2015 7:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] glance core rotation part 1

+1

On Fri, Sep 11, 2015 at 2:33 PM, Ian Cordasco <ian.cordasco at rackspace.com<mailto:ian.cordasco at rackspace.com>> wrote:


-----Original Message-----
From: Nikhil Komawar <nik.komawar at gmail.com<mailto:nik.komawar at gmail.com>>
Reply: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: September 11, 2015 at 09:30:23
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject:  [openstack-dev] [Glance] glance core rotation part 1

> Hi,
>
> I would like to propose the following removals from glance-core based on
> the simple criterion of inactivity/limited activity for a long period (2
> cycles or more) of time:
>
> Alex Meade
> Arnaud Legendre
> Mark Washenberger
> Iccha Sethi

I think these are overdue

> Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)

Sad to see Zhi Yan Liu's activity drop off.

> Please vote +1 or -1 and we will decide by Monday EOD PT.

+1

--
Ian Cordasco
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/7f378e8a/attachment.html>

From jesse.pretorius at gmail.com  Mon Sep 14 08:28:21 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Mon, 14 Sep 2015 09:28:21 +0100
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <1441909133-sup-2320@fewbar.com>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <1441909133-sup-2320@fewbar.com>
Message-ID: <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>

On 10 September 2015 at 19:21, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Major Hayden's message of 2015-09-10 09:33:27 -0700:
> > Hash: SHA256
> >
> > On 09/10/2015 11:22 AM, Matthew Thode wrote:
> > > Sane defaults can't be used?  The two bugs you listed look fine to me
> as
> > > default things to do.
> >
> > Thanks, Matthew.  I tend to agree.
> >
> > I'm wondering if it would be best to make a "punch list" of CIS
> benchmarks and try to tag them with one of the following:
> >
> >   * Do this in OSAD
> >   * Tell deployers how to do this (in docs)
>
> Just a thought from somebody outside of this. If OSAD can provide the
> automation, turned off by default as a convenience, and run a bank of
> tests with all of these turned on to make sure they do actually work with
> the stock configuration, you'll get more traction this way. Docs should
> be the focus of this effort, but the effort should be on explaining how
> it fits into the system so operators who are customizing know when they
> will have to choose a less secure path. One should be able to have code
> do the "turn it on" "turn it off" mechanics.
>

I agree with Clint that this is a good approach.

If there is an automated way to verify the security of an installation
at a reasonable/standardised level, then I think we should add a gate
check for it too.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/52c34f4f/attachment.html>

From choudharyvikas16 at gmail.com  Mon Sep 14 08:38:23 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Mon, 14 Sep 2015 14:08:23 +0530
Subject: [openstack-dev] [magnum] Is magnum db going to be removed for k8s
	resources?
Message-ID: <CABJxuZpgHhOVT3eVraGhcL5MY=M4VO3fdN8jK8Z2Gt61wUSw7A@mail.gmail.com>

Thanks Jay for verifying.
I also think there is no need to store pods/services/rcs in local magnum db.

k8s can always be consulted for these resources.


-Vikas

____________________________________________________________

Hi Vikas,

Thanks for starting this thread. Here just show some of my comments here.

The reason that Magnum want to get k8s resource via k8s API including two
reasons:
1) Native clients support
2) With current implantation, we cannot get pod for a replication
controller. The reason is that Magnum DB only persist replication
controller info in Magnum DB.

With the bp of objects-from-bay, the magnum will always call k8s API to get
all objects for pod/service/rc. Can you please show some of your concerns
for why do we need to persist those objects in Magnum DB? We may need to
sync up Magnum DB and k8s periodically if we persist two copies of objects.

Thanks!

<https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>

2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvikas16 at gmail.com>:

> Hi Team,
>
> As per object-from-bay blueprint implementation [1], all calls to magnum db
> are being skipped for example pod.create() etc.
>
> Are not we going to use magnum db at all for pods/services/rc ?
>
>
> Thanks
> Vikas Choudhary
>
>
> [1] https://review.openstack.org/#/c/213368/
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/ba6fe58f/attachment.html>

From john at johngarbutt.com  Mon Sep 14 09:13:16 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 14 Sep 2015 10:13:16 +0100
Subject: [openstack-dev] [nova] [all] Updated String Freeze Guidelines
In-Reply-To: <55F1DBEA.3020904@openstack.org>
References: <CABib2_pcxZ=C+n3iEDe7CfdB2o0pkOvVJpK+BeaaS6EnyK5qHQ@mail.gmail.com>
 <55F1DBEA.3020904@openstack.org>
Message-ID: <CABib2_o0VHO1dgkSwEdigQcDq6p3J6ofH3ekTfsLYBV2iW7s5g@mail.gmail.com>

On 10 September 2015 at 20:37, Thierry Carrez <thierry at openstack.org> wrote:
> John Garbutt wrote:
>> [...]
>> After yesterday's cross project meeting, and hanging out in
>> #openstack-i18n I have come up with these updates to the String Freeze
>> Guidelines:
>> https://wiki.openstack.org/wiki/StringFreeze
>>
>> Basically, we have a Soft String Freeze from Feature Freeze until RC1:
>> * Translators work through all existing strings during this time
>> * So avoid changing existing translatable strings
>> * Additional strings are generally OK
>>
>> Then post RC1, we have a Hard String Freeze:
>> * No new strings, and no string changes
>> * Exceptions need discussion
>>
>> Then at least 10 working days after RC1:
>> * we need a new RC candidate to include any updated strings
>>
>> Is everyone happy with these changes?
>
> That sounds pretty good.

As promised, I have attempted to add this into the project-team-guide,
to help avoid any wiki related issues in the future:
https://review.openstack.org/223011

Thanks,
John


From sileht at sileht.net  Mon Sep 14 09:28:57 2015
From: sileht at sileht.net (Mehdi Abaakouk)
Date: Mon, 14 Sep 2015 11:28:57 +0200
Subject: [openstack-dev] Re: [Ceilometer][Gnocchi] Gnocchi cannot deal
 with combined resource-id ?
In-Reply-To: <m0vbbfn0wt.fsf@danjou.info>
References: <tencent_5535D7DC7A5CE2DE5702951F@qq.com>
 <m0a8stp7qt.fsf@danjou.info> <tencent_21E29496537979275896B7B0@qq.com>
 <m07fnxndw5.fsf@danjou.info> <tencent_0B963CB97CD845581696926B@qq.com>
 <m0vbbfn0wt.fsf@danjou.info>
Message-ID: <b5beec43aa94b893356dd0e423a1851a@sileht.net>

Hi,

On 2015-09-12 16:54, Julien Danjou wrote:
> On Sat, Sep 12 2015, Luo Gangyi wrote:
> 
>>  I checked it again, no "ignored" is marked, seems the bug of devstack 
>> ;(
> 
> I was talking about that:
> 
> 
> https://git.openstack.org/cgit/openstack/ceilometer/tree/etc/ceilometer/gnocchi_resources.yaml#n67
> 
>>  And it's OK that gnocchi is not perfect now, but I still have some 
>> worries about how gnocchi deal with or going to deal with 
>> instance-xxxx-tapxxx condition.
>>  I see 'network.incoming.bytes' belongs to resouce type 'instance'.
>>  But no attributes of instance can save the infomation of tap name.
>>  Although I can search
>>  all metric ids from resouce id(instance uuid), how do I distinguish 
>> them from different taps of an instance?
> 
> Where do you see network.incoming.bytes as being linked to an instance?
> Reading gnocchi_resources.yaml I don't see that.

That was the case in the past: some metrics were mistakenly associated with
the instance. This is now fixed; they have their own resource type.
But currently these metrics are marked as ignored by the Ceilometer
dispatcher.

The next step is to re-enable these metrics on the Ceilometer dispatcher
side, but some code needs to be written to extract the instance name and
the tap name from the resource id in a declarative manner.
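To illustrate the kind of extraction involved, here is a minimal sketch. The
combined-id format and the helper name are assumptions for illustration, not
what the dispatcher actually implements:

```python
import re

# Assumed combined resource id: "<instance-uuid>-<tap-name>",
# e.g. "11111111-2222-3333-4444-555555555555-tap0a1b2c3d".
COMBINED_ID = re.compile(
    r"^(?P<instance>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
    r"[0-9a-f]{4}-[0-9a-f]{12})-(?P<tap>tap[0-9a-f-]+)$")

def split_resource_id(resource_id):
    """Split a combined id into (instance uuid, tap name), or None."""
    match = COMBINED_ID.match(resource_id)
    if match is None:
        return None
    return match.group("instance"), match.group("tap")

print(split_resource_id(
    "11111111-2222-3333-4444-555555555555-tap0a1b2c3d"))
# → ('11111111-2222-3333-4444-555555555555', 'tap0a1b2c3d')
```

A "declarative" version would move the regex into configuration (e.g. a
per-resource-type pattern in a yaml file) instead of hardcoding it.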


Regards,
---
Mehdi Abaakouk
mail: sileht at sileht.net
irc: sileht



From slukjanov at mirantis.com  Mon Sep 14 10:38:54 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Mon, 14 Sep 2015 13:38:54 +0300
Subject: [openstack-dev] [sahara] mitaka summit session ideas
In-Reply-To: <55F1F607.6060306@redhat.com>
References: <55F1F607.6060306@redhat.com>
Message-ID: <CA+GZd798EdVGrKHSm4xnx8m+p1_Gdds2wEd_XN_B_2CeDyK0Cw@mail.gmail.com>

Thank you for starting it ;)

On Fri, Sep 11, 2015 at 12:28 AM, michael mccune <msm at redhat.com> wrote:

> hey all,
>
> i started an etherpad for us to collect ideas about our session for the
> mitaka summit.
>
> https://etherpad.openstack.org/p/mitaka-sahara-session-plans
>
> please drop any thoughts or suggestions about the summit there.
>
> thanks,
> mike
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/d819383f/attachment.html>

From aadamov at mirantis.com  Mon Sep 14 10:44:12 2015
From: aadamov at mirantis.com (Alexander Adamov)
Date: Mon, 14 Sep 2015 12:44:12 +0200
Subject: [openstack-dev] [Fuel] Nominate Olga Gusarenko for fuel-docs
	core
In-Reply-To: <CAM0pNLPiuyycwSU+572wz0ycEr3jbR3wnTUn2k=dAorhfDvA0w@mail.gmail.com>
References: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>
 <CAM0pNLPiuyycwSU+572wz0ycEr3jbR3wnTUn2k=dAorhfDvA0w@mail.gmail.com>
Message-ID: <CAK2oe+JOz0+bZutji6jNeqqqJN7rXY6R0tfGWk7rAKG0B1HKBQ@mail.gmail.com>

+1
Nice!)

Alexander

On Fri, Sep 11, 2015 at 8:19 PM, Dmitry Borodaenko <dborodaenko at mirantis.com
> wrote:

> +1
>
> Great work Olga!
>
> On Fri, Sep 11, 2015, 11:09 Irina Povolotskaya <ipovolotskaya at mirantis.com>
> wrote:
>
>> Fuelers,
>>
>> I'd like to nominate Olga Gusarenko for the fuel-docs-core.
>>
>> She has been doing great work and has made a great contribution
>> to Fuel documentation:
>>
>>
>> http://stackalytics.com/?user_id=ogusarenko&release=all&project_type=all&module=fuel-docs
>>
>> It's high time to grant her core reviewer's rights in fuel-docs.
>>
>> Core reviewer approval process definition:
>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>> --
>> Best regards,
>>
>> Irina
>>
>>
>>
>>
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/7fc4bdbe/attachment.html>

From sxmatch1986 at gmail.com  Mon Sep 14 10:44:34 2015
From: sxmatch1986 at gmail.com (hao wang)
Date: Mon, 14 Sep 2015 18:44:34 +0800
Subject: [openstack-dev] [zaqar] Zaqar PTL Candidacy
In-Reply-To: <55F66C8C.7050705@catalyst.net.nz>
References: <55F66C8C.7050705@catalyst.net.nz>
Message-ID: <CAOEh+o3dORth6V9J7J4LUwdd+FKHMyfEysw2YAeHmEVAL5n_Yg@mail.gmail.com>

Well, I'm a new guy in Zaqar, but still glad to give my +1 for Fei Long.

2015-09-14 14:43 GMT+08:00 Fei Long Wang <feilong at catalyst.net.nz>:
> Greeting,
>
> I'd like to announce my candidacy for the Zaqar PTL in Mitaka.
>
> I've been working on and contributing to Zaqar as a core developer since
> the Icehouse release. As a result, I really understand the ups and downs
> Zaqar went through, and I know the blueprints, direction and pains of this
> project.
>
> For the Mitaka release, there are some items that I feel are important for us to
> focus on:
>
> 1. Collaboration with other OpenStack projects
>     We did a great job in Liberty on the integration of Heat and Zaqar, and
> I believe other projects could benefit from Zaqar as well. For example, the
> notification middleware of Swift, Zaqar's websocket transport for Horizon,
> and communicating with guest agents via Zaqar for Sahara/Trove.
>
> 2. Real-world deployment
>     puppet-zaqar is on the way. It could be a great deployment tool for
> operators who want to deploy Zaqar, so in the next release we will continue
> supporting the project so that it gets a stable release ASAP, and interlock
> with and support potential cloud providers who have shown interest in
> Zaqar.
>
> 3. API improvement
>     We have released three API versions, and the APIs of the key functions
> are increasingly stable. But given the bug fixes and new features, it's
> time to review the overall v2 API to make it more stable.
>
> 4. Encourage diversity in our community
>     We're still a small team and obviously need new blood to join the
> team. Meanwhile, we would like to see newcomers from different
> organizations so that we can get feedback from a wide range of users and
> industries: not only developers, but also reviewers and operators.
>
> It's been a fantastic experience working with this amazing team, and I
> know that without the dedication and hard work of everyone who has
> contributed to Zaqar, the success stories of Liberty could not have
> happened. I believe the PTL of this smart team is above all a facilitator,
> coordinator and mentor. I would be pleased to serve as PTL for Zaqar for
> the Mitaka cycle and I'd appreciate your vote.
>
> Thanks for your consideration
>
> --
> Cheers & Best regards,
> Fei Long Wang (???)
> --------------------------------------------------------------------------
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flwang at catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --------------------------------------------------------------------------
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Best Wishes For You!


From tsufiev at mirantis.com  Mon Sep 14 10:49:25 2015
From: tsufiev at mirantis.com (Timur Sufiev)
Date: Mon, 14 Sep 2015 10:49:25 +0000
Subject: [openstack-dev] [Horizon] [Cinder] [Keystone] Showing Cinder quotas
 for non-admin users in Horizon
Message-ID: <CAEHC1ztdNfz_YPs4PzTEqwsdoCt6_R6PpoGVW+XDqqtgCtTNmw@mail.gmail.com>

Hi all!

It seems that recent changes in Cinder policies [1] forbid non-admin users
from seeing the disk quotas, yet volume creation is still allowed for non-admins,
which effectively means that from now on a volume creation in Horizon is
free for non-admins (as soon as quotas:show rule is propagated into Horizon
policies). Along with understanding that this is not a desired UX for
Volumes panel in Horizon, I know as well that [1] wasn't responsible for
this quota behavior change on its own. It merely tried to alleviate the
situation caused by [2], which changed the requirements of quota show being
authorized. From this point I'm starting to sense that my knowledge of
Cinder and Keystone (because the hierarchical feature is involved) is
insufficient to suggest the proper solution from the Horizon point of view.
Yet hiding quota values from non-admin users makes no sense to me.
Suggestions?

[1] https://review.openstack.org/#/c/219231/7/etc/cinder/policy.json line 36
[2] https://review.openstack.org/#/c/205369/29/cinder/api/contrib/quotas.py
line
135
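For context, the rule in question lives in cinder's policy.json. A sketch of
the permissive form operators could set locally to restore the old behavior
(hedged: the exact rule strings and the admin_or_owner definition here are
assumptions based on policy files of that era, not quoted from [1]):

```json
{
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "quotas:show": "rule:admin_or_owner"
}
```

With a rule like this, a non-admin can read their own project's quotas, so
Horizon could render quota usage on the Volumes panel again.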
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/f2e1bb21/attachment.html>

From mfedosin at mirantis.com  Mon Sep 14 10:59:17 2015
From: mfedosin at mirantis.com (Mikhail Fedosin)
Date: Mon, 14 Sep 2015 13:59:17 +0300
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4D3E96@G9W0745.americas.hpqcorp.net>
References: <55F2E3F9.1000907@gmail.com>
 <etPan.55f31e93.35ed68df.5cd@MMR298FD58>
 <CABdthUSsbSxn1Vb5nTyGnLO__8VYn7KvGHUV8MeBD6ZERtD8ew@mail.gmail.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D3E96@G9W0745.americas.hpqcorp.net>
Message-ID: <CAHPxGAWbN1oFfPM1aS6=BmLcv2ZiDQzjjnBafQf_2n_O=71qqw@mail.gmail.com>

+1.
I hope that Zhi Yan joined Alibaba to make it use OpenStack in the future :)

On Mon, Sep 14, 2015 at 11:23 AM, Kuvaja, Erno <kuvaja at hpe.com> wrote:

> +1
>
>
>
> *From:* Alex Meade [mailto:mr.alex.meade at gmail.com]
> *Sent:* Friday, September 11, 2015 7:37 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Glance] glance core rotation part 1
>
>
>
> +1
>
>
>
> On Fri, Sep 11, 2015 at 2:33 PM, Ian Cordasco <ian.cordasco at rackspace.com>
> wrote:
>
>
>
> -----Original Message-----
> From: Nikhil Komawar <nik.komawar at gmail.com>
> Reply: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev at lists.openstack.org>
> Date: September 11, 2015 at 09:30:23
> To: openstack-dev at lists.openstack.org <openstack-dev at lists.openstack.org>
> Subject:  [openstack-dev] [Glance] glance core rotation part 1
>
> > Hi,
> >
> > I would like to propose the following removals from glance-core based on
> > the simple criterion of inactivity/limited activity for a long period (2
> > cycles or more) of time:
> >
> > Alex Meade
> > Arnaud Legendre
> > Mark Washenberger
> > Iccha Sethi
>
> I think these are overdue
>
> > Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)
>
> Sad to see Zhi Yan Liu's activity drop off.
>
> > Please vote +1 or -1 and we will decide by Monday EOD PT.
>
> +1
>
> --
> Ian Cordasco
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/422e8b0c/attachment.html>

From paul.bourke at oracle.com  Mon Sep 14 11:19:26 2015
From: paul.bourke at oracle.com (Paul Bourke)
Date: Mon, 14 Sep 2015 12:19:26 +0100
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <D21AFAE0.12587%stdake@cisco.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com>
Message-ID: <55F6AD3E.9090909@oracle.com>



On 13/09/15 18:34, Steven Dake (stdake) wrote:
> Response inline.
>
> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
> Date: Sunday, September 13, 2015 at 1:35 AM
> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types
>
> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
> Response inline.
>
> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
> Date: Saturday, September 12, 2015 at 11:34 PM
> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types
>
>
>
> Sam Yaple
>
> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>
>
> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:sam at yaple.net>>
> Date: Saturday, September 12, 2015 at 11:01 PM
> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO types
>
>
> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
> Hey folks,
>
> Sam had asked a reasonable set of questions regarding a patchset:
> https://review.openstack.org/#/c/222893/
>
> The purpose of the patchset is to enable both RDO and RHOS as binary choices on RHEL platforms.  I suspect over time, from-source deployments have the potential to become the norm, but the business logistics of such a change are going to take some significant time to sort out.
>
> Red Hat has two distros of OpenStack neither of which are from source.  One is free called RDO and the other is paid called RHOS.  In order to obtain support for RHEL VMs running in an OpenStack cloud, you must be running on RHOS RPM binaries.  You must also be running on RHEL.  It remains to be seen whether Red Hat will actively support Kolla deployments with a RHEL+RHOS set of packaging in containers, but my hunch says they will.  It is in Kolla?s best interest to implement this model and not make it hard on Operators since many of them do indeed want Red Hat?s support structure for their OpenStack deployments.
>
> Now to Sam?s questions:
> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do we add? What's our policy on adding a new type??
>
> I'm not immediately clear on how binary fits in.  We could make binary synonymous with the community supported version (RDO) while still implementing the binary RHOS version.  Note Kolla does not "support" any distribution or deployment of OpenStack; Operators will have to look to their vendors for support.
>
> If everything between centos+rdo and rhel+rhos is mostly the same then I would think it would make more sense to just use the base ('rhel' in this case) to branch of any differences in the templates. This would also allow for the least amount of change and most generic implementation of this vendor specific packaging. This would also match what we do with oraclelinux, we do not have a special type for that and any specifics would be handled by an if statement around 'oraclelinux' and not some special type.
>
> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also runs on RHEL.  I want to enable Red Hat customers to make a choice to have a supported  operating system but not a supported Cloud environment.  The answer here is RHEL + RDO.  This leads to full support down the road if the Operator chooses to pay Red Hat for it by an easy transition to RHOS.
>
> I am against including vendor specific things like RHOS in Kolla outright like you are proposing. Suppose another vendor comes along with a new base and new packages. They are willing to maintain it, but it's something that no one but their customers with their licensing can use. This is not something that belongs in Kolla, and I am unsure that it is even appropriate to belong in OpenStack as a whole. Unless RHEL+RHOS can be used by those that do not have a license for it, I do not agree with adding it at all.
>
> Sam,
>
> Someone stepping up to maintain a completely independent set of docker images hasn't happened.  To date nobody has done that.  If someone were to make that offer, and it was a significant change, I think the community as a whole would have to evaluate such a drastic change.  That would certainly increase our implementation and maintenance burden, which we don't want to do.  I don't think what you propose would be in the best interest of the Kolla project, but I'd have to see the patch set to evaluate the scenario appropriately.
>
> What we are talking about is 5 additional lines to enable RHEL+RHOS specific repositories, which is not very onerous.
>
> The fact that you can't use it directly has little bearing on whether it's valid technology for OpenStack.  There are already two well-defined historical precedents for non-licensed unusable integration in OpenStack.  Cinder has 55 [1] Volume drivers which they SUPPORT.  At least 80% of them are completely proprietary hardware, which in reality is mostly just software that is impossible to use without a license.  There are 41 [2] Neutron drivers registered on the Neutron driver page; almost the entirety require proprietary licenses to what amounts to integration points for accessing proprietary software.  The OpenStack preferred license is ASL for a reason: to be business friendly.  Licensed software has a place in the world of OpenStack, even if it only serves as an integration point, which the proposed patch does.  We are consistent with community values on this point or I wouldn't have bothered proposing the patch.
>
> We want to encourage people to use Kolla for proprietary solutions if they so choose.  This is how support manifests, which increases the strength of the Kolla project.  The presence of support increases the likelihood that Kolla will be adopted by Operators.  If you're asking the Operators to maintain a fork for those 5 RHOS repo lines, that seems unreasonable.
>
> I'd like to hear other Core Reviewer opinions on this matter and will hold a majority vote on this thread as to whether we will facilitate integration with third party software such as the Cinder Block Drivers, the Neutron Network drivers, and various for-pay versions of OpenStack such as RHOS.  I'd like all core reviewers to weigh in please.  Without a complete vote it will be hard to gauge what the Kolla community really wants.
>
> Core reviewers:
> Please vote +1 if you ARE satisfied with integration with third party unusable without a license software, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes.
> Please vote -1 if you ARE NOT satisfied with integration with third party unusable without a license software, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes.
>
> A bit of explanation on your vote might be helpful.
>
> My vote is +1.  I have already provided my rationale.
>
> Regards,
> -steve
>
> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix
> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>
>
> I appreciate you calling a vote so early. But I haven't had my questions answered yet enough to even vote on the matter at hand.
>
> In this situation the closest thing we have to a plugin type system as Cinder or Neutron does is our header/footer system. What you are proposing is integrating a proprietary solution into the core of Kolla. Those Cinder and Neutron plugins have external components and those external components are not baked into the project.
>
> What happens if and when the RHOS packages require different tweaks in the various containers? What if it requires changes to the Ansible playbooks? It begins to balloon out past 5 lines of code.
>
> Unfortunately, the community _won't_ get to vote on whether or not to implement those changes because RHOS is already in place. That's why I am asking the questions now, as this _right_ _now_ is the significant change you are talking about, regardless of the lines of code.
>
> So the question is not whether we are going to integrate 3rd party plugins, but whether we are going to allow companies to build proprietary products in the Kolla repo. If we allow RHEL+RHOS then we would need to allow another distro+company packaging and potential Ansible tweaks to get it to work for them.
>
> If you really want to do what Cinder and Neutron do, we need a better system for injecting code. That would be much closer to the plugins that the other projects have.
>
> I'd like to have a discussion about this rather than immediately call for a vote which is why I asked you to raise this question in a public forum in the first place.
>
>
> Sam,
>
> While a true code injection system might be interesting and would be more parallel with the plugin model used in cinder and neutron (and to some degree nova), those various systems didn't begin that way.  Their driver code at one point was completely integrated.  Only after 2-3 years was the code broken into a fully injectable state.  I think that is an awfully high bar to set to sort out the design ahead of time.  One of the reasons Neutron has taken so long to mature is that the Neutron community attempted to do plugins at too early a stage, which created big gaps in unit and functional tests.  A more appropriate design would be for that pattern to emerge from the system over time as people begin to adapt various distro tech to Kolla.  If you looked at the patch in gerrit, there is one clear pattern, "Setup distro repos", which at some point in the future could be made to be injectable much as headers and footers are today.
>
> As for building proprietary products in the Kolla repository, the license is ASL, which means it is inherently not proprietary.  I am fine with the code base integrating with proprietary software as long as the license terms are met; someone has to pay the mortgages of the thousands of OpenStack developers.  We should encourage growth of OpenStack, and one of the ways for that to happen is to be business friendly.  This translates into first knowing the world is increasingly adopting open source methodologies and facilitating that transition, and second accepting the world has a whole slew of proprietary software that already exists today that requires integration.
>
> Nonetheless, we have a difference of opinion on this matter, and I want this work to merge prior to rc1.  Since this is a project policy decision and not a technical issue, it makes sense to put it to a wider vote to either unblock or kill the work.  It would be a shame if we reject all driver and supported distro integration because we as a community take an anti-business stance on our policies, but I'll live by what the community decides.  This is not a decision either you or I may dictate, which is why it has been put to a vote.
>
> Regards
> -steve
>
>
>
> For oracle linux, I'd like to keep RDO for oracle linux and from source on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the patch set needs some later work here to address this point in more detail, but as is, "binary" covers oracle linux.
>
> Perhaps what we should do is get rid of the binary type entirely.  Ubuntu doesn't really have a binary type, they have a cloudarchive type, so binary doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two distributions of OpenStack, the same logic wouldn't apply to providing a full support onramp for Ubuntu customers.  Oracle doesn't provide a binary type either; their binary type is really RDO.
>
> The binary packages for Ubuntu are _packaged_ by the cloudarchive team. But when an OpenStack release coincides with an LTS release (Icehouse and 14.04 were the last such pair) you do not add a new repo, because the packages are in the main Ubuntu repo.
>
> Debian provides its own packages as well. I do not want a type name per distro. 'binary' catches all packaged OpenStack things by a distro.
>
>
> FWIW I never liked the transition away from rdo in the repo names to binary.  I guess I should have -1'ed those reviews back then, but I think it's time to either revisit the decision or compromise that binary and rdo mean the same thing in a centos and rhel world.
>
> Regards
> -steve
>
>
> Since we implement multiple bases, some of which are not RPM based, it doesn't make much sense to me to have rhel and rdo as a type which is why we removed rdo in the first place in favor of the more generic 'binary'.
>
>
> As such the implied second question "How many more do we add?" sort of sounds like "how many do we support?".  The answer to the second question is none; again, the Kolla community does not support any deployment of OpenStack.  To the question as posed, how many we add, the answer is it is really up to community members willing to implement and maintain the work.  In this case, I have personally stepped up to implement RHOS and maintain it going forward.
>
> Our policy on adding a new type could be simple or onerous.  I prefer simple.  If someone is willing to write the code and maintain it so that it stays in good working order, I see no harm in it remaining in tree.  I don't suspect there will be a lot of people interested in adding multiple distributions for a particular operating system.  To my knowledge, and I could be incorrect, Red Hat is the only OpenStack company with a paid and community version of OpenStack available simultaneously, and the paid version is only available on RHEL.  I think the risk of RPM based distributions plus their type count spiraling out of manageability is low.  Even if the risk were high, I'd prefer to keep an open mind to facilitate an increase in diversity in our community (which is already fantastically diverse, btw ;)
>
> I am open to questions, comments or concerns.  Please feel free to voice them.
>
> Regards,
> -steve
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Both arguments sound valid to me, both have pros and cons.

I think it's valuable to look to the experiences of Cinder and Neutron 
in this area, both of which seem to have the same scenario and have 
existed much longer than Kolla. From what I know of how these operate, 
proprietary code is allowed to exist in the mainline so long as a certain 
set of criteria is met. I'd have to look it up, but I think it mostly 
requires that the relevant parties "play by the rules", e.g. provide 
a working CI, help with reviews, attend weekly meetings, etc. If Kolla 
can craft a similar set of criteria for proprietary code down 
the line, I think it should work well for us.

Steve has a good point in that it may be too much overhead to implement 
a plugin system or similar up front. Instead, we should actively monitor 
the overhead in terms of reviews and code size that these extra 
implementations add. Perhaps agree to review it at the end of Mitaka?

Given the project is young, I think it can also benefit from the 
increased usage and exposure from allowing these parties in. I would 
hope independent contributors would not feel rejected from not being 
able to use/test with the pieces that need a license. The libre distros 
will remain #1 for us.

So based on the above explanation, I'm +1.

-Paul


From slukjanov at mirantis.com  Mon Sep 14 11:27:35 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Mon, 14 Sep 2015 14:27:35 +0300
Subject: [openstack-dev] [sahara] FFE request for heat wait condition
	support
In-Reply-To: <2108366802.24676246.1441918374134.JavaMail.zimbra@redhat.com>
References: <CAOB5mPwf6avCZD4Q6U4xh-g4f553eMzCTh1kfiX4bVY8x59i5A@mail.gmail.com>
 <CA+O3VAhA2Xi_hKCaCB2PoWr8jUM0bQhwnSUAGx2gOGB0ksii6w@mail.gmail.com>
 <55E9C3D8.2080606@redhat.com>
 <2108366802.24676246.1441918374134.JavaMail.zimbra@redhat.com>
Message-ID: <CA+GZd7_6qaBNreE2qdtMMbccCEOqRCTkXgsD4jqAzCCXP+BxzQ@mail.gmail.com>

Approved.

https://etherpad.openstack.org/p/sahara-liberty-ffes updated

On Thu, Sep 10, 2015 at 11:52 PM, Ethan Gafford <egafford at redhat.com> wrote:

> Seems reasonable; +1.
>
> -Ethan
>
> >----- Original Message -----
> >From: "michael mccune" <msm at redhat.com>
> >To: openstack-dev at lists.openstack.org
> >Sent: Friday, September 4, 2015 12:16:24 PM
> >Subject: Re: [openstack-dev] [sahara] FFE request for heat wait condition
> support
> >
> >makes sense to me, +1
> >
> >mike
> >
> >On 09/04/2015 06:37 AM, Vitaly Gridnev wrote:
> >> +1 for FFE, because of
> >>   1. Low risk of issues, fully covered with current scenario tests;
> >>   2. Implementation already on review
> >>
> >> On Fri, Sep 4, 2015 at 12:54 PM, Sergey Reshetnyak
> >> <sreshetniak at mirantis.com <mailto:sreshetniak at mirantis.com>> wrote:
> >>
> >>     Hi,
> >>
> >>     I would like to request FFE for wait condition support for Heat
> engine.
> >>     Wait condition reports signal about booting instance.
> >>
> >>     Blueprint:
> >>
> https://blueprints.launchpad.net/sahara/+spec/sahara-heat-wait-conditions
> >>
> >>     Spec:
> >>
> https://github.com/openstack/sahara-specs/blob/master/specs/liberty/sahara-heat-wait-conditions.rst
> >>
> >>     Patch:
> >>     https://review.openstack.org/#/c/169338/
> >>
> >>     Thanks,
> >>     Sergey Reshetnyak
> >>
> >>
>  __________________________________________________________________________
> >>     OpenStack Development Mailing List (not for usage questions)
> >>     Unsubscribe:
> >>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>     <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >> --
> >> Best Regards,
> >> Vitaly Gridnev
> >> Mirantis, Inc
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/0f1e5806/attachment.html>

From slukjanov at mirantis.com  Mon Sep 14 11:31:22 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Mon, 14 Sep 2015 14:31:22 +0300
Subject: [openstack-dev] [sahara] Request for Feature Freeze Exception
In-Reply-To: <CAOB5mPxaTM5QKm410c2956QMfnsaz9QqT7XreMyxPmdrK1E0Og@mail.gmail.com>
References: <CA+O3VAi689gyN-7Vu1qmBsv_T3xyaOOiL0foBo4YLLTJFW60ww@mail.gmail.com>
 <55E8A545.7000602@redhat.com>
 <47194693.21252941.1441312073237.JavaMail.zimbra@redhat.com>
 <CAOB5mPxaTM5QKm410c2956QMfnsaz9QqT7XreMyxPmdrK1E0Og@mail.gmail.com>
Message-ID: <CA+GZd78d8a9DjbP5SE9gipJnUiN==_WyCB-FDwKM_Db7mJcfhg@mail.gmail.com>

Approved.

https://etherpad.openstack.org/p/sahara-liberty-ffes updated

On Fri, Sep 4, 2015 at 12:40 PM, Sergey Reshetnyak <sreshetniak at mirantis.com
> wrote:

> +1 from me.
>
> Thanks,
> Sergey R.
>
> 2015-09-03 23:27 GMT+03:00 Ethan Gafford <egafford at redhat.com>:
>
>> Agreed. We've talked about this for a while, and it's very low risk.
>>
>> Thanks,
>> Ethan
>>
>> ----- Original Message -----
>> From: "michael mccune" <msm at redhat.com>
>> To: openstack-dev at lists.openstack.org
>> Sent: Thursday, September 3, 2015 3:53:41 PM
>> Subject: Re: [openstack-dev] [sahara] Request for Feature Freeze Exception
>>
>> On 09/03/2015 02:49 PM, Vitaly Gridnev wrote:
>> > Hey folks!
>> >
>> > I would like to propose to add to list of FFE's following blueprint:
>> > https://blueprints.launchpad.net/sahara/+spec/drop-hadoop-1
>> >
>> > Reasoning of that is following:
>> >
>> >   1. HDP 1.3.2 and Vanilla 1.2.1 have not been gated for a whole release
>> > cycle, which may be the cause of several bugs in these versions;
>> >   2. Minimal risk of removal: it doesn't touch versions that we already
>> > have.
>> >   3. All required changes were already uploaded for review:
>> >
>> https://review.openstack.org/#/q/status:open+project:openstack/sahara+branch:master+topic:bp/drop-hadoop-1,n,z
>>
>> this sounds reasonable to me
>>
>> mike
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/274e116a/attachment.html>

From fawad at plumgrid.com  Mon Sep 14 11:33:41 2015
From: fawad at plumgrid.com (Fawad Khaliq)
Date: Mon, 14 Sep 2015 16:33:41 +0500
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CAC1Hk_eyGoCa4r3ZkSqDMnws_uWtozCAvN5HQ4pgnm_8SnrZxg@mail.gmail.com>

Kyle,

You have done an exceptional job over the three cycles.
Neutron has matured to a new level under your leadership.
You will be missed. Enjoy your new free time!

Fawad Khaliq


On Sat, Sep 12, 2015 at 2:12 AM, Kyle Mestery <mestery at mestery.com> wrote:

> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in any way I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/1caf634f/attachment.html>

From samuel at yaple.net  Mon Sep 14 11:44:12 2015
From: samuel at yaple.net (Sam Yaple)
Date: Mon, 14 Sep 2015 11:44:12 +0000
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <55F6AD3E.9090909@oracle.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
Message-ID: <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>

On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke <paul.bourke at oracle.com>
wrote:

>
>
> On 13/09/15 18:34, Steven Dake (stdake) wrote:
>
>> Response inline.
>>
>> From: Sam Yaple <samuel at yaple.net>
>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>> Date: Sunday, September 13, 2015 at 1:35 AM
>> To: Steven Dake <stdake at cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <stdake at cisco.com>
>> wrote:
>> Response inline.
>>
>> From: Sam Yaple <samuel at yaple.net>
>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>> Date: Saturday, September 12, 2015 at 11:34 PM
>> To: Steven Dake <stdake at cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>>
>>
>> Sam Yaple
>>
>> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com>
>> wrote:
>>
>>
>> From: Sam Yaple <samuel at yaple.net>
>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>> Date: Saturday, September 12, 2015 at 11:01 PM
>> To: Steven Dake <stdake at cisco.com>
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev at lists.openstack.org>
>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>> types
>>
>>
>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com>
>> wrote:
>> Hey folks,
>>
>> Sam had asked a reasonable set of questions regarding a patchset:
>> https://review.openstack.org/#/c/222893/
>>
>> The purpose of the patchset is to enable both RDO and RHOS as binary
>> choices on RHEL platforms.  I suspect over time, from-source deployments
>> have the potential to become the norm, but the business logistics of such a
>> change are going to take some significant time to sort out.
>>
>> Red Hat has two distros of OpenStack, neither of which is from source.
>> One is free, called RDO, and the other is paid, called RHOS.  In order to
>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>> remains to be seen whether Red Hat will actively support Kolla deployments
>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>> will.  It is in Kolla's best interest to implement this model and not make
>> it hard on Operators, since many of them do indeed want Red Hat's support
>> structure for their OpenStack deployments.
>>
>> Now to Sam's questions:
>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more do
>> we add? What's our policy on adding a new type?"
>>
>> I'm not immediately clear on how binary fits in.  We could make binary
>> synonymous with the community supported version (RDO) while still
>> implementing the binary RHOS version.  Note Kolla does not "support" any
>> distribution or deployment of OpenStack; Operators will have to look to
>> their vendors for support.
>>
>> If everything between centos+rdo and rhel+rhos is mostly the same then I
>> would think it would make more sense to just use the base ('rhel' in this
>> case) to branch of any differences in the templates. This would also allow
>> for the least amount of change and most generic implementation of this
>> vendor specific packaging. This would also match what we do with
>> oraclelinux, we do not have a special type for that and any specifics would
>> be handled by an if statement around 'oraclelinux' and not some special
>> type.
>>
>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO also
>> runs on RHEL.  I want to enable Red Hat customers to make a choice to have
>> a supported operating system but not a supported Cloud environment.  The
>> answer here is RHEL + RDO.  This leads to full support down the road, if the
>> Operator chooses to pay Red Hat for it, by an easy transition to RHOS.
>>
>> I am against including vendor specific things like RHOS in Kolla outright
>> like you are proposing. Suppose another vendor comes along with a new base
>> and new packages. They are willing to maintain it, but it's something that
>> no one but their customers with their licensing can use. This is not
>> something that belongs in Kolla, and I am unsure that it is even appropriate
>> in OpenStack as a whole. Unless RHEL+RHOS can be used by those
>> that do not have a license for it, I do not agree with adding it at all.
>>
>> Sam,
>>
>> Someone stepping up to maintain a completely independent set of docker
>> images hasn't happened.  To date nobody has done that.  If someone were to
>> make that offer, and it was a significant change, I think the community as
>> a whole would have to evaluate such a drastic change.  That would certainly
>> increase our implementation and maintenance burden, which we don't want to
>> do.  I don't think what you propose would be in the best interest of the
>> Kolla project, but I'd have to see the patch set to evaluate the scenario
>> appropriately.
>>
>> What we are talking about is 5 additional lines to enable RHEL+RHOS
>> specific repositories, which is not very onerous.
>>
>> The fact that you can't use it directly has little bearing on whether it's
>> valid technology for OpenStack.  There are already two well-defined
>> historical precedents for non-licensed unusable integration in OpenStack.
>> Cinder has 55 [1] Volume drivers which they SUPPORT.  At least 80% of
>> them are completely proprietary hardware, which in reality is mostly just
>> software that is impossible to use without a license.  There
>> are 41 [2] Neutron drivers registered on the Neutron driver page; almost
>> the entirety require proprietary licenses for what amounts to integration
>> with proprietary software.  The OpenStack preferred license is ASL for a
>> reason: to be business friendly.  Licensed software has a place in the
>> world of OpenStack, even if it only serves as an integration point, which the
>> proposed patch does.  We are consistent with community values on this point
>> or I wouldn't have bothered proposing the patch.
>>
>> We want to encourage people to use Kolla for proprietary solutions if
>> they so choose.  This is how support manifests, which increases the
>> strength of the Kolla project.  The presence of support increases the
>> likelihood that Kolla will be adopted by Operators.  If you're asking the
>> Operators to maintain a fork for those 5 RHOS repo lines, that seems
>> unreasonable.
>>
>> I'd like to hear other Core Reviewer opinions on this matter and will
>> hold a majority vote on this thread as to whether we will facilitate
>> integration with third party software such as the Cinder Block Drivers, the
>> Neutron Network drivers, and various for-pay versions of OpenStack such as
>> RHOS.  I'd like all core reviewers to weigh in please.  Without a complete
>> vote it will be hard to gauge what the Kolla community really wants.
>>
>> Core reviewers:
>> Please vote +1 if you ARE satisfied with integrating third party
>> software that is unusable without a license, specifically Cinder volume
>> drivers, Neutron network drivers, and various for-pay distributions of
>> OpenStack and container runtimes.
>> Please vote -1 if you ARE NOT satisfied with integrating third party
>> software that is unusable without a license, specifically Cinder volume
>> drivers, Neutron network drivers, and various for-pay distributions of
>> OpenStack and container runtimes.
>>
>> A bit of explanation on your vote might be helpful.
>>
>> My vote is +1.  I have already provided my rationale.
>>
>> Regards,
>> -steve
>>
>> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix
>> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>>
>>
>> I appreciate you calling a vote so early. But I haven't had my questions
>> answered yet enough to even vote on the matter at hand.
>>
>> In this situation, the closest thing we have to a plugin system like
>> Cinder's or Neutron's is our header/footer system. What you are proposing
>> is integrating a proprietary solution into the core of Kolla. Those Cinder
>> and Neutron plugins have external components, and those external components
>> are not baked into the project.
>>
>> What happens if and when the RHOS packages require different tweaks in
>> the various containers? What if it requires changes to the Ansible
>> playbooks? It begins to balloon out past 5 lines of code.
>>
>> Unfortunately, the community _won't_ get to vote on whether or not to
>> implement those changes because RHOS is already in place. That's why I am
>> asking the questions now, as this _right_ _now_ is the significant change
>> you are talking about, regardless of the lines of code.
>>
>> So the question is not whether we are going to integrate 3rd party
>> plugins, but whether we are going to allow companies to build proprietary
>> products in the Kolla repo. If we allow RHEL+RHOS then we would need to
>> allow another distro+company packaging and potential Ansible tweaks to get
>> it to work for them.
>>
>> If you really want to do what Cinder and Neutron do, we need a better
>> system for injecting code. That would be much closer to the plugins that
>> the other projects have.
>>
>> I'd like to have a discussion about this rather than immediately call for
>> a vote which is why I asked you to raise this question in a public forum in
>> the first place.
>>
>>
>> Sam,
>>
>> While a true code injection system might be interesting and would be more
>> parallel with the plugin model used in cinder and neutron (and to some
>> degree nova), those various systems didn't begin that way.  Their driver
>> code at one point was completely integrated.  Only after 2-3 years was the
>> code broken into a fully injectable state.  I think that is an awfully high
>> bar to set to sort out the design ahead of time.  One of the reasons
>> Neutron has taken so long to mature is that the Neutron community attempted
>> to do plugins at too early a stage, which created big gaps in unit and
>> functional tests.  A more appropriate design would be for that pattern to
>> emerge from the system over time as people begin to adapt various distro
>> tech to Kolla.  If you look at the patch in gerrit, there is one clear
>> pattern, "Setup distro repos", which at some point in the future could be
>> made injectable, much as headers and footers are today.
>>
>> As for building proprietary products in the Kolla repository, the license
>> is ASL, which means it is inherently not proprietary.  I am fine with the
>> code base integrating with proprietary software as long as the license
>> terms are met; someone has to pay the mortgages of the thousands of
>> OpenStack developers.  We should encourage growth of OpenStack, and one of
>> the ways for that to happen is to be business friendly.  This translates
>> into first knowing the world is increasingly adopting open source
>> methodologies and facilitating that transition, and second accepting that
>> the world has a whole slew of proprietary software that already exists
>> today and requires integration.
>>
>> Nonetheless, we have a difference of opinion on this matter, and I want
>> this work to merge prior to rc1.  Since this is a project policy decision
>> and not a technical issue, it makes sense to put it to a wider vote to
>> either unblock or kill the work.  It would be a shame if we rejected all
>> driver and supported distro integration because we as a community take an
>> anti-business stance in our policies, but I'll live by what the community
>> decides.  This is not a decision either you or I may dictate, which is why
>> it has been put to a vote.
>>
>> Regards
>> -steve
>>
>>
>>
>> For oracle linux, I'd like to keep RDO for oracle linux and from source
>> on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the
>> patch set needs some later work here to address this point in more detail,
>> but as is 'binary' covers oracle linux.
>>
>> Perhaps what we should do is get rid of the binary type entirely.  Ubuntu
>> doesn't really have a binary type, they have a cloudarchive type, so binary
>> doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't have two
>> distributions of OpenStack, the same logic wouldn't apply to providing a
>> full support onramp for Ubuntu customers.  Oracle doesn't provide a binary
>> type either; their binary type is really RDO.
>>
>> The binary packages for Ubuntu are _packaged_ by the cloudarchive team.
>> But when an OpenStack release coincides with an LTS release (Icehouse
>> and 14.04 was the last one), you do not add a new repo because the packages
>> are in the main Ubuntu repo.
>>
>> Debian provides its own packages as well. I do not want a type name per
>> distro. 'binary' catches all packaged OpenStack things by a distro.
>>
>>
>> FWIW I never liked the transition away from rdo in the repo names to
>> binary.  I guess I should have -1'ed those reviews back then, but I think
>> it's time to either revisit the decision or compromise: binary and rdo
>> mean the same thing in a centos and rhel world.
>>
>> Regards
>> -steve
>>
>>
>> Since we implement multiple bases, some of which are not RPM based, it
>> doesn't make much sense to me to have rhel and rdo as a type which is why
>> we removed rdo in the first place in favor of the more generic 'binary'.
>>
>>
>> As such, the implied second question "How many more do we add?" sort of
>> sounds like "how many do we support?".  The answer to the second question
>> is none; again, the Kolla community does not support any deployment of
>> OpenStack.  To the question as posed, how many we add, the answer is it is
>> really up to community members willing to implement and maintain the
>> work.  In this case, I have personally stepped up to implement RHOS and
>> maintain it going forward.
>>
>> Our policy on adding a new type could be simple or onerous.  I prefer
>> simple.  If someone is willing to write the code and maintain it so that it
>> stays in good working order, I see no harm in it remaining in tree.  I
>> don't suspect there will be a lot of people interested in adding multiple
>> distributions for a particular operating system.  To my knowledge, and I
>> could be incorrect, Red Hat is the only OpenStack company with paid and
>> community versions of OpenStack available simultaneously, and the paid
>> version is only available on RHEL.  I think the risk of RPM based
>> distributions plus their type count spiraling out of manageability is low.
>> Even if the risk were high, I'd prefer to keep an open mind to facilitate
>> an increase in diversity in our community (which is already fantastically
>> diverse, btw ;)
>>
>> I am open to questions, comments or concerns.  Please feel free to voice
>> them.
>>
>> Regards,
>> -steve
>>
>>
>>
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Both arguments sound valid to me, both have pros and cons.
>
> I think it's valuable to look to the experiences of Cinder and Neutron in
> this area, both of which seem to have the same scenario and have existed
> much longer than Kolla. From what I know of how these operate, proprietary
> code is allowed to exist in the mainline so long as a certain set of criteria
> is met. I'd have to look it up, but I think it mostly comes down to the
> relevant parties having to "play by the rules", e.g. provide a working CI,
> help with reviews, attend weekly meetings, etc. If Kolla can look to craft a
> similar set of criteria for proprietary code down the line, I think it
> should work well for us.
>
> Steve has a good point in that it may be too much overhead to implement a
> plugin system or similar up front. Instead, we should actively monitor the
> overhead in terms of reviews and code size that these extra implementations
> add. Perhaps agree to review it at the end of Mitaka?
>
> Given the project is young, I think it can also benefit from the increased
> usage and exposure from allowing these parties in. I would hope independent
> contributors would not feel excluded by not being able to use or test
> the pieces that need a license. The libre distros will remain #1 for us.
>
> So based on the above explanation, I'm +1.
>
> -Paul
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Given Paul's comments I would agree here as well. I would like to pin down
the criteria required for Kolla to allow this proprietary code into the main
repo as soon as possible, though, and suggest that a bare minimum criterion
be the ability to gate against it.

As for a plugin system, I also agree with Paul that we should revisit the
question of including these other distros and any types they need after we
have had time to see whether they introduce additional overhead.

So for the question 'Do we allow code that relies on proprietary packages?'
I would vote +1, with the condition that we define the requirements of
allowing that code as soon as possible.
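For reference, the kind of branching under discussion can be sketched in a few lines. This is a hypothetical illustration, not Kolla's actual build code: the names 'rdo', 'rhos', and 'cloudarchive' come from the thread, everything else is assumed for the example.

```python
# Hypothetical sketch of resolving a (base distro, install_type) pair to a
# repo family, keeping a single generic 'binary' type and isolating vendor
# specifics (like RHOS) behind one mapping instead of new per-vendor types.

REPO_SETUP = {
    ("centos", "binary"): "rdo",
    ("rhel", "binary"): "rdo",           # supported OS, community packages
    ("rhel", "rhos"): "rhos",            # supported OS and supported cloud
    ("oraclelinux", "binary"): "rdo",    # Oracle's 'binary' is really RDO
    ("ubuntu", "binary"): "cloudarchive",
}

def repos_for(base: str, install_type: str) -> str:
    """Return the repo family for a base/type pair, rejecting unknown combos."""
    try:
        return REPO_SETUP[(base, install_type)]
    except KeyError:
        raise ValueError("unsupported combination: %s/%s" % (base, install_type))
```

Adding another vendor distribution then touches only the mapping, which keeps the review overhead visible in one place.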
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/4eab14a4/attachment.html>

From feilong at catalyst.net.nz  Mon Sep 14 11:45:49 2015
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Mon, 14 Sep 2015 23:45:49 +1200
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <CAHPxGAWbN1oFfPM1aS6=BmLcv2ZiDQzjjnBafQf_2n_O=71qqw@mail.gmail.com>
References: <55F2E3F9.1000907@gmail.com>
 <etPan.55f31e93.35ed68df.5cd@MMR298FD58>
 <CABdthUSsbSxn1Vb5nTyGnLO__8VYn7KvGHUV8MeBD6ZERtD8ew@mail.gmail.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D3E96@G9W0745.americas.hpqcorp.net>
 <CAHPxGAWbN1oFfPM1aS6=BmLcv2ZiDQzjjnBafQf_2n_O=71qqw@mail.gmail.com>
Message-ID: <55F6B36D.6000307@catalyst.net.nz>

+1

Yep, it would be nice if Zhi Yan could promote OpenStack in Alibaba :)

On 14/09/15 22:59, Mikhail Fedosin wrote:
> +1.
> I hope that Zhi Yan joined Alibaba to make it use Openstack in the 
> future :)
>
> On Mon, Sep 14, 2015 at 11:23 AM, Kuvaja, Erno <kuvaja at hpe.com> wrote:
>
>     +1
>
>     *From:* Alex Meade [mailto:mr.alex.meade at gmail.com]
>     *Sent:* Friday, September 11, 2015 7:37 PM
>     *To:* OpenStack Development Mailing List (not for usage questions)
>     *Subject:* Re: [openstack-dev] [Glance] glance core rotation part 1
>
>     +1
>
>     On Fri, Sep 11, 2015 at 2:33 PM, Ian Cordasco
>     <ian.cordasco at rackspace.com> wrote:
>
>
>
>         -----Original Message-----
>         From: Nikhil Komawar <nik.komawar at gmail.com>
>         Reply: OpenStack Development Mailing List (not for usage
>         questions) <openstack-dev at lists.openstack.org>
>         Date: September 11, 2015 at 09:30:23
>         To: openstack-dev at lists.openstack.org
>         Subject:  [openstack-dev] [Glance] glance core rotation part 1
>
>         > Hi,
>         >
>         > I would like to propose the following removals from
>         glance-core based on
>         > the simple criterion of inactivity/limited activity for a
>         long period (2
>         > cycles or more) of time:
>         >
>         > Alex Meade
>         > Arnaud Legendre
>         > Mark Washenberger
>         > Iccha Sethi
>
>         I think these are overdue
>
>         > Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)
>
>         Sad to see Zhi Yan Liu's activity drop off.
>
>         > Please vote +1 or -1 and we will decide by Monday EOD PT.
>
>         +1
>
>         --
>         Ian Cordasco
>
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (???)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/69ee6463/attachment.html>

From kuvaja at hpe.com  Mon Sep 14 11:54:20 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Mon, 14 Sep 2015 11:54:20 +0000
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
 concerns
In-Reply-To: <55F67F28.70506@openstack.org>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
 <55F67F28.70506@openstack.org>
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4D5232@G9W0745.americas.hpqcorp.net>

> -----Original Message-----
> From: Thierry Carrez [mailto:thierry at openstack.org]
> Sent: Monday, September 14, 2015 9:03 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
> concerns
> 
<CLIP>
> 
> Or are you suggesting it is preferable to hide that risk from our
> operators/users, to protect that project team developers ?
> 
> --
> Thierry Carrez (ttx)
> 
Unfortunately this seems to be the trend, not only in <insert any group here> but in society at large. Everything needs to be everyone-friendly and politically correct; it's not ok to talk about difficult topics by their real names because someone involved might get their feelings hurt, and it's not ok to compete because the losers might get their feelings hurt.

While it is a bit of a double-edged sword, I think this is an exact example of that. One could argue whether the project has a reason to exist if saying out loud "it does not have diversity in its development community" will kill it. I think there are plenty of examples both ways in the open source world: abandoned projects get picked up because people think they still have a use case and value, while on the other side perhaps promising projects get forgotten because no-one else really felt the urge to keep 'em alive.

Personally I feel this is a bit like stamping a feature experimental: "Please feel free to play around with it, but we discourage you from deploying it in production unless you're willing to pick up its maintenance if the team decides to do something else." There is nothing wrong with that.

I don't think these projects should be hiding behind the veil of the big tent, and consumer expectations should be set at least close to reality without consumers needing to do a huge amount of detective work. That was the point of the tags in the first place, no?

Obviously the above is just my blunt self. If someone went and rage-killed their project because of that, good for you; now get yourself together and do it again. ;)

- Erno (jokke) Kuvaja


From doug at doughellmann.com  Mon Sep 14 12:10:16 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 08:10:16 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
Message-ID: <1442232202-sup-5997@lrrr.local>


After having some conversations with folks at the Ops Midcycle a
few weeks ago, and observing some of the more recent email threads
related to glance, glance-store, the client, and the API, I spent
last week contacting a few of you individually to learn more about
some of the issues confronting the Glance team. I had some very
frank, but I think constructive, conversations with all of you about
the issues as you see them. As promised, this is the public email
thread to discuss what I found, and to see if we can agree on what
the Glance team should be focusing on going into the Mitaka summit
and development cycle and how the rest of the community can support
you in those efforts.

I apologize for the length of this email, but there's a lot to go
over. I've identified 2 high priority items that I think are critical
for the team to be focusing on starting right away in order to use
the upcoming summit time effectively. I will also describe several
other issues that need to be addressed but that are less immediately
critical. First the high priority items:

1. Resolve the situation preventing the DefCore committee from
   including image upload capabilities in the tests used for trademark
   and interoperability validation.

2. Follow through on the original commitment of the project to
   provide an image API by completing the integration work with
   nova and cinder to ensure V2 API adoption.

I. DefCore

The primary issue that attracted my attention was the fact that
DefCore cannot currently include an image upload API in its
interoperability test suite, and therefore we do not have a way to
ensure interoperability between clouds for users or for trademark
use. The DefCore process has been long, and at times confusing,
even to those of us following it sort of closely. It's not entirely
surprising that some projects haven't been following the whole time,
or aren't aware of exactly what the whole thing means. I have
proposed a cross-project summit session for the Mitaka summit to
address this need for communication more broadly, but I'll try to
summarize a bit here.

DefCore is using automated tests, combined with business policies,
to build a set of criteria for allowing trademark use. One of the
goals of that process is to ensure that all OpenStack deployments
are interoperable, so that users who write programs that talk to
one cloud can use the same program with another cloud easily. This
is a *REST API* level of compatibility. We cannot insert cloud-specific
behavior into our client libraries, because not all cloud consumers
will use those libraries to talk to the services. Similarly, we
can't put the logic in the test suite, because that defeats the
entire purpose of making the APIs interoperable. For this level of
compatibility to work, we need well-defined APIs, with a long support
period, that work the same no matter how the cloud is deployed. We
need the entire community to support this effort. From what I can
tell, that is going to require some changes to the current Glance
API to meet the requirements. I'll list those requirements, and I
hope we can discuss them to a degree that ensures everyone understands
them. I don't want this email thread to get bogged down in
implementation details or API designs, though, so let's try to keep
the discussion at a somewhat high level, and leave the details for
specs and summit discussions. I do hope you will correct any
misunderstandings or misconceptions, because unwinding this as an
outside observer has been quite a challenge and it's likely I have
some details wrong.

As I understand it, there are basically two ways to upload an image
to glance using the V2 API today. The "POST" API pushes the image's
bits through the Glance API server, and the "task" API instructs
Glance to download the image separately in the background. At one
point apparently there was a bug that caused the results of the two
different paths to be incompatible, but I believe that is now fixed.
However, the two separate APIs each have different issues that make
them unsuitable for DefCore.
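To make the distinction concrete, here is a sketch of the two upload paths as the HTTP requests a client would issue. The endpoint paths follow the published V2 API; the hostname and the exact fields of the task "input" blob are illustrative assumptions — the variability of that blob is precisely the interoperability problem discussed here.

```python
# Sketch of the two Glance V2 upload paths as raw HTTP requests.
# GLANCE endpoint and payload field values are hypothetical.
import json

GLANCE = "http://glance.example.com:9292"  # hypothetical endpoint

# Path 1: the "POST" API -- the image bits travel through the API server.
post_requests = [
    ("POST", f"{GLANCE}/v2/images",
     json.dumps({"name": "cirros", "disk_format": "qcow2",
                 "container_format": "bare"})),
    # ...followed by a PUT of the raw bits to the new image's /file URL:
    ("PUT", f"{GLANCE}/v2/images/{{image_id}}/file", b"<image bytes>"),
]

# Path 2: the "task" API -- Glance fetches the image in the background.
task_request = ("POST", f"{GLANCE}/v2/tasks", json.dumps({
    "type": "import",
    "input": {  # free-form blob; its schema differs across deployments
        "import_from": "http://download.example.com/cirros.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {"name": "cirros", "disk_format": "qcow2",
                             "container_format": "bare"},
    },
}))
```

In the first path the caller is responsible for moving the bytes through the API server; in the second, the caller only describes where the bytes live and the work happens server-side, which is what makes the DoS argument and the discoverability argument pull in opposite directions.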

The DefCore process relies on several factors when designating APIs
for compliance. One factor is the technical direction, as communicated
by the contributor community -- that's where we tell them things
like "we plan to deprecate the Glance V1 API". In addition to the
technical direction, DefCore looks at the deployment history of an
API. They do not want to require deploying an API if it is not seen
as widely usable, and they look for some level of existing adoption
by cloud providers and distributors as an indication that the
API is desired and can be successfully used. Because we have multiple
upload APIs, the message we're sending on technical direction is
weak right now, and so they have focused on deployment considerations
to resolve the question.

The POST API is enabled in many public clouds, but not consistently.
In some clouds like HP, a tenant requires special permission to use
the API. At least one provider, Rackspace, has disabled the API
entirely. This is apparently due to what seems like a fair argument
that uploading the bits directly to the API service presents a
possible denial of service vector. Without arguing the technical
merits of that decision, the fact remains that without a strong
consensus from deployers that the POST API should be publicly and
consistently available, it does not meet the requirements to be
used for DefCore testing.

The task API is also not widely deployed, so its adoption for DefCore
is problematic. If we provide a clear technical direction that this
API is preferred, that may overcome the lack of adoption, but the
current task API seems to have technical issues that make it
fundamentally unsuitable for DefCore consideration. While the task
API addresses the problem of a denial of service, and includes
useful features such as processing of the image during import, it
is not strongly enough defined in its current form to be interoperable.
Because it's a generic API, the caller must know how to fully
construct each task, and know what task types are supported in the
first place. There is only one "import" task type supported in the
Glance code repository right now, but it is not clear that "import"
always uses the same arguments, or interprets them in the same way.
For example, the upstream documentation [1] describes a task that
appears to use a URL as source, while the Rackspace documentation [2]
describes a task that appears to take a swift storage location.
I wasn't able to find JSONSchema validation for the "input" blob
portion of the task in the code [3], though that may happen down
inside the task implementation itself somewhere.
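For illustration only: if Glance did declare a JSONSchema for the "input" blob, the contract would become checkable and discoverable. The schema below is invented, and the validator is a minimal hand-rolled stand-in for a real `jsonschema.validate()` call; it only shows the shape of the idea.

```python
# Hypothetical schema for the "import" task's "input" blob -- not the
# actual Glance schema, which (per the discussion above) does not seem
# to exist in the code.
IMPORT_INPUT_SCHEMA = {
    "type": "object",
    "required": ["import_from", "image_properties"],
    "properties": {
        "import_from": {"type": "string"},
        "import_from_format": {"type": "string"},
        "image_properties": {"type": "object"},
    },
}

def validate(blob, schema=IMPORT_INPUT_SCHEMA):
    """Minimal stand-in for jsonschema.validate(): checks required keys
    and the declared type of each known property."""
    if not isinstance(blob, dict):
        return False
    if any(key not in blob for key in schema["required"]):
        return False
    types = {"string": str, "object": dict}
    return all(
        isinstance(blob[k], types[spec["type"]])
        for k, spec in schema["properties"].items() if k in blob
    )

ok = validate({"import_from": "http://example.com/img.qcow2",
               "image_properties": {"name": "img"}})
```

With a schema like this published per task type, a client could discover up front which arguments an "import" task accepts, instead of learning it from each deployment's documentation.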

Tasks also come from plugins, which may be installed differently
based on the deployment. This is an interesting approach to creating
API extensions, but isn't discoverable enough to write interoperable
tools against. Most of the other projects are starting to move away
from supporting API extensions at all because of interoperability
concerns they introduce. Deployers should be able to configure their
clouds to perform well, but not to behave in fundamentally different
ways. Extensions are just that, extensions. We can't rely on them
for interoperability testing.

There is a lot of fuzziness around exactly what is supported for
image upload, both in the documentation and in the minds of the
developers I've spoken to this week, so I'd like to take a step
back and try to work through some clear requirements, and then we
can have folks familiar with the code help figure out if we have a
real issue, if a minor tweak is needed, or if things are good as
they stand today and it's all a misunderstanding.

1. We need a strongly defined and well documented API, with arguments
   that do not change based on deployment choices. The behind-the-scenes
   behaviors can change, but the arguments provided by the caller
   must be the same and the responses must look the same. The
   implementation can run as a background task rather than receiving
   the full image directly, but the current task API is too vaguely
   defined to meet this requirement, and IMO we need an entry point
   focused just on uploading or importing an image.

2. Glance cannot require having a Swift deployment. It's not clear
   whether this is actually required now, so if it's not then we're
   in a good state. It's fine to provide an optional way to take
   advantage of Swift if it is present, but it cannot be a required
   component. There are three separate trademark "programs", with
   separate policies attached to them. There is an umbrella "Platform"
   program that is intended to include all of the TC approved release
   projects, such as nova, glance, and swift. However, there is
   also a separate "Compute" program that is intended to include
   Nova, Glance, and some others but *not* Swift. This is an important
   distinction, because there are many use cases both for distributors
   and public cloud providers that do not incorporate Swift for a
   variety of reasons. So, we can't have Glance's primary configuration
   require Swift and we need to provide tests for the DefCore team
   that run without Swift. Duplicate tests that do use Swift are
   fine, and might be used for "Platform" compliance tests.
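As a sketch of what "Swift optional" means in practice, a file-backed default configuration might look like the fragment below. The option names follow the glance_store library's `[glance_store]` section as I understand it, but they should be checked against the release's own documentation before relying on them:

```ini
# Illustrative glance-api.conf fragment -- option names are assumptions,
# verify against your release's glance_store docs.
[glance_store]
# Filesystem backend as the default; no Swift deployment required.
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

# Swift stays an optional opt-in, e.g.:
# stores = file,http,swift
# default_store = swift
```

The point is simply that the primary configuration must work with only a local store, with Swift added as one backend among several.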

3. We need an integration test suite in tempest that fully exercises
   the public image API by talking directly to Glance. This applies
   to the entire API, not just image uploads. It's fine to have
   duplicate tests using the proxy in Nova if the Nova team wants
   those, but DefCore should be using tests that talk directly to
   the service that owns each feature, without relying on any
   proxying. We've already missed the chance to deal with this in
   the current DefCore definition, which uses image-related tests
   that talk to the Nova proxy [4][5], so we'll have to maintain
   the proxy for the required deprecation period. But we won't be
   able to consider removing that proxy until we provide alternate
   tests for those features that speak directly to Glance. We may
   have some coverage already, but I wasn't able to find a task-based
   image upload test and there is no "image create" mentioned in
   the current draft of capabilities being reviewed [6]. There may
   be others missing, so someone more familiar with the feature set
   of Glance should do an audit and document what tests are needed
   so the work can be split up.

4. Once identified and incorporated into the DefCore capabilities
   set, the selected API needs to remain stable for an extended
   period of time and follow the deprecation timelines defined by
   DefCore.  That has implications for the V3 API currently in
   development to turn Glance into a more generic artifacts service.
   There are a lot of ways to handle those implications, and no
   choice needs to be made today, so I only mention it to make sure
   it's clear that (a) we must get V2 into shape for DefCore and
   (b) when that happens, we will need to maintain V2 even if V3
   is finished. We won't be able to deprecate V2 quickly.

Now, it's entirely possible that we can meet all of those requirements
today, and that would be great. If that's the case, then the problem
is just one of clear communication and documentation. I think there's
probably more work to be done than that, though.

[1] http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2
[2] http://docs.rackspace.com/images/api/v2/ci-devguide/content/POST_importImage_tasks_Image_Task_Calls.html#d6e4193
[3] http://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/tasks.py
[4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n70
[5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/guidelines/2015.07.rst
[6] https://review.openstack.org/#/c/213353/

II. Complete Cinder and Nova V2 Adoption

The Glance team originally committed to providing an Image Service
API. Besides our end users, both Cinder and Nova consume that API.
The shift from V1 to V2 has been a long road. We're far enough
along, and the V1 API has enough issues preventing us from using
it for DefCore, that we should push ahead and complete the V2
adoption. That will let us properly deprecate and drop V1 support,
and concentrate on maintaining V2 for the necessary amount of time.

There are a few specs for the work needed in Nova, but that work
didn't land in Liberty for a variety of reasons. We need resources
from both the Glance and Nova teams to work together to get this
done as early as possible in Mitaka to ensure that it actually lands
this time. We should be able to schedule a joint session at the
summit to have the conversation, and we need to take advantage of
that opportunity to ensure the details are fully resolved so that
everyone understands the plan.

The work in Cinder is more complete, but may need to be reviewed
to ensure that it is using the API correctly, safely, and efficiently.
Again, this is a joint effort between the Glance and Cinder teams
to identify any issues and work out a resolution.

Part of this work will also be to audit the Glance API documentation,
to ensure it accurately reflects what the APIs expect to receive
and return. There are reportedly at least a few cases where things
are out of sync right now. This will require some coordination with
the Documentation team.


Those are the two big priorities I see, based on things the rest
of the community needs from the team and existing commitments that
have been made. There are some other things that should also be
addressed.


III. Security audits & bug fixes

Five of 18 recent security reports were related to Glance [7]. It's
not surprising, given recent resource constraints, that addressing
these has been a challenge. Still, these should be given high
priority.

[7] https://security.openstack.org/search.html?q=glance&check_keywords=yes&area=default


IV. Sorting out the glance-store question

This was perhaps the most confusing thing I learned about this week.
The perception outside of the Glance team is that the library is
meant to be used by Nova and Cinder to communicate directly with
the image store, bypassing the REST API, to improve performance in
several cases. I know the Cinder team is especially interested in
some sort of interface for manipulating images inside the storage
system without having to download them to make copies (for RBD and
other systems that support CoW natively). That doesn't seem to be
what the library is actually good for, though, since most of the
Glance core folks I talked to thought it was really a caching layer.
This discrepancy in what folks wanted vs. what they got may explain
some of the heated discussions in other email threads.

Frankly, given the importance of the other issues, I recommend
leaving glance-store standalone this cycle. Unless the work for
dealing with priorities I and II is made *significantly* easier by
not having a library, the time and energy it will take to re-integrate
it with the Glance service seems like a waste of limited resources.
The time to even discuss it may be better spent on the planning
work needed. That said, if the library doesn't provide the features
its users were expecting, it may be better to fold it back in and
create a different library with a better understanding of the
requirements at some point. The path to take is up to the Glance
team, of course, but we're already down far enough on the priority
list that I think we'll be lucky to finish the preceding items this
cycle.


Those are the development priorities I was able to identify in my
interviews this week, and there is one last thing the team needs
to do this cycle: Recruit more contributors.

Almost every current core contributor I spoke with this week indicated
that their time was split between another project and Glance. Often
higher priority had to be given, understandably, to internal product
work. That's the reality we work in, and everyone feels the same
pressures to some degree. One way to address that pressure is to
bring in help. So, we need a recruiting drive to find folks willing
to contribute code and reviews to the project to keep the team
healthy. I listed this item last because if you've made it this far
you should see just how much work the team has ahead. We're a big
community, and I'm confident that we'll be able to find help for
the Glance team, but it will require mentoring and education to
bring people up to speed to make them productive.

Doug


From anlin.kong at gmail.com  Mon Sep 14 12:10:43 2015
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Mon, 14 Sep 2015 20:10:43 +0800
Subject: [openstack-dev]  [Mistral] Mistral PTL Candidacy
Message-ID: <CALjNAZ3bEjv4HTiRuF0eZ-QVJZKUBTtYFewr5U6hHE=s=9vA9w@mail.gmail.com>

Greetings,

After contributing consistently to Mistral since the Kilo dev cycle, I'd
like to run for the Mistral PTL position for the Mitaka release cycle.
In case you don't immediately recognize me, I'm 'xylan_kong' on IRC,
where you can always find me.

Although I know I have served as a Mistral core reviewer for less than a
year, I have dedicated myself almost full-time to Mistral upstream work:
making it more feature-rich, more stable, and more scalable, and keeping
it aligned with community-wide rules. I've also helped many new developers
contribute to the Mistral project, and have tried to always be available
on IRC and the mailing list, ready to help.

Of course, I'd like to thank all the fantastic folks on the Mistral core
team. Although it's a really small team, I've been honored to serve
alongside Renat, Nikolay, Winson, and many others this year; thanks to all
of you for your mentoring. Now I feel I could try to serve as PTL, and I
am fortunate enough to work for an employer that allows me to focus 100%
of my time on the role.

Some of my thoughts about Mistral for the Mitaka cycle are as follows:

* Improving the growth and vitality of the Mistral project
People on the Mistral team may know that there are not a lot of
contributors interested in Mistral. It's important that, as PTL, I spend
time on non-technical duties besides the technical effort: actively
seeking out input and feedback from users, operators, and deployers;
mentoring others to become future core contributors and even project team
leads; and acting as a point of contact for other OpenStack projects.

* Continuing to improve the Mistral UI and documentation
Thanks to all the talented folks for their tremendous efforts on the UI
and docs (you know who I mean :-)), we have already made big progress on
the Mistral UI and the doc reference. However, we still need to put more
effort into this, since the UI is so important to Mistral usage, whether
for end users or for PoC scenarios.

* Participating in cross-project initiatives and discussions in OpenStack
This involves things like collaborating with the API working group,
cross-project specs, the QA team, the release team, etc. We should be
reviewing specs from these teams and making sure we implement those
initiatives in a timely fashion: for example, designing our new API
version according to the API guidelines, keeping pace with other projects
ahead of the release schedule deadlines, integrating with the OpenStack
client, and showing we are part of the community as a whole, like many
other official projects.

* Introducing a spec repository for Mistral (or something else we think is
better for tracking design details)
As I mentioned before, Mistral is a small project compared to many others,
but that doesn't mean we can do things 'freely'. A good recent development
is that we started using etherpads to bring new features up for discussion
before people decide to implement them. However, etherpad is fragile
enough that we may lose the 'history' of how a feature evolved. I'd like
to propose that we use a dedicated git repository (*-specs), or something
like it, to propose, discuss, iterate on, and track approvals of
specifications using Gerrit instead of Launchpad, like other projects do.
That would be very useful for newcomers learning how Mistral evolves,
where we are, and where we're going.

* Last but not least, bringing more and more useful features to Mistral
while increasing its stability at the same time.

There are other things I'd like to accomplish as well, but those are the
main pieces in my mind as I decided to run for PTL. I know my chances of
being elected are small, but I still want to give it a try, regardless of
the outcome.

I'm looking forward to serving the Mistral community during Mitaka, as I
always did before.

-- 
*Regards!*
*-----------------------------------*
*Lingxian Kong*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/0363b3d3/attachment.html>

From slukjanov at mirantis.com  Mon Sep 14 12:13:03 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Mon, 14 Sep 2015 15:13:03 +0300
Subject: [openstack-dev] [sahara] FFE request for nfs-as-a-data-source
In-Reply-To: <604925892.24685758.1441919746934.JavaMail.zimbra@redhat.com>
References: <6EEB8A90CDE31C4680037A635100E8FF953DBC@SHSMSX104.ccr.corp.intel.com>
 <55F0A58D.7000205@redhat.com>
 <604925892.24685758.1441919746934.JavaMail.zimbra@redhat.com>
Message-ID: <CA+GZd78_kNvbdic8BKdzgLKEwNrAe6iFZxymdrNHoVBDVhNK9w@mail.gmail.com>

Hi,

Not approved.

Unfortunately the spec wasn't approved in time, so it's too late for FFEs.
The patches don't seem ready either, and have had unresolved comments for
some time.

Thanks.

On Fri, Sep 11, 2015 at 12:15 AM, Ethan Gafford <egafford at redhat.com> wrote:

> This seems like a sanely scoped exception, and wholly agreed about
> spinning off the UI into a separate bp. +1.
>
> -Ethan
>
> >----- Original Message -----
> >From: "michael mccune" <msm at redhat.com>
> >To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> >Sent: Wednesday, September 9, 2015 5:33:01 PM
> >Subject: Re: [openstack-dev] [sahara] FFE request for nfs-as-a-data-source
> >
> >i'm +1 for this feature as long as we're talking about just the sahara
> >controller and saharaclient. i agree we probably cannot get the horizon
> >changes in before the final release.
> >
> >mike
> >
> >On 09/09/2015 03:33 AM, Chen, Weiting wrote:
> >> Hi, all.
> >>
> >> I would like to request FFE for nfs as a data source for sahara.
> >>
> >> This bp originally should include a dashboard change to create nfs as a
> >> data source.
> >>
> >> I will register it as another bp and implement it in next version.
> >>
> >> However, these patches have already been completed to put the
> >> nfs-driver into sahara-image-elements and enable it in the cluster.
> >>
> >> This way, the user can use the nfs protocol via the command line in
> >> the Liberty release.
> >>
> >> Blueprint:
> >>
> >> https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
> >>
> >> Spec:
> >>
> >> https://review.openstack.org/#/c/210839/
> >>
> >> Patch:
> >>
> >> https://review.openstack.org/#/c/218637/
> >>
> >> https://review.openstack.org/#/c/218638/
> >>
> >>
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/41ca902e/attachment.html>

From slukjanov at mirantis.com  Mon Sep 14 12:15:15 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Mon, 14 Sep 2015 15:15:15 +0300
Subject: [openstack-dev] [sahara] FFE request for scheduler and suspend
 EDP job for sahara
In-Reply-To: <473417310.24682096.1441919215000.JavaMail.zimbra@redhat.com>
References: <CAMfz_LOZmrju2eRrQ1ASCK2yjyxa-UhuV=uv3SRrtKkoPDP-bQ@mail.gmail.com>
 <CA+O3VAivX4To4MQnAwP-fj2iX2hb3dsUj7rP3zMh9dRyPHFeHA@mail.gmail.com>
 <CAMfz_LO6P0=VF-QSJNJeo_8XfNBShLUH3q6Nt=ecL48YEkebCQ@mail.gmail.com>
 <473417310.24682096.1441919215000.JavaMail.zimbra@redhat.com>
Message-ID: <CA+GZd79KAuz9+L-SMMGmo-CpJ3HnjTsHgdTrxhLYmLOgRJmpbw@mail.gmail.com>

Hi,

Not approved.

Unfortunately this seems like a dangerous patch: it changes the API, and
the Liberty client will not support it. Let's have it merged early in the
M cycle (after the Liberty RC is published, master will be open for M).

Thanks.

On Fri, Sep 11, 2015 at 12:06 AM, Ethan Gafford <egafford at redhat.com> wrote:

> Sadly, in reviewing the client code, Vitaly is right; the current client
> will not support this feature, which would make it direct REST call only.
> Given that, I am uncertain it is worth the risk. I'd really love to have
> seen this go in; it's a great feature and a lot of work has gone into it,
> but perhaps it should be a great feature in M.
>
> If we agree to cut a new client point release, I could see this, but for
> now I fear I'm -1.
>
> Thanks,
> Ethan
>
> >From: "lu jander" <juvenboy1987 at gmail.com>
> >To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> >Sent: Monday, September 7, 2015 1:26:49 PM
> >Subject: Re: [openstack-dev] [sahara] FFE request for scheduler and
> suspend EDP job for sahara
>
> >
> >Hi Vitaly,
> >The enable-scheduled-edp-jobs patch has gone through 34 patch sets of review:
> https://review.openstack.org/#/c/182310/ . It has no impact on the other
> work-in-progress patch.
> >
> >2015-09-07 18:48 GMT+08:00 Vitaly Gridnev <vgridnev at mirantis.com>:
> >
> >    Hey!
> >
> >    From my point of view, we definitely should not give an FFE for the
> add-suspend-resume-ability-for-edp-jobs spec, because the client side for this
> change is not included in the official Liberty release.
> >
> >    By the way, I am not sure about an FFE for enable-scheduled-edp-jobs,
> because it's not clear what the progress of that blueprint is. Its
> implementation consists of 2 patch sets, and one of them is marked as Work
> In Progress.
> >
> >
> >    On Sun, Sep 6, 2015 at 7:18 PM, lu jander <juvenboy1987 at gmail.com>
> wrote:
> >
> >        Hi, Guys
> >
> >         I would like to request an FFE for the scheduled EDP job and suspend
> EDP job features for sahara. These patches have been reviewed for a long time,
> with lots of patch sets.
> >
> >        Blueprint:
> >
> >        (1)
> https://blueprints.launchpad.net/sahara/+spec/enable-scheduled-edp-jobs
> >        (2)
> https://blueprints.launchpad.net/sahara/+spec/add-suspend-resume-ability-for-edp-jobs
> >
> >        Spec:
> >
> >        (1) https://review.openstack.org/#/c/175719/
> >        (2) https://review.openstack.org/#/c/198264/
> >
> >
> >        Patch:
> >
> >        (1) https://review.openstack.org/#/c/182310/
> >        (2) https://review.openstack.org/#/c/201448/
> >
> >
> __________________________________________________________________________
> >        OpenStack Development Mailing List (not for usage questions)
> >        Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >    --
> >    Best Regards,
> >    Vitaly Gridnev
> >    Mirantis, Inc
> >
> >
> __________________________________________________________________________
> >    OpenStack Development Mailing List (not for usage questions)
> >    Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/b8dd9f03/attachment.html>

From aurlapova at mirantis.com  Mon Sep 14 12:36:10 2015
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Mon, 14 Sep 2015 15:36:10 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for
	fuel-ostf core
In-Reply-To: <CAMZD-t8o75frtWUnkL71K+szgLwxHDi28DoHzYnUB36VeKhV4w@mail.gmail.com>
References: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
 <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>
 <CAP2-cGd9ex57e7g+WuqKK=d9kxDZs77hUHR7QJa-n8fjLwHbpw@mail.gmail.com>
 <CAFNR43P3BLkWcvay3KewNwFdLjjVLi16iHY7GVZecsexnkaDNA@mail.gmail.com>
 <CAMZD-t8o75frtWUnkL71K+szgLwxHDi28DoHzYnUB36VeKhV4w@mail.gmail.com>
Message-ID: <CAC+XjbYamgTcoNtShU==Vjf6hBXLe+oN9wdLNoCeX_7-81YVnw@mail.gmail.com>

Andrey,
welcome to fuel-core-ostf!

Nastya.

On Wed, Sep 9, 2015 at 6:58 PM, Dmitry Tyzhnenko <dtyzhnenko at mirantis.com>
wrote:

> +1
> On 8 Sep 2015 at 13:07, "Alexander Kostrikov" <
> akostrikov at mirantis.com> wrote:
>
> +1
>>
>> On Tue, Sep 8, 2015 at 9:07 AM, Dmitriy Shulyak <dshulyak at mirantis.com>
>> wrote:
>>
>>> +1
>>>
>>> On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova <
>>> aurlapova at mirantis.com> wrote:
>>>
>>>> +1
>>>>
>>>> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
>>>> tleontovich at mirantis.com> wrote:
>>>>
>>>>> Fuelers,
>>>>>
>>>>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>>>>> He's been doing a great job writing patches (support for detached
>>>>> services).
>>>>> Also, his review comments always include a lot of detailed information
>>>>> for further improvements:
>>>>>
>>>>>
>>>>> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>>>>>
>>>>> Please vote with +1/-1 for approval/objection.
>>>>>
>>>>> Core reviewer approval process definition:
>>>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Tatyana
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> Kind Regards,
>>
>> Alexandr Kostrikov,
>>
>> Mirantis, Inc.
>>
>> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>>
>>
>> Tel.: +7 (495) 640-49-04
>> Tel.: +7 (925) 716-64-52
>>
>> Skype: akostrikov_mirantis
>>
>> E-mail: akostrikov at mirantis.com
>>
>> *www.mirantis.com <http://www.mirantis.ru/>*
>> *www.mirantis.ru <http://www.mirantis.ru/>*
>>
>>
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/bb005280/attachment.html>

From prometheanfire at gentoo.org  Mon Sep 14 12:41:49 2015
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Mon, 14 Sep 2015 07:41:49 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <1441909133-sup-2320@fewbar.com>
 <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>
Message-ID: <55F6C08D.7040409@gentoo.org>

On 09/14/2015 03:28 AM, Jesse Pretorius wrote:
> On 10 September 2015 at 19:21, Clint Byrum <clint at fewbar.com
> <mailto:clint at fewbar.com>> wrote:
> 
>     Excerpts from Major Hayden's message of 2015-09-10 09:33:27 -0700:
>     > Hash: SHA256
>     >
>     > On 09/10/2015 11:22 AM, Matthew Thode wrote:
>     > > Sane defaults can't be used?  The two bugs you listed look fine to me as
>     > > default things to do.
>     >
>     > Thanks, Matthew.  I tend to agree.
>     >
>     > I'm wondering if it would be best to make a "punch list" of CIS benchmarks and try to tag them with one of the following:
>     >
>     >   * Do this in OSAD
>     >   * Tell deployers how to do this (in docs)
> 
>     Just a thought from somebody outside of this. If OSAD can provide the
>     automation, turned off by default as a convenience, and run a bank of
>     tests with all of these turned on to make sure they do actually work
>     with
>     the stock configuration, you'll get more traction this way. Docs should
>     be the focus of this effort, but the effort should be on explaining how
>     it fits into the system so operators who are customizing know when they
>     will have to choose a less secure path. One should be able to have code
>     do the "turn it on" "turn it off" mechanics.
> 
> 
> I agree with Clint that this is a good approach.
> 
> If there is an automated way that we can verify the security of an
> installation at a reasonable/standardised level then I think we should
> add a gate check for it too.
> 
> 
> 
There are a few different ways to verify system security, though they
are generally outside tools.

See http://www.open-scap.org/page/Main_Page, for instance.
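The gate-check idea raised above can be sketched as a tiny harness that
runs hardening checks and reports pass/fail; all names here are
hypothetical, and a real job would wrap a scanner such as OpenSCAP
rather than hand-rolled checks:

```python
# Minimal sketch of a security gate check, assuming hand-written checks.
# A production job would invoke an external scanner (e.g. oscap) instead.
import os
import stat


def check_shadow_permissions(path="/etc/shadow"):
    """CIS-style check: the shadow file should not be world-readable."""
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return False
    return not (mode & stat.S_IROTH)


def run_checks(checks):
    """Run each (name, callable) pair; True means the check passed."""
    return {name: bool(fn()) for name, fn in checks}


if __name__ == "__main__":
    results = run_checks([("shadow-perms", check_shadow_permissions)])
    for name, ok in sorted(results.items()):
        print("%s: %s" % (name, "PASS" if ok else "FAIL"))
```

A gate job could fail the build whenever any check in the result map is
False, which gives the "turn it on / turn it off" mechanics Clint
describes without hard-wiring policy into the deployment roles.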

-- 
-- Matthew Thode (prometheanfire)


From flavio at redhat.com  Mon Sep 14 12:41:00 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Mon, 14 Sep 2015 14:41:00 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442232202-sup-5997@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
Message-ID: <20150914124100.GC10859@redhat.com>

On 14/09/15 08:10 -0400, Doug Hellmann wrote:
>
>After having some conversations with folks at the Ops Midcycle a
>few weeks ago, and observing some of the more recent email threads
>related to glance, glance-store, the client, and the API, I spent
>last week contacting a few of you individually to learn more about
>some of the issues confronting the Glance team. I had some very
>frank, but I think constructive, conversations with all of you about
>the issues as you see them. As promised, this is the public email
>thread to discuss what I found, and to see if we can agree on what
>the Glance team should be focusing on going into the Mitaka summit
>and development cycle and how the rest of the community can support
>you in those efforts.
>
>I apologize for the length of this email, but there's a lot to go
>over. I've identified 2 high priority items that I think are critical
>for the team to be focusing on starting right away in order to use
>the upcoming summit time effectively. I will also describe several
>other issues that need to be addressed but that are less immediately
>critical. First the high priority items:
>
>1. Resolve the situation preventing the DefCore committee from
>   including image upload capabilities in the tests used for trademark
>   and interoperability validation.
>
>2. Follow through on the original commitment of the project to
>   provide an image API by completing the integration work with
>   nova and cinder to ensure V2 API adoption.

Hi Doug,

First and foremost, I'd like to thank you for taking the time to dig
into these issues and for reaching out to the community seeking
information and a better understanding of what the real issues are. I
can imagine how much time you had to dedicate to this, and I'm glad
you did.

Now, to your email: I very much agree with the priorities you
mentioned above, and I'd like whoever wins Glance's PTL election to
bring focus back on them.

Please, find some comments in-line for each point:


>
>I. DefCore
>
>The primary issue that attracted my attention was the fact that
>DefCore cannot currently include an image upload API in its
>interoperability test suite, and therefore we do not have a way to
>ensure interoperability between clouds for users or for trademark
>use. The DefCore process has been long, and at times confusing,
>even to those of us following it sort of closely. It's not entirely
>surprising that some projects haven't been following the whole time,
>or aren't aware of exactly what the whole thing means. I have
>proposed a cross-project summit session for the Mitaka summit to
>address this need for communication more broadly, but I'll try to
>summarize a bit here.

+1

I think it's quite sad that some projects, especially those considered
part of the `starter-kit:compute`[0], don't follow closely what's
going on in DefCore. I personally consider this a task PTLs should
incorporate into their duties. I'm glad you proposed such a session; I
hope it'll help raise awareness of this effort and move things forward
on that front.


>
>DefCore is using automated tests, combined with business policies,
>to build a set of criteria for allowing trademark use. One of the
>goals of that process is to ensure that all OpenStack deployments
>are interoperable, so that users who write programs that talk to
>one cloud can use the same program with another cloud easily. This
>is a *REST API* level of compatibility. We cannot insert cloud-specific
>behavior into our client libraries, because not all cloud consumers
>will use those libraries to talk to the services. Similarly, we
>can't put the logic in the test suite, because that defeats the
>entire purpose of making the APIs interoperable. For this level of
>compatibility to work, we need well-defined APIs, with a long support
>period, that work the same no matter how the cloud is deployed. We
>need the entire community to support this effort. From what I can
>tell, that is going to require some changes to the current Glance
>API to meet the requirements. I'll list those requirements, and I
>hope we can discuss them to a degree that ensures everyone understands
>them. I don't want this email thread to get bogged down in
>implementation details or API designs, though, so let's try to keep
>the discussion at a somewhat high level, and leave the details for
>specs and summit discussions. I do hope you will correct any
>misunderstandings or misconceptions, because unwinding this as an
>outside observer has been quite a challenge and it's likely I have
>some details wrong.
>
>As I understand it, there are basically two ways to upload an image
>to glance using the V2 API today. The "POST" API pushes the image's
>bits through the Glance API server, and the "task" API instructs
>Glance to download the image separately in the background. At one
>point apparently there was a bug that caused the results of the two
>different paths to be incompatible, but I believe that is now fixed.
>However, the two separate APIs each have different issues that make
>them unsuitable for DefCore.
>
>The DefCore process relies on several factors when designating APIs
>for compliance. One factor is the technical direction, as communicated
>by the contributor community -- that's where we tell them things
>like "we plan to deprecate the Glance V1 API". In addition to the
>technical direction, DefCore looks at the deployment history of an
>API. They do not want to require deploying an API if it is not seen
>as widely usable, and they look for some level of existing adoption
>by cloud providers and distributors as an indication that the
>API is desired and can be successfully used. Because we have multiple
>upload APIs, the message we're sending on technical direction is
>weak right now, and so they have focused on deployment considerations
>to resolve the question.

The task upload process you're referring to is the one that uses the
`import` task, which allows you to download an image from an external
source, asynchronously, and import it in Glance. This is the old
`copy-from` behavior that was moved into a task.

The "fun" thing about this - and I'm sure other folks in the Glance
community will disagree - is that I don't consider tasks to be a
public API. That is to say, I would expect tasks to be an internal API
used by cloud admins to perform some actions (based on its current
implementation). Eventually, some of these tasks could be triggered
from the external API but as background operations that are triggered
by the well-known public ones and not through the task API.

Ultimately, I believe end-users of the cloud simply shouldn't care
about what tasks are or aren't. More importantly, as you mentioned
later in the email, tasks make clouds non-interoperable. I'd be pissed
if my public image service asked me to learn about tasks just to be
able to use it.

Long story short, I believe the only upload API that should be
considered is the one that uses HTTP. Eventually, to restore
compatibility with v1 as far as the copy-from behavior goes, Glance
could bring that behavior back on top of the task (just dropping this
here for the sake of discussion and interoperability).
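The two upload paths under discussion can be sketched as the HTTP calls
a client would make. The task payload fields below follow the upstream
v2 task documentation, but treat the exact shapes as illustrative
rather than normative:

```python
# Sketch of the two v2 upload paths, shown as the requests a client
# would issue. URLs and field values are examples only.

# Path 1: the "POST" API -- create an image record, then push the bits
# through the Glance API server itself:
#   POST /v2/images            {"name": "cirros", "disk_format": "qcow2",
#                               "container_format": "bare"}
#   PUT  /v2/images/{id}/file  <raw image bytes>

# Path 2: the "task" API -- ask Glance to fetch the image in the
# background from an external source:
import_task = {
    "type": "import",
    "input": {
        "import_from": "http://example.com/cirros.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {"name": "cirros",
                             "disk_format": "qcow2",
                             "container_format": "bare"},
    },
}
#   POST /v2/tasks  import_task
```

The DefCore concern is visible right in the second payload: the caller
must already know that an "import" task type exists and what its
"input" blob should contain, which is exactly the discoverability gap
Doug describes.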


>The POST API is enabled in many public clouds, but not consistently.
>In some clouds like HP, a tenant requires special permission to use
>the API. At least one provider, Rackspace, has disabled the API
>entirely. This is apparently due to what seems like a fair argument
>that uploading the bits directly to the API service presents a
>possible denial of service vector. Without arguing the technical
>merits of that decision, the fact remains that without a strong
>consensus from deployers that the POST API should be publicly and
>consistently available, it does not meet the requirements to be
>used for DefCore testing.

This is definitely unfortunate. I believe a good step forward for this
discussion would be to create a list of issues related to uploading
images and see how those issues can be addressed. The result from that
work might be that it's not recommended to make that endpoint public,
but again, without going through the issues it'll be hard to
understand how we can improve this situation. I expect most of these
issues to have a security impact.


>The task API is also not widely deployed, so its adoption for DefCore
>is problematic. If we provide a clear technical direction that this
>API is preferred, that may overcome the lack of adoption, but the
>current task API seems to have technical issues that make it
>fundamentally unsuitable for DefCore consideration. While the task
>API addresses the problem of a denial of service, and includes
>useful features such as processing of the image during import, it
>is not strongly enough defined in its current form to be interoperable.
>Because it's a generic API, the caller must know how to fully
>construct each task, and know what task types are supported in the
>first place. There is only one "import" task type supported in the
>Glance code repository right now, but it is not clear that "import"
>always uses the same arguments, or interprets them in the same way.
>For example, the upstream documentation [1] describes a task that
>appears to use a URL as source, while the Rackspace documentation [2]
>describes a task that appears to take a swift storage location.
>I wasn't able to find JSONSchema validation for the "input" blob
>portion of the task in the code [3], though that may happen down
>inside the task implementation itself somewhere.


The above sounds pretty accurate: there's currently just one flow that
can be triggered (the import flow), and it accepts an input, which is
a JSON blob. As I mentioned above, I don't believe tasks should be
part of the public API, and this is yet another reason why. The tasks
API is not well defined, as there's currently no good way to define
the expected input in a backwards-compatible way and to provide all
the required validation.

I like having tasks in Glance, despite my comments above - but I like
them for cloud-admin usage, not public usage.

As far as Rackspace's docs/endpoint goes, I'd assume this is an error
in their documentation, since Glance currently doesn't allow[0] swift
URLs to be imported (not even in juno[1]).

[0] http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py#n84
[1] http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py?h=stable/juno#n83
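The missing input validation Doug points to can be made concrete with a
small stdlib-only sketch. Glance does not ship this check; the field
names follow the upstream task documentation, and the validator itself
is hypothetical:

```python
# Hypothetical sketch of validating an import-task "input" blob before
# accepting the task. Glance currently has no such schema check.
REQUIRED = {"import_from": str, "import_from_format": str}


def validate_import_input(blob):
    """Return a list of problems found in an import-task input blob."""
    if not isinstance(blob, dict):
        return ["input must be a JSON object"]
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in blob:
            errors.append("missing field: %s" % field)
        elif not isinstance(blob[field], ftype):
            errors.append("field %s must be a %s" % (field, ftype.__name__))
    return errors
```

The hard part, as noted above, is not writing such a check but evolving
it in a backwards-compatible way once external callers depend on the
blob's shape.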

>Tasks also come from plugins, which may be installed differently
>based on the deployment. This is an interesting approach to creating
>API extensions, but isn't discoverable enough to write interoperable
>tools against. Most of the other projects are starting to move away
>from supporting API extensions at all because of interoperability
>concerns they introduce. Deployers should be able to configure their
>clouds to perform well, but not to behave in fundamentally different
>ways. Extensions are just that, extensions. We can't rely on them
>for interoperability testing.

This is, indeed, an interesting interpretation of what tasks are for.
I'd probably just blame us (Glance team) for not communicating
properly what tasks are meant to be. I don't believe tasks are a way
to extend the *public* API and I'd be curious to know if others see it
that way. I fully agree that just breaks interoperability and as I've
mentioned a couple of times in this reply already, I don't even think
tasks should be part of the public API.

But again, we did a very poor job communicating this[0]. Nonetheless,
for the sake of providing enough information about tasks and sources
to read from, I'd also like to point out the original blueprint[1],
some discussions during the Havana summit[2], the wiki page for
tasks[3], and a patch I just reviewed today (thanks Brian) that
introduces docs for tasks[4]. These links already show some
differences in what tasks are.

[0] http://git.openstack.org/cgit/openstack/glance/tree/etc/policy.json?h=stable/juno#n28
[1] https://blueprints.launchpad.net/glance/+spec/async-glance-workers
[2] https://etherpad.openstack.org/p/havana-glance-requirements
[3] https://wiki.openstack.org/wiki/Glance-tasks-api
[4] https://review.openstack.org/#/c/220166/

>
>There is a lot of fuzziness around exactly what is supported for
>image upload, both in the documentation and in the minds of the
>developers I've spoken to this week, so I'd like to take a step
>back and try to work through some clear requirements, and then we
>can have folks familiar with the code help figure out if we have a
>real issue, if a minor tweak is needed, or if things are good as
>they stand today and it's all a misunderstanding.
>
>1. We need a strongly defined and well documented API, with arguments
>   that do not change based on deployment choices. The behind-the-scenes
>   behaviors can change, but the arguments provided by the caller
>   must be the same and the responses must look the same. The
>   implementation can run as a background task rather than receiving
>   the full image directly, but the current task API is too vaguely
>   defined to meet this requirement, and IMO we need an entry point
>   focused just on uploading or importing an image.
>
>2. Glance cannot require having a Swift deployment. It's not clear
>   whether this is actually required now, so if it's not then we're
>   in a good state.

This is definitely not the case. Glance doesn't require any specific
store to be deployed. It does require at least one store other than
the http one (because http doesn't support write operations).
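As a sketch of a Swift-free deployment, a glance-api.conf store section
might look like the fragment below (option names are from glance_store;
the datadir path is just an example):

```ini
[glance_store]
# filesystem handles writes; http stays available read-only.
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```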

> It's fine to provide an optional way to take
>   advantage of Swift if it is present, but it cannot be a required
>   component. There are three separate trademark "programs", with
>   separate policies attached to them. There is an umbrella "Platform"
>   program that is intended to include all of the TC approved release
>   projects, such as nova, glance, and swift. However, there is
>   also a separate "Compute" program that is intended to include
>   Nova, Glance, and some others but *not* Swift. This is an important
>   distinction, because there are many use cases both for distributors
>   and public cloud providers that do not incorporate Swift for a
>   variety of reasons. So, we can't have Glance's primary configuration
>   require Swift and we need to provide tests for the DefCore team
>   that run without Swift. Duplicate tests that do use Swift are
>   fine, and might be used for "Platform" compliance tests.
>
>3. We need an integration test suite in tempest that fully exercises
>   the public image API by talking directly to Glance. This applies
>   to the entire API, not just image uploads. It's fine to have
>   duplicate tests using the proxy in Nova if the Nova team wants
>   those, but DefCore should be using tests that talk directly to
>   the service that owns each feature, without relying on any
>   proxying. We've already missed the chance to deal with this in
>   the current DefCore definition, which uses image-related tests
>   that talk to the Nova proxy [4][5], so we'll have to maintain
>   the proxy for the required deprecation period. But we won't be
>   able to consider removing that proxy until we provide alternate
>   tests for those features that speak directly to Glance. We may
>   have some coverage already, but I wasn't able to find a task-based
>   image upload test and there is no "image create" mentioned in
>   the current draft of capabilities being reviewed [6]. There may
>   be others missing, so someone more familiar with the feature set
>   of Glance should do an audit and document what tests are needed
>   so the work can be split up.
>

+1 This should become one of the top priorities for Mitaka (as you
mentioned at the beginning of this email).

>4. Once identified and incorporated into the DefCore capabilities
>   set, the selected API needs to remain stable for an extended
>   period of time and follow the deprecation timelines defined by
>   DefCore.  That has implications for the V3 API currently in
>   development to turn Glance into a more generic artifacts service.
>   There are a lot of ways to handle those implications, and no
>   choice needs to be made today, so I only mention it to make sure
>   it's clear that (a) we must get V2 into shape for DefCore and
>   (b) when that happens, we will need to maintain V2 even if V3
>   is finished. We won't be able to deprecate V2 quickly.
>
>Now, it's entirely possible that we can meet all of those requirements
>today, and that would be great. If that's the case, then the problem
>is just one of clear communication and documentation. I think there's
>probably more work to be done than that, though.


There's clearly a communication problem; the fact that this very email
has been sent out is a sign of that. However, I'd like to say, in a
very optimistic way, that Glance is not so far away from the expected
status. There are things to fix, other things to clarify, and tons to
discuss but, IMHO, besides the tempest tests and DefCore, the most
critical item is the one you mention in the following section.

>
>[1] http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2
>[2] http://docs.rackspace.com/images/api/v2/ci-devguide/content/POST_importImage_tasks_Image_Task_Calls.html#d6e4193
>[3] http://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/tasks.py
>[4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n70
>[5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/guidelines/2015.07.rst
>[6] https://review.openstack.org/#/c/213353/
>
>II. Complete Cinder and Nova V2 Adoption
>
>The Glance team originally committed to providing an Image Service
>API. Besides our end users, both Cinder and Nova consume that API.
>The shift from V1 to V2 has been a long road. We're far enough
>along, and the V1 API has enough issues preventing us from using
>it for DefCore, that we should push ahead and complete the V2
>adoption. That will let us properly deprecate and drop V1 support,
>and concentrate on maintaining V2 for the necessary amount of time.
>
>There are a few specs for the work needed in Nova, but that work
>didn't land in Liberty for a variety of reasons. We need resources
>from both the Glance and Nova teams to work together to get this
>done as early as possible in Mitaka to ensure that it actually lands
>this time. We should be able to schedule a joint session at the
>summit to have the conversation, and we need to take advantage of
>that opportunity to ensure the details are fully resolved so that
>everyone understands the plan.

Super important point. I'd like people replying to this email to focus
on what we can do next and not on why this hasn't been done. The
latter will take us down a path that won't be useful at all; it'll
just waste everyone's time.

That said, I fully agree with the above. Last time we talked, John
Garbutt and Jay Pipes, from the nova team, raised their hands to help
out with this effort. From Glance's side, Fei Long Wang and myself
were working on the implementation. To move this forward and follow
the latest plan, which allows the migration to be smoother than our
original plan, we need folks from Glance to raise their hands.

If I'm not elected PTL, I'm more than happy to help out here, but we
need someone who can commit to the above right now, and we'll likely
need a team of at least two people to move this forward in early
Mitaka.


>The work in Cinder is more complete, but may need to be reviewed
>to ensure that it is using the API correctly, safely, and efficiently.
>Again, this is a joint effort between the Glance and Cinder teams
>to identify any issues and work out a resolution.
>
>Part of this work will also be to audit the Glance API documentation,
>to ensure it accurately reflects what the APIs expect to receive
>and return. There are reportedly at least a few cases where things
>are out of sync right now. This will require some coordination with
>the Documentation team.
>
>
>Those are the two big priorities I see, based on things the rest
>of the community needs from the team and existing commitments that
>have been made. There are some other things that should also be
>addressed.
>
>
>III. Security audits & bug fixes
>
>Five of 18 recent security reports were related to Glance [7]. It's
>not surprising, given recent resource constraints, that addressing
>these has been a challenge. Still, these should be given high
>priority.
>
>[7] https://security.openstack.org/search.html?q=glance&check_keywords=yes&area=default


+1 FWIW, we're in the process of growing Glance's security team, but
it's clear from the above that security issues need quicker
responses.

>IV. Sorting out the glance-store question
>
>This was perhaps the most confusing thing I learned about this week.
>The perception outside of the Glance team is that the library is
>meant to be used by Nova and Cinder to communicate directly with
>the image store, bypassing the REST API, to improve performance in
>several cases. I know the Cinder team is especially interested in
>some sort of interface for manipulating images inside the storage
>system without having to download them to make copies (for RBD and
>other systems that support CoW natively).

Correct, the above was one of the triggers for this effort, and I
like to think it's still one of the main drivers. There are other,
fancier things that could be done in the future, assuming the
library's API is refactored in a way that lets such features be
implemented.[0]

[0] https://review.openstack.org/#/c/188050/

>That doesn't seem to be
>what the library is actually good for, though, since most of the
>Glance core folks I talked to thought it was really a caching layer.
>This discrepancy in what folks wanted vs. what they got may explain
>some of the heated discussions in other email threads.

It's strange that some folks think of it as a caching layer. I believe
one of the reasons there's such a discrepancy is that not enough
effort has been put into the refactor this library requires. The
reason it requires such a refactor is that it came out of the old
`glance/store` code, which was very specific to Glance's internal use.

The mistake here may be that the library should've been refactored
*before* it was adopted in Glance.
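The interface the Cinder folks seem to want from glance-store can be
sketched roughly as below. None of these class or method names exist in
glance_store today; this is purely an illustration of a copy-on-write
clone path for backends (like RBD) that support it natively, with a
download/upload fallback for those that don't:

```python
# Hypothetical store interface with a CoW-aware clone() method.
class BaseStore(object):
    supports_cow = False

    def download(self, image_id):
        raise NotImplementedError

    def upload(self, image_id, data):
        raise NotImplementedError

    def clone(self, image_id, target_id):
        """Default path: a full copy through the client."""
        self.upload(target_id, self.download(image_id))
        return target_id


class FakeRBDStore(BaseStore):
    """In-memory stand-in for a backend with native CoW support."""
    supports_cow = True

    def __init__(self):
        self.images = {}

    def download(self, image_id):
        return self.images[image_id]

    def upload(self, image_id, data):
        self.images[image_id] = data

    def clone(self, image_id, target_id):
        # A real RBD-backed store would issue a native CoW clone here
        # instead of moving any bits; the dict assignment stands in.
        self.images[target_id] = self.images[image_id]
        return target_id
```

Under this kind of API, a caller like Cinder could check
`supports_cow` and avoid ever downloading the image, which is the
performance win described above.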

>
>Frankly, given the importance of the other issues, I recommend
>leaving glance-store standalone this cycle. Unless the work for
>dealing with priorities I and II is made *significantly* easier by
>not having a library, the time and energy it will take to re-integrate
>it with the Glance service seems like a waste of limited resources.
>The time to even discuss it may be better spent on the planning
>work needed. That said, if the library doesn't provide the features
>its users were expecting, it may be better to fold it back in and
>create a different library with a better understanding of the
>requirements at some point. The path to take is up to the Glance
>team, of course, but we're already down far enough on the priority
>list that I think we'll be lucky to finish the preceding items this
>cycle.


I don't think merging glance-store back into Glance will help with any
of the priorities mentioned in this thread. If anything, refactoring
the API might help with future work that could come after the v1 -> v2
migration is complete.

>
>
>Those are the development priorities I was able to identify in my
>interviews this week, and there is one last thing the team needs
>to do this cycle: Recruit more contributors.
>
>Almost every current core contributor I spoke with this week indicated
>that their time was split between another project and Glance. Often
>higher priority had to be given, understandably, to internal product
>work. That's the reality we work in, and everyone feels the same
>pressures to some degree. One way to address that pressure is to
>bring in help. So, we need a recruiting drive to find folks willing
>to contribute code and reviews to the project to keep the team
>healthy. I listed this item last because if you've made it this far
>you should see just how much work the team has ahead. We're a big
>community, and I'm confident that we'll be able to find help for
>the Glance team, but it will require mentoring and education to
>bring people up to speed to make them productive.

Fully agree here as well. However, I also believe that effort going to
the wrong tasks is part of what brought Glance to the situation it is
in today. More help is welcome and needed, but a good strategy is more
important right now.

FWIW, I agree that our focus has gone to different things and that
this has taken us to the status you mention above. More importantly,
it has postponed some important tasks. However, I don't believe Glance
is completely broken - I know you are not saying this, but I'd like to
mention it - and I certainly believe we can bring it back to a good
state faster than expected, though I'm known for being a bit
optimistic sometimes.

In this reply I was hard on us (the Glance team) because I tend to be
hard on myself and to dig deep into the things that are not working
well. Many times I do this based on the feedback provided by others,
which I personally value **a lot**. Unfortunately, I have to say that
there hasn't been enough feedback about these issues until now. There
was Mike's email[0], where I explicitly asked the community to speak
up. All of this is to say that I greatly appreciate the time you've
taken to dig into this, and to encourage folks to *always* speak up
and reach out through every *public* medium possible.

No one can fix rumors; we can fix issues, though.

Thanks again, and let's all work together to improve this situation,
Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071971.html

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/d0516f15/attachment.pgp>

From doug at doughellmann.com  Mon Sep 14 12:46:02 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 08:46:02 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
Message-ID: <1442234537-sup-4636@lrrr.local>

PTLs and release liaisons,

In order to keep the rest of our schedule for the end-of-cycle release
tasks, we need to have final releases for all client libraries in the
next day or two.

If you have not already submitted your final release request for this
cycle, please do that as soon as possible.

If you *have* already submitted your final release request for this
cycle, please reply to this email and let me know that you have so I can
create your stable/liberty branch.

Thanks!
Doug
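For anyone curious what the branch-cutting step above looks like mechanically, it is roughly the following; the repository, tag, and commit here are invented for illustration, and the real process goes through the release tooling and Gerrit rather than direct local commands:

```shell
# Illustrative sketch only: what "create your stable/liberty branch" boils
# down to mechanically. Repo, tag, and commit are made up for this example.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "final liberty release"
git tag 1.0.1                        # the library's final release tag
git branch stable/liberty 1.0.1      # stable branch starts at that tag
git branch --list 'stable/*'         # show the new branch
```

The point is simply that the stable branch is anchored at the final release tag, so any later stable backports diverge from exactly the released code.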


From tleontovich at mirantis.com  Mon Sep 14 12:50:52 2015
From: tleontovich at mirantis.com (Tatyana Leontovich)
Date: Mon, 14 Sep 2015 15:50:52 +0300
Subject: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for
	fuel-ostf core
In-Reply-To: <CAC+XjbYamgTcoNtShU==Vjf6hBXLe+oN9wdLNoCeX_7-81YVnw@mail.gmail.com>
References: <CAJWtyAOeyjVLTkuDB7pJGcbr0iPDYh1-ZqXhn_ODi-XwOxTJvQ@mail.gmail.com>
 <CAC+XjbarB-GU-R+6XS7hd2i_3-HYSZxbMDX1OXmdD8eQ=ZNO5g@mail.gmail.com>
 <CAP2-cGd9ex57e7g+WuqKK=d9kxDZs77hUHR7QJa-n8fjLwHbpw@mail.gmail.com>
 <CAFNR43P3BLkWcvay3KewNwFdLjjVLi16iHY7GVZecsexnkaDNA@mail.gmail.com>
 <CAMZD-t8o75frtWUnkL71K+szgLwxHDi28DoHzYnUB36VeKhV4w@mail.gmail.com>
 <CAC+XjbYamgTcoNtShU==Vjf6hBXLe+oN9wdLNoCeX_7-81YVnw@mail.gmail.com>
Message-ID: <CAJWtyAOfhY_46niQkGGGbP5kcM78MGJV0vDyq99y8wzeia7piA@mail.gmail.com>

Congrats, Andrey!!!

On Mon, Sep 14, 2015 at 3:36 PM, Anastasia Urlapova <aurlapova at mirantis.com>
wrote:

> Andrey,
> welcome to fuel-core-ostf!
>
> Nastya.
>
> On Wed, Sep 9, 2015 at 6:58 PM, Dmitry Tyzhnenko <dtyzhnenko at mirantis.com>
> wrote:
>
>> +1
>> On 8 Sep 2015 at 13:07, "Alexander Kostrikov" <
>> akostrikov at mirantis.com> wrote:
>>
>> +1
>>>
>>> On Tue, Sep 8, 2015 at 9:07 AM, Dmitriy Shulyak <dshulyak at mirantis.com>
>>> wrote:
>>>
>>>> +1
>>>>
>>>> On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova <
>>>> aurlapova at mirantis.com> wrote:
>>>>
>>>>> +1
>>>>>
>>>>> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
>>>>> tleontovich at mirantis.com> wrote:
>>>>>
>>>>>> Fuelers,
>>>>>>
>>>>>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>>>>>> He's been doing a great job writing patches (support for detached
>>>>>> services).
>>>>>> Also, his review comments always include a lot of detailed information
>>>>>> for further improvements:
>>>>>>
>>>>>>
>>>>>> http://stackalytics.com/?user_id=asledzinskiy&release=all&project_type=all&module=fuel-ostf
>>>>>>
>>>>>> Please vote with +1/-1 for approval/objection.
>>>>>>
>>>>>> Core reviewer approval process definition:
>>>>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>> Tatyana
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe:
>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Kind Regards,
>>>
>>> Alexandr Kostrikov,
>>>
>>> Mirantis, Inc.
>>>
>>> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>>>
>>>
>>> Tel.: +7 (495) 640-49-04
>>> Tel.: +7 (925) 716-64-52
>>>
>>> Skype: akostrikov_mirantis
>>>
>>> E-mail: akostrikov at mirantis.com
>>>
>>> www.mirantis.com
>>> www.mirantis.ru
>>>
>>>
>>>
>>>
>>
>>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/4e25316e/attachment.html>

From coolsvap at gmail.com  Mon Sep 14 12:53:37 2015
From: coolsvap at gmail.com (Swapnil Kulkarni)
Date: Mon, 14 Sep 2015 18:23:37 +0530
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
 <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
Message-ID: <CAKO+H+LQm2YZHqGcCNH1AWbdpPNunH6qUiMnj=L-1k7jAqkV2A@mail.gmail.com>

On Mon, Sep 14, 2015 at 5:14 PM, Sam Yaple <samuel at yaple.net> wrote:

>
> On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke <paul.bourke at oracle.com>
> wrote:
>
>>
>>
>> On 13/09/15 18:34, Steven Dake (stdake) wrote:
>>
>>> Response inline.
>>>
>>> From: Sam Yaple <samuel at yaple.net>
>>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>>> Date: Sunday, September 13, 2015 at 1:35 AM
>>> To: Steven Dake <stdake at cisco.com>
>>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev at lists.openstack.org>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>>> types
>>>
>>> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <stdake at cisco.com>
>>> wrote:
>>> Response inline.
>>>
>>> From: Sam Yaple <samuel at yaple.net>
>>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>>> Date: Saturday, September 12, 2015 at 11:34 PM
>>> To: Steven Dake <stdake at cisco.com>
>>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev at lists.openstack.org>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>>> types
>>>
>>> Sam Yaple
>>>
>>> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com>
>>> wrote:
>>>
>>>
>>> From: Sam Yaple <samuel at yaple.net>
>>> Reply-To: "sam at yaple.net" <sam at yaple.net>
>>> Date: Saturday, September 12, 2015 at 11:01 PM
>>> To: Steven Dake <stdake at cisco.com>
>>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev at lists.openstack.org>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS + RDO
>>> types
>>>
>>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <stdake at cisco.com>
>>> wrote:
>>> Hey folks,
>>>
>>> Sam had asked a reasonable set of questions regarding a patchset:
>>> https://review.openstack.org/#/c/222893/
>>>
>>> The purpose of the patchset is to enable both RDO and RHOS as binary
>>> choices on RHEL platforms.  I suspect over time, from-source deployments
>>> have the potential to become the norm, but the business logistics of such a
>>> change are going to take some significant time to sort out.
>>>
>>> Red Hat has two distros of OpenStack, neither of which is from source.
>>> One is free, called RDO, and the other is paid, called RHOS.  In order to
>>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>>> remains to be seen whether Red Hat will actively support Kolla deployments
>>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>>> will.  It is in Kolla's best interest to implement this model and not make
>>> it hard on Operators since many of them do indeed want Red Hat's support
>>> structure for their OpenStack deployments.
>>>
>>> Now to Sam's questions:
>>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more
>>> do we add? What's our policy on adding a new type?"
>>>
>>> I'm not immediately clear on how binary fits in.  We could make binary
>>> synonymous with the community supported version (RDO) while still
>>> implementing the binary RHOS version.  Note Kolla does not "support" any
>>> distribution or deployment of OpenStack; Operators will have to look to
>>> their vendors for support.
>>>
>>> If everything between centos+rdo and rhel+rhos is mostly the same then I
>>> would think it would make more sense to just use the base ('rhel' in this
>>> case) to branch off any differences in the templates. This would also allow
>>> for the least amount of change and most generic implementation of this
>>> vendor specific packaging. This would also match what we do with
>>> oraclelinux, we do not have a special type for that and any specifics would
>>> be handled by an if statement around 'oraclelinux' and not some special
>>> type.
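The pattern being proposed above (branching on the base rather than adding a vendor-specific type) can be sketched roughly as follows; the distro/type names mirror the thread, and the echoed strings are placeholders rather than Kolla's actual repo-setup commands:

```shell
# Rough sketch of the pattern under discussion: branch on base+type instead
# of inventing a new install type per vendor. All echoed strings are
# placeholders, not Kolla's real build logic.
setup_repos() {
  case "$1+$2" in                        # $1 = base distro, $2 = install type
    rhel+rhos)                 echo "enable RHOS repos (subscription required)" ;;
    rhel+binary|centos+binary) echo "enable RDO repos" ;;
    oraclelinux+binary)        echo "enable RDO repos" ;;
    *+source)                  echo "no distro repos needed" ;;
  esac
}

setup_repos rhel binary
```

In this sketch "binary" stays the generic type and the vendor difference is one extra branch on the base, which is the least-change approach Sam is arguing for.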
>>>
>>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO
>>> also runs on RHEL.  I want to enable Red Hat customers to make a choice to
>>> have a supported  operating system but not a supported Cloud environment.
>>> The answer here is RHEL + RDO.  This leads to full support down the road if
>>> the Operator chooses to pay Red Hat for it by an easy transition to RHOS.
>>>
>>> I am against including vendor specific things like RHOS in Kolla
>>> outright like you are proposing. Suppose another vendor comes along with a
>>> new base and new packages. They are willing to maintain it, but its
>>> something that no one but their customers with their licensing can use.
>>> This is not something that belongs in Kolla and I am unsure that it is even
>>> appropriate to belong in OpenStack as a whole. Unless RHEL+RHOS can be used
>>> by those that do not have a license for it, I do not agree with adding it
>>> at all.
>>>
>>> Sam,
>>>
>>> Someone stepping up to maintain a completely independent set of docker
>>> images hasn't happened.  To date nobody has done that.  If someone were to
>>> make that offer, and it was a significant change, I think the community as
>>> a whole would have to evaluate such a drastic change.  That would certainly
>>> increase our implementation and maintenance burden, which we don't want to
>>> do.  I don't think what you propose would be in the best interest of the
>>> Kolla project, but I'd have to see the patch set to evaluate the scenario
>>> appropriately.
>>>
>>> What we are talking about is 5 additional lines to enable RHEL+RHOS
>>> specific repositories, which is not very onerous.
>>>
>>> The fact that you can't use it directly has little bearing on whether
>>> it's valid technology for OpenStack.  There are already two well-defined
>>> historical precedents for non-licensed unusable integration in OpenStack.
>>> Cinder has 55 [1] Volume drivers which they SUPPORT.  At least 80% of
>>> them are completely proprietary hardware, which in reality is mostly just
>>> software which, without a license, would be impossible to use.  There
>>> are 41 [2] Neutron drivers registered on the Neutron driver page; almost
>>> the entirety require proprietary licenses to what amounts to integration to
>>> access proprietary software.  The OpenStack preferred license is ASL for a
>>> reason: to be business friendly.  Licensed software has a place in the
>>> world of OpenStack, even if it only serves as an integration point, which the
>>> proposed patch does.  We are consistent with community values on this point
>>> or I wouldn't have bothered proposing the patch.
>>>
>>> We want to encourage people to use Kolla for proprietary solutions if
>>> they so choose.  This is how support manifests, which increases the
>>> strength of the Kolla project.  The presence of support increases the
>>> likelihood that Kolla will be adopted by Operators.  If you're asking the
>>> Operators to maintain a fork for those 5 RHOS repo lines, that seems
>>> unreasonable.
>>>
>>> I'd like to hear other Core Reviewer opinions on this matter and will
>>> hold a majority vote on this thread as to whether we will facilitate
>>> integration with third party software such as the Cinder Block Drivers, the
>>> Neutron Network drivers, and various for-pay versions of OpenStack such as
>>> RHOS.  I'd like all core reviewers to weigh in, please.  Without a complete
>>> vote it will be hard to gauge what the Kolla community really wants.
>>>
>>> Core reviewers:
>>> Please vote +1 if you ARE satisfied with integration with third party
>>> unusable without a license software, specifically Cinder volume drivers,
>>> Neutron network drivers, and various for-pay distributions of OpenStack and
>>> container runtimes.
>>> Please vote -1 if you ARE NOT satisfied with integration with third
>>> party unusable without a license software, specifically Cinder volume
>>> drivers, Neutron network drivers, and various for pay distributions of
>>> OpenStack and container runtimes.
>>>
>>> A bit of explanation on your vote might be helpful.
>>>
>>> My vote is +1.  I have already provided my rationale.
>>>
>>> Regards,
>>> -steve
>>>
>>> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix
>>> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>>>
>>>
>>> I appreciate you calling a vote so early. But I haven't had my questions
>>> answered yet enough to even vote on the matter at hand.
>>>
>>> In this situation the closest thing we have to a plugin system like the
>>> ones Cinder or Neutron have is our header/footer system. What you are proposing
>>> is integrating a proprietary solution into the core of Kolla. Those Cinder
>>> and Neutron plugins have external components and those external components
>>> are not baked into the project.
>>>
>>> What happens if and when the RHOS packages require different tweaks in
>>> the various containers? What if it requires changes to the Ansible
>>> playbooks? It begins to balloon out past 5 lines of code.
>>>
>>> Unfortunately, the community _won't_ get to vote on whether or not to
>>> implement those changes because RHOS is already in place. That's why I am
>>> asking the questions now as this _right_ _now_ is the significant change
>>> you are talking about, regardless of the lines of code.
>>>
>>> So the question is not whether we are going to integrate 3rd party
>>> plugins, but whether we are going to allow companies to build proprietary
>>> products in the Kolla repo. If we allow RHEL+RHOS then we would need to
>>> allow another distro+company packaging and potential Ansible tweaks to get
>>> it to work for them.
>>>
>>> If you really want to do what Cinder and Neutron do, we need a better
>>> system for injecting code. That would be much closer to the plugins that
>>> the other projects have.
>>>
>>> I'd like to have a discussion about this rather than immediately call
>>> for a vote which is why I asked you to raise this question in a public
>>> forum in the first place.
>>>
>>>
>>> Sam,
>>>
>>> While a true code injection system might be interesting and would be
>>> more parallel with the plugin model used in cinder and neutron (and to some
>>> degree nova), those various systems didn't begin that way.  Their driver
>>> code at one point was completely integrated.  Only after 2-3 years was the
>>> code broken into a fully injectable state.  I think that is an awfully high
>>> bar to set to sort out the design ahead of time.  One of the reasons
>>> Neutron has taken so long to mature is that the Neutron community attempted
>>> to do plugins at too early a stage, which created big gaps in unit and
>>> functional tests.  A more appropriate design would be for that pattern to
>>> emerge from the system over time as people begin to adapt various distro
>>> tech to Kolla.  If you look at the patch in gerrit, there is one clear
>>> pattern, "Setup distro repos", which at some point in the future could be
>>> made to be injectable, much as headers and footers are today.
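The header/footer mechanism referred to here amounts to splicing operator-supplied snippets around a maintained middle section. A rough sketch, with invented file names and contents rather than Kolla's actual build code:

```shell
# Minimal sketch of header/footer injection: operator-provided snippets are
# concatenated around the maintained middle of a Dockerfile. File names and
# contents are invented for illustration.
set -e
work=$(mktemp -d)
cd "$work"
printf 'FROM base-image\n'        > header      # operator-injected prologue
printf 'RUN setup-distro-repos\n' > middle      # e.g. a "Setup distro repos" step
printf 'RUN extra-site-tweaks\n'  > footer      # operator-injected epilogue
cat header middle footer > Dockerfile
cat Dockerfile
```

Making the "Setup distro repos" step injectable would mean treating `middle` the way `header` and `footer` are already treated: as a slot an operator can override.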
>>>
>>> As for building proprietary products in the Kolla repository, the
>>> license is ASL, which means it is inherently not proprietary.  I am fine
>>> with the code base integrating with proprietary software as long as the
>>> license terms are met; someone has to pay the mortgages of the thousands of
>>> OpenStack developers.  We should encourage growth of OpenStack, and one of
>>> the ways for that to happen is to be business friendly.  This translates
>>> into first knowing the world is increasingly adopting open source
>>> methodologies and facilitating that transition, and second accepting the
>>> world has a whole slew of proprietary software that already exists today
>>> that requires integration.
>>>
>>> Nonetheless, we have a difference of opinion on this matter, and I want
>>> this work to merge prior to rc1.  Since this is a project policy decision
>>> and not a technical issue, it makes sense to put it to a wider vote to
>>> either unblock or kill the work.  It would be a shame if we reject all
>>> driver and supported distro integration because we as a community take an
>>> anti-business stance on our policies, but I'll live by what the community
>>> decides.  This is not a decision either you or I may dictate which is why
>>> it has been put to a vote.
>>>
>>> Regards
>>> -steve
>>>
>>>
>>>
>>> For oracle linux, I'd like to keep RDO for oracle linux and from source
>>> on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the
>>> patch set needs some later work here to address this point in more detail,
>>> but as is, "binary" covers oracle linux.
>>>
>>> Perhaps what we should do is get rid of the binary type entirely.
>>> Ubuntu doesn't really have a binary type, they have a cloudarchive type, so
>>> binary doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't
>>> have two distributions of OpenStack, the same logic wouldn't apply to
>>> providing a full support onramp for Ubuntu customers.  Oracle doesn't
>>> provide a binary type either; their binary type is really RDO.
>>>
>>> The binary packages for Ubuntu are _packaged_ by the cloudarchive team.
>>> But when OpenStack coincides with an LTS release (Icehouse
>>> and 14.04 was the last one) you do not add a new repo, because the packages
>>> are in the main Ubuntu repo.
>>>
>>> Debian provides its own packages as well. I do not want a type name per
>>> distro. 'binary' catches all packaged OpenStack things by a distro.
>>>
>>>
>>> FWIW I never liked the transition away from rdo in the repo names to
>>> binary.  I guess I should have -1'ed those reviews back then, but I think
>>> it's time to either revisit the decision or compromise that binary and rdo
>>> mean the same thing in a centos and rhel world.
>>>
>>> Regards
>>> -steve
>>>
>>>
>>> Since we implement multiple bases, some of which are not RPM based, it
>>> doesn't make much sense to me to have rhel and rdo as a type which is why
>>> we removed rdo in the first place in favor of the more generic 'binary'.
>>>
>>>
>>> As such the implied second question, "How many more do we add?", sort of
>>> sounds like "how many do we support?".  The answer to the second question
>>> is none: again, the Kolla community does not support any deployment of
>>> OpenStack.  To the question as posed, how many we add, the answer is it is
>>> really up to community members willing to implement and maintain the
>>> work.  In this case, I have personally stepped up to implement RHOS and
>>> maintain it going forward.
>>>
>>> Our policy on adding a new type could be simple or onerous.  I prefer
>>> simple.  If someone is willing to write the code and maintain it so that it
>>> stays in good working order, I see no harm in it remaining in tree.  I
>>> don't suspect there will be a lot of people interested in adding multiple
>>> distributions for a particular operating system.  To my knowledge, and I
>>> could be incorrect, Red Hat is the only OpenStack company with a paid and
>>> community version of OpenStack available simultaneously, and the paid
>>> version is only available on RHEL.  I think the risk of RPM based
>>> distributions plus their type count spiraling out of manageability is low.
>>> Even if the risk were high, I'd prefer to keep an open mind to facilitate
>>> an increase in diversity in our community (which is already fantastically
>>> diverse, btw ;)
>>>
>>> I am open to questions, comments or concerns.  Please feel free to voice
>>> them.
>>>
>>> Regards,
>>> -steve
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>> Both arguments sound valid to me, both have pros and cons.
>>
>> I think it's valuable to look to the experiences of Cinder and Neutron in
>> this area, both of which seem to have the same scenario and have existed
>> much longer than Kolla. From what I know of how these operate, proprietary
>> code is allowed to exist in the mainline so long as a certain set of criteria
>> is met. I'd have to look it up, but I think it mostly comes down to the
>> relevant parties having to "play by the rules", e.g. provide a working CI, help
>> with reviews, attend weekly meetings, etc. If Kolla can look to craft a
>> similar set of criteria for proprietary code down the line, I think it
>> should work well for us.
>>
>> Steve has a good point in that it may be too much overhead to implement a
>> plugin system or similar up front. Instead, we should actively monitor the
>> overhead in terms of reviews and code size that these extra implementations
>> add. Perhaps agree to review it at the end of Mitaka?
>>
>> Given the project is young, I think it can also benefit from the
>> increased usage and exposure from allowing these parties in. I would hope
>> independent contributors would not feel excluded by not being able to
>> use/test the pieces that need a license. The libre distros will remain
>> #1 for us.
>>
>> So based on the above explanation, I'm +1.
>>
>> -Paul
>>
>>
>
>
> Given Paul's comments I would agree here as well. I would like to get that
> 'criteria' required for Kolla to allow this proprietary code into the main
> repo down as soon as possible though and suggest that we have a bare
> minimum of being able to gate against it as one of the criteria.
>
> As for a plugin system, I also agree with Paul that we should check the
> overhead of including these other distros and any types needed after we
> have had time to see if they do introduce any additional overhead.
>
> So for the question 'Do we allow code that relies on proprietary
> packages?' I would vote +1, with the condition that we define the
> requirements of allowing that code as soon as possible.
>
>
>
I am +1 on the system, with the following criteria we already discussed above:

- A set of defined requirements to adhere to for contributing and maintaining
- A set of contributors contributing to and reviewing the changes in Kolla
- A set of maintainers available to contact if we require urgent attention
to any failures in Kolla due to the code
- CI if possible; we can evaluate the options as we finalize

Since we are on the subject of "OpenStack as a whole", I think OpenStack
has evolved better with more operators contributing to the code base, since
we need to let the code break to make it robust. This can be observed very
easily with Cinder and Neutron especially; the different nature of their
implementations has always driven base source improvements that were never
thought of.

I agree with Paul that Kolla will benefit more from increased participation
from Operators who are willing to update and use it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/474a7f79/attachment.html>

From davanum at gmail.com  Mon Sep 14 12:54:41 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Mon, 14 Sep 2015 08:54:41 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
 library releases needed
In-Reply-To: <1442234537-sup-4636@lrrr.local>
References: <1442234537-sup-4636@lrrr.local>
Message-ID: <CANw6fcEYKUX+2BhhPAv7j_Mz23WzV52+Gej2X_Z1H-mCR=BqjQ@mail.gmail.com>

Doug,

All oslo libraries are ready for stable/liberty branch.

Thanks,
-- Dims

On Mon, Sep 14, 2015 at 8:46 AM, Doug Hellmann <doug at doughellmann.com>
wrote:

> PTLs and release liaisons,
>
> In order to keep the rest of our schedule for the end-of-cycle release
> tasks, we need to have final releases for all client libraries in the
> next day or two.
>
> If you have not already submitted your final release request for this
> cycle, please do that as soon as possible.
>
> If you *have* already submitted your final release request for this
> cycle, please reply to this email and let me know that you have so I can
> create your stable/liberty branch.
>
> Thanks!
> Doug
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/a00b5f5f/attachment.html>

From kurt.r.taylor at gmail.com  Mon Sep 14 13:00:05 2015
From: kurt.r.taylor at gmail.com (Kurt Taylor)
Date: Mon, 14 Sep 2015 08:00:05 -0500
Subject: [openstack-dev] [third-party] Third party CI working group chair
Message-ID: <CAG5OiwgY8HP+D=5Wh-SvEf8vyNgyEkZeNU8=38kWMMYRrkq6ng@mail.gmail.com>

The amount of time I am able to spend on third party CI efforts has been on
the decline over the last release cycle, so it's time for me to step back
from chairing the working group and let someone else take over. Ramy
Asselin has agreed to take over chairing the working group starting with
this week's meeting, Tuesday, Sept. 15th, 1700 UTC in #openstack-meeting.

I'm really proud of what we have accomplished since the formation of the
working group. Please join us on Tuesday and get involved; there is still
much more to do.

Thanks!
Kurt Taylor (krtaylor)

https://wiki.openstack.org/wiki/Meetings/ThirdParty
https://wiki.openstack.org/wiki/ThirdPartyCIWorkingGroup
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/c73fbfff/attachment.html>

From brian.haley at hpe.com  Mon Sep 14 13:24:14 2015
From: brian.haley at hpe.com (Brian Haley)
Date: Mon, 14 Sep 2015 09:24:14 -0400
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAK+RQeZC2-82NP2Uysm19jpKa=tsPWRXTL6PGMhGgw8PitWijg@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <D21A2EED.BD34E%gkotton@vmware.com>
 <CAK+RQeZC2-82NP2Uysm19jpKa=tsPWRXTL6PGMhGgw8PitWijg@mail.gmail.com>
Message-ID: <55F6CA7E.4080900@hpe.com>

On 09/12/2015 01:14 PM, Armando M. wrote:
>
> On 12 September 2015 at 18:38, Gary Kotton <gkotton at vmware.com> wrote:
>
>     Thanks! You did a great job. Looking back you made some very tough and
>     healthy decisions. Neutron has a new lease on life!
>     It is tradition that the exiting PTL buy drinks for the community :)
>
> Ok, none of these kind words make you change your mind? This project needs you!

I can only add kind words myself. Thanks for all the hard work over the past
three cycles, Kyle!

-Brian


>     From: "mestery at mestery.com" <mestery at mestery.com>
>     Reply-To: OpenStack List <openstack-dev at lists.openstack.org>
>     Date: Saturday, September 12, 2015 at 12:12 AM
>     To: OpenStack List <openstack-dev at lists.openstack.org>
>     Subject: [openstack-dev] [neutron] PTL Non-Candidacy
>
>     I'm writing to let everyone know that I do not plan to run for Neutron PTL
>     for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
>     recently put it in his non-candidacy email [1]. But it goes further than
>     that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
>     full time job. In the case of Neutron, it's more than a full time job, it's
>     literally an always on job.
>
>     I've tried really hard over my three cycles as PTL to build a stronger web
>     of trust so the project can grow, and I feel that's been accomplished. We
>     have a strong bench of future PTLs and leaders ready to go, I'm excited to
>     watch them lead and help them in anyway I can.
>
>     As was said by Zane in a recent email [3], while Heat may have pioneered the
>     concept of rotating PTL duties with each cycle, I'd like to highly encourage
>     Neutron and other projects to do the same. Having a deep bench of leaders
>     supporting each other is important for the future of all projects.
>
>     See you all in Tokyo!
>     Kyle
>
>     [1]
>     http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
>     [2]
>     http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
>     [3]
>     http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From kuvaja at hpe.com  Mon Sep 14 13:26:10 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Mon, 14 Sep 2015 13:26:10 +0000
Subject: [openstack-dev] [all][ptl][release] final liberty cycle
	client	library releases needed
In-Reply-To: <1442234537-sup-4636@lrrr.local>
References: <1442234537-sup-4636@lrrr.local>
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>

Hi Doug,

Please find python-glanceclient 1.0.1 release request https://review.openstack.org/#/c/222716/

- Erno

> -----Original Message-----
> From: Doug Hellmann [mailto:doug at doughellmann.com]
> Sent: Monday, September 14, 2015 1:46 PM
> To: openstack-dev
> Subject: [openstack-dev] [all][ptl][release] final liberty cycle client library
> releases needed
> 
> PTLs and release liaisons,
> 
> In order to keep the rest of our schedule for the end-of-cycle release tasks,
> we need to have final releases for all client libraries in the next day or two.
> 
> If you have not already submitted your final release request for this cycle,
> please do that as soon as possible.
> 
> If you *have* already submitted your final release request for this cycle,
> please reply to this email and let me know that you have so I can create your
> stable/liberty branch.
> 
> Thanks!
> Doug
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sean at coreitpro.com  Mon Sep 14 13:32:01 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Mon, 14 Sep 2015 13:32:01 +0000
Subject: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and
 shifting PCI addresses
In-Reply-To: <20150910212306.GA11628@b3ntpin.localdomain>
References: <20150910212306.GA11628@b3ntpin.localdomain>
Message-ID: <0000014fcc0e1dfa-15d4ced0-8458-4e27-9b9c-52fede2270b0-000000@email.amazonses.com>

Brent is our Neutron-Nova liaison - can someone from the SR-IOV team
please respond?

-- 
Sean M. Collins


From sreshetniak at mirantis.com  Mon Sep 14 13:42:54 2015
From: sreshetniak at mirantis.com (Sergey Reshetnyak)
Date: Mon, 14 Sep 2015 16:42:54 +0300
Subject: [openstack-dev] [sahara] FFE for Ambari plugin
Message-ID: <CAOB5mPx5hN6JpD25raC1-E9gOJnKJpM6ZPSH3rHgEYjm=Zk1GA@mail.gmail.com>

Hello

I would like to request FFE for Ambari plugin.

Core parts of the Ambari plugin have merged, but scaling and HA support are still in review.

Patches:
* Scaling: https://review.openstack.org/#/c/193081/
* HA: https://review.openstack.org/#/c/197551/

Blueprint: https://blueprints.launchpad.net/sahara/+spec/hdp-22-support

Spec:
https://github.com/openstack/sahara-specs/blob/master/specs/liberty/hdp-22-support.rst

Thanks,
Sergey Reshetnyak
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/4071e11e/attachment.html>

From rlooyahoo at gmail.com  Mon Sep 14 13:54:21 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Mon, 14 Sep 2015 09:54:21 -0400
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <55F29727.7070800@redhat.com>
References: <55F29727.7070800@redhat.com>
Message-ID: <CA+5K_1H4aKznWQrcUVFYU=OvWmgUfLY0OA1fiiC+Xe33nr+S0g@mail.gmail.com>

On 11 September 2015 at 04:56, Dmitry Tantsur <dtantsur at redhat.com> wrote:

> Hi all!
>
> Our install guide is huge, and I've just approved even more text for it.
> WDYT about splitting it into "Basic Install Guide", which will contain bare
> minimum for running ironic and deploying instances, and "Advanced Install
> Guide", which will contain the following things:
> 1. Using Bare Metal service as a standalone service
> 2. Enabling the configuration drive (configdrive)
> 3. Inspection
> 4. Trusted boot
> 5. UEFI
>
> Opinions?
>
>
Thanks for bringing this up Dmitry. Any idea whether there is some sort of
standard format/organization of install guides for the other OpenStack
projects? And/or maybe we should ask Ops folks (non developers :-))

--ruby
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/fd0d6b65/attachment.html>

From dtantsur at redhat.com  Mon Sep 14 14:00:10 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 14 Sep 2015 16:00:10 +0200
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <CA+5K_1H4aKznWQrcUVFYU=OvWmgUfLY0OA1fiiC+Xe33nr+S0g@mail.gmail.com>
References: <55F29727.7070800@redhat.com>
 <CA+5K_1H4aKznWQrcUVFYU=OvWmgUfLY0OA1fiiC+Xe33nr+S0g@mail.gmail.com>
Message-ID: <55F6D2EA.2010907@redhat.com>

On 09/14/2015 03:54 PM, Ruby Loo wrote:
>
>
> On 11 September 2015 at 04:56, Dmitry Tantsur <dtantsur at redhat.com
> <mailto:dtantsur at redhat.com>> wrote:
>
>     Hi all!
>
>     Our install guide is huge, and I've just approved even more text for
>     it. WDYT about splitting it into "Basic Install Guide", which will
>     contain bare minimum for running ironic and deploying instances, and
>     "Advanced Install Guide", which will contain the following things:
>     1. Using Bare Metal service as a standalone service
>     2. Enabling the configuration drive (configdrive)
>     3. Inspection
>     4. Trusted boot
>     5. UEFI
>
>     Opinions?
>
>
> Thanks for bringing this up Dmitry. Any idea whether there is some sort
> of standard format/organization of install guides for the other
> OpenStack projects?

Not sure

 > And/or maybe we should ask Ops folks (non developers
> :-))

Fair enough. I've proposed basic vs advanced split based on what we did 
for TripleO downstream, which was somewhat user-driven.

>
> --ruby
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From doug at doughellmann.com  Mon Sep 14 14:18:23 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 10:18:23 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <1442234537-sup-4636@lrrr.local>
References: <1442234537-sup-4636@lrrr.local>
Message-ID: <1442240201-sup-1222@lrrr.local>

Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> PTLs and release liaisons,
> 
> In order to keep the rest of our schedule for the end-of-cycle release
> tasks, we need to have final releases for all client libraries in the
> next day or two.
> 
> If you have not already submitted your final release request for this
> cycle, please do that as soon as possible.
> 
> If you *have* already submitted your final release request for this
> cycle, please reply to this email and let me know that you have so I can
> create your stable/liberty branch.
> 
> Thanks!
> Doug

I forgot to mention that we also need the constraints file in
global-requirements updated for all of the releases, so we're actually
testing with them in the gate. Please take a minute to check the version
specified in openstack/requirements/upper-constraints.txt for your
libraries and submit a patch to update it to the latest release if
necessary. I'll do a review later in the week, too, but it's easier to
identify the causes of test failures if we have one patch at a time.
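
[Editor's note: for anyone scripting the check Doug describes, here is a minimal
sketch of parsing a constraints pin. Only the `===` pin format comes from
upper-constraints.txt itself; the helper name is hypothetical.]

```python
# Minimal sketch: pull the pinned version out of an
# upper-constraints.txt style line such as
#   python-glanceclient===1.0.1
# so it can be compared against the latest release.
# The helper name is hypothetical; only the "===" pin format
# comes from the requirements repo.
def pinned_version(line):
    # partition on the arbitrary-equality pin used by constraints files
    name, _, version = line.strip().partition("===")
    return name, version

print(pinned_version("python-glanceclient===1.0.1"))
```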

Doug


From rakhmerov at mirantis.com  Mon Sep 14 14:22:42 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Mon, 14 Sep 2015 17:22:42 +0300
Subject: [openstack-dev] [mistral] Team meeting - 09/14/2015
Message-ID: <40AEEFE1-63EF-4BBF-819D-A32FD422F725@mirantis.com>

Hi Mistral team (and not only),

This is a reminder that we'll have a team meeting today at #openstack-meeting at 16.00 UTC.

The agenda is:
* Review action items
* Current status (progress, issues, roadblocks, further plans)
* Scoping RC releases
* Open discussion

Add your own items by editing https://wiki.openstack.org/wiki/Meetings/MistralAgenda <https://wiki.openstack.org/wiki/Meetings/MistralAgenda>

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/9ecf1fa6/attachment.html>

From ahothan at cisco.com  Mon Sep 14 14:32:07 2015
From: ahothan at cisco.com (Alec Hothan (ahothan))
Date: Mon, 14 Sep 2015 14:32:07 +0000
Subject: [openstack-dev] [app-catalog] PTL Candidacy
In-Reply-To: <CA+odVQF6KtsewLHvCNBungC=XQ=hGjnBRPRgtOLThdve66iGwA@mail.gmail.com>
References: <CA+odVQF6KtsewLHvCNBungC=XQ=hGjnBRPRgtOLThdve66iGwA@mail.gmail.com>
Message-ID: <C799648B-5810-4A01-9289-A559C8C480A7@cisco.com>

+1

Great job leading the app catalog, and looking forward to all these new features!




On 9/11/15, 12:23 PM, "Christopher Aedo" <doc at aedo.net> wrote:

>It's time for the Community App Catalog to go through an official
>election cycle, and I'm putting my name in for PTL.  I've been filling
>that role provisionally since before the App Catalog was launched at
the Vancouver summit, and I would like to continue serving as PTL
>officially.  Now that we've joined the Big Tent[1], having a committed
>leader is more important than ever :) .
>
>I believe the App Catalog has tremendous potential for helping the
>end-users of OpenStack clouds find and share things they can deploy on
>those clouds.  To that end, I've been working with folks on extending
>the types of assets that can live in the catalog and also trying to
>make finding and consuming those assets easier.
>
>Since we announced the Community App Catalog I've done everything I
>could to deliver on the "community" part.  With the help of the
>OpenStack Infra team, we moved the site to OpenStack infrastructure as
>quickly as possible.  All planning and coordination efforts have
>happened on IRC (#openstack-app-catalog), the dev and operators
>mailing list, and during the weekly IRC meetings[2].  I've also been
>working to get more people engaged and involved with the Community App
>Catalog project while attempting to raise the profile and exposure
>whenever possible.
>
>Speaking of community, I know being part of the OpenStack community at
>a broad level is one of the most important things for a PTL.  On that
>front I'm active and always available on IRC (docaedo), and do my best
>to stay on top of all the traffic on the dev mailing list.  I also
>work with Richard Raseley to organize the OpenStack meetup in Portland
>in order to reach, educate (and entertain) people who want to learn
>more about OpenStack.
>
>The next big thing we will do for the Community App Catalog is to
>build out the backend so it becomes a more engaging experience for the
>users, as well as makes it easier for other projects to contribute and
>consume the assets.  In addition to the Horizon plugin[3][4] (check it
>out with devstack, it's pretty cool!) we are thinking through the API
>side of this and will eventually contribute the code to search, fetch
>and push from the OpenStack Client.
>
>All of this is to say that I'm eager and proud to serve as the
>Community App Catalog PTL for the next six months if you'll have me!
>
>[1] https://review.openstack.org/#/c/217957/
>[2] https://wiki.openstack.org/wiki/Meetings/app-catalog
>[3] https://github.com/stackforge/apps-catalog-ui
>[4] https://github.com/openstack/app-catalog-ui
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Kevin.Fox at pnnl.gov  Mon Sep 14 14:48:01 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Mon, 14 Sep 2015 14:48:01 +0000
Subject: [openstack-dev] [Ironic] There is a function to display the VGA
 emulation screen of BMC in the baremetal node on the Horizon?
In-Reply-To: <870B6C6CD434FA459C5BF1377C680282023A7B57@BPXM20GP.gisp.nec.co.jp>
References: <870B6C6CD434FA459C5BF1377C680282023A7160@BPXM20GP.gisp.nec.co.jp>
 <CAB1EZBoxqM=OOQ-RH3okW9AAoT2pAgTeWB_DDRQkiat=_870Ew@mail.gmail.com>,
 <870B6C6CD434FA459C5BF1377C680282023A7B57@BPXM20GP.gisp.nec.co.jp>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A3075F2@EX10MBOX03.pnnl.gov>

I don't believe the IPMI specification standardizes VGA consoles, only serial consoles. So it may be possible to get it to work for specific models, but not generically.

Thanks,
Kevin

________________________________
From: Shinya Hoshino
Sent: Sunday, September 13, 2015 8:48:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] There is a function to display the VGA emulation screen of BMC in the baremetal node on the Horizon?

Thanks Lucas,

However, I'd like to know whether it is possible to get the *VGA
emulation* screen of the BMC, rather than the serial console.

I also know that the features available only in a VGA console are
very few right now, and I am able to get `ironic node-get-console
<node-uuid>`.
Still, I'd like to know whether such a screen can be displayed on
Horizon.

Best regards,

On 2015/09/11 17:50, Lucas Alvares Gomes wrote:
> Hi,
>
>> We are investigating how to display on the Horizon a VGA
>> emulation screen of BMC in the bare metal node that has been
>> deployed by Ironic.
>> If it was already implemented, I thought that the connection
>> information of a VNC or SPICE server (converted if necessary)
>> for a VGA emulation screen of BMC is returned as the stdout of
>> the "nova get-*-console".
>> However, we were investigating how to configure Ironic and so
>> on, but we could not find a way to do so.
>> I tried to search roughly the implementation of such a process
>> in the source code of Ironic, but it was not found.
>>
>> The current Ironic, I think such features are not implemented.
>> However, is this correct?
>>
>
> A couple of drivers in Ironic support a web console (shellinabox); you
> can take a look at the docs to see how to enable and use it:
> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-node-web-console
>
> Hope that helps,
> Lucas
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
/* -------------------------------------------------------------
   Shinn'ya Hoshino            mailto:shi-hoshino at yk.jp.nec.com
------------------------------------------------------------- */

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/fd60b91c/attachment.html>

From sean.mcginnis at gmx.com  Mon Sep 14 14:49:35 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 14 Sep 2015 09:49:35 -0500
Subject: [openstack-dev] [Cinder] PTL Candidacy
Message-ID: <20150914144935.GA12267@gmx.com>

Hello everyone,

I'm announcing my candidacy for Cinder PTL for the Mitaka release.

The Cinder team has made great progress. We've not only grown the
number of supported backend drivers, but we've made significant
improvements to the core code and raised the quality of existing
and incoming code contributions. While there are still many things
that need more polish, we are headed in the right direction and
block storage is a strong, stable component of many OpenStack clouds.

Mike and John have provided the leadership to get the project where
it is today. I would like to keep that momentum going.

I've spent over a decade finding new and interesting ways to create
and delete volumes. I also work across many different product teams
and have had a lot of experience collaborating with groups to find
a balance between the work being done to best benefit all involved.

I think I can use this experience to foster collaboration both within
the Cinder team as well as between Cinder and other related projects
that interact with storage services.

Some topics I would like to see focused on for the Mitaka release
would be:

 * Complete work of making the Cinder code Python3 compatible.
 * Complete conversion to objects.
 * Sort out object inheritance and appropriate use of ABC.
 * Continued stabilization of third party CI.
 * Make sure there is a good core feature set regardless of backend type.
 * Reevaluate our deadlines to make sure core feature work gets enough
   time and allows drivers to implement support.

While there are some things I think we need to do to move the project
forward, I am mostly open to the needs of the community as a whole
and making sure that what we are doing is benefiting OpenStack and
making it a simple, easy-to-use, and ubiquitous platform for the 
cloud.

Thank you for your consideration!

Sean McGinnis (smcginnis)


From brian.rosmaita at RACKSPACE.COM  Mon Sep 14 14:58:59 2015
From: brian.rosmaita at RACKSPACE.COM (Brian Rosmaita)
Date: Mon, 14 Sep 2015 14:58:59 +0000
Subject: [openstack-dev] [glance] tasks (following "proposed priorities for
 Mitaka")
Message-ID: <D21C4411.21E22%brian.rosmaita@rackspace.com>

Apologies for forking the thread, but there was way too much in Doug's
email (and Flavio's response) and I only want to make a few points about
tasks.  Please read Doug's original email and Flavio's reply at some
point, preferably before you read this.

I'm going to limit myself to 4 points.  We'll be discussing Glance tasks
during the Mitaka summit design session, so we'll be able to go into
details and determine the future of tasks there.  But I would like to make
these points before discussion gets too far along.


(1) DefCore
So I see DefCore as a two-way street, in which the OpenStack projects need
to be aware of what's going on with the DefCore process, and the DefCore
people are paying attention to what's going on in the projects.

Glance tasks are not a recent innovation, they date back at least to the
Havana summit, April 15-18, 2013.  There was a session on "Getting Glance
Ready for Public Clouds" [1], resulting in a blueprint for "New Upload
Download Workflow for Public Glance" [2], which was filed on 2013-04-22.

This was pre-specs days, but there was lots of information about the
design direction this was taking posted on the wiki (see [3] and [4],
which contain links to most of the stuff).

My point is simply that the need for tasks and the discussion around their
development and structure was carried out in the open via the standard
OpenStack practices, and if Glance was headed in a
weird/nonstandard/deviant direction, some guidance would have been useful
at that point.  (I'm not implying that such guidance is not useful now, of
course.)


(2) Tasks as a Public API
Well, that has been the whole point throughout the entire discussion.  See
[1]-[4].


(3) Tasks and Deployers
I participated in some of the DefCore discussions around image upload that
took place before the Liberty summit.  It just so happened that I was on
the program to give a talk about Glance tasks, and I left room for
discussion about (a) whether two image upload workflows are confusing for
end users, and (b) whether the flexibility of tasks (e.g., the "input"
element defined as a JSON blob) is actually a problem.  (You can look at
the talk [5] or my slides [6] to see that I didn't pull any punches about
this.)

The feedback I got from the deployers present was that they weren't
worried about (a), and that they liked (b) because it enabled them to make
customizations easily for their particular situation.

I'm not saying that there's no other way to do this -- e.g., you could do
all sorts of alternative workflows and configurations in the "regular"
upload process -- but the feedback I got can be summarized like this:
Given the importance of a properly-functioning Glance for normal cloud
operations, it is useful to have one upload/download workflow that is
locked down and you don't have to worry about, and a completely different
workflow that you can expose to end users and tinker with as necessary.


(4) Interoperability
In general, this is a worthy goal.  The OpenStack cloud platform, however,
is designed to handle many different deployment scenarios from small
private clouds to enormous public clouds, and allowing access to the same
API calls in all situations is not desirable.  A small academic
department, for example, may allow regular end users to make some calls
usually reserved for admins, whereas in a public cloud, this would be a
remarkably bad idea.  So if DefCore is going to enforce interoperability
via tests, it should revise the tests to meet the most restrictive
reasonable case.  Image upload is a good example, as some cloud operators
do not want to expose this operation to end users, period, and for a
myriad of reasons (security, user frustration when the image from some
large non-open-source cloud doesn't boot, etc.).

With respect to tasks: the cloud provider specifies the exact content of
the 'input' element.  It's going to differ from deployment to deployment.
But that isn't significantly different from different clouds having
different flavors with different capabilities.  You can't reasonably
expect that "nova boot --flavor economy-flavor --image optimized-centos
myserver" is going to work in all OpenStack clouds, i.e., you need to
figure out the appropriate values to replace 'economy-flavor' and
'optimized-centos' in the boot call.  I think the 'input' element is
similar.  The initial discussion was that it should be defined via
documentation as we saw how tasks would be used in real life.  But there's
no reason why it must be documentation only.  It would be easy to make a
schema available.
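
[Editor's note: to illustrate Brian's point, a hedged sketch of what a task
request with a deployment-defined 'input' blob might look like. The field
names under "input" are assumptions for illustration, not a fixed schema —
which is exactly the point being made above.]

```python
import json

# Hypothetical Glance task request. The 'input' element is defined by
# each cloud provider, so the field names below are illustrative
# assumptions only; a real deployment documents (or publishes a schema
# for) its own accepted keys.
task_request = {
    "type": "import",
    "input": {
        "import_from": "http://example.com/images/my-image.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {"name": "my-image", "disk_format": "qcow2"},
    },
}

# The whole request is a plain JSON document, as the API expects.
print(json.dumps(task_request, indent=2))
```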

I tried to be as concise as possible, but now my email has gotten too long!

cheers,
brian

[1]
https://etherpad.openstack.org/p/havana-getting-glance-ready-for-public-clouds
[2] https://blueprints.launchpad.net/glance/+spec/upload-download-workflow
[2] https://blueprints.launchpad.net/glance/+spec/upload-download-workflow
[3] https://wiki.openstack.org/wiki/Glance-tasks-api
[4] https://wiki.openstack.org/wiki/Glance-tasks-api-product
[5] http://youtu.be/ROXrjX3pdqw
[6] http://www.slideshare.net/racker_br/glance-tasksvancouver2015






From sambetts at cisco.com  Mon Sep 14 15:01:04 2015
From: sambetts at cisco.com (Sam Betts (sambetts))
Date: Mon, 14 Sep 2015 15:01:04 +0000
Subject: [openstack-dev] [Ironic] Suggestion to split install guide
In-Reply-To: <55F6D2EA.2010907@redhat.com>
References: <55F29727.7070800@redhat.com>
 <CA+5K_1H4aKznWQrcUVFYU=OvWmgUfLY0OA1fiiC+Xe33nr+S0g@mail.gmail.com>
 <55F6D2EA.2010907@redhat.com>
Message-ID: <D21C9757.5C22%sambetts@cisco.com>

Looking at what they're building for Neutron,
http://docs.openstack.org/networking-guide, they have quite fine-grained
splitting of their guide, with a large contents page that helps find things
easily. My personal issue with the current Ironic guide comes down to
navigation: the current table of contents system is unpleasant to use.
Splitting the guide into multiple pages will help to a degree, because
it'll reduce the number of times I end up either using Ctrl-F or scrolling
back to the top to look at the contents, but I think it would be nice to
have a reworked contents page.

As a side note, I don't know about other people, but I prefer the styling of
the Neutron guide; is that something we even have a choice in?

Sam

On 14/09/2015 15:00, "Dmitry Tantsur" <dtantsur at redhat.com> wrote:

>On 09/14/2015 03:54 PM, Ruby Loo wrote:
>>
>>
>> On 11 September 2015 at 04:56, Dmitry Tantsur <dtantsur at redhat.com
>> <mailto:dtantsur at redhat.com>> wrote:
>>
>>     Hi all!
>>
>>     Our install guide is huge, and I've just approved even more text for
>>     it. WDYT about splitting it into "Basic Install Guide", which will
>>     contain bare minimum for running ironic and deploying instances, and
>>     "Advanced Install Guide", which will contain the following things:
>>     1. Using Bare Metal service as a standalone service
>>     2. Enabling the configuration drive (configdrive)
>>     3. Inspection
>>     4. Trusted boot
>>     5. UEFI
>>
>>     Opinions?
>>
>>
>> Thanks for bringing this up Dmitry. Any idea whether there is some sort
>> of standard format/organization of install guides for the other
>> OpenStack projects?
>
>Not sure
>
> > And/or maybe we should ask Ops folks (non developers
>> :-))
>
>Fair enough. I've proposed basic vs advanced split based on what we did
>for TripleO downstream, which was somewhat user-driven.
>
>>
>> --ruby
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From kuvaja at hpe.com  Mon Sep 14 15:02:59 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Mon, 14 Sep 2015 15:02:59 +0000
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <20150914124100.GC10859@redhat.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4D6494@G9W0745.americas.hpqcorp.net>

> -----Original Message-----
> From: Flavio Percoco [mailto:flavio at redhat.com]
> Sent: Monday, September 14, 2015 1:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> 
> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> >
> >After having some conversations with folks at the Ops Midcycle a few
> >weeks ago, and observing some of the more recent email threads related
> >to glance, glance-store, the client, and the API, I spent last week
> >contacting a few of you individually to learn more about some of the
> >issues confronting the Glance team. I had some very frank, but I think
> >constructive, conversations with all of you about the issues as you see
> >them. As promised, this is the public email thread to discuss what I
> >found, and to see if we can agree on what the Glance team should be
> >focusing on going into the Mitaka summit and development cycle and how
> >the rest of the community can support you in those efforts.
> >
> >I apologize for the length of this email, but there's a lot to go over.
> >I've identified 2 high priority items that I think are critical for the
> >team to be focusing on starting right away in order to use the upcoming
> >summit time effectively. I will also describe several other issues that
> >need to be addressed but that are less immediately critical. First the
> >high priority items:
> >
> >1. Resolve the situation preventing the DefCore committee from
> >   including image upload capabilities in the tests used for trademark
> >   and interoperability validation.
> >
> >2. Follow through on the original commitment of the project to
> >   provide an image API by completing the integration work with
> >   nova and cinder to ensure V2 API adoption.
> 
> Hi Doug,
> 
> First and foremost, I'd like to thank you for taking the time to dig into these
> issues, and for reaching out to the community seeking for information and a
> better understanding of what the real issues are. I can imagine how much
> time you had to dedicate on this and I'm glad you did.

++ Really, thanks for taking the time on this.
> 
> Now, to your email, I very much agree with the priorities you mentioned
> above and I'd like for, whomever will win Glance's PTL election, to bring focus
> back on that.
> 
> Please, find some comments in-line for each point:
> 
> 
> >
> >I. DefCore
> >
> >The primary issue that attracted my attention was the fact that DefCore
> >cannot currently include an image upload API in its interoperability
> >test suite, and therefore we do not have a way to ensure
> >interoperability between clouds for users or for trademark use. The
> >DefCore process has been long, and at times confusing, even to those of
> >us following it sort of closely. It's not entirely surprising that some
> >projects haven't been following the whole time, or aren't aware of
> >exactly what the whole thing means. I have proposed a cross-project
> >summit session for the Mitaka summit to address this need for
> >communication more broadly, but I'll try to summarize a bit here.
> 

Looking at how different OpenStack-based public clouds limit or fully prevent their users from uploading images to their deployments, I'm not convinced that image upload should be included in this definition.
 
> +1
> 
> I think it's quite sad that some projects, especially those considered to be
> part of the `starter-kit:compute`[0], don't follow closely what's going on in
> DefCore. I personally consider this a task PTLs should incorporate in their role
> duties. I'm glad you proposed such session, I hope it'll help raising awareness
> of this effort and it'll help moving things forward on that front.
> 
> 
> >
> >DefCore is using automated tests, combined with business policies, to
> >build a set of criteria for allowing trademark use. One of the goals of
> >that process is to ensure that all OpenStack deployments are
> >interoperable, so that users who write programs that talk to one cloud
> >can use the same program with another cloud easily. This is a *REST
> >API* level of compatibility. We cannot insert cloud-specific behavior
> >into our client libraries, because not all cloud consumers will use
> >those libraries to talk to the services. Similarly, we can't put the
> >logic in the test suite, because that defeats the entire purpose of
> >making the APIs interoperable. For this level of compatibility to work,
> >we need well-defined APIs, with a long support period, that work the
> >same no matter how the cloud is deployed. We need the entire community
> >to support this effort. From what I can tell, that is going to require
> >some changes to the current Glance API to meet the requirements. I'll
> >list those requirements, and I hope we can discuss them to a degree
> >that ensures everyone understands them. I don't want this email thread
> >to get bogged down in implementation details or API designs, though, so
> >let's try to keep the discussion at a somewhat high level, and leave
> >the details for specs and summit discussions. I do hope you will
> >correct any misunderstandings or misconceptions, because unwinding this
> >as an outside observer has been quite a challenge and it's likely I
> >have some details wrong.

This just reinforces my doubt above. Including upload in the DefCore requirements would probably just close out lots of the public clouds out there. Is that the intention here?

> >
> >As I understand it, there are basically two ways to upload an image to
> >glance using the V2 API today. The "POST" API pushes the image's bits
> >through the Glance API server, and the "task" API instructs Glance to
> >download the image separately in the background. At one point
> >apparently there was a bug that caused the results of the two different
> >paths to be incompatible, but I believe that is now fixed.
> >However, the two separate APIs each have different issues that make
> >them unsuitable for DefCore.

While it's true that there are two ways to get an image into Glance via the V2 Images API, the use cases for those two are completely different. Some (like Flavio) might argue that tasks are an internal-only API, which may well be the case in a private cloud; others might be willing to expose only tasks to their public cloud users because of the improved processing options (antivirus scanning, etc.); and a last group is simply not willing to let their users bring their own images in at all.

Looking at it from the outside, the Tasks API should not be included in any core definition, as it's just an interface to _optional_ plugins. Obviously, if there are different classifications, it might be included in some of them.
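To make the contrast concrete, here is a rough sketch of what a client does on each path (request bodies only, nothing is sent; the endpoint paths follow the published v2 API reference, while the task `input` keys are exactly the deployment-specific part under dispute):

```python
import json

GLANCE = "http://glance.example.com:9292"  # hypothetical endpoint

# Path 1: the "POST" upload -- create the image record, then push the
# bits through the Glance API server itself.
image_record = {
    "name": "cirros-0.3.4",
    "disk_format": "qcow2",
    "container_format": "bare",
}
create_url = GLANCE + "/v2/images"          # POST image_record as JSON
upload_url = GLANCE + "/v2/images/%s/file"  # then PUT the raw image bytes

# Path 2: the "task" upload -- ask Glance to fetch the image in the
# background.  The accepted "input" keys and allowed sources vary
# between deployments, which is the interoperability problem above.
import_task = {
    "type": "import",
    "input": {
        "import_from": "http://download.example.com/cirros-0.3.4.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {"name": "cirros-0.3.4",
                             "disk_format": "qcow2",
                             "container_format": "bare"},
    },
}
task_url = GLANCE + "/v2/tasks"  # POST import_task, then poll the task

print(json.dumps(import_task["input"], indent=2))
```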

> >
> >The DefCore process relies on several factors when designating APIs for
> >compliance. One factor is the technical direction, as communicated by
> >the contributor community -- that's where we tell them things like "we
> >plan to deprecate the Glance V1 API". In addition to the technical
> >direction, DefCore looks at the deployment history of an API. They do
> >not want to require deploying an API if it is not seen as widely
> >usable, and they look for some level of existing adoption by cloud
> >providers and distributors as an indication that the API is desired
> >and can be successfully used. Because we have multiple upload APIs, the
> >message we're sending on technical direction is weak right now, and so
> >they have focused on deployment considerations to resolve the question.
> 
> The task upload process you're referring to is the one that uses the `import`
> task, which allows you to download an image from an external source,
> asynchronously, and import it in Glance. This is the old `copy-from` behavior
> that was moved into a task.
> 
> The "fun" thing about this - and I'm sure other folks in the Glance community
> will disagree - is that I don't consider tasks to be a public API. That is to say, I
> would expect tasks to be an internal API used by cloud admins to perform
> some actions (based on its current implementation). Eventually, some of
> these tasks could be triggered from the external API but as background
> operations that are triggered by the well-known public ones and not through
> the task API.
> 
> Ultimately, I believe end-users of the cloud simply shouldn't care about what
> tasks are or aren't and more importantly, as you mentioned later in the
> email, tasks make clouds not interoperable. I'd be pissed if my public image
> service would ask me to learn about tasks to be able to use the service.

I'd like to bring another argument here. I think our public Images API should behave consistently regardless of whether tasks are enabled in the deployment, and with which plugins. That means that _if_ we expect Glance upload to work over the POST API, and that endpoint is available in the deployment, I would expect a) my image hash to match the one the cloud returns, b) all or none of the clouds to reject my image if it gets flagged by Vendor X's virus definitions, and c) the image to be bootable across clouds, provided it's in a supported format. On the other hand, if the vendor tells me I need to use a cloud-specific task that accepts only OVA-compliant image packages, and that the image will be checked before acceptance, my expectations are quite different, and I would expect all of that to happen outside the standard API, since it's not consistent behavior.
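Expectation (a) is at least mechanically checkable from the client side, along these lines (a sketch only; md5 is assumed here because that's what the v2 image record's `checksum` field has historically carried, but a deployment could report something else):

```python
import hashlib

def image_checksum(data, algo="md5"):
    """Checksum of the image bits, computed the way a client would
    before upload, for comparison against the value the cloud returns."""
    return hashlib.new(algo, data).hexdigest()

local = image_checksum(b"fake image bits")
# After upload, GET /v2/images/{id} and compare its "checksum" field
# against `local`; a mismatch means the cloud transformed the image.
```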

> 
> Long story short, I believe the only upload API that should be considered is
> the one that uses HTTP and, eventually, to bring compatibility with v1 as far
> as the copy-from behavior goes, Glance could bring back that behavior on
> top of the task (just dropping this here for the sake of discussion and
> interoperability).
> 
> 
> >The POST API is enabled in many public clouds, but not consistently.
> >In some clouds like HP, a tenant requires special permission to use the
> >API. At least one provider, Rackspace, has disabled the API entirely.
> >This is apparently due to what seems like a fair argument that
> >uploading the bits directly to the API service presents a possible
> >denial of service vector. Without arguing the technical merits of that
> >decision, the fact remains that without a strong consensus from
> >deployers that the POST API should be publicly and consistently
> >available, it does not meet the requirements to be used for DefCore
> >testing.
> 
> This is definitely unfortunate. I believe a good step forward for this
> discussion would be to create a list of issues related to uploading images and
> see how those issues can be addressed. The result from that work might be
> that it's not recommended to make that endpoint public but again, without
> going through the issues, it'll be hard to understand how we can improve this
> situation. I expect most of these issues to have a security impact.
> 

++. Regardless of how helpful that discussion would be, I don't think it's a realistic expectation to prioritize that work so highly that the majority of those issues would be solved, alongside the priorities at the top of this e-mail, within a single cycle.

> 
> >The task API is also not widely deployed, so its adoption for DefCore
> >is problematic. If we provide a clear technical direction that this API
> >is preferred, that may overcome the lack of adoption, but the current
> >task API seems to have technical issues that make it fundamentally
> >unsuitable for DefCore consideration. While the task API addresses the
> >problem of a denial of service, and includes useful features such as
> >processing of the image during import, it is not strongly enough
> >defined in its current form to be interoperable.
> >Because it's a generic API, the caller must know how to fully construct
> >each task, and know what task types are supported in the first place.
> >There is only one "import" task type supported in the Glance code
> >repository right now, but it is not clear that "import"
> >always uses the same arguments, or interprets them in the same way.
> >For example, the upstream documentation [1] describes a task that
> >appears to use a URL as source, while the Rackspace documentation [2]
> >describes a task that appears to take a swift storage location.
> >I wasn't able to find JSONSchema validation for the "input" blob
> >portion of the task in the code [3], though that may happen down inside
> >the task implementation itself somewhere.
> 
> 
> The above sounds pretty accurate as there's currently just 1 flow that can be
> triggered (the import flow) and that accepts an input, which is JSON. As I
> mentioned above, I don't believe tasks should be part of the public API and
> this is yet another reason why I think so. The tasks API is not well defined as
> there's, currently, no good way to define the expected input in a backwards
> compatible way and to provide all the required validation.
> 
> I like having tasks in Glance, despite my comments above - but I like them for
> cloud usage and not public usage.
> 
> As far as Rackspace's docs/endpoint goes, I'd assume this is an error in their
> documentation since Glance currently doesn't allow[0] for swift URLs to be
> imported (not even in juno[1]).
> 
> [0] http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py#n84
> [1] http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py?h=stable/juno#n83
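For the sake of illustration, the sort of declared, versionable schema the thread says is missing for the task `input` blob could be as small as this (entirely hypothetical; the keys mirror the import-flow examples above and are not any schema Glance actually ships):

```python
# Hypothetical input schema for the "import" task type -- illustrative
# only, not Glance's.  The point is that without something like this,
# published alongside the API, callers cannot know what to send.
IMPORT_INPUT_SCHEMA = {
    "import_from": str,         # source location of the image
    "import_from_format": str,  # e.g. "qcow2"
    "image_properties": dict,   # name, disk_format, container_format, ...
}

def validate_import_input(blob):
    """Return a list of problems; an empty list means acceptable input."""
    problems = []
    for key, expected_type in IMPORT_INPUT_SCHEMA.items():
        if key not in blob:
            problems.append("missing key: %s" % key)
        elif not isinstance(blob[key], expected_type):
            problems.append("wrong type for: %s" % key)
    return problems

ok = validate_import_input({
    "import_from": "http://example.com/img.qcow2",
    "import_from_format": "qcow2",
    "image_properties": {"name": "img"},
})
bad = validate_import_input({"import_from": 42})
```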
> 
> >Tasks also come from plugins, which may be installed differently based
> >on the deployment. This is an interesting approach to creating API
> >extensions, but isn't discoverable enough to write interoperable tools
> >against. Most of the other projects are starting to move away from
> >supporting API extensions at all because of interoperability concerns
> >they introduce. Deployers should be able to configure their clouds to
> >perform well, but not to behave in fundamentally different ways.
> >Extensions are just that, extensions. We can't rely on them for
> >interoperability testing.
> 
> This is, indeed, an interesting interpretation of what tasks are for.
> I'd probably just blame us (Glance team) for not communicating properly
> what tasks are meant to be. I don't believe tasks are a way to extend the
> *public* API and I'd be curious to know if others see it that way. I fully agree
> that just breaks interoperability and as I've mentioned a couple of times in
> this reply already, I don't even think tasks should be part of the public API.

Hmm, that's exactly how I have seen it: plugins that can be provided to extend the standard functionality. I totally agree that these should not be relied on for interoperability. I've always assumed that this is also the reason tasks have not had much focus: we've prioritized the actual API's functionality and stability over its extensibility.

> 
> But again, very poor job communicating so[0]. Nonetheless, for the sake of
> providing enough information about tasks and sources to read from, I'd also
> like to point out the original blueprint[1], some discussions during the
> havana's summit[2], the wiki page for tasks[3] and a patch I just reviewed
> today (thanks Brian) that introduces docs for tasks[4]. These links show
> already some differences in what tasks are.
> 
> [0] http://git.openstack.org/cgit/openstack/glance/tree/etc/policy.json?h=stable/juno#n28
> [1] https://blueprints.launchpad.net/glance/+spec/async-glance-workers
> [2] https://etherpad.openstack.org/p/havana-glance-requirements
> [3] https://wiki.openstack.org/wiki/Glance-tasks-api
> [4] https://review.openstack.org/#/c/220166/
> 
> >
> >There is a lot of fuzziness around exactly what is supported for image
> >upload, both in the documentation and in the minds of the developers
> >I've spoken to this week, so I'd like to take a step back and try to
> >work through some clear requirements, and then we can have folks
> >familiar with the code help figure out if we have a real issue, if a
> >minor tweak is needed, or if things are good as they stand today and
> >it's all a misunderstanding.
> >
> >1. We need a strongly defined and well documented API, with arguments
> >   that do not change based on deployment choices. The behind-the-scenes
> >   behaviors can change, but the arguments provided by the caller
> >   must be the same and the responses must look the same. The
> >   implementation can run as a background task rather than receiving
> >   the full image directly, but the current task API is too vaguely
> >   defined to meet this requirement, and IMO we need an entry point
> >   focused just on uploading or importing an image.
> >
> >2. Glance cannot require having a Swift deployment. It's not clear
> >   whether this is actually required now, so if it's not then we're
> >   in a good state.
> 
> This is definitely not the case. Glance doesn't require any specific store to be
> deployed. It does require at least one other than the http one (because it
> doesn't support write operations).
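For what it's worth, a Swift-free deployment is just a store-configuration choice. A minimal sketch of a filesystem-backed glance-api.conf (section and option names as in the glance_store library of this era; values are illustrative):

```ini
# glance-api.conf -- filesystem store only, no Swift anywhere
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```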
> 
> > It's fine to provide an optional way to take
> >   advantage of Swift if it is present, but it cannot be a required
> >   component. There are three separate trademark "programs", with
> >   separate policies attached to them. There is an umbrella "Platform"
> >   program that is intended to include all of the TC approved release
> >   projects, such as nova, glance, and swift. However, there is
> >   also a separate "Compute" program that is intended to include
> >   Nova, Glance, and some others but *not* Swift. This is an important
> >   distinction, because there are many use cases both for distributors
> >   and public cloud providers that do not incorporate Swift for a
> >   variety of reasons. So, we can't have Glance's primary configuration
> >   require Swift and we need to provide tests for the DefCore team
> >   that run without Swift. Duplicate tests that do use Swift are
> >   fine, and might be used for "Platform" compliance tests.

It really saddens me, and shows how narrowly focused we have been, that point 2 even needs discussion.

> >
> >3. We need an integration test suite in tempest that fully exercises
> >   the public image API by talking directly to Glance. This applies
> >   to the entire API, not just image uploads. It's fine to have
> >   duplicate tests using the proxy in Nova if the Nova team wants
> >   those, but DefCore should be using tests that talk directly to
> >   the service that owns each feature, without relying on any
> >   proxying. We've already missed the chance to deal with this in
> >   the current DefCore definition, which uses image-related tests
> >   that talk to the Nova proxy [4][5], so we'll have to maintain
> >   the proxy for the required deprecation period. But we won't be
> >   able to consider removing that proxy until we provide alternate
> >   tests for those features that speak directly to Glance. We may
> >   have some coverage already, but I wasn't able to find a task-based
> >   image upload test and there is no "image create" mentioned in
> >   the current draft of capabilities being reviewed [6]. There may
> >   be others missing, so someone more familiar with the feature set
> >   of Glance should do an audit and document what tests are needed
> >   so the work can be split up.
> >
> 
> +1 This should become one of the top priorities for Mitaka (as you
> mentioned at the beginning of this email).

But I hope this integration test suite in Tempest is not seen as de facto required functionality by DefCore, as those two should be separate things.

> 
> >4. Once identified and incorporated into the DefCore capabilities
> >   set, the selected API needs to remain stable for an extended
> >   period of time and follow the deprecation timelines defined by
> >   DefCore.  That has implications for the V3 API currently in
> >   development to turn Glance into a more generic artifacts service.
> >   There are a lot of ways to handle those implications, and no
> >   choice needs to be made today, so I only mention it to make sure
> >   it's clear that (a) we must get V2 into shape for DefCore and
> >   (b) when that happens, we will need to maintain V2 even if V3
> >   is finished. We won't be able to deprecate V2 quickly.

This is absolutely reasonable and pretty much sums up what our near-term focus should be moving forward.

> >
> >Now, it's entirely possible that we can meet all of those requirements
> >today, and that would be great. If that's the case, then the problem is
> >just one of clear communication and documentation. I think there's
> >probably more work to be done than that, though.
> 
> 
> There's clearly a communication problem. The fact that this very email has
> been sent out is a sign of that. However, I'd like to say, in a very optimistic
> way, that Glance is not so far away from the expected status. There are things
> to fix, other things to clarify, tons to discuss but, IMHO, besides the tempest
> tests and DefCore, the most critical one is the one you mentioned in the
> following section.

Not being such an optimistic person, I think we're just a bit lost, but still fairly close.

> 
> >
> >[1] http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2
> >[2] http://docs.rackspace.com/images/api/v2/ci-devguide/content/POST_importImage_tasks_Image_Task_Calls.html#d6e4193
> >[3] http://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/tasks.py
> >[4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n70
> >[5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/guidelines/2015.07.rst
> >[6] https://review.openstack.org/#/c/213353/
> >
> >II. Complete Cinder and Nova V2 Adoption
> >
> >The Glance team originally committed to providing an Image Service API.
> >Besides our end users, both Cinder and Nova consume that API.
> >The shift from V1 to V2 has been a long road. We're far enough along,
> >and the V1 API has enough issues preventing us from using it for
> >DefCore, that we should push ahead and complete the V2 adoption. That
> >will let us properly deprecate and drop V1 support, and concentrate on
> >maintaining V2 for the necessary amount of time.
> >
> >There are a few specs for the work needed in Nova, but that work didn't
> >land in Liberty for a variety of reasons. We need resources from both
> >the Glance and Nova teams to work together to get this done as early as
> >possible in Mitaka to ensure that it actually lands this time. We
> >should be able to schedule a joint session at the summit to have the
> >conversation, and we need to take advantage of that opportunity to
> >ensure the details are fully resolved so that everyone understands the
> >plan.
> 
> Super important point. I'd like people replying to this email to focus on what
> we can do next and not why this hasn't been done. The latter will take us
> down a path that won't be useful at all and will just waste everyone's time.

++

> 
> That said, I fully agree with the above. Last time we talked, John Garbutt and
> Jay Pipes, from the nova team, raised their hands to help out with this effort.
> From Glance's side, Fei Long Wang and myself were working on the
> implementation. To help move this forward and to follow up on the latest
> plan, which allows this migration to be smoother than our original plan, we
> need folks from Glance to raise their hand.
> 
> If I'm not elected PTL, I'm more than happy to help out here but we need
> someone that can commit to the above right now and we'll likely need a
> team of at least 2 people to help move this forward in early Mitaka.
> 
> 
> >The work in Cinder is more complete, but may need to be reviewed to
> >ensure that it is using the API correctly, safely, and efficiently.
> >Again, this is a joint effort between the Glance and Cinder teams to
> >identify any issues and work out a resolution.
> >
> >Part of this work will also be to audit the Glance API documentation,
> >to ensure it accurately reflects what the APIs expect to receive and
> >return. There are reportedly at least a few cases where things are out
> >of sync right now. This will require some coordination with the
> >Documentation team.
> >
> >
> >Those are the two big priorities I see, based on things the rest of the
> >community needs from the team and existing commitments that have been
> >made. There are some other things that should also be addressed.
> >
> >
> >III. Security audits & bug fixes
> >
> >Five of 18 recent security reports were related to Glance [7]. It's not
> >surprising, given recent resource constraints, that addressing these
> >has been a challenge. Still, these should be given high priority.
> >
> >[7] https://security.openstack.org/search.html?q=glance&check_keywords=yes&area=default

I'm not sure if I'm more ashamed or happy about this. The fact that someone is actually looking into these issues and working on them is nice, though.

> 
> 
> +1 FWIW, we're in the process of growing Glance's security team. But
> it's clear from the above that there needs to be quicker replies to security
> issues.
> 
> >IV. Sorting out the glance-store question
> >
> >This was perhaps the most confusing thing I learned about this week.
> >The perception outside of the Glance team is that the library is meant
> >to be used by Nova and Cinder to communicate directly with the image
> >store, bypassing the REST API, to improve performance in several cases.
> >I know the Cinder team is especially interested in some sort of
> >interface for manipulating images inside the storage system without
> >having to download them to make copies (for RBD and other systems that
> >support CoW natively).
> 
> Correct, the above was one of the triggers for this effort and I like to think
> it's still one of the main drivers. There are other fancier things that could be
> done in the future assuming the library's API is refactored in a way that such
> features can be implemented.[0]
> 
> [0] https://review.openstack.org/#/c/188050/
> 
> >That doesn't seem to be
> >what the library is actually good for, though, since most of the Glance
> >core folks I talked to thought it was really a caching layer.
> >This discrepancy in what folks wanted vs. what they got may explain
> >some of the heated discussions in other email threads.
> 
> It's strange that some folks think of it as a caching layer. I believe one of the
> reasons there's such discrepancy is because not enough effort has been put
> in the refactor this library requires. The reason this library requires such a
> refactor is that it came out from the old `glance/store` code which was very
> specific to Glance's internal use.
> 
> The mistake here could be that the library should've been refactored
> *before* adopting it in Glance.
> 
> >
> >Frankly, given the importance of the other issues, I recommend leaving
> >glance-store standalone this cycle. Unless the work for dealing with
> >priorities I and II is made *significantly* easier by not having a
> >library, the time and energy it will take to re-integrate it with the
> >Glance service seems like a waste of limited resources.
> >The time to even discuss it may be better spent on the planning work
> >needed. That said, if the library doesn't provide the features its
> >users were expecting, it may be better to fold it back in and create a
> >different library with a better understanding of the requirements at
> >some point. The path to take is up to the Glance team, of course, but
> >we're already down far enough on the priority list that I think we'll
> >be lucky to finish the preceding items this cycle.
> 

I don't think we should put too much effort into this, given the reality that we don't even have agreement within the team on what the motivators are.

> 
> I don't think merging glance-store back into Glance will help with any of the
> priorities mentioned in this thread. If anything, refactoring the API might help
> with future work that could come after the v1 -> v2 migration is complete.
> 

Well, it would close some discussions that have been causing confusion lately, but I agree it might not be worth it just now.

> >
> >
> >Those are the development priorities I was able to identify in my
> >interviews this week, and there is one last thing the team needs to do
> >this cycle: Recruit more contributors.
> >
> >Almost every current core contributor I spoke with this week indicated
> >that their time was split between another project and Glance. Often
> higher priority had to be given, understandably, to internal product
> >work. That's the reality we work in, and everyone feels the same
> >pressures to some degree. One way to address that pressure is to bring
> >in help. So, we need a recruiting drive to find folks willing to
> >contribute code and reviews to the project to keep the team healthy. I
> >listed this item last because if you've made it this far you should see
> >just how much work the team has ahead. We're a big community, and I'm
> >confident that we'll be able to find help for the Glance team, but it
> >will require mentoring and education to bring people up to speed to
> >make them productive.
> 

I'm almost sad to say it, but I'm not really convinced that our issues stem from a lack of manpower. Obviously any help is welcome to improve the current situation, but I think it's extremely important to have this discussion before we take in the crowd of people who want to be part of developing Glance. ;)

> Fully agree here as well. However, I also believe that the fact that some
> efforts have gone to the wrong tasks has taken Glance to the situation it is
> today. More help is welcomed and required but a good strategy is more
> important right now.
> 
> FWIW, I agree that our focus has gone to different things and this has taken us
> to the status you mentioned above. More importantly, it's postponed some
> important tasks. However, I don't believe Glance is completely broken - I
> know you are not saying this but I'd like to mention it - and I certainly believe
> we can bring it back to a good state faster than expected, but I'm known for
> being a bit optimistic sometimes.
> 
> In this reply I was hard on us (Glance team), because I tend to be hard on
> myself and to dig deep into the things that are not working well. Many times
> I do this based on the feedback provided by others, which I personally value
> **a lot**. Unfortunately, I have to say that there hasn't been enough
> feedback about these issues until now. There was Mike's email[0] where I
> explicitly asked the community to speak up. This is to say that I appreciate
> the time you've taken to dig into this a lot and to encourage folks to *always*
> speak up and reach out through every *public* medium possible.
> 
> No one can fix rumors; we can fix issues, though.
> 
> Thanks again and lets all work together to improve this situation, Flavio

All the above is just so easy to agree on!

> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071971.html
> 
> --
> @flaper87
> Flavio Percoco

- Erno (jokke) Kuvaja

From drfish at us.ibm.com  Mon Sep 14 15:04:57 2015
From: drfish at us.ibm.com (Douglas Fish)
Date: Mon, 14 Sep 2015 15:04:57 +0000
Subject: [openstack-dev] [openstack-infra] format/pep8 checks for
	translations?
Message-ID: <201509141506.t8EF6Fxw023685@d01av05.pok.ibm.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/9d33cada/attachment.html>

From jaypipes at gmail.com  Mon Sep 14 15:12:44 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 14 Sep 2015 11:12:44 -0400
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
 multiple compute drivers?
In-Reply-To: <CALesnTzOP77F6qyvP6KK_evRRKuOw6iZr7jfNcdFJ6btJjhRqg@mail.gmail.com>
References: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
 <20150909171336.GG21846@jimrollenhagen.com>
 <CALesnTyuK17bUpYuA=9q+_L5TU7xxAF=tdsQmwtPtr+Z1vmt1w@mail.gmail.com>
 <1474881269.44702768.1441851955747.JavaMail.zimbra@redhat.com>
 <CALesnTzOP77F6qyvP6KK_evRRKuOw6iZr7jfNcdFJ6btJjhRqg@mail.gmail.com>
Message-ID: <55F6E3EC.1090306@gmail.com>

On 09/10/2015 12:00 PM, Jeff Peeler wrote:
> On Wed, Sep 9, 2015 at 10:25 PM, Steve Gordon <sgordon at redhat.com
> <mailto:sgordon at redhat.com>> wrote:
>
>     ----- Original Message -----
>     > From: "Jeff Peeler" <jpeeler at redhat.com <mailto:jpeeler at redhat.com>>
>     > To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>
>     >
>     > I'd greatly prefer using availability zones/host aggregates as I'm trying
>     > to keep the footprint as small as possible. It does appear that in the
>     > section "configure scheduler to support host aggregates" [1], that I can
>     > configure filtering using just one scheduler (right?). However, perhaps
>     > more importantly, I'm now unsure with the network configuration changes
>     > required for Ironic that deploying normal instances along with baremetal
>     > servers is possible.
>     >
>     > [1]
>     >http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
>
>     Hi Jeff,
>
>     I assume your need for a second scheduler is spurred by wanting to
>     enable different filters for baremetal vs virt (rather than
>     influencing scheduling using the same filters via image properties,
>     extra specs, and boot parameters (hints)?
>
>     I ask because if not you should be able to use the hypervisor_type
>     image property to ensure that images intended for baremetal are
>     directed there and those intended for kvm etc. are directed to those
>     hypervisors. The documentation [1] doesn't list ironic as a valid
>     value for this property but I looked into the code for this a while
>     ago and it seemed like it should work... Apologies if you had
>     already considered this.
>
>     Thanks,
>
>     Steve
>
>     [1]
>     http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
>
>
> I hadn't considered that, thanks.

Yes, that's the recommended way to direct scheduling requests -- via the 
hypervisor_type image property.
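For context, the property-based routing Steve and Jay describe rides on the scheduler's ImagePropertiesFilter. A sketch (filter list abridged; and note that `ironic` as a hypervisor_type value is, per Steve's caveat, observed in code rather than documented):

```ini
# nova.conf on the scheduler node -- make sure image properties are
# consulted when picking a host (ImagePropertiesFilter does this)
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter
```

Images are then tagged accordingly, e.g. `glance image-update --property hypervisor_type=ironic <baremetal-image>`, with `hypervisor_type=qemu` (or similar) on the images meant for the virt hosts.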

 > It's still unknown to me though if a
> separate compute service is required. And if it is required, how much
> segregation is required to make that work.

Yes, a separate nova-compute worker daemon is required to manage the 
baremetal Ironic nodes.

> Not being a networking guru, I'm also unsure if the Ironic setup
> instructions to use a flat network is a requirement or is just a sample
> of possible configuration.

AFAIK, flat DHCP networking is currently the only supported network 
configuration for Ironic.

 > In a brief out of band conversation I had, it
> does sound like Ironic can be configured to use linuxbridge too, which I
> didn't know was possible.

Well, LinuxBridge vs. OVS isn't really about whether you have a flat 
network topology or not. It's just a different way of doing the actual 
switching (virtual bridging vs. standard linux bridges).

I'm no Neutron expert, but I suspect that one could use either the 
LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along 
with a single flat provider network for your baremetal nodes.

Hopefully an Ironic + Neutron expert will confirm or deny this?
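In the meantime, here's one possible shape of that single-flat-network setup in ml2_conf.ini (all names are placeholders; treat this as a guess pending that confirmation):

```ini
[ml2]
type_drivers = flat
tenant_network_types = flat
mechanism_drivers = openvswitch
# mechanism_drivers = linuxbridge would be the other variant discussed

[ml2_type_flat]
flat_networks = physnet1

[ovs]
# map the provider network to the bridge cabled to the baremetal nodes
bridge_mappings = physnet1:br-baremetal
```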

Best,
-jay


From thierry at openstack.org  Mon Sep 14 15:27:30 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 14 Sep 2015 17:27:30 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442232202-sup-5997@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
Message-ID: <55F6E762.5090406@openstack.org>

Doug Hellmann wrote:
> [...]
> 1. Resolve the situation preventing the DefCore committee from
>    including image upload capabilities in the tests used for trademark
>    and interoperability validation.
> 
> 2. Follow through on the original commitment of the project to
>    provide an image API by completing the integration work with
>    nova and cinder to ensure V2 API adoption.
> [...]

Thanks Doug for taking the time to dive into Glance and to write this
email. I agree with your top two priorities as being a good summary of
what the "rest of the community" expects the Glance leadership to focus
on in the very short term.

Cheers,

-- 
Thierry Carrez (ttx)


From julien at danjou.info  Mon Sep 14 15:36:47 2015
From: julien at danjou.info (Julien Danjou)
Date: Mon, 14 Sep 2015 17:36:47 +0200
Subject: [openstack-dev] [Gnocchi] Some update on Grafana support
Message-ID: <m0egi1gghc.fsf@danjou.info>

Hi folks,

I've just published a little blog post about the job we've been working
on with the rest of the Gnocchi team: adding support for Gnocchi [1] in
Grafana [2]. As Gordon pointed out, that might be of some interest for some
of you, so I'll leave that here:

  https://julien.danjou.info/blog/2015/openstack-gnocchi-grafana


Feel free to reach me if you have questions.


[1]  http://launchpad.net/gnocchi

[2]  http://grafana.org

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/b78c3ccc/attachment.pgp>

From harlowja at outlook.com  Mon Sep 14 15:41:37 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Mon, 14 Sep 2015 08:41:37 -0700
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
 concerns
In-Reply-To: <55F67F28.70506@openstack.org>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
 <55F67F28.70506@openstack.org>
Message-ID: <BLU437-SMTP39044EECA496C509DA80AED85D0@phx.gbl>

Thierry Carrez wrote:
> Joshua Harlow wrote:
>> I believe it is the TC job (in part) to help make the community better,
>> and not via tags like this that IMHO actually make it worse;
>
> I think it's important to see the intent of the tag, rather than only
> judge on its current proposed name. The big tent is vast, and there are
> all kinds of projects, more or less mature, in it. The tag system is
> there to help our ecosystem navigate the big tent by providing specific
> bits of information about them.
>
> One such important bit of information is how risky it is to invest on a
> given project, how likely is it to still be around tomorrow. Some
> projects are so dependent on a single organization that they may,
> literally, disappear in one day when a single person (the CEO of that
> organization) decides so. I think our ecosystem should know about that,
> without having to analyze stackalytics data. This is why I support
> creating a tag describing project teams that are *extremely* fragile, at
> the other end of the spectrum from projects that are "healthily diverse".
>
>> I really
>> hope that folks on the TC can look back at their own projects they may
>> have created and ask how would their own project have turned out if they
>> were stamped with a similar tag...
>
> The thing is, one of the requirements to become an official OpenStack
> project in the "integrated release" model was to reach a given level of
> diversity in contributors. So "our" OpenStack projects just could not
> officially exist if they would have been stamped with a similar tag.
>
> The big tent is more inclusive, as we no longer consider diversity
> before we approve a project. The tag is the other side of the coin: we
> still need to inform our ecosystem that some projects are less mature or
> more fragile than others. The tag doesn't prevent the project to exist
> in OpenStack, it just informs our users that there is a level of risk
> associated with it.
>
> Or are you suggesting it is preferable to hide that risk from our
> operators/users, to protect that project team developers ?

Not really. I get the idea of informing operators/users about how this 
project may need more contributors. I just want it to be a positive 
statement vs. a negative one if possible; and I'd really like for the TC 
to also have some kind of proposal for helping those projects get to be 
more diverse (vs just labeling them).

Some ideas already mentioned + new ones:

* Put the project on some kind of 'help wanted' page.
* Help said projects sign-up for google summer of code (that may help 
increase diversity?).
* Something else?

>


From slukjanov at mirantis.com  Mon Sep 14 15:47:03 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Mon, 14 Sep 2015 18:47:03 +0300
Subject: [openstack-dev] [sahara] FFE for Ambari plugin
In-Reply-To: <CAOB5mPx5hN6JpD25raC1-E9gOJnKJpM6ZPSH3rHgEYjm=Zk1GA@mail.gmail.com>
References: <CAOB5mPx5hN6JpD25raC1-E9gOJnKJpM6ZPSH3rHgEYjm=Zk1GA@mail.gmail.com>
Message-ID: <CA+GZd7-pGT1RmSKTW1syFrhEMvsT-TWp5uVT8i79hHZ=zYHJ1Q@mail.gmail.com>

Approved.

This code is scoped to the new Ambari plugin, so there are no possible issues
with existing functionality.

https://etherpad.openstack.org/p/sahara-liberty-ffes updated.

On Mon, Sep 14, 2015 at 4:42 PM, Sergey Reshetnyak <sreshetniak at mirantis.com
> wrote:

> Hello
>
> I would like to request FFE for Ambari plugin.
>
> Core parts of the Ambari plugin are merged, but scaling and HA support are still in review.
>
> Patches:
> * Scaling: https://review.openstack.org/#/c/193081/
> * HA: https://review.openstack.org/#/c/197551/
>
> Blueprint: https://blueprints.launchpad.net/sahara/+spec/hdp-22-support
>
> Spec:
> https://github.com/openstack/sahara-specs/blob/master/specs/liberty/hdp-22-support.rst
>
> Thanks,
> Sergey Reshetnyak
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/5dc813ee/attachment.html>

From Kevin.Fox at pnnl.gov  Mon Sep 14 15:50:10 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Mon, 14 Sep 2015 15:50:10 +0000
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
 concerns
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4D5232@G9W0745.americas.hpqcorp.net>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
 <55F67F28.70506@openstack.org>,
 <EA70533067B8F34F801E964ABCA4C4410F4D5232@G9W0745.americas.hpqcorp.net>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A30969E@EX10MBOX03.pnnl.gov>

OpenSSL is a recent case. Everyone relied upon it in production, but it just wasn't getting the support it needed to be healthy, and everyone suffered. An event shone a light on the problem and it's getting better, but it was an unfortunate event that caused people to look at it. It would be better if this could be done more thoughtfully, which I think the tag is attempting to do. So it doesn't just happen to fledgling projects, but to old, well-established ones too.

Thanks,
Kevin
________________________________________
From: Kuvaja, Erno [kuvaja at hpe.com]
Sent: Monday, September 14, 2015 4:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my concerns

> -----Original Message-----
> From: Thierry Carrez [mailto:thierry at openstack.org]
> Sent: Monday, September 14, 2015 9:03 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
> concerns
>
<CLIP>
>
> Or are you suggesting it is preferable to hide that risk from our
> operators/users, to protect that project team developers ?
>
> --
> Thierry Carrez (ttx)
>
Unfortunately this seems to be the trend, not only in <insert any group here> but in society at large. Everything needs to be friendly to everyone and politically correct; it's not ok to talk about difficult topics by their real names because someone involved might get their feelings hurt, and it's not ok to compete because the losers might get their feelings hurt.

While it's a bit of a double-edged sword, I think this is an exact example of that. One could argue whether a project has a reason to exist if saying out loud "it does not have diversity in its development community" will kill it. I think there is a good amount of examples both ways in the open source world: abandoned projects get picked up because people think they still have a use case and value, while on the other side maybe promising projects get forgotten because no-one else really felt the urge to keep 'em alive.

Personally I feel this is a bit like stamping a feature experimental: "Please feel free to play around with it, but we do discourage you from deploying it in production unless you're willing to pick up its maintenance in case the team decides to do something else." There is nothing wrong with that.

I don't think these projects should be hiding behind the veil of the big tent, and consumer expectations should be set at least close to reality without consumers needing to do a huge amount of detective work. That was the point of the tags in the first place, no?

Obviously the above is just my blunt self. If someone went and rage-killed their project because of that, good for you; now get yourself together and do it again. ;)

- Erno (jokke) Kuvaja

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From dolph.mathews at gmail.com  Mon Sep 14 15:50:05 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Mon, 14 Sep 2015 10:50:05 -0500
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
In-Reply-To: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
Message-ID: <CAC=h7gU5R=9S9j=3Cu42AJsAtO5no1A=3+V_5tbUwYbdxA+_nA@mail.gmail.com>

Perhaps gamify the tagging process? By inverting the tagging convention
from something negative to something positive like
"sponsored-by-company-x", you're offering bragging rights to companies that
are the sole sponsors of projects. "Here's a list of projects that Company
X directly supports, exclusively." It's a marketing advantage: they're the
experts on the project, etc. For successful projects, diversification
happens naturally. I see no benefit from casting such projects in a
negative light.

The TC can view the same tag with a more critical eye.

On Fri, Sep 11, 2015 at 2:26 PM, Joshua Harlow <harlowja at outlook.com> wrote:

> Hi all,
>
> I was reading over the TC IRC logs for this week (my weekly reading) and I
> just wanted to let my thoughts and comments be known on:
>
>
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309
>
> I feel it's very important to send a positive note for new/upcoming
> projects and libraries... (and for everyone to remember that most projects
> do start off with a small set of backers). So I just wanted to try to
> ensure that we send a positive note with any tag like this that gets
> created and applied and that we all (especially the TC) really really
> considers the negative connotations of applying that tag to a project (it
> may effectively ~kill~ that project).
>
> I would really appreciate that instead of just applying this tag (or other
> similarly named tag to projects) that instead the TC try to actually help
> out projects with those potential tags in the first place (say perhaps by
> actively listing projects that may need more contributors from a variety of
> companies on the openstack blog under say a 'HELP WANTED' page or
> something). I'd much rather have that vs. any said tags, because the latter
> actually tries to help projects, vs just stamping them with a 'you are bad,
> figure out how to fix yourself, because you are not diverse' tag.
>
> I believe it is the TC job (in part) to help make the community better,
> and not via tags like this that IMHO actually make it worse; I really hope
> that folks on the TC can look back at their own projects they may have
> created and ask how would their own project have turned out if they were
> stamped with a similar tag...
>
> - Josh
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/5417aff2/attachment.html>

From amaksimov at mirantis.com  Mon Sep 14 15:53:22 2015
From: amaksimov at mirantis.com (Andrew Maksimov)
Date: Mon, 14 Sep 2015 18:53:22 +0300
Subject: [openstack-dev] [Fuel] Bugs which we should accept in 7.0 after
 Hard Code Freeze
Message-ID: <CAJQwwYO15aBkbPoUYw7s4Kj1Dsg0iSGJDA5Jr5ZcZsHZ5TzHsQ@mail.gmail.com>

Hi Everyone!

I would like to reiterate the bugfix process after Hard Code Freeze.
According to our HCF definition [1], we should only merge fixes for
*Critical* bugs to the *stable/7.0* branch; High and lower priority bugs
should NOT be accepted into *stable/7.0* anymore.
Also, we should accept patches for critical bugs into *stable/7.0* only
after the corresponding patchset with the same Change-Id has been accepted
into master.

[1] - https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
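For the backport mechanics, a small sketch in a throwaway repository (the repo
contents, commit message, and Change-Id value are made up for illustration).
The point is that `git cherry-pick -x` keeps the Change-Id footer intact, which
is what lets Gerrit match the stable/7.0 patch to the one already merged in
master:

```shell
# Sketch of the backport flow described above, in a throwaway repo.
set -e
work=$(mktemp -d) && cd "$work"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo base > file && git add file && git commit -qm "initial"
git branch stable/7.0                  # cut the stable branch at this point
echo fix >> file && git add file
git commit -qm "Fix critical bug" -m "Change-Id: I0123456789abcdef"
fix_sha=$(git rev-parse HEAD)          # the patch already merged in master
git checkout -q stable/7.0
git cherry-pick -x "$fix_sha" >/dev/null
git log -1 --format=%B                 # same Change-Id, plus "(cherry picked from ...)"
```

The `-x` flag also appends a "(cherry picked from commit ...)" line, so the
stable branch records exactly which master commit it came from.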

Regards,
Andrey Maximov
Fuel Project Manager
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/53a1026e/attachment.html>

From doug at doughellmann.com  Mon Sep 14 15:54:35 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 11:54:35 -0400
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my
	concerns
In-Reply-To: <BLU437-SMTP39044EECA496C509DA80AED85D0@phx.gbl>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>
 <55F67F28.70506@openstack.org>
 <BLU437-SMTP39044EECA496C509DA80AED85D0@phx.gbl>
Message-ID: <1442245863-sup-6598@lrrr.local>

Excerpts from Joshua Harlow's message of 2015-09-14 08:41:37 -0700:
> Thierry Carrez wrote:
> > Joshua Harlow wrote:
> >> I believe it is the TC job (in part) to help make the community better,
> >> and not via tags like this that IMHO actually make it worse;
> >
> > I think it's important to see the intent of the tag, rather than only
> > judge on its current proposed name. The big tent is vast, and there are
> > all kinds of projects, more or less mature, in it. The tag system is
> > there to help our ecosystem navigate the big tent by providing specific
> > bits of information about them.
> >
> > One such important bit of information is how risky it is to invest on a
> > given project, how likely is it to still be around tomorrow. Some
> > projects are so dependent on a single organization that they may,
> > literally, disappear in one day when a single person (the CEO of that
> > organization) decides so. I think our ecosystem should know about that,
> > without having to analyze stackalytics data. This is why I support
> > creating a tag describing project teams that are *extremely* fragile, at
> > the other end of the spectrum from projects that are "healthily diverse".
> >
> >> I really
> >> hope that folks on the TC can look back at their own projects they may
> >> have created and ask how would their own project have turned out if they
> >> were stamped with a similar tag...
> >
> > The thing is, one of the requirements to become an official OpenStack
> > project in the "integrated release" model was to reach a given level of
> > diversity in contributors. So "our" OpenStack projects just could not
> > officially exist if they would have been stamped with a similar tag.
> >
> > The big tent is more inclusive, as we no longer consider diversity
> > before we approve a project. The tag is the other side of the coin: we
> > still need to inform our ecosystem that some projects are less mature or
> > more fragile than others. The tag doesn't prevent the project to exist
> > in OpenStack, it just informs our users that there is a level of risk
> > associated with it.
> >
> > Or are you suggesting it is preferable to hide that risk from our
> > operators/users, to protect that project team developers ?
> 
> Not really. I get the idea of informing operators/users about how this 
> project may need more contributors. I just want it to be a positive 
> statement vs. a negative one if possible; and I'd really like for the TC 
> to also have some kind of proposal for helping those projects get to be 
> more diverse (vs just labeling them).
> 
> Some ideas already mentioned + new ones:
> 
> * Put the project on some kind of 'help wanted' page.
> * Help said projects sign-up for google summer of code (that may help 
> increase diversity?).

We're talking specifically about affiliation diversity. I'm not sure how
we would indicate student affiliation (independent? their university?
the company of their mentor?). I'm also not sure that a few short-term
helpers is necessarily going to improve any project's overall situation
in terms of affiliation diversity.

Doug

> * Something else?
> 
> >
> 


From Kevin.Fox at pnnl.gov  Mon Sep 14 16:01:24 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Mon, 14 Sep 2015 16:01:24 +0000
Subject: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and
	my	concerns
In-Reply-To: <CAC=h7gU5R=9S9j=3Cu42AJsAtO5no1A=3+V_5tbUwYbdxA+_nA@mail.gmail.com>
References: <BLU436-SMTP367CD5D945FA3A65696500D8500@phx.gbl>,
 <CAC=h7gU5R=9S9j=3Cu42AJsAtO5no1A=3+V_5tbUwYbdxA+_nA@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A3096EA@EX10MBOX03.pnnl.gov>

This might encourage companies to create new projects rather than support existing ones, just to get their name on something. That would be horrible.

Thanks,
Kevin
________________________________
From: Dolph Mathews [dolph.mathews at gmail.com]
Sent: Monday, September 14, 2015 8:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][TC] 'team:danger-not-diverse tag' and my concerns

Perhaps gamify the tagging process? By inverting the tagging convention from something negative to something positive like "sponsored-by-company-x", you're offering bragging rights to companies that are the sole sponsors of projects. "Here's a list of projects that Company X directly supports, exclusively." It's a marketing advantage: they're the experts on the project, etc. For successful projects, diversification happens naturally. I see no benefit from casting such projects in a negative light.

The TC can view the same tag with a more critical eye.

On Fri, Sep 11, 2015 at 2:26 PM, Joshua Harlow <harlowja at outlook.com<mailto:harlowja at outlook.com>> wrote:
Hi all,

I was reading over the TC IRC logs for this week (my weekly reading) and I just wanted to let my thoughts and comments be known on:

http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-08-20.01.log.html#l-309

I feel it's very important to send a positive note for new/upcoming projects and libraries... (and for everyone to remember that most projects do start off with a small set of backers). So I just wanted to try to ensure that we send a positive note with any tag like this that gets created and applied and that we all (especially the TC) really really considers the negative connotations of applying that tag to a project (it may effectively ~kill~ that project).

I would really appreciate that instead of just applying this tag (or other similarly named tag to projects) that instead the TC try to actually help out projects with those potential tags in the first place (say perhaps by actively listing projects that may need more contributors from a variety of companies on the openstack blog under say a 'HELP WANTED' page or something). I'd much rather have that vs. any said tags, because the latter actually tries to help projects, vs just stamping them with a 'you are bad, figure out how to fix yourself, because you are not diverse' tag.

I believe it is the TC job (in part) to help make the community better, and not via tags like this that IMHO actually make it worse; I really hope that folks on the TC can look back at their own projects they may have created and ask how would their own project have turned out if they were stamped with a similar tag...

- Josh

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/da088286/attachment.html>

From donald at hansenfamily.us  Mon Sep 14 16:04:54 2015
From: donald at hansenfamily.us (Donald Hansen)
Date: Mon, 14 Sep 2015 09:04:54 -0700
Subject: [openstack-dev] Angular Directives / Tables
Message-ID: <CA+Now5Pm0kXX3juXMcUg09q4Z=vG4MCQykZ5=uKtGXv0QgqmeQ@mail.gmail.com>

I'm curious about the status of the work on creating Angular directives that
mimic the functionality of the non-Angular code, specifically in relation
to tables.

I need to create a custom panel and I was debating whether to do it in Angular
or not. The panel needs a table and much of the functionality that is
already present in the non-Angular code (polling for updates on an action
to a row, modal confirmation for actions, etc.), similar to how the table
currently works on the Compute -> Instances panel.

Thoughts? Suggestions?

Thanks.
Donald
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/563b0e30/attachment.html>

From jpeeler at redhat.com  Mon Sep 14 16:06:41 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Mon, 14 Sep 2015 12:06:41 -0400
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAKO+H+LQm2YZHqGcCNH1AWbdpPNunH6qUiMnj=L-1k7jAqkV2A@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
 <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
 <CAKO+H+LQm2YZHqGcCNH1AWbdpPNunH6qUiMnj=L-1k7jAqkV2A@mail.gmail.com>
Message-ID: <CALesnTx_E3_398fSFR0oxRs268sNiqhsf5tsNc_0xexHd0UxiQ@mail.gmail.com>

On Mon, Sep 14, 2015 at 8:53 AM, Swapnil Kulkarni <coolsvap at gmail.com>
wrote:

>
>
> On Mon, Sep 14, 2015 at 5:14 PM, Sam Yaple <samuel at yaple.net> wrote:
>
>>
>> On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke <paul.bourke at oracle.com>
>> wrote:
>>
>>>
>>>
>>> On 13/09/15 18:34, Steven Dake (stdake) wrote:
>>>
>>>> Response inline.
>>>>
>>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:
>>>> sam at yaple.net>>
>>>> Date: Sunday, September 13, 2015 at 1:35 AM
>>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>>> Cc: "OpenStack Development Mailing List (not for usage questions)" <
>>>> openstack-dev at lists.openstack.org<mailto:
>>>> openstack-dev at lists.openstack.org>>
>>>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS +
>>>> RDO types
>>>>
>>>> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake) <stdake at cisco.com
>>>> <mailto:stdake at cisco.com>> wrote:
>>>> Response inline.
>>>>
>>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:
>>>> sam at yaple.net>>
>>>> Date: Saturday, September 12, 2015 at 11:34 PM
>>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>>> Cc: "OpenStack Development Mailing List (not for usage questions)" <
>>>> openstack-dev at lists.openstack.org<mailto:
>>>> openstack-dev at lists.openstack.org>>
>>>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS +
>>>> RDO types
>>>>
>>>>
>>>>
>>>> Sam Yaple
>>>>
>>>> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake) <stdake at cisco.com
>>>> <mailto:stdake at cisco.com>> wrote:
>>>>
>>>>
>>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>" <sam at yaple.net<mailto:
>>>> sam at yaple.net>>
>>>> Date: Saturday, September 12, 2015 at 11:01 PM
>>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>>> Cc: "OpenStack Development Mailing List (not for usage questions)" <
>>>> openstack-dev at lists.openstack.org<mailto:
>>>> openstack-dev at lists.openstack.org>>
>>>> Subject: Re: [kolla] Followup to review in gerrit relating to RHOS +
>>>> RDO types
>>>>
>>>>
>>>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake) <
>>>> stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>>> Hey folks,
>>>>
>>>> Sam had asked a reasonable set of questions regarding a patchset:
>>>> https://review.openstack.org/#/c/222893/
>>>>
>>>> The purpose of the patchset is to enable both RDO and RHOS as binary
>>>> choices on RHEL platforms.  I suspect over time, from-source deployments
>>>> have the potential to become the norm, but the business logistics of such a
>>>> change are going to take some significant time to sort out.
>>>>
>>>> Red Hat has two distros of OpenStack, neither of which is built from
>>>> source.  One is free, called RDO, and the other is paid, called RHOS.  In order to
>>>> obtain support for RHEL VMs running in an OpenStack cloud, you must be
>>>> running on RHOS RPM binaries.  You must also be running on RHEL.  It
>>>> remains to be seen whether Red Hat will actively support Kolla deployments
>>>> with a RHEL+RHOS set of packaging in containers, but my hunch says they
>>>> will.  It is in Kolla's best interest to implement this model and not make
>>>> it hard on Operators, since many of them do indeed want Red Hat's support
>>>> structure for their OpenStack deployments.
>>>>
>>>> Now to Sam's questions:
>>>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many more
>>>> do we add? What's our policy on adding a new type?"
>>>>
>>>> I'm not immediately clear on how binary fits in.  We could make binary
>>>> synonymous with the community supported version (RDO) while still
>>>> implementing the binary RHOS version.  Note Kolla does not "support" any
>>>> distribution or deployment of OpenStack; Operators will have to look to
>>>> their vendors for support.
>>>>
>>>> If everything between centos+rdo and rhel+rhos is mostly the same, then
>>>> I would think it would make more sense to just use the base ('rhel' in this
>>>> case) to branch off any differences in the templates. This would also allow
>>>> for the least amount of change and most generic implementation of this
>>>> vendor specific packaging. This would also match what we do with
>>>> oraclelinux, we do not have a special type for that and any specifics would
>>>> be handled by an if statement around 'oraclelinux' and not some special
>>>> type.
>>>>
>>>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.  RDO
>>>> also runs on RHEL.  I want to enable Red Hat customers to make a choice to
>>>> have a supported  operating system but not a supported Cloud environment.
>>>> The answer here is RHEL + RDO.  This leads to full support down the road if
>>>> the Operator chooses to pay Red Hat for it by an easy transition to RHOS.
>>>>
>>>> I am against including vendor-specific things like RHOS in Kolla
>>>> outright like you are proposing. Suppose another vendor comes along with a
>>>> new base and new packages. They are willing to maintain it, but its
>>>> something that no one but their customers with their licensing can use.
>>>> This is not something that belongs in Kolla and I am unsure that it is even
>>>> appropriate to belong in OpenStack as a whole. Unless RHEL+RHOS can be used
>>>> by those that do not have a license for it, I do not agree with adding it
>>>> at all.
>>>>
>>>> Sam,
>>>>
>>>> Someone stepping up to maintain a completely independent set of docker
>>>> images hasn't happened.  To date nobody has done that.  If someone were to
>>>> make that offer, and it was a significant change, I think the community as
>>>> a whole would have to evaluate such a drastic change.  That would certainly
>>>> increase our implementation and maintenance burden, which we don't want to
>>>> do.  I don't think what you propose would be in the best interest of the
>>>> Kolla project, but I'd have to see the patch set to evaluate the scenario
>>>> appropriately.
>>>>
>>>> What we are talking about is 5 additional lines to enable RHEL+RHOS
>>>> specific repositories, which is not very onerous.
>>>>
>>>> The fact that you can't use it directly has little bearing on whether
>>>> it's valid technology for OpenStack.  There are already two well-defined
>>>> historical precedents for non-licensed, unusable integration in OpenStack.
>>>> Cinder has 55 [1] volume drivers which they SUPPORT.  At least 80% of
>>>> them are for completely proprietary hardware, which in reality is mostly
>>>> just software that is impossible to use without a license.  There are 41
>>>> [2] Neutron drivers registered on the Neutron driver page; almost the
>>>> entirety require proprietary licenses for what amounts to integration to
>>>> access proprietary software.  The OpenStack preferred license is ASL for a
>>>> reason: to be business friendly.  Licensed software has a place in the
>>>> world of OpenStack, even if it only serves as an integration point, which
>>>> the proposed patch does.  We are consistent with community values on this
>>>> point or I wouldn't have bothered proposing the patch.
>>>>
>>>> We want to encourage people to use Kolla for proprietary solutions if
>>>> they so choose.  This is how support manifests, which increases the
>>>> strength of the Kolla project.  The presence of support increases the
>>>> likelihood that Kolla will be adopted by Operators.  If you're asking
>>>> Operators to maintain a fork for those 5 RHOS repo lines, that seems
>>>> unreasonable.
>>>>
>>>> I'd like to hear other Core Reviewer opinions on this matter and will
>>>> hold a majority vote on this thread as to whether we will facilitate
>>>> integration with third party software such as the Cinder Block Drivers, the
>>>> Neutron Network drivers, and various for-pay versions of OpenStack such as
>>>> RHOS.  I'd like all core reviewers to weigh in, please.  Without a complete
>>>> vote it will be hard to gauge what the Kolla community really wants.
>>>>
>>>> Core reviewers:
>>>> Please vote +1 if you ARE satisfied with integration with third party
>>>> unusable without a license software, specifically Cinder volume drivers,
>>>> Neutron network drivers, and various for-pay distributions of OpenStack and
>>>> container runtimes.
>>>> Please vote -1 if you ARE NOT satisfied with integration with third
>>>> party unusable without a license software, specifically Cinder volume
>>>> drivers, Neutron network drivers, and various for pay distributions of
>>>> OpenStack and container runtimes.
>>>>
>>>> A bit of explanation on your vote might be helpful.
>>>>
>>>> My vote is +1.  I have already provided my rationale.
>>>>
>>>> Regards,
>>>> -steve
>>>>
>>>> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix
>>>> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>>>>
>>>>
>>>> I appreciate you calling a vote so early. But I haven't had my
>>>> questions answered yet enough to even vote on the matter at hand.
>>>>
>>>> In this situation the closest thing we have to a plugin type system as
>>>> Cinder or Neutron does is our header/footer system. What you are proposing
>>>> is integrating a proprietary solution into the core of Kolla. Those Cinder
>>>> and Neutron plugins have external components and those external components
>>>> are not baked into the project.
>>>>
>>>> What happens if and when the RHOS packages require different tweaks in
>>>> the various containers? What if it requires changes to the Ansible
>>>> playbooks? It begins to balloon out past 5 lines of code.
>>>>
>>>> Unfortunately, the community _won't_ get to vote on whether or not to
>>>> implement those changes because RHOS is already in place. That's why I am
>>>> asking the questions now as this _right_ _now_ is the significant change
>>>> you are talking about, regardless of the lines of code.
>>>>
>>>> So the question is not whether we are going to integrate 3rd party
>>>> plugins, but whether we are going to allow companies to build proprietary
>>>> products in the Kolla repo. If we allow RHEL+RHOS then we would need to
>>>> allow another distro+company packaging and potential Ansible tweaks to get
>>>> it to work for them.
>>>>
>>>> If you really want to do what Cinder and Neutron do, we need a better
>>>> system for injecting code. That would be much closer to the plugins that
>>>> the other projects have.
>>>>
>>>> I'd like to have a discussion about this rather than immediately call
>>>> for a vote which is why I asked you to raise this question in a public
>>>> forum in the first place.
>>>>
>>>>
>>>> Sam,
>>>>
>>>> While a true code injection system might be interesting and would be
>>>> more parallel with the plugin model used in cinder and neutron (and to some
>>>> degrees nova), those various systems didn't begin that way.  Their driver
>>>> code at one point was completely integrated.  Only after 2-3 years was the
>>>> code broken into a fully injectable state.  I think that is an awfully high
>>>> bar to set to sort out the design ahead of time.  One of the reasons
>>>> Neutron has taken so long to mature is the Neutron community attempted to
>>>> do plugins at too early a stage which created big gaps in unit and
>>>> functional tests.  A more appropriate design would be for that pattern to
>>>> emerge from the system over time as people begin to adapt various distro
>>>> tech to Kolla.  If you looked at the patch in gerrit, there is one clear
>>>> pattern, "Setup distro repos", which at some point in the future could be
>>>> made to be injectable much as headers and footers are today.
>>>>
>>>> As for building proprietary products in the Kolla repository, the
>>>> license is ASL, which means it is inherently not proprietary.  I am fine
>>>> with the code base integrating with proprietary software as long as the
>>>> license terms are met; someone has to pay the mortgages of the thousands of
>>>> OpenStack developers.  We should encourage growth of OpenStack, and one of
>>>> the ways for that to happen is to be business friendly.  This translates
>>>> into first knowing the world is increasingly adopting open source
>>>> methodologies and facilitating that transition, and second accepting the
>>>> world has a whole slew of proprietary software that already exists today
>>>> that requires integration.
>>>>
>>>> Nonetheless, we have a difference of opinion on this matter, and I want
>>>> this work to merge prior to rc1.  Since this is a project policy decision
>>>> and not a technical issue, it makes sense to put it to a wider vote to
>>>> either unblock or kill the work.  It would be a shame if we reject all
>>>> driver and supported distro integration because we as a community take an
>>>> anti-business stance on our policies, but I'll live by what the community
>>>> decides.  This is not a decision either you or I may dictate which is why
>>>> it has been put to a vote.
>>>>
>>>> Regards
>>>> -steve
>>>>
>>>>
>>>>
>>>> For oracle linux, I'd like to keep RDO for oracle linux and from source
>>>> on oracle linux as choices.  RDO also runs on oracle linux.  Perhaps the
>>>> patch set needs some later work here to address this point in more detail,
>>>> but as is "binary" covers oracle linux.
>>>>
>>>> Perhaps what we should do is get rid of the binary type entirely.
>>>> Ubuntu doesn't really have a binary type, they have a cloudarchive type, so
>>>> binary doesn't make a lot of sense.  Since Ubuntu to my knowledge doesn't
>>>> have two distributions of OpenStack, the same logic wouldn't apply to
>>>> providing a full support onramp for Ubuntu customers.  Oracle doesn't
>>>> provide a binary type either; their binary type is really RDO.
>>>>
>>>> The binary packages for Ubuntu are _packaged_ by the cloudarchive team.
>>>> But when an OpenStack release coincides with an LTS release (Icehouse
>>>> and 14.04 was the last one), you do not add a new repo because the packages
>>>> are in the main Ubuntu repo.
>>>>
>>>> Debian provides its own packages as well. I do not want a type name per
>>>> distro. 'binary' catches all packaged OpenStack things by a distro.
>>>>
>>>>
>>>> FWIW I never liked the transition away from rdo in the repo names to
>>>> binary.  I guess I should have -1'ed those reviews back then, but I think
>>>> it's time to either revisit the decision or agree that binary and rdo
>>>> mean the same thing in a centos and rhel world.
>>>>
>>>> Regards
>>>> -steve
>>>>
>>>>
>>>> Since we implement multiple bases, some of which are not RPM based, it
>>>> doesn't make much sense to me to have rhel and rdo as a type which is why
>>>> we removed rdo in the first place in favor of the more generic 'binary'.
>>>>
>>>>
>>>> As such the implied second question, "How many more do we add?", sort of
>>>> sounds like "how many do we support?".  The answer to the second question
>>>> is none; again, the Kolla community does not support any deployment of
>>>> OpenStack.  To the question as posed, how many we add, the answer is it is
>>>> really up to community members willing to implement and maintain the
>>>> work.  In this case, I have personally stepped up to implement RHOS and
>>>> maintain it going forward.
>>>>
>>>> Our policy on adding a new type could be simple or onerous.  I prefer
>>>> simple.  If someone is willing to write the code and maintain it so that it
>>>> stays in good working order, I see no harm in it remaining in tree.  I
>>>> don't suspect there will be a lot of people interested in adding multiple
>>>> distributions for a particular operating system.  To my knowledge, and I
>>>> could be incorrect, Red Hat is the only OpenStack company with a paid and
>>>> community version available of OpenStack simultaneously and the paid
>>>> version is only available on RHEL.  I think the risk of RPM based
>>>> distributions plus their type count spiraling out of manageability is low.
>>>> Even if the risk were high, I'd prefer to keep an open mind to facilitate
>>>> an increase in diversity in our community (which is already fantastically
>>>> diverse, btw ;)
>>>>
>>>> I am open to questions, comments or concerns.  Please feel free to
>>>> voice them.
>>>>
>>>> Regards,
>>>> -steve
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>> Both arguments sound valid to me, both have pros and cons.
>>>
>>> I think it's valuable to look to the experiences of Cinder and Neutron
>>> in this area, both of which seem to have the same scenario and have existed
>>> much longer than Kolla. From what I know of how these operate, proprietary
>>> code is allowed to exist in the mainline so long as a certain set of criteria
>>> is met. I'd have to look it up, but I think it mostly comes down to the
>>> relevant parties having to "play by the rules", e.g. provide a working CI, help
>>> with reviews, attend weekly meetings, etc. If Kolla can look to craft a
>>> similar set of criteria for proprietary code down the line, I think it
>>> should work well for us.
>>>
>>> Steve has a good point in that it may be too much overhead to implement
>>> a plugin system or similar up front. Instead, we should actively monitor
>>> the overhead in terms of reviews and code size that these extra
>>> implementations add. Perhaps agree to review it at the end of Mitaka?
>>>
>>> Given the project is young, I think it can also benefit from the
>>> increased usage and exposure from allowing these parties in. I would hope
>>> independent contributors will not feel excluded by not being able to
>>> use/test with the pieces that need a license. The libre distros will remain
>>> #1 for us.
>>>
>>> So based on the above explanation, I'm +1.
>>>
>>> -Paul
>>>
>>>
>>>
>>
>>
>> Given Paul's comments I would agree here as well. I would like to nail
>> down the 'criteria' required for Kolla to allow this proprietary code into
>> the main repo as soon as possible though, and suggest that being able to
>> gate against it be a bare-minimum criterion.
>>
>> As for a plugin system, I also agree with Paul that we should check the
>> overhead of including these other distros and any types needed after we
>> have had time to see if they do introduce any additional overhead.
>>
>> So for the question 'Do we allow code that relies on proprietary
>> packages?' I would vote +1, with the condition that we define the
>> requirements of allowing that code as soon as possible.
>>
>>
>>
> I am +1 on the system with the following criteria we already discussed above
>
> - Set of defined requirements to adhere to for contributing and maintaining
> - Set of contributors contributing to and reviewing the changes in Kolla.
> - Set of maintainers available to connect with if we require any urgent
> attention to any failures in Kolla due to the code.
> - CI if possible, we can evaluate the options as we finalize.
>
> Since we are on the subject of "OpenStack as a whole", I think OpenStack
> has evolved better with more operators contributing to the code base, since
> we need to let the code break to make it robust. This is especially easy to
> observe with Cinder and Neutron, where the different nature of the
> implementations has driven improvements to the base source that were
> never thought of.
>
> I agree with Paul that Kolla benefits more with increased participation
> from Operators who are willing to update and use it.
>

In general, whenever somebody steps up to maintain an extension to a
project, the net benefit is good, even if that extension is not usable by
the general public. However, that assumes Sam's concern about not making
the maintenance burden significantly higher is addressed. I believe
that is the case here.

So I see no reason to not support all the types suggested, albeit perhaps
with a slightly different naming scheme:
source - obviously, using tarballs as defined in build.ini
binary - for whenever there is no distinction necessary for what type of
packages are in use
binary-rdo - for the community version of RPM OpenStack packaging
binary-rhos - for the paid version of RPM OpenStack packaging

Including defaults for binary to be interpreted as binary-rdo on RPM
distros would allow the simple existing choices to remain while also
allowing greater flexibility. Since these relationships can be expressed in
python rather than with file system symlinks, it should be clearer than
before.
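That default-resolution idea can be sketched with a small lookup table. The
snippet below is purely illustrative (none of these names come from the actual
Kolla code base); it just shows how defaults expressed in Python stay readable
where symlinks would not:

```python
# Hypothetical sketch: resolve a requested install type to a concrete
# package source per distro, instead of relying on symlinked directories.
# All names here are invented for illustration, not taken from Kolla.

DEFAULTS = {
    # On RPM-based distros, plain "binary" means the community RDO packages.
    ("centos", "binary"): "binary-rdo",
    ("rhel", "binary"): "binary-rdo",
}

KNOWN_TYPES = {"source", "binary", "binary-rdo", "binary-rhos"}


def resolve_install_type(distro, install_type):
    """Map (distro, requested type) to the concrete type to build."""
    if install_type not in KNOWN_TYPES:
        raise ValueError("unknown install type: %s" % install_type)
    return DEFAULTS.get((distro, install_type), install_type)


print(resolve_install_type("centos", "binary"))    # binary-rdo
print(resolve_install_type("ubuntu", "binary"))    # binary
print(resolve_install_type("rhel", "binary-rhos"))  # binary-rhos
```

Explicitly requesting binary-rhos always wins; the table only fills in a
default when the generic "binary" is asked for on an RPM distro.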

As far as CI goes, everybody knows more CI is better. I'd think that as a
project Kolla would prioritize source and binary first (assuming the
defaults proposed above), then move onto other binary types. Perhaps I'm
naive, but I really do hope the playbooks won't need any modifications.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/b6d3b6eb/attachment.html>

From victoria at vmartinezdelacruz.com  Mon Sep 14 16:12:11 2015
From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=)
Date: Mon, 14 Sep 2015 13:12:11 -0300
Subject: [openstack-dev] Call for mentors Outreachy Dec-March 2016
Message-ID: <CAJ_e2gD+-h_mT+3gTWh=a3NEDsn_C_AFi5DT81dT_KN_RvJ_Rw@mail.gmail.com>

Hi everyone,

OpenStack will join the next round of the Outreachy internships and we need
your help: we are looking for full-time contributors willing to share some of
their time, knowledge and experience to make our open-source organization
more diverse and have the opportunity to meet new talents.

*What is Outreachy*

Outreachy helps people from groups underrepresented in free and open source
software get involved. Read more in https://www.gnome.org/outreachy/.

OpenStack has been participating in these internships since Jan-Apr 2013,
and we have had really good results: we got more full-time contributors, we
got lots of good quality contributions, interns had the opportunity to give
a great step in their professional careers and also it has been an
enriching experience for everyone involved with the internship. More
information about OpenStack participation in Outreachy can be found at
https://wiki.openstack.org/wiki/Outreachy.

*Applications open next September 22nd, and the deadline for applications
is October 27th. *https://wiki.gnome.org/Outreachy/2015/DecemberMarch

*TLDR*

Contributors interested in mentoring in Outreachy, please feel free to ask
any questions and ask for further directions here or in #openstack-opw on
Freenode.

Try to find and propose a task for the next couple of months for your
mentee. The task should be something that can be finished in three months.
Add this idea to this wiki https://wiki.openstack.org/wiki/Internship_ideas.

Please join #openstack-opw as well, to welcome applicants and help them
with their setups.

Another way to get in touch is through the internships mailing list,
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-internships.

*Mentors*

*Time requirements*

Being a mentor shouldn't take a lot of time from your working week. Mentors
from previous rounds organized themselves differently: you can schedule a
weekly meeting to catch up on your mentee's latest work, questions and next
steps; others prefer having a short daily meeting; you choose how to organize
your agenda.

Consider that different mentees will need different guidance, but we aim to
work with proactive and independent people. These qualities are assessed
during the application process, so make sure to participate actively in it
so that you get a mentee you can work with.

*Tasks*

Mentors are supposed to act as a channel between mentees and the rest of the
community. You should be able to help them understand the community
workflow (tools, milestones), introduce them to the team you are working
with (so they also have other people to learn from), help them to gain
confidence and participate in the weekly meetings.

You are not supposed to teach them the basics (say, we cannot have people
that cannot program in Python), but you should be able to point them in the
right direction with the tools we use in the community and they should be
able to learn at their own pace.

*Mentees*

*Time requirements*

It is a full-time internship, so mentees are expected to work 40 hours per
week on the assigned project.

*Tasks*

Mentees are supposed to devote their time to learning about the community,
getting familiar with the tools used to contribute to OpenStack,
understanding the basics of the code base of the project they are working
on and finishing the assigned task before the internship ends. They are
also encouraged to help with bug triaging, bug fixing and code reviewing.
All this should be shared in a personal blog mentees have to set up when
they start with their internships.

Thanks all,

Victoria
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/9ac9df28/attachment.html>

From thingee at gmail.com  Mon Sep 14 16:15:44 2015
From: thingee at gmail.com (Mike Perez)
Date: Mon, 14 Sep 2015 09:15:44 -0700
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
Message-ID: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>

Hello all,

I will not be running for Cinder PTL this next cycle. Each cycle I ran
was for a reason [1][2], and the Cinder team should feel proud of our
accomplishments:

* Spearheading the Oslo work to allow *all* OpenStack projects to have
their databases be independent of services during upgrades.
* Providing quality to OpenStack operators and distributors with over
60 accepted block storage vendor drivers with reviews and enforced CI
[3].
* Helping other projects with third party CI for their needs.
* Being a welcoming group to new contributors. As a result we grew greatly [4]!
* Providing documentation for our work! We did it for Kilo [5], and I
was very proud to see the team has already started doing this on their
own to prepare for Liberty.

I would like to thank this community for making me feel accepted in
2010. I would like to thank John Griffith for starting the Cinder
project, and empowering me to lead the project through these couple of
cycles.

With the community's continued support I do plan on continuing my
efforts, but focusing cross project instead of just Cinder. The
accomplishments above are just some of the things I would like to help
others with to make OpenStack as a whole better.


[1] - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
[3] - http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
[4] - http://thing.ee/cinder/active_contribs.png
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7

--
Mike Perez


From major at mhtx.net  Mon Sep 14 16:34:43 2015
From: major at mhtx.net (Major Hayden)
Date: Mon, 14 Sep 2015 11:34:43 -0500
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <1441909133-sup-2320@fewbar.com>
 <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>
Message-ID: <55F6F723.9050406@mhtx.net>

On 09/14/2015 03:28 AM, Jesse Pretorius wrote:
> I agree with Clint that this is a good approach.
> 
> If there is an automated way that we can verify the security of an installation at a reasonable/standardised level then I think we should add a gate check for it too.

Here's a rough draft of a spec.  Feel free to throw some darts.

  https://review.openstack.org/#/c/222619/

--
Major Hayden


From doug at doughellmann.com  Mon Sep 14 16:40:25 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 12:40:25 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4D6494@G9W0745.americas.hpqcorp.net>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D6494@G9W0745.americas.hpqcorp.net>
Message-ID: <1442247798-sup-5628@lrrr.local>

Excerpts from Kuvaja, Erno's message of 2015-09-14 15:02:59 +0000:
> > -----Original Message-----
> > From: Flavio Percoco [mailto:flavio at redhat.com]
> > Sent: Monday, September 14, 2015 1:41 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > 
> > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > >
> > >After having some conversations with folks at the Ops Midcycle a few
> > >weeks ago, and observing some of the more recent email threads related
> > >to glance, glance-store, the client, and the API, I spent last week
> > >contacting a few of you individually to learn more about some of the
> > >issues confronting the Glance team. I had some very frank, but I think
> > >constructive, conversations with all of you about the issues as you see
> > >them. As promised, this is the public email thread to discuss what I
> > >found, and to see if we can agree on what the Glance team should be
> > >focusing on going into the Mitaka summit and development cycle and how
> > >the rest of the community can support you in those efforts.
> > >
> > >I apologize for the length of this email, but there's a lot to go over.
> > >I've identified 2 high priority items that I think are critical for the
> > >team to be focusing on starting right away in order to use the upcoming
> > >summit time effectively. I will also describe several other issues that
> > >need to be addressed but that are less immediately critical. First the
> > >high priority items:
> > >
> > >1. Resolve the situation preventing the DefCore committee from
> > >   including image upload capabilities in the tests used for trademark
> > >   and interoperability validation.
> > >
> > >2. Follow through on the original commitment of the project to
> > >   provide an image API by completing the integration work with
> > >   nova and cinder to ensure V2 API adoption.
> > 
> > Hi Doug,
> > 
> > First and foremost, I'd like to thank you for taking the time to dig into these
> > issues, and for reaching out to the community seeking for information and a
> > better understanding of what the real issues are. I can imagine how much
> > time you had to dedicate on this and I'm glad you did.
> 
> ++ Really thanks for taking the time for this.
> > 
> > Now, to your email, I very much agree with the priorities you mentioned
> > above and I'd like for, whomever will win Glance's PTL election, to bring focus
> > back on that.
> > 
> > Please, find some comments in-line for each point:
> > 
> > 
> > >
> > >I. DefCore
> > >
> > >The primary issue that attracted my attention was the fact that DefCore
> > >cannot currently include an image upload API in its interoperability
> > >test suite, and therefore we do not have a way to ensure
> > >interoperability between clouds for users or for trademark use. The
> > >DefCore process has been long, and at times confusing, even to those of
> > >us following it sort of closely. It's not entirely surprising that some
> > >projects haven't been following the whole time, or aren't aware of
> > >exactly what the whole thing means. I have proposed a cross-project
> > >summit session for the Mitaka summit to address this need for
> > >communication more broadly, but I'll try to summarize a bit here.
> > 
> 
> Looking at how different OpenStack-based public clouds limit or fully prevent their users from uploading images to their deployments, I'm not convinced Image Upload should be included in this definition.

The problem with that approach is that it means end consumers of
those clouds cannot write common tools that include image uploads,
which is a frequently used/desired feature. What makes that feature
so special that we don't care about it for interoperability?

>  
> > +1
> > 
> > I think it's quite sad that some projects, especially those considered to be
> > part of the `starter-kit:compute`[0], don't follow closely what's going on in
> > DefCore. I personally consider this a task PTLs should incorporate in their role
> > duties. I'm glad you proposed such session, I hope it'll help raising awareness
> > of this effort and it'll help moving things forward on that front.
> > 
> > 
> > >
> > >DefCore is using automated tests, combined with business policies, to
> > >build a set of criteria for allowing trademark use. One of the goals of
> > >that process is to ensure that all OpenStack deployments are
> > >interoperable, so that users who write programs that talk to one cloud
> > >can use the same program with another cloud easily. This is a *REST
> > >API* level of compatibility. We cannot insert cloud-specific behavior
> > >into our client libraries, because not all cloud consumers will use
> > >those libraries to talk to the services. Similarly, we can't put the
> > >logic in the test suite, because that defeats the entire purpose of
> > >making the APIs interoperable. For this level of compatibility to work,
> > >we need well-defined APIs, with a long support period, that work the
> > >same no matter how the cloud is deployed. We need the entire community
> > >to support this effort. From what I can tell, that is going to require
> > >some changes to the current Glance API to meet the requirements. I'll
> > >list those requirements, and I hope we can discuss them to a degree
> > >that ensures everyone understands them. I don't want this email thread
> > >to get bogged down in implementation details or API designs, though, so
> > >let's try to keep the discussion at a somewhat high level, and leave
> > >the details for specs and summit discussions. I do hope you will
> > >correct any misunderstandings or misconceptions, because unwinding this
> > >as an outside observer has been quite a challenge and it's likely I
> > >have some details wrong.
> 
> This just reinforces my doubt above. Including upload in the DefCore requirements would probably just close out lots of the public clouds out there. Is that the intention here?

No, absolutely not. The intention is to provide clear technical
direction about what we think the API for uploading images should be.

> 
> > >
> > >As I understand it, there are basically two ways to upload an image to
> > >glance using the V2 API today. The "POST" API pushes the image's bits
> > >through the Glance API server, and the "task" API instructs Glance to
> > >download the image separately in the background. At one point
> > >apparently there was a bug that caused the results of the two different
> > >paths to be incompatible, but I believe that is now fixed.
> > >However, the two separate APIs each have different issues that make
> > >them unsuitable for DefCore.
> 
> While it is true that there are two ways to get an image into Glance via the V2 Images API, the use cases for those two are completely different. While some (like Flavio) might argue the tasks are an internal-only API, which they might well be in a private cloud, others might be willing to expose only that for their public cloud users due to the improved processability (antivirus, etc.), and the last group is just not willing to let their users bring their own images in at all.

Let's stick to the technical issue of *how* upload works, and then worry
about whether it should be supported.

As far as how it works, I really don't care whether it's a direct upload
or a task-based thing, as long as the API itself is well-defined. I can
see good technical arguments for either implementation model, but if it
was up to me something that started a background task but was
specifically tied to it being an image upload/import process would be
best. But that's really a decision for the glance team to be making.
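To make that distinction concrete, here is a rough sketch of the two request
shapes being compared; both payloads are hypothetical illustrations and do
not reproduce the real Glance v2 API:

```python
# Hypothetical payloads, for illustration only -- not the real Glance v2 API.

# Generic task API: the caller must already know which task types the cloud
# supports and the ad-hoc shape of the free-form "input" blob.
generic_task = {
    "type": "import",
    "input": {
        "import_from": "http://example.com/image.qcow2",
        "image_properties": {"name": "my-image"},
    },
}

# Upload-specific operation: the request is explicitly an image import, so
# its schema can be fixed, published, and validated the same way everywhere.
image_import = {
    "method": {"name": "web-download", "uri": "http://example.com/image.qcow2"},
}

# The interoperability difference in miniature: the generic "input" blob
# cannot be validated without out-of-band knowledge of the deployment,
# while the import payload can be checked against one published schema.
REQUIRED_IMPORT_KEYS = {"name", "uri"}
assert set(image_import["method"]) == REQUIRED_IMPORT_KEYS
```

The point is not the field names (which are invented here) but that tying the
background operation to a single well-defined purpose makes its schema, and
therefore client behavior, portable across clouds.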

> 
> Looking outside of the box, the Tasks API should not be included in any core definition as it's just an interface to _optional_ plugins. Obviously if there are different classifications, it might be included in some.

Optional pieces are by definition not interoperable, so it's perfectly
fine to leave it out of DefCore. We should still have tempest tests for
the features, of course.

> 
> > >
> > >The DefCore process relies on several factors when designating APIs for
> > >compliance. One factor is the technical direction, as communicated by
> > >the contributor community -- that's where we tell them things like "we
> > >plan to deprecate the Glance V1 API". In addition to the technical
> > >direction, DefCore looks at the deployment history of an API. They do
> > >not want to require deploying an API if it is not seen as widely
> > >usable, and they look for some level of existing adoption by cloud
> > >providers and distributors as an indication of that the API is desired
> > >and can be successfully used. Because we have multiple upload APIs, the
> > >message we're sending on technical direction is weak right now, and so
> > >they have focused on deployment considerations to resolve the question.
> > 
> > The task upload process you're referring to is the one that uses the `import`
> > task, which allows you to download an image from an external source,
> > asynchronously, and import it in Glance. This is the old `copy-from` behavior
> > that was moved into a task.
> > 
> > The "fun" thing about this - and I'm sure other folks in the Glance community
> > will disagree - is that I don't consider tasks to be a public API. That is to say, I
> > would expect tasks to be an internal API used by cloud admins to perform
> > some actions (based on its current implementation). Eventually, some of
> > these tasks could be triggered from the external API but as background
> > operations that are triggered by the well-known public ones and not through
> > the task API.
> > 
> > Ultimately, I believe end-users of the cloud simply shouldn't care about what
> > tasks are or aren't and more importantly, as you mentioned later in the
> > email, tasks make clouds not interoperable. I'd be pissed if my public image
> > service would ask me to learn about tasks to be able to use the service.
> 
> I'd like to bring another argument here. I think our public Images API should behave consistently regardless of whether tasks are enabled in the deployment and with which plugins. This means that _if_ we expect glance upload to work over the POST API and that endpoint is available in the deployment, I would expect a) my image hash to match the one the cloud returns, b) all or none of the clouds to reject my image if it gets flagged by Vendor X virus definitions, and c) it being bootable across clouds given it's in a supported format. On the other hand, if I get told by the vendor that I need to use a cloud-specific task that accepts only OVA-compliant image packages and that the image will be checked before acceptance, my expectations are quite different and I would expect all that to happen outside of the standard API, as it's not consistent behavior.

I'm not sure what you're arguing. Is it not possible to have a
background process import an image without modifying it?

> 
> > 
> > Long story short, I believe the only upload API that should be considered is
> > the one that uses HTTP and, eventually, to bring compatibility with v1 as far
> > as the copy-from behavior goes, Glance could bring back that behavior on
> > top of the task (just dropping this here for the sake of discussion and
> > interoperability).
> > 
> > 
> > >The POST API is enabled in many public clouds, but not consistently.
> > >In some clouds like HP, a tenant requires special permission to use the
> > >API. At least one provider, Rackspace, has disabled the API entirely.
> > >This is apparently due to what seems like a fair argument that
> > >uploading the bits directly to the API service presents a possible
> > >denial of service vector. Without arguing the technical merits of that
> > >decision, the fact remains that without a strong consensus from
> > >deployers that the POST API should be publicly and consistently
> > >available, it does not meet the requirements to be used for DefCore
> > >testing.
> > 
> > This is definitely unfortunate. I believe a good step forward for this
> > discussion would be to create a list of issues related to uploading images and
> > see how those issues can be addressed. The result from that work might be
> > that it's not recommended to make that endpoint public but again, without
> > going through the issues, it'll be hard to understand how we can improve this
> > situation. I expect most of these issues to have a security impact.
> > 
> 
> ++, regardless of the helpfulness of that discussion, I don't think it's a realistic expectation to prioritize that work so highly that the majority of those issues would be solved, alongside the priorities at the top of this e-mail, within a cycle.

We should establish the priorities, even if the roadmap to solving them
is longer than one cycle.

> 
> > 
> > >The task API is also not widely deployed, so its adoption for DefCore
> > >is problematic. If we provide a clear technical direction that this API
> > >is preferred, that may overcome the lack of adoption, but the current
> > >task API seems to have technical issues that make it fundamentally
> > >unsuitable for DefCore consideration. While the task API addresses the
> > >problem of a denial of service, and includes useful features such as
> > >processing of the image during import, it is not strongly enough
> > >defined in its current form to be interoperable.
> > >Because it's a generic API, the caller must know how to fully construct
> > >each task, and know what task types are supported in the first place.
> > >There is only one "import" task type supported in the Glance code
> > >repository right now, but it is not clear that "import"
> > >always uses the same arguments, or interprets them in the same way.
> > >For example, the upstream documentation [1] describes a task that
> > >appears to use a URL as source, while the Rackspace documentation [2]
> > >describes a task that appears to take a swift storage location.
> > >I wasn't able to find JSONSchema validation for the "input" blob
> > >portion of the task in the code [3], though that may happen down inside
> > >the task implementation itself somewhere.
> > 
> > 
> > The above sounds pretty accurate, as there's currently just 1 flow that can be
> > triggered (the import flow) and that accepts an input, which is JSON. As I
> > mentioned above, I don't believe tasks should be part of the public API and
> > this is yet another reason why I think so. The tasks API is not well defined,
> > as there's currently no good way to define the expected input in a
> > backwards-compatible way and to provide all the required validation.
> > 
> > I like having tasks in Glance, despite my comments above - but I like them for
> > cloud usage and not public usage.
> > 
> > As far as Rackspace's docs/endpoint goes, I'd assume this is an error in their
> > documentation, since Glance currently doesn't allow[0] swift URLs to be
> > imported (not even in juno[1]).
> > 
> > [0]
> > http://git.openstack.org/cgit/openstack/glance/tree/glance/common/script
> > s/utils.py#n84
> > [1]
> > http://git.openstack.org/cgit/openstack/glance/tree/glance/common/script
> > s/utils.py?h=stable/juno#n83
> > 
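To make the vagueness concrete: below is a minimal, purely hypothetical sketch of the sort of validation an import task's "input" blob could enforce. The field names are illustrative only; they are not the actual Glance task schema, which is exactly the problem — there is no single authoritative definition to validate against.

```python
# Hypothetical input validation for an import task. "import_from",
# "import_from_format", and "image_properties" are made-up field names
# standing in for whatever a real schema would declare.

REQUIRED_FIELDS = {"import_from", "import_from_format"}
OPTIONAL_FIELDS = {"image_properties"}

def validate_import_input(task_input):
    """Raise ValueError unless task_input matches the declared shape."""
    if not isinstance(task_input, dict):
        raise ValueError("task input must be a JSON object")
    missing = REQUIRED_FIELDS - task_input.keys()
    if missing:
        raise ValueError("missing fields: %s" % ", ".join(sorted(missing)))
    unknown = task_input.keys() - (REQUIRED_FIELDS | OPTIONAL_FIELDS)
    if unknown:
        raise ValueError("unknown fields: %s" % ", ".join(sorted(unknown)))
    return task_input
```

With a published schema like this, a caller could know up front which inputs any conforming cloud will accept, instead of discovering each provider's expectations from its documentation.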
> > >Tasks also come from plugins, which may be installed differently based
> > >on the deployment. This is an interesting approach to creating API
> > >extensions, but isn't discoverable enough to write interoperable tools
> > >against. Most of the other projects are starting to move away from
> > >supporting API extensions at all because of interoperability concerns
> > >they introduce. Deployers should be able to configure their clouds to
> > >perform well, but not to behave in fundamentally different ways.
> > >Extensions are just that, extensions. We can't rely on them for
> > >interoperability testing.
> > 
> > This is, indeed, an interesting interpretation of what tasks are for.
> > I'd probably just blame us (Glance team) for not communicating properly
> > what tasks are meant to be. I don't believe tasks are a way to extend the
> > *public* API and I'd be curious to know if others see it that way. I fully agree
> > that just breaks interoperability and as I've mentioned a couple of times in
> > this reply already, I don't even think tasks should be part of the public API.
> 
> Hmm-m, that's exactly how I have seen it. Plugins that can be provided to expand the standard functionality. I totally agree that these should not be relied on for interoperability. I've always assumed that that has also been the reason why tasks have not had much focus, as we've prioritized the actual API functionality and stability over its expandability.

My issue with the task API as it stands is not how it works on the
back-end. It's only the actual REST API entry point that concerns me,
because of its vagueness.

If there are different semantics for the background task in different
providers because of different plugins, then we need to write tests to
specify the aspects we care about being standardized. You mentioned
image hashes as one case, and I can see that being useful. Rather than
allowing the cloud to modify an image on upload, we might enforce that
the hash of the image we download is the same as the hash of the image
imported. That still allows for unpacking and scanning on the back-end.

> 
> > 
> > But again, we did a very poor job communicating this[0]. Nonetheless, for the sake of
> > providing enough information about tasks and sources to read from, I'd also
> > like to point out the original blueprint[1], some discussions during the
> > havana's summit[2], the wiki page for tasks[3] and a patch I just reviewed
> > today (thanks Brian) that introduces docs for tasks[4]. These links already
> > show some differences in what tasks are.
> > 
> > [0]
> > http://git.openstack.org/cgit/openstack/glance/tree/etc/policy.json?h=stabl
> > e/juno#n28
> > [1] https://blueprints.launchpad.net/glance/+spec/async-glance-workers
> > [2] https://etherpad.openstack.org/p/havana-glance-requirements
> > [3] https://wiki.openstack.org/wiki/Glance-tasks-api
> > [4] https://review.openstack.org/#/c/220166/
> > 
> > >
> > >There is a lot of fuzziness around exactly what is supported for image
> > >upload, both in the documentation and in the minds of the developers
> > >I've spoken to this week, so I'd like to take a step back and try to
> > >work through some clear requirements, and then we can have folks
> > >familiar with the code help figure out if we have a real issue, if a
> > >minor tweak is needed, or if things are good as they stand today and
> > >it's all a misunderstanding.
> > >
> > >1. We need a strongly defined and well documented API, with arguments
> > >   that do not change based on deployment choices. The behind-the-scenes
> > >   behaviors can change, but the arguments provided by the caller
> > >   must be the same and the responses must look the same. The
> > >   implementation can run as a background task rather than receiving
> > >   the full image directly, but the current task API is too vaguely
> > >   defined to meet this requirement, and IMO we need an entry point
> > >   focused just on uploading or importing an image.
> > >
> > >2. Glance cannot require having a Swift deployment. It's not clear
> > >   whether this is actually required now, so if it's not then we're
> > >   in a good state.
> > 
> > This is definitely not the case. Glance doesn't require any specific store to be
> > deployed. It does require at least one other than the http one (because it
> > doesn't support write operations).

OK, that's good.

> > 
> > > It's fine to provide an optional way to take
> > >   advantage of Swift if it is present, but it cannot be a required
> > >   component. There are three separate trademark "programs", with
> > >   separate policies attached to them. There is an umbrella "Platform"
> > >   program that is intended to include all of the TC approved release
> > >   projects, such as nova, glance, and swift. However, there is
> > >   also a separate "Compute" program that is intended to include
> > >   Nova, Glance, and some others but *not* Swift. This is an important
> > >   distinction, because there are many use cases both for distributors
> > >   and public cloud providers that do not incorporate Swift for a
> > >   variety of reasons. So, we can't have Glance's primary configuration
> > >   require Swift and we need to provide tests for the DefCore team
> > >   that run without Swift. Duplicate tests that do use Swift are
> > >   fine, and might be used for "Platform" compliance tests.
> 
> It really saddens me, and shows how narrowly focused we have been, that this point 2 even needs discussion.
> 
> > >
> > >3. We need an integration test suite in tempest that fully exercises
> > >   the public image API by talking directly to Glance. This applies
> > >   to the entire API, not just image uploads. It's fine to have
> > >   duplicate tests using the proxy in Nova if the Nova team wants
> > >   those, but DefCore should be using tests that talk directly to
> > >   the service that owns each feature, without relying on any
> > >   proxying. We've already missed the chance to deal with this in
> > >   the current DefCore definition, which uses image-related tests
> > >   that talk to the Nova proxy [4][5], so we'll have to maintain
> > >   the proxy for the required deprecation period. But we won't be
> > >   able to consider removing that proxy until we provide alternate
> > >   tests for those features that speak directly to Glance. We may
> > >   have some coverage already, but I wasn't able to find a task-based
> > >   image upload test and there is no "image create" mentioned in
> > >   the current draft of capabilities being reviewed [6]. There may
> > >   be others missing, so someone more familiar with the feature set
> > >   of Glance should do an audit and document what tests are needed
> > >   so the work can be split up.
> > >
> > 
> > +1 This should become one of the top priorities for Mitaka (as you
> > mentioned at the beginning of this email).
> 
> But I hope this integration test suite in tempest is not seen as de facto required functionality by DefCore, as those two should be different things.

DefCore gets its tests from tempest today. So, yes, my point is
that we need tests in tempest that DefCore can use to define the
required functionality and *behavior* of an image service. They
won't necessarily use all of the tests, but we need to start with the
ones for features and behaviors we want them to consider important.

> 
> > 
> > >4. Once identified and incorporated into the DefCore capabilities
> > >   set, the selected API needs to remain stable for an extended
> > >   period of time and follow the deprecation timelines defined by
> > >   DefCore.  That has implications for the V3 API currently in
> > >   development to turn Glance into a more generic artifacts service.
> > >   There are a lot of ways to handle those implications, and no
> > >   choice needs to be made today, so I only mention it to make sure
> > >   it's clear that (a) we must get V2 into shape for DefCore and
> > >   (b) when that happens, we will need to maintain V2 even if V3
> > >   is finished. We won't be able to deprecate V2 quickly.
> 
> This is absolutely reasonable and pretty much sums up what our near-future focus should be moving forward.
> 
> > >
> > >Now, it's entirely possible that we can meet all of those requirements
> > >today, and that would be great. If that's the case, then the problem is
> > >just one of clear communication and documentation. I think there's
> > >probably more work to be done than that, though.
> > 
> > 
> > There's clearly a communication problem. The fact that this very email has
> > been sent out is a sign of that. However, I'd like to say, in a very optimistic
way, that Glance is not so far away from the expected state. There are things
to fix, other things to clarify, tons to discuss but, IMHO, besides the tempest
> > tests and DefCore, the most critical one is the one you mentioned in the
> > following section.
> 
> Not being so optimistic a person, I think we're just a bit lost, but still fairly close.
> 
> > 
> > >
> > >[1] http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2
> > >[2]
> > >http://docs.rackspace.com/images/api/v2/ci-
> > devguide/content/POST_import
> > >Image_tasks_Image_Task_Calls.html#d6e4193
> > >[3]
> > >http://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/tasks
> > >.py [4]
> > >http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n70
> > >[5]
> > >http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/guideli
> > >nes/2015.07.rst [6] https://review.openstack.org/#/c/213353/
> > >
> > >II. Complete Cinder and Nova V2 Adoption
> > >
> > >The Glance team originally committed to providing an Image Service API.
> > >Besides our end users, both Cinder and Nova consume that API.
> > >The shift from V1 to V2 has been a long road. We're far enough along,
> > >and the V1 API has enough issues preventing us from using it for
> > >DefCore, that we should push ahead and complete the V2 adoption. That
> > >will let us properly deprecate and drop V1 support, and concentrate on
> > >maintaining V2 for the necessary amount of time.
> > >
> > >There are a few specs for the work needed in Nova, but that work didn't
> > >land in Liberty for a variety of reasons. We need resources from both
> > >the Glance and Nova teams to work together to get this done as early as
> > >possible in Mitaka to ensure that it actually lands this time. We
> > >should be able to schedule a joint session at the summit to have the
> > >conversation, and we need to take advantage of that opportunity to
> > >ensure the details are fully resolved so that everyone understands the
> > >plan.
> > 
> > Super important point. I'd like people replying to this email to focus on what
we can do next and not why this hasn't been done. The latter will take us
down a path that won't be useful at all and will just waste everyone's time.
> 
> ++
> 
> > 
> > That said, I fully agree with the above. Last time we talked, John Garbutt and
> > Jay Pipes, from the nova team, raised their hands to help out with this effort.
> > From Glance's side, Fei Long Wang and myself were working on the
> > implementation. To help move this forward and to follow the latest
> > plan, which allows this migration to be smoother than our original plan, we
> > need folks from Glance to raise their hand.
> > 
> > If I'm not elected PTL, I'm more than happy to help out here but we need
> > someone that can commit to the above right now and we'll likely need a
> > team of at least 2 people to help move this forward in early Mitaka.
> > 
> > 
> > >The work in Cinder is more complete, but may need to be reviewed to
> > >ensure that it is using the API correctly, safely, and efficiently.
> > >Again, this is a joint effort between the Glance and Cinder teams to
> > >identify any issues and work out a resolution.
> > >
> > >Part of this work will also be to audit the Glance API documentation,
> > >to ensure it accurately reflects what the APIs expect to receive and
> > >return. There are reportedly at least a few cases where things are out
> > >of sync right now. This will require some coordination with the
> > >Documentation team.
> > >
> > >
> > >Those are the two big priorities I see, based on things the rest of the
> > >community needs from the team and existing commitments that have been
> > >made. There are some other things that should also be addressed.
> > >
> > >
> > >III. Security audits & bug fixes
> > >
> > >Five of 18 recent security reports were related to Glance [7]. It's not
> > >surprising, given recent resource constraints, that addressing these
> > >has been a challenge. Still, these should be given high priority.
> > >
> > >[7]
> > >https://security.openstack.org/search.html?q=glance&check_keywords=y
> > es&
> > >area=default
> 
> I'm not sure if I'm more ashamed or happy about this. The fact that someone is actually looking into it and working on these issues is nice, though.
> 
> > 
> > 
> > +1 FWIW, we're in the process of growing Glance's security team. But
> > it's clear from the above that there need to be quicker replies to security
> > issues.
> > 
> > >IV. Sorting out the glance-store question
> > >
> > >This was perhaps the most confusing thing I learned about this week.
> > >The perception outside of the Glance team is that the library is meant
> > >to be used by Nova and Cinder to communicate directly with the image
> > >store, bypassing the REST API, to improve performance in several cases.
> > >I know the Cinder team is especially interested in some sort of
> > >interface for manipulating images inside the storage system without
> > >having to download them to make copies (for RBD and other systems that
> > >support CoW natively).
> > 
> > Correct, the above was one of the triggers for this effort, and I like to think
> > it's still one of the main drivers. There are other fancier things that could be
> > done in the future, assuming the library's API is refactored in a way that such
> > features can be implemented.[0]
> > 
> > [0] https://review.openstack.org/#/c/188050/
> > 
> > >That doesn't seem to be
> > >what the library is actually good for, though, since most of the Glance
> > >core folks I talked to thought it was really a caching layer.
> > >This discrepancy in what folks wanted vs. what they got may explain
> > >some of the heated discussions in other email threads.
> > 
> > It's strange that some folks think of it as a caching layer. I believe one of the
> > reasons there's such a discrepancy is that not enough effort has been put
> > into the refactor this library requires. The reason this library requires such a
> > refactor is that it came out of the old `glance/store` code, which was very
> > specific to Glance's internal use.
> > 
> > The mistake here could be that the library should've been refactored
> > *before* adopting it in Glance.
> > 
> > >
> > >Frankly, given the importance of the other issues, I recommend leaving
> > >glance-store standalone this cycle. Unless the work for dealing with
> > >priorities I and II is made *significantly* easier by not having a
> > >library, the time and energy it will take to re-integrate it with the
> > >Glance service seems like a waste of limited resources.
> > >The time to even discuss it may be better spent on the planning work
> > >needed. That said, if the library doesn't provide the features its
> > >users were expecting, it may be better to fold it back in and create a
> > >different library with a better understanding of the requirements at
> > >some point. The path to take is up to the Glance team, of course, but
> > >we're already down far enough on the priority list that I think we'll
> > >be lucky to finish the preceding items this cycle.
> > 
> 
> I don't think we should put too much effort into this, given the reality that we do not even have agreement within the team on what the motivators are.
> 
> > 
> > I don't think merging glance-store back into Glance will help with any of the
> > priorities mentioned in this thread. If anything, refactoring the API might help
> > with future work that could come after the v1 -> v2 migration is complete.
> > 
> 
> Well, it would close some discussions that have been causing confusion lately, but I do agree it might not be worth it just now.
> 
> > >
> > >
> > >Those are the development priorities I was able to identify in my
> > >interviews this week, and there is one last thing the team needs to do
> > >this cycle: Recruit more contributors.
> > >
> > >Almost every current core contributor I spoke with this week indicated
> > >that their time was split between another project and Glance. Often
> > >higher priority had to be given, understandably, to internal product
> > >work. That's the reality we work in, and everyone feels the same
> > >pressures to some degree. One way to address that pressure is to bring
> > >in help. So, we need a recruiting drive to find folks willing to
> > >contribute code and reviews to the project to keep the team healthy. I
> > >listed this item last because if you've made it this far you should see
> > >just how much work the team has ahead. We're a big community, and I'm
> > >confident that we'll be able to find help for the Glance team, but it
> > >will require mentoring and education to bring people up to speed to
> > >make them productive.
> > 
> 
> I'm almost sad to say it, but I'm not really convinced that our issues are due to a lack of manpower. Obviously any help is welcome to improve the current situation, but I think this discussion is extremely important to have before we take in all the crowd that wants to be part of developing Glance. ;)

I did hear from several Glance core team members that they were not able
to spend as much time on the project as they would like, so that's the
only reason I mentioned it. It might actually be faster to implement
some of these things with a small dedicated team. It'll be up to the
Glance team to make that call.

> 
> > Fully agree here as well. However, I also believe that the fact that some
> > efforts have gone to the wrong tasks has brought Glance to the situation it is
> > in today. More help is welcome and required, but a good strategy is more
> > important right now.
> > 
> > FWIW, I agree that our focus has gone to different things and this has taken
> > us to the state you mentioned above. More importantly, it's postponed some
> > important tasks. However, I don't believe Glance is completely broken - I
> > know you are not saying this but I'd like to mention it - and I certainly believe
> > we can bring it back to a good state faster than expected, but I'm known for
> > being a bit optimistic sometimes.
> > 
> > In this reply I was hard on us (Glance team), because I tend to be hard on
> > myself and to dig deep into the things that are not working well. Many times
> > I do this based on the feedback provided by others, which I personally value
> > **a lot**. Unfortunately, I have to say that there hasn't been enough
> > feedback about these issues until now. There was Mike's email[0] where I
> > explicitly asked the community to speak up. This is to say that I greatly
> > appreciate the time you've taken to dig into this, and I encourage folks to
> > *always* speak up and reach out through every *public* medium possible.
> > 
> > No one can fix rumors; we can fix issues, though.
> > 
> > Thanks again and lets all work together to improve this situation, Flavio
> 
> All the above is just so easy to agree on!
> 
> > 
> > [0] http://lists.openstack.org/pipermail/openstack-dev/2015-
> > August/071971.html
> > 
> > --
> > @flaper87
> > Flavio Percoco
> 
> - Erno (jokke) Kuvaja


From jaypipes at gmail.com  Mon Sep 14 16:47:00 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 14 Sep 2015 12:47:00 -0400
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <55F6FA04.1070708@gmail.com>

On 09/14/2015 12:15 PM, Mike Perez wrote:
> Hello all,
>
> I will not be running for Cinder PTL this next cycle. Each cycle I ran
> was for a reason [1][2], and the Cinder team should feel proud of our
> accomplishments:
>
> * Spearheading the Oslo work to allow *all* OpenStack projects to have
> their database being independent of services during upgrades.
> * Providing quality to OpenStack operators and distributors with over
> 60 accepted block storage vendor drivers with reviews and enforced CI
> [3].
> * Helping other projects with third party CI for their needs.
> * Being a welcoming group to new contributors. As a result we grew greatly [4]!
> * Providing documentation for our work! We did it for Kilo [5], and I
> was very proud to see the team has already started doing this on their
> own to prepare for Liberty.
>
> I would like to thank this community for making me feel accepted in
> 2010. I would like to thank John Griffith for starting the Cinder
> project, and empowering me to lead the project through these couple of
> cycles.
>
> With the community's continued support I do plan on continuing my
> efforts, but focusing cross project instead of just Cinder. The
> accomplishments above are just some of the things I would like to help
> others with to make OpenStack as a whole better.

Mike, thank you for your service as Cinder PTL. You did a fantastic job 
guiding the project and deserve many kudos.

Best,
-jay


From michal.dulko at intel.com  Mon Sep 14 16:47:07 2015
From: michal.dulko at intel.com (Dulko, Michal)
Date: Mon, 14 Sep 2015 16:47:07 +0000
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <3895CB36EABD4E49B816E6081F3B001735FD52DA@IRSMSX108.ger.corp.intel.com>

> -----Original Message-----
> From: Mike Perez [mailto:thingee at gmail.com]
> Sent: Monday, September 14, 2015 6:16 PM
> 
> Hello all,
> 
> I will not be running for Cinder PTL this next cycle. Each cycle I ran was for a
> reason [1][2], and the Cinder team should feel proud of our
> accomplishments:
> 
> * Spearheading the Oslo work to allow *all* OpenStack projects to have their
> database being independent of services during upgrades.
> * Providing quality to OpenStack operators and distributors with over
> 60 accepted block storage vendor drivers with reviews and enforced CI [3].
> * Helping other projects with third party CI for their needs.
> * Being a welcoming group to new contributors. As a result we grew greatly
> [4]!

As someone who started contributing to Cinder just after Mike took the PTL role, I can say that this is true, and it felt really great to participate in the project from the very beginning. Thanks!

> * Providing documentation for our work! We did it for Kilo [5], and I was very
> proud to see the team has already started doing this on their own to prepare
> for Liberty.


From openstack at medberry.net  Mon Sep 14 16:51:06 2015
From: openstack at medberry.net (David Medberry)
Date: Mon, 14 Sep 2015 10:51:06 -0600
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <CAJhvMSvjKNwk0TJrxEYX0PLd-m+9zvsgoFPu7F80m1UW+4zc+Q@mail.gmail.com>

Thanks Mike. Enjoy a break (albeit thin and brief.) I've appreciated your
guiding the Cinder cats.

On Mon, Sep 14, 2015 at 10:15 AM, Mike Perez <thingee at gmail.com> wrote:

> Hello all,
>
> I will not be running for Cinder PTL this next cycle. Each cycle I ran
> was for a reason [1][2], and the Cinder team should feel proud of our
> accomplishments:
>
> * Spearheading the Oslo work to allow *all* OpenStack projects to have
> their database being independent of services during upgrades.
> * Providing quality to OpenStack operators and distributors with over
> 60 accepted block storage vendor drivers with reviews and enforced CI
> [3].
> * Helping other projects with third party CI for their needs.
> * Being a welcoming group to new contributors. As a result we grew greatly
> [4]!
> * Providing documentation for our work! We did it for Kilo [5], and I
> was very proud to see the team has already started doing this on their
> own to prepare for Liberty.
>
> I would like to thank this community for making me feel accepted in
> 2010. I would like to thank John Griffith for starting the Cinder
> project, and empowering me to lead the project through these couple of
> cycles.
>
> With the community's continued support I do plan on continuing my
> efforts, but focusing cross project instead of just Cinder. The
> accomplishments above are just some of the things I would like to help
> others with to make OpenStack as a whole better.
>
>
> [1] -
> http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
> [2] -
> http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
> [3] -
> http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
> [4] - http://thing.ee/cinder/active_contribs.png
> [5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7
>
> --
> Mike Perez
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/bfdff5da/attachment.html>

From msm at redhat.com  Mon Sep 14 16:53:01 2015
From: msm at redhat.com (michael mccune)
Date: Mon, 14 Sep 2015 12:53:01 -0400
Subject: [openstack-dev] [sahara] hadoop-openstack.jar and proxy users
Message-ID: <55F6FB6D.9010007@redhat.com>

hey all,

in doing some work recently with proxy users and hadoop deployments, i 
am noticing that the modified version of the hadoop-openstack.jar we 
package from the sahara-extras repo is not being used in our current images.

this jar file adds support for keystone v3 to the swift interface in 
hadoop. v3 support is needed for proxy users and i'm guessing we are 
going to want this included by default in the future.

what is the current process for bundling this jar file and which images 
should it be included on?

i know that the vanilla 2.7.1 image does not contain this jar, i am 
curious if we should re-address which hadoop/swift connector jar is 
being included.

thoughts?


regards,
mike


From rakhmerov at mirantis.com  Mon Sep 14 16:57:34 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Mon, 14 Sep 2015 19:57:34 +0300
Subject: [openstack-dev] [mistral] Team meeting minutes/log
Message-ID: <B1DDAA8E-F40F-4937-B126-C0F8C7254C60@mirantis.com>

Thanks for joining us for the meeting today.

Meeting minutes: http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-14-16.00.html <http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-14-16.00.html>
Meeting log: http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-14-16.00.log.html <http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-09-14-16.00.log.html>

The next meeting will be held on 21 Sep at the same time.

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/34d8ef7a/attachment.html>

From sean.mcginnis at gmx.com  Mon Sep 14 17:02:03 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 14 Sep 2015 12:02:03 -0500
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <20150914170202.GA13271@gmx.com>

On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
> Hello all,
> 
> I will not be running for Cinder PTL this next cycle. Each cycle I ran
> was for a reason [1][2], and the Cinder team should feel proud of our
> accomplishments:

Thanks for a couple of awesome cycles Mike!


From mordred at inaugust.com  Mon Sep 14 17:06:48 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Mon, 14 Sep 2015 19:06:48 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <20150914124100.GC10859@redhat.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
Message-ID: <55F6FEA8.8010802@inaugust.com>

On 09/14/2015 02:41 PM, Flavio Percoco wrote:
> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
>>
>> After having some conversations with folks at the Ops Midcycle a
>> few weeks ago, and observing some of the more recent email threads
>> related to glance, glance-store, the client, and the API, I spent
>> last week contacting a few of you individually to learn more about
>> some of the issues confronting the Glance team. I had some very
>> frank, but I think constructive, conversations with all of you about
>> the issues as you see them. As promised, this is the public email
>> thread to discuss what I found, and to see if we can agree on what
>> the Glance team should be focusing on going into the Mitaka summit
>> and development cycle and how the rest of the community can support
>> you in those efforts.
>>
>> I apologize for the length of this email, but there's a lot to go
>> over. I've identified 2 high priority items that I think are critical
>> for the team to be focusing on starting right away in order to use
>> the upcoming summit time effectively. I will also describe several
>> other issues that need to be addressed but that are less immediately
>> critical. First the high priority items:
>>
>> 1. Resolve the situation preventing the DefCore committee from
>>   including image upload capabilities in the tests used for trademark
>>   and interoperability validation.
>>
>> 2. Follow through on the original commitment of the project to
>>   provide an image API by completing the integration work with
>>   nova and cinder to ensure V2 API adoption.
>
> Hi Doug,
>
> First and foremost, I'd like to thank you for taking the time to dig
> into these issues, and for reaching out to the community seeking
> information and a better understanding of what the real issues are. I
> can imagine how much time you had to dedicate to this and I'm glad you
> did.

Ditto. Thanks so much for the work Doug!

> Now, to your email, I very much agree with the priorities you
> mentioned above and I'd like for, whomever will win Glance's PTL
> election, to bring focus back on that.
>
> Please, find some comments in-line for each point:
>
>
>>
>> I. DefCore
>>
>> The primary issue that attracted my attention was the fact that
>> DefCore cannot currently include an image upload API in its
>> interoperability test suite, and therefore we do not have a way to
>> ensure interoperability between clouds for users or for trademark
>> use. The DefCore process has been long, and at times confusing,
>> even to those of us following it sort of closely. It's not entirely
>> surprising that some projects haven't been following the whole time,
>> or aren't aware of exactly what the whole thing means. I have
>> proposed a cross-project summit session for the Mitaka summit to
>> address this need for communication more broadly, but I'll try to
>> summarize a bit here.
>
> +1
>
> I think it's quite sad that some projects, especially those considered
> to be part of the `starter-kit:compute`[0], don't follow closely
> what's going on in DefCore. I personally consider this a task PTLs
> should incorporate into their role duties. I'm glad you proposed such a
> session; I hope it'll help raise awareness of this effort and it'll
> help move things forward on that front.
>
>
>>
>> DefCore is using automated tests, combined with business policies,
>> to build a set of criteria for allowing trademark use. One of the
>> goals of that process is to ensure that all OpenStack deployments
>> are interoperable, so that users who write programs that talk to
>> one cloud can use the same program with another cloud easily. This
>> is a *REST API* level of compatibility. We cannot insert cloud-specific
>> behavior into our client libraries, because not all cloud consumers
>> will use those libraries to talk to the services. Similarly, we
>> can't put the logic in the test suite, because that defeats the
>> entire purpose of making the APIs interoperable. For this level of
>> compatibility to work, we need well-defined APIs, with a long support
>> period, that work the same no matter how the cloud is deployed. We
>> need the entire community to support this effort. From what I can
>> tell, that is going to require some changes to the current Glance
>> API to meet the requirements. I'll list those requirements, and I
>> hope we can discuss them to a degree that ensures everyone understands
>> them. I don't want this email thread to get bogged down in
>> implementation details or API designs, though, so let's try to keep
>> the discussion at a somewhat high level, and leave the details for
>> specs and summit discussions. I do hope you will correct any
>> misunderstandings or misconceptions, because unwinding this as an
>> outside observer has been quite a challenge and it's likely I have
>> some details wrong.
>>
>> As I understand it, there are basically two ways to upload an image
>> to glance using the V2 API today. The "POST" API pushes the image's
>> bits through the Glance API server, and the "task" API instructs
>> Glance to download the image separately in the background. At one
>> point apparently there was a bug that caused the results of the two
>> different paths to be incompatible, but I believe that is now fixed.
>> However, the two separate APIs each have different issues that make
>> them unsuitable for DefCore.
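
[Editor's note: a minimal sketch of the two v2 upload paths described above. The request shapes follow the upstream api-ref of the time (POST /v2/images plus PUT /v2/images/{image_id}/file, versus POST /v2/tasks with an "import" task); it builds the payloads only and is illustrative, not an authoritative client.]

```python
def post_upload_requests(name, disk_format, container_format, image_bits):
    """The "POST" path: create the image record, then push the bits
    through the Glance API server itself."""
    create = {
        "method": "POST",
        "path": "/v2/images",
        "json": {
            "name": name,
            "disk_format": disk_format,
            "container_format": container_format,
        },
    }
    # The image id returned by the create call would be interpolated here;
    # '{image_id}' is just a placeholder for illustration.
    upload = {
        "method": "PUT",
        "path": "/v2/images/{image_id}/file",
        "headers": {"Content-Type": "application/octet-stream"},
        "body": image_bits,
    }
    return [create, upload]


def task_import_request(name, disk_format, container_format, source_url):
    """The "task" path: ask Glance to fetch the bits in the background.
    Input field names follow the upstream docs; deployments reportedly
    varied (e.g. URL vs. swift location as the source)."""
    return {
        "method": "POST",
        "path": "/v2/tasks",
        "json": {
            "type": "import",
            "input": {
                "import_from": source_url,
                "import_from_format": disk_format,
                "image_properties": {
                    "name": name,
                    "disk_format": disk_format,
                    "container_format": container_format,
                },
            },
        },
    }
```

Note how the POST path streams the bits through the API service (the DoS concern mentioned below), while the task path only hands Glance a pointer to fetch from.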
>>
>> The DefCore process relies on several factors when designating APIs
>> for compliance. One factor is the technical direction, as communicated
>> by the contributor community -- that's where we tell them things
>> like "we plan to deprecate the Glance V1 API". In addition to the
>> technical direction, DefCore looks at the deployment history of an
>> API. They do not want to require deploying an API if it is not seen
>> as widely usable, and they look for some level of existing adoption
>> by cloud providers and distributors as an indication that the
>> API is desired and can be successfully used. Because we have multiple
>> upload APIs, the message we're sending on technical direction is
>> weak right now, and so they have focused on deployment considerations
>> to resolve the question.
>
> The task upload process you're referring to is the one that uses the
> `import` task, which allows you to download an image from an external
> source, asynchronously, and import it into Glance. This is the old
> `copy-from` behavior that was moved into a task.
>
> The "fun" thing about this - and I'm sure other folks in the Glance
> community will disagree - is that I don't consider tasks to be a
> public API. That is to say, I would expect tasks to be an internal API
> used by cloud admins to perform some actions (based on its current
> implementation). Eventually, some of these tasks could be triggered
> from the external API but as background operations that are triggered
> by the well-known public ones and not through the task API.
>
> Ultimately, I believe end-users of the cloud simply shouldn't care
> about what tasks are or aren't and more importantly, as you mentioned
> later in the email, tasks make clouds not interoperable. I'd be pissed
> if my public image service would ask me to learn about tasks to be
> able to use the service.
>
> Long story short, I believe the only upload API that should be
> considered is the one that uses HTTP and, eventually, to bring
> compatibility with v1 as far as the copy-from behavior goes, Glance
> could bring back that behavior on top of the task (just dropping this
> here for the sake of discussion and interoperability).

Yes. 1000x yes.

>> The POST API is enabled in many public clouds, but not consistently.
>> In some clouds like HP, a tenant requires special permission to use
>> the API. At least one provider, Rackspace, has disabled the API
>> entirely. This is apparently due to what seems like a fair argument
>> that uploading the bits directly to the API service presents a
>> possible denial of service vector. Without arguing the technical
>> merits of that decision, the fact remains that without a strong
>> consensus from deployers that the POST API should be publicly and
>> consistently available, it does not meet the requirements to be
>> used for DefCore testing.
>
> This is definitely unfortunate. I believe a good step forward for this
> discussion would be to create a list of issues related to uploading
> images and see how those issues can be addressed. The result from that
> work might be that it's not recommended to make that endpoint public
> but again, without going through the issues, it'll be hard to
> understand how we can improve this situation. I expect most of these
> issues to have a security impact.
>
>
>> The task API is also not widely deployed, so its adoption for DefCore
>> is problematic. If we provide a clear technical direction that this
>> API is preferred, that may overcome the lack of adoption, but the
>> current task API seems to have technical issues that make it
>> fundamentally unsuitable for DefCore consideration. While the task
>> API addresses the problem of a denial of service, and includes
>> useful features such as processing of the image during import, it
>> is not strongly enough defined in its current form to be interoperable.
>> Because it's a generic API, the caller must know how to fully
>> construct each task, and know what task types are supported in the
>> first place. There is only one "import" task type supported in the
>> Glance code repository right now, but it is not clear that "import"
>> always uses the same arguments, or interprets them in the same way.
>> For example, the upstream documentation [1] describes a task that
>> appears to use a URL as source, while the Rackspace documentation [2]
>> describes a task that appears to take a swift storage location.
>> I wasn't able to find JSONSchema validation for the "input" blob
>> portion of the task in the code [3], though that may happen down
>> inside the task implementation itself somewhere.
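
[Editor's note: to make the missing-validation point concrete, here is a hypothetical illustration of what validating the "input" blob of an "import" task could look like. This is not Glance's actual validation — the point above is precisely that no such schema appeared to be enforced — and the field names are taken from the upstream docs, not guaranteed across deployments.]

```python
# Fields the upstream "import" task documentation appears to expect.
REQUIRED_INPUT_FIELDS = {"import_from", "import_from_format", "image_properties"}


def validate_import_input(task_input):
    """Return a list of problems with an 'import' task input blob
    (empty list means the blob looks well-formed)."""
    problems = []
    if not isinstance(task_input, dict):
        return ["input must be a JSON object"]
    missing = REQUIRED_INPUT_FIELDS - task_input.keys()
    problems.extend("missing field: %s" % f for f in sorted(missing))
    props = task_input.get("image_properties")
    if props is not None and not isinstance(props, dict):
        problems.append("image_properties must be a JSON object")
    return problems
```

Anything like this — whether hand-rolled or JSONSchema-based — would at least make the contract explicit and checkable, which is what interoperability testing needs.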
>
>
> The above sounds pretty accurate as there's currently just 1 flow that
> can be triggered (the import flow) and that accepts an input, which is
> a JSON document. As I mentioned above, I don't believe tasks should be
> part of the public API and this is yet another reason why I think so. The
> tasks API is not well defined as there's, currently, no good way to
> define the expected input in a backwards compatible way and to provide
> all the required validation.
>
> I like having tasks in Glance, despite my comments above - but I like
> them for cloud usage and not public usage.

I like them much more if they're not public facing. They're not BAD - 
they just don't have an end-user semantic.

> As far as Rackspace's docs/endpoint goes, I'd assume this is an error
> in their documentation since Glance currently doesn't allow[0] for
> swift URLs to be imported (not even in juno[1]).
>
> [0]
> http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py#n84
>
> [1]
> http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py?h=stable/juno#n83

Nope. You MUST upload the image to swift and then provide a swift 
location. (Infra does this in production, I promise it's the only thing 
that works)

>> Tasks also come from plugins, which may be installed differently
>> based on the deployment. This is an interesting approach to creating
>> API extensions, but isn't discoverable enough to write interoperable
>> tools against. Most of the other projects are starting to move away
>> from supporting API extensions at all because of interoperability
>> concerns they introduce. Deployers should be able to configure their
>> clouds to perform well, but not to behave in fundamentally different
>> ways. Extensions are just that, extensions. We can't rely on them
>> for interoperability testing.
>
> This is, indeed, an interesting interpretation of what tasks are for.
> I'd probably just blame us (Glance team) for not communicating
> properly what tasks are meant to be. I don't believe tasks are a way
> to extend the *public* API and I'd be curious to know if others see it
> that way. I fully agree that just breaks interoperability and as I've
> mentioned a couple of times in this reply already, I don't even think
> tasks should be part of the public API.
>
> But again, we did a very poor job communicating it[0]. Nonetheless, for the
> sake of providing enough information about tasks and sources to read
> from, I'd also like to point out the original blueprint[1], some
> discussions during the havana's summit[2], the wiki page for tasks[3]
> and a patch I just reviewed today (thanks Brian) that introduces docs
> for tasks[4]. These links show already some differences in what tasks
> are.
>
> [0]
> http://git.openstack.org/cgit/openstack/glance/tree/etc/policy.json?h=stable/juno#n28
>
> [1] https://blueprints.launchpad.net/glance/+spec/async-glance-workers
> [2] https://etherpad.openstack.org/p/havana-glance-requirements
> [3] https://wiki.openstack.org/wiki/Glance-tasks-api
> [4] https://review.openstack.org/#/c/220166/
>
>>
>> There is a lot of fuzziness around exactly what is supported for
>> image upload, both in the documentation and in the minds of the
>> developers I've spoken to this week, so I'd like to take a step
>> back and try to work through some clear requirements, and then we
>> can have folks familiar with the code help figure out if we have a
>> real issue, if a minor tweak is needed, or if things are good as
>> they stand today and it's all a misunderstanding.
>>
>> 1. We need a strongly defined and well documented API, with arguments
>>   that do not change based on deployment choices. The behind-the-scenes
>>   behaviors can change, but the arguments provided by the caller
>>   must be the same and the responses must look the same. The
>>   implementation can run as a background task rather than receiving
>>   the full image directly, but the current task API is too vaguely
>>   defined to meet this requirement, and IMO we need an entry point
>>   focused just on uploading or importing an image.
>>
>> 2. Glance cannot require having a Swift deployment. It's not clear
>>   whether this is actually required now, so if it's not then we're
>>   in a good state.
>
> This is definitely not the case. Glance doesn't require any specific
> store to be deployed. It does require at least one other than the http
> one (because it doesn't support write operations).

Awesome.

>> It's fine to provide an optional way to take
>>   advantage of Swift if it is present, but it cannot be a required
>>   component. There are three separate trademark "programs", with
>>   separate policies attached to them. There is an umbrella "Platform"
>>   program that is intended to include all of the TC approved release
>>   projects, such as nova, glance, and swift. However, there is
>>   also a separate "Compute" program that is intended to include
>>   Nova, Glance, and some others but *not* Swift. This is an important
>>   distinction, because there are many use cases both for distributors
>>   and public cloud providers that do not incorporate Swift for a
>>   variety of reasons. So, we can't have Glance's primary configuration
>>   require Swift and we need to provide tests for the DefCore team
>>   that run without Swift. Duplicate tests that do use Swift are
>>   fine, and might be used for "Platform" compliance tests.
>>
>> 3. We need an integration test suite in tempest that fully exercises
>>   the public image API by talking directly to Glance. This applies
>>   to the entire API, not just image uploads. It's fine to have
>>   duplicate tests using the proxy in Nova if the Nova team wants
>>   those, but DefCore should be using tests that talk directly to
>>   the service that owns each feature, without relying on any
>>   proxying. We've already missed the chance to deal with this in
>>   the current DefCore definition, which uses image-related tests
>>   that talk to the Nova proxy [4][5], so we'll have to maintain
>>   the proxy for the required deprecation period. But we won't be
>>   able to consider removing that proxy until we provide alternate
>>   tests for those features that speak directly to Glance. We may
>>   have some coverage already, but I wasn't able to find a task-based
>>   image upload test and there is no "image create" mentioned in
>>   the current draft of capabilities being reviewed [6]. There may
>>   be others missing, so someone more familiar with the feature set
>>   of Glance should do an audit and document what tests are needed
>>   so the work can be split up.
>>
>
> +1 This should become one of the top priorities for Mitaka (as you
> mentioned at the beginning of this email).

++

>> 4. Once identified and incorporated into the DefCore capabilities
>>   set, the selected API needs to remain stable for an extended
>>   period of time and follow the deprecation timelines defined by
>>   DefCore.  That has implications for the V3 API currently in
>>   development to turn Glance into a more generic artifacts service.
>>   There are a lot of ways to handle those implications, and no
>>   choice needs to be made today, so I only mention it to make sure
>>   it's clear that (a) we must get V2 into shape for DefCore and
>>   (b) when that happens, we will need to maintain V2 even if V3
>>   is finished. We won't be able to deprecate V2 quickly.
>>
>> Now, it's entirely possible that we can meet all of those requirements
>> today, and that would be great. If that's the case, then the problem
>> is just one of clear communication and documentation. I think there's
>> probably more work to be done than that, though.
>
>
> There's clearly a communication problem. The fact that this very email
> has been sent out is a sign of that. However, I'd like to say, in a
> very optimistic way, that Glance is not so far away from the expected
> status. There are things to fix, other things to clarify, tons to
> discuss but, IMHO, besides the tempest tests and DefCore, the most
> critical one is the one you mentioned in the following section.
>
>>
>> [1] http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2
>> [2]
>> http://docs.rackspace.com/images/api/v2/ci-devguide/content/POST_importImage_tasks_Image_Task_Calls.html#d6e4193
>>
>> [3]
>> http://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/tasks.py
>>
>> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n70
>> [5]
>> http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/guidelines/2015.07.rst
>>
>> [6] https://review.openstack.org/#/c/213353/
>>
>> II. Complete Cinder and Nova V2 Adoption
>>
>> The Glance team originally committed to providing an Image Service
>> API. Besides our end users, both Cinder and Nova consume that API.
>> The shift from V1 to V2 has been a long road. We're far enough
>> along, and the V1 API has enough issues preventing us from using
>> it for DefCore, that we should push ahead and complete the V2
>> adoption. That will let us properly deprecate and drop V1 support,
>> and concentrate on maintaining V2 for the necessary amount of time.
>>
>> There are a few specs for the work needed in Nova, but that work
>> didn't land in Liberty for a variety of reasons. We need resources
>> from both the Glance and Nova teams to work together to get this
>> done as early as possible in Mitaka to ensure that it actually lands
>> this time. We should be able to schedule a joint session at the
>> summit to have the conversation, and we need to take advantage of
>> that opportunity to ensure the details are fully resolved so that
>> everyone understands the plan.
>
> Super important point. I'd like people replying to this email to focus
> on what we can do next and not why this hasn't been done. The latter
> will take us down a path that won't be useful at all and it'll just
> waste everyone's time.

++

> That said, I fully agree with the above. Last time we talked, John
> Garbutt and Jay Pipes, from the nova team, raised their hands to help
> out with this effort. From Glance's side, Fei Long Wang and myself
> were working on the implementation. To help moving this forward and to
> follow on the latest plan, which allows this migration to be smoother
> than our original plan, we need folks from Glance to raise their hand.
>
> If I'm not elected PTL, I'm more than happy to help out here but we
> need someone that can commit to the above right now and we'll likely
> need a team of at least 2 people to help move this forward in early
> Mitaka.
>
>
>> The work in Cinder is more complete, but may need to be reviewed
>> to ensure that it is using the API correctly, safely, and efficiently.
>> Again, this is a joint effort between the Glance and Cinder teams
>> to identify any issues and work out a resolution.
>>
>> Part of this work will also be to audit the Glance API documentation,
>> to ensure it accurately reflects what the APIs expect to receive
>> and return. There are reportedly at least a few cases where things
>> are out of sync right now. This will require some coordination with
>> the Documentation team.
>>
>>
>> Those are the two big priorities I see, based on things the rest
>> of the community needs from the team and existing commitments that
>> have been made. There are some other things that should also be
>> addressed.
>>
>>
>> III. Security audits & bug fixes
>>
>> Five of 18 recent security reports were related to Glance [7]. It's
>> not surprising, given recent resource constraints, that addressing
>> these has been a challenge. Still, these should be given high
>> priority.
>>
>> [7]
>> https://security.openstack.org/search.html?q=glance&check_keywords=yes&area=default
>>
>
>
> +1 FWIW, we're in the process of growing Glance's security team. But
> it's clear from the above that there needs to be quicker replies to
> security issues.
>
>> IV. Sorting out the glance-store question
>>
>> This was perhaps the most confusing thing I learned about this week.
>> The perception outside of the Glance team is that the library is
>> meant to be used by Nova and Cinder to communicate directly with
>> the image store, bypassing the REST API, to improve performance in
>> several cases. I know the Cinder team is especially interested in
>> some sort of interface for manipulating images inside the storage
>> system without having to download them to make copies (for RBD and
>> other systems that support CoW natively).
>
> Correct, the above was one of the triggers for this effort and I
> like to think it's still one of the main drivers. There are other
> fancier things that could be done in the future assuming the
> library's API is refactored in a way that such features can be
> implemented.[0]
>
> [0] https://review.openstack.org/#/c/188050/
>
>> That doesn't seem to be
>> what the library is actually good for, though, since most of the
>> Glance core folks I talked to thought it was really a caching layer.
>> This discrepancy in what folks wanted vs. what they got may explain
>> some of the heated discussions in other email threads.
>
> It's strange that some folks think of it as a caching layer. I believe
> one of the reasons for such a discrepancy is that not enough
> effort has been put into the refactor this library requires. The reason
> this library requires such a refactor is that it came out from the old
> `glance/store` code which was very specific to Glance's internal use.
>
> The mistake here could be that the library should've been refactored
> *before* adopting it in Glance.
>
>>
>> Frankly, given the importance of the other issues, I recommend
>> leaving glance-store standalone this cycle. Unless the work for
>> dealing with priorities I and II is made *significantly* easier by
>> not having a library, the time and energy it will take to re-integrate
>> it with the Glance service seems like a waste of limited resources.
>> The time to even discuss it may be better spent on the planning
>> work needed. That said, if the library doesn't provide the features
>> its users were expecting, it may be better to fold it back in and
>> create a different library with a better understanding of the
>> requirements at some point. The path to take is up to the Glance
>> team, of course, but we're already down far enough on the priority
>> list that I think we'll be lucky to finish the preceding items this
>> cycle.
>
>
> I don't think merging glance-store back into Glance will help with any
> of the priorities mentioned in this thread. If anything, refactoring
> the API might help with future work that could come after the v1 -> v2
> migration is complete.
>
>>
>>
>> Those are the development priorities I was able to identify in my
>> interviews this week, and there is one last thing the team needs
>> to do this cycle: Recruit more contributors.
>>
>> Almost every current core contributor I spoke with this week indicated
>> that their time was split between another project and Glance. Often
>> higher priority had to be given, understandably, to internal product
>> work. That's the reality we work in, and everyone feels the same
>> pressures to some degree. One way to address that pressure is to
>> bring in help. So, we need a recruiting drive to find folks willing
>> to contribute code and reviews to the project to keep the team
>> healthy. I listed this item last because if you've made it this far
>> you should see just how much work the team has ahead. We're a big
>> community, and I'm confident that we'll be able to find help for
>> the Glance team, but it will require mentoring and education to
>> bring people up to speed to make them productive.
>
> Fully agree here as well. However, I also believe that the fact that
> some efforts have gone to the wrong tasks has taken Glance to the
> situation it is in today. More help is welcome and required, but a good
> strategy is more important right now.
>
> FWIW, I agree that our focus has gone to different things and this has
> taken us to the status you mentioned above. More importantly, it's
> postponed some important tasks. However, I don't believe Glance is
> completely broken - I know you are not saying this but I'd like to
> mention it - and I certainly believe we can bring it back to a good
> state faster than expected, but I'm known for being a bit optimistic
> sometimes.
>
> In this reply I was hard on us (Glance team), because I tend to be
> hard on myself and to dig deep into the things that are not working
> well. Many times I do this based on the feedback provided by others,
> which I personally value **a lot**. Unfortunately, I have to say that
> there hasn't been enough feedback about these issues until now. There
> was Mike's email[0] where I explicitly asked the community to speak
> up. This is to say that I appreciate the time you've taken to dig into
> this a lot and to encourage folks to *always* speak up and reach out
> through every *public* medium possible.
>
> No one can fix rumors; we can fix issues, though.
>
> Thanks again and let's all work together to improve this situation,
> Flavio
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/071971.html
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From emilien at redhat.com  Mon Sep 14 17:12:49 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 14 Sep 2015 13:12:49 -0400
Subject: [openstack-dev] [puppet] weekly meeting #51
Message-ID: <55F70011.7080301@redhat.com>

Hello,

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150915

Tomorrow we will hold our Sprint Retrospective; please share your
experience on the etherpad [1].

Also, feel free to add any additional items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

Regards,

[1] https://etherpad.openstack.org/p/puppet-liberty-sprint-retrospective
-- 
Emilien Macchi


From tpb at dyncloud.net  Mon Sep 14 17:18:48 2015
From: tpb at dyncloud.net (Tom Barron)
Date: Mon, 14 Sep 2015 13:18:48 -0400
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <55F70178.1090503@dyncloud.net>

On 9/14/15 12:15 PM, Mike Perez wrote:
> Hello all,
> 
> I will not be running for Cinder PTL this next cycle.

Thanks for helping make me feel welcome as I started up,
for your example of gutsy and consistent leadership,
and for some really good coffee!

-- Tom





From walter.boring at hp.com  Mon Sep 14 17:19:00 2015
From: walter.boring at hp.com (Walter A. Boring IV)
Date: Mon, 14 Sep 2015 10:19:00 -0700
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <55F70184.3090601@hp.com>

Thanks for your leadership and service Mike.   You've done a great job!

Walt
> Hello all,
>
> I will not be running for Cinder PTL this next cycle. Each cycle I ran
> was for a reason [1][2], and the Cinder team should feel proud of our
> accomplishments:
>
> * Spearheading the Oslo work to allow *all* OpenStack projects to have
> their database being independent of services during upgrades.
> * Providing quality to OpenStack operators and distributors with over
> 60 accepted block storage vendor drivers with reviews and enforced CI
> [3].
> * Helping other projects with third party CI for their needs.
> * Being a welcoming group to new contributors. As a result we grew greatly [4]!
> * Providing documentation for our work! We did it for Kilo [5], and I
> was very proud to see the team has already started doing this on their
> own to prepare for Liberty.
>
> I would like to thank this community for making me feel accepted in
> 2010. I would like to thank John Griffith for starting the Cinder
> project, and empowering me to lead the project through these couple of
> cycles.
>
> With the community's continued support I do plan on continuing my
> efforts, but focusing cross project instead of just Cinder. The
> accomplishments above are just some of the things I would like to help
> others with to make OpenStack as a whole better.
>
>
> [1] - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
> [2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
> [3] - http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
> [4] - http://thing.ee/cinder/active_contribs.png
> [5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7
>
> --
> Mike Perez
>
> .
>



From patrick.east at purestorage.com  Mon Sep 14 17:27:45 2015
From: patrick.east at purestorage.com (Patrick East)
Date: Mon, 14 Sep 2015 10:27:45 -0700
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <55F70184.3090601@hp.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
 <55F70184.3090601@hp.com>
Message-ID: <CA+WE0_7h9C-Q3vkA4qBbqMC+R0EKu=eYXMfPYc-gcMqRrjW4dg@mail.gmail.com>

Mike, you've been an awesome help for me getting started in Cinder. Thanks
for all your hard work as PTL!

-Patrick

On Mon, Sep 14, 2015 at 10:19 AM, Walter A. Boring IV <walter.boring at hp.com>
wrote:

> Thanks for your leadership and service Mike.   You've done a great job!
>
> Walt
>
>> Hello all,
>>
>> I will not be running for Cinder PTL this next cycle. Each cycle I ran
>> was for a reason [1][2], and the Cinder team should feel proud of our
>> accomplishments:
>>
>> * Spearheading the Oslo work to allow *all* OpenStack projects to have
>> their database being independent of services during upgrades.
>> * Providing quality to OpenStack operators and distributors with over
>> 60 accepted block storage vendor drivers with reviews and enforced CI
>> [3].
>> * Helping other projects with third party CI for their needs.
>> * Being a welcoming group to new contributors. As a result we grew
>> greatly [4]!
>> * Providing documentation for our work! We did it for Kilo [5], and I
>> was very proud to see the team has already started doing this on their
>> own to prepare for Liberty.
>>
>> I would like to thank this community for making me feel accepted in
>> 2010. I would like to thank John Griffith for starting the Cinder
>> project, and empowering me to lead the project through these couple of
>> cycles.
>>
>> With the community's continued support I do plan on continuing my
>> efforts, but focusing cross project instead of just Cinder. The
>> accomplishments above are just some of the things I would like to help
>> others with to make OpenStack as a whole better.
>>
>>
>> [1] -
>> http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
>> [2] -
>> http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
>> [3] -
>> http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
>> [4] - http://thing.ee/cinder/active_contribs.png
>> [5] -
>> https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7
>>
>> --
>> Mike Perez
>>

From thingee at gmail.com  Mon Sep 14 17:28:12 2015
From: thingee at gmail.com (Mike Perez)
Date: Mon, 14 Sep 2015 10:28:12 -0700
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442232202-sup-5997@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
Message-ID: <CAHcn5b3Dr3yvZ787UoMXa-r--W8nS_iP5hqL3SVVnz-VL+_eOw@mail.gmail.com>

On Mon, Sep 14, 2015 at 5:10 AM, Doug Hellmann <doug at doughellmann.com> wrote:
>
> After having some conversations with folks at the Ops Midcycle a
> few weeks ago, and observing some of the more recent email threads
> related to glance, glance-store, the client, and the API, I spent
> last week contacting a few of you individually to learn more about
> some of the issues confronting the Glance team. I had some very
> frank, but I think constructive, conversations with all of you about
> the issues as you see them. As promised, this is the public email
> thread to discuss what I found, and to see if we can agree on what
> the Glance team should be focusing on going into the Mitaka summit
> and development cycle and how the rest of the community can support
> you in those efforts.

Hi Doug,

Thanks for all your work with investigating this. Just last month I
raised some problems in other projects like Cinder and Nova with
working with v2 Glance [1][2] that I felt were lost due to focuses in
other areas and missing the core reason for Glance [3].

I'd really appreciate the potential PTL's of Glance in the Mitaka
cycle to read Doug's email, and take in the support behind these
priorities.

The Glance team regardless should know that as a community, we're all
in this together and support their efforts in making a great image
service for OpenStack.


[1] - http://lists.openstack.org/pipermail/openstack-dev/2015-July/070714.html
[2] - http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-07-28-21.03.log.html#l-239
[3] - http://lists.openstack.org/pipermail/openstack-dev/2015-August/071993.html

--
Mike Perez


From rbryant at redhat.com  Mon Sep 14 17:35:40 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Mon, 14 Sep 2015 13:35:40 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <55F6E762.5090406@openstack.org>
References: <1442232202-sup-5997@lrrr.local> <55F6E762.5090406@openstack.org>
Message-ID: <55F7056C.2070606@redhat.com>

On 09/14/2015 11:27 AM, Thierry Carrez wrote:
> Doug Hellmann wrote:
>> [...]
>> 1. Resolve the situation preventing the DefCore committee from
>>    including image upload capabilities in the tests used for trademark
>>    and interoperability validation.
>>
>> 2. Follow through on the original commitment of the project to
>>    provide an image API by completing the integration work with
>>    nova and cinder to ensure V2 API adoption.
>> [...]
> 
> Thanks Doug for taking the time to dive into Glance and to write this
> email. I agree with your top two priorities as being a good summary of
> what the "rest of the community" expects the Glance leadership to focus
> on in the very short term.

+1

Thanks, Doug!  and agreed with Thierry's response here.

-- 
Russell Bryant


From sharis at Brocade.com  Mon Sep 14 17:44:14 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Mon, 14 Sep 2015 17:44:14 +0000
Subject: [openstack-dev]  [congress] IRC hangout
Message-ID: <3c5905f3d73949f99aaa2007167b6f05@Hq1wp-exmb11.corp.brocade.com>

Hi,

What is the IRC channel where congress folks hang out? I tried #openstack-congress on freenode, but it seems not correct.

-Shiv



From: Su Zhang [mailto:westlifezs at gmail.com]
Sent: Tuesday, August 25, 2015 5:17 PM
To: openstack-dev
Subject: [openstack-dev] [congress] simulation example in the doc not working

Hello,

In simulation examples at http://congress.readthedocs.org/en/latest/enforcement.html?highlight=simulation,
the "action_policy" is replaced with "null". However, "null" is not considered a valid policy, as I keep receiving 400 errors.
Could someone let me know the easiest way to get around this error?
How to create a simple action policy just for test purpose as of now?

Thanks,

--
Su Zhang
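(For test purposes, the payloads involved can be sketched as below. Instead of passing null as the action_policy, you would first create a policy of kind "action" and reference it by name. The endpoint path, field names, and the action name used here are assumptions, not verified against a running Congress; check the API reference for your release.)

```python
import json

# Hypothetical sketch of the Congress v1 request bodies for creating a
# simple action policy. Field names and the endpoint are assumptions.

CREATE_POLICY = "/v1/policies"  # assumed POST target


def action_policy_payload(name):
    # Body for creating a policy that will hold action descriptions.
    return {"name": name, "kind": "action"}


def action_rule_payload(action_name):
    # Minimal rule declaring a single action inside that policy.
    return {"rule": 'action("%s")' % action_name}


if __name__ == "__main__":
    print(json.dumps(action_policy_payload("my_actions")))
    print(json.dumps(action_rule_payload("disconnectNetwork")))
```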

From harlowja at outlook.com  Mon Sep 14 17:54:04 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Mon, 14 Sep 2015 10:54:04 -0700
Subject: [openstack-dev] [oslo] mitaka summit planning/ideas
Message-ID: <BLU436-SMTP22274C88FEC1DCACCD7AA7AD85D0@phx.gbl>

Hi all oslo-fans (and others),

Just thought I'd let everyone interested know about:

https://etherpad.openstack.org/p/mitaka-oslo-summit-planning

Please feel free to add any thoughts/ideas for sessions on there!

:-)

-Josh


From rlooyahoo at gmail.com  Mon Sep 14 18:20:48 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Mon, 14 Sep 2015 14:20:48 -0400
Subject: [openstack-dev] [ironic] weekly subteam status report
Message-ID: <CA+5K_1FqY6sVqNhz1DsRahOUR9EYmrCijzg6f2iHO67N6DwEXw@mail.gmail.com>

Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
============
As of Mon, Sep 14 (diff with Sep 7)
- Open: 142 (+4). 11 new (+1), 48 in progress, 0 critical, 11 high and 8
incomplete (-1)
- Nova bugs with Ironic tag: 23. 0 new, 0 critical, 1 high

dtantsur didn't have time to ping people about their in-progress bugs, help
appreciated


Neutron/Ironic work (jroll)
====================
getting bumped to M... pretty risky at this point, patches aren't ready, no
testing, etc :(


Oslo (lintan)
==========
Dan Smith thinks it would be good if we finish migrating to
oslo.versionedobjects so that he doesn't break our code :) (rloo)


Doc (pshige)
==========
email discussion about re-organizing the install guide (rloo)


Testing/Quality (jlvillal)
=================
(Looking for more people interested in testing)
https://wiki.openstack.org/wiki/Ironic/Quality

- raghu has volunteered to investigate other projects' functional testing,
for example ironic-inspector, nova-client, and nova
- lehka is looking into working on python-ironicclient functional testing
using mimic. But there is a requirements freeze at the moment, so can't add
mimic to global-requirements


Inspector (dtantsur)
===============
- python-ironic-inspector-client 1.1.0 released
- still a lot of stuff to land for liberty :(


Bifrost (TheJulia)
=============
- Minor cleanup in progress, hopefully cutting initial release this week.


webclient (krotscheck / betherly)
=========================
- Discussions underway with horizon re how we can implement the existing UI
in a Horizon panel - will require python API wrapper to be written
- Once ironic and CORS are successfully working on the machine and
connecting to the webclient, we can bypass the add-cloud screen and work
can begin on the actual dashboard


Drivers
======

iRMC (naohirot)
---------------------
Status: Active (soliciting the core team's spec review for Ironic 4.2.0)
- New boot driver interface for iRMC drivers
    - bp/new-boot-interface

Status: Reactive
- Enhance Power Interface for Soft Reboot and NMI
    - bp/enhance-power-interface-for-soft-reboot-and-nmi

Status: Reactive
- iRMC out of band inspection
    - bp/ironic-node-properties-discovery

........

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

From kurt.f.martin at hpe.com  Mon Sep 14 18:21:36 2015
From: kurt.f.martin at hpe.com (Martin, Kurt Frederick (ESSN Storage MSDU))
Date: Mon, 14 Sep 2015 18:21:36 +0000
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <297F9F867E2AE6468F458AFF0DD74EC116ADE51E@G9W0741.americas.hpqcorp.net>

Thanks Mike for the great leadership over the last few cycles.  The Cinder community made great strides forward with your guidance and contributions.
~Kurt

-----Original Message-----
From: Mike Perez [mailto:thingee at gmail.com] 
Sent: Monday, September 14, 2015 9:16 AM
To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [cinder] PTL Non-Candidacy

Hello all,

I will not be running for Cinder PTL this next cycle. Each cycle I ran was for a reason [1][2], and the Cinder team should feel proud of our
accomplishments:

* Spearheading the Oslo work to allow *all* OpenStack projects to have their database being independent of services during upgrades.
* Providing quality to OpenStack operators and distributors with over
60 accepted block storage vendor drivers with reviews and enforced CI [3].
* Helping other projects with third party CI for their needs.
* Being a welcoming group to new contributors. As a result we grew greatly [4]!
* Providing documentation for our work! We did it for Kilo [5], and I was very proud to see the team has already started doing this on their own to prepare for Liberty.

I would like to thank this community for making me feel accepted in 2010. I would like to thank John Griffith for starting the Cinder project, and empowering me to lead the project through these couple of cycles.

With the community's continued support I do plan on continuing my efforts, but focusing cross project instead of just Cinder. The accomplishments above are just some of the things I would like to help others with to make OpenStack as a whole better.


[1] - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
[3] - http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
[4] - http://thing.ee/cinder/active_contribs.png
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7

--
Mike Perez



From e0ne at e0ne.info  Mon Sep 14 18:26:36 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Mon, 14 Sep 2015 21:26:36 +0300
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CA+WE0_7h9C-Q3vkA4qBbqMC+R0EKu=eYXMfPYc-gcMqRrjW4dg@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
 <55F70184.3090601@hp.com>
 <CA+WE0_7h9C-Q3vkA4qBbqMC+R0EKu=eYXMfPYc-gcMqRrjW4dg@mail.gmail.com>
Message-ID: <CAGocpaFvHgE2xgHT-ZZBbog8o+XfhRvH+a3H2bAM-TGN0F2pbA@mail.gmail.com>

Mike,

Thank you for doing this hard work!


Regards,
Ivan Kolodyazhny

On Mon, Sep 14, 2015 at 8:27 PM, Patrick East <patrick.east at purestorage.com>
wrote:

> Mike, you've been an awesome help for me getting started in Cinder. Thanks
> for all your hard work as PTL!
>
> -Patrick
>
> On Mon, Sep 14, 2015 at 10:19 AM, Walter A. Boring IV <
> walter.boring at hp.com> wrote:
>
>> Thanks for your leadership and service Mike.   You've done a great job!
>>
>> Walt
>>
>>> Hello all,
>>>
>>> I will not be running for Cinder PTL this next cycle. Each cycle I ran
>>> was for a reason [1][2], and the Cinder team should feel proud of our
>>> accomplishments:
>>>
>>> * Spearheading the Oslo work to allow *all* OpenStack projects to have
>>> their database being independent of services during upgrades.
>>> * Providing quality to OpenStack operators and distributors with over
>>> 60 accepted block storage vendor drivers with reviews and enforced CI
>>> [3].
>>> * Helping other projects with third party CI for their needs.
>>> * Being a welcoming group to new contributors. As a result we grew
>>> greatly [4]!
>>> * Providing documentation for our work! We did it for Kilo [5], and I
>>> was very proud to see the team has already started doing this on their
>>> own to prepare for Liberty.
>>>
>>> I would like to thank this community for making me feel accepted in
>>> 2010. I would like to thank John Griffith for starting the Cinder
>>> project, and empowering me to lead the project through these couple of
>>> cycles.
>>>
>>> With the community's continued support I do plan on continuing my
>>> efforts, but focusing cross project instead of just Cinder. The
>>> accomplishments above are just some of the things I would like to help
>>> others with to make OpenStack as a whole better.
>>>
>>>
>>> [1] -
>>> http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
>>> [2] -
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
>>> [3] -
>>> http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
>>> [4] - http://thing.ee/cinder/active_contribs.png
>>> [5] -
>>> https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7
>>>
>>> --
>>> Mike Perez
>>>

From flavio at redhat.com  Mon Sep 14 18:29:39 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Mon, 14 Sep 2015 20:29:39 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <55F6FEA8.8010802@inaugust.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <55F6FEA8.8010802@inaugust.com>
Message-ID: <20150914182939.GA9301@redhat.com>

On 14/09/15 19:06 +0200, Monty Taylor wrote:
>On 09/14/2015 02:41 PM, Flavio Percoco wrote:
>>On 14/09/15 08:10 -0400, Doug Hellmann wrote:
[snip]
>>>The task API is also not widely deployed, so its adoption for DefCore
>>>is problematic. If we provide a clear technical direction that this
>>>API is preferred, that may overcome the lack of adoption, but the
>>>current task API seems to have technical issues that make it
>>>fundamentally unsuitable for DefCore consideration. While the task
>>>API addresses the problem of a denial of service, and includes
>>>useful features such as processing of the image during import, it
>>>is not strongly enough defined in its current form to be interoperable.
>>>Because it's a generic API, the caller must know how to fully
>>>construct each task, and know what task types are supported in the
>>>first place. There is only one "import" task type supported in the
>>>Glance code repository right now, but it is not clear that "import"
>>>always uses the same arguments, or interprets them in the same way.
>>>For example, the upstream documentation [1] describes a task that
>>>appears to use a URL as source, while the Rackspace documentation [2]
>>>describes a task that appears to take a swift storage location.
>>>I wasn't able to find JSONSchema validation for the "input" blob
>>>portion of the task in the code [3], though that may happen down
>>>inside the task implementation itself somewhere.
>>
>>
>>The above sounds pretty accurate as there's currently just 1 flow that
>>can be triggered (the import flow) and that accepts an input, which is
>>a json. As I mentioned above, I don't believe tasks should be part of
>>the public API and this is yet another reason why I think so. The
>>tasks API is not well defined as there's, currently, no good way to
>>define the expected input in a backwards compatible way and to provide
>>all the required validation.
>>
>>I like having tasks in Glance, despite my comments above - but I like
>>them for cloud usage and not public usage.
>
>I like them much more if they're not public facing. They're not BAD - 
>they just don't have an end-user semantic.

++

>>As far as Rackspace's docs/endpoint goes, I'd assume this is an error
>>in their documentation since Glance currently doesn't allow[0] for
>>swift URLs to be imported (not even in juno[1]).
>>
>>[0]
>>http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py#n84
>>
>>[1]
>>http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py?h=stable/juno#n83
>
>Nope. You MUST upload the image to swift and then provide a swift 
>location. (Infra does this in production, I promise it's the only 
>thing that works)

mmh. That's not vanilla Glance, as you can see from the link pointing
to the code in juno (unless it's a previous version w/ different code
that I don't know of). This[0] is the commit where the first version of
the import task was added - it didn't use taskflow back then. I'll
leave someone from Rackspace to chime in here since I don't have any
other context.

Regardless, the above is the exact example of why I believe the task
API should not be considered the end-user, supported, way to upload
images. Requiring users to have an external (or in-cloud), public (or
accessible from the cloud) source where images are going to be
downloaded from, in order to create an image, is far from ideal and
usable. AFAICR, it was never intended as the recommended upload
workflow but as an additional one, which should, arguably, be used only
by admins.

[0] https://git.openstack.org/cgit/openstack/glance/commit/?id=186991bb9d2b8c568f3e9a0a89744fd6001ec74a

[snip]

Flavio

-- 
@flaper87
Flavio Percoco

From mordred at inaugust.com  Mon Sep 14 18:41:38 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Mon, 14 Sep 2015 20:41:38 +0200
Subject: [openstack-dev] [glance] tasks (following "proposed priorities
 for Mitaka")
In-Reply-To: <D21C4411.21E22%brian.rosmaita@rackspace.com>
References: <D21C4411.21E22%brian.rosmaita@rackspace.com>
Message-ID: <55F714E2.4010707@inaugust.com>

On 09/14/2015 04:58 PM, Brian Rosmaita wrote:
> Apologies for forking the thread, but there was way too much in Doug's
> email (and Flavio's response) and I only want to make a few points about
> tasks.  Please read Doug's original email and Flavio's reply at some
> point, preferably before you read this.
>
> I'm going to limit myself to 4 points.  We'll be discussing Glance tasks
> during the Mitaka summit design session, so we'll be able to go into
> details and determine the future of tasks there.  But I would like to make
> these points before discussion gets too far along.
>
>
> (1) DefCore
> So I see DefCore as a two-way street, in which the OpenStack projects need
> to be aware of what's going on with the DefCore process, and the DefCore
> people are paying attention to what's going on in the projects.
>
> Glance tasks are not a recent innovation, they date back at least to the
> Havana summit, April 15-18, 2013.  There was a session on "Getting Glance
> Ready for Public Clouds" [1], resulting in a blueprint for "New Upload
> Download Workflow for Public Glance" [2], which was filed on 2013-04-22.
>
> This was pre-specs days, but there was lots of information about the
> design direction this was taking posted on the wiki (see [3] and [4],
> which contain links to most of the stuff).
>
> My point is simply that the need for tasks and the discussion around their
> development and structure was carried out in the open via the standard
> OpenStack practices, and if Glance was headed in a
> weird/nonstandard/deviant direction, some guidance would have been useful
> at that point.  (I'm not implying that such guidance is not useful now, of
> course.)

I'm very sorry that I did not participate in that. I apologize heartily.

honestly, I do not think we realized how broken this system was until 
Infra started trying to move to creating our base images once and 
uploading them to each of our cloud regions, which then resulted in me 
needing to write an entire compatibility library because the
approaches were so radically and intractably different between the two.

It's possible that humans could have picked up on the pain this would 
cause for the user if more of us had been in the sessions - but it's 
also possible we would have missed it. I TOTALLY understand the logic 
and reasoning - sometimes things that make perfect sense on paper just 
flat fall over when they see the light of day. That's not a condemnation 
of the people who worked on it - we all try things, and sometimes they 
work and sometimes they do not.

This did not work, and it's time we all acknowledge that and make plans 
to move forward.

>
> (2) Tasks as a Public API
> Well, that has been the whole point throughout the entire discussion.  See
> [1]-[4].
>
>
> (3) Tasks and Deployers
> I participated in some of the DefCore discussions around image upload that
> took place before the Liberty summit.  It just so happened that I was on
> the program to give a talk about Glance tasks, and I left room for
> discussion about (a) whether two image upload workflows are confusing for
> end users, and (b) whether the flexibility of tasks (e.g., the "input"
> element defined as a JSON blob) is actually a problem.  (You can look at
> the talk [5] or my slides [6] to see that I didn't pull any punches about
> this.)
>
> The feedback I got from the deployers present was that they weren't
> worried about (a), and that they liked (b) because it enabled them to make
> customizations easily for their particular situation.

Sure. But it's not the deployers that I care about on this point. I'm 
sure it's awesome for them.

As a person _using_ this I can tell you it is utterly living hell. It 
is, without a doubt, and with no close rivals, the hardest thing to 
interact with in all of OpenStack.

> I'm not saying that there's no other way to do this -- e.g., you could do
> all sorts of alternative workflows and configurations in the "regular"
> upload process -- but the feedback I got can be summarized like this:
> Given the importance of a properly-functioning Glance for normal cloud
> operations, it is useful to have one upload/download workflow that is
> locked down and you don't have to worry about, and a completely different
> workflow that you can expose to end users and tinker with as necessary.

IMHO - a cloud that does not allow me to upload images is not a usable 
cloud.

A cloud that requires me to upload images differently than another cloud 
is a hardship on the users.

A cloud that makes the user know the image format of the cloud is a 
hardship on the users, especially when no tool exists in any 
distro that can actually produce the image format in 
question. (yup, I'm just going to sneak that one in there)

NOW - I think that the task API and the image conversion tooling 
itself, if it's a behind-the-scenes kind of thing, is potentially a 
nice thing.

If "glance import-from http://example.com/my-image.qcow2" always worked, 
and in the back end generated a task with the task workflow, and one of 
the task workflows that a deployer could implement was one to do 
conversions to the image format of the cloud provider's choice, that 
would be teh-awesome. It's still a bit annoying to me that I, as a user, 
need to come up with a place to put the image so that it can be 
imported, but honestly, I'll take it. It's not _that_ hard of a problem.
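For contrast, this is roughly what a user must hand-construct today to 
use the task API directly. The input keys below follow the upstream 
import-task documentation and, as argued above, are not guaranteed to 
be the same on any given cloud - treat them as illustrative, not as an 
interoperable contract:

```python
import json

# Sketch of a POST /v2/tasks body for the "import" task type. Key names
# follow the upstream docs; they are illustrative only, since the whole
# complaint here is that the contents of "input" are deployer-defined.
def import_task_body(source_url, image_name):
    return {
        "type": "import",
        "input": {
            "import_from": source_url,
            "import_from_format": "qcow2",
            "image_properties": {"name": image_name},
        },
    }


if __name__ == "__main__":
    print(json.dumps(import_task_body(
        "http://example.com/my-image.qcow2", "my-image"), indent=2))
```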

>
> (4) Interoperability
> In general, this is a worthy goal.  The OpenStack cloud platform, however,
> is designed to handle many different deployment scenarios from small
> private clouds to enormous public clouds, and allowing access to the same
> API calls in all situations is not desirable.  A small academic
> department, for example, may allow regular end users to make some calls
> usually reserved for admins, whereas in a public cloud, this would be a
> remarkably bad idea.  So if DefCore is going to enforce interoperability
> via tests, it should revise the tests to meet the most restrictive
> reasonable case.  Image upload is a good example, as some cloud operators
> do not want to expose this operation to end users, period, and for a
> myriad of reasons (security, user frustration when the image from some
> large non-open-source cloud doesn't boot, etc.).

Those cloud providers need to get out of the game, at least in terms of 
being able to call themselves OpenStack clouds. I respect that choice, 
but it doesn't mean that we have to make bad software just because they 
have made that choice.

> With respect to tasks: the cloud provider specifies the exact content of
> the 'input' element.  It's going to differ from deployment to deployment.

Any time the exact content for a REST API call is going to differ from 
cloud to cloud is a time that we have failed as a community. You are 
right - this is the case today. It's why the task API is not suitable 
for an end user API - it requires the end user to have specific 
foreknowledge of deployer choices that are not exposed or discoverable 
via the OpenStack API.

> But that isn't significantly different from different clouds having
> different flavors with different capabilities.

Untrue - because the 'nova flavor-list' shows you the qualities of 
flavors. You can introspect using normal OpenStack tools what the 
different flavors are.

That flavors have names is at best a quirky fun game.

> You can't reasonably
> expect that "nova boot --flavor economy-flavor --image optimized-centos
> myserver" is going to work in all OpenStack clouds, i.e., you need to
> figure out the appropriate values to replace 'economy-flavor' and
> 'optimized-centos' in the boot call.

Right. This is the reason we don't use flavor names in any of Infra but 
instead specify flavors by their min-ram. however, even in the case 
where we can't purely use parameters and need to use filters on names 
(like performance vs. non-performance flavors) we can visually inspect 
the descriptive names via nova flavor-list.
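The select-by-attribute approach described above can be sketched like 
this (the flavor data is made up for illustration; real data would come 
from `nova flavor-list` or the compute API):

```python
# Sketch of selecting a flavor by an introspectable attribute (RAM)
# rather than by its deployer-chosen name.
def pick_flavor(flavors, min_ram):
    # Smallest flavor that satisfies the RAM requirement, or None.
    candidates = [f for f in flavors if f["ram"] >= min_ram]
    return min(candidates, key=lambda f: f["ram"]) if candidates else None


if __name__ == "__main__":
    flavors = [
        {"name": "economy-flavor", "ram": 2048},
        {"name": "performance-1", "ram": 8192},
        {"name": "performance-2", "ram": 16384},
    ]
    print(pick_flavor(flavors, min_ram=8192)["name"])  # performance-1
```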

This is not possible with the input to the task-create command. You have 
to know things. In fact, I think you have to learn the magic json to 
pass to the command line tool by reading a blog post. That's not ok.

> I think the 'input' element is
> similar.  The initial discussion was that it should be defined via
> documentation as we saw how tasks would be used in real life.  But there's
> no reason why it must be documentation only.  It would be easy to make a
> schema available.

The thing is - I totally hear all of the places where this _could_ be 
different per cloud, but in reality almost all of the clouds that are 
out there are actually doing almost all of these things in a way that is 
identical, or where there are differences there are sanely discoverable 
differences. Where there are legitimate places for cloud providers to 
make different choices in implementation, which I think are important 
for our community - we MUST work to make the user interface not reflect 
those.

Glance is currently the main place where the vendor choices are not able 
to be sorted out in a sane manner by the end user. Luckily for people 
who care about interop there is exactly one deployment that exposes the 
task API, so the digging to figure out what random values of JSON a user 
has to pass to the magic incantation has only had to be done once.

I would recommend that, since there is only one public instance of this, 
we put the genie back in the bottle, make "glance import-from" spawn off 
a task in the background and make the task API be marked admin-only in 
the default policy.json. This still allows for the deployer to have the 
flexibility to use the plugin system, which is awesome ... and further 
empowers the deployer to make backend choices that they want to but to 
hide the complexities of that from their users (so that users do not 
have to know that these clouds take qcow2, these clouds take VHD and 
these clouds take RAW)
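For concreteness, the policy.json change being suggested would be on 
the order of the fragment below (rule names assumed from Glance's 
default policy file of this era; verify against your deployment before 
use):

```json
{
    "add_task": "role:admin",
    "modify_task": "role:admin"
}
```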

> I tried to be as concise as possible, but now my email has gotten too long!

Oh golly. I don't think I've ever written a short one - sorry for 
increasing the total aggregate length.

> cheers,
> brian
>
> [1]
> https://etherpad.openstack.org/p/havana-getting-glance-ready-for-public-clo
> uds
> [2] https://blueprints.launchpad.net/glance/+spec/upload-download-workflow
> [3] https://wiki.openstack.org/wiki/Glance-tasks-api
> [4] https://wiki.openstack.org/wiki/Glance-tasks-api-product
> [5] http://youtu.be/ROXrjX3pdqw
> [6] http://www.slideshare.net/racker_br/glance-tasksvancouver2015
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From tim at styra.com  Mon Sep 14 18:58:53 2015
From: tim at styra.com (Tim Hinrichs)
Date: Mon, 14 Sep 2015 18:58:53 +0000
Subject: [openstack-dev] [congress] IRC hangout
In-Reply-To: <3c5905f3d73949f99aaa2007167b6f05@Hq1wp-exmb11.corp.brocade.com>
References: <3c5905f3d73949f99aaa2007167b6f05@Hq1wp-exmb11.corp.brocade.com>
Message-ID: <CAJjxPADe8CiwGV9CDEAOFE-UepKxR6=hLqLFXd_7iozn0nR9vA@mail.gmail.com>

Hi Shiv,

Our IRC is #congress.

Tim

On Mon, Sep 14, 2015 at 10:45 AM Shiv Haris <sharis at brocade.com> wrote:

> Hi,
>
>
>
> What is the IRC channel where the congress folks hang out? I tried
> #openstack-congress on freenode but it seems not to be correct.
>
>
>
> -Shiv
>
>
>
>
>
>
>
> *From:* Su Zhang [mailto:westlifezs at gmail.com]
> *Sent:* Tuesday, August 25, 2015 5:17 PM
> *To:* openstack-dev
> *Subject:* [openstack-dev] [congress] simulation example in the doc not
> working
>
>
>
> Hello,
>
>
>
> In simulation examples at
> http://congress.readthedocs.org/en/latest/enforcement.html?highlight=simulation
> ,
>
> the "action_policy" is replaced with "null". However, "null" is not
> considered as a valid policy as I keep receiving 400 errors.
>
> Could someone let me know the easiest way to get around this error?
>
> How to create a simple action policy just for test purpose as of now?
>
>
>
> Thanks,
>
>
>
> --
>
> Su Zhang
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/0e471736/attachment.html>

From scott.dangelo at hpe.com  Mon Sep 14 18:56:26 2015
From: scott.dangelo at hpe.com (D'Angelo, Scott)
Date: Mon, 14 Sep 2015 18:56:26 +0000
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <2960F1710CFACC46AF0DBFEE85B103BB3650E1AD@G9W0723.americas.hpqcorp.net>

Thanks Mike. You've done a great job, including making contributors feel welcome.

-----Original Message-----
From: Mike Perez [mailto:thingee at gmail.com] 
Sent: Monday, September 14, 2015 10:16 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [cinder] PTL Non-Candidacy

Hello all,

I will not be running for Cinder PTL this next cycle. Each cycle I ran was for a reason [1][2], and the Cinder team should feel proud of our
accomplishments:

* Spearheading the Oslo work to allow *all* OpenStack projects to have their databases be independent of services during upgrades.
* Providing quality to OpenStack operators and distributors with over
60 accepted block storage vendor drivers with reviews and enforced CI [3].
* Helping other projects with third party CI for their needs.
* Being a welcoming group to new contributors. As a result we grew greatly [4]!
* Providing documentation for our work! We did it for Kilo [5], and I was very proud to see the team has already started doing this on their own to prepare for Liberty.

I would like to thank this community for making me feel accepted in 2010. I would like to thank John Griffith for starting the Cinder project, and empowering me to lead the project through these couple of cycles.

With the community's continued support I do plan on continuing my efforts, but focusing cross-project instead of just on Cinder. The accomplishments above are just some of the things I would like to help others with to make OpenStack as a whole better.


[1] - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
[3] - http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
[4] - http://thing.ee/cinder/active_contribs.png
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7

--
Mike Perez

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From aurlapova at mirantis.com  Mon Sep 14 19:19:44 2015
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Mon, 14 Sep 2015 22:19:44 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for fuel-qa(devops)
	core
Message-ID: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>

Folks,
I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.

Denis spent three months on the Fuel BugFix team; his velocity was between
150-200% per week. Thanks to his efforts we have beaten those old issues
with time sync and Ceph's clock skew. Denis's ideas constantly help us
improve our functional system test suite.

Fuelers, please vote for Denis!

Nastya.

[1]
http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa

From asledzinskiy at mirantis.com  Mon Sep 14 19:25:15 2015
From: asledzinskiy at mirantis.com (Andrey Sledzinskiy)
Date: Mon, 14 Sep 2015 22:25:15 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
	fuel-qa(devops) core
In-Reply-To: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
Message-ID: <CAH66fF8HqZstvSsPcMr4TkLYW6E_LMj9G5tzGOuovRnqHqJRKg@mail.gmail.com>

+1

On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <aurlapova at mirantis.com
> wrote:

> Folks,
> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>
> Denis spent three months on the Fuel BugFix team; his velocity was between
> 150-200% per week. Thanks to his efforts we have beaten those old issues
> with time sync and Ceph's clock skew. Denis's ideas constantly help us
> improve our functional system test suite.
>
> Fuelers, please vote for Denis!
>
> Nastya.
>
> [1]
> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>
> --
> You received this message because you are subscribed to the Google Groups
> "fuel-core-team" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to fuel-core-team+unsubscribe at mirantis.com.
> For more options, visit https://groups.google.com/a/mirantis.com/d/optout.
>



-- 
Thanks,
Andrey Sledzinskiy
QA Engineer,
Mirantis, Kharkiv

From corvus at inaugust.com  Mon Sep 14 19:30:28 2015
From: corvus at inaugust.com (James E. Blair)
Date: Mon, 14 Sep 2015 12:30:28 -0700
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <55F6E762.5090406@openstack.org> (Thierry Carrez's message of
 "Mon, 14 Sep 2015 17:27:30 +0200")
References: <1442232202-sup-5997@lrrr.local> <55F6E762.5090406@openstack.org>
Message-ID: <87wpvsbxyj.fsf@meyer.lemoncheese.net>

Thierry Carrez <thierry at openstack.org> writes:

> Doug Hellmann wrote:
>> [...]
>> 1. Resolve the situation preventing the DefCore committee from
>>    including image upload capabilities in the tests used for trademark
>>    and interoperability validation.
>> 
>> 2. Follow through on the original commitment of the project to
>>    provide an image API by completing the integration work with
>>    nova and cinder to ensure V2 API adoption.
>> [...]
>
> Thanks Doug for taking the time to dive into Glance and to write this
> email. I agree with your top two priorities as being a good summary of
> what the "rest of the community" expects the Glance leadership to focus
> on in the very short term.

Agreed and thanks.  I'm also excited by the conversation this has
prompted and am optimistic that we will have agreement at the summit on
a way forward.

-Jim


From doug at doughellmann.com  Mon Sep 14 19:51:24 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 15:51:24 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <20150914124100.GC10859@redhat.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
Message-ID: <1442250235-sup-1646@lrrr.local>

Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> >
> >After having some conversations with folks at the Ops Midcycle a
> >few weeks ago, and observing some of the more recent email threads
> >related to glance, glance-store, the client, and the API, I spent
> >last week contacting a few of you individually to learn more about
> >some of the issues confronting the Glance team. I had some very
> >frank, but I think constructive, conversations with all of you about
> >the issues as you see them. As promised, this is the public email
> >thread to discuss what I found, and to see if we can agree on what
> >the Glance team should be focusing on going into the Mitaka summit
> >and development cycle and how the rest of the community can support
> >you in those efforts.
> >
> >I apologize for the length of this email, but there's a lot to go
> >over. I've identified 2 high priority items that I think are critical
> >for the team to be focusing on starting right away in order to use
> >the upcoming summit time effectively. I will also describe several
> >other issues that need to be addressed but that are less immediately
> >critical. First the high priority items:
> >
> >1. Resolve the situation preventing the DefCore committee from
> >   including image upload capabilities in the tests used for trademark
> >   and interoperability validation.
> >
> >2. Follow through on the original commitment of the project to
> >   provide an image API by completing the integration work with
> >   nova and cinder to ensure V2 API adoption.
> 
> Hi Doug,
> 
> First and foremost, I'd like to thank you for taking the time to dig
> into these issues, and for reaching out to the community seeking
> information and a better understanding of what the real issues are. I
> can imagine how much time you had to dedicate to this and I'm glad you
> did.
> 
> Now, to your email: I very much agree with the priorities you
> mentioned above and I'd like whoever wins Glance's PTL election to
> bring focus back on them.
> 
> Please, find some comments in-line for each point:
> 
> >
> >I. DefCore
> >
> >The primary issue that attracted my attention was the fact that
> >DefCore cannot currently include an image upload API in its
> >interoperability test suite, and therefore we do not have a way to
> >ensure interoperability between clouds for users or for trademark
> >use. The DefCore process has been long, and at times confusing,
> >even to those of us following it sort of closely. It's not entirely
> >surprising that some projects haven't been following the whole time,
> >or aren't aware of exactly what the whole thing means. I have
> >proposed a cross-project summit session for the Mitaka summit to
> >address this need for communication more broadly, but I'll try to
> >summarize a bit here.
> 
> +1
> 
> I think it's quite sad that some projects, especially those considered
> part of the `starter-kit:compute`[0], don't follow closely what's
> going on in DefCore. I personally consider this a task PTLs should
> incorporate into their duties. I'm glad you proposed such a session; I
> hope it'll help raise awareness of this effort and help move things
> forward on that front.

Until fairly recently a lot of the discussion was around process
and priorities for the DefCore committee. Now that those things are
settled, and we have some approved policies, it's time to engage
more fully.  I'll be working during Mitaka to improve the two-way
communication.

> 
> >
> >DefCore is using automated tests, combined with business policies,
> >to build a set of criteria for allowing trademark use. One of the
> >goals of that process is to ensure that all OpenStack deployments
> >are interoperable, so that users who write programs that talk to
> >one cloud can use the same program with another cloud easily. This
> >is a *REST API* level of compatibility. We cannot insert cloud-specific
> >behavior into our client libraries, because not all cloud consumers
> >will use those libraries to talk to the services. Similarly, we
> >can't put the logic in the test suite, because that defeats the
> >entire purpose of making the APIs interoperable. For this level of
> >compatibility to work, we need well-defined APIs, with a long support
> >period, that work the same no matter how the cloud is deployed. We
> >need the entire community to support this effort. From what I can
> >tell, that is going to require some changes to the current Glance
> >API to meet the requirements. I'll list those requirements, and I
> >hope we can discuss them to a degree that ensures everyone understands
> >them. I don't want this email thread to get bogged down in
> >implementation details or API designs, though, so let's try to keep
> >the discussion at a somewhat high level, and leave the details for
> >specs and summit discussions. I do hope you will correct any
> >misunderstandings or misconceptions, because unwinding this as an
> >outside observer has been quite a challenge and it's likely I have
> >some details wrong.
> >
> >As I understand it, there are basically two ways to upload an image
> >to glance using the V2 API today. The "POST" API pushes the image's
> >bits through the Glance API server, and the "task" API instructs
> >Glance to download the image separately in the background. At one
> >point apparently there was a bug that caused the results of the two
> >different paths to be incompatible, but I believe that is now fixed.
> >However, the two separate APIs each have different issues that make
> >them unsuitable for DefCore.
> >
> >The DefCore process relies on several factors when designating APIs
> >for compliance. One factor is the technical direction, as communicated
> >by the contributor community -- that's where we tell them things
> >like "we plan to deprecate the Glance V1 API". In addition to the
> >technical direction, DefCore looks at the deployment history of an
> >API. They do not want to require deploying an API if it is not seen
> >as widely usable, and they look for some level of existing adoption
> >by cloud providers and distributors as an indication of that the
> >API is desired and can be successfully used. Because we have multiple
> >upload APIs, the message we're sending on technical direction is
> >weak right now, and so they have focused on deployment considerations
> >to resolve the question.
> 
> The task upload process you're referring to is the one that uses the
> `import` task, which allows you to download an image from an external
> source, asynchronously, and import it in Glance. This is the old
> `copy-from` behavior that was moved into a task.
> 
> The "fun" thing about this - and I'm sure other folks in the Glance
> community will disagree - is that I don't consider tasks to be a
> public API. That is to say, I would expect tasks to be an internal API
> used by cloud admins to perform some actions (bsaed on its current
> implementation). Eventually, some of these tasks could be triggered
> from the external API but as background operations that are triggered
> by the well-known public ones and not through the task API.

Does that mean it's more of an "admin" API?

> 
> Ultimately, I believe end-users of the cloud simply shouldn't have to
> care about what tasks are or aren't, and more importantly, as you
> mentioned later in the email, tasks make clouds non-interoperable. I'd
> be pissed if my public image service asked me to learn about tasks in
> order to use the service.

It would be OK if a public API set up to do a specific task returned a
task ID that could be used with a generic task API to check status, etc.
So the idea of tasks isn't completely bad, it's just too vague as it's
exposed right now.
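To illustrate the pattern being described, here's a minimal Python sketch (hypothetical names, not Glance's actual code) of a purpose-specific upload call that spawns a task behind the scenes and returns a task ID, with a generic task API used only to check status:

```python
import uuid


class TaskStore:
    """In-memory stand-in for a task database."""

    def __init__(self):
        self._tasks = {}

    def create(self, task_type, task_input):
        # The service, not the caller, builds the task record.
        task_id = str(uuid.uuid4())
        self._tasks[task_id] = {
            "id": task_id,
            "type": task_type,
            "input": task_input,
            "status": "pending",
        }
        return task_id

    def get_status(self, task_id):
        # Generic task API: status lookup only, no free-form task creation.
        return self._tasks[task_id]["status"]


def import_image(store, from_url, disk_format="qcow2"):
    """Well-defined public call: fixed arguments, returns a task ID.

    The caller never hand-constructs a task body, so the public
    contract stays interoperable across deployments.
    """
    task_input = {"import_from": from_url,
                  "import_from_format": disk_format}
    return store.create("import", task_input)


store = TaskStore()
task_id = import_image(store, "http://example.com/cirros.qcow2")
print(store.get_status(task_id))  # -> pending
```

The point of the sketch is the division of labor: the purpose-specific call owns the arguments, and the task API is reduced to a status-polling surface.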

> Long story short, I believe the only upload API that should be
> considered is the one that uses HTTP and, eventually, to bring
> compatibility with v1 as far as the copy-from behavior goes, Glance
> could bring back that behavior on top of the task (just dropping this
> here for the sake of discussion and interoperability).
> 
> >The POST API is enabled in many public clouds, but not consistently.
> >In some clouds like HP, a tenant requires special permission to use
> >the API. At least one provider, Rackspace, has disabled the API
> >entirely. This is apparently due to what seems like a fair argument
> >that uploading the bits directly to the API service presents a
> >possible denial of service vector. Without arguing the technical
> >merits of that decision, the fact remains that without a strong
> >consensus from deployers that the POST API should be publicly and
> >consistently available, it does not meet the requirements to be
> >used for DefCore testing.
> 
> This is definitely unfortunate. I believe a good step forward for this
> discussion would be to create a list of issues related to uploading
> images and see how those issues can be addressed. The result of that
> work might be a recommendation not to make that endpoint public but,
> again, without going through the issues it'll be hard to understand
> how we can improve this situation. I expect most of these issues to
> have a security impact.

A report like that would be good to have. Can someone on the Glance team
volunteer to put it together?

> 
> >The task API is also not widely deployed, so its adoption for DefCore
> >is problematic. If we provide a clear technical direction that this
> >API is preferred, that may overcome the lack of adoption, but the
> >current task API seems to have technical issues that make it
> >fundamentally unsuitable for DefCore consideration. While the task
> >API addresses the problem of a denial of service, and includes
> >useful features such as processing of the image during import, it
> >is not strongly enough defined in its current form to be interoperable.
> >Because it's a generic API, the caller must know how to fully
> >construct each task, and know what task types are supported in the
> >first place. There is only one "import" task type supported in the
> >Glance code repository right now, but it is not clear that "import"
> >always uses the same arguments, or interprets them in the same way.
> >For example, the upstream documentation [1] describes a task that
> >appears to use a URL as source, while the Rackspace documentation [2]
> >describes a task that appears to take a swift storage location.
> >I wasn't able to find JSONSchema validation for the "input" blob
> >portion of the task in the code [3], though that may happen down
> >inside the task implementation itself somewhere.
> 
> 
> The above sounds pretty accurate, as there's currently just one flow
> that can be triggered (the import flow), and it accepts an input,
> which is a JSON blob. As I mentioned above, I don't believe tasks
> should be part of the public API, and this is yet another reason why.
> The tasks API is not well defined: there's currently no good way to
> define the expected input in a backwards-compatible way and to
> provide all the required validation.
> 
> I like having tasks in Glance, despite my comments above - but I like
> them for cloud usage and not public usage.
> 
> As far as Rackspace's docs/endpoint go, I'd assume this is an error
> in their documentation, since Glance currently doesn't allow[0] swift
> URLs to be imported (not even in juno[1]).
> 
> [0] http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py#n84
> [1] http://git.openstack.org/cgit/openstack/glance/tree/glance/common/scripts/utils.py?h=stable/juno#n83
> 
> >Tasks also come from plugins, which may be installed differently
> >based on the deployment. This is an interesting approach to creating
> >API extensions, but isn't discoverable enough to write interoperable
> >tools against. Most of the other projects are starting to move away
> >from supporting API extensions at all because of interoperability
> >concerns they introduce. Deployers should be able to configure their
> >clouds to perform well, but not to behave in fundamentally different
> >ways. Extensions are just that, extensions. We can't rely on them
> >for interoperability testing.
> 
> This is, indeed, an interesting interpretation of what tasks are for.
> I'd probably just blame us (the Glance team) for not communicating
> properly what tasks are meant to be. I don't believe tasks are a way
> to extend the *public* API, and I'd be curious to know if others see
> them that way. I fully agree that this breaks interoperability and, as
> I've mentioned a couple of times in this reply already, I don't even
> think tasks should be part of the public API.

Whether they are intended to be an extension mechanism, they
effectively are right now, as far as I can tell.

> 
> But again, we did a very poor job communicating this[0]. Nonetheless,
> for the sake of providing enough information about tasks and sources
> to read from, I'd also like to point out the original blueprint[1],
> some discussions during the Havana summit[2], the wiki page for
> tasks[3] and a patch I just reviewed today (thanks Brian) that
> introduces docs for tasks[4]. These links already show some
> differences in what tasks are.
> 
> [0] http://git.openstack.org/cgit/openstack/glance/tree/etc/policy.json?h=stable/juno#n28
> [1] https://blueprints.launchpad.net/glance/+spec/async-glance-workers
> [2] https://etherpad.openstack.org/p/havana-glance-requirements
> [3] https://wiki.openstack.org/wiki/Glance-tasks-api
> [4] https://review.openstack.org/#/c/220166/
> 
> >
> >There is a lot of fuzziness around exactly what is supported for
> >image upload, both in the documentation and in the minds of the
> >developers I've spoken to this week, so I'd like to take a step
> >back and try to work through some clear requirements, and then we
> >can have folks familiar with the code help figure out if we have a
> >real issue, if a minor tweak is needed, or if things are good as
> >they stand today and it's all a misunderstanding.
> >
> >1. We need a strongly defined and well documented API, with arguments
> >   that do not change based on deployment choices. The behind-the-scenes
> >   behaviors can change, but the arguments provided by the caller
> >   must be the same and the responses must look the same. The
> >   implementation can run as a background task rather than receiving
> >   the full image directly, but the current task API is too vaguely
> >   defined to meet this requirement, and IMO we need an entry point
> >   focused just on uploading or importing an image.
> >
> >2. Glance cannot require having a Swift deployment. It's not clear
> >   whether this is actually required now, so if it's not then we're
> >   in a good state.
> 
> This is definitely not the case. Glance doesn't require any specific
> store to be deployed. It does require at least one store other than
> the http one (because that one doesn't support write operations).
> 
> > It's fine to provide an optional way to take
> >   advantage of Swift if it is present, but it cannot be a required
> >   component. There are three separate trademark "programs", with
> >   separate policies attached to them. There is an umbrella "Platform"
> >   program that is intended to include all of the TC approved release
> >   projects, such as nova, glance, and swift. However, there is
> >   also a separate "Compute" program that is intended to include
> >   Nova, Glance, and some others but *not* Swift. This is an important
> >   distinction, because there are many use cases both for distributors
> >   and public cloud providers that do not incorporate Swift for a
> >   variety of reasons. So, we can't have Glance's primary configuration
> >   require Swift and we need to provide tests for the DefCore team
> >   that run without Swift. Duplicate tests that do use Swift are
> >   fine, and might be used for "Platform" compliance tests.
> >
> >3. We need an integration test suite in tempest that fully exercises
> >   the public image API by talking directly to Glance. This applies
> >   to the entire API, not just image uploads. It's fine to have
> >   duplicate tests using the proxy in Nova if the Nova team wants
> >   those, but DefCore should be using tests that talk directly to
> >   the service that owns each feature, without relying on any
> >   proxying. We've already missed the chance to deal with this in
> >   the current DefCore definition, which uses image-related tests
> >   that talk to the Nova proxy [4][5], so we'll have to maintain
> >   the proxy for the required deprecation period. But we won't be
> >   able to consider removing that proxy until we provide alternate
> >   tests for those features that speak directly to Glance. We may
> >   have some coverage already, but I wasn't able to find a task-based
> >   image upload test and there is no "image create" mentioned in
> >   the current draft of capabilities being reviewed [6]. There may
> >   be others missing, so someone more familiar with the feature set
> >   of Glance should do an audit and document what tests are needed
> >   so the work can be split up.
> >
> 
> +1 This should become one of the top priorities for Mitaka (as you
> mentioned at the beginning of this email).
> 
> >4. Once identified and incorporated into the DefCore capabilities
> >   set, the selected API needs to remain stable for an extended
> >   period of time and follow the deprecation timelines defined by
> >   DefCore.  That has implications for the V3 API currently in
> >   development to turn Glance into a more generic artifacts service.
> >   There are a lot of ways to handle those implications, and no
> >   choice needs to be made today, so I only mention it to make sure
> >   it's clear that (a) we must get V2 into shape for DefCore and
> >   (b) when that happens, we will need to maintain V2 even if V3
> >   is finished. We won't be able to deprecate V2 quickly.
> >
> >Now, it's entirely possible that we can meet all of those requirements
> >today, and that would be great. If that's the case, then the problem
> >is just one of clear communication and documentation. I think there's
> >probably more work to be done than that, though.
> 
> 
> There's clearly a communication problem. The fact that this very email
> has been sent out is a sign of that. However, I'd like to say, in a
> very optimistic way, that Glance is not so far away from the expected
> state. There are things to fix, other things to clarify, tons to
> discuss but, IMHO, besides the tempest tests and DefCore, the most
> critical one is the one you mentioned in the following section.
> 
> >
> >[1] http://developer.openstack.org/api-ref-image-v2.html#os-tasks-v2
> >[2] http://docs.rackspace.com/images/api/v2/ci-devguide/content/POST_importImage_tasks_Image_Task_Calls.html#d6e4193
> >[3] http://git.openstack.org/cgit/openstack/glance/tree/glance/api/v2/tasks.py
> >[4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n70
> >[5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/guidelines/2015.07.rst
> >[6] https://review.openstack.org/#/c/213353/
> >
> >II. Complete Cinder and Nova V2 Adoption
> >
> >The Glance team originally committed to providing an Image Service
> >API. Besides our end users, both Cinder and Nova consume that API.
> >The shift from V1 to V2 has been a long road. We're far enough
> >along, and the V1 API has enough issues preventing us from using
> >it for DefCore, that we should push ahead and complete the V2
> >adoption. That will let us properly deprecate and drop V1 support,
> >and concentrate on maintaining V2 for the necessary amount of time.
> >
> >There are a few specs for the work needed in Nova, but that work
> >didn't land in Liberty for a variety of reasons. We need resources
> >from both the Glance and Nova teams to work together to get this
> >done as early as possible in Mitaka to ensure that it actually lands
> >this time. We should be able to schedule a joint session at the
> >summit to have the conversation, and we need to take advantage of
> >that opportunity to ensure the details are fully resolved so that
> >everyone understands the plan.
> 
> Super important point. I'd like people replying to this email to focus
> on what we can do next and not why this hasn't been done. The latter
> will take us down a path that won't be useful at all and will just
> waste everyone's time.
> 
> That said, I fully agree with the above. Last time we talked, John
> Garbutt and Jay Pipes, from the Nova team, raised their hands to help
> out with this effort. From Glance's side, Fei Long Wang and I were
> working on the implementation. To move this forward, following the
> latest plan (which allows this migration to be smoother than our
> original one), we need folks from Glance to raise their hands.
> 
> If I'm not elected PTL, I'm more than happy to help out here, but we
> need someone who can commit to the above right now, and we'll likely
> need a team of at least two people to move this forward in early
> Mitaka.

Right, the work needs to be starting now to ensure the relevant specs
are ready for review and approval, summit discussions can be planned,
etc.

> 
> >The work in Cinder is more complete, but may need to be reviewed
> >to ensure that it is using the API correctly, safely, and efficiently.
> >Again, this is a joint effort between the Glance and Cinder teams
> >to identify any issues and work out a resolution.
> >
> >Part of this work will also be to audit the Glance API documentation,
> >to ensure it accurately reflects what the APIs expect to receive
> >and return. There are reportedly at least a few cases where things
> >are out of sync right now. This will require some coordination with
> >the Documentation team.
> >
> >
> >Those are the two big priorities I see, based on things the rest
> >of the community needs from the team and existing commitments that
> >have been made. There are some other things that should also be
> >addressed.
> >
> >
> >III. Security audits & bug fixes
> >
> >Five of 18 recent security reports were related to Glance [7]. It's
> >not surprising, given recent resource constraints, that addressing
> >these has been a challenge. Still, these should be given high
> >priority.
> >
> >[7] https://security.openstack.org/search.html?q=glance&check_keywords=yes&area=default
> 
> 
> +1 FWIW, we're in the process of growing Glance's security team. But
> it's clear from the above that we need to respond to security issues
> more quickly.
> 
> >IV. Sorting out the glance-store question
> >
> >This was perhaps the most confusing thing I learned about this week.
> >The perception outside of the Glance team is that the library is
> >meant to be used by Nova and Cinder to communicate directly with
> >the image store, bypassing the REST API, to improve performance in
> >several cases. I know the Cinder team is especially interested in
> >some sort of interface for manipulating images inside the storage
> >system without having to download them to make copies (for RBD and
> >other systems that support CoW natively).
> 
> Correct, the above was one of the triggers for this effort and I
> like to think it's still one of the main drivers. There are other,
> fancier things that could be done in the future, assuming the
> library's API is refactored in a way that such features can be
> implemented.[0]
> 
> [0] https://review.openstack.org/#/c/188050/
> 
> >That doesn't seem to be
> >what the library is actually good for, though, since most of the
> >Glance core folks I talked to thought it was really a caching layer.
> >This discrepancy in what folks wanted vs. what they got may explain
> >some of the heated discussions in other email threads.
> 
> It's strange that some folks think of it as a caching layer. I believe
> one of the reasons for such a discrepancy is that not enough effort
> has been put into the refactor this library requires. The reason this
> library requires such a refactor is that it came out of the old
> `glance/store` code, which was very specific to Glance's internal use.
> 
> The mistake here could be that the library should've been refactored
> *before* adopting it in Glance.

The fact that there is disagreement over the intent of the library makes
me think the plan for creating it wasn't sufficiently circulated or
detailed.

> 
> >
> >Frankly, given the importance of the other issues, I recommend
> >leaving glance-store standalone this cycle. Unless the work for
> >dealing with priorities I and II is made *significantly* easier by
> >not having a library, the time and energy it will take to re-integrate
> >it with the Glance service seems like a waste of limited resources.
> >The time to even discuss it may be better spent on the planning
> >work needed. That said, if the library doesn't provide the features
> >its users were expecting, it may be better to fold it back in and
> >create a different library with a better understanding of the
> >requirements at some point. The path to take is up to the Glance
> >team, of course, but we're already down far enough on the priority
> >list that I think we'll be lucky to finish the preceding items this
> >cycle.
> 
> 
> I don't think merging glance-store back into Glance will help with any
> of the priorities mentioned in this thread. If anything, refactoring
> the API might help with future work that could come after the v1 -> v2
> migration is complete.
> 
> >
> >
> >Those are the development priorities I was able to identify in my
> >interviews this week, and there is one last thing the team needs
> >to do this cycle: Recruit more contributors.
> >
> >Almost every current core contributor I spoke with this week indicated
> >that their time was split between another project and Glance. Often
> >higher priority had to be given, understandably, to internal product
> >work. That's the reality we work in, and everyone feels the same
> >pressures to some degree. One way to address that pressure is to
> >bring in help. So, we need a recruiting drive to find folks willing
> >to contribute code and reviews to the project to keep the team
> >healthy. I listed this item last because if you've made it this far
> >you should see just how much work the team has ahead. We're a big
> >community, and I'm confident that we'll be able to find help for
> >the Glance team, but it will require mentoring and education to
> >bring people up to speed to make them productive.
> 
> Fully agree here as well. However, I also believe that effort going
> to the wrong tasks is part of what brought Glance to the situation it
> is in today. More help is welcome and required, but a good strategy
> is more important right now.
> 
> FWIW, I agree that our focus has gone to different things and this has
> taken us to the state you mentioned above. More importantly, it has
> postponed some important tasks. However, I don't believe Glance is
> completely broken - I know you are not saying this but I'd like to
> mention it - and I certainly believe we can bring it back to a good
> state faster than expected, but I'm known for being a bit optimistic
> sometimes.
> 
> In this reply I was hard on us (Glance team), because I tend to be
> hard on myself and to dig deep into the things that are not working
> well. Many times I do this based on the feedback provided by others,
> which I personally value **a lot**. Unfortunately, I have to say that
> there hasn't been enough feedback about these issues until now. There
> was Mike's email[0] where I explicitly asked the community to speak
> up. This is to say that I greatly appreciate the time you've taken to
> dig into this, and that I encourage folks to *always* speak up and
> reach out through every *public* medium possible.
> 
> No one can fix rumors; we can fix issues, though.
> 
> Thanks again, and let's all work together to improve this situation,
> Flavio
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071971.html
> 


From doug at doughellmann.com  Mon Sep 14 20:09:21 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 16:09:21 -0400
Subject: [openstack-dev] [glance] tasks (following "proposed priorities
	for Mitaka")
In-Reply-To: <55F714E2.4010707@inaugust.com>
References: <D21C4411.21E22%brian.rosmaita@rackspace.com>
 <55F714E2.4010707@inaugust.com>
Message-ID: <1442260721-sup-4094@lrrr.local>

Excerpts from Monty Taylor's message of 2015-09-14 20:41:38 +0200:
> On 09/14/2015 04:58 PM, Brian Rosmaita wrote:
> > Apologies for forking the thread, but there was way too much in Doug's
> > email (and Flavio's response) and I only want to make a few points about
> > tasks.  Please read Doug's original email and Flavio's reply at some
> > point, preferably before you read this.
> >
> > I'm going to limit myself to 4 points.  We'll be discussing Glance tasks
> > during the Mitaka summit design session, so we'll be able to go into
> > details and determine the future of tasks there.  But I would like to make
> > these points before discussion gets too far along.

Part of the point of me starting this discussion now is to influence the
summit sessions the Glance team has, and the goals of those sessions.

> >
> >
> > (1) DefCore
> > So I see DefCore as a two-way street, in which the OpenStack projects need
> > to be aware of what's going on with the DefCore process, and the DefCore
> > people are paying attention to what's going on in the projects.
> >
> > Glance tasks are not a recent innovation, they date back at least to the
> > Havana summit, April 15-18, 2013.  There was a session on "Getting Glance
> > Ready for Public Clouds" [1], resulting in a blueprint for "New Upload
> > Download Workflow for Public Glance" [2], which was filed on 2013-04-22.
> >
> > This was pre-specs days, but there was lots of information about the
> > design direction this was taking posted on the wiki (see [3] and [4],
> > which contain links to most of the stuff).
> >
> > My point is simply that the need for tasks and the discussion around their
> > development and structure was carried out in the open via the standard
> > OpenStack practices, and if Glance was headed in a
> > weird/nonstandard/deviant direction, some guidance would have been useful
> > at that point.  (I'm not implying that such guidance is not useful now, of
> > course.)
> 
> I'm very sorry that I did not participate in that. I apologize heartily.

Indeed, as I've said elsewhere I am going to work this cycle on
improving the two-way conversations with the DefCore committee and
the contributor community. DefCore has been very good at listening
to technical input when we make it available, and I think all of
our project teams need to engage more with them now that their
processes are settling down to something we can work with regularly.

> 
> honestly, I do not think we realized how broken this system was until 
> Infra started trying to move to creating our base images once and 
> uploading them to each of our cloud regions, which then resulted in me 
> needing to write an entire compatibility library because the two
> clouds' approaches were so radically and intractably different.
> 
> It's possible that humans could have picked up on the pain this would 
> cause for the user if more of us had been in the sessions - but it's 
> also possible we would have missed it. I TOTALLY understand the logic 
> and reasoning - sometimes things that make perfect sense on paper just 
> flat fall over when they see the light of day. That's not a condemnation 
> of the people who worked on it - we all try things, and sometimes they 
> work and sometimes they do not.
> 
> This did not work, and it's time we all acknowledge that and make plans 
> to move forward.
> 
> >
> > (2) Tasks as a Public API
> > Well, that has been the whole point throughout the entire discussion.  See
> > [1]-[4].
> >
> >
> > (3) Tasks and Deployers
> > I participated in some of the DefCore discussions around image upload that
> > took place before the Liberty summit.  It just so happened that I was on
> > the program to give a talk about Glance tasks, and I left room for
> > discussion about (a) whether two image upload workflows are confusing for
> > end users, and (b) whether the flexibility of tasks (e.g., the "input"
> > element defined as a JSON blob) is actually a problem.  (You can look at
> > the talk [5] or my slides [6] to see that I didn't pull any punches about
> > this.)
> >
> > The feedback I got from the deployers present was that they weren't
> > worried about (a), and that they liked (b) because it enabled them to make
> > customizations easily for their particular situation.
> 
> Sure. But it's not the deployers that I care about on this point. I'm 
> sure it's awesome for them.
> 
> As a person _using_ this I can tell you it is utterly living hell. It 
> is, without a doubt, and with no close rivals, the hardest thing to 
> interact with in all of OpenStack.
> 
> > I'm not saying that there's no other way to do this -- e.g., you could do
> > all sorts of alternative workflows and configurations in the "regular"
> > upload process -- but the feedback I got can be summarized like this:
> > Given the importance of a properly-functioning Glance for normal cloud
> > operations, it is useful to have one upload/download workflow that is
> > locked down and you don't have to worry about, and a completely different
> > workflow that you can expose to end users and tinker with as necessary.
> 
> IMHO - a cloud that does not allow me to upload images is not a usable 
> cloud.
> 
> A cloud that requires me to upload images differently than another cloud 
> is a hardship on the users.
> 
> A cloud that makes the user know the image format of the cloud is a 
> hardship on the users, especially when there exist nowhere in any 
> existing distro tools that can actually produce the image format in 
> question. (yup, I'm just going to sneak that one in there)
> 
> NOW - I think that the task api and the image conversion tools itself if 
> it's a behind the scenes kind of thing is potentially nice thing.
> 
> If "glance import-from http://example.com/my-image.qcow2" always worked, 
> and in the back end generated a task with the task workflow, and one of 
> the task workflows that a deployer could implement was one to do 
> conversions to the image format of the cloud provider's choice, that 
> would be teh-awesome. It's still a bit annoying to me that I, as a user, 
> need to come up with a place to put the image so that it can be 
> imported, but honestly, I'll take it. It's not _that_ hard of a problem.

This is more or less what I'm thinking we want, too. As a user, I want
to know how to import an image by having that documented clearly and by
using an obvious UI. As a deployer, I want to sometimes do things to an
image as they are imported, and background tasks may make that easier to
implement. As a user, I don't care if my image upload is a task or not.

> 
> >
> > (4) Interoperability
> > In general, this is a worthy goal.  The OpenStack cloud platform, however,
> > is designed to handle many different deployment scenarios from small
> > private clouds to enormous public clouds, and allowing access to the same
> > API calls in all situations is not desirable.  A small academic
> > department, for example, may allow regular end users to make some calls
> > usually reserved for admins, whereas in a public cloud, this would be a
> > remarkably bad idea.  So if DefCore is going to enforce interoperability
> > via tests, it should revise the tests to meet the most restrictive
> > reasonable case.  Image upload is a good example, as some cloud operators
> > do not want to expose this operation to end users, period, and for a
> > myriad of reasons (security, user frustration when the image from some
> > large non-open-source cloud doesn't boot, etc.).
> 
> Those cloud providers need to get out of the game, at least in terms of 
> being able to call themselves OpenStack clouds. I respect that choice, 
> but it doesn't mean that we have to make bad software just because they 
> have made that choice.

Yeah, whether or not deployers use a thing is a separate criterion
for DefCore from whether we can make the thing work consistently
in more than one deployment. Regardless of whether DefCore adopts
an image upload criterion and associated test, we want a good image
upload API.

> 
> > With respect to tasks: the cloud provider specifies the exact content of
> > the 'input' element.  It's going to differ from deployment to deployment.

Can you explain why that needs to be the case?

> 
> Any time the exact content for a REST API call is going to differ from 
> cloud to cloud is a time that we have failed as a community. You are 
> right - this is the case today. It's why the task API is not suitable 
> for an end user API - it requires the end user to have specific 
> foreknowledge of deployer choices that are not exposed or discoverable 
> via the OpenStack API.
> 
> > But that isn't significantly different from different clouds having
> > different flavors with different capabilities.
> 
> Untrue - because the 'nova flavor-list' shows you the qualities of 
> flavors. You can introspect using normal OpenStack tools what the 
> different flavors are.
> 
> That flavors have names is at best a quirky fun game.
> 
> > You can't reasonably
> > expect that "nova boot --flavor economy-flavor --image optimized-centos
> > myserver" is going to work in all OpenStack clouds, i.e., you need to
> > figure out the appropriate values to replace 'economy-flavor' and
> > 'optimized-centos' in the boot call.
> 
> Right. This is the reason we don't use flavor names in any of Infra but 
> instead specify flavors by their min-ram. However, even in the cases
> where we can't purely use parameters and need to use filters on names
> (like performance vs. non-performance flavors), we can visually inspect
> the descriptive names via nova flavor-list.
> 
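The attribute-based selection described above can be sketched roughly as follows. This is a minimal illustration with made-up flavor data; the real selection logic lives in Infra's tooling, not in this snippet.

```python
# Sketch of picking a flavor by attributes rather than by name.
# The flavor data below is invented for illustration only.
flavors = [
    {"name": "performance1-1", "ram": 1024},
    {"name": "performance1-2", "ram": 2048},
    {"name": "standard-4", "ram": 4096},
]

def pick_flavor(flavors, min_ram, name_filter=None):
    """Return the smallest flavor with at least min_ram MB of RAM,
    optionally restricted by a substring match on the name."""
    candidates = [
        f for f in flavors
        if f["ram"] >= min_ram
        and (name_filter is None or name_filter in f["name"])
    ]
    if not candidates:
        raise ValueError("no flavor satisfies the constraints")
    return min(candidates, key=lambda f: f["ram"])

print(pick_flavor(flavors, min_ram=2048)["name"])  # performance1-2
print(pick_flavor(flavors, min_ram=1024, name_filter="standard")["name"])  # standard-4
```

The point of the sketch is that everything needed to choose a flavor is discoverable from the flavor list itself, which is exactly what the task input lacks.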
> This is not possible with the input to the task-create command. You have 
> to know things. In fact, I think you have to learn the magic JSON to 
> pass to the command line tool by reading a blog post. That's not ok.

I did find one example in the Rackspace API documentation.
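For concreteness, the task-create input such documentation shows looks roughly like the sketch below. The keys under "input" are deployment-specific examples rather than a contract any cloud is guaranteed to honor, which is exactly the discoverability problem being described.

```python
import json

# Hypothetical input blob for a Glance v2 image-import task.
# "type" is part of the tasks API; the shape of "input" is chosen
# by the deployer, so these keys are illustrative examples only.
task_request = {
    "type": "import",
    "input": {
        "import_from": "http://example.com/my-image.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {
            "name": "my-image",
            "disk_format": "qcow2",
            "container_format": "bare",
        },
    },
}

# This JSON body would be POSTed to the /v2/tasks endpoint.
print(json.dumps(task_request, indent=2, sort_keys=True))
```

Nothing in the API tells a user which keys a given cloud expects, so the blob has to be learned out of band.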

> 
> > I think the 'input' element is
> > similar.  The initial discussion was that it should be defined via
> > documentation as we saw how tasks would be used in real life.  But there's
> > no reason why it must be documentation only.  It would be easy to make a
> > schema available.
> 
> The thing is - I totally hear all of the places where this _could_ be 
> different per cloud, but in reality almost all of the clouds that are 
> out there are actually doing almost all of these things in a way that is 
> identical, or, where they differ, the differences are sanely
> discoverable. Where there are legitimate places for cloud providers to 
> make different choices in implementation, which I think are important 
> for our community - we MUST work to make the user interface not reflect 
> those.
> 
> Glance is currently the main place where the vendor choices are not able 
> to be sorted out in a sane manner by the end user. Luckily for people 
> who care about interop there is exactly one deployment that exposes the 
> task API, so the digging to figure out what random values of JSON a user 
> has to pass to the magic incantation has only had to be done once.
> 
> I would recommend that, since there is only one public instance of this, 
> we put the genie back in the bottle, make "glance import-from" spawn off 
> a task in the background and make the task API be marked admin-only in 
> the default policy.json. This still allows for the deployer to have the 
> flexibility to use the plugin system, which is awesome ... and further 
> empowers the deployer to make backend choices that they want to but to 
> hide the complexities of that from their users (so that users do not 
> have to know that these clouds take qcow2, these clouds take VHD, and 
> these clouds take RAW).
> 
> > I tried to be as concise as possible, but now my email has gotten too long!
> 
> Oh golly. I don't think I've ever written a short one - sorry for 
> increasing the total aggregate length.
> 
> > cheers,
> > brian
> >
> > [1] https://etherpad.openstack.org/p/havana-getting-glance-ready-for-public-clouds
> > [2] https://blueprints.launchpad.net/glance/+spec/upload-download-workflow
> > [3] https://wiki.openstack.org/wiki/Glance-tasks-api
> > [4] https://wiki.openstack.org/wiki/Glance-tasks-api-product
> > [5] http://youtu.be/ROXrjX3pdqw
> > [6] http://www.slideshare.net/racker_br/glance-tasksvancouver2015
> >
> >
> >
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 


From clint at fewbar.com  Mon Sep 14 20:25:43 2015
From: clint at fewbar.com (Clint Byrum)
Date: Mon, 14 Sep 2015 13:25:43 -0700
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442250235-sup-1646@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <1442250235-sup-1646@lrrr.local>
Message-ID: <1442261641-sup-9577@fewbar.com>

Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
> Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > >
> > >After having some conversations with folks at the Ops Midcycle a
> > >few weeks ago, and observing some of the more recent email threads
> > >related to glance, glance-store, the client, and the API, I spent
> > >last week contacting a few of you individually to learn more about
> > >some of the issues confronting the Glance team. I had some very
> > >frank, but I think constructive, conversations with all of you about
> > >the issues as you see them. As promised, this is the public email
> > >thread to discuss what I found, and to see if we can agree on what
> > >the Glance team should be focusing on going into the Mitaka summit
> > >and development cycle and how the rest of the community can support
> > >you in those efforts.
> > >
> > >I apologize for the length of this email, but there's a lot to go
> > >over. I've identified 2 high priority items that I think are critical
> > >for the team to be focusing on starting right away in order to use
> > >the upcoming summit time effectively. I will also describe several
> > >other issues that need to be addressed but that are less immediately
> > >critical. First the high priority items:
> > >
> > >1. Resolve the situation preventing the DefCore committee from
> > >   including image upload capabilities in the tests used for trademark
> > >   and interoperability validation.
> > >
> > >2. Follow through on the original commitment of the project to
> > >   provide an image API by completing the integration work with
> > >   nova and cinder to ensure V2 API adoption.
> > 
> > Hi Doug,
> > 
> > First and foremost, I'd like to thank you for taking the time to dig
> > into these issues, and for reaching out to the community seeking
> > information and a better understanding of what the real issues are. I
> > can imagine how much time you had to dedicate to this and I'm glad
> > you did.
> > 
> > Now, to your email: I very much agree with the priorities you
> > mentioned above, and I'd like whoever wins Glance's PTL election to
> > bring focus back to them.
> > 
> > Please, find some comments in-line for each point:
> > 
> > >
> > >I. DefCore
> > >
> > >The primary issue that attracted my attention was the fact that
> > >DefCore cannot currently include an image upload API in its
> > >interoperability test suite, and therefore we do not have a way to
> > >ensure interoperability between clouds for users or for trademark
> > >use. The DefCore process has been long, and at times confusing,
> > >even to those of us following it sort of closely. It's not entirely
> > >surprising that some projects haven't been following the whole time,
> > >or aren't aware of exactly what the whole thing means. I have
> > >proposed a cross-project summit session for the Mitaka summit to
> > >address this need for communication more broadly, but I'll try to
> > >summarize a bit here.
> > 
> > +1
> > 
> > I think it's quite sad that some projects, especially those considered
> > to be part of the `starter-kit:compute`[0], don't follow closely
> > what's going on in DefCore. I personally consider this a task PTLs
> > should incorporate into their duties. I'm glad you proposed such a
> > session; I hope it'll help raise awareness of this effort and help
> > move things forward on that front.
> 
> Until fairly recently a lot of the discussion was around process
> and priorities for the DefCore committee. Now that those things are
> settled, and we have some approved policies, it's time to engage
> more fully.  I'll be working during Mitaka to improve the two-way
> communication.
> 
> > 
> > >
> > >DefCore is using automated tests, combined with business policies,
> > >to build a set of criteria for allowing trademark use. One of the
> > >goals of that process is to ensure that all OpenStack deployments
> > >are interoperable, so that users who write programs that talk to
> > >one cloud can use the same program with another cloud easily. This
> > >is a *REST API* level of compatibility. We cannot insert cloud-specific
> > >behavior into our client libraries, because not all cloud consumers
> > >will use those libraries to talk to the services. Similarly, we
> > >can't put the logic in the test suite, because that defeats the
> > >entire purpose of making the APIs interoperable. For this level of
> > >compatibility to work, we need well-defined APIs, with a long support
> > >period, that work the same no matter how the cloud is deployed. We
> > >need the entire community to support this effort. From what I can
> > >tell, that is going to require some changes to the current Glance
> > >API to meet the requirements. I'll list those requirements, and I
> > >hope we can discuss them to a degree that ensures everyone understands
> > >them. I don't want this email thread to get bogged down in
> > >implementation details or API designs, though, so let's try to keep
> > >the discussion at a somewhat high level, and leave the details for
> > >specs and summit discussions. I do hope you will correct any
> > >misunderstandings or misconceptions, because unwinding this as an
> > >outside observer has been quite a challenge and it's likely I have
> > >some details wrong.
> > >
> > >As I understand it, there are basically two ways to upload an image
> > >to glance using the V2 API today. The "POST" API pushes the image's
> > >bits through the Glance API server, and the "task" API instructs
> > >Glance to download the image separately in the background. At one
> > >point apparently there was a bug that caused the results of the two
> > >different paths to be incompatible, but I believe that is now fixed.
> > >However, the two separate APIs each have different issues that make
> > >them unsuitable for DefCore.
> > >
> > >The DefCore process relies on several factors when designating APIs
> > >for compliance. One factor is the technical direction, as communicated
> > >by the contributor community -- that's where we tell them things
> > >like "we plan to deprecate the Glance V1 API". In addition to the
> > >technical direction, DefCore looks at the deployment history of an
> > >API. They do not want to require deploying an API if it is not seen
> > >as widely usable, and they look for some level of existing adoption
> > >by cloud providers and distributors as an indication that the
> > >API is desired and can be successfully used. Because we have multiple
> > >upload APIs, the message we're sending on technical direction is
> > >weak right now, and so they have focused on deployment considerations
> > >to resolve the question.
> > 
> > The task upload process you're referring to is the one that uses the
> > `import` task, which allows you to download an image from an external
> > source, asynchronously, and import it in Glance. This is the old
> > `copy-from` behavior that was moved into a task.
> > 
> > The "fun" thing about this - and I'm sure other folks in the Glance
> > community will disagree - is that I don't consider tasks to be a
> > public API. That is to say, I would expect tasks to be an internal API
> > used by cloud admins to perform some actions (based on its current
> > implementation). Eventually, some of these tasks could be triggered
> > from the external API but as background operations that are triggered
> > by the well-known public ones and not through the task API.
> 
> Does that mean it's more of an "admin" API?
> 

I think it is basically just a half-way done implementation that is
exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
When last I tried to make integration tests in shade that exercised the
upstream glance task import code, I was met with an implementation that
simply did not work, because the pieces behind it had never been fully
implemented upstream. That may have been resolved, but in the process
of trying to write tests and make this work, I discovered a system that
made very little sense from a user standpoint. I want to upload an
image, why do I want a task?!

> > 
> > Ultimately, I believe end-users of the cloud simply shouldn't care
> > about what tasks are or aren't and more importantly, as you mentioned
> > later in the email, tasks make clouds non-interoperable. I'd be pissed
> > if my public image service asked me to learn about tasks to be
> > able to use the service.
> 
> It would be OK if a public API set up to do a specific task returned a
> task ID that could be used with a generic task API to check status, etc.
> So the idea of tasks isn't completely bad, it's just too vague as it's
> exposed right now.
> 
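The pattern described above, a specific call returning a task ID that a generic status API can report on, could be sketched like this. All of the names, states, and the fake service here are hypothetical; no such upstream API is being described.

```python
import time

class FakeTaskService:
    """In-memory stand-in for a task API; a real one would be HTTP calls."""

    def __init__(self):
        self._tasks = {}
        self._next = 0

    def start_import(self, url):
        # The specific call: kick off an import, get back a task ID.
        self._next += 1
        task_id = "task-%d" % self._next
        # A real service would work asynchronously; this fake just
        # records the task as finished immediately.
        self._tasks[task_id] = "success"
        return task_id

    def get_status(self, task_id):
        # The generic call: report the status of any task.
        return self._tasks[task_id]


def wait_for_task(service, task_id, interval=0.0):
    """Poll the generic status endpoint until a terminal state."""
    while True:
        status = service.get_status(task_id)
        if status in ("success", "failure"):
            return status
        time.sleep(interval)


svc = FakeTaskService()
tid = svc.start_import("http://example.com/my-image.qcow2")
print(wait_for_task(svc, tid))  # success
```

In this shape the user never constructs a task; the task ID is just a handle for checking progress on an operation they requested through a well-known API.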

I think it is a concern, because it is assuming users will want to do
generic things with a specific API. This turns into a black-box game where
the user shoves a task in and then waits to see what comes out the other
side. Not something I want to encourage users to do or burden them with.

We have an API whose sole purpose is to accept image uploads. That
Rackspace identified a scaling pain point there is _good_. But why not
*solve* it for the user, instead of introducing more complexity?

What I'd like to see is the upload image API given the ability to
respond with a URL that can be uploaded to using the object storage API
we already have in OpenStack. Exposing users to all of these operator
choices is just wasting their time. Simply say "Oh, you want to
upload an image? That's fine, please upload it as an object over there
and POST here again when it is ready to be imported." This will make
perfect sense to a user reading docs, and doesn't require them to grasp
an abstract concept like "tasks" when all they want to do is upload
their image.
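The two-step flow proposed above could look something like this from the client's side. Every endpoint name and response key in this sketch is hypothetical; no such API exists in Glance today, and the in-memory fake client is there only to exercise the flow.

```python
# Sketch of the proposed flow: the image-upload API hands back an
# object-storage URL, the client uploads the bits there, then POSTs
# again to finish the import.
class FakeClient:
    """In-memory stand-in for an HTTP client."""

    def __init__(self):
        self.store = {}

    def post(self, path, body):
        if path == "/v2/images":
            # The proposed response: where to put the bits, and where
            # to signal that the upload is finished.
            return {"image_id": "img-1",
                    "upload_url": "/objects/img-1",
                    "import_url": "/v2/images/img-1/import"}
        return {}

    def put(self, path, data):
        self.store[path] = data


def import_image(client, name, image_bytes):
    # Step 1: ask the image service where to put the bits.
    resp = client.post("/v2/images", {"name": name})

    # Step 2: upload via the object storage API we already have.
    client.put(resp["upload_url"], image_bytes)

    # Step 3: POST again to say the object is ready to be imported.
    client.post(resp["import_url"], {})
    return resp["image_id"]


fake = FakeClient()
print(import_image(fake, "my-image", b"\x00" * 16))  # img-1
```

From the user's side there is nothing abstract here: one documented upload URL, one "ready" call, and no task vocabulary to learn.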


From sathlang at redhat.com  Mon Sep 14 20:30:42 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Mon, 14 Sep 2015 22:30:42 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <55F27CB7.2040101@redhat.com> (Gilles Dubreuil's message of "Fri, 
 11 Sep 2015 17:03:19 +1000")
References: <55F27CB7.2040101@redhat.com>
Message-ID: <87h9mw68wd.fsf@s390.unix4.net>

Hi,

Gilles Dubreuil <gilles at redhat.com> writes:

> A. The 'composite namevar' approach:
>
>    keystone_tenant { 'projectX::domainY': ... }
>
> B. The 'meaningless name' approach:
>
>    keystone_tenant { 'myproject': name => 'projectX', domain => 'domainY', ... }
>
> Notes:
>  - Using both combined should also work, with the explicit domain
> parameter overriding the domain part of the title.
>  - Please see [1] for some background on the two approaches:
>
> The question
> -------------
> Decide between the two approaches, the one we would like to retain for
> puppet-keystone.
>
> Why does it matter?
> -------------------
> 1. Domain names are mandatory in every user, group or project, aside
> from the backward-compatibility period mentioned earlier, where no
> domain means using the default one.
> 2. Long-term impact.
> 3. The two approaches are not completely equivalent, with different
> consequences for future usage.

I can't see why they couldn't be equivalent, but I may be missing
something here.

> 4. Being consistent.
> 5. Therefore it's for the community to decide.
>
> Pros/Cons
> ----------
> A.

I think it's B, the 'meaningless name' approach, that is described here.

>   Pros
>     - Easier names

That's subjective; creating unique and meaningful names doesn't look
easy to me.

>   Cons
>     - Titles have no meaning!
>     - Cases where 2 or more resources could exist
>     - More difficult to debug
>     - Titles mismatch when listing the resources (self.instances)
>
> B.
>   Pros
>     - Unique titles guaranteed
>     - No ambiguity between resource found and their title
>   Cons
>     - More complicated titles

> My vote
> --------
> I would love to have approach A for its easier names.
> But I've seen the challenge of maintaining the providers behind the
> curtains and the confusion it creates with name/titles and when not sure
> about the domain we're dealing with.
> Also I believe that supporting self.instances consistently with
> meaningful names is saner.
> Therefore I vote B

+1 for B.  

My view is that this should be the advertised way, but the other
method (the meaningless one) should be there if the user needs it.

So as far as I'm concerned the two idioms should co-exist.  This would
mimic what is possible with all puppet resources.  For instance you can:

  file { '/tmp/foo.bar': ensure => present }

and you can 

  file { 'meaningless_id': path => '/tmp/foo.bar', ensure => present }

The two refer to the same resource.

But if that's indeed not possible, then I would keep only the
meaningful name.


As a side note, someone raised an issue about the delimiter being
hardcoded to "::".  This could be a property of the resource.  This
would enable the user to use weird names with "::" in them and assign
a "/" (for instance) to the delimiter property:

  Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }

bar::is::cool is the name of the domain and foo::blah is the project.
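The split that the proposed delimiter property implies can be sketched like this. It is an illustration of the parsing logic only; an actual puppet provider would be written in Ruby, and the delimiter property itself is just the suggestion above, not an existing feature.

```python
def split_title(title, delimiter="::"):
    """Split a composite resource title into (name, domain).
    The *last* occurrence of the delimiter wins, so a project named
    'foo::blah' needs a custom delimiter such as '/'."""
    name, sep, domain = title.rpartition(delimiter)
    if not sep or not name:
        raise ValueError("title %r has no %r delimiter" % (title, delimiter))
    return name, domain

print(split_title("projectX::domainY"))                       # ('projectX', 'domainY')
print(split_title("foo::blah/bar::is::cool", delimiter="/"))  # ('foo::blah', 'bar::is::cool')
```

With the default "::" the second title would be ambiguous; making the delimiter a resource property resolves it without forbidding "::" in names.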

>
> Finally
> ------
> Thanks for reading that far!
> To choose, please provide feedback with more pros/cons, examples and
> your vote.
>
> Thanks,
> Gilles
>
>
> PS:
> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Bye,
-- 
Sofer.


From ikliuk at mirantis.com  Mon Sep 14 20:37:59 2015
From: ikliuk at mirantis.com (Ivan Kliuk)
Date: Mon, 14 Sep 2015 23:37:59 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
	fuel-qa(devops) core
In-Reply-To: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
Message-ID: <CAE1YzriPm16vPUSfv0bMEEUwiTs37a7xAoWNAFCE+enVCo8F6A@mail.gmail.com>

+1

On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <aurlapova at mirantis.com
> wrote:

> Folks,
> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>
> Denis spent three months on the Fuel BugFix team; his velocity was between
> 150-200% per week. Thanks to his efforts we have overcome long-standing
> issues with time sync and Ceph's clock skew. Denis's ideas constantly help
> us improve our functional test suite.
>
> Fuelers, please vote for Denis!
>
> Nastya.
>
> [1]
> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>
> --
> You received this message because you are subscribed to the Google Groups
> "fuel-core-team" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to fuel-core-team+unsubscribe at mirantis.com.
> For more options, visit https://groups.google.com/a/mirantis.com/d/optout.
>



-- 
Best regards,
Ivan Kliuk
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/e401709d/attachment.html>

From sean at coreitpro.com  Mon Sep 14 20:40:20 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Mon, 14 Sep 2015 20:40:20 +0000
Subject: [openstack-dev] [L3][QA] DVR job failure rate and maintainability
Message-ID: <0000014fcd963ee2-0f84b22d-50af-43e1-8449-1166202557eb-000000@email.amazonses.com>

Hi,

Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
at the QA sprint in Fort Collins. Earlier today there was a discussion
about the failure rate of the DVR job, and the possible impact it is
having on the gate.

Ryan has a good patch up that shows the failure rates over time:

https://review.openstack.org/223201

To view the graphs, go into your neutron git repo and open the .html
files in doc/dashboards, which should open your browser and display the
Graphite query.

Doug put up a patch to change the DVR job to be non-voting while we
determine the cause of the recent spikes:

https://review.openstack.org/223173

There was a good discussion after pushing the patch, revolving around
the need for Neutron to have DVR, to fit operational and reliability
requirements, and help transition away from Nova-Network by providing
one of many solutions similar to Nova's multihost feature.  I'm skipping
over a huge amount of context about the Nova-Network and Neutron work,
since that is a big and ongoing effort. 

DVR is an important feature to have, and we need to ensure that the job
that tests DVR has a high pass rate.

One thing that I think we need is to form a group of contributors that
can help with the DVR feature in the immediate term to fix the current
bugs, and longer term maintain the feature. It's a big task and I don't
believe that a single person or company can or should do it by themselves.

The L3 group is a good place to start, but I think that even within the
L3 team we need a dedicated and diverse group of people who are
interested in maintaining the DVR feature.

Without this, I think the DVR feature will start to bit-rot and that
will have a significant impact on our ability to recommend Neutron as a
replacement for Nova-Network in the future.

-- 
Sean M. Collins


From doug at doughellmann.com  Mon Sep 14 20:46:16 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 14 Sep 2015 16:46:16 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442261641-sup-9577@fewbar.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <1442250235-sup-1646@lrrr.local>
 <1442261641-sup-9577@fewbar.com>
Message-ID: <1442263439-sup-913@lrrr.local>

Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
> Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
> > Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > > >
> > > >After having some conversations with folks at the Ops Midcycle a
> > > >few weeks ago, and observing some of the more recent email threads
> > > >related to glance, glance-store, the client, and the API, I spent
> > > >last week contacting a few of you individually to learn more about
> > > >some of the issues confronting the Glance team. I had some very
> > > >frank, but I think constructive, conversations with all of you about
> > > >the issues as you see them. As promised, this is the public email
> > > >thread to discuss what I found, and to see if we can agree on what
> > > >the Glance team should be focusing on going into the Mitaka summit
> > > >and development cycle and how the rest of the community can support
> > > >you in those efforts.
> > > >
> > > >I apologize for the length of this email, but there's a lot to go
> > > >over. I've identified 2 high priority items that I think are critical
> > > >for the team to be focusing on starting right away in order to use
> > > >the upcoming summit time effectively. I will also describe several
> > > >other issues that need to be addressed but that are less immediately
> > > >critical. First the high priority items:
> > > >
> > > >1. Resolve the situation preventing the DefCore committee from
> > > >   including image upload capabilities in the tests used for trademark
> > > >   and interoperability validation.
> > > >
> > > >2. Follow through on the original commitment of the project to
> > > >   provide an image API by completing the integration work with
> > > >   nova and cinder to ensure V2 API adoption.
> > > 
> > > Hi Doug,
> > > 
> > > First and foremost, I'd like to thank you for taking the time to dig
> > > into these issues, and for reaching out to the community seeking for
> > > information and a better understanding of what the real issues are. I
> > > can imagine how much time you had to dedicate on this and I'm glad you
> > > did.
> > > 
> > > Now, to your email, I very much agree with the priorities you
> > > mentioned above and I'd like for, whomever will win Glance's PTL
> > > election, to bring focus back on that.
> > > 
> > > Please, find some comments in-line for each point:
> > > 
> > > >
> > > >I. DefCore
> > > >
> > > >The primary issue that attracted my attention was the fact that
> > > >DefCore cannot currently include an image upload API in its
> > > >interoperability test suite, and therefore we do not have a way to
> > > >ensure interoperability between clouds for users or for trademark
> > > >use. The DefCore process has been long, and at times confusing,
> > > >even to those of us following it sort of closely. It's not entirely
> > > >surprising that some projects haven't been following the whole time,
> > > >or aren't aware of exactly what the whole thing means. I have
> > > >proposed a cross-project summit session for the Mitaka summit to
> > > >address this need for communication more broadly, but I'll try to
> > > >summarize a bit here.
> > > 
> > > +1
> > > 
> > > I think it's quite sad that some projects, especially those considered
> > > to be part of the `starter-kit:compute`[0], don't follow closely
> > > what's going on in DefCore. I personally consider this a task PTLs
> > > should incorporate in their role duties. I'm glad you proposed such
> > > session, I hope it'll help raising awareness of this effort and it'll
> > > help moving things forward on that front.
> > 
> > Until fairly recently a lot of the discussion was around process
> > and priorities for the DefCore committee. Now that those things are
> > settled, and we have some approved policies, it's time to engage
> > more fully.  I'll be working during Mitaka to improve the two-way
> > communication.
> > 
> > > 
> > > >
> > > >DefCore is using automated tests, combined with business policies,
> > > >to build a set of criteria for allowing trademark use. One of the
> > > >goals of that process is to ensure that all OpenStack deployments
> > > >are interoperable, so that users who write programs that talk to
> > > >one cloud can use the same program with another cloud easily. This
> > > >is a *REST API* level of compatibility. We cannot insert cloud-specific
> > > >behavior into our client libraries, because not all cloud consumers
> > > >will use those libraries to talk to the services. Similarly, we
> > > >can't put the logic in the test suite, because that defeats the
> > > >entire purpose of making the APIs interoperable. For this level of
> > > >compatibility to work, we need well-defined APIs, with a long support
> > > >period, that work the same no matter how the cloud is deployed. We
> > > >need the entire community to support this effort. From what I can
> > > >tell, that is going to require some changes to the current Glance
> > > >API to meet the requirements. I'll list those requirements, and I
> > > >hope we can discuss them to a degree that ensures everyone understands
> > > >them. I don't want this email thread to get bogged down in
> > > >implementation details or API designs, though, so let's try to keep
> > > >the discussion at a somewhat high level, and leave the details for
> > > >specs and summit discussions. I do hope you will correct any
> > > >misunderstandings or misconceptions, because unwinding this as an
> > > >outside observer has been quite a challenge and it's likely I have
> > > >some details wrong.
> > > >
> > > >As I understand it, there are basically two ways to upload an image
> > > >to glance using the V2 API today. The "POST" API pushes the image's
> > > >bits through the Glance API server, and the "task" API instructs
> > > >Glance to download the image separately in the background. At one
> > > >point apparently there was a bug that caused the results of the two
> > > >different paths to be incompatible, but I believe that is now fixed.
> > > >However, the two separate APIs each have different issues that make
> > > >them unsuitable for DefCore.
> > > >
> > > >The DefCore process relies on several factors when designating APIs
> > > >for compliance. One factor is the technical direction, as communicated
> > > >by the contributor community -- that's where we tell them things
> > > >like "we plan to deprecate the Glance V1 API". In addition to the
> > > >technical direction, DefCore looks at the deployment history of an
> > > >API. They do not want to require deploying an API if it is not seen
> > > >as widely usable, and they look for some level of existing adoption
> > > >by cloud providers and distributors as an indication of that the
> > > >API is desired and can be successfully used. Because we have multiple
> > > >upload APIs, the message we're sending on technical direction is
> > > >weak right now, and so they have focused on deployment considerations
> > > >to resolve the question.
> > > 
> > > The task upload process you're referring to is the one that uses the
> > > `import` task, which allows you to download an image from an external
> > > source, asynchronously, and import it in Glance. This is the old
> > > `copy-from` behavior that was moved into a task.
> > > 
> > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > community will disagree - is that I don't consider tasks to be a
> > > public API. That is to say, I would expect tasks to be an internal API
> > > used by cloud admins to perform some actions (based on its current
> > > implementation). Eventually, some of these tasks could be triggered
> > > from the external API but as background operations that are triggered
> > > by the well-known public ones and not through the task API.
> > 
> > Does that mean it's more of an "admin" API?
> > 
> 
> I think it is basically just a half-way done implementation that is
> exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
> When last I tried to make integration tests in shade that exercised the
> upstream glance task import code, I was met with an implementation that
> simply did not work, because the pieces behind it had never been fully
> implemented upstream. That may have been resolved, but in the process
> of trying to write tests and make this work, I discovered a system that
> made very little sense from a user standpoint. I want to upload an
> image, why do I want a task?!
> 
> > > 
> > > Ultimately, I believe end-users of the cloud simply shouldn't care
> > > about what tasks are or aren't and more importantly, as you mentioned
> > > later in the email, tasks make clouds not interoperable. I'd be pissed
> > > if my public image service would ask me to learn about tasks to be
> > > able to use the service.
> > 
> > It would be OK if a public API set up to do a specific task returned a
> > task ID that could be used with a generic task API to check status, etc.
> > So the idea of tasks isn't completely bad, it's just too vague as it's
> > exposed right now.
> > 
> 
> I think it is a concern, because it is assuming users will want to do
> generic things with a specific API. This turns into a black-box game where
> the user shoves a task in and then waits to see what comes out the other
> side. Not something I want to encourage users to do or burden them with.
> 
> We have an API whose sole purpose is to accept image uploads. That
> Rackspace identified a scaling pain point there is _good_. But why not
> *solve* it for the user, instead of introducing more complexity?

That's fair. I don't actually care which API we have, as long as it
meets the other requirements.
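For readers following along, the two existing v2 upload paths being
compared reduce to request flows roughly like this (a sketch only: auth
and error handling are omitted, most image properties are elided, and the
exact task input keys may differ between deployments):

```python
def post_upload_requests(image_name):
    """The "POST" path: create an image record, then push the bits
    through the Glance API server with a second call."""
    return [
        ("POST", "/v2/images",
         {"name": image_name, "disk_format": "qcow2",
          "container_format": "bare"}),
        ("PUT", "/v2/images/{image_id}/file", "<raw image bytes>"),
    ]

def task_import_requests(source_url):
    """The "task" path: ask Glance to fetch the image itself in the
    background, then poll the task for completion."""
    return [
        ("POST", "/v2/tasks",
         {"type": "import", "input": {"import_from": source_url}}),
        ("GET", "/v2/tasks/{task_id}", None),  # repeat until done
    ]
```

The asymmetry shows even at this level: the first flow is synchronous and
self-contained, while the second makes the user reason about a task object
whose lifecycle is separate from the image.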

> 
> What I'd like to see is the upload image API given the ability to
> respond with a URL that can be uploaded to using the object storage API
> we already have in OpenStack. Exposing users to all of these operator
> choices is just wasting their time. Just simply say "Oh, you want to
> upload an image? Thats fine, please upload it as an object over there
> and POST here again when it is ready to be imported." This will make
> perfect sense to a user reading docs, and doesn't require them to grasp
> an abstract concept like "tasks" when all they want to do is upload
> their image.
> 

And what would it do if the backing store for the image service
isn't Swift or another object storage system that supports direct
uploads? Return a URL that pointed back to itself, maybe?

Doug


From rmeggins at redhat.com  Mon Sep 14 20:53:03 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Mon, 14 Sep 2015 14:53:03 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <87h9mw68wd.fsf@s390.unix4.net>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
Message-ID: <55F733AF.6080005@redhat.com>

On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> Hi,
>
> Gilles Dubreuil <gilles at redhat.com> writes:
>
>> A. The 'composite namevar' approach:
>>
>>     keystone_tenant {'projectX::domainY': ... }
>>   B. The 'meaningless name' approach:
>>
>>    keystone_tenant {'myproject': name='projectX', domain=>'domainY', ...}
>>
>> Notes:
>>   - Actually using both combined should work too with the domain
>> supposedly overriding the name part of the domain.
>>   - Please look at [1] this for some background between the two approaches:
>>
>> The question
>> -------------
>> Decide between the two approaches, the one we would like to retain for
>> puppet-keystone.
>>
>> Why it matters?
>> ---------------
>> 1. Domain names are mandatory in every user, group or project. Besides
>> the backward compatibility period mentioned earlier, where no domain
>> means using the default one.
>> 2. Long term impact
>> 3. Both approaches are not completely equivalent which different
>> consequences on the future usage.
> I can't see why they couldn't be equivalent, but I may be missing
> something here.

I think we could support both.  I don't see it as an either/or situation.

>
>> 4. Being consistent
>> 5. Therefore the community to decide
>>
>> Pros/Cons
>> ----------
>> A.
> I think it's the B: meaningless approach here.
>
>>    Pros
>>      - Easier names
> That's subjective, creating unique and meaningful name don't look easy
> to me.

The point is that this allows choice - maybe the user already has some 
naming scheme, or wants to use a more "natural" meaningful name - rather 
than being forced into a possibly "awkward" naming scheme with "::":

   keystone_user { 'heat domain admin user':
     name => 'admin',
     domain => 'HeatDomain',
     ...
   }

   keystone_user_role {'heat domain admin user@::HeatDomain':
     roles => ['admin']
     ...
   }

>
>>    Cons
>>      - Titles have no meaning!

They have meaning to the user, not necessarily to Puppet.

>>      - Cases where 2 or more resources could exists

This seems to be the hardest part - I still cannot figure out how to use 
"compound" names with Puppet.

>>      - More difficult to debug

More difficult than it is already? :P

>>      - Titles mismatch when listing the resources (self.instances)
>>
>> B.
>>    Pros
>>      - Unique titles guaranteed
>>      - No ambiguity between resource found and their title
>>    Cons
>>      - More complicated titles
>> My vote
>> --------
>> I would love to have the approach A for easier name.
>> But I've seen the challenge of maintaining the providers behind the
>> curtains and the confusion it creates with name/titles and when not sure
>> about the domain we're dealing with.
>> Also I believe that supporting self.instances consistently with
>> meaningful name is saner.
>> Therefore I vote B
> +1 for B.
>
> My view is that this should be the advertised way, but the other method
> (meaningless) should be there if the user need it.
>
> So as far as I'm concerned the two idioms should co-exist.  This would
> mimic what is possible with all puppet resources.  For instance you can:
>
>    file { '/tmp/foo.bar': ensure => present }
>
> and you can
>
>    file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>
> The two refer to the same resource.

Right.

>
> But, If that's indeed not possible to have them both, then I would keep
> only the meaningful name.
>
>
> As a side note, someone raised an issue about the delimiter being
> hardcoded to "::".  This could be a property of the resource.  This
> would enable the user to use weird name with "::" in it and assign a "/"
> (for instance) to the delimiter property:
>
>    Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>
> bar::is::cool is the name of the domain and foo::blah is the project.

That's a good idea.  Please file a bug for that.

>
>> Finally
>> ------
>> Thanks for reading that far!
>> To choose, please provide feedback with more pros/cons, examples and
>> your vote.
>>
>> Thanks,
>> Gilles
>>
>>
>> PS:
>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Bye,



From kevin.carter at RACKSPACE.COM  Mon Sep 14 21:02:26 2015
From: kevin.carter at RACKSPACE.COM (Kevin Carter)
Date: Mon, 14 Sep 2015 21:02:26 +0000
Subject: [openstack-dev] [openstack-ansible] PTL Non-Candidacy
Message-ID: <1442264539677.79104@RACKSPACE.COM>

Hello Everyone,

TL;DR - I'm sending this out to announce that I won't be running for PTL of the OpenStack-Ansible project in the upcoming cycle. Although I won't be running for PTL, with community support, I intend to remain an active contributor, just with more time spent cross-project and in other upstream communities.

Being a PTL has been difficult, fun, and rewarding and is something I think everyone should strive to do at least once. In the upcoming cycle I believe our project has reached the point of maturity where it's time for the leadership to change. OpenStack-Ansible was recently moved into the "big tent" and I consider this the perfect juncture for me to step aside and allow the community to evolve under the guidance of a new team lead. I share the opinion of current and former PTLs that having a revolving door of leadership is key to the success of any project [0]. While OpenStack-Ansible has only recently been moved out of Stackforge and into the OpenStack namespace as a governed project (I'm really excited about that), I've had the privilege of working as the project technical lead ever since its inception at Rackspace with the initial proof of concept known as "Ansible-LXC-RPC". It's been an amazing journey so far and I'd like to thank everyone who's helped make OpenStack-Ansible (formerly OSAD) possible; none of this would have happened without the contributions made by our devoted and ever-growing community of deployers and developers.

Thank you again and I look forward to seeing you all online and in Tokyo.

[0] - http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html

--

Kevin Carter
IRC: cloudnull


From douglas.mendizabal at rackspace.com  Mon Sep 14 21:04:49 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Mon, 14 Sep 2015 16:04:49 -0500
Subject: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican
 core
In-Reply-To: <D215C963.3F504%john.wood@rackspace.com>
References: <D215C963.3F504%john.wood@rackspace.com>
Message-ID: <55F73671.3060506@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512


As described in the Barbican Core Team wiki [1], Dave has gotten the
required +1s and no objections, so I'm happy to welcome him to the
Barbican Core reviewer team.

Douglas Mendizábal

On 9/9/15 11:33 AM, John Wood wrote:
> Agreed, +1
> 
> On 9/9/15, 11:17 AM, "michael mccune" <msm at redhat.com> wrote:
> 
>> i'm not a core, but +1 from me. Dave has made solid contributions
>> and would be a great addition to the core team.
>> 
>> mike
>> 
>> On 09/08/2015 12:05 PM, Juan Antonio Osorio wrote:
>>> I'd like to nominate Dave Mccowan for the Barbican core review
>>> team.
>>> 
>>> He has been an active contributor both in doing relevant code
>>> pieces and making useful and thorough reviews; And so I think
>>> he would make a great addition to the team.
>>> 
>>> Please bring the +1's :D
>>> 
>>> Cheers!
>>> 
>>> -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com
>>> <mailto:jaosorior at gmail.com>
>>> 
>>> 
>>> 
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions) 
>>> Unsubscribe: 
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions) 
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV9zZxAAoJEB7Z2EQgmLX7HYwP/ikOJ33bfi5kE2U+T+X5tpm0
XoKdUaFtpUukzCVwnCsBTam4r8SOj+1IYdmjfbExO9MfQ2iNlE6PGaxC+rjGNezA
PO368yuAXPtC0rd+MEtYHduV4iBpDQyRRHM/KIehUmMRS+gOmYfpXdPfkKh55nxi
VQVA6KAAUIMp/tqLbKMUOqsxa4VN4j2ITaI1KouRueRgdnko7bgIIGcG7ruHBQdq
xPFJFy/X2+Gpw3vKXaEYqFvAX8EphHzMd09yz4wjDRzJUuG2Qt7/qD3GEPvV9t9Z
MnbjsOY1/1m76NoFYByaGYHdfLVw7ZlvnM6JUiLdOCSLiDPSk21ps0iKOiT+vJg5
H8n70nzIs6zTOK6xENpgV24U8p3PsT24s84LP5Tp6JFIwIpSHuM/UJceFAr6IqhB
YBsbmkhUK7xi8rJjoDLDWjtAtm9ra763p2tJbRhr0wnFqYys/inAPpy3CsKFNR2/
9SDObwphkuHMSU5dkGXdPiEIMozySE9yPBJanVvESS1gkabMCVZZeAUfA0nxC265
lOJEYKgqf3KLe4sFXMeco48o2DHH5TRSqDdYiEkOYLM5QC2FBDhsgZL/u5ym1/yw
HH6XId2AT6GDSKZ4OAhSJ7+J9cjwbrv4evzvzL2HIeIS+Y2LJqGdlfTaduH19Q0U
wl0PII+/2qydH03kY2wZ
=sW/Y
-----END PGP SIGNATURE-----


From morgan.fainberg at gmail.com  Mon Sep 14 21:06:12 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Mon, 14 Sep 2015 14:06:12 -0700
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F733AF.6080005@redhat.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
Message-ID: <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>

On Mon, Sep 14, 2015 at 1:53 PM, Rich Megginson <rmeggins at redhat.com> wrote:

> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>
>> Hi,
>>
>> Gilles Dubreuil <gilles at redhat.com> writes:
>>
>> A. The 'composite namevar' approach:
>>>
>>>     keystone_tenant {'projectX::domainY': ... }
>>>   B. The 'meaningless name' approach:
>>>
>>>    keystone_tenant {'myproject': name='projectX', domain=>'domainY', ...}
>>>
>>> Notes:
>>>   - Actually using both combined should work too with the domain
>>> supposedly overriding the name part of the domain.
>>>   - Please look at [1] this for some background between the two
>>> approaches:
>>>
>>> The question
>>> -------------
>>> Decide between the two approaches, the one we would like to retain for
>>> puppet-keystone.
>>>
>>> Why it matters?
>>> ---------------
>>> 1. Domain names are mandatory in every user, group or project. Besides
>>> the backward compatibility period mentioned earlier, where no domain
>>> means using the default one.
>>> 2. Long term impact
>>> 3. Both approaches are not completely equivalent which different
>>> consequences on the future usage.
>>>
>> I can't see why they couldn't be equivalent, but I may be missing
>> something here.
>>
>
> I think we could support both.  I don't see it as an either/or situation.
>
>
>> 4. Being consistent
>>> 5. Therefore the community to decide
>>>
>>> Pros/Cons
>>> ----------
>>> A.
>>>
>> I think it's the B: meaningless approach here.
>>
>>    Pros
>>>      - Easier names
>>>
>> That's subjective, creating unique and meaningful name don't look easy
>> to me.
>>
>
> The point is that this allows choice - maybe the user already has some
> naming scheme, or wants to use a more "natural" meaningful name - rather
> than being forced into a possibly "awkward" naming scheme with "::"
>
>   keystone_user { 'heat domain admin user':
>     name => 'admin',
>     domain => 'HeatDomain',
>     ...
>   }
>
>   keystone_user_role {'heat domain admin user@::HeatDomain':
>     roles => ['admin']
>     ...
>   }
>
>
>>    Cons
>>>      - Titles have no meaning!
>>>
>>
> They have meaning to the user, not necessarily to Puppet.
>
>      - Cases where 2 or more resources could exists
>>>
>>
> This seems to be the hardest part - I still cannot figure out how to use
> "compound" names with Puppet.
>
>      - More difficult to debug
>>>
>>
> More difficult than it is already? :P
>
>
>      - Titles mismatch when listing the resources (self.instances)
>>>
>>> B.
>>>    Pros
>>>      - Unique titles guaranteed
>>>      - No ambiguity between resource found and their title
>>>    Cons
>>>      - More complicated titles
>>> My vote
>>> --------
>>> I would love to have the approach A for easier name.
>>> But I've seen the challenge of maintaining the providers behind the
>>> curtains and the confusion it creates with name/titles and when not sure
>>> about the domain we're dealing with.
>>> Also I believe that supporting self.instances consistently with
>>> meaningful name is saner.
>>> Therefore I vote B
>>>
>> +1 for B.
>>
>> My view is that this should be the advertised way, but the other method
>> (meaningless) should be there if the user need it.
>>
>> So as far as I'm concerned the two idioms should co-exist.  This would
>> mimic what is possible with all puppet resources.  For instance you can:
>>
>>    file { '/tmp/foo.bar': ensure => present }
>>
>> and you can
>>
>>    file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>>
>> The two refer to the same resource.
>>
>
> Right.
>
>
>> But, If that's indeed not possible to have them both, then I would keep
>> only the meaningful name.
>>
>>
>> As a side note, someone raised an issue about the delimiter being
>> hardcoded to "::".  This could be a property of the resource.  This
>> would enable the user to use weird name with "::" in it and assign a "/"
>> (for instance) to the delimiter property:
>>
>>    Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>
>> bar::is::cool is the name of the domain and foo::blah is the project.
>>
>
> That's a good idea.  Please file a bug for that.
>
>
I'm not sure I see a benefit to notation like:

      Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }

Overall, the option below just looks more straightforward (and requires
less logic to convert to something useful). However, I admit I am not an
expert in puppet conventions:

     Keystone_tenant { 'foo::blah': domain => 'bar::is::cool', ... }


--Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/cc46a549/attachment.html>

From sathlang at redhat.com  Mon Sep 14 21:26:24 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Mon, 14 Sep 2015 23:26:24 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <55F733AF.6080005@redhat.com> (Rich Megginson's message of "Mon, 
 14 Sep 2015 14:53:03 -0600")
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
Message-ID: <871te066bj.fsf@s390.unix4.net>

Rich Megginson <rmeggins at redhat.com> writes:

> I think we could support both.  I don't see it as an either/or
> situation.
+1

>>> A.
>> I think it's the B: meaningless approach here.
>>
>>>    Pros
>>>      - Easier names
>> That's subjective, creating unique and meaningful name don't look easy
>> to me.
>
> The point is that this allows choice - maybe the user already has some
> naming scheme, or wants to use a more "natural" meaningful name -
> rather than being forced into a possibly "awkward" naming scheme with
> "::"
>
>   keystone_user { 'heat domain admin user':
>     name => 'admin',
>     domain => 'HeatDomain',
>     ...
>   }
>
>   keystone_user_role {'heat domain admin user@::HeatDomain':
>     roles => ['admin']
>     ...
>   }
>

Thanks, I see the point.

>>
>>>    Cons
>>>      - Titles have no meaning!
>
> They have meaning to the user, not necessarily to Puppet.
>
>>>      - Cases where 2 or more resources could exists
>
> This seems to be the hardest part - I still cannot figure out how to
> use "compound" names with Puppet.

I don't get this point.  What is "2 or more resource could exists", and
how does it relate to compound names?

>>>      - More difficult to debug
>
> More difficult than it is already? :P

require 'pry';binding.pry :)

>> As a side note, someone raised an issue about the delimiter being
>> hardcoded to "::".  This could be a property of the resource.  This
>> would enable the user to use weird name with "::" in it and assign a "/"
>> (for instance) to the delimiter property:
>>
>>    Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>
>> bar::is::cool is the name of the domain and foo::blah is the project.
>
> That's a good idea.  Please file a bug for that.

Done there: https://bugs.launchpad.net/puppet-keystone/+bug/1495691

>
>>
>>> Finally
>>> ------
>>> Thanks for reading that far!
>>> To choose, please provide feedback with more pros/cons, examples and
>>> your vote.
>>>
>>> Thanks,
>>> Gilles
>>>
>>>
>>> PS:
>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Bye,
>
>

-- 
Sofer Athlan-Guyot


From adam.harwell at RACKSPACE.COM  Mon Sep 14 21:29:39 2015
From: adam.harwell at RACKSPACE.COM (Adam Harwell)
Date: Mon, 14 Sep 2015 21:29:39 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E8925C2@SINPEX01CL02.citrite.net>
Message-ID: <D21CA5E8.217CF%adam.harwell@rackspace.com>

You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be documented, but we are looking into making an API call that would expose the admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system to add that ID as a readable user of the container and all of the secrets. Then Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is granted access to the user's container.
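As a rough sketch of that workflow (the hrefs, IDs, and exact CLI syntax here are illustrative assumptions, not a tested recipe):

```shell
# 1. The user discovers the neutron-lbaas service user ID
#    (for now, from documentation provided by the operator).
LBAAS_USER_ID=<neutron-lbaas-service-user-id>

# 2. The user grants that ID read access to the certificate container
#    and each secret it references, via Barbican's ACL API.
openstack acl user add --user "$LBAAS_USER_ID" \
    https://barbican.example.com:9311/v1/containers/<container-uuid>
openstack acl user add --user "$LBAAS_USER_ID" \
    https://barbican.example.com:9311/v1/secrets/<certificate-uuid>
openstack acl user add --user "$LBAAS_USER_ID" \
    https://barbican.example.com:9311/v1/secrets/<private-key-uuid>

# 3. Neutron-LBaaS then authenticates with its own credentials and can
#    read the user's container without any admin-wide grant.
```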

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying 'could not locate/find container'.
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = 'admin' and tenant_name = 'admin'.
              This means the lbaas plugin is looking for the tenant's certificate in the 'admin' tenant, where it will never find it.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the 'admin' user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username = 'admin' and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/98cca70b/attachment.html>

From rmeggins at redhat.com  Mon Sep 14 21:35:45 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Mon, 14 Sep 2015 15:35:45 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <871te066bj.fsf@s390.unix4.net>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <871te066bj.fsf@s390.unix4.net>
Message-ID: <55F73DB1.5040604@redhat.com>

On 09/14/2015 03:26 PM, Sofer Athlan-Guyot wrote:
> Rich Megginson <rmeggins at redhat.com> writes:
>
>> I think we could support both.  I don't see it as an either/or
>> situation.
> +1
>
>>>> A.
>>> I think it's the B: meaningless approach here.
>>>
>>>>     Pros
>>>>       - Easier names
>>> That's subjective, creating unique and meaningful name don't look easy
>>> to me.
>> The point is that this allows choice - maybe the user already has some
>> naming scheme, or wants to use a more "natural" meaningful name -
>> rather than being forced into a possibly "awkward" naming scheme with
>> "::"
>>
>>    keystone_user { 'heat domain admin user':
>>      name => 'admin',
>>      domain => 'HeatDomain',
>>      ...
>>    }
>>
>>    keystone_user_role {'heat domain admin user@::HeatDomain':
>>      roles => ['admin']
>>      ...
>>    }
>>
> Thanks, I see the point.
>
>>>>     Cons
>>>>       - Titles have no meaning!
>> They have meaning to the user, not necessarily to Puppet.
>>
>>>>       - Cases where 2 or more resources could exists
>> This seems to be the hardest part - I still cannot figure out how to
>> use "compound" names with Puppet.
> I don't get this point.  what is "2 or more resource could exists" and
> how it relates to compound names ?

I would like to uniquely specify a resource by the _combination_ of the 
name + the domain.  For example:

   keystone_user { 'domain A admin user':
     name => 'admin',
     domain => 'domainA',
   }

   keystone_user { 'domain B admin user':
     name => 'admin',
     domain => 'domainB',
   }

Puppet doesn't like this - the value of the 'name' property of 
keystone_user is not unique throughout the manifest/catalog, even though 
both users are distinct and unique because they exist in different 
domains (and will have different UUIDs assigned by Keystone).

Gilles posted links to discussions about how to use isnamevar and 
title_patterns with Puppet Ruby providers, but I could not get it to 
work.  I was using Puppet 3.8 - perhaps it only works in Puppet 4.0 or 
later.  At any rate, this is an area for someone to do some research.
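For anyone picking up that research: the mechanism under discussion is Puppet's composite-namevar support, where a type declares several namevars and a `self.title_patterns` class method maps title regexes onto them. A pure-Ruby sketch of the decomposition step (the patterns and the flat key format below mimic what a keystone_user type might declare; they are illustrative, not code from puppet-keystone):

```ruby
# Each pattern is [regex, [namevar, ...]]; the first matching regex wins,
# and each capture group is assigned to the corresponding namevar.
TITLE_PATTERNS = [
  [/^(.+)::(.+)$/, [:name, :domain]],   # 'admin::domainA' style titles
  [/^(.+)$/,       [:name]],            # plain titles: domain left unset
].freeze

def decompose(title)
  TITLE_PATTERNS.each do |regex, keys|
    if (m = regex.match(title))
      return keys.zip(m.captures).to_h
    end
  end
  {}
end

decompose('admin::domainA')  # => {:name=>"admin", :domain=>"domainA"}
decompose('admin')           # => {:name=>"admin"}
```

Note the greedy first group means the *last* '::' separates name from domain, so a name containing '::' still parses, matching the convention in the examples above.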

>
>>>>       - More difficult to debug
>> More difficult than it is already? :P
> require 'pry';binding.pry :)

Tried that on Fedora 22 (actually - debugger pry because pry by itself 
isn't a debugger, but a REPL inspector).  Didn't work.

Also doesn't help you when someone hands you a pile of Puppet logs . . .

>
>>> As a side note, someone raised an issue about the delimiter being
>>> hardcoded to "::".  This could be a property of the resource.  This
>>> would enable the user to use weird name with "::" in it and assign a "/"
>>> (for instance) to the delimiter property:
>>>
>>>     Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>>
>>> bar::is::cool is the name of the domain and foo::blah is the project.
>> That's a good idea.  Please file a bug for that.
> Done there: https://bugs.launchpad.net/puppet-keystone/+bug/1495691

Thanks!

>
>>>> Finally
>>>> ------
>>>> Thanks for reading that far!
>>>> To choose, please provide feedback with more pros/cons, examples and
>>>> your vote.
>>>>
>>>> Thanks,
>>>> Gilles
>>>>
>>>>
>>>> PS:
>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>
>>> Bye,
>>



From shardy at redhat.com  Mon Sep 14 21:42:00 2015
From: shardy at redhat.com (Steven Hardy)
Date: Mon, 14 Sep 2015 22:42:00 +0100
Subject: [openstack-dev] [tripleo] TripleO Updates & Upgrades [was:
 Upgrade plans for RDO Manager - Brainstorming]
In-Reply-To: <55F05182.4080906@redhat.com>
References: <55DB6C8A.7040602@redhat.com>
 <55F05182.4080906@redhat.com>
Message-ID: <20150914214158.GA15744@t430slt.redhat.com>

Firstly thanks Emilien for starting this discussion, I revised the subject
in an effort to get wider feedback, apologies for my delay responding;

On Wed, Sep 09, 2015 at 11:34:26AM -0400, Zane Bitter wrote:
> On 24/08/15 15:12, Emilien Macchi wrote:
> >Hi,
> >
> >So I've been working on OpenStack deployments for 4 years now and so far
> >RDO Manager is the second installer -after SpinalStack [1]- I'm working on.
> >
> >SpinalStack already had interested features [2] that allowed us to
> >upgrade our customer platforms almost every months, with full testing
> >and automation.
> >
> >Now, we have RDO Manager, I would be happy to share my little experience
> >on the topic and help to make it possible in the next cycle.
> >
> >For that, I created an etherpad [3], which is not too long and focused
> >on basic topics for now. This is technical and focused on Infrastructure
> >upgrade automation.
> >
> >Feel free to continue discussion on this thread or directly in the etherpad.
> >
> >[1] http://spinalstack.enovance.com
> >[2] http://spinalstack.enovance.com/en/latest/dev/upgrade.html
> >[3] https://etherpad.openstack.org/p/rdo-manager-upgrades
> 
> I added some notes on the etherpad, but I think this discussion poses a
> larger question: what is TripleO? Why are we using Heat? Because to me the
> major benefit of Heat is that it maintains a record of the current state of
> the system that can be used to manage upgrades. And if we're not going to
> make use of that - if we're going to determine the state of the system by
> introspecting nodes and update it by using Ansible scripts without Heat's
> knowledge, then we probably shouldn't be using Heat at all.

So, I think we should definitely learn from successful implementations such
as SpinalStack's, but given the way TripleO is currently implemented (e.g.
primarily orchestrating software configuration via Heat), and the philosophy
behind the project, I think it would be good to focus mostly on *what* needs
to be done and not too much on *how* in terms of tooling at this point, and
definitely not to assume any up-front requirement for additional CM
tooling.

A massive part of the value of TripleO IMHO is using OpenStack native
tooling whenever possible (even if it means working to improve the tools
for all users/use-cases), and I do think (just like orchestrating the
initial deployment) this *is* possible via Heat SoftwareDeployments, but
there's also an external workflow component, which is likely to be
satisfied via tripleo-common (short term) and probably Mistral (longer
term).

> I'm not saying that to close off the option - I think if Heat is not the
> best tool for the job then we should definitely consider other options. And
> right now it really is not the best tool for the job. Adopting Puppet (which
> was a necessary choice IMO) has meant that the responsibility for what I
> call "software orchestration"[1] is split awkwardly between Puppet and Heat.
> For example, the Puppet manifests are baked in to images on the servers, so
> Heat doesn't know when they've changed and can't retrigger Puppet to update
> the configuration when they do. We're left trying to reverse-engineer what
> is supposed to be a declarative model from the workflow that we want for
> things like updates/upgrades.

I don't really agree with this at all tbh - the puppet *modules* are by
default distributed in the images, but any update to them is deployed via
either an RPM update (which heat detects, provided it's applied via the
OS::TripleO::Tasks::PackageUpdate[1] interface, thus puppet *can* be correctly
reapplied), or potentially via rsync[2] in the future; a unique
identifier is all that's required to wire in puppet getting reapplied via
NodeConfigIdentifiers[3].

[1] https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-resource-registry-puppet.yaml#L24
[2] https://github.com/openstack/tripleo-heat-templates/blob/master/firstboot/userdata_dev_rsync.yaml
[3] https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-without-mergepy.yaml#L1262

The puppet *manifests* are distributed via heat, so any update to those
will trigger heat to reapply the manifest the same as any change to a
SoftwareConfig resource config definition.
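A minimal sketch of that wiring (resource names and the file path are illustrative, not taken from tripleo-heat-templates): because the manifest is inlined into the SoftwareConfig, any change to it changes the config, and Heat re-triggers the deployment on the next stack update.

```yaml
  puppet_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      config:
        get_file: manifests/overcloud_controller.pp

  puppet_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: puppet_config}
      server: {get_resource: controller_server}
```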

I actually think we've ended up with a pretty clear split in responsibility
between puppet and Heat: Heat does the orchestration and puts data in place
to be consumed by puppet, which then owns all aspects of the software
configuration.

> That said, I think there's still some cause for optimism: in a world where
> every service is deployed in a container and every container has its own
> Heat SoftwareDeployment, the boundary between Heat's responsibilities and
> Puppet's would be much clearer. The deployment could conceivably fit a
> declarative model much better, and even offer a lot of flexibility in which
> services run on which nodes. We won't really know until we try, but it seems
> distinctly possible to aspire toward Heat actually making things easier
> rather than just not making them too much harder. And there is stuff on the
> long-term roadmap that could be really great if only we had time to devote
> to it - for example, as I mentioned in the etherpad, I'd love to get Heat's
> user hooks integrated with Mistral so that we could have fully-automated,
> highly-available (in a hypothetical future HA undercloud) live migration of
> workloads off compute nodes during updates.

Yup, definitely as we move closer towards more granular role definitions
and particularly container integration, I think the value of the heat
declarative model, composability, and built-in integration with other
OpenStack services will provide more obvious benefits vs tools geared
solely towards software configuration.

> In the meantime, however, I do think that we have all the tools in Heat that
> we need to cobble together what we need to do. In Liberty, Heat supports
> batched rolling updates of ResourceGroups, so we won't need to use user
> hooks to cobble together poor-man's batched update support any more. We can
> use the user hooks for their intended purpose of notifying the client when
> to live-migrate compute workloads off a server that is about to upgraded.
> The Heat templates should already tell us exactly which services are running
> on which nodes. We can trigger particular software deployments on a stack
> update with a parameter value change (as we already do with the yum update
> deployment). For operations that happen in isolation on a single server, we
> can model them as SoftwareDeployment resources within the individual server
> templates. For operations that are synchronised across a group of servers
> (e.g. disabling services on the controller nodes in preparation for a DB
> migration) we can model them as a SoftwareDeploymentGroup resource in the
> parent template. And for chaining multiple sequential operations (e.g.
> disable services, migrate database, enable services), we can chain outputs
> to inputs to handle both ordering and triggering. I'm sure there will be
> many subtleties, but I don't think we *need* Ansible in the mix.

+1 - While I get that Ansible is a popular tool, given the current TripleO
implementation I don't think it's *needed* to orchestrate updates or
upgrades, and there are advantages to keeping the state associated with
cluster-wide operations inside Heat.

I know from talking with Emilien that one aspect of SpinalStack's update
workflow we don't currently capture is the step of determining what is
about to be updated, then calculating a workflow associated with e.g.
restarting services in the right order.  It'd be interesting to figure out
how that might be wired in via the current Heat model and maybe prototype
something which mimics what was done by SpinalStack via Ansible.

> So it's really up to the wider TripleO project team to decide which path to
> go down. I am genuinely not bothered whether we choose Heat or Ansible.
> There may even be ways they can work together without compromising either
> model. But I would be pretty uncomfortable with a mix where we use Heat for
> deployment and Ansible for doing upgrades behind Heat's back.

Perhaps it'd be helpful to work up a couple of specs (or just one which
covers both) defining;

1. Strategy for Updates (defined as all incremental updates *not* requiring
any changes to DB schema or RPC version, e.g. consuming stable-branch
updates)

2. How we deal with (and test) Upgrades (e.g. moving from Kilo to Liberty,
where there are requirements to do DB schema and RPC version changes, and
not all services yet support the more advanced models implemented by e.g.
Nova)

Cheers,

Steve


From emilien at redhat.com  Mon Sep 14 21:44:24 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 14 Sep 2015 17:44:24 -0400
Subject: [openstack-dev] [puppet] monasca,murano,mistral governance
Message-ID: <55F73FB8.8050401@redhat.com>

Hi,

As a reminder, Puppet modules that are part of OpenStack are documented
here [1].

I can see that puppet-murano & puppet-mistral have Gerrit permissions
different from those of other modules, because Mirantis helped to bootstrap
them a few months ago.

I think [2] the modules should be consistent in governance, and only the
Puppet OpenStack group should be able to merge patches for these modules.

Same question for puppet-monasca: if the Monasca team wants their module
under the big tent, I think they'll have to change the Gerrit permissions
to only allow Puppet OpenStack to merge patches.

[1] http://governance.openstack.org/reference/projects/puppet-openstack.html
[2] https://review.openstack.org/223313

Any feedback is welcome,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/6344898d/attachment.pgp>

From sathlang at redhat.com  Mon Sep 14 21:46:18 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Mon, 14 Sep 2015 23:46:18 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
 (Morgan Fainberg's message of "Mon, 14 Sep 2015 14:06:12 -0700")
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
 <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
Message-ID: <87oah44qtx.fsf@s390.unix4.net>

Morgan Fainberg <morgan.fainberg at gmail.com> writes:

> On Mon, Sep 14, 2015 at 1:53 PM, Rich Megginson <rmeggins at redhat.com>
> wrote:
>
>     
>     On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>     
>     Hi,
>         
>         Gilles Dubreuil <gilles at redhat.com> writes:
>         
>                 A. The 'composite namevar' approach:
>             
>             keystone_tenant {'projectX::domainY': ... }
>             B. The 'meaningless name' approach:
>             
>             keystone_tenant {'myproject': name='projectX', domain=>'domainY',
>             ...}
>             
>             Notes:
>             - Actually using both combined should work too with the
>             domain
>             supposedly overriding the name part of the domain.
>             - Please look at [1] this for some background between the
>             two approaches:
>             
>             The question
>             -------------
>             Decide between the two approaches, the one we would like
>             to retain for
>             puppet-keystone.
>             
>             Why it matters?
>             ---------------
>             1. Domain names are mandatory in every user, group or
>             project. Besides
>             the backward compatibility period mentioned earlier, where
>             no domain
>             means using the default one.
>             2. Long term impact
>             3. Both approaches are not completely equivalent which
>             different
>             consequences on the future usage.
>         I can't see why they couldn't be equivalent, but I may be
>         missing
>         something here.
>
>     
>     I think we could support both. I don't see it as an either/or
>     situation.
>     
>         
>                 4. Being consistent
>             5. Therefore the community to decide
>             
>             Pros/Cons
>             ----------
>             A.
>         I think it's the B: meaningless approach here.
>         
>                 Pros
>             - Easier names
>         That's subjective, creating unique and meaningful name don't
>         look easy
>         to me.
>
>     The point is that this allows choice - maybe the user already has
>     some naming scheme, or wants to use a more "natural" meaningful
>     name - rather than being forced into a possibly "awkward" naming
>     scheme with "::"
>     
>     keystone_user { 'heat domain admin user':
>     name => 'admin',
>     domain => 'HeatDomain',
>     ...
>     }
>     
>     keystone_user_role {'heat domain admin user@::HeatDomain':
>     roles => ['admin']
>     ...
>     }
>     
>         
>                 Cons
>             - Titles have no meaning!
>
>     They have meaning to the user, not necessarily to Puppet.
>     
>                 - Cases where 2 or more resources could exists
>
>     This seems to be the hardest part - I still cannot figure out how
>     to use "compound" names with Puppet.
>     
>                 - More difficult to debug
>
>     More difficult than it is already? :P
>     
>     
>     
>                 - Titles mismatch when listing the resources
>             (self.instances)
>             
>             B.
>             Pros
>             - Unique titles guaranteed
>             - No ambiguity between resource found and their title
>             Cons
>             - More complicated titles
>             My vote
>             --------
>             I would love to have the approach A for easier name.
>             But I've seen the challenge of maintaining the providers
>             behind the
>             curtains and the confusion it creates with name/titles and
>             when not sure
>             about the domain we're dealing with.
>             Also I believe that supporting self.instances consistently
>             with
>             meaningful name is saner.
>             Therefore I vote B
>         +1 for B.
>         
>         My view is that this should be the advertised way, but the
>         other method
>         (meaningless) should be there if the user need it.
>         
>         So as far as I'm concerned the two idioms should co-exist.
>         This would
>         mimic what is possible with all puppet resources. For instance
>         you can:
>         
>         file { '/tmp/foo.bar': ensure => present }
>         
>         and you can
>         
>         file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>         present }
>         
>         The two refer to the same resource.
>
>     
>     Right.
>     
>         
>         But, If that's indeed not possible to have them both, then I
>         would keep
>         only the meaningful name.
>         
>         
>         As a side note, someone raised an issue about the delimiter
>         being
>         hardcoded to "::". This could be a property of the resource.
>         This
>         would enable the user to use weird name with "::" in it and
>         assign a "/"
>         (for instance) to the delimiter property:
>         
>         Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/",
>         ... }
>         
>         bar::is::cool is the name of the domain and foo::blah is the
>         project.
>
>     That's a good idea. Please file a bug for that.
>     
>     
>     
>
> I'm not sure I see a benefit to notation like:
> Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }

Currently the keystone v3 code handles the domain through the name using
the "::" separator, so this can be classified as an existing bug.  If the
decision is taken to remove support for that notation (i.e.,
<name>::<domain>, the 'composite namevar' approach), then the delimiter
parameter will be useless.  If we continue to support that notation, then
this would enable the user to use any characters (minus their own chosen
delimiter) in their names.
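To make the idea concrete, here is a hypothetical helper (not code from puppet-keystone) that splits a title into project and domain on the *last* occurrence of a per-resource delimiter, which is what keeps the domain unambiguous even when the project name itself contains '::':

```ruby
# Split a resource title into [project, domain] using a configurable
# delimiter (default '::'). rpartition splits on the LAST occurrence,
# so 'foo::blah/bar::is::cool' with '/' yields project 'foo::blah' and
# domain 'bar::is::cool'.
def split_title(title, delimiter = '::')
  project, sep, domain = title.rpartition(delimiter)
  return [title, nil] if sep.empty?  # delimiter absent: no domain given
  [project, domain]
end

split_title('projectX::domainY')             # => ["projectX", "domainY"]
split_title('foo::blah/bar::is::cool', '/')  # => ["foo::blah", "bar::is::cool"]
```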

> Overall option below just looks more straightforward (and requires
> less logic to convert to something useful). However, I admit I am not
> an expert in puppet conventions:
>
> Keystone_tenant { 'foo::blah': domain => 'bar::is::cool', ... }

This would be some kind of a mix between the two options proposed by
Gilles, if I'm not mistaken.  The name would be the project and the
domain would be a property.  So the name wouldn't be meaningless, but it
wouldn't be fully qualified either.  So many choices :)

>
> --Morgan
>

-- 
Sofer Athlan-Guyot


From sean at coreitpro.com  Mon Sep 14 22:01:03 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Mon, 14 Sep 2015 22:01:03 +0000
Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate and
	maintainability
Message-ID: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>

[adding neutron tag to subject and resending]

Hi,

Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
at the QA sprint in Fort Collins. Earlier today there was a discussion
about the failure rate about the DVR job, and the possible impact that
it is having on the gate.

Ryan has a good patch up that shows the failure rates over time:

https://review.openstack.org/223201

To view the graphs, you go over into your neutron git repo, and open the
.html files that are present in doc/dashboards - which should open up
your browser and display the Graphite query.

Doug put up a patch to change the DVR job to be non-voting while we
determine the cause of the recent spikes:

https://review.openstack.org/223173

There was a good discussion after pushing the patch, revolving around
the need for Neutron to have DVR, to fit operational and reliability
requirements, and help transition away from Nova-Network by providing
one of many solutions similar to Nova's multihost feature.  I'm skipping
over a huge amount of context about the Nova-Network and Neutron work,
since that is a big and ongoing effort. 

DVR is an important feature to have, and we need to ensure that the job
that tests DVR has a high pass rate.

One thing that I think we need is to form a group of contributors that
can help with the DVR feature in the immediate term to fix the current
bugs, and longer term maintain the feature. It's a big task and I don't
believe that a single person or company can or should do it by themselves.

The L3 group is a good place to start, but I think that even within the
L3 team we need a dedicated and diverse group of people who are interested
in maintaining the DVR feature.

Without this, I think the DVR feature will start to bit-rot and that
will have a significant impact on our ability to recommend Neutron as a
replacement for Nova-Network in the future.

-- 
Sean M. Collins

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From matt at mattfischer.com  Mon Sep 14 22:01:45 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Mon, 14 Sep 2015 16:01:45 -0600
Subject: [openstack-dev] [puppet] monasca,murano,mistral governance
In-Reply-To: <55F73FB8.8050401@redhat.com>
References: <55F73FB8.8050401@redhat.com>
Message-ID: <CAHr1CO83BO6yk2L=ic9CJ+WXmXBs3ExeqQHs8NpCEb08vkm4-Q@mail.gmail.com>

Emilien,

I've discussed this with some of the Monasca puppet guys here who are doing
most of the work. I think it probably makes sense to move to that model
now, especially since the pace of development has slowed substantially. One
blocker to having it "big tent" was the lack of test coverage, so as
long as we know that's a work in progress...  I'd also like to get Brad
Kiein's thoughts on this, but he's out of town this week. I'll ask him to
reply when he is back.


On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi <emilien at redhat.com> wrote:

> Hi,
>
> As a reminder, Puppet modules that are part of OpenStack are documented
> here [1].
>
> I can see puppet-murano & puppet-mistral Gerrit permissions different
> from other modules, because Mirantis helped to bootstrap the module a
> few months ago.
>
> I think [2] the modules should be consistent in governance and only
> Puppet OpenStack group should be able to merge patches for these modules.
>
> Same question for puppet-monasca: if Monasca team wants their module
> under the big tent, I think they'll have to change Gerrit permissions to
> only have Puppet OpenStack able to merge patches.
>
> [1]
> http://governance.openstack.org/reference/projects/puppet-openstack.html
> [2] https://review.openstack.org/223313
>
> Any feedback is welcome,
> --
> Emilien Macchi
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/24ec1265/attachment.html>

From thingee at gmail.com  Mon Sep 14 22:03:50 2015
From: thingee at gmail.com (Mike Perez)
Date: Mon, 14 Sep 2015 15:03:50 -0700
Subject: [openstack-dev] Cross-Project meeting, Tue Sept 15th, 21:00 UTC
Message-ID: <CAHcn5b1fM_gjaLsSAyEzJDGw1N7FrN2g0bO73RRtf1ZpNujnXQ@mail.gmail.com>

Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting tomorrow at 21:00 UTC, with the
following agenda:

* Review past action items
* Team announcements (horizontal, vertical, diagonal)
* Centralized Code Coverage Index/Stats --
https://review.openstack.org/#/c/221494/
* Open discussion

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and
have something to communicate to the other teams, feel free to abuse the
relevant sections of that meeting and make sure it gets #info-ed by the
meetbot in the meeting summary.

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

--
Mike Perez


From rlrossit at linux.vnet.ibm.com  Mon Sep 14 22:05:19 2015
From: rlrossit at linux.vnet.ibm.com (Ryan Rossiter)
Date: Mon, 14 Sep 2015 17:05:19 -0500
Subject: [openstack-dev] [Magnum] API response on k8s failure
Message-ID: <55F7449F.9040108@linux.vnet.ibm.com>

I was giving a devstacked version of Magnum a try last week, and from a 
new user standpoint, I hit a big roadblock that caused me a lot of 
confusion. Here's my story:

I was attempting to create a pod in a k8s bay, and I provided it with a
sample manifest from the Kubernetes repo. The Magnum API then returned
the following error to me:

ERROR: 'NoneType' object has no attribute 'host' (HTTP 500)

I hunted down the error to be occurring here [1]. The k8s_api call was
failing, but conductor was continuing on anyway, thinking the k8s API
call went fine. I dug through the API calls to find the true cause of
the error:

{u'status': u'Failure', u'kind': u'Status', u'code': 400, u'apiVersion': 
u'v1beta3', u'reason': u'BadRequest', u'message': u'Pod in version v1 
cannot be handled as a Pod: no kind "Pod" is registered for version 
"v1"', u'metadata': {}}

It turned out the error was because the manifest I was using had
apiVersion v1, not v1beta3. Magnum's generic 500 made that very unclear.

This all does occur within a try, but the k8s API isn't throwing any 
sort of exception that can be caught by [2]. Was this caused by a 
regression in the k8s client? It looks like the original intention of 
this was to catch something going wrong in k8s, and then forward the
message & error code on to let the Magnum API return that.

My question here is: does this classify as a bug? This happens in more 
places than just the pod create. It's changing around API returns (quite 
a few of them), and I don't know how that is handled in the Magnum 
project. If we want to have this done as a blueprint, I can open that up 
and target it for Mitaka, and get to work. If it should be opened up as 
a bug, I can also do that and start work on it ASAP.

[1] 
https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L88-L108
[2] 
https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L94
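The fix argued for above, catching the k8s "Status" failure payload and forwarding its code and message instead of a generic 500, could be sketched like this. All names here (handle_k8s_response, KubeApiError) are invented for illustration and are not Magnum's or the Kubernetes client's actual API:

```python
# Hypothetical sketch: inspect the k8s client's "Status" payload and
# re-raise with the original code and message, so the caller can return
# a meaningful HTTP error instead of an unrelated 500.

class KubeApiError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def handle_k8s_response(response):
    """Raise KubeApiError when the k8s API returns a Failure status."""
    if isinstance(response, dict) and response.get('status') == 'Failure':
        raise KubeApiError(response.get('code', 500),
                           response.get('message', 'unknown k8s error'))
    return response

# The payload from the email would then surface as a 400, not a 500:
failure = {'status': 'Failure', 'kind': 'Status', 'code': 400,
           'reason': 'BadRequest',
           'message': 'Pod in version v1 cannot be handled as a Pod: '
                      'no kind "Pod" is registered for version "v1"'}
try:
    handle_k8s_response(failure)
except KubeApiError as e:
    print(e.code)  # 400
```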

-- 
Thanks,

Ryan Rossiter (rlrossit)



From mscherbakov at mirantis.com  Mon Sep 14 22:19:52 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Mon, 14 Sep 2015 22:19:52 +0000
Subject: [openstack-dev] [Fuel] Bugs which we should accept in 7.0 after
 Hard Code Freeze
In-Reply-To: <CAJQwwYO15aBkbPoUYw7s4Kj1Dsg0iSGJDA5Jr5ZcZsHZ5TzHsQ@mail.gmail.com>
References: <CAJQwwYO15aBkbPoUYw7s4Kj1Dsg0iSGJDA5Jr5ZcZsHZ5TzHsQ@mail.gmail.com>
Message-ID: <CAKYN3rP0LjcYFwCpUNPESytzcovec9CLyx4Pypi4c9ZHpDbeSQ@mail.gmail.com>

Thanks Andrew.
Team, if there are any disagreements, let's discuss them. Otherwise, I think
we should just be strict and follow the defined process. We can deliver high
priority bugfixes via the updates channel later if needed.

I hope the reasoning is clear to everyone. Every bugfix has the potential
to break something; it's basically a risk.

On Mon, Sep 14, 2015 at 8:57 AM Andrew Maksimov <amaksimov at mirantis.com>
wrote:

> Hi Everyone!
>
> I would like to reiterate the bugfix process after Hard Code Freeze.
> According to our HCF definition [1] we should only merge fixes for
> *Critical* bugs to *stable/7.0* branch, High and lower priority bugs
> should NOT be accepted to *stable/7.0* branch anymore.
> Also we should accept patches for critical bugs to *stable/7.0* branch
> only after the corresponding patchset with the same Change-Id was accepted into
> master.
>
> [1] - https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
>
> Regards,
> Andrey Maximov
> Fuel Project Manager
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen

From rmoats at us.ibm.com  Mon Sep 14 22:23:57 2015
From: rmoats at us.ibm.com (Ryan Moats)
Date: Mon, 14 Sep 2015 17:23:57 -0500
Subject: [openstack-dev] [neutron] PTL Candidacy
Message-ID: <201509142224.t8EMODDH000516@d01av05.pok.ibm.com>



I'd like to announce my candidacy for the Mitaka cycle as Neutron PTL.

While not currently a Neutron core, I've been looking hard at Neutron
for several cycles now, and I feel that while we've accomplished a great
deal, more consolidation at this point is desirable.  Therefore, if
elected, my goals for the Mitaka cycle will be:

1. Continue the decomposition efforts that Kyle has championed up through
the Liberty cycle.
2. Improve the performance and reliability of existing reference solutions
so that operators can be more confident when deploying neutron.
3. As in Liberty, we have a large list of new items for the design summit
[1], which we will work to pare down to those that can be delivered in
Mitaka.

Thanks,
Ryan Moats (IRC handle: regXboi)

[1] https://etherpad.openstack.org/p/neutron-mitaka-designsummit

From john.griffith8 at gmail.com  Mon Sep 14 22:36:26 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 14 Sep 2015 16:36:26 -0600
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <20150914170202.GA13271@gmx.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
 <20150914170202.GA13271@gmx.com>
Message-ID: <CAPWkaSXoeXgSw3VSWQt8_f8_Mo7MvUx50tdiqSqDmjBW6x8fWg@mail.gmail.com>

On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <sean.mcginnis at gmx.com>
wrote:

> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
> > Hello all,
> >
> > I will not be running for Cinder PTL this next cycle. Each cycle I ran
> > was for a reason [1][2], and the Cinder team should feel proud of our
> > accomplishments:
>
> Thanks for a couple of awesome cycles Mike!
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
You did a fantastic job Mike, thank you very much for the hard work and
dedication.

From harlowja at outlook.com  Mon Sep 14 22:48:03 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Mon, 14 Sep 2015 15:48:03 -0700
Subject: [openstack-dev] [oslo] PTL candidacy
Message-ID: <BLU437-SMTP74172F20BE773585621C09D85D0@phx.gbl>

Howdy folks,

I'd like to propose myself for oslo PTL for the Mitaka cycle.

For those that don't know me, I've been involved in OpenStack for ~4 years.

I have worked at Yahoo! for ~7 years, and have spent ~4 of those getting
OpenStack adopted at Yahoo! (where it is now used by everyone, and is a
common word/name/project all employees know about, quite a change from
when ~three other engineers and myself started investigating it ~4 years
ago). It has been quite the journey (for myself, others, and Yahoo! in
general), and I've been pretty active in oslo for ~2 years, so I thought
it might be a good time to run and see how I can help in a PTL role (this
also ensures nobody else in oslo-core burns out).

I contribute to many projects (inside and outside of openstack):

- http://stackalytics.com/report/users/harlowja

Created/maintain/co-maintain/contributor/core to the following:

(not directly openstack, generally useful to all)
- https://kazoo.readthedocs.org
- https://redis-py.readthedocs.org
- https://cloudinit.readthedocs.org
- https://fasteners.readthedocs.org
- https://pymemcache.readthedocs.org
- https://pypi.python.org/pypi/zake
- https://pypi.python.org/pypi/doc8
- https://pypi.python.org/pypi/retrying
(mainly created for usage by openstack, but not limited to)
- https://anvil.readthedocs.org
- http://docs.openstack.org/developer/futurist/
- http://docs.openstack.org/developer/automaton/
- http://docs.openstack.org/developer/debtcollector/
- http://docs.openstack.org/developer/taskflow/
- http://docs.openstack.org/developer/tooz/
(created for usage by openstack)
- oslo.messaging
- oslo.utils
- oslo.serialization
- oslo.service
- (all the other 'oslo.*' libraries)
(and more...)

I feel I can help bring a unique viewpoint to oslo and openstack in 
general; one of increasing exposure and general usefulness of oslo 
libraries outside of openstack; fostering community inside and outside 
and continuing to make oslo and openstack the best it can be.

Some of the things that I would like to focus on (not inclusive of all 
the things):

- Increasing outreach to consuming projects so that they can benefit 
from the oslo libraries, code and knowledge (and patterns) that have 
been built up by these libraries; perhaps some kind of bi-weekly blog 
about oslo?
- Improving our outreach to others in the wider world (even ones not in 
the big tent); the python community is a big world and it'd be great to 
make sure we do our part there as well.
- Asking the hard questions.
- Being jolly.

Thanks for considering me,

Any questions/comments/feedback, please let me know and I'll do my best 
to answer them :-)

-Joshua Harlow


From rlrossit at linux.vnet.ibm.com  Mon Sep 14 22:49:24 2015
From: rlrossit at linux.vnet.ibm.com (Ryan Rossiter)
Date: Mon, 14 Sep 2015 17:49:24 -0500
Subject: [openstack-dev] [magnum] Maintaining cluster API in upgrades
Message-ID: <55F74EF4.5050104@linux.vnet.ibm.com>

I have some food for thought with regard to upgrades, provoked by some
incorrect usage of Magnum which led me to find [1].

Let's say we're running a cloud with Liberty Magnum, which works with 
Kubernetes API v1. During the Mitaka release, Kubernetes released v2, so 
now Magnum conductor in Mitaka works with Kubernetes v2 API. What would 
happen if I upgrade from L to M with Magnum? My existing Magnum/k8s 
stuff will be on v1, so having Mitaka conductor attempt to interact with 
that stuff will cause it to blow up, right? The k8s API calls will fail
because the communicating components are using differing versions of the 
API (assuming there are backwards incompatibilities).

I'm running through some suggestions in my head in order to handle this:

1. Have conductor maintain all supported older versions of k8s, and do 
API discovery to figure out which version of the API to use
   - This one sounds like a total headache from a code management standpoint

2. Do some sort of heat stack update to upgrade all existing clusters to 
use the current version of the API
   - In my head, this would work kind of like a database migration, but 
it seems like it would be a lot harder

3. Maintain cluster clients outside of the Magnum tree
   - This would make maintaining the client compatibilities a lot easier
   - Would help eliminate the cruft of merging 48k lines for a swagger 
generated client [2]
   - Having the client outside of tree would allow for a simple pip install
   - Not sure if this *actually* solves the problem above

This isn't meant to be a "we need to change this" topic, it's just meant 
to be more of a "what if" discussion. I am also up for suggestions other 
than the 3 above.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074448.html
[2] https://review.openstack.org/#/c/217427/
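Option 1 above (maintaining all supported k8s versions and doing API discovery) could be sketched roughly as follows; the version strings, client classes, and client_for helper are purely illustrative, not Magnum's actual code:

```python
# Rough sketch of option 1: conductor discovers the bay's API version and
# dispatches to a matching client. In reality discovery would hit the
# k8s /version endpoint; here it is just a string argument.

class K8sClientV1:
    version = 'v1'
    def create_pod(self, manifest):
        return ('v1', manifest)

class K8sClientV2:
    version = 'v2'
    def create_pod(self, manifest):
        return ('v2', manifest)

# Registry of every client version the conductor still supports.
SUPPORTED_CLIENTS = {c.version: c for c in (K8sClientV1, K8sClientV2)}

def client_for(discovered_version):
    """Return a client matching the bay's discovered API version."""
    try:
        return SUPPORTED_CLIENTS[discovered_version]()
    except KeyError:
        raise RuntimeError('unsupported k8s API version: %s'
                           % discovered_version)

print(client_for('v1').create_pod({})[0])  # v1
```

The headache the email mentions is visible even in the sketch: every supported version needs its own client class kept alive in the tree.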

-- 
Thanks,

Ryan Rossiter (rlrossit)


From clint at fewbar.com  Tue Sep 15 00:06:44 2015
From: clint at fewbar.com (Clint Byrum)
Date: Mon, 14 Sep 2015 17:06:44 -0700
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442263439-sup-913@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <1442250235-sup-1646@lrrr.local>
 <1442261641-sup-9577@fewbar.com> <1442263439-sup-913@lrrr.local>
Message-ID: <1442275194-sup-3621@fewbar.com>

Excerpts from Doug Hellmann's message of 2015-09-14 13:46:16 -0700:
> Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
> > Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
> > > Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> > > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > > > >
> > > > >After having some conversations with folks at the Ops Midcycle a
> > > > >few weeks ago, and observing some of the more recent email threads
> > > > >related to glance, glance-store, the client, and the API, I spent
> > > > >last week contacting a few of you individually to learn more about
> > > > >some of the issues confronting the Glance team. I had some very
> > > > >frank, but I think constructive, conversations with all of you about
> > > > >the issues as you see them. As promised, this is the public email
> > > > >thread to discuss what I found, and to see if we can agree on what
> > > > >the Glance team should be focusing on going into the Mitaka summit
> > > > >and development cycle and how the rest of the community can support
> > > > >you in those efforts.
> > > > >
> > > > >I apologize for the length of this email, but there's a lot to go
> > > > >over. I've identified 2 high priority items that I think are critical
> > > > >for the team to be focusing on starting right away in order to use
> > > > >the upcoming summit time effectively. I will also describe several
> > > > >other issues that need to be addressed but that are less immediately
> > > > >critical. First the high priority items:
> > > > >
> > > > >1. Resolve the situation preventing the DefCore committee from
> > > > >   including image upload capabilities in the tests used for trademark
> > > > >   and interoperability validation.
> > > > >
> > > > >2. Follow through on the original commitment of the project to
> > > > >   provide an image API by completing the integration work with
> > > > >   nova and cinder to ensure V2 API adoption.
> > > > 
> > > > Hi Doug,
> > > > 
> > > > First and foremost, I'd like to thank you for taking the time to dig
> > > > into these issues, and for reaching out to the community seeking for
> > > > information and a better understanding of what the real issues are. I
> > > > can imagine how much time you had to dedicate on this and I'm glad you
> > > > did.
> > > > 
> > > > Now, to your email, I very much agree with the priorities you
> > > > mentioned above and I'd like for, whomever will win Glance's PTL
> > > > election, to bring focus back on that.
> > > > 
> > > > Please, find some comments in-line for each point:
> > > > 
> > > > >
> > > > >I. DefCore
> > > > >
> > > > >The primary issue that attracted my attention was the fact that
> > > > >DefCore cannot currently include an image upload API in its
> > > > >interoperability test suite, and therefore we do not have a way to
> > > > >ensure interoperability between clouds for users or for trademark
> > > > >use. The DefCore process has been long, and at times confusing,
> > > > >even to those of us following it sort of closely. It's not entirely
> > > > >surprising that some projects haven't been following the whole time,
> > > > >or aren't aware of exactly what the whole thing means. I have
> > > > >proposed a cross-project summit session for the Mitaka summit to
> > > > >address this need for communication more broadly, but I'll try to
> > > > >summarize a bit here.
> > > > 
> > > > +1
> > > > 
> > > > I think it's quite sad that some projects, especially those considered
> > > > to be part of the `starter-kit:compute`[0], don't follow closely
> > > > what's going on in DefCore. I personally consider this a task PTLs
> > > > should incorporate in their role duties. I'm glad you proposed such
> > > > session, I hope it'll help raising awareness of this effort and it'll
> > > > help moving things forward on that front.
> > > 
> > > Until fairly recently a lot of the discussion was around process
> > > and priorities for the DefCore committee. Now that those things are
> > > settled, and we have some approved policies, it's time to engage
> > > more fully.  I'll be working during Mitaka to improve the two-way
> > > communication.
> > > 
> > > > 
> > > > >
> > > > >DefCore is using automated tests, combined with business policies,
> > > > >to build a set of criteria for allowing trademark use. One of the
> > > > >goals of that process is to ensure that all OpenStack deployments
> > > > >are interoperable, so that users who write programs that talk to
> > > > >one cloud can use the same program with another cloud easily. This
> > > > >is a *REST API* level of compatibility. We cannot insert cloud-specific
> > > > >behavior into our client libraries, because not all cloud consumers
> > > > >will use those libraries to talk to the services. Similarly, we
> > > > >can't put the logic in the test suite, because that defeats the
> > > > >entire purpose of making the APIs interoperable. For this level of
> > > > >compatibility to work, we need well-defined APIs, with a long support
> > > > >period, that work the same no matter how the cloud is deployed. We
> > > > >need the entire community to support this effort. From what I can
> > > > >tell, that is going to require some changes to the current Glance
> > > > >API to meet the requirements. I'll list those requirements, and I
> > > > >hope we can discuss them to a degree that ensures everyone understands
> > > > >them. I don't want this email thread to get bogged down in
> > > > >implementation details or API designs, though, so let's try to keep
> > > > >the discussion at a somewhat high level, and leave the details for
> > > > >specs and summit discussions. I do hope you will correct any
> > > > >misunderstandings or misconceptions, because unwinding this as an
> > > > >outside observer has been quite a challenge and it's likely I have
> > > > >some details wrong.
> > > > >
> > > > >As I understand it, there are basically two ways to upload an image
> > > > >to glance using the V2 API today. The "POST" API pushes the image's
> > > > >bits through the Glance API server, and the "task" API instructs
> > > > >Glance to download the image separately in the background. At one
> > > > >point apparently there was a bug that caused the results of the two
> > > > >different paths to be incompatible, but I believe that is now fixed.
> > > > >However, the two separate APIs each have different issues that make
> > > > >them unsuitable for DefCore.
> > > > >
> > > > >The DefCore process relies on several factors when designating APIs
> > > > >for compliance. One factor is the technical direction, as communicated
> > > > >by the contributor community -- that's where we tell them things
> > > > >like "we plan to deprecate the Glance V1 API". In addition to the
> > > > >technical direction, DefCore looks at the deployment history of an
> > > > >API. They do not want to require deploying an API if it is not seen
> > > > >as widely usable, and they look for some level of existing adoption
> > > > >by cloud providers and distributors as an indication of that the
> > > > >API is desired and can be successfully used. Because we have multiple
> > > > >upload APIs, the message we're sending on technical direction is
> > > > >weak right now, and so they have focused on deployment considerations
> > > > >to resolve the question.
> > > > 
> > > > The task upload process you're referring to is the one that uses the
> > > > `import` task, which allows you to download an image from an external
> > > > source, asynchronously, and import it in Glance. This is the old
> > > > `copy-from` behavior that was moved into a task.
> > > > 
> > > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > > community will disagree - is that I don't consider tasks to be a
> > > > public API. That is to say, I would expect tasks to be an internal API
> > > > used by cloud admins to perform some actions (based on its current
> > > > implementation). Eventually, some of these tasks could be triggered
> > > > from the external API but as background operations that are triggered
> > > > by the well-known public ones and not through the task API.
> > > 
> > > Does that mean it's more of an "admin" API?
> > > 
> > 
> > I think it is basically just a half-way done implementation that is
> > exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
> > When last I tried to make integration tests in shade that exercised the
> > upstream glance task import code, I was met with an implementation that
> > simply did not work, because the pieces behind it had never been fully
> > implemented upstream. That may have been resolved, but in the process
> > of trying to write tests and make this work, I discovered a system that
> > made very little sense from a user standpoint. I want to upload an
> > image, why do I want a task?!
> > 
> > > > 
> > > > Ultimately, I believe end-users of the cloud simply shouldn't care
> > > > about what tasks are or aren't and more importantly, as you mentioned
> > > > later in the email, tasks make clouds not interoperable. I'd be pissed
> > > > if my public image service would ask me to learn about tasks to be
> > > > able to use the service.
> > > 
> > > It would be OK if a public API set up to do a specific task returned a
> > > task ID that could be used with a generic task API to check status, etc.
> > > So the idea of tasks isn't completely bad, it's just too vague as it's
> > > exposed right now.
> > > 
> > 
> > I think it is a concern, because it is assuming users will want to do
> > generic things with a specific API. This turns into a black-box game where
> > the user shoves a task in and then waits to see what comes out the other
> > side. Not something I want to encourage users to do or burden them with.
> > 
> > We have an API whose sole purpose is to accept image uploads. That
> > Rackspace identified a scaling pain point there is _good_. But why not
> > *solve* it for the user, instead of introduce more complexity?
> 
> That's fair. I don't actually care which API we have, as long as it
> meets the other requirements.
> 
> > 
> > What I'd like to see is the upload image API given the ability to
> > respond with a URL that can be uploaded to using the object storage API
> > we already have in OpenStack. Exposing users to all of these operator
> > choices is just wasting their time. Just simply say "Oh, you want to
> > upload an image? Thats fine, please upload it as an object over there
> > and POST here again when it is ready to be imported." This will make
> > perfect sense to a user reading docs, and doesn't require them to grasp
> > an abstract concept like "tasks" when all they want to do is upload
> > their image.
> > 
> 
> And what would it do if the backing store for the image service
> isn't Swift or another object storage system that supports direct
> uploads? Return a URL that pointed back to itself, maybe?

For those operators who don't have concerns about scaling the glance
API service to their users' demands, glance's image upload API works
perfectly well today.  The indirect approach is only meant to deal with
the situation where the operator expects a lot of really large images to
be uploaded simultaneously, and would like to take advantage of the Swift
API's rather rich set of features for making that a positive experience.
There is also a user benefit to using the Swift API, which is that a
segmented upload can more easily be resumed.

Now, IMO HTTP has facilities for that too, it's just that glanceclient
(and lo, many HTTP clients) aren't well versed in those deeper, optional
pieces of HTTP. That is why Swift works the way it does, and I like
the idea of glance simply piggy backing on the experience of many years
of production refinement that are available and codified in Swift and
any other OpenStack Object Storage API implementations (like the CEPH
RADOS gateway).
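The indirect flow described above, where the image API hands back an object-storage URL and the client calls back once the upload is done, might look like this from a client's perspective. Everything here is an invented in-memory stand-in, not Glance's or Swift's real API:

```python
# Toy model of the proposed two-step upload: the image API returns an
# object-storage upload location, the client puts the bits there, then
# asks the image API to import the finished object.

class ToyCloud:
    def __init__(self):
        self.objects = {}   # stand-in for object storage
        self.images = {}    # stand-in for the image registry

    def create_image(self, name):
        """Image API: reserve an image and hand back an upload location."""
        upload_url = 'objects/%s' % name
        self.images[name] = {'status': 'queued', 'location': upload_url}
        return upload_url

    def put_object(self, url, data):
        """Object storage API: accept the bits (resumable in real life)."""
        self.objects[url] = data

    def import_image(self, name):
        """Image API: finalize once the uploaded object exists."""
        image = self.images[name]
        if image['location'] not in self.objects:
            raise RuntimeError('upload not finished')
        image['status'] = 'active'
        return image

cloud = ToyCloud()
url = cloud.create_image('cirros')
cloud.put_object(url, b'image-bits')
print(cloud.import_image('cirros')['status'])  # active
```

The user-facing story stays simple: create, upload to the returned location, import; the operator's choice of backing store hides behind the returned URL.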


From adrian.otto at rackspace.com  Tue Sep 15 00:30:24 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Tue, 15 Sep 2015 00:30:24 +0000
Subject: [openstack-dev] [Magnum] API response on k8s failure
In-Reply-To: <55F7449F.9040108@linux.vnet.ibm.com>
References: <55F7449F.9040108@linux.vnet.ibm.com>
Message-ID: <B4B443CF-ADC2-46B1-90D2-A0338160DDBE@rackspace.com>

Ryan,

Thanks for sharing this. Sorry you got off to a bumpy start. I suggest you file a bug for this against magnum and we can decide how best to handle it. I cannot tell from your email what kubectl would do with the same input. We might have an opportunity to make both better.

If you need guidance for how to file a bug, feel free to email me directly and I can point you in the right direction.

Thanks,

Adrian

> On Sep 14, 2015, at 3:05 PM, Ryan Rossiter <rlrossit at linux.vnet.ibm.com> wrote:
> 
> I was giving a devstacked version of Magnum a try last week, and from a new user standpoint, I hit a big roadblock that caused me a lot of confusion. Here's my story:
> 
> I was attempting to create a pod in a k8s bay, and I provided it with an sample manifest from the Kubernetes repo. The Magnum API then returned the following error to me:
> 
> ERROR: 'NoneType' object has no attribute 'host' (HTTP 500)
> 
> I hunted down the error to be occurring here [1]. The k8s_api call was going bad, but conductor was continuing on anyways thinking the k8s API call went fine. I dug through the API calls to find the true cause of the error:
> 
> {u'status': u'Failure', u'kind': u'Status', u'code': 400, u'apiVersion': u'v1beta3', u'reason': u'BadRequest', u'message': u'Pod in version v1 cannot be handled as a Pod: no kind "Pod" is registered for version "v1"', u'metadata': {}}
> 
> It turned out the error was because the manifest I was using had apiVersion v1, not v1beta3. That was very unclear by Magnum originally sending the 500.
> 
> This all does occur within a try, but the k8s API isn't throwing any sort of exception that can be caught by [2]. Was this caused by a regression in the k8s client? It looks like the original intention of this was to catch something going wrong in k8s, and then forward on the message & error code on to let the magnum API return that.
> 
> My question here is: does this classify as a bug? This happens in more places than just the pod create. It's changing around API returns (quite a few of them), and I don't know how that is handled in the Magnum project. If we want to have this done as a blueprint, I can open that up and target it for Mitaka, and get to work. If it should be opened up as a bug, I can also do that and start work on it ASAP.
> 
> [1] https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L88-L108
> [2] https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L94
> 
> -- 
> Thanks,
> 
> Ryan Rossiter (rlrossit)
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From davanum at gmail.com  Tue Sep 15 00:45:33 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Mon, 14 Sep 2015 20:45:33 -0400
Subject: [openstack-dev] [oslo] PTL candidacy
Message-ID: <CANw6fcFOTRU+Qoz3LYeA-4wVJUEG_RR5w_Pt2kMiRy6MujN=+Q@mail.gmail.com>

Hi,

It's been a great privilege to be the Oslo PTL for Liberty, and to get to
know and learn a whole lot of things. I hope I have helped move the Oslo
project along its path in the process. Things we should be proud of
include the fact that oslo-incubator is almost empty. We have a whole
bunch of new libraries, both general-purpose ones usable outside of
OpenStack and ones specific to OpenStack. As a team, we have greatly
stabilized core libraries like oslo.db and oslo.messaging as well. We
have grown both the oslo core team and the cores for individual oslo
projects to inject new blood into the project. We also pushed hard to do
a lot of testing before we released code, to reduce the chances of
breaking projects, and stuck to a schedule of weekly releases so things
go out early.

For Mitaka, I would like to focus on documentation. This has been a sore
spot for a while, and folks end up having to read code when things fail.
I'd also like the team to finally get rid of the remnants in
oslo-incubator and spearhead adoption of the oslo libraries in various
projects. Shadowing Doug in the previous cycle helped me along the way in
Liberty, so I'd love to help hand over the duties to the next PTL for the
N release; I'm happy to do this even if there is a new PTL for the Mitaka
release. As mentioned in the oslo meeting today, it would be great to
have a vote, and thanks to Joshua (and anyone else who may throw their
hat in) for making it a race :) Looking forward to new oslo libraries,
more drivers for existing libraries, and working together to make the
OpenStack ecosystem more vibrant and welcoming to everyone.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

From mordred at inaugust.com  Tue Sep 15 00:46:38 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 15 Sep 2015 02:46:38 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442275194-sup-3621@fewbar.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <1442250235-sup-1646@lrrr.local>
 <1442261641-sup-9577@fewbar.com> <1442263439-sup-913@lrrr.local>
 <1442275194-sup-3621@fewbar.com>
Message-ID: <55F76A6E.6000507@inaugust.com>

On 09/15/2015 02:06 AM, Clint Byrum wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 13:46:16 -0700:
>> Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
>>> Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
>>>> Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
>>>>> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
>>>>>>
>>>>>> After having some conversations with folks at the Ops Midcycle a
>>>>>> few weeks ago, and observing some of the more recent email threads
>>>>>> related to glance, glance-store, the client, and the API, I spent
>>>>>> last week contacting a few of you individually to learn more about
>>>>>> some of the issues confronting the Glance team. I had some very
>>>>>> frank, but I think constructive, conversations with all of you about
>>>>>> the issues as you see them. As promised, this is the public email
>>>>>> thread to discuss what I found, and to see if we can agree on what
>>>>>> the Glance team should be focusing on going into the Mitaka summit
>>>>>> and development cycle and how the rest of the community can support
>>>>>> you in those efforts.
>>>>>>
>>>>>> I apologize for the length of this email, but there's a lot to go
>>>>>> over. I've identified 2 high priority items that I think are critical
>>>>>> for the team to be focusing on starting right away in order to use
>>>>>> the upcoming summit time effectively. I will also describe several
>>>>>> other issues that need to be addressed but that are less immediately
>>>>>> critical. First the high priority items:
>>>>>>
>>>>>> 1. Resolve the situation preventing the DefCore committee from
>>>>>>    including image upload capabilities in the tests used for trademark
>>>>>>    and interoperability validation.
>>>>>>
>>>>>> 2. Follow through on the original commitment of the project to
>>>>>>    provide an image API by completing the integration work with
>>>>>>    nova and cinder to ensure V2 API adoption.
>>>>>
>>>>> Hi Doug,
>>>>>
>>>>> First and foremost, I'd like to thank you for taking the time to dig
>>>>> into these issues, and for reaching out to the community seeking
>>>>> information and a better understanding of what the real issues are. I
>>>>> can imagine how much time you had to dedicate on this and I'm glad you
>>>>> did.
>>>>>
>>>>> Now, to your email, I very much agree with the priorities you
>>>>> mentioned above and I'd like for, whomever will win Glance's PTL
>>>>> election, to bring focus back on that.
>>>>>
>>>>> Please, find some comments in-line for each point:
>>>>>
>>>>>>
>>>>>> I. DefCore
>>>>>>
>>>>>> The primary issue that attracted my attention was the fact that
>>>>>> DefCore cannot currently include an image upload API in its
>>>>>> interoperability test suite, and therefore we do not have a way to
>>>>>> ensure interoperability between clouds for users or for trademark
>>>>>> use. The DefCore process has been long, and at times confusing,
>>>>>> even to those of us following it sort of closely. It's not entirely
>>>>>> surprising that some projects haven't been following the whole time,
>>>>>> or aren't aware of exactly what the whole thing means. I have
>>>>>> proposed a cross-project summit session for the Mitaka summit to
>>>>>> address this need for communication more broadly, but I'll try to
>>>>>> summarize a bit here.
>>>>>
>>>>> +1
>>>>>
>>>>> I think it's quite sad that some projects, especially those considered
>>>>> to be part of the `starter-kit:compute`[0], don't follow closely
>>>>> what's going on in DefCore. I personally consider this a task PTLs
>>>>> should incorporate into their role duties. I'm glad you proposed such a
>>>>> session; I hope it'll help raise awareness of this effort and help
>>>>> move things forward on that front.
>>>>
>>>> Until fairly recently a lot of the discussion was around process
>>>> and priorities for the DefCore committee. Now that those things are
>>>> settled, and we have some approved policies, it's time to engage
>>>> more fully.  I'll be working during Mitaka to improve the two-way
>>>> communication.
>>>>
>>>>>
>>>>>>
>>>>>> DefCore is using automated tests, combined with business policies,
>>>>>> to build a set of criteria for allowing trademark use. One of the
>>>>>> goals of that process is to ensure that all OpenStack deployments
>>>>>> are interoperable, so that users who write programs that talk to
>>>>>> one cloud can use the same program with another cloud easily. This
>>>>>> is a *REST API* level of compatibility. We cannot insert cloud-specific
>>>>>> behavior into our client libraries, because not all cloud consumers
>>>>>> will use those libraries to talk to the services. Similarly, we
>>>>>> can't put the logic in the test suite, because that defeats the
>>>>>> entire purpose of making the APIs interoperable. For this level of
>>>>>> compatibility to work, we need well-defined APIs, with a long support
>>>>>> period, that work the same no matter how the cloud is deployed. We
>>>>>> need the entire community to support this effort. From what I can
>>>>>> tell, that is going to require some changes to the current Glance
>>>>>> API to meet the requirements. I'll list those requirements, and I
>>>>>> hope we can discuss them to a degree that ensures everyone understands
>>>>>> them. I don't want this email thread to get bogged down in
>>>>>> implementation details or API designs, though, so let's try to keep
>>>>>> the discussion at a somewhat high level, and leave the details for
>>>>>> specs and summit discussions. I do hope you will correct any
>>>>>> misunderstandings or misconceptions, because unwinding this as an
>>>>>> outside observer has been quite a challenge and it's likely I have
>>>>>> some details wrong.
>>>>>>
>>>>>> As I understand it, there are basically two ways to upload an image
>>>>>> to glance using the V2 API today. The "POST" API pushes the image's
>>>>>> bits through the Glance API server, and the "task" API instructs
>>>>>> Glance to download the image separately in the background. At one
>>>>>> point apparently there was a bug that caused the results of the two
>>>>>> different paths to be incompatible, but I believe that is now fixed.
>>>>>> However, the two separate APIs each have different issues that make
>>>>>> them unsuitable for DefCore.
>>>>>>
>>>>>> The DefCore process relies on several factors when designating APIs
>>>>>> for compliance. One factor is the technical direction, as communicated
>>>>>> by the contributor community -- that's where we tell them things
>>>>>> like "we plan to deprecate the Glance V1 API". In addition to the
>>>>>> technical direction, DefCore looks at the deployment history of an
>>>>>> API. They do not want to require deploying an API if it is not seen
>>>>>> as widely usable, and they look for some level of existing adoption
>>>>>> by cloud providers and distributors as an indication that the
>>>>>> API is desired and can be successfully used. Because we have multiple
>>>>>> upload APIs, the message we're sending on technical direction is
>>>>>> weak right now, and so they have focused on deployment considerations
>>>>>> to resolve the question.
>>>>>
>>>>> The task upload process you're referring to is the one that uses the
>>>>> `import` task, which allows you to download an image from an external
>>>>> source, asynchronously, and import it in Glance. This is the old
>>>>> `copy-from` behavior that was moved into a task.
>>>>>
>>>>> The "fun" thing about this - and I'm sure other folks in the Glance
>>>>> community will disagree - is that I don't consider tasks to be a
>>>>> public API. That is to say, I would expect tasks to be an internal API
>>>>> used by cloud admins to perform some actions (based on its current
>>>>> implementation). Eventually, some of these tasks could be triggered
>>>>> from the external API but as background operations that are triggered
>>>>> by the well-known public ones and not through the task API.
>>>>
>>>> Does that mean it's more of an "admin" API?
>>>>
>>>
>>> I think it is basically just a half-way done implementation that is
>>> exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
>>> When last I tried to make integration tests in shade that exercised the
>>> upstream glance task import code, I was met with an implementation that
>>> simply did not work, because the pieces behind it had never been fully
>>> implemented upstream. That may have been resolved, but in the process
>>> of trying to write tests and make this work, I discovered a system that
>>> made very little sense from a user standpoint. I want to upload an
>>> image, why do I want a task?!
>>>
>>>>>
>>>>> Ultimately, I believe end-users of the cloud simply shouldn't care
>>>>> about what tasks are or aren't and more importantly, as you mentioned
>>>>> later in the email, tasks make clouds not interoperable. I'd be pissed
>>>>> if my public image service would ask me to learn about tasks to be
>>>>> able to use the service.
>>>>
>>>> It would be OK if a public API set up to do a specific task returned a
>>>> task ID that could be used with a generic task API to check status, etc.
>>>> So the idea of tasks isn't completely bad, it's just too vague as it's
>>>> exposed right now.
>>>>
>>>
>>> I think it is a concern, because it is assuming users will want to do
>>> generic things with a specific API. This turns into a black-box game where
>>> the user shoves a task in and then waits to see what comes out the other
>>> side. Not something I want to encourage users to do or burden them with.
>>>
>>> We have an API whose sole purpose is to accept image uploads. That
>>> Rackspace identified a scaling pain point there is _good_. But why not
>>> *solve* it for the user, instead of introducing more complexity?
>>
>> That's fair. I don't actually care which API we have, as long as it
>> meets the other requirements.
>>
>>>
>>> What I'd like to see is the upload image API given the ability to
>>> respond with a URL that can be uploaded to using the object storage API
>>> we already have in OpenStack. Exposing users to all of these operator
>>> choices is just wasting their time. Just simply say "Oh, you want to
>>> upload an image? Thats fine, please upload it as an object over there
>>> and POST here again when it is ready to be imported." This will make
>>> perfect sense to a user reading docs, and doesn't require them to grasp
>>> an abstract concept like "tasks" when all they want to do is upload
>>> their image.
>>>
>>
>> And what would it do if the backing store for the image service
>> isn't Swift or another object storage system that supports direct
>> uploads? Return a URL that pointed back to itself, maybe?
>
> For those operators who don't have concerns about scaling the glance
> API service to their users' demands, glance's image upload API works
> perfectly well today.  The indirect approach is only meant to deal with
> the situation where the operator expects a lot of really large images to
> be uploaded simultaneously, and would like to take advantage of the Swift
> API's rather rich set of features for making that a positive experience.
> There is also a user benefit to using the Swift API, which is that a
> segmented upload can more easily be resumed.

Yes, BUT ...

If there are going to be two legitimate ways to upload an image, that 
needs to be discoverable so that scripts (or things like ansible or 
razor or juju or terraform or *insert system tool here*) can accomplish 
"please upload this here image file into this here cloud"

It's really not about the REST API itself. Literally zero percent of the 
people are doing that. People use tools. Tools write to APIs. And nobody 
who is running an OpenStack cloud should have to write their own branded 
tools - that's a cost that's completely silly to bear. An operator 
running an openstack cloud should be able to say to their users "go use 
the ansible openstack modules" or "go use the juju openstack provider"

Which brings us back to your excellent point - both of these are totally 
legitimate ways to upload to the cloud, except small clouds often don't 
run swift, and large clouds may want to handle the situation you mention 
and leverage swift. So how about:

glance image-create my-great-image
returns: 200 OK {
   upload-url: 'https://example.com/some/url/location',
   is_swift: False
}

OR

glance image-create my-great-image
returns: 200 OK {
   upload-url: 'https://example.com/some/url/location',
   is_swift: True
}

and if is_swift is true, then the user (or script) knows it can use the 
threaded swiftuploader. If it's false, the user (or script) just 
uploads content to the URL. The process is completely sane, is pretty 
much the same for both types of cloud, and has one known and 
understandable either-or deployer difference, where each fork is open 
source and each fork has a defined semantic.
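The either-or flow above can be sketched from the client side like so. This is a minimal illustration only: the response fields `upload-url` and `is_swift` are hypothetical, taken straight from the example, since no such Glance API exists today.

```python
# Sketch of a client-side tool consuming the proposed discoverable
# upload response. The response shape (upload-url, is_swift) is
# hypothetical, mirroring the example above.

def plan_upload(image_create_response):
    """Decide how to push image bits based on the discovery response."""
    url = image_create_response["upload-url"]
    if image_create_response["is_swift"]:
        # Object-storage backend: the tool can use a segmented/threaded
        # Swift upload, which can be parallelized and resumed.
        return ("swift-segmented-upload", url)
    # Plain HTTP backend: just upload the bytes to the URL directly.
    return ("direct-upload", url)


resp = {"upload-url": "https://example.com/some/url/location",
        "is_swift": True}
print(plan_upload(resp))
# ('swift-segmented-upload', 'https://example.com/some/url/location')
```

Either branch is open source and has a defined semantic, so a generic tool (ansible, juju, terraform) only needs this one conditional.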

Details, of course - and I know there are at least 5 more to work out - 
but hopefully that makes sense and doesn't disenfranchise anyone?


> Now, IMO HTTP has facilities for that too, it's just that glanceclient
> (and lo, many HTTP clients) aren't well versed in those deeper, optional
> pieces of HTTP. That is why Swift works the way it does, and I like
> the idea of glance simply piggy backing on the experience of many years
> of production refinement that are available and codified in Swift and
> any other OpenStack Object Storage API implementations (like the CEPH
> RADOS gateway).



From john.griffith8 at gmail.com  Tue Sep 15 00:51:07 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 14 Sep 2015 18:51:07 -0600
Subject: [openstack-dev] [fuel] [plugin] Release tagging
Message-ID: <CAPWkaSWF-kYWXTzuPpJ=RK5+1PBAAarAeSZ7ucb08EM4aDJHhw@mail.gmail.com>

Hey All,

I was trying to tag a release for v 1.0.1 on [1] today and noticed I don't
have permissions to do so.  Is there a release team/process for this?

[1] https://github.com/stackforge/fuel-plugin-solidfire-cinder

Thanks,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/c3b513e6/attachment-0001.html>

From gilles at redhat.com  Tue Sep 15 01:07:40 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Tue, 15 Sep 2015 11:07:40 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F733AF.6080005@redhat.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
Message-ID: <55F76F5C.2020106@redhat.com>



On 15/09/15 06:53, Rich Megginson wrote:
> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>> Hi,
>>
>> Gilles Dubreuil <gilles at redhat.com> writes:
>>
>>> A. The 'composite namevar' approach:
>>>
>>>     keystone_tenant {'projectX::domainY': ... }
>>>   B. The 'meaningless name' approach:
>>>
>>>    keystone_tenant {'myproject': name='projectX', domain=>'domainY',
>>> ...}
>>>
>>> Notes:
>>>   - Actually using both combined should work too with the domain
>>> supposedly overriding the name part of the domain.
>>>   - Please look at [1] this for some background between the two
>>> approaches:
>>>
>>> The question
>>> -------------
>>> Decide between the two approaches, the one we would like to retain for
>>> puppet-keystone.
>>>
>>> Why it matters?
>>> ---------------
>>> 1. Domain names are mandatory in every user, group or project. Besides
>>> the backward compatibility period mentioned earlier, where no domain
>>> means using the default one.
>>> 2. Long term impact
>>> 3. The two approaches are not completely equivalent, with different
>>> consequences for future usage.
>> I can't see why they couldn't be equivalent, but I may be missing
>> something here.
> 
> I think we could support both.  I don't see it as an either/or situation.
> 
>>
>>> 4. Being consistent
>>> 5. Therefore the community to decide
>>>
>>> Pros/Cons
>>> ----------
>>> A.
>> I think it's the B: meaningless approach here.
>>
>>>    Pros
>>>      - Easier names
>> That's subjective; creating unique and meaningful names doesn't look easy
>> to me.
> 
> The point is that this allows choice - maybe the user already has some
> naming scheme, or wants to use a more "natural" meaningful name - rather
> than being forced into a possibly "awkward" naming scheme with "::"
> 
>   keystone_user { 'heat domain admin user':
>     name => 'admin',
>     domain => 'HeatDomain',
>     ...
>   }
> 
>   keystone_user_role {'heat domain admin user@::HeatDomain':
>     roles => ['admin']
>     ...
>   }
> 
>>
>>>    Cons
>>>      - Titles have no meaning!
> 
> They have meaning to the user, not necessarily to Puppet.
> 
>>>      - Cases where 2 or more resources could exists
> 
> This seems to be the hardest part - I still cannot figure out how to use
> "compound" names with Puppet.
> 
>>>      - More difficult to debug
> 
> More difficult than it is already? :P
> 
>>>      - Titles mismatch when listing the resources (self.instances)
>>>
>>> B.
>>>    Pros
>>>      - Unique titles guaranteed
>>>      - No ambiguity between resource found and their title
>>>    Cons
>>>      - More complicated titles
>>> My vote
>>> --------
>>> I would love to have the approach A for easier name.
>>> But I've seen the challenge of maintaining the providers behind the
>>> curtains and the confusion it creates with name/titles and when not sure
>>> about the domain we're dealing with.
>>> Also I believe that supporting self.instances consistently with
>>> meaningful name is saner.
>>> Therefore I vote B
>> +1 for B.
>>
>> My view is that this should be the advertised way, but the other method
>> (meaningless) should be there if the user needs it.
>>
>> So as far as I'm concerned the two idioms should co-exist.  This would
>> mimic what is possible with all puppet resources.  For instance you can:
>>
>>    file { '/tmp/foo.bar': ensure => present }
>>
>> and you can
>>
>>    file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>>
>> The two refer to the same resource.
> 
> Right.
> 

I disagree: using the name for the title does not create a composite
name. The latter requires adding at least one other parameter to be part
of the title.

Also, in the case of the file resource, a path/filename is a unique name,
which is not the case for an OpenStack user, which might exist in several
domains.

I actually added the meaningful name case in:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html

But that doesn't work very well because without adding the domain to the
name, the following fails:

keystone_tenant {'project_1': domain => 'domain_A', ...}
keystone_tenant {'project_1': domain => 'domain_B', ...}

And adding the domain makes it a de-facto 'composite name'.

>>
>> But, If that's indeed not possible to have them both,

There are cases, like trusts, where having both won't be possible, but
why not for the resources that support it?

That said, I think we need to make a choice, at least to get started, to
have something working, consistently, besides exceptions. Other options
to be added later.

>> then I would keep only the meaningful name.
>>
>>
>> As a side note, someone raised an issue about the delimiter being
>> hardcoded to "::".  This could be a property of the resource.  This
>> would enable the user to use weird name with "::" in it and assign a "/"
>> (for instance) to the delimiter property:
>>
>>    Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>
>> bar::is::cool is the name of the domain and foo::blah is the project.
> 
> That's a good idea.  Please file a bug for that.
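The configurable-delimiter idea above boils down to one splitting rule: cut the title on the *last* occurrence of the delimiter, so the domain is the trailing segment. A minimal sketch of that rule follows; it is illustrative Python only (puppet-keystone providers are written in Ruby), and `split_title` is a hypothetical helper name.

```python
# Sketch of delimiter-aware composite-title parsing, per the proposal
# above. Splitting on the LAST delimiter lets a name itself contain
# "::" when the user picks a different delimiter such as "/".

def split_title(title, delimiter="::"):
    """Split a composite title into (name, domain)."""
    name, sep, domain = title.rpartition(delimiter)
    if not sep:
        # No delimiter present: plain name, domain left to the default.
        return title, None
    return name, domain


print(split_title("projectX::domainY"))
# ('projectX', 'domainY')
print(split_title("foo::blah/bar::is::cool", delimiter="/"))
# ('foo::blah', 'bar::is::cool')
```

With `delimiter => "/"`, `bar::is::cool` comes out as the domain and `foo::blah` as the project, matching the example in the thread.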
> 
>>
>>> Finally
>>> ------
>>> Thanks for reading that far!
>>> To choose, please provide feedback with more pros/cons, examples and
>>> your vote.
>>>
>>> Thanks,
>>> Gilles
>>>
>>>
>>> PS:
>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Bye,
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From hejie.xu at intel.com  Tue Sep 15 01:17:31 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Tue, 15 Sep 2015 09:17:31 +0800
Subject: [openstack-dev] [nova] Nova API sub-team meeting
Message-ID: <0179AEE2-D0D4-4F70-92DF-C91AB4D31035@intel.com>

Hi,

We have the weekly Nova API meeting this week. The meeting is being held Tuesday at UTC 1200.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/fb30312a/attachment.html>

From watanabe_isao at jp.fujitsu.com  Tue Sep 15 01:21:18 2015
From: watanabe_isao at jp.fujitsu.com (Watanabe, Isao)
Date: Tue, 15 Sep 2015 01:21:18 +0000
Subject: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
 gerrit server
In-Reply-To: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF7166@G4W3223.americas.hpqcorp.net>
References: <9086590602E58741A4119DC210CF893AA92C53DD@G08CNEXMBPEKD01.g08.fujitsu.local>
 <55F1607F.9060509@virtuozzo.com>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF06E8@G4W3223.americas.hpqcorp.net>
 <AC0F94DB49C0C2439892181E6CDA6E6C171F186D@G01JPEXMBYT05>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF5DF7@G4W3223.americas.hpqcorp.net>
 <AC0F94DB49C0C2439892181E6CDA6E6C171F1A0D@G01JPEXMBYT05>
 <4BFD2A2A3BAE4A46AA43C6A2DB44D16965AF7166@G4W3223.americas.hpqcorp.net>
Message-ID: <AC0F94DB49C0C2439892181E6CDA6E6C171F2BD5@G01JPEXMBYT05>

Hello, Ramy

Thank you for your help very much.

Best regards,
Watanabe.isao


> -----Original Message-----
> From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> Sent: Friday, September 11, 2015 8:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
> gerrit server
> 
> Follow these instructions to get permission from cinder: [1] Ramy
> 
> [1]
> http://docs.openstack.org/infra/system-config/third_party.html#permissions-on-your-third-party-system
> 
> -----Original Message-----
> From: Watanabe, Isao [mailto:watanabe_isao at jp.fujitsu.com]
> Sent: Friday, September 11, 2015 2:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into
> gerrit server
> 
> Hello, Ramy
> 
> Thank you for your help.
> Could you do me another favor, please.
> I need to move our CI from sandbox to cinder later.
> Do I need to register the CI to anywhere, so that the CI could test new patch
> set in cinder project, please?
> 
> Best regards,
> Watanabe.isao
> 
> 
> 
> > -----Original Message-----
> > From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> > Sent: Friday, September 11, 2015 12:07 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified
> > into gerrit server
> >
> > Done. Thank you for adding your CI system to the wiki.
> >
> > Ramy
> >
> > -----Original Message-----
> > From: Watanabe, Isao [mailto:watanabe_isao at jp.fujitsu.com]
> > Sent: Thursday, September 10, 2015 8:00 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified
> > into gerrit server
> >
> > Hello, Ramy
> >
> > Could you please add the following CI to the third-party ci group, too.
> >
> > Fujitsu ETERNUS CI
> >
> > We are preparing this CI test system, and going to use this CI system
> > to test Cinder.
> > The wiki of this CI:
> > <https://wiki.openstack.org/wiki/ThirdPartySystems/Fujitsu_ETERNUS_CI>
> >
> > Thank you very much.
> >
> > Best regards,
> > Watanabe.isao
> >
> >
> >
> > > -----Original Message-----
> > > From: Asselin, Ramy [mailto:ramy.asselin at hp.com]
> > > Sent: Thursday, September 10, 2015 8:00 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified
> > > into gerrit server
> > >
> > > I added Fnst OpenStackTest CI
> > > <https://review.openstack.org/#/q/owner:openstack_dev%2540163.com+status:open,n,z>
> > > to the third-party ci group.
> > >
> > > Ramy
> > >
> > >
> > >
> > > From: Evgeny Antyshev [mailto:eantyshev at virtuozzo.com]
> > > Sent: Thursday, September 10, 2015 3:51 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified
> > > into gerrit server
> > >
> > >
> > >
> > >
> > >
> > > On 10.09.2015 11:30, Xie, Xianshan wrote:
> > >
> > > 	Hi, all,
> > >
> > > 	   In my CI environment, after submitting a patch into
> > > openstack-dev/sandbox,
> > >
> > > 	the Jenkins Job can be launched automatically, and the result
> > > message of the job also can be posted into the gerrit server successfully.
> > >
> > > 	Everything seems fine.
> > >
> > >
> > >
> > > 	But in the "Verified" column, there is no verified vote, such as +1
> > > or -1.
> > >
> > > You will be able when your CI account is added to "Third-Party CI"
> > > group on review.openstack.org
> > > https://review.openstack.org/#/admin/groups/270,members
> > > I advise you to ask for such permission at an IRC meeting for
> > > third-party CI maintainers:
> > > https://wiki.openstack.org/wiki/Meetings/ThirdParty
> > > But you still won't be able to vote on other projects, except the sandbox.
> > >
> > >
> > >
> > >
> > > 	(patch url: https://review.openstack.org/#/c/222049/
> > > <https://review.openstack.org/#/c/222049/> ,
> > >
> > > 	CI name:  Fnst OpenStackTest CI)
> > >
> > >
> > >
> > > 	Although I have already added the "verified" label into the
> > > layout.yaml , under the check pipeline, it does not work yet.
> > >
> > >
> > >
> > > 	And my configuration info is setted as follows:
> > >
> > > 	Layout.yaml
> > >
> > > 	-------------------------------------------
> > >
> > > 	pipelines:
> > >
> > > 	  - name: check
> > >
> > > 	   trigger:
> > >
> > > 	     gerrit:
> > >
> > > 	      - event: patchset-created
> > >
> > > 	      - event: change-restored
> > >
> > > 	      - event: comment-added
> > >
> > > 	...
> > >
> > > 	   success:
> > >
> > > 	    gerrit:
> > >
> > > 	      verified: 1
> > >
> > > 	   failure:
> > >
> > > 	    gerrit:
> > >
> > > 	      verified: -1
> > >
> > >
> > >
> > > 	jobs:
> > >
> > > 	   - name: noop-check-communication
> > >
> > > 	      parameter-function: reusable_node
> > >
> > > 	projects:
> > >
> > > 	- name: openstack-dev/sandbox
> > >
> > > 	   - noop-check-communication
> > >
> > > 	-------------------------------------------
> > >
> > >
> > >
> > >
> > >
> > > 	And the projects.yaml of Jenkins job:
> > >
> > > 	-------------------------------------------
> > >
> > > 	- project:
> > >
> > > 	name: sandbox
> > >
> > > 	jobs:
> > >
> > > 	      - noop-check-communication:
> > >
> > > 	         node: 'devstack_slave || devstack-precise-check || d-p-c'
> > >
> > > 	...
> > >
> > > 	-------------------------------------------
> > >
> > >
> > >
> > > 	Could anyone help me? Thanks in advance.
> > >
> > >
> > >
> > > 	Xiexs
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > __________________________________________________________________________
> > > 	OpenStack Development Mailing List (not for usage questions)
> > > 	Unsubscribe:
> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > 	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de
> > > v
> > >
> > >
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From davanum at gmail.com  Tue Sep 15 01:27:13 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Mon, 14 Sep 2015 21:27:13 -0400
Subject: [openstack-dev] [fuel] [plugin] Release tagging
In-Reply-To: <CAPWkaSWF-kYWXTzuPpJ=RK5+1PBAAarAeSZ7ucb08EM4aDJHhw@mail.gmail.com>
References: <CAPWkaSWF-kYWXTzuPpJ=RK5+1PBAAarAeSZ7ucb08EM4aDJHhw@mail.gmail.com>
Message-ID: <CANw6fcEMeqcn-DfT4SGOJrww0qZye-dAvmwDrepGrgjqEYHXcg@mail.gmail.com>

John,

per ACL in project-config:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/stackforge/fuel-plugin-solidfire-cinder.config#n9

you are already in that group:
https://review.openstack.org/#/admin/groups/956,members

The release team would be in charge *if* that line looked like:
pushSignedTag = group library-release

as documented in:
http://docs.openstack.org/infra/manual/creators.html

So there's something else wrong..what error did you get?

-- Dims


On Mon, Sep 14, 2015 at 8:51 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

> Hey All,
>
> I was trying to tag a release for v 1.0.1 on [1] today and noticed I don't
> have permissions to do so.  Is there a release team/process for this?
>
> [1] https://github.com/stackforge/fuel-plugin-solidfire-cinder
>
> Thanks,
> John
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/4a4b2ac7/attachment.html>

From jaypipes at gmail.com  Tue Sep 15 01:34:31 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 14 Sep 2015 21:34:31 -0400
Subject: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and
 shifting PCI addresses
In-Reply-To: <20150910212306.GA11628@b3ntpin.localdomain>
References: <20150910212306.GA11628@b3ntpin.localdomain>
Message-ID: <55F775A7.6050901@gmail.com>

On 09/10/2015 05:23 PM, Brent Eagles wrote:
> Hi,
>
> I was recently informed of a situation that came up when an engineer
> added an SR-IOV nic to a compute node that was hosting some guests that
> had VFs attached. Unfortunately, adding the card shuffled the PCI
> addresses causing some degree of havoc. Basically, the PCI addresses
> associated with the previously allocated VFs were no longer valid.
>
> I tend to consider this a non-issue. The expectation that hosts have
> relatively static hardware configuration (and kernel/driver configs for
> that matter) is the price you pay for having pets with direct hardware
> access. That being said, this did come as a surprise to some of those
> involved and I don't think we have any messaging around this or advice
> on how to deal with situations like this.
>
> So what should we do? I can't quite see altering OpenStack to deal with
> this situation (or even how that could work). Has anyone done any
> research into this problem, even if it is how to recover or extricate
> a guest that is no longer valid? It seems that at the very least we
> could use some stern warnings in the docs.

Hi Brent,

Interesting issue. We have code in the PCI tracker that ostensibly 
handles this problem:

https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L145-L164

But the note from yjiang5 is telling:

# Pci properties may change while assigned because of
# hotplug or config changes. Although normally this should
# not happen.
# As the devices have been assigned to a instance, we defer
# the change till the instance is destroyed. We will
# not sync the new properties with database before that.
# TODO(yjiang5): Not sure if this is a right policy, but
# at least it avoids some confusion and, if
# we can add more action like killing the instance
# by force in future.

Basically, if the PCI device tracker notices that an instance is 
assigned a PCI device with an address that no longer exists in the PCI 
device addresses returned from libvirt, it will (eventually, in the 
_free_instance() method) remove the PCI device assignment from the 
Instance object, but it will make no attempt to assign a new PCI device 
that meets the original PCI device specification in the launch request.

Should we handle this case and attempt a "hot re-assignment of a PCI 
device"? Perhaps. Is it high priority? Not really, IMHO.

If you'd like to file a bug against Nova, that would be cool, though.

Best,
-jay


From john.griffith8 at gmail.com  Tue Sep 15 01:44:57 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 14 Sep 2015 19:44:57 -0600
Subject: [openstack-dev] [fuel] [plugin] Release tagging
In-Reply-To: <CANw6fcEMeqcn-DfT4SGOJrww0qZye-dAvmwDrepGrgjqEYHXcg@mail.gmail.com>
References: <CAPWkaSWF-kYWXTzuPpJ=RK5+1PBAAarAeSZ7ucb08EM4aDJHhw@mail.gmail.com>
 <CANw6fcEMeqcn-DfT4SGOJrww0qZye-dAvmwDrepGrgjqEYHXcg@mail.gmail.com>
Message-ID: <CAPWkaSUtvRgjf0ksPgiJZ2-gaOuH3EwLmUveB=fWTF9G7QO_5Q@mail.gmail.com>

On Mon, Sep 14, 2015 at 7:27 PM, Davanum Srinivas <davanum at gmail.com> wrote:

> John,
>
> per ACL in project-config:
>
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/stackforge/fuel-plugin-solidfire-cinder.config#n9
>
> you are already in that group:
> https://review.openstack.org/#/admin/groups/956,members
>
> The release team would be in charge *if* that line looked like:
> pushSignedTag = group library-release
>
> as documented in:
> http://docs.openstack.org/infra/manual/creators.html
>
> So there's something else wrong..what error did you get?
>
> -- Dims
>
>
> On Mon, Sep 14, 2015 at 8:51 PM, John Griffith <john.griffith8 at gmail.com>
> wrote:
>
>> Hey All,
>>
>> I was trying to tag a release for v 1.0.1 on [1] today and noticed I
>> don't have permissions to do so.  Is there, a release team/process for this?
>>
>> [1] https://github.com/stackforge/fuel-plugin-solidfire-cinder
>>
>> Thanks,
>> John
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Hmm...  could be that I'm just making bad assumptions and trying to do
this as I've done with other projects over the years?

Here's what I tried and the error I received:

jgriffith at railbender:~/git/fuel-plugin-solidfire-cinder$ git push --tags
gerrit
Counting objects: 1, done.
Writing objects: 100% (1/1), 168 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
remote: Processing changes: refs: 1, done
To ssh://
john-griffith at review.openstack.org:29418/stackforge/fuel-plugin-solidfire-cinder.git
 ! [remote rejected] v1.0.1 -> v1.0.1 (prohibited by Gerrit)
error: failed to push some refs to 'ssh://
john-griffith at review.openstack.org:29418/stackforge/fuel-plugin-solidfire-cinder.git
'
jgriffith at railbender:~/git/fuel-plugin-solidfire-cinder$
So clearly I can't push the tag; I'm just not sure what I need to do to
make this happen??
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/99f13844/attachment.html>

From robertc at robertcollins.net  Tue Sep 15 01:59:44 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 15 Sep 2015 13:59:44 +1200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442232202-sup-5997@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
Message-ID: <CAJ3HoZ0FPGxjRgidpvA9b_8YZO=ParjJ4-TpTPq-_XQxAjapPA@mail.gmail.com>

On 15 September 2015 at 00:10, Doug Hellmann <doug at doughellmann.com> wrote:
>
> After having some conversations with folks at the Ops Midcycle a
> few weeks ago, and observing some of the more recent email threads
> related to glance, glance-store, the client, and the API, I spent
> last week contacting a few of you individually to learn more about
> some of the issues confronting the Glance team. I had some very
> frank, but I think constructive, conversations with all of you about
> the issues as you see them. As promised, this is the public email
> thread to discuss what I found, and to see if we can agree on what
> the Glance team should be focusing on going into the Mitaka summit
> and development cycle and how the rest of the community can support
> you in those efforts.

Thanks for getting the ball rolling here. I won't go down the design
rathole here - but I will say that I think that the goals you've
identified are well within our [all OpenStack contributors] power to
address during the next cycle, and I'm going to help facilitate that
however I can.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From sxmatch1986 at gmail.com  Tue Sep 15 02:05:22 2015
From: sxmatch1986 at gmail.com (hao wang)
Date: Tue, 15 Sep 2015 10:05:22 +0800
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAPWkaSXoeXgSw3VSWQt8_f8_Mo7MvUx50tdiqSqDmjBW6x8fWg@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
 <20150914170202.GA13271@gmx.com>
 <CAPWkaSXoeXgSw3VSWQt8_f8_Mo7MvUx50tdiqSqDmjBW6x8fWg@mail.gmail.com>
Message-ID: <CAOEh+o2aQKtuCMZvy1eMyv7jw1qFy2wcLXexNbpdktF91_+wmQ@mail.gmail.com>

Thanks Mike! Your help was very important in getting me started in
cinder, and we did a lot of work to be proud of under your leadership.

2015-09-15 6:36 GMT+08:00 John Griffith <john.griffith8 at gmail.com>:
>
>
> On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <sean.mcginnis at gmx.com>
> wrote:
>>
>> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
>> > Hello all,
>> >
>> > I will not be running for Cinder PTL this next cycle. Each cycle I ran
>> > was for a reason [1][2], and the Cinder team should feel proud of our
>> > accomplishments:
>>
>> Thanks for a couple of awesome cycles Mike!
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> You did a fantastic job Mike, thank you very much for the hard work and
> dedication.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Wishes For You!


From Vijay.Venkatachalam at citrix.com  Tue Sep 15 02:12:31 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Tue, 15 Sep 2015 02:12:31 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
In-Reply-To: <D21CA5E8.217CF%adam.harwell@rackspace.com>
References: <26B082831A2B1A4783604AB89B9B2C080E8925C2@SINPEX01CL02.citrite.net>
 <D21CA5E8.217CF%adam.harwell@rackspace.com>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E899563@SINPEX01CL02.citrite.net>

Is there documentation that records this step by step?

What is the Neutron-LBaaS tenant?

Is it the tenant that is configuring the listener? *OR* is it some tenant created for the lbaas plugin that holds all the secrets for all tenants configuring lbaas?

>>You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant.
I checked the ACL docs
http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

The ACL API is to allow "users" (not "tenants") access to secrets/containers. What is the API or CLI that the admin will use to allow the Neutron-LBaaS tenant access to the tenant's secret+container?


From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 15 September 2015 03:00
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be documented, but we are looking into making an API call that would expose the admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system to add that ID as a readable user of the container and all of the secrets. Then Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is granted access to the user's container.
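As a rough illustration of that grant (payload shape paraphrased from the Barbican ACL quickstart docs linked in this thread; field names should be verified against your Barbican version, and the user ID below is a placeholder):

```python
# Hypothetical helper: build the JSON body for a PUT to
# <container_ref>/acl granting read access to specific *user* IDs.
# Note ACLs target users, not tenants -- which is the point above.

def build_acl_payload(user_ids, project_access=False):
    """Return the ACL request body granting read access to user_ids."""
    return {"read": {"users": list(user_ids),
                     "project-access": project_access}}

# The deployer would PUT this body to the container's /acl endpoint,
# listing the neutron-lbaas service user's ID (placeholder here):
print(build_acl_payload(["<neutron-lbaas-user-id>"]))
```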

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying "could not locate/find container".
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = "admin" and tenant_name = "admin".
              This means the lbaas plugin is looking for the tenant's certificate in the "admin" tenant, where it will never be able to find it.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the "admin" user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username = "admin" and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* Am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/96860ec1/attachment.html>

From hongbin.lu at huawei.com  Tue Sep 15 02:17:44 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Tue, 15 Sep 2015 02:17:44 +0000
Subject: [openstack-dev] [magnum] Maintaining cluster API in upgrades
In-Reply-To: <55F74EF4.5050104@linux.vnet.ibm.com>
References: <55F74EF4.5050104@linux.vnet.ibm.com>
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCE2DD3@SZXEMI503-MBS.china.huawei.com>

Hi Ryan,

I think pushing the python k8sclient out of the magnum tree (option 3) is the decision that was made at the Vancouver Summit (if I remember correctly). It definitely helps solve the k8s versioning problems.

Best regards,
Hongbin

From: Ryan Rossiter [mailto:rlrossit at linux.vnet.ibm.com]
Sent: September-14-15 6:49 PM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [magnum] Maintaining cluster API in upgrades

I have some food for thought with regards to upgrades that was provoked by some incorrect usage of Magnum which led me to finding [1].

Let's say we're running a cloud with Liberty Magnum, which works with Kubernetes API v1. During the Mitaka release, Kubernetes released v2, so now Magnum conductor in Mitaka works with Kubernetes v2 API. What would happen if I upgrade from L to M with Magnum? My existing Magnum/k8s stuff will be on v1, so having Mitaka conductor attempt to interact with that stuff will cause it to blow up right? The k8s API calls will fail because the communicating components are using differing versions of the API (assuming there are backwards incompatibilities).

I'm running through some suggestions in my head in order to handle this:

1. Have conductor maintain all supported older versions of k8s, and do API discovery to figure out which version of the API to use
  - This one sounds like a total headache from a code management standpoint

2. Do some sort of heat stack update to upgrade all existing clusters to use the current version of the API
  - In my head, this would work kind of like a database migration, but it seems like it would be a lot harder

3. Maintain cluster clients outside of the Magnum tree
  - This would make maintaining the client compatibilities a lot easier
  - Would help eliminate the cruft of merging 48k lines for a swagger generated client [2]
  - Having the client outside of tree would allow for a simple pip install
  - Not sure if this *actually* solves the problem above

This isn't meant to be a "we need to change this" topic, it's just meant to be more of a "what if" discussion. I am also up for suggestions other than the 3 above.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074448.html
[2] https://review.openstack.org/#/c/217427/
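To make option 1 above concrete, here is a tiny sketch of what version discovery plus dispatch could look like (all names are hypothetical, not Magnum code): ask the cluster which API version it speaks, then pick a matching bundled client, failing loudly rather than calling a mismatched API.

```python
# Hypothetical mapping from the API version a cluster reports to the
# client module we'd use for it; Magnum would have to carry one entry
# per supported Kubernetes API version.
SUPPORTED_CLIENTS = {
    "v1": "k8sclient_v1",  # invented module names
    "v2": "k8sclient_v2",
}

def select_client(reported_version):
    """Pick the bundled client matching the cluster's reported API
    version; refuse to talk to clusters we have no client for."""
    try:
        return SUPPORTED_CLIENTS[reported_version]
    except KeyError:
        raise RuntimeError(
            "cluster reports API %s; no compatible client bundled"
            % reported_version)

print(select_client("v1"))  # -> k8sclient_v1
```

The "total headache" noted above is visible even here: every supported version means another client to carry and test, which is part of why option 3 (clients out of tree, installed via pip) looks attractive.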


--

Thanks,



Ryan Rossiter (rlrossit)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/1bc89735/attachment.html>

From shi-hoshino at yk.jp.nec.com  Tue Sep 15 02:35:33 2015
From: shi-hoshino at yk.jp.nec.com (Shinya Hoshino)
Date: Tue, 15 Sep 2015 02:35:33 +0000
Subject: [openstack-dev] [Ironic] There is a function to display the VGA
 emulation screen of BMC in the baremetal node on the Horizon?
References: <870B6C6CD434FA459C5BF1377C680282023A7160@BPXM20GP.gisp.nec.co.jp>
 <CAB1EZBoxqM=OOQ-RH3okW9AAoT2pAgTeWB_DDRQkiat=_870Ew@mail.gmail.com>
 <870B6C6CD434FA459C5BF1377C680282023A7B57@BPXM20GP.gisp.nec.co.jp>
 <1A3C52DFCD06494D8528644858247BF01A3075F2@EX10MBOX03.pnnl.gov>
Message-ID: <870B6C6CD434FA459C5BF1377C680282023A8381@BPXM20GP.gisp.nec.co.jp>

Thanks Kevin,

So can I take this answer to mean that there is no such
capability in recent Ironic?

At present, my task is simply to check whether the capability
exists. Based on the result, I will decide how to handle it
later: I might build something, use another approach, and so on.

Best Regards,

On 2015/09/14 23:52, Fox, Kevin M wrote:
> I dont believe the ipmi specification standardizes vga consoles. Only serial consoles. So it may be possible to get it to work for specific models, but not generically.
>
> Thanks,
> Kevin
>
> ________________________________
> From: Shinya Hoshino
> Sent: Sunday, September 13, 2015 8:48:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ironic] There is a function to display the VGA emulation screen of BMC in the baremetal node on the Horizon?
>
> Thanks Lucas,
>
> However, I'd like to know whether it is possible to get the *VGA
> emulation* screen of the BMC, rather than the serial console.
>
> I also know that the features available only in a VGA console are
> very few now.  And I am able to get `ironic node-get-console
> <node-uuid>`.
> Still, I'd like to know whether such a thing exists and whether it
> can be displayed on the Horizon.
>
> Best regards,
>
> On 2015/09/11 17:50, Lucas Alvares Gomes wrote:
>> Hi,
>>
>>> We are investigating how to display on the Horizon a VGA
>>> emulation screen of BMC in the bare metal node that has been
>>> deployed by Ironic.
>>> If it was already implemented, I thought that the connection
>>> information of a VNC or SPICE server (converted if necessary)
>>> for a VGA emulation screen of BMC is returned as the stdout of
>>> the "nova get-*-console".
>>> However, we were investigating how to configure Ironic and so
>>> on, but we could not find a way to do so.
>>> I tried to search roughly the implementation of such a process
>>> in the source code of Ironic, but it was not found.
>>>
>>> The current Ironic, I think such features are not implemented.
>>> However, is this correct?
>>>
>>
>> A couple of drivers in Ironic supports web console (shellinabox), you
>> can take a look docs to see how to enable and use it:
>> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-node-web-console
>>
>> Hope that helps,
>> Lucas
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> /* -------------------------------------------------------------
>     Shinn'ya Hoshino            mailto:shi-hoshino at yk.jp.nec.com
> ------------------------------------------------------------- */
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
/* -------------------------------------------------------------
   Shinn'ya Hoshino            mailto:shi-hoshino at yk.jp.nec.com
------------------------------------------------------------- */


From sxmatch1986 at gmail.com  Tue Sep 15 03:06:35 2015
From: sxmatch1986 at gmail.com (hao wang)
Date: Tue, 15 Sep 2015 11:06:35 +0800
Subject: [openstack-dev] [Cinder] PTL Candidacy
In-Reply-To: <20150914144935.GA12267@gmx.com>
References: <20150914144935.GA12267@gmx.com>
Message-ID: <CAOEh+o37kMLxQoDDeO=pQ=-1Cr2F_SYvipNs-O6Vd8T4CWDpcA@mail.gmail.com>

Thanks Sean, Vote +1.

2015-09-14 22:49 GMT+08:00 Sean McGinnis <sean.mcginnis at gmx.com>:
> Hello everyone,
>
> I'm announcing my candidacy for Cinder PTL for the Mitaka release.
>
> The Cinder team has made great progress. We've not only grown the
> number of supported backend drivers, but we've made significant
> improvements to the core code and raised the quality of existing
> and incoming code contributions. While there are still many things
> that need more polish, we are headed in the right direction and
> block storage is a strong, stable component to many OpenStack clouds.
>
> Mike and John have provided the leadership to get the project where
> it is today. I would like to keep that momentum going.
>
> I've spent over a decade finding new and interesting ways to create
> and delete volumes. I also work across many different product teams
> and have had a lot of experience collaborating with groups to find
> a balance between the work being done to best benefit all involved.
>
> I think I can use this experience to foster collaboration both within
> the Cinder team as well as between Cinder and other related projects
> that interact with storage services.
>
> Some topics I would like to see focused on for the Mitaka release
> would be:
>
>  * Complete work of making the Cinder code Python3 compatible.
>  * Complete conversion to objects.
>  * Sort out object inheritance and appropriate use of ABC.
>  * Continued stabilization of third party CI.
>  * Make sure there is a good core feature set regardless of backend type.
>  * Reevaluate our deadlines to make sure core feature work gets enough
>    time and allows drivers to implement support.
>
> While there are some things I think we need to do to move the project
> forward, I am mostly open to the needs of the community as a whole
> and making sure that what we are doing is benefiting OpenStack and
> making it a simpler, easy to use, and ubiquitous platform for the
> cloud.
>
> Thank you for your consideration!
>
> Sean McGinnis (smcginnis)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Best Wishes For You!


From skalinowski at mirantis.com  Tue Sep 15 04:29:57 2015
From: skalinowski at mirantis.com (Sebastian Kalinowski)
Date: Tue, 15 Sep 2015 06:29:57 +0200
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
	fuel-qa(devops) core
In-Reply-To: <CAE1YzriPm16vPUSfv0bMEEUwiTs37a7xAoWNAFCE+enVCo8F6A@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
 <CAE1YzriPm16vPUSfv0bMEEUwiTs37a7xAoWNAFCE+enVCo8F6A@mail.gmail.com>
Message-ID: <CAGRGKG5E9LM9z4YTQU14H6VsnxjWzWrqWxuf+jATb2c+A7ZrVg@mail.gmail.com>

+1

2015-09-14 22:37 GMT+02:00 Ivan Kliuk <ikliuk at mirantis.com>:

> +1
>
> On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <
> aurlapova at mirantis.com> wrote:
>
>> Folks,
>> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>>
>> Denis spent three months on the Fuel BugFix team; his velocity was between
>> 150-200% per week. Thanks to his efforts we have resolved long-standing
>> issues with time sync and Ceph's clock skew. Denis's ideas constantly help
>> us to improve our functional system test suite.
>>
>> Fuelers, please vote for Denis!
>>
>> Nastya.
>>
>> [1]
>> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "fuel-core-team" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to fuel-core-team+unsubscribe at mirantis.com.
>> For more options, visit https://groups.google.com/a/mirantis.com/d/optout
>> .
>>
>
>
>
> --
> Best regards,
> Ivan Kliuk
>
> --
> You received this message because you are subscribed to the Google Groups
> "fuel-core-team" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to fuel-core-team+unsubscribe at mirantis.com.
> For more options, visit https://groups.google.com/a/mirantis.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/c083e34d/attachment.html>

From weidongshao at gmail.com  Tue Sep 15 04:36:26 2015
From: weidongshao at gmail.com (Weidong Shao)
Date: Tue, 15 Sep 2015 04:36:26 +0000
Subject: [openstack-dev] [openstack-ansible][compass] Support of Offline
	Install
Message-ID: <CALNQoPcNgZ-N+djnNvEzRNKch+TQBpzLS8AcAbwuWSQJEJToaA@mail.gmail.com>

Hi osad team,

Compass, an OpenStack deployment project, is in the process of using the
osad project for OpenStack deployment. We need to support a use case where
there is no Internet connection. The way we handle this is to split the
deployment into "build" and "install" phases. In the build phase, the Compass
server node has an Internet connection and can build a local repo and other
necessary dynamic artifacts that require Internet access. In the install
phase, the to-be-installed nodes do not have an Internet connection, and they
only download necessary data from the Compass server and other services
constructed in the build phase.

Now, is "offline install" something that the OSAD project should also support?
If yes, what is the scope of work for any changes, if required?
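One small piece of the build/install split can be illustrated like this (a hypothetical sketch, not Compass code — the mirror URL and layout are invented): artifact URLs recorded during the connected build phase get rewritten so disconnected nodes fetch from the Compass server's local mirror instead.

```python
from urllib.parse import urlparse

def to_local_mirror(url, mirror="http://compass-server/repo"):
    """Rewrite an upstream artifact URL to the local mirror built in the
    build phase, keeping only the path component of the original URL."""
    return mirror.rstrip("/") + urlparse(url).path

# During install, a node would fetch this instead of the upstream URL:
print(to_local_mirror(
    "https://pypi.org/packages/source/o/foo/foo-1.0.tar.gz"))
# -> http://compass-server/repo/packages/source/o/foo/foo-1.0.tar.gz
```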

Thanks,
Weidong
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/ce1d6a4e/attachment.html>

From prometheanfire at gentoo.org  Tue Sep 15 05:01:05 2015
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 15 Sep 2015 00:01:05 -0500
Subject: [openstack-dev] [openstack-ansible][compass] Support of Offline
 Install
In-Reply-To: <CALNQoPcNgZ-N+djnNvEzRNKch+TQBpzLS8AcAbwuWSQJEJToaA@mail.gmail.com>
References: <CALNQoPcNgZ-N+djnNvEzRNKch+TQBpzLS8AcAbwuWSQJEJToaA@mail.gmail.com>
Message-ID: <55F7A611.2030204@gentoo.org>

On 09/14/2015 11:36 PM, Weidong Shao wrote:
> Hi osad team,
> 
> Compass, an openstack deployment project, is in process of using osad
> project in the openstack deployment. We need to support a use case where
> there is no Internet connection. The way we handle this is to split the
> deployment into "build" and "install" phase. In Build phase, the Compass
> server node can have Internet connection and can build local repo and
> other necessary dynamic artifacts that requires Internet connection. In
> "install" phase, the to-be-installed nodes do not have Internet
> connection, and they only download necessary data from Compass server
> and other services constructed in Build phase.
> 
> Now, is "offline install" something that OSAD project shall also
> support? If yes, what is the scope of work for any changes, if required. 
> 
> Thanks, 
> Weidong
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
I think this is something we'd be interested in.  A BP would be appreciated.

-- 
Matthew Thode (prometheanfire)


From edwin.zhai at intel.com  Tue Sep 15 05:28:47 2015
From: edwin.zhai at intel.com (Zhai, Edwin)
Date: Tue, 15 Sep 2015 13:28:47 +0800 (CST)
Subject: [openstack-dev] [Ceilometer] How to enable devstack plugins
Message-ID: <alpine.DEB.2.10.1509151322120.32581@edwin-gen>

All,
I saw some patches from Chris Dent to enable functions in devstack/*. But they 
conflict with devstack upstream, so each ceilometer service is started twice. Is 
there any official way to set up ceilometer as a devstack plugin?

Best Rgds,
Edwin


From trinath.somanchi at freescale.com  Tue Sep 15 06:37:53 2015
From: trinath.somanchi at freescale.com (Somanchi Trinath)
Date: Tue, 15 Sep 2015 06:37:53 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CALqgCCp4JRf-+Tktb5iC52ztGtWZh_UK1e5hhiosfVVDe+JvZg@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <CALqgCCp4JRf-+Tktb5iC52ztGtWZh_UK1e5hhiosfVVDe+JvZg@mail.gmail.com>
Message-ID: <BN1PR03MB1537495FB3FB9702E7E7065975C0@BN1PR03MB153.namprd03.prod.outlook.com>

Kyle,

I see a good thing and a sad thing here.

The good one is that you lighted the path for a new PTL to come up. The sad thing is that we will be missing your leadership.
Hope you will still lead the team in a dotted-line capacity.

-
Trinath

From: Irena Berezovsky [mailto:irenab.dev at gmail.com]
Sent: Sunday, September 13, 2015 10:58 AM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] PTL Non-Candidacy

Kyle,
Thank you for the hard work you did making the neutron project and the neutron community better!
You have been open and very supportive as a neutron community lead.
Hope you will stay involved.


On Fri, Sep 11, 2015 at 11:12 PM, Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>> wrote:
I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.

I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, I'm excited to watch them lead and help them in anyway I can.
As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.
See you all in Tokyo!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/fe98f533/attachment.html>

From blak111 at gmail.com  Tue Sep 15 06:40:09 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Mon, 14 Sep 2015 23:40:09 -0700
Subject: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support
 multiple compute drivers?
In-Reply-To: <55F6E3EC.1090306@gmail.com>
References: <CALesnTzMv_+hxZLFkAbxObzGLKU0h2ENZ5-vYe1-u+EC5g7Eyg@mail.gmail.com>
 <20150909171336.GG21846@jimrollenhagen.com>
 <CALesnTyuK17bUpYuA=9q+_L5TU7xxAF=tdsQmwtPtr+Z1vmt1w@mail.gmail.com>
 <1474881269.44702768.1441851955747.JavaMail.zimbra@redhat.com>
 <CALesnTzOP77F6qyvP6KK_evRRKuOw6iZr7jfNcdFJ6btJjhRqg@mail.gmail.com>
 <55F6E3EC.1090306@gmail.com>
Message-ID: <CAO_F6JPGU4qiST53vjR=DDmM7saD7K0_upXRk8OBzOTKA+ES1g@mail.gmail.com>

>I'm no Neutron expert, but I suspect that one could use either the
LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with
a single flat provider network for your baremetal nodes.

If it's a baremetal node, it wouldn't be running an agent at all, would it?

On Mon, Sep 14, 2015 at 8:12 AM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 09/10/2015 12:00 PM, Jeff Peeler wrote:
>
>> On Wed, Sep 9, 2015 at 10:25 PM, Steve Gordon <sgordon at redhat.com
>> <mailto:sgordon at redhat.com>> wrote:
>>
>>     ----- Original Message -----
>>     > From: "Jeff Peeler" <jpeeler at redhat.com <mailto:jpeeler at redhat.com
>> >>
>>     > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev at lists.openstack.org
>>     <mailto:openstack-dev at lists.openstack.org>>
>>     >
>>     > I'd greatly prefer using availability zones/host aggregates as I'm
>> trying
>>     > to keep the footprint as small as possible. It does appear that in
>> the
>>     > section "configure scheduler to support host aggregates" [1], that
>> I can
>>     > configure filtering using just one scheduler (right?). However,
>> perhaps
>>     > more importantly, I'm now unsure with the network configuration
>> changes
>>     > required for Ironic that deploying normal instances along with
>> baremetal
>>     > servers is possible.
>>     >
>>     > [1]
>>     >
>> http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
>>
>>     Hi Jeff,
>>
>>     I assume your need for a second scheduler is spurred by wanting to
>>     enable different filters for baremetal vs virt (rather than
>>     influencing scheduling using the same filters via image properties,
>>     extra specs, and boot parameters (hints)?
>>
>>     I ask because if not you should be able to use the hypervisor_type
>>     image property to ensure that images intended for baremetal are
>>     directed there and those intended for kvm etc. are directed to those
>>     hypervisors. The documentation [1] doesn't list ironic as a valid
>>     value for this property but I looked into the code for this a while
>>     ago and it seemed like it should work... Apologies if you had
>>     already considered this.
>>
>>     Thanks,
>>
>>     Steve
>>
>>     [1]
>>
>> http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
>>
>>
>> I hadn't considered that, thanks.
>>
>
> Yes, that's the recommended way to direct scheduling requests -- via the
> hypervisor_type image property.
>
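[Editor's note] The image-property routing Jay confirms here amounts to a host filter in the scheduler. A toy sketch of the idea (not nova's actual ImagePropertiesFilter code; the host names and data shapes are invented for illustration):

```python
# Toy sketch of hypervisor_type-based scheduling: each host reports a
# hypervisor type, images carry an optional "hypervisor_type" property,
# and only matching hosts pass the filter. Host names are invented.

def filter_hosts(hosts, image_properties):
    """Return hosts whose hypervisor matches the image's hypervisor_type
    property; all hosts pass if the property is unset."""
    wanted = image_properties.get("hypervisor_type")
    if wanted is None:
        return list(hosts)
    return [h for h in hosts if hosts[h] == wanted]

hosts = {"compute-1": "kvm", "compute-2": "kvm", "ironic-1": "ironic"}

# A baremetal image is directed to the Ironic nova-compute worker:
print(filter_hosts(hosts, {"hypervisor_type": "ironic"}))  # ['ironic-1']
# An image without the property can land on any host:
print(filter_hosts(hosts, {}))  # ['compute-1', 'compute-2', 'ironic-1']
```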
> > It's still unknown to me though if a
>
>> separate compute service is required. And if it is required, how much
>> segregation is required to make that work.
>>
>
> Yes, a separate nova-compute worker daemon is required to manage the
> baremetal Ironic nodes.
>
> Not being a networking guru, I'm also unsure if the Ironic setup
>> instructions to use a flat network is a requirement or is just a sample
>> of possible configuration.
>>
>
> AFAIK, flat DHCP networking is currently the only supported network
> configuration for Ironic.
>
> > In a brief out of band conversation I had, it
>
>> does sound like Ironic can be configured to use linuxbridge too, which I
>> didn't know was possible.
>>
>
> Well, LinuxBridge vs. OVS isn't really about whether you have a flat
> network topology or not. It's just a different way of doing the actual
> switching (virtual bridging vs. standard linux bridges).
>
> I'm no Neutron expert, but I suspect that one could use either the
> LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with
> a single flat provider network for your baremetal nodes.
>
> Hopefully an Ironic + Neutron expert will confirm or deny this?
>
> Best,
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150914/f4232b49/attachment.html>

From marc at koderer.com  Tue Sep 15 06:41:34 2015
From: marc at koderer.com (Marc Koderer)
Date: Tue, 15 Sep 2015 08:41:34 +0200
Subject: [openstack-dev] [tempest] Is there a sandbox project how to use
	tempest test plugin interface?
In-Reply-To: <55F67ED0.5090409@ericsson.com>
References: <55F17DFF.4000602@ericsson.com>
 <20150910141302.GA2037@sazabi.kortar.org> <55F279BC.60400@ericsson.com>
 <55F67ED0.5090409@ericsson.com>
Message-ID: <66BCED3C-D171-4319-A144-8FA8E7DFF788@koderer.com>


Am 14.09.2015 um 10:01 schrieb Lajos Katona <lajos.katona at ericsson.com>:

> Hi Matthew,
> 
> Finally I got it working, so now I have a dummy plugin.
> 
> Few questions, remarks:
> - For me it was a little hard to merge my weak knowledge of Python packaging with the documentation for tempest plugins. Do you mind if I push an example to GitHub and add a link to it in the documentation?

Can we add it directly into the documentation? I am fine with more code
snippets there, and external links can break over time.

> - From this point, the generation of the idempotent id is not clear to me. I was able to use check_uuid.py, but as I used a virtualenv, the script edited the .tox/venv/local/lib/python2.7/site-packages/dummyplugin/ file.
> It would be good to add an extra path option to check_uuid.py so that it can edit the real source files in cases like this, not the ones in the venv.

Idempotent ids aren't covered in the first drop of the tempest plugin interface.
I am wondering if there is a need for this from refstack?

Regards
Marc

> Regards
> Lajos
> 
> On 09/11/2015 08:50 AM, Lajos Katona wrote:
>> Hi Matthew,
>> 
>> Thanks for the help, this helped a lot to start the work.
>> 
>> regards
>> Lajos
>> 
>> On 09/10/2015 04:13 PM, Matthew Treinish wrote:
>>> On Thu, Sep 10, 2015 at 02:56:31PM +0200, Lajos Katona wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I just noticed that as of tag 6, the test plugin interface is considered
>>>> ready, and I am eager to start using it.
>>>> I have some questions:
>>>> 
>>>> If I understand correctly, the plugin interface will eventually be moved to
>>>> tempest-lib, but for now I have to import module(s) from tempest to start
>>>> using the interface.
>>>> Is there a plan for when the whole interface will be moved to tempest-lib?
>>>> 
>>>> 
>>> The only thing which will eventually move to tempest-lib is the abstract class
>>> that defines the expected methods of a plugin class [1] The other pieces will
>>> remain in tempest. Honestly this won't likely happen until sometime during
>>> Mitaka. Also when it does move to tempest-lib we'll deprecate the tempest
>>> version and keep it around to allow for a graceful switchover.
>>> 
>>> The rationale behind this is we really don't provide any stability guarantees
>>> on tempest internals (except for a couple of places which are documented, like
>>> this plugin class) and we want any code from tempest that's useful to external
>>> consumers to really live in tempest-lib.
>>> 
>>> 
>>>> If I start to create a test plugin now (from tag 6), what should be the best
>>>> solution to do this?
>>>> I thought I would create a repo for my plugin and add it as a subrepo to my
>>>> local tempest repo, so that I can easily import stuff from tempest but
>>>> keep my test code separated from other parts of tempest.
>>>> Is there a better way of doing this?
>>>> 
>>> To start I'd take a look at the documentation for tempest plugins:
>>> 
>>> 
>>> http://docs.openstack.org/developer/tempest/plugin.html
>>> 
>>> 
>>> From tempest's point of view a plugin is really just an entry point that points
>>> to a class that exposes certain methods. So the Tempest plugin can live anywhere
>>> as long as it's installed as an entry point in the proper namespace. Personally
>>> I feel like including it as a subrepo in a local tempest tree is a bit strange,
>>> but I don't think it'll cause any issues if you do that.
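[Editor's note] The "entry point that points to a class that exposes certain methods" Matt describes can be sketched as follows. The real abstract base lives in tempest/test_discover/plugins.py (link [1] below); the stand-in ABC here only mirrors its shape so the sketch is self-contained, and the method docstrings are paraphrased assumptions:

```python
# Minimal sketch of a tempest plugin: a class implementing the expected
# methods, discovered via an entry point. The TempestPlugin base below is
# a stand-in mirroring tempest's abstract class, not the real import.
import abc


class TempestPlugin(abc.ABC):  # stand-in for tempest's base class
    @abc.abstractmethod
    def load_tests(self):
        """Return (full_test_dir, base_path) so tempest can discover tests."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register any plugin-specific config options on conf."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return [(group, [opts])] so sample configs can be generated."""


class DummyPlugin(TempestPlugin):
    def load_tests(self):
        return "/opt/stack/dummyplugin", "dummyplugin/tests"

    def register_opts(self, conf):
        pass  # no extra options in this sketch

    def get_opt_lists(self):
        return []


# Tempest finds the class through an entry point in the plugin's setup.cfg,
# registered in the tempest.test_plugins namespace, e.g.:
#   [entry_points]
#   tempest.test_plugins =
#       dummy = dummyplugin.plugin:DummyPlugin
plugin = DummyPlugin()
print(plugin.load_tests())
```

Because discovery happens purely through the entry point, the plugin package can live in its own repo and does not need to sit inside the tempest tree.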
>>> 
>>> 
>>>> If there would be an example plugin somewhere, that would be the most
>>>> preferable maybe.
>>>> 
>>> There is a cookiecutter repo in progress. [2] Once that's ready it'll let you
>>> create a blank plugin dir that'll be ready for you to populate. (similar to the
>>> devstack plugin cookiecutter that already exists)
>>> 
>>> For current examples the only project I know of that's using a plugin interface
>>> is manila [3] so maybe take a look at what they're doing.
>>> 
>>> -Matt Treinish
>>> 
>>> [1] 
>>> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/plugins.py#n26
>>> 
>>> [2] 
>>> https://review.openstack.org/208389
>>> 
>>> [3] 
>>> https://review.openstack.org/#/c/201955
>>> 
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From chenyingkof at outlook.com  Tue Sep 15 07:27:25 2015
From: chenyingkof at outlook.com (=?gb2312?B?s8LTqA==?=)
Date: Tue, 15 Sep 2015 07:27:25 +0000
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAOEh+o2aQKtuCMZvy1eMyv7jw1qFy2wcLXexNbpdktF91_+wmQ@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>,
 <20150914170202.GA13271@gmx.com>,
 <CAPWkaSXoeXgSw3VSWQt8_f8_Mo7MvUx50tdiqSqDmjBW6x8fWg@mail.gmail.com>,
 <CAOEh+o2aQKtuCMZvy1eMyv7jw1qFy2wcLXexNbpdktF91_+wmQ@mail.gmail.com>
Message-ID: <COL129-W3096831F489F0C475D4C21A75C0@phx.gbl>

Thanks Mike. Thank you for doing a great job.

> From: sxmatch1986 at gmail.com
> Date: Tue, 15 Sep 2015 10:05:22 +0800
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
> 
> Thanks Mike! Your help was very important in getting me started in
> cinder, and we did a lot of work to be proud of under your leadership.
> 
> 2015-09-15 6:36 GMT+08:00 John Griffith <john.griffith8 at gmail.com>:
> >
> >
> > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <sean.mcginnis at gmx.com>
> > wrote:
> >>
> >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
> >> > Hello all,
> >> >
> >> > I will not be running for Cinder PTL this next cycle. Each cycle I ran
> >> > was for a reason [1][2], and the Cinder team should feel proud of our
> >> > accomplishments:
> >>
> >> Thanks for a couple of awesome cycles Mike!
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > You did a fantastic job Mike, thank you very much for the hard work and
> > dedication.
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Best Wishes For You!
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 		 	   		  
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/526222ed/attachment.html>

From ipovolotskaya at mirantis.com  Tue Sep 15 07:32:48 2015
From: ipovolotskaya at mirantis.com (Irina Povolotskaya)
Date: Tue, 15 Sep 2015 10:32:48 +0300
Subject: [openstack-dev] [fuel] [plugin] Release tagging - possible problem
Message-ID: <CAFY49iCqOBFx+5-6KH1vAjf59z0hekyaFh9rcAYNwBaxg51_hA@mail.gmail.com>

Hi John,

To push a tag, you need to:
- be a member of the release group (which has Push Signed Tag rights) - and
you are: https://review.openstack.org/#/admin/groups/956,members
- sign the tag with your gpg key using the console gnupg tools - you might
have missed this step.


-- 
Best regards,

Irina
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/72f6246d/attachment.html>

From chdent at redhat.com  Tue Sep 15 07:43:03 2015
From: chdent at redhat.com (Chris Dent)
Date: Tue, 15 Sep 2015 08:43:03 +0100 (BST)
Subject: [openstack-dev] [Ceilometer] How to enable devstack plugins
In-Reply-To: <alpine.DEB.2.10.1509151322120.32581@edwin-gen>
References: <alpine.DEB.2.10.1509151322120.32581@edwin-gen>
Message-ID: <alpine.OSX.2.11.1509150837540.54971@crank.home>

On Tue, 15 Sep 2015, Zhai, Edwin wrote:

> I saw some patches from Chris Dent to enable functions in devstack/*. But they
> conflict with devstack upstream, so each ceilometer service is started twice.
> Is there an official way to set up ceilometer as a devstack plugin?

What I've been doing is checking out the devstack branch associated
with this review that removes ceilometer from devstack [1] (with a
`git review -d 196383`) and then stacking from there. It's cumbersome
but gets the job done.

This pain point should go away very soon. We've just been waiting on
the necessary infra changes to get various jobs that use ceilometer
prepared to use the ceilometer devstack plugin[2]. I think that's
ready to go now so we ought to see that merge soon.

[1] https://review.openstack.org/#/c/196383/
[2] https://review.openstack.org/#/c/196446/ and dependent reviews.

-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


From silvan at quobyte.com  Tue Sep 15 07:51:04 2015
From: silvan at quobyte.com (Silvan Kaiser)
Date: Tue, 15 Sep 2015 09:51:04 +0200
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <COL129-W3096831F489F0C475D4C21A75C0@phx.gbl>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
 <20150914170202.GA13271@gmx.com>
 <CAPWkaSXoeXgSw3VSWQt8_f8_Mo7MvUx50tdiqSqDmjBW6x8fWg@mail.gmail.com>
 <CAOEh+o2aQKtuCMZvy1eMyv7jw1qFy2wcLXexNbpdktF91_+wmQ@mail.gmail.com>
 <COL129-W3096831F489F0C475D4C21A75C0@phx.gbl>
Message-ID: <CALsyUnO4YLgZWaZnvU-FSkvLONyraDj2b_gscmrfFiARViC7gA@mail.gmail.com>

Thanks Mike!
That was really demanding work!

2015-09-15 9:27 GMT+02:00 Chen Ying <chenyingkof at outlook.com>:

> Thanks Mike. Thank you for doing a great job.
>
>
> > From: sxmatch1986 at gmail.com
> > Date: Tue, 15 Sep 2015 10:05:22 +0800
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
>
> >
> > Thanks Mike! Your help was very important in getting me started in
> > cinder, and we did a lot of work to be proud of under your leadership.
> >
> > 2015-09-15 6:36 GMT+08:00 John Griffith <john.griffith8 at gmail.com>:
> > >
> > >
> > > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <sean.mcginnis at gmx.com
> >
> > > wrote:
> > >>
> > >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
> > >> > Hello all,
> > >> >
> > >> > I will not be running for Cinder PTL this next cycle. Each cycle I
> ran
> > >> > was for a reason [1][2], and the Cinder team should feel proud of
> our
> > >> > accomplishments:
> > >>
> > >> Thanks for a couple of awesome cycles Mike!
> > >>
> > >>
> __________________________________________________________________________
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > > You did a fantastic job Mike, thank you very much for the hard work and
> > > dedication.
> > >
> > >
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> >
> >
> > --
> > Best Wishes For You!
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com<http://www.quobyte.com/>
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/f1a1b78d/attachment.html>

From berrange at redhat.com  Tue Sep 15 08:25:51 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Tue, 15 Sep 2015 09:25:51 +0100
Subject: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and
 shifting PCI addresses
In-Reply-To: <20150910212306.GA11628@b3ntpin.localdomain>
References: <20150910212306.GA11628@b3ntpin.localdomain>
Message-ID: <20150915082551.GB23145@redhat.com>

On Thu, Sep 10, 2015 at 06:53:06PM -0230, Brent Eagles wrote:
> Hi,
> 
> I was recently informed of a situation that came up when an engineer
> added an SR-IOV nic to a compute node that was hosting some guests that
> had VFs attached. Unfortunately, adding the card shuffled the PCI
> addresses causing some degree of havoc. Basically, the PCI addresses
> associated with the previously allocated VFs were no longer valid.

This seems to be implying that they took the host offline to make
hardware changes, and then tried to re-start the originally running
guests directly, without letting the scheduler re-run.

If correct, then IMHO that is an unsupported approach. After making
any hardware changes you should essentially consider that to be a
new compute host. There is no expectation that previously running
guests on that host can be restarted. You must let the compute
host report its new hardware capabilities, and let the scheduler
place guests on it from scratch, using the new PCI address info.

> I tend to consider this a non-issue. The expectation that hosts have
> relatively static hardware configuration (and kernel/driver configs for
> that matter) is the price you pay for having pets with direct hardware
> access. That being said, this did come as a surprise to some of those
> involved and I don't think we have any messaging around this or advice
> on how to deal with situations like this.
> 
> So what should we do? I can't quite see altering OpenStack to deal with
> this situation (or even how that could work). Has anyone done any
> research into this problem, even if it is how to recover or extricate
> a guest that is no longer valid? It seems that at the very least we
> could use some stern warnings in the docs.

Taking a host offline for maintenance, should be considered
equivalent to throwing away the existing host and deploying a new
host. There should be zero state carry-over from OpenStack POV,
since both the software and hardware changes can potentially
invalidate previous information used by the scheduler for deploying
on that host.  The idea of recovering a previously running guest
should be explicitly unsupported.


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From berrange at redhat.com  Tue Sep 15 08:29:04 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Tue, 15 Sep 2015 09:29:04 +0100
Subject: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and
 shifting PCI addresses
In-Reply-To: <55F775A7.6050901@gmail.com>
References: <20150910212306.GA11628@b3ntpin.localdomain>
 <55F775A7.6050901@gmail.com>
Message-ID: <20150915082904.GC23145@redhat.com>

On Mon, Sep 14, 2015 at 09:34:31PM -0400, Jay Pipes wrote:
> On 09/10/2015 05:23 PM, Brent Eagles wrote:
> >Hi,
> >
> >I was recently informed of a situation that came up when an engineer
> >added an SR-IOV nic to a compute node that was hosting some guests that
> >had VFs attached. Unfortunately, adding the card shuffled the PCI
> >addresses causing some degree of havoc. Basically, the PCI addresses
> >associated with the previously allocated VFs were no longer valid.
> >
> >I tend to consider this a non-issue. The expectation that hosts have
> >relatively static hardware configuration (and kernel/driver configs for
> >that matter) is the price you pay for having pets with direct hardware
> >access. That being said, this did come as a surprise to some of those
> >involved and I don't think we have any messaging around this or advice
> >on how to deal with situations like this.
> >
> >So what should we do? I can't quite see altering OpenStack to deal with
> >this situation (or even how that could work). Has anyone done any
> >research into this problem, even if it is how to recover or extricate
> >a guest that is no longer valid? It seems that at the very least we
> >could use some stern warnings in the docs.
> 
> Hi Brent,
> 
> Interesting issue. We have code in the PCI tracker that ostensibly handles
> this problem:
> 
> https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L145-L164
> 
> But the note from yjiang5 is telling:
> 
> # Pci properties may change while assigned because of
> # hotplug or config changes. Although normally this should
> # not happen.
> # As the devices have been assigned to a instance, we defer
> # the change till the instance is destroyed. We will
> # not sync the new properties with database before that.
> # TODO(yjiang5): Not sure if this is a right policy, but
> # at least it avoids some confusion and, if
> # we can add more action like killing the instance
> # by force in future.
> 
> Basically, if the PCI device tracker notices that an instance is assigned a
> PCI device with an address that no longer exists in the PCI device addresses
> returned from libvirt, it will (eventually, in the _free_instance() method)
> remove the PCI device assignment from the Instance object, but it will make
> no attempt to assign a new PCI device that meets the original PCI device
> specification in the launch request.
> 
> Should we handle this case and attempt a "hot re-assignment of a PCI
> device"? Perhaps. Is it high priority? Not really, IMHO.

Hotplugging new PCI devices to a running host should not have any impact
on existing PCI device addresses - it'll merely add new addresses for
new devices - existing devices are unchanged. So everything should "just
work" in that case. IIUC, Brent's Q was around turning off the host and
cold-plugging/unplugging hardware, which /is/ liable to arbitrarily
re-arrange existing PCI device addresses.
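[Editor's note] The failure mode Daniel describes reduces to a set comparison: assigned PCI addresses that no longer appear in the host's re-enumerated device list are stale. A sketch with assumed data shapes (not nova's actual tracker code in nova/pci/manager.py):

```python
# Sketch of the check a PCI tracker performs after a hardware change:
# compare addresses assigned to instances against the addresses the host
# currently reports, and flag assignments whose address vanished.
# Data shapes and addresses are invented for illustration.

def stale_assignments(assigned, current_addresses):
    """Return {instance: [lost addresses]} for assigned PCI addresses
    that no longer exist on the host."""
    current = set(current_addresses)
    stale = {}
    for instance, addresses in assigned.items():
        lost = [a for a in addresses if a not in current]
        if lost:
            stale[instance] = lost
    return stale

# Before the cold-plug, two VFs at these addresses were handed out:
assigned = {"vm-1": ["0000:05:00.1"], "vm-2": ["0000:05:00.2"]}
# After adding a card, the host re-enumerates and reports:
current = ["0000:05:00.2", "0000:06:00.1", "0000:06:00.2"]

print(stale_assignments(assigned, current))  # {'vm-1': ['0000:05:00.1']}
```

As the thread notes, detecting the stale assignment is the easy part; deciding what to do with the affected guest (re-assign, or treat the host as new) is the policy question.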

> If you'd like to file a bug against Nova, that would be cool, though.

I think it is explicitly out of scope for Nova to deal with this
scenario.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From thierry at openstack.org  Tue Sep 15 08:40:15 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 15 Sep 2015 10:40:15 +0200
Subject: [openstack-dev] [all] Base feature deprecation policy
In-Reply-To: <55EF5331.7010804@dague.net>
References: <55E83B84.5000000@openstack.org> <55EED0DA.6090201@dague.net>
 <1441721217-sup-1564@lrrr.local>
 <CAOJFoEsuct5CTtU_qE+oB1jgfrpi2b65Xw1HOKNmMxZEXeUjPQ@mail.gmail.com>
 <1441731915-sup-1930@lrrr.local> <55EF24E4.4060100@dague.net>
 <1441740621-sup-126@lrrr.local> <55EF5331.7010804@dague.net>
Message-ID: <55F7D96F.3050005@openstack.org>

Sean Dague wrote:
> On 09/08/2015 03:32 PM, Doug Hellmann wrote:
>> Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
>>> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
>>>> Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
>>>>> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>>>>>>
>>>>>> I'd like to come up with some way to express the time other than
>>>>>> N+M because in the middle of a cycle it can be confusing to know
>>>>>> what that means (if I want to deprecate something in August am I
>>>>>> far enough through the current cycle that it doesn't count?).
>>>>>>
>>>>>> Also, as we start moving more projects to doing intermediate releases
>>>>>> the notion of a "release" vs. a "cycle" will drift apart, so we
>>>>>> want to talk about "stable releases" not just any old release.
>>>>>
>>>>> I've always thought the appropriate equivalent for projects not following
>>>>> the (old) integrated release cadence was for N == six months.  It sets
>>>>> approx. the same pace and expectation with users/deployers.
>>>>>
>>>>> For those deployments tracking trunk, a similar approach can be taken, in
>>>>> that deprecating a config option in M3 then removing it in N1 might be too
>>>>> quick, but rather wait at least the same point in the following release
>>>>> cycle to increment 'N'.
>>>>
>>>> Making it explicitly date-based would simplify tracking, to be sure.
>>>
>>> I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
>>> weeks (which I've seen happen). However N == six months might make FFE
>>> deprecation lands in one release run into FFE in the next. For the CD
>>> case my suggestion is > 3 months. Because if you aren't CDing in
>>> increments smaller than that, and hence seeing the deprecation, you
>>> aren't really doing the C part of CDing.
>>
>> Do those 3 months need to span more than one stable release? For
>> projects doing intermediary releases, there may be several releases
>> within a 3 month period.
> 
> Yes. 1 stable release branch AND 3 months linear time is what I'd
> consider reasonable.

OK, so it seems we have convergence around:

"config options and features will have to be marked deprecated for a
minimum of one stable release branch and a minimum of 3 months"

I'll add some language in there to encourage major features to be marked
deprecated for at least two stable release branches, rather than come
with a hard rule defining what a "major" feature is.
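[Editor's note] The converged rule above is mechanical enough to express as a date check. A sketch under the stated policy, with illustrative dates (the "3 months" is approximated as 90 days):

```python
# Sketch of the agreed deprecation rule: removal is allowed only after the
# deprecation has spanned at least one stable release branch AND at least
# three months of linear time. Dates below are illustrative only.
import datetime


def removal_allowed(deprecated_on, stable_branches_since, today):
    """stable_branches_since: dates of stable branches cut after the
    deprecation landed."""
    three_months = deprecated_on + datetime.timedelta(days=90)  # approx.
    return len(stable_branches_since) >= 1 and today >= three_months


deprecated = datetime.date(2015, 8, 1)
branches = [datetime.date(2015, 10, 15)]  # e.g. a stable branch cut

print(removal_allowed(deprecated, branches, datetime.date(2015, 9, 15)))   # False: < 3 months
print(removal_allowed(deprecated, branches, datetime.date(2015, 11, 15)))  # True
```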

-- 
Thierry Carrez (ttx)


From flavio at redhat.com  Tue Sep 15 08:54:04 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 10:54:04 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442250235-sup-1646@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <1442250235-sup-1646@lrrr.local>
Message-ID: <20150915085404.GB9301@redhat.com>

On 14/09/15 15:51 -0400, Doug Hellmann wrote:
>Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
>> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
[snip]

>> The task upload process you're referring to is the one that uses the
>> `import` task, which allows you to download an image from an external
>> source, asynchronously, and import it in Glance. This is the old
>> `copy-from` behavior that was moved into a task.
>>
>> The "fun" thing about this - and I'm sure other folks in the Glance
>> community will disagree - is that I don't consider tasks to be a
>> public API. That is to say, I would expect tasks to be an internal API
>> used by cloud admins to perform some actions (bsaed on its current
>> implementation). Eventually, some of these tasks could be triggered
>> from the external API but as background operations that are triggered
>> by the well-known public ones and not through the task API.
>
>Does that mean it's more of an "admin" API?

As it is right now, yes. I don't think it's suitable for public use
and the current supported features are more useful for admins than
end-users.

Could it be improved to be a public API? Sure.

[snip]

>> This is definitely unfortunate. I believe a good step forward for this
>> discussion would be to create a list of issues related to uploading
>> images and see how those issues can be addressed. The result from that
>> work might be that it's not recommended to make that endpoint public
>> but again, without going through the issues, it'll be hard to
>> understand how we can improve this situation. I expect most of these
>> issues to have a security impact.
>
>A report like that would be good to have. Can someone on the Glance team
>volunteer to put it together?

Here's an attempt from someone that uses clouds but doesn't run any:

- Image authenticity (we recently landed code that allows for having
  signed images)
- Quota management: Glance's quota management is very basic and it
  allows for setting quota at a per-user level [1]
- Bandwidth requirements to upload images
- (add more here)

[0] http://specs.openstack.org/openstack/glance-specs/specs/liberty/image-signing-and-verification-support.html
[1] http://docs.openstack.org/developer/glance/configuring.html#configuring-glance-user-storage-quota

[snip]
>> This is, indeed, an interesting interpretation of what tasks are for.
>> I'd probably just blame us (Glance team) for not communicating
>> properly what tasks are meant to be. I don't believe tasks are a way
>> to extend the *public* API and I'd be curious to know if others see it
>> that way. I fully agree that just breaks interoperability and as I've
>> mentioned a couple of times in this reply already, I don't even think
>> tasks should be part of the public API.
>
>Whether they are intended to be an extension mechanism, they
>effectively are right now, as far as I can tell.

Sorry, I probably didn't express myself correctly. What I meant to say
is that I don't see them as a way to extend the *public* API but
rather as a way to add functionality to glance that is useful for
admins.

>> The mistake here could be that the library should've been refactored
>> *before* adopting it in Glance.
>
>The fact that there is disagreement over the intent of the library makes
>me think the plan for creating it wasn't sufficiently circulated or
>detailed.

There wasn't much disagreement when it was created. Some folks think
the use-cases for the library don't exist anymore and some folks that
participated in this effort are not part of OpenStack anymore.

[snip]

Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/fe33a8fb/attachment.pgp>

From flavio at redhat.com  Tue Sep 15 09:08:39 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 11:08:39 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442263439-sup-913@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <1442250235-sup-1646@lrrr.local> <1442261641-sup-9577@fewbar.com>
 <1442263439-sup-913@lrrr.local>
Message-ID: <20150915090839.GC9301@redhat.com>

On 14/09/15 16:46 -0400, Doug Hellmann wrote:
>Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
>> Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
>> > Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
>> > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
>> > > >
>> > > >After having some conversations with folks at the Ops Midcycle a
>> > > >few weeks ago, and observing some of the more recent email threads
>> > > >related to glance, glance-store, the client, and the API, I spent
>> > > >last week contacting a few of you individually to learn more about
>> > > >some of the issues confronting the Glance team. I had some very
>> > > >frank, but I think constructive, conversations with all of you about
>> > > >the issues as you see them. As promised, this is the public email
>> > > >thread to discuss what I found, and to see if we can agree on what
>> > > >the Glance team should be focusing on going into the Mitaka summit
>> > > >and development cycle and how the rest of the community can support
>> > > >you in those efforts.
>> > > >
>> > > >I apologize for the length of this email, but there's a lot to go
>> > > >over. I've identified 2 high priority items that I think are critical
>> > > >for the team to be focusing on starting right away in order to use
>> > > >the upcoming summit time effectively. I will also describe several
>> > > >other issues that need to be addressed but that are less immediately
>> > > >critical. First the high priority items:
>> > > >
>> > > >1. Resolve the situation preventing the DefCore committee from
>> > > >   including image upload capabilities in the tests used for trademark
>> > > >   and interoperability validation.
>> > > >
>> > > >2. Follow through on the original commitment of the project to
>> > > >   provide an image API by completing the integration work with
>> > > >   nova and cinder to ensure V2 API adoption.
>> > >
>> > > Hi Doug,
>> > >
>> > > First and foremost, I'd like to thank you for taking the time to dig
>> > > into these issues, and for reaching out to the community seeking
>> > > information and a better understanding of what the real issues are. I
>> > > can imagine how much time you had to dedicate to this and I'm glad you
>> > > did.
>> > >
>> > > Now, to your email, I very much agree with the priorities you
>> > > mentioned above, and I'd like whoever wins Glance's PTL election
>> > > to bring focus back on them.
>> > >
>> > > Please, find some comments in-line for each point:
>> > >
>> > > >
>> > > >I. DefCore
>> > > >
>> > > >The primary issue that attracted my attention was the fact that
>> > > >DefCore cannot currently include an image upload API in its
>> > > >interoperability test suite, and therefore we do not have a way to
>> > > >ensure interoperability between clouds for users or for trademark
>> > > >use. The DefCore process has been long, and at times confusing,
>> > > >even to those of us following it sort of closely. It's not entirely
>> > > >surprising that some projects haven't been following the whole time,
>> > > >or aren't aware of exactly what the whole thing means. I have
>> > > >proposed a cross-project summit session for the Mitaka summit to
>> > > >address this need for communication more broadly, but I'll try to
>> > > >summarize a bit here.
>> > >
>> > > +1
>> > >
>> > > I think it's quite sad that some projects, especially those considered
>> > > to be part of the `starter-kit:compute`[0], don't follow closely
>> > > what's going on in DefCore. I personally consider this a task PTLs
>> > > should incorporate into their role duties. I'm glad you proposed such
>> > > a session; I hope it'll help raise awareness of this effort and help
>> > > move things forward on that front.
>> >
>> > Until fairly recently a lot of the discussion was around process
>> > and priorities for the DefCore committee. Now that those things are
>> > settled, and we have some approved policies, it's time to engage
>> > more fully.  I'll be working during Mitaka to improve the two-way
>> > communication.
>> >
>> > >
>> > > >
>> > > >DefCore is using automated tests, combined with business policies,
>> > > >to build a set of criteria for allowing trademark use. One of the
>> > > >goals of that process is to ensure that all OpenStack deployments
>> > > >are interoperable, so that users who write programs that talk to
>> > > >one cloud can use the same program with another cloud easily. This
>> > > >is a *REST API* level of compatibility. We cannot insert cloud-specific
>> > > >behavior into our client libraries, because not all cloud consumers
>> > > >will use those libraries to talk to the services. Similarly, we
>> > > >can't put the logic in the test suite, because that defeats the
>> > > >entire purpose of making the APIs interoperable. For this level of
>> > > >compatibility to work, we need well-defined APIs, with a long support
>> > > >period, that work the same no matter how the cloud is deployed. We
>> > > >need the entire community to support this effort. From what I can
>> > > >tell, that is going to require some changes to the current Glance
>> > > >API to meet the requirements. I'll list those requirements, and I
>> > > >hope we can discuss them to a degree that ensures everyone understands
>> > > >them. I don't want this email thread to get bogged down in
>> > > >implementation details or API designs, though, so let's try to keep
>> > > >the discussion at a somewhat high level, and leave the details for
>> > > >specs and summit discussions. I do hope you will correct any
>> > > >misunderstandings or misconceptions, because unwinding this as an
>> > > >outside observer has been quite a challenge and it's likely I have
>> > > >some details wrong.
>> > > >
>> > > >As I understand it, there are basically two ways to upload an image
>> > > >to glance using the V2 API today. The "POST" API pushes the image's
>> > > >bits through the Glance API server, and the "task" API instructs
>> > > >Glance to download the image separately in the background. At one
>> > > >point apparently there was a bug that caused the results of the two
>> > > >different paths to be incompatible, but I believe that is now fixed.
>> > > >However, the two separate APIs each have different issues that make
>> > > >them unsuitable for DefCore.
>> > > >
>> > > >The DefCore process relies on several factors when designating APIs
>> > > >for compliance. One factor is the technical direction, as communicated
>> > > >by the contributor community -- that's where we tell them things
>> > > >like "we plan to deprecate the Glance V1 API". In addition to the
>> > > >technical direction, DefCore looks at the deployment history of an
>> > > >API. They do not want to require deploying an API if it is not seen
>> > > >as widely usable, and they look for some level of existing adoption
>> > > >by cloud providers and distributors as an indication of that the
>> > > >API is desired and can be successfully used. Because we have multiple
>> > > >upload APIs, the message we're sending on technical direction is
>> > > >weak right now, and so they have focused on deployment considerations
>> > > >to resolve the question.
>> > >
>> > > The task upload process you're referring to is the one that uses the
>> > > `import` task, which allows you to download an image from an external
>> > > source, asynchronously, and import it in Glance. This is the old
>> > > `copy-from` behavior that was moved into a task.
>> > >
>> > > The "fun" thing about this - and I'm sure other folks in the Glance
>> > > community will disagree - is that I don't consider tasks to be a
>> > > public API. That is to say, I would expect tasks to be an internal API
>> > > used by cloud admins to perform some actions (based on its current
>> > > implementation). Eventually, some of these tasks could be triggered
>> > > from the external API but as background operations that are triggered
>> > > by the well-known public ones and not through the task API.
>> >
>> > Does that mean it's more of an "admin" API?
>> >
>>
>> I think it is basically just a half-way done implementation that is
>> exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
>> When last I tried to make integration tests in shade that exercised the
>> upstream glance task import code, I was met with an implementation that
>> simply did not work, because the pieces behind it had never been fully
>> implemented upstream. That may have been resolved, but in the process
>> of trying to write tests and make this work, I discovered a system that
>> made very little sense from a user standpoint. I want to upload an
>> image, why do I want a task?!
>>
>> > >
>> > > Ultimately, I believe end-users of the cloud simply shouldn't care
>> > > about what tasks are or aren't and more importantly, as you mentioned
>> > > later in the email, tasks make clouds not interoperable. I'd be pissed
>> > > if my public image service would ask me to learn about tasks to be
>> > > able to use the service.
>> >
>> > It would be OK if a public API set up to do a specific task returned a
>> > task ID that could be used with a generic task API to check status, etc.
>> > So the idea of tasks isn't completely bad, it's just too vague as it's
>> > exposed right now.
>> >
>>
>> I think it is a concern, because it is assuming users will want to do
>> generic things with a specific API. This turns into a black-box game where
>> the user shoves a task in and then waits to see what comes out the other
>> side. Not something I want to encourage users to do or burden them with.
>>
>> We have an API whose sole purpose is to accept image uploads. That
>> Rackspace identified a scaling pain point there is _good_. But why not
>> *solve* it for the user, instead of introduce more complexity?
>
>That's fair. I don't actually care which API we have, as long as it
>meets the other requirements.

To me, it's extremely important that the only thing a user will need
to upload an image is:

$ glance image-create .... < image.qcow2

Any image upload process that requires more than 1 command is simply
not acceptable as a *recommended* way to upload images. Whether it's
useful or not in some scenarios is not under discussion here.

>> What I'd like to see is the upload image API given the ability to
>> respond with a URL that can be uploaded to using the object storage API
>> we already have in OpenStack. Exposing users to all of these operator
>> choices is just wasting their time. Just simply say "Oh, you want to
>> upload an image? Thats fine, please upload it as an object over there
>> and POST here again when it is ready to be imported." This will make
>> perfect sense to a user reading docs, and doesn't require them to grasp
>> an abstract concept like "tasks" when all they want to do is upload
>> their image.

++

>And what would it do if the backing store for the image service
>isn't Swift or another object storage system that supports direct
>uploads? Return a URL that pointed back to itself, maybe?

Exactly!

The import task (the old copy-from) serves a very specific case: most
of the time, a user wanting to import an image from other
clouds/places on the internet. This is one of the reasons why the
only supported download sources are HTTP URLs and not other stores.
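To make the shape of Clint's two-step proposal concrete, here is a toy in-memory sketch (my own illustration, not Glance's actual API - the endpoint names are invented): the service hands back a staging location, the client puts the bits there, then asks for the import to be finalized.

```python
import hashlib

# Stand-in for the object store the service points the client at.
object_store = {}

def request_upload():
    """Step 1: ask to upload; the service picks the staging location.
    A deployment without Swift could return a URL pointing at itself."""
    return "staging/incoming-image"

def put_object(key, data):
    """Step 2: upload the bits to the returned location."""
    object_store[key] = data

def finalize_import(key):
    """Step 3: tell the image service the bits are ready to import."""
    data = object_store.pop(key)
    return {"status": "active", "checksum": hashlib.md5(data).hexdigest()}

key = request_upload()
put_object(key, b"qcow2-bytes")
image = finalize_import(key)
print(image["status"])  # active
```

The point of the sketch is that the user only ever sees "where do I put it" and "import it now", never an abstract task object.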

Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/54f4993d/attachment.pgp>

From akostrikov at mirantis.com  Tue Sep 15 09:16:17 2015
From: akostrikov at mirantis.com (Alexander Kostrikov)
Date: Tue, 15 Sep 2015 12:16:17 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
 fuel-qa(devops) core
In-Reply-To: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
Message-ID: <CAFNR43NRtM3FrdBuPBFuEwLAjmGQfKvqfVhMotqnSdYK8YyCsA@mail.gmail.com>

+1

On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <aurlapova at mirantis.com
> wrote:

> Folks,
> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>
> Denis spent three months on the Fuel BugFix team; his velocity was between
> 150-200% per week. Thanks to his efforts we have resolved the old issues with
> time sync and Ceph's clock skew. Denis's ideas constantly help us to
> improve our functional test suite.
>
> Fuelers, please vote for Denis!
>
> Nastya.
>
> [1]
> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Kind Regards,

Alexandr Kostrikov,

Mirantis, Inc.

35b/3, Vorontsovskaya St., 109147, Moscow, Russia


Tel.: +7 (495) 640-49-04
Tel.: +7 (925) 716-64-52

Skype: akostrikov_mirantis

E-mail: akostrikov at mirantis.com

www.mirantis.com
www.mirantis.ru
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/86c23c3a/attachment.html>

From kuvaja at hpe.com  Tue Sep 15 09:43:26 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Tue, 15 Sep 2015 09:43:26 +0000
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442247798-sup-5628@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D6494@G9W0745.americas.hpqcorp.net>
 <1442247798-sup-5628@lrrr.local>
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4D698F@G9W0745.americas.hpqcorp.net>

> -----Original Message-----
> From: Doug Hellmann [mailto:doug at doughellmann.com]
> Sent: Monday, September 14, 2015 5:40 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> 
> Excerpts from Kuvaja, Erno's message of 2015-09-14 15:02:59 +0000:
> > > -----Original Message-----
> > > From: Flavio Percoco [mailto:flavio at redhat.com]
> > > Sent: Monday, September 14, 2015 1:41 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > >
> > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
<CLIP>
> > > >
> > > >I. DefCore
> > > >
> > > >The primary issue that attracted my attention was the fact that
> > > >DefCore cannot currently include an image upload API in its
> > > >interoperability test suite, and therefore we do not have a way to
> > > >ensure interoperability between clouds for users or for trademark
> > > >use. The DefCore process has been long, and at times confusing,
> > > >even to those of us following it sort of closely. It's not entirely
> > > >surprising that some projects haven't been following the whole
> > > >time, or aren't aware of exactly what the whole thing means. I have
> > > >proposed a cross-project summit session for the Mitaka summit to
> > > >address this need for communication more broadly, but I'll try to
> summarize a bit here.
> > >
> >
> > Looking at how different OpenStack-based public clouds limit or fully
> prevent their users from uploading images to their deployments, I'm not
> convinced image upload should be included in this definition.
> 
> The problem with that approach is that it means end consumers of those
> clouds cannot write common tools that include image uploads, which is a
> frequently used/desired feature. What makes that feature so special that we
> don't care about it for interoperability?
> 

I'm not sure it really is that special, API- or technical-wise; it's just the one that was lifted onto the pedestal in this discussion.

> >
> > > +1
> > >
> > > I think it's quite sad that some projects, especially those
> > > considered to be part of the `starter-kit:compute`[0], don't follow
> > > closely what's going on in DefCore. I personally consider this a
> > > task PTLs should incorporate into their role duties. I'm glad you
> > > proposed such a session; I hope it'll help raise awareness of this effort
> and it'll help move things forward on that front.
> > >
> > >
> > > >
> > > >DefCore is using automated tests, combined with business policies,
> > > >to build a set of criteria for allowing trademark use. One of the
> > > >goals of that process is to ensure that all OpenStack deployments
> > > >are interoperable, so that users who write programs that talk to
> > > >one cloud can use the same program with another cloud easily. This
> > > >is a *REST
> > > >API* level of compatibility. We cannot insert cloud-specific
> > > >behavior into our client libraries, because not all cloud consumers
> > > >will use those libraries to talk to the services. Similarly, we
> > > >can't put the logic in the test suite, because that defeats the
> > > >entire purpose of making the APIs interoperable. For this level of
> > > >compatibility to work, we need well-defined APIs, with a long
> > > >support period, that work the same no matter how the cloud is
> > > >deployed. We need the entire community to support this effort. From
> > > >what I can tell, that is going to require some changes to the
> > > >current Glance API to meet the requirements. I'll list those
> > > >requirements, and I hope we can discuss them to a degree that
> > > >ensures everyone understands them. I don't want this email thread
> > > >to get bogged down in implementation details or API designs,
> > > >though, so let's try to keep the discussion at a somewhat high
> > > >level, and leave the details for specs and summit discussions. I do
> > > >hope you will correct any misunderstandings or misconceptions,
> > > >because unwinding this as an outside observer has been quite a
> challenge and it's likely I have some details wrong.
> >
> > This just reinforces my doubt above. Including upload in the DefCore
> requirements probably just closes out lots of the public clouds out there. Is
> that the intention here?
> 
> No, absolutely not. The intention is to provide clear technical direction about
> what we think the API for uploading images should be.
> 

Great, that's an easy goal to stand behind and support!

> >
> > > >
<CLIP>
> > >
> > > The task upload process you're referring to is the one that uses the
> > > `import` task, which allows you to download an image from an
> > > external source, asynchronously, and import it in Glance. This is
> > > the old `copy-from` behavior that was moved into a task.
> > >
> > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > community will disagree - is that I don't consider tasks to be a
> > > public API. That is to say, I would expect tasks to be an internal
> > > API used by cloud admins to perform some actions (based on its
> > > current implementation). Eventually, some of these tasks could be
> > > triggered from the external API but as background operations that
> > > are triggered by the well-known public ones and not through the task
> API.
> > >
> > > Ultimately, I believe end-users of the cloud simply shouldn't care
> > > about what tasks are or aren't and more importantly, as you
> > > mentioned later in the email, tasks make clouds not interoperable.
> > > I'd be pissed if my public image service would ask me to learn about tasks
> to be able to use the service.
> >
> > I'd like to bring another argument here. I think our public Images API should
> behave consistently regardless of whether tasks are enabled in the deployment,
> and with what plugins. That is, _if_ we expect glance upload to work over the
> POST API and that endpoint is available in the deployment, I would expect
> a) my image hash to match the one the cloud returns, b) all or none of the
> clouds to reject my image if it gets flagged by Vendor X virus definitions,
> and c) it to be bootable across clouds, given it's in a supported format. On
> the other hand, if the vendor tells me I need to use a cloud-specific task
> that accepts only OVA-compliant image packages and that the image will be
> checked before acceptance, my expectations are quite different, and I would
> expect all of that to happen outside of the standard API, as it's not
> consistent behavior.
> 
> I'm not sure what you're arguing. Is it not possible to have a background
> process import an image without modifying it?

Absolutely not my point, sorry. What I was trying to say is that I'd rather have multiple ways of getting images into the clouds I use, if that means I can predict exactly how each of those ways behaves on every deployment that has it enabled.

My thinking on tasks has been more about the processing side than about a different way to do uploads. My understanding has been that the intention behind tasks was to provide extra functionality like introspection, conversion, and OVA import (which is currently under work), much of which would be beneficial for the end user of the cloud. But having that happen to your image as a side effect of image upload, without you knowing about it or being able to avoid it, is not a desirable state.

> 
> >
> > >
> > > Long story short, I believe the only upload API that should be
> > > considered is the one that uses HTTP and, eventually, to bring
> > > compatibility with v1 as far as the copy-from behavior goes, Glance
> > > could bring back that behavior on top of the task (just dropping
> > > this here for the sake of discussion and interoperability).
> > >
> > >
> > > >The POST API is enabled in many public clouds, but not consistently.
> > > >In some clouds like HP, a tenant requires special permission to use
> > > >the API. At least one provider, Rackspace, has disabled the API entirely.
> > > >This is apparently due to what seems like a fair argument that
> > > >uploading the bits directly to the API service presents a possible
> > > >denial of service vector. Without arguing the technical merits of
> > > >that decision, the fact remains that without a strong consensus
> > > >from deployers that the POST API should be publicly and
> > > >consistently available, it does not meet the requirements to be
> > > >used for DefCore testing.
> > >
> > > This is definitely unfortunate. I believe a good step forward for
> > > this discussion would be to create a list of issues related to
> > > uploading images and see how those issues can be addressed. The
> > > result from that work might be that it's not recommended to make
> > > that endpoint public but again, without going through the issues,
> > > it'll be hard to understand how we can improve this situation. I expect
> most of this issues to have a security impact.
> > >
> >
> > ++. Regardless of the helpfulness of that discussion, I don't think it's a
> realistic expectation to prioritize that work so heavily that the majority of
> those issues would be solved, alongside the priorities at the top of this
> e-mail, within a single cycle.
> 
> We should establish the priorities, even if the roadmap to solving them is
> longer than one cycle.
> 

That's fair enough.

> >
> > >
<CLIP>
> > >
> > > >Tasks also come from plugins, which may be installed differently
> > > >based on the deployment. This is an interesting approach to
> > > >creating API extensions, but isn't discoverable enough to write
> > > >interoperable tools against. Most of the other projects are
> > > >starting to move away from supporting API extensions at all because
> > > >of interoperability concerns they introduce. Deployers should be
> > > >able to configure their clouds to perform well, but not to behave in
> fundamentally different ways.
> > > >Extensions are just that, extensions. We can't rely on them for
> > > >interoperability testing.
> > >
> > > This is, indeed, an interesting interpretation of what tasks are for.
> > > I'd probably just blame us (Glance team) for not communicating
> > > properly what tasks are meant to be. I don't believe tasks are a way
> > > to extend the
> > > *public* API and I'd be curious to know if others see it that way. I
> > > fully agree that just breaks interoperability and as I've mentioned
> > > a couple of times in this reply already, I don't even think tasks should be
> part of the public API.
> >
> > Hmm, that's exactly how I have seen it: plugins that can be provided to
> expand the standard functionality. I totally agree that these should not be
> relied on for interoperability. I've always assumed that this is also the
> reason why tasks have not had much focus, as we've prioritized the actual API
> functionality and stability over its expandability.
> 
> My issue with the task API as it stands is not how it works on the back-end.
> It's only the actual REST API entry point that concerns me, because of its
> vagueness.

That's great. This way the issues are easier to address than if the concerns were against the fundamental approach.

> 
> If there are different semantics for the background task in different
> providers because of different plugins, then we need to write tests to
> specify the aspects we care about being standardized. You mentioned image
> hashes as one case, and I can see that being useful. Rather than allowing the
> cloud to modify an image on upload, we might enforce that the hash of the
> image we download is the same as the hash of the image imported. That still
> allows for unpacking and scanning on the back-end.
> 

Again this is easy to back up.
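The hash guarantee Doug sketches above could be captured as a tiny interoperability check: whatever unpacking or scanning the back-end does, the bits you download must hash the same as the bits you imported. A hedged Python illustration (not a real DefCore test, just the invariant):

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Content hash used to compare imported and downloaded bits."""
    return hashlib.sha256(data).hexdigest()

def check_roundtrip(imported: bytes, downloaded: bytes) -> bool:
    """The invariant: back-end processing is fine, but the stored
    image bits must come back unmodified."""
    return image_hash(downloaded) == image_hash(imported)

uploaded = b"original qcow2 bits"
assert check_roundtrip(uploaded, uploaded)          # compliant cloud
assert not check_roundtrip(uploaded, b"converted")  # silently-modified image
```

A cloud that converts images on upload would fail this check, which is exactly the behavioral difference the test is meant to surface.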

> >
> > >
<CLIP>

- Erno


From sathlang at redhat.com  Tue Sep 15 09:48:55 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Tue, 15 Sep 2015 11:48:55 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <55F73DB1.5040604@redhat.com> (Rich Megginson's message of "Mon, 
 14 Sep 2015 15:35:45 -0600")
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <871te066bj.fsf@s390.unix4.net>
 <55F73DB1.5040604@redhat.com>
Message-ID: <8737yg3tdk.fsf@s390.unix4.net>

Rich Megginson <rmeggins at redhat.com> writes:

>>> This seems to be the hardest part - I still cannot figure out how to
>>> use "compound" names with Puppet.
>> I don't get this point.  what is "2 or more resource could exists" and
>> how it relates to compound names ?
>
> I would like to uniquely specify a resource by the _combination_ of
> the name + the domain.  For example:
>
>   keystone_user { 'domain A admin user':
>     name => 'admin',
>     domain => 'domainA',
>   }
>
>   keystone_user { 'domain B admin user':
>     name => 'admin',
>     domain => 'domainB',
>   }
>
> Puppet doesn't like this - the value of the 'name' property of
> keystone_user is not unique throughout the manifest/catalog, even
> though both users are distinct and unique because they existing in
> different domains (and will have different UUIDs assigned by
> Keystone).
>
> Gilles posted links to discussions about how to use isnamevar and
> title_patterns with Puppet Ruby providers, but I could not get it to
> work.  I was using Puppet 3.8 - perhaps it only works in Puppet 4.0 or
> later.  At any rate, this is an area for someone to do some research

Thanks for the explanation.  I will definitely look into it.
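Stripped of the Puppet specifics, the core of Rich's example is that the resource's unique key is the (name, domain) pair rather than the name alone. Roughly, as a plain-Python analogy (not Puppet provider code):

```python
# The resource's identity is the composite (name, domain) key, so two
# "admin" users in different domains are distinct resources.
catalog = {}

def keystone_user(name, domain, **props):
    key = (name, domain)
    if key in catalog:
        raise ValueError(f"duplicate resource: {key}")
    catalog[key] = props

keystone_user("admin", "domainA")
keystone_user("admin", "domainB")  # fine: different composite key
print(len(catalog))  # 2
```

Puppet's complaint is that it keys on the name alone; isnamevar/title_patterns are the mechanism for teaching it the composite key.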

>>>>>       - More difficult to debug
>>> More difficult than it is already? :P
>> require 'pry';binding.pry :)
>
> Tried that on Fedora 22 (actually a debugger for pry, because pry by itself
> isn't a debugger, but a REPL inspector).  Didn't work.
>
> Also doesn't help you when someone hands you a pile of Puppet logs . . .

Agreed, more useful for creating a provider than for debugging Puppet logs.

-- 
Sofer Athlan-Guyot


From jesse.pretorius at gmail.com  Tue Sep 15 09:51:57 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Tue, 15 Sep 2015 10:51:57 +0100
Subject: [openstack-dev] [openstack-ansible][compass] Support of Offline
	Install
In-Reply-To: <CALNQoPcNgZ-N+djnNvEzRNKch+TQBpzLS8AcAbwuWSQJEJToaA@mail.gmail.com>
References: <CALNQoPcNgZ-N+djnNvEzRNKch+TQBpzLS8AcAbwuWSQJEJToaA@mail.gmail.com>
Message-ID: <CAGSrQvz47knhGE7etO4yrPEY-avmb2v6z18E-brf7Zckm6dmow@mail.gmail.com>

On 15 September 2015 at 05:36, Weidong Shao <weidongshao at gmail.com> wrote:

> Compass, an openstack deployment project, is in process of using osad
> project in the openstack deployment. We need to support a use case where
> there is no Internet connection. The way we handle this is to split the
> deployment into "build" and "install" phase. In Build phase, the Compass
> server node can have Internet connection and can build local repo and other
> necessary dynamic artifacts that requires Internet connection. In "install"
> phase, the to-be-installed nodes do not have Internet connection, and they
> only download necessary data from Compass server and other services
> constructed in Build phase.
>
> Now, is "offline install" something that OSAD project shall also support?
> If yes, what is the scope of work for any changes, if required.
>

Currently we don't have an offline install paradigm - but that doesn't mean
that we couldn't shift things around to support it if it makes sense. I
think this is something that we could discuss via the ML, via a spec
review, or at the summit.

Some notes which may be useful:

1. We have support for the use of a proxy server [1].
2. As you probably already know, we build the python wheels for the
environment on the repo-server - so all python wheel installs (except
tempest venv requirements) are done directly from the repo server.
3. All apt-key and apt-get actions are done against online repositories. If
you wish to have these done offline then there would need to be an
addition of some sort of apt-key and apt package mirror which we currently
do not have. If there is a local repo in the environment, the functionality
to direct all apt-key and apt-get install actions against an internal
mirror is all there.

[1]
http://git.openstack.org/cgit/openstack/openstack-ansible/commit/?id=ed7f78ea5689769b3a5e1db444f4c16f3cc06060
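The build/install split Weidong describes boils down to a simple contract: only the build phase may reach the Internet, and the install phase must be satisfied entirely from the local repo. A toy sketch of that contract (my own illustration, not Compass or OSAD code):

```python
# Local repo populated during the "build" phase (Internet available).
local_repo = {}

def build_phase(remote_packages):
    """The only phase allowed to fetch from the Internet."""
    local_repo.update(remote_packages)

def install_phase(wanted):
    """Must be satisfied entirely from the local repo - no network."""
    missing = [pkg for pkg in wanted if pkg not in local_repo]
    if missing:
        raise RuntimeError(f"offline install cannot fetch: {missing}")
    return {pkg: local_repo[pkg] for pkg in wanted}

build_phase({"nova": "12.0.0", "glance": "11.0.0"})
print(install_phase(["nova"]))  # {'nova': '12.0.0'}
```

Failing loudly on anything missing from the local repo, rather than silently reaching out, is what makes the install phase genuinely offline.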
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/6dc3835b/attachment.html>

From flavio at redhat.com  Tue Sep 15 09:51:03 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 11:51:03 +0200
Subject: [openstack-dev] [glance] tasks (following "proposed priorities
 for Mitaka")
In-Reply-To: <1442260721-sup-4094@lrrr.local>
References: <D21C4411.21E22%brian.rosmaita@rackspace.com>
 <55F714E2.4010707@inaugust.com> <1442260721-sup-4094@lrrr.local>
Message-ID: <20150915095103.GD9301@redhat.com>

On 14/09/15 16:09 -0400, Doug Hellmann wrote:
>Excerpts from Monty Taylor's message of 2015-09-14 20:41:38 +0200:
>> On 09/14/2015 04:58 PM, Brian Rosmaita wrote:
[snip]
>> If 'glance import-from http://example.com/my-image.qcow2' always worked,
>> and in the back end generated a task with the task workflow, and one of
>> the task workflows that a deployer could implement was one to do
>> conversions to the image format of the cloud provider's choice, that
>> would be teh-awesome. It's still a bit annoying to me that I, as a user,
>> need to come up with a place to put the image so that it can be
>> imported, but honestly, I'll take it. It's not _that_ hard of a problem.
>
>This is more or less what I'm thinking we want, too. As a user, I want
>to know how to import an image by having that documented clearly and by
>using an obvious UI. As a deployer, I want to sometimes do things to an
>image as they are imported, and background tasks may make that easier to
>implement. As a user, I don't care if my image upload is a task or not.

IMHO, as much as it's not a _hard_ problem to solve, as a user I'd
hate to be asked to upload my image somewhere to create an image in my
cloud account. Not all users create scripts and not all users have the
same resources.

Simplest scenario. I'm a new user and I want to test your cloud. I
want to know how it works and I want to run *my* distro which is not
available in the list of public images. If I were to be asked to
upload the image somewhere to then use it, I'd be really sad and, at
the very least, I'd expect this cloud to provide *the* place where I
should put the image, which is not always the case.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

From sathlang at redhat.com  Tue Sep 15 09:55:05 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Tue, 15 Sep 2015 11:55:05 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <55F76F5C.2020106@redhat.com> (Gilles Dubreuil's message of "Tue, 
 15 Sep 2015 11:07:40 +1000")
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <55F76F5C.2020106@redhat.com>
Message-ID: <87vbbc2eiu.fsf@s390.unix4.net>

Gilles Dubreuil <gilles at redhat.com> writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>
>>>> A. The 'composite namevar' approach:
>>>>
>>>>     keystone_tenant {'projectX::domainY': ... }
>>>>
>>>>   B. The 'meaningless name' approach:
>>>>
>>>>     keystone_tenant {'myproject': name => 'projectX', domain => 'domainY',
>>>> ...}
>>>>
>>>> Notes:
>>>>   - Actually using both combined should work too, with the domain
>>>> parameter supposedly overriding the domain part of the title.
>>>>   - Please look at [1] this for some background between the two
>>>> approaches:
>>>>
>>>> The question
>>>> -------------
>>>> Decide between the two approaches, the one we would like to retain for
>>>> puppet-keystone.
>>>>
>>>> Why does it matter?
>>>> ---------------
>>>> 1. Domain names are mandatory for every user, group or project, aside
>>>> from the backward-compatibility period mentioned earlier, during which
>>>> no domain means using the default one.
>>>> 2. Long term impact
>>>> 3. Both approaches are not completely equivalent, with different
>>>> consequences for future usage.
>>> I can't see why they couldn't be equivalent, but I may be missing
>>> something here.
>> 
>> I think we could support both.  I don't see it as an either/or situation.
>> 
>>>
>>>> 4. Being consistent
>>>> 5. Therefore it's for the community to decide
>>>>
>>>> Pros/Cons
>>>> ----------
>>>> A.
>>> I think it's the B: meaningless approach here.
>>>
>>>>    Pros
>>>>      - Easier names
>>> That's subjective; creating unique and meaningful names doesn't look
>>> easy to me.
>> 
>> The point is that this allows choice - maybe the user already has some
>> naming scheme, or wants to use a more "natural" meaningful name - rather
>> than being forced into a possibly "awkward" naming scheme with "::"
>> 
>>   keystone_user { 'heat domain admin user':
>>     name => 'admin',
>>     domain => 'HeatDomain',
>>     ...
>>   }
>> 
>>   keystone_user_role {'heat domain admin user@::HeatDomain':
>>     roles => ['admin']
>>     ...
>>   }
>> 
>>>
>>>>    Cons
>>>>      - Titles have no meaning!
>> 
>> They have meaning to the user, not necessarily to Puppet.
>> 
>>>>      - Cases where 2 or more resources could exist
>> 
>> This seems to be the hardest part - I still cannot figure out how to use
>> "compound" names with Puppet.
>> 
>>>>      - More difficult to debug
>> 
>> More difficult than it is already? :P
>> 
>>>>      - Titles mismatch when listing the resources (self.instances)
>>>>
>>>> B.
>>>>    Pros
>>>>      - Unique titles guaranteed
>>>>      - No ambiguity between resource found and their title
>>>>    Cons
>>>>      - More complicated titles
>>>> My vote
>>>> --------
>>>> I would love to have approach A for its easier names.
>>>> But I've seen the challenge of maintaining the providers behind the
>>>> curtains and the confusion it creates with name/titles and when not sure
>>>> about the domain we're dealing with.
>>>> Also I believe that supporting self.instances consistently with
>>>> meaningful name is saner.
>>>> Therefore I vote B
>>> +1 for B.
>>>
>>> My view is that this should be the advertised way, but the other method
>>> (meaningless) should be there if the user need it.
>>>
>>> So as far as I'm concerned the two idioms should co-exist.  This would
>>> mimic what is possible with all puppet resources.  For instance you can:
>>>
>>>    file { '/tmp/foo.bar': ensure => present }
>>>
>>> and you can
>>>
>>>    file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>>>
>>> The two refer to the same resource.
>> 
>> Right.
>> 
>
> I disagree, using the name for the title is not creating a composite
> name. The latter requires adding at least another parameter to be part
> of the title.
>
> Also in the case of the file resource, a path/filename is a unique name,
> which is not the case of an Openstack user which might exist in several
> domains.
>
> I actually added the meaningful name case in:
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>
> But that doesn't work very well because without adding the domain to the
> name, the following fails:
>
> keystone_tenant {'project_1': domain => 'domain_A', ...}
> keystone_tenant {'project_1': domain => 'domain_B', ...}
>
> And adding the domain makes it a de-facto 'composite name'.

I agree that my example is not similar to what the keystone provider has
to do.  What I wanted to point out is that puppet users are used to
having this kind of *interface*: one where you put something meaningful
in the title and one where you put something meaningless.  The fact that
the meaningful one is a compound one shouldn't matter to the user.

>>>
>>> But, If that's indeed not possible to have them both,
>
> There are cases where having both won't be possible, like trusts, but
> why not have both for the resources supporting it?
>
> That said, I think we need to make a choice, at least to get started, to
> have something working, consistently, besides exceptions. Other options
> to be added later.

So we should go with the meaningful one first, for consistency, I think.

>>> then I would keep only the meaningful name.
>>>
>>>
>>> As a side note, someone raised an issue about the delimiter being
>>> hardcoded to "::".  This could be a property of the resource.  This
>>> would enable the user to use weird names with "::" in them and assign a "/"
>>> (for instance) to the delimiter property:
>>>
>>>    Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>>
>>> bar::is::cool is the name of the domain and foo::blah is the project.
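A provider honoring such a per-resource delimiter might split the title
like this. This is an illustrative Python sketch only; the function name
and defaults are hypothetical, not actual puppet-keystone provider code:

```python
# Illustrative sketch: split a resource title into (name, domain) using a
# configurable delimiter instead of a hardcoded '::'.
def split_title(title, delimiter='::'):
    # Split on the LAST occurrence, so the project/user name itself may
    # contain the delimiter characters (e.g. 'foo::blah' with '/' set).
    name, sep, domain = title.rpartition(delimiter)
    if not sep:
        return title, None  # no delimiter found: bare name, no domain part
    return name, domain

print(split_title('foo::blah/bar::is::cool', delimiter='/'))
# ('foo::blah', 'bar::is::cool')
```

With the default '::' delimiter, a title like 'projectX::domainY' splits
the same way into ('projectX', 'domainY').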
>> 
>> That's a good idea.  Please file a bug for that.
>> 
>>>
>>>> Finally
>>>> ------
>>>> Thanks for reading that far!
>>>> To choose, please provide feedback with more pros/cons, examples and
>>>> your vote.
>>>>
>>>> Thanks,
>>>> Gilles
>>>>
>>>>
>>>> PS:
>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>
>>>> __________________________________________________________________________
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> Bye,

-- 
Sofer Athlan-Guyot


From ihrachys at redhat.com  Tue Sep 15 09:55:43 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 15 Sep 2015 11:55:43 +0200
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <4A2C098D-4D25-4DFE-AA78-152AF211FCB6@redhat.com>

> On 11 Sep 2015, at 23:12, Kyle Mestery <mestery at mestery.com> wrote:
> 
> I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.
> 
> I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, and I'm excited to watch them lead and help them in any way I can.

Wow, it took me by surprise. :( I want you to know that your leadership over the last cycles was really game changing, and I am sure you should feel good leaving the position with such a lively, bright and open community as it is now. Thanks a lot for what you did for the neutron island!

I am also very happy that you aren't stepping back from neutron, and maybe we'll see you serve in the position one more time in the future. ;)

However hard it will be, since you set the bar so high, I believe this community will find someone to replace you in this role, and we'll see no earthquakes and disasters.

See you in Tokyo! [You should give a talk about how to be an awesome PTL!]

Ihar

From jesse.pretorius at gmail.com  Tue Sep 15 09:58:42 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Tue, 15 Sep 2015 10:58:42 +0100
Subject: [openstack-dev] [openstack-ansible] PTL Non-Candidacy
In-Reply-To: <1442264539677.79104@RACKSPACE.COM>
References: <1442264539677.79104@RACKSPACE.COM>
Message-ID: <CAGSrQvxSjYhq6Htrca3is630=8MLNq1TOpahscpT2foMESNi3g@mail.gmail.com>

On 14 September 2015 at 22:02, Kevin Carter <kevin.carter at rackspace.com>
wrote:

>
> TL;DR - I'm sending this out to announce that I won't be running for PTL
> of the OpenStack-Ansible project in the upcoming cycle. Although I won't be
> running for PTL, with community support, I intend to remain an active
> contributor, just with more time spent on cross-project work and in other
> upstream communities.
>
> Being a PTL has been difficult, fun, and rewarding and is something I
> think everyone should strive to do at least once. In the upcoming cycle
> I believe our project has reached the point of maturity where it's time for
> the leadership to change. OpenStack-Ansible was recently moved into the
> "big-tent" and I consider this to be the perfect juncture for me to step
> aside and allow the community to evolve under the guidance of a new team
> lead. I share the opinions of current and former PTLs that having a
> revolving door of leadership is key to the success of any project [0].
> While OpenStack-Ansible has only recently been moved out of Stackforge and
> into the OpenStack namespace as a governed project (I'm really excited
> about that) I've had the privilege of working as the project technical
> lead ever since its inception at Rackspace with the initial proof of
> concept known as "Ansible-LXC-RPC". It's been an amazing journey so far and
> I'd like to thank everyone that's helped make OpenStack-Ansible (formerly
> OSAD) possible; none of this would have happened without the contributions
> made by our devout and ever growing community of deployers and developers.
>
> Thank you again and I look forward to seeing you all online and in Tokyo.
>
> [0] -
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>

Thank you Kevin for your leadership through the journey thus far! It's been
fantastic to work with you driving the vision and execution of being Open
[1].

[1] https://wiki.openstack.org/wiki/Open

From flavio at redhat.com  Tue Sep 15 09:59:35 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 11:59:35 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <55F76A6E.6000507@inaugust.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <1442250235-sup-1646@lrrr.local> <1442261641-sup-9577@fewbar.com>
 <1442263439-sup-913@lrrr.local> <1442275194-sup-3621@fewbar.com>
 <55F76A6E.6000507@inaugust.com>
Message-ID: <20150915095935.GE9301@redhat.com>

On 15/09/15 02:46 +0200, Monty Taylor wrote:
>On 09/15/2015 02:06 AM, Clint Byrum wrote:
>>Excerpts from Doug Hellmann's message of 2015-09-14 13:46:16 -0700:
>>>Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
>>>>Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
>>>>>Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
>>>>>>On 14/09/15 08:10 -0400, Doug Hellmann wrote:
>>>>>>>
>>>>>>>After having some conversations with folks at the Ops Midcycle a
>>>>>>>few weeks ago, and observing some of the more recent email threads
>>>>>>>related to glance, glance-store, the client, and the API, I spent
>>>>>>>last week contacting a few of you individually to learn more about
>>>>>>>some of the issues confronting the Glance team. I had some very
>>>>>>>frank, but I think constructive, conversations with all of you about
>>>>>>>the issues as you see them. As promised, this is the public email
>>>>>>>thread to discuss what I found, and to see if we can agree on what
>>>>>>>the Glance team should be focusing on going into the Mitaka summit
>>>>>>>and development cycle and how the rest of the community can support
>>>>>>>you in those efforts.
>>>>>>>
>>>>>>>I apologize for the length of this email, but there's a lot to go
>>>>>>>over. I've identified 2 high priority items that I think are critical
>>>>>>>for the team to be focusing on starting right away in order to use
>>>>>>>the upcoming summit time effectively. I will also describe several
>>>>>>>other issues that need to be addressed but that are less immediately
>>>>>>>critical. First the high priority items:
>>>>>>>
>>>>>>>1. Resolve the situation preventing the DefCore committee from
>>>>>>>   including image upload capabilities in the tests used for trademark
>>>>>>>   and interoperability validation.
>>>>>>>
>>>>>>>2. Follow through on the original commitment of the project to
>>>>>>>   provide an image API by completing the integration work with
>>>>>>>   nova and cinder to ensure V2 API adoption.
>>>>>>
>>>>>>Hi Doug,
>>>>>>
>>>>>>First and foremost, I'd like to thank you for taking the time to dig
>>>>>>into these issues, and for reaching out to the community seeking for
>>>>>>information and a better understanding of what the real issues are. I
>>>>>>can imagine how much time you had to dedicate on this and I'm glad you
>>>>>>did.
>>>>>>
>>>>>>Now, to your email, I very much agree with the priorities you
>>>>>>mentioned above and I'd like for, whomever will win Glance's PTL
>>>>>>election, to bring focus back on that.
>>>>>>
>>>>>>Please, find some comments in-line for each point:
>>>>>>
>>>>>>>
>>>>>>>I. DefCore
>>>>>>>
>>>>>>>The primary issue that attracted my attention was the fact that
>>>>>>>DefCore cannot currently include an image upload API in its
>>>>>>>interoperability test suite, and therefore we do not have a way to
>>>>>>>ensure interoperability between clouds for users or for trademark
>>>>>>>use. The DefCore process has been long, and at times confusing,
>>>>>>>even to those of us following it sort of closely. It's not entirely
>>>>>>>surprising that some projects haven't been following the whole time,
>>>>>>>or aren't aware of exactly what the whole thing means. I have
>>>>>>>proposed a cross-project summit session for the Mitaka summit to
>>>>>>>address this need for communication more broadly, but I'll try to
>>>>>>>summarize a bit here.
>>>>>>
>>>>>>+1
>>>>>>
>>>>>>I think it's quite sad that some projects, especially those considered
>>>>>>to be part of the `starter-kit:compute`[0], don't follow closely
>>>>>>what's going on in DefCore. I personally consider this a task PTLs
>>>>>>should incorporate into their duties. I'm glad you proposed such a
>>>>>>session; I hope it'll help raise awareness of this effort and help
>>>>>>move things forward on that front.
>>>>>
>>>>>Until fairly recently a lot of the discussion was around process
>>>>>and priorities for the DefCore committee. Now that those things are
>>>>>settled, and we have some approved policies, it's time to engage
>>>>>more fully.  I'll be working during Mitaka to improve the two-way
>>>>>communication.
>>>>>
>>>>>>
>>>>>>>
>>>>>>>DefCore is using automated tests, combined with business policies,
>>>>>>>to build a set of criteria for allowing trademark use. One of the
>>>>>>>goals of that process is to ensure that all OpenStack deployments
>>>>>>>are interoperable, so that users who write programs that talk to
>>>>>>>one cloud can use the same program with another cloud easily. This
>>>>>>>is a *REST API* level of compatibility. We cannot insert cloud-specific
>>>>>>>behavior into our client libraries, because not all cloud consumers
>>>>>>>will use those libraries to talk to the services. Similarly, we
>>>>>>>can't put the logic in the test suite, because that defeats the
>>>>>>>entire purpose of making the APIs interoperable. For this level of
>>>>>>>compatibility to work, we need well-defined APIs, with a long support
>>>>>>>period, that work the same no matter how the cloud is deployed. We
>>>>>>>need the entire community to support this effort. From what I can
>>>>>>>tell, that is going to require some changes to the current Glance
>>>>>>>API to meet the requirements. I'll list those requirements, and I
>>>>>>>hope we can discuss them to a degree that ensures everyone understands
>>>>>>>them. I don't want this email thread to get bogged down in
>>>>>>>implementation details or API designs, though, so let's try to keep
>>>>>>>the discussion at a somewhat high level, and leave the details for
>>>>>>>specs and summit discussions. I do hope you will correct any
>>>>>>>misunderstandings or misconceptions, because unwinding this as an
>>>>>>>outside observer has been quite a challenge and it's likely I have
>>>>>>>some details wrong.
>>>>>>>
>>>>>>>As I understand it, there are basically two ways to upload an image
>>>>>>>to glance using the V2 API today. The "POST" API pushes the image's
>>>>>>>bits through the Glance API server, and the "task" API instructs
>>>>>>>Glance to download the image separately in the background. At one
>>>>>>>point apparently there was a bug that caused the results of the two
>>>>>>>different paths to be incompatible, but I believe that is now fixed.
>>>>>>>However, the two separate APIs each have different issues that make
>>>>>>>them unsuitable for DefCore.
>>>>>>>
>>>>>>>The DefCore process relies on several factors when designating APIs
>>>>>>>for compliance. One factor is the technical direction, as communicated
>>>>>>>by the contributor community -- that's where we tell them things
>>>>>>>like "we plan to deprecate the Glance V1 API". In addition to the
>>>>>>>technical direction, DefCore looks at the deployment history of an
>>>>>>>API. They do not want to require deploying an API if it is not seen
>>>>>>>as widely usable, and they look for some level of existing adoption
>>>>>>>by cloud providers and distributors as an indication that the
>>>>>>>API is desired and can be successfully used. Because we have multiple
>>>>>>>upload APIs, the message we're sending on technical direction is
>>>>>>>weak right now, and so they have focused on deployment considerations
>>>>>>>to resolve the question.
>>>>>>
>>>>>>The task upload process you're referring to is the one that uses the
>>>>>>`import` task, which allows you to download an image from an external
>>>>>>source, asynchronously, and import it in Glance. This is the old
>>>>>>`copy-from` behavior that was moved into a task.
>>>>>>
>>>>>>The "fun" thing about this - and I'm sure other folks in the Glance
>>>>>>community will disagree - is that I don't consider tasks to be a
>>>>>>public API. That is to say, I would expect tasks to be an internal API
>>>>>>used by cloud admins to perform some actions (based on its current
>>>>>>implementation). Eventually, some of these tasks could be triggered
>>>>>>from the external API but as background operations that are triggered
>>>>>>by the well-known public ones and not through the task API.
>>>>>
>>>>>Does that mean it's more of an "admin" API?
>>>>>
>>>>
>>>>I think it is basically just a half-way done implementation that is
>>>>exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
>>>>When last I tried to make integration tests in shade that exercised the
>>>>upstream glance task import code, I was met with an implementation that
>>>>simply did not work, because the pieces behind it had never been fully
>>>>implemented upstream. That may have been resolved, but in the process
>>>>of trying to write tests and make this work, I discovered a system that
>>>>made very little sense from a user standpoint. I want to upload an
>>>>image, why do I want a task?!
>>>>
>>>>>>
>>>>>>Ultimately, I believe end-users of the cloud simply shouldn't care
>>>>>>about what tasks are or aren't and more importantly, as you mentioned
>>>>>>later in the email, tasks make clouds not interoperable. I'd be pissed
>>>>>>if my public image service would ask me to learn about tasks to be
>>>>>>able to use the service.
>>>>>
>>>>>It would be OK if a public API set up to do a specific task returned a
>>>>>task ID that could be used with a generic task API to check status, etc.
>>>>>So the idea of tasks isn't completely bad, it's just too vague as it's
>>>>>exposed right now.
>>>>>
>>>>
>>>>I think it is a concern, because it is assuming users will want to do
>>>>generic things with a specific API. This turns into a black-box game where
>>>>the user shoves a task in and then waits to see what comes out the other
>>>>side. Not something I want to encourage users to do or burden them with.
>>>>
>>>>We have an API whose sole purpose is to accept image uploads. That
>>>>Rackspace identified a scaling pain point there is _good_. But why not
>>>>*solve* it for the user, instead of introduce more complexity?
>>>
>>>That's fair. I don't actually care which API we have, as long as it
>>>meets the other requirements.
>>>
>>>>
>>>>What I'd like to see is the upload image API given the ability to
>>>>respond with a URL that can be uploaded to using the object storage API
>>>>we already have in OpenStack. Exposing users to all of these operator
>>>>choices is just wasting their time. Just simply say "Oh, you want to
>>>>upload an image? Thats fine, please upload it as an object over there
>>>>and POST here again when it is ready to be imported." This will make
>>>>perfect sense to a user reading docs, and doesn't require them to grasp
>>>>an abstract concept like "tasks" when all they want to do is upload
>>>>their image.
>>>>
>>>
>>>And what would it do if the backing store for the image service
>>>isn't Swift or another object storage system that supports direct
>>>uploads? Return a URL that pointed back to itself, maybe?
>>
>>For those operators who don't have concerns about scaling the glance
>>API service to their users' demands, glance's image upload API works
>>perfectly well today.  The indirect approach is only meant to deal with
>>the situation where the operator expects a lot of really large images to
>>be uploaded simultaneously, and would like to take advantage of the Swift
>>API's rather rich set of features for making that a positive experience.
>>There is also a user benefit to using the Swift API, which is that a
>>segmented upload can more easily be resumed.
>
>Yes, BUT ...
>
>If there are going to be two legitimate ways to upload an image, that 
>needs to be discoverable so that scripts (or things like ansible or 
>razor or juju or terraform or *insert system tool here*) can 
>accomplish "please upload this here image file into this here cloud"

I don't think there's a problem with having 2 legitimate ways to
*upload* the image data but rather in those 2 ways not being always
deployed/enabled.

In Glance's v2, there are currently 2 ways to create images. Each of
these has a specific way to import - upload or downloading from - the
data. One of these create workflows has 2 ways to "attach" data to an
image - uploading or using locations. As far as Glance's public API
goes, I'd really like this to be shrunk down to just one way to create
images, and then allow those three useful ways of attaching data to an
image to be used through that single call, which is basically what we
have in V1.
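As a rough illustration of that "one create call, several ways to attach
data" shape, here is a hedged Python sketch. The function signature and
field names are hypothetical, not Glance's actual API:

```python
# Hypothetical sketch of a single create call accepting one of three ways
# to attach image data: direct upload, async copy-from, or an existing
# store location. All names here are illustrative only.
def create_image(name, data=None, copy_from=None, location=None):
    given = [m for m in (data, copy_from, location) if m is not None]
    if len(given) > 1:
        raise ValueError('attach data in at most one way')
    image = {'name': name, 'status': 'queued'}
    if data is not None:
        image.update(status='active', size=len(data))          # direct upload
    elif copy_from is not None:
        image.update(status='importing', copy_from=copy_from)  # async download
    elif location is not None:
        image.update(status='active', location=location)       # existing store
    return image
```

The point of the sketch is that the caller always hits the same create
call; only the optional data-attachment argument varies.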

>It's really not about the REST API itself. Literally zero percent of 
>the people are doing that. People use tools. Tools write to APIs. And 
>nobody who is running an OpenStack cloud should have to write their 
>own branded tools - that's a cost that's completely silly to bear. An 
>operator running an openstack cloud should be able to say to their 
>users "go use the ansible openstack modules" or "go use the juju 
>openstack provider"
>
>Which brings us back to your excellent point - both of these are 
>totally legitimate ways to upload to the cloud, except small clouds 
>often don't run swift, and large clouds may want to handle the 
>situation you mention and leverage swift. So how about:
>
>glance image-create my-great-image
>returns: 200 OK {
>  upload-url: 'https://example.com/some/url/location',
>  is_swift: False
>}
>
>OR
>
>glance image-create my-great-image
>returns: 200 OK {
>  upload-url: 'https://example.com/some/url/location',
>  is_swift: True
>}
>
>and if is_swift is true, then the user (or script) knows it can use
>the threaded swiftuploader.  If it's false, the user (or script) just
>uploads content to the URL. The process is completely sane, is pretty
>much the same for both types of cloud, and has one known and
>understandable either-or deployer difference, where each fork is open
>source and has a defined semantic.
>
>Details, of course - and I know there are at least 5 more to work out 
>- but hopefully that makes sense and doesn't disenfranchise anyone?


IMHO, the above is already complicated enough. It's introspectable,
sure, but it already says too much about the cloud that the user
shouldn't care about. For the sake of expanding that thought, though,
I'd say the user should just get a URL and the rest should be handled
transparently.
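For reference, the client-side branch Monty sketches above could look
roughly like this. The 'upload-url'/'is_swift' fields and both uploader
helpers are hypothetical stand-ins, not real client-library calls:

```python
# Hedged sketch of the either-or client flow: branch on a hypothetical
# 'is_swift' flag returned by image-create.
def swift_segmented_upload(url, path):
    return ('swift', url, path)  # would use a threaded/segmented uploader

def http_put(url, path):
    return ('http', url, path)   # would do a plain HTTP PUT of the bits

def upload_image(create_response, image_path):
    url = create_response['upload-url']
    if create_response['is_swift']:
        # Swift-backed cloud: segmented upload, resumable on failure.
        return swift_segmented_upload(url, image_path)
    # No Swift: just upload content to the returned URL.
    return http_put(url, image_path)
```

Either way the tool only ever sees "create, then upload to the URL you
were given", which is the interoperability property being argued for.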

We once talked about a possibility of allowing users to use
glance_store directly to upload the image to the cloud store when
scenarios like the one above (or Rackspace's specifically) exist.
Again, just throwing it out there for the sake of discussing scenarios
and use-cases rather than implementation details that I personally
don't care about right now.


>>Now, IMO HTTP has facilities for that too, it's just that glanceclient
>>(and lo, many HTTP clients) aren't well versed in those deeper, optional
>>pieces of HTTP. That is why Swift works the way it does, and I like
>>the idea of glance simply piggy backing on the experience of many years
>>of production refinement that are available and codified in Swift and
>>any other OpenStack Object Storage API implementations (like the CEPH
>>RADOS gateway).

++

Flavio

-- 
@flaper87
Flavio Percoco

From Neil.Jerram at metaswitch.com  Tue Sep 15 10:03:43 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Tue, 15 Sep 2015 10:03:43 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
 <4A2C098D-4D25-4DFE-AA78-152AF211FCB6@redhat.com>
Message-ID: <SN1PR02MB1695D9F771E44B7E65BDBA0D995C0@SN1PR02MB1695.namprd02.prod.outlook.com>

On 15/09/15 10:59, Ihar Hrachyshka wrote:
>> On 11 Sep 2015, at 23:12, Kyle Mestery <mestery at mestery.com> wrote:
>>
>> I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always on job.
>>
>> I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go, and I'm excited to watch them lead and help them in any way I can.
> Wow, it took me by surprise. :( I want you to know that your leadership over the last cycles was really game changing, and I am sure you should feel good leaving the position with such a lively, bright and open community as it is now. Thanks a lot for what you did for the neutron island!
>
> I am also very happy that you aren't stepping back from neutron, and maybe we'll see you serve in the position one more time in the future. ;)
>
> However hard it will be, since you set the bar so high, I believe this community will find someone to replace you in this role, and we'll see no earthquakes and disasters.
>
> See you in Tokyo! [You should give a talk about how to be an awesome PTL!]

Kyle, I'm afraid I won't be original, but I'd like to add my words to
the stream of thanks to you.  Although I've only been in the Neutron
community for a short time so far, I have felt very welcomed, and I
think that's largely due to the positive and open-minded feeling that
you have embodied.

    Neil



From christian at berendt.io  Tue Sep 15 10:06:16 2015
From: christian at berendt.io (Christian Berendt)
Date: Tue, 15 Sep 2015 12:06:16 +0200
Subject: [openstack-dev] [OpenStack-docs] [docs][ptl] Docs PTL Candidacy
In-Reply-To: <55F63197.9090606@lanabrindley.com>
References: <55F63197.9090606@lanabrindley.com>
Message-ID: <55F7ED98.2010406@berendt.io>

On 09/14/2015 04:31 AM, Lana Brindley wrote:
> I'd love to have your support for the PTL role for Mitaka, and I'm
> looking forward to continuing to grow the documentation team.

You have my support. Thanks for your great work during the current cycle.

Christian.

-- 
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537


From spradhan.17 at gmail.com  Tue Sep 15 10:45:38 2015
From: spradhan.17 at gmail.com (Sagar Pradhan)
Date: Tue, 15 Sep 2015 16:15:38 +0530
Subject: [openstack-dev] [Policy][Group-based-policy]
Message-ID: <CAGCtZnNLUqD7rLLSoiQjE7n1guY1hPgzo=X9t=hRKTaxyzdLZg@mail.gmail.com>

Hello,

We were exploring Group-Based Policy (GBP) for a project. We could find
CLI and REST API documentation for GBP.
Does GBP have a separate REST API that can be called directly?
From the documentation it seems that we can only use the CLI, Horizon and Heat.
Please point us to the CLI or REST API documentation for GBP.


Regards,
Sagar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/a54264fd/attachment.html>

From tleontovich at mirantis.com  Tue Sep 15 11:25:48 2015
From: tleontovich at mirantis.com (Tatyana Leontovich)
Date: Tue, 15 Sep 2015 14:25:48 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
 fuel-qa(devops) core
In-Reply-To: <CAFNR43NRtM3FrdBuPBFuEwLAjmGQfKvqfVhMotqnSdYK8YyCsA@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
 <CAFNR43NRtM3FrdBuPBFuEwLAjmGQfKvqfVhMotqnSdYK8YyCsA@mail.gmail.com>
Message-ID: <CAJWtyAMM-MOgazDkJNAS6FAiioXP3RCeE-W0LCNN+ZJi3-_C_w@mail.gmail.com>

+1

Regards,
Tatyana

On Tue, Sep 15, 2015 at 12:16 PM, Alexander Kostrikov <
akostrikov at mirantis.com> wrote:

> +1
>
> On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <
> aurlapova at mirantis.com> wrote:
>
>> Folks,
>> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>>
>> Denis spent three months on the Fuel BugFix team; his velocity was
>> between 150% and 200% per week. Thanks to his efforts we have finally
>> resolved those old issues with time sync and Ceph's clock skew. Denis's
>> ideas constantly help us improve our functional test suite.
>>
>> Fuelers, please vote for Denis!
>>
>> Nastya.
>>
>> [1]
>> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostrikov at mirantis.com
>
> *www.mirantis.com <http://www.mirantis.com/>*
> *www.mirantis.ru <http://www.mirantis.ru/>*
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/32c1efe7/attachment.html>

From mordred at inaugust.com  Tue Sep 15 11:32:17 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 15 Sep 2015 13:32:17 +0200
Subject: [openstack-dev] [glance] The current state of glance v2 in public
	clouds
Message-ID: <55F801C1.1050906@inaugust.com>

Hi!

In some of our other discussions, there have been musings such as 
"people want to..." or "people are concerned about..." Those are vague 
and unsubstantiated. Instead of "people" - I thought I'd enumerate 
actual data that I have personally empirically gathered.

I currently have an account on 12 different public clouds:

Auro
CityCloud
Dreamhost
Elastx
EnterCloudSuite
HP
OVH
Rackspace
RunAbove
Ultimum
UnitedStack
Vexxhost


(if, btw, you have a public cloud that I did not list above, please poke 
me and let's get me an account so that I can make sure you're 
listed/supported in os-client-config and also so that I don't make 
sweeping generalizations without you)

In case you care- those clouds cover US, Canada, Sweden, UK, France, 
Germany, Netherlands, Czech Republic and China.

Here's the rundown:

11 of the 12 clouds run Glance v2; 1 has only Glance v1
11 of the 12 clouds support image-create, 1 uses tasks
8 of the 12 support qcow2, 3 require raw, 1 requires vhd

Use this data as you will.
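For what it's worth, one way a client library can cope with the v1/v2 split above is to parse the version discovery document Glance serves at its API root and pick the newest usable version. A minimal sketch follows; the document shape and status strings are assumptions based on common deployments, and `newest_glance_version` is a hypothetical helper name.

```python
# Hedged sketch: choose the newest usable Glance API version from a
# version discovery document (shape assumed; real clouds may differ).

def newest_glance_version(discovery_doc):
    """Return the id of the newest CURRENT/SUPPORTED version, or None."""
    usable = [v["id"] for v in discovery_doc.get("versions", [])
              if v.get("status") in ("CURRENT", "SUPPORTED")]
    # ids look like "v2.3"; lexicographic max is good enough for v1 vs v2
    return max(usable) if usable else None


doc = {"versions": [
    {"id": "v2.3", "status": "CURRENT"},
    {"id": "v1.1", "status": "SUPPORTED"},
    {"id": "v1.0", "status": "DEPRECATED"},
]}
```

A client probing each of the twelve clouds this way could decide per-cloud whether to use the v2 image-create path or fall back to v1.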

Monty



From derekh at redhat.com  Tue Sep 15 11:38:36 2015
From: derekh at redhat.com (Derek Higgins)
Date: Tue, 15 Sep 2015 12:38:36 +0100
Subject: [openstack-dev] [TripleO] Current meeting timeslot
In-Reply-To: <55F18FD7.7070305@redhat.com>
References: <55F18FD7.7070305@redhat.com>
Message-ID: <55F8033C.6060701@redhat.com>

On 10/09/15 15:12, Derek Higgins wrote:
> Hi All,
>
> The current meeting slot for TripleO is every second Tuesday @ 1900 UTC,
> since that time slot was chosen a lot of people have joined the team and
> others have moved on. I'd like to revisit the timeslot to see if we can
> accommodate more people at the meeting (myself included).
>
> Sticking with Tuesday I see two other slots available that I think will
> accommodate more people currently working on TripleO,
>
> Here is the etherpad[1]; can you please add your name under the time
> slots that would suit you so we can get a good idea how a change would
> affect people.

Looks like moving the meeting to 1400 UTC will best accommodate 
everybody; I've proposed a patch to change our slot:

https://review.openstack.org/#/c/223538/

In case the etherpad disappears, here are the results:

Current Slot ( 1900 UTC, Tuesdays,  biweekly)
o Suits me fine - 2 votes
o May make it sometimes - 6 votes

Proposal 1 ( 1600 UTC, Tuesdays,  biweekly)
o Suits me fine - 7 votes
o May make it sometimes - 2 votes

Proposal 2 ( 1400 UTC, Tuesdays,  biweekly)
o Suits me fine - 9 votes
o May make it sometimes - 0 votes

I can't make any of these - 0 votes

thanks,
Derek.


>
> thanks,
> Derek.
>
>
> [1] - https://etherpad.openstack.org/p/SocOjvLr6o
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From iberezovskiy at mirantis.com  Tue Sep 15 11:39:09 2015
From: iberezovskiy at mirantis.com (Ivan Berezovskiy)
Date: Tue, 15 Sep 2015 14:39:09 +0300
Subject: [openstack-dev] [puppet] monasca,murano,mistral governance
In-Reply-To: <CAHr1CO83BO6yk2L=ic9CJ+WXmXBs3ExeqQHs8NpCEb08vkm4-Q@mail.gmail.com>
References: <55F73FB8.8050401@redhat.com>
 <CAHr1CO83BO6yk2L=ic9CJ+WXmXBs3ExeqQHs8NpCEb08vkm4-Q@mail.gmail.com>
Message-ID: <CAK=NR9XOQz7RKw5D0evWOaOt1Cwtfk8YqXYOc6-wWb3i+9_mxA@mail.gmail.com>

Emilien,

The puppet-murano module has a bunch of patches from Alexey Deryugin on
review [0], which implement most of the Murano deployment functionality.
The Murano project was added to the OpenStack namespace not so long ago,
which is why I suggest leaving murano-core rights on puppet-murano as
they are until all these patches are merged.
In any case, the murano-core team doesn't merge any patches without
OpenStack Puppet team approvals.

[0] -
https://review.openstack.org/#/q/status:open+project:openstack/puppet-murano+owner:%22Alexey+Deryugin+%253Caderyugin%2540mirantis.com%253E%22,n,z

2015-09-15 1:01 GMT+03:00 Matt Fischer <matt at mattfischer.com>:

> Emilien,
>
> I've discussed this with some of the Monasca puppet guys here who are
> doing most of the work. I think it probably makes sense to move to that
> model now, especially since the pace of development has slowed
> substantially. One blocker before to having it "big tent" was the lack of
> test coverage, so as long as we know that's a work in progress...  I'd also
> like to get Brad Kiein's thoughts on this, but he's out of town this week.
> I'll ask him to reply when he is back.
>
>
> On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi <emilien at redhat.com>
> wrote:
>
>> Hi,
>>
>> As a reminder, Puppet modules that are part of OpenStack are documented
>> here [1].
>>
>> I can see puppet-murano & puppet-mistral Gerrit permissions different
>> from other modules, because Mirantis helped to bootstrap the module a
>> few months ago.
>>
>> I think [2] the modules should be consistent in governance and only
>> Puppet OpenStack group should be able to merge patches for these modules.
>>
>> Same question for puppet-monasca: if Monasca team wants their module
>> under the big tent, I think they'll have to change Gerrit permissions to
>> only have Puppet OpenStack able to merge patches.
>>
>> [1]
>> http://governance.openstack.org/reference/projects/puppet-openstack.html
>> [2] https://review.openstack.org/223313
>>
>> Any feedback is welcome,
>> --
>> Emilien Macchi
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks, Ivan Berezovskiy
MOS Puppet Team Lead
at Mirantis <https://www.mirantis.com/>

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/f55f4dcf/attachment.html>

From gord at live.ca  Tue Sep 15 11:48:04 2015
From: gord at live.ca (gord chung)
Date: Tue, 15 Sep 2015 07:48:04 -0400
Subject: [openstack-dev] [oslo][versionedobjects][ceilometer] explain
 the benefits of ceilometer+versionedobjects
In-Reply-To: <55E8A73A.5020203@danplanet.com>
References: <BLU437-SMTP766811C9632CFCC37305CDDE6F0@phx.gbl>
 <55E0910F.2080006@danplanet.com>
 <BLU437-SMTP78FC2B58548051114A0A33DE6E0@phx.gbl>
 <55E0A05E.6080100@danplanet.com>
 <BLU437-SMTP9423D46199E6D36A11BD8EDE680@phx.gbl>
 <55E8A73A.5020203@danplanet.com>
Message-ID: <BLU436-SMTP129EB1BDFC347484D5BB196DE5C0@phx.gbl>



On 03/09/2015 4:02 PM, Dan Smith wrote:
>> - do we need to migrate the db to handle a different set of
>> attributes and what happens for nosql dbs?
> No, Nova made no schema changes as a result of moving to objects.
>
so i don't really understand how this part works. if i have two 
collectors -- one collector writes v1 schema, one writes v2 schema -- 
how do they both write to the same db without anything changing in the 
db? presumably the db would only be configured to know how to store only 
one version?
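For what it's worth, a toy sketch of the pattern Dan seems to be describing (hypothetical names, NOT the real oslo.versionedobjects API): the object layer, not the database, absorbs the version difference, backporting newer fields when serializing for an older target, so the storage schema itself never has to change.

```python
# Illustrative only: a v1.1 writer can emit a v1.0-shaped primitive for
# consumers (or a storage layer) that only understand the older schema.

class SampleObject:
    VERSION = "1.1"  # 1.1 added the optional 'unit' field

    def __init__(self, name, volume, unit=None):
        self.name = name
        self.volume = volume
        self.unit = unit

    def obj_to_primitive(self, target_version="1.1"):
        """Serialize for a consumer that only knows target_version."""
        data = {"name": self.name, "volume": self.volume}
        if target_version >= "1.1":  # string compare is fine for this toy
            data["unit"] = self.unit
        return {"version": target_version, "data": data}


new_collector = SampleObject("cpu_util", 42.0, unit="%")
old_wire_form = new_collector.obj_to_primitive("1.0")  # no 'unit' field
new_wire_form = new_collector.obj_to_primitive("1.1")
```

So the two collectors never write two schemas: the v2 collector downgrades its objects to whatever the deployment's storage layer is pinned at.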

cheers,

-- 
gord



From rmoats at us.ibm.com  Tue Sep 15 11:31:20 2015
From: rmoats at us.ibm.com (Ryan Moats)
Date: Tue, 15 Sep 2015 06:31:20 -0500
Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate
	and	maintainability
In-Reply-To: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
References: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
Message-ID: <201509151158.t8FBwUa6021819@d01av01.pok.ibm.com>

I couldn't have said it better, Sean.

Ryan Moats

"Sean M. Collins" <sean at coreitpro.com> wrote on 09/14/2015 05:01:03 PM:

> From: "Sean M. Collins" <sean at coreitpro.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 09/14/2015 05:01 PM
> Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate and
> maintainability
>
> [adding neutron tag to subject and resending]
>
> Hi,
>
> Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
> at the QA sprint in Fort Collins. Earlier today there was a discussion
> about the failure rate about the DVR job, and the possible impact that
> it is having on the gate.
>
> Ryan has a good patch up that shows the failure rates over time:
>
> https://review.openstack.org/223201
>
> To view the graphs, you go over into your neutron git repo, and open the
> .html files that are present in doc/dashboards - which should open up
> your browser and display the Graphite query.
>
> Doug put up a patch to change the DVR job to be non-voting while we
> determine the cause of the recent spikes:
>
> https://review.openstack.org/223173
>
> There was a good discussion after pushing the patch, revolving around
> the need for Neutron to have DVR, to fit operational and reliability
> requirements, and help transition away from Nova-Network by providing
> one of many solutions similar to Nova's multihost feature.  I'm skipping
> over a huge amount of context about the Nova-Network and Neutron work,
> since that is a big and ongoing effort.
>
> DVR is an important feature to have, and we need to ensure that the job
> that tests DVR has a high pass rate.
>
> One thing that I think we need, is to form a group of contributors that
> can help with the DVR feature in the immediate term to fix the current
> bugs, and longer term maintain the feature. It's a big task and I don't
> believe that a single person or company can or should do it by themselves.
>
> The L3 group is a good place to start, but I think that even within the
> L3 team we need dedicated and diverse group of people who are interested
> in maintaining the DVR feature.
>
> Without this, I think the DVR feature will start to bit-rot and that
> will have a significant impact on our ability to recommend Neutron as a
> replacement for Nova-Network in the future.
>
> --
> Sean M. Collins
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/aa694473/attachment.html>

From rsblendido at suse.com  Tue Sep 15 12:29:01 2015
From: rsblendido at suse.com (Rossella Sblendido)
Date: Tue, 15 Sep 2015 14:29:01 +0200
Subject: [openstack-dev] [Neutron] PTL Candidacy
Message-ID: <1442320141.4495.6.camel@suse.com>

Hi everyone,

I decided to run for the Neutron PTL position for the Mitaka release
cycle.

I have been contributing to Neutron since Havana and I am a member of
the control plane core review team. During these years I have touched
most parts of the Neutron code and in the last cycle my main focus has
been to restructure the OVS agent, adding the ability to use events
provided by ovsdb client and making it more resilient to failures.

Mentoring people and spreading knowledge about Neutron have been high
priorities for me in these years. I am a regular speaker at OpenStack
events (local meetups, Openstack days and summits), where my talks have
had high attendance and good feedback.
I have been a mentor for the Outreachy program [1] and the OpenStack
upstream training.

If I become PTL these are my main objectives for Mitaka:

* Make the community even more welcoming.
Neutron is still facing a big challenge in terms of review bandwidth.
Good features can't get merged because of this limit. Even if the
introduction of the Lieutenant system [2] has improved scalability, we
still need to create a better way to share knowledge.
In addition to improving the existing documentation and tutorials, I'd
like to create a team of mentors who can help new contributors (the
quality of proposed patches should increase, so the review time needed
to merge them should decrease) and train new reviewers (more good
people, more bandwidth).

* Keep working hard to increase the stability.
The introduction of functional tests and full stack tests was great, we
just need to keep going in that direction. Moreover, I'd love to produce
and store some benchmark data so that we can also keep track of the
performance of Neutron over time.

* Continue getting feedback from operators.
I think it's very important to hear the opinions of the actual Neutron
users and to understand their concerns.

* Pay down technical debt.
Introduce oslo versioned objects to improve RPC and keep refactoring
Neutron code to make it more readable and understandable.

Before proposing my candidacy I made sure I have the full support of my
employer for this role.

Neutron has a great team of contributors and great leaders, it would be
an honor for me to be able to coordinate our efforts and push Neutron
forward.


Thanks for reading so far and for considering this candidacy,

Rossella (rossella_s)

[1] https://wiki.openstack.org/wiki/Outreachy
[2]
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy




From doug at doughellmann.com  Tue Sep 15 12:30:45 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 08:30:45 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4D698F@G9W0745.americas.hpqcorp.net>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D6494@G9W0745.americas.hpqcorp.net>
 <1442247798-sup-5628@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D698F@G9W0745.americas.hpqcorp.net>
Message-ID: <1442319885-sup-4931@lrrr.local>

Excerpts from Kuvaja, Erno's message of 2015-09-15 09:43:26 +0000:
> > -----Original Message-----
> > From: Doug Hellmann [mailto:doug at doughellmann.com]
> > Sent: Monday, September 14, 2015 5:40 PM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > 
> > Excerpts from Kuvaja, Erno's message of 2015-09-14 15:02:59 +0000:
> > > > -----Original Message-----
> > > > From: Flavio Percoco [mailto:flavio at redhat.com]
> > > > Sent: Monday, September 14, 2015 1:41 PM
> > > > To: OpenStack Development Mailing List (not for usage questions)
> > > > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > > >
> > > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > > > >
> > > > >I. DefCore
> > > > >
> > > > >The primary issue that attracted my attention was the fact that
> > > > >DefCore cannot currently include an image upload API in its
> > > > >interoperability test suite, and therefore we do not have a way to
> > > > >ensure interoperability between clouds for users or for trademark
> > > > >use. The DefCore process has been long, and at times confusing,
> > > > >even to those of us following it sort of closely. It's not entirely
> > > > >surprising that some projects haven't been following the whole
> > > > >time, or aren't aware of exactly what the whole thing means. I have
> > > > >proposed a cross-project summit session for the Mitaka summit to
> > > > >address this need for communication more broadly, but I'll try to
> > summarize a bit here.
> > > >
> > >
> > > Looking at how different OpenStack-based public clouds limit or fully
> > > prevent their users from uploading images to their deployments, I'm not
> > > convinced that Image Upload should be included in this definition.
> > 
> > The problem with that approach is that it means end consumers of those
> > clouds cannot write common tools that include image uploads, which is a
> > frequently used/desired feature. What makes that feature so special that we
> > don't care about it for interoperability?
> > 
> 
I'm not sure it really is so special, API- or technical-wise; it's just the one that was lifted onto the pedestal in this discussion.

OK. I'm concerned that my message of "we need an interoperable image
upload API" is sometimes being met with various versions of "that's not
possible." I think that's wrong, and we should fix it. I also think it's
possible to make the API consistent and still support background tasks,
image scanning, and other things deployers want.

> <CLIP>
> > > >
> > > > The task upload process you're referring to is the one that uses the
> > > > `import` task, which allows you to download an image from an
> > > > external source, asynchronously, and import it in Glance. This is
> > > > the old `copy-from` behavior that was moved into a task.
> > > >
> > > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > > community will disagree - is that I don't consider tasks to be a
> > > > public API. That is to say, I would expect tasks to be an internal
> > > > API used by cloud admins to perform some actions (based on its
> > > > current implementation). Eventually, some of these tasks could be
> > > > triggered from the external API but as background operations that
> > > > are triggered by the well-known public ones and not through the task
> > API.
> > > >
> > > > Ultimately, I believe end-users of the cloud simply shouldn't care
> > > > about what tasks are or aren't and more importantly, as you
> > > > mentioned later in the email, tasks make clouds not interoperable.
> > > > I'd be pissed if my public image service would ask me to learn about tasks
> > to be able to use the service.
> > >
> > > I'd like to bring another argument here. I think our Public Images API should
> > behave consistently regardless if there is tasks enabled in the deployment or
> > not and with what plugins. This meaning that _if_ we expect glance upload
> > work over the POST API and that endpoint is available in the deployment I
> > would expect a) my image hash to match with the one the cloud returns b)
> > I'd assume all or none of the clouds rejecting my image if it gets flagged by
> > Vendor X virus definitions and c) it being bootable across the clouds taken it's
> > in supported format. On the other hand if I get told by the vendor that I need
> > to use cloud specific task that accepts only ova compliant image packages and
> > that the image will be checked before acceptance, my expectations are quite
> > different and I would expect all that happening outside of the standard API
> > as it's not consistent behavior.
> > 
> > I'm not sure what you're arguing. Is it not possible to have a background
> > process import an image without modifying it?
> 
> Absolutely not my point, sorry. What I was trying to say here is that I would rather have multiple ways of getting images into the clouds I use if that means I can predict exactly how those different ways behave on each deployment that has them enabled. My thinking about tasks has been more about the processing side than about a different way to do upload. My understanding has been that the intention behind tasks was to provide extra functionality like introspection, conversion, and ova-import (which is currently under work), much of which would be beneficial for the end user of the cloud; but having that happen to your image as a side effect of image upload, without you knowing or being able to avoid it, is not so desirable a state.

Ah, yes. But most people don't care about how all of the internals work.
They just want to upload an image, and they don't want to have to learn
15 different ways to do it. As I've said, a pull API that uses a
background task to import the image is fine, as long as the inputs and
outputs of the API itself are the same all the time. It's also fine to
have deployers add background steps, to some extent. If the API works
and a deployer's background job breaks my image, I'll be upset, but as a
user I can point directly to them in that case. If the APIs are
different everywhere, I'm not even sure where to start.
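The contract described above can be sketched in a few lines (all names here are hypothetical, not Glance's actual implementation): the user-facing upload call takes the same inputs and returns the same outputs whether the deployer processes the image inline or via a background task; only the pluggable backend differs.

```python
# Sketch: identical user-facing contract over two deployer-chosen
# ingestion strategies (synchronous vs. deferred background task).
import hashlib


class InlineBackend:
    def __init__(self):
        self.store = {}

    def ingest(self, image_id, data):
        self.store[image_id] = data          # synchronous write


class TaskBackend:
    def __init__(self):
        self.store = {}
        self.queue = []

    def ingest(self, image_id, data):
        self.queue.append((image_id, data))  # deferred to a background task

    def run_tasks(self):
        while self.queue:
            image_id, data = self.queue.pop(0)
            self.store[image_id] = data


def upload_image(backend, name, data):
    """Same inputs, same outputs, for every backend."""
    image_id = hashlib.sha256(name.encode()).hexdigest()[:12]
    backend.ingest(image_id, data)
    return {"id": image_id, "checksum": hashlib.md5(data).hexdigest()}


inline, tasky = InlineBackend(), TaskBackend()
result_a = upload_image(inline, "cirros", b"fake image bits")
result_b = upload_image(tasky, "cirros", b"fake image bits")
tasky.run_tasks()
```

The point is that `result_a` and `result_b` are indistinguishable to the caller; a common tool written against this API works on both deployments.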

Doug


From doug at doughellmann.com  Tue Sep 15 12:34:17 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 08:34:17 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442275194-sup-3621@fewbar.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <1442250235-sup-1646@lrrr.local>
 <1442261641-sup-9577@fewbar.com> <1442263439-sup-913@lrrr.local>
 <1442275194-sup-3621@fewbar.com>
Message-ID: <1442320312-sup-3612@lrrr.local>

Excerpts from Clint Byrum's message of 2015-09-14 17:06:44 -0700:
> Excerpts from Doug Hellmann's message of 2015-09-14 13:46:16 -0700:
> > Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
> > > Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
> > > > Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> > > > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > > > > >
> > > > > >After having some conversations with folks at the Ops Midcycle a
> > > > > >few weeks ago, and observing some of the more recent email threads
> > > > > >related to glance, glance-store, the client, and the API, I spent
> > > > > >last week contacting a few of you individually to learn more about
> > > > > >some of the issues confronting the Glance team. I had some very
> > > > > >frank, but I think constructive, conversations with all of you about
> > > > > >the issues as you see them. As promised, this is the public email
> > > > > >thread to discuss what I found, and to see if we can agree on what
> > > > > >the Glance team should be focusing on going into the Mitaka summit
> > > > > >and development cycle and how the rest of the community can support
> > > > > >you in those efforts.
> > > > > >
> > > > > >I apologize for the length of this email, but there's a lot to go
> > > > > >over. I've identified 2 high priority items that I think are critical
> > > > > >for the team to be focusing on starting right away in order to use
> > > > > >the upcoming summit time effectively. I will also describe several
> > > > > >other issues that need to be addressed but that are less immediately
> > > > > >critical. First the high priority items:
> > > > > >
> > > > > >1. Resolve the situation preventing the DefCore committee from
> > > > > >   including image upload capabilities in the tests used for trademark
> > > > > >   and interoperability validation.
> > > > > >
> > > > > >2. Follow through on the original commitment of the project to
> > > > > >   provide an image API by completing the integration work with
> > > > > >   nova and cinder to ensure V2 API adoption.
> > > > > 
> > > > > Hi Doug,
> > > > > 
> > > > > First and foremost, I'd like to thank you for taking the time to dig
> > > > > into these issues, and for reaching out to the community seeking for
> > > > > information and a better understanding of what the real issues are. I
> > > > > can imagine how much time you had to dedicate on this and I'm glad you
> > > > > did.
> > > > > 
> > > > > Now, to your email, I very much agree with the priorities you
> > > > > mentioned above and I'd like for, whomever will win Glance's PTL
> > > > > election, to bring focus back on that.
> > > > > 
> > > > > Please, find some comments in-line for each point:
> > > > > 
> > > > > >
> > > > > >I. DefCore
> > > > > >
> > > > > >The primary issue that attracted my attention was the fact that
> > > > > >DefCore cannot currently include an image upload API in its
> > > > > >interoperability test suite, and therefore we do not have a way to
> > > > > >ensure interoperability between clouds for users or for trademark
> > > > > >use. The DefCore process has been long, and at times confusing,
> > > > > >even to those of us following it sort of closely. It's not entirely
> > > > > >surprising that some projects haven't been following the whole time,
> > > > > >or aren't aware of exactly what the whole thing means. I have
> > > > > >proposed a cross-project summit session for the Mitaka summit to
> > > > > >address this need for communication more broadly, but I'll try to
> > > > > >summarize a bit here.
> > > > > 
> > > > > +1
> > > > > 
> > > > > I think it's quite sad that some projects, especially those considered
> > > > > to be part of the `starter-kit:compute`[0], don't follow closely
> > > > > what's going on in DefCore. I personally consider this a task PTLs
> > > > > should incorporate in their role duties. I'm glad you proposed such
> > > > > session, I hope it'll help raising awareness of this effort and it'll
> > > > > help moving things forward on that front.
> > > > 
> > > > Until fairly recently a lot of the discussion was around process
> > > > and priorities for the DefCore committee. Now that those things are
> > > > settled, and we have some approved policies, it's time to engage
> > > > more fully.  I'll be working during Mitaka to improve the two-way
> > > > communication.
> > > > 
> > > > > 
> > > > > >
> > > > > >DefCore is using automated tests, combined with business policies,
> > > > > >to build a set of criteria for allowing trademark use. One of the
> > > > > >goals of that process is to ensure that all OpenStack deployments
> > > > > >are interoperable, so that users who write programs that talk to
> > > > > >one cloud can use the same program with another cloud easily. This
> > > > > >is a *REST API* level of compatibility. We cannot insert cloud-specific
> > > > > >behavior into our client libraries, because not all cloud consumers
> > > > > >will use those libraries to talk to the services. Similarly, we
> > > > > >can't put the logic in the test suite, because that defeats the
> > > > > >entire purpose of making the APIs interoperable. For this level of
> > > > > >compatibility to work, we need well-defined APIs, with a long support
> > > > > >period, that work the same no matter how the cloud is deployed. We
> > > > > >need the entire community to support this effort. From what I can
> > > > > >tell, that is going to require some changes to the current Glance
> > > > > >API to meet the requirements. I'll list those requirements, and I
> > > > > >hope we can discuss them to a degree that ensures everyone understands
> > > > > >them. I don't want this email thread to get bogged down in
> > > > > >implementation details or API designs, though, so let's try to keep
> > > > > >the discussion at a somewhat high level, and leave the details for
> > > > > >specs and summit discussions. I do hope you will correct any
> > > > > >misunderstandings or misconceptions, because unwinding this as an
> > > > > >outside observer has been quite a challenge and it's likely I have
> > > > > >some details wrong.
> > > > > >
> > > > > >As I understand it, there are basically two ways to upload an image
> > > > > >to glance using the V2 API today. The "POST" API pushes the image's
> > > > > >bits through the Glance API server, and the "task" API instructs
> > > > > >Glance to download the image separately in the background. At one
> > > > > >point apparently there was a bug that caused the results of the two
> > > > > >different paths to be incompatible, but I believe that is now fixed.
> > > > > >However, the two separate APIs each have different issues that make
> > > > > >them unsuitable for DefCore.
> > > > > >
> > > > > >The DefCore process relies on several factors when designating APIs
> > > > > >for compliance. One factor is the technical direction, as communicated
> > > > > >by the contributor community -- that's where we tell them things
> > > > > >like "we plan to deprecate the Glance V1 API". In addition to the
> > > > > >technical direction, DefCore looks at the deployment history of an
> > > > > >API. They do not want to require deploying an API if it is not seen
> > > > > >as widely usable, and they look for some level of existing adoption
> > > > > >by cloud providers and distributors as an indication of that the
> > > > > >API is desired and can be successfully used. Because we have multiple
> > > > > >upload APIs, the message we're sending on technical direction is
> > > > > >weak right now, and so they have focused on deployment considerations
> > > > > >to resolve the question.
> > > > > 
> > > > > The task upload process you're referring to is the one that uses the
> > > > > `import` task, which allows you to download an image from an external
> > > > > source, asynchronously, and import it in Glance. This is the old
> > > > > `copy-from` behavior that was moved into a task.
> > > > > 
> > > > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > > > community will disagree - is that I don't consider tasks to be a
> > > > > public API. That is to say, I would expect tasks to be an internal API
> > > > > used by cloud admins to perform some actions (based on its current
> > > > > implementation). Eventually, some of these tasks could be triggered
> > > > > from the external API but as background operations that are triggered
> > > > > by the well-known public ones and not through the task API.
> > > > 
> > > > Does that mean it's more of an "admin" API?
> > > > 
> > > 
> > > I think it is basically just a half-way done implementation that is
> > > exposed directly to users of Rackspace Cloud and, AFAIK, nobody else.
> > > When last I tried to make integration tests in shade that exercised the
> > > upstream glance task import code, I was met with an implementation that
> > > simply did not work, because the pieces behind it had never been fully
> > > implemented upstream. That may have been resolved, but in the process
> > > of trying to write tests and make this work, I discovered a system that
> > > made very little sense from a user standpoint. I want to upload an
> > > image, why do I want a task?!
> > > 
> > > > > 
> > > > > Ultimately, I believe end-users of the cloud simply shouldn't care
> > > > > about what tasks are or aren't and more importantly, as you mentioned
> > > > > later in the email, tasks make clouds not interoperable. I'd be pissed
> > > > > if my public image service asked me to learn about tasks to be
> > > > > able to use the service.
> > > > 
> > > > It would be OK if a public API set up to do a specific task returned a
> > > > task ID that could be used with a generic task API to check status, etc.
> > > > So the idea of tasks isn't completely bad, it's just too vague as it's
> > > > exposed right now.
> > > > 
> > > 
> > > I think it is a concern, because it is assuming users will want to do
> > > generic things with a specific API. This turns into a black-box game where
> > > the user shoves a task in and then waits to see what comes out the other
> > > side. Not something I want to encourage users to do or burden them with.
> > > 
> > > We have an API whose sole purpose is to accept image uploads. That
> > > Rackspace identified a scaling pain point there is _good_. But why not
> > > *solve* it for the user, instead of introducing more complexity?
> > 
> > That's fair. I don't actually care which API we have, as long as it
> > meets the other requirements.
> > 
> > > 
> > > What I'd like to see is the upload image API given the ability to
> > > respond with a URL that can be uploaded to using the object storage API
> > > we already have in OpenStack. Exposing users to all of these operator
> > > choices is just wasting their time. Just simply say "Oh, you want to
> > > upload an image? Thats fine, please upload it as an object over there
> > > and POST here again when it is ready to be imported." This will make
> > > perfect sense to a user reading docs, and doesn't require them to grasp
> > > an abstract concept like "tasks" when all they want to do is upload
> > > their image.
> > > 
> > 
> > And what would it do if the backing store for the image service
> > isn't Swift or another object storage system that supports direct
> > uploads? Return a URL that pointed back to itself, maybe?
> 
> For those operators who don't have concerns about scaling the glance
> API service to their users' demands, glance's image upload API works
> perfectly well today.  The indirect approach is only meant to deal with
> the situation where the operator expects a lot of really large images to
> be uploaded simultaneously, and would like to take advantage of the Swift
> API's rather rich set of features for making that a positive experience.
> There is also a user benefit to using the Swift API, which is that a
> segmented upload can more easily be resumed.
> 
> Now, IMO HTTP has facilities for that too, it's just that glanceclient
> (and lo, many HTTP clients) aren't well versed in those deeper, optional
> pieces of HTTP. That is why Swift works the way it does, and I like
> the idea of glance simply piggy backing on the experience of many years
> of production refinement that are available and codified in Swift and
> any other OpenStack Object Storage API implementations (like the CEPH
> RADOS gateway).
> 

That's fine, as an option. But we have existing business requirements
(as differentiated from technical requirements) that constrain us
and prevent us from inserting a hard dependency from glance to
swift. We could even make it a required option, so that using the
"platform" trademark includes that behavior. But we must support a
version of glance that does not depend on swift for the "compute"
trademark program as it is defined today.

Doug


From doug at doughellmann.com  Tue Sep 15 12:46:53 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 08:46:53 -0400
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <20150915085404.GB9301@redhat.com>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com> <1442250235-sup-1646@lrrr.local>
 <20150915085404.GB9301@redhat.com>
Message-ID: <1442320847-sup-5857@lrrr.local>

Excerpts from Flavio Percoco's message of 2015-09-15 10:54:04 +0200:
> On 14/09/15 15:51 -0400, Doug Hellmann wrote:
> >Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> >> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> 
> >> This is definitely unfortunate. I believe a good step forward for this
> >> discussion would be to create a list of issues related to uploading
> >> images and see how those issues can be addressed. The result from that
> >> work might be that it's not recommended to make that endpoint public
> >> but again, without going through the issues, it'll be hard to
> >> understand how we can improve this situation. I expect most of these
> >> issues to have a security impact.
> >
> >A report like that would be good to have. Can someone on the Glance team
> >volunteer to put it together?
> 
> Here's an attempt from someone that uses clouds but doesn't run any:
> 
> - Image authenticity (we recently landed code that allows for having
>   signed images)
> - Quota management: Glance's quota management is very basic and it
>   allows for setting quotas at a per-user level[1]
> - Bandwidth requirements to upload images
> - (add more here)

That seems like a good start. You could add a desire to optionally
take advantage of advanced object store services like Swift and
Ceph.

> >> The mistake here could be that the library should've been refactored
> >> *before* adopting it in Glance.
> >
> >The fact that there is disagreement over the intent of the library makes
> >me think the plan for creating it wasn't sufficiently circulated or
> >detailed.
> 
> There wasn't much disagreement when it was created. Some folks think
> the use-cases for the library don't exist anymore and some folks that
> participated in this effort are not part of OpenStack anymore.

OK. There is definite desire outside of the Glance team to have
*some* library for talking to the image store directly. The evidence
for that is the specs in nova related to a "seam" library, and the
requests by some Cinder driver authors to have something similar.
From what I can tell, everyone else thought that's what glance-store
was going to be, but it's not quite what is needed.  It seems like
the use cases need to be revisited so the requirements can be
documented properly and then we can figure out what steps to take
with the existing code.

Doug


From derekh at redhat.com  Tue Sep 15 13:04:47 2015
From: derekh at redhat.com (Derek Higgins)
Date: Tue, 15 Sep 2015 14:04:47 +0100
Subject: [openstack-dev] [TripleO] Current meeting timeslot
In-Reply-To: <55F8033C.6060701@redhat.com>
References: <55F18FD7.7070305@redhat.com> <55F8033C.6060701@redhat.com>
Message-ID: <55F8176F.4060809@redhat.com>



On 15/09/15 12:38, Derek Higgins wrote:
> On 10/09/15 15:12, Derek Higgins wrote:
>> Hi All,
>>
>> The current meeting slot for TripleO is every second Tuesday @ 1900 UTC,
>> since that time slot was chosen a lot of people have joined the team and
>> others have moved on. I'd like to revisit the timeslot to see if we can
>> accommodate more people at the meeting (myself included).
>>
>> Sticking with Tuesday I see two other slots available that I think will
>> accommodate more people currently working on TripleO,
>>
>> Here is the etherpad[1], can you please add your name under the time
>> slots that would suit you so we can get a good idea how a change would
>> affect people.
>
> Looks like moving the meeting to 1400 UTC will best accommodate
> everybody; I've proposed a patch to change our slot
>
> https://review.openstack.org/#/c/223538/

This has merged, so as of next Tuesday the TripleO meeting will be at 
1400 UTC.

Hope to see ye there

>
> In case the etherpad disappears here was the results
>
> Current Slot ( 1900 UTC, Tuesdays,  biweekly)
> o Suits me fine - 2 votes
> o May make it sometimes - 6 votes
>
> Proposal 1 ( 1600 UTC, Tuesdays,  biweekly)
> o Suits me fine - 7 votes
> o May make it sometimes - 2 votes
>
> Proposal 2 ( 1400 UTC, Tuesdays,  biweekly)
> o Suits me fine - 9 votes
> o May make it sometimes - 0 votes
>
> I can't make any of these - 0 votes
>
> thanks,
> Derek.
>
>
>>
>> thanks,
>> Derek.
>>
>>
>> [1] - https://etherpad.openstack.org/p/SocOjvLr6o
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From major at mhtx.net  Tue Sep 15 13:10:14 2015
From: major at mhtx.net (Major Hayden)
Date: Tue, 15 Sep 2015 08:10:14 -0500
Subject: [openstack-dev] [openstack-ansible] PTL Non-Candidacy
In-Reply-To: <1442264539677.79104@RACKSPACE.COM>
References: <1442264539677.79104@RACKSPACE.COM>
Message-ID: <55F818B6.7050405@mhtx.net>

On 09/14/2015 04:02 PM, Kevin Carter wrote:
> TL;DR - I'm sending this out to announce that I won't be running for PTL of the OpenStack-Ansible project in the upcoming cycle. Although I won't be running for PTL, with community support, I intend to remain an active contributor, just with more time spent cross-project and in other upstream communities.

I've only been working on the project for a short while, but I really appreciate your hard work and consideration!

--
Major Hayden


From stuart.mclaren at hp.com  Tue Sep 15 13:13:50 2015
From: stuart.mclaren at hp.com (stuart.mclaren at hp.com)
Date: Tue, 15 Sep 2015 14:13:50 +0100 (IST)
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
Message-ID: <alpine.DEB.2.11.1509151411370.14097@tc-unix2.emea.hpqcorp.net>

> 
> After having some conversations with folks at the Ops Midcycle a
> few weeks ago, and observing some of the more recent email threads
> related to glance, glance-store, the client, and the API, I spent
> last week contacting a few of you individually to learn more about
> some of the issues confronting the Glance team. I had some very
> frank, but I think constructive, conversations with all of you about
> the issues as you see them. As promised, this is the public email
> thread to discuss what I found, and to see if we can agree on what
> the Glance team should be focusing on going into the Mitaka summit
> and development cycle and how the rest of the community can support
> you in those efforts.

Doug, thanks for reaching out here.

I've been looking into the existing task-based-upload that Doug mentions:
can anyone clarify the following?

On a default devstack install you can do this 'task' call:

http://paste.openstack.org/show/462919

as an alternative to the traditional image upload (the bytes are streamed
from the URL).

It's not clear to me if this is just an interesting example of the kind
of operator specific thing you can configure tasks to do, or a real
attempt to define an alternative way to upload images.

The change which added it [1] calls it a 'sample'.

Is it just an example, or is it a second 'official' upload path?
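For anyone comparing the two paths, here is a rough sketch of what each one
issues against the v2 API, shown as plain request descriptions (the image
name and formats are illustrative, and the task body follows the sample in
[1] as I read it):

```python
# Sketch of the two Glance v2 upload paths discussed in this thread.
# The dicts describe the HTTP calls a client would make; names, formats,
# and the source URL are illustrative assumptions, not fixed values.

def traditional_upload_calls(name):
    """Path 1: register the image, then stream the bytes through glance-api."""
    create = {
        "method": "POST",
        "path": "/v2/images",
        "json": {"name": name,
                 "disk_format": "qcow2",
                 "container_format": "bare"},
    }
    upload = {
        "method": "PUT",
        # {image_id} comes from the create response
        "path": "/v2/images/{image_id}/file",
        "headers": {"Content-Type": "application/octet-stream"},
    }
    return [create, upload]


def task_import_call(name, source_url):
    """Path 2: ask Glance to fetch the bytes itself, in the background."""
    return {
        "method": "POST",
        "path": "/v2/tasks",
        "json": {
            "type": "import",
            "input": {
                "import_from": source_url,
                "import_from_format": "qcow2",
                "image_properties": {"name": name,
                                     "disk_format": "qcow2",
                                     "container_format": "bare"},
            },
        },
    }
```

With path 2 the client then polls the task until it succeeds or fails, which
is exactly the extra concept users object to earlier in this thread.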

-Stuart

[1] https://review.openstack.org/#/c/44355


From emilien at redhat.com  Tue Sep 15 13:32:52 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 15 Sep 2015 09:32:52 -0400
Subject: [openstack-dev] [puppet] monasca,murano,mistral governance
In-Reply-To: <CAK=NR9XOQz7RKw5D0evWOaOt1Cwtfk8YqXYOc6-wWb3i+9_mxA@mail.gmail.com>
References: <55F73FB8.8050401@redhat.com>
 <CAHr1CO83BO6yk2L=ic9CJ+WXmXBs3ExeqQHs8NpCEb08vkm4-Q@mail.gmail.com>
 <CAK=NR9XOQz7RKw5D0evWOaOt1Cwtfk8YqXYOc6-wWb3i+9_mxA@mail.gmail.com>
Message-ID: <55F81E04.50302@redhat.com>



On 09/15/2015 07:39 AM, Ivan Berezovskiy wrote:
> Emilien,
> 
> The puppet-murano module has a bunch of patches from Alexey Deryugin on
> review [0], which implements most of all Murano deployment stuff.
> The Murano project was added to the OpenStack namespace not so long ago;
> that's why I suggest keeping murano-core rights on puppet-murano until
> all these patches are merged.
> Anyway, murano-core team doesn't merge any patches without OpenStack
> Puppet team approvals.

[repeating what I said on IRC so it's official and public]

I don't think Murano team needs to be core on a Puppet module.
All OpenStack modules are managed by one group, this is how we worked
until here and I don't think we want to change that.
Project teams (Keystone, Nova, Neutron, etc) already use -1/+1 to review
Puppet code when they want to share feedback and they are very valuable,
we actually need it.
I don't see why we would make an exception for Murano. I would like the Murano
team to continue to give their valuable feedback by -1/+1 patches but
it's the Puppet OpenStack team duty to decide if they merge the code or not.

This collaboration is important and we need your experience to create
new modules, but please understand how Puppet OpenStack governance works
now.

Thanks,


> [0]
> - https://review.openstack.org/#/q/status:open+project:openstack/puppet-murano+owner:%22Alexey+Deryugin+%253Caderyugin%2540mirantis.com%253E%22,n,z
> 
> 2015-09-15 1:01 GMT+03:00 Matt Fischer <matt at mattfischer.com
> <mailto:matt at mattfischer.com>>:
> 
>     Emilien,
> 
>     I've discussed this with some of the Monasca puppet guys here who
>     are doing most of the work. I think it probably makes sense to move
>     to that model now, especially since the pace of development has
>     slowed substantially. One blocker before to having it "big tent" was
>     the lack of test coverage, so as long as we know that's a work in
>     progress...  I'd also like to get Brad Kiein's thoughts on this, but
>     he's out of town this week. I'll ask him to reply when he is back.
> 
> 
>     On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi <emilien at redhat.com
>     <mailto:emilien at redhat.com>> wrote:
> 
>         Hi,
> 
>         As a reminder, Puppet modules that are part of OpenStack are
>         documented
>         here [1].
> 
>         I can see puppet-murano & puppet-mistral Gerrit permissions
>         different
>         from other modules, because Mirantis helped to bootstrap the
>         module a
>         few months ago.
> 
>         I think [2] the modules should be consistent in governance and only
>         Puppet OpenStack group should be able to merge patches for these
>         modules.
> 
>         Same question for puppet-monasca: if Monasca team wants their module
>         under the big tent, I think they'll have to change Gerrit
>         permissions to
>         only have Puppet OpenStack able to merge patches.
> 
>         [1]
>         http://governance.openstack.org/reference/projects/puppet-openstack.html
>         [2] https://review.openstack.org/223313
> 
>         Any feedback is welcome,
>         --
>         Emilien Macchi
> 
> 
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> 
> 
> -- 
> Thanks, Ivan Berezovskiy
> MOS Puppet Team Lead
> at Mirantis <https://www.mirantis.com/>
> 
> slack: iberezovskiy
> skype: bouhforever
> phone: + 7-960-343-42-46
> 
> 
> 
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/43057b94/attachment.pgp>

From jason at mailthemyers.com  Tue Sep 15 13:50:15 2015
From: jason at mailthemyers.com (Jason Myers)
Date: Tue, 15 Sep 2015 08:50:15 -0500
Subject: [openstack-dev] Ceilometer M Midcycle
Message-ID: <55F82217.60402@mailthemyers.com>

Hello Everyone,
     We are setting up a few polls to determine the possibility of 
meeting face to face for a ceilometer midcycle in Dublin, IE. We'd like 
to gather for three days to discuss all the work we are currently doing; 
however, we have access to space for 5 so you could also use that space 
for co-working outside of the meeting dates.  We have two date polls: 
one for Nov 30-Dec 18 at http://doodle.com/poll/hmukqwzvq7b54cef, and 
one for Jan 11-22 at http://doodle.com/poll/kbkmk5v2vass249i.  You can 
vote for any of the days in there that work for you.  If we don't get 
enough interest in either poll, we will do a virtual midcycle like we 
did last year.  Please vote for your favorite days in the two polls if 
you are interested in attending in person. If we don't get many votes, 
we'll circulate another poll for the virtual dates.

Cheers,
Jason Myers
-- 
Sent from Postbox 
<https://www.postbox-inc.com/?utm_source=email&utm_medium=siglink&utm_campaign=reach>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/15d2ae9c/attachment.html>

From eduard.matei at cloudfounders.com  Tue Sep 15 14:04:04 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Tue, 15 Sep 2015 17:04:04 +0300
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service is
	down
Message-ID: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>

Hi,

This all started when we were testing Evacuate with our storage driver.
We thought we found a bug (https://bugs.launchpad.net/cinder/+bug/1491276)
then Scott replied that we should be running cinder-volume service separate
from nova-compute.
For some internal reasons we can't do that - yet, but we have some
questions regarding the behavior of the service:

- on our original test setup we have 3 nodes (1 controller + compute +
cinder, 2 compute + cinder).
-- when we shut down the second node and tried to evacuate, the call was
routed to cinder-volume of the halted node instead of going to other nodes
(there were still 2 cinder-volume services up) - WHY?
- on the new planned setup we will have 6 nodes (3 dedicated controller +
cinder-volume, 3 compute)
-- in this case which cinder-volume will manage which volume on which
compute node?
-- what if: one compute node and one controller go down - will the Evacuate
still work if one of the cinder-volume services is down? How can we tell -
for sure - that this setup will work in case ANY 1 controller and 1 compute
nodes go down?

Hypothetical:
- if 3 dedicated controller + cinder-volume nodes can perform evacuate
when one of them is down (at the same time as one compute), WHY can't the
same 3 nodes perform evacuate when the compute service is running on the same
nodes (so 1 cinder is down and 1 compute)?
- if the answer to above question is "They can't " then what is the purpose
of running 3 cinder-volume services if they can't handle one failure?
- and if the answer to above question is "You only run one cinder-volume"
then how can it handle failure of controller node?

Thanks,

Eduard

From dtantsur at redhat.com  Tue Sep 15 14:16:00 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 15 Sep 2015 16:16:00 +0200
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
 library releases needed
In-Reply-To: <1442240201-sup-1222@lrrr.local>
References: <1442234537-sup-4636@lrrr.local> <1442240201-sup-1222@lrrr.local>
Message-ID: <55F82820.1060104@redhat.com>

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the final deadline for doing all this for 
not-so-important and non-release:managed projects like ironic-inspector? 
python-ironic-inspector-client still lacks coverage for some Liberty 
features. Do we have time until the end of the week to finish them?

Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
>



From michal.dulko at intel.com  Tue Sep 15 14:23:17 2015
From: michal.dulko at intel.com (Dulko, Michal)
Date: Tue, 15 Sep 2015 14:23:17 +0000
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is	down
In-Reply-To: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
Message-ID: <3895CB36EABD4E49B816E6081F3B001735FD69FF@IRSMSX108.ger.corp.intel.com>

> From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> Sent: Tuesday, September 15, 2015 4:04 PM
> 
> Hi,
> 
> This all started when we were testing Evacuate with our storage driver.
> We thought we found a bug
> (https://bugs.launchpad.net/cinder/+bug/1491276) then Scott replied that
> we should be running cinder-volume service separate from nova-compute.
> For some internal reasons we can't do that - yet, but we have some
> questions regarding the behavior of the service:
> 
> - on our original test setup we have 3 nodes (1 controller + compute + cinder,
> 2 compute + cinder).
> -- when we shutdown the second node and tried to evacuate, the call was
> routed to cinder-volume of the halted node instead of going to other nodes
> (there were still 2 cinder-volume services up) - WHY?

Cinder assumes that each c-vol can control only the volumes that were scheduled onto it. As volume services are differentiated by hostname, a known workaround is to set the same value for the host option in cinder.conf on each of the c-vols. This makes all the c-vols listen on the same queue. You may, however, encounter race conditions when running such a configuration in an Active/Active manner. The generally recommended approach is to use Pacemaker and run the c-vols in Active/Passive mode. Also expect the scheduler's decision to be effectively ignored, as all the nodes are listening on the same queue.
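If it helps, the workaround described above boils down to one option in cinder.conf on every node that runs cinder-volume (the value itself is my illustration and is arbitrary, as long as it is identical everywhere):

```ini
[DEFAULT]
# All c-vol services report the same host name and therefore consume
# from the same per-host message queue, so any of them can pick up a
# request for any volume.
host = cinder-cluster-1
```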

> - on the new planned setup we will have 6 nodes (3 dedicated controller +
> cinder-volume, 3 compute)
> -- in this case which cinder-volume will manage which volume on which
> compute node?

Same situation - a volume will be controlled by c-vol which created it.

> -- what if: one compute node and one controller go down - will the Evacuate
> still work if one of the cinder-volume services is down? How can we tell - for
> sure - that this setup will work in case ANY 1 controller and 1 compute nodes
> go down?

The best idea is, I think, to use c-vol + Pacemaker in an A/P manner. Pacemaker will make sure that on failure a new c-vol is spun up. Where do volumes live physically in the case of your driver? Is it like the LVM driver (the volume lies on the node running c-vol) or like Ceph (Ceph decides where the volume lands physically; c-vol is just a proxy)?

> 
> Hypothetical:
> - if 3 dedicated controller + cinder-volume nodes can perform evacuate
> when one of them is down (at the same time with one compute), WHY can't
> the same 3 nodes perform evacuate when compute services is running on
> the same nodes (so 1 cinder is down and 1 compute)

I think I've explained that.

> - if the answer to above question is "They can't " then what is the purpose of
> running 3 cinder-volume services if they can't handle one failure?

Running 3 c-vols is beneficial if you have multiple backends or use LVM driver.

> - and if the answer to above question is "You only run one cinder-volume"
> then how can it handle failure of controller node?

I've explained that too. There are efforts in the community to make it possible to run c-vol in A/A, but I don't think there's an ETA yet.

> 
> Thanks,
> 
> Eduard

From john.griffith8 at gmail.com  Tue Sep 15 14:32:48 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 15 Sep 2015 08:32:48 -0600
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
Message-ID: <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>

On Tue, Sep 15, 2015 at 8:04 AM, Eduard Matei <
eduard.matei at cloudfounders.com> wrote:

> Hi,
>
> This all started when we were testing Evacuate with our storage driver.
> We thought we found a bug (https://bugs.launchpad.net/cinder/+bug/1491276)
> then Scott replied that we should be running cinder-volume service separate
> from nova-compute.
> For some internal reasons we can't do that - yet, but we have some
> questions regarding the behavior of the service:
>
> - on our original test setup we have 3 nodes (1 controller + compute +
> cinder, 2 compute + cinder).
> -- when we shutdown the second node and tried to evacuate, the call was
> routed to cinder-volume of the halted node instead of going to other nodes
> (there were still 2 cinder-volume services up) - WHY?
> - on the new planned setup we will have 6 nodes (3 dedicated controller +
> cinder-volume, 3 compute)
> -- in this case which cinder-volume will manage which volume on which
> compute node?
> -- what if: one compute node and one controller go down - will the
> Evacuate still work if one of the cinder-volume services is down? How can
> we tell - for sure - that this setup will work in case ANY 1 controller and
> 1 compute nodes go down?
>
> Hypothetical:
> - if 3 dedicated controller + cinder-volume nodes can perform
> evacuate when one of them is down (at the same time with one compute), WHY
> can't the same 3 nodes perform evacuate when compute services is running on
> the same nodes (so 1 cinder is down and 1 compute)
> - if the answer to above question is "They can't " then what is the
> purpose of running 3 cinder-volume services if they can't handle one
> failure?
> - and if the answer to above question is "You only run one cinder-volume"
> then how can it handle failure of controller node?
>
> Thanks,
>
> Eduard
>
>
Not sure I follow all your permutations and things here.  But... one
common misconception about multiple c-vol services: the act of just
"deploying multiple c-vols" doesn't mean you get any sort of HA or
failover.  The default/base case for multiple c-vols is actually just for
scale out and that's it.

If you want to actually do things like have them fail over you have to look
at configuring the c-vol services with virtual-ips and using the same name
for each service etc.  In other words, do a true HA deployment.

Maybe I'm not following here, but it sounds like maybe you have the wrong
expectations around what multiple c-vol services buy you.

From john.griffith8 at gmail.com  Tue Sep 15 14:43:20 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 15 Sep 2015 08:43:20 -0600
Subject: [openstack-dev] [fuel] [plugin] Release tagging
In-Reply-To: <CAPWkaSUtvRgjf0ksPgiJZ2-gaOuH3EwLmUveB=fWTF9G7QO_5Q@mail.gmail.com>
References: <CAPWkaSWF-kYWXTzuPpJ=RK5+1PBAAarAeSZ7ucb08EM4aDJHhw@mail.gmail.com>
 <CANw6fcEMeqcn-DfT4SGOJrww0qZye-dAvmwDrepGrgjqEYHXcg@mail.gmail.com>
 <CAPWkaSUtvRgjf0ksPgiJZ2-gaOuH3EwLmUveB=fWTF9G7QO_5Q@mail.gmail.com>
Message-ID: <CAPWkaSVeaV2HEaT1d=uNorvA_qfc1gwr6zf-EtXmTYJ=6-8AhA@mail.gmail.com>

On Mon, Sep 14, 2015 at 7:44 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

>
>
> On Mon, Sep 14, 2015 at 7:27 PM, Davanum Srinivas <davanum at gmail.com>
> wrote:
>
>> John,
>>
>> per ACL in project-config:
>>
>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/stackforge/fuel-plugin-solidfire-cinder.config#n9
>>
>> you are already in that group:
>> https://review.openstack.org/#/admin/groups/956,members
>>
>> The release team would be in charge *if* that line looked like:
>> pushSignedTag = group library-release
>>
>> as documented in:
>> http://docs.openstack.org/infra/manual/creators.html
>>
>> So there's something else wrong... what error did you get?
>>
>> -- Dims
>>
>>
>> On Mon, Sep 14, 2015 at 8:51 PM, John Griffith <john.griffith8 at gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> I was trying to tag a release for v1.0.1 on [1] today and noticed I
>>> don't have permissions to do so.  Is there a release team/process for this?
>>>
>>> [1] https://github.com/stackforge/fuel-plugin-solidfire-cinder
>>>
>>> Thanks,
>>> John
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>>
> Hmm...  could be that I'm just making bad assumptions and trying to do
> this as I've done with other projects over the years?
>
> Here's what I tried and the error I received:
>
> jgriffith at railbender:~/git/fuel-plugin-solidfire-cinder$ git push --tags
> gerrit
> Counting objects: 1, done.
> Writing objects: 100% (1/1), 168 bytes | 0 bytes/s, done.
> Total 1 (delta 0), reused 0 (delta 0)
> remote: Processing changes: refs: 1, done
> To ssh://
> john-griffith at review.openstack.org:29418/stackforge/fuel-plugin-solidfire-cinder.git
>  ! [remote rejected] v1.0.1 -> v1.0.1 (prohibited by Gerrit)
> error: failed to push some refs to 'ssh://
> john-griffith at review.openstack.org:29418/stackforge/fuel-plugin-solidfire-cinder.git
> '
> jgriffith at railbender:~/git/fuel-plugin-solidfire-cinder$
> So clearly I can't create the remote tag; but not sure what I need to do to
> make this happen?
>
> Ahh... it appears my key had expired.  Updating now, thanks everyone.
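For the archives, the tag-and-push flow being attempted looks roughly like this (a sketch: the repo here is a throwaway stand-in, and a real Gerrit release tag must be GPG-signed with `git tag -s`, which the demo skips since it needs a key):

```shell
# Throwaway repo to illustrate; in practice you run the tag commands
# inside your clone of fuel-plugin-solidfire-cinder.
rm -rf /tmp/tag-demo && git init -q /tmp/tag-demo && cd /tmp/tag-demo
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"

# Annotated tag for the demo; Gerrit's pushSignedTag ACL expects a
# signed tag, i.e. `git tag -s v1.0.1 -m "1.0.1"` with a valid GPG key.
git tag -a v1.0.1 -m "1.0.1"
git tag -l

# The actual publish step (requires the ACL permission discussed above):
#   git push gerrit v1.0.1
```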

From fungi at yuggoth.org  Tue Sep 15 14:44:38 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 15 Sep 2015 14:44:38 +0000
Subject: [openstack-dev] [congress] IRC hangout
In-Reply-To: <3c5905f3d73949f99aaa2007167b6f05@Hq1wp-exmb11.corp.brocade.com>
References: <3c5905f3d73949f99aaa2007167b6f05@Hq1wp-exmb11.corp.brocade.com>
Message-ID: <20150915144437.GD25159@yuggoth.org>

On 2015-09-14 17:44:14 +0000 (+0000), Shiv Haris wrote:
> What is the IRC channel where congress folks hang out? I tried
> #openstack-congress on freenode but it seems not correct.

https://wiki.openstack.org/wiki/IRC has it listed as #congress
-- 
Jeremy Stanley


From doug at doughellmann.com  Tue Sep 15 14:45:45 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 10:45:45 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <55F82820.1060104@redhat.com>
References: <1442234537-sup-4636@lrrr.local> <1442240201-sup-1222@lrrr.local>
 <55F82820.1060104@redhat.com>
Message-ID: <1442328235-sup-901@lrrr.local>

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> >> PTLs and release liaisons,
> >>
> >> In order to keep the rest of our schedule for the end-of-cycle release
> >> tasks, we need to have final releases for all client libraries in the
> >> next day or two.
> >>
> >> If you have not already submitted your final release request for this
> >> cycle, please do that as soon as possible.
> >>
> >> If you *have* already submitted your final release request for this
> >> cycle, please reply to this email and let me know that you have so I can
> >> create your stable/liberty branch.
> >>
> >> Thanks!
> >> Doug
> >
> > I forgot to mention that we also need the constraints file in
> > global-requirements updated for all of the releases, so we're actually
> > testing with them in the gate. Please take a minute to check the version
> > specified in openstack/requirements/upper-constraints.txt for your
> > libraries and submit a patch to update it to the latest release if
> > necessary. I'll do a review later in the week, too, but it's easier to
> > identify the causes of test failures if we have one patch at a time.
> 
> Hi Doug!
> 
> When is the last and final deadline for doing all this for 
> not-so-important and non-release:managed projects like ironic-inspector? 
> We still lack some Liberty features covered in 
> python-ironic-inspector-client. Do we have time until end of week to 
> finish them?

We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug

> 
> Sorry if you hear this question too often :)
> 
> Thanks!
> 
> >
> > Doug
> >
> >
> 
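The upper-constraints bump Doug asks for is a one-line edit to openstack/requirements/upper-constraints.txt. A minimal sketch of the substitution (package names and versions below are illustrative):

```python
import re

def bump_constraint(constraints_text, package, new_version):
    """Rewrite 'package===old' to 'package===new' in an upper-constraints file."""
    pattern = re.compile(r"^%s===.*$" % re.escape(package), re.MULTILINE)
    return pattern.sub("%s===%s" % (package, new_version), constraints_text)

text = ("python-ironicclient===0.7.0\n"
        "python-ironic-inspector-client===1.0.0\n")
print(bump_constraint(text, "python-ironic-inspector-client", "1.0.1"))
```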


From annegentle at justwriteclick.com  Tue Sep 15 14:50:49 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Tue, 15 Sep 2015 09:50:49 -0500
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
Message-ID: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>

Hi all,

What can we do to make the cross-project meeting more helpful and useful
for cross-project communications? I started with a proposal to move it to a
different time, which morphed into an idea to alternate times. But, knowing
that we need to layer communications, I wonder if we should troubleshoot
cross-project communications further. These are the current ways
cross-project communications happen:

1. The weekly meeting in IRC
2. The cross-project specs and reviewing those
3. Direct connections between team members
4. Cross-project talks at the Summits

What are some of the problems with each layer?

1. weekly meeting: time zones, global reach, size of cross-project concerns
due to multiple projects being affected, another meeting for PTLs to attend
and pay attention to
2. specs: don't seem to get much attention unless they're brought up at
weekly meeting, finding owners for the work needing to be done in a spec is
difficult since each project team has its own priorities
3. direct communications: decisions from these comms are difficult to then
communicate more widely, it's difficult to get time with busy PTLs
4. Summits: only happens twice a year, decisions made then need to be
widely communicated

I'm sure there are more details and problems I'm missing -- feel free to
fill in as needed.

Lastly, what suggestions do you have for solving problems with any of these
layers?

Thanks,
Anne

-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com

From eduard.matei at cloudfounders.com  Tue Sep 15 14:53:53 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Tue, 15 Sep 2015 17:53:53 +0300
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
Message-ID: <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>

Hi,

Let me see if I got this:
- running 3 (multiple) c-vols won't automatically give you failover
- each c-vol is "master" of a certain number of volumes
-- if the c-vol is "down" then those volumes cannot be managed by another
c-vol

What I'm trying to achieve is making sure ANY volume is managed
(manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
of A/A - so this means I need to look into Pacemaker and virtual-ips, or I
should try the "same name" approach first.

Thanks,

Eduard

PS. @Michal: Where are volumes physically in case of your driver? <-
similar to ceph, on a distributed object storage service (whose disks can
be anywhere even on the same compute host)

From flavio at redhat.com  Tue Sep 15 14:54:17 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 16:54:17 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <1442319885-sup-4931@lrrr.local>
References: <1442232202-sup-5997@lrrr.local>
 <20150914124100.GC10859@redhat.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D6494@G9W0745.americas.hpqcorp.net>
 <1442247798-sup-5628@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D698F@G9W0745.americas.hpqcorp.net>
 <1442319885-sup-4931@lrrr.local>
Message-ID: <20150915145417.GG9301@redhat.com>

On 15/09/15 08:30 -0400, Doug Hellmann wrote:
>Excerpts from Kuvaja, Erno's message of 2015-09-15 09:43:26 +0000:
[snip]
>> I'm not sure it really is so special API or technical wise, it's just the one that was lifted to the pedestal in this discussion.
>
>OK. I'm concerned that my message of "we need an interoperable image
>upload API" is sometimes being met with various versions of "that's not
>possible." I think that's wrong, and we should fix it. I also think it's
>possible to make the API consistent and still support background tasks,
>image scanning, and other things deployers want.

Yes, this is a discussion that started in this cycle as part of
this[0] proposed spec. The discussion was put on hold until Mitaka.
One of the concerns raised was whether it's ok to make tasks part of
the upload process or not since that changes some of the existing
behavior.

For example, right now, when an image is uploaded, it can be used
right away. If we make async tasks part of the upload workflow, then
images won't be available until all tasks are executed.

Personally, I think the above is fine and it'd give the user a better
experience in comparison w/ the current task API. There are other
issues related to this that require a lenghtier discussion and are not
strictly related to the API.

[0] https://review.openstack.org/#/c/188388/

Flavio

-- 
@flaper87
Flavio Percoco

From scott.dangelo at hpe.com  Tue Sep 15 14:59:17 2015
From: scott.dangelo at hpe.com (D'Angelo, Scott)
Date: Tue, 15 Sep 2015 14:59:17 +0000
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
 <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
Message-ID: <2960F1710CFACC46AF0DBFEE85B103BB3650ECB6@G9W0723.americas.hpqcorp.net>

Eduard, Gorka has done a great job of explaining some of the issues with Active-Active Cinder-volume services in his blog:
http://gorka.eguileor.com/

TL;DR: The hacks to use the same hostname or use Pacemaker + VIP are dangerous because of races, and are not recommended for Enterprise deployments.

From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
Sent: Tuesday, September 15, 2015 8:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

Hi,

Let me see if I got this:
- running 3 (multiple) c-vols won't automatically give you failover
- each c-vol is "master" of a certain number of volumes
-- if the c-vol is "down" then those volumes cannot be managed by another c-vol

What I'm trying to achieve is making sure ANY volume is managed (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort of A/A - so this means I need to look into Pacemaker and virtual-ips, or I should try the "same name" approach first.

Thanks,

Eduard

PS. @Michal: Where are volumes physically in case of your driver? <- similar to ceph, on a distributed object storage service (whose disks can be anywhere even on the same compute host)

From flavio at redhat.com  Tue Sep 15 15:00:25 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 17:00:25 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <alpine.DEB.2.11.1509151411370.14097@tc-unix2.emea.hpqcorp.net>
References: <alpine.DEB.2.11.1509151411370.14097@tc-unix2.emea.hpqcorp.net>
Message-ID: <20150915150025.GH9301@redhat.com>

On 15/09/15 14:13 +0100, stuart.mclaren at hp.com wrote:
>>
>>After having some conversations with folks at the Ops Midcycle a
>>few weeks ago, and observing some of the more recent email threads
>>related to glance, glance-store, the client, and the API, I spent
>>last week contacting a few of you individually to learn more about
>>some of the issues confronting the Glance team. I had some very
>>frank, but I think constructive, conversations with all of you about
>>the issues as you see them. As promised, this is the public email
>>thread to discuss what I found, and to see if we can agree on what
>>the Glance team should be focusing on going into the Mitaka summit
>>and development cycle and how the rest of the community can support
>>you in those efforts.
>
>Doug, thanks for reaching out here.
>
>I've been looking into the existing task-based-upload that Doug mentions:
>can anyone clarify the following?
>
>On a default devstack install you can do this 'task' call:
>
>http://paste.openstack.org/show/462919
>
>as an alternative to the traditional image upload (the bytes are streamed
>from the URL).
>
>It's not clear to me if this is just an interesting example of the kind
>of operator specific thing you can configure tasks to do, or a real
>attempt to define an alternative way to upload images.

It's the old copy-from moved to a task to allow for other tasks to be
executed before the image data is stored. This effort was called "new
upload workflow"[0], which intended to allow for the above but also to
avoid blocking webheads on image download.

That said, and as mentioned in other replies on this thread, I think
this is an API that would be useful for OPs not just for importing
images but also execute other "tasks". As far as the public API goes,
I don't think we should recommend it nor make it public. At least not
in its current state.

[0] https://wiki.openstack.org/wiki/Glance-new-upload-workflow
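As a concrete illustration of what such a task call carries, the request body for the import task looks roughly like this (the shape follows the commonly shown task-import examples; field values and the image name are illustrative, not a definitive API reference):

```python
import json

def build_import_task(image_url, disk_format="qcow2"):
    """Assemble the JSON body for a POST /v2/tasks import request."""
    return {
        "type": "import",
        "input": {
            "import_from": image_url,          # bytes are streamed from here
            "import_from_format": disk_format,
            "image_properties": {"name": "imported-image"},
        },
    }

print(json.dumps(build_import_task("http://example.com/cirros.qcow2"), indent=2))
```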

Flavio

-- 
@flaper87
Flavio Percoco

From dtantsur at redhat.com  Tue Sep 15 15:02:52 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 15 Sep 2015 17:02:52 +0200
Subject: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final
 liberty cycle client library releases needed)
In-Reply-To: <1442328235-sup-901@lrrr.local>
References: <1442328235-sup-901@lrrr.local>
Message-ID: <55F8331C.7020709@redhat.com>

Hi folks!

As you can see below, we have to make the final release of 
python-ironic-inspector-client really soon. We have 2 big missing parts:

1. Introspection rules support.
    I'm working on it: https://review.openstack.org/#/c/223096/
    This required a substantial refactoring, so that our client does not
become a complete mess: https://review.openstack.org/#/c/223490/

2. Support for getting introspection data. John (trown) volunteered to 
do this work.

I'd like to ask the inspector team to pay close attention to these 
patches, as the deadline for them is Friday (preferably European time).

Next, please have a look at the milestone page for ironic-inspector 
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an 
assignee. If you'd like to volunteer for something there, please assign 
it to yourself. Our deadline is next Thursday, but it would be really 
good to finish it earlier next week to dedicate some time to testing.

Thanks all, I'm looking forward to this release :)


-------- Forwarded Message --------
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle 
client library releases needed
Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann <doug at doughellmann.com>
Reply-To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev at lists.openstack.org>
To: openstack-dev <openstack-dev at lists.openstack.org>

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> >> PTLs and release liaisons,
> >>
> >> In order to keep the rest of our schedule for the end-of-cycle release
> >> tasks, we need to have final releases for all client libraries in the
> >> next day or two.
> >>
> >> If you have not already submitted your final release request for this
> >> cycle, please do that as soon as possible.
> >>
> >> If you *have* already submitted your final release request for this
> >> cycle, please reply to this email and let me know that you have so I can
> >> create your stable/liberty branch.
> >>
> >> Thanks!
> >> Doug
> >
> > I forgot to mention that we also need the constraints file in
> > global-requirements updated for all of the releases, so we're actually
> > testing with them in the gate. Please take a minute to check the version
> > specified in openstack/requirements/upper-constraints.txt for your
> > libraries and submit a patch to update it to the latest release if
> > necessary. I'll do a review later in the week, too, but it's easier to
> > identify the causes of test failures if we have one patch at a time.
>
> Hi Doug!
>
> When is the last and final deadline for doing all this for
> not-so-important and non-release:managed projects like ironic-inspector?
> We still lack some Liberty features covered in
> python-ironic-inspector-client. Do we have time until end of week to
> finish them?

We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug

>
> Sorry if you hear this question too often :)
>
> Thanks!
>
> >
> > Doug
> >
> >
>





From mordred at inaugust.com  Tue Sep 15 15:04:07 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 15 Sep 2015 17:04:07 +0200
Subject: [openstack-dev] [nova][neutron][devstack] New proposed 'default'
	network model
Message-ID: <55F83367.9050503@inaugust.com>

Hey all!

If any of you have ever gotten drunk with me, you'll know I hate 
floating IPs more than I hate being stabbed in the face with a very 
angry fish.

However, that doesn't really matter. What should matter is "what is the 
most sane thing we can do for our users"

As you might have seen in the glance thread, I have a bunch of OpenStack 
public cloud accounts. Since I wrote that email this morning, I've added 
more - so we're up to 13.

auro
citycloud
datacentred
dreamhost
elastx
entercloudsuite
hp
ovh
rackspace
runabove
ultimum
unitedstack
vexxhost

Of those public clouds, 5 of them require you to use a floating IP to 
get an outbound address, the others directly attach you to the public 
network. Most of those 8 allow you to create a private network, to boot 
vms on the private network, and ALSO to create a router with a gateway 
and put floating IPs on your private ip'd machines if you choose.

Which brings me to the suggestion I'd like to make.

Instead of having our default in devstack and our default when we talk 
about things be "you boot a VM and you put a floating IP on it" - which 
solves one of the two usage models - how about:

- Cloud has a shared: True, external:routable: True neutron network. I
don't care what it's called - ext-net, public, whatever. The "shared"
part is the key; that's the part that lets someone boot a vm on it directly.

- Each person can then make a private network, router, gateway, etc. and 
get floating-ips from the same public network if they prefer that model.

Are there any good reasons to not push to get all of the public networks 
marked as "shared"?

OH - well, one thing - once there are two networks in an
account you have to specify which one. This is really painful in nova
client. Say, for instance, you have a public network called "public" and
a private network called "private" ...

You can't just say "nova boot --network=public" - nope, you need to say 
"nova boot --nics net-id=$uuid_of_my_public_network"

So I'd suggest 2 more things:

a) an update to python-novaclient to allow a named network to be passed 
to satisfy the "you have more than one network" - the nics argument is 
still useful for more complex things

b) ability to say "vms in my cloud should default to being booted on the 
public network" or "vms in my cloud should default to being booted on a 
network owned by the user"

Thoughts?

Monty


From mriedem at linux.vnet.ibm.com  Tue Sep 15 15:04:22 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Tue, 15 Sep 2015 10:04:22 -0500
Subject: [openstack-dev] [oslo] Help with stable/juno branches / releases
In-Reply-To: <20150828032237.GF97839@thor.bakeyournoodle.com>
References: <20150824095748.GA74505@thor.bakeyournoodle.com>
 <1440616274-sup-7044@lrrr.local>
 <20150826223200.GA86688@thor.bakeyournoodle.com>
 <CAJ3HoZ0EVaZcrcVWVdBEsKt6SKiHyPKCAvX2zRqy_WCkbGHZLw@mail.gmail.com>
 <20150828032237.GF97839@thor.bakeyournoodle.com>
Message-ID: <55F83376.2030409@linux.vnet.ibm.com>



On 8/27/2015 10:22 PM, Tony Breeds wrote:
> On Fri, Aug 28, 2015 at 11:12:43AM +1200, Robert Collins wrote:
>
>> I'm pretty sure it *will* be EOL'd. OTOH thats 10 weeks of fixes folk
>> can get. I think you should do it if you've the stomach for it, and if
>> its going to help someone. I can aid by cutting library releases for
>> you I think (haven't checked stable releases yet, and I need to update
>> myself on the tooling changes in the last fortnight...).
>
> Okay I certainly have the stomach for it; in some perverse way it's fun, as I'm
> learning about parts of the process / code base that are new to me :)
>
> My concerns were mostly around other peoples time (like the PTLs/cores I need
> to hassle and thems with the release power :))
>
> So I'll keep going on it.
>
> Right now there aren't any library releases to be done.
>
> Thanks Robert.
>
> Yours Tony.
>
>
>
>

This seems to have stalled.

For python-ceilometerclient (blocking Horizon), this is where we are at:

1. python-ceilometerclient needs a g-r sync from stable/juno 
https://review.openstack.org/#/c/173126/ and then to be released as 1.0.15 per 
bug 1494516.  However, that's failing tests due to:

2. oslo.utils needs a stable/juno branch created from the 1.4.0 tag so 
we can sync g-r stable/juno to oslo.utils and then release that as 1.4.1.

--

We can work on the other libraries in Tony's original email when we get 
one thing done.

-- 

Thanks,

Matt Riedemann



From keopp at cray.com  Tue Sep 15 15:05:54 2015
From: keopp at cray.com (Jeff Keopp)
Date: Tue, 15 Sep 2015 15:05:54 +0000
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <55F6F723.9050406@mhtx.net>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <1441909133-sup-2320@fewbar.com>
 <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>
 <55F6F723.9050406@mhtx.net>
Message-ID: <D21D9BA6.2C1D1%keopp@cray.com>

This is a very interesting proposal and one I believe is needed.  I'm
currently looking at hardening the controller nodes from unwanted access
and discovered that every time the controller node is booted/rebooted, it
flushes the iptables and writes only those rules that neutron believes
should be there.  This behavior would render this proposal ineffective
once the node is rebooted.

So I believe neutron needs to be fixed to not flush the iptables on each
boot, but to write its rules to /etc/sysconfig/iptables and then
restore them as a normal linux box should do.  It should be a good citizen
with other processes.

A sysadmin should be allowed to use whatever iptables handlers they wish
to implement security policies and not have an OpenStack process undo what
they have set.

I should mention this is on a system using a flat network topology and
bare metal nodes.  No VMs.

--
Jeff Keopp | Sr. Software Engineer, ES Systems.
380 Jackson Street | St. Paul, MN 55101 | USA  | www.cray.com
<http://www.cray.com>




-----Original Message-----
From: Major Hayden <major at mhtx.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Date: Monday, September 14, 2015 at 11:34
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [openstack-ansible] Security hardening

>On 09/14/2015 03:28 AM, Jesse Pretorius wrote:
>> I agree with Clint that this is a good approach.
>> 
>> If there is an automated way that we can verify the security of an
>>installation at a reasonable/standardised level then I think we should
>>add a gate check for it too.
>
>Here's a rough draft of a spec.  Feel free to throw some darts.
>
>  https://review.openstack.org/#/c/222619/
>
>--
>Major Hayden
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From dtantsur at redhat.com  Tue Sep 15 15:10:05 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 15 Sep 2015 17:10:05 +0200
Subject: [openstack-dev] [Ironic] [Inspector] Finishing Liberty
In-Reply-To: <55F8331C.7020709@redhat.com>
References: <1442328235-sup-901@lrrr.local> <55F8331C.7020709@redhat.com>
Message-ID: <55F834CD.7090900@redhat.com>

On 09/15/2015 05:02 PM, Dmitry Tantsur wrote:
> Hi folks!
>
> As you can see below, we have to make the final release of
> python-ironic-inspector-client really soon. We have 2 big missing parts:
>
> 1. Introspection rules support.
>     I'm working on it: https://review.openstack.org/#/c/223096/
> This required a substantial refactoring, so that our client does not
> become a complete mess: https://review.openstack.org/#/c/223490/
>
> 2. Support for getting introspection data. John (trown) volunteered to
> do this work.
>
> I'd like to ask the inspector team to pay close attention to these
> patches, as the deadline for them is Friday (preferably European time).
>
> Next, please have a look at the milestone page for ironic-inspector
> itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
> There are things that require review, and there are things without an
> assignee. If you'd like to volunteer for something there, please assign
> it to yourself. Our deadline is next Thursday, but it would be really
> good to finish it earlier next week to dedicate some time to testing.

Forgot an important thing: we have 2 outstanding IPA patches as well:
https://review.openstack.org/#/c/222605/
https://review.openstack.org/#/c/223054

>
> Thanks all, I'm looking forward to this release :)
>
>
> -------- Forwarded Message --------
> Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle
> client library releases needed
> Date: Tue, 15 Sep 2015 10:45:45 -0400
> From: Doug Hellmann <doug at doughellmann.com>
> Reply-To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev at lists.openstack.org>
> To: openstack-dev <openstack-dev at lists.openstack.org>
>
> Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
>> On 09/14/2015 04:18 PM, Doug Hellmann wrote:
>> > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> >> PTLs and release liaisons,
>> >>
>> >> In order to keep the rest of our schedule for the end-of-cycle release
>> >> tasks, we need to have final releases for all client libraries in the
>> >> next day or two.
>> >>
>> >> If you have not already submitted your final release request for this
>> >> cycle, please do that as soon as possible.
>> >>
>> >> If you *have* already submitted your final release request for this
>> >> cycle, please reply to this email and let me know that you have so
>> I can
>> >> create your stable/liberty branch.
>> >>
>> >> Thanks!
>> >> Doug
>> >
>> > I forgot to mention that we also need the constraints file in
>> > global-requirements updated for all of the releases, so we're actually
>> > testing with them in the gate. Please take a minute to check the
>> version
>> > specified in openstack/requirements/upper-constraints.txt for your
>> > libraries and submit a patch to update it to the latest release if
>> > necessary. I'll do a review later in the week, too, but it's easier to
>> > identify the causes of test failures if we have one patch at a time.
>>
>> Hi Doug!
>>
>> When is the last and final deadline for doing all this for
>> not-so-important and non-release:managed projects like ironic-inspector?
>> We still lack some Liberty features covered in
>> python-ironic-inspector-client. Do we have time until end of week to
>> finish them?
>
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.
>
> https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>
> Doug
>
>>
>> Sorry if you hear this question too often :)
>>
>> Thanks!
>>
>> >
>> > Doug
>> >
>> >
>> __________________________________________________________________________
>>
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From rakhmerov at mirantis.com  Tue Sep 15 15:11:58 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Tue, 15 Sep 2015 18:11:58 +0300
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <1442328235-sup-901@lrrr.local>
References: <1442234537-sup-4636@lrrr.local> <1442240201-sup-1222@lrrr.local>
 <55F82820.1060104@redhat.com> <1442328235-sup-901@lrrr.local>
Message-ID: <1F49ED55-E92C-460C-A41A-7F1E3A785C2E@mirantis.com>


> On 15 Sep 2015, at 17:45, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.


"For everyone" meaning "for all Big Tent projects"? I'm trying to figure out if that affects projects like Mistral that are not massively interdependent with other projects.

Thnx

Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/604360c3/attachment.html>

From slukjanov at mirantis.com  Tue Sep 15 15:12:23 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Tue, 15 Sep 2015 18:12:23 +0300
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
 library releases needed
In-Reply-To: <1442328235-sup-901@lrrr.local>
References: <1442234537-sup-4636@lrrr.local> <1442240201-sup-1222@lrrr.local>
 <55F82820.1060104@redhat.com> <1442328235-sup-901@lrrr.local>
Message-ID: <CA+GZd7_VbXpgms7YeXnP-oaTK6U98tbrYUmhDJU2OF0AomcOSA@mail.gmail.com>

We're in good shape with the sahara client. 0.11.0 is the final minor
release for it. Constraints are up to date.

On Tue, Sep 15, 2015 at 5:45 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> > On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> > >> PTLs and release liaisons,
> > >>
> > >> In order to keep the rest of our schedule for the end-of-cycle release
> > >> tasks, we need to have final releases for all client libraries in the
> > >> next day or two.
> > >>
> > >> If you have not already submitted your final release request for this
> > >> cycle, please do that as soon as possible.
> > >>
> > >> If you *have* already submitted your final release request for this
> > >> cycle, please reply to this email and let me know that you have so I
> can
> > >> create your stable/liberty branch.
> > >>
> > >> Thanks!
> > >> Doug
> > >
> > > I forgot to mention that we also need the constraints file in
> > > global-requirements updated for all of the releases, so we're actually
> > > testing with them in the gate. Please take a minute to check the
> version
> > > specified in openstack/requirements/upper-constraints.txt for your
> > > libraries and submit a patch to update it to the latest release if
> > > necessary. I'll do a review later in the week, too, but it's easier to
> > > identify the causes of test failures if we have one patch at a time.
> >
> > Hi Doug!
> >
> > When is the last and final deadline for doing all this for
> > not-so-important and non-release:managed projects like ironic-inspector?
> > We still lack some Liberty features covered in
> > python-ironic-inspector-client. Do we have time until end of week to
> > finish them?
>
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.
>
> https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>
> Doug
>
> >
> > Sorry if you hear this question too often :)
> >
> > Thanks!
> >
> > >
> > > Doug
> > >
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/3f740588/attachment.html>

From robert.clark at hp.com  Tue Sep 15 15:13:38 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Tue, 15 Sep 2015 15:13:38 +0000
Subject: [openstack-dev] [openstack-ansible] Security hardening
In-Reply-To: <D21D9BA6.2C1D1%keopp@cray.com>
References: <55F1999C.4020509@mhtx.net> <55F1AE40.5020009@gentoo.org>
 <55F1B0D7.8070404@mhtx.net> <1441909133-sup-2320@fewbar.com>
 <CAGSrQvy4b7fEmJGvSMfLjtiMuj-w_2S7rFL7uXuRKHkHBrVrHA@mail.gmail.com>
 <55F6F723.9050406@mhtx.net> <D21D9BA6.2C1D1%keopp@cray.com>
Message-ID: <D21DF3D8.2BE6C%robert.clark@hp.com>

Very interesting discussion.

The Security project has a published security guide that I believe would
be a very appropriate home for this content; the current guide (for
reference) is here: http://docs.openstack.org/sec/

Contributions welcome, just like any other part of the OpenStack docs :)

-Rob

On 15/09/2015 16:05, "Jeff Keopp" <keopp at cray.com> wrote:

>This is a very interesting proposal and one I believe is needed.  I'm
>currently looking at hardening the controller nodes from unwanted access
>and discovered that every time the controller node is booted/rebooted, it
>flushes the iptables and writes only those rules that neutron believes
>should be there.  This behavior would render this proposal ineffective
>once the node is rebooted.
>
>So I believe neutron needs to be fixed to not flush the iptables on each
>boot, but to write the iptables to /etc/sysconfig/iptables and then
>restore them as a normal linux box should do.  It should be a good citizen
>with other processes.
>
>A sysadmin should be allowed to use whatever iptables handlers they wish
>to implement security policies and not have an OpenStack process undo what
>they have set.
>
>I should mention this is on a system using a flat network topology and
>bare metal nodes.  No VMs.
>
>--
>Jeff Keopp | Sr. Software Engineer, ES Systems.
>380 Jackson Street | St. Paul, MN 55101 | USA  | www.cray.com
><http://www.cray.com>
>
>
>
>
>-----Original Message-----
>From: Major Hayden <major at mhtx.net>
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
><openstack-dev at lists.openstack.org>
>Date: Monday, September 14, 2015 at 11:34
>To: "openstack-dev at lists.openstack.org"
><openstack-dev at lists.openstack.org>
>Subject: Re: [openstack-dev] [openstack-ansible] Security hardening
>
>>On 09/14/2015 03:28 AM, Jesse Pretorius wrote:
>>> I agree with Clint that this is a good approach.
>>> 
>>> If there is an automated way that we can verify the security of an
>>>installation at a reasonable/standardised level then I think we should
>>>add a gate check for it too.
>>
>>Here's a rough draft of a spec.  Feel free to throw some darts.
>>
>>  https://review.openstack.org/#/c/222619/
>>
>>--
>>Major Hayden
>>
>>_________________________________________________________________________
>>_
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: 
>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From eduard.matei at cloudfounders.com  Tue Sep 15 15:15:55 2015
From: eduard.matei at cloudfounders.com (Eduard Matei)
Date: Tue, 15 Sep 2015 18:15:55 +0300
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <2960F1710CFACC46AF0DBFEE85B103BB3650ECB6@G9W0723.americas.hpqcorp.net>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
 <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
 <2960F1710CFACC46AF0DBFEE85B103BB3650ECB6@G9W0723.americas.hpqcorp.net>
Message-ID: <CAEOp6J_0wyJHuKyTb_884W5cnTKw7T+in-m8esZ6dyqNS8xtiA@mail.gmail.com>

Thanks Scott,
But the question remains: if the "hacks" are not recommended, then how can I
perform an evacuate when the c-vol service of the volumes I need evacuated is
"down" but there are two more controller nodes with c-vol services running?

Thanks,

Eduard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/2c208970/attachment.html>

From michal.dulko at intel.com  Tue Sep 15 15:16:04 2015
From: michal.dulko at intel.com (Dulko, Michal)
Date: Tue, 15 Sep 2015 15:16:04 +0000
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
 <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
Message-ID: <3895CB36EABD4E49B816E6081F3B001735FD6AE7@IRSMSX108.ger.corp.intel.com>

> From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> Sent: Tuesday, September 15, 2015 4:54 PM
> 
> Hi,
> 
> Let me see if i got this:
> - running 3 (multiple) c-vols won't automatically give you failover
> - each c-vol is "master" of a certain number of volumes
> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
> 
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort of
> A/A - so this means i need to look into Pacemaker and virtual-ips, or i should
> try first the "same name".
> 

I think you should try Pacemaker A/P configuration with same hostname in cinder.conf. That's the only safe option here.

I don't quite understand John's idea of how a virtual IP can help with c-vol, as this service only listens on an AMQP queue. I think a VIP is useful only for running the c-api service.
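The shared-hostname approach Michal suggests can be sketched as a cinder.conf fragment. The `host` option is the standard cinder.conf one; the value shown is illustrative. Every cinder-volume service in the Pacemaker cluster uses the same value, so whichever instance Pacemaker keeps active can manage the volumes bound to that host name.

```ini
[DEFAULT]
# All cinder-volume services in the Pacemaker cluster report the same
# host, so any one of them can serve the volumes bound to it.
# Pacemaker must ensure only one instance is active at a time (A/P);
# running several concurrently with the same host is the racy "hack"
# discussed elsewhere in this thread.
host = cinder-cluster-1
```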

From duncan.thomas at gmail.com  Tue Sep 15 15:19:45 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 15 Sep 2015 18:19:45 +0300
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <2960F1710CFACC46AF0DBFEE85B103BB3650ECB6@G9W0723.americas.hpqcorp.net>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
 <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
 <2960F1710CFACC46AF0DBFEE85B103BB3650ECB6@G9W0723.americas.hpqcorp.net>
Message-ID: <CAOyZ2aEg+W0YavnnHaTKwbSDToKLaHOyJoCP6=J6bKnE9c8ZZA@mail.gmail.com>

Of the two, pacemaker is far, far safer from a cinder PoV - fewer races,
fewer problematic scenarios.

On 15 September 2015 at 17:59, D'Angelo, Scott <scott.dangelo at hpe.com>
wrote:

> Eduard, Gorka has done a great job of explaining some of the issues with
> Active-Active Cinder-volume services in his blog:
>
> http://gorka.eguileor.com/
>
>
>
> TL;DR: The hacks to use the same hostname or use Pacemaker + VIP are
> dangerous because of races, and are not recommended for Enterprise
> deployments.
>
>
>
> *From:* Eduard Matei [mailto:eduard.matei at cloudfounders.com]
> *Sent:* Tuesday, September 15, 2015 8:54 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Cinder]Behavior when one cinder-volume
> service is down
>
>
>
> Hi,
>
>
>
> Let me see if i got this:
>
> - running 3 (multiple) c-vols won't automatically give you failover
>
> - each c-vol is "master" of a certain number of volumes
>
> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
>
>
>
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
> of A/A - so this means i need to look into Pacemaker and virtual-ips, or i
> should try first the "same name".
>
>
>
> Thanks,
>
>
>
> Eduard
>
>
>
> PS. @Michal: Where are volumes physically in case of your driver? <-
> similar to ceph, on a distributed object storage service (whose disks can
> be anywhere even on the same compute host)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/b67532b9/attachment.html>

From scott.dangelo at hpe.com  Tue Sep 15 15:19:53 2015
From: scott.dangelo at hpe.com (D'Angelo, Scott)
Date: Tue, 15 Sep 2015 15:19:53 +0000
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <CAEOp6J_0wyJHuKyTb_884W5cnTKw7T+in-m8esZ6dyqNS8xtiA@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
 <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
 <2960F1710CFACC46AF0DBFEE85B103BB3650ECB6@G9W0723.americas.hpqcorp.net>
 <CAEOp6J_0wyJHuKyTb_884W5cnTKw7T+in-m8esZ6dyqNS8xtiA@mail.gmail.com>
Message-ID: <2960F1710CFACC46AF0DBFEE85B103BB3650ED3B@G9W0723.americas.hpqcorp.net>

I'm just not sure that you can evacuate with the c-vol service for those volumes down. Not without the unsafe HA active-active hacks.
In our public cloud, if the c-vol service for a backend/volumes is down, we get woken up in the middle of the night and stay at it until we get c-vol back up. That's the only way I know of getting access to those volumes that are associated with a c-vol service: get the service back up.

From: Eduard Matei [mailto:eduard.matei at cloudfounders.com]
Sent: Tuesday, September 15, 2015 9:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

Thanks Scott,
But the question remains: if the "hacks" are not recommended, then how can I perform an evacuate when the c-vol service of the volumes I need evacuated is "down" but there are two more controller nodes with c-vol services running?

Thanks,

Eduard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/3a2cdadb/attachment.html>

From mspreitz at us.ibm.com  Tue Sep 15 15:27:12 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Tue, 15 Sep 2015 11:27:12 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
	'default'	network model
In-Reply-To: <55F83367.9050503@inaugust.com>
References: <55F83367.9050503@inaugust.com>
Message-ID: <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>

Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:

> a) an update to python-novaclient to allow a named network to be passed 
> to satisfy the "you have more than one network" - the nics argument is 
> still useful for more complex things

I am not using the latest, but rather Juno.  I find that in many places 
the Neutron CLI insists on a UUID when a name could be used.  Three cheers 
for any campaign to fix that.
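The name-vs-UUID annoyance Mike describes can be worked around client-side. This is a hypothetical helper (the function name and the flat list-of-dicts input are my own; the dict shape mirrors what neutron's network listing returns) that resolves a name to a UUID, accepting a UUID as-is and refusing ambiguous names:

```python
def resolve_network_id(networks, name_or_id):
    """Resolve a network name to its UUID, accepting a UUID as-is.

    `networks` is a list of dicts with 'id' and 'name' keys, shaped
    like the entries a neutron network listing returns.
    """
    # An exact ID match wins outright.
    for net in networks:
        if net["id"] == name_or_id:
            return net["id"]
    matches = [net["id"] for net in networks if net["name"] == name_or_id]
    if not matches:
        raise LookupError("no network named %r" % name_or_id)
    if len(matches) > 1:
        raise LookupError("name %r is ambiguous: %s" % (name_or_id, matches))
    return matches[0]
```

Ambiguity handling matters here: names are not unique in Neutron, which is presumably why the CLI insists on UUIDs in the first place.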

And, yeah, creating VMs on a shared public network is good too.

Thanks,
mike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/53a10932/attachment.html>

From gord at live.ca  Tue Sep 15 15:28:01 2015
From: gord at live.ca (gord chung)
Date: Tue, 15 Sep 2015 11:28:01 -0400
Subject: [openstack-dev] [ceilometer] Ceilometer M Midcycle
In-Reply-To: <55F82217.60402@mailthemyers.com>
References: <55F82217.60402@mailthemyers.com>
Message-ID: <BLU437-SMTP1F474F6ED73AEB5922235DE5C0@phx.gbl>

thanks for organising this Jason.

for those who can't attend but want to, i think it'd also be good to 
know if location is the blocker here.

retagging with [ceilometer].

On 15/09/2015 9:50 AM, Jason Myers wrote:
> Hello Everyone,
>     We are setting up a few polls to determine the possibility of 
> meeting face to face for a ceilometer midcycle in Dublin, IE. We'd 
> like to gather for three days to discuss all the work we are currently 
> doing; however, we have access to space for 5 so you could also use 
> that space for co working outside of the meeting dates.  We have two 
> date polls: one for Nov 30-Dec 18 at 
> http://doodle.com/poll/hmukqwzvq7b54cef, and one for Jan 11-22 at 
> http://doodle.com/poll/kbkmk5v2vass249i. You can vote for any of the 
> days in there that work for you.  If we don't get enough interest in 
> either poll, we will do a virtual midcycle like we did last year.  
> Please vote for your favorite days in the two polls if you are 
> interested in attending in person. If we don't get many votes, we'll 
> circulate another poll for the virtual dates.

-- 
gord



From mvoelker at vmware.com  Tue Sep 15 15:30:55 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Tue, 15 Sep 2015 15:30:55 +0000
Subject: [openstack-dev] [glance] The current state of glance v2 in
	public	clouds
In-Reply-To: <55F801C1.1050906@inaugust.com>
References: <55F801C1.1050906@inaugust.com>
Message-ID: <AFDBC4CB-926A-4A6E-9303-F4A74CCB27E6@vmware.com>

As another data point, I took a poke around the OpenStack Marketplace [1] this morning and found:

* 1 distro/appliance claims v1 support
* 3 managed services claim v1 support
* 3 public clouds claim v1 support

And everyone else claims v2 support.  I'd encourage vendors to check their Marketplace data for accuracy; if something's wrong there, reach out to ecosystem at openstack.org to enquire about fixing it.  If you simply aren't listed on the Marketplace and would like to be, check out [2].

[1] https://www.openstack.org/marketplace/
[2] http://www.openstack.org/assets/marketplace/join-the-marketplace.pdf

At Your Service,

Mark T. Voelker



> On Sep 15, 2015, at 7:32 AM, Monty Taylor <mordred at inaugust.com> wrote:
> 
> Hi!
> 
> In some of our other discussions, there have been musings such as "people want to..." or "people are concerned about..." Those are vague and unsubstantiated. Instead of "people" - I thought I'd enumerate actual data that I have personally empirically gathered.
> 
> I currently have an account on 12 different public clouds:
> 
> Auro
> CityCloud
> Dreamhost
> Elastx
> EnterCloudSuite
> HP
> OVH
> Rackspace
> RunAbove
> Ultimum
> UnitedStack
> Vexxhost
> 
> 
> (if, btw, you have a public cloud that I did not list above, please poke me and let's get me an account so that I can make sure you're listed/supported in os-client-config and also so that I don't make sweeping generalizations without you)
> 
> In case you care- those clouds cover US, Canada, Sweden, UK, France, Germany, Netherlands, Czech Republic and China.
> 
> Here's the rundown:
> 
> 11 of the 12 clouds run Glance v2, 1 only has Glance v1
> 11 of the 12 clouds support image-create, 1 uses tasks
> 8 of the 12 support qcow2, 3 require raw, 1 requires vhd
> 
> Use this data as you will.
> 
> Monty
> 
> Monty
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From doug at doughellmann.com  Tue Sep 15 15:36:46 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 11:36:46 -0400
Subject: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was:
	final liberty cycle client library releases needed)
In-Reply-To: <55F8331C.7020709@redhat.com>
References: <1442328235-sup-901@lrrr.local> <55F8331C.7020709@redhat.com>
Message-ID: <1442331113-sup-8198@lrrr.local>

Excerpts from Dmitry Tantsur's message of 2015-09-15 17:02:52 +0200:
> Hi folks!
> 
> As you can see below, we have to make the final release of 
> python-ironic-inspector-client really soon. We have 2 big missing parts:
> 
> 1. Introspection rules support.
>     I'm working on it: https://review.openstack.org/#/c/223096/
>     This required a substantial requirement, so that our client does not 
> become a complete mess: https://review.openstack.org/#/c/223490/

At this point in the schedule, I'm not sure it's a good idea to be
doing anything that's considered a "substantial" rewrite (what I
assume you meant instead of a "substantial requirement").

What depends on python-ironic-inspector-client? Are all of the things
that depend on it working for liberty right now? If so, that's your
liberty release and the rewrite should be considered for mitaka.

> 
> 2. Support for getting introspection data. John (trown) volunteered to 
> do this work.
> 
> I'd like to ask the inspector team to pay close attention to these 
> patches, as the deadline for them is Friday (preferably European time).

You should definitely not be trying to write anything new at this point.
The feature freeze was *last* week. The releases for this week are meant
to include bug fixes and any needed requirements updates.
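The requirements update Doug refers to boils down to comparing a library's pin in openstack/requirements/upper-constraints.txt against its latest release. A small sketch of that check (the function name is mine; the `name===version` line format, optionally followed by an environment marker, is how that file is laid out):

```python
def constraint_version(constraints_text, package):
    """Return the version pinned for `package` in an
    upper-constraints.txt body, or None if it is not pinned.

    Lines look like 'python-saharaclient===0.11.0', possibly with a
    trailing environment marker; comments and blank lines are skipped.
    """
    for line in constraints_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        name, sep, version = line.partition("===")
        # Strip an environment marker such as ;python_version=='2.7'.
        version = version.split(";", 1)[0].strip()
        if sep and name.strip().lower() == package.lower():
            return version
    return None
```

If the returned pin lags the final release you just cut, that is the one-line patch Doug is asking for.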

> 
> Next, please have a look at the milestone page for ironic-inspector 
> itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
> There are things that require review, and there are things without an 
> assignee. If you'd like to volunteer for something there, please assign 
> it to yourself. Our deadline is next Thursday, but it would be really 
> good to finish it earlier next week to dedicate some time to testing.
> 
> Thanks all, I'm looking forward to this release :)
> 
> 
> -------- Forwarded Message --------
> Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle 
> client library releases needed
> Date: Tue, 15 Sep 2015 10:45:45 -0400
> From: Doug Hellmann <doug at doughellmann.com>
> Reply-To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev at lists.openstack.org>
> To: openstack-dev <openstack-dev at lists.openstack.org>
> 
> Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> > On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> > >> PTLs and release liaisons,
> > >>
> > >> In order to keep the rest of our schedule for the end-of-cycle release
> > >> tasks, we need to have final releases for all client libraries in the
> > >> next day or two.
> > >>
> > >> If you have not already submitted your final release request for this
> > >> cycle, please do that as soon as possible.
> > >>
> > >> If you *have* already submitted your final release request for this
> > >> cycle, please reply to this email and let me know that you have so I can
> > >> create your stable/liberty branch.
> > >>
> > >> Thanks!
> > >> Doug
> > >
> > > I forgot to mention that we also need the constraints file in
> > > global-requirements updated for all of the releases, so we're actually
> > > testing with them in the gate. Please take a minute to check the version
> > > specified in openstack/requirements/upper-constraints.txt for your
> > > libraries and submit a patch to update it to the latest release if
> > > necessary. I'll do a review later in the week, too, but it's easier to
> > > identify the causes of test failures if we have one patch at a time.
> >
> > Hi Doug!
> >
> > When is the last and final deadline for doing all this for
> > not-so-important and non-release:managed projects like ironic-inspector?
> > We still lack some Liberty features covered in
> > python-ironic-inspector-client. Do we have time until end of week to
> > finish them?
> 
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.
> 
> https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> 
> Doug
> 
> >
> > Sorry if you hear this question too often :)
> >
> > Thanks!
> >
> > >
> > > Doug
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> 


From doug at doughellmann.com  Tue Sep 15 15:38:36 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 11:38:36 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <1F49ED55-E92C-460C-A41A-7F1E3A785C2E@mirantis.com>
References: <1442234537-sup-4636@lrrr.local> <1442240201-sup-1222@lrrr.local>
 <55F82820.1060104@redhat.com> <1442328235-sup-901@lrrr.local>
 <1F49ED55-E92C-460C-A41A-7F1E3A785C2E@mirantis.com>
Message-ID: <1442331424-sup-8304@lrrr.local>

Excerpts from Renat Akhmerov's message of 2015-09-15 18:11:58 +0300:
> 
> > On 15 Sep 2015, at 17:45, Doug Hellmann <doug at doughellmann.com> wrote:
> > 
> > We would like for the schedule to be the same for everyone. We need the
> > final versions for all libraries this week, so we can update
> > requirements constraints by early next week before the RC1.
> 
> 
> "For everyone" meaning "for all Big Tent projects"? I'm trying to figure out if that affects projects like Mistral that are not massively interdependent with other projects.

If you plan to have a Liberty release, we would like for you to follow
the same release schedule as everyone else.

Doug


From doug at doughellmann.com  Tue Sep 15 15:44:27 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 11:44:27 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <CA+GZd7_VbXpgms7YeXnP-oaTK6U98tbrYUmhDJU2OF0AomcOSA@mail.gmail.com>
References: <1442234537-sup-4636@lrrr.local> <1442240201-sup-1222@lrrr.local>
 <55F82820.1060104@redhat.com> <1442328235-sup-901@lrrr.local>
 <CA+GZd7_VbXpgms7YeXnP-oaTK6U98tbrYUmhDJU2OF0AomcOSA@mail.gmail.com>
Message-ID: <1442331526-sup-4861@lrrr.local>

Excerpts from Sergey Lukjanov's message of 2015-09-15 18:12:23 +0300:
> We're in a good shape with sahara client. 0.11.0 is the final minor release
> for it. Constraints are up to date.

Thanks, Sergey! We appreciate you and the rest of the Sahara team staying
on track with the work needed for the release schedule.

Doug


From dougwig at parksidesoftware.com  Tue Sep 15 15:47:41 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Tue, 15 Sep 2015 09:47:41 -0600
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
	'default' network model
In-Reply-To: <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
Message-ID: <A68D71C5-D383-4287-9A1C-B26FC03ADAD5@parksidesoftware.com>

Hi all,

One solution to this was a neutron spec that was added for a "get me a network" API, championed by Jay Pipes, which would auto-assign a public network on VM boot. It looks like it was resource-starved in Liberty, though:

https://blueprints.launchpad.net/neutron/+spec/get-me-a-network <https://blueprints.launchpad.net/neutron/+spec/get-me-a-network>

Thanks,
doug


> On Sep 15, 2015, at 9:27 AM, Mike Spreitzer <mspreitz at us.ibm.com> wrote:
> 
> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
> 
> > a) an update to python-novaclient to allow a named network to be passed 
> > to satisfy the "you have more than one network" - the nics argument is 
> > still useful for more complex things
> 
> I am not using the latest, but rather Juno.  I find that in many places the Neutron CLI insists on a UUID when a name could be used.  Three cheers for any campaign to fix that.
> 
> And, yeah, creating VMs on a shared public network is good too.
> 
> Thanks,
> mike
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/85ff6d48/attachment.html>

From flavio at redhat.com  Tue Sep 15 15:48:59 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 15 Sep 2015 17:48:59 +0200
Subject: [openstack-dev] [glance] The current state of glance v2 in
 public	clouds
In-Reply-To: <AFDBC4CB-926A-4A6E-9303-F4A74CCB27E6@vmware.com>
References: <55F801C1.1050906@inaugust.com>
 <AFDBC4CB-926A-4A6E-9303-F4A74CCB27E6@vmware.com>
Message-ID: <20150915154859.GJ9301@redhat.com>

On 15/09/15 15:30 +0000, Mark Voelker wrote:
>As another data point, I took a poke around the OpenStack Marketplace [1] this morning and found:
>
>* 1 distro/appliance claims v1 support
>* 3 managed services claim v1 support
>* 3 public clouds claim v1 support
>
>And everyone else claims v2 support.  I'd encourage vendors to check their Marketplace data for accuracy; if something's wrong there, reach out to ecosystem at openstack.org to enquire about fixing it.  If you simply aren't listed on the Marketplace and would like to be, check out [2].
>
>[1] https://www.openstack.org/marketplace/
>[2] http://www.openstack.org/assets/marketplace/join-the-marketplace.pdf

Great!

>> On Sep 15, 2015, at 7:32 AM, Monty Taylor <mordred at inaugust.com> wrote:
>>
>> Hi!
>>
>> In some of our other discussions, there have been musings such as "people want to..." or "people are concerned about..." Those are vague and unsubstantiated. Instead of "people" - I thought I'd enumerate actual data that I have personally empirically gathered.
>>
>> I currently have an account on 12 different public clouds:
>>
>> Auro
>> CityCloud
>> Dreamhost
>> Elastx
>> EnterCloudSuite
>> HP
>> OVH
>> Rackspace
>> RunAbove
>> Ultimum
>> UnitedStack
>> Vexxhost
>>
>>
>> (if, btw, you have a public cloud that I did not list above, please poke me and let's get me an account so that I can make sure you're listed/supported in os-client-config and also so that I don't make sweeping generalizations without you)
>>
>> In case you care- those clouds cover US, Canada, Sweden, UK, France, Germany, Netherlands, Czech Republic and China.
>>
>> Here's the rundown:
>>
>> 11 of the 12 clouds run Glance v2; 1 has only Glance v1
>> 11 of the 12 clouds support image-create, 1 uses tasks
>> 8 of the 12 support qcow2, 3 require raw, 1 requires vhd

Thanks for taking the time. This is actually good info, and I won't
hide my surprise, since I thought most of the clouds were disabling
Glance's v2.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/ec356a6e/attachment.pgp>

From gord at live.ca  Tue Sep 15 15:58:55 2015
From: gord at live.ca (gord chung)
Date: Tue, 15 Sep 2015 11:58:55 -0400
Subject: [openstack-dev] [ceilometer] PTL candidacy
Message-ID: <BLU436-SMTP370B193EA7162FDA3B067ADE5C0@phx.gbl>

hi folks,

less than six months ago, i decided to run for PTL of Ceilometer where 
my main goal was to support the community of contributors that exists 
within OpenStack with interests in telemetry[1]. it is under that tenet 
that i will run again for team lead of Ceilometer. as mentioned 
previously, we have a diverse set of contributors from across the globe 
working on various aspects of metering and monitoring and it is my goal 
to ensure nothing slows them down (myself included).

that said, as we look forward to Mitaka, i hope to follow along the path 
of stability, simplicity and usability. some items i'd like to target are:

- rolling upgrades - having a fluid upgrade path for operators is 
critical to providing a highly available cloud environment for their 
users. i would like to have a viable solution in Ceilometer that can 
provide this functionality with zero/minimal performance degradation.

- building up events - we started work on adding inline event alarming 
in Aodh during Liberty, this is something i'd like to improve upon by 
adding multiple worker support and broader alarm evaluations. also, a 
common use case for events is to analyse the data for BI. while we 
already allow querying and alarming on events, one useful addition would 
be the ability to run statistics on events, such as the number of 
instances launched.

- optimising collection - we improved ease of use by adding declarative 
notification support in Liberty. it'd be great if this work could be 
adopted by projects producing metrics. additionally, we currently have an 
extremely tight coupling between resource metadata and measurement data. 
i'd like to evaluate how to loosen this so that our data collection and 
storage are more flexible.

- continuing the refactoring - removing deprecated/redundant 
functionality; it was nice deprecating/deleting stuff, let's keep doing 
it (within reason)! one possible target would be splitting storage/api, 
while starting the initial deprecation of v2 metering api.

- functional and integration testing - we now have integration and 
functional tests living within our repositories. this should make it 
easier to develop tests, so it'd be good to broaden the coverage.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-April/060536.html

cheers,

-- 
gord



From emilien at redhat.com  Tue Sep 15 16:03:36 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 15 Sep 2015 12:03:36 -0400
Subject: [openstack-dev] [puppet] weekly meeting #51
In-Reply-To: <55F70011.7080301@redhat.com>
References: <55F70011.7080301@redhat.com>
Message-ID: <55F84158.8000604@redhat.com>



On 09/14/2015 01:12 PM, Emilien Macchi wrote:
> Hello,
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150915
> 
> Tomorrow we will make our Sprint Retrospective, please share your
> experience on the etherpad [1].
> 
> Also, feel free to add any additional items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.
> 
> Regards,
> 
> [1] https://etherpad.openstack.org/p/puppet-liberty-sprint-retrospective
> 

We did our meeting, you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html

Thanks for attending, have a great week!
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/9beafe94/attachment.pgp>

From mordred at inaugust.com  Tue Sep 15 16:04:10 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 15 Sep 2015 18:04:10 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <alpine.DEB.2.11.1509151411370.14097@tc-unix2.emea.hpqcorp.net>
References: <alpine.DEB.2.11.1509151411370.14097@tc-unix2.emea.hpqcorp.net>
Message-ID: <55F8417A.6060601@inaugust.com>

On 09/15/2015 03:13 PM, stuart.mclaren at hp.com wrote:
>>
>> After having some conversations with folks at the Ops Midcycle a
>> few weeks ago, and observing some of the more recent email threads
>> related to glance, glance-store, the client, and the API, I spent
>> last week contacting a few of you individually to learn more about
>> some of the issues confronting the Glance team. I had some very
>> frank, but I think constructive, conversations with all of you about
>> the issues as you see them. As promised, this is the public email
>> thread to discuss what I found, and to see if we can agree on what
>> the Glance team should be focusing on going into the Mitaka summit
>> and development cycle and how the rest of the community can support
>> you in those efforts.
>
> Doug, thanks for reaching out here.
>
> I've been looking into the existing task-based-upload that Doug mentions:
> can anyone clarify the following?
>
> On a default devstack install you can do this 'task' call:
>
> http://paste.openstack.org/show/462919

Yup. That's the one.

> as an alternative to the traditional image upload (the bytes are streamed
> from the URL).
>
> It's not clear to me if this is just an interesting example of the kind
> of operator specific thing you can configure tasks to do, or a real
> attempt to define an alternative way to upload images.
>
> The change which added it [1] calls it a 'sample'.
>
> Is it just an example, or is it a second 'official' upload path?

It's how you have to upload images on Rackspace. If you want to see the 
full fun:

https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510

Which is "I want to upload an image to an OpenStack Cloud"

I've listed it on this slide in CLI format too:

http://inaugust.com/talks/product-management/index.html#/27

It should be noted that once you create the task, you need to poll the 
task with task-show, and then the image id will be in the completed 
task-show output.
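A minimal sketch of that create-then-poll loop (the status values and the
result['image_id'] field match what I've seen in task-show output, but
treat the exact names as assumptions; the task accessor is injected so the
loop is client-agnostic):

```python
import time


def wait_for_task(fetch_task, task_id, interval=2.0, timeout=600.0):
    """Poll a Glance task until it finishes; return the new image id.

    fetch_task is any callable returning an object with .status,
    .result and .message -- e.g. glance.tasks.get with a real
    python-glanceclient. Injected here so the loop is testable.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = fetch_task(task_id)
        if task.status == "success":
            # The completed task carries the id of the imported image.
            return task.result["image_id"]
        if task.status == "failure":
            raise RuntimeError("image import failed: %s" % task.message)
        time.sleep(interval)
    raise TimeoutError("task %s did not finish within %ss" % (task_id, timeout))
```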

Monty


From anteaya at anteaya.info  Tue Sep 15 16:12:30 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Tue, 15 Sep 2015 10:12:30 -0600
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
 communications
In-Reply-To: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
Message-ID: <55F8436E.9090208@anteaya.info>

On 09/15/2015 08:50 AM, Anne Gentle wrote:
> Hi all,
> 
> What can we do to make the cross-project meeting more helpful and useful
> for cross-project communications? I started with a proposal to move it to a
> different time, which morphed into an idea to alternate times. But, knowing
> that we need to layer communications I wonder if we should troubleshoot
> cross-project communications further? These are the current ways
> cross-project communications happen:
> 
> 1. The weekly meeting in IRC
> 2. The cross-project specs and reviewing those
> 3. Direct connections between team members
> 4. Cross-project talks at the Summits
> 
> What are some of the problems with each layer?
> 
> 1. weekly meeting: time zones, global reach, size of cross-project concerns
> due to multiple projects being affected, another meeting for PTLs to attend
> and pay attention to
> 2. specs: don't seem to get much attention unless they're brought up at
> weekly meeting, finding owners for the work needing to be done in a spec is
> difficult since each project team has its own priorities
> 3. direct communications: decisions from these comms are difficult to then
> communicate more widely, it's difficult to get time with busy PTLs
> 4. Summits: only happens twice a year, decisions made then need to be
> widely communicated
> 
> I'm sure there are more details and problems I'm missing -- feel free to
> fill in as needed.
> 
> Lastly, what suggestions do you have for solving problems with any of these
> layers?
> 
> Thanks,
> Anne
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Hi Anne,

Thanks for starting the conversation as I think it is an area we can
really benefit from improving.

For my part, I have been trying to attend multiple mid-cycles in order
to do what I can to alleviate the effect of decisions made by one group
either duplicating work with another or directly contravening it.

It is one small part of the over communication issue which affects us
all but I do believe it has some benefit to those with whom I am able to
interact. One of the things I look for is ways for folks who need to be
discussing something specific, since they are both involved in the same
problem area, to become aware of the efforts of the other and to
encourage direct communication between those involved.

I know that due to financial and personal constraints this strategy
doesn't scale, but I did want to raise awareness of my efforts in your
status report.

Thanks for initiating the discussion, Anne,
Anita.


From carl at ecbaldwin.net  Tue Sep 15 16:12:28 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 15 Sep 2015 10:12:28 -0600
Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate and
	maintainability
In-Reply-To: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
References: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
Message-ID: <CALiLy7qt_1N2H=o6KtGDWhxDAyFLLewn7-nkDjrFbyJP0u51Xg@mail.gmail.com>

Sean,

Thank you for writing this.  It is clear that we have some work to do
and we need more attention on this.  We were able to get the job
voting a few months ago when the failure rates for all the jobs were
at a low point.  However, we never really addressed the fact that this
job has always had a little bit higher rate than its non-DVR
counter-part.  DVR is a supported feature now and we need to be behind
it.

I'm adding this to the agenda for the L3 meeting this Thursday [1].
Let's dedicate real talent and time to getting to the bottom of the
higher failure rate, driving the bugs out, and making the enhancements
needed to make this feature what it should be.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

On Mon, Sep 14, 2015 at 4:01 PM, Sean M. Collins <sean at coreitpro.com> wrote:
> Hi,
>
> Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
> at the QA sprint in Fort Collins. Earlier today there was a discussion
> about the failure rate about the DVR job, and the possible impact that
> it is having on the gate.
>
> Ryan has a good patch up that shows the failure rates over time:
>
> https://review.openstack.org/223201
>
> To view the graphs, you go over into your neutron git repo, and open the
> .html files that are present in doc/dashboards - which should open up
> your browser and display the Graphite query.
>
> Doug put up a patch to change the DVR job to be non-voting while we
> determine the cause of the recent spikes:
>
> https://review.openstack.org/223173
>
> There was a good discussion after pushing the patch, revolving around
> the need for Neutron to have DVR, to fit operational and reliability
> requirements, and help transition away from Nova-Network by providing
> one of many solutions similar to Nova's multihost feature.  I'm skipping
> over a huge amount of context about the Nova-Network and Neutron work,
> since that is a big and ongoing effort.
>
> DVR is an important feature to have, and we need to ensure that the job
> that tests DVR has a high pass rate.
>
> One thing that I think we need, is to form a group of contributors that
> can help with the DVR feature in the immediate term to fix the current
> bugs, and longer term maintain the feature. It's a big task and I don't
> believe that a single person or company can or should do it by themselves.
>
> The L3 group is a good place to start, but I think that even within the
> L3 team we need dedicated and diverse group of people who are interested
> in maintaining the DVR feature.
>
> Without this, I think the DVR feature will start to bit-rot and that
> will have a significant impact on our ability to recommend Neutron as a
> replacement for Nova-Network in the future.
>
> --
> Sean M. Collins
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From armamig at gmail.com  Tue Sep 15 16:16:31 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 09:16:31 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
Message-ID: <CAK+RQeYeNgn88kF9i+R+_bV6c17MFxYbe4kcxh_-6UnaH1OU1Q@mail.gmail.com>

On 15 September 2015 at 08:27, Mike Spreitzer <mspreitz at us.ibm.com> wrote:

> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>
> > a) an update to python-novaclient to allow a named network to be passed
> > to satisfy the "you have more than one network" - the nics argument is
> > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three cheers
> for any campaign to fix that.


The client is not particularly tied to a specific version of the server, so
we don't have a Juno version, or a Kilo version, etc. (even though they are
aligned, see [1] for more details).

Having said that, you can use names in place of UUIDs pretty much
anywhere. If your experience says otherwise, please consider filing a bug
against the client [2] and we'll get it fixed.

Thanks,
Armando

[1] https://launchpad.net/python-neutronclient/+series
[2] https://bugs.launchpad.net/python-neutronclient/+filebug


>
>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/b3831e03/attachment.html>

From mestery at mestery.com  Tue Sep 15 16:17:50 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Tue, 15 Sep 2015 11:17:50 -0500
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <A68D71C5-D383-4287-9A1C-B26FC03ADAD5@parksidesoftware.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
 <A68D71C5-D383-4287-9A1C-B26FC03ADAD5@parksidesoftware.com>
Message-ID: <CAL3VkVxCmbPS2FNjdJmS3RM9+5P04_QUT7ctMv89J56yyMNJdg@mail.gmail.com>

That's exactly right, and we need to get this merged early in Mitaka. We'll
discuss this in a design summit session in Tokyo in fact to ensure it's
resourced correctly and continues to address the evolving needs in this
space.

On Tue, Sep 15, 2015 at 10:47 AM, Doug Wiegley <dougwig at parksidesoftware.com
> wrote:

> Hi all,
>
> One solution to this was a neutron spec that was added for a 'get me a
> network' API, championed by Jay Pipes, which would auto-assign a public
> network on vm boot. It looks like it was resource starved in Liberty,
> though:
>
> https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
>
> Thanks,
> doug
>
>
> On Sep 15, 2015, at 9:27 AM, Mike Spreitzer <mspreitz at us.ibm.com> wrote:
>
> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>
> > a) an update to python-novaclient to allow a named network to be passed
> > to satisfy the "you have more than one network" - the nics argument is
> > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three cheers
> for any campaign to fix that.
>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/d229bc4f/attachment.html>

From lyz at princessleia.com  Tue Sep 15 16:20:10 2015
From: lyz at princessleia.com (Elizabeth K. Joseph)
Date: Tue, 15 Sep 2015 09:20:10 -0700
Subject: [openstack-dev] [Infra] Meeting Tuesday September 15th at 19:00 UTC
Message-ID: <CABesOu02TXxHbmWM7F9_jzC5Jhb-ZQJ7vri4nePRyHErAHmCtw@mail.gmail.com>

Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday September 15th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.log.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.txt
Log: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2


From john.griffith8 at gmail.com  Tue Sep 15 16:24:24 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 15 Sep 2015 10:24:24 -0600
Subject: [openstack-dev] [Cinder]Behavior when one cinder-volume service
 is down
In-Reply-To: <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
References: <CAEOp6J-3twSrjhHQ2OHo+4z7vzwEZ-EJvM61e8vQETUoSgkWVQ@mail.gmail.com>
 <CAPWkaSU4q7VXT9UMsvrucs38Ju1OFjCAt9zM0aEDQ+rda4FGBA@mail.gmail.com>
 <CAEOp6J-FwjQQWNdgaw6-ygTO0pd-AfXjFOd+UzdXszfiuYiPGg@mail.gmail.com>
Message-ID: <CAPWkaSUQtM5rKqfSrJtRsAzhq5p87uA2JP6GkyMyX=ShgazPXw@mail.gmail.com>

On Tue, Sep 15, 2015 at 8:53 AM, Eduard Matei <
eduard.matei at cloudfounders.com> wrote:

> Hi,
>
> Let me see if i got this:
> - running 3 (multiple) c-vols won't automatically give you failover
>
correct


> - each c-vol is "master" of a certain number of volumes
>
yes


> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
>
By default no, but you can configure an HA setup of multiple c-vol
services.  There are a number of folks doing this in production and there's
probably better documentation on how to achieve this, but this gives a
decent enough start:
http://docs.openstack.org/high-availability-guide/content/s-cinder-api.html


>
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
> of A/A - so this means i need to look into Pacemaker and virtual-ips, or i
> should try first the "same name".
>
Yes, I gathered... and to do that you need to do something like name the
backends the same and use a VIP in front of them.
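For reference, the "same name" approach boils down to giving every c-vol
node the same host identity in cinder.conf and putting a VIP in front of
them. A hedged sketch only (option names follow common cinder.conf usage;
verify against your release's documentation before relying on them):

```ini
# /etc/cinder/cinder.conf -- identical on every c-vol node
[DEFAULT]
# Same "host" everywhere, so all nodes claim ownership of the same volumes.
host = cinder-cluster-1
enabled_backends = shared-backend

[shared-backend]
volume_backend_name = shared-backend
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
```

API traffic then goes to the VIP (managed by e.g. Pacemaker), so whichever
node is up handles the request.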


>
> Thanks,
>
> Eduard
>
> PS. @Michal: Where are volumes physically in case of your driver? <-
> similar to ceph, on a distributed object storage service (whose disks can
> be anywhere even on the same compute host)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/e18fe5ff/attachment-0001.html>

From armamig at gmail.com  Tue Sep 15 16:30:35 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 09:30:35 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F83367.9050503@inaugust.com>
References: <55F83367.9050503@inaugust.com>
Message-ID: <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>

On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com> wrote:

> Hey all!
>
> If any of you have ever gotten drunk with me, you'll know I hate floating
> IPs more than I hate being stabbed in the face with a very angry fish.
>
> However, that doesn't really matter. What should matter is "what is the
> most sane thing we can do for our users"
>
> As you might have seen in the glance thread, I have a bunch of OpenStack
> public cloud accounts. Since I wrote that email this morning, I've added
> more - so we're up to 13.
>
> auro
> citycloud
> datacentred
> dreamhost
> elastx
> entercloudsuite
> hp
> ovh
> rackspace
> runabove
> ultimum
> unitedstack
> vexxhost
>
> Of those public clouds, 5 of them require you to use a floating IP to get
> an outbound address, the others directly attach you to the public network.
> Most of those 8 allow you to create a private network, to boot vms on the
> private network, and ALSO to create a router with a gateway and put
> floating IPs on your private ip'd machines if you choose.
>
> Which brings me to the suggestion I'd like to make.
>
> Instead of having our default in devstack and our default when we talk
> about things be "you boot a VM and you put a floating IP on it" - which
> solves one of the two usage models - how about:
>
> - Cloud has a shared: True, external:routable: True neutron network. I
> don't care what it's called  ext-net, public, whatever. the "shared" part
> is the key, that's the part that lets someone boot a vm on it directly.
>
> - Each person can then make a private network, router, gateway, etc. and
> get floating-ips from the same public network if they prefer that model.
>
> Are there any good reasons to not push to get all of the public networks
> marked as "shared"?
>

The reason is simple: not every cloud deployment is the same. Private is
different from public, and even within the same cloud model the network
topology may vary greatly.

Perhaps Neutron fails in the sense that it provides you with too much
choice, and perhaps we have to standardize on the type of networking
profile expected by a user of OpenStack public clouds before making changes
that would fragment this landscape even further.

If you are advocating for more flexibility without limiting the existing
one, we're only making the problem worse.


>
> OH - well, one thing - that's that once there are two networks in an
> account you have to specify which one. This is really painful in nova
> client. Say, for instance, you have a public network called "public" and a
> private network called "private" ...
>
> You can't just say "nova boot --network=public" - nope, you need to say
> "nova boot --nics net-id=$uuid_of_my_public_network"
>
> So I'd suggest 2 more things;
>
> a) an update to python-novaclient to allow a named network to be passed to
> satisfy the "you have more than one network" - the nics argument is still
> useful for more complex things
>
> b) ability to say "vms in my cloud should default to being booted on the
> public network" or "vms in my cloud should default to being booted on a
> network owned by the user"
>
> Thoughts?
>

As I implied earlier, I am not sure how healthy this choice is. As a user
of multiple clouds I may end up having a different user experience based on
which cloud I am using...I thought you were partially complaining about
lack of consistency?


>
> Monty
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/49b449f5/attachment.html>

From doc at aedo.net  Tue Sep 15 16:32:19 2015
From: doc at aedo.net (Christopher Aedo)
Date: Tue, 15 Sep 2015 09:32:19 -0700
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
In-Reply-To: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
Message-ID: <CA+odVQHvDfnbfw32-O68WeBmecpp9_QeL7q=fQFTzaThsQe9jQ@mail.gmail.com>

On Tue, Sep 15, 2015 at 7:50 AM, Anne Gentle
<annegentle at justwriteclick.com> wrote:
> Hi all,
>
> What can we do to make the cross-project meeting more helpful and useful for
> cross-project communications? I started with a proposal to move it to a
> different time, which morphed into an idea to alternate times. But, knowing
> that we need to layer communications I wonder if we should troubleshoot
> cross-project communications further? These are the current ways
> cross-project communications happen:
>
> 1. The weekly meeting in IRC
> 2. The cross-project specs and reviewing those
> 3. Direct connections between team members
> 4. Cross-project talks at the Summits

5. This mailing list

>
> What are some of the problems with each layer?
>
> 1. weekly meeting: time zones, global reach, size of cross-project concerns
> due to multiple projects being affected, another meeting for PTLs to attend
> and pay attention to
> 2. specs: don't seem to get much attention unless they're brought up at
> weekly meeting, finding owners for the work needing to be done in a spec is
> difficult since each project team has its own priorities
> 3. direct communications: decisions from these comms are difficult to then
> communicate more widely, it's difficult to get time with busy PTLs
> 4. Summits: only happens twice a year, decisions made then need to be widely
> communicated

5. There's tremendous volume on the mailing list, and it can be very
difficult to stay on top of all that traffic.

>
> I'm sure there are more details and problems I'm missing -- feel free to
> fill in as needed.
>
> Lastly, what suggestions do you have for solving problems with any of these
> layers?

Unless I missed it, I'm really not sure why the mailing list didn't
make the list here?  My take at least is that we should be
coordinating with each other through the mailing list when real-time
isn't possible (due time zone issues, etc.)  At the very least, it
keeps people from holding on to information or issues until the next
weekly meeting, or for a few months until the next mid-cycle or
summit.

I personally would like to see more coordination happening on the ML,
and would be curious to hear opinions on how that can be improved.
Maybe a tag on the subject line to draw attention in this case makes
this a little easier, since we are by nature talking about issues that
span all projects?  [cross-project] rather than [all]?

-Christopher


From dougal at redhat.com  Tue Sep 15 16:33:43 2015
From: dougal at redhat.com (Dougal Matthews)
Date: Tue, 15 Sep 2015 17:33:43 +0100
Subject: [openstack-dev] [TripleO] Remove Tuskar from tripleo-common and
	python-tripleoclient
Message-ID: <CAPMB-2TmL=ZdmA7mJjTBs5SjPqEfNi-pzf09dMGbay5AYj8Rew@mail.gmail.com>

Hi all,

This is partly a heads up for everyone, but also seeking feedback on the
direction.

We are starting to move to a more general Heat workflow without the need for
Tuskar. The CLI is already in a position to do this as we can successfully
deploy without Tuskar.

Moving forward it will be much easier for us to progress if we don't need to
take Tuskar into account in tripleo-common. This will be particularly useful
when working on the overcloud deployment library and API spec [1].

Tuskar UI doesn't currently use tripleo-common (or tripleoclient), and thus
it is safe to make this change from the UI's point of view.

I have started the process of doing this removal and posted three WIP
reviews
[2][3][4] to assess how much change was needed, I plan to tidy them up over
the next day or two. There is one for tripleo-common, python-tripleoclient
and tripleo-docs. The documentation one only removes references to Tuskar on
the CLI and doesn't remove Tuskar totally - so Tuskar UI is still covered
until it has a suitable replacement.

I don't anticipate any impact for CI as I understand that all the current CI
has migrated from deploying with Tuskar to deploying the templates directly
(Using `openstack overcloud deploy --templates` rather than --plan). I
believe it is safe to remove from python-tripleoclient as that repo is so
new. I am, however, unsure about the TripleO deprecation policy for
tripleo-common?

Thanks,
Dougal


[1]: https://review.openstack.org/219754
[2]: https://review.openstack.org/223527
[3]: https://review.openstack.org/223535
[4]: https://review.openstack.org/223605
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/45b48b6e/attachment.html>

From mhorban at mirantis.com  Tue Sep 15 16:38:58 2015
From: mhorban at mirantis.com (mhorban)
Date: Tue, 15 Sep 2015 19:38:58 +0300
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
Message-ID: <55F849A2.7080707@mirantis.com>

Hi guys,

I would like to talk about reloading a service's configuration while the
service is running.
We currently have the ability to reload a service's config with the SIGHUP signal.
Right now SIGHUP just causes a call to conf.reload_config_files().
As a result the configuration is updated, but the services don't know about it;
there is no way to notify them.
I've created review https://review.openstack.org/#/c/213062/ to allow a
service's code to run on the config-reload event.
A possible usage can be seen in https://review.openstack.org/#/c/223668/.

Any ideas or suggestions?
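To make the proposal concrete, a minimal sketch of the pattern might look like
the following. The hook-registration API shown here (add_reload_hook) is
hypothetical, not the actual oslo.config interface or the one under review:

```python
import signal

class ReloadableService:
    """Sketch: a service that lets code react to a SIGHUP config reload."""

    def __init__(self, conf):
        self.conf = conf
        self._hooks = []
        signal.signal(signal.SIGHUP, self._on_sighup)

    def add_reload_hook(self, callback):
        """Register a callable invoked after config files are re-read."""
        self._hooks.append(callback)

    def _on_sighup(self, signum, frame):
        # Today SIGHUP only triggers the reload; the proposal adds the
        # notification step below so services can react to new values.
        self.conf.reload_config_files()
        for hook in self._hooks:
            hook(self.conf)
```

A service could then register a hook that, say, resizes a worker pool when the
relevant option changes, instead of silently running with stale settings.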


From alawson at aqorn.com  Tue Sep 15 16:43:20 2015
From: alawson at aqorn.com (Adam Lawson)
Date: Tue, 15 Sep 2015 09:43:20 -0700
Subject: [openstack-dev] [Fuel] fuel-createmirror "command not found"
Message-ID: <CAJfWK48OY3G_CBg482Ug4-XjbN4DsHJqsZm6LBayvnQ157Ry_Q@mail.gmail.com>

Hi guys,
Is there a trick to getting the fuel-createmirror command to work? A customer's
Fuel environment was at 6.0 and was upgraded to 6.1; trying to create a local
mirror failed. It's not working from the master node.


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

From mzoeller at de.ibm.com  Tue Sep 15 16:43:22 2015
From: mzoeller at de.ibm.com (Markus Zoeller)
Date: Tue, 15 Sep 2015 18:43:22 +0200
Subject: [openstack-dev] [nova] Liberty-RC1: Identification of release
	critical bugs (urgent)
Message-ID: <OF21DAED59.7B150176-ON00257EC1.005B9282-C1257EC1.005BDC7B@notes.na.collabserv.com>

We will enter the release candidates period by next week [1], which
means we have to identify the bugs which will block us from creating
a release candidate. The upcoming nova meeting on Thursday (09/17)
will discuss those. The current plan is to release RC1 on next 
Tuesday (09/22).

Everyone, and especially the subteams, is hereby asked to identify
those bugs in their area of expertise. The bug tag to use is 
"liberty-rc-potential". If there is consensus that a bug with this
tag is blocking our release candidate, it will be targeted to the
"liberty-rc1" milestone. Last week's nova meeting already brought
this to our attention [2]. We can use the liberty tracking
etherpad as usual [3]. 

Regards,
Markus Zoeller (markus_z)

References:
[1] https://wiki.openstack.org/wiki/Liberty_Release_Schedule 
[2] 
http://eavesdrop.openstack.org/meetings/nova/2015/nova.2015-09-10-21.00.log.html
[3] https://etherpad.openstack.org/p/liberty-nova-priorities-tracking



From james.slagle at gmail.com  Tue Sep 15 16:51:25 2015
From: james.slagle at gmail.com (James Slagle)
Date: Tue, 15 Sep 2015 12:51:25 -0400
Subject: [openstack-dev] [TripleO] Current meeting timeslot
In-Reply-To: <55F8176F.4060809@redhat.com>
References: <55F18FD7.7070305@redhat.com> <55F8033C.6060701@redhat.com>
 <55F8176F.4060809@redhat.com>
Message-ID: <CAHV77z_PaiO2B-1QnQzBDNa8zkOVnPJk_jx9b9M=YSqXrvppTA@mail.gmail.com>

On Tue, Sep 15, 2015 at 9:04 AM, Derek Higgins <derekh at redhat.com> wrote:
>
>
> On 15/09/15 12:38, Derek Higgins wrote:
>>
>> On 10/09/15 15:12, Derek Higgins wrote:
>>>
>>> Hi All,
>>>
>>> The current meeting slot for TripleO is every second Tuesday @ 1900 UTC.
>>> Since that time slot was chosen, a lot of people have joined the team and
>>> others have moved on, so I'd like to revisit the timeslot to see if we can
>>> accommodate more people at the meeting (myself included).
>>>
>>> Sticking with Tuesday, I see two other slots available that I think will
>>> accommodate more people currently working on TripleO.
>>>
>>> Here is the etherpad[1]; can you please add your name under the time
>>> slots that would suit you, so we can get a good idea of how a change would
>>> affect people.
>>
>>
>> Looks like moving the meeting to 1400 UTC will best accommodate
>> everybody, I've proposed a patch to change our slot
>>
>> https://review.openstack.org/#/c/223538/
>
>
> This has merged, so as of next Tuesday the TripleO meeting will be at
> 1400 UTC
>
> Hope to see ye there

Thanks for running this down. I'm looking forward to seeing more folks
at the meeting next week. We have a standing wiki page where anyone
can add one-off agenda items that they'd like to discuss at the
meeting:
https://wiki.openstack.org/wiki/Meetings/TripleO

-- 
-- James Slagle
--


From doug at doughellmann.com  Tue Sep 15 16:52:40 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 12:52:40 -0400
Subject: [openstack-dev] [oslo] Help with stable/juno branches / releases
In-Reply-To: <20150826223200.GA86688@thor.bakeyournoodle.com>
References: <20150824095748.GA74505@thor.bakeyournoodle.com>
 <1440616274-sup-7044@lrrr.local>
 <20150826223200.GA86688@thor.bakeyournoodle.com>
Message-ID: <1442335836-sup-6104@lrrr.local>

Excerpts from Tony Breeds's message of 2015-08-27 08:32:01 +1000:
> On Wed, Aug 26, 2015 at 03:11:56PM -0400, Doug Hellmann wrote:
> > Tony,
> > 
> > Thanks for digging into this!
> 
> No problem.  It seemed like such a simple thing :/
> 
> > I should be able to help, but right now we're ramping up for the L3
> > feature freeze and there are a lot of release-related activities going
> > on. Can this wait a few weeks for things to settle down again?
> 
> Hi Doug,
>     Of course I'd rather not wait but I understand that I've uncovered a bit of
> a mess that is stable/juno :(

I've created the branches for oslo.utils and oslotest, as requested.
There are patches up for each to update the .gitreview file, which will
make it easier to land the patches to update whatever requirements
settings need to be adjusted.

Since these are managed libraries, you can request releases by
submitting patches to the openstack/releases repository (see the
README and ping me in #openstack-relmgr-office if you need a hand
the first time, I'll be happy to walk you through it).

Doug

> 
> Right now I need 3 releases for oslo packages and then releases for at least 5
> other projects from stable/juno (and that after I get the various reviews
> closed out) and it's quite possible that these releases will in turn generate
> more.
> 
> I have to admit I'm questioning if it's worth it.  Not because I think it's too
> hard, but it is substantial effort to put into juno which is (in theory) going
> to be EOL'd in 6 - 10 weeks.
> 
> I feel bad for asking that question as I've pulled in favors and people have
> agreed to $things that they're not entirely comfortable with so we can fix
> this.
> 
> Is it worth discussing this at next week's cross-project meeting?
> 
> Yours Tony.


From smelikyan at mirantis.com  Tue Sep 15 16:57:26 2015
From: smelikyan at mirantis.com (Serg Melikyan)
Date: Tue, 15 Sep 2015 09:57:26 -0700
Subject: [openstack-dev] [puppet] monasca,murano,mistral governance
In-Reply-To: <55F81E04.50302@redhat.com>
References: <55F73FB8.8050401@redhat.com>
 <CAHr1CO83BO6yk2L=ic9CJ+WXmXBs3ExeqQHs8NpCEb08vkm4-Q@mail.gmail.com>
 <CAK=NR9XOQz7RKw5D0evWOaOt1Cwtfk8YqXYOc6-wWb3i+9_mxA@mail.gmail.com>
 <55F81E04.50302@redhat.com>
Message-ID: <CAOnDsYPgaE=-yY9NH_Fn1HNzj6U+hkQ2xx0MnYQgTbkQv1+wGQ@mail.gmail.com>

Hi Emilien,

I don't think the Murano team needs to be core on a Puppet module either.
The current scheme was implemented per your proposal [1], and I am happy
that we are ready to move Murano to the same scheme that is used with
the other projects.

I've updated ACL for puppet-murano project:
https://review.openstack.org/223694


References:
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067260.html

On Tue, Sep 15, 2015 at 6:32 AM, Emilien Macchi <emilien at redhat.com> wrote:
>
>
> On 09/15/2015 07:39 AM, Ivan Berezovskiy wrote:
>> Emilien,
>>
>> The puppet-murano module has a bunch of patches from Alexey Deryugin on
>> review [0], which implement most of the Murano deployment stuff.
>> The Murano project was added to the OpenStack namespace not so long ago;
>> that's why I suggest keeping murano-core rights on puppet-murano as they
>> are until all these patches are merged.
>> Anyway, the murano-core team doesn't merge any patches without OpenStack
>> Puppet team approvals.
>
> [repeating what I said on IRC so it's official and public]
>
> I don't think Murano team needs to be core on a Puppet module.
> All OpenStack modules are managed by one group; this is how we have worked
> until now and I don't think we want to change that.
> Project teams (Keystone, Nova, Neutron, etc) already use -1/+1 to review
> Puppet code when they want to share feedback and they are very valuable,
> we actually need it.
> I don't see why we would make an exception for Murano. I would like the Murano
> team to continue to give their valuable feedback by -1/+1 on patches, but
> it's the Puppet OpenStack team's duty to decide whether to merge the code or not.
>
> This collaboration is important and we need your experience to create
> new modules, but please understand how Puppet OpenStack governance works
> now.
>
> Thanks,
>
>
>> [0]
>> - https://review.openstack.org/#/q/status:open+project:openstack/puppet-murano+owner:%22Alexey+Deryugin+%253Caderyugin%2540mirantis.com%253E%22,n,z
>>
>> 2015-09-15 1:01 GMT+03:00 Matt Fischer <matt at mattfischer.com
>> <mailto:matt at mattfischer.com>>:
>>
>>     Emilien,
>>
>>     I've discussed this with some of the Monasca puppet guys here who
>>     are doing most of the work. I think it probably makes sense to move
>>     to that model now, especially since the pace of development has
>>     slowed substantially. One blocker to having it in the "big tent" was
>>     the lack of test coverage, so as long as we know that's a work in
>>     progress...  I'd also like to get Brad Kiein's thoughts on this, but
>>     he's out of town this week. I'll ask him to reply when he is back.
>>
>>
>>     On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi <emilien at redhat.com
>>     <mailto:emilien at redhat.com>> wrote:
>>
>>         Hi,
>>
>>         As a reminder, Puppet modules that are part of OpenStack are
>>         documented
>>         here [1].
>>
>>         I can see puppet-murano & puppet-mistral Gerrit permissions
>>         different
>>         from other modules, because Mirantis helped to bootstrap the
>>         module a
>>         few months ago.
>>
>>         I think [2] the modules should be consistent in governance and only
>>         Puppet OpenStack group should be able to merge patches for these
>>         modules.
>>
>>         Same question for puppet-monasca: if Monasca team wants their module
>>         under the big tent, I think they'll have to change Gerrit
>>         permissions to
>>         only have Puppet OpenStack able to merge patches.
>>
>>         [1]
>>         http://governance.openstack.org/reference/projects/puppet-openstack.html
>>         [2] https://review.openstack.org/223313
>>
>>         Any feedback is welcome,
>>         --
>>         Emilien Macchi
>>
>>
>>         __________________________________________________________________________
>>         OpenStack Development Mailing List (not for usage questions)
>>         Unsubscribe:
>>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>>
>>
>> --
>> Thanks, Ivan Berezovskiy
>> MOS Puppet Team Lead
>> at Mirantis <https://www.mirantis.com/>
>>
>> slack: iberezovskiy
>> skype: bouhforever
>> phone: + 7-960-343-42-46
>>
>>
>>
>>
>
> --
> Emilien Macchi
>
>
>



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelikyan at mirantis.com


From chris.friesen at windriver.com  Tue Sep 15 17:00:23 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 15 Sep 2015 11:00:23 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
Message-ID: <55F84EA7.9080902@windriver.com>

I'm currently trying to work around an issue where activating LVM snapshots 
created through cinder can take a long time.  (The time is linearly related 
to the amount of data that differs between the original volume and the 
snapshot.)  On one system I tested, it took about one minute per 25 GB of 
data, so the worst-case boot delay can become significant.

According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were not 
intended to be kept around indefinitely; they were supposed to be used only 
until the backup was taken and then deleted.  He recommends using thin 
provisioning for long-lived snapshots due to differences in how the metadata is 
maintained.  (He also says he's heard reports of volume activation taking half 
an hour, which is clearly crazy when instances are waiting to access their volumes.)

Given the above, is there any reason why we couldn't make thin provisioning the 
default?
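For reference, the LVM driver can already be opted into thin provisioning on a
per-backend basis today without any default change. A rough cinder.conf
fragment might look like the following; the backend section name is
illustrative, and `lvm_type` is my understanding of the relevant option:

```ini
# Hypothetical cinder.conf fragment: opting one LVM backend into thin
# provisioning, so snapshots use thin-pool metadata instead of classic
# copy-on-write snapshot volumes.
[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
lvm_type = thin
```

The question above is essentially whether `lvm_type = thin` should become the
shipped default rather than an opt-in.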

Chris


From doug at doughellmann.com  Tue Sep 15 17:02:10 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 13:02:10 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
	'default' network model
In-Reply-To: <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
Message-ID: <1442336141-sup-9706@lrrr.local>

Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com> wrote:
> 
> > Hey all!
> >
> > If any of you have ever gotten drunk with me, you'll know I hate floating
> > IPs more than I hate being stabbed in the face with a very angry fish.
> >
> > However, that doesn't really matter. What should matter is "what is the
> > most sane thing we can do for our users"
> >
> > As you might have seen in the glance thread, I have a bunch of OpenStack
> > public cloud accounts. Since I wrote that email this morning, I've added
> > more - so we're up to 13.
> >
> > auro
> > citycloud
> > datacentred
> > dreamhost
> > elastx
> > entercloudsuite
> > hp
> > ovh
> > rackspace
> > runabove
> > ultimum
> > unitedstack
> > vexxhost
> >
> > Of those public clouds, 5 of them require you to use a floating IP to get
> > an outbound address, the others directly attach you to the public network.
> > Most of those 8 allow you to create a private network, to boot vms on the
> > private network, and ALSO to create a router with a gateway and put
> > floating IPs on your private ip'd machines if you choose.
> >
> > Which brings me to the suggestion I'd like to make.
> >
> > Instead of having our default in devstack and our default when we talk
> > about things be "you boot a VM and you put a floating IP on it" - which
> > solves one of the two usage models - how about:
> >
> > - Cloud has a shared: True, external:routable: True neutron network. I
> > don't care what it's called  ext-net, public, whatever. the "shared" part
> > is the key, that's the part that lets someone boot a vm on it directly.
> >
> > - Each person can then make a private network, router, gateway, etc. and
> > get floating-ips from the same public network if they prefer that model.
> >
> > Are there any good reasons to not push to get all of the public networks
> > marked as "shared"?
> >
> 
> The reason is simple: not every cloud deployment is the same: private is
> different from public and even within the same cloud model, the network
> topology may vary greatly.
> 
> Perhaps Neutron fails in the sense that it provides you with too much
> choice, and perhaps we have to standardize on the type of networking
> profile expected by a user of OpenStack public clouds before making changes
> that would fragment this landscape even further.
> 
> If you are advocating for more flexibility without limiting the existing
> one, we're only making the problem worse.

As with the Glance image upload API discussion, this is an example
of an extremely common use case that is either complex for the end
user or for which they have to know something about the deployment
in order to do it at all. The usability of an OpenStack cloud running
neutron would be enhanced greatly if there were a simple, clear way
for the user to get a new VM with a public IP on any cloud without
multiple steps on their part. There are a lot of ways to implement
that "under the hood" (what you call "networking profile" above)
but the users don't care about "under the hood" so we should provide
a way for them to ignore it. That's *not* the same as saying we
should only support one profile. Think about the API from the use
case perspective, and build it so if there are different deployment
configurations available, the right action can be taken based on
the deployment choices made without the user providing any hints.
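The "under the hood" branching Doug describes can be sketched as follows. This
is illustrative only: the helper names on `cloud` (find_shared_external_network,
default_private_network, attach_floating_ip) are hypothetical, not a real
novaclient or shade API:

```python
# Sketch: hide the two network models behind one "give me a VM with a
# public IP" call, choosing the path based on the deployment, not user hints.

def boot_with_public_ip(cloud, name, image, flavor):
    net = cloud.find_shared_external_network()
    if net is not None:
        # Model 1: a shared, externally routable network exists; boot on it
        # directly and the VM gets a public address with no extra steps.
        return cloud.boot(name, image, flavor, network=net)
    # Model 2: no shared external network; fall back to a private network
    # plus a floating IP, invisible to the caller.
    private = cloud.default_private_network()
    server = cloud.boot(name, image, flavor, network=private)
    cloud.attach_floating_ip(server)
    return server
```

The point is that both deployment styles satisfy the same use case through one
entry point, which is what shade ends up doing in practice.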

Doug

> 
> >
> > OH - well, one thing - that's that once there are two networks in an
> > account you have to specify which one. This is really painful in nova
> > client. Say, for instance, you have a public network called "public" and a
> > private network called "private" ...
> >
> > You can't just say "nova boot --network=public" - nope, you need to say
> > "nova boot --nics net-id=$uuid_of_my_public_network"
> >
> > So I'd suggest 2 more things:
> >
> > a) an update to python-novaclient to allow a named network to be passed to
> > satisfy the "you have more than one network" - the nics argument is still
> > useful for more complex things
> >
> > b) ability to say "vms in my cloud should default to being booted on the
> > public network" or "vms in my cloud should default to being booted on a
> > network owned by the user"
> >
> > Thoughts?
> >
> 
> As I implied earlier, I am not sure how healthy this choice is. As a user
> of multiple clouds I may end up having a different user experience based on
> which cloud I am using...I thought you were partially complaining about
> lack of consistency?
> 
> >
> > Monty
> >
> >


From jim at jimrollenhagen.com  Tue Sep 15 17:05:18 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 15 Sep 2015 10:05:18 -0700
Subject: [openstack-dev] [ironic] [tripleo] Deprecating the bash ramdisk
Message-ID: <20150915170518.GN21846@jimrollenhagen.com>

Hi all,

We have a spec[0] for deprecating our bash ramdisk during Liberty, so we
can remove it in Mitaka and have everything using IPA.

The last patch remaining[1] is to mark it deprecated in
disk-image-builder. This has been sitting with 4x +2 votes for 3 weeks
or so now. Is there any reason we aren't landing it? It's failing tests,
but I'm fairly certain they're unrelated and I'm waiting for a recheck
right now.

// jim

[0] http://specs.openstack.org/openstack/ironic-specs/specs/approved/deprecate-bash-ramdisk.html
[1] https://review.openstack.org/#/c/209079/


From devdatta.kulkarni at RACKSPACE.COM  Tue Sep 15 17:10:22 2015
From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni)
Date: Tue, 15 Sep 2015 17:10:22 +0000
Subject: [openstack-dev] [Solum] PTL Candidacy
Message-ID: <1442337022449.10967@RACKSPACE.COM>

Hi,

I would like to announce my candidacy for the PTL position of Solum for Mitaka.

In my view the challenges in front of us are twofold, and I have outlined them below.
I believe that I will be able to help us take concrete steps towards addressing these
challenges in this cycle.

Our first challenge is to continue developing and evolving Solum's feature set
so that the project becomes a valuable option for operators to offer in their OpenStack installations.

In particular, in my opinion, the following features need to be completed in this regard:


(a) Consistency between API and CLI:
    Currently the Solum API and CLI have slightly different abstractions, preventing
    consistent usage between the CLI and the REST API directly.
    We have started working on changing this[1], which needs to be completed.

(b) Ability to scale application instances:
    For this, we need to investigate how we can use Magnum for satisfying
    Solum's application-instance scaling and scheduling requirements.

(c) Insight into the application building and deployment process:
    This includes collecting fine-grained logs in various Solum services,
    the ability to correlate logs across Solum services, and collecting and
    maintaining historical information about user actions to build and deploy
    applications on Solum.

(d) Ability to build and deploy multi-tier applications:
    One idea here is to investigate if something like Magnum's Pod abstraction
    can be leveraged for this.

As PTL, I will work towards helping the team move the story forward on these features.
Also, whenever required, I will work closely with other OpenStack projects, particularly Magnum,
to ensure that our team's requirements are adequately represented in their forum.


The second challenge for us is to increase community involvement in and around Solum.
Some ideas that I have in this regard are as follows:

(1) Bug squash days:
    My involvement with OpenStack started three years ago when I participated
    in a bug squash day organized by Anne Gentle at Rackspace Austin[2].
    I believe we could organize similar bug squash days for Solum to attract new contributors.

    Also, there could be experienced Solum contributors whose current priorities might
    not be allowing them to participate in Solum development, but who might still like to
    continue contributing to Solum. Bug squash days would provide them a dedicated time
    and place to participate again in Solum development.

(2) Increasing project visibility:
    Some of the actionable items here are:
    - Periodic emails to the openstack-dev mailing list giving updates on the project's status, achievements, etc.
    - Periodic blog posts
    - Presentations at meetup events

(3) Growing community:
    - Reaching out to folks who are interested in application build and deployment story on OpenStack,
      and inviting them to join Solum IRC meetings and mailing list discussions.
    - Reviving mid-cycle meetups

As PTL, I will take actions on some of the above ideas towards helping build and grow our community.

About my background -- I have been involved with Solum since the beginning of the project.
In this period, I have contributed to the project in several ways including,
designing and implementing different features, fixing bugs, performing code reviews,
helping debug gate issues, helping maintain a working Vagrant environment,
maintaining community engagement through various avenues (IRC channel, IRC meetings, emails), and so on.
More details about my participation and involvement in the project can be found here:
http://stackalytics.com/?module=solum
http://stackalytics.com/?module=python-solumclient

I hope you will give me an opportunity to serve as Solum's PTL for Mitaka.

Best regards,
Devdatta Kulkarni

[1] https://github.com/openstack/solum-specs/blob/master/specs/liberty/app-resource.rst
[2] http://www.meetup.com/OpenStack-Austin/events/48406252/


From stuart.mclaren at hp.com  Tue Sep 15 17:09:12 2015
From: stuart.mclaren at hp.com (stuart.mclaren at hp.com)
Date: Tue, 15 Sep 2015 18:09:12 +0100 (IST)
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
Message-ID: <alpine.DEB.2.11.1509151747170.14097@tc-unix2.emea.hpqcorp.net>

>> I've been looking into the existing task-based-upload that Doug mentions:
>> can anyone clarify the following?
>>
>> On a default devstack install you can do this 'task' call:
>>
>> http://paste.openstack.org/show/462919
>
> Yup. That's the one.
>
>> as an alternative to the traditional image upload (the bytes are streamed
>> from the URL).
>>
>> It's not clear to me if this is just an interesting example of the kind
>> of operator specific thing you can configure tasks to do, or a real
>> attempt to define an alternative way to upload images.
>>
>> The change which added it [1] calls it a 'sample'.
>>
>> Is it just an example, or is it a second 'official' upload path?
>
> It's how you have to upload images on Rackspace.

Ok, so Rackspace has a task called image_import, but it seems to take
different JSON input from the devstack version (a Swift container/object
rather than a URL).

That seems to suggest that tasks really are operator specific, that there
is no standard task based upload ... and it should be ok to try
again with a clean slate.

> If you want to see the
> full fun:
>
> https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510
>
> Which is "I want to upload an image to an OpenStack Cloud"
>
> I've listed it on this slide in CLI format too:
>
> http://inaugust.com/talks/product-management/index.html#/27
>
> It should be noted that once you create the task, you need to poll the
> task with task-show, and then the image id will be in the completed
> task-show output.
>
> Monty
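The create-then-poll flow Monty describes can be sketched roughly as below. The
method names on `client` loosely follow python-glanceclient's v2 tasks API
(tasks.create / tasks.get), but the input payload shown is an assumption, since
as this thread notes the payload is effectively operator-specific:

```python
import time

def import_image_via_task(client, import_from, poll_interval=2):
    """Sketch: create an import task, poll it, return the new image id."""
    task = client.tasks.create(
        type="import",
        # Hypothetical payload shaped like the devstack sample; other clouds
        # (e.g. Rackspace) expect different input for their import task.
        input={"import_from": import_from,
               "image_properties": {"disk_format": "qcow2",
                                    "container_format": "bare"}},
    )
    while task.status not in ("success", "failure"):
        time.sleep(poll_interval)
        task = client.tasks.get(task.id)   # the task-show call
    if task.status == "failure":
        raise RuntimeError(task.message)
    return task.result["image_id"]
```

This illustrates the objection above: the real API surface lives inside an
unversioned JSON payload, so a portable client cannot be written against it.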


From chris.friesen at windriver.com  Tue Sep 15 17:16:21 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 15 Sep 2015 11:16:21 -0600
Subject: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and
 shifting PCI addresses
In-Reply-To: <20150915082551.GB23145@redhat.com>
References: <20150910212306.GA11628@b3ntpin.localdomain>
 <20150915082551.GB23145@redhat.com>
Message-ID: <55F85265.7030206@windriver.com>

On 09/15/2015 02:25 AM, Daniel P. Berrange wrote:

> Taking a host offline for maintenance should be considered
> equivalent to throwing away the existing host and deploying a new
> host. There should be zero state carry-over from the OpenStack POV,
> since both the software and hardware changes can potentially
> invalidate previous information used by the scheduler for deploying
> on that host.  The idea of recovering a previously running guest
> should be explicitly unsupported.

This isn't the way the nova code is currently written though.

By default, any instances that were running on that compute node are going to 
still be in the DB as running on that compute node but in the "stopped" state. 
If you then do a "nova start", they'll try to start up on that node again.

Heck, if you enable "resume_guests_state_on_host_boot" then nova will restart 
them automatically for you on startup.

To robustly do what you're talking about would require someone (nova, the 
operator, etc.) to migrate all instances off of a compute node before taking it 
down (which is currently impossible for suspended instances), and then force a 
"nova evacuate" (or maybe "nova delete") for every instance that was on a 
compute node that went down.
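The drain sequence described above might look roughly like this with
python-novaclient. Treat it as an illustrative sketch, not a tested recipe:
error handling, waiting for migrations to complete, and the final
evacuate/delete step are all omitted:

```python
def drain_compute_node(nova, host):
    """Sketch: empty a compute node before taking it down for maintenance.

    `nova` is assumed to be an authenticated novaclient.v2 Client.
    """
    # Stop the scheduler from placing new instances on the host.
    nova.services.disable(host, "nova-compute")
    servers = nova.servers.list(search_opts={"host": host, "all_tenants": 1})
    for server in servers:
        state = getattr(server, "OS-EXT-STS:vm_state", "")
        if state == "active":
            server.live_migrate()   # move running guests elsewhere
        else:
            # Suspended/stopped guests can't be live-migrated today,
            # which is exactly the gap noted above; cold-migrate instead.
            server.migrate()
```

Anything that cannot be moved this way is what would later need the manual
"nova evacuate" (or "nova delete") cleanup mentioned above.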

Chris


From mordred at inaugust.com  Tue Sep 15 17:22:48 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 15 Sep 2015 19:22:48 +0200
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <alpine.DEB.2.11.1509151747170.14097@tc-unix2.emea.hpqcorp.net>
References: <alpine.DEB.2.11.1509151747170.14097@tc-unix2.emea.hpqcorp.net>
Message-ID: <55F853E8.8070409@inaugust.com>

On 09/15/2015 07:09 PM, stuart.mclaren at hp.com wrote:
>>> I've been looking into the existing task-based-upload that Doug
>>> mentions:
>>> can anyone clarify the following?
>>>
>>> On a default devstack install you can do this 'task' call:
>>>
>>> http://paste.openstack.org/show/462919
>>
>> Yup. That's the one.
>>
>>> as an alternative to the traditional image upload (the bytes are
>>> streamed
>>> from the URL).
>>>
>>> It's not clear to me if this is just an interesting example of the kind
>>> of operator specific thing you can configure tasks to do, or a real
>>> attempt to define an alternative way to upload images.
>>>
>>> The change which added it [1] calls it a 'sample'.
>>>
>>> Is it just an example, or is it a second 'official' upload path?
>>
>> It's how you have to upload images on Rackspace.
>
> Ok, so Rackspace have a task called image_import. But it seems to take
> different json input than the devstack version. (A Swift container/object
> rather than a URL.)
>
> That seems to suggest that tasks really are operator specific, that there
> is no standard task based upload ... and it should be ok to try
> again with a clean slate.

Yes - as long as we don't use the payload as a de facto undefined API to 
avoid having specific things implemented in the API, I think we're fine.

Like, if it was:

glance import-image

and that presented an interface that had a status field ... I mean, 
that's a known OpenStack pattern - it's how nova boot works.

Amongst the badness with this is:

a) It's only implemented in one cloud and at that cloud with special code
b) The interface is "send some JSON to this endpoint, and we'll infer a 
special sub-API from the JSON", which is neither a published nor a versioned API

>> If you want to see the
>> full fun:
>>
>> https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510
>>
>>
>> Which is "I want to upload an image to an OpenStack Cloud"
>>
>> I've listed it on this slide in CLI format too:
>>
>> http://inaugust.com/talks/product-management/index.html#/27
>>
>> It should be noted that once you create the task, you need to poll the
>> task with task-show, and then the image id will be in the completed
>> task-show output.
>>
>> Monty
>
>



From weidongshao at gmail.com  Tue Sep 15 17:25:36 2015
From: weidongshao at gmail.com (Weidong Shao)
Date: Tue, 15 Sep 2015 17:25:36 +0000
Subject: [openstack-dev] [openstack-ansible][compass] Support of Offline
	Install
In-Reply-To: <CAGSrQvz47knhGE7etO4yrPEY-avmb2v6z18E-brf7Zckm6dmow@mail.gmail.com>
References: <CALNQoPcNgZ-N+djnNvEzRNKch+TQBpzLS8AcAbwuWSQJEJToaA@mail.gmail.com>
 <CAGSrQvz47knhGE7etO4yrPEY-avmb2v6z18E-brf7Zckm6dmow@mail.gmail.com>
Message-ID: <CALNQoPcBQ2EWaD2kQvYtdeSG4WuurzbxwxW9Wo=B+uwBNrJ1Hw@mail.gmail.com>

Jesse, thanks for the information! I will look into this. The proxy server
and local repo option might just work for us.

On Tue, Sep 15, 2015 at 2:53 AM Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

> On 15 September 2015 at 05:36, Weidong Shao <weidongshao at gmail.com> wrote:
>
>> Compass, an openstack deployment project, is in the process of using the
>> osad project in the openstack deployment. We need to support a use case
>> where there is no Internet connection. The way we handle this is to split
>> the deployment into "build" and "install" phases. In the build phase, the
>> Compass server node has an Internet connection and can build a local repo
>> and other necessary dynamic artifacts that require Internet access. In the
>> "install" phase, the to-be-installed nodes do not have Internet
>> connections, and they only download necessary data from the Compass server
>> and other services constructed in the build phase.
>>
>> Now, is "offline install" something that the OSAD project should also
>> support? If yes, what would be the scope of work for any required changes?
>>
>
> Currently we don't have an offline install paradigm - but that doesn't mean
> that we couldn't shift things around to support it if it makes sense. I
> think this is something that we could discuss via the ML, via a spec
> review, or at the summit.
>
> Some notes which may be useful:
>
> 1. We have support for the use of a proxy server [1].
> 2. As you probably already know, we build the python wheels for the
> environment on the repo-server - so all python wheel installs (except
> tempest venv requirements) are done directly from the repo server.
> 3. All apt-key and apt-get actions are done against online repositories.
> If you wish to have these be done offline then there would need to be an
> addition of some sort of apt-key and apt package mirror which we currently
> do not have. If there is a local repo in the environment, the functionality
> to direct all apt-key and apt-get install actions against an internal
> mirror is all there.
>
> [1]
> http://git.openstack.org/cgit/openstack/openstack-ansible/commit/?id=ed7f78ea5689769b3a5e1db444f4c16f3cc06060
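The apt mirror mentioned in note 3 would amount to configuration along these lines (mirror hostname and file path hypothetical; the actual sources vary by distribution and release):

```
# /etc/apt/sources.list.d/internal.list -- hypothetical internal mirror
deb http://mirror.example.internal/ubuntu trusty main universe
deb http://mirror.example.internal/ubuntu trusty-updates main universe
```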
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/dc38367b/attachment.html>

From james.slagle at gmail.com  Tue Sep 15 17:31:54 2015
From: james.slagle at gmail.com (James Slagle)
Date: Tue, 15 Sep 2015 13:31:54 -0400
Subject: [openstack-dev] [ironic] [tripleo] Deprecating the bash ramdisk
In-Reply-To: <20150915170518.GN21846@jimrollenhagen.com>
References: <20150915170518.GN21846@jimrollenhagen.com>
Message-ID: <CAHV77z_kPAexducuQ4ft+t2UqA=nCUX=-cLDYa3AORK0qpVY+w@mail.gmail.com>

On Tue, Sep 15, 2015 at 1:05 PM, Jim Rollenhagen <jim at jimrollenhagen.com> wrote:
> Hi all,
>
> We have a spec[0] for deprecating our bash ramdisk during Liberty, so we
> can remove it in Mitaka and have everything using IPA.
>
> The last patch remaining[1] is to mark it deprecated in
> disk-image-builder. This has been sitting with 4x +2 votes for 3 weeks
> or so now. Is there any reason we aren't landing it? It's failing tests,
> but I'm fairly certain they're unrelated and I'm waiting for a recheck
> right now.

Failing tests...that's the exact reason it hasn't landed.

Exceptions can be made, but we don't always go hunting for them.
Someone just needs to take another look at it and push it through.

>
> // jim
>
> [0] http://specs.openstack.org/openstack/ironic-specs/specs/approved/deprecate-bash-ramdisk.html
> [1] https://review.openstack.org/#/c/209079/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--


From eharney at redhat.com  Tue Sep 15 17:38:42 2015
From: eharney at redhat.com (Eric Harney)
Date: Tue, 15 Sep 2015 13:38:42 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55F84EA7.9080902@windriver.com>
References: <55F84EA7.9080902@windriver.com>
Message-ID: <55F857A2.3020603@redhat.com>

On 09/15/2015 01:00 PM, Chris Friesen wrote:
> I'm currently trying to work around an issue where activating LVM
> snapshots created through cinder takes potentially a long time. 
> (Linearly related to the amount of data that differs between the
> original volume and the snapshot.)  On one system I tested it took about
> one minute per 25GB of data, so the worst-case boot delay can become
> significant.
> 
> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
> not intended to be kept around indefinitely, they were supposed to be
> used only until the backup was taken and then deleted.  He recommends
> using thin provisioning for long-lived snapshots due to differences in
> how the metadata is maintained.  (He also says he's heard reports of
> volume activation taking half an hour, which is clearly crazy when
> instances are waiting to access their volumes.)
> 
> Given the above, is there any reason why we couldn't make thin
> provisioning the default?
> 


My intention is to move toward thin-provisioned LVM as the default -- it
is definitely better suited to our use of LVM.  Previously this was
harder, since some older Ubuntu platforms didn't support it, but in
Liberty we added the ability to specify lvm_type = "auto" [1] to use
thin provisioning if the platform supports it.
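In configuration terms that looks roughly like the following (backend section name and volume group are hypothetical; lvm_type = "auto" is the Liberty option referenced in [1]):

```
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# pick thin provisioning when the platform supports it, thick otherwise
lvm_type = auto
```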

The other issue preventing using thin by default is that we default the
max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
the reference implementation, since it means that people who deploy
Cinder LVM on smaller storage configurations can easily fill up their
volume group and have things grind to a halt.  I think we want something
closer to the semantics of thick LVM for the default case.

We haven't thought through a reasonable migration strategy for how to
handle that.  I'm not sure we can change the default oversubscription
ratio without breaking deployments using other drivers.  (Maybe I'm
wrong about this?)

If we sort out that issue, I don't see any reason we can't switch over
in Mitaka.

[1] https://review.openstack.org/#/c/104653/


From lregebro at redhat.com  Tue Sep 15 17:44:45 2015
From: lregebro at redhat.com (Lennart Regebro)
Date: Tue, 15 Sep 2015 13:44:45 -0400 (EDT)
Subject: [openstack-dev] os-cloud-config support for custom .ssh configs
In-Reply-To: <918452833.36004795.1442338438463.JavaMail.zimbra@redhat.com>
Message-ID: <1178537114.36009713.1442339085268.JavaMail.zimbra@redhat.com>

Bug https://bugzilla.redhat.com/show_bug.cgi?id=1252255 is about adding the possibility of having an .ssh directory that is not in ~/.ssh.
Currently the blocker there is os-cloud-config, which just calls ssh, and ssh will look in the user's home directory for .ssh/config no matter what. 

To solve this we would need to add support for custom .ssh configs in os-cloud-config, specifically in the _perform_pki_initialization() method, so that you can specify an ssh config file, which otherwise defaults to ~/.ssh/config. 

Either we always use ~/.ssh/config but perform the user expansion in the Python code. That way it will pick up $HOME, which means you can just set $HOME first. (There is a patch linked from the bug for python-tripleoclient to allow that.)

Or we pass in the path to the config file as a new parameter. 

In both cases the change is quite trivial.
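A minimal sketch of the second option (the function name and signature are hypothetical, not the actual os-cloud-config code): build the ssh argv with an explicit `-F` config path, falling back to `~/.ssh/config` expanded in Python so that `$HOME` is honored.

```python
import os

def build_ssh_command(user, host, ssh_config_path=None):
    """Build an ssh argv, optionally forcing a specific config file.

    When ssh_config_path is None, fall back to ~/.ssh/config but expand
    '~' in Python, so a caller can redirect it by setting $HOME.
    """
    if ssh_config_path is None:
        ssh_config_path = os.path.expanduser("~/.ssh/config")
    return ["ssh", "-F", ssh_config_path, "%s@%s" % (user, host)]
```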

Thoughts/opinions on this?

//Lennart


From Kevin.Fox at pnnl.gov  Tue Sep 15 18:00:03 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 15 Sep 2015 18:00:03 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New
	proposed	'default' network model
In-Reply-To: <1442336141-sup-9706@lrrr.local>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>,
 <1442336141-sup-9706@lrrr.local>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>

We run several clouds where there are multiple external networks, so the "just run it on THE public network" model doesn't work. :/

I also strongly recommend that users put VMs on a private network and use floating IPs / load balancers, for many reasons. For one, if you don't, the IP that gets assigned to the VM helps it become a pet: you can't replace the VM and get the same IP. Floating IPs and load balancers help prevent pets. It also avoids security issues with DNS and IPs. And for every floating IP / LB I have, I usually have 3x or more that number of instances on the private network. Sure, it's easy to put everything on the public network, but you get much better security if you only expose what you must. Consider the internet: would you want every device in your house exposed directly on it? No - you put them on a private network and poke holes just for the things that need them. We should be encouraging good security practices; if we encourage bad ones, it will bite us later when OpenStack gets a reputation for being associated with compromises.

I do consider making things as simple as possible very important - but that is, as simple as possible, and no simpler. There is a danger here of making things too simple.

Thanks,
Kevin
________________________________________
From: Doug Hellmann [doug at doughellmann.com]
Sent: Tuesday, September 15, 2015 10:02 AM
To: openstack-dev
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed     'default' network model

Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com> wrote:
>
> > Hey all!
> >
> > If any of you have ever gotten drunk with me, you'll know I hate floating
> > IPs more than I hate being stabbed in the face with a very angry fish.
> >
> > However, that doesn't really matter. What should matter is "what is the
> > most sane thing we can do for our users"
> >
> > As you might have seen in the glance thread, I have a bunch of OpenStack
> > public cloud accounts. Since I wrote that email this morning, I've added
> > more - so we're up to 13.
> >
> > auro
> > citycloud
> > datacentred
> > dreamhost
> > elastx
> > entercloudsuite
> > hp
> > ovh
> > rackspace
> > runabove
> > ultimum
> > unitedstack
> > vexxhost
> >
> > Of those public clouds, 5 of them require you to use a floating IP to get
> > an outbound address, the others directly attach you to the public network.
> > Most of those 8 allow you to create a private network, to boot vms on the
> > private network, and ALSO to create a router with a gateway and put
> > floating IPs on your private ip'd machines if you choose.
> >
> > Which brings me to the suggestion I'd like to make.
> >
> > Instead of having our default in devstack and our default when we talk
> > about things be "you boot a VM and you put a floating IP on it" - which
> > solves one of the two usage models - how about:
> >
> > - Cloud has a shared: True, external:routable: True neutron network. I
> > don't care what it's called  ext-net, public, whatever. the "shared" part
> > is the key, that's the part that lets someone boot a vm on it directly.
> >
> > - Each person can then make a private network, router, gateway, etc. and
> > get floating-ips from the same public network if they prefer that model.
> >
> > Are there any good reasons to not push to get all of the public networks
> > marked as "shared"?
> >
>
> The reason is simple: not every cloud deployment is the same: private is
> different from public and even within the same cloud model, the network
> topology may vary greatly.
>
> Perhaps Neutron fails in the sense that it provides you with too much
> choice, and perhaps we have to standardize on the type of networking
> profile expected by a user of OpenStack public clouds before making changes
> that would fragment this landscape even further.
>
> If you are advocating for more flexibility without limiting the existing
> one, we're only making the problem worse.

As with the Glance image upload API discussion, this is an example
of an extremely common use case that is either complex for the end
user or for which they have to know something about the deployment
in order to do it at all. The usability of an OpenStack cloud running
neutron would be enhanced greatly if there was a simple, clear, way
for the user to get a new VM with a public IP on any cloud without
multiple steps on their part. There are a lot of ways to implement
that "under the hood" (what you call "networking profile" above)
but the users don't care about "under the hood" so we should provide
a way for them to ignore it. That's *not* the same as saying we
should only support one profile. Think about the API from the use
case perspective, and build it so if there are different deployment
configurations available, the right action can be taken based on
the deployment choices made without the user providing any hints.

Doug

>
> >
> > OH - well, one thing - that's that once there are two networks in an
> > account you have to specify which one. This is really painful in nova
> > clent. Say, for instance, you have a public network called "public" and a
> > private network called "private" ...
> >
> > You can't just say "nova boot --network=public" - nope, you need to say
> > "nova boot --nics net-id=$uuid_of_my_public_network"
> >
> > So I'd suggest 2 more things;
> >
> > a) an update to python-novaclient to allow a named network to be passed to
> > satisfy the "you have more than one network" - the nics argument is still
> > useful for more complex things
> >
> > b) ability to say "vms in my cloud should default to being booted on the
> > public network" or "vms in my cloud should default to being booted on a
> > network owned by the user"
> >
> > Thoughts?
> >
>
> As I implied earlier, I am not sure how healthy this choice is. As a user
> of multiple clouds I may end up having a different user experience based on
> which cloud I am using...I thought you were partially complaining about
> lack of consistency?
>
> >
> > Monty
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sumitnaiksatam at gmail.com  Tue Sep 15 18:28:17 2015
From: sumitnaiksatam at gmail.com (Sumit Naiksatam)
Date: Tue, 15 Sep 2015 11:28:17 -0700
Subject: [openstack-dev] [Policy][Group-based-policy]
In-Reply-To: <CAGCtZnNLUqD7rLLSoiQjE7n1guY1hPgzo=X9t=hRKTaxyzdLZg@mail.gmail.com>
References: <CAGCtZnNLUqD7rLLSoiQjE7n1guY1hPgzo=X9t=hRKTaxyzdLZg@mail.gmail.com>
Message-ID: <CAMWrLvgyOdX0RKgYEOZ8D4sn=4KhvNqy6X15xbJM8_vF-N1SaA@mail.gmail.com>

Hi Sagar,

GBP has a single REST API interface. The CLI, Horizon and Heat are
merely clients of the same REST API.

There was a similar question on this which I had responded to in a
different mailer:
http://lists.openstack.org/pipermail/openstack/2015-September/013952.html

and I believe you are cc'ed on that thread. I have provided more
information on how you can run the CLI in the verbose mode to explore
the REST request and responses. Hope that will be helpful, and we are
happy to guide you through this exercise (catch us on #openstack-gbp
for real time help).

Thanks,
~Sumit.

On Tue, Sep 15, 2015 at 3:45 AM, Sagar Pradhan <spradhan.17 at gmail.com> wrote:
>
>  Hello ,
>
> We were exploring group-based policy for a project. We could find CLI and
> REST API documentation for GBP.
> Do we have a separate REST API for GBP which can be called directly?
> From the documentation it seems that we can only use the CLI, Horizon and Heat.
> Please point us to CLI or REST API documentation for GBP.
>
>
> Regards,
> Sagar
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From mriedem at linux.vnet.ibm.com  Tue Sep 15 18:28:40 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Tue, 15 Sep 2015 13:28:40 -0500
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
Message-ID: <55F86358.3050705@linux.vnet.ibm.com>



On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>
>  > a) an update to python-novaclient to allow a named network to be passed
>  > to satisfy the "you have more than one network" - the nics argument is
>  > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three
> cheers for any campaign to fix that.

It's my understanding that network names in neutron, like security 
group names, are not unique; that's why you have to specify a UUID.
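Which is why a name-based lookup has to treat duplicates as an error rather than silently picking one; a sketch of the client-side logic (illustrative only, not novaclient or neutronclient code):

```python
def resolve_network_id(networks, name):
    """Resolve a network name to its UUID from a listing.

    networks is a list of dicts as returned by a net-list call.
    Raises LookupError if the name is missing or ambiguous, since
    neutron network names are not guaranteed unique.
    """
    matches = [n["id"] for n in networks if n["name"] == name]
    if not matches:
        raise LookupError("no network named %r" % name)
    if len(matches) > 1:
        raise LookupError(
            "%d networks named %r; use the UUID instead" % (len(matches), name))
    return matches[0]
```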

>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 

Thanks,

Matt Riedemann



From morgan.fainberg at gmail.com  Tue Sep 15 18:36:47 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Tue, 15 Sep 2015 11:36:47 -0700
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <87oah44qtx.fsf@s390.unix4.net>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
 <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
 <87oah44qtx.fsf@s390.unix4.net>
Message-ID: <CAGnj6ave7EFDQkaFmZWDVTLOE0DQgkTksqh2QLqJe0aGkCXBpQ@mail.gmail.com>

On Mon, Sep 14, 2015 at 2:46 PM, Sofer Athlan-Guyot <sathlang at redhat.com>
wrote:

> Morgan Fainberg <morgan.fainberg at gmail.com> writes:
>
> > On Mon, Sep 14, 2015 at 1:53 PM, Rich Megginson <rmeggins at redhat.com>
> > wrote:
> >
> >
> >     On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> >
> >     Hi,
> >
> >         Gilles Dubreuil <gilles at redhat.com> writes:
> >
> >                 A. The 'composite namevar' approach:
> >
> >             keystone_tenant {'projectX::domainY': ... }
> >             B. The 'meaningless name' approach:
> >
> >             keystone_tenant {'myproject': name='projectX',
> domain=>'domainY',
> >             ...}
> >
> >             Notes:
> >             - Actually using both combined should work too with the
> >             domain
> >             supposedly overriding the name part of the domain.
> >             - Please look at [1] this for some background between the
> >             two approaches:
> >
> >             The question
> >             -------------
> >             Decide between the two approaches, the one we would like
> >             to retain for
> >             puppet-keystone.
> >
> >             Why it matters?
> >             ---------------
> >             1. Domain names are mandatory in every user, group or
> >             project. Besides
> >             the backward compatibility period mentioned earlier, where
> >             no domain
> >             means using the default one.
> >             2. Long term impact
> >             3. Both approaches are not completely equivalent which
> >             different
> >             consequences on the future usage.
> >         I can't see why they couldn't be equivalent, but I may be
> >         missing
> >         something here.
> >
> >
> >     I think we could support both. I don't see it as an either/or
> >     situation.
> >
> >
> >                 4. Being consistent
> >             5. Therefore the community to decide
> >
> >             Pros/Cons
> >             ----------
> >             A.
> >         I think it's the B: meaningless approach here.
> >
> >                 Pros
> >             - Easier names
> >         That's subjective, creating unique and meaningful name don't
> >         look easy
> >         to me.
> >
> >     The point is that this allows choice - maybe the user already has
> >     some naming scheme, or wants to use a more "natural" meaningful
> >     name - rather than being forced into a possibly "awkward" naming
> >     scheme with "::"
> >
> >     keystone_user { 'heat domain admin user':
> >     name => 'admin',
> >     domain => 'HeatDomain',
> >     ...
> >     }
> >
> >     keystone_user_role {'heat domain admin user@::HeatDomain':
> >     roles => ['admin']
> >     ...
> >     }
> >
> >
> >                 Cons
> >             - Titles have no meaning!
> >
> >     They have meaning to the user, not necessarily to Puppet.
> >
> >                 - Cases where 2 or more resources could exists
> >
> >     This seems to be the hardest part - I still cannot figure out how
> >     to use "compound" names with Puppet.
> >
> >                 - More difficult to debug
> >
> >     More difficult than it is already? :P
> >
> >
> >
> >                 - Titles mismatch when listing the resources
> >             (self.instances)
> >
> >             B.
> >             Pros
> >             - Unique titles guaranteed
> >             - No ambiguity between resource found and their title
> >             Cons
> >             - More complicated titles
> >             My vote
> >             --------
> >             I would love to have the approach A for easier name.
> >             But I've seen the challenge of maintaining the providers
> >             behind the
> >             curtains and the confusion it creates with name/titles and
> >             when not sure
> >             about the domain we're dealing with.
> >             Also I believe that supporting self.instances consistently
> >             with
> >             meaningful name is saner.
> >             Therefore I vote B
> >         +1 for B.
> >
> >         My view is that this should be the advertised way, but the
> >         other method
> >         (meaningless) should be there if the user need it.
> >
> >         So as far as I'm concerned the two idioms should co-exist.
> >         This would
> >         mimic what is possible with all puppet resources. For instance
> >         you can:
> >
> >         file { '/tmp/foo.bar': ensure => present }
> >
> >         and you can
> >
> >         file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
> >         present }
> >
> >         The two refer to the same resource.
> >
> >
> >     Right.
> >
> >
> >         But, If that's indeed not possible to have them both, then I
> >         would keep
> >         only the meaningful name.
> >
> >
> >         As a side note, someone raised an issue about the delimiter
> >         being
> >         hardcoded to "::". This could be a property of the resource.
> >         This
> >         would enable the user to use weird name with "::" in it and
> >         assign a "/"
> >         (for instance) to the delimiter property:
> >
> >         Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/",
> >         ... }
> >
> >         bar::is::cool is the name of the domain and foo::blah is the
> >         project.
> >
> >     That's a good idea. Please file a bug for that.
> >
> >
> >
> >
> > I'm not sure I see a benefit to notation like:
> > Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>
> Currently keystone v3 is doing domain using name and the "::" separator,
> so that can be classified as an existing bug.  If the decision is taken
> to remove the support for such notation (ie, <name>::<domain>,
> 'composite namevar' approach) then the delimiter parameter will be
> useless.  If we continue to support that notation, then this would
> enable the user to use any characters (minus their own chosen delimiter)
> in their names.
>
> > Overall option below just looks more straightforward (and requires
> > less logic to convert to something useful). However, I admit I am not
> > an expert in puppet conventions:
> >
> > Keystone_tenant { 'foo::blah" domain => "bar::is::cool'", ... }
>
> This would be some kind of a mix between the two options proposed by
> Gilles, if I'm not mistaken.  The name would be the project and the
> domain would be a property.  So the name wouldn't be meaningless, but it
> wouldn't be fully qualified neither.  So many choices :)
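A sketch, in Python for brevity, of the title parsing that a configurable delimiter implies (illustrative only; the real puppet-keystone provider is Ruby, and which occurrence of the delimiter wins is itself a design choice -- here the last one):

```python
def split_title(title, delimiter="::"):
    """Split a '<project><delimiter><domain>' resource title.

    Takes the *last* occurrence of the delimiter, so a project name may
    contain '::' as long as a custom delimiter separates it from the
    domain.  Returns (project, None) when no delimiter is present, so
    the caller can fall back to the default domain.
    """
    if delimiter not in title:
        return title, None
    project, _, domain = title.rpartition(delimiter)
    return project, domain
```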


I wasn't meaning to toss a new option onto the pile, I was just quoting
from a previous email.

--Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/8e52ac1c/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep 15 18:57:10 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 15 Sep 2015 18:57:10 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F86358.3050705@linux.vnet.ibm.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>,
 <55F86358.3050705@linux.vnet.ibm.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A3102BC@EX10MBOX03.pnnl.gov>

Most projects let you specify a name and only force you to use a UUID if there is a conflict, leaving it up to the user to decide whether they want the ease of names (and the care in naming things that requires) or would rather use UUIDs.

Neutron also has the odd wrinkle that if you're a cloud admin, a listing always gives you back all the resources, rather than just the current tenant's with a flag requesting all.

This means if you try to use the "default" security group, for example, it may work as a user and then fail as an admin on the same tenant. Very annoying. :/

I've had to work around that in heat templates before.

Thanks,
Kevin


________________________________________
From: Matt Riedemann [mriedem at linux.vnet.ibm.com]
Sent: Tuesday, September 15, 2015 11:28 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>
>  > a) an update to python-novaclient to allow a named network to be passed
>  > to satisfy the "you have more than one network" - the nics argument is
>  > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three
> cheers for any campaign to fix that.

It's my understanding that network names in neutron, like security
group names, are not unique; that's why you have to specify a UUID.

>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--

Thanks,

Matt Riedemann


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From adam.harwell at RACKSPACE.COM  Tue Sep 15 19:01:54 2015
From: adam.harwell at RACKSPACE.COM (Adam Harwell)
Date: Tue, 15 Sep 2015 19:01:54 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E899563@SINPEX01CL02.citrite.net>
Message-ID: <D21DD44E.218C2%adam.harwell@rackspace.com>

There is not really good documentation for this yet.
When I say "Neutron-LBaaS tenant", I may be using the wrong word - I mean the user that is configured as the service-account in neutron.conf.
The users will hit the ACL API themselves to set up the ACLs on their own secrets/containers; we won't do it for them. So the workflow is:


  *   User creates Secrets in Barbican.
  *   User creates CertificateContainer in Barbican.
  *   User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
  *   User creates a LoadBalancer in Neutron-LBaaS.
  *   LBaaS hits Barbican using its standard configured service-account to retrieve the Container/Secrets from the user's Barbican account.

This honestly hasn't even been *fully* tested yet, but it SHOULD work. The question is whether right now in devstack the admin user is allowed to read all user secrets just because it is the admin user (which I think might be the case), in which case we won't actually know if ACLs are working as intended (but I think we can assume that Barbican has tested that feature and just rely on it working).
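For reference, the service account described above is the one configured in the LBaaS service-auth settings in neutron.conf, roughly like the following (all values hypothetical; option names may differ between releases, so check your deployment's sample config):

```
[service_auth]
auth_url = http://keystone.example.internal:5000/v2.0
admin_user = neutron-lbaas
admin_tenant_name = service
admin_password = secret
region = RegionOne
```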

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 14, 2015 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Is there documentation which records this step by step?

What is the Neutron-LBaaS tenant?

Is it the tenant who is configuring the listener? *OR* is it some tenant created for the lbaas plugin that holds all secrets for all tenants configuring lbaas?

>>You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant.
I checked the ACL docs
http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

The ACL API allows "users" (not "tenants") access to secrets/containers. What is the API or CLI that the admin will use to grant the Neutron-LBaaS tenant access to the tenant's secret+container?


From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 15 September 2015 03:00
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be documented, but we are looking into making an API call that would expose the admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system to add that ID as a readable user of the container and all of the secrets. Then Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is granted access to the user's container.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying "could not locate/find container".
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = "admin" and tenant_name = "admin".
              This means the lbaas plugin is looking for the tenant's certificate in the "admin" tenant, where it will never find it.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the "admin" user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username = "admin" and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* Am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/4bb56984/attachment.html>

From harlowja at outlook.com  Tue Sep 15 19:07:54 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Tue, 15 Sep 2015 12:07:54 -0700
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
 service
In-Reply-To: <55F849A2.7080707@mirantis.com>
References: <55F849A2.7080707@mirantis.com>
Message-ID: <BLU436-SMTP11163EBD88A3E5D56793979D85C0@phx.gbl>

Sounds like a useful idea if projects can plug in to the 
reloading process. I definitely think there needs to be a way for 
services to plug in to this, although I'm not quite sure it will be 
sufficient at the current time.

An example of why:

- 
https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/__init__.py#L24 
(unless this module is purged from Python's module cache and re-imported, 
it will likely not pick up reloaded values).
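The parenthetical points at the usual import-time capture pitfall: a value read at module import is frozen there, so a later config reload never reaches it. A toy illustration in plain Python (no oslo.config involved):

```python
# A module-level read copies the value once, at import time; a later
# reload only changes the attribute, not the copied value.
class Conf(object):
    def __init__(self, value):
        self.value = value


CONF = Conf(10)
CAPTURED = CONF.value      # frozen at "import time"

CONF.value = 20            # simulate a config reload
assert CAPTURED == 10      # stale copy: the reload is not visible
assert CONF.value == 20    # live attribute access sees the reload
```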

Likely these can all be easily fixed (I just don't know how many of 
those exist in the various projects); but I guess we have to start 
somewhere so getting the underlying code able to be reloaded is a first 
step of likely many.
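For illustration, the kind of plug-in reload hook being discussed might look like the sketch below. All names here are invented; the real oslo.config interface may well differ:

```python
# Sketch: SIGHUP triggers a config re-read, then notifies registered
# service callbacks so they can react to the new values. Names are
# illustrative, not the actual oslo.config API.
import signal


class ReloadableConfig(object):
    def __init__(self):
        self._hooks = []
        self.values = {}

    def register_reload_hook(self, hook):
        """Services register a callable to run after each reload."""
        self._hooks.append(hook)

    def reload_config_files(self, new_values):
        """Stand-in for oslo.config's reload; notifies services."""
        self.values = dict(new_values)
        for hook in self._hooks:
            hook(self)


def install_sighup(conf, load_values):
    """Wire SIGHUP to re-read the config and fire the hooks."""
    signal.signal(
        signal.SIGHUP,
        lambda signum, frame: conf.reload_config_files(load_values()))
```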

- Josh

mhorban wrote:
> Hi guys,
>
> I would like to talk about reloading a service's config while it is running.
> Now we have the ability to reload the config of a service with the SIGHUP signal.
> Right now SIGHUP just causes a call to conf.reload_config_files().
> As a result the configuration is updated, but services don't know about
> it; there is no way to notify them.
> I've created review https://review.openstack.org/#/c/213062/ to allow
> executing service code on the config-reload event.
> Possible usage can be seen in https://review.openstack.org/#/c/223668/.
>
> Any ideas or suggestions?
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From dborodaenko at mirantis.com  Tue Sep 15 19:20:02 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Tue, 15 Sep 2015 12:20:02 -0700
Subject: [openstack-dev] [Fuel] fuel-createmirror "command not found"
In-Reply-To: <CAJfWK48OY3G_CBg482Ug4-XjbN4DsHJqsZm6LBayvnQ157Ry_Q@mail.gmail.com>
References: <CAJfWK48OY3G_CBg482Ug4-XjbN4DsHJqsZm6LBayvnQ157Ry_Q@mail.gmail.com>
Message-ID: <20150915192002.GB7249@localhost>

Hi Adam,

Can you provide a few more details, e.g. specific error messages and
logs? We have a fairly detailed checklist of things to look for when
reporting bugs about Fuel:

https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Test_and_report_bugs

Did you check the bugs about fuel-createmirror that have been fixed
between Fuel 6.1 and 7.0? Could it be that your problem is already
fixed? Did you try a more recent version of the script? If you have a
problem we haven't seen before, please file a bug report, it's the best
way to make sure everyone can benefit from your findings and our fixes:

https://bugs.launchpad.net/fuel/+filebug

Thanks,
-- 
Dmitry Borodaenko

On Tue, Sep 15, 2015 at 09:43:20AM -0700, Adam Lawson wrote:
> Hi guys,
> Is there a trick to get the fuel-createmirror command to work? A customer's
> fuel environment was at 6.0, upgraded to 6.1, tried to create a local mirror,
> and failed. It is not working from the master node.
> 
> 
> *Adam Lawson*
> 
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From armamig at gmail.com  Tue Sep 15 19:20:09 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 12:20:09 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F86358.3050705@linux.vnet.ibm.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
 <55F86358.3050705@linux.vnet.ibm.com>
Message-ID: <CAK+RQeasC+8-m=Yyj=m=1UZ3S=rrNMtMHY_vtu_7NW4+MqRD_w@mail.gmail.com>

On 15 September 2015 at 11:28, Matt Riedemann <mriedem at linux.vnet.ibm.com>
wrote:

>
>
> On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
>
>> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>>
>>  > a) an update to python-novaclient to allow a named network to be passed
>>  > to satisfy the "you have more than one network" - the nics argument is
>>  > still useful for more complex things
>>
>> I am not using the latest, but rather Juno.  I find that in many places
>> the Neutron CLI insists on a UUID when a name could be used.  Three
>> cheers for any campaign to fix that.
>>
>
> It's my understanding that network names in neutron, like security groups,
> are not unique, that's why you have to specify a UUID.
>

Last time I checked, that's true of any resource in OpenStack.


>
>> And, yeah, creating VMs on a shared public network is good too.
>>
>> Thanks,
>> mike
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/298e6bff/attachment.html>

From mb at us.ibm.com  Tue Sep 15 19:21:03 2015
From: mb at us.ibm.com (Mohammad Banikazemi)
Date: Tue, 15 Sep 2015 15:21:03 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A3102BC@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>,
 <55F86358.3050705@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01A3102BC@EX10MBOX03.pnnl.gov>
Message-ID: <201509151921.t8FJLHgf015786@d03av03.boulder.ibm.com>



"Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 02:57:10 PM:

> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 09/15/2015 02:59 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> Most projects let you specify a name, and only force you to use a
> uuid IFF there is a conflict, leaving it up to the user to decide if
> they want the ease of use of names and being careful to name things,
> or having to use uuid's and not.

That is how Neutron works as well. If it doesn't in some cases, then those
are bugs that need to be filed and fixed.

mb at ubuntu14:~$ neutron net-create x1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 02fd014d-3a84-463f-a158-317411528ff3 |
| mtu                       | 0                                    |
| name                      | x1                                   |
| port_security_enabled     | True                                 |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1037                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | ce56abd5661f4140a5df98927a6f54d8     |
+---------------------------+--------------------------------------+
mb at ubuntu14:~$ neutron net-delete x1
Deleted network: x1

mb at ubuntu14:~$ neutron net-create x1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | db95539c-1c33-4791-a87f-608872ed3e86 |
| mtu                       | 0                                    |
| name                      | x1                                   |
| port_security_enabled     | True                                 |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1010                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | ce56abd5661f4140a5df98927a6f54d8     |
+---------------------------+--------------------------------------+
mb at ubuntu14:~$ neutron net-create x1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b2b3dd55-0f6f-46e7-aaef-c4a89a5d1ef9 |
| mtu                       | 0                                    |
| name                      | x1                                   |
| port_security_enabled     | True                                 |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1071                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | ce56abd5661f4140a5df98927a6f54d8     |
+---------------------------+--------------------------------------+
mb at ubuntu14:~$ neutron net-delete x1
Multiple network matches found for name 'x1', use an ID to be more
specific.
mb at ubuntu14:~$ neutron net-delete db95539c-1c33-4791-a87f-608872ed3e86
Deleted network: db95539c-1c33-4791-a87f-608872ed3e86
mb at ubuntu14:~$ neutron net-delete x1
Deleted network: x1
mb at ubuntu14:~$
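The behavior shown above can be mirrored client-side: resolve a name to a UUID only when the match is unambiguous. A sketch (not actual neutronclient code; `networks` stands in for the result of listing networks via the API):

```python
# Resolve a Neutron network name to an ID, failing on duplicates,
# mirroring the CLI transcript above.
def resolve_network_id(networks, name_or_id):
    """Return the network ID for a name or ID, or raise LookupError."""
    # An exact ID match always wins; IDs are unique.
    ids = [n["id"] for n in networks if n["id"] == name_or_id]
    if ids:
        return ids[0]
    matches = [n["id"] for n in networks if n["name"] == name_or_id]
    if not matches:
        raise LookupError("No network matches '%s'" % name_or_id)
    if len(matches) > 1:
        raise LookupError("Multiple network matches found for name '%s', "
                          "use an ID to be more specific." % name_or_id)
    return matches[0]
```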


Best,

Mohammad



>
> Neutron also has the odd wrinkle that if you're a cloud admin, it
> always gives you all the resources back in a listing, rather than
> just the current tenant's, with a flag saying all.
>
> This means if you try to use the "default" security group, for
> example, it may work as a user, and then fail as an admin on the
> same tenant. Very annoying. :/
>
> I've had to work around that in heat templates before.
>
> Thanks,
> Kevin
>
>
> ________________________________________
> From: Matt Riedemann [mriedem at linux.vnet.ibm.com]
> Sent: Tuesday, September 15, 2015 11:28 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
> > Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
> >
> >  > a) an update to python-novaclient to allow a named network to be passed
> >  > to satisfy the "you have more than one network" - the nics argument is
> >  > still useful for more complex things
> >
> > I am not using the latest, but rather Juno.  I find that in many places
> > the Neutron CLI insists on a UUID when a name could be used.  Three
> > cheers for any campaign to fix that.
>
> It's my understanding that network names in neutron, like security
> groups, are not unique, that's why you have to specify a UUID.
>
> >
> > And, yeah, creating VMs on a shared public network is good too.
> >
> > Thanks,
> > mike
> >
> >
> >
__________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/6029c2c1/attachment.html>

From mb at us.ibm.com  Tue Sep 15 19:23:04 2015
From: mb at us.ibm.com (Mohammad Banikazemi)
Date: Tue, 15 Sep 2015 15:23:04 -0400
Subject: [openstack-dev] [nova][neutron][devstack]
 New	proposed	'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>,
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
Message-ID: <201509151923.t8FJNBUL024868@d01av01.pok.ibm.com>



"Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 02:00:03 PM:

> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 09/15/2015 02:02 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> We run several clouds where there are multiple external networks.
> the "just run it in on THE public network" doesn't work. :/
>
> I also strongly recommend that users put VMs on a private network
> and use floating IPs/load balancers.


Just curious to know how many floating IPs you have in each instance of
your OpenStack cloud.

Best,

Mohammad




> For many reasons. Such as, if
> you don't, the ip that gets assigned to the vm helps it become a
> pet. you can't replace the vm and get the same IP. Floating IP's and
> load balancers can help prevent pets. It also prevents security
> issues with DNS and IP's. Also, for every floating ip/lb I have, I
> usually have 3x or more the number of instances that are on the
> private network. Sure its easy to put everything on the public
> network, but it provides much better security if you only put what
> you must on the public network. Consider the internet. would you
> want to expose every device in your house directly on the internet?
> No. you put them in a private network and poke holes just for the
> stuff that does. we should be encouraging good security practices.
> If we encourage bad ones, then it will bite us later when OpenStack
> gets a reputation for being associated with compromises.
>
> I do consider making things as simple as possible very important.
> but that is, make them as simple as possible, but no simpler.
> There's danger here of making things too simple.
>
> Thanks,
> Kevin
> ________________________________________
> From: Doug Hellmann [doug at doughellmann.com]
> Sent: Tuesday, September 15, 2015 10:02 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com> wrote:
> >
> > > Hey all!
> > >
> > > If any of you have ever gotten drunk with me, you'll know I hate floating
> > > IPs more than I hate being stabbed in the face with a very angry fish.
> > >
> > > However, that doesn't really matter. What should matter is "what is the
> > > most sane thing we can do for our users"
> > >
> > > As you might have seen in the glance thread, I have a bunch of OpenStack
> > > public cloud accounts. Since I wrote that email this morning, I've added
> > > more - so we're up to 13.
> > >
> > > auro
> > > citycloud
> > > datacentred
> > > dreamhost
> > > elastx
> > > entercloudsuite
> > > hp
> > > ovh
> > > rackspace
> > > runabove
> > > ultimum
> > > unitedstack
> > > vexxhost
> > >
> > > Of those public clouds, 5 of them require you to use a floating IP to get
> > > an outbound address, the others directly attach you to the public network.
> > > Most of those 8 allow you to create a private network, to boot vms on the
> > > private network, and ALSO to create a router with a gateway and put
> > > floating IPs on your private ip'd machines if you choose.
> > >
> > > Which brings me to the suggestion I'd like to make.
> > >
> > > Instead of having our default in devstack and our default when we talk
> > > about things be "you boot a VM and you put a floating IP on it" - which
> > > solves one of the two usage models - how about:
> > >
> > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > don't care what it's called - ext-net, public, whatever. The "shared" part
> > > is the key, that's the part that lets someone boot a vm on it directly.
> > >
> > > - Each person can then make a private network, router, gateway, etc. and
> > > get floating-ips from the same public network if they prefer that model.
> > >
> > > Are there any good reasons to not push to get all of the public networks
> > > marked as "shared"?
> > >
> >
> > The reason is simple: not every cloud deployment is the same: private is
> > different from public, and even within the same cloud model, the network
> > topology may vary greatly.
> >
> > Perhaps Neutron fails in the sense that it provides you with too much
> > choice, and perhaps we have to standardize on the type of networking
> > profile expected by a user of OpenStack public clouds before making changes
> > that would fragment this landscape even further.
> >
> > If you are advocating for more flexibility without limiting the existing
> > one, we're only making the problem worse.
>
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
> neutron would be enhanced greatly if there was a simple, clear, way
> for the user to get a new VM with a public IP on any cloud without
> multiple steps on their part. There are a lot of ways to implement
> that "under the hood" (what you call "networking profile" above)
> but the users don't care about "under the hood" so we should provide
> a way for them to ignore it. That's *not* the same as saying we
> should only support one profile. Think about the API from the use
> case perspective, and build it so if there are different deployment
> configurations available, the right action can be taken based on
> the deployment choices made without the user providing any hints.
>
> Doug
>
> >
> > >
> > > OH - well, one thing - that's that once there are two networks in an
> > > account you have to specify which one. This is really painful in nova
> > > client. Say, for instance, you have a public network called "public" and a
> > > private network called "private" ...
> > >
> > > You can't just say "nova boot --network=public" - nope, you need to say
> > > "nova boot --nics net-id=$uuid_of_my_public_network"
> > >
> > > So I'd suggest 2 more things;
> > >
> > > a) an update to python-novaclient to allow a named network to be passed
> > > to satisfy the "you have more than one network" - the nics argument is
> > > still useful for more complex things
> > >
> > > b) ability to say "vms in my cloud should default to being booted on the
> > > public network" or "vms in my cloud should default to being booted on a
> > > network owned by the user"
> > >
> > > Thoughts?
> > >
> >
> > As I implied earlier, I am not sure how healthy this choice is. As a user
> > of multiple clouds I may end up having a different user experience based on
> > which cloud I am using... I thought you were partially complaining about
> > lack of consistency?
> >
> > >
> > > Monty
> > >
> > >
__________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/760cfadb/attachment.html>

From nik.komawar at gmail.com  Tue Sep 15 19:24:51 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Tue, 15 Sep 2015 15:24:51 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
 library releases needed
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>
References: <1442234537-sup-4636@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>
Message-ID: <55F87083.6080408@gmail.com>

Hi Doug,

And it would be good to lock in glance_store 0.9.1 too, if it applies to
this email. (That's on PyPI.)

On 9/14/15 9:26 AM, Kuvaja, Erno wrote:
> Hi Doug,
>
> Please find python-glanceclient 1.0.1 release request https://review.openstack.org/#/c/222716/
>
> - Erno
>
>> -----Original Message-----
>> From: Doug Hellmann [mailto:doug at doughellmann.com]
>> Sent: Monday, September 14, 2015 1:46 PM
>> To: openstack-dev
>> Subject: [openstack-dev] [all][ptl][release] final liberty cycle client library
>> releases needed
>>
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release tasks,
>> we need to have final releases for all client libraries in the next day or two.
>>
>> If you have not already submitted your final release request for this cycle,
>> please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this cycle,
>> please reply to this email and let me know that you have so I can create your
>> stable/liberty branch.
>>
>> Thanks!
>> Doug
>>
>> __________________________________________________________
>> ________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From divius.inside at gmail.com  Tue Sep 15 19:40:43 2015
From: divius.inside at gmail.com (Dmitry Tantsur)
Date: Tue, 15 Sep 2015 21:40:43 +0200
Subject: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was:
 final liberty cycle client library releases needed)
In-Reply-To: <1442331113-sup-8198@lrrr.local>
References: <1442328235-sup-901@lrrr.local> <55F8331C.7020709@redhat.com>
 <1442331113-sup-8198@lrrr.local>
Message-ID: <CAAL3yRsQrwveO-T=cOYsuHo_ghL7226zBno5ebP8=9AMO+-QRg@mail.gmail.com>

2015-09-15 17:36 GMT+02:00 Doug Hellmann <doug at doughellmann.com>:

> Excerpts from Dmitry Tantsur's message of 2015-09-15 17:02:52 +0200:
> > Hi folks!
> >
> > As you can see below, we have to make the final release of
> > python-ironic-inspector-client really soon. We have 2 big missing parts:
> >
> > 1. Introspection rules support.
> >     I'm working on it: https://review.openstack.org/#/c/223096/
> >     This required a substantial requirement, so that our client does not
> > become a complete mess: https://review.openstack.org/#/c/223490/
>
> At this point in the schedule, I'm not sure it's a good idea to be
> doing anything that's considered a "substantial" rewrite (what I
> assume you meant instead of a "substantial requirement").
>

Oh, right. I can't English any more, sorry :)


>
> What depends on python-ironic-inspector-client? Are all of the things
> that depend on it working for liberty right now? If so, that's your
> liberty release and the rewrite should be considered for mitaka.
>

The only thing that has an optional dependency on inspector client is
ironic. Their interaction is well covered by gate tests, so I'm pretty
confident we're not breaking what is working now.


>
> >
> > 2. Support for getting introspection data. John (trown) volunteered to
> > do this work.
> >
> > I'd like to ask the inspector team to pay close attention to these
> > patches, as the deadline for them is Friday (preferably European time).
>
> You should definitely not be trying to write anything new at this point.
> The feature freeze was *last* week. The releases for this week are meant
> to include bug fixes and any needed requirements updates.
>

Yeah, we (and especially I) should have done a much better job managing our
schedule this cycle...

Having said that, I'm a bit worried that by marking the last release as
stable/liberty, we'll exclude the majority of Liberty features from the
client, which might make this release somewhat useless for Liberty
downstream consumers. I'm worried about downstream people (me included)
having to maintain their own stable/liberty based on the next release.
What would you advise we do?

Thanks.


>
> >
> > Next, please have a look at the milestone page for ironic-inspector
> > itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
> > There are things that require review, and there are things without an
> > assignee. If you'd like to volunteer for something there, please assign
> > it to yourself. Our deadline is next Thursday, but it would be really
> > good to finish it earlier next week to dedicate some time to testing.
> >
> > Thanks all, I'm looking forward to this release :)
> >
> >
> > -------- Forwarded Message --------
> > Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle
> > client library releases needed
> > Date: Tue, 15 Sep 2015 10:45:45 -0400
> > From: Doug Hellmann <doug at doughellmann.com>
> > Reply-To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev at lists.openstack.org>
> > To: openstack-dev <openstack-dev at lists.openstack.org>
> >
> > Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> > > On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > > > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> > > >> PTLs and release liaisons,
> > > >>
> > > >> In order to keep the rest of our schedule for the end-of-cycle
> release
> > > >> tasks, we need to have final releases for all client libraries in
> the
> > > >> next day or two.
> > > >>
> > > >> If you have not already submitted your final release request for
> this
> > > >> cycle, please do that as soon as possible.
> > > >>
> > > >> If you *have* already submitted your final release request for this
> > > >> cycle, please reply to this email and let me know that you have so
> I can
> > > >> create your stable/liberty branch.
> > > >>
> > > >> Thanks!
> > > >> Doug
> > > >
> > > > I forgot to mention that we also need the constraints file in
> > > > global-requirements updated for all of the releases, so we're
> actually
> > > > testing with them in the gate. Please take a minute to check the
> version
> > > > specified in openstack/requirements/upper-constraints.txt for your
> > > > libraries and submit a patch to update it to the latest release if
> > > > necessary. I'll do a review later in the week, too, but it's easier
> to
> > > > identify the causes of test failures if we have one patch at a time.
> > >
> > > Hi Doug!
> > >
> > > When is the last and final deadline for doing all this for
> > > not-so-important and non-release:managed projects like
> ironic-inspector?
> > > We still lack some Liberty features covered in
> > > python-ironic-inspector-client. Do we have time until end of week to
> > > finish them?
> >
> > We would like for the schedule to be the same for everyone. We need the
> > final versions for all libraries this week, so we can update
> > requirements constraints by early next week before the RC1.
> >
> > https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> >
> > Doug
> >
> > >
> > > Sorry if you hear this question too often :)
> > >
> > > Thanks!
> > >
> > > >
> > > > Doug
> > > >
> > > >
> __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > >
> >
>
>



-- 
Dmitry Tantsur
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/67799e04/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep 15 19:43:45 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 15 Sep 2015 19:43:45 +0000
Subject: [openstack-dev] [nova][neutron][devstack]
	New	proposed	'default' network model
In-Reply-To: <201509151923.t8FJNBUL024868@d01av01.pok.ibm.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>,
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>,
 <201509151923.t8FJNBUL024868@d01av01.pok.ibm.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A310314@EX10MBOX03.pnnl.gov>

I'm not quite sure how to read your question. I think it can be taken multiple ways. I'll guess at what you meant though. If I interpreted wrong, please ask again.

For the instances that have floating IPs, usually either 1 or 2. One of our clouds has basically a public
network directly on the internet, and a shared private network that crosses tenants but is not internet facing. We can place VMs on either network easily by just attaching floating IPs. The private shared network usually has more floating IPs assigned than the internet-facing one.

As LBaaS matures, we're using it more and more, putting the floating IPs on the LB instead of the instances, and putting a pool of instances behind it. So our instance counts are growing faster than our usage of floating IPs.

Thanks,
Kevin
________________________________
From: Mohammad Banikazemi [mb at us.ibm.com]
Sent: Tuesday, September 15, 2015 12:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model


"Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 02:00:03 PM:

> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 09/15/2015 02:02 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> We run several clouds where there are multiple external networks.
> the "just run it in on THE public network" doesn't work. :/
>
> I also strongly recommend to users to put vms on a private network
> and use floating ip's/load balancers.


Just curious to know how many floating IPs you have in each instance of your OpenStack cloud.

Best,

Mohammad




For many reasons. Such as, if
> you don't, the ip that gets assigned to the vm helps it become a
> pet. you can't replace the vm and get the same IP. Floating IP's and
> load balancers can help prevent pets. It also prevents security
> issues with DNS and IP's. Also, for every floating ip/lb I have, I
> usually have 3x or more the number of instances that are on the
> private network. Sure its easy to put everything on the public
> network, but it provides much better security if you only put what
> you must on the public network. Consider the internet. would you
> want to expose every device in your house directly on the internet?
> No. you put them in a private network and poke holes just for the
> stuff that does. we should be encouraging good security practices.
> If we encourage bad ones, then it will bite us later when OpenStack
> gets a reputation for being associated with compromises.
>
> I do consider making things as simple as possible very important.
> but that is, make them as simple as possible, but no simpler.
> There's danger here of making things too simple.
>
> Thanks,
> Kevin
> ________________________________________
> From: Doug Hellmann [doug at doughellmann.com]
> Sent: Tuesday, September 15, 2015 10:02 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com> wrote:
> >
> > > Hey all!
> > >
> > > If any of you have ever gotten drunk with me, you'll know I hate floating
> > > IPs more than I hate being stabbed in the face with a very angry fish.
> > >
> > > However, that doesn't really matter. What should matter is "what is the
> > > most sane thing we can do for our users"
> > >
> > > As you might have seen in the glance thread, I have a bunch of OpenStack
> > > public cloud accounts. Since I wrote that email this morning, I've added
> > > more - so we're up to 13.
> > >
> > > auro
> > > citycloud
> > > datacentred
> > > dreamhost
> > > elastx
> > > entercloudsuite
> > > hp
> > > ovh
> > > rackspace
> > > runabove
> > > ultimum
> > > unitedstack
> > > vexxhost
> > >
> > > Of those public clouds, 5 of them require you to use a floating IP to get
> > > an outbound address, the others directly attach you to the public network.
> > > Most of those 8 allow you to create a private network, to boot vms on the
> > > private network, and ALSO to create a router with a gateway and put
> > > floating IPs on your private ip'd machines if you choose.
> > >
> > > Which brings me to the suggestion I'd like to make.
> > >
> > > Instead of having our default in devstack and our default when we talk
> > > about things be "you boot a VM and you put a floating IP on it" - which
> > > solves one of the two usage models - how about:
> > >
> > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > don't care what it's called  ext-net, public, whatever. the "shared" part
> > > is the key, that's the part that lets someone boot a vm on it directly.
> > >
> > > - Each person can then make a private network, router, gateway, etc. and
> > > get floating-ips from the same public network if they prefer that model.
> > >
> > > Are there any good reasons to not push to get all of the public networks
> > > marked as "shared"?
> > >
> >
> > The reason is simple: not every cloud deployment is the same: private is
> > different from public and even within the same cloud model, the network
> > topology may vary greatly.
> >
> > Perhaps Neutron fails in the sense that it provides you with too much
> > choice, and perhaps we have to standardize on the type of networking
> > profile expected by a user of OpenStack public clouds before making changes
> > that would fragment this landscape even further.
> >
> > If you are advocating for more flexibility without limiting the existing
> > one, we're only making the problem worse.
>
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
> neutron would be enhanced greatly if there was a simple, clear, way
> for the user to get a new VM with a public IP on any cloud without
> multiple steps on their part. There are a lot of ways to implement
> that "under the hood" (what you call "networking profile" above)
> but the users don't care about "under the hood" so we should provide
> a way for them to ignore it. That's *not* the same as saying we
> should only support one profile. Think about the API from the use
> case perspective, and build it so if there are different deployment
> configurations available, the right action can be taken based on
> the deployment choices made without the user providing any hints.
>
> Doug
>
> >
> > >
> > > OH - well, one thing - that's that once there are two networks in an
> > > account you have to specify which one. This is really painful in nova
> > > clent. Say, for instance, you have a public network called "public" and a
> > > private network called "private" ...
> > >
> > > You can't just say "nova boot --network=public" - nope, you need to say
> > > "nova boot --nics net-id=$uuid_of_my_public_network"
> > >
> > > So I'd suggest 2 more things;
> > >
> > > a) an update to python-novaclient to allow a named network to bepassed to
> > > satisfy the "you have more than one network" - the nics argument is still
> > > useful for more complex things
> > >
> > > b) ability to say "vms in my cloud should default to being booted on the
> > > public network" or "vms in my cloud should default to being booted on a
> > > network owned by the user"
> > >
> > > Thoughts?
> > >
> >
> > As I implied earlier, I am not sure how healthy this choice is. As a user
> > of multiple clouds I may end up having a different user experience based on
> > which cloud I am using...I thought you were partially complaining about
> > lack of consistency?
> >
> > >
> > > Monty
> > >
> > >
>
>
>
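The named-network lookup Monty proposes in suggestion (a) can be sketched roughly as below. This is an illustration of the idea, not the actual python-novaclient change; the helper, exception names, and network data are all made up.

```python
# Illustrative sketch of resolving a network name to an ID before boot.
# The `nets` list mimics the shape of a Neutron network listing; the
# names and UUIDs are invented for the example.

class NoSuchNetwork(Exception):
    pass

class AmbiguousNetwork(Exception):
    pass

def resolve_network(networks, name_or_id):
    """Return the network ID for a name or ID, erroring on ambiguity."""
    # An exact ID match wins outright.
    for net in networks:
        if net["id"] == name_or_id:
            return net["id"]
    matches = [net for net in networks if net["name"] == name_or_id]
    if not matches:
        raise NoSuchNetwork(name_or_id)
    if len(matches) > 1:
        raise AmbiguousNetwork(name_or_id)
    return matches[0]["id"]

nets = [
    {"id": "11111111-aaaa", "name": "public"},
    {"id": "22222222-bbbb", "name": "private"},
]
# With this in place, "nova boot --network=public" could translate into
# the equivalent of --nics net-id=11111111-aaaa behind the scenes.
print(resolve_network(nets, "public"))
```

The ambiguity error matters: two networks with the same name are legal in Neutron, so a name-based flag has to fail loudly rather than pick one silently.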

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/3a2d3b5d/attachment.html>

From dprince at redhat.com  Tue Sep 15 19:50:55 2015
From: dprince at redhat.com (Dan Prince)
Date: Tue, 15 Sep 2015 15:50:55 -0400
Subject: [openstack-dev] [TripleO] PTL candidacy
Message-ID: <1442346655.7968.5.camel@redhat.com>

Hi TripleO,

My name is Dan Prince and I'm running for the Mitaka TripleO PTL. I've
been working on TripleO since Grizzly and OpenStack since Bexar. I care
deeply about the project and would like to continue the vision of
deploying OpenStack w/ OpenStack.

TripleO has come a long way over the past few years. I like how the
early vision within the project set the stage for how you can deploy
OpenStack with OpenStack. I also like how we are continuing to
transform the OpenStack deployment landscape by using OpenStack, with
all its API goodness, alongside great technologies like Puppet and
Docker. Using the best with the best... that is why I like TripleO.

A couple of areas I'd like to see us focus on for Mitaka:

CI: The ability to land code upstream is critical right now. We need to
continue to refine our CI workflow so that it is faster, more reliable,
and gives us more coverage.

Upgrades: Perhaps the most important area of focus is upgrades. We've
made some progress towards minor updates, but plenty of work remains to
fully support both minor updates and full upgrades.

Composability: Better composability within the Heat templates would
make 3rd party integration even easier and also give us more flexible
roles. Role flexibility in particular may become more desirable as we
take steps towards supporting a more containerized deployment model.

Validations: Our low-level network infrastructure is extensible and
very flexible, but it isn't easy to configure. I would like to see
us continue adding validations at key points to make this easier.
Additionally validations are critical for integration with any sort of
external resource like a Ceph cluster or load balancers.

Features: New network features like IPv6 support and better support for
spine/leaf deployment topologies. We are also refining how Tuskar works
with Heat and evolving the Tuskar UI around a common library
that can better leverage the installation workflows OpenStack requires.

Lots of exciting stuff to work on and a great team of developers who
are driving many of these goals. As PTL I would be honored to help
organize and drive forward the efforts of the team where needed.

Thanks for your consideration.

Dan Prince


From armamig at gmail.com  Tue Sep 15 19:50:24 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 12:50:24 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1442336141-sup-9706@lrrr.local>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
Message-ID: <CAK+RQeZ29gkuWkvj5m5pWXK+aEf=tMpKKM7R_hkNrvgDaiHE_Q@mail.gmail.com>

On 15 September 2015 at 10:02, Doug Hellmann <doug at doughellmann.com> wrote:

> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com>
> wrote:
> >
> > > Hey all!
> > >
> > > If any of you have ever gotten drunk with me, you'll know I hate
> floating
> > > IPs more than I hate being stabbed in the face with a very angry fish.
> > >
> > > However, that doesn't really matter. What should matter is "what is the
> > > most sane thing we can do for our users"
> > >
> > > As you might have seen in the glance thread, I have a bunch of
> OpenStack
> > > public cloud accounts. Since I wrote that email this morning, I've
> added
> > > more - so we're up to 13.
> > >
> > > auro
> > > citycloud
> > > datacentred
> > > dreamhost
> > > elastx
> > > entercloudsuite
> > > hp
> > > ovh
> > > rackspace
> > > runabove
> > > ultimum
> > > unitedstack
> > > vexxhost
> > >
> > > Of those public clouds, 5 of them require you to use a floating IP to
> get
> > > an outbound address, the others directly attach you to the public
> network.
> > > Most of those 8 allow you to create a private network, to boot vms on
> the
> > > private network, and ALSO to create a router with a gateway and put
> > > floating IPs on your private ip'd machines if you choose.
> > >
> > > Which brings me to the suggestion I'd like to make.
> > >
> > > Instead of having our default in devstack and our default when we talk
> > > about things be "you boot a VM and you put a floating IP on it" - which
> > > solves one of the two usage models - how about:
> > >
> > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > don't care what it's called  ext-net, public, whatever. the "shared"
> part
> > > is the key, that's the part that lets someone boot a vm on it directly.
> > >
> > > - Each person can then make a private network, router, gateway, etc.
> and
> > > get floating-ips from the same public network if they prefer that
> model.
> > >
> > > Are there any good reasons to not push to get all of the public
> networks
> > > marked as "shared"?
> > >
> >
> > The reason is simple: not every cloud deployment is the same: private is
> > different from public and even within the same cloud model, the network
> > topology may vary greatly.
> >
> > Perhaps Neutron fails in the sense that it provides you with too much
> > choice, and perhaps we have to standardize on the type of networking
> > profile expected by a user of OpenStack public clouds before making
> changes
> > that would fragment this landscape even further.
> >
> > If you are advocating for more flexibility without limiting the existing
> > one, we're only making the problem worse.
>
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
> neutron would be enhanced greatly if there was a simple, clear, way
> for the user to get a new VM with a public IP on any cloud without
> multiple steps on their part.


I agree with this last statement wholeheartedly, but we have to be careful
about how we do it, because there are implications for scalability and security.

Today Neutron provides a few network deployment models [1,2,3,4,5]. You can
mix and match, the only caveat being that this stuff must be
pre-provisioned.

Now the way I understand Monty's request is that in certain deployments
you'd like automatic provisioning. We can look into that, as we have in
blueprint [6], but we must recognize that hint-less requests can be hard to
achieve because the way the network service is provided can vary from
system to system...a lot.

Defaults are useful, but wrong defaults are worse. A system can make an
educated guess as to the user's intention; failing that, an operator can
force a choice for the user, but if that is hard too, then the only
option is to defer to the user.

So this boils down to: in light of the possible ways of providing VM
connectivity, how can we make a choice on the user's behalf? Can we assume
that they always want a publicly facing VM connected to the Internet? The
answer is 'no'.
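The difficulty of hint-less provisioning can be made concrete with a toy decision function: it only succeeds when the deployment leaves exactly one unambiguous candidate. This is a sketch of the problem, not the get-me-a-network blueprint's design; the field names follow Neutron's API but the heuristic itself is invented.

```python
# Toy illustration of choosing a network on the user's behalf without
# hints. The only safe guesses are the unambiguous cases; everything
# else must defer to the user, which is exactly the point above.

def pick_default_network(networks, owned_by_user):
    """Return a network ID, or None when only the user can decide."""
    shared_external = [n for n in networks
                       if n.get("shared") and n.get("router:external")]
    if len(shared_external) == 1:
        return shared_external[0]["id"]
    if len(owned_by_user) == 1:
        return owned_by_user[0]["id"]
    # Multiple candidates (or none): any guess risks being the wrong default.
    return None
```

Two clouds with different topologies give the function different answers for the same user, which is the consistency problem the thread keeps circling.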


> There are a lot of ways to implement
> that "under the hood" (what you call "networking profile" above)
> but the users don't care about "under the hood" so we should provide
> a way for them to ignore it. That's *not* the same as saying we
> should only support one profile. Think about the API from the use
> case perspective, and build it so if there are different deployment
> configurations available, the right action can be taken based on
> the deployment choices made without the user providing any hints.
>

[1]
http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-single-flat.html
[2]
http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-multi-flat.html
[3]
http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-mixed.html
[4]
http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-single-router.html
[5]
http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-tenant-router.html
[6] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network

>
> Doug
>
> >
> > >
> > > OH - well, one thing - that's that once there are two networks in an
> > > account you have to specify which one. This is really painful in nova
> > > clent. Say, for instance, you have a public network called "public"
> and a
> > > private network called "private" ...
> > >
> > > You can't just say "nova boot --network=public" - nope, you need to say
> > > "nova boot --nics net-id=$uuid_of_my_public_network"
> > >
> > > So I'd suggest 2 more things;
> > >
> > > a) an update to python-novaclient to allow a named network to be
> passed to
> > > satisfy the "you have more than one network" - the nics argument is
> still
> > > useful for more complex things
> > >
> > > b) ability to say "vms in my cloud should default to being booted on
> the
> > > public network" or "vms in my cloud should default to being booted on a
> > > network owned by the user"
> > >
> > > Thoughts?
> > >
> >
> > As I implied earlier, I am not sure how healthy this choice is. As a user
> > of multiple clouds I may end up having a different user experience based
> on
> > which cloud I am using...I thought you were partially complaining about
> > lack of consistency?
> >
> > >
> > > Monty
> > >
> > >
> > >
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/800da30c/attachment-0001.html>

From john.griffith8 at gmail.com  Tue Sep 15 19:57:10 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 15 Sep 2015 13:57:10 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55F857A2.3020603@redhat.com>
References: <55F84EA7.9080902@windriver.com>
	<55F857A2.3020603@redhat.com>
Message-ID: <CAPWkaSW88gzAXub=Vy+x8Q+ty_+KcC2ihHmBv0E9UyLM4Hdssg@mail.gmail.com>

On Tue, Sep 15, 2015 at 11:38 AM, Eric Harney <eharney at redhat.com> wrote:

> On 09/15/2015 01:00 PM, Chris Friesen wrote:
> > I'm currently trying to work around an issue where activating LVM
> > snapshots created through cinder takes potentially a long time.
> > (Linearly related to the amount of data that differs between the
> > original volume and the snapshot.)  On one system I tested it took about
> > one minute per 25GB of data, so the worst-case boot delay can become
> > significant.
>
Sadly the addition of the whole activate/deactivate has been problematic
ever since it was introduced.  I'd like to better understand why this is
needed and why the long delay.


> >
> > According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
> > not intended to be kept around indefinitely, they were supposed to be
> > used only until the backup was taken and then deleted.  He recommends
>
Correct, and FWIW this has also been the recommendation from Cinder's
perspective for a long time as well.  Snapshots are NOT backups and
shouldn't be treated as such.


> > using thin provisioning for long-lived snapshots due to differences in
> > how the metadata is maintained.  (He also says he's heard reports of
> > volume activation taking half an hour, which is clearly crazy when
> > instances are waiting to access their volumes.)

>
> > Given the above, is there any reason why we couldn't make thin
> > provisioning the default?
>
I tried, it was rejected.  I think it's crazy not to fix things up and
do this at this point.


> >
>
>
> My intention is to move toward thin-provisioned LVM as the default -- it
> is definitely better suited to our use of LVM.  Previously this was less
> easy, since some older Ubuntu platforms didn't support it, but in
> Liberty we added the ability to specify lvm_type = "auto" [1] to use
> thin if it is supported on the platform.
>
> The other issue preventing using thin by default is that we default the
> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
> the reference implementation, since it means that people who deploy
> Cinder LVM on smaller storage configurations can easily fill up their
> volume group and have things grind to halt.  I think we want something
> closer to the semantics of thick LVM for the default case.
>
> We haven't thought through a reasonable migration strategy for how to
> handle that.  I'm not sure we can change the default oversubscription
> ratio without breaking deployments using other drivers.  (Maybe I'm
> wrong about this?)
>
> If we sort out that issue, I don't see any reason we can't switch over
> in Mitaka.
>
> [1] https://review.openstack.org/#/c/104653/
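Eric's worry about the default oversubscription ratio is easy to quantify. A hedged back-of-envelope sketch (the pool size and volume sizes below are made up; only the ratio of 20 comes from the thread):

```python
# With max_over_subscription_ratio = 20, the scheduler will accept
# provisioned capacity up to 20x the thin pool's physical size. On a
# small volume group, that gap is what lets the pool fill up and grind
# to a halt once volumes are actually written.

def apparent_capacity_gb(physical_gb, ratio):
    """Capacity the scheduler believes it can provision against."""
    return physical_gb * ratio

def can_fill_pool(physical_gb, ratio, avg_volume_gb):
    """How many average-sized volumes fit 'on paper' vs physically."""
    on_paper = apparent_capacity_gb(physical_gb, ratio) // avg_volume_gb
    physically = physical_gb // avg_volume_gb
    return on_paper, physically

# A modest 100GB volume group with the default ratio of 20 and 10GB volumes:
on_paper, physically = can_fill_pool(100, 20, 10)
print(on_paper, physically)  # 200 volumes accepted, only 10 fit when full
```

A ratio near 1 would give thin LVM roughly the semantics of thick LVM for the default case, which is what the message above argues for.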
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/648f3561/attachment.html>

From mb at us.ibm.com  Tue Sep 15 20:03:53 2015
From: mb at us.ibm.com (Mohammad Banikazemi)
Date: Tue, 15 Sep 2015 16:03:53 -0400
Subject: [openstack-dev]
 [nova][neutron][devstack]	New	proposed	'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A310314@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>,
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>,
 <201509151923.t8FJNBUL024868@d01av01.pok.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01A310314@EX10MBOX03.pnnl.gov>
Message-ID: <201509152006.t8FK6A6H010022@d01av01.pok.ibm.com>


Thanks Kevin for your answer. My question was different. You mentioned in
your email that you run several clouds; that's why I used the word
"instance" in my question, to refer to each of those clouds. So let me put
the question differently: in the biggest cloud you run, how many
total floating IPs do you have? Just a ballpark number would be great: 10s,
100s, 1000s, more?

Thanks,

Mohammad

"Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 03:43:45 PM:

> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 09/15/2015 03:49 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> I'm not quite sure how to read your question. I think it can be
> taken multiple ways. I'll guess at what you meant though. If I
> interpreted wrong, please ask again.
>
> For the instances that have floating ip's, usually either 1 or 2.
> One of our clouds has basically a public
> network directly on the internet, and a shared private network that
> crosses tenants but is not internet facing. We can place vm's on
> either network easily by just attaching floating ip's. The private
> shared network has more floating ip's assigned then the internet one
usually.
>
> As LBaaS is maturing, we're using it more and more, putting the
> floating ips on the LB instead of the instances, and putting a pool
> of instances behind it. So our instance counts are growing faster
> then our usage of floating IP's.
>
> Thanks,
> Kevin
>
> From: Mohammad Banikazemi [mb at us.ibm.com]
> Sent: Tuesday, September 15, 2015 12:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model

> "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 02:00:03 PM:
>
> > From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev at lists.openstack.org>
> > Date: 09/15/2015 02:02 PM
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > We run several clouds where there are multiple external networks.
> > the "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network
> > and use floating ip's/load balancers.
>
>
> Just curious to know how many floating IPs you have in each instance
> of your OpenStack cloud.
>
> Best,
>
> Mohammad
>
>
>
>
> For many reasons. Such as, if
> > you don't, the ip that gets assigned to the vm helps it become a
> > pet. you can't replace the vm and get the same IP. Floating IP's and
> > load balancers can help prevent pets. It also prevents security
> > issues with DNS and IP's. Also, for every floating ip/lb I have, I
> > usually have 3x or more the number of instances that are on the
> > private network. Sure its easy to put everything on the public
> > network, but it provides much better security if you only put what
> > you must on the public network. Consider the internet. would you
> > want to expose every device in your house directly on the internet?
> > No. you put them in a private network and poke holes just for the
> > stuff that does. we should be encouraging good security practices.
> > If we encourage bad ones, then it will bite us later when OpenStack
> > gets a reputation for being associated with compromises.
> >
> > I do consider making things as simple as possible very important.
> > but that is, make them as simple as possible, but no simpler.
> > There's danger here of making things too simple.
> >
> > Thanks,
> > Kevin
> > ________________________________________
> > From: Doug Hellmann [doug at doughellmann.com]
> > Sent: Tuesday, September 15, 2015 10:02 AM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > > On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com>
wrote:
> > >
> > > > Hey all!
> > > >
> > > > If any of you have ever gotten drunk with me, you'll know I
> hate floating
> > > > IPs more than I hate being stabbed in the face with a very angry
fish.
> > > >
> > > > However, that doesn't really matter. What should matter is "what is
the
> > > > most sane thing we can do for our users"
> > > >
> > > > As you might have seen in the glance thread, I have a bunch
ofOpenStack
> > > > public cloud accounts. Since I wrote that email this morning, I've
added
> > > > more - so we're up to 13.
> > > >
> > > > auro
> > > > citycloud
> > > > datacentred
> > > > dreamhost
> > > > elastx
> > > > entercloudsuite
> > > > hp
> > > > ovh
> > > > rackspace
> > > > runabove
> > > > ultimum
> > > > unitedstack
> > > > vexxhost
> > > >
> > > > Of those public clouds, 5 of them require you to use a
> floating IP to get
> > > > an outbound address, the others directly attach you to the
> public network.
> > > > Most of those 8 allow you to create a private network, to bootvms
on the
> > > > private network, and ALSO to create a router with a gateway and put
> > > > floating IPs on your private ip'd machines if you choose.
> > > >
> > > > Which brings me to the suggestion I'd like to make.
> > > >
> > > > Instead of having our default in devstack and our default when we
talk
> > > > about things be "you boot a VM and you put a floating IP on it" -
which
> > > > solves one of the two usage models - how about:
> > > >
> > > > - Cloud has a shared: True, external:routable: True neutron
network. I
> > > > don't care what it's called  ext-net, public, whatever. the
> "shared" part
> > > > is the key, that's the part that lets someone boot a vm on it
directly.
> > > >
> > > > - Each person can then make a private network, router, gateway,
etc. and
> > > > get floating-ips from the same public network if they prefer that
model.
> > > >
> > > > Are there any good reasons to not push to get all of the public
networks
> > > > marked as "shared"?
> > > >
> > >
> > > The reason is simple: not every cloud deployment is the same: private
is
> > > different from public and even within the same cloud model, the
network
> > > topology may vary greatly.
> > >
> > > Perhaps Neutron fails in the sense that it provides you with too much
> > > choice, and perhaps we have to standardize on the type of networking
> > > profile expected by a user of OpenStack public clouds before
> making changes
> > > that would fragment this landscape even further.
> > >
> > > If you are advocating for more flexibility without limiting the
existing
> > > one, we're only making the problem worse.
> >
> > As with the Glance image upload API discussion, this is an example
> > of an extremely common use case that is either complex for the end
> > user or for which they have to know something about the deployment
> > in order to do it at all. The usability of an OpenStack cloud running
> > neutron would be enhanced greatly if there was a simple, clear, way
> > for the user to get a new VM with a public IP on any cloud without
> > multiple steps on their part. There are a lot of ways to implement
> > that "under the hood" (what you call "networking profile" above)
> > but the users don't care about "under the hood" so we should provide
> > a way for them to ignore it. That's *not* the same as saying we
> > should only support one profile. Think about the API from the use
> > case perspective, and build it so if there are different deployment
> > configurations available, the right action can be taken based on
> > the deployment choices made without the user providing any hints.
> >
> > Doug
> >
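[Editor's sketch] Doug's point above can be made concrete: a client could derive the right boot target from attributes the deployment already exposes (Neutron's real `shared` and `router:external` network attributes), so the user supplies no hints. The preference ordering and the sample data here are illustrative assumptions, not an agreed policy.

```python
# Choose a boot network from deployment metadata instead of asking the user.
# "shared" and "router:external" are standard Neutron network attributes;
# the preference ordering below is an assumption for illustration only.

def pick_boot_network(networks):
    # 1. A shared, externally routable network: boot directly on it.
    for n in networks:
        if n.get("shared") and n.get("router:external"):
            return n["name"]
    # 2. Otherwise, any shared network the tenant can attach to.
    for n in networks:
        if n.get("shared"):
            return n["name"]
    # 3. Fall back to a tenant-owned network (the floating-IP model).
    return networks[0]["name"] if networks else None

# A cloud that attaches VMs directly to the public network:
cloud_a = [{"name": "ext-net", "shared": True, "router:external": True}]
# A floating-IP cloud: only the user's private network is bootable:
cloud_b = [{"name": "private", "shared": False, "router:external": False}]
print(pick_boot_network(cloud_a))  # ext-net
print(pick_boot_network(cloud_b))  # private
```

Either deployment choice then yields a working VM from the same user-facing call, which is the "no hints" property Doug asks for.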
> > >
> > > >
> > > > OH - well, one thing - that's that once there are two networks in
an
> > > > account you have to specify which one. This is really painful in
nova
client. Say, for instance, you have a public network called
> "public" and a
> > > > private network called "private" ...
> > > >
> > > > You can't just say "nova boot --network=public" - nope, you need to
say
> > > > "nova boot --nics net-id=$uuid_of_my_public_network"
> > > >
> > > > So I'd suggest 2 more things;
> > > >
> > > > a) an update to python-novaclient to allow a named network to
> be passed to
> > > > satisfy the "you have more than one network" - the nics
> argument is still
> > > > useful for more complex things
> > > >
> > > > b) ability to say "vms in my cloud should default to being booted
on the
> > > > public network" or "vms in my cloud should default to being booted
on a
> > > > network owned by the user"
> > > >
> > > > Thoughts?
> > > >
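[Editor's sketch] Suggestion (a) above amounts to a small name-to-UUID lookup; since Neutron network names are not unique, ambiguity has to be a hard error. The listing below is made up for illustration — a real client would fetch it from Neutron's network list API.

```python
def resolve_network(networks, name):
    """Map a human-readable network name to the UUID that --nics wants."""
    matches = [n["id"] for n in networks if n["name"] == name]
    if not matches:
        raise ValueError("no network named %r" % name)
    if len(matches) > 1:
        # Network names are not unique in Neutron, so bail out loudly.
        raise ValueError("%d networks named %r; pass a UUID"
                         % (len(matches), name))
    return matches[0]

# Hypothetical listing, standing in for a real Neutron network list:
networks = [
    {"id": "6f1b3a2c-aaaa-4e57-9c1d-000000000001", "name": "public"},
    {"id": "6f1b3a2c-bbbb-4e57-9c1d-000000000002", "name": "private"},
]
uuid = resolve_network(networks, "public")
print("nova boot --nics net-id=%s" % uuid)
```

With a helper like this, "nova boot --network=public" becomes sugar over the existing --nics argument rather than a new code path.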
> > >
> > > As I implied earlier, I am not sure how healthy this choice is. As a
user
> > > of multiple clouds I may end up having a different user
> experience based on
> > > which cloud I am using...I thought you were partially complaining
about
> > > lack of consistency?
> > >
> > > >
> > > > Monty
> > > >
> > > >
>
__________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?
> subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/11f4fbdd/attachment.html>

From cboylan at sapwetik.org  Tue Sep 15 20:06:26 2015
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 15 Sep 2015 13:06:26 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
Message-ID: <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>

On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the
> "just run it on THE public network" doesn't work. :/
Maybe this would be better expressed as "just run it on an existing
public network" then?
> 
> I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you
> don't, the ip that gets assigned to the vm helps it become a pet. you
> can't replace the vm and get the same IP. Floating IP's and load
> balancers can help prevent pets. It also prevents security issues with
> DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> more the number of instances that are on the private network. Sure it's
> easy to put everything on the public network, but it provides much better
> security if you only put what you must on the public network. Consider
> the internet. would you want to expose every device in your house
> directly on the internet? No. you put them in a private network and poke
> holes just for the stuff that does. we should be encouraging good
> security practices. If we encourage bad ones, then it will bite us later
> when OpenStack gets a reputation for being associated with compromises.
There are a few issues with this. Neutron IPv6 does not support floating
IPs. So now you have to use two completely different concepts for
networking on a single dual stacked VM. IPv4 goes on a private network
and you attach a floating IP. IPv6 is publicly routable. If security and
DNS and not making pets were really the driving force behind floating
IPs we would see IPv6 support them too. These aren't the reasons
floating IPs exist, they exist because we are running out of IPv4
addresses and NAT is everyone's preferred solution to that problem. But
that doesn't make it a good default for a cloud; use them if you are
affected by an IP shortage.

Nothing prevents you from load balancing against public IPs to address
the DNS and firewall rule concerns (basically don't make pets). This
works great and is how OpenStack's git mirrors work.

It is also easy to firewall public IPs using Neutron via security groups
(and possibly the firewall service? I have never used it and don't
know). All this to say I think it is reasonable to use public shared
networks by default particularly since IPv6 does not have any concept of
a floating IP in Neutron so using them is just odd unless you really
really need them and you aren't actually any less secure.

Not to get too off topic, but I would love it if all the devices in my
home were publicly routable. I can use my firewall to punch holes for
them, NAT is not required. Unfortunately I still have issues with IPv6
at home. Maybe one day this will be a reality :)
> 
> I do consider making things as simple as possible very important. but
> that is, make them as simple as possible, but no simpler. There's danger
> here of making things too simple.
> 
> Thanks,
> Kevin
>

Clark


From tim at styra.com  Tue Sep 15 20:23:29 2015
From: tim at styra.com (Tim Hinrichs)
Date: Tue, 15 Sep 2015 20:23:29 +0000
Subject: [openstack-dev] [Congress] PTL candidacy
Message-ID: <CAJjxPABLW+BBLnRqKaikW0ZL2X5Z5Nwj-C8A9GtgXWy1hHmNbA@mail.gmail.com>

Hi all,

I'm writing to announce my candidacy for Congress PTL for the Mitaka
cycle.  I'm excited at the prospect of continuing the development of our
community, our code base, and our integrations with other projects.

This past cycle has been exciting in that we saw several new, consistent
contributors, who actively pushed code, submitted reviews, wrote specs, and
participated in the mid-cycle meet-up.  Additionally, our integration with
the rest of the OpenStack ecosystem improved with our move to running
tempest tests in the gate instead of manually or with our own CI.  The code
base matured as well, as we rounded out some of the features we added near
the end of the Kilo cycle.  We also began making the most significant
architectural change in the project's history, in an effort to meet our
high-availability and API throughput targets.

I'm looking forward to the Mitaka cycle.  My highest priority for the code
base is completing the architectural changes that we began in Liberty.
These changes are undoubtedly the right way forward for production use
cases, but it is equally important that we make Congress easy to use and
understand for both new developers and new end users.  I also plan to
further our integration with the OpenStack ecosystem by better utilizing
the plugin architectures that are available (e.g. devstack and tempest).  I
will also work to begin (or continue) dialogues with other projects that
might benefit from consuming Congress.  Finally, I'm excited to continue
working with our newest project members, helping them toward becoming core
contributors.

See you all in Tokyo!
Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/d1c2c3a5/attachment.html>

From doug at doughellmann.com  Tue Sep 15 20:32:28 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 15 Sep 2015 16:32:28 -0400
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
In-Reply-To: <55F849A2.7080707@mirantis.com>
References: <55F849A2.7080707@mirantis.com>
Message-ID: <1442349091-sup-3924@lrrr.local>

Excerpts from mhorban's message of 2015-09-15 19:38:58 +0300:
> Hi guys,
> 
> I would like to talk about reloading config during reloading service.
> Now we have ability to reload config of service with SIGHUP signal.
> Right now SIGHUP causes just calling conf.reload_config_files().
> As result configuration is updated, but services don't know about it, 
> there is no way to notify them.
> I've created review https://review.openstack.org/#/c/213062/ to allow to 
> execute service's code on reloading config event.
> Possible usage can be https://review.openstack.org/#/c/223668/.
> 
> Any ideas or suggestions?
> 

Rather than building hooks into oslo.config, why don't we build them
into the thing that is catching the signal. That way the app can do lots
of things in response to a signal, and one of them might be reloading
the configuration.

Doug
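[Editor's sketch] Doug's alternative — keeping the reload hooks in the application that catches the signal, rather than in oslo.config — could look like the following. The hook-registry API is invented for illustration; only the `signal` module and the `CONF.reload_config_files()` call already mentioned in this thread are assumed to be real.

```python
import os
import signal

# Application-owned registry of "config was reloaded" callbacks.
_reload_hooks = []

def register_reload_hook(fn):
    _reload_hooks.append(fn)

def _on_sighup(signum, frame):
    # A real service would first call CONF.reload_config_files()
    # (the oslo.config call this thread discusses), then notify the app:
    for fn in _reload_hooks:
        fn()

signal.signal(signal.SIGHUP, _on_sighup)

# Demonstration: the hook fires when the process receives SIGHUP.
events = []
register_reload_hook(lambda: events.append("reloaded"))
os.kill(os.getpid(), signal.SIGHUP)
print(events)  # ['reloaded']
```

The signal handler can then do anything the application needs — re-open log files, rebuild connection pools — with config reload being just one registered hook, which is the separation Doug argues for.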


From mspreitz at us.ibm.com  Tue Sep 15 20:32:22 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Tue, 15 Sep 2015 16:32:22 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
Message-ID: <201509152032.t8FKWTNP016365@d01av01.pok.ibm.com>

Clark Boylan <cboylan at sapwetik.org> wrote on 09/15/2015 04:06:26 PM:

> > I also strongly recommend to users to put vms on a private network and
> > use floating ip's/load balancers. For many reasons. Such as, if you
> > don't, the ip that gets assigned to the vm helps it become a pet. you
> > can't replace the vm and get the same IP. Floating IP's and load
> > balancers can help prevent pets. It also prevents security issues with
> > DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x 
or
> > more the number of instances that are on the private network. Sure its
> > easy to put everything on the public network, but it provides much 
better
> > security if you only put what you must on the public network. Consider
> > the internet. would you want to expose every device in your house
> > directly on the internet? No. you put them in a private network and 
poke
> > holes just for the stuff that does. we should be encouraging good
> > security practices. If we encourage bad ones, then it will bite us 
later
> > when OpenStack gets a reputation for being associated with 
compromises.
> There are a few issues with this. Neutron IPv6 does not support floating
> IPs. So now you have to use two completely different concepts for
> networking on a single dual stacked VM. IPv4 goes on a private network
> and you attach a floating IP. IPv6 is publicly routable. If security and
> DNS and not making pets were really the driving force behind floating
> IPs we would see IPv6 support them too. These aren't the reasons
> floating IPs exist, they exist because we are running out of IPv4
> addresses and NAT is everyone's preferred solution to that problem. But
> that doesn't make it a good default for a cloud; use them if you are
> affected by an IP shortage.
> 
> Nothing prevents you from load balancing against public IPs to address
> the DNS and firewall rule concerns (basically don't make pets). This
> works great and is how OpenStack's git mirrors work.
> 
> It is also easy to firewall public IPs using Neutron via security groups
> (and possibly the firewall service? I have never used it and don't
> know). All this to say I think it is reasonable to use public shared
> networks by default particularly since IPv6 does not have any concept of
> a floating IP in Neutron so using them is just odd unless you really
> really need them and you aren't actually any less secure.

I'm really glad to see the IPv6 front opened.

But I have to say that the analysis of options for securing public 
addresses omits one case that I think is important: using an external (to 
Neutron) "appliance".  In my environment this is more or less required. 
This reinforces the bifurcation of addresses that was mentioned: some VMs 
are private and do not need any service from the external appliance, while 
others have addresses that need the external appliance on the 
public/private path.

In fact, for this reason, I have taken to using two "external" networks 
(from Neutron's point of view) --- one whose addresses are handled by the 
external appliance and one whose addresses are not.  In fact, both ranges
of addresses are on the same VLAN.  This is FYI, as some people have wondered
why these things might be done.

> Not to get too off topic, but I would love it if all the devices in my
> home were publicly routable. I can use my firewall to punch holes for
> them, NAT is not required. Unfortunately I still have issues with IPv6
> at home. Maybe one day this will be a reality :)

Frankly, given the propensity for bugs to be discovered, I am glad that 
nothing in my home is accessible from the outside (aside from the device 
that does firewall, and I worry about that too).  Not that this is really 
germane to what we want to do for internet-accessible 
applications/services.

Regards,
Mike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/9a398390/attachment.html>

From nikeshmahalka at vedams.com  Tue Sep 15 20:40:52 2015
From: nikeshmahalka at vedams.com (Nikesh Kumar Mahalka)
Date: Wed, 16 Sep 2015 02:10:52 +0530
Subject: [openstack-dev] [Cinder] PTL Candidacy
In-Reply-To: <CAOEh+o37kMLxQoDDeO=pQ=-1Cr2F_SYvipNs-O6Vd8T4CWDpcA@mail.gmail.com>
References: <20150914144935.GA12267@gmx.com>
 <CAOEh+o37kMLxQoDDeO=pQ=-1Cr2F_SYvipNs-O6Vd8T4CWDpcA@mail.gmail.com>
Message-ID: <CAKj7oByLiKVhv62DAJZ4weoMgH4cYAWVYQM96Y1FNGROhwm48g@mail.gmail.com>

Thanks Sean, Vote +1.

On Tue, Sep 15, 2015 at 8:36 AM, hao wang <sxmatch1986 at gmail.com> wrote:

> Thanks Sean, Vote +1.
>
> 2015-09-14 22:49 GMT+08:00 Sean McGinnis <sean.mcginnis at gmx.com>:
> > Hello everyone,
> >
> > I'm announcing my candidacy for Cinder PTL for the Mitaka release.
> >
> > The Cinder team has made great progress. We've not only grown the
> > number of supported backend drivers, but we've made significant
> > improvements to the core code and raised the quality of existing
> > and incoming code contributions. While there are still many things
> > that need more polish, we are headed in the right direction and
> > block storage is a strong, stable component to many OpenStack clouds.
> >
> > Mike and John have provided the leadership to get the project where
> > it is today. I would like to keep that momentum going.
> >
> > I've spent over a decade finding new and interesting ways to create
> > and delete volumes. I also work across many different product teams
> > and have had a lot of experience collaborating with groups to find
> > a balance between the work being done to best benefit all involved.
> >
> > I think I can use this experience to foster collaboration both within
> > the Cinder team as well as between Cinder and other related projects
> > that interact with storage services.
> >
> > Some topics I would like to see focused on for the Mitaka release
> > would be:
> >
> >  * Complete work of making the Cinder code Python3 compatible.
> >  * Complete conversion to objects.
> >  * Sort out object inheritance and appropriate use of ABC.
> >  * Continued stabilization of third party CI.
> >  * Make sure there is a good core feature set regardless of backend type.
> >  * Reevaluate our deadlines to make sure core feature work gets enough
> >    time and allows drivers to implement support.
> >
> > While there are some things I think we need to do to move the project
> > forward, I am mostly open to the needs of the community as a whole
> > and making sure that what we are doing is benefiting OpenStack and
> > making it a simpler, easy to use, and ubiquitous platform for the
> > cloud.
> >
> > Thank you for your consideration!
> >
> > Sean McGinnis (smcginnis)
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Best Wishes For You!
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/c286bcbc/attachment.html>

From duncan.thomas at gmail.com  Tue Sep 15 20:46:02 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 15 Sep 2015 23:46:02 +0300
Subject: [openstack-dev] [Cinder] PTL Candidacy
In-Reply-To: <CAKj7oByLiKVhv62DAJZ4weoMgH4cYAWVYQM96Y1FNGROhwm48g@mail.gmail.com>
References: <20150914144935.GA12267@gmx.com>
 <CAOEh+o37kMLxQoDDeO=pQ=-1Cr2F_SYvipNs-O6Vd8T4CWDpcA@mail.gmail.com>
 <CAKj7oByLiKVhv62DAJZ4weoMgH4cYAWVYQM96Y1FNGROhwm48g@mail.gmail.com>
Message-ID: <CAOyZ2aEnjBoQ2JrEm88qkMKjnE8GjwMhuiHP-WAj3A0sAnV4aQ@mail.gmail.com>

Voting is done by formal ballot just before the summit. All Cinder ATCs
will be invited to vote. Voting on the mailing list is just noise.

On 15 September 2015 at 23:40, Nikesh Kumar Mahalka <
nikeshmahalka at vedams.com> wrote:

> Thanks Sean, Vote +1.
>
> On Tue, Sep 15, 2015 at 8:36 AM, hao wang <sxmatch1986 at gmail.com> wrote:
>
>> Thanks Sean, Vote +1.
>>
>> 2015-09-14 22:49 GMT+08:00 Sean McGinnis <sean.mcginnis at gmx.com>:
>> > Hello everyone,
>> >
>> > I'm announcing my candidacy for Cinder PTL for the Mitaka release.
>> >
>> > The Cinder team has made great progress. We've not only grown the
>> > number of supported backend drivers, but we've made significant
>> > improvements to the core code and raised the quality of existing
>> > and incoming code contributions. While there are still many things
>> > that need more polish, we are headed in the right direction and
>> > block storage is a strong, stable component to many OpenStack clouds.
>> >
>> > Mike and John have provided the leadership to get the project where
>> > it is today. I would like to keep that momentum going.
>> >
>> > I've spent over a decade finding new and interesting ways to create
>> > and delete volumes. I also work across many different product teams
>> > and have had a lot of experience collaborating with groups to find
>> > a balance between the work being done to best benefit all involved.
>> >
>> > I think I can use this experience to foster collaboration both within
>> > the Cinder team as well as between Cinder and other related projects
>> > that interact with storage services.
>> >
>> > Some topics I would like to see focused on for the Mitaka release
>> > would be:
>> >
>> >  * Complete work of making the Cinder code Python3 compatible.
>> >  * Complete conversion to objects.
>> >  * Sort out object inheritance and appropriate use of ABC.
>> >  * Continued stabilization of third party CI.
>> >  * Make sure there is a good core feature set regardless of backend
>> type.
>> >  * Reevaluate our deadlines to make sure core feature work gets enough
>> >    time and allows drivers to implement support.
>> >
>> > While there are some things I think we need to do to move the project
>> > forward, I am mostly open to the needs of the community as a whole
>> > and making sure that what we are doing is benefiting OpenStack and
>> > making it a simpler, easy to use, and ubiquitous platform for the
>> > cloud.
>> >
>> > Thank you for your consideration!
>> >
>> > Sean McGinnis (smcginnis)
>> >
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Best Wishes For You!
>>
>


-- 
-- 
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/fc79d7b2/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep 15 20:47:43 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 15 Sep 2015 20:47:43 +0000
Subject: [openstack-dev]
	[nova][neutron][devstack]	New	proposed	'default' network model
In-Reply-To: <201509152006.t8FK6A6H010022@d01av01.pok.ibm.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>,
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>,
 <201509151923.t8FJNBUL024868@d01av01.pok.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01A310314@EX10MBOX03.pnnl.gov>,
 <201509152006.t8FK6A6H010022@d01av01.pok.ibm.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A3103D6@EX10MBOX03.pnnl.gov>

Ah. Instance of Cloud, not Nova Instance. Gotcha.

The biggest currently has about 100 addresses on the public net, and maybe about a quarter of those are allocated to instances. The shared private net has about 200, and around 30 or 40 are used. We have a lot of big VMs on that cloud for HPC-like workloads, so there are only around a hundred fifty instances at present. The majority are huge, taking up a whole node. The rest are small, infrastructure related, and a lot are HA behind load balancers. We're using host aggregates to keep the workloads separate. Of the non-Compute VMs, I'd say there's somewhere around a 2x relationship between VMs without floating IPs and those with.  That number's growing as we make things more HA.

Thanks,
Kevin
________________________________
From: Mohammad Banikazemi [mb at us.ibm.com]
Sent: Tuesday, September 15, 2015 1:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model


Thanks Kevin for your answer. My question was different. You mentioned in your email that you run several clouds. That's why I used the word "instance" in my question to refer to each of those clouds. So let me put the question in a different way: in the biggest cloud you run, how many total floating IPs do you have. Just a ballpark number will be great. 10s, 100s, 1000s, more?

Thanks,

Mohammad

"Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 03:43:45 PM:

> From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 09/15/2015 03:49 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> I'm not quite sure how to read your question. I think it can be
> taken multiple ways. I'll guess at what you meant though. If I
> interpreted wrong, please ask again.
>
> For the instances that have floating ip's, usually either 1 or 2.
> One of our clouds has basically a public
> network directly on the internet, and a shared private network that
> crosses tenants but is not internet facing. We can place vm's on
> either network easily by just attaching floating ip's. The private
> shared network has more floating ip's assigned than the internet one usually.
>
> As LBaaS is maturing, we're using it more and more, putting the
> floating ips on the LB instead of the instances, and putting a pool
> of instances behind it. So our instance counts are growing faster
> than our usage of floating IP's.
>
> Thanks,
> Kevin
>
> From: Mohammad Banikazemi [mb at us.ibm.com]
> Sent: Tuesday, September 15, 2015 12:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model

> "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote on 09/15/2015 02:00:03 PM:
>
> > From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev at lists.openstack.org>
> > Date: 09/15/2015 02:02 PM
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > We run several clouds where there are multiple external networks.
> > the "just run it on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network
> > and use floating ip's/load balancers.
>
>
> Just curious to know how many floating IPs you have in each instance
> of your OpenStack cloud.
>
> Best,
>
> Mohammad
>
>
>
>
> For many reasons. Such as, if
> > you don't, the ip that gets assigned to the vm helps it become a
> > pet. you can't replace the vm and get the same IP. Floating IP's and
> > load balancers can help prevent pets. It also prevents security
> > issues with DNS and IP's. Also, for every floating ip/lb I have, I
> > usually have 3x or more the number of instances that are on the
> > private network. Sure its easy to put everything on the public
> > network, but it provides much better security if you only put what
> > you must on the public network. Consider the internet. would you
> > want to expose every device in your house directly on the internet?
> > No. you put them in a private network and poke holes just for the
> > stuff that does. we should be encouraging good security practices.
> > If we encourage bad ones, then it will bite us later when OpenStack
> > gets a reputation for being associated with compromises.
> >
> > I do consider making things as simple as possible very important.
> > but that is, make them as simple as possible, but no simpler.
> > There's danger here of making things too simple.
> >
> > Thanks,
> > Kevin
> > ________________________________________
> > From: Doug Hellmann [doug at doughellmann.com]
> > Sent: Tuesday, September 15, 2015 10:02 AM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > > On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com> wrote:
> > >
> > > > Hey all!
> > > >
> > > > If any of you have ever gotten drunk with me, you'll know I
> hate floating
> > > > IPs more than I hate being stabbed in the face with a very angry fish.
> > > >
> > > > However, that doesn't really matter. What should matter is "what is the
> > > > most sane thing we can do for our users"
> > > >
> > > > As you might have seen in the glance thread, I have a bunch of OpenStack
> > > > public cloud accounts. Since I wrote that email this morning, I've added
> > > > more - so we're up to 13.
> > > >
> > > > auro
> > > > citycloud
> > > > datacentred
> > > > dreamhost
> > > > elastx
> > > > entercloudsuite
> > > > hp
> > > > ovh
> > > > rackspace
> > > > runabove
> > > > ultimum
> > > > unitedstack
> > > > vexxhost
> > > >
> > > > Of those public clouds, 5 of them require you to use a
> floating IP to get
> > > > an outbound address, the others directly attach you to the
> public network.
> > > > Most of those 8 allow you to create a private network, to boot vms on the
> > > > private network, and ALSO to create a router with a gateway and put
> > > > floating IPs on your private ip'd machines if you choose.
> > > >
> > > > Which brings me to the suggestion I'd like to make.
> > > >
> > > > Instead of having our default in devstack and our default when we talk
> > > > about things be "you boot a VM and you put a floating IP on it" - which
> > > > solves one of the two usage models - how about:
> > > >
> > > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > > don't care what it's called  ext-net, public, whatever. the
> "shared" part
> > > > is the key, that's the part that lets someone boot a vm on it directly.
> > > >
> > > > - Each person can then make a private network, router, gateway, etc. and
> > > > get floating-ips from the same public network if they prefer that model.
> > > >
> > > > Are there any good reasons to not push to get all of the public networks
> > > > marked as "shared"?
> > > >
> > >
> > > The reason is simple: not every cloud deployment is the same: private is
> > > different from public and even within the same cloud model, the network
> > > topology may vary greatly.
> > >
> > > Perhaps Neutron fails in the sense that it provides you with too much
> > > choice, and perhaps we have to standardize on the type of networking
> > > profile expected by a user of OpenStack public clouds before
> making changes
> > > that would fragment this landscape even further.
> > >
> > > If you are advocating for more flexibility without limiting the existing
> > > one, we're only making the problem worse.
> >
> > As with the Glance image upload API discussion, this is an example
> > of an extremely common use case that is either complex for the end
> > user or for which they have to know something about the deployment
> > in order to do it at all. The usability of an OpenStack cloud running
> > neutron would be enhanced greatly if there was a simple, clear, way
> > for the user to get a new VM with a public IP on any cloud without
> > multiple steps on their part. There are a lot of ways to implement
> > that "under the hood" (what you call "networking profile" above)
> > but the users don't care about "under the hood" so we should provide
> > a way for them to ignore it. That's *not* the same as saying we
> > should only support one profile. Think about the API from the use
> > case perspective, and build it so if there are different deployment
> > configurations available, the right action can be taken based on
> > the deployment choices made without the user providing any hints.
> >
> > Doug
> >
> > >
> > > >
> > > > OH - well, one thing - that's that once there are two networks in an
> > > > account you have to specify which one. This is really painful in nova
> > > > client. Say, for instance, you have a public network called "public"
> > > > and a private network called "private" ...
> > > >
> > > > You can't just say "nova boot --network=public" - nope, you need to say
> > > > "nova boot --nics net-id=$uuid_of_my_public_network"
> > > >
> > > > So I'd suggest 2 more things;
> > > >
> > > > a) an update to python-novaclient to allow a named network to be
> > > > passed to satisfy the "you have more than one network" case - the
> > > > --nics argument is still useful for more complex things
> > > >
> > > > b) ability to say "vms in my cloud should default to being booted on the
> > > > public network" or "vms in my cloud should default to being booted on a
> > > > network owned by the user"
> > > >
> > > > Thoughts?
> > > >
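The novaclient gap described in suggestion "a" above can be illustrated as follows; the `--network` flag in the second command is the hypothetical addition, not an existing option:

```shell
# Today: booting with more than one network in the account requires the UUID
NET_ID=$(neutron net-show public -f value -c id)
nova boot --image cirros --flavor m1.tiny --nic net-id=$NET_ID myvm

# Proposed: let novaclient resolve the network by name
nova boot --image cirros --flavor m1.tiny --network public myvm
```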
> > >
> > > As I implied earlier, I am not sure how healthy this choice is. As a user
> > > of multiple clouds, I may end up having a different user experience based
> > > on which cloud I am using... I thought you were partially complaining
> > > about lack of consistency?
> > >
> > > >
> > > > Monty
> > > >
> > > >
> __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?
> subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/fa61c22e/attachment.html>

From nikeshmahalka at vedams.com  Tue Sep 15 20:48:13 2015
From: nikeshmahalka at vedams.com (Nikesh Kumar Mahalka)
Date: Wed, 16 Sep 2015 02:18:13 +0530
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CALsyUnO4YLgZWaZnvU-FSkvLONyraDj2b_gscmrfFiARViC7gA@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
 <20150914170202.GA13271@gmx.com>
 <CAPWkaSXoeXgSw3VSWQt8_f8_Mo7MvUx50tdiqSqDmjBW6x8fWg@mail.gmail.com>
 <CAOEh+o2aQKtuCMZvy1eMyv7jw1qFy2wcLXexNbpdktF91_+wmQ@mail.gmail.com>
 <COL129-W3096831F489F0C475D4C21A75C0@phx.gbl>
 <CALsyUnO4YLgZWaZnvU-FSkvLONyraDj2b_gscmrfFiARViC7gA@mail.gmail.com>
Message-ID: <CAKj7oBwU93DxmDc5TmTPe7fAvZaEz-CC73fQR71b6Fie0jrM+w@mail.gmail.com>

Thanks Mike,
It was a really good experience working with you in Kilo and Liberty.



Regards
Nikesh

On Tue, Sep 15, 2015 at 1:21 PM, Silvan Kaiser <silvan at quobyte.com> wrote:

> Thanks Mike!
> That was really demanding work!
>
> 2015-09-15 9:27 GMT+02:00 ?? <chenyingkof at outlook.com>:
>
>> Thanks Mike. Thank you for doing a great job.
>>
>>
>> > From: sxmatch1986 at gmail.com
>> > Date: Tue, 15 Sep 2015 10:05:22 +0800
>> > To: openstack-dev at lists.openstack.org
>> > Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
>>
>> >
>> > Thanks Mike! Your help was very important in getting me started in
>> > Cinder, and we did a lot of work to be proud of under your leadership.
>> >
>> > 2015-09-15 6:36 GMT+08:00 John Griffith <john.griffith8 at gmail.com>:
>> > >
>> > >
>> > > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <
>> sean.mcginnis at gmx.com>
>> > > wrote:
>> > >>
>> > >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
>> > >> > Hello all,
>> > >> >
>> > >> > I will not be running for Cinder PTL this next cycle. Each cycle I
>> ran
>> > >> > was for a reason [1][2], and the Cinder team should feel proud of
>> our
>> > >> > accomplishments:
>> > >>
>> > >> Thanks for a couple of awesome cycles Mike!
>> > >>
>> > >>
>> __________________________________________________________________________
>> > >> OpenStack Development Mailing List (not for usage questions)
>> > >> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > > You did a fantastic job Mike, thank you very much for the hard work
>> and
>> > > dedication.
>> > >
>> > >
>> > >
>> __________________________________________________________________________
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> >
>> >
>> > --
>> > Best Wishes For You!
>> >
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com<http://www.quobyte.com/>
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
>
> --
> *Quobyte* GmbH
> Hardenbergplatz 2 - 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/1ba3660d/attachment.html>

From xing.yang at emc.com  Tue Sep 15 20:56:19 2015
From: xing.yang at emc.com (yang, xing)
Date: Tue, 15 Sep 2015 20:56:19 +0000
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55F857A2.3020603@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
Message-ID: <D21DF6C7.589BB%xing.yang@emc.com>

Hi Eric,

Regarding the default max_over_subscription_ratio, I initially set the
default to 1 while working on oversubscription, and changed it to 2 after
getting review comments.  After it was merged, I got feedback that 2 is
too small and 20 is more appropriate, so I changed it to 20.  So it looks
like we can't find a default value that makes everyone happy.

If we can decide what the best default value for LVM is, we can change the
default max_over_subscription_ratio, but we should also allow other
drivers to specify a different config option if a different default value
is more appropriate for them.
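As an illustrative sketch of where this knob lives (the backend section name below is hypothetical), the ratio is a cinder.conf option and can already be overridden per deployment:

```ini
[DEFAULT]
# Global default discussed above; currently ships as 20
max_over_subscription_ratio = 20

[lvm-thin]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# A deployer can choose a more conservative value for a small volume group
max_over_subscription_ratio = 2
```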

Thanks,
Xing


On 9/15/15, 1:38 PM, "Eric Harney" <eharney at redhat.com> wrote:

>On 09/15/2015 01:00 PM, Chris Friesen wrote:
>> I'm currently trying to work around an issue where activating LVM
>> snapshots created through cinder takes potentially a long time.
>> (Linearly related to the amount of data that differs between the
>> original volume and the snapshot.)  On one system I tested it took about
>> one minute per 25GB of data, so the worst-case boot delay can become
>> significant.
>> 
>> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
>> not intended to be kept around indefinitely, they were supposed to be
>> used only until the backup was taken and then deleted.  He recommends
>> using thin provisioning for long-lived snapshots due to differences in
>> how the metadata is maintained.  (He also says he's heard reports of
>> volume activation taking half an hour, which is clearly crazy when
>> instances are waiting to access their volumes.)
>> 
>> Given the above, is there any reason why we couldn't make thin
>> provisioning the default?
>> 
>
>
>My intention is to move toward thin-provisioned LVM as the default -- it
>is definitely better suited to our use of LVM.  Previously this was less
>easy, since some older Ubuntu platforms didn't support it, but in
>Liberty we added the ability to specify lvm_type = "auto" [1] to use
>thin if it is supported on the platform.
>
>The other issue preventing using thin by default is that we default the
>max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
>the reference implementation, since it means that people who deploy
>Cinder LVM on smaller storage configurations can easily fill up their
>volume group and have things grind to a halt.  I think we want something
>closer to the semantics of thick LVM for the default case.
>
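The lvm_type = "auto" behaviour Eric refers to is a cinder.conf setting; a minimal sketch of the configuration under discussion (backend section name hypothetical):

```ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# "auto" (added in Liberty, review 104653) selects thin provisioning
# when the platform supports it, and falls back to thick otherwise
lvm_type = auto
```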
>We haven't thought through a reasonable migration strategy for how to
>handle that.  I'm not sure we can change the default oversubscription
>ratio without breaking deployments using other drivers.  (Maybe I'm
>wrong about this?)
>
>If we sort out that issue, I don't see any reason we can't switch over
>in Mitaka.
>
>[1] https://review.openstack.org/#/c/104653/
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From rakhmerov at mirantis.com  Tue Sep 15 20:57:56 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Tue, 15 Sep 2015 23:57:56 +0300
Subject: [openstack-dev] [Mistral] Mistral PTL Candidacy
Message-ID: <FDA863E6-E62D-4ECA-83D3-EED72C7ACFBB@mirantis.com>

Hi,

My name is Renat Akhmerov. I decided to run for Mistral PTL for Mitaka release cycle.

This is the first time I'm doing it officially, after Mistral was accepted into the Big Tent.
In fact, I've been driving the project for a little less than 2 years now, and I've put a lot of
my energy into this initiative by designing its architecture, coding (including initial PoC versions),
reviewing nearly every single patch coming in and presenting Mistral at every conference that
I could, including OpenStack summits. Of course, I wasn't doing it alone, and I can't find enough
words to express how thankful I am to folks from Mirantis, StackStorm, Huawei, Alcatel-Lucent,
HP, Ericsson and other companies. You have all done great work and I'm proud to be part of
such a great team.

Although a lot has been done, and we certainly have achievements in the form of users who
run Mistral in production, there's a lot more ahead. Below is what I think we need to focus on
during the Mitaka cycle.

HA and maturity

Making Mistral a truly stable and mature technology capable of running in HA mode. I have to admit
that so far we haven't been paying enough attention to high-load testing and tuning, and I believe
it's high time we started doing it. Some of the issues are known to us and we know how we
should fix them; some have yet to be discovered. In my opinion, what we're missing now is a
comprehensive understanding of, believe it or not, how Mistral works :) This may sound strange,
but what I really mean is that we need to know in very fine detail how every Mistral transaction works
in terms of potential race conditions, isolation level, concurrency model etc. In my strong opinion,
this is a prerequisite for everything else. Having said that, I am going to bring more expertise
onto the project to fill this gap: either by attracting the right people or by planning more
time for the current team members to work on this.

Apart from that, I find it very important to stop adding so many new features to the workflow engine
and do a proper refactoring of it. In my strong opinion, the Mistral engine has started suffering from
having more and more functionality squeezed into it. That's generally normal, but I believe we need to
simplify the code base by cleaning it up wisely, while at the same time improving test coverage,
accounting for all kinds of corner cases and negative scenarios.

Use cases

This is probably the trickiest part of this project, and I believe I personally should have done a
much better job of clearly explaining Mistral's value to the industry. I plan to change the situation
drastically by providing battle-proven scenarios where it's hard or nearly impossible to avoid using
Mistral. Recording screencasts and writing cookbooks is also part of the plan.

UI

Thanks to the engineers from Huawei and Alcatel-Lucent, who did a good job in Liberty of moving the
Mistral UI to a much better state. Most of the basic CRUD functionality is there and this work keeps going.
However, I still see many ways to advance the Mistral UI and make it really remarkable.
For example, one specific thing I'd really like to work on is workflow graph visualisation
(for both editing and monitoring running workflows).

I find this particularly important because a good UI would help us build an even larger
community around the project, simply because it would make it easier to convey what the
project's goal is.

What else

Other things that I'd like to pay attention to:
- Solving the guest VM access problem
- A smarter task scheduling mechanism that accounts for workflow priorities (FIFO, but at the workflow level)
- A new REST API (not to be confused with the DSL, or workflow language) on which we've almost agreed within the team
- Significantly improving the CLI so that it becomes truly convenient and fun to use

Ultimately, my goal is to build a really useful and beautiful technology.

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/32fd17d6/attachment.html>

From aschultz at mirantis.com  Tue Sep 15 21:03:02 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Tue, 15 Sep 2015 16:03:02 -0500
Subject: [openstack-dev] [fuel][fuel-library] modules managed with librarian
	round 2
Message-ID: <CABzFt8OVqsF+dRY4ZAa1KK+j2GFqCbTnnqC0KnyYyNjMZGGkhg@mail.gmail.com>

Hello!

So after our first round of librarian changes for 7.0, it is time to start
switching to upstream for more changes.  We've had a few updates during the
fuel meetings over the last month[0][1].  I have begun to prepare
additional reviews to move modules.

The current modules available for migration are:

memcached - https://review.openstack.org/#/c/217383/
sysctl - https://review.openstack.org/#/c/221945/
staging - https://review.openstack.org/#/c/222350/
vcsrepo - https://review.openstack.org/#/c/222355/
postgresql - https://review.openstack.org/#/c/222368/


Just as an FYI: in addition to these modules, I have started work on the
rsyslog module, which was a very old version of the module with only a few
minor customizations. Since we leverage the rsyslog module within our
openstack composition layer module, I have also taken some time to put
together a patch[2] with some unit tests for the openstack module in
fuel-library, since what was there has been disabled[3] for some time and
doesn't function.  The patch[4] moving to an upstream version of rsyslog is
out there as well if anyone is interested in taking a look. I'm going to do
some additional testing around these two patches to ensure we don't break
any syslog functionality by switching, before they are merged.

Thanks,
-Alex

[0]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-08-27-16.00.html
[1]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-09-03-16.00.html
[2] https://review.openstack.org/#/c/223395/
[3]
https://github.com/stackforge/fuel-library/blob/master/utils/jenkins/modules.disable_rspec#L29
[4] https://review.openstack.org/#/c/222758/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/8c5b5273/attachment.html>

From mspreitz at us.ibm.com  Tue Sep 15 21:04:06 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Tue, 15 Sep 2015 17:04:06 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <CAK+RQeZ29gkuWkvj5m5pWXK+aEf=tMpKKM7R_hkNrvgDaiHE_Q@mail.gmail.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <CAK+RQeZ29gkuWkvj5m5pWXK+aEf=tMpKKM7R_hkNrvgDaiHE_Q@mail.gmail.com>
Message-ID: <201509152104.t8FL4Y80025192@d03av05.boulder.ibm.com>

"Armando M." <armamig at gmail.com> wrote on 09/15/2015 03:50:24 PM:

> On 15 September 2015 at 10:02, Doug Hellmann <doug at doughellmann.com> 
wrote:
> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
...
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
> neutron would be enhanced greatly if there was a simple, clear, way
> for the user to get a new VM with a public IP on any cloud without
> multiple steps on their part. 

<<<<<<end of excerpt from Armando>>>>>>

...
> 
> So this boils down to: in light of the possible ways of providing VM
> connectivity, how can we make a choice on the user's behalf? Can we 
> assume that he/she always want a publicly facing VM connected to 
> Internet? The answer is 'no'.

While it may be true that in some deployments there is no good way for the 
code to choose, I think that is not the end of the story here.  The 
motivation to do this is that in *some* deployments there *is* a good way 
for the code to figure out what to do.

Regards,
Mike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/e63aab96/attachment.html>

From openstack at lanabrindley.com  Tue Sep 15 21:07:13 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Wed, 16 Sep 2015 07:07:13 +1000
Subject: [openstack-dev] [OpenStack-docs] [docs][ptl] Docs PTL Candidacy
In-Reply-To: <55F7ED98.2010406@berendt.io>
References: <55F63197.9090606@lanabrindley.com> <55F7ED98.2010406@berendt.io>
Message-ID: <55F88881.2070304@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 15/09/15 20:06, Christian Berendt wrote:
> On 09/14/2015 04:31 AM, Lana Brindley wrote:
>> I'd love to have your support for the PTL role for Mitaka, and
>> I'm looking forward to continuing to grow the documentation
>> team.
> 
> You have my support. Thanks for your great work during the current
> cycle.
> 
> Christian.
> 

It's so lovely to wake up and see this. Thank you, Christian. And
thank you for all that you do, too :)

L

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV+IiBAAoJELppzVb4+KUyANIH/0nh3R5HskdjFsFpdxT6pXI5
PuQf0t8YiMXYUxNaLXQL4o11BVXlaHdI3AWOSq/YswIOSB5vrOUT0o17j1+RrJPx
MjOiuaDT7VOBjNAXv3q7qbFM2qBt+o9n2iVX5rosgTLEPFRj/hGsVFIc8xjhJnV+
PCSOs/ZvkOCtSJ2+pYDV9pd7eWJ9Lx7ts3sDapovZeSn4vEooLdrE9q5QxLUHLkb
KnzGe+oLgvlgKZDSCtdogCNKyogJzTVokzgfwm27oZXqq9o9pRHxsw4vAI/6aRWc
cM8hxihlBbNV+3/LhbSDAAEGhhA6TxzSKP3dLTnJ71F4kPkpJ0CDl7QYHlUkppg=
=hRD2
-----END PGP SIGNATURE-----


From Kevin.Fox at pnnl.gov  Tue Sep 15 21:09:14 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 15 Sep 2015 21:09:14 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>,
 <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>

Unfortunately, I haven't had enough chance to play with ipv6 yet.

I still think ipv6 with floating ip's probably makes sense though.

In ipv4, the floating ip's solve one particular problem:

End Users want to be able to consume a service provided by a VM. They have two options:
1. contact the ip directly
2. use DNS.

DNS is preferable, since humans don't remember IPs very well, and IPv6 addresses are even harder to remember than v4 ones.

DNS has its own issues; mostly, it's usually not very quick to get a DNS entry updated.  At our site (and I'm sure others'), I'm afraid to say it can take as long as 24 hours for updates to happen. Even if that were fixed, caching can bite you too.

So, when you register a DNS record, the IP it points at becomes a piece of state. If that IP can't be separated from a VM, that's a bad thing. If you can move it from VM to VM, your VM is not a pet. But if your IP is allocated to the VM specifically, as non-floating IPs are, you run into problems when your VM dies and you have to replace it. If you're unlucky, it dies, someone else gets allocated the fixed IP, and now someone else's server is sitting behind your DNS entry! So you are very unlikely to want to give up your VM, turning it into a pet.

I'd expect v6 usage to have the same issues.

The floating IP is great in that it's an abstraction of a contactable address, separate from any VM it may currently be bound to.

You allocate a floating IP. You can then register it with DNS, and another tenant cannot accidentally be assigned it. You can move it from VM to VM until you're done with it. You can unregister it from DNS, and then it is safe to return for others to use.
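The lifecycle described above maps onto the CLI of the day roughly as follows (names and addresses are hypothetical placeholders):

```shell
# Allocate a floating IP from the external network (suppose it returns 203.0.113.10)
neutron floatingip-create ext-net

# ... register 203.0.113.10 in DNS out of band ...

# Bind it to the current VM; later move it to a replacement VM
nova floating-ip-associate vm-old 203.0.113.10
nova floating-ip-disassociate vm-old 203.0.113.10
nova floating-ip-associate vm-new 203.0.113.10

# Once the DNS record is retired, return the address to the pool
neutron floatingip-delete <floatingip-uuid>
```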

To me, the NAT aspect of it is a secondary thing. Its primary importance is in enabling things to be more cattle-ish and helping with DNS security.

Thanks,
Kevin






________________________________________
From: Clark Boylan [cboylan at sapwetik.org]
Sent: Tuesday, September 15, 2015 1:06 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
Maybe this would be better expressed as "just run it on an existing
public network" then?
>
> I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you
> don't, the ip that gets assigned to the vm helps it become a pet. you
> can't replace the vm and get the same IP. Floating IP's and load
> balancers can help prevent pets. It also prevents security issues with
> DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> more the number of instances that are on the private network. Sure its
> easy to put everything on the public network, but it provides much better
> security if you only put what you must on the public network. Consider
> the internet. would you want to expose every device in your house
> directly on the internet? No. you put them in a private network and poke
> holes just for the stuff that does. we should be encouraging good
> security practices. If we encourage bad ones, then it will bite us later
> when OpenStack gets a reputation for being associated with compromises.
There are a few issues with this. Neutron IPv6 does not support floating
IPs. So now you have to use two completely different concepts for
networking on a single dual stacked VM. IPv4 goes on a private network
and you attach a floating IP. IPv6 is publicly routable. If security and
DNS and not making pets were really the driving force behind floating
IPs we would see IPv6 support them too. These aren't the reasons
floating IPs exist, they exist because we are running out of IPv4
addresses and NAT is everyone's preferred solution to that problem. But
that doesn't make it a good default for a cloud; use them if you are
affected by an IP shortage.

Nothing prevents you from load balancing against public IPs to address
the DNS and firewall rule concerns (basically don't make pets). This
works great and is how OpenStack's git mirrors work.

It is also easy to firewall public IPs using Neutron via security groups
(and possibly the firewall service? I have never used it and don't
know). All this to say I think it is reasonable to use public shared
networks by default particularly since IPv6 does not have any concept of
a floating IP in Neutron so using them is just odd unless you really
really need them and you aren't actually any less secure.
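The security-group approach described above can be sketched with the standard CLI (group, network and VM names are hypothetical):

```shell
# Allow only SSH and HTTPS in to a publicly routable VM
neutron security-group-create web-public
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 web-public
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 443 --port-range-max 443 web-public

# Boot directly on the shared public network with that group applied
nova boot --image cirros --flavor m1.tiny \
    --nic net-id=$PUBLIC_NET_ID --security-groups web-public webvm
```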

Not to get too off topic, but I would love it if all the devices in my
home were publicly routable. I can use my firewall to punch holes for
them, NAT is not required. Unfortunately I still have issues with IPv6
at home. Maybe one day this will be a reality :)
>
> I do consider making things as simple as possible very important. but
> that is, make them as simple as possible, but no simpler. There's danger
> here of making things too simple.
>
> Thanks,
> Kevin
>

Clark

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From amuller at redhat.com  Tue Sep 15 21:16:36 2015
From: amuller at redhat.com (Assaf Muller)
Date: Tue, 15 Sep 2015 17:16:36 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
 <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>
Message-ID: <CABARBAYwMRra4h0an184PzRasr8=8g0rdxxzZ1Fvf5q7x=YymA@mail.gmail.com>

On Tue, Sep 15, 2015 at 5:09 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> Unfortunately, I haven't had enough chance to play with ipv6 yet.
>
> I still think ipv6 with floating ip's probably makes sense though.
>
> In ipv4, the floating ip's solve one particular problem:
>
> End Users want to be able to consume a service provided by a VM. They have
> two options:
> 1. contact the ip directly
> 2. use DNS.
>
> DNS is preferable, since humans don't remember ip's very well. IPv6 is
> much harder to remember then v4 too.
>
> DNS has its own issues, mostly, its usually not very quick to get a DNS
> entry updated.  At our site (and I'm sure, others), I'm afraid to say in
> some cases it takes as long as 24 hours to get updates to happen. Even if
> that was fixed, caching can bite you too.
>

I'm curious if you tried out Designate / DNSaaS.


>
> So, when you register a DNS record, the ip that its pointing at, kind of
> becomes a set of state. If it can't be separated from a VM its a bad thing.
> You can move it from VM to VM and your VM is not a pet. But, if your IP is
> allocated to the VM specifically, as non Floating IP's are, you run into
> problems if your VM dies and you have to replace it. If your unlucky, it
> dies, and someone else gets allocated the fixed ip, and now someone else's
> server is sitting on your DNS entry! So you are very unlikely to want to
> give up your VM, turning it into a pet.
>
> I'd expect v6 usage to have the same issues.
>
> The floating ip is great in that its an abstraction of a contactable
> address, separate from any VM it may currently be bound to.
>
> You allocate a floating ip. You can then register it with DNS, and another
> tenant can not get accidentally assigned it. You can move it from vm to vm
> until your done with it. You can Unregister it from DNS, and then it is
> safe to return to others to use.
>
> To me, the NAT aspect of it is a secondary thing. Its primary importance
> is in enabling things to be more cattleish and helping with dns security.
>
> Thanks,
> Kevin
>
>
>
>
>
>
> ________________________________________
> From: Clark Boylan [cboylan at sapwetik.org]
> Sent: Tuesday, September 15, 2015 1:06 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the
> > "just run it in on THE public network" doesn't work. :/
> Maybe this would be better expressed as "just run it on an existing
> public network" then?
> >
> > I also strongly recommend to users to put vms on a private network and
> > use floating ip's/load balancers. For many reasons. Such as, if you
> > don't, the ip that gets assigned to the vm helps it become a pet. you
> > can't replace the vm and get the same IP. Floating IP's and load
> > balancers can help prevent pets. It also prevents security issues with
> > DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> > more the number of instances that are on the private network. Sure its
> > easy to put everything on the public network, but it provides much better
> > security if you only put what you must on the public network. Consider
> > the internet. would you want to expose every device in your house
> > directly on the internet? No. you put them in a private network and poke
> > holes just for the stuff that does. we should be encouraging good
> > security practices. If we encourage bad ones, then it will bite us later
> > when OpenStack gets a reputation for being associated with compromises.
> There are a few issues with this. Neutron IPv6 does not support floating
> IPs. So now you have to use two completely different concepts for
> networking on a single dual stacked VM. IPv4 goes on a private network
> and you attach a floating IP. IPv6 is publicly routable. If security and
> DNS and not making pets were really the driving force behind floating
> IPs we would see IPv6 support them too. These aren't the reasons
> floating IPs exist, they exist because we are running out of IPv4
> addresses and NAT is everyone's preferred solution to that problem. But
> that doesn't make it a good default for a cloud; use them if you are
> affected by an IP shortage.
>
> Nothing prevents you from load balancing against public IPs to address
> the DNS and firewall rule concerns (basically don't make pets). This
> works great and is how OpenStack's git mirrors work.
>
> It is also easy to firewall public IPs using Neutron via security groups
> (and possibly the firewall service? I have never used it and don't
> know). All this to say I think it is reasonable to use public shared
> networks by default particularly since IPv6 does not have any concept of
> a floating IP in Neutron so using them is just odd unless you really
> really need them and you aren't actually any less secure.
>
> Not to get too off topic, but I would love it if all the devices in my
> home were publicly routable. I can use my firewall to punch holes for
> them, NAT is not required. Unfortunately I still have issues with IPv6
> at home. Maybe one day this will be a reality :)
> >
> > I do consider making things as simple as possible very important. but
> > that is, make them as simple as possible, but no simpler. There's danger
> > here of making things too simple.
> >
> > Thanks,
> > Kevin
> >
>
> Clark
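
Clark's point about filtering public IPs with security groups -- default-deny
ingress plus explicit allow rules -- can be sketched with a toy evaluator.
The rule shape below is illustrative only, not the real Neutron API:

```python
# Toy model of security-group evaluation: ingress is denied unless some
# rule explicitly opens the (protocol, port) being tested.

def ingress_allowed(rules, protocol, port):
    """True if any rule permits inbound traffic on (protocol, port)."""
    for rule in rules:
        if rule["protocol"] != protocol:
            continue
        lo, hi = rule["port_range"]
        if lo <= port <= hi:
            return True
    return False  # nothing matched: default deny


# A "web server" group that only opens HTTP and HTTPS:
web_rules = [
    {"protocol": "tcp", "port_range": (80, 80)},
    {"protocol": "tcp", "port_range": (443, 443)},
]
```

With rules like these a VM on a public network exposes only what was
explicitly opened, which is the "packet filtering" alternative to NAT
being argued for here.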
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From lyz at princessleia.com  Tue Sep 15 21:30:35 2015
From: lyz at princessleia.com (Elizabeth K. Joseph)
Date: Tue, 15 Sep 2015 14:30:35 -0700
Subject: [openstack-dev] [all] New Gerrit translations change proposals from
	Zanata
Message-ID: <CABesOu2cAewe-AiEK3odvqaxTJoDpjn2e=tVdOouCZJ3C_14jA@mail.gmail.com>

Hi everyone,

Daisy announced to the i18n team last week[0] that we've moved to
using Zanata for translations for the Liberty cycle. Everyone should
now be using Zanata at https://translate.openstack.org/ for
translations.

We're just now finishing up the infrastructure side of things with the
switch from having Transifex submit the translations proposals to
Gerrit to having Zanata do it.

The Gerrit topic for these change proposals for all projects with
translations has been changed from "transifex/translations" to
"zanata/translations". After a test with oslo.versionedobjects last
week[1], we're moving forward this Wednesday morning UTC time to have
the jobs run so that all translations changes proposed to Gerrit are
made by Zanata.

Please let us know if you run into any problems or concerns with the
changes being proposed to your project. The infra and i18n teams will
have a look and provide help as needed.

Thanks everyone, these are exciting times for the i18n team!

[0] http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001331.html

[1] https://review.openstack.org/#/c/222712/ note that the topic: had
not yet been updated at the time of this change, but it has now

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2


From mgagne at internap.com  Tue Sep 15 22:11:03 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 15 Sep 2015 18:11:03 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
Message-ID: <55F89777.3000505@internap.com>

On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
> 
> I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
> 

Sorry but I feel this kind of reply explains why people are still using
nova-network over Neutron. People want simplicity and they are denied it
at every corner because (I feel) Neutron thinks it knows better.

The original statement by Monty Taylor is clear to me:

I wish to boot an instance that is on a public network and reachable
without madness.

As of today, you can't unless you implement a deployer/provider specific
solution (to scale said network). Just take a look at what actual public
cloud providers are doing:

- Rackspace has a "magic" public network
- GoDaddy has custom code in their nova-scheduler (AFAIK)
- iWeb (which I work for) has custom code in front of nova-api.

We are all writing our own custom code to implement what (we feel)
Neutron should be providing right off the bat.

By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
specs [3] and the Large Deployment Team meeting notes [4], you will see
that what is suggested here (a scalable public shared network) is an
objective we wish but are struggling hard to achieve.

People keep asking for simplicity and Neutron looks to not be able to
offer it due to philosophical conflicts between Neutron developers and
actual public users/operators. We can't force our users to adhere to ONE
networking philosophy: use NAT, floating IPs, firewall, routers, etc.
They just don't buy it. Period. (see monty's list of public providers
attaching VMs to public network)

If we can accept and agree that not everyone wishes to adhere to the
"full stack of networking good practices" (TBH, I don't know how to call
this thing), it will be a good start. Otherwise I feel we won't be able
to achieve anything.

What Monty is explaining and suggesting is something we (my team) have
been struggling with for *years* and just didn't have bandwidth (we are
operators, not developers) or public charisma to change.

I'm glad Monty brought up this subject so we can officially address it.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
[2]
http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
[3]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
[4]
http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html

-- 
Mathieu


From travis.tripp at hpe.com  Tue Sep 15 22:34:18 2015
From: travis.tripp at hpe.com (Tripp, Travis S)
Date: Tue, 15 Sep 2015 22:34:18 +0000
Subject: [openstack-dev]  [searchlight] PTL Candidacy
Message-ID: <F8DAC706-C47B-4C54-B286-FE37BBF1C075@hpe.com>

Hello friends,

We are now going into our first official PTL election for Searchlight and I
would be honored if you'll allow me to continue serving in the PTL role.

Searchlight became a new project in Liberty after being split out from its
initial experimental days in Glance. Since then we've been moving
at a relatively fast pace towards fulfilling our mission: To provide advanced
and scalable indexing and search across multi-tenant cloud resources.

It would be a huge understatement to say that accomplishing this mission is
important to me. I believe that search is critical to enabling a better
experience for OpenStack users and operators alike.

In Liberty, we have made tremendous progress thanks to a small, but
passionate team who believes in the vision of Searchlight. At the Mitaka
summit we'll be able to demonstrate a Horizon panel plugin able to search
across Glance, Nova, and Designate.

I believe that the PTL role is a commitment to the community to act as a
steward for the project. As PTL for an early stage project like searchlight,
I believe that some of these responsibilities are to:

* Evangelize the project across the OpenStack ecosystem
* Provide technical guidance and contribution
* Facilitate collaboration across the community
* Enable all developers to contribute effectively
* Grow a community of potential future project PTLs
* And perhaps most importantly, enable the project to release

I believe software must be developed with a clear demonstration of its
value. With Searchlight, I do believe that a UI is one of the most effective
ways to bring the value of Searchlight to OpenStack users. This is why
from day 1, I have been actively evangelizing Searchlight with Horizon. Once
users are able to actually take advantage of Searchlight, I believe
Searchlight will become a must have component of any OpenStack deployment.

From a feature standpoint, I believe all of the following are great
candidate goals for us to pursue in Mitaka and I look forward to working
with the community as we establish priorities.

* (Obviously) Extend search indexing to as many projects as possible
** With a priority on the original integrated release projects
* Provide reference deployment architectures and deployment tooling as needed
* Establish performance testing
* Work towards cross-region search support
* Enable pre-defined quick queries for the most common searches
* Release the horizon search panel either in Horizon master or on its own
* Enable horizon top nav search to become a reality [0]

Thank you for your consideration in allowing me to continue serving as PTL
for the Mitaka cycle.

Thank you,
Travis

[0] https://invis.io/6Z3T72NXW
[1] https://review.openstack.org/#/c/223805/

From dougwig at parksidesoftware.com  Tue Sep 15 22:49:23 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Tue, 15 Sep 2015 16:49:23 -0600
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
	'default' network model
In-Reply-To: <55F89777.3000505@internap.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
Message-ID: <7ABB3AEC-8DA7-4707-81D8-67910F96887F@parksidesoftware.com>



> On Sep 15, 2015, at 4:11 PM, Mathieu Gagné <mgagne at internap.com> wrote:
> 
>> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
>> We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
>> 
>> I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
> 
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.

Please stop painting such generalizations.  Go to the third or fourth email in this thread and you will find a spec, worked on by neutron and nova, that addresses exactly this use case.

It is a valid use case, and neutron does care about it. It has wrinkles. That has not stopped work on it for the common cases.

Thanks,
Doug 


> 
> The original statement by Monty Taylor is clear to me:
> 
> I wish to boot an instance that is on a public network and reachable
> without madness.
> 
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
> 
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
> 
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
> 
> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
> 
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
> 
> If we can accept and agree that not everyone wishes to adhere to the
> "full stack of networking good practices" (TBH, I don't know how to call
> this thing), it will be a good start. Otherwise I feel we won't be able
> to achieve anything.
> 
> What Monty is explaining and suggesting is something we (my team) have
> been struggling with for *years* and just didn't have bandwidth (we are
> operators, not developers) or public charisma to change.
> 
> I'm glad Monty brought up this subject so we can officially address it.
> 
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
> [2]
> http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
> [3]
> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
> [4]
> http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html
> 
> -- 
> Mathieu
> 


From carl at ecbaldwin.net  Tue Sep 15 22:52:59 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 15 Sep 2015 16:52:59 -0600
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
 <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>
Message-ID: <CALiLy7o_ypCKpN6iqen57fuDsYSzYvd5zZqznXYbosuuOWPU2w@mail.gmail.com>

On Tue, Sep 15, 2015 at 3:09 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> DNS is preferable, since humans don't remember ip's very well. IPv6 is much harder to remember than v4 too.
>
> DNS has its own issues, mostly, its usually not very quick to get a DNS entry updated.  At our site (and I'm sure, others), I'm afraid to say in some cases it takes as long as 24 hours to get updates to happen. Even if that was fixed, caching can bite you too.

We also have work going on now to automate the addition and update of
DNS entries as VMs come and go [1].  Please have a look and provide
feedback.

[1] https://review.openstack.org/#/q/topic:bp/external-dns-resolution,n,z

Carl
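
For a flavor of what such automation involves, here is a minimal sketch of
deriving forward DNS records from instance data. The record tuples and the
instance dict shape are assumptions for illustration, not the blueprint's
actual interface:

```python
# Sketch: generate forward DNS records for each instance address,
# choosing A vs AAAA based on whether the address is IPv6.

def dns_records(instances, zone):
    """Yield (name, type, value) tuples for each instance address."""
    for inst in instances:
        fqdn = "%s.%s" % (inst["name"], zone)
        for addr in inst.get("addresses", []):
            rtype = "AAAA" if ":" in addr else "A"
            yield (fqdn, rtype, addr)
```

Running this on instance create/delete events (and pushing the diff to the
DNS backend) is, roughly, the shape of the automation being proposed.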


From fungi at yuggoth.org  Tue Sep 15 22:57:03 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 15 Sep 2015 22:57:03 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
Message-ID: <20150915225702.GE25159@yuggoth.org>

On 2015-09-15 18:00:03 +0000 (+0000), Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks.
> the "just run it in on THE public network" doesn't work. :/

Is this for a public service provider? If so, do you expect your
users to have some premonition which tells them the particular
public network they should be choosing?

> I also strongly recommend to users to put vms on a private network
> and use floating ip's/load balancers. For many reasons. Such as,
> if you don't, the ip that gets assigned to the vm helps it become
> a pet.

I like pets just fine. Often I want pet servers not cattle servers.
Why should you (the service provider) make this choice for me?

> you can't replace the vm and get the same IP.

No? Well, you can `nova rebuild` in at least some environments, but
regardless it's not that hard to change a couple of DNS records when
replacing a server (virtual or physical).

> Floating IP's and load balancers can help prevent pets.

They can help prevent lots of things, some good, some bad. I'd
rather address translation were the exception, not the rule. NAT has
created a broken Internet.

> It also prevents security issues with DNS and IP's.

This is the first I've heard about DNS and IP addresses being
insecure. Please elaborate, and also explain your alternative
Internet which relies on neither of these.

> Also, for every floating ip/lb I have, I usually have 3x or more
> the number of instances that are on the private network.

Out of some misguided assumption that NAT is a security panacea from
the sound of it.

> Sure its easy to put everything on the public network, but it
> provides much better security if you only put what you must on the
> public network.

I highly recommend a revolutionary new technology called "packet
filtering."

> Consider the internet. would you want to expose every device in
> your house directly on the internet? No.

On the contrary, I actually would (depending on what you mean by
"expose", but I assume from context you mean assign individual
global addresses directly to the network interfaces of each). With
IPv6 I do and would with v4 as well if my local provider routed me
more than a /32 assignment.

> you put them in a private network and poke holes just for the
> stuff that does.

No, I put them in a globally-routed network (the terms "private" and
"public" are misleading in the context of these sorts of
discussions) and poke holes just for the stuff that people need to
reach from outside that network.

> we should be encouraging good security practices. If we encourage
> bad ones, then it will bite us later when OpenStack gets a
> reputation for being associated with compromises.

Here we agree, we just disagree on what those security practices
are. Address translation is no substitute for good packet filtering,
and allowing people to ignorantly assume so does them a great
disservice. We should be educating them on how to properly protect
their systems while at the same time showing them how much better
the Internet works without the distasteful workarounds brought about
by unnecessary layers of address-translating indirection.

And before this turns into a defense-in-depth debate, adding NAT to
your filtering doesn't really increase security it just increases
complexity.

> I do consider making things as simple as possible very important.
> but that is, make them as simple as possible, but no simpler.
> There's danger here of making things too simple.

Complexity is the enemy of security, so I find your reasoning
internally inconsistent. Proponents of NAT are suffering from some
manner of mass-induced Stockholm Syndrome. It's a hack to deal with
our oversubscription of the IPv4 address space and in some cases
solve address conflicts between multiple networks. Its unfortunate
ubiquity has confused lots of people into thinking it's there for
better reasons than it actually is.
-- 
Jeremy Stanley
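
Jeremy's argument that NAT adds state rather than security can be made
concrete with a toy source-NAT table: the only "protection" it provides is
dropping inbound packets that lack a mapping, which a stateless filter rule
achieves without any of this bookkeeping. Everything below is an
illustrative sketch, not any real implementation:

```python
# Toy source-NAT: the per-flow mapping state a NAT box must maintain.
# A plain packet filter needs none of this.

class Nat:
    def __init__(self, public_ip, first_port=20000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Map an outbound flow to (public_ip, public_port)."""
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def translate_in(self, public_port):
        """Inbound traffic with no existing mapping is simply dropped --
        the 'security' often attributed to NAT is just this side effect."""
        return self.back.get(public_port)
```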


From mordred at inaugust.com  Tue Sep 15 23:01:02 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 16 Sep 2015 01:01:02 +0200
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <CAK+RQeYeNgn88kF9i+R+_bV6c17MFxYbe4kcxh_-6UnaH1OU1Q@mail.gmail.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
 <CAK+RQeYeNgn88kF9i+R+_bV6c17MFxYbe4kcxh_-6UnaH1OU1Q@mail.gmail.com>
Message-ID: <55F8A32E.4020508@inaugust.com>

On 09/15/2015 06:16 PM, Armando M. wrote:
>
>
> On 15 September 2015 at 08:27, Mike Spreitzer <mspreitz at us.ibm.com
> <mailto:mspreitz at us.ibm.com>> wrote:
>
>     Monty Taylor <mordred at inaugust.com <mailto:mordred at inaugust.com>>
>     wrote on 09/15/2015 11:04:07 AM:
>
>     > a) an update to python-novaclient to allow a named network to be passed
>     > to satisfy the "you have more than one network" - the nics  argument is
>     > still useful for more complex things
>
>     I am not using the latest, but rather Juno.  I find that in many
>     places the Neutron CLI insists on a UUID when a name could be used.
>     Three cheers for any campaign to fix that.
>
>
> The client is not particularly tied to a specific version of the server,
> so we don't have a Juno version, or a Kilo version, etc. (even though
> they are aligned, see [1] for more details).
>
> Having said that, you could use names in place of uuids pretty much
> anywhere. If your experience says otherwise, please consider filing a
> bug against the client [2] and we'll get it fixed.

May just be a help-text bug in novaclient then:

   --nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>
                                 Create a NIC on the server. Specify option
                                 multiple times to create multiple NICs.
                                 net-id: attach NIC to network with this UUID
                                 (either port-id or net-id must be provided),
                                 v4-fixed-ip: IPv4 fixed address for NIC
                                 (optional), v6-fixed-ip: IPv6 fixed address
                                 for NIC (optional), port-id: attach NIC to
                                 port with this UUID (either port-id or net-id
                                 must be provided).
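
For illustration, the name-to-UUID resolution a name-aware client flag would
need can be sketched as follows. The network-list shape is an assumption for
the sketch, not the real novaclient internals:

```python
# Toy resolver illustrating what a name-aware `nova boot --network`
# could do: accept a UUID or a unique name, and refuse ambiguity.

def resolve_network(networks, name_or_id):
    """Return the UUID of the network matching name_or_id.

    Raises LookupError when nothing matches or the name is ambiguous,
    so the caller can fall back to --nic net-id=<uuid>.
    """
    # An exact UUID match wins outright.
    for net in networks:
        if net["id"] == name_or_id:
            return net["id"]
    matches = [net for net in networks if net["name"] == name_or_id]
    if not matches:
        raise LookupError("no network named %r" % name_or_id)
    if len(matches) > 1:
        raise LookupError("name %r is ambiguous; use the UUID" % name_or_id)
    return matches[0]["id"]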



> Thanks,
> Armando
>
> [1] https://launchpad.net/python-neutronclient/+series
> [2] https://bugs.launchpad.net/python-neutronclient/+filebug
>
>
>
>     And, yeah, creating VMs on a shared public network is good too.
>
>     Thanks,
>     mike
>
>
>
>
>
>



From mordred at inaugust.com  Tue Sep 15 23:08:34 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 16 Sep 2015 01:08:34 +0200
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
Message-ID: <55F8A4F2.8030200@inaugust.com>

On 09/15/2015 06:30 PM, Armando M. wrote:
>
>
> On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com
> <mailto:mordred at inaugust.com>> wrote:
>
>     Hey all!
>
>     If any of you have ever gotten drunk with me, you'll know I hate
>     floating IPs more than I hate being stabbed in the face with a very
>     angry fish.
>
>     However, that doesn't really matter. What should matter is "what is
>     the most sane thing we can do for our users"
>
>     As you might have seen in the glance thread, I have a bunch of
>     OpenStack public cloud accounts. Since I wrote that email this
>     morning, I've added more - so we're up to 13.
>
>     auro
>     citycloud
>     datacentred
>     dreamhost
>     elastx
>     entercloudsuite
>     hp
>     ovh
>     rackspace
>     runabove
>     ultimum
>     unitedstack
>     vexxhost
>
>     Of those public clouds, 5 of them require you to use a floating IP
>     to get an outbound address, the others directly attach you to the
>     public network. Most of those 8 allow you to create a private
>     network, to boot vms on the private network, and ALSO to create a
>     router with a gateway and put floating IPs on your private ip'd
>     machines if you choose.
>
>     Which brings me to the suggestion I'd like to make.
>
>     Instead of having our default in devstack and our default when we
>     talk about things be "you boot a VM and you put a floating IP on it"
>     - which solves one of the two usage models - how about:
>
>     - Cloud has a shared: True, external:routable: True neutron network.
>     I don't care what it's called  ext-net, public, whatever. the
>     "shared" part is the key, that's the part that lets someone boot a
>     vm on it directly.
>
>     - Each person can then make a private network, router, gateway, etc.
>     and get floating-ips from the same public network if they prefer
>     that model.
>
>     Are there any good reasons to not push to get all of the public
>     networks marked as "shared"?
>
>
> The reason is simple: not every cloud deployment is the same: private is
> different from public and even within the same cloud model, the network
> topology may vary greatly.

Yes. Many things may be different.

> Perhaps Neutron fails in the sense that it provides you with too much
> choice, and perhaps we have to standardize on the type of networking
> profile expected by a user of OpenStack public clouds before making
> changes that would fragment this landscape even further.
>
> If you are advocating for more flexibility without limiting the existing
> one, we're only making the problem worse.

I am not. I am arguing for a different arbitrary 'default' deployment. 
Right now the verbiage around things is "floating IPs is the 'right' way 
to get access to public networks"

I'm not arguing for code changes, or more options, or new features.

I'm saying that there a set of public clouds that provide a default 
experience out of the box that is pleasing with neutron today, and we 
should have the "I don't know what I want tell me what to do" option 
behave like those clouds.

Yes. You can do other things.
Yes. You can get fancy.
Yes. You can express all of the things.

Those are things I LOVE about neutron and one of the reasons I think 
that the arguments around neutron and nova-net are insane.

I'm just saying that "I want a computer on the externally facing network 
from this cloud" is almost never well served by floating-ips unless you 
know what you're doing, so rather than leading people down the road 
towards that as the default behavior, since it's the HARDER thing to 
deal with - let's lead them to the behavior which makes the simple thing 
simple and then clearly open the door to them to increasingly complex 
and powerful things over time.
>
>     OH - well, one thing - that's that once there are two networks in an
>     account you have to specify which one. This is really painful in
>     nova clent. Say, for instance, you have a public network called
>     "public" and a private network called "private" ...
>
>     You can't just say "nova boot --network=public" - nope, you need to
>     say "nova boot --nics net-id=$uuid_of_my_public_network"
>
>     So I'd suggest 2 more things;
>
>     a) an update to python-novaclient to allow a named network to be
>     passed to satisfy the "you have more than one network" - the nics
>     argument is still useful for more complex things
>
>     b) ability to say "vms in my cloud should default to being booted on
>     the public network" or "vms in my cloud should default to being
>     booted on a network owned by the user"
>
>     Thoughts?
>
>
> As I implied earlier, I am not sure how healthy this choice is. As a
> user of multiple clouds I may end up having a different user experience
> based on which cloud I am using...I thought you were partially
> complaining about lack of consistency?

I am a user of multiple clouds. I am complaining about the current lack 
of consistency.

More than that though, I'm complaining that we lead people to select a 
floating-ip model when having them flip the boolean value of 
"shared=True" on their ext-net would make the end-user experience WAY nicer.



From openstack at nemebean.com  Tue Sep 15 23:15:05 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 15 Sep 2015 18:15:05 -0500
Subject: [openstack-dev] [TripleO] Remove Tuskar from tripleo-common and
 python-tripleoclient
In-Reply-To: <CAPMB-2TmL=ZdmA7mJjTBs5SjPqEfNi-pzf09dMGbay5AYj8Rew@mail.gmail.com>
References: <CAPMB-2TmL=ZdmA7mJjTBs5SjPqEfNi-pzf09dMGbay5AYj8Rew@mail.gmail.com>
Message-ID: <55F8A679.2070908@nemebean.com>

On 09/15/2015 11:33 AM, Dougal Matthews wrote:
> Hi all,
> 
> This is partly a heads up for everyone, but also seeking feedback on the
> direction.
> 
> We are starting to move to a more general Heat workflow without the need for
> Tuskar. The CLI is already in a position to do this as we can successfully
> deploy without Tuskar.
> 
> Moving forward it will be much easier for us to progress if we don't need to
> take Tuskar into account in tripleo-common. This will be particularly useful
> when working on the overcloud deployment library and API spec [1].
> 
> Tuskar UI doesn't currently use tripleo-common (or tripleoclient) and
> thus it
> is safe to make this change from the UI's point of view.
> 
> I have started the process of doing this removal and posted three WIP
> reviews
> [2][3][4] to assess how much change was needed, I plan to tidy them up over
> the next day or two. There is one for tripleo-common, python-tripleoclient
> and tripleo-docs. The documentation one only removes references to Tuskar on
> the CLI and doesn't remove Tuskar totally - so Tuskar UI is still covered
> until it has a suitable replacement.
> 
> I don't anticipate any impact for CI as I understand that all the current CI
> has migrated from deploying with Tuskar to deploying the templates directly
> (Using `openstack overcloud deploy --templates` rather than --plan). I
> believe it is safe to remove from python-tripleoclient as that repo is so
> new. I am however unsure about the TripleO deprecation policy for tripleo-
> common?

I think I'd file this under "oops, didn't work" and go ahead with the
compatibility break.  This is all going to have to get ripped out to
make room for the new Tuskar, so I don't think there's any point in
jumping through hoops to notify all the people not using Tuskar that
it's going away. :-)

> 
> Thanks,
> Dougal
> 
> 
> [1]: https://review.openstack.org/219754
> [2]: https://review.openstack.org/223527
> [3]: https://review.openstack.org/223535
> [4]: https://review.openstack.org/223605
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From armamig at gmail.com  Tue Sep 15 23:19:09 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 16:19:09 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <201509152104.t8FL4Y80025192@d03av05.boulder.ibm.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <CAK+RQeZ29gkuWkvj5m5pWXK+aEf=tMpKKM7R_hkNrvgDaiHE_Q@mail.gmail.com>
 <201509152104.t8FL4Y80025192@d03av05.boulder.ibm.com>
Message-ID: <CAK+RQeaTejTDS8V-hH6C=L_Z-WULH+nhAp7aVWN=8H06r70Znw@mail.gmail.com>

On 15 September 2015 at 14:04, Mike Spreitzer <mspreitz at us.ibm.com> wrote:

> "Armando M." <armamig at gmail.com> wrote on 09/15/2015 03:50:24 PM:
>
> > On 15 September 2015 at 10:02, Doug Hellmann <doug at doughellmann.com>
> wrote:
> > Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> ...
> > As with the Glance image upload API discussion, this is an example
> > of an extremely common use case that is either complex for the end
> > user or for which they have to know something about the deployment
> > in order to do it at all. The usability of an OpenStack cloud running
> > neutron would be enhanced greatly if there was a simple, clear, way
> > for the user to get a new VM with a public IP on any cloud without
> > multiple steps on their part.
>
> <<<<<<end of excerpt from Armando>>>>>>
>
> ...
> >
> > So this boils down to: in light of the possible ways of providing VM
> > connectivity, how can we make a choice on the user's behalf? Can we
> > assume that he/she always want a publicly facing VM connected to
> > Internet? The answer is 'no'.
>
> While it may be true that in some deployments there is no good way for the
> code to choose, I think that is not the end of the story here.  The
> motivation to do this is that in *some* deployments there *is* a good way
> for the code to figure out what to do.


Agreed, I wasn't dismissing this entirely. I was simply saying that if we
don't put constraints in place, it's difficult to come up with a good
'default' answer.


>
>
> Regards,
> Mike
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/eac8abba/attachment.html>

From mordred at inaugust.com  Tue Sep 15 23:33:18 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 16 Sep 2015 01:33:18 +0200
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F86358.3050705@linux.vnet.ibm.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
 <55F86358.3050705@linux.vnet.ibm.com>
Message-ID: <55F8AABE.4080307@inaugust.com>

On 09/15/2015 08:28 PM, Matt Riedemann wrote:
>
>
> On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
>> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>>
>>  > a) an update to python-novaclient to allow a named network to be
>> passed
>>  > to satisfy the "you have more than one network" - the nics argument is
>>  > still useful for more complex things
>>
>> I am not using the latest, but rather Juno.  I find that in many places
>> the Neutron CLI insists on a UUID when a name could be used.  Three
>> cheers for any campaign to fix that.
>
> It's my understanding that network names in neutron, like security
> group names, are not unique; that's why you have to specify a UUID.

Yah.

EXCEPT - we already error when the user does not specify the network 
specifically enough, so there is nothing stopping us from trying the 
obvious thing and then moving on. For example:

nova boot

ERROR: There is more than one network, please specify one

nova boot --network public

\o/

OR

nova boot

ERROR: There is more than one network, please specify one

nova boot --network public

ERROR: There is more than one network named 'public', please specify a UUID

nova boot --network ecc967b6-5c01-11e5-b218-4c348816caa1

\o/

These are successive attempts at an operation that should stay simple; 
the precision required of the user only grows as the deployment's 
complexity actually demands it.
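The "try the obvious thing, then ask for more" order above can be sketched as a small resolver. This is a hypothetical illustration, not python-novaclient's actual code: the function name, network-dict shape, and error strings are all assumptions made for the example.

```python
import uuid


def resolve_network(networks, selector=None):
    """Pick one network from `networks` (dicts with 'id' and 'name' keys).

    Mirrors the successive error messages in the example above: succeed
    when the request is unambiguous, otherwise tell the user what to add.
    """
    if selector is None:
        # No hint given: only acceptable when there is exactly one network.
        if len(networks) == 1:
            return networks[0]
        raise ValueError("There is more than one network, please specify one")

    # A well-formed UUID is unambiguous, so try that interpretation first.
    try:
        uuid.UUID(selector)
        matches = [n for n in networks if n["id"] == selector]
    except ValueError:
        # Not a UUID: fall back to matching by (possibly non-unique) name.
        matches = [n for n in networks if n["name"] == selector]

    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise ValueError("No network matching %r" % selector)
    raise ValueError(
        "There is more than one network named %r, please specify a UUID"
        % selector)
```

The point of the design is that the error only escalates when the cloud's own ambiguity forces it to: a single network needs no argument, a unique name needs only the name, and the UUID is demanded only as a last resort.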


From tony at bakeyournoodle.com  Tue Sep 15 23:39:06 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 16 Sep 2015 09:39:06 +1000
Subject: [openstack-dev] [oslo] Help with stable/juno branches / releases
In-Reply-To: <1442335836-sup-6104@lrrr.local>
References: <20150824095748.GA74505@thor.bakeyournoodle.com>
 <1440616274-sup-7044@lrrr.local>
 <20150826223200.GA86688@thor.bakeyournoodle.com>
 <1442335836-sup-6104@lrrr.local>
Message-ID: <20150915233906.GB54897@thor.bakeyournoodle.com>

On Tue, Sep 15, 2015 at 12:52:40PM -0400, Doug Hellmann wrote:

> I've created the branches for oslo.utils and oslotest, as requested.
> There are patches up for each to update the .gitreview file, which will
> make it easier to land the patches to update whatever requirements
> settings need to be adjusted.
> 
> Since these are managed libraries, you can request releases by
> submitting patches to the openstack/releases repository (see the
> README and ping me in #openstack-relmgr-office if you need a hand
> the first time, I'll be happy to walk you through it).

Thanks Doug!

Yours Tony.
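For readers unfamiliar with the openstack/releases workflow mentioned above: a release request is just a Gerrit review adding an entry to the deliverable file for the series. The snippet below is an illustrative sketch only; the version number and commit hash are placeholders, and the exact file layout should be checked against the repository's README.

```yaml
# deliverables/juno/oslo.utils.yaml (illustrative; check the repo README)
launchpad: oslo.utils
releases:
  - version: 1.4.1  # placeholder version for the stable/juno branch
    projects:
      - repo: openstack/oslo.utils
        hash: 0000000000000000000000000000000000000000  # placeholder SHA
```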
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/e40e5927/attachment.pgp>

From armamig at gmail.com  Tue Sep 15 23:44:04 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 16:44:04 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F89777.3000505@internap.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
Message-ID: <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>

On 15 September 2015 at 15:11, Mathieu Gagné <mgagne at internap.com> wrote:

> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you don't,
> the ip that gets assigned to the vm helps it become a pet. you can't
> replace the vm and get the same IP. Floating IP's and load balancers can
> help prevent pets. It also prevents security issues with DNS and IP's.
> Also, for every floating ip/lb I have, I usually have 3x or more the number
> of instances that are on the private network. Sure its easy to put
> everything on the public network, but it provides much better security if
> you only put what you must on the public network. Consider the internet.
> would you want to expose every device in your house directly on the
> internet? No. you put them in a private network and poke holes just for the
> stuff that does. we should be encouraging good security practices. If we
> encourage bad ones, then it will bite us later when OpenStack gets a
> reputation for being associated with compromises.
> >
>
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.
>

I am sorry, but how can you attribute a person's opinion to a project,
which is a collective effort? Surely everyone is entitled to his/her opinion,
but I don't honestly believe these are fair statements to make.


> The original statement by Monty Taylor is clear to me:
>
> I wish to boot an instance that is on a public network and reachable
> without madness.
>
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
>
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
>
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
>

What is it that you think Neutron should be providing right off the bat? I
personally have never seen you publicly report usability issues that
developers could look into. Let's escalate these so that the Neutron
team can be made aware.


>
> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
>

There are many ways to skin this cat IMO, and "scalable public shared
network" can have multiple meanings; I appreciate the pointers
nonetheless.


>
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
>

Public providers' networking needs are not the only needs that Neutron
tries to serve. There's a balance to be struck, and I appreciate that the
balance may need to be adjusted, but being so dismissive is myopic about
the entire industry landscape.


>
> If we can accept and agree that not everyone wishes to adhere to the
> "full stack of networking good practices" (TBH, I don't know how to call
> this thing), it will be a good start. Otherwise I feel we won't be able
> to achieve anything.
>
> What Monty is explaining and suggesting is something we (my team) have
> been struggling with for *years* and just didn't have bandwidth (we are
> operators, not developers) or public charisma to change.
>
> I'm glad Monty brought up this subject so we can officially address it.
>
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
> [2]
>
> http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
> [3]
>
> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
> [4]
>
> http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html
>
> --
> Mathieu
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/e8b3b677/attachment.html>

From armamig at gmail.com  Wed Sep 16 00:02:29 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 17:02:29 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F8A32E.4020508@inaugust.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
 <CAK+RQeYeNgn88kF9i+R+_bV6c17MFxYbe4kcxh_-6UnaH1OU1Q@mail.gmail.com>
 <55F8A32E.4020508@inaugust.com>
Message-ID: <CAK+RQebF8b6prJ7+bx97AxbGxQDWHQgaXDOYiosKAyHh7hZXLA@mail.gmail.com>

On 15 September 2015 at 16:01, Monty Taylor <mordred at inaugust.com> wrote:

> On 09/15/2015 06:16 PM, Armando M. wrote:
>
>>
>>
>> On 15 September 2015 at 08:27, Mike Spreitzer <mspreitz at us.ibm.com
>> <mailto:mspreitz at us.ibm.com>> wrote:
>>
>>     Monty Taylor <mordred at inaugust.com <mailto:mordred at inaugust.com>>
>>     wrote on 09/15/2015 11:04:07 AM:
>>
>>     > a) an update to python-novaclient to allow a named network to be
>> passed
>>     > to satisfy the "you have more than one network" - the nics
>> argument is
>>     > still useful for more complex things
>>
>>     I am not using the latest, but rather Juno.  I find that in many
>>     places the Neutron CLI insists on a UUID when a name could be used.
>>     Three cheers for any campaign to fix that.
>>
>>
>> The client is not particularly tied to a specific version of the server,
>> so we don't have a Juno version, or a Kilo version, etc. (even though
>> they are aligned, see [1] for more details).
>>
>> Having said that, you could use names in place of uuids pretty much
>> anywhere. If your experience says otherwise, please consider filing a
>> bug against the client [2] and we'll get it fixed.
>>
>
> May just be a help-text bug in novaclient then:
>
>   --nic
> <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>
>                                 Create a NIC on the server. Specify option
>                                 multiple times to create multiple NICs.
> net-
>                                 id: attach NIC to network with this UUID
>                                 (either port-id or net-id must be
> provided),
>                                 v4-fixed-ip: IPv4 fixed address for NIC
>                                 (optional), v6-fixed-ip: IPv6 fixed address
>                                 for NIC (optional), port-id: attach NIC to
>                                 port with this UUID (either port-id or
> net-id
>                                 must be provided).
>

Ok, if you're asking for the ability to boot from a network by name
(assuming it's unique), then that can be sorted. I filed a novaclient bug
[1]. Volunteers welcome!

[1] https://bugs.launchpad.net/python-novaclient/+bug/1496180


>
>
>
> Thanks,
>> Armando
>>
>> [1] https://launchpad.net/python-neutronclient/+series
>> [2] https://bugs.launchpad.net/python-neutronclient/+filebug
>>
>>
>>
>>     And, yeah, creating VMs on a shared public network is good too.
>>
>>     Thanks,
>>     mike
>>
>>
>> __________________________________________________________________________
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe:
>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/9ba41b91/attachment-0001.html>

From mgagne at internap.com  Wed Sep 16 00:06:30 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 15 Sep 2015 20:06:30 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
Message-ID: <55F8B286.4020201@internap.com>

On 2015-09-15 7:44 PM, Armando M. wrote:
> 
> 
> On 15 September 2015 at 15:11, Mathieu Gagné <mgagne at internap.com
> <mailto:mgagne at internap.com>> wrote:
> 
>     On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
>     > We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
>     >
>     > I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
>     >
> 
>     Sorry but I feel this kind of reply explains why people are still using
>     nova-network over Neutron. People want simplicity and they are denied it
>     at every corner because (I feel) Neutron thinks it knows better.
> 
> 
> I am sorry, but how can you associate a person's opinion to a project,
> which is a collectivity? Surely everyone is entitled to his/her opinion,
> but I don't honestly believe these are fair statements to make.

You are right, this is not fair. I apologize for that.


>     The original statement by Monty Taylor is clear to me:
> 
>     I wish to boot an instance that is on a public network and reachable
>     without madness.
> 
>     As of today, you can't unless you implement a deployer/provider specific
>     solution (to scale said network). Just take a look at what actual public
>     cloud providers are doing:
> 
>     - Rackspace has a "magic" public network
>     - GoDaddy has custom code in their nova-scheduler (AFAIK)
>     - iWeb (which I work for) has custom code in front of nova-api.
> 
>     We are all writing our own custom code to implement what (we feel)
>     Neutron should be providing right off the bat.
> 
> 
> What is that you think Neutron should be providing right off the bat? I
> personally have never seen you publicly report usability issues that
> developers could go and look into. Let's escalate these so that the
> Neutron team can be aware.

Please understand that I'm an operator and don't have the luxury of
contributing as much as I did before. I do, however, participate in the
OpenStack Ops meetups, and this is the kind of thing we discuss. You can
read the use cases below to understand what I'm referring to. I don't
feel the need to add yet another version since there are already
multiple ones identifying my needs.

People (such as Monty) are already voicing my concerns, so I didn't feel
the need to repeat them.


>     By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
>     specs [3] and the Large Deployment Team meeting notes [4], you will see
>     that what is suggested here (a scalable public shared network) is an
>     objective we wish but are struggling hard to achieve.
> 
> 
> There are many ways to skin this cat IMO, and scalable public shared
> network can really have multiple meanings, I appreciate the pointers
> nonetheless.
>  
> 
> 
>     People keep asking for simplicity and Neutron looks to not be able to
>     offer it due to philosophical conflicts between Neutron developers and
>     actual public users/operators. We can't force our users to adhere to ONE
>     networking philosophy: use NAT, floating IPs, firewall, routers, etc.
>     They just don't buy it. Period. (see monty's list of public providers
>     attaching VMs to public network)
> 
> 
> Public providers networking needs are not the only needs that Neutron
> tries to gather. There's a balance to be struck, and I appreciate that
> the balance may need to be adjusted, but being so dismissive is being
> myopic of the entire industry landscape.

We (my employer) also maintain private clouds, and I'm fully aware of the
difference between those needs. Therefore I don't think it's fair to say
that my opinion is nearsighted. Nonetheless, I would like this balance
to be adjusted; that's what I'm asking for and am glad to see.


>     [1]
>     http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
>     [2]
>     http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
>     [3]
>     http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
>     [4]
>     http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html
> 
>     --
>     Mathieu
> 


-- 
Mathieu


From dougwig at parksidesoftware.com  Wed Sep 16 00:06:55 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Tue, 15 Sep 2015 18:06:55 -0600
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
	'default' network model
In-Reply-To: <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
Message-ID: <93473B15-964C-4351-A0C1-03B8C59E0C94@parksidesoftware.com>

Hi all,

If I can attempt to summarize this thread:

"We want simple networks with VMs"

Ok, in progress, start here:

https://blueprints.launchpad.net/neutron/+spec/get-me-a-network <https://blueprints.launchpad.net/neutron/+spec/get-me-a-network>

"It should work with multiple networks"

Same spec, click above.

"It should work with just 'nova boot'"

Yup, you guessed it, starting point is the same spec, click above.

"It should still work in the face of N-tiered ambiguity."

Umm, how, exactly? I think if you have a super complicated setup, your boot might be a bit harder, too. Please look at the cases that are covered before getting upset, and then provide feedback on the spec.

"Networks should be accessible by name."

Yup, if they aren't, it's a bug. The client a few cycles ago was particularly bad at this. If you find more cases, please file a bug.

"Neutron doesn't get it and never will."

I'm not sure how all the "yes" answers above keep translating into this old saw, but is there any tiny chance we can stop living in the past and instead focus on the use cases that we want to solve?

Thanks,
doug



> On Sep 15, 2015, at 5:44 PM, Armando M. <armamig at gmail.com> wrote:
> 
> 
> 
> On 15 September 2015 at 15:11, Mathieu Gagné <mgagne at internap.com <mailto:mgagne at internap.com>> wrote:
> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
> >
> 
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.
> 
> I am sorry, but how can you associate a person's opinion to a project, which is a collectivity? Surely everyone is entitled to his/her opinion, but I don't honestly believe these are fair statements to make.
> 
> 
> The original statement by Monty Taylor is clear to me:
> 
> I wish to boot an instance that is on a public network and reachable
> without madness.
> 
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
> 
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
> 
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
> 
> What is that you think Neutron should be providing right off the bat? I personally have never seen you publicly report usability issues that developers could go and look into. Let's escalate these so that the Neutron team can be aware.
>  
> 
> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
> 
> There are many ways to skin this cat IMO, and scalable public shared network can really have multiple meanings, I appreciate the pointers nonetheless.
>  
> 
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
> 
> Public providers networking needs are not the only needs that Neutron tries to gather. There's a balance to be struck, and I appreciate that the balance may need to be adjusted, but being so dismissive is being myopic of the entire industry landscape.
>  
> 
> If we can accept and agree that not everyone wishes to adhere to the
> "full stack of networking good practices" (TBH, I don't know how to call
> this thing), it will be a good start. Otherwise I feel we won't be able
> to achieve anything.
> 
> What Monty is explaining and suggesting is something we (my team) have
> been struggling with for *years* and just didn't have bandwidth (we are
> operators, not developers) or public charisma to change.
> 
> I'm glad Monty brought up this subject so we can officially address it.
> 
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html <http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html>
> [2]
> http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html <http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html>
> [3]
> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html <http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html>
> [4]
> http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html <http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html>
> 
> --
> Mathieu
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/f1baf829/attachment.html>

From mgagne at internap.com  Wed Sep 16 00:11:24 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 15 Sep 2015 20:11:24 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <93473B15-964C-4351-A0C1-03B8C59E0C94@parksidesoftware.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
 <93473B15-964C-4351-A0C1-03B8C59E0C94@parksidesoftware.com>
Message-ID: <55F8B3AC.50106@internap.com>

On 2015-09-15 8:06 PM, Doug Wiegley wrote:
> 
> "Neutron doesn't get it and never will."
> 
> I'm not sure how all "yes" above keeps translating to this old saw, but
> is there any tiny chance we can stop living in the past and instead
> focus on the use cases that we want to solve?
> 

I apologized for my unfair statement in which I very wrongly attributed a
person's opinion to a whole project. I would like to move on. Thanks.

-- 
Mathieu


From mgagne at internap.com  Wed Sep 16 00:12:15 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 15 Sep 2015 20:12:15 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <7ABB3AEC-8DA7-4707-81D8-67910F96887F@parksidesoftware.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <7ABB3AEC-8DA7-4707-81D8-67910F96887F@parksidesoftware.com>
Message-ID: <55F8B3DF.4000906@internap.com>

On 2015-09-15 6:49 PM, Doug Wiegley wrote:
> 
> 
>> On Sep 15, 2015, at 4:11 PM, Mathieu Gagné <mgagne at internap.com> wrote:
>>
>>> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
>>> We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
>>>
>>> I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
>>
>> Sorry but I feel this kind of reply explains why people are still using
>> nova-network over Neutron. People want simplicity and they are denied it
>> at every corner because (I feel) Neutron thinks it knows better.
> 
> Please stop painting such generalizations.  Go to the third or fourth email in this thread and you will find a spec, worked on by neutron and nova, that addresses exactly this use case.
> 
> It is a valid use case, and neutron does care about it. It has wrinkles. That has not stopped work on it for the common cases.
> 

I've read the neutron spec you are referring to (which I mentioned in my
email) and I'm glad the subject is being discussed. It was not my intention
to diminish the work done by the Neutron team to address those issues. I
wrongly associated a person's opinion with a whole project; this is not
fair, and I apologize for that.

Jeremy Stanley replied to Kevin with much better words than mine.

-- 
Mathieu


From dougwig at parksidesoftware.com  Wed Sep 16 00:16:01 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Tue, 15 Sep 2015 18:16:01 -0600
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
	'default' network model
In-Reply-To: <55F8B3AC.50106@internap.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
 <93473B15-964C-4351-A0C1-03B8C59E0C94@parksidesoftware.com>
 <55F8B3AC.50106@internap.com>
Message-ID: <D2F65CD2-1E4D-45C8-9C1A-697C8B571E96@parksidesoftware.com>

Sorry, didn't mean that to come down as a triple pile-on, with me doing it twice. My bad, I'm sorry.

Thanks,
doug


> On Sep 15, 2015, at 6:11 PM, Mathieu Gagné <mgagne at internap.com> wrote:
> 
> On 2015-09-15 8:06 PM, Doug Wiegley wrote:
>> 
>> "Neutron doesn't get it and never will."
>> 
>> I'm not sure how all "yes" above keeps translating to this old saw, but
>> is there any tiny chance we can stop living in the past and instead
>> focus on the use cases that we want to solve?
>> 
> 
> I apologized for my unfair statement where I very wrongly associated a
> person's opinion with a whole project. I would like to move on. Thanks
> 
> -- 
> Mathieu
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From Kevin.Fox at pnnl.gov  Wed Sep 16 00:21:23 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 16 Sep 2015 00:21:23 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <CABARBAYwMRra4h0an184PzRasr8=8g0rdxxzZ1Fvf5q7x=YymA@mail.gmail.com>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
 <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>,
 <CABARBAYwMRra4h0an184PzRasr8=8g0rdxxzZ1Fvf5q7x=YymA@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7BE64B@EX10MBOX06.pnnl.gov>

Yup. And Designate works very well. :)

But DNSaaS is not always an option to "the powers that be".... floating ip's are a much easier sell.

Also, Designate does have a restriction of wanting to manage a whole domain itself. When you have existing infrastructure you want your vms to merge into, it's a problem.

Thanks,
Kevin
________________________________
From: Assaf Muller [amuller at redhat.com]
Sent: Tuesday, September 15, 2015 2:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model



On Tue, Sep 15, 2015 at 5:09 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
Unfortunately, I haven't had enough chance to play with ipv6 yet.

I still think ipv6 with floating ip's probably makes sense though.

In ipv4, the floating ip's solve one particular problem:

End Users want to be able to consume a service provided by a VM. They have two options:
1. contact the ip directly
2. use DNS.

DNS is preferable, since humans don't remember ip's very well. IPv6 is much harder to remember than v4 too.

DNS has its own issues, mostly that it's usually not very quick to get a DNS entry updated.  At our site (and I'm sure, others), I'm afraid to say in some cases it takes as long as 24 hours to get updates to happen. Even if that was fixed, caching can bite you too.

I'm curious if you tried out Designate / DNSaaS.


So, when you register a DNS record, the ip that it's pointing at kind of becomes a piece of state. If it can't be separated from a VM, that's a bad thing. You can move it from VM to VM and your VM is not a pet. But, if your IP is allocated to the VM specifically, as non-floating IPs are, you run into problems if your VM dies and you have to replace it. If you're unlucky, it dies, someone else gets allocated the fixed ip, and now someone else's server is sitting on your DNS entry! So you are very unlikely to want to give up your VM, turning it into a pet.

I'd expect v6 usage to have the same issues.

The floating ip is great in that it's an abstraction of a contactable address, separate from any VM it may currently be bound to.

You allocate a floating ip. You can then register it with DNS, and another tenant cannot accidentally be assigned it. You can move it from vm to vm until you're done with it. You can unregister it from DNS, and then it is safe to return to others to use.
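To make the lifecycle above concrete, here's a toy model of it in Python. This is purely illustrative (the class and VM names are made up, and it is nothing like Neutron's actual implementation); it just shows why the address can safely live in DNS while VMs come and go underneath it:

```python
# Toy model of the floating-ip lifecycle: the address is tenant-owned
# state, independent of whichever VM it currently points at.

class FloatingIP:
    def __init__(self, address):
        self.address = address      # stable, DNS-registerable address
        self.vm = None              # VM it is currently bound to, if any

    def associate(self, vm):
        self.vm = vm                # point the address at a (new) VM

    def disassociate(self):
        self.vm = None              # the VM can die; the address survives

fip = FloatingIP("203.0.113.10")    # allocate; safe to put in DNS now
fip.associate("vm-1")
fip.disassociate()                  # vm-1 dies or is replaced
fip.associate("vm-2")               # the DNS entry never had to change
assert fip.address == "203.0.113.10" and fip.vm == "vm-2"
```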

To me, the NAT aspect of it is a secondary thing. Its primary importance is in enabling things to be more cattleish and helping with dns security.

Thanks,
Kevin






________________________________________
From: Clark Boylan [cboylan at sapwetik.org<mailto:cboylan at sapwetik.org>]
Sent: Tuesday, September 15, 2015 1:06 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
Maybe this would be better expressed as "just run it on an existing
public network" then?
>
> I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you
> don't, the ip that gets assigned to the vm helps it become a pet. you
> can't replace the vm and get the same IP. Floating IP's and load
> balancers can help prevent pets. It also prevents security issues with
> DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> more the number of instances that are on the private network. Sure its
> easy to put everything on the public network, but it provides much better
> security if you only put what you must on the public network. Consider
> the internet. would you want to expose every device in your house
> directly on the internet? No. you put them in a private network and poke
> holes just for the stuff that does. we should be encouraging good
> security practices. If we encourage bad ones, then it will bite us later
> when OpenStack gets a reputation for being associated with compromises.
There are a few issues with this. Neutron IPv6 does not support floating
IPs. So now you have to use two completely different concepts for
networking on a single dual stacked VM. IPv4 goes on a private network
and you attach a floating IP. IPv6 is publicly routable. If security and
DNS and not making pets were really the driving force behind floating
IPs we would see IPv6 support them too. These aren't the reasons
floating IPs exist, they exist because we are running out of IPv4
addresses and NAT is everyone's preferred solution to that problem. But
that doesn't make it a good default for a cloud; use them if you are
affected by an IP shortage.

Nothing prevents you from load balancing against public IPs to address
the DNS and firewall rule concerns (basically don't make pets). This
works great and is how OpenStack's git mirrors work.

It is also easy to firewall public IPs using Neutron via security groups
(and possibly the firewall service? I have never used it and don't
know). All this to say I think it is reasonable to use public shared
networks by default particularly since IPv6 does not have any concept of
a floating IP in Neutron so using them is just odd unless you really
really need them and you aren't actually any less secure.
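For what it's worth, the default-deny semantics Clark is leaning on can be sketched in a few lines. This is a simplification of how security group ingress rules behave (real rules also match remote prefixes/groups, direction, port ranges, etc.); the function and group names here are invented for illustration:

```python
# Sketch of security-group-style default-deny filtering: ingress traffic
# is dropped unless some rule explicitly allows it.

def allowed(rules, port, proto="tcp"):
    """Return True if any ingress rule permits this port/protocol."""
    return any(r["proto"] == proto and r["port"] == port for r in rules)

# A hypothetical group for a web server on a publicly routable address.
web_sg = [{"proto": "tcp", "port": 443}, {"proto": "tcp", "port": 22}]

assert allowed(web_sg, 443)        # HTTPS reachable on the public IP
assert not allowed(web_sg, 3306)   # the database port stays closed
```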

Not to get too off topic, but I would love it if all the devices in my
home were publicly routable. I can use my firewall to punch holes for
them, NAT is not required. Unfortunately I still have issues with IPv6
at home. Maybe one day this will be a reality :)
>
> I do consider making things as simple as possible very important. but
> that is, make them as simple as possible, but no simpler. There's danger
> here of making things too simple.
>
> Thanks,
> Kevin
>

Clark

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/80a0a2e0/attachment.html>

From armamig at gmail.com  Wed Sep 16 00:25:43 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 17:25:43 -0700
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F8A4F2.8030200@inaugust.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <55F8A4F2.8030200@inaugust.com>
Message-ID: <CAK+RQeZAkF59aSTKBFA-0K-9sSUsnq_u1UNz+_u037a98df1og@mail.gmail.com>

On 15 September 2015 at 16:08, Monty Taylor <mordred at inaugust.com> wrote:

> On 09/15/2015 06:30 PM, Armando M. wrote:
>
>>
>>
>> On 15 September 2015 at 08:04, Monty Taylor <mordred at inaugust.com
>> <mailto:mordred at inaugust.com>> wrote:
>>
>>     Hey all!
>>
>>     If any of you have ever gotten drunk with me, you'll know I hate
>>     floating IPs more than I hate being stabbed in the face with a very
>>     angry fish.
>>
>>     However, that doesn't really matter. What should matter is "what is
>>     the most sane thing we can do for our users"
>>
>>     As you might have seen in the glance thread, I have a bunch of
>>     OpenStack public cloud accounts. Since I wrote that email this
>>     morning, I've added more - so we're up to 13.
>>
>>     auro
>>     citycloud
>>     datacentred
>>     dreamhost
>>     elastx
>>     entercloudsuite
>>     hp
>>     ovh
>>     rackspace
>>     runabove
>>     ultimum
>>     unitedstack
>>     vexxhost
>>
>>     Of those public clouds, 5 of them require you to use a floating IP
>>     to get an outbound address, the others directly attach you to the
>>     public network. Most of those 8 allow you to create a private
>>     network, to boot vms on the private network, and ALSO to create a
>>     router with a gateway and put floating IPs on your private ip'd
>>     machines if you choose.
>>
>>     Which brings me to the suggestion I'd like to make.
>>
>>     Instead of having our default in devstack and our default when we
>>     talk about things be "you boot a VM and you put a floating IP on it"
>>     - which solves one of the two usage models - how about:
>>
>>     - Cloud has a shared: True, external:routable: True neutron network.
>>     I don't care what it's called  ext-net, public, whatever. the
>>     "shared" part is the key, that's the part that lets someone boot a
>>     vm on it directly.
>>
>>     - Each person can then make a private network, router, gateway, etc.
>>     and get floating-ips from the same public network if they prefer
>>     that model.
>>
>>     Are there any good reasons to not push to get all of the public
>>     networks marked as "shared"?
>>
>>
>> The reason is simple: not every cloud deployment is the same: private is
>> different from public and even within the same cloud model, the network
>> topology may vary greatly.
>>
>
> Yes. Many things may be different.
>
> Perhaps Neutron fails in the sense that it provides you with too much
>> choice, and perhaps we have to standardize on the type of networking
>> profile expected by a user of OpenStack public clouds before making
>> changes that would fragment this landscape even further.
>>
>> If you are advocating for more flexibility without limiting the existing
>> one, we're only making the problem worse.
>>
>
> I am not. I am arguing for a different arbitrary 'default' deployment.
> Right now the verbiage around things is "floating IPs is the 'right' way to
> get access to public networks"
>
> I'm not arguing for code changes, or more options, or new features.
>
> I'm saying that there a set of public clouds that provide a default
> experience out of the box that is pleasing with neutron today, and we
> should have the "I don't know what I want tell me what to do" option behave
> like those clouds.
>
> Yes. You can do other things.
> Yes. You can get fancy.
> Yes. You can express all of the things.
>
> Those are things I LOVE about neutron and one of the reasons I think that
> the arguments around neutron and nova-net are insane.
>
> I'm just saying that "I want a computer on the externally facing network
> from this cloud" is almost never well served by floating-ips unless you
> know what you're doing, so rather than leading people down the road towards
> that as the default behavior, since it's the HARDER thing to deal with -
> let's lead them to the behavior which makes the simple thing simple and
> then clearly open the door to them to increasingly complex and powerful
> things over time.


I can get behind this statement, but all I am trying to say is that Neutron
gives you the toolkit. How you, as a deployer, use it is up to you. A
deployer can today implement a shared publicly facing network to which VMs
can connect without problems. Now the issue may come from a user point
of view: does the user need to specify the network? Or create the
topology ahead of time? It's my understanding that this is what this thread
is about. If not, then I clearly need a crash course in English :)


>
>
>>     OH - well, one thing - that's that once there are two networks in an
>>     account you have to specify which one. This is really painful in
>>     nova clent. Say, for instance, you have a public network called
>>     "public" and a private network called "private" ...
>>
>>     You can't just say "nova boot --network=public" - nope, you need to
>>     say "nova boot --nics net-id=$uuid_of_my_public_network"
>>
>>     So I'd suggest 2 more things;
>>
>>     a) an update to python-novaclient to allow a named network to be
>>     passed to satisfy the "you have more than one network" - the nics
>>     argument is still useful for more complex things
>>
>
That's the one:

https://bugs.launchpad.net/python-novaclient/+bug/1496180
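The lookup such a client-side change needs is roughly the following (a hypothetical helper, not the actual python-novaclient patch): resolve the user-supplied name against the networks the cloud exposes, and fail loudly on ambiguity rather than guessing.

```python
# Sketch of name-to-UUID resolution for a "nova boot --network=public"
# style option. Illustrative only; network ids below are fake.

def resolve_network(networks, name):
    """networks: list of {'id': ..., 'name': ...} dicts from the API."""
    matches = [n for n in networks if n["name"] == name]
    if not matches:
        raise LookupError("no network named %r" % name)
    if len(matches) > 1:
        raise LookupError("multiple networks named %r; pass a UUID" % name)
    return matches[0]["id"]

nets = [{"id": "6a2c", "name": "public"}, {"id": "9f1b", "name": "private"}]
assert resolve_network(nets, "public") == "6a2c"
```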


>
>>     b) ability to say "vms in my cloud should default to being booted on
>>     the public network" or "vms in my cloud should default to being
>>     booted on a network owned by the user"
>>
>
I think this is very doable, with the caveat that we've got to figure out
how it plays out in the various deployment models that are feasible with
the toolkit that Neutron provides the deployer with. There may be multiple
public networks, multiple per-tenant networks, etc.

As Doug initially pointed out, the get-me-a-network effort [1] aims at simplifying
this workflow. We may need to refine it further based on your input, but
this is something that can be iterated on once we have the building block
in place, and furthermore once we have people actually working on it...bear
with us ;)

That's the other:

[1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network


>
>>     Thoughts?
>>
>>
>> As I implied earlier, I am not sure how healthy this choice is. As a
>> user of multiple clouds I may end up having a different user experience
>> based on which cloud I am using...I thought you were partially
>> complaining about lack of consistency?
>>
>
> I am a user of multiple clouds. I am complaining about the current lack of
> consistency.
>
> More than that though, I'm complaining that we lead people to select a
> floating-ip model when having them flip the boolean value of "shared=True"
> on their ext-net would make the end-user experience WAY nicer.


It's funny you mention that because today Neutron already has a concept of
shared networks. I can make a public network (router:external) shared. If
no logical tenant-owned network were available, then you'd boot
straight off the external network, so your use case is 90% already there.
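The fallback Armando describes (and roughly what get-me-a-network aims to automate) can be sketched as simple selection logic. This is illustrative pseudologic under the assumption of the network attributes Neutron exposes (`shared`, `router:external`), not real Nova/Neutron code:

```python
# Sketch of default-network selection: prefer a tenant-owned network,
# otherwise fall back to a shared external one.

def pick_default_network(networks, tenant_id):
    tenant_nets = [n for n in networks if n["tenant_id"] == tenant_id]
    if tenant_nets:
        return tenant_nets[0]
    shared = [n for n in networks
              if n["shared"] and n["router:external"]]
    if shared:
        return shared[0]
    raise LookupError("no usable network; specify one explicitly")

nets = [{"tenant_id": "ops", "shared": True, "router:external": True,
         "name": "ext-net"}]
assert pick_default_network(nets, "alice")["name"] == "ext-net"
```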


>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/321c90db/attachment.html>

From mordred at inaugust.com  Wed Sep 16 00:30:09 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 16 Sep 2015 02:30:09 +0200
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <93473B15-964C-4351-A0C1-03B8C59E0C94@parksidesoftware.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <CAK+RQeakD7H2=P1UFeHDY+GSo7nSzF_Z3KZ+OZsR0+qUYF6xjQ@mail.gmail.com>
 <93473B15-964C-4351-A0C1-03B8C59E0C94@parksidesoftware.com>
Message-ID: <55F8B811.2000302@inaugust.com>

On 09/16/2015 02:06 AM, Doug Wiegley wrote:
> Hi all,
>
> If I can attempt to summarize this thread:
>
> "We want simple networks with VMs"
>
> Ok, in progress, start here:
>
> https://blueprints.launchpad.net/neutron/+spec/get-me-a-network

\o/

> "It should work with multiple networks"
>
> Same spec, click above.

\o/

> "It should work with just 'nova boot'"
>
> Yup, you guessed it, starting point is the same spec, click above.

\o/

> "It should still work in the face of N-tiered ambiguity."
>
> Umm, how, exactly? I think if you have a super complicated setup, your
> boot might be a bit harder, too. Please look at the cases that are
> covered before getting upset, and then provide feedback on the spec.

Yeah. For the record, I have never thought the simple case should handle 
anything but the simple case.

> "Networks should be accessible by name."
>
> Yup, if they don't, it's a bug. The client a few cycles ago was
> particularly bad at this. If you find more cases, please file a bug.
>
> "Neutron doesn't get it and never will."
>
> I'm not sure how all "yes" above keeps translating to this old saw, but
> is there any tiny chance we can stop living in the past and instead
> focus on the use cases that we want to solve?

Also for the record, I find neutron very pleasant to work with. The 
clouds who have chosen to mark their "public" network as "shared" are 
the most pleasant- but even the ones who have not chosen to do that are 
still pretty darned good.

The clouds without neutron at all are the worst to work with.

>
>> On Sep 15, 2015, at 5:44 PM, Armando M. <armamig at gmail.com
>> <mailto:armamig at gmail.com>> wrote:
>>
>>
>>
>> On 15 September 2015 at 15:11, Mathieu Gagné <mgagne at internap.com
>> <mailto:mgagne at internap.com>> wrote:
>>
>>     On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
>>     > We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
>>     >
>>     > I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
>>     >
>>
>>     Sorry but I feel this kind of reply explains why people are still
>>     using
>>     nova-network over Neutron. People want simplicity and they are
>>     denied it
>>     at every corner because (I feel) Neutron thinks it knows better.
>>
>>
>> I am sorry, but how can you associate a person's opinion to a project,
>> which is a collectivity? Surely everyone is entitled to his/her
>> opinion, but I don't honestly believe these are fair statements to make.
>>
>>
>>     The original statement by Monty Taylor is clear to me:
>>
>>     I wish to boot an instance that is on a public network and reachable
>>     without madness.
>>
>>     As of today, you can't unless you implement a deployer/provider
>>     specific
>>     solution (to scale said network). Just take a look at what actual
>>     public
>>     cloud providers are doing:
>>
>>     - Rackspace has a "magic" public network
>>     - GoDaddy has custom code in their nova-scheduler (AFAIK)
>>     - iWeb (which I work for) has custom code in front of nova-api.
>>
>>     We are all writing our own custom code to implement what (we feel)
>>     Neutron should be providing right off the bat.
>>
>>
>> What is that you think Neutron should be providing right off the bat?
>> I personally have never seen you publicly report usability issues that
>> developers could go and look into. Let's escalate these so that the
>> Neutron team can be aware.
>>
>>
>>     By reading the openstack-dev [1], openstack-operators [2] lists,
>>     Neutron
>>     specs [3] and the Large Deployment Team meeting notes [4], you
>>     will see
>>     that what is suggested here (a scalable public shared network) is an
>>     objective we wish but are struggling hard to achieve.
>>
>>
>> There are many ways to skin this cat IMO, and scalable public shared
>> network can really have multiple meanings, I appreciate the pointers
>> nonetheless.
>>
>>
>>     People keep asking for simplicity and Neutron looks to not be able to
>>     offer it due to philosophical conflicts between Neutron developers and
>>     actual public users/operators. We can't force our users to adhere
>>     to ONE
>>     networking philosophy: use NAT, floating IPs, firewall, routers, etc.
>>     They just don't buy it. Period. (see monty's list of public providers
>>     attaching VMs to public network)
>>
>>
>> Public providers networking needs are not the only needs that Neutron
>> tries to gather. There's a balance to be struck, and I appreciate that
>> the balance may need to be adjusted, but being so dismissive is being
>> myopic of the entire industry landscape.
>>
>>
>>     If we can accept and agree that not everyone wishes to adhere to the
>>     "full stack of networking good practices" (TBH, I don't know how
>>     to call
>>     this thing), it will be a good start. Otherwise I feel we won't be
>>     able
>>     to achieve anything.
>>
>>     What Monty is explaining and suggesting is something we (my team) have
>>     been struggling with for *years* and just didn't have bandwidth
>>     (we are
>>     operators, not developers) or public charisma to change.
>>
>>     I'm glad Monty brought up this subject so we can officially
>>     address it.
>>
>>
>>     [1]
>>     http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
>>     [2]
>>     http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
>>     [3]
>>     http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
>>     [4]
>>     http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html
>>
>>     --
>>     Mathieu
>>
>>     __________________________________________________________________________
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe:
>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org
>> <mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From Kevin.Fox at pnnl.gov  Wed Sep 16 00:33:48 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 16 Sep 2015 00:33:48 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F89777.3000505@internap.com>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>,
 <55F89777.3000505@internap.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7BE668@EX10MBOX06.pnnl.gov>

I am not a neutron developer, but an operator and a writer of cloud apps.

Yes, it is sort of a philosophical issue, and I have stated my side of why I think the extra complexity is worth it. Feel free to disagree.

But either way I don't think we can ignore the complexity. There are three different ways to resolve it:

* Simple use case, no naas at all. The "simplest" solution, but app developers and users that actually need naas suffer; users/app developers have to add it on top themselves.
* Always naas. Ops (and perhaps users) have to deal with the extra complexity even if they feel they don't need it, but it's simpler in that you can always rely on it being there.
* No naas and naas are both supported. Ops get it easy: they pick which one they want. Users suffer a little if they work on multiple clouds that differ. App developers suffer a lot, since they have to either write two sets of software or pick the lowest common denominator.

It's an optimization problem: who do you shift the difficulty to?

My personal opinion again is that I'd rather suffer a little more as an Op and always deploy naas, rather than have to deal with the app developer pain of not being able to rely on it. The users/ops benefit the most if a strong app ecosystem can be developed on top.

Again, my personal opinion. Feel free to differ.

Thanks,
Kevin


________________________________________
From: Mathieu Gagné [mgagne at internap.com]
Sent: Tuesday, September 15, 2015 3:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the "just run it in on THE public network" doesn't work. :/
>
> I also strongly recommend to users to put vms on a private network and use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip that gets assigned to the vm helps it become a pet. you can't replace the vm and get the same IP. Floating IP's and load balancers can help prevent pets. It also prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or more the number of instances that are on the private network. Sure its easy to put everything on the public network, but it provides much better security if you only put what you must on the public network. Consider the internet. would you want to expose every device in your house directly on the internet? No. you put them in a private network and poke holes just for the stuff that does. we should be encouraging good security practices. If we encourage bad ones, then it will bite us later when OpenStack gets a reputation for being associated with compromises.
>

Sorry, but I feel this kind of reply explains why people are still using
nova-network over Neutron. People want simplicity and they are denied it
at every corner because (I feel) Neutron thinks it knows better.

The original statement by Monty Taylor is clear to me:

I wish to boot an instance that is on a public network and reachable
without madness.

As of today, you can't unless you implement a deployer/provider specific
solution (to scale said network). Just take a look at what actual public
cloud providers are doing:

- Rackspace has a "magic" public network
- GoDaddy has custom code in their nova-scheduler (AFAIK)
- iWeb (which I work for) has custom code in front of nova-api.

We are all writing our own custom code to implement what (we feel)
Neutron should be providing right off the bat.

By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
specs [3] and the Large Deployment Team meeting notes [4], you will see
that what is suggested here (a scalable public shared network) is an
objective we wish but are struggling hard to achieve.

People keep asking for simplicity, and Neutron appears unable to offer
it due to philosophical conflicts between Neutron developers and actual
public users/operators. We can't force our users to adhere to ONE
networking philosophy (use NAT, floating IPs, firewalls, routers, etc.).
They just don't buy it. Period. (See Monty's list of public providers
attaching VMs to the public network.)

If we can accept and agree that not everyone wishes to adhere to the
"full stack of networking good practices" (TBH, I don't know what to
call this thing), it will be a good start. Otherwise I feel we won't be
able to achieve anything.

What Monty is explaining and suggesting is something we (my team) have
been struggling with for *years*; we just didn't have the bandwidth (we
are operators, not developers) or the public charisma to change it.

I'm glad Monty brought up this subject so we can officially address it.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
[2]
http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
[3]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
[4]
http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html

--
Mathieu

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From armamig at gmail.com  Wed Sep 16 00:38:44 2015
From: armamig at gmail.com (Armando M.)
Date: Tue, 15 Sep 2015 17:38:44 -0700
Subject: [openstack-dev] [Neutron] PTL Candidacy
Message-ID: <CAK+RQeYMvmEpeCSYQCVmy=AR3w6+CCxvcwbN1=DtpJTPB4p8-g@mail.gmail.com>

I would like to propose my candidacy for the Neutron PTL.

If you are reading this and you know me, then you probably know what I have
been up to until now, what I have done for the project, and what I may
continue to do. If you do not know me, and you are still interested in
reading, then I will try not to bore you.

As a member of this project, I have been involved with it since the early
days, and I have served as a core developer since Havana. If you are
wondering whether I am partially to blame for the issues that affect
Neutron, well, you may have a point, but keep reading...

I believe that Neutron is a unique project and as such has unique
challenges. We have grown tremendously, mostly propelled by a highly
opinionated vendor perspective. This has caused us some problems, and we
set out a cycle or so ago to fix these while staying true to the nature
of our mission: define logical abstractions, and related implementations,
to provide on-demand, cloud-oriented networking services.

Like any other project in OpenStack, we are software and we mostly implement
'stuff' in software, and because of that we are prone to all the issues
that a software project may have. To this aim, going forward I would like
us to improve the following:


   - Stability is the priority: new features are important, but complete
   and well-tested existing features are more important; we have to figure
   out a way to bring the number of bugs down to a manageable level, just
   like nations are asked to keep their sovereign debt below a certain
   healthy threshold.
   - Narrow the focus: now that the Neutron 'stadium' is here with us,
   external plugins and drivers can integrate with Neutron in a loosely
   coupled manner, giving the core the opportunity to be more razor-focused
   on getting better at what we do: logical abstractions and pluggability.
   - Consistency is paramount: having grown the review team drastically
   over the past cycle, it is easy to skew quality in one area over another.
   We need to start defining common development and reviewer practices so
   that, even though we are made of many sub-projects and modules, we
   operate, feel and look like one...just like OpenStack :)
   - Define a long-term strategy: we need to have an idea of where Neutron
   starts and where Neutron ends. At some point, this project will reach
   enough maturity that we feel like we are 'done', and that's okay. Some
   of us will move on to the next big thing.
   - Keep developers and reviewers _aware_: we all have to work
   collectively towards a common set of goals, defined by the release cycle.
   We will have to learn to push back on _random_ forces that keep distracting
   us.
   - I would like to promote a 'you merge it, you own it' type of
   mentality: even though we are pretty good at it already, we need a better
   balance between reviews and contributions. If you bless a patch, you have
   to be prepared to dive into the issues that it may potentially cause. If
   you bless a patch, you have to be prepared to improve the code around it,
   and so on. You will be a better reviewer if you learn to live with the
   pain of your mistakes. This is the only way to establish a virtuous cycle
   where quality improves over time.

And last but not least:


   - Improve the relationships with other projects: Nova and QA primarily.
   We should allocate enough bandwidth to address integration issues with
   Nova and the other emerging projects, so that we stay plugged in with
   them. QA is also paramount so that no-one ends up hating us because we
   send the gate
   belly up. As for nova-network, I must admit I am highly skeptical by now:
   if our community were a commercial enterprise trying to solve that
   problem, we would have run out of money a long time ago. We tried time
   and time again to crack this nut open, and even though we made progress
   in a number of areas, we haven't really budged where some people felt it
   mattered. We need to recognize that the problem is not just technical...
   it is social; no-one, starting from the developers and the employers
   behind them, seems to be genuinely concerned with the need of making
   nova-network a thing of the past. They have other priorities, they are
   chasing new customers, they want to disrupt Amazon. None of this
   nova-network deprecation drama fits with their agendas and, furthermore,
   even if we found non-corporate sponsored developers willing to work on
   it, let's face it: migration is a problem that
   is really not that interesting to solve. So where do we go from here? I do
   not have a clear answer yet. However, I think we all agree that the Neutron
   team wants to make Neutron a better product, more aligned with the needs of
   our users, but we must recognize that _better_ does not mean *like*
   nova-network, because the two products are not the same and they never will
   be.

Ok, now that you have read this, you are ready to decide whether you want
to vote for me. Having said that, if you think that I am doing a fine job
as a core reviewer, you trust my in-depth technical contributions, and
you're worried that my PTL duties may take that away from you...exercise
your voting rights!

Thanks for reading and forgive the typos!

Armando Migliaccio (aka armax)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/dc88b4c5/attachment.html>

From Kevin.Fox at pnnl.gov  Wed Sep 16 00:41:01 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 16 Sep 2015 00:41:01 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <CALiLy7o_ypCKpN6iqen57fuDsYSzYvd5zZqznXYbosuuOWPU2w@mail.gmail.com>
References: <55F83367.9050503@inaugust.com> <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <1442347586.3210663.384528809.28DBF360@webmail.messagingengine.com>
 <1A3C52DFCD06494D8528644858247BF01A31041D@EX10MBOX03.pnnl.gov>,
 <CALiLy7o_ypCKpN6iqen57fuDsYSzYvd5zZqznXYbosuuOWPU2w@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7BE689@EX10MBOX06.pnnl.gov>

Let me rephrase that to be more explicit. Certain organizations, for various reasons, require people in the process chain to actually make DNS changes... No amount of automation can easily address that issue. Hopefully that can change in time. But stuff as simple as letting users launch VMs can be a hard enough sell, without requiring automated access to change DNS.

That being said, that's a cool patch. I wish I could use it. :) Hopefully some day.

Thanks,
Kevin
________________________________________
From: Carl Baldwin [carl at ecbaldwin.net]
Sent: Tuesday, September 15, 2015 3:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

On Tue, Sep 15, 2015 at 3:09 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> DNS is preferable, since humans don't remember ip's very well. IPv6 is much harder to remember then v4 too.
>
> DNS has its own issues, mostly, its usually not very quick to get a DNS entry updated.  At our site (and I'm sure, others), I'm afraid to say in some cases it takes as long as 24 hours to get updates to happen. Even if that was fixed, caching can bite you too.

We also have work going on now to automate the addition and update of
DNS entries as VMs come and go [1].  Please have a look and provide
feedback.

[1] https://review.openstack.org/#/q/topic:bp/external-dns-resolution,n,z

Carl



From Kevin.Fox at pnnl.gov  Wed Sep 16 00:43:29 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 16 Sep 2015 00:43:29 +0000
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F8AABE.4080307@inaugust.com>
References: <55F83367.9050503@inaugust.com>
 <OF6AF91F2E.EB875475-ON85257EC1.0054B0EC-85257EC1.0054E366@notes.na.collabserv.com>
 <55F86358.3050705@linux.vnet.ibm.com>,<55F8AABE.4080307@inaugust.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7BE6A2@EX10MBOX06.pnnl.gov>

+1
________________________________________
From: Monty Taylor [mordred at inaugust.com]
Sent: Tuesday, September 15, 2015 4:33 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

On 09/15/2015 08:28 PM, Matt Riedemann wrote:
>
>
> On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
>> Monty Taylor <mordred at inaugust.com> wrote on 09/15/2015 11:04:07 AM:
>>
>>  > a) an update to python-novaclient to allow a named network to be
>> passed
>>  > to satisfy the "you have more than one network" - the nics argument is
>>  > still useful for more complex things
>>
>> I am not using the latest, but rather Juno.  I find that in many places
>> the Neutron CLI insists on a UUID when a name could be used.  Three
>> cheers for any campaign to fix that.
>
> It's my understanding that network names in neutron, like security
> groups, are not unique, that's why you have to specify a UUID.

Yah.

EXCEPT - we already error when the user does not specify the network
specifically enough, so there is nothing stopping us from trying the
obvious thing and then moving on. Such as:

nova boot

ERROR: There are more than one network, please specify one

nova boot --network public

\o/

OR

nova boot

ERROR: There are more than one network, please specify one

nova boot --network public

ERROR: There are more than one network named 'public', please specify one

nova boot --network ecc967b6-5c01-11e5-b218-4c348816caa1

\o/

These are successive attempts at an operation that should be simple; as
the situation becomes increasingly complex, so does the specificity
required of the user's response.
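The escalation above can be sketched as a small resolution routine (assumed logic for illustration, not actual python-novaclient code): try the identifier as a UUID, fall back to a name match, and error only when the result is genuinely ambiguous.

```python
class AmbiguousNetwork(Exception):
    pass

class NoSuchNetwork(Exception):
    pass

def resolve_network(networks, identifier=None):
    """Resolve a user-supplied network identifier.

    networks: list of dicts with 'id' and 'name' keys, as a client
    might get back from a network-list call.
    """
    if identifier is None:
        # No hint given: only succeed when there is exactly one network.
        if len(networks) == 1:
            return networks[0]
        raise AmbiguousNetwork("More than one network; please specify one")
    # An exact UUID match wins outright.
    for net in networks:
        if net["id"] == identifier:
            return net
    # Otherwise match by name; names are not unique in Neutron,
    # so two matches means the user must fall back to a UUID.
    by_name = [n for n in networks if n["name"] == identifier]
    if len(by_name) == 1:
        return by_name[0]
    if by_name:
        raise AmbiguousNetwork(
            "More than one network named %r; please specify a UUID"
            % identifier)
    raise NoSuchNetwork(identifier)
```

This keeps the happy path ("nova boot --network public") one word long while still degrading gracefully to UUIDs in clouds with duplicate names.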



From spradhan.17 at gmail.com  Wed Sep 16 01:23:49 2015
From: spradhan.17 at gmail.com (Sagar Pradhan)
Date: Wed, 16 Sep 2015 06:53:49 +0530
Subject: [openstack-dev] [Policy][Group-based-policy]
In-Reply-To: <CAMWrLvgyOdX0RKgYEOZ8D4sn=4KhvNqy6X15xbJM8_vF-N1SaA@mail.gmail.com>
References: <CAGCtZnNLUqD7rLLSoiQjE7n1guY1hPgzo=X9t=hRKTaxyzdLZg@mail.gmail.com>
 <CAMWrLvgyOdX0RKgYEOZ8D4sn=4KhvNqy6X15xbJM8_vF-N1SaA@mail.gmail.com>
Message-ID: <CAGCtZnMpvVjin57-=t6m+rqZ2KLwTabph_m8kcfg8S6=kSJ3Nw@mail.gmail.com>

Thanks Sumit.
Actually I also did the same thing, using the --debug option with the CLI
to find the REST requests and responses.
Thanks for the help. Will get back to you if required.

Regards,
Sagar
On Sep 15, 2015 11:59 PM, "Sumit Naiksatam" <sumitnaiksatam at gmail.com>
wrote:

> Hi Sagar,
>
> GBP has a single REST API interface. The CLI, Horizon and Heat are
> merely clients of the same REST API.
>
> There was a similar question on this which I had responded to in a
> different mailer:
> http://lists.openstack.org/pipermail/openstack/2015-September/013952.html
>
> and I believe you are cc'ed on that thread. I have provided more
> information on how you can run the CLI in the verbose mode to explore
> the REST request and responses. Hope that will be helpful, and we are
> happy to guide you through this exercise (catch us on #openstack-gbp
> for real time help).
>
> Thanks,
> ~Sumit.
>
> On Tue, Sep 15, 2015 at 3:45 AM, Sagar Pradhan <spradhan.17 at gmail.com>
> wrote:
> >
> >  Hello ,
> >
> > We were exploring group based policy for some project.We could find CLI
> and
> > REST API documentation for GBP.
> > Do we have separate REST API for GBP which can be called separately ?
> > From documentation it seems that we can only use CLI , Horizon and Heat.
> > Please point us to CLI or REST API documentation for GBP.
> >
> >
> > Regards,
> > Sagar
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/8e229d36/attachment.html>

From tony at bakeyournoodle.com  Wed Sep 16 01:25:41 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 16 Sep 2015 11:25:41 +1000
Subject: [openstack-dev] [all][elections] Last hours for PTL candidate
	announcements
Message-ID: <20150916012541.GC54897@thor.bakeyournoodle.com>

A quick reminder that we are in the last hours for PTL candidate announcements.

If you want to stand for PTL, don't delay, follow the instructions on the
wikipage and make sure we know your intentions:
  https://wiki.openstack.org/wiki/PTL_Elections_September_2015

Make sure your candidacy has been submitted to the openstack/election
repository and approved by the election officials.

Some statistics:
Nominations started       @ 2015-09-11 05:59:00 UTC
Nominations end           @ 2015-09-17 05:59:00 UTC
Nominations duration      : 6 days, 0:00:00
Nominations remaining     : 1 day, 5:59:00
Nominations progress      :  79.18%
---------------------------------------------------
Projects                  :    43
Projects with candidates  :    22 ( 51.16%)
Projects with election    :     5 ( 11.63%)
===================================================
Stats gathered            @ 2015-09-16 00:00:00 UTC

This means that, with slightly more than 1 day left, nearly 50% of projects
will be deemed leaderless.
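The progress and remaining-time figures above follow directly from the nomination window; as a quick sanity check (a sketch using only the dates printed in the statistics):

```python
from datetime import datetime

# Nomination window and stats-gathering time, from the table above.
start = datetime(2015, 9, 11, 5, 59)
end = datetime(2015, 9, 17, 5, 59)
gathered = datetime(2015, 9, 16, 0, 0)

duration = end - start           # 6 days, 0:00:00
remaining = end - gathered       # 1 day, 5:59:00
elapsed = gathered - start

progress = elapsed.total_seconds() / duration.total_seconds() * 100

print("Nominations remaining: %s" % remaining)   # 1 day, 5:59:00
print("Nominations progress : %.2f%%" % progress)  # 79.18%
```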

In this case the TC will be bound by [1].

Yours Tony.

[1] http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/9529788c/attachment.pgp>

From mgagne at internap.com  Wed Sep 16 01:47:28 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 15 Sep 2015 21:47:28 -0400
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7BE668@EX10MBOX06.pnnl.gov>
References: <55F83367.9050503@inaugust.com>
 <CAK+RQeZzuMuABuJ9qzNvj8yF_0euKxe2AV5jFtfeJnDctL024Q@mail.gmail.com>
 <1442336141-sup-9706@lrrr.local>
 <1A3C52DFCD06494D8528644858247BF01A310241@EX10MBOX03.pnnl.gov>
 <55F89777.3000505@internap.com>
 <1A3C52DFCD06494D8528644858247BF01B7BE668@EX10MBOX06.pnnl.gov>
Message-ID: <55F8CA30.7060503@internap.com>

Hi Kevin,

On 2015-09-15 8:33 PM, Fox, Kevin M wrote:
> I am not a neutron developer, but an operator and a writer of cloud apps.

So far, I'm only an operator and a heavy cloud app user (if you can call
Vagrant an app). =)


> Yes, it is sort of a philosophical issue, and I have stated my side
> of why I think the extra complexity is worth it. Feel free to
> disagree.

I don't disagree with the general idea of NaaS or SDN. We are looking to
offer this stuff in the future so customers wishing to have more control
over their networks can have it.

I would however like other solutions (which don't require mandatory
NATing, floating IPs, and routers) to be accepted and fully supported
as first-class citizens.


> But either way I don't think we can ignore the complexity. There are
> three different ways to resolve it:
> 
> * No naas and naas are both supported. Ops get it easy. they pick
> which one they want, users suffer a little if they work on multiple
> clouds that differ. app developers suffer a lot since they have to
> write either two sets of software or pick the lowest common
> denominator.
> 
> Its an optimization problem. who do you shift the difficulty to?
> 
> My personal opinion again is that I'd rather suffer a little more as
> an Op and always deploy naas, rather then have to deal with the app
> developer pain of not being able to rely on it. The users/ops benefit
> the most if a strong app ecosystem can be developed on top.


So far, I'm aiming for "No naas and naas are both supported.":

- No NaaS: A public shared and routable provider network.
- NaaS: All the goodness of SDN and private networks.

While NaaS is a very nice feature for cloud app writers, we found that
our type of users actually don't ask for it (yet) and are looking for
simplicity instead.

BTW, let me know if I got my understanding of "No naas" (very?) wrong.


As Monty Taylor said [3], we should be able to "nova boot" or "nova boot
--network public" just fine.

So let's assume I don't have NaaS yet. I only have one no-NaaS network
named "public" available to all tenants.

With this public shared network from no-NaaS, you should be able to boot
just fine. Your instance ends up on a public shared network with a
public IP address without NATing/Floating IPs and such. (Note that we
implemented anti-spoofing on those networks)


Now you wish to use NaaS. So you create a network named "private" or
whatever you feel like naming it.

You should be fine too with "nova boot --network private", by making sure
the network name doesn't conflict with the public shared network.
Otherwise you can provide the network UUID just like before. I agree
that you lose the ability to "nova boot" without "--network". See below.


The challenge I see here is with both "no-NaaS" and "NaaS". Now you
could end up with 2 or more networks to choose from and "nova boot"
alone will get confused.


My humble suggestions are:

- Create a new client-side config to tell which network name to choose
(OS_NETWORK_NAME?) so you don't have to type it each time.
- Create a tenant specific server-side config (stored somewhere in
Nova?) to tell which network name to choose from by default.

This will restore the coolness of "nova boot" without specifying
"--network".

If your application requires a lot of networks (and complexity), I'm sure
plain "nova boot" is nonsense to you anyway and you will provide the
actual list of networks to boot on.
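A sketch of how such a default-selection chain might behave (note that the OS_NETWORK_NAME variable and the tenant-level default are this email's proposals, not existing Nova features):

```python
import os

def pick_default_network(networks, cli_network=None,
                         tenant_default=None, env=None):
    """Return the network name to boot on, or None if still ambiguous.

    Proposed precedence: an explicit --network argument, then a
    client-side OS_NETWORK_NAME environment variable, then a
    tenant-level server-side default, then the only network if
    exactly one exists.
    """
    if env is None:
        env = os.environ
    for candidate in (cli_network,
                      env.get("OS_NETWORK_NAME"),
                      tenant_default):
        if candidate:
            return candidate
    if len(networks) == 1:
        return networks[0]["name"]
    return None  # caller must ask the user to disambiguate
```

With either config in place, a bare "nova boot" would stay unambiguous even when both a no-NaaS and a NaaS network are visible to the tenant.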


Regarding the need and wish of users to keep their public IPs, you can
still use floating IPs in both cases. It's a matter of educating the
users that public IPs on the no-NaaS network aren't preserved on
destruction. I'm planning to use routes instead of NATing for the public
shared network.


So far, what I'm missing to create a truly scalable public shared
network is what is described here [2] and here [3], as you just can't
scale your L2 network infinitely. (The same goes for private NaaS
networks, but that's another story.)


Note that I'm fully aware that it creates a lot of challenges on the
Nova side related to scheduling, resizing, live migrations, evacuate,
etc. But I'm confident that those challenges aren't impossible to overcome.


Kevin, let me know if I missed a known use case you might be actively
using. I never fully used the NaaS part of Neutron so I can't tell for
sure. Or maybe I'm just stating obvious stuff and completely missing the
point of this thread. :D


[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074618.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
[3] https://review.openstack.org/#/c/196812/


-- 
Mathieu


From Vijay.Venkatachalam at citrix.com  Wed Sep 16 02:06:39 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Wed, 16 Sep 2015 02:06:39 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access to
 all tenant's certificates
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>

Hi,
               Is there a way to provide read access to a certain user to all secrets/containers of all projects'/tenants' certificates?
               This user, with universal "read" privileges, will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users follow the process below:

1.      The tenant's creator/admin user uploads certificate info as secrets and a container

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3.      The user creates an LB config with the container reference

4.      The LBaaS plugin, using the service user, accesses the container reference provided in the LB config and proceeds to implement it.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.
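A minimal sketch of the proposed step 5 check (the ACL dict shape and function names here are assumptions for illustration, not Barbican's actual API):

```python
def user_can_read(container_acl, user_id):
    """Decide whether user_id may read a container.

    container_acl is assumed to look like:
        {"read": {"project-access": bool, "users": [user_id, ...]}}
    mirroring Barbican's per-operation ACL model: either the whole
    owning project has access, or specific users are granted it.
    """
    read = container_acl.get("read", {})
    if read.get("project-access", True):
        # Default: any user in the owning project may read.
        return True
    return user_id in read.get("users", [])

def validate_lb_config(container_acl, requesting_user):
    # Proposed step 5: the service user rejects LB configs that reference
    # containers the requesting user cannot read, instead of requiring
    # the user to grant ACLs to the service user up front (step 2).
    if not user_can_read(container_acl, requesting_user):
        raise PermissionError(
            "user lacks read access to the certificate container")
```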

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/861c73f7/attachment.html>

From mkassawara at gmail.com  Wed Sep 16 02:20:43 2015
From: mkassawara at gmail.com (Matt Kassawara)
Date: Tue, 15 Sep 2015 20:20:43 -0600
Subject: [openstack-dev] [nova][neutron][devstack] New proposed
 'default' network model
In-Reply-To: <55F83367.9050503@inaugust.com>
References: <55F83367.9050503@inaugust.com>
Message-ID: <CABA+jQqEkEwbcKoVjdB+CikMG6DGBrj+_AYLY04xuZwHa162wg@mail.gmail.com>

Monty,

The architectural changes to the installation guide for Liberty [1] support
booting VMs on both the public/external/provider and
private/project/self-service networks.

Also, we should consider including similar "hybrid" scenarios in the
networking guide [2] so deployers don't have to choose between these
architectures.

[1] https://review.openstack.org/#/c/221560/
[2] http://docs.openstack.org/networking-guide/deploy.html

Matt

On Tue, Sep 15, 2015 at 9:04 AM, Monty Taylor <mordred at inaugust.com> wrote:

> Hey all!
>
> If any of you have ever gotten drunk with me, you'll know I hate floating
> IPs more than I hate being stabbed in the face with a very angry fish.
>
> However, that doesn't really matter. What should matter is "what is the
> most sane thing we can do for our users"
>
> As you might have seen in the glance thread, I have a bunch of OpenStack
> public cloud accounts. Since I wrote that email this morning, I've added
> more - so we're up to 13.
>
> auro
> citycloud
> datacentred
> dreamhost
> elastx
> entercloudsuite
> hp
> ovh
> rackspace
> runabove
> ultimum
> unitedstack
> vexxhost
>
> Of those public clouds, 5 of them require you to use a floating IP to get
> an outbound address, the others directly attach you to the public network.
> Most of those 8 allow you to create a private network, to boot vms on the
> private network, and ALSO to create a router with a gateway and put
> floating IPs on your private ip'd machines if you choose.
>
> Which brings me to the suggestion I'd like to make.
>
> Instead of having our default in devstack and our default when we talk
> about things be "you boot a VM and you put a floating IP on it" - which
> solves one of the two usage models - how about:
>
> - Cloud has a shared: True, external:routable: True neutron network. I
> don't care what it's called  ext-net, public, whatever. the "shared" part
> is the key, that's the part that lets someone boot a vm on it directly.
>
> - Each person can then make a private network, router, gateway, etc. and
> get floating-ips from the same public network if they prefer that model.
>
> Are there any good reasons to not push to get all of the public networks
> marked as "shared"?
>
> OH - well, one thing - that's that once there are two networks in an
> account you have to specify which one. This is really painful in nova
> clent. Say, for instance, you have a public network called "public" and a
> private network called "private" ...
>
> You can't just say "nova boot --network=public" - nope, you need to say
> "nova boot --nics net-id=$uuid_of_my_public_network"
>
> So I'd suggest 2 more things;
>
> a) an update to python-novaclient to allow a named network to be passed to
> satisfy the "you have more than one network" - the nics argument is still
> useful for more complex things
>
> b) ability to say "vms in my cloud should default to being booted on the
> public network" or "vms in my cloud should default to being booted on a
> network owned by the user"
>
> Thoughts?
>
> Monty
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150915/ff625166/attachment.html>

From ayshihanzhang at 126.com  Wed Sep 16 02:40:37 2015
From: ayshihanzhang at 126.com (shihanzhang)
Date: Wed, 16 Sep 2015 10:40:37 +0800 (CST)
Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate and
 maintainability
In-Reply-To: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
References: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
Message-ID: <5988f344.fc6.14fd4067572.Coremail.ayshihanzhang@126.com>

Sean, 
Thank you very much for writing this. DVR indeed needs more attention; it's a very cool and useful feature, especially at large scale. It first landed in Neutron in Juno and, through the development of Kilo and Liberty, it has gotten better and better. We have used it in our production,
and in the process we found the following bugs, which have not been fixed; we have filed bugs on Launchpad:
1. Every time we create a VM, it triggers router scheduling. At large scale, if there are many L3 agents bound to a DVR router, scheduling the router consumes a lot of time, but the scheduling action is not necessary. [1]
2. Every time we bind a VM to a floating IP, it also triggers router scheduling and sends this floating IP to all bound
L3 agents. [2]
3. After bulk-deleting VMs from a compute node, leaving no VM on this router, for the most part the router namespace will remain. [3]
4. Updating router_gateway triggers reschedule_router, during which communication related to this router is broken. For a DVR router, why does the router need reschedule_router at all, and to which L3 agents does it reschedule? [4]
5. Stale fip namespaces are not cleaned up on compute nodes. [5]


I very much agree that we need a group of contributors who can help with
the DVR feature in the immediate term to fix the current bugs.
I am very glad to join this group.


Neutron developers, let's start doing great things!


Thanks,
Hanzhang,Shi


[1] https://bugs.launchpad.net/neutron/+bug/1486795
[2] https://bugs.launchpad.net/neutron/+bug/1486828
[3] https://bugs.launchpad.net/neutron/+bug/1496201
[4] https://bugs.launchpad.net/neutron/+bug/1496204
[5] https://bugs.launchpad.net/neutron/+bug/1470909







At 2015-09-15 06:01:03, "Sean M. Collins" <sean at coreitpro.com> wrote:
>[adding neutron tag to subject and resending]
>
>Hi,
>
>Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
>at the QA sprint in Fort Collins. Earlier today there was a discussion
>about the failure rate about the DVR job, and the possible impact that
>it is having on the gate.
>
>Ryan has a good patch up that shows the failure rates over time:
>
>https://review.openstack.org/223201
>
>To view the graphs, you go over into your neutron git repo, and open the
>.html files that are present in doc/dashboards - which should open up
>your browser and display the Graphite query.
>
>Doug put up a patch to change the DVR job to be non-voting while we
>determine the cause of the recent spikes:
>
>https://review.openstack.org/223173
>
>There was a good discussion after pushing the patch, revolving around
>the need for Neutron to have DVR, to fit operational and reliability
>requirements, and help transition away from Nova-Network by providing
>one of many solutions similar to Nova's multihost feature.  I'm skipping
>over a huge amount of context about the Nova-Network and Neutron work,
>since that is a big and ongoing effort. 
>
>DVR is an important feature to have, and we need to ensure that the job
>that tests DVR has a high pass rate.
>
>One thing that I think we need, is to form a group of contributors that
>can help with the DVR feature in the immediate term to fix the current
>bugs, and longer term maintain the feature. It's a big task and I don't
>believe that a single person or company can or should do it by themselves.
>
>The L3 group is a good place to start, but I think that even within the
>L3 team we need dedicated and diverse group of people who are interested
>in maintaining the DVR feature. 
>
>Without this, I think the DVR feature will start to bit-rot and that
>will have a significant impact on our ability to recommend Neutron as a
>replacement for Nova-Network in the future.
>
>-- 
>Sean M. Collins
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/cb348209/attachment.html>

From jsbryant at electronicjungle.net  Wed Sep 16 03:50:20 2015
From: jsbryant at electronicjungle.net (Jay S. Bryant)
Date: Tue, 15 Sep 2015 22:50:20 -0500
Subject: [openstack-dev] [cinder] PTL Non-Candidacy
In-Reply-To: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
References: <CAHcn5b1==WzRXjdwThP1knMZ5T9ZEVtFN9kN3W+A-F_5BiQOcw@mail.gmail.com>
Message-ID: <55F8E6FC.7030206@electronicjungle.net>

Mike,

We were able to do some great things under your leadership. Cinder 
has greatly benefited from your time as PTL.

Thanks for all you have done!

Jay

On 09/14/2015 11:15 AM, Mike Perez wrote:
> Hello all,
>
> I will not be running for Cinder PTL this next cycle. Each cycle I ran
> was for a reason [1][2], and the Cinder team should feel proud of our
> accomplishments:
>
> * Spearheading the Oslo work to allow *all* OpenStack projects to have
> their database being independent of services during upgrades.
> * Providing quality to OpenStack operators and distributors with over
> 60 accepted block storage vendor drivers with reviews and enforced CI
> [3].
> * Helping other projects with third party CI for their needs.
> * Being a welcoming group to new contributors. As a result we grew greatly [4]!
> * Providing documentation for our work! We did it for Kilo [5], and I
> was very proud to see the team has already started doing this on their
> own to prepare for Liberty.
>
> I would like to thank this community for making me feel accepted in
> 2010. I would like to thank John Griffith for starting the Cinder
> project, and empowering me to lead the project through these couple of
> cycles.
>
> With the community's continued support I do plan on continuing my
> efforts, but focusing cross project instead of just Cinder. The
> accomplishments above are just some of the things I would like to help
> others with to make OpenStack as a whole better.
>
>
> [1] - http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
> [2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
> [3] - http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
> [4] - http://thing.ee/cinder/active_contribs.png
> [5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7
>
> --
> Mike Perez
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From dklyle0 at gmail.com  Wed Sep 16 04:36:25 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Tue, 15 Sep 2015 22:36:25 -0600
Subject: [openstack-dev]  [horizon] PTL Candidacy
Message-ID: <CAFFhzB67TynzjAHCKyVvjbie+-iWF_4H5w6mVyrz-s=tg3kFDw@mail.gmail.com>

I would like to announce my candidacy for Horizon PTL for Mitaka.

I've been contributing to Horizon since the Grizzly cycle and I've had the
honor of serving as PTL for the past four cycles.

Over the past couple of releases, our main goal has been to position Horizon
for the future while maintaining a stable, extensible project for current
installations and providing a smooth path forward for those installations,
which is proving a delicate balancing act. In Kilo, we added a great deal
of toolkit for AngularJS based content and took a first pass at some AngularJS
driven content in Horizon. Much of the Liberty cycle was spent applying the
lessons we learned from the Kilo work and correcting architectural issues.
While the amount of AngularJS based content is not growing quickly in Horizon,
we have created a framework that plugins are building on.

We've had several successes in the Liberty cycle.
    We have a more complete plugin framework to allow for an increasing number
    of projects in the big tent to create Horizon content. The plugin framework
    works for both Django based and AngularJS based plugins.

    Theming improvements have continued, and theming is now far more powerful.

    Many improvements in the AngularJS tooling, including: sensible
    localization support for AngularJS code; a more coherent foundation for
    JavaScript code; better testing support; and an implemented JS coding
    style.

Areas of focus for the Mitaka cycle:
    Stability. Continue to balance progress and stability.

    Finding a better way to allow forward progress on AngularJS content inside
    of Horizon. I've been advocating the use of feature branches for some time
    and will look to push work there to help establish the patterns for
    Angular in Horizon.

    Continue progress in moving separable content out of the Horizon source
    tree. This will both help service teams make faster progress and reduce
    the overall scope of the Horizon project.

    Focus work on areas of high benefit. There are several reasons we chose
    to adopt AngularJS. Most were around scaling, usability and access to
    data. Let's focus on the areas with the greatest upside first.

    Provide better guidance for plugins in the form of testing and style
    guidelines.

I'm still driven to continue the challenging work the Horizon community has
undertaken to improve and look forward. If you'll have me, I'd like to continue
enabling the talented folks doing the heavy lifting while balancing the needs
of existing users. I believe if we continue to work through some of these
transitional pains, we'll make significant progress in Mitaka.

Thanks for your consideration,
David Lyle


From philipp.marek at linbit.com  Wed Sep 16 05:31:48 2015
From: philipp.marek at linbit.com (Philipp Marek)
Date: Wed, 16 Sep 2015 07:31:48 +0200
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55F857A2.3020603@redhat.com>
References: <55F84EA7.9080902@windriver.com>
 <55F857A2.3020603@redhat.com>
Message-ID: <20150916053146.GD24246@cacao.linbit>

> > I'm currently trying to work around an issue where activating LVM
> > snapshots created through cinder takes potentially a long time. 
[[ thick LVM snapshot performance problem ]]
> > Given the above, is there any reason why we couldn't make thin
> > provisioning the default?
> 
> My intention is to move toward thin-provisioned LVM as the default -- it
> is definitely better suited to our use of LVM.
...
> The other issue preventing using thin by default is that we default the
> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
> the reference implementation, since it means that people who deploy
> Cinder LVM on smaller storage configurations can easily fill up their
> volume group and have things grind to halt.  I think we want something
> closer to the semantics of thick LVM for the default case.
The DRBDmanage backend has to deal with the same problem.

We decided to provide 3 different storage strategies:

 * Use Thick LVs - with the known performance implications when using
   snapshots.
 * Use one Thin Pool for all volumes - this uses the available space
   "optimally", but has the oversubscription problem mentioned above.
 * Use multiple Thin Pools, one for each volume.
   This provides efficient snapshots *and* space reservation for each
   volume.
   
The last strategy is no panacea, though - something needs to check the free
space in each pool, because the snapshots can still fill it up...
Without impacting the other volumes, though.
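As a rough illustration of that last point, the free-space watchdog could be
as small as a parser over `lvs` output. (The pool names, field choice and
threshold below are assumptions for the sketch, not part of DRBDmanage.)

```python
def pools_near_full(lvs_output, threshold=80.0):
    """Parse `lvs --noheadings -o lv_name,data_percent` output and
    return the thin pools whose data usage exceeds the threshold."""
    full = []
    for line in lvs_output.splitlines():
        fields = line.split()
        if len(fields) != 2:
            continue  # skip blank lines and LVs without a data% column
        name, pct = fields[0], float(fields[1])
        if pct > threshold:
            full.append((name, pct))
    return full

# In a real deployment the input would come from
# subprocess.check_output(['lvs', '--noheadings',
#                          '-o', 'lv_name,data_percent'])
sample = """
  pool_vol1   42.10
  pool_vol2   91.55
"""
print(pools_near_full(sample))
```

Something like this, run periodically, could alert before snapshots exhaust a
per-volume pool.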




From asha.seshagiri at gmail.com  Wed Sep 16 05:39:32 2015
From: asha.seshagiri at gmail.com (Asha Seshagiri)
Date: Wed, 16 Sep 2015 00:39:32 -0500
Subject: [openstack-dev] Barbican : Unable to run barbican CURL commands
 after starting/restarting barbican using the service file
Message-ID: <CAM0q7HBe5fs5p3s5VtUZivzUDPOXvOBDcssdvz5CnHqwAcFTMA@mail.gmail.com>

Hi All,

I am unable to run barbican CURL commands after starting/restarting
barbican using the service file.

Used the below command to run the barbican service file:
(wheel)[root at controller-01 service]# systemctl restart barbican-api.service

When I tried executing the command to create the secret, I did not get any
response from the server.

(wheel)[root at controller-01 service]# ps -ef | grep barbican
barbican  1104     1  0 22:56 ?        00:00:00 /opt/barbican/bin/uwsgi
--master --emperor /etc/barbican/vassals
barbican  1105  1104  0 22:56 ?        00:00:00 /opt/barbican/bin/uwsgi
--master --emperor /etc/barbican/vassals
barbican  1106  1105  0 22:56 ?        00:00:00 /opt/barbican/bin/uwsgi
--ini barbican-api.ini
root      3195 28132  0 23:03 pts/0    00:00:00 grep --color=auto barbican

Checked the status of the barbican-api.service file and got the following
response:

(wheel)[root at controller-01 service]# systemctl status  barbican-api.service
-l
barbican-api.service - Barbican Key Management API server
   Loaded: loaded (/usr/lib/systemd/system/barbican-api.service; enabled)
   Active: active (running) since Tue 2015-09-15 22:56:12 UTC; 2min 17s ago
 Main PID: 1104 (uwsgi)
   Status: "The Emperor is governing 1 vassals"
   CGroup: /system.slice/barbican-api.service
           ??1104 /opt/barbican/bin/uwsgi --master --emperor
/etc/barbican/vassals
           ??1105 /opt/barbican/bin/uwsgi --master --emperor
/etc/barbican/vassals
           ??1106 /opt/barbican/bin/uwsgi --ini barbican-api.ini

Sep 15 22:58:30 controller-01 uwsgi[1104]: APP, pipeline[-1], global_conf)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 458, in get_context
Sep 15 22:58:30 controller-01 uwsgi[1104]: section)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 517, in _context_from_explicit
Sep 15 22:58:30 controller-01 uwsgi[1104]: value = import_string(found_expr)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 22, in import_string
Sep 15 22:58:30 controller-01 uwsgi[1104]: return pkg_resources.EntryPoint.parse("x=" + s).load(False)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/pkg_resources.py", line 2265, in load
Sep 15 22:58:30 controller-01 uwsgi[1104]: raise ImportError("%r has no %r attribute" % (entry,attr))
Sep 15 22:58:30 controller-01 uwsgi[1104]: ImportError: <module 'barbican.api.app' from '/opt/barbican/lib/python2.7/site-packages/barbican/api/app.pyc'> has no 'create_main_app_v1' attribute


Please find contents of  barbican-api. service file :

[Unit]
Description=Barbican Key Management API server
After=syslog.target network.target

[Service]
Type=simple
NotifyAccess=all
User=barbican
KillSignal=SIGINT
ExecStart={{ barbican_virtualenv_path }}/bin/uwsgi --master --emperor
/etc/barbican/vassals
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Even though barbican is running successfully, we are unable to run the
CURL command. I would like to know whether the "ImportError: <module
'barbican.api.app' from
'/opt/barbican/lib/python2.7/site-packages/barbican/api/app.pyc'> has no
'create_main_app_v1' attribute" is the cause of not being able to execute
the CURL commands.

How do we debug "ImportError: <module 'barbican.api.app' from
'/opt/barbican/lib/python2.7/site-packages/barbican/api/app.pyc'> has no
'create_main_app_v1' attribute"?
I also think that the barbican restart was not successful.

Any help would be highly appreciated.

Note that I am able to run the command "/bin/uwsgi --master --emperor
/etc/barbican/vassals" manually, and was then able to run the CURL commands.
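One way to debug an error like this is to replicate by hand what paste.deploy
does with the module:attribute entry point from the paste ini file: import the
module and look the attribute up. A minimal stdlib-only sketch of that
resolution step (demonstrated against the json module, since barbican itself
is what is failing to load here; this is an approximation, not paste.deploy's
actual code):

```python
import importlib


def load_paste_app_factory(spec):
    """Resolve a 'module:attribute' entry point roughly the way
    paste.deploy does, raising ImportError when the attribute is
    missing -- the same symptom seen in the uwsgi log above."""
    module_name, attr = spec.split(':')
    module = importlib.import_module(module_name)
    if not hasattr(module, attr):
        raise ImportError('%r has no %r attribute' % (module, attr))
    return getattr(module, attr)


# A present attribute resolves cleanly...
dumps = load_paste_app_factory('json:dumps')

# ...while a missing one reproduces the error from the log.
try:
    load_paste_app_factory('json:create_main_app_v1')
except ImportError as exc:
    print('ImportError: %s' % exc)
```

Running the equivalent under the service's own interpreter
(/opt/barbican/bin/python) against 'barbican.api.app:create_main_app_v1'
would show whether, for example, a stale app.pyc is shadowing a newer app.py,
which is one common cause of a "has no ... attribute" failure.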


-- 
*Thanks and Regards,*
*Asha Seshagiri*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/e8d7b8ec/attachment.html>

From zhenzan.zhou at intel.com  Wed Sep 16 06:21:18 2015
From: zhenzan.zhou at intel.com (Zhou, Zhenzan)
Date: Wed, 16 Sep 2015 06:21:18 +0000
Subject: [openstack-dev] [Congress] PTL candidacy
In-Reply-To: <CAJjxPABLW+BBLnRqKaikW0ZL2X5Z5Nwj-C8A9GtgXWy1hHmNbA@mail.gmail.com>
References: <CAJjxPABLW+BBLnRqKaikW0ZL2X5Z5Nwj-C8A9GtgXWy1hHmNbA@mail.gmail.com>
Message-ID: <EB8DB51184817F479FC9C47B120861EE1851112F@SHSMSX101.ccr.corp.intel.com>

+1

And see you all in Tokyo, if I can get my visa on time!

BR
Zhou Zhenzan
From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Wednesday, September 16, 2015 04:23
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] PTL candidacy

Hi all,

I'm writing to announce my candidacy for Congress PTL for the Mitaka cycle.  I'm excited at the prospect of continuing the development of our community, our code base, and our integrations with other projects.

This past cycle has been exciting in that we saw several new, consistent contributors, who actively pushed code, submitted reviews, wrote specs, and participated in the mid-cycle meet-up.  Additionally, our integration with the rest of the OpenStack ecosystem improved with our move to running tempest tests in the gate instead of manually or with our own CI.  The code base matured as well, as we rounded out some of the features we added near the end of the Kilo cycle.  We also began making the most significant architectural change in the project's history, in an effort to meet our high-availability and API throughput targets.

I'm looking forward to the Mitaka cycle.  My highest priority for the code base is completing the architectural changes that we began in Liberty.  These changes are undoubtedly the right way forward for production use cases, but it is equally important that we make Congress easy to use and understand for both new developers and new end users.  I also plan to further our integration with the OpenStack ecosystem by better utilizing the plugin architectures that are available (e.g. devstack and tempest).  I will also work to begin (or continue) dialogues with other projects that might benefit from consuming Congress.  Finally, I'm excited to continue working with our newest project members, helping them toward becoming core contributors.

See you all in Tokyo!
Tim

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/48f53a04/attachment.html>

From edwin.zhai at intel.com  Wed Sep 16 06:31:46 2015
From: edwin.zhai at intel.com (Zhai, Edwin)
Date: Wed, 16 Sep 2015 14:31:46 +0800 (CST)
Subject: [openstack-dev] [Ceilometer] How to enable devstack plugins
In-Reply-To: <alpine.OSX.2.11.1509150837540.54971@crank.home>
References: <alpine.DEB.2.10.1509151322120.32581@edwin-gen>
 <alpine.OSX.2.11.1509150837540.54971@crank.home>
Message-ID: <alpine.DEB.2.10.1509161431310.32581@edwin-gen>

Chris,
Thanks for clarification.


On Tue, 15 Sep 2015, Chris Dent wrote:

> On Tue, 15 Sep 2015, Zhai, Edwin wrote:
>
>> I saw some patches from Chris Dent to enable functions in devstack/*. But 
>> it conflicts with devstack upstream, so each ceilometer service is started 
>> twice. Is there any official way to set up ceilometer as a devstack plugin?
>
> What I've been doing is checking out the devstack branch associated
> with this review that removes ceilometer from devstack [1] (with a
> `git review -d 196383`) and then stacking from there. It's cumbersome
> but gets the job done.
>
> This pain point should go away very soon. We've just been waiting on
> the necessary infra changes to get various jobs that use ceilometer
> prepared to use the ceilometer devstack plugin[2]. I think that's
> ready to go now so we ought to see that merge soon.
>
> [1] https://review.openstack.org/#/c/196383/
> [2] https://review.openstack.org/#/c/196446/ and dependent reviews.
>
> -- 
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Best Rgds,
Edwin


From boris at pavlovic.me  Wed Sep 16 07:03:01 2015
From: boris at pavlovic.me (Boris Pavlovic)
Date: Wed, 16 Sep 2015 00:03:01 -0700
Subject: [openstack-dev] [Rally] PTL candidacy
Message-ID: <CAD85om1f188kYoxVLF3iUo1P2L8UqLSDmhKiJnt0K8mvNpnCcA@mail.gmail.com>

Hi stackers,

My name is Boris.

A few years ago I started Rally to help the OpenStack community simplify
performance/load/scale/volume testing of OpenStack, and to make it simple
to answer the question: "How does OpenStack perform (in one's very specific
case)?"

The Rally team did a terrific job turning a small initial 100-line script
into the project that you can see now.

It covers most use cases, has plugins for most of the projects, has
high-quality code & docs, is simple to install/use/integrate, and works
quite stably.

However, we are in the middle of our path and there are plenty of areas
that should be improved:

* New input task format - addresses all current issues

https://github.com/openstack/rally/blob/master/doc/specs/in-progress/new_rally_input_task_format.rst

* Multi scenario load generation
     That will allow us to do monitoring alongside testing, HA testing
     under load, and load from many "different" types of workloads.

* Scaling up Rally DB
     This will allow users to run non-stop workloads for days, or generate
     really huge distributed load for quite a long amount of time

* Distributed load generation
     Generation of really huge load, like 100k rps

* Workloads framework
    Benchmarking that measures performance of Servers, VMs, Network and
Volumes

* ...infinity list from here:
https://docs.google.com/spreadsheets/u/1/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g/edit#gid=0


In other words I would like to continue to work as PTL of Rally until we
get all this done.


Best regards,
Boris Pavlovic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/1c748685/attachment.html>

From ramitsurana at gmail.com  Wed Sep 16 07:35:31 2015
From: ramitsurana at gmail.com (Ramit Surana)
Date: Wed, 16 Sep 2015 13:05:31 +0530
Subject: [openstack-dev] [Rally] PTL candidacy
In-Reply-To: <CAD85om1f188kYoxVLF3iUo1P2L8UqLSDmhKiJnt0K8mvNpnCcA@mail.gmail.com>
References: <CAD85om1f188kYoxVLF3iUo1P2L8UqLSDmhKiJnt0K8mvNpnCcA@mail.gmail.com>
Message-ID: <CAKQzuw25hiy_hzZU0QU0p1Y0FfS0U3Vft+MCkxWk+1x-qmwVJA@mail.gmail.com>

Hi Boris,

Thanks for sharing the insights on Rally. It is a really cool and innovative
project, and its efficiency and architecture are really good. I will soon be
trying to contribute to this project.

Thanks.

On Wed, Sep 16, 2015 at 12:33 PM, Boris Pavlovic <boris at pavlovic.me> wrote:

> Hi stackers,
>
> My name is Boris.
>
> Few years ago I started Rally to help OpenStack community to simplify
> performance/load/scale/volume testing of OpenStack and actually make it
> simple to answer on question: "How OpenStack perform (in ones very specific
> case)".
>
> Rally team did a terrific job to make from just a small initial 100 line
> script, project that you can see now.
>
> It covers most of the user cases, has plugins for most of the projects,
> high quality of code & docs, as well it's simple to install/use/integrate
> and it works quite stable.
>
> However we are in the middle of our path and there are plenty of places
> that should be improved:
>
> * New input task format - addresses all current issues
>
> https://github.com/openstack/rally/blob/master/doc/specs/in-progress/new_rally_input_task_format.rst
>
> * Multi scenario load generation
>      That will allow us to do the monitoring with testing, HA testing
> under load and
>       load from many "different" types of workloads.
>
> * Scaling up Rally DB
>      This will allow users to run non-stop workloads for days or generate
> really huge
>      distributed load for a quite long amount of time
>
> * Distributed load generation
>      Generation of really huge load like 100k rps
>
> * Workloads framework
>     Benchmarking that measures performance of Servers, VMs, Network and
> Volumes
>
> * ...infinity list from here:
> https://docs.google.com/spreadsheets/u/1/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g/edit#gid=0
>
>
> In other words I would like to continue to work as PTL of Rally until we
> get all this done.
>
>
> Best regards,
> Boris Pavlovic
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Ramit Surana
skype : ramitsurana
LinkedIn : /in/ramitsurana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/0d5441ad/attachment.html>

From academicgareth at gmail.com  Wed Sep 16 09:13:18 2015
From: academicgareth at gmail.com (Gareth)
Date: Wed, 16 Sep 2015 17:13:18 +0800
Subject: [openstack-dev] [oslo.db][sqlalchemy] rollback after commit
Message-ID: <CAAhuP_8GA1qCxnuGMnt0Ymi9yfVm8QqwY3TO0LFo=Hajzig3Jg@mail.gmail.com>

Hi DB experts,

I'm using MySQL now and have a general log like:

1397 Query SELECT 1

1397 Query SELECT xxxxxxxx

1397 Query UPDATE xxxxxxxx

1397 Query COMMIT

1397 Query ROLLBACK

I found there is always a 'SELECT 1' before the real queries, and a
'COMMIT' and 'ROLLBACK' after. I know 'SELECT 1' is the lowest-cost check
for the DB's availability, and 'COMMIT' is for persistence. But why is
there a 'ROLLBACK' here? Is this 'ROLLBACK' the behaviour of oslo.db or
SQLAlchemy?
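For what it's worth, this pattern looks like SQLAlchemy's connection pool
rather than oslo.db itself: by default the pool issues a ROLLBACK whenever a
connection is returned (reset_on_return='rollback'), even right after a
successful COMMIT, so that no transaction state can leak to the next
checkout. A toy, stdlib-only simulation of that check-in discipline
(LoggingPool is purely illustrative, not SQLAlchemy's real internals):

```python
import sqlite3


class LoggingPool:
    """Toy connection pool mimicking SQLAlchemy defaults: a cheap
    liveness ping on checkout and an unconditional ROLLBACK on
    check-in, so no transaction state survives between users."""

    def __init__(self):
        self.conn = sqlite3.connect(':memory:')
        self.log = []

    def checkout(self):
        self.conn.execute('SELECT 1')   # availability check
        self.log.append('SELECT 1')
        return self.conn

    def checkin(self):
        self.conn.rollback()            # defensive reset, even after COMMIT
        self.log.append('ROLLBACK')


pool = LoggingPool()
conn = pool.checkout()
conn.execute('CREATE TABLE t (x INTEGER)')
conn.execute('UPDATE t SET x = 1')
conn.commit()
pool.log.append('COMMIT')  # logged by hand; MySQL's general log sees it
pool.checkin()
print(pool.log)  # same shape as the general log excerpt above
```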



-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang at freenode
My promise: if you find any spelling or grammar mistakes in my email
from Mar 1 2013, notify me
and I'll donate $1 or ?1 to an open organization you specify.


From nikhil at manchanda.me  Wed Sep 16 09:20:41 2015
From: nikhil at manchanda.me (Nikhil Manchanda)
Date: Wed, 16 Sep 2015 02:20:41 -0700
Subject: [openstack-dev] [Trove] PTL Non-Candidacy
Message-ID: <CAMt+b=kZ0wuvN3D7Kq97e7rDcwYpuCRveuhxGQc1toXk=X0-Wg@mail.gmail.com>

Hello all:

I'd like to announce that I will not be running for the position of
Trove PTL for Mitaka.

I've been PTL for 3 cycles now during which Trove has grown
immensely. We have added support for twelve different datastores --
enabling support for HA Trove instances through replication, failover
and clustering. The Trove team has also grown quite a bit over the
last 3 cycles -- we now have a healthy group of contributors from a
diverse set of companies, and reviews happening in a timely manner
keeping the project moving along. We've been able to accomplish what
we set out to do over the Juno, Kilo and Liberty releases and we
should be extremely proud of this as a team!

It's my opinion that I've been PTL of Trove for long enough now. The
OpenStack Heat team originally pioneered the idea of rotating the PTL
position frequently [1] and I absolutely agree with this viewpoint. I
believe that rotating the PTL doesn't just help with easing the
workload (since being a PTL can be more than a full time job) but also
helps develop crucial leadership talent on the team which is
extremely important. With the depth of talent that we have on the
Trove team, we're extremely lucky to have quite a few candidates whom
I think would be great choices for Trove PTL during the Mitaka cycle.

As for me, I definitely do still plan on being active in Trove during
the Mitaka cycle, and look forward to having more time to be more
hands on with contributions not just in Trove but across OpenStack.

To Mitaka and beyond!

Cheers,
Nikhil

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/522077b6/attachment.html>

From Abhishek.Kekane at nttdata.com  Wed Sep 16 09:19:32 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Wed, 16 Sep 2015 09:19:32 +0000
Subject: [openstack-dev] Pycharm License for OpenStack developers
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>

Hi Devs,

I am using Pycharm for development and current license is about to expire.
Please let me know if anyone has a new license key for the same.

Thank you in advance.

Abhishek

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/a77d61d4/attachment.html>

From thierry at openstack.org  Wed Sep 16 09:33:34 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 16 Sep 2015 11:33:34 +0200
Subject: [openstack-dev] [relmgt] PTL non-candidacy
Message-ID: <55F9376E.3040403@openstack.org>

Hi everyone,

I have been handling release management for OpenStack since the
beginning of this story, well before Release Management was a program or
a project team needing a proper elected PTL. Until recently it was
largely a one-man job. But starting with this cycle, to accommodate the
Big Tent changes, we grew the team significantly, to the point where I
think it is healthy to set up a PTL rotation in the team.

I generally think it's a good thing to change the PTL for a team from
time to time. That allows different perspectives, skills and focus to be
brought to a project team. It lets you take a step back. And it allows you
to recognize the efforts and leadership of other members, which is
difficult if you hold on to the throne. So I decided to put my money where
my mouth is and apply those principles to my own team.

That doesn't mean I won't be handling release management for Mitaka, or
that I won't ever be Release Management PTL again -- it's just that
someone else will take the PTL hat for the next cycle, drive the effort
and be the most visible ambassador of the team.

Cheers,

-- 
Thierry Carrez (ttx)


From doug at doughellmann.com  Wed Sep 16 09:40:42 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 16 Sep 2015 05:40:42 -0400
Subject: [openstack-dev] [relmgt] PTL Candidacy
Message-ID: <1442396357-sup-1753@lrrr.local>

I am announcing my candidacy for PTL for the Release Management
team for the Mitaka release cycle.

Although I only formally joined the release management team during
the Liberty cycle, I have been active in release-related activities
for much longer while serving as the PTL for Oslo. I worked with
the release and infrastructure teams to develop the release tools
and processes we use for Oslo libraries, and to apply them to other
projects that now manage libraries. I am a core reviewer on the
requirements repository, and this cycle I started the work on
automating project releases using the new openstack/releases
repository. I was also involved in the process of moving server
projects away from date-based versioning to using quasi-semantic
versioning. Late in the Liberty cycle I started building reno, a
new release note management tool, based on some requirements we
gathered within the release team.

My goal for the release team during Mitaka is to automate more of
the work with a review process that allows projects to be self-service,
with some lightweight oversight to manage release timing, version
numbers, and messaging. I would like to complete the work we have
started in the releases repository to allow project teams to ask
for releases at any point in the cycle, thereby encouraging them
to shift from a milestone-based to an 'intermediate' release model.
Changing release models will reduce the effort required to create
a release by removing some of the pressure to synchronize the
activities of all projects on milestones and make it easier to
release more often, while still giving us the benefits of stable
branches for longer-term maintenance of selected versions. Milestones
can become guidelines, rather than hard deadlines.

Doug


From doug at doughellmann.com  Wed Sep 16 09:41:56 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 16 Sep 2015 05:41:56 -0400
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <55F9376E.3040403@openstack.org>
References: <55F9376E.3040403@openstack.org>
Message-ID: <1442396445-sup-334@lrrr.local>

Excerpts from Thierry Carrez's message of 2015-09-16 11:33:34 +0200:
> Hi everyone,
> 
> I have been handling release management for OpenStack since the
> beginning of this story, well before Release Management was a program or
> a project team needing a proper elected PTL. Until recently it was
> largely a one-man job. But starting with this cycle, to accommodate the
> Big Tent changes, we grew the team significantly, to the point where I
> think it is healthy to set up a PTL rotation in the team.
> 
> I generally think it's a good thing to change the PTL for a team from
> time to time. That allows different perspectives, skills and focus to be
> brought to a project team. That lets you take a step back. That allows
> to recognize the efforts and leadership of other members, which is
> difficult if you hold on the throne. So I decided to put my foot where
> my mouth is and apply those principles to my own team.
> 
> That doesn't mean I won't be handling release management for Mitaka, or
> that I won't ever be Release Management PTL again -- it's just that
> someone else will take the PTL hat for the next cycle, drive the effort
> and be the most visible ambassador of the team.
> 
> Cheers,
> 

Thank you for the hard work you've put into building the release team,
Thierry, and thank you for hanging in there long enough that some of the
rest of us could be trained up properly.

Doug


From michal.dulko at intel.com  Wed Sep 16 09:53:18 2015
From: michal.dulko at intel.com (Dulko, Michal)
Date: Wed, 16 Sep 2015 09:53:18 +0000
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
Message-ID: <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>

> From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com]
> Sent: Wednesday, September 16, 2015 11:20 AM
> Hi Devs,
> 
> 
> 
> I am using Pycharm for development and current license is about to expire.
> 
> Please let me know if anyone has a new license key for the same.
> 
> 
> 
> Thank you in advance.
> 
> 
> 
> Abhishek


I applied for the license for OpenStack a moment ago. I'll send an update to the ML once I get a response from JetBrains.


From Abhishek.Kekane at nttdata.com  Wed Sep 16 09:59:11 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Wed, 16 Sep 2015 09:59:11 +0000
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
 <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>

Thank you Michal,

Abhishek

-----Original Message-----
From: Dulko, Michal [mailto:michal.dulko at intel.com] 
Sent: 16 September 2015 15:23
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Pycharm License for OpenStack developers

> From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com]
> Sent: Wednesday, September 16, 2015 11:20 AM Hi Devs,
> 
> 
> 
> I am using Pycharm for development and current license is about to expire.
> 
> Please let me know if anyone has a new license key for the same.
> 
> 
> 
> Thank you in advance.
> 
> 
> 
> Abhishek


I've applied for the license for OpenStack a moment ago. I'll send an update to the ML once I get a response from JetBrains.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


From abhishek.talwar at tcs.com  Wed Sep 16 10:23:14 2015
From: abhishek.talwar at tcs.com (Abhishek Talwar)
Date: Wed, 16 Sep 2015 15:53:14 +0530
Subject: [openstack-dev] how to get current memory utilization of an instance
Message-ID: <OF02E0725B.2100272D-ON65257EC2.00390F35-65257EC2.00390F38@tcs.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/6d9c3018/attachment.html>

From Bruno.Cornec at hp.com  Wed Sep 16 11:44:25 2015
From: Bruno.Cornec at hp.com (Bruno Cornec)
Date: Wed, 16 Sep 2015 13:44:25 +0200
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <55F9376E.3040403@openstack.org>
References: <55F9376E.3040403@openstack.org>
Message-ID: <20150916114425.GC7274@morley.fra.hp.com>

Hello Thierry,

Thierry Carrez said on Wed, Sep 16, 2015 at 11:33:34AM +0200:
>I generally think it's a good thing to change the PTL for a team from
>time to time. That allows different perspectives, skills and focus to be
>brought to a project team. That lets you take a step back. That allows
>to recognize the efforts and leadership of other members, which is
>difficult if you hold on the throne. So I decided to put my foot where
>my mouth is and apply those principles to my own team.

Bravo pour le courage, car ce ne doit pas être une décision facile à
prendre néanmoins.

>That doesn't mean I won't be handling release management for Mitaka, or
>that I won't ever be Release Management PTL again 

Je pense que tout le monde l'espère !
Bruno.
-- 
Open Source Profession, Linux Community Lead WW  http://opensource.hp.com
HP EMEA EG Open Source Technology Strategist         http://hpintelco.net
FLOSS projects:     http://mondorescue.org     http://project-builder.org 
Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org


From e0ne at e0ne.info  Wed Sep 16 11:51:41 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 16 Sep 2015 14:51:41 +0300
Subject: [openstack-dev] [Horizon] [Cinder] [Keystone] Showing Cinder
 quotas for non-admin users in Horizon
In-Reply-To: <CAEHC1ztdNfz_YPs4PzTEqwsdoCt6_R6PpoGVW+XDqqtgCtTNmw@mail.gmail.com>
References: <CAEHC1ztdNfz_YPs4PzTEqwsdoCt6_R6PpoGVW+XDqqtgCtTNmw@mail.gmail.com>
Message-ID: <CAGocpaHEd+VwXZ35GWq-L=y5z0zTm9G8-nj7n+hoRzjx04EupA@mail.gmail.com>

Hi Timur,

To get quotas we need to retrieve project information from Keystone.
Unfortunately, Keystone sets an "admin_required" rule by default [1]
in its API. We can handle that and raise a 403 only when Keystone
returns this error.

[1] https://github.com/openstack/keystone/blob/master/etc/policy.json#L37
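A minimal sketch of that fallback, with hypothetical names (`Forbidden` stands in for keystoneclient's 403 exception; this is not actual Horizon code):

```python
# Illustrative only: tolerate Keystone's default "admin_required" rule
# when fetching project info for quota display, instead of failing the
# whole panel. All names here are made up for the sketch.

class Forbidden(Exception):
    """Stand-in for a Keystone HTTP 403 (admin_required) error."""

def get_project_or_none(project_id, fetch):
    """Return the project, or None when Keystone denies access."""
    try:
        return fetch(project_id)
    except Forbidden:
        # Only swallow the 403 from the admin_required rule;
        # other errors should still propagate.
        return None

def denied(project_id):
    raise Forbidden()

print(get_project_or_none("demo", denied))  # None
```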

Regards,
Ivan Kolodyazhny

On Mon, Sep 14, 2015 at 1:49 PM, Timur Sufiev <tsufiev at mirantis.com> wrote:

> Hi all!
>
> It seems that recent changes in Cinder policies [1] forbade non-admin
> users to see the disk quotas. Yet the volume creation is allowed for
> non-admins, which effectively means that from now on a volume creation in
> Horizon is free for non-admins (as soon as quotas:show rule is propagated
> into Horizon policies). Along with understanding that this is not a desired
> UX for Volumes panel in Horizon, I know as well that [1] wasn't responsible
> for this quota behavior change on its own. It merely tried to alleviate the
> situation caused by [2], which changed the requirements of quota show being
> authorized. From this point I'm starting to sense that my knowledge of
> Cinder and Keystone (because the hierarchical feature is involved) is
> insufficient to suggest the proper solution from the Horizon point of view.
> Yet hiding quota values from non-admin users makes no sense to me.
> Suggestions?
>
> [1] https://review.openstack.org/#/c/219231/7/etc/cinder/policy.json line
> 36
> [2]
> https://review.openstack.org/#/c/205369/29/cinder/api/contrib/quotas.py line
> 135
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/8dc34251/attachment.html>

From Bruno.Cornec at hp.com  Wed Sep 16 12:06:57 2015
From: Bruno.Cornec at hp.com (Bruno Cornec)
Date: Wed, 16 Sep 2015 14:06:57 +0200
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <20150916114425.GC7274@morley.fra.hp.com>
References: <55F9376E.3040403@openstack.org>
 <20150916114425.GC7274@morley.fra.hp.com>
Message-ID: <20150916120657.GF7274@morley.fra.hp.com>

Hello,

Sorry was meant as a private answer to Thierry.
Anyway, for those curious, here is the translation now that it's public
(my bad).

Bruno Cornec said on Wed, Sep 16, 2015 at 01:44:25PM +0200:
>Thierry Carrez said on Wed, Sep 16, 2015 at 11:33:34AM +0200:
>>I generally think it's a good thing to change the PTL for a team from
>>time to time. That allows different perspectives, skills and focus to be
>>brought to a project team. That lets you take a step back. That allows
>>to recognize the efforts and leadership of other members, which is
>>difficult if you hold on the throne. So I decided to put my foot where
>>my mouth is and apply those principles to my own team.
>
>Bravo pour le courage, car ce ne doit pas être une décision facile à
>prendre néanmoins.

Congrats on the courage, as that nevertheless can't have been an easy decision to take.

>>That doesn't mean I won't be handling release management for Mitaka, or
>>that I won't ever be Release Management PTL again
>
>Je pense que tout le monde l'espère !

I think everybody hopes so!

Bruno, who needs to read headers better next time.
-- 
Open Source Profession, Linux Community Lead WW  http://opensource.hp.com
HP EMEA EG Open Source Technology Strategist         http://hpintelco.net
FLOSS projects:     http://mondorescue.org     http://project-builder.org 
Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org


From sbiswas7 at linux.vnet.ibm.com  Wed Sep 16 12:12:02 2015
From: sbiswas7 at linux.vnet.ibm.com (Sudipto Biswas)
Date: Wed, 16 Sep 2015 17:42:02 +0530
Subject: [openstack-dev] [nova] Regarding NUMA Topology filtering logic.
Message-ID: <55F95C92.90302@linux.vnet.ibm.com>

Hi,

Currently the numa_topology filter code in OpenStack filters out NUMA
nodes based on the length of the cpuset on the NUMA node of a host[1].
For example, if a VM with 8 vCPUs is requested, we check that
len(cpuset_on_the_numa_node) is greater than or equal to 8.

IMHO, the logic can be further improved if we start taking the threads 
and cores
into consideration instead of directly going by the cpuset length of the 
NUMA node.

This thought is derived from an architecture like ppc, where each core
can have 8 threads. However, in this case libvirt reports only 1 thread
out of the 8 (called the primary thread). Host scheduling of the guests
happens at the core level (as only the primary thread is online). The
kvm scheduler exploits as many threads of the core as the guest needs.

Consider an example for the ppc architecture.
In a given NUMA node 0 - with 40 threads - the following
cpuset would be reported by libvirt: 0, 8, 16, 24, 32. The length of
the cpuset would suggest that only 5 pCPUs are available for pinning;
however, we could potentially have 40 threads available (cores * threads).

This way we could at least solve the problem that arises if we just take 
the length of the
cpusets into consideration.
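The arithmetic above can be sketched as follows (illustrative only, not actual Nova filter code; the function name is made up):

```python
# Sketch of the proposed capacity check: treat each pCPU that libvirt
# reports on ppc as a core exposing threads_per_core threads, instead
# of counting cpuset entries directly.

def schedulable_units(cpuset, threads_per_core=1):
    """Capacity of a NUMA node if each reported pCPU is a full core."""
    return len(cpuset) * threads_per_core

# NUMA node 0 from the example: primary threads 0, 8, 16, 24, 32.
cpuset = {0, 8, 16, 24, 32}

print(schedulable_units(cpuset))                      # 5  (current logic)
print(schedulable_units(cpuset, threads_per_core=8))  # 40 (proposed)
```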

Thoughts?

[1] https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L772

Thanks,
Sudipto



From jaypipes at gmail.com  Wed Sep 16 12:15:18 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 16 Sep 2015 08:15:18 -0400
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <55F9376E.3040403@openstack.org>
References: <55F9376E.3040403@openstack.org>
Message-ID: <55F95D56.6050507@gmail.com>

On 09/16/2015 05:33 AM, Thierry Carrez wrote:
> Hi everyone,
>
> I have been handling release management for OpenStack since the
> beginning of this story, well before Release Management was a program or
> a project team needing a proper elected PTL. Until recently it was
> largely a one-man job. But starting with this cycle, to accommodate the
> Big Tent changes, we grew the team significantly, to the point where I
> think it is healthy to set up a PTL rotation in the team.
>
> I generally think it's a good thing to change the PTL for a team from
> time to time. That allows different perspectives, skills and focus to be
> brought to a project team. That lets you take a step back. That allows
> to recognize the efforts and leadership of other members, which is
> difficult if you hold on the throne. So I decided to put my foot where
> my mouth is and apply those principles to my own team.
>
> That doesn't mean I won't be handling release management for Mitaka, or
> that I won't ever be Release Management PTL again -- it's just that
> someone else will take the PTL hat for the next cycle, drive the effort
> and be the most visible ambassador of the team.

It's tough to express the thanks I have for your patience and experience 
in the past 5 years handling this often thankless but critically 
important position.

 From the bottom of my heart, thank you Thierry.

Best,
-jay


From gord at live.ca  Wed Sep 16 12:18:19 2015
From: gord at live.ca (gord chung)
Date: Wed, 16 Sep 2015 08:18:19 -0400
Subject: [openstack-dev] [ceilometer] how to get current memory
 utilization of an instance
In-Reply-To: <OF02E0725B.2100272D-ON65257EC2.00390F35-65257EC2.00390F38@tcs.com>
References: <OF02E0725B.2100272D-ON65257EC2.00390F35-65257EC2.00390F38@tcs.com>
Message-ID: <BLU437-SMTP67640C78F0A64275FBF4A1DE5B0@phx.gbl>

Is this using KVM? There are actually a few requirements[1] to ensure
the memory.usage meter works under libvirt:

- libvirt 1.1.1+
- qemu 1.5+
- guest driver that supports memory balloon stats

Retagging to ceilometer to get more eyes.


[1] 
https://github.com/openstack/ceilometer-specs/blob/master/specs/kilo/libvirt-memory-utilization-inspector.rst#dependencies
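As a rough illustration, the version requirements above amount to a check like this (my own sketch; only the thresholds come from the list above):

```python
# Minimal gate mirroring the listed requirements for the memory.usage
# meter. Versions are compared as tuples, e.g. libvirt 1.2.2 -> (1, 2, 2).

def memory_usage_supported(libvirt_version, qemu_version,
                           balloon_stats_supported):
    """True when the memory.usage meter can work under libvirt."""
    return (libvirt_version >= (1, 1, 1)      # libvirt 1.1.1+
            and qemu_version >= (1, 5)        # qemu 1.5+
            and balloon_stats_supported)      # guest balloon driver stats

print(memory_usage_supported((1, 2, 2), (2, 0), True))   # True
print(memory_usage_supported((1, 0, 5), (1, 5), True))   # False
```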


cheers,

On 16/09/2015 6:23 AM, Abhishek Talwar wrote:
> Hi Folks,
>
>
>
> I have an OpenStack kilo setup and I am trying to get the memory usage 
> of the instances created in it.
>
> I researched and found that ceilometer has a meter called 
> "memory.usage" that can give us the memory utilization of an instance. 
> However, on doing ceilometer meter-list it does not list 
> "memory.usage" as a meter. After searching I found that this problem 
> is being faced by many contributors and a possible solution is 
> changing nova configurations that does not help.
>
> So can you help me with some other possible solution that can help me 
> in finding an instance's current memory utilization. I am working with 
> something that would require getting the memory and disk utilization.
>
> Please provide a solution for this.
>
> Thanks and Regards
>
> Abhishek Talwar
>
> =====-----=====-----=====
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
>

-- 
gord

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/0299c9a3/attachment.html>

From daniel.depaoli at create-net.org  Wed Sep 16 12:42:21 2015
From: daniel.depaoli at create-net.org (Daniel Depaoli)
Date: Wed, 16 Sep 2015 14:42:21 +0200
Subject: [openstack-dev] [fuel][swift] Separate roles for Swift nodes
In-Reply-To: <CABzFt8OvYvcM-_r8vUttT4SjoWC4_VMU6noMco1w2=wUTa7JMA@mail.gmail.com>
References: <CAGqpfRWr-z-YoLESX8buwamSu_fhYfRXMqX_MWRGu=zXLZv6DQ@mail.gmail.com>
 <CABzFt8OvYvcM-_r8vUttT4SjoWC4_VMU6noMco1w2=wUTa7JMA@mail.gmail.com>
Message-ID: <CAGqpfRUW__Nqows9RxyM9EUc4KSQh+F5jWETsLPPUyBF=yyjFg@mail.gmail.com>

Alex, what about this patch https://review.openstack.org/#/c/188599/?
Does it work? Is it possible to install swift in a 'base-os' role node?

On Fri, Sep 11, 2015 at 4:11 PM, Alex Schultz <aschultz at mirantis.com> wrote:

> Hey Daniel,
>
> So as part of the 7.0 work we added support in plugins to be able to
> create roles and being able to separate roles from the existing system. I
> think swift would be a good candidate for this.  I know we also added in
> some support for an external swift configuration that will be helpful if
> you choose to go down the plugin route.  As an example of a plugin where
> we've separated roles from the controller (I believe swift currently lives
> as part of the controller role), you can take a look at our keystone,
> database and rabbitmq plugins:
>
> https://github.com/stackforge/fuel-plugin-detach-keystone
> https://github.com/stackforge/fuel-plugin-detach-database
> https://github.com/stackforge/fuel-plugin-detach-rabbitmq
>
> -Alex
>
> On Fri, Sep 11, 2015 at 2:24 AM, Daniel Depaoli <
> daniel.depaoli at create-net.org> wrote:
>
>> Hi all!
>> I'm starting to investigate some improvements for swift installation in
fuel, in particular a way to dedicate a node for it. I found this blueprint
>> https://blueprints.launchpad.net/fuel/+spec/swift-separate-role that
>> seems to be what i'm looking for.
>> The blueprint was accepted but not yet started. So, can someone tell me
>> more about this blueprint? I'm interested in working for it.
>>
>> Best regards,
>>
>> --
>> ========================================================
>> Daniel Depaoli
>> CREATE-NET Research Center
>> Smart Infrastructures Area
>> Junior Research Engineer
>> ========================================================
>>
>>
>>
>
>
>


-- 
========================================================
Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer
========================================================
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/0bce7058/attachment.html>

From NandaDevi.Sahu at LntTechservices.com  Wed Sep 16 12:42:24 2015
From: NandaDevi.Sahu at LntTechservices.com (Nanda Devi Sahu)
Date: Wed, 16 Sep 2015 12:42:24 +0000
Subject: [openstack-dev] ceilometer code debugging
Message-ID: <83EAC30077C13D47ACCC9A5056168097D4EB9B@POCITMSEXMB08.LntUniverse.com>

Hello,

I am trying to debug Ceilometer code taken from git. I have configured Eclipse with PyDev for the same. The configuration seems to be fine, but when I debug the code it shows as running and I do not see the flow of the code, i.e. the main thread and the following threads.

Attached is the debug perspective view, where the run is working without any code stack.

Could you please suggest what I am missing to see the flow? I have also attached the debug configuration image for reference.


Regards,
Nanda.

L&T Technology Services Ltd

www.LntTechservices.com<http://www.lnttechservices.com/>

This Email may contain confidential or privileged information for the intended recipient (s). If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/d61bcdc0/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: debug_prespective.png
Type: image/png
Size: 257177 bytes
Desc: debug_prespective.png
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/d61bcdc0/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: debug_configuration.png
Type: image/png
Size: 300641 bytes
Desc: debug_configuration.png
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/d61bcdc0/attachment-0003.png>

From dirk at dmllr.de  Wed Sep 16 12:59:06 2015
From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=)
Date: Wed, 16 Sep 2015 14:59:06 +0200
Subject: [openstack-dev] [rpm-packaging] PTL Candidacy
Message-ID: <CAL5wTH7eZCtm7pFDYTxbac=ujG04ZQ04cbjFXmcd3+zTcUnj7Q@mail.gmail.com>

Hi,


I'd like to announce my candidacy for the RPM Packaging for OpenStack PTL.
During the Liberty cycle I've been co-PTL'ing this together with Haïkel
Guemar as a bootstrap. For the Mitaka cycle we've been notified that this
setup needs to be adjusted to a single PTL. Therefore we discussed this
and decided that I'd announce my candidacy.


We've been mostly quiet on the mailing lists so far because we've been
working together on unifying our existing packaging, ironing out the
differences so that we can develop a common, standardized RPM-based
packaging for OpenStack. As such, the work has mostly consisted of
downstream adjustments to reach a common state that can be maintained
upstream. We have also spent a significant amount of time designing a
solution that fits all interested parties, and are just starting to
upstream the bits for it. For us it is important that we can continue to
work as we do right now under the OpenStack Big Tent in order to reach a
critical mass in the project during the next cycle.


My goal for the next cycle is to support the "bootstrapping" phase as much
as possible by ensuring that we develop a common packaging standard
across all RPM-based OpenStack distributions and actually converge
the downstream effort into those common packages. For that we have many
open action items that I'm aware of:


* Improve documentation and transparency of the project
* Foster collaboration on the existing spec files and motivate relevant
  groups to contribute back to us
* Design and Implement a gating policy on changes to the project
* Reach the critical mass for the project so that it can self-sustain
  in the future


I'd like to work on all of them together with the other core members and anyone
who is interested in contributing to the project during the next cycle.


Thanks,
Dirk


From Neil.Jerram at metaswitch.com  Wed Sep 16 13:02:34 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Wed, 16 Sep 2015 13:02:34 +0000
Subject: [openstack-dev] [relmgt] PTL non-candidacy
References: <55F9376E.3040403@openstack.org>
 <20150916114425.GC7274@morley.fra.hp.com>
 <20150916120657.GF7274@morley.fra.hp.com>
Message-ID: <SN1PR02MB1695742BB1B76625A43DF7B6995B0@SN1PR02MB1695.namprd02.prod.outlook.com>

On 16/09/15 13:13, Bruno Cornec wrote:
> Hello,
>
> Sorry was meant as a private answer to Thierry.
> Anyway for those curious, here is the translation now it's public (my
> bad).

For what it's worth, I enjoyed seeing some French on the list... :-)

    Neil



From dmccowan at cisco.com  Wed Sep 16 13:06:07 2015
From: dmccowan at cisco.com (Dave McCowan (dmccowan))
Date: Wed, 16 Sep 2015 13:06:07 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
Message-ID: <D21EE0ED.1C758%dmccowan@cisco.com>

A user with the role "observer" in a project will have read access to all secrets and containers for that project, using the default settings in the policy.json file.

--Dave McCowan

From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 15, 2015 at 10:06 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Hi,
               Is there a way to provide read access to a certain user to all secrets/containers of all projects'/tenants' certificates?
               This user, with universal "read" privileges, will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users follow the process below:

1.      The tenant's creator/admin user uploads certificate info as secrets and a container

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3.      The user creates the LB config with the container reference

4.      The LBaaS plugin, using the service user, then accesses the container reference provided in the LB config and proceeds with the implementation.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/dc29a884/attachment.html>

From mestery at mestery.com  Wed Sep 16 13:17:32 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Wed, 16 Sep 2015 08:17:32 -0500
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <55F9376E.3040403@openstack.org>
References: <55F9376E.3040403@openstack.org>
Message-ID: <CAL3VkVyTmc7uMLhe2uxHyqjPq2xKsqAjSMADcYCbNX3T2R+k8w@mail.gmail.com>

On Wed, Sep 16, 2015 at 4:33 AM, Thierry Carrez <thierry at openstack.org>
wrote:

> Hi everyone,
>
> I have been handling release management for OpenStack since the
> beginning of this story, well before Release Management was a program or
> a project team needing a proper elected PTL. Until recently it was
> largely a one-man job. But starting with this cycle, to accommodate the
> Big Tent changes, we grew the team significantly, to the point where I
> think it is healthy to set up a PTL rotation in the team.
>
> I generally think it's a good thing to change the PTL for a team from
> time to time. That allows different perspectives, skills and focus to be
> brought to a project team. That lets you take a step back. That allows
> to recognize the efforts and leadership of other members, which is
> difficult if you hold on the throne. So I decided to put my foot where
> my mouth is and apply those principles to my own team.
>
> That doesn't mean I won't be handling release management for Mitaka, or
> that I won't ever be Release Management PTL again -- it's just that
> someone else will take the PTL hat for the next cycle, drive the effort
> and be the most visible ambassador of the team.
>
> Cheers,
>
>
You've done an amazing job as release management PTL. The project has been
incredibly lucky to have you handling this, and it's great to see you
rotating the position to grow more leadership in the team. Thanks for all
you've done over the years!

Kyle


--
> Thierry Carrez (ttx)
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/c18d402c/attachment.html>

From mriedem at linux.vnet.ibm.com  Wed Sep 16 13:18:20 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 16 Sep 2015 08:18:20 -0500
Subject: [openstack-dev] how to get current memory utilization of an
 instance
In-Reply-To: <OF02E0725B.2100272D-ON65257EC2.00390F35-65257EC2.00390F38@tcs.com>
References: <OF02E0725B.2100272D-ON65257EC2.00390F35-65257EC2.00390F38@tcs.com>
Message-ID: <55F96C1C.4070900@linux.vnet.ibm.com>



On 9/16/2015 5:23 AM, Abhishek Talwar wrote:
> Hi Folks,
>
>
>
> I have an OpenStack kilo setup and I am trying to get the memory usage
> of the instances created in it.
>
> I researched and found that ceilometer has a meter called "memory.usage"
> that can give us the memory utilization of an instance. However, on
> doing ceilometer meter-list it does not list "memory.usage" as a meter.
> After searching I found that this problem is being faced by many
> contributors and a possible solution is changing nova configurations
> that does not help.
>
> So can you help me with some other possible solution that can help me in
> finding an instance's current memory utilization. I am working with
> something that would require getting the memory and disk utilization.
>
> Please provide a solution for this.
>
> Thanks and Regards
>
> Abhishek Talwar
>
>
>
>
>

This is not a development question.

I replied to your operators list thread.

http://lists.openstack.org/pipermail/openstack-operators/2015-September/008114.html

-- 

Thanks,

Matt Riedemann



From blk at acm.org  Wed Sep 16 13:18:41 2015
From: blk at acm.org (Brant Knudson)
Date: Wed, 16 Sep 2015 08:18:41 -0500
Subject: [openstack-dev] [all] New Gerrit translations change proposals
 from Zanata
In-Reply-To: <CABesOu2cAewe-AiEK3odvqaxTJoDpjn2e=tVdOouCZJ3C_14jA@mail.gmail.com>
References: <CABesOu2cAewe-AiEK3odvqaxTJoDpjn2e=tVdOouCZJ3C_14jA@mail.gmail.com>
Message-ID: <CAHjeE=RVDqxkMqio2C0HM3ZcLFVTka+AzkCRnXbMabutH8zV5A@mail.gmail.com>

On Tue, Sep 15, 2015 at 4:30 PM, Elizabeth K. Joseph <lyz at princessleia.com>
wrote:

> Hi everyone,
>
>
...


>
> The Gerrit topic for these change proposals for all projects with
> translations has been changed from "transifex/translations" to
> "zanata/translations". After a test with oslo.versionedobjects last
> week[1], we're moving forward this Wednesday morning UTC time to have
> the jobs run so that all translations changes proposed to Gerrit are
> made by Zanata.
>
>
According to [0], we'll have the final releases of libraries today, so if
translations aren't there yet it's already too late for them.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074367.html

:: Brant
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/d87c54ad/attachment.html>

From aschultz at mirantis.com  Wed Sep 16 13:41:01 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 16 Sep 2015 08:41:01 -0500
Subject: [openstack-dev] [fuel][swift] Separate roles for Swift nodes
In-Reply-To: <CAGqpfRUW__Nqows9RxyM9EUc4KSQh+F5jWETsLPPUyBF=yyjFg@mail.gmail.com>
References: <CAGqpfRWr-z-YoLESX8buwamSu_fhYfRXMqX_MWRGu=zXLZv6DQ@mail.gmail.com>
 <CABzFt8OvYvcM-_r8vUttT4SjoWC4_VMU6noMco1w2=wUTa7JMA@mail.gmail.com>
 <CAGqpfRUW__Nqows9RxyM9EUc4KSQh+F5jWETsLPPUyBF=yyjFg@mail.gmail.com>
Message-ID: <CABzFt8OnH4C7V-L2U1u0iShfYFs7ww7aPEKBeNLNxy2_=Q9=dw@mail.gmail.com>

Hey Daniel,

So that patch is the one I was referring to about supporting swift being
located elsewhere. That change would allow you to create a plugin with a
swift role and override where swift is currently deployed. We had similar
changes to support splitting off keystone/database/rabbitmq as well. If
you look at the plugins, we are creating hiera override data to flip the
switches within the existing fuel code to disable functionality in the
existing controller role. That change lists the hiera values you could
set in a plugin to override the values for a deployment. You cannot use
the 'base-os' role, as no additional puppet gets run for 'base-os' and we
don't do any network setup for it.

Thanks,
-Alex

On Wed, Sep 16, 2015 at 7:42 AM, Daniel Depaoli <
daniel.depaoli at create-net.org> wrote:

> Alex, what about this patch https://review.openstack.org/#/c/188599/?
> Does it work? Is it possible to install swift in a 'base-os' role node?
>
> On Fri, Sep 11, 2015 at 4:11 PM, Alex Schultz <aschultz at mirantis.com>
> wrote:
>
>> Hey Daniel,
>>
>> So as part of the 7.0 work we added support in plugins to be able to
>> create roles and being able to separate roles from the existing system. I
>> think swift would be a good candidate for this.  I know we also added in
>> some support for an external swift configuration that will be helpful if
>> you choose to go down the plugin route.  As an example of a plugin where
>> we've separated roles from the controller (I believe swift currently lives
>> as part of the controller role), you can take a look at our keystone,
>> database and rabbitmq plugins:
>>
>> https://github.com/stackforge/fuel-plugin-detach-keystone
>> https://github.com/stackforge/fuel-plugin-detach-database
>> https://github.com/stackforge/fuel-plugin-detach-rabbitmq
>>
>> -Alex
>>
>> On Fri, Sep 11, 2015 at 2:24 AM, Daniel Depaoli <
>> daniel.depaoli at create-net.org> wrote:
>>
>>> Hi all!
>>> I'm starting to investigate some improvements to the Swift installation in
>>> Fuel, in particular a way to dedicate a node to it. I found this blueprint,
>>> https://blueprints.launchpad.net/fuel/+spec/swift-separate-role, which
>>> seems to be what I'm looking for.
>>> The blueprint was accepted but not yet started. So, can someone tell me
>>> more about it? I'm interested in working on it.
>>>
>>> Best regards,
>>>
>>> --
>>> ========================================================
>>> Daniel Depaoli
>>> CREATE-NET Research Center
>>> Smart Infrastructures Area
>>> Junior Research Engineer
>>> ========================================================
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> ========================================================
> Daniel Depaoli
> CREATE-NET Research Center
> Smart Infrastructures Area
> Junior Research Engineer
> ========================================================
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/83524dbc/attachment.html>

From Kevin.Fox at pnnl.gov  Wed Sep 16 13:54:22 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 16 Sep 2015 13:54:22 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>

Why not have LBaaS do step 2? Even better, help with the instance-user spec; combined with LBaaS doing step 2, that could restrict secret access to just the amphorae that need the secret.

Thanks,
Kevin

________________________________
From: Vijay Venkatachalam
Sent: Tuesday, September 15, 2015 7:06:39 PM
To: OpenStack Development Mailing List (openstack-dev at lists.openstack.org)
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Hi,
               Is there a way to give a certain user read access to the secrets/containers of all projects'/tenants' certificates?
               This user, with universal "read" privileges, will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration.

               Today's LBaaS users follow the process below:

1.      The tenant's creator/admin user uploads certificate info as secrets and a container.

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets.

3.      The user creates an LB config with the container reference.

4.      The LBaaS plugin, using the service user, accesses the container reference provided in the LB config and proceeds with the implementation.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.
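The proposed step 5 can be sketched with a toy model (all class and function names below are illustrative; this is not Barbican's or the LBaaS plugin's actual API):

```python
# Toy model of the proposed flow: step 2 (manually granting the service
# user an ACL) is replaced by a step-5 check done by the LBaaS plugin.
# All names are illustrative; this is not Barbican's or the plugin's API.

class Container:
    def __init__(self, ref, owner, readers=None):
        self.ref = ref
        self.owner = owner
        self.readers = set(readers or ())   # users granted read via ACLs

def user_can_read(user, container):
    """True if `user` may read `container` (owner or ACL reader)."""
    return user == container.owner or user in container.readers

def configure_lb(requesting_user, container):
    # Proposed step 5: instead of requiring the tenant to ACL the service
    # user (step 2), verify that the user configuring the LB can read the
    # referenced container, then let the service user fetch it.
    if not user_can_read(requesting_user, container):
        raise PermissionError("no read access to %s" % container.ref)
    return "LB configured with %s" % container.ref

cert = Container("containers/abc", owner="tenant-admin")
print(configure_lb("tenant-admin", cert))   # LB configured with containers/abc
try:
    configure_lb("other-user", cert)        # no ACL entry -> rejected
except PermissionError as exc:
    print("denied: %s" % exc)
```

The check pushes authorization to configuration time rather than requiring each tenant to manage per-container ACLs up front.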

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/bbdd91f2/attachment.html>

From slukjanov at mirantis.com  Wed Sep 16 14:04:06 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Wed, 16 Sep 2015 17:04:06 +0300
Subject: [openstack-dev] [sahara] PTL candidacy
Message-ID: <CA+GZd7-BocSqgRfoLb3rcZWNBh5oCD0yEhaWD_nCc0OOWqZBCw@mail.gmail.com>

Hi,

I'd like to announce my intention to continue being PTL of the Data
Processing program (Sahara).

I have been working on the Sahara (ex-Savanna) project from scratch, from
the initial proof-of-concept implementation until now. I have been the
acting/elected PTL since Sahara was an idea. Additionally, I'm
contributing to other OpenStack projects, especially Infrastructure
for the last four releases, where I'm now a member of the core/root teams.

My high-level focus as PTL is to coordinate work of subteams, code
review, release management and general architecture/design tracking.

During the Liberty cycle we successfully continued using the specs system.
There were many new contributors during the cycle, and we've substantially
renewed the core reviewer team as well. Tons of operability and supportability
improvements were done, as well as a number of interesting features.

For the Mitaka cycle my focus is to keep going in the same direction and
to ensure that everything we're working on is related to end users'
needs. So, in a few words, it's about continuing to move forward in
implementing a scalable and flexible Data-Processing-as-a-Service for the
OpenStack ecosystem by investing in quality, usability and new features. I'd
like to highlight the following areas that I think are very important for the
Sahara project in the Mitaka cycle: security (of Sahara, and of Hadoop as well),
scalability (first of all, of Sahara itself) and EDP enhancements.

A few words about myself: I'm a Principal Software Engineer at Mirantis.
I worked a lot with Big Data projects and technologies (Hadoop,
HDFS, Cassandra, Twitter Storm, etc.) and enterprise-grade solutions
before (and partially in parallel with) working on Sahara in the OpenStack
ecosystem.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/41b75f77/attachment.html>

From thierry at openstack.org  Wed Sep 16 14:16:09 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 16 Sep 2015 16:16:09 +0200
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
 communications
In-Reply-To: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
Message-ID: <55F979A9.9040206@openstack.org>

Anne Gentle wrote:
> [...]
> What are some of the problems with each layer? 
> 
> 1. weekly meeting: time zones, global reach, size of cross-project
> concerns due to multiple projects being affected, another meeting for
> PTLs to attend and pay attention to

A lot of PTLs (or liaisons/lieutenants) skip the meeting, or will only
attend when they have something to ask. Their time is precious, and most
of the time the meeting is not relevant for them, so why bother? You
have a few usual suspects attending all of them, but those people are
cross-project-aware already, so they are not the ones who would
benefit the most from the meeting.

This partial attendance makes the meeting completely useless as a way to
disseminate information. It makes the meeting mostly useless as a way to
get general approval on cross-project specs.

The meeting is still very useful, IMHO, for having more direct discussions
on hot topics: a ML discussion is flagged for direct discussion on IRC,
and we have a time slot already booked for that.

> 2. specs: don't seem to get much attention unless they're brought up at
> weekly meeting, finding owners for the work needing to be done in a spec
> is difficult since each project team has its own priorities

Right, it's difficult to get them reviewed, and getting consensus and TC
rubberstamp on them is also a bit of a thankless job. Basically you're
trying to make sure everyone is OK with what you propose and most people
ignore you (and then would be unhappy when they are impacted by the
implementation a month later). I don't think that system works well and
I'd prefer we change it.

> 3. direct communications: decisions from these comms are difficult to
> then communicate more widely, it's difficult to get time with busy PTLs
> 4. Summits: only happens twice a year, decisions made then need to be
> widely communicated
> 
> I'm sure there are more details and problems I'm missing -- feel free to
> fill in as needed. 
> 
> Lastly, what suggestions do you have for solving problems with any of
> these layers? 

I'm starting to think we need to overhaul the whole concept of
cross-project initiatives. The current system where an individual drives
a specific spec and goes through all the hoops to expose it to the rest
of the community is not really working. The current model doesn't
support big overall development cycle goals either, since there is no
team to implement those.

Just brainstorming out loud, maybe we need to have a base team of people
committed to drive such initiatives to completion, a team that
individuals could leverage when they have a cross-project idea, a team
that could define a few cycle goals and actively push them during the cycle.

Maybe cross-project initiatives are too important to be left to the
energy of an individual and rely on random weekly meetings to make
progress. They might need a clear team to own them.

-- 
Thierry Carrez (ttx)


From rpodolyaka at mirantis.com  Wed Sep 16 14:16:24 2015
From: rpodolyaka at mirantis.com (Roman Podoliaka)
Date: Wed, 16 Sep 2015 17:16:24 +0300
Subject: [openstack-dev] [oslo.db][sqlalchemy] rollback after commit
In-Reply-To: <CAAhuP_8GA1qCxnuGMnt0Ymi9yfVm8QqwY3TO0LFo=Hajzig3Jg@mail.gmail.com>
References: <CAAhuP_8GA1qCxnuGMnt0Ymi9yfVm8QqwY3TO0LFo=Hajzig3Jg@mail.gmail.com>
Message-ID: <CAKA_ueA8asQrLHEXwXiSpek+mOs9AwxkmpaP1r_EtQvzmYPekg@mail.gmail.com>

Hi Gareth,

Right, the 'SELECT 1' issued at the beginning of every transaction is a
pessimistic check to detect disconnects early. oslo.db will create a
new DB connection (as well as invalidate all the existing connections
to the same DB in the pool) and retry the transaction once [1].

The ROLLBACK you are referring to is issued when a connection is returned
to the pool. This is a configurable SQLAlchemy feature [2]. The
reasoning behind it is that all connections are in transactional
mode by default (there is always an ongoing transaction; you just need
to do COMMITs) and they are pooled: if we didn't issue a ROLLBACK here,
someone could return a connection to the pool without
ending the transaction properly, which can lead to deadlocks
(DB rows remaining locked) and stale data reads when the very same DB
connection is checked out from the pool again and used by someone
else.

As long as you finish all your transactions with either COMMIT or
ROLLBACK before returning a connection to the pool, these forced
ROLLBACKs should be cheap, as the RDBMS doesn't have to maintain any
state bound to the transaction (it has just begun, and you ended the
previous transaction on this connection). Still, it protects you from
the cases when something went wrong and you forgot to end the
transaction.

Thanks,
Roman

[1] https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L53-L82
[2] http://docs.sqlalchemy.org/en/latest/core/pooling.html#sqlalchemy.pool.Pool.params.reset_on_return
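The pool-level reset Roman describes can be demonstrated with a toy pool over sqlite3 (a minimal sketch only; SQLAlchemy's pool and oslo.db's handling are far more involved):

```python
import sqlite3

class TinyPool:
    """Toy pool that, like SQLAlchemy's reset_on_return='rollback',
    issues a ROLLBACK whenever a connection is returned."""

    def __init__(self, db_path):
        self._free = [sqlite3.connect(db_path)]

    def checkout(self):
        return self._free.pop()

    def checkin(self, conn):
        conn.rollback()            # the forced ROLLBACK from the log
        self._free.append(conn)

pool = TinyPool(":memory:")

conn = pool.checkout()
conn.execute("CREATE TABLE t (v INTEGER)")
conn.commit()                      # persist the schema

# A buggy caller writes but forgets to COMMIT before returning the
# connection to the pool...
conn.execute("INSERT INTO t VALUES (1)")
pool.checkin(conn)

# ...so the next user of that same connection sees no half-finished
# work and holds no stale row locks: the ROLLBACK discarded it.
conn = pool.checkout()
rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)  # 0
```

Without the forced ROLLBACK on check-in, the uncommitted INSERT would still be pending on the pooled connection when the next consumer picks it up.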

On Wed, Sep 16, 2015 at 12:13 PM, Gareth <academicgareth at gmail.com> wrote:
> Hi DB experts,
>
> I'm using mysql now and have general log like:
>
> 1397 Query SELECT 1
>
> 1397 Query SELECT xxxxxxxx
>
> 1397 Query UPDATE xxxxxxxx
>
> 1397 Query COMMIT
>
> 1397 Query ROLLBACK
>
> I found there is always a 'SELECT 1' before the real queries, and a 'COMMIT'
> and 'ROLLBACK' after. I know 'SELECT 1' is the lowest-cost check for the
> DB's availability and 'COMMIT' is for persistence. But why is a
> 'ROLLBACK' here? Is this 'ROLLBACK' the behaviour of oslo.db or
> SQLAlchemy?
>
>
>
> --
> Gareth
>
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang at freenode
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me
> and I'll donate $1 or ?1 to an open organization you specify.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From nbogdanov at mirantis.com  Wed Sep 16 14:23:42 2015
From: nbogdanov at mirantis.com (Nikolay Bogdanov)
Date: Wed, 16 Sep 2015 17:23:42 +0300
Subject: [openstack-dev] [Fuel] Changes to Fuel UI development process
Message-ID: <CACMiN9QAznznYynHA3dGr5TyFSRSsGz7PiWDm9yWye4AEbKnEA@mail.gmail.com>

Colleagues,

Please be informed about the following changes in Fuel development process
that might affect you:


   1. UI unit and functional tests have been moved from Casper to Intern
   [1] (webdriver & selenium) for the sake of simplifying test creation and
   debugging.
   This brings the following advantages compared to the previous
   procedure:
   - a Nailgun developer can watch their tests execute
      right in the browser (in the previous phantomjs-based implementation,
      screenshots had to be taken)
      - Intern introduces an additional layer of abstraction - pages - which
      lets tests avoid depending deeply on the markup and use page methods
      instead
   2. A new *voting* CI job, verify-fuel-web-ui, has been created. It runs all
   UI tests (unit, functional and linting) and captures screenshots of the
   browser window in case of failure. The existing verify-fuel-web now runs
   only python tests.
   3. Functional test coverage has been increased compared to the
   previous level. Test coverage also now becomes an acceptance criterion
   for all bug fixes and new functionality.
   4. One can run UI functional tests locally by executing ./run_tests.sh
   --tests path/to/test_file and watch all the checks in Firefox.


[1] The Intern: http://theintern.github.io/intern/#what-is-intern
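The page-object idea in item 1 is language-agnostic; here is a minimal Python sketch with a stubbed driver (Intern tests themselves are JavaScript, and all names below are illustrative):

```python
# Stand-in for a webdriver session; real Intern tests drive a browser.
class FakeDriver:
    def __init__(self):
        self.fields = {}

    def type_into(self, selector, text):
        self.fields[selector] = text

    def text_of(self, selector):
        return self.fields.get(selector, "")

class LoginPage:
    """Page object: tests call methods; only the page knows the markup."""
    USERNAME = "input[name=username]"    # markup details live here...

    def __init__(self, driver):
        self.driver = driver

    def set_username(self, name):        # ...so tests survive markup changes
        self.driver.type_into(self.USERNAME, name)

    def username(self):
        return self.driver.text_of(self.USERNAME)

page = LoginPage(FakeDriver())
page.set_username("admin")
print(page.username())  # admin
```

If the markup changes, only the page object's selectors need updating, not every test that logs in.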


Thank you,
Nick.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/48490e1a/attachment.html>

From e0ne at e0ne.info  Wed Sep 16 14:33:20 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 16 Sep 2015 17:33:20 +0300
Subject: [openstack-dev] Cinder as generic volume manager
In-Reply-To: <55A515BC.80600@redhat.com>
References: <559BD342.8050804@redhat.com> <559EC129.3020602@hp.com>
 <D1C44100.203A2%Tomoki.Sekiyama@hds.com> <559EF35D.6050303@hp.com>
 <55A515BC.80600@redhat.com>
Message-ID: <CAGocpaECRVqzJ4vMG41WHh6bQyf8MT6qk1Q6Lr8VY=XAyBvgTw@mail.gmail.com>

Jan, all,

I've started work on this task, aiming to complete it in Mitaka. Here is an
early draft spec [1] and a PoC [2].

[1] https://review.openstack.org/224124
[2] https://review.openstack.org/223851

Regards,
Ivan Kolodyazhny,
Web Developer,
http://blog.e0ne.info/,
http://notacash.com/,
http://kharkivpy.org.ua/

On Tue, Jul 14, 2015 at 4:59 PM, Jan Safranek <jsafrane at redhat.com> wrote:

> On 07/10/2015 12:19 AM, Walter A. Boring IV wrote:
> > On 07/09/2015 12:21 PM, Tomoki Sekiyama wrote:
> >> Hi all,
> >>
> >> Just FYI, here is a sample script I'm using for testing os-brick which
> >> attaches/detaches the cinder volume to the host using cinderclient and
> >> os-brick:
> >>
> >> https://gist.github.com/tsekiyama/ee56cc0a953368a179f9
> >>
> >> "python attach.py <volume-uuid>" will attach the volume to the executed
> >> host and shows a volume path. When you hit the enter key, the volume is
> >> detached.
> >>
> >> Note this is skipping "reserve" or "start_detaching" APIs so the volume
> >> state is not changed to "Attaching" or "Detaching".
> >>
> >> Regards,
> >> Tomoki
> >
> > Very cool Tomoki.  After chatting with folks in the Cinder IRC channel
> > it looks like we are going to look at going with something more like what
> > your script is doing.   We are most likely going to create a separate
> > command line tool that does this same orchestration, using cinder
> client, a new
> > Cinder API that John Griffith is working on, and os-brick.
>
> Very cool indeed, it looks exactly like as what I need.
>
>
> Jan
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/ffc8cdd5/attachment.html>

From john at johngarbutt.com  Wed Sep 16 14:35:56 2015
From: john at johngarbutt.com (John Garbutt)
Date: Wed, 16 Sep 2015 15:35:56 +0100
Subject: [openstack-dev]  [nova] PTL Candidacy
Message-ID: <CABib2_onSCaExnX_2u0i4AEOd776QGzxFzUf8tZpR9hY0vGZtw@mail.gmail.com>

Hi,

I would like to continue as the Nova PTL for the Mitaka cycle, if you
will have me.

As per the new process, you can see my nomination form here:
http://git.openstack.org/cgit/openstack/election/plain/candidates/mitaka/Nova/John_Garbutt.txt

A quick summary of that...

I have enjoyed serving the Nova community over the last few years, and
I thank you all for these last few months serving the community as PTL.

My employer assures me that, if elected, I can continue to make the
Nova PTL role my full-time job.

I would like to continue to work with all members of the community to
continue to improve the software we are building, improve the user
experience that software enables, and improve the way we are building
it.

Many thanks,
johnthetubaguy


From ikalnitsky at mirantis.com  Wed Sep 16 15:04:59 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Wed, 16 Sep 2015 18:04:59 +0300
Subject: [openstack-dev] [Fuel] [Plugins] Fuel Plugin Builder 3.0.0 released
Message-ID: <CACo6NWBupfZFrnpb0DtxyXTqPvTRwfXg=iLPpSO-GJ2ZqVEWrQ@mail.gmail.com>

Hello fuelers,

I want to announce that FPB (fuel plugin builder) v3.0.0 has been
released on PyPI [1].

The new package version "3.0.0" includes the following features:

- A new `node_roles.yaml` file that allows adding new node roles.
- A new `volumes.yaml` file that allows adding new volumes and/or defining
a "node roles <-> volumes" mapping.
- A new `deployment_tasks.yaml` file that allows declaring pre-, post-
and regular deployment tasks for any node role. Unlike `tasks.yaml`,
these tasks go through the global deployment graph, which makes it possible
to execute a task at any point during deployment, or to overwrite/skip
already existing ones.
- A new `network_roles.yaml` file that allows adding new network roles
and reserving some VIPs, to be processed by the plugin.

Bugfixes:

- Fix execution of the `deploy.sh` deployment script [2]
- Remove the "Origin" field from Ubuntu's `Release` file in order to
reduce the probability of broken apt pinning [3]

Thanks,
Igor

[1]: https://pypi.python.org/pypi/fuel-plugin-builder/3.0.0
[2]: https://bugs.launchpad.net/fuel/+bug/1463441
[3]: https://bugs.launchpad.net/fuel/+bug/1475665


From ozamiatin at mirantis.com  Wed Sep 16 15:24:53 2015
From: ozamiatin at mirantis.com (ozamiatin)
Date: Wed, 16 Sep 2015 18:24:53 +0300
Subject: [openstack-dev] [oslo.messaging][zmq]
Message-ID: <55F989C5.3040200@mirantis.com>

Hi All,

I'm excited to report that today we merged [1] the new zmq driver into
the oslo.messaging master branch.
The driver is not completely done yet, so we are going to continue
developing it on the master branch now.

What we've achieved so far is a passing functional-tests gate (we are
going to turn it on in master [2]).
We also have devstack up and running (almost 80% of tempest tests
passed when I last tested it, as of the last commit to feature/zmq). I need
to check all this after the merge, to ensure that I didn't break anything
while resolving conflicts.

I'm going to put all ongoing tasks on Launchpad and provide some
documentation soon, so anyone is welcome to help develop the new zmq driver.
I also would like to thank Viktor Serhieiev and Doug Royal, who have already
contributed to feature/zmq.

[1] - https://review.openstack.org/#/c/223877
[2] - https://review.openstack.org/#/c/224035

Thanks,
Oleksii



From danehans at cisco.com  Wed Sep 16 15:26:37 2015
From: danehans at cisco.com (Daneyon Hansen (danehans))
Date: Wed, 16 Sep 2015 15:26:37 +0000
Subject: [openstack-dev]  [magnum] Discovery
Message-ID: <D21ED83B.67DF4%danehans@cisco.com>

All,

While implementing the flannel --network-driver for swarm, I have come across an issue that requires feedback from the community. Here is the breakdown of the issue:

  1.  Flannel [1] requires etcd to store network configuration. Meeting this requirement is simple for the kubernetes bay types since kubernetes requires etcd.
  2.  A discovery process is needed for bootstrapping etcd. Magnum implements the public discovery option [2].
  3.  A discovery process is also required to bootstrap a swarm bay type. Again, Magnum implements a publicly hosted (Docker Hub) option [3].
  4.  The Magnum API exposes the discovery_url attribute, which is leveraged by both swarm and etcd discovery.
  5.  Etcd discovery cannot be implemented in swarm bays because discovery_url is associated with swarm's discovery process, not etcd's.

Here are a few options on how to overcome this obstacle:

  1.  Make the discovery_url more specific, for example etcd_discovery_url and swarm_discovery_url. However, this option would needlessly expose both discovery URLs to all bay types.
  2.  Swarm supports etcd as a discovery backend. This would mean discovery is similar for both bay types. With both bay types using the same mechanism for discovery, it will be easier to provide a private discovery option in the future.
  3.  Do not support flannel as a network-driver for k8s bay types. This would require adding support for a different driver that supports multi-host networking such as libnetwork. Note: libnetwork is only implemented in the Docker experimental release: https://github.com/docker/docker/tree/master/experimental.

I lean towards #2, but there may be other options, so feel free to share your thoughts. I would like to obtain feedback from the community before proceeding in a particular direction.

[1] https://github.com/coreos/flannel
[2] https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
[3] https://docs.docker.com/swarm/discovery/
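Under option #2, a single etcd endpoint could back both etcd bootstrap and swarm discovery, so one discovery_url attribute suffices. A tiny sketch of the URL construction (host, port and the "swarm" key-space path are illustrative, not Magnum's actual values):

```python
# Sketch of option #2: if swarm uses etcd as its discovery backend, the
# same etcd endpoint serves both etcd bootstrap and swarm discovery.
# Host, port and path below are illustrative.

def swarm_discovery_url(etcd_host, etcd_port=2379, path="swarm"):
    """Build an etcd-backed discovery URL for swarm agents to join."""
    return "etcd://%s:%d/%s" % (etcd_host, etcd_port, path)

print(swarm_discovery_url("10.0.0.5"))  # etcd://10.0.0.5:2379/swarm
```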

Regards,
Daneyon Hansen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/6ee5e85a/attachment.html>

From ed at leafe.com  Wed Sep 16 15:27:42 2015
From: ed at leafe.com (Ed Leafe)
Date: Wed, 16 Sep 2015 10:27:42 -0500
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <55F9376E.3040403@openstack.org>
References: <55F9376E.3040403@openstack.org>
Message-ID: <7264531A-64F6-41FB-94EC-25796F96B103@leafe.com>

On Sep 16, 2015, at 4:33 AM, Thierry Carrez <thierry at openstack.org> wrote:

> But starting with this cycle, to accommodate the
> Big Tent changes, we grew the team significantly, to the point where I
> think it is healthy to set up a PTL rotation in the team.

I strongly agree, and thank you for your wisdom in recognizing this. You've done us all so much good with your Release Management work to date, and I'm glad that you'll still be involved.

-- Ed Leafe





-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/c2eaed0a/attachment.pgp>

From tzumainn at redhat.com  Wed Sep 16 15:43:02 2015
From: tzumainn at redhat.com (Tzu-Mainn Chen)
Date: Wed, 16 Sep 2015 11:43:02 -0400 (EDT)
Subject: [openstack-dev] [tripleo] overcloud deployment workflow spec
In-Reply-To: <719238379.28663247.1442417879512.JavaMail.zimbra@redhat.com>
Message-ID: <994437978.28665015.1442418182969.JavaMail.zimbra@redhat.com>

Hey all,

I've been working on GUIs that use the TripleO deployment methodology
for a while now.  Recent changes have started to diverge the
CLI-supported deployment workflow from an API-supported workflow.
There is an increased amount of business logic in the CLI that GUIs
cannot access.

To fix this divergence, I've created a spec targeted for mitaka
(https://review.openstack.org/#/c/219754/ ) that proposes to take the
business logic out of the CLI and place it into the tripleo-common
library; and later, create a PoC REST API that allows GUIs to easily
use the TripleO deployment workflow.

What do people think?


Thanks,
Tzu-Mainn Chen


From duncan.thomas at gmail.com  Wed Sep 16 15:49:30 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Wed, 16 Sep 2015 18:49:30 +0300
Subject: [openstack-dev] [Cinder] PTL Candidacy
Message-ID: <CAOyZ2aE=sbqWK2AsfZhd-LgH6bNLGeiOd7jxVx8+tx4cZTZObw@mail.gmail.com>

Hi

I'd like to announce my candidacy for Cinder PTL.

I've been actively involved in Cinder as a core since it was split out of
Nova, and with nova-volume before that. I've been involved in mentoring new
people, reviews, code and community discussions all of that time. I've been
an operator of Cinder in a large public cloud, including being called out
at 4am when something breaks, giving me a great deal of sympathy for
operational matters. Cinder has grown and matured at an impressive rate,
and I now feel the project is at an important decision point about what we
want to be going forward. With that in mind, my main aims as a PTL would be
as follows:

- Make the ideology of Cinder clear: standard features, good discoverability
where universal implementation of a feature isn't possible, and keeping the
tenant experience as clean as possible - the admin experience should also be
as clean as possible without compromising the tenant experience.

- Match our bleeding-edge velocity with our trailing-edge velocity - we
merged a bunch of features that only work in a very, very limited number of
drivers. We need to push implementation of these features as widely as
possible, and where a reasonable generic implementation can be made, we
need to push that as a requirement for adding the feature.

- Stability and quality - our unit test coverage has not improved
significantly in terms of lines of code or quality of tests, and our
tempest coverage has got worse. I suggest that we push for more tempest
tests to go with new features. The reliability and usability of third-party
CI can also be incrementally improved - we've got nearly every driver being
tested now; let's make the test output more useful to developers.

- Communication - Mike demonstrated the great value of clear, regular and
open communication, and I intend to keep building on this example.

- Less bureaucracy that gets in the way - I think that the way we did
prioritisation in Liberty, while a good first attempt, can be improved,
particularly by dropping the review priority of tasks that are blocked
waiting for rework, so that smaller patches can bubble up the priority
list. I'd also like to look at using review priority to encourage good
community behaviour (reviews of other people's code, bug fixes and triage,
test writing, documentation, etc.)

- Finish open work before starting more - we have a large list of
partly-implemented tasks, so we should avoid taking on new work that
doesn't drive these goals forward.




The things I'd like to see finished in the M release:
- Replication, with at least 5 drivers implementing it.
- A smooth upgrade experience - even if we can't get it to zero downtime, I'd
like a well-documented, tested upgrade path and a well-understood list of
work to be finished.
- H/A - I believe we can and should have a Cinder experience where the
failure of any one node does not affect the externals of Cinder, without
requiring Pacemaker.



Thank you for your consideration.

-- 
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/77c84939/attachment.html>

From mbayer at redhat.com  Wed Sep 16 16:03:27 2015
From: mbayer at redhat.com (Mike Bayer)
Date: Wed, 16 Sep 2015 12:03:27 -0400
Subject: [openstack-dev] [oslo.db][sqlalchemy] rollback after commit
In-Reply-To: <CAKA_ueA8asQrLHEXwXiSpek+mOs9AwxkmpaP1r_EtQvzmYPekg@mail.gmail.com>
References: <CAAhuP_8GA1qCxnuGMnt0Ymi9yfVm8QqwY3TO0LFo=Hajzig3Jg@mail.gmail.com>
 <CAKA_ueA8asQrLHEXwXiSpek+mOs9AwxkmpaP1r_EtQvzmYPekg@mail.gmail.com>
Message-ID: <55F992CF.9020101@redhat.com>



On 9/16/15 10:16 AM, Roman Podoliaka wrote:
> Hi Gareth,
>
> Right, the 'SELECT 1' issued at the beginning of every transaction is a
> pessimistic check to detect disconnects early. oslo.db will create a
> new DB connection (as well as invalidate all the existing connections
> to the same DB in the pool) and retry the transaction once [1].
>
> The ROLLBACK you are referring to is issued when a connection is returned
> to the pool. This is a configurable SQLAlchemy feature [2]. The
> reasoning behind it is that all connections are in transactional
> mode by default (there is always an ongoing transaction; you just need
> to do COMMITs) and they are pooled: if we didn't issue a ROLLBACK here,
> someone could return a connection to the pool without
> ending the transaction properly, which can lead to deadlocks
> (DB rows remaining locked) and stale data reads when the very same DB
> connection is checked out from the pool again and used by someone
> else.
>
> As long as you finish all your transactions with either COMMIT or
> ROLLBACK before returning a connection to the pool, these forced
> ROLLBACKs must be cheap, as the RDBMS doesn't have to maintain some
> state bound to this transaction (as it's just begun and you ended the
> previous transaction on this connection). Still, it protects you from
> the cases, when something went wrong and you forgot to end the
> transaction.
So I'll note that the reason this behavior is configurable is that some 
MySQL users specifically complained that these empty ROLLBACKs are 
still expensive; those were users of non-transactional MyISAM schemas, 
though, and it may have been an older version of MySQL. I don't have 
access to current details on this issue.

There are ways we could tailor oslo.db to reduce these ROLLBACK calls; 
we'd turn it off in the connection pool and then use oslo.db-level event 
handlers to run the rollback conditionally, based on the observed state 
of the connection.

However, I'd like to see benchmarking first that illustrates these ROLLBACKs 
are in fact prohibitively expensive.
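To make the reset-on-return behavior concrete, here is a toy, stdlib-only sketch (illustrative only -- this is not oslo.db's or SQLAlchemy's actual code; SQLAlchemy configures the real behavior via the Pool's reset_on_return parameter):

```python
import sqlite3
from queue import Queue

class TinyPool:
    """Toy pool illustrating reset-on-return: every connection gets a
    ROLLBACK when checked back in, so a caller that forgot to COMMIT
    cannot leak an open transaction (locks, stale reads) to the next
    user of that connection."""

    def __init__(self, size=1):
        self._q = Queue()
        for _ in range(size):
            conn = sqlite3.connect(":memory:")
            conn.isolation_level = None  # we manage transactions explicitly
            self._q.put(conn)

    def checkout(self):
        return self._q.get()

    def checkin(self, conn):
        conn.rollback()  # cheap if no transaction is open, protective if one is
        self._q.put(conn)

pool = TinyPool()
c = pool.checkout()
c.execute("CREATE TABLE t (x INTEGER)")   # autocommits
c.execute("BEGIN")
c.execute("INSERT INTO t VALUES (1)")
pool.checkin(c)  # caller "forgot" to COMMIT; pool rolls back

c = pool.checkout()
leaked_rows = c.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(leaked_rows)  # 0 -- the unfinished transaction was discarded
```

As long as callers do end their transactions, that check-in ROLLBACK is a no-op for the database, which is exactly the "must be cheap" case described above.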


>
> Thanks,
> Roman
>
> [1] https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L53-L82
> [2] http://docs.sqlalchemy.org/en/latest/core/pooling.html#sqlalchemy.pool.Pool.params.reset_on_return
>
> On Wed, Sep 16, 2015 at 12:13 PM, Gareth <academicgareth at gmail.com> wrote:
>> Hi DB experts,
>>
>> I'm using MySQL now and have a general log like:
>>
>> 1397 Query SELECT 1
>>
>> 1397 Query SELECT xxxxxxxx
>>
>> 1397 Query UPDATE xxxxxxxx
>>
>> 1397 Query COMMIT
>>
>> 1397 Query ROLLBACK
>>
>> I found there is always a 'SELECT 1' before the real queries, and
>> 'COMMIT' and 'ROLLBACK' after. I know 'SELECT 1' is the lowest-cost
>> check of the DB's availability and 'COMMIT' is for persistence. But why
>> is a 'ROLLBACK' here? Is this 'ROLLBACK' the behaviour of oslo.db or
>> SQLAlchemy?
>>
>>
>>
>> --
>> Gareth
>>
>> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
>> OpenStack contributor, kun_huang at freenode
>> My promise: if you find any spelling or grammar mistakes in my email
>> from Mar 1 2013, notify me
>> and I'll donate $1 or €1 to an open organization you specify.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From john.wood at RACKSPACE.COM  Wed Sep 16 16:21:22 2015
From: john.wood at RACKSPACE.COM (John Wood)
Date: Wed, 16 Sep 2015 16:21:22 +0000
Subject: [openstack-dev] Barbican : Unable to run barbican CURL commands
 after starting/restarting barbican using the service file
In-Reply-To: <CAM0q7HBe5fs5p3s5VtUZivzUDPOXvOBDcssdvz5CnHqwAcFTMA@mail.gmail.com>
Message-ID: <D21F00F5.3FC60%john.wood@rackspace.com>

Hello Asha,

Please make sure you are using the latest barbican-api-paste.ini file in your /etc/barbican directory, and then try again.

Thanks,
John

From: Asha Seshagiri <asha.seshagiri at gmail.com<mailto:asha.seshagiri at gmail.com>>
Date: Wednesday, September 16, 2015 at 12:39 AM
To: openstack-dev <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Cc: John Vrbanac <john.vrbanac at RACKSPACE.COM<mailto:john.vrbanac at RACKSPACE.COM>>, Douglas Mendizabal <douglas.mendizabal at RACKSPACE.COM<mailto:douglas.mendizabal at RACKSPACE.COM>>, John Wood <john.wood at rackspace.com<mailto:john.wood at rackspace.com>>, "Reller, Nathan S." <Nathan.Reller at jhuapl.edu<mailto:Nathan.Reller at jhuapl.edu>>
Subject: Barbican : Unable to run barbican CURL commands after starting/restarting barbican using the service file

Hi All,

I am unable to run barbican cURL commands after starting/restarting barbican using the service file.

I used the command below to restart the barbican service:
(wheel)[root at controller-01 service]# systemctl restart barbican-api.service

When I tried executing the command to create a secret, I did not get any response from the server.

(wheel)[root at controller-01 service]# ps -ef | grep barbican
barbican  1104     1  0 22:56 ?        00:00:00 /opt/barbican/bin/uwsgi --master --emperor /etc/barbican/vassals
barbican  1105  1104  0 22:56 ?        00:00:00 /opt/barbican/bin/uwsgi --master --emperor /etc/barbican/vassals
barbican  1106  1105  0 22:56 ?        00:00:00 /opt/barbican/bin/uwsgi --ini barbican-api.ini
root      3195 28132  0 23:03 pts/0    00:00:00 grep --color=auto barbican

Checked the status of the barbican-api.service file and got the following response :

(wheel)[root at controller-01 service]# systemctl status  barbican-api.service -l
barbican-api.service - Barbican Key Management API server
   Loaded: loaded (/usr/lib/systemd/system/barbican-api.service; enabled)
   Active: active (running) since Tue 2015-09-15 22:56:12 UTC; 2min 17s ago
 Main PID: 1104 (uwsgi)
   Status: "The Emperor is governing 1 vassals"
   CGroup: /system.slice/barbican-api.service
           ├─1104 /opt/barbican/bin/uwsgi --master --emperor /etc/barbican/vassals
           ├─1105 /opt/barbican/bin/uwsgi --master --emperor /etc/barbican/vassals
           └─1106 /opt/barbican/bin/uwsgi --ini barbican-api.ini

Sep 15 22:58:30 controller-01 uwsgi[1104]: APP, pipeline[-1], global_conf)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 458, in get_context
Sep 15 22:58:30 controller-01 uwsgi[1104]: section)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 517, in _context_from_explicit
Sep 15 22:58:30 controller-01 uwsgi[1104]: value = import_string(found_expr)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 22, in import_string
Sep 15 22:58:30 controller-01 uwsgi[1104]: return pkg_resources.EntryPoint.parse("x=" + s).load(False)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/pkg_resources.py", line 2265, in load
Sep 15 22:58:30 controller-01 uwsgi[1104]: raise ImportError("%r has no %r attribute" % (entry,attr))
Sep 15 22:58:30 controller-01 uwsgi[1104]: ImportError: <module 'barbican.api.app' from '/opt/barbican/lib/python2.7/site-packages/barbican/api/app.pyc'> has no 'create_main_app_v1' attribute


Please find the contents of the barbican-api.service file:

[Unit]
Description=Barbican Key Management API server
After=syslog.target network.target

[Service]
Type=simple
NotifyAccess=all
User=barbican
KillSignal=SIGINT
ExecStart={{ barbican_virtualenv_path }}/bin/uwsgi --master --emperor /etc/barbican/vassals
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Even though barbican appears to be running, we are unable to run the cURL commands. I would like to know whether the "ImportError: <module 'barbican.api.app' from '/opt/barbican/lib/python2.7/site-packages/barbican/api/app.pyc'> has no 'create_main_app_v1' attribute" error is the cause of the cURL commands failing.

How do we debug this ImportError? I also suspect that the barbican restart is not actually successful, because when I run "/bin/uwsgi --master --emperor /etc/barbican/vassals" manually, the cURL commands do work.

Any help would be highly appreciated.
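For context, the failing lookup can be sketched as follows. This is an illustrative stand-in only (the function name resolve_factory is mine, and a stdlib module is used in place of barbican.api.app): PasteDeploy resolves an 'app_factory = module.path:attribute' line from the paste .ini by importing the module and looking up the attribute, and it raises exactly this ImportError when the .ini names a factory the installed code does not define -- which is why an out-of-date barbican-api-paste.ini produces the traceback above.

```python
import importlib

def resolve_factory(spec):
    """Roughly what PasteDeploy's import_string does with a
    'module.path:attribute' specification: import the module, then
    look up the attribute, raising ImportError when the installed
    code doesn't provide the named factory."""
    module_path, _, attr = spec.partition(":")
    module = importlib.import_module(module_path)
    if not hasattr(module, attr):
        raise ImportError("%r has no %r attribute" % (module, attr))
    return getattr(module, attr)

# Stand-in demo using a stdlib module instead of barbican.api.app:
found = resolve_factory("json:dumps").__name__      # attribute exists
try:
    resolve_factory("json:create_main_app_v1")      # mimics a stale .ini
    mismatch = False
except ImportError:
    mismatch = True
print(found, mismatch)  # dumps True
```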


--
Thanks and Regards,
Asha Seshagiri
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/f071b082/attachment.html>

From aschultz at mirantis.com  Wed Sep 16 16:53:10 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 16 Sep 2015 11:53:10 -0500
Subject: [openstack-dev] [puppet] service default value functions
Message-ID: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>

Hey puppet folks,

Based on the meeting yesterday[0], I had proposed creating a parser
function called is_service_default[1] to validate whether a variable matches
our agreed-upon value of '<SERVICE DEFAULT>'.  This got me thinking about
how we might avoid using an arbitrary string, which cannot easily be
validated, throughout the Puppet modules.  So I tested creating another
puppet function named service_default[2] to replace the use of
'<SERVICE DEFAULT>' throughout all the puppet modules.  My tests seemed to
indicate that you can use a parser function as a parameter default for
classes.

I wanted to send a note to gather comments on the second function.
When we originally discussed what to use to designate a service's
default configuration, I really didn't like using an arbitrary string since
it's hard to parse and validate. I think leveraging a function might be
better since it is something that can be validated via tests and a syntax
checker.  Thoughts?
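The design point -- a validatable sentinel instead of a magic string -- can be sketched outside Puppet too. Python is used here purely for illustration; the actual parser functions live in the reviews below, and the names here mirror them rather than quote them:

```python
class _ServiceDefault(object):
    """Sentinel playing the role of '<SERVICE DEFAULT>': one shared
    object that can be tested by identity, unlike an arbitrary string
    where any typo ('<SERVICE_DEFAULT>', '<service default>', ...)
    silently fails to match."""
    def __repr__(self):
        return "<SERVICE DEFAULT>"

SERVICE_DEFAULT = _ServiceDefault()

def is_service_default(value):
    # Identity check: no string parsing, no typo risk.
    return value is SERVICE_DEFAULT

results = (
    is_service_default(SERVICE_DEFAULT),      # the real sentinel
    is_service_default("<SERVICE DEFAULT>"),  # a look-alike string
)
print(results)  # (True, False)
```

The second check failing is the whole point: a tool can verify uses of the sentinel, while a stray string can only be grepped for.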


Thanks,
-Alex

[0]
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
[1] https://review.openstack.org/#/c/223672
[2] https://review.openstack.org/#/c/224187
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/a6a032b4/attachment.html>

From clint at fewbar.com  Wed Sep 16 17:12:48 2015
From: clint at fewbar.com (Clint Byrum)
Date: Wed, 16 Sep 2015 10:12:48 -0700
Subject: [openstack-dev] [oslo.messaging][zmq]
In-Reply-To: <55F989C5.3040200@mirantis.com>
References: <55F989C5.3040200@mirantis.com>
Message-ID: <1442423523-sup-8723@fewbar.com>

Excerpts from ozamiatin's message of 2015-09-16 08:24:53 -0700:
> Hi All,
> 
> I'm excited to report that today we have merged [1] new zmq driver into 
> oslo.messaging master branch.
> The driver is not completely done yet, so we are going to continue 
> developing it on the master branch now.
> 
> What we've reached for now is passing functional tests gate (we are 
> going to turn it on in the master [2]).
> And we also have devstack up and running (almost 80% of tempest tests 
> passed when I last tested it on feature/zmq). I need to check all this 
> again after the merge, to ensure that I didn't break something while 
> resolving conflicts.
> 
> I'm going to put all ongoing tasks on launchpad and provide some 
> documentation soon, so anyone is welcome to develop new zmq driver.
> I also would like to thank Viktor Serhieiev and Doug Royal who already 
> contributed to feature/zmq.
> 
> [1] - https://review.openstack.org/#/c/223877
> [2] - https://review.openstack.org/#/c/224035
> 


Oleksii, this is great news. Thanks so much for your hard work on this!


From dklyle0 at gmail.com  Wed Sep 16 17:19:38 2015
From: dklyle0 at gmail.com (David Lyle)
Date: Wed, 16 Sep 2015 11:19:38 -0600
Subject: [openstack-dev]  [horizon] Mitaka Summit Topic Proposals
Message-ID: <CAFFhzB6zUrCE+XM8QfKfE7Cqt6b8200_ET4BaW6qxYg2APMfNw@mail.gmail.com>

It's time to express your topic proposals for the Mitaka summit. In
order to provide an accessible tool for all geographies, we'll be
compiling session topics on an etherpad again for Mitaka.

https://etherpad.openstack.org/p/horizon-mitaka-summit

Please add your ideas there and follow the brief guidelines at the top.

Thanks,
David


From eharney at redhat.com  Wed Sep 16 17:20:07 2015
From: eharney at redhat.com (Eric Harney)
Date: Wed, 16 Sep 2015 13:20:07 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <D21DF6C7.589BB%xing.yang@emc.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com>
Message-ID: <55F9A4C7.8060901@redhat.com>

On 09/15/2015 04:56 PM, yang, xing wrote:
> Hi Eric,
> 
> Regarding the default max_over_subscription_ratio, I initially set the
> default to 1 while working on oversubscription, and changed it to 2 after
> getting review comments.  After it was merged, I got feedback that 2 is
> too small and 20 is more appropriate, so I changed it to 20.  So it looks
> like we can't find a default value that makes everyone happy.
> 

I'm curious about how this is used in real-world deployments.  Are we
making the assumption that the admin has some external monitoring
configured to send alarms if the storage is nearing capacity?

> If we can decide what is the best default value for LVM, we can change the
> default max_over_subscription_ratio, but we should also allow other
> drivers to specify a different config option if a different default value
> is more appropriate for them.

This sounds like a good idea, I'm just not sure how to structure it yet
without creating a very confusing set of config options.
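To make the concern concrete, here is roughly how the oversubscription ratio changes the headroom a 100 GB volume group appears to have. This is a simplified sketch of the capacity math, not Cinder's actual scheduler code:

```python
def virtual_free_gb(total_gb, provisioned_gb, ratio):
    """Illustrative thin-provisioning headroom: the backend advertises
    total * max_over_subscription_ratio as provisionable capacity, so a
    high ratio lets allocations far exceed physical space and relies on
    monitoring to avoid actually filling the pool."""
    return total_gb * ratio - provisioned_gb

# 100 GB volume group with 90 GB of volumes already provisioned:
conservative = virtual_free_gb(total_gb=100, provisioned_gb=90, ratio=1.0)
aggressive = virtual_free_gb(total_gb=100, provisioned_gb=90, ratio=20.0)
print(conservative, aggressive)  # 10.0 1910.0
```

With ratio=1.0 the scheduler stops at physical capacity, which matches thick-LVM semantics; with ratio=20 the same small volume group will happily accept another ~1.9 TB of volumes, which is exactly the "fill up and grind to a halt" risk for small reference deployments.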


> On 9/15/15, 1:38 PM, "Eric Harney" <eharney at redhat.com> wrote:
> 
>> On 09/15/2015 01:00 PM, Chris Friesen wrote:
>>> I'm currently trying to work around an issue where activating LVM
>>> snapshots created through cinder takes potentially a long time.
>>> (Linearly related to the amount of data that differs between the
>>> original volume and the snapshot.)  On one system I tested it took about
>>> one minute per 25GB of data, so the worst-case boot delay can become
>>> significant.
>>>
>>> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
>>> not intended to be kept around indefinitely, they were supposed to be
>>> used only until the backup was taken and then deleted.  He recommends
>>> using thin provisioning for long-lived snapshots due to differences in
>>> how the metadata is maintained.  (He also says he's heard reports of
>>> volume activation taking half an hour, which is clearly crazy when
>>> instances are waiting to access their volumes.)
>>>
>>> Given the above, is there any reason why we couldn't make thin
>>> provisioning the default?
>>>
>>
>>
>> My intention is to move toward thin-provisioned LVM as the default -- it
>> is definitely better suited to our use of LVM.  Previously this was less
>> easy, since some older Ubuntu platforms didn't support it, but in
>> Liberty we added the ability to specify lvm_type = "auto" [1] to use
>> thin if it is supported on the platform.
>>
>> The other issue preventing using thin by default is that we default the
>> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
>> the reference implementation, since it means that people who deploy
>> Cinder LVM on smaller storage configurations can easily fill up their
>> volume group and have things grind to a halt.  I think we want something
>> closer to the semantics of thick LVM for the default case.
>>
>> We haven't thought through a reasonable migration strategy for how to
>> handle that.  I'm not sure we can change the default oversubscription
>> ratio without breaking deployments using other drivers.  (Maybe I'm
>> wrong about this?)
>>
>> If we sort out that issue, I don't see any reason we can't switch over
>> in Mitaka.
>>
>> [1] https://review.openstack.org/#/c/104653/
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 



From rhallise at redhat.com  Wed Sep 16 17:20:57 2015
From: rhallise at redhat.com (Ryan Hallisey)
Date: Wed, 16 Sep 2015 13:20:57 -0400 (EDT)
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAKO+H+LQm2YZHqGcCNH1AWbdpPNunH6qUiMnj=L-1k7jAqkV2A@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
 <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
 <CAKO+H+LQm2YZHqGcCNH1AWbdpPNunH6qUiMnj=L-1k7jAqkV2A@mail.gmail.com>
Message-ID: <1605504506.24919906.1442424057974.JavaMail.zimbra@redhat.com>

> Core reviewers: 
> Please vote +1 if you ARE satisfied with integration with third-party software that is unusable without a license, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes. 
> Please vote -1 if you ARE NOT satisfied with integration with third-party software that is unusable without a license, specifically Cinder volume drivers, Neutron network drivers, and various for-pay distributions of OpenStack and container runtimes. 

> A bit of explanation on your vote might be helpful. 

> My vote is +1. I have already provided my rationale. 

> Regards, 
> -steve 

Sorry I'm a little late with my response.

I think it's good for us to allow outside parties in.  Our community is growing, but in order to get to the next level, we need to be welcoming of more of them.
Furthermore, I think having a defined structure for how outside parties fit in is important so we maintain project organization.
I agree with the ideas that have been listed below. +1 for integration given our set of rules.

-Ryan

> __________________________________________________________________________ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> Both arguments sound valid to me, both have pros and cons. 
> 
> I think it's valuable to look to the experiences of Cinder and Neutron in this area, both of which seem to have the same scenario and have existed much longer than Kolla. From what I know of how these operate, proprietary code is allowed to exist in the mainline so long as a certain set of criteria is met. I'd have to look it up, but I think it mostly comes down to the relevant parties having to "play by the rules", e.g. provide a working CI, help with reviews, attend weekly meetings, etc. If Kolla can look to craft a similar set of criteria for proprietary code down the line, I think it should work well for us. 
> 
> Steve has a good point in that it may be too much overhead to implement a plugin system or similar up front. Instead, we should actively monitor the overhead in terms of reviews and code size that these extra implementations add. Perhaps agree to review it at the end of Mitaka? 
> 
> Given the project is young, I think it can also benefit from the increased usage and exposure from allowing these parties in. I would hope independent contributors would not feel rejected from not being able to use/test with the pieces that need a license. The libre distros will remain #1 for us. 
> 
> So based on the above explanation, I'm +1. 
> 
> -Paul 
> 
> __________________________________________________________________________ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> Given Paul's comments I would agree here as well. I would like to get that 'criteria' required for Kolla to allow this proprietary code into the main repo down as soon as possible though and suggest that we have a bare minimum of being able to gate against it as one of the criteria. 
> 
> As for a plugin system, I also agree with Paul that we should check the overhead of including these other distros and any types needed after we have had time to see if they do introduce any additional overhead. 
> 
> So for the question 'Do we allow code that relies on proprietary packages?' I would vote +1, with the condition that we define the requirements of allowing that code as soon as possible. 
> 
> __________________________________________________________________________ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> I am +1 on the system with the following criteria we already discussed above 
> 
> - Set of defined requirements to adhere to for contributing and maintaining 
> - Set of contributors contributing to and reviewing the changes in Kolla. 
> - Set of maintainers available to connect with if we require any urgent attention to any failures in Kolla due to the code. 
> - CI if possible, we can evaluate the options as we finalize. 
> 
> Since we are on the subject of "OpenStack as a whole", I think OpenStack has evolved better with more operators contributing to the code base, since we need to let the code break to make it robust. This can be very easily observed with Cinder and Neutron especially: the differing nature of implementations has always driven base-source improvements that were never thought of. 
> 
> I agree with Paul that Kolla will benefit more from increased participation from operators who are willing to update and use it. 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From walter.boring at hpe.com  Wed Sep 16 17:28:44 2015
From: walter.boring at hpe.com (Walter A. Boring IV)
Date: Wed, 16 Sep 2015 10:28:44 -0700
Subject: [openstack-dev] [Cinder] PTL Candidacy
Message-ID: <55F9A6CC.2060507@hpe.com>

Cinder community,

I am announcing my candidacy for Cinder PTL for the Mitaka release.

Cinder is a fundamental piece of the puzzle for the success
of OpenStack.  I've been lucky enough to work on Cinder since the
Grizzly release cycle.  The Cinder community has grown every release,
and we've gotten a lot of great features implemented as well as many
new drivers.  We've instituted a baseline requirement for third party
CI, which is critical to the quality of Cinder.  I believe this goes
a long way to proving to deployers that Cinder is dedicated to building
a quality product.

I believe the single best component of the project, is the diverse
community itself.  We have people from all over the world helping
Cinder grow.  We have companies that compete directly with each other,
in the marketplace for customers, working together to solve complex
problems.

I would like to continue to encourage more driver developers to get
involved in Cinder core features.   This is the future of the
community itself and the lifeblood of Cinder.  We also need to get more
active in Nova to ensure that the interactions are stable.

The following is a list of a few areas of focus that I would
like to encourage the community to consider over the next release.

* Solidify all milestone deadlines early in the release
* Iron out the Nova <--> Cinder interactions
* Get active-active c-vol services working
* Get driver bug fixes into previous releases
* Continue the stabilization of the 3rd party CI system.
* Support any efforts to integrate with Ironic


There is always a long list of cool stuff to work on and issues to fix
in Cinder, and the more participation we have with Cinder core the better.
We have a strong and vibrant community and I look forward to working on
Cinder for many releases ahead.

Thank you for considering me.

Walter A. Boring IV (hemna)


From xing.yang at emc.com  Wed Sep 16 17:39:54 2015
From: xing.yang at emc.com (yang, xing)
Date: Wed, 16 Sep 2015 17:39:54 +0000
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55F9A4C7.8060901@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
Message-ID: <D21F2025.58CD7%xing.yang@emc.com>

Hi Eric,

Please see my replies inline below.

Thanks,
Xing


On 9/16/15, 1:20 PM, "Eric Harney" <eharney at redhat.com> wrote:

>On 09/15/2015 04:56 PM, yang, xing wrote:
>> Hi Eric,
>> 
>> Regarding the default max_over_subscription_ratio, I initially set the
>> default to 1 while working on oversubscription, and changed it to 2
>>after
>> getting review comments.  After it was merged, I got feedback that 2 is
>> too small and 20 is more appropriate, so I changed it to 20.  So it
>>looks
>> like we can't find a default value that makes everyone happy.
>> 
>
>I'm curious about how this is used in real-world deployments.  Are we
>making the assumption that the admin has some external monitoring
>configured to send alarms if the storage is nearing capacity?

We can have Ceilometer integration for capacity notifications.  See the
following patches on capacity headroom:

https://review.openstack.org/#/c/170380/
https://review.openstack.org/#/c/206923/

>
>> If we can decide what is the best default value for LVM, we can change
>>the
>> default max_over_subscription_ratio, but we should also allow other
>> drivers to specify a different config option if a different default
>>value
>> is more appropriate for them.
>
>This sounds like a good idea, I'm just not sure how to structure it yet
>without creating a very confusing set of config options.

I'm thinking we could have a prefix with the vendor name for this, and it
would also require documentation by driver maintainers if they are using a
different config option.  I proposed a topic to discuss this at the summit.

>
>
>> On 9/15/15, 1:38 PM, "Eric Harney" <eharney at redhat.com> wrote:
>> 
>>> On 09/15/2015 01:00 PM, Chris Friesen wrote:
>>>> I'm currently trying to work around an issue where activating LVM
>>>> snapshots created through cinder takes potentially a long time.
>>>> (Linearly related to the amount of data that differs between the
>>>> original volume and the snapshot.)  On one system I tested it took
>>>>about
>>>> one minute per 25GB of data, so the worst-case boot delay can become
>>>> significant.
>>>>
>>>> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots
>>>>were
>>>> not intended to be kept around indefinitely, they were supposed to be
>>>> used only until the backup was taken and then deleted.  He recommends
>>>> using thin provisioning for long-lived snapshots due to differences in
>>>> how the metadata is maintained.  (He also says he's heard reports of
>>>> volume activation taking half an hour, which is clearly crazy when
>>>> instances are waiting to access their volumes.)
>>>>
>>>> Given the above, is there any reason why we couldn't make thin
>>>> provisioning the default?
>>>>
>>>
>>>
>>> My intention is to move toward thin-provisioned LVM as the default --
>>>it
>>> is definitely better suited to our use of LVM.  Previously this was
>>>less
>>> easy, since some older Ubuntu platforms didn't support it, but in
>>> Liberty we added the ability to specify lvm_type = "auto" [1] to use
>>> thin if it is supported on the platform.
>>>
>>> The other issue preventing using thin by default is that we default the
>>> max oversubscription ratio to 20.  IMO that isn't a safe thing to do
>>>for
>>> the reference implementation, since it means that people who deploy
>>> Cinder LVM on smaller storage configurations can easily fill up their
>>> volume group and have things grind to a halt.  I think we want something
>>> closer to the semantics of thick LVM for the default case.
>>>
>>> We haven't thought through a reasonable migration strategy for how to
>>> handle that.  I'm not sure we can change the default oversubscription
>>> ratio without breaking deployments using other drivers.  (Maybe I'm
>>> wrong about this?)
>>>
>>> If we sort out that issue, I don't see any reason we can't switch over
>>> in Mitaka.
>>>
>>> [1] https://review.openstack.org/#/c/104653/
>>>
>>> 
>> 
>> 
>> 
>> 
>
>


From sharis at Brocade.com  Wed Sep 16 18:04:08 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Wed, 16 Sep 2015 18:04:08 +0000
Subject: [openstack-dev] [congress] IRC hangout
In-Reply-To: <20150915144437.GD25159@yuggoth.org>
References: <3c5905f3d73949f99aaa2007167b6f05@Hq1wp-exmb11.corp.brocade.com>
 <20150915144437.GD25159@yuggoth.org>
Message-ID: <a605b435b5fd43bfa543fc14dd664108@HQ1WP-EXMB12.corp.brocade.com>

Thanks Jeremy and Tim.

-Shiv


-----Original Message-----
From: Jeremy Stanley [mailto:fungi at yuggoth.org] 
Sent: Tuesday, September 15, 2015 7:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [congress] IRC hangout

On 2015-09-14 17:44:14 +0000 (+0000), Shiv Haris wrote:
> What is the IRC channel where congress folks hang out? I tried
> #openstack-congress on freenode but it seems not to be correct.

https://wiki.openstack.org/wiki/IRC has it listed as #congress
--
Jeremy Stanley

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From harlowja at outlook.com  Wed Sep 16 18:12:47 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 16 Sep 2015 11:12:47 -0700
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
 <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>
Message-ID: <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>

Anyone know about the impact of:

- 
https://mmilinkov.wordpress.com/2015/09/04/jetbrains-lockin-we-told-you-so/

- http://blog.jetbrains.com/blog/2015/09/03/introducing-jetbrains-toolbox/

I'm pretty sure a lot of openstack-devs are using PyCharm, and I wonder 
what this change will mean for those devs.

Kekane, Abhishek wrote:
> Thank you Michal,
>
> Abhishek
>
> -----Original Message-----
> From: Dulko, Michal [mailto:michal.dulko at intel.com]
> Sent: 16 September 2015 15:23
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Pycharm License for OpenStack developers
>
>> From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com]
>> Sent: Wednesday, September 16, 2015 11:20 AM Hi Devs,
>>
>>
>>
>> I am using Pycharm for development and current license is about to expire.
>>
>> Please let me know if anyone has a new license key for the same.
>>
>>
>>
>> Thank you in advance.
>>
>>
>>
>> Abhishek
>
>
> I've applied for the license for OpenStack a moment ago. I'll send an update to the ML once I get a response from JetBrains.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ______________________________________________________________________
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From harm at weites.com  Wed Sep 16 18:22:49 2015
From: harm at weites.com (harm at weites.com)
Date: Wed, 16 Sep 2015 20:22:49 +0200
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
 <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
Message-ID: <0ff2ac38c6044b0f039992b0a1f53ecf@weites.com>

There is an apparent need for having official RHOS supported from
our end, and we just so happen to have the possibility of filling that
need. Should the need arise to support some fancy proprietary
backend system, or even to have Kolla integrate with Oracle Solaris or
something, that need would most probably be backed by a company plus
developer effort. I believe the burden for our current (great) team
would more or less stay the same (e.g. let's assume we don't know anything
about Solaris), so this company should ship in developers to aid their
'wish'. The team effort with these additional developers would indeed grow,
bigtime. Keeping our eyes on these matters feels like a fair solution,
allowing for the additions while guarding the effort they take. Should
Kolla start supporting LXC besides Docker, that would be awesome
(uhm...) - but I honestly don't see a need to be thinking about that
right now; if someone comes up with a spec about it and wants to invest
time and effort, we can at least review it. We shouldn't prepare our
Dockerfiles for such a possibility though, whereas the difference
between RHOS and CentOS is very little. Hence, support is rather easy to
implement.
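To give a feel for how small that difference is, here is a hedged sketch (purely illustrative; the table entries and the `repo_setup_lines` helper are hypothetical, not Kolla's actual build code) of how a build tool might select repo-setup commands per base distro and install type:

```python
# Hypothetical sketch: map a (base distro, install type) pair to the
# repo-setup commands a Dockerfile template would emit. The command
# strings and repo names below are illustrative only.
REPO_SETUP = {
    ("centos", "binary"): [
        "yum -y install epel-release",
        "yum -y install centos-release-openstack",
    ],
    ("rhel", "binary"): [
        "subscription-manager repos --enable openstack-rpms",
    ],
    ("oraclelinux", "binary"): [
        "yum -y install oraclelinux-release-openstack",
    ],
}


def repo_setup_lines(base, install_type):
    """Return the repo-setup commands for a base/type pair.

    Unknown pairs raise, mirroring the idea that each new type must be
    explicitly added (and maintained) by whoever proposes it.
    """
    try:
        return REPO_SETUP[(base, install_type)]
    except KeyError:
        raise ValueError(
            "unsupported base/type: %s/%s" % (base, install_type))
```

Adding a new vendor type under a scheme like this is one more table entry plus its commands, which matches the "few extra lines" scale being debated in this thread.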

The question was whether Kolla wants/should support integrating with 3rd
party tools, and I think we should. There should be rules,
yes. We probably shouldn't be worrying about proprietary stuff that
other projects hardly take seriously (even though drivers have been
accepted)...

Vote: +1

- harmw

Sam Yaple schreef op 2015-09-14 13:44:
> On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke <paul.bourke at oracle.com>
> wrote:
> 
>> On 13/09/15 18:34, Steven Dake (stdake) wrote:
>> 
>>> Response inline.
>>> 
>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>"
>>> <sam at yaple.net<mailto:sam at yaple.net>>
>>> Date: Sunday, September 13, 2015 at 1:35 AM
>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>> Cc: "OpenStack Development Mailing List (not for usage
>>> questions)"
>>> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to
>>> RHOS + RDO types
>>> 
>>> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake)
>>> <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>> Response inline.
>>> 
>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>"
>>> <sam at yaple.net<mailto:sam at yaple.net>>
>>> Date: Saturday, September 12, 2015 at 11:34 PM
>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>> Cc: "OpenStack Development Mailing List (not for usage
>>> questions)"
>>> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to
>>> RHOS + RDO types
>>> 
>>> Sam Yaple
>>> 
>>> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake)
>>> <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>> 
>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>"
>>> <sam at yaple.net<mailto:sam at yaple.net>>
>>> Date: Saturday, September 12, 2015 at 11:01 PM
>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>> Cc: "OpenStack Development Mailing List (not for usage
>>> questions)"
>>> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to
>>> RHOS + RDO types
>>> 
>>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake)
>>> <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>> Hey folks,
>>> 
>>> Sam had asked a reasonable set of questions regarding a patchset:
>>> https://review.openstack.org/#/c/222893/ [1]
>>> 
>>> The purpose of the patchset is to enable both RDO and RHOS as
>>> binary choices on RHEL platforms. I suspect over time,
>>> from-source deployments have the potential to become the norm, but
>>> the business logistics of such a change are going to take some
>>> significant time to sort out.
>>> 
>>> Red Hat has two distros of OpenStack, neither of which is from
>>> source. One is free, called RDO, and the other is paid, called
>>> RHOS. In order to obtain support for RHEL VMs running in an
>>> OpenStack cloud, you must be running on RHOS RPM binaries. You
>>> must also be running on RHEL. It remains to be seen whether Red
>>> Hat will actively support Kolla deployments with a RHEL+RHOS set
>>> of packaging in containers, but my hunch says they will. It is
>>> in Kolla's best interest to implement this model and not make it
>>> hard on Operators, since many of them do indeed want Red Hat's
>>> support structure for their OpenStack deployments.
>>> 
>>> Now to Sam's questions:
>>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many
>>> more do we add? What's our policy on adding a new type?"
>>> 
>>> I'm not immediately clear on how binary fits in. We could
>>> make binary synonymous with the community-supported version (RDO)
>>> while still implementing the binary RHOS version. Note Kolla
>>> does not "support" any distribution or deployment of OpenStack
>>> -- Operators will have to look to their vendors for support.
>>> 
>>> If everything between centos+rdo and rhel+rhos is mostly the same,
>>> then I would think it would make more sense to just use the base
>>> ('rhel' in this case) to branch off any differences in the
>>> templates. This would also allow for the least amount of change
>>> and the most generic implementation of this vendor-specific packaging.
>>> This would also match what we do with oraclelinux: we do not have
>>> a special type for that, and any specifics would be handled by an
>>> if statement around 'oraclelinux' and not some special type.
>>> 
>>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.
>>> RDO also runs on RHEL. I want to enable Red Hat customers to
>>> make a choice to have a supported operating system but not a
>>> supported Cloud environment. The answer here is RHEL + RDO.
>>> This leads to full support down the road if the Operator chooses
>>> to pay Red Hat for it, via an easy transition to RHOS.
>>> 
>>> I am against including vendor specific things like RHOS in Kolla
>>> outright like you are proposing. Suppose another vendor comes
>>> along with a new base and new packages. They are willing to
>>> maintain it, but it's something that no one but their customers
>>> with their licensing can use. This is not something that belongs
>>> in Kolla and I am unsure that it is even appropriate to belong in
>>> OpenStack as a whole. Unless RHEL+RHOS can be used by those that
>>> do not have a license for it, I do not agree with adding it at
>>> all.
>>> 
>>> Sam,
>>> 
>>> Someone stepping up to maintain a completely independent set of
>>> docker images hasn't happened. To date nobody has done that.
>>> If someone were to make that offer, and it was a significant
>>> change, I think the community as a whole would have to evaluate
>>> such a drastic change. That would certainly increase our
>>> implementation and maintenance burden, which we don't want to
>>> do. I don't think what you propose would be in the best
>>> interest of the Kolla project, but I'd have to see the patch set
>>> to evaluate the scenario appropriately.
>>> 
>>> What we are talking about is 5 additional lines to enable
>>> RHEL+RHOS specific repositories, which is not very onerous.
>>> 
>>> The fact that you can't use it directly has little bearing on
>>> whether it's valid technology for OpenStack. There are already
>>> two well-defined historical precedents for non-licensed, unusable
>>> integration in OpenStack. Cinder has 55 [1] Volume drivers which
>>> they SUPPORT. At least 80% of them are completely
>>> proprietary hardware which in reality is mostly just software
>>> which, without a license to it, would be impossible to use. There
>>> are 41 [2] Neutron drivers registered on the Neutron driver page;
>>> almost the entirety require proprietary licenses to what amounts
>>> to integration to access proprietary software. The OpenStack
>>> preferred license is ASL for a reason -- to be business
>>> friendly. Licensed software has a place in the world of
>>> OpenStack, even if it only serves as an integration point, which the
>>> proposed patch does. We are consistent with community values on
>>> this point, or I wouldn't have bothered proposing the patch.
>>> 
>>> We want to encourage people to use Kolla for proprietary
>>> solutions if they so choose. This is how support manifests,
>>> which increases the strength of the Kolla project. The presence
>>> of support increases the likelihood that Kolla will be adopted by
>>> Operators. If you're asking the Operators to maintain a fork for
>>> those 5 RHOS repo lines, that seems unreasonable.
>>> 
>>> I'd like to hear other Core Reviewer opinions on this matter
>>> and will hold a majority vote on this thread as to whether we will
>>> facilitate integration with third party software such as the
>>> Cinder Block Drivers, the Neutron Network drivers, and various
>>> for-pay versions of OpenStack such as RHOS. I'd like all core
>>> reviewers to weigh in, please. Without a complete vote it will be
>>> hard to gauge what the Kolla community really wants.
>>> 
>>> Core reviewers:
>>> Please vote +1 if you ARE satisfied with integration with third
>>> party unusable without a license software, specifically Cinder
>>> volume drivers, Neutron network drivers, and various for-pay
>>> distributions of OpenStack and container runtimes.
>>> Please vote -1 if you ARE NOT satisfied with integration with
>>> third party unusable without a license software, specifically
>>> Cinder volume drivers, Neutron network drivers, and various for
>>> pay distributions of OpenStack and container runtimes.
>>> 
>>> A bit of explanation on your vote might be helpful.
>>> 
>>> My vote is +1. I have already provided my rationale.
>>> 
>>> Regards,
>>> -steve
>>> 
>>> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix [2]
>>> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>>> [3]
>>> 
>>> I appreciate you calling a vote so early. But I haven't had my
>>> questions answered yet enough to even vote on the matter at hand.
>>> 
>>> In this situation the closest thing we have to a plugin type
>>> system as Cinder or Neutron does is our header/footer system. What
>>> you are proposing is integrating a proprietary solution into the
>>> core of Kolla. Those Cinder and Neutron plugins have external
>>> components and those external components are not baked into the
>>> project.
>>> 
>>> What happens if and when the RHOS packages require different
>>> tweaks in the various containers? What if it requires changes to
>>> the Ansible playbooks? It begins to balloon out past 5 lines of
>>> code.
>>> 
>>> Unfortunately, the community _won't_ get to vote on whether or not
>>> to implement those changes because RHOS is already in place.
>>> That's why I am asking the questions now as this _right_ _now_ is
>>> the significant change you are talking about, regardless of the
>>> lines of code.
>>> 
>>> So the question is not whether we are going to integrate 3rd
>>> party plugins, but whether we are going to allow companies to
>>> build proprietary products in the Kolla repo. If we allow
>>> RHEL+RHOS then we would need to allow another distro+company
>>> packaging and potential Ansible tweaks to get it to work for them.
>>> 
>>> If you really want to do what Cinder and Neutron do, we need a
>>> better system for injecting code. That would be much closer to the
>>> plugins that the other projects have.
>>> 
>>> I'd like to have a discussion about this rather than immediately
>>> call for a vote which is why I asked you to raise this question in
>>> a public forum in the first place.
>>> 
>>> Sam,
>>> 
>>> While a true code injection system might be interesting and would
>>> be more parallel with the plugin model used in cinder and neutron
>>> (and to some degree nova), those various systems didn't begin
>>> that way. Their driver code at one point was completely
>>> integrated. Only after 2-3 years was the code broken into a
>>> fully injectable state. I think that is an awfully high bar to
>>> set to sort out the design ahead of time. One of the reasons
>>> Neutron has taken so long to mature is the Neutron community
>>> attempted to do plugins at too early a stage, which created big
>>> gaps in unit and functional tests. A more appropriate design
>>> would be for that pattern to emerge from the system over time as
>>> people begin to adapt various distro tech to Kolla. If you
>>> look at the patch in gerrit, there is one clear pattern, "Setup
>>> distro repos", which at some point in the future could be made
>>> injectable, much as headers and footers are today.
>>> 
>>> As for building proprietary products in the Kolla repository, the
>>> license is ASL, which means it is inherently not proprietary. I
>>> am fine with the code base integrating with proprietary software
>>> as long as the license terms are met; someone has to pay the
>>> mortgages of the thousands of OpenStack developers. We should
>>> encourage growth of OpenStack, and one of the ways for that to
>>> happen is to be business friendly. This translates into first
>>> knowing the world is increasingly adopting open source
>>> methodologies and facilitating that transition, and second
>>> accepting that the world has a whole slew of proprietary software
>>> that already exists today and requires integration.
>>> 
>>> Nonetheless, we have a difference of opinion on this matter, and
>>> I want this work to merge prior to rc1. Since this is a project
>>> policy decision and not a technical issue, it makes sense to put
>>> it to a wider vote to either unblock or kill the work. It would
>>> be a shame if we rejected all driver and supported distro
>>> integration because we as a community take an anti-business stance
>>> on our policies, but I'll live by what the community decides.
>>> This is not a decision either you or I may dictate, which is why it
>>> has been put to a vote.
>>> 
>>> Regards
>>> -steve
>>> 
>>> For oracle linux, I'd like to keep RDO for oracle linux and
>>> from source on oracle linux as choices. RDO also runs on oracle
>>> linux. Perhaps the patch set needs some later work here to
>>> address this point in more detail, but as is, "binary" covers
>>> oracle linux.
>>> 
>>> Perhaps what we should do is get rid of the binary type
>>> entirely. Ubuntu doesn't really have a binary type; they have
>>> a cloudarchive type, so binary doesn't make a lot of sense.
>>> Since Ubuntu to my knowledge doesn't have two distributions of
>>> OpenStack, the same logic wouldn't apply to providing a full
>>> support onramp for Ubuntu customers. Oracle doesn't provide a
>>> binary type either; their binary type is really RDO.
>>> 
>>> The binary packages for Ubuntu are _packaged_ by the cloudarchive
>>> team. But when OpenStack collides with an LTS
>>> release (Icehouse and 14.04 was the last one), you do not add a new
>>> repo because the packages are in the main Ubuntu repo.
>>> 
>>> Debian provides its own packages as well. I do not want a type
>>> name per distro. 'binary' catches all packaged OpenStack things by
>>> a distro.
>>> 
>>> FWIW I never liked the transition away from rdo in the repo names
>>> to binary. I guess I should have -1'ed those reviews back
>>> then, but I think it's time to either revisit the decision or
>>> compromise that binary and rdo mean the same thing in a centos and
>>> rhel world.
>>> 
>>> Regards
>>> -steve
>>> 
>>> Since we implement multiple bases, some of which are not RPM
>>> based, it doesn't make much sense to me to have rhel and rdo as a
>>> type which is why we removed rdo in the first place in favor of
>>> the more generic 'binary'.
>>> 
>>> As such, the implied second question, "How many more do we
>>> add?", sort of sounds like "how many do we support?". The
>>> answer to the second question is none -- again, the Kolla
>>> community does not support any deployment of OpenStack. To the
>>> question as posed, how many we add, the answer is it is really up
>>> to community members willing to implement and maintain the
>>> work. In this case, I have personally stepped up to implement
>>> RHOS and maintain it going forward.
>>> 
>>> Our policy on adding a new type could be simple or onerous. I
>>> prefer simple. If someone is willing to write the code and
>>> maintain it so that it stays in good working order, I see no harm
>>> in it remaining in tree. I don't suspect there will be a lot
>>> of people interested in adding multiple distributions for a
>>> particular operating system. To my knowledge, and I could be
>>> incorrect, Red Hat is the only OpenStack company with a paid and
>>> community version of OpenStack available simultaneously, and the
>>> paid version is only available on RHEL. I think the risk of RPM
>>> based distributions plus their type count spiraling out of
>>> manageability is low. Even if the risk were high, I'd prefer
>>> to keep an open mind to facilitate an increase in diversity in our
>>> community (which is already fantastically diverse, btw ;)
>>> 
>>> I am open to questions, comments or concerns. Please feel free
>>> to voice them.
>>> 
>>> Regards,
>>> -steve
>>> 
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [4]
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> [5]
>> 
>> Both arguments sound valid to me, both have pros and cons.
>> 
>> I think it's valuable to look to the experiences of Cinder and
>> Neutron in this area, both of which seem to have the same scenario
>> and have existed much longer than Kolla. From what I know of how
>> these operate, proprietary code is allowed to exist in the mainline
>> so long as a certain set of criteria is met. I'd have to look it up,
>> but I think it mostly comes down to the relevant parties having to "play
>> by the rules", e.g. provide a working CI, help with reviews, attend
>> weekly meetings, etc. If Kolla can craft a similar set of
>> criteria for proprietary code down the line, I think it should work
>> well for us.
>> 
>> Steve has a good point in that it may be too much overhead to
>> implement a plugin system or similar up front. Instead, we should
>> actively monitor the overhead in terms of reviews and code size that
>> these extra implementations add. Perhaps agree to review it at the
>> end of Mitaka?
>> 
>> Given the project is young, I think it can also benefit from the
>> increased usage and exposure from allowing these parties in. I would
>> hope independent contributors would not feel rejected from not being
>> able to use/test with the pieces that need a license. The libre
>> distros will remain #1 for us.
>> 
>> So based on the above explanation, I'm +1.
>> 
>> -Paul
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [4]
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> [5]
> 
> Given Paul's comments I would agree here as well. I would like to get
> that 'criteria' required for Kolla to allow this proprietary code into
> the main repo nailed down as soon as possible, though, and suggest that
> we have a bare minimum of being able to gate against it as one of the
> a bare minimum of being able to gate against it as one of the
> criteria.
> 
> As for a plugin system, I also agree with Paul that we should check
> the overhead of including these other distros and any types needed
> after we have had time to see if they do introduce any additional
> overhead.
> 
> So for the question 'Do we allow code that relies on proprietary
> packages?' I would vote +1, with the condition that we define the
> requirements of allowing that code as soon as possible.
> 
> 
> Links:
> ------
> [1] https://review.openstack.org/#/c/222893/
> [2] https://wiki.openstack.org/wiki/CinderSupportMatrix
> [3] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
> [4] 
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> [5] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From annegentle at justwriteclick.com  Wed Sep 16 18:26:18 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Wed, 16 Sep 2015 13:26:18 -0500
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
In-Reply-To: <CA+odVQHvDfnbfw32-O68WeBmecpp9_QeL7q=fQFTzaThsQe9jQ@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <CA+odVQHvDfnbfw32-O68WeBmecpp9_QeL7q=fQFTzaThsQe9jQ@mail.gmail.com>
Message-ID: <CAD0KtVFUOisyg+r+BbD=Ybc0ipH3sDYkq1u+2Yzj6Z1gF1fmyw@mail.gmail.com>

On Tue, Sep 15, 2015 at 11:32 AM, Christopher Aedo <doc at aedo.net> wrote:

> On Tue, Sep 15, 2015 at 7:50 AM, Anne Gentle
> <annegentle at justwriteclick.com> wrote:
> > Hi all,
> >
> > What can we do to make the cross-project meeting more helpful and useful
> for
> > cross-project communications? I started with a proposal to move it to a
> > different time, which morphed into an idea to alternate times. But,
> knowing
> > that we need to layer communications I wonder if we should troubleshoot
> > cross-project communications further? These are the current ways
> > cross-project communications happen:
> >
> > 1. The weekly meeting in IRC
> > 2. The cross-project specs and reviewing those
> > 3. Direct connections between team members
> > 4. Cross-project talks at the Summits
>
> 5. This mailing list
>
> >
> > What are some of the problems with each layer?
> >
> > 1. weekly meeting: time zones, global reach, size of cross-project
> concerns
> > due to multiple projects being affected, another meeting for PTLs to
> attend
> > and pay attention to
> > 2. specs: don't seem to get much attention unless they're brought up at
> > weekly meeting, finding owners for the work needing to be done in a spec
> is
> > difficult since each project team has its own priorities
> > 3. direct communications: decisions from these comms are difficult to
> then
> > communicate more widely, it's difficult to get time with busy PTLs
> > 4. Summits: only happens twice a year, decisions made then need to be
> widely
> > communicated
>
> 5. There's tremendous volume on the mailing list, and it can be very
> difficult to stay on top of all that traffic.
>
> >
> > I'm sure there are more details and problems I'm missing -- feel free to
> > fill in as needed.
> >
> > Lastly, what suggestions do you have for solving problems with any of
> these
> > layers?
>
> Unless I missed it, I'm really not sure why the mailing list didn't
> make the list here?  My take at least is that we should be
> coordinating with each other through the mailing list when real-time
> isn't possible (due to time zone issues, etc.). At the very least, it
> keeps people from holding on to information or issues until the next
> weekly meeting, or for a few months until the next mid-cycle or
> summit.
>
>
My apologies, that's called "blindness to the current medium" or some such.
:) Yes, absolutely we use this mailing list a lot for communication.


> I personally would like to see more coordination happening on the ML,
> and would be curious to hear opinions on how that can be improved.
> Maybe a tag on the subject line to draw attention in this case makes
> this a little easier, since we are by nature talking about issues that
> span all projects?  [cross-project] rather than [all]?
>
>
I agree, a tag like that sounds like a good starting point.

Thanks,
Anne


> -Christopher
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/1386d8be/attachment.html>

From annegentle at justwriteclick.com  Wed Sep 16 18:30:03 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Wed, 16 Sep 2015 13:30:03 -0500
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
In-Reply-To: <55F979A9.9040206@openstack.org>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <55F979A9.9040206@openstack.org>
Message-ID: <CAD0KtVHyeuFTTJbMFcyvPD5ALZa3Pi5agxtoHXtFz-8MVP0L0A@mail.gmail.com>

On Wed, Sep 16, 2015 at 9:16 AM, Thierry Carrez <thierry at openstack.org>
wrote:

> Anne Gentle wrote:
> > [...]
> > What are some of the problems with each layer?
> >
> > 1. weekly meeting: time zones, global reach, size of cross-project
> > concerns due to multiple projects being affected, another meeting for
> > PTLs to attend and pay attention to
>
> A lot of PTLs (or liaisons/lieutenants) skip the meeting, or will only
> attend when they have something to ask. Their time is precious and most
> of the time the meeting is not relevant for them, so why bother? You
> have a few usual suspects attending all of them, but those people are
> cross-project-aware already so those are not the people that would
> benefit the most from the meeting.
>
> This partial attendance makes the meeting completely useless as a way to
> disseminate information. It makes the meeting mostly useless as a way to
> get general approval on cross-project specs.
>
> The meeting still is very useful IMHO to have more direct discussions on
> hot topics. So a ML discussion is flagged for direct discussion on IRC
> and we have a time slot already booked for that.
>
> > 2. specs: don't seem to get much attention unless they're brought up at
> > weekly meeting, finding owners for the work needing to be done in a spec
> > is difficult since each project team has its own priorities
>
> Right, it's difficult to get them reviewed, and getting consensus and TC
> rubberstamp on them is also a bit of a thankless job. Basically you're
> trying to make sure everyone is OK with what you propose and most people
> ignore you (and then would be unhappy when they are impacted by the
> implementation a month later). I don't think that system works well and
> I'd prefer we change it.
>
> > 3. direct communications: decisions from these comms are difficult to
> > then communicate more widely, it's difficult to get time with busy PTLs
> > 4. Summits: only happens twice a year, decisions made then need to be
> > widely communicated
> >
> > I'm sure there are more details and problems I'm missing -- feel free to
> > fill in as needed.
> >
> > Lastly, what suggestions do you have for solving problems with any of
> > these layers?
>
> I'm starting to think we need to overhaul the whole concept of
> cross-project initiatives. The current system where an individual drives
> a specific spec and goes through all the hoops to expose it to the rest
> of the community is not really working. The current model doesn't
> support big overall development cycle goals either, since there is no
> team to implement those.
>

Completely agree, this is my observation as well from the service catalog
improvement work. While the keystone team is crucial, so many other teams
are affected. And I don't have all the key skills to implement the vision,
nor do I want to be a spec writer who can't implement, ya know? It's a
tough one.


>
> Just brainstorming out loud, maybe we need to have a base team of people
> committed to drive such initiatives to completion, a team that
> individuals could leverage when they have a cross-project idea, a team
> that could define a few cycle goals and actively push them during the
> cycle.
>
>
Or, to dig into this further, continue along the lines of the TC specialty
teams we've set up? We ran out of time a few TC meetings ago to dive into
solutions here, so I'm glad we can continue the conversation.

I'm sure existing cross-project teams have ideas too, liaisons and the like
may be matrixed somehow? We'll still need accountability and matching skill
sets for tasks.

Anne


> Maybe cross-project initiatives are too important to be left to the
> energy of an individual and rely on random weekly meetings to make
> progress. They might need a clear team to own them.
>
> --
> Thierry Carrez (ttx)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/1a85dcbe/attachment.html>

From chris.friesen at windriver.com  Wed Sep 16 18:35:42 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 16 Sep 2015 12:35:42 -0600
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
 <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>
 <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>
Message-ID: <55F9B67E.1000406@windriver.com>

On 09/16/2015 12:12 PM, Joshua Harlow wrote:
> Anyone know about the impact of:
>
> - https://mmilinkov.wordpress.com/2015/09/04/jetbrains-lockin-we-told-you-so/
>
> - http://blog.jetbrains.com/blog/2015/09/03/introducing-jetbrains-toolbox/
>
> I'm pretty sure a lot of openstack-devs are using pycharms, and wonder what this
> change will mean for those devs?

The jetbrains blog entry doesn't say anything about the community edition, but 
in the comments they say that it's going to stay around.

Chris


From morgan.fainberg at gmail.com  Wed Sep 16 18:35:46 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Wed, 16 Sep 2015 11:35:46 -0700
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
 <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>
 <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>
Message-ID: <CAGnj6augsF6DQ9bhuQGdTfLP_PJ_hudKhtPR-uiFbPL0_jfTLA@mail.gmail.com>

On Wed, Sep 16, 2015 at 11:12 AM, Joshua Harlow <harlowja at outlook.com>
wrote:

> Anyone know about the impact of:
>
> -
> https://mmilinkov.wordpress.com/2015/09/04/jetbrains-lockin-we-told-you-so/
>
> - http://blog.jetbrains.com/blog/2015/09/03/introducing-jetbrains-toolbox/
>
> I'm pretty sure a lot of openstack-devs are using pycharms, and wonder
> what this change will mean for those devs?


I have historically purchased my own license (because I believed in
supporting the companies that produce the tools, even though there was an
oss-project license that I didn't need to pay for). I am evaluating if I
wish to continue with jetbrains based on this change or not. They have said
they are evaluating the feedback - I'm willing to see what the end result
will be.

There are other IDE options out there for python and I may consider those.
I haven't run out my license, and I am not sure this is enough to change my POV
that pycharm is the best option at the moment for my workflow. It was for
other software I historically purchased (business suites/photo editing) but
those filled a different space.

How much impact will this have on pure upstream developers? Very
little, unless jetbrains ceases the opensource project license.

Just my $0.02.

--Morgan

From melwittt at gmail.com  Wed Sep 16 18:40:28 2015
From: melwittt at gmail.com (melanie witt)
Date: Wed, 16 Sep 2015 11:40:28 -0700
Subject: [openstack-dev] [Nova] Design Summit Topics for Nova
In-Reply-To: <CABib2_pVxCtF=0hCGtZzg18OmMRv1LNXeHwmdow9vWx+Sw7HMg@mail.gmail.com>
References: <CABib2_pVxCtF=0hCGtZzg18OmMRv1LNXeHwmdow9vWx+Sw7HMg@mail.gmail.com>
Message-ID: <DFB24FBA-F45C-446E-96DE-F05993154BC6@gmail.com>

On Sep 11, 2015, at 10:15, John Garbutt <john at johngarbutt.com> wrote:

> To make it easier to know who submitted what, we are going to try out
> google forms for the submissions:
> http://goo.gl/forms/D2Qk8XGhZ6
> 
> If that does not work for you, let me know, and I can see what can be done.

Today I was informed that google forms are blocked in China [1]; I wanted to mention it here so we can consider an alternate way to collect submissions from those who might not be able to access the form.

-melanie (irc: melwitt)

[1] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-15-20.02.log.html#l-102






From sharis at Brocade.com  Wed Sep 16 19:04:30 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Wed, 16 Sep 2015 19:04:30 +0000
Subject: [openstack-dev]  [Congress] CLI equivalent
Message-ID: <ef6b462e3f024c0bbe28bc864ced945b@HQ1WP-EXMB12.corp.brocade.com>

Do we have a CLI way of doing the equivalent of:

$ curl -X GET localhost:1789/v1/policies/classification/tables/error/rows

as described in the tutorial:

https://github.com/openstack/congress/blob/master/doc/source/tutorial-tenant-sharing.rst#listing-policy-violations

-Shiv

From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Thursday, September 10, 2015 8:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] Ending feature freeze

Hi all,

We're now finished with feature freeze.  We have our first release candidate and the stable/liberty branch.  So master is once again open for new features.  Couple of things to note:

1. Documentation.  We should also look through the docs and update them.  Documentation is really important.  There's one doc patch not yet merged, so be sure to pull that down before editing.  That patch officially deprecates a number of API calls that don't make sense for the new distributed architecture.  If you find places where we don't mention the deprecation, please fix that.

https://review.openstack.org/#/c/220707/

2. Bugs.  We should still all be manually testing, looking for bugs, and fixing them.  This will be true especially as other projects change their clients, which as we've seen can break our datasource drivers.

All bug fixes first go into master, and then we cherry-pick to stable/liberty.  Once you've patched a bug on master and it's been merged, you'll create another change for your bug-fix and push it to review.  Then one of the cores will +2/+1 it (usually without needing another formal round of reviews).  Here's the procedure.

// pull down the latest changes for master
$ git checkout master
$ git pull

// create a local branch for stable/liberty and switch to it
$ git checkout origin/stable/liberty -b stable/liberty

// cherry-pick your change from master onto the local stable/liberty
// The -x records the original <sha1 from master> in the commit msg
$ git cherry-pick -x <sha1 from master>

// Push to review and specify the stable/liberty branch.
// Notice in gerrit that the branch is stable/liberty, not master
$ git review stable/liberty

// Once your change to stable/liberty gets merged, fetch all the new
// changes.

// switch to local version of stable/liberty
$ git checkout stable/liberty

// fetch all the new changes to all the branches
$ git fetch origin

// update your local branch
$ git rebase origin/stable/liberty

Tim






From john.griffith8 at gmail.com  Wed Sep 16 19:06:18 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Wed, 16 Sep 2015 13:06:18 -0600
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <CAGnj6augsF6DQ9bhuQGdTfLP_PJ_hudKhtPR-uiFbPL0_jfTLA@mail.gmail.com>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
 <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>
 <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>
 <CAGnj6augsF6DQ9bhuQGdTfLP_PJ_hudKhtPR-uiFbPL0_jfTLA@mail.gmail.com>
Message-ID: <CAPWkaSXOEX7H99zikScXJX47YpJ_3xrJhZbtupM2-Jy3qK+nWg@mail.gmail.com>

On Wed, Sep 16, 2015 at 12:35 PM, Morgan Fainberg <morgan.fainberg at gmail.com
> wrote:

>
>
> On Wed, Sep 16, 2015 at 11:12 AM, Joshua Harlow <harlowja at outlook.com>
> wrote:
>
>> Anyone know about the impact of:
>>
>> -
>> https://mmilinkov.wordpress.com/2015/09/04/jetbrains-lockin-we-told-you-so/
>>
>> -
>> http://blog.jetbrains.com/blog/2015/09/03/introducing-jetbrains-toolbox/
>>
>> I'm pretty sure a lot of openstack-devs are using pycharms, and wonder
>> what this change will mean for those devs?
>
>
> I have historically purchased my own license (because I believed in
> supporting the companies that produce the tools, even though there was an
> oss-project license that I didn't need to pay for).
>
Total tangent, but props on purchasing a license to support something you
use on a regular basis!


> I am evaluating if I wish to continue with jetbrains based on this change
> or not. They have said they are evaluating the feedback - I'm willing to
> see what the end result will be.
>
> There are other IDE options out there for python and I may consider those.
> I haven't run out my license, and I am not sure this is enough to change my POV
> that pycharm is the best option at the moment for my workflow. It was for
> other software I historically purchased (business suites/photo editing) but
> those filled a different space.
>
> How much impact will this have on pure upstream developers? Very
> little, unless jetbrains ceases the opensource project license.
>
> Just my $0.02.
>
> --Morgan
>
>
>
>

From dannchoi at cisco.com  Wed Sep 16 19:39:01 2015
From: dannchoi at cisco.com (Danny Choi (dannchoi))
Date: Wed, 16 Sep 2015 19:39:01 +0000
Subject: [openstack-dev] [keystone] is all CLI commands supported in V3?
Message-ID: <D21F3D92.DB20%dannchoi@cisco.com>

Hi,

I'm running keystone V3.

It does not seem to support all CLI commands.

E.g. There is no ?subnet create? command available.

Is this expected?

How to create a subnet in this case?

Thanks,
Danny

From emilien at redhat.com  Wed Sep 16 19:39:47 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 16 Sep 2015 15:39:47 -0400
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
Message-ID: <55F9C583.60806@redhat.com>



On 09/16/2015 12:53 PM, Alex Schultz wrote:
> Hey puppet folks,
> 
> Based on the meeting yesterday[0], I had proposed creating a parser
> function called is_service_default[1] to validate if a variable matched
> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
> about how we might avoid using the arbitrary string, which cannot easily
> be validated, throughout the puppet code.  So I tested creating another
> puppet function named service_default[2] to replace the use of '<SERVICE
> DEFAULT>' throughout all the puppet modules.  My tests seemed to
> indicate that you can use a parser function as parameter default for
> classes. 
> 
> I wanted to send a note to gather comments around the second function. 
> When we originally discussed what to use to designate for a service's
> default configuration, I really didn't like using an arbitrary string
> since it's hard to parse and validate. I think leveraging a function
> might be better since it is something that can be validated via tests
> and a syntax checker.  Thoughts?

Let me add your attempt to make it work in puppet-cinder:
https://review.openstack.org/#/c/224277

I like the proposal, +1.
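
For readers outside puppet: the proposal amounts to replacing a magic-string
sentinel with a pair of functions that produce and test it, which tooling can
then validate. A rough Python analogy (the function names mirror the proposed
reviews; everything else here is illustrative, not the actual puppet code):

```python
# The sentinel value the puppet modules agreed on for "don't set this
# option; let the service's own default apply".
_SERVICE_DEFAULT = '<SERVICE DEFAULT>'

def service_default():
    """Produce the agreed-upon 'use the service default' sentinel."""
    return _SERVICE_DEFAULT

def is_service_default(value):
    """Check whether a value means 'leave the option at the service default'."""
    return value == _SERVICE_DEFAULT

def render_option(name, value=service_default()):
    """Render a config line, or nothing if left at the service default."""
    if is_service_default(value):
        return None
    return '%s=%s' % (name, value)
```

The point of routing everything through the two functions is that a typo like
`'<SERVICE DEFALT>'` becomes a NameError/syntax-check failure instead of a
silently mismatched string.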

> 
> Thanks,
> -Alex
> 
> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
> [1] https://review.openstack.org/#/c/223672
> [2] https://review.openstack.org/#/c/224187
> 
> 
> 

-- 
Emilien Macchi


From caboucha at cisco.com  Wed Sep 16 19:45:20 2015
From: caboucha at cisco.com (Carol Bouchard (caboucha))
Date: Wed, 16 Sep 2015 19:45:20 +0000
Subject: [openstack-dev] [keystone] is all CLI commands supported in V3?
In-Reply-To: <D21F3D92.DB20%dannchoi@cisco.com>
References: <D21F3D92.DB20%dannchoi@cisco.com>
Message-ID: <b9aa218938234739901073b033b30790@XCH-ALN-005.cisco.com>

Danny:

Not sure it's the same thing but I've been doing the following to create a subnet (for example):

neutron net-create --tenant-id cf0a9274022540f9bc5b65a932778cd4 net-carol1
neutron subnet-create --tenant-id cf0a9274022540f9bc5b65a932778cd4 --gateway 172.17.16.1 --ip-version 4 --enable-dhcp --name subnet-carol1 net-carol1 172.17.16.0/20

To get the tenant information used in this command, I had to switch to:

openstack --os-username=admin --os-password=nova project list

But you'll need to do some install first.
http://ronaldbradford.com/blog/moving-to-openstackclient-cli-2015-04-20/

Carol

From: Danny Choi (dannchoi)
Sent: Wednesday, September 16, 2015 3:39 PM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [keystone] is all CLI commands supported in V3?

Hi,

I'm running keystone V3.

It does not seem to support all CLI commands.

E.g. There is no "subnet create" command available.

Is this expected?

How to create a subnet in this case?

Thanks,
Danny

From ayip at vmware.com  Wed Sep 16 19:47:57 2015
From: ayip at vmware.com (Alex Yip)
Date: Wed, 16 Sep 2015 19:47:57 +0000
Subject: [openstack-dev] [Congress] CLI equivalent
In-Reply-To: <ef6b462e3f024c0bbe28bc864ced945b@HQ1WP-EXMB12.corp.brocade.com>
References: <ef6b462e3f024c0bbe28bc864ced945b@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <1442433064831.38322@vmware.com>

Try this:

openstack congress policy row list classification error



________________________________
From: Shiv Haris <sharis at Brocade.com>
Sent: Wednesday, September 16, 2015 12:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] CLI equivalent

Do we have a CLI way of doing the equivalent of:

$ curl -X GET localhost:1789/v1/policies/classification/tables/error/rows

as described in the tutorial:

https://github.com/openstack/congress/blob/master/doc/source/tutorial-tenant-sharing.rst#listing-policy-violations<https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_congress_blob_master_doc_source_tutorial-2Dtenant-2Dsharing.rst-23listing-2Dpolicy-2Dviolations&d=BQMGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=djA1lFdIf0--GIJ_8gr44Q&m=m3vnP788yf3Iil8q67Kfx9ViGERr356Hb7b2KBSss9M&s=PUH7xM0t0Uy3ovTTmks2NWmKbdfY_90-EJsXIoNvSEQ&e=>

-Shiv

From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Thursday, September 10, 2015 8:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] Ending feature freeze

Hi all,

We're now finished with feature freeze.  We have our first release candidate and the stable/liberty branch.  So master is once again open for new features.  Couple of things to note:

1. Documentation.  We should also look through the docs and update them.  Documentation is really important.  There's one doc patch not yet merged, so be sure to pull that down before editing.  That patch officially deprecates a number of API calls that don't make sense for the new distributed architecture.  If you find places where we don't mention the deprecation, please fix that.

https://review.openstack.org/#/c/220707/

2. Bugs.  We should still all be manually testing, looking for bugs, and fixing them.  This will be true especially as other projects change their clients, which as we've seen can break our datasource drivers.

All bug fixes first go into master, and then we cherry-pick to stable/liberty.  Once you've patched a bug on master and it's been merged, you'll create another change for your bug-fix and push it to review.  Then one of the cores will +2/+1 it (usually without needing another formal round of reviews).  Here's the procedure.

// pull down the latest changes for master
$ git checkout master
$ git pull

// create a local branch for stable/liberty and switch to it
$ git checkout origin/stable/liberty -b stable/liberty

// cherry-pick your change from master onto the local stable/liberty
// The -x records the original <sha1 from master> in the commit msg
$ git cherry-pick -x <sha1 from master>

// Push to review and specify the stable/liberty branch.
// Notice in gerrit that the branch is stable/liberty, not master
$ git review stable/liberty

// Once your change to stable/liberty gets merged, fetch all the new
// changes.

// switch to local version of stable/liberty
$ git checkout stable/liberty

// fetch all the new changes to all the branches
$ git fetch origin

// update your local branch
$ git rebase origin/stable/liberty

Tim






From zigo at debian.org  Wed Sep 16 20:01:33 2015
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 16 Sep 2015 22:01:33 +0200
Subject: [openstack-dev] [relmgt] PTL non-candidacy
In-Reply-To: <55F9376E.3040403@openstack.org>
References: <55F9376E.3040403@openstack.org>
Message-ID: <55F9CA9D.2060408@debian.org>

On 09/16/2015 11:33 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> I have been handling release management for OpenStack since the
> beginning of this story, well before Release Management was a program or
> a project team needing a proper elected PTL. Until recently it was
> largely a one-man job. But starting with this cycle, to accommodate the
> Big Tent changes, we grew the team significantly, to the point where I
> think it is healthy to set up a PTL rotation in the team.
> 
> I generally think it's a good thing to change the PTL for a team from
> time to time. That allows different perspectives, skills and focus to be
> brought to a project team. That lets you take a step back. That allows
> to recognize the efforts and leadership of other members, which is
> difficult if you hold on the throne. So I decided to put my foot where
> my mouth is and apply those principles to my own team.
> 
> That doesn't mean I won't be handling release management for Mitaka, or
> that I won't ever be Release Management PTL again -- it's just that
> someone else will take the PTL hat for the next cycle, drive the effort
> and be the most visible ambassador of the team.
> 
> Cheers,

If we are switching from you to Doug, we'll be switching from an awesome
person to another also awesome one. Thanks for all the work.

Cheers,

Thomas



From cp16net at gmail.com  Wed Sep 16 20:03:35 2015
From: cp16net at gmail.com (Craig Vyvial)
Date: Wed, 16 Sep 2015 20:03:35 +0000
Subject: [openstack-dev] [Trove] PTL Candidacy
Message-ID: <CAOK58XR8X7MMRXgj4EKJ7ZtQfPNDxgWuqJHs6XMLpQmZhakH1A@mail.gmail.com>

Hello,

My name is Craig Vyvial and I'd like to run for Mitaka Trove PTL. I've been
working on Trove for about 4 years and a core member for about 2 years.
Over this time, Trove has continued to evolve and the community has
grown. My passion is to help make sure that Trove continues this trend and
to guide Trove to be a world-class database as a service for OpenStack.

Trove started out just provisioning a single MySQL instance and has grown to a
total of 12 datastores to date, with a few datastores supporting clustering.
With this kind of growth, I think there are a few areas we should focus on
during Mitaka.

* Add CI tests for other datastores with third party CI systems. With so
many new datastores available within Trove, we need to make sure that we
stabilize tests on multiple datastores.

* Advance the features of Trove for all datastores.

* Simplify the installation of Trove that includes install, upgrades,
building guest images, and management operations.

* Continue the trend of growing the trove community.

As PTL of Trove, I will help the project move forward by looking to the
community for input and making sure we take steps toward making OpenStack a
better platform as well as Trove the ideal solution for users looking for a
database as a service solution.


Thanks,

- Craig Vyvial

From Ramki_Krishnan at Dell.com  Wed Sep 16 20:09:32 2015
From: Ramki_Krishnan at Dell.com (Ramki_Krishnan at Dell.com)
Date: Wed, 16 Sep 2015 15:09:32 -0500
Subject: [openstack-dev] [Congress] PTL candidacy
In-Reply-To: <CAJjxPABLW+BBLnRqKaikW0ZL2X5Z5Nwj-C8A9GtgXWy1hHmNbA@mail.gmail.com>
References: <CAJjxPABLW+BBLnRqKaikW0ZL2X5Z5Nwj-C8A9GtgXWy1hHmNbA@mail.gmail.com>
Message-ID: <DF91EBC32A031943A8B78D81D40122BC0404D4FB5E@AUSX7MCPC103.AMER.DELL.COM>

+1 and looking forward to seeing you in Tokyo.

Thanks,
Ramki

From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Tuesday, September 15, 2015 1:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] PTL candidacy

Hi all,

I'm writing to announce my candidacy for Congress PTL for the Mitaka cycle.  I'm excited at the prospect of continuing the development of our community, our code base, and our integrations with other projects.

This past cycle has been exciting in that we saw several new, consistent contributors, who actively pushed code, submitted reviews, wrote specs, and participated in the mid-cycle meet-up.  Additionally, our integration with the rest of the OpenStack ecosystem improved with our move to running tempest tests in the gate instead of manually or with our own CI.  The code base matured as well, as we rounded out some of the features we added near the end of the Kilo cycle.  We also began making the most significant architectural change in the project's history, in an effort to meet our high-availability and API throughput targets.

I'm looking forward to the Mitaka cycle.  My highest priority for the code base is completing the architectural changes that we began in Liberty.  These changes are undoubtedly the right way forward for production use cases, but it is equally important that we make Congress easy to use and understand for both new developers and new end users.  I also plan to further our integration with the OpenStack ecosystem by better utilizing the plugin architectures that are available (e.g. devstack and tempest).  I will also work to begin (or continue) dialogues with other projects that might benefit from consuming Congress.  Finally I'm excited to continue working with our newest project members, helping them toward becoming core contributors.

See you all in Tokyo!
Tim


From ERANR at il.ibm.com  Wed Sep 16 20:19:18 2015
From: ERANR at il.ibm.com (Eran Rom)
Date: Wed, 16 Sep 2015 23:19:18 +0300
Subject: [openstack-dev] [storlets] Review requests suggestion
Message-ID: <OFAF5DC7D8.6A8D3CF1-ONC2257EC2.006F4467-C2257EC2.006FA199@il.ibm.com>

Hi All,
I suggest that until we have functional tests in place, each commit
message should state that:
(1) for a code change, the system tests were executed successfully with
the change;
(2) for an installation-related change, the installation was tested with
the change.

I further suggest that if the commit message does not include this, the
reviewer can assume that the committer forgot to do these checks and -1.
Opinions?
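
A commit message following the suggested convention might look like this
(the subject line and wording are entirely illustrative):

```
Fix object metadata handling in the storlet gateway

The system tests were executed successfully with this change.
```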

From duncan.thomas at gmail.com  Wed Sep 16 20:25:18 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Wed, 16 Sep 2015 23:25:18 +0300
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <D21F2025.58CD7%xing.yang@emc.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
Message-ID: <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>

On 16 Sep 2015 20:42, "yang, xing" <xing.yang at emc.com> wrote:

> On 9/16/15, 1:20 PM, "Eric Harney" <eharney at redhat.com> wrote:

> >This sounds like a good idea, I'm just not sure how to structure it yet
> >without creating a very confusing set of config options.
>
> I'm thinking we could have a prefix with vendor name for this and it also
> requires documentation by driver maintainers if they are using a different
> config option.  I proposed a topic to discuss about this at the summit.

We already have per-backend config values in cinder.conf. I'm not sure how
the config code will need to be  structured to achieve it, but ideally I'd
like a single config option that can be:

(i) set in the default section if desired,
(ii) overridden in the per-driver section, and (iii) have a default set in
each driver.

I don't think oslo.config lets us do (iii) yet though.
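
The (i)/(ii)/(iii) precedence being asked for can be sketched independently of
oslo.config; a minimal Python model of the intended lookup order (illustrative
only, not oslo.config's actual behavior, and the option name below is made up):

```python
def resolve_option(name, backend, conf, driver_default):
    """Resolve a config value with the precedence discussed above.

    per-backend section > [DEFAULT] section > the driver's own default.
    `conf` is a dict mapping section name -> {option: value}.
    """
    # (ii) a value set in the backend's own section wins
    if name in conf.get(backend, {}):
        return conf[backend][name]
    # (i) otherwise fall back to a value set in [DEFAULT]
    if name in conf.get('DEFAULT', {}):
        return conf['DEFAULT'][name]
    # (iii) finally, use the default baked into the driver
    return driver_default
```

Eric's point below is that today's behavior does not match this model:
some [DEFAULT] values never reach per-driver sections at all.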

From cody at puppetlabs.com  Wed Sep 16 20:35:47 2015
From: cody at puppetlabs.com (Cody Herriges)
Date: Wed, 16 Sep 2015 13:35:47 -0700
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <55F9C583.60806@redhat.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <55F9C583.60806@redhat.com>
Message-ID: <55F9D2A3.8010006@puppetlabs.com>

Emilien Macchi wrote:
> 
> On 09/16/2015 12:53 PM, Alex Schultz wrote:
>> Hey puppet folks,
>>
>> Based on the meeting yesterday[0], I had proposed creating a parser
>> function called is_service_default[1] to validate if a variable matched
>> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
>> about how we might avoid using the arbitrary string, which cannot easily
>> be validated, throughout the puppet code.  So I tested creating another
>> puppet function named service_default[2] to replace the use of '<SERVICE
>> DEFAULT>' throughout all the puppet modules.  My tests seemed to
>> indicate that you can use a parser function as parameter default for
>> classes. 
>>
>> I wanted to send a note to gather comments around the second function. 
>> When we originally discussed what to use to designate for a service's
>> default configuration, I really didn't like using an arbitrary string
>> since it's hard to parse and validate. I think leveraging a function
>> might be better since it is something that can be validated via tests
>> and a syntax checker.  Thoughts?
> 
> Let me add your attempt to make it work in puppet-cinder:
> https://review.openstack.org/#/c/224277
> 
> I like the proposal, +1.
> 

Hunter,

Do you know offhand what kind of overhead is generated by this?


-- 
Cody


From eharney at redhat.com  Wed Sep 16 20:43:30 2015
From: eharney at redhat.com (Eric Harney)
Date: Wed, 16 Sep 2015 16:43:30 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
Message-ID: <55F9D472.5000505@redhat.com>

On 09/16/2015 04:25 PM, Duncan Thomas wrote:
> On 16 Sep 2015 20:42, "yang, xing" <xing.yang at emc.com> wrote:
> 
>> On 9/16/15, 1:20 PM, "Eric Harney" <eharney at redhat.com> wrote:
> 
>>> This sounds like a good idea, I'm just not sure how to structure it yet
>>> without creating a very confusing set of config options.
>>
>> I'm thinking we could have a prefix with vendor name for this and it also
>> requires documentation by driver maintainers if they are using a different
>> config option.  I proposed a topic to discuss about this at the summit.
> 
> We already have per-backend config values in cinder.conf. I'm not sure how
> the config code will need to be  structured to achieve it, but ideally I'd
> like a single config option that can be:
> 
> (i) set in the default section if desired,
> (ii) overridden in the per-driver section, and (iii) have a default set in
> each driver.
> 
> I don't think oslo.config lets us do (iii) yet though.
> 

I think there may be other issues to sort through to do that.
Currently, at least some options set in [DEFAULT] don't apply to
per-driver sections, and require you to set them in the driver section
as well.

If we keep that behavior (which I think is broken, personally), then
trying to do option (iii) may be pretty confusing, because the deployer
won't know which of the global vs. driver defaults are actually going to
be applied.



From cody at puppetlabs.com  Wed Sep 16 20:58:30 2015
From: cody at puppetlabs.com (Cody Herriges)
Date: Wed, 16 Sep 2015 13:58:30 -0700
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <CAGnj6ave7EFDQkaFmZWDVTLOE0DQgkTksqh2QLqJe0aGkCXBpQ@mail.gmail.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
 <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
 <87oah44qtx.fsf@s390.unix4.net>
 <CAGnj6ave7EFDQkaFmZWDVTLOE0DQgkTksqh2QLqJe0aGkCXBpQ@mail.gmail.com>
Message-ID: <55F9D7F6.2000604@puppetlabs.com>

I wrote my first composite namevar type a few years ago, and all the
magic is basically a single block of code inside the type...

https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L145-L169

It basically boils down to these three things:

* Pick your namevars
(https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L49-L64)
* Pick a delimiter
  - Personally I'd use @ here since we are talking about domains
* Build your self.title_patterns method, accounting for delimited names
and arbitrary names.

While it looks like the README never got updated, the java_ks example
supports both meaningful titles and arbitrary ones.

java_ks { 'activemq_puppetca_keystore':
  ensure       => latest,
  name         => 'puppetca',
  certificate  => '/etc/puppet/ssl/certs/ca.pem',
  target       => '/etc/activemq/broker.ks',
  password     => 'puppet',
  trustcacerts => true,
}

java_ks { 'broker.example.com:/etc/activemq/broker.ks':
  ensure      => latest,
  certificate =>
'/etc/puppet/ssl/certs/broker.example.com.pe-internal-broker.pem',
  private_key =>
'/etc/puppet/ssl/private_keys/broker.example.com.pe-internal-broker.pem',
  password    => 'puppet',
}

You'll notice the first being an arbitrary title and the second
utilizing a ":" as a delimiter and omitting the name and target parameters.

Another code example can be found in the package type.

https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/package.rb#L268-L291.

-- 
Cody

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 931 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/73c3b7f1/attachment.pgp>

From sgregson at mirantis.com  Wed Sep 16 21:22:02 2015
From: sgregson at mirantis.com (Sheena Gregson)
Date: Wed, 16 Sep 2015 16:22:02 -0500
Subject: [openstack-dev] [Fuel] Nominate Olga Gusarenko for fuel-docs
	core
In-Reply-To: <CAK2oe+JOz0+bZutji6jNeqqqJN7rXY6R0tfGWk7rAKG0B1HKBQ@mail.gmail.com>
References: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>
 <CAM0pNLPiuyycwSU+572wz0ycEr3jbR3wnTUn2k=dAorhfDvA0w@mail.gmail.com>
 <CAK2oe+JOz0+bZutji6jNeqqqJN7rXY6R0tfGWk7rAKG0B1HKBQ@mail.gmail.com>
Message-ID: <7e65f8d3e04579ba78d3c14bdea4dcb2@mail.gmail.com>

+1 Thanks for being a great docs contributor and community member.



*From:* Alexander Adamov [mailto:aadamov at mirantis.com]
*Sent:* Monday, September 14, 2015 5:44 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev at lists.openstack.org>
*Cc:* Olga Gusarenko <ogusarenko at mirantis.com>
*Subject:* Re: [openstack-dev] [Fuel] Nominate Olga Gusarenko for fuel-docs
core



+1

Nice!)



Alexander



On Fri, Sep 11, 2015 at 8:19 PM, Dmitry Borodaenko <dborodaenko at mirantis.com>
wrote:

+1

Great work Olga!



On Fri, Sep 11, 2015, 11:09 Irina Povolotskaya <ipovolotskaya at mirantis.com>
wrote:

Fuelers,



I'd like to nominate Olga Gusarenko for the fuel-docs-core.



She has been doing great work and made a great contribution

into Fuel documentation:

http://stackalytics.com/?user_id=ogusarenko&release=all&project_type=all&module=fuel-docs

It's high time to grant her core reviewer's rights in fuel-docs.

Core reviewer approval process definition:
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess



-- 

Best regards,


Irina













__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/9d63d703/attachment.html>

From dborodaenko at mirantis.com  Wed Sep 16 21:28:59 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Wed, 16 Sep 2015 14:28:59 -0700
Subject: [openstack-dev] [Fuel] Nominate Olga Gusarenko for fuel-docs
	core
In-Reply-To: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>
References: <CAFY49iD2U+NkvgtjrWOHorSty_Rf3K6_-vqbZ0CNjH92UfDv6g@mail.gmail.com>
Message-ID: <CAM0pNLMoTfdds-N6mXpeeE360YopScqbgbW3e-hUhPT3oysKvA@mail.gmail.com>

5 days and no objections: welcome to Fuel Docs core reviewers! Keep up
the great work!

On Fri, Sep 11, 2015 at 11:07 AM, Irina Povolotskaya
<ipovolotskaya at mirantis.com> wrote:
> Fuelers,
>
> I'd like to nominate Olga Gusarenko for the fuel-docs-core.
>
> She has been doing great work and made a great contribution
> into Fuel documentation:
>
> http://stackalytics.com/?user_id=ogusarenko&release=all&project_type=all&module=fuel-docs
>
> It's high time to grant her core reviewer's rights in fuel-docs.
>
> Core reviewer approval process definition:
> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> --
> Best regards,
>
> Irina
>
>
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Borodaenko


From devananda.vdv at gmail.com  Wed Sep 16 21:35:51 2015
From: devananda.vdv at gmail.com (Devananda van der Veen)
Date: Wed, 16 Sep 2015 21:35:51 +0000
Subject: [openstack-dev] [ironic] PTL candidacy
Message-ID: <CAExZKEoTHvniLyO09cesYsB9VEOg3frfo_YM_qHpQfXVJgdZkg@mail.gmail.com>

Hi all,

( repost from https://review.openstack.org/223753 )

It's that time again, when incumbent PTLs are supposed to write about what
features or changes they accomplished and what goals they have for the next
cycle.

I'm not going to do that this time.

Even though you may have read similar things from others (either in
this or in previous cycles), I'm going to reiterate something. Contrary to
being the technical lead, OpenStack requires the PTL to do a whole slew of
less glamorous things (or delegate them to other people).

- launchpad monkey
- midcycle coordinator
- release coordinator
- public speaker
- cross project liaison
- vendor buffer
- cat wrangler

Historically, PTL meant "project technical lead", but as OpenStack grew, we
/ the TC realized that it is more^D^D^D^Ddifferent than that, and so now
the acronym is defined as "project team lead" [0]. And that is much more
representative of the responsibilities a PTL has today. In short, being PTL
and the lead architect for a successful/sizable project at the same time ==
two full time jobs.

Even before I was doing anything internal at HP, it seemed like my upstream
work was never done since I was trying to be both the team and tech leads
for Ironic. That said, it was also extremely rewarding to found this
project and exercise my social and organizational skills in building a
community around it. I could not be more satisfied with the results -- all
of you make Ironic much more awesome than I could have done alone. That's
the point, after all :)

Last election cycle, I stepped down from the TC so that I could have more
time for my roles as tech and team lead, and to focus on some internal work
(yup, still three jobs). That other work, for better or for worse, took a
greater toll on me than I had anticipated, and my activity upstream has
suffered (sorry!). This has created room for many of the other core
developers, who've been around the project almost as long as I have, to
come forward and fill in the gaps I left in the project management. And
that's really awesome. Thank you all.

I am thrilled that more of the project responsibilities are being handled
by Jim, Ruby, Chris, Lucas, and everyone else now. They are all leading
different areas in their own ways. As PTLs, each would bring a different
viewpoint to the project's day-to-day operations, and if they were to run,
I would support all of them (even though we disagree some times). Today,
there are multiple people who could run the project in my stead, and that
makes me very happy.

If elected, I promise to continue enabling the core team to do more without
my direct involvement, to continue leading in the technical vision for the
project, and liaising with vendors and operators to ensure the project
matures in such a way that it meets their needs.

If you believe I've done a great job as PTL and want me to continue doing
what I've been doing, then please re-elect me. (*)

If you'd like to see a change of pace, please don't hesitate to elect
another PTL :)

Thank you,
Devananda

(*) If you think I haven't done a great job as PTL, I invite you to tell me
how you think I could do better. For the sake of the election archives,
please don't reply to this email.

[0]
https://github.com/openstack/governance/commit/319fae1ea13775d16f865f886db0388e42cd0d1b
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/da01d068/attachment.html>

From bellerophon at flyinghorsie.com  Wed Sep 16 21:40:27 2015
From: bellerophon at flyinghorsie.com (James Carey)
Date: Wed, 16 Sep 2015 16:40:27 -0500
Subject: [openstack-dev] [oslo][doc] Oslo doc sprint 9/24-9/25
Message-ID: <CAHjMoV0vx=mEcnsJHyq29hwoAA6oq9fwdXYqVFGttVCwLEeicg@mail.gmail.com>

In order to improve the Oslo libraries documentation, the Oslo team is
having a documentation sprint from 9/24 to 9/25.

We'll kick things off at 14:00 UTC on 9/24 in the
#openstack-oslo-docsprint IRC channel and we'll use an etherpad [0].

All help is appreciated.  If you can help or have suggestions for
areas of focus, please update the etherpad.

[0] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/dd80ed8e/attachment.html>

From dougwig at parksidesoftware.com  Wed Sep 16 22:33:59 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Wed, 16 Sep 2015 16:33:59 -0600
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
	neutron-lbaas core team
Message-ID: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>

Hi all,

As the Lieutenant of the advanced services, I nominate Michael Johnson to be a member of the neutron-lbaas core reviewer team.

Review stats are in line with other cores[2], and Michael has been instrumental in both neutron-lbaas and octavia.

Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, Al, and Kyle.)

Thanks,
doug

1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
2. http://stackalytics.com/report/contribution/neutron-lbaas/90



From ajmiller at ajmiller.net  Wed Sep 16 22:40:51 2015
From: ajmiller at ajmiller.net (Al Miller)
Date: Wed, 16 Sep 2015 22:40:51 +0000
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson
	for	neutron-lbaas core team
In-Reply-To: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
Message-ID: <EFBE70D7D09F0A4397281F770E1802DBA87A2451@uc2-exmbx15-n1.UC2.Chicago.Hostway>

+1

Al


> -----Original Message-----
> From: Doug Wiegley [mailto:dougwig at parksidesoftware.com]
> Sent: Wednesday, September 16, 2015 3:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
> neutron-lbaas core team
> 
> Hi all,
> 
> As the Lieutenant of the advanced services, I nominate Michael Johnson to
> be a member of the neutron-lbaas core reviewer team.
> 
> Review stats are in line with other cores[2], and Michael has been
> instrumental in both neutron-lbaas and octavia.
> 
> Existing cores, please vote +1/-1 for his addition to the team (that's Brandon,
> Phil, Al, and Kyle.)
> 
> Thanks,
> doug
> 
> 1. http://docs.openstack.org/developer/neutron/policies/core-
> reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-lbaas/90
> 
> 
> __________________________________________________________
> ________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From german.eichberger at hpe.com  Wed Sep 16 22:44:27 2015
From: german.eichberger at hpe.com (Eichberger, German)
Date: Wed, 16 Sep 2015 22:44:27 +0000
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
 neutron-lbaas core team
In-Reply-To: <EFBE70D7D09F0A4397281F770E1802DBA87A2451@uc2-exmbx15-n1.UC2.Chicago.Hostway>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
 <EFBE70D7D09F0A4397281F770E1802DBA87A2451@uc2-exmbx15-n1.UC2.Chicago.Hostway>
Message-ID: <D21F3EA2.17995%german.eichberger@hpe.com>

Great news! From the Octavia end I totally support that choice.

German

On 9/16/15, 3:40 PM, "Al Miller" <ajmiller at ajmiller.net> wrote:

>+1
>
>Al
>
>
>> -----Original Message-----
>> From: Doug Wiegley [mailto:dougwig at parksidesoftware.com]
>> Sent: Wednesday, September 16, 2015 3:34 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
>> neutron-lbaas core team
>> 
>> Hi all,
>> 
>> As the Lieutenant of the advanced services, I nominate Michael Johnson
>>to
>> be a member of the neutron-lbaas core reviewer team.
>> 
>> Review stats are in line with other cores[2], and Michael has been
>> instrumental in both neutron-lbaas and octavia.
>> 
>> Existing cores, please vote +1/-1 for his addition to the team (that's
>>Brandon,
>> Phil, Al, and Kyle.)
>> 
>> Thanks,
>> doug
>> 
>> 1. http://docs.openstack.org/developer/neutron/policies/core-
>> reviewers.html#core-review-hierarchy
>> 2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>> 
>> 
>> __________________________________________________________
>> ________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From phillip.toohill at RACKSPACE.COM  Wed Sep 16 23:03:22 2015
From: phillip.toohill at RACKSPACE.COM (Phillip Toohill)
Date: Wed, 16 Sep 2015 23:03:22 +0000
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
 neutron-lbaas core team
In-Reply-To: <D21F3EA2.17995%german.eichberger@hpe.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
 <EFBE70D7D09F0A4397281F770E1802DBA87A2451@uc2-exmbx15-n1.UC2.Chicago.Hostway>,
 <D21F3EA2.17995%german.eichberger@hpe.com>
Message-ID: <45ef3cc22894430a97c42f7693fe843b@543881-IEXCH01.ror-uc.rackspace.com>

+1

Phillip V. Toohill III
Software Developer

phone: 210-312-4366
mobile: 210-440-8374


________________________________________
From: Eichberger, German [german.eichberger at hpe.com]
Sent: Wednesday, September 16, 2015 5:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for neutron-lbaas core team

Great news! From the Octavia end I totally support that choice.

German

On 9/16/15, 3:40 PM, "Al Miller" <ajmiller at ajmiller.net> wrote:

>+1
>
>Al
>
>
>> -----Original Message-----
>> From: Doug Wiegley [mailto:dougwig at parksidesoftware.com]
>> Sent: Wednesday, September 16, 2015 3:34 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
>> neutron-lbaas core team
>>
>> Hi all,
>>
>> As the Lieutenant of the advanced services, I nominate Michael Johnson
>>to
>> be a member of the neutron-lbaas core reviewer team.
>>
>> Review stats are in line with other cores[2], and Michael has been
>> instrumental in both neutron-lbaas and octavia.
>>
>> Existing cores, please vote +1/-1 for his addition to the team (that's
>>Brandon,
>> Phil, Al, and Kyle.)
>>
>> Thanks,
>> doug
>>
>> 1. http://docs.openstack.org/developer/neutron/policies/core-
>> reviewers.html#core-review-hierarchy
>> 2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>>
>>
>> __________________________________________________________
>> ________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ghanshyammann at gmail.com  Wed Sep 16 23:56:53 2015
From: ghanshyammann at gmail.com (GHANSHYAM MANN)
Date: Thu, 17 Sep 2015 08:56:53 +0900
Subject: [openstack-dev]  [QA] Meeting Thursday September 17th at 9:00 UTC
Message-ID: <CACE3TKW=Ldh4=Xg0Ggo4apepB0_4Nu=6ZdKNsmf0kWsBbw2EeA@mail.gmail.com>

Hi everyone,

Please remember that the weekly OpenStack QA team IRC meeting will be
on Thursday, September 17th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the
meeting will be at:

04:00 EDT
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT


-- 
Thanks & Regards
Ghanshyam Mann


From mestery at mestery.com  Thu Sep 17 01:07:17 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Wed, 16 Sep 2015 20:07:17 -0500
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
 neutron-lbaas core team
In-Reply-To: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
Message-ID: <CAL3VkVz13METd05kGu25-ao7nvhqTod09a3PwAG9jnsRV3yYOw@mail.gmail.com>

+1

On Wed, Sep 16, 2015 at 5:33 PM, Doug Wiegley <dougwig at parksidesoftware.com>
wrote:

> Hi all,
>
> As the Lieutenant of the advanced services, I nominate Michael Johnson to
> be a member of the neutron-lbaas core reviewer team.
>
> Review stats are in line with other cores[2], and Michael has been
> instrumental in both neutron-lbaas and octavia.
>
> Existing cores, please vote +1/-1 for his addition to the team (that's
> Brandon, Phil, Al, and Kyle.)
>
> Thanks,
> doug
>
> 1.
> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> 2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/ff556612/attachment.html>

From armamig at gmail.com  Thu Sep 17 01:07:57 2015
From: armamig at gmail.com (Armando M.)
Date: Wed, 16 Sep 2015 18:07:57 -0700
Subject: [openstack-dev] [gate] requirement conflict on Babel breaks grenade
	jobs
Message-ID: <CAK+RQebrUhwYS=Uvey_PRHH+ASUQ5o67XouSOigH33BLxU4u_A@mail.gmail.com>

https://bugs.launchpad.net/grenade/+bug/1496650

HTH
Armando
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/9de4060e/attachment.html>

From vipuls at gmail.com  Thu Sep 17 01:16:17 2015
From: vipuls at gmail.com (Vipul Sabhaya)
Date: Wed, 16 Sep 2015 18:16:17 -0700
Subject: [openstack-dev] [Cue] PTL Candidacy
Message-ID: <CAHC46jtoP5A5Pe7w26LFb+A06W6icr1jeEYJgPsFQDbjw21CXQ@mail.gmail.com>

Hello,

This will be the first official PTL election for Cue, and I would be
honored to serve another term leading the project for the M Cycle.

Cue is a relatively new project in OpenStack, and was approved into the Big
Tent during Liberty.  I have been involved with Cue from the beginning,
from the initial POC to having a product that is production worthy.  In my
past life, I've been a member of the Trove core team.

During the Liberty Cycle, Cue has become a solid product with a control
plane that can manage per-tenant RabbitMQ Clusters.  We spent a lot of time
beefing up our tests, and boast >90% unit test coverage.  We also added
Tempest tests, and Rally tests, including gating jobs for both.  Our
documentation has also been revamped considerably, and allows new
contributors to ramp up quickly with the project.

During the M cycle, I would like to focus on building a community and
getting additional contributors to Cue.  I would also like to focus the
team on multi-broker support, including adding Kafka as a broker that is
managed by Cue.

I believe Cue has come a long way in a short time, and going forward we
have the opportunity to accelerate its growth in terms of features,
quality, and adoption.

Thanks for your consideration!
-Vipul
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/470c335c/attachment.html>

From Vijay.Venkatachalam at citrix.com  Thu Sep 17 01:57:44 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Thu, 17 Sep 2015 01:57:44 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>


How does LBaaS do step 2?
It does not have the privilege for that secret/container when using the service user.
Should it use the keystone token with which the user created the LB config and assign read access for the secret/container to the LBaaS service user?

Thanks,
Vijay V.

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: 16 September 2015 19:24
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Why not have lbaas do step 2? Even better would be to help with the instance user spec and combined with lbaas doing step 2, you could restrict secret access to just the amphora that need the secret?

Thanks,
Kevin

________________________________
From: Vijay Venkatachalam
Sent: Tuesday, September 15, 2015 7:06:39 PM
To: OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates
Hi,
               Is there a way to provide read access to a certain user to all secrets/containers of all project/tenant's certificates?
               This user with universal "read" privileges will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users are following the below mentioned process

1.      The tenant's creator/admin user uploads certificate info as secrets and a container

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3.      The user creates an LB config with the container reference

4.      The LBaaS plugin, using the service user, then accesses the container reference provided in the LB config and proceeds with the implementation.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.
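To make step 2 concrete: granting the service user read access boils down to one ACL request against the container. A minimal sketch of building that request follows; the URL path and payload keys are written from my understanding of the Barbican v1 ACL API, and the container reference and user id are hypothetical, so verify the exact shape against the Barbican API documentation before relying on it:

```python
# Sketch: construct the PUT request that would grant a service user
# read access on a Barbican container (the manual work in step 2).
def build_read_acl_request(container_ref, service_user_id):
    """Return (url, body) for a PUT granting read access on a container.

    Assumes the Barbican v1 ACL endpoint lives at <container_ref>/acl and
    accepts a {"read": {"users": [...], "project-access": ...}} body.
    """
    url = container_ref.rstrip("/") + "/acl"
    body = {
        "read": {
            "users": [service_user_id],
            # keep the owning project's own access intact
            "project-access": True,
        }
    }
    return url, body

# Hypothetical references, for illustration only.
url, body = build_read_acl_request(
    "http://barbican.example.com:9311/v1/containers/abc123",
    "lbaas-service-user-id",
)
print(url)
```

If the plugin performed this call itself (using the requesting user's token, as asked above), step 2 would disappear from the user-facing workflow.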

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/1e0b37b1/attachment.html>

From Vijay.Venkatachalam at citrix.com  Thu Sep 17 02:02:44 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Thu, 17 Sep 2015 02:02:44 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <D21EE0ED.1C758%dmccowan@cisco.com>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <D21EE0ED.1C758%dmccowan@cisco.com>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E89F5CB@SINPEX01CL02.citrite.net>


The user here is the LBaaS service user, which needs read access. This service user does not play any role in the config creator's project. The service user might be playing a different role in a common project.
For example, the "admin" user with the "admin" role in the "admin" project is the service user in devstack for LBaaS.

--Vijay



From: Dave McCowan (dmccowan) [mailto:dmccowan at cisco.com]
Sent: 16 September 2015 18:36
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

A user with the role "observer" in a project will have read access to all secrets and containers for that project, using the default settings in the policy.json file.

--Dave McCowan

From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 15, 2015 at 10:06 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Hi,
               Is there a way to provide read access to a certain user to all secrets/containers of all project/tenant's certificates?
               This user with universal "read" privileges will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users are following the below mentioned process

1.      The tenant's creator/admin user uploads certificate info as secrets and a container

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3.      The user creates an LB config with the container reference

4.      The LBaaS plugin, using the service user, then accesses the container reference provided in the LB config and proceeds with the implementation.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/9a2da227/attachment.html>

From gord at live.ca  Thu Sep 17 02:10:23 2015
From: gord at live.ca (gord chung)
Date: Wed, 16 Sep 2015 22:10:23 -0400
Subject: [openstack-dev] ceilometer code debugging
In-Reply-To: <83EAC30077C13D47ACCC9A5056168097D4EB9B@POCITMSEXMB08.LntUniverse.com>
References: <83EAC30077C13D47ACCC9A5056168097D4EB9B@POCITMSEXMB08.LntUniverse.com>
Message-ID: <BLU437-SMTP72E84AA275C3A5BD36B6DDE5A0@phx.gbl>

hi,

i'm not familiar with eclipse+pydev but you should be able to run that 
command as is in terminal

'./tools/ceilometer-test-event.py'

the main question i have is what are you trying to debug? that script, 
to be honest, does not do much. it seems to just test the event 
conversion functionality. you might find some useful information in the 
dev docs[1] if you have questions about Ceilometer. if it's a pydev 
specific question you might want to generalise your subject line to have 
more eyes.

[1] http://docs.openstack.org/developer/ceilometer


On 16/09/15 08:42 AM, Nanda Devi Sahu wrote:
> Hello,
>
> I am trying to debug Ceilometer code taken from git. I have configured 
> Eclipse with Pydev for the same. The configuration seems to be fine 
> but when I do a debug of the code it shows to be running but I do not 
> see the flow of the code. Like the main thread and the following threads.
>
> Attached is the debug prespective view where the run is working 
> without any code stack.
>
> Could you please suggest what I am missing to get the flow. I have 
> also attached the debug configuration image for reference.
>
>
> Regards,
> Nanda.
>
> *L&T Technology Services Ltd*
>
> www.LntTechservices.com <http://www.lnttechservices.com/>
>
> This Email may contain confidential or privileged information for the 
> intended recipient (s). If you are not the intended recipient, please 
> do not use or disseminate the information, notify the sender and 
> delete it from your system.
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
gord

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/34169458/attachment.html>

From chenrui.momo at gmail.com  Thu Sep 17 02:27:40 2015
From: chenrui.momo at gmail.com (Rui Chen)
Date: Thu, 17 Sep 2015 10:27:40 +0800
Subject: [openstack-dev] [Congress] PTL candidacy
In-Reply-To: <DF91EBC32A031943A8B78D81D40122BC0404D4FB5E@AUSX7MCPC103.AMER.DELL.COM>
References: <CAJjxPABLW+BBLnRqKaikW0ZL2X5Z5Nwj-C8A9GtgXWy1hHmNbA@mail.gmail.com>
 <DF91EBC32A031943A8B78D81D40122BC0404D4FB5E@AUSX7MCPC103.AMER.DELL.COM>
Message-ID: <CABHH=5D0-XNG1ebhYEbBSvHf8BMfs0x8tMxw36d9j_AMS5qrEQ@mail.gmail.com>

+1

Tim is an excellent and passionate leader, go ahead, Congress :-)


2015-09-17 4:09 GMT+08:00 <Ramki_Krishnan at dell.com>:

> +1 and looking forward to see you in Tokyo.
>
>
>
> Thanks,
>
> Ramki
>
>
>
> *From:* Tim Hinrichs [mailto:tim at styra.com]
> *Sent:* Tuesday, September 15, 2015 1:23 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Congress] PTL candidacy
>
>
>
> Hi all,
>
>
>
> I'm writing to announce my candidacy for Congress PTL for the Mitaka
> cycle.  I'm excited at the prospect of continuing the development of our
> community, our code base, and our integrations with other projects.
>
>
>
> This past cycle has been exciting in that we saw several new, consistent
> contributors, who actively pushed code, submitted reviews, wrote specs, and
> participated in the mid-cycle meet-up.  Additionally, our integration with
> the rest of the OpenStack ecosystem improved with our move to running
> tempest tests in the gate instead of manually or with our own CI.  The code
> base matured as well, as we rounded out some of the features we added near
> the end of the Kilo cycle.  We also began making the most significant
> architectural change in the project's history, in an effort to meet our
> high-availability and API throughput targets.
>
>
>
> I'm looking forward to the Mitaka cycle.  My highest priority for the code
> base is completing the architectural changes that we began in Liberty.
> These changes are undoubtedly the right way forward for production use
> cases, but it is equally important that we make Congress easy to use and
> understand for both new developers and new end users.  I also plan to
> further our integration with the OpenStack ecosystem by better utilizing
> the plugin architectures that are available (e.g. devstack and tempest).  I
> will also work to begin (or continue) dialogues with other projects that
> might benefit from consuming Congress.  Finally, I'm excited to continue
> working with our newest project members, helping them toward becoming core
> contributors.
>
>
>
> See you all in Tokyo!
>
> Tim
>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/e6eb8e3f/attachment.html>

From abhishek.talwar at tcs.com  Thu Sep 17 04:30:38 2015
From: abhishek.talwar at tcs.com (Abhishek Talwar)
Date: Thu, 17 Sep 2015 10:00:38 +0530
Subject: [openstack-dev] how to get current memory utilization of an
	instance
In-Reply-To: <55F96C1C.4070900@linux.vnet.ibm.com>
References: <55F96C1C.4070900@linux.vnet.ibm.com>,
 <OF02E0725B.2100272D-ON65257EC2.00390F35-65257EC2.00390F38@tcs.com>
Message-ID: <OFE5715C3D.F5F7173F-ON65257EC3.0018C71C-65257EC3.0018C722@tcs.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/17a718cc/attachment.html>

From tony at bakeyournoodle.com  Thu Sep 17 05:01:53 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 17 Sep 2015 15:01:53 +1000
Subject: [openstack-dev] [all][gate] grenade jobs failing.
Message-ID: <20150917050153.GE68449@thor.bakeyournoodle.com>

Hi all,
    The recent release of oslo.utils 1.4.1[1] (for juno) is valid in kilo.  The
juno global-requirements for Babel are not compatible with kilo, so nothing can
get through grenade :(

We're working on it:
https://bugs.launchpad.net/oslo.utils/+bug/1496678

Yours Tony.

[1] https://review.openstack.org/#/c/223909/ sorry :(
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/67c713cc/attachment.pgp>

From guoyingc at cn.ibm.com  Thu Sep 17 05:22:23 2015
From: guoyingc at cn.ibm.com (Ying Chun Guo)
Date: Thu, 17 Sep 2015 13:22:23 +0800
Subject: [openstack-dev] [i18n] PTL candidacy
Message-ID: <OF4A738412.8EF74BD1-ON48257EC3.001CE928-48257EC3.001D8417@cn.ibm.com>


I, also known as "Daisy", would like to continue as the I18n PTL,
if you will have me.

I had a great time as the I18n PTL in Liberty,
working with the whole team to make important changes happen.
I would like to have another term to stabilize them.

In June, the I18n project became an official project.
Translation contributors are treated the same as code contributors.
30+ active translators are regarded as ATCs in Liberty.
It is a great milestone for the I18n team and translators.
We are getting more official and more visible.

In July, we started the trial of a new tool, Zanata,
which is an open-source, OpenStack-hosted translation website.
After a long period of evaluation and improvement, in September
we officially migrated translations to Zanata.
49 language teams have been created and 130+ translators have registered.
Now we are running the Liberty translation there.

In September, we kicked off the Liberty translation, which is in
progress now. It's the first time that we translate on top of Zanata,
the first time that we include Nova in the translation plan,
and the first time we formally use two phases of string freeze.
I'm trying my best to make sure everything is running well.
When the Liberty translation is done, we can review how we did and how
to do better.

In Mitaka, I'm going to focus on the items below:

1. Get official recognition for translators
I'm going to add translation metrics to Stackalytics
and automate the process of awarding active translators ATC status.

2. Improve translation quality
I'm going to improve the list of translation terminology and promote
its broad adoption across all language teams.
I'm also going to boost the enablement of the translation check website.

3. Better cooperation with development team and documentation team

I welcome any good ideas that could help the team achieve
these goals. I hope to get your support for my PTL role in Mitaka.
I am honored to work with everyone to build a better I18n project.

Thank you.
Daisy

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/f4f33730/attachment.html>

From chris at openstack.org  Thu Sep 17 05:26:17 2015
From: chris at openstack.org (Chris Hoge)
Date: Wed, 16 Sep 2015 22:26:17 -0700
Subject: [openstack-dev] [defcore] Scoring for DefCore 2016.01 Guideline
Message-ID: <0AC92343-7973-4015-A9C2-E8ADD1B5355B@openstack.org>

The DefCore Committee is working on scoring capabilities for the upcoming
2016.01 Guideline, a solid draft of which will be available at the Mitaka
summit for community review and will go to the Board of Directors for
approval in January [1]. The current 2015.07 Guideline [2] covers Nova,
Swift, and Keystone. Scoring for the upcoming Guideline may include new
capabilities for Neutron, Glance, Cinder, and Heat, as well as
updated capabilities for Keystone and Nova.

As part of our process, we want to encourage the development and user
communities to give feedback on the proposed capability categorizations
and how they are currently graded against the Defcore criteria [3].

Capabilities we're currently considering for possible inclusion are:
        Neutron:  https://review.openstack.org/#/c/210080/    
        Glance:   https://review.openstack.org/#/c/213353/    
        Cinder:   https://review.openstack.org/#/c/221631/    
        Heat:     https://review.openstack.org/#/c/216983/
        Keystone: https://review.openstack.org/#/c/213330/    
        Nova:     https://review.openstack.org/#/c/223915/
        
We would especially like to thank the PTLs and technical community members
who helped draft the proposed capabilities lists and provided
feedback--your input has been very helpful.
        
[1] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015A.rst#n13
[2] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json
[3] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst



From tony at bakeyournoodle.com  Thu Sep 17 05:34:30 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 17 Sep 2015 15:34:30 +1000
Subject: [openstack-dev] [all][gate] grenade jobs failing.
In-Reply-To: <20150917050153.GE68449@thor.bakeyournoodle.com>
References: <20150917050153.GE68449@thor.bakeyournoodle.com>
Message-ID: <20150917053429.GF68449@thor.bakeyournoodle.com>

On Thu, Sep 17, 2015 at 03:01:53PM +1000, Tony Breeds wrote:
> Hi all,
>     The recent release of oslo.utils 1.4.1[1] (for juno) is valid in kilo.  The
> juno global-requirements for Babel are not compatible with kilo, so nothing can
> get through grenade :(
> 
> We're working on it:
> https://bugs.launchpad.net/oslo.utils/+bug/1496678

Here's my best effort for right now.
    https://review.openstack.org/#/c/224429/ 

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/4fc3d53f/attachment.pgp>

From nik.komawar at gmail.com  Thu Sep 17 05:58:59 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 17 Sep 2015 01:58:59 -0400
Subject: [openstack-dev]  [Glance] PTL candidacy
Message-ID: <55FA56A3.4000001@gmail.com>

Hello everyone,

I would hereby like to propose my candidacy for the role of Glance PTL
for the Mitaka release cycle.

I. Personal Commitment

I am a full-time upstream OpenStacker at IBM and have recently
transitioned into this role on the condition that I will be given 100%
of my time to focus on community issues and development, and to help solve
upstream challenges. I am also committed to giving my full attention to Glance.
Not too long ago, I had a few conversations with the PTLs of other
projects, either to help them better use Glance or to resolve the
issues they were facing in their projects. In this cycle I
would like to work with some of you in the capacity of inter-project
liaisons to better understand the usage and stability of Glance in
your respective areas and to be involved with you in addressing those
issues. I know this is added work that must not go unrewarded, so I
would like to work out a plan with you that will enable distribution of
power (if with more power comes more responsibility, the converse must
be true as well). This has been enabled in Glance to some extent, and I
have seen other, bigger projects implement it; I would like to make it a
complete reality in Glance. I am also committed to
documenting and improving the current specs process and criteria to
enable quicker turnaround time.

At IBM, I have the pleasure of being part of a team that has some of the
most outstanding OpenStackers and I hope to collaborate more with them
in Mitaka for solving (hard) problems.

II. Liberty

a) Stability

Over 100 bugs were fixed in Liberty (including python-glanceclient,
glance_store and Glance servers) with a relatively small review team.
Glance is gradually building momentum to ensure the different code
proposals get timely code reviews. Some features in Kilo
introduced more security bugs than expected, so the further
development of those features was slowed down and focus was
shifted to fixing the functionality, making it more stable and secure.
Experimental features showed smooth progress and close to no issues of
that sort; a close eye was kept on the reviews to ensure this. Any spec
proposals that showed potential security risks, shook stability or
performance, or refactored major parts of the code base were given a wait
signal.

b) Performance

I consider these systematic optimizations and sometimes systematic
improvisations. Major disagreements on the path forward for optimizing
streaming workflows for image data have been sorted out. Many new
proposals were made; however, due to a lack of early time commitment from
developers, they could not be seen through. Continued feedback
should be expected for similar glance_store proposals in early Mitaka.

c) Cross project communications & Process Evolution

I have made an attempt to continually evolve our process to make it more
engaging and community friendly. There are now three weekly meetings
that are a good place for collaboration on different aspects of Glance.
Either I or a member of the Glance community is at the weekly CPL
meeting to gather feedback from horizontal teams and cross-project
decisions/efforts, and to provide input from Glance's
perspective. I have had regular chats with the Nova PTL on the outstanding
issues of the Images APIs and the adoption of the v2 Images API by Nova.
Feedback was provided at the DefCore mid-cycle, and a welcoming
hand was extended for attending the Glance mid-cycle (even virtually) to
those who could not make it.  In Mitaka, I hope to continue
collaborating with the interested members there as well as in the other
projects. I would like to pay close personal attention to the pressing
matters that need short-term resolution, like the adoption of the v2 API by
Nova and an Images API suitable for DefCore. Unfortunately, the
existing (half) attempts at fixing this issue have been wasted, as they
were breaking some of the fundamental aspects of the project. This matter
needs wider input for a sound resolution and a change that would not
break the world; of course, compromise is very likely for some, but it
does not need to be fundamentally disjoint with the newly established
DefCore standard. It was realized early that a half-baked attempt at
making such a critical change would break the core of Glance, so I have
been contemplating the future of the project. More clarity of thought has
been gained during conversations with the TC members, PTLs,
Product WG liaisons, etc., and I will continue to gather more feedback in
Mitaka. It has been a pleasure to be part of (and collaborate with) teams
that help build very large cloud deployments, and I find that experience
valuable for solving this complicated matter. Not only upstream but at
IBM too, I plan to closely interact with the technical and product
leaders of public and private cloud deployments to help solve such
issues. My team at IBM enables me to have such interactions with
veterans of OpenStack.

III. Direction

a) Short Term Goals
There are four short-term goals that seem important right away: DefCore
requirements; Nova and Cinder v2 adoption/stability; cadence between
developer and reviewer bandwidth; and better collaboration via
documentation and CPL & inter-project liaison power distribution.

b) Mid Term Goals
I think it's vital that we pin down the subtle aspects of the v2 API in
the mid-term, focusing primarily on Images while at the same time ensuring
that all of our APIs are efficient and performant.

The concept of tasks needs a clear picture from both the developer
and the operator viewpoints. There are equally compelling arguments
to make them user-centric only, admin-only, or open to both. The most
important use cases will be taken into account and a plan for tasks will
be set up.

Some of the operators and active developers wish to focus on performance
improvement and reliability of image data delivery, so this will be
another very important mid-term goal.

Defining core fundamentals, like the immutability of images and the Glance
architecture, seems critical. It is also necessary to be aware of the
side effects of feature proposals that can potentially lead to breaking
changes. It has come to my notice that such proposals are often
made and the proposers find it difficult to grasp how the change could
affect Glance. We will try to solve this issue via collaboration and
documentation.

c) Long Term Goals

The long-term goals follow the mid-term goals: solidification of the
Images v2 API and planning long-term support for it. There's
already some work happening to set a direction for this.

Artifacts and a generic repository: as per Glance's mission statement,
this is an important goal. A generic data asset service would be an
excellent one-stop shop for users, possibly evolving into a
marketplace-like stage for many OpenStack assets (Artifacts).

Addressing storage and retrieval issues for smaller-size image data
(the image record in the DB) and for larger blobs of data (stored in the
backend store): this may be termed a performance problem, but as a
long-term objective it brings a clear separation of concerns and gives
operators the ability to work on image or artifact data in an efficient
manner.

IV. Community

The Glance community has traditionally been known as friendly in helping
new developers join, which avoids influx vs. outflux issues. Keeping the
long-term objectives in mind, this remains important, and I will try to
keep the environment conducive for all concerned individuals. Without
losing sight of the short-term goals, I will enable specific
individuals to make important calls if it helps move forward with
pressing issues.

V. Solving Hard Problems

a) Collaboration

The tradition of pre-summit, within cycle video and face-to-face syncs
will be continued. Inputs of the CPLs, inter-project liaisons will be
encouraged and welcomed. Glance representatives will engage in certain
important projects regarding images related subject matter. Artifacts
representatives will engage in other working groups to make the API more
usable and adoptable. I plan to keep working with the DefCore committee
and PTLs to help bridge the gap on overlapping issues. OpenStack is a
diverse community, and so is the Glance team, so we will focus on
keeping weekly syncs that address important issues interactively. Overall,
collaboration will be one of the most vital components of this cycle.

b) Themed Focus

The most important themes that come to mind at this point are:
getting a standard API for DefCore, interaction with the Nova team to
enable v2 adoption, documentation of the features and processes (this will
also help with collaboration), Artifacts and its API, and performance,
reliability & stability. A sub-team leader would be chosen for each of
these themes, and periodic syncs would be made effective for a good team
effort.

c) Core reviewer bandwidth, their influx and outflux

Glance has had the misfortune of losing important and talented
developers time and again due to commitment changes (and not as a result
of being driven away from the team). We will continue to keep the doors
open for them, and if any of the super-active OpenStackers wish to become
core in a short window, there will be an easier process for doing so. This
will help fix any outstanding bugs, keep the gate steady, and sustain good
momentum for the release of libraries. Good collaborators have always
been good drivers, and it would be wise to continue in that direction.
Good developers were engaged in Liberty to become part of Glance, and I
will continue to engage them in Mitaka.

d) One OpenStack

No matter how easy it is to define solo project priorities, we all live
in the single OpenStack realm. Operators have to deploy Glance alongside
other OpenStack services, and that understanding will
be put into action while defining Glance priorities so that other
projects are not severely affected. I believe in 'One OpenStack & a
unified vision' and hope you are with me.



Thank you for your consideration in allowing me to continue serving as
PTL for Mitaka cycle.

Nikhil Komawar


From SamuelB at Radware.com  Thu Sep 17 06:06:57 2015
From: SamuelB at Radware.com (Samuel Bercovici)
Date: Thu, 17 Sep 2015 06:06:57 +0000
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson
	for	neutron-lbaas core team
In-Reply-To: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
Message-ID: <F36E8145F2571242A675F56CF6096456A069A64E@ILMB1.corp.radware.com>

And then there were 6 =D>


-Sam.

-----Original Message-----
From: Doug Wiegley [mailto:dougwig at parksidesoftware.com] 
Sent: Thursday, September 17, 2015 1:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for neutron-lbaas core team

Hi all,

As the Lieutenant of the advanced services, I nominate Michael Johnson to be a member of the neutron-lbaas core reviewer team.

Review stats are in line with other cores[2], and Michael has been instrumental in both neutron-lbaas and octavia.

Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, Al, and Kyle.)

Thanks,
doug

1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
2. http://stackalytics.com/report/contribution/neutron-lbaas/90



From ton at us.ibm.com  Thu Sep 17 06:07:26 2015
From: ton at us.ibm.com (Ton Ngo)
Date: Wed, 16 Sep 2015 23:07:26 -0700
Subject: [openstack-dev] [Magnum] API response on k8s failure
In-Reply-To: <B4B443CF-ADC2-46B1-90D2-A0338160DDBE@rackspace.com>
References: <55F7449F.9040108@linux.vnet.ibm.com>
 <B4B443CF-ADC2-46B1-90D2-A0338160DDBE@rackspace.com>
Message-ID: <201509170607.t8H67XWo014648@d01av04.pok.ibm.com>


Hi Ryan,
     There is a bug opened with a similar failure symptom, although I am
not sure if the cause is the same:
https://bugs.launchpad.net/magnum/+bug/1481889

Trying to use kubectl to deploy the V1 manifest, I get this error message:
Error: unable to recognize "redis-master.yaml": no object named "Pod" is
registered

Clearly the error handling from the k8s handler needs to be improved.  If
the bug above is different from what you found, I think opening a new bug
would be best.
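To illustrate the kind of improvement being discussed, here is a rough sketch (all names invented for illustration, not Magnum's actual code): check whether the k8s response is a failure "Status" object and surface its code and message, instead of continuing and later hitting the AttributeError ('NoneType' object has no attribute 'host').

```python
class KubernetesAPIError(Exception):
    """Carries the k8s status code/message up to the REST layer."""
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code
        self.message = message


def check_k8s_response(resp):
    """Raise if the k8s API returned a 'Status' object describing a failure.

    `resp` is assumed to be the already-deserialized JSON body, e.g.:
    {'kind': 'Status', 'status': 'Failure', 'code': 400, 'message': '...'}
    """
    if resp is None:
        raise KubernetesAPIError(500, "empty response from k8s API")
    if resp.get("kind") == "Status" and resp.get("status") == "Failure":
        raise KubernetesAPIError(resp.get("code", 500),
                                 resp.get("message", "unknown k8s error"))
    return resp


# Example: the 400 from the mismatched manifest version would now propagate
# with its real message instead of turning into an opaque 500.
bad = {"status": "Failure", "kind": "Status", "code": 400,
       "message": 'Pod in version v1 cannot be handled as a Pod'}
try:
    check_k8s_response(bad)
except KubernetesAPIError as e:
    print(e.code, e.message)
```

With something like this in the conductor's handler, the Magnum API could return the upstream 400 and message directly.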

Ton Ngo,




From:	Adrian Otto <adrian.otto at rackspace.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	09/14/2015 05:34 PM
Subject:	Re: [openstack-dev] [Magnum] API response on k8s failure



Ryan,

Thanks for sharing this. Sorry you got off to a bumpy start. I suggest you
file a bug for this against magnum and we can decide how best to handle
it. I cannot tell from your email what kubectl would do with the same
input. We might have an opportunity to make both better.

If you need guidance for how to file a bug, feel free to email me directly
and I can point you in the right direction.

Thanks,

Adrian

> On Sep 14, 2015, at 3:05 PM, Ryan Rossiter <rlrossit at linux.vnet.ibm.com>
wrote:
>
> I was giving a devstacked version of Magnum a try last week, and from a
new user standpoint, I hit a big roadblock that caused me a lot of
confusion. Here's my story:
>
> I was attempting to create a pod in a k8s bay, and I provided it with a
sample manifest from the Kubernetes repo. The Magnum API then returned the
following error to me:
>
> ERROR: 'NoneType' object has no attribute 'host' (HTTP 500)
>
> I hunted down the error to be occurring here [1]. The k8s_api call was
going bad, but the conductor was continuing on anyway, thinking the k8s API
call went fine. I dug through the API calls to find the true cause of the
error:
>
> {u'status': u'Failure', u'kind': u'Status', u'code': 400, u'apiVersion':
u'v1beta3', u'reason': u'BadRequest', u'message': u'Pod in version v1
cannot be handled as a Pod: no kind "Pod" is registered for version "v1"',
u'metadata': {}}
>
> It turned out the error was because the manifest I was using had
apiVersion v1, not v1beta3. That was made very unclear by Magnum originally
sending the 500.
>
> This all does occur within a try, but the k8s API isn't throwing any sort
of exception that can be caught by [2]. Was this caused by a regression in
the k8s client? It looks like the original intention of this was to catch
something going wrong in k8s, and then forward the message & error code
on so that the Magnum API could return them.
>
> My question here is: does this classify as a bug? This happens in more
places than just the pod create. It's changing around API returns (quite a
few of them), and I don't know how that is handled in the Magnum project.
If we want to have this done as a blueprint, I can open that up and target
it for Mitaka, and get to work. If it should be opened up as a bug, I can
also do that and start work on it ASAP.
>
> [1]
https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L88-L108

> [2]
https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L94

>
> --
> Thanks,
>
> Ryan Rossiter (rlrossit)
>
>
>



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/3a0c21e5/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150916/3a0c21e5/attachment.gif>

From skywalker.nick at gmail.com  Thu Sep 17 06:52:36 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Thu, 17 Sep 2015 14:52:36 +0800
Subject: [openstack-dev] [dragonflow] Low OVS version for Ubuntu
Message-ID: <CALFEDVcXE18WAWS=64sOQkyLn6ob2R0Nx+G2jSP3xOo37Zcubg@mail.gmail.com>

Hi all,

I tried to run devstack to deploy dragonflow, but it failed due to a
low OVS version.

I used Ubuntu 14.10 server, but the official OVS package is 2.1.3,
which is much lower than the required version 2.3.1+.

So, can anyone provide a Ubuntu repository that contains the correct
OVS packages?

Thanks,
-- 

Li Ma (Nick)
Email: skywalker.nick at gmail.com


From skywalker.nick at gmail.com  Thu Sep 17 07:13:17 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Thu, 17 Sep 2015 15:13:17 +0800
Subject: [openstack-dev] [neutron] RFE process question
In-Reply-To: <55F207B8.6030300@catalyst.net.nz>
References: <55F11653.8080509@catalyst.net.nz>
 <CAK+RQeY_SjXNyApy9gfp7NOUk0V7qwhnSay+jU1LBr=aDXAG0A@mail.gmail.com>
 <CAG9LJa79xEhGTvq5pEYEvDTvP1y9Z9PcJ0qxKEEkPeE0MoqBkg@mail.gmail.com>
 <55F207B8.6030300@catalyst.net.nz>
Message-ID: <CALFEDVdLfL9FD01=JbXUJxqfxSF_9T=KGr8UkkD2FRzSPESUoA@mail.gmail.com>

A reasonable user story. Other than tag, a common description field
for Neutron resources is also usable.

I submitted a RFE bug for review:
https://bugs.launchpad.net/neutron/+bug/1496705


From mhorban at mirantis.com  Thu Sep 17 07:14:30 2015
From: mhorban at mirantis.com (mhorban)
Date: Thu, 17 Sep 2015 10:14:30 +0300
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
Message-ID: <55FA6856.1@mirantis.com>

Hi Josh,

 > Sounds like a useful idea if projects can plug-in themselves into the
 > reloading process. I definitely think there needs to be a way for
 > services to plug-in to this, although I'm not quite sure it will be
 > sufficient at the current time though.
 >
 > An example of why:
 >
 > -
 > 
https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/__init__.py#L24 

 > (unless this module is purged from python and reloaded it will likely
 > not reload correctly).
 >
 > Likely these can all be easily fixed (I just don't know how many of
 > those exist in the various projects); but I guess we have to start
 > somewhere so getting the underlying code able to be reloaded is a first
 > step of likely many.

Each OpenStack component should contain the code responsible for
reloading such objects.
Which objects will be reloaded? That depends on the needs and desires of
developers/users.
Writing such code is a second step.

Marian


From skywalker.nick at gmail.com  Thu Sep 17 07:16:50 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Thu, 17 Sep 2015 15:16:50 +0800
Subject: [openstack-dev]  [neutron] Please help review this RFE
Message-ID: <CALFEDVeR=s81c9ux7bWDv=+x8YqwemUCSMfBmMhHYRky_ja7xA@mail.gmail.com>

Hi Neutron folks,

I'd like to introduce a pure python-driven network configuration
library to Neutron. A discussion just started in the RFE ticket [1].
I'd like to get feedback on this proposal.

[1]: https://bugs.launchpad.net/neutron/+bug/1492714

Take a look and let me know your thoughts.
-- 

Li Ma (Nick)
Email: skywalker.nick at gmail.com


From sbiswas7 at linux.vnet.ibm.com  Thu Sep 17 07:24:36 2015
From: sbiswas7 at linux.vnet.ibm.com (Sudipto Biswas)
Date: Thu, 17 Sep 2015 12:54:36 +0530
Subject: [openstack-dev] [dragonflow] Low OVS version for Ubuntu
In-Reply-To: <CALFEDVcXE18WAWS=64sOQkyLn6ob2R0Nx+G2jSP3xOo37Zcubg@mail.gmail.com>
References: <CALFEDVcXE18WAWS=64sOQkyLn6ob2R0Nx+G2jSP3xOo37Zcubg@mail.gmail.com>
Message-ID: <55FA6AB4.5030605@linux.vnet.ibm.com>



On Thursday 17 September 2015 12:22 PM, Li Ma wrote:
> Hi all,
>
> I tried to run devstack to deploy dragonflow, but I failed with lower
> OVS version.
>
> I used Ubuntu 14.10 server, but the official package of OVS is 2.1.3
> which is much lower than the required version 2.3.1+?
>
> So, can anyone provide a Ubuntu repository that contains the correct
> OVS packages?

Why don't you just build the OVS you want from here: http://openvswitch.org/download/

> Thanks,



From mhorban at mirantis.com  Thu Sep 17 07:26:28 2015
From: mhorban at mirantis.com (mhorban)
Date: Thu, 17 Sep 2015 10:26:28 +0300
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
Message-ID: <55FA6B24.5000602@mirantis.com>

Hi Doug,

 > Rather than building hooks into oslo.config, why don't we build them
 > into the thing that is catching the signal. That way the app can do lots
 > of things in response to a signal, and one of them might be reloading
 > the configuration.

Hm... Yes... It really was a stupid idea to put the reloading hook into
oslo.config.
I'll move that hook mechanism into oslo.service, which is
responsible for catching/handling signals.

Is it enough to have one callback function, or should I add the ability
to register many different callback functions?
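For illustration, the multiple-callback variant could look roughly like this (a sketch with hypothetical names, not oslo.service's actual API): a small registry of reload callbacks that the service runs when it catches SIGHUP, so each component can hook into configuration reload.

```python
import signal

# Hypothetical registry; oslo.service would own this in the real design.
_reload_hooks = []


def register_reload_hook(func):
    """Register a callable invoked (with no args) on configuration reload."""
    _reload_hooks.append(func)
    return func


def run_reload_hooks():
    """Invoke every registered hook, in registration order."""
    for hook in _reload_hooks:
        hook()


def install_sighup_handler():
    """Wire the hooks up to SIGHUP (POSIX only)."""
    signal.signal(signal.SIGHUP, lambda signum, frame: run_reload_hooks())


# Usage: components register their own hooks instead of oslo.config
# owning the reload logic.
events = []
register_reload_hook(lambda: events.append("reparse config files"))
register_reload_hook(lambda: events.append("recreate volume drivers"))
run_reload_hooks()  # what the SIGHUP handler would invoke
print(events)  # -> ['reparse config files', 'recreate volume drivers']
```

A single-callback API would be a special case of this: the registry with exactly one entry.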

What is your point of view?

Marian


From yguenane at redhat.com  Thu Sep 17 07:28:11 2015
From: yguenane at redhat.com (Yanis Guenane)
Date: Thu, 17 Sep 2015 09:28:11 +0200
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <55F9C583.60806@redhat.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <55F9C583.60806@redhat.com>
Message-ID: <55FA6B8B.1090203@redhat.com>



On 09/16/2015 09:39 PM, Emilien Macchi wrote:
>
> On 09/16/2015 12:53 PM, Alex Schultz wrote:
>> Hey puppet folks,
>>
>> Based on the meeting yesterday[0], I had proposed creating a parser
>> function called is_service_default[1] to validate if a variable matched
>> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
>> about how can we maybe not use the arbitrary string throughout the
>> puppet that can not easily be validated.  So I tested creating another
>> puppet function named service_default[2] to replace the use of '<SERVICE
>> DEFAULT>' throughout all the puppet modules.  My tests seemed to
>> indicate that you can use a parser function as parameter default for
>> classes. 
>>
>> I wanted to send a note to gather comments around the second function. 
>> When we originally discussed what to use to designate for a service's
>> default configuration, I really didn't like using an arbitrary string
>> since it's hard to parse and validate. I think leveraging a function
>> might be better since it is something that can be validated via tests
>> and a syntax checker.  Thoughts?
> Let me add your attempt to make it work in puppet-cinder:
> https://review.openstack.org/#/c/224277
>
> I like the proposal, +1.
Alex, thank you for this proposal.

I like your approach:

 * It will make writing manifests more idiomatic:

$foo = service_default() with if is_service_default($foo) { } reads
better than
$foo = '<SERVICE DEFAULT>' with if $foo == '<SERVICE DEFAULT>'.

 * If for any reason we ever need to change the value (hopefully this
shouldn't happen), the change will touch only a few places (fewer
commits to review).

 * The end result is exactly the same as the one we have today.

For me it's a go; let's see what the others have to say about it.

Thank you,

>
>> Thanks,
>> -Alex
>>
>> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
>> [1] https://review.openstack.org/#/c/223672
>> [2] https://review.openstack.org/#/c/224187
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From aheczko at mirantis.com  Thu Sep 17 07:48:59 2015
From: aheczko at mirantis.com (Adam Heczko)
Date: Thu, 17 Sep 2015 09:48:59 +0200
Subject: [openstack-dev] [Fuel] Bugs which we should accept in 7.0 after
 Hard Code Freeze
In-Reply-To: <CAKYN3rP0LjcYFwCpUNPESytzcovec9CLyx4Pypi4c9ZHpDbeSQ@mail.gmail.com>
References: <CAJQwwYO15aBkbPoUYw7s4Kj1Dsg0iSGJDA5Jr5ZcZsHZ5TzHsQ@mail.gmail.com>
 <CAKYN3rP0LjcYFwCpUNPESytzcovec9CLyx4Pypi4c9ZHpDbeSQ@mail.gmail.com>
Message-ID: <CAJciqMyBC_=5AmWjwHK53Wa9pZPL77vctm-f-MEGyADh5hHvFA@mail.gmail.com>

Hi, I'd like to ask for fixing these security related bugs in stable/7.0
branch:

https://bugs.launchpad.net/fuel/+bug/1496407
https://bugs.launchpad.net/fuel/+bug/1488732

Both are fuel-library related tasks, covering Apache2 and NTPD
configuration improvements.

Thanks,

Adam

On Tue, Sep 15, 2015 at 12:19 AM, Mike Scherbakov <mscherbakov at mirantis.com>
wrote:

> Thanks Andrew.
> Team, if there are any disagreements - let's discuss it. Otherwise, I
> think we should be just strict and follow defined process. We can deliver
> high priority bugfixes in updates channel later if needed.
>
> I hope the reasoning is clear for everyone. Every bugfix has the
> potential to break something. It's basically a risk.
>
> On Mon, Sep 14, 2015 at 8:57 AM Andrew Maksimov <amaksimov at mirantis.com>
> wrote:
>
>> Hi Everyone!
>>
>> I would like to reiterate the bugfix process after Hard Code Freeze.
>> According to our HCF definition [1] we should only merge fixes for
>> *Critical* bugs to *stable/7.0* branch, High and lower priority bugs
>> should NOT be accepted to *stable/7.0* branch anymore.
>> Also we should accept patches for critical bugs to *stable/7.0* branch
>> only after the corresponding patchset with same ChangeID was accepted into
>> master.
>>
>> [1] - https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
>>
>> Regards,
>> Andrey Maximov
>> Fuel Project Manager
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/eb282b28/attachment.html>

From tony at bakeyournoodle.com  Thu Sep 17 08:03:53 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 17 Sep 2015 18:03:53 +1000
Subject: [openstack-dev] [all][gate] grenade jobs failing.
In-Reply-To: <20150917053429.GF68449@thor.bakeyournoodle.com>
References: <20150917050153.GE68449@thor.bakeyournoodle.com>
 <20150917053429.GF68449@thor.bakeyournoodle.com>
Message-ID: <20150917080353.GA72725@thor.bakeyournoodle.com>

On Thu, Sep 17, 2015 at 03:34:30PM +1000, Tony Breeds wrote:
> On Thu, Sep 17, 2015 at 03:01:53PM +1000, Tony Breeds wrote:
> > Hi all,
> >     The recent release of oslo.utils 1.4.1[1] (for juno) is valid in kilo.  The
> > juno global-requirements for Babel are not compatible with kilo so nothing can
> > get through grenade :(
> > 
> > We're working it
> > https://bugs.launchpad.net/oslo.utils/+bug/1496678
> 
> Here's my best effort for right now.
>     https://review.openstack.org/#/c/224429/ 

For those playing along at home the new plan is to:
 1. Release a version of oslo.utils with the kilo requirements
    This needs to be done as a revert because 1.4.2 must follow 1.4.1
         - https://review.openstack.org/#/c/224472/
 2. Tag that as 1.4.2

Then the gate should be okay again, while the stable team (and I) work out
how to fix juno without breaking kilo.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/1a6cbef7/attachment.pgp>

From liuhoug at cn.ibm.com  Thu Sep 17 08:13:24 2015
From: liuhoug at cn.ibm.com (Hou Gang HG Liu)
Date: Thu, 17 Sep 2015 16:13:24 +0800
Subject: [openstack-dev] questions about nova compute monitors extensions
Message-ID: <OFDB00AB2C.88557596-ON48257EC3.002CD29A-48257EC3.002D4C16@cn.ibm.com>

Hi Jay,

I notice the nova compute monitor now only tries to load monitors with the
namespace "nova.compute.monitors.cpu", and only one monitor in each
namespace can be enabled (
https://review.openstack.org/#/c/209499/6/nova/compute/monitors/__init__.py
).

Is there a plan to make MonitorHandler.NAMESPACES configurable, or will it
remain a hard-coded constraint as it is now? And how can compute monitors
support user-defined monitors, as they did before?
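For readers unfamiliar with the loading mechanism under discussion, the constraint can be sketched in plain Python. This is a toy stand-in for nova's actual stevedore-based entry-point loading; the names `AVAILABLE_MONITORS` and `load_monitors` are illustrative only, not nova code:

```python
# Illustrative only -- nova loads monitors via stevedore entry points;
# here a plain dict stands in for the entry-point registry.
NAMESPACES = ['nova.compute.monitors.cpu']  # hard-coded, not configurable

AVAILABLE_MONITORS = {
    'nova.compute.monitors.cpu': {'virt_driver': object(), 'custom': object()},
}

def load_monitors(enabled):
    """Pick at most one enabled monitor per hard-coded namespace."""
    chosen = []
    for ns in NAMESPACES:
        for name, monitor in AVAILABLE_MONITORS.get(ns, {}).items():
            if name in enabled:
                chosen.append((ns, name, monitor))
                break  # only one monitor per namespace is honoured
    return chosen
```

With this shape, enabling both 'virt_driver' and 'custom' still yields a single loaded monitor, which is the behaviour the question is about.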

Thanks!
B.R

Hougang Liu
Developer - IBM Platform Resource Scheduler
Systems and Technology Group

Mobile: 86-13519121974 | Phone: 86-29-68797023 | Tie-Line: 87023
E-mail: liuhoug at cn.ibm.com
Xian, Shaanxi Province 710075, China



From gal.sagie at gmail.com  Thu Sep 17 08:17:09 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Thu, 17 Sep 2015 11:17:09 +0300
Subject: [openstack-dev] [dragonflow] Low OVS version for Ubuntu
In-Reply-To: <55FA6AB4.5030605@linux.vnet.ibm.com>
References: <CALFEDVcXE18WAWS=64sOQkyLn6ob2R0Nx+G2jSP3xOo37Zcubg@mail.gmail.com>
 <55FA6AB4.5030605@linux.vnet.ibm.com>
Message-ID: <CAG9LJa554aMjAF06G_8U5AiGTOuZ1ODtci+6ykfNO-Vaet_B4Q@mail.gmail.com>

Hello Li Ma,

Dragonflow uses OpenFlow 1.3 to communicate with OVS, and that's why we
need OVS 2.3.1.
As suggested, you can build it from source.
For Fedora 21, OVS 2.3.1 is part of the default yum repository.

You can ping me on IRC (gsagie at freenode) if you need any additional
help compiling OVS.

Thanks
Gal.

On Thu, Sep 17, 2015 at 10:24 AM, Sudipto Biswas <
sbiswas7 at linux.vnet.ibm.com> wrote:

>
>
> On Thursday 17 September 2015 12:22 PM, Li Ma wrote:
>
>> Hi all,
>>
>> I tried to run devstack to deploy dragonflow, but I failed with lower
>> OVS version.
>>
>> I used Ubuntu 14.10 server, but the official package of OVS is 2.1.3,
>> which is much lower than the required version 2.3.1+.
>>
>> So, can anyone provide an Ubuntu repository that contains the correct
>> OVS packages?
>>
>
> Why don't you just build the OVS you want from here:
> http://openvswitch.org/download/
>
> Thanks,
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards ,

The G.

From duncan.thomas at gmail.com  Thu Sep 17 09:00:22 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Thu, 17 Sep 2015 12:00:22 +0300
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55F9D472.5000505@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
Message-ID: <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>

On 16 September 2015 at 23:43, Eric Harney <eharney at redhat.com> wrote:

> Currently, at least some options set in [DEFAULT] don't apply to
> per-driver sections, and require you to set them in the driver section
> as well.
>

This is extremely confusing behaviour. Do you have any examples? I'm not
sure if we can fix it without breaking people's existing configs but I
think it is worth trying. I'll add it to the list of things to talk about
briefly in Tokyo.
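For readers unfamiliar with the problem: operators expect a value set in [DEFAULT] to act as a fallback for per-backend sections, but some options are effectively read only from the backend section. A toy illustration of the two lookup styles (plain configparser, not actual Cinder code; `fallback_get` and `section_only_get` are hypothetical names):

```python
import configparser

raw = """
[DEFAULT]
volume_clear = zero

[lvm-backend]
volume_group = cinder-volumes
"""

cfg = configparser.ConfigParser()
cfg.read_string(raw)

def fallback_get(section, option):
    """What operators expect: per-section value, else [DEFAULT]."""
    return cfg.get(section, option)  # configparser falls back to DEFAULT

def section_only_get(section, option):
    """What some options effectively do: read the section alone."""
    values = dict(cfg.items(section, raw=True))
    defaults = cfg.defaults()
    # Strip inherited defaults to mimic a strict per-section lookup
    # (sketch only -- this would also hide a value set in both places).
    own = {k: v for k, v in values.items() if k not in defaults}
    return own.get(option)

print(fallback_get('lvm-backend', 'volume_clear'))      # 'zero'
print(section_only_get('lvm-backend', 'volume_clear'))  # None
```

The confusion Duncan describes is exactly this gap: the same option name answers differently depending on which lookup a given driver option happens to use.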

-- 
Duncan Thomas

From hongbin.lu at huawei.com  Thu Sep 17 07:18:32 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Thu, 17 Sep 2015 07:18:32 +0000
Subject: [openstack-dev] [magnum][ptl] Magnum PTL Candidacy
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCE4960@SZXEMI503-MBS.china.huawei.com>

Hi all,

I would like to announce my candidacy for the PTL position of Magnum.

I have been involved in the Magnum project since December 2014. At that time, Magnum's code base was much smaller than it is now. Since then, I have worked with a diverse set of team members to land features, discuss the roadmap, fix the gate, do pre-release testing, fix the documentation, etc. Thanks to our team's efforts, over the past few months I have seen important features land one after another, which I am really proud of.

To address the question of why I am a good candidate for Magnum, here are the key reasons:
* I contributed a lot to Magnum's code base and feature set.
* I am familiar with every aspect of the project, and understand both the big picture and the technical details.
* I will have the time and resources to take on the PTL responsibility, mostly because I am allocated full time to this project.
* I love containers.
* I care about the project.
Here are more details of my involvement in the project:
http://stackalytics.com/?module=magnum-group
https://github.com/openstack/magnum/graphs/contributors

In my opinion, Magnum needs to focus on the following in the next cycle:
* Production ready: Work on everything possible to make this happen.
* Baremetal: Complete and optimize our Ironic integration to enable running containers on baremetal.
* Container network: Deliver a network solution that is high performance and customizable.
* UI: Integrate with Horizon to give users a friendly interface to use Magnum.
* Pluggable COE: A pluggable design is a key feature that lets users customize Magnum.
* Grow community: Attract more contributors to Magnum.

If elected, I will strictly follow the principles of being an OpenStack project, especially the four opens. The direction of the project will be community-driven, as always.

I hope you will give me an opportunity to serve as Magnum's PTL in the next cycle.

Best regards,
Hongbin


From davanum at gmail.com  Thu Sep 17 10:22:47 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 17 Sep 2015 06:22:47 -0400
Subject: [openstack-dev] [all][gate] grenade jobs failing.
In-Reply-To: <20150917080353.GA72725@thor.bakeyournoodle.com>
References: <20150917050153.GE68449@thor.bakeyournoodle.com>
 <20150917053429.GF68449@thor.bakeyournoodle.com>
 <20150917080353.GA72725@thor.bakeyournoodle.com>
Message-ID: <CANw6fcGS4C6s_WTaJ87mTBxx6BwaMDZsNbDFT5-AuEzkNJZqeA@mail.gmail.com>

Tony,
Looks like the ban is holding up:
https://review.openstack.org/#/c/224429/

-- Dims


On Thu, Sep 17, 2015 at 4:03 AM, Tony Breeds <tony at bakeyournoodle.com>
wrote:

> On Thu, Sep 17, 2015 at 03:34:30PM +1000, Tony Breeds wrote:
> > On Thu, Sep 17, 2015 at 03:01:53PM +1000, Tony Breeds wrote:
> > > Hi all,
> > >     The recent release of oslo.utils 1.4.1[1] (for juno) is valid in
> kilo.  The
> > > juno global-requirements for Babel are not compatible with kilo so
> nothing can
> > > get through grenade :(
> > >
> > > We're working it
> > > https://bugs.launchpad.net/oslo.utils/+bug/1496678
> >
> > Here's my best effort for right now.
> >     https://review.openstack.org/#/c/224429/
>
> For those playing along at home the new plan is to:
>  1. Release a version of oslo.utils with the kilo requirements
>     This needs to be done as a revert because 1.4.2 must follow 1.4.1
>          - https://review.openstack.org/#/c/224472/
> >  2. Tag that as 1.4.2
>
> Then the gate should be okay again, while the stable team (and me) work
> out how
> to fix juno without breaking kilo.
>
> Yours Tony.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims

From e0ne at e0ne.info  Thu Sep 17 10:39:49 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Thu, 17 Sep 2015 13:39:49 +0300
Subject: [openstack-dev]  [Cinder] PTL Candidacy
Message-ID: <CAGocpaHFmL0UA=or75GiALD=NLknGOF9-usHODxNAaOVJXC5_w@mail.gmail.com>

Hello everyone,

I'm announcing my candidacy for Cinder PTL for the Mitaka release.

First of all, I would like to thank John and Mike for their great and hard
work making Cinder such a great project with a good and big community.

As a Cinder community, we made great progress not only with new features,
but with improving Cinder's quality too. Our community is growing quickly
and we are glad to see new contributors again and again.

I started working on OpenStack with the Diablo release. I've been an active
contributor to Cinder since the Juno cycle [1].

As Cinder PTL, I would like to focus on the following areas of improvement
for the next release cycle:

* Finally move all projects to Cinder API v2 and prepare to remove v1 API.

* Better Cinder integration with other OpenStack projects like Nova and
  Ironic, support Keystone API v3.

* Continue to work on Cinder backup improvements: scaling, performance,
  etc.

* Work with driver vendors to stabilize third party CI. I believe,
  every PTL candidate will have this item on the list.

* Improve Cinder test coverage and quality: we need to get functional tests,
  better Tempest and Rally coverage, and make our unit tests better.

We need to continue to be open to the developer community and to operators'
needs. Improving quality and growing the community are tasks which should
be done indefinitely. But we need to keep a balance between developing new
features, supporting existing ones across all drivers, and the stability
of the Cinder project.

[1] http://stackalytics.com/report/contribution/cinder/180

Thank you,
Ivan Kolodyazhny (e0ne)

From lpeer at redhat.com  Thu Sep 17 10:47:34 2015
From: lpeer at redhat.com (Livnat Peer)
Date: Thu, 17 Sep 2015 13:47:34 +0300
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <CA+vgy5DLn=sH575rHyRUSOO6Yg0pHNy82aCc2zWDANa5_R7HtA@mail.gmail.com>

Kyle,
Thank you for the awesome work you did in the past 3 cycles as PTL.
You made hard decisions and drove changes that resulted in a more
welcoming, easier to work with community, not to mention a more
healthy project.
I'm very happy I had the chance to work with you,

Livnat

On Sat, Sep 12, 2015 at 12:12 AM, Kyle Mestery <mestery at mestery.com> wrote:
> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in anyway I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered the
> concept of rotating PTL duties with each cycle, I'd like to highly encourage
> Neutron and other projects to do the same. Having a deep bench of leaders
> supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From john at johngarbutt.com  Thu Sep 17 10:57:54 2015
From: john at johngarbutt.com (John Garbutt)
Date: Thu, 17 Sep 2015 11:57:54 +0100
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <55F6E762.5090406@openstack.org>
References: <1442232202-sup-5997@lrrr.local> <55F6E762.5090406@openstack.org>
Message-ID: <CABib2_qe3VOf3V0PbsynjHt=+=aDqSWFH=xBsz3i6+ft1Nybyw@mail.gmail.com>

On 14 September 2015 at 16:27, Thierry Carrez <thierry at openstack.org> wrote:
> Doug Hellmann wrote:
>> [...]
>> 1. Resolve the situation preventing the DefCore committee from
>>    including image upload capabilities in the tests used for trademark
>>    and interoperability validation.
>>
>> 2. Follow through on the original commitment of the project to
>>    provide an image API by completing the integration work with
>>    nova and cinder to ensure V2 API adoption.
>> [...]
>
> Thanks Doug for taking the time to dive into Glance and to write this
> email. I agree with your top two priorities as being a good summary of
> what the "rest of the community" expects the Glance leadership to focus
> on in the very short term.

Sorry for going back in time, but...

+1

This is a great summary of things, and I agree with the broad strokes
of what is being suggested here.

Thanks for helping ignite a very worthwhile debate here.

Thanks,
johnthetubaguy


From john at johngarbutt.com  Thu Sep 17 11:10:06 2015
From: john at johngarbutt.com (John Garbutt)
Date: Thu, 17 Sep 2015 12:10:06 +0100
Subject: [openstack-dev] [glance] proposed priorities for Mitaka
In-Reply-To: <55F853E8.8070409@inaugust.com>
References: <alpine.DEB.2.11.1509151747170.14097@tc-unix2.emea.hpqcorp.net>
 <55F853E8.8070409@inaugust.com>
Message-ID: <CABib2_qZ1RLnH3c_dLd_CpwH_z7nNEmN4HcdpPQ30tnXdZ6CNQ@mail.gmail.com>

On 15 September 2015 at 18:22, Monty Taylor <mordred at inaugust.com> wrote:
> On 09/15/2015 07:09 PM, stuart.mclaren at hp.com wrote:
>>>>
>>>> I've been looking into the existing task-based-upload that Doug
>>>> mentions:
>>>> can anyone clarify the following?
>>>>
>>>> On a default devstack install you can do this 'task' call:
>>>>
>>>> http://paste.openstack.org/show/462919
>>>
>>>
>>> Yup. That's the one.
>>>
>>>> as an alternative to the traditional image upload (the bytes are
>>>> streamed
>>>> from the URL).
>>>>
>>>> It's not clear to me if this is just an interesting example of the kind
>>>> of operator specific thing you can configure tasks to do, or a real
>>>> attempt to define an alternative way to upload images.
>>>>
>>>> The change which added it [1] calls it a 'sample'.
>>>>
>>>> Is it just an example, or is it a second 'official' upload path?
>>>
>>>
>>> It's how you have to upload images on Rackspace.
>>
>>
>> Ok, so Rackspace have a task called image_import. But it seems to take
>> different json input than the devstack version. (A Swift container/object
>> rather than a URL.)
>>
>> That seems to suggest that tasks really are operator specific, that there
>> is no standard task based upload ... and it should be ok to try
>> again with a clean slate.
>
>
> Yes - as long as we don't use the payload as a defacto undefined API to
> avoid having specific things implemented in the API I think we're fine.
>
> Like, if it was:
>
> glance import-image
>
> and that presented an interface that had a status field ... I mean, that's a
> known OpenStack pattern - it's how nova boot works.
>
> Amongst the badness with this is:
>
> a) It's only implemented in one cloud and at that cloud with special code
> b) The interface is "send some JSON to this endpoint, and we'll infer a
> special sub-API from the JSON, which is not a published nor versioned API"

+1 for moving forward with a "works everywhere" upload API.

My memory of the issue here is the desire to validate images before
allowing users to create instances from them. If we can get that
working with the regular HTTP upload method, I think we will have
sorted the main issue.

It's more about failing as early as possible and defence in depth
than about a specific threat that's not already protected against,
AFAIK.

Forcing copy from swift is handy in terms of reducing glance API load,
but that shouldn't be done at the cost of interoperability.
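The task-based flow described in Monty's quoted notes further down — create an import task, then poll it with task-show until the image id appears in the completed output — can be sketched generically. The field names ('status', 'result', 'image_id') are assumptions for this sketch, not a confirmed Glance payload:

```python
import time

def wait_for_task(show_task, task_id, interval=2.0, timeout=300.0):
    """Poll an image-import task until it finishes; return the image id.

    ``show_task`` is any callable (e.g. a thin wrapper around task-show)
    returning a dict with at least 'status' and, on success, a 'result'
    containing 'image_id' (field names are an assumption here).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = show_task(task_id)
        if task['status'] == 'success':
            return task['result']['image_id']
        if task['status'] == 'failure':
            raise RuntimeError(task.get('message', 'image import failed'))
        time.sleep(interval)
    raise TimeoutError('image import task %s did not finish' % task_id)
```

The point of the sketch is the interoperability cost: a client has to know this polling dance plus the cloud-specific task payload, instead of a single published upload API.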

Thanks,
johnthetubaguy

>>> If you want to see the
>>> full fun:
>>>
>>>
>>> https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510
>>>
>>>
>>> Which is "I want to upload an image to an OpenStack Cloud"
>>>
>>> I've listed it on this slide in CLI format too:
>>>
>>> http://inaugust.com/talks/product-management/index.html#/27
>>>
>>> It should be noted that once you create the task, you need to poll the
>>> task with task-show, and then the image id will be in the completed
>>> task-show output.
>>>
>>> Monty
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From dstanek at dstanek.com  Thu Sep 17 11:26:09 2015
From: dstanek at dstanek.com (David Stanek)
Date: Thu, 17 Sep 2015 07:26:09 -0400
Subject: [openstack-dev] [keystone] PTL Candidacy
Message-ID: <CAO69Ndn+HWT-7b-Z9UyqHeiihiChemsN8jCHs9rm62B+P3OZfA@mail.gmail.com>

Hello everyone,

I'd like to announce my candidacy for the Keystone PTL for the Mitaka cycle.
The project has had great leadership the entire time I have been involved
and
I want to keep up the tradition.

I've been working on Keystone for the past 2 years and I'd like to step up
and take more of a leadership role. I have a long track record of technical
leadership that can be directly applied to Keystone. This includes
everything
from project management to application architecture and many things in
between.

My thoughts on the direction of the project really boil down to landing the
features we have already committed to, taking our experimental features and
making them stable, improving general project stability and expanding our
community.

Goals for this cycle:

* Land the features

  Several features that are very beneficial to the community are currently
  in development. We need to get them stable and shipped for Mitaka. This
  includes, but is not limited to federation, reseller, and centralized
  policy. There is also additional work that needs to be done with
  federation to take it from experimental to stable, like helping other
  projects adapt to the new requirements and making it a first class
OpenStack
  feature.

* Fix the bugs

  We have 293 bugs (at the time of this writing) that need some attention.
  Many have patches that need some work or just need reviews. Others need
  to be investigated and fixed. I'd like to cut this number in half. To do
  that I'd like to get people to focus more on them through activities such
  as bug reviews and bug days (more on that below).

* Expand our testing

  Over the last two cycles we have made some significant advances in our
  testing practices. We need to build on this cultural shift and expand
  the focus on testing even further.

  The full test runtime needs to be cut by at least 50% to improve developer
  workflow. Near immediate test feedback is important for not breaking flow
  when writing code. This can be accomplished by refactoring test code to
  reduce unnecessary setup and focus on the code being tested.

* Expand our community

  Being PTL isn't about me making all of the decisions or calling all of the
  shots. It's about facilitating the design and development of Keystone by
  working with the community. Through mentoring we can get more developers
  ready to be core to speed up our review pace. We need to work together to
  find ways to give more people the ability to contribute upstream. I do
  believe it's possible to make our thriving community even better.

Thank you for voting for me as your PTL for the Mitaka release cycle.


-- David Stanek

From doug at doughellmann.com  Thu Sep 17 11:35:41 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 17 Sep 2015 07:35:41 -0400
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
In-Reply-To: <55FA6B24.5000602@mirantis.com>
References: <55FA6B24.5000602@mirantis.com>
Message-ID: <1442489679-sup-2659@lrrr.local>

Excerpts from mhorban's message of 2015-09-17 10:26:28 +0300:
> Hi Doug,
> 
>  > Rather than building hooks into oslo.config, why don't we build them
>  > into the thing that is catching the signal. That way the app can do lots
>  > of things in response to a signal, and one of them might be reloading
>  > the configuration.
> 
> Hm... Yes... It is really a stupid idea to put a reloading hook into
> oslo.config.
> I'll move that hook mechanism into oslo.service. oslo.service is
> responsible for catching/handling signals.
> 
> Is it enough to have one callback function? Or should I add the ability
> to register many different callback functions?
> 
> What is your point of view?
> 
> Marian
> 

We probably want the ability to have multiple callbacks. There are
already a lot of libraries available on PyPI for handling "events" like
this, so maybe we can pick one of those that is well maintained and
integrate it with oslo.service?

Doug


From flavio at redhat.com  Thu Sep 17 12:12:59 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Thu, 17 Sep 2015 14:12:59 +0200
Subject: [openstack-dev] [defcore] Scoring for DefCore 2016.01 Guideline
In-Reply-To: <0AC92343-7973-4015-A9C2-E8ADD1B5355B@openstack.org>
References: <0AC92343-7973-4015-A9C2-E8ADD1B5355B@openstack.org>
Message-ID: <20150917121259.GE29319@redhat.com>

On 16/09/15 22:26 -0700, Chris Hoge wrote:
>The DefCore Committee is working on scoring capabilities for the upcoming
>2016.01 Guideline, a solid draft of which will be available at the Mitaka
>summit for community review and will go to the Board of Directors for
>approval in January [1]. The current 2015.07 Guideline [2] covers Nova,
>Swift, and Keystone. Scoring for the upcoming Guideline may include new
>capabilities for Neutron, Glance, Cinder, and Heat, as well as
>updated capabilities for Keystone and Nova.
>
>As part of our process, we want to encourage the development and user
>communities to give feedback on the proposed capability categorizations
>and how they are currently graded against the Defcore criteria [3].
>
>Capabilities we're currently considering for possible inclusion are:
>        Neutron:  https://review.openstack.org/#/c/210080/
>        Glance:   https://review.openstack.org/#/c/213353/
>        Cinder:   https://review.openstack.org/#/c/221631/
>        Heat:     https://review.openstack.org/#/c/216983/
>        Keystone: https://review.openstack.org/#/c/213330/
>        Nova:     https://review.openstack.org/#/c/223915/
>
>We would especially like to thank the PTL's and technical community members
>who helped draft the proposed capabilities lists and provided
>feedback--your input has been very helpful.
>
>[1] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015A.rst#n13
>[2] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json
>[3] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst


Thanks for sending this out, it's useful and informative.

Flavio

-- 
@flaper87
Flavio Percoco

From tony at bakeyournoodle.com  Thu Sep 17 12:23:54 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 17 Sep 2015 22:23:54 +1000
Subject: [openstack-dev] [all][gate] grenade jobs failing.
In-Reply-To: <CANw6fcGS4C6s_WTaJ87mTBxx6BwaMDZsNbDFT5-AuEzkNJZqeA@mail.gmail.com>
References: <20150917050153.GE68449@thor.bakeyournoodle.com>
 <20150917053429.GF68449@thor.bakeyournoodle.com>
 <20150917080353.GA72725@thor.bakeyournoodle.com>
 <CANw6fcGS4C6s_WTaJ87mTBxx6BwaMDZsNbDFT5-AuEzkNJZqeA@mail.gmail.com>
Message-ID: <20150917122354.GB82515@thor.bakeyournoodle.com>

On Thu, Sep 17, 2015 at 06:22:47AM -0400, Davanum Srinivas wrote:
> Tony,
> Looks like the ban is holding up:
> https://review.openstack.org/#/c/224429/

Sorry yes.  Robert Collins pointed out that my new quicker plan wasn't going to
work so we went back to the original ban 1.4.1 solution.

It looks like the gate is mostly green again.

If people have grenade jobs that failed, a recheck should fix that.

Thanks to all those that pushed on this to get things going again.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/e55e5a33/attachment.pgp>

From dmccowan at cisco.com  Thu Sep 17 12:50:02 2015
From: dmccowan at cisco.com (Dave McCowan (dmccowan))
Date: Thu, 17 Sep 2015 12:50:02 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
 <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
Message-ID: <D2202F18.1CAE5%dmccowan@cisco.com>


The tenant admin from Step 1 should also do Step 2.

From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Wednesday, September 16, 2015 at 9:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


How does LBaaS do step 2?
The service user does not have the privileges for that secret/container.
Should it use the keystone token with which the user created the LB config to assign read access for the secret/container to the LBaaS service user?

Thanks,
Vijay V.

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: 16 September 2015 19:24
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Why not have lbaas do step 2? Even better would be to help with the instance user spec and combined with lbaas doing step 2, you could restrict secret access to just the amphora that need the secret?

Thanks,
Kevin

________________________________
From: Vijay Venkatachalam
Sent: Tuesday, September 15, 2015 7:06:39 PM
To: OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates
Hi,
               Is there a way to provide read access to a certain user to all secrets/containers of all projects'/tenants' certificates?
               This user, with universal 'read' privileges, will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration.

               Today's LBaaS users follow the process below:

1.      The tenant's creator/admin user uploads certificate info as secrets and a container.

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets.

3.      The user creates the LB config with the container reference.

4.      The LBaaS plugin, using the service user, then accesses the container reference provided in the LB config and proceeds to implement it.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.
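As a rough sketch of step 2 (the ACL grant we would like to avoid), the request body for Barbican's secret ACL API can be built as below. The host, secret ID, and service-user ID are hypothetical placeholders, not values from this thread:

```python
# Rough sketch of step 2: granting the LBaaS service user read access to a
# secret through Barbican's ACL API (PUT /v1/secrets/{id}/acl). The host,
# secret ID, and user ID below are hypothetical placeholders.
import json

def build_read_acl(user_ids, project_access=True):
    """Build the request body granting 'read' to the given user IDs."""
    return {"read": {"users": list(user_ids),
                     "project-access": project_access}}

payload = build_read_acl(["<lbaas-service-user-id>"])
body = json.dumps(payload)

# An actual call would look roughly like this (needs python-requests and a
# valid keystone token for the tenant that owns the secret):
# requests.put("https://barbican.example.com/v1/secrets/<secret-id>/acl",
#              headers={"X-Auth-Token": token,
#                       "Content-Type": "application/json"},
#              data=body)
print(body)
```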

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/9c5e5ced/attachment.html>

From jay.lau.513 at gmail.com  Thu Sep 17 12:53:29 2015
From: jay.lau.513 at gmail.com (Jay Lau)
Date: Thu, 17 Sep 2015 20:53:29 +0800
Subject: [openstack-dev] [magnum] Is magnum db going to be removed for
	k8s resources?
In-Reply-To: <CAFyztAEb_kG1-dciG=M4tifii17hKTjK3oWSVvLrvaq4cyZ1vw@mail.gmail.com>
References: <CABJxuZpQU=Pfvft9JQ_JAX6HcV=aUg+mWUvdJCa7FGmQA3P5qw@mail.gmail.com>
 <CAFyztAEb_kG1-dciG=M4tifii17hKTjK3oWSVvLrvaq4cyZ1vw@mail.gmail.com>
Message-ID: <CAFyztAFXXjyShnavJt2x8FbzBfmq4sa0xdC8nWWpMJg9YZtgZA@mail.gmail.com>

Anyone who have some comments/suggestions on this? Thanks!

On Mon, Sep 14, 2015 at 3:57 PM, Jay Lau <jay.lau.513 at gmail.com> wrote:

> Hi Vikas,
>
> Thanks for starting this thread. Here are some of my comments.
>
> There are two reasons why Magnum wants to get k8s resources via the k8s
> API:
> 1) Native clients support
> 2) With the current implementation, we cannot get the pods for a
> replication controller, because Magnum only persists the replication
> controller info in the Magnum DB.
>
> With the objects-from-bay bp, Magnum will always call the k8s API to
> get all objects for pod/service/rc. Can you please explain your
> concerns about why we need to persist those objects in the Magnum DB? We
> would need to sync the Magnum DB and k8s periodically if we persisted two
> copies of the objects.
>
> Thanks!
>
> <https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>
>
> 2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvikas16 at gmail.com>:
>
>> Hi Team,
>>
>> As per the objects-from-bay blueprint implementation [1], all calls to the
>> magnum db are being skipped, for example pod.create() etc.
>>
>> Are we not going to use the magnum db at all for pods/services/rc?
>>
>>
>> Thanks
>> Vikas Choudhary
>>
>>
>> [1] https://review.openstack.org/#/c/213368/
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>



-- 
Thanks,

Jay Lau (Guangya Liu)
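A minimal sketch of the objects-from-bay approach discussed in this thread, i.e. asking the k8s API for an rc's pods rather than reading a second copy from the Magnum DB. The API server address, namespace, and label selector are assumed placeholders:

```python
# Sketch: build the k8s API URL that lists the pods belonging to a
# replication controller via its label selector, instead of reading pod
# records persisted in the Magnum DB. The endpoint and label values are
# illustrative placeholders.
from urllib.parse import urlencode

def pods_for_rc_url(api_server, namespace, rc_name):
    """URL for GET /api/v1/namespaces/{ns}/pods filtered by the rc's label."""
    query = urlencode({"labelSelector": "name=%s" % rc_name})
    return "%s/api/v1/namespaces/%s/pods?%s" % (api_server, namespace, query)

url = pods_for_rc_url("https://k8s.example.com:6443", "default", "my-rc")
# A real client would now GET this URL with the bay's TLS credentials, e.g.:
# resp = requests.get(url, cert=(cert_file, key_file), verify=ca_file)
print(url)
```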
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/0acef20e/attachment.html>

From jay.lau.513 at gmail.com  Thu Sep 17 13:07:16 2015
From: jay.lau.513 at gmail.com (Jay Lau)
Date: Thu, 17 Sep 2015 21:07:16 +0800
Subject: [openstack-dev] [Magnum]bay/baymodel sharing for multi-tenants
Message-ID: <CAFyztAEiZ4dPey5VWe-cJbZpUc6csrBSf-bw5Qp53PRdpSXPtg@mail.gmail.com>

Hi Magnum,

Currently, there are two blueprints related to bay/baymodel sharing
for different tenants:
1) https://blueprints.launchpad.net/magnum/+spec/tenant-shared-model
2) https://blueprints.launchpad.net/magnum/+spec/public-baymodels

What we want to do is make a bay/baymodel shareable by
multiple tenants. The reason is that creating a bay can be time-consuming,
and sharing a bay across tenants can save time for some users.

Any comments on this?

-- 
Thanks,

Jay Lau (Guangya Liu)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/abbf6b42/attachment.html>

From tdecacqu at redhat.com  Thu Sep 17 13:25:50 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Thu, 17 Sep 2015 13:25:50 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now over
Message-ID: <55FABF5E.4000204@redhat.com>

PTL Nomination is now over. The official candidate list is available on
the wiki[0].

There are 5 projects without candidates, so according to this
resolution[1], the TC will have to appoint a new PTL for Barbican,
MagnetoDB, Magnum, Murano and Security.

There are 7 projects that will have an election: Cinder, Glance, Ironic,
Keystone, Mistral, Neutron and Oslo. The details for those will be
posted tomorrow after Tony and I set up the CIVS system.

Thank you,
Tristan


[0]:
https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
[1]:
http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/25a95382/attachment.pgp>

From flavio at redhat.com  Thu Sep 17 13:32:54 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Thu, 17 Sep 2015 15:32:54 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FABF5E.4000204@redhat.com>
References: <55FABF5E.4000204@redhat.com>
Message-ID: <20150917133254.GH29319@redhat.com>

On 17/09/15 13:25 +0000, Tristan Cacqueray wrote:
>PTL Nomination is now over. The official candidate list is available on
>the wiki[0].
>
>There are 5 projects without candidates, so according to this
>resolution[1], the TC we'll have to appoint a new PTL for Barbican,
>MagnetoDB, Magnum, Murano and Security

Magnum had a candidacy on the mailing list. I'd assume this is because
it wasn't proposed to openstack/election. Right?

Thanks for the hard work here,
Flavio

>
>There are 7 projects that will have an election: Cinder, Glance, Ironic,
>Keystone, Mistral, Neutron and Oslo. The details for those will be
>posted tomorrow after Tony and I setup the CIVS system.
>
>Thank you,
>Tristan
>
>
>[0]:
>https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>[1]:
>http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>
>



>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/851da1e1/attachment.pgp>

From tdecacqu at redhat.com  Thu Sep 17 13:44:02 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Thu, 17 Sep 2015 13:44:02 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150917133254.GH29319@redhat.com>
References: <55FABF5E.4000204@redhat.com> <20150917133254.GH29319@redhat.com>
Message-ID: <55FAC3A2.6090709@redhat.com>

On 09/17/2015 01:32 PM, Flavio Percoco wrote:
> On 17/09/15 13:25 +0000, Tristan Cacqueray wrote:
>> PTL Nomination is now over. The official candidate list is available on
>> the wiki[0].
>>
>> There are 5 projects without candidates, so according to this
>> resolution[1], the TC we'll have to appoint a new PTL for Barbican,
>> MagnetoDB, Magnum, Murano and Security
> 
> Magnum had a candidacy on the mailing list. I'd assume this is because
> it wasn't proposed to openstack/election. Right?

That is correct, but the candidacy was submitted after the deadline, so
we can't validate this candidate.

> 
> Thanks for the hard work here,
> Flavio
> 
>>
>> There are 7 projects that will have an election: Cinder, Glance, Ironic,
>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>> posted tomorrow after Tony and I setup the CIVS system.
>>
>> Thank you,
>> Tristan
>>
>>
>> [0]:
>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>
>> [1]:
>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>
>>
>>
> 
> 
> 
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/b616fdc0/attachment.pgp>

From flavio at redhat.com  Thu Sep 17 13:49:34 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Thu, 17 Sep 2015 15:49:34 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAC3A2.6090709@redhat.com>
References: <55FABF5E.4000204@redhat.com> <20150917133254.GH29319@redhat.com>
 <55FAC3A2.6090709@redhat.com>
Message-ID: <20150917134934.GI29319@redhat.com>

On 17/09/15 13:44 +0000, Tristan Cacqueray wrote:
>On 09/17/2015 01:32 PM, Flavio Percoco wrote:
>> On 17/09/15 13:25 +0000, Tristan Cacqueray wrote:
>>> PTL Nomination is now over. The official candidate list is available on
>>> the wiki[0].
>>>
>>> There are 5 projects without candidates, so according to this
>>> resolution[1], the TC we'll have to appoint a new PTL for Barbican,
>>> MagnetoDB, Magnum, Murano and Security
>>
>> Magnum had a candidacy on the mailing list. I'd assume this is because
>> it wasn't proposed to openstack/election. Right?
>
>That is correct, but the candidacy was submitted after the deadlines so
>we can't validate this candidate.

Awesome, thanks for the confirmation.
Flavio

>
>>
>> Thanks for the hard work here,
>> Flavio
>>
>>>
>>> There are 7 projects that will have an election: Cinder, Glance, Ironic,
>>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>>> posted tomorrow after Tony and I setup the CIVS system.
>>>
>>> Thank you,
>>> Tristan
>>>
>>>
>>> [0]:
>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>
>>> [1]:
>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>
>>>
>>>
>>
>>
>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>



-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/d5d8322f/attachment.pgp>

From douglas.mendizabal at rackspace.com  Thu Sep 17 13:56:18 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Thu, 17 Sep 2015 08:56:18 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150917134934.GI29319@redhat.com>
References: <55FABF5E.4000204@redhat.com> <20150917133254.GH29319@redhat.com>
 <55FAC3A2.6090709@redhat.com> <20150917134934.GI29319@redhat.com>
Message-ID: <55FAC682.9050102@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

This is quite unfortunate, as I was intending to submit my candidacy
for the Barbican project today, but I did not realize the cutoff time
would be in the morning in CDT.

I'd like to apologize to the OpenStack community and the Barbican team
in particular for missing this deadline.

Thanks,

Douglas Mendizábal

On 9/17/15 8:49 AM, Flavio Percoco wrote:
> On 17/09/15 13:44 +0000, Tristan Cacqueray wrote:
>> On 09/17/2015 01:32 PM, Flavio Percoco wrote:
>>> On 17/09/15 13:25 +0000, Tristan Cacqueray wrote:
>>>> PTL Nomination is now over. The official candidate list is
>>>> available on the wiki[0].
>>>> 
>>>> There are 5 projects without candidates, so according to
>>>> this resolution[1], the TC we'll have to appoint a new PTL
>>>> for Barbican, MagnetoDB, Magnum, Murano and Security
>>> 
>>> Magnum had a candidacy on the mailing list. I'd assume this is
>>> because it wasn't proposed to openstack/election. Right?
>> 
>> That is correct, but the candidacy was submitted after the
>> deadlines so we can't validate this candidate.
> 
> Awesome, thanks for the confirmation. Flavio
> 
>> 
>>> 
>>> Thanks for the hard work here, Flavio
>>> 
>>>> 
>>>> There are 7 projects that will have an election: Cinder,
>>>> Glance, Ironic, Keystone, Mistral, Neutron and Oslo. The
>>>> details for those will be posted tomorrow after Tony and I
>>>> setup the CIVS system.
>>>> 
>>>> Thank you, Tristan
>>>> 
>>>> 
>>>> [0]: 
>>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>>
>>>>
>>>>
>>>> 
>>>> [1]:
>>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>>
>>>> 
________________________________________________________________________
__
>>>> 
>>>> 
>>>> OpenStack Development Mailing List (not for usage questions) 
>>>> Unsubscribe: 
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>
>>>> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>>> 
>>> 
>>> ____________________________________________________________________
______
>>>
>>>
>>> 
OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>
>>> 
> 
> 
> 
> 
> ______________________________________________________________________
____
>
> 
OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV+saCAAoJEB7Z2EQgmLX7XG0P/AqOTGDDbVJrJSTPnCGwtqYd
275uDQgWqvbsGMKOrfKO7GBgI/5n/j8hCyiipq6niCfW5eWWH7rYgU1pLKLjiZmR
12Ui4PwkqvoJEa0J5NIiM8GOrt2TEDTu7vOQAWMzGEn+8fbs7Z9MRPIAg7bEXuk0
eQNs5LK6j/INXvuuKm4ZV2MjAxJRbtsSZYVn59U4IxM0GSJIC4MLu8dGaRzf+G8B
881hflBskg1Sa5UjEcj/yMUDrtBLOyNkM67dv8M9ojNB0bX3o+US8zmJnsbk6whD
ox3GrgoxT8he6iMNd/YYycFtBlBZ4fqNN8Uv5Vr/+k8s2umJf7Y3IbM2oQuhM1oJ
mWuwFbyc440ep9WkBeXeZTm0S0FYwR3MS40nW2D04eHEcTbCHchKIoLp/tO0AKYP
op116JlzTyWZatywL8rIOner4UJQX6yUqmGRdonACNQ6OAzTLTTaARtwqHacxL81
UqzOLEQ8nW9p5xCTPWhbPbR7t1T7ngwf5bJAuDKVx9JDUsM+mYjZ7smWdg+PV1yS
SwW94TzImOV4ujiT7AwzUBz6SZ0jHFt5yXVWscggpj5k7l8lPqFhd4xQVvidqLcZ
VsHfKwfdfWX22z+97n4/GjEd6B0seZqcxoP4qVsXXgpuFJETVLEifDM9DTOLccxy
xQR6UpOxTZxl5EdiOpxX
=nraX
-----END PGP SIGNATURE-----


From Neil.Jerram at metaswitch.com  Thu Sep 17 13:58:29 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Thu, 17 Sep 2015 13:58:29 +0000
Subject: [openstack-dev] [neutron] What semantics are expected when booting
 a VM on an external network?
Message-ID: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>

Thanks to the interesting 'default network model' thread, I now know
that Neutron allows booting a VM on an external network. :-)  I didn't
realize that before!

So, I'm now wondering what connectivity semantics are expected (or even
specified!) for such VMs, and whether they're the same as - or very
similar to - the 'routed' networking semantics I've described at [1].

[1]
https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst

Specifically, I wonder whether VMs attached to an external network expect
any particular L2 characteristics, such as being able to L2 broadcast to
each other.

By way of context - i.e. why am I asking this?...   The
networking-calico project [2] provides an implementation of the 'routed'
semantics at [1], but only if one suspends belief in some of the Neutron
semantics associated with non-external networks, such as needing a
virtual router to provide connectivity to the outside world.  (Because
networking-calico provides that external connectivity without any
virtual router.)  Therefore we believe that we need to propose some
enhancement of the Neutron API and data model, so that Neutron can
describe 'routed' semantics as well as all the traditional ones.  But,
if what we are doing is semantically equivalent to 'attaching to an
external network', perhaps no such enhancement is needed...

[2] https://git.openstack.org/cgit/openstack/networking-calico

Many thanks for any input!

    Neil



From mriedem at linux.vnet.ibm.com  Thu Sep 17 14:05:18 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 17 Sep 2015 09:05:18 -0500
Subject: [openstack-dev] [all][gate] grenade jobs failing.
In-Reply-To: <20150917122354.GB82515@thor.bakeyournoodle.com>
References: <20150917050153.GE68449@thor.bakeyournoodle.com>
 <20150917053429.GF68449@thor.bakeyournoodle.com>
 <20150917080353.GA72725@thor.bakeyournoodle.com>
 <CANw6fcGS4C6s_WTaJ87mTBxx6BwaMDZsNbDFT5-AuEzkNJZqeA@mail.gmail.com>
 <20150917122354.GB82515@thor.bakeyournoodle.com>
Message-ID: <55FAC89E.1020607@linux.vnet.ibm.com>



On 9/17/2015 7:23 AM, Tony Breeds wrote:
> On Thu, Sep 17, 2015 at 06:22:47AM -0400, Davanum Srinivas wrote:
>> Tony,
>> Looks like the ban is holding up:
>> https://review.openstack.org/#/c/224429/
>
> Sorry yes.  Robert Collins pointed out that my new quicker plan wasn't going to
> work so we went back to the original ban 1.4.1 solution.
>
> It looks like the gate is mostly green again.
>
> If people have grenade jobs that failed, a recheck should fix that.
>
> Thanks to all those that pushed on this to get things going again.
>
> Yours Tony.
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

As noted in the stable/kilo block on 1.4.1, we're good until we do a
1.4.2 release, at which point we either break juno or we break kilo, since
they overlap in supported version ranges.

I've asked Tony to push a requirements change on stable/juno and
stable/kilo to add a note next to oslo.utils reminding us of this, so we
think twice before breaking it again (we should really make this part of
the openstack/releases review process: check that the proposed
version change isn't going to show up in a branch that we don't intend
it to). Because if the proposed release has a g-r sync for branch n and
the version will show up in branch n-1 or n+1, things will likely break.

Hopefully we just don't need an oslo.utils release on juno or kilo 
before juno-eol happens.
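The pre-release check suggested above could be sketched as follows. The version bounds used here are illustrative placeholders, not the real stable/juno and stable/kilo requirement ranges:

```python
# Sketch of the suggested openstack/releases check: before releasing a
# library version, see which stable branches' requirement ranges would pick
# it up. The ranges below are hypothetical, not actual g-r entries.

def parse(version):
    """Turn '1.4.2' into a comparable tuple (1, 4, 2)."""
    return tuple(int(part) for part in version.split("."))

def in_range(version, lower_inclusive, upper_exclusive):
    """True if lower_inclusive <= version < upper_exclusive."""
    return parse(lower_inclusive) <= parse(version) < parse(upper_exclusive)

branch_ranges = {
    "stable/juno": ("1.4.0", "1.5.0"),   # hypothetical bounds
    "stable/kilo": ("1.4.0", "1.5.0"),   # hypothetical bounds
}

def branches_affected(version):
    """All branches whose requirement range would pick up this release."""
    return [branch for branch, (lo, hi) in branch_ranges.items()
            if in_range(version, lo, hi)]

# With overlapping ranges, a 1.4.2 release intended for one branch would
# also land in the other, which is exactly how one branch can break another.
print(branches_affected("1.4.2"))
```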

-- 

Thanks,

Matt Riedemann



From martin.andre at gmail.com  Thu Sep 17 14:07:29 2015
From: martin.andre at gmail.com (=?UTF-8?Q?Martin_Andr=C3=A9?=)
Date: Thu, 17 Sep 2015 23:07:29 +0900
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <0ff2ac38c6044b0f039992b0a1f53ecf@weites.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
 <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
 <0ff2ac38c6044b0f039992b0a1f53ecf@weites.com>
Message-ID: <CAHD=wReayuSHgN1JEbz=bZ9P293dQaqB+YsuZJitvP7E=Jw5hQ@mail.gmail.com>

On Thu, Sep 17, 2015 at 3:22 AM, <harm at weites.com> wrote:

> There is an apparent need for having official RHOS being supported from
> our end, and we just so happen to have the possibility of filling that
> need. Should the need arise to support whatever fancy proprietary backend
> system or even have Kolla integrate with Oracle Solaris or something, that
> need would most probably be backed by a company plus developer effort. I
> believe the burden for our current (great) team would more or less stay the
> same (e.g. let's assume we don't know anything about Solaris), so this
> company should ship in devvers to aid their 'wish'. The team effort with
> these additional devvers would indeed grow, big time. Keeping our eyes on
> these matters feels like a fair solution, allowing for these additions while
> guarding the effort they take. Should Kolla start supporting LXC besides
> Docker, that would be awesome (uhm...) - but I honestly don't see a need to
> be thinking about that right now; if someone comes up with a spec about it
> and wants to invest time+effort, we can at least review it. We shouldn't
> prepare our Dockerfiles for such a possibility, though. By contrast, the
> difference between RHOS and CentOS is very small, so support is rather
> easy to implement.
>
> The question was whether Kolla wants/should support integrating with 3rd party
> tools, and I think we should support it. There should be rules, yes. We
> probably shouldn't be worrying about proprietary stuff that other projects
> hardly take seriously (even though drivers have been accepted)...
>
> Vote: +1
>
> - harmw
>
> Sam Yaple schreef op 2015-09-14 13:44:
>
>> On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke <paul.bourke at oracle.com>
>> wrote:
>>
>> On 13/09/15 18:34, Steven Dake (stdake) wrote:
>>>
>>> Response inline.
>>>>
>>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>"
>>>> <sam at yaple.net<mailto:sam at yaple.net>>
>>>> Date: Sunday, September 13, 2015 at 1:35 AM
>>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>>> Cc: "OpenStack Development Mailing List (not for usage
>>>> questions)"
>>>>
>>>>
>>>       tack-dev at lists.openstack.org<mailto:
>> openstack-dev at lists.openstack.org>>
>>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to
>>>> RHOS + RDO types
>>>>
>>>> On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake)
>>>> <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>>> Response inline.
>>>>
>>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>"
>>>> <sam at yaple.net<mailto:sam at yaple.net>>
>>>> Date: Saturday, September 12, 2015 at 11:34 PM
>>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>>> Cc: "OpenStack Development Mailing List (not for usage
>>>> questions)"
>>>>
>>>>
>>>       tack-dev at lists.openstack.org<mailto:
>> openstack-dev at lists.openstack.org>>
>>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to
>>>> RHOS + RDO types
>>>>
>>>> Sam Yaple
>>>>
>>>> On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake)
>>>> <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>>>
>>>> From: Sam Yaple <samuel at yaple.net<mailto:samuel at yaple.net>>
>>>> Reply-To: "sam at yaple.net<mailto:sam at yaple.net>"
>>>> <sam at yaple.net<mailto:sam at yaple.net>>
>>>> Date: Saturday, September 12, 2015 at 11:01 PM
>>>> To: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
>>>> Cc: "OpenStack Development Mailing List (not for usage
>>>> questions)"
>>>>
>>>>
>>>       tack-dev at lists.openstack.org<mailto:
>> openstack-dev at lists.openstack.org>>
>>
>>> Subject: Re: [kolla] Followup to review in gerrit relating to
>>>> RHOS + RDO types
>>>>
>>>> On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake)
>>>> <stdake at cisco.com<mailto:stdake at cisco.com>> wrote:
>>>> Hey folks,
>>>>
>>>> Sam had asked a reasonable set of questions regarding a patchset:
>>>> https://review.openstack.org/#/c/222893/ [1]
>>>>
>>>>
>>>> The purpose of the patchset is to enable both RDO and RHOS as
>>>> binary choices on RHEL platforms.  I suspect over time,
>>>> from-source deployments have the potential to become the norm, but
>>>> the business logistics of such a change are going to take some
>>>> significant time to sort out.
>>>>
>>>> Red Hat has two distros of OpenStack neither of which are from
>>>> source.  One is free called RDO and the other is paid called
>>>> RHOS.  In order to obtain support for RHEL VMs running in an
>>>> OpenStack cloud, you must be running on RHOS RPM binaries.  You
>>>> must also be running on RHEL.  It remains to be seen whether Red
>>>> Hat will actively support Kolla deployments with a RHEL+RHOS set
>>>> of packaging in containers, but my hunch says they will.  It is
>>>> in Kolla's best interest to implement this model and not make it
>>>> hard on Operators since many of them do indeed want Red Hat's
>>>> support structure for their OpenStack deployments.
>>>>
>>>> Now to Sam's questions:
>>>> "Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many
>>>> more do we add? What's our policy on adding a new type?"
>>>>
>>>> I'm not immediately clear on how binary fits in.  We could
>>>> make binary synonymous with the community supported version (RDO)
>>>> while still implementing the binary RHOS version.  Note Kolla
>>>> does not "support" any distribution or deployment of OpenStack
>>>> -- Operators will have to look to their vendors for support.
>>>>
>>>> If everything between centos+rdo and rhel+rhos is mostly the same
>>>> then I would think it would make more sense to just use the base
>>>> ('rhel' in this case) to branch of any differences in the
>>>> templates. This would also allow for the least amount of change
>>>> and most generic implementation of this vendor specific packaging.
>>>> This would also match what we do with oraclelinux, we do not have
>>>> a special type for that and any specifics would be handled by an
>>>> if statement around 'oraclelinux' and not some special type.
>>>>
>>>> I think what you are proposing is RHEL + RHOS and CENTOS + RDO.
>>>> RDO also runs on RHEL.  I want to enable Red Hat customers to
>>>> make a choice to have a supported  operating system but not a
>>>> supported Cloud environment.  The answer here is RHEL + RDO.
>>>> This leads to full support down the road if the Operator chooses
>>>> to pay Red Hat for it by an easy transition to RHOS.
>>>>
>>>> I am against including vendor specific things like RHOS in Kolla
>>>> outright like you are purposing. Suppose another vendor comes
>>>> along with a new base and new packages. They are willing to
>>>> maintain it, but its something that no one but their customers
>>>> with their licensing can use. This is not something that belongs
>>>> in Kolla and I am unsure that it is even appropriate to belong in
>>>> OpenStack as a whole. Unless RHEL+RHOS can be used by those that
>>>> do not have a license for it, I do not agree with adding it at
>>>> all.
>>>>
>>>> Sam,
>>>>
>>>> Someone stepping up to maintain a completely independent set of
>>>> docker images hasn't happened.  To date nobody has done that.
>>>> If someone were to make that offer, and it was a significant
>>>> change, I think the community as a whole would have to evaluate
>>>> such a drastic change.  That would certainly increase our
>>>> implementation and maintenance burden, which we don't want to
>>>> do.  I don't think what you propose would be in the best
>>>> interest of the Kolla project, but I'd have to see the patch set
>>>> to evaluate the scenario appropriately.
>>>>
>>>> What we are talking about is 5 additional lines to enable
>>>> RHEL+RHOS specific repositories, which is not very onerous.
>>>>
>>>> The fact that you can't use it directly has little bearing on
>>>> whether it's valid technology for OpenStack.  There are already
>>>> two well-established precedents in OpenStack for integrating
>>>> software that cannot be used without a license.  Cinder has 55 [1]
>>>> Volume drivers which they SUPPORT.  At least 80% of them are
>>>> completely proprietary hardware which in reality is mostly just
>>>> software that is impossible to use without a license.  There
>>>> are 41 [2] Neutron drivers registered on the Neutron driver page;
>>>> almost all of them require proprietary licenses for what amounts
>>>> to integration with proprietary software.  The OpenStack
>>>> preferred license is ASL for a reason -- to be business
>>>> friendly.  Licensed software has a place in the world of
>>>> OpenStack, even if it only serves as an integration point, which
>>>> the proposed patch does.  We are consistent with community values
>>>> on this point or I wouldn't have bothered proposing the patch.
>>>>
>>>> We want to encourage people to use Kolla for proprietary
>>>> solutions if they so choose.  This is how support manifests,
>>>> which increases the strength of the Kolla project.  The presence
>>>> of support increases the likelihood that Kolla will be adopted by
>>>> Operators.  If you're asking the Operators to maintain a fork for
>>>> those 5 RHOS repo lines, that seems unreasonable.
>>>>
>>>> I'd like to hear other Core Reviewer opinions on this matter
>>>> and will hold a majority vote on this thread as to whether we will
>>>> facilitate integration with third-party software such as the
>>>> Cinder Block Drivers, the Neutron Network drivers, and various
>>>> for-pay versions of OpenStack such as RHOS.  I'd like all core
>>>> reviewers to weigh in please.  Without a complete vote it will be
>>>> hard to gauge what the Kolla community really wants.
>>>>
>>>> Core reviewers:
>>>> Please vote +1 if you ARE satisfied with integration with
>>>> third-party software that is unusable without a license,
>>>> specifically Cinder volume drivers, Neutron network drivers, and
>>>> various for-pay distributions of OpenStack and container runtimes.
>>>> Please vote -1 if you ARE NOT satisfied with integration with
>>>> third-party software that is unusable without a license,
>>>> specifically Cinder volume drivers, Neutron network drivers, and
>>>> various for-pay distributions of OpenStack and container runtimes.
>>>>
>>>> A bit of explanation on your vote might be helpful.
>>>>
>>>> My vote is +1.  I have already provided my rationale.
>>>>
>>>> Regards,
>>>> -steve
>>>>
>>>> [1] https://wiki.openstack.org/wiki/CinderSupportMatrix
>>>> [2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>>>>
>>>>
>>>> I appreciate you calling a vote so early. But I haven't had my
>>>> questions answered yet enough to even vote on the matter at hand.
>>>>
>>>> In this situation the closest thing we have to a plugin type
>>>> system as Cinder or Neutron does is our header/footer system. What
>>>> you are proposing is integrating a proprietary solution into the
>>>> core of Kolla. Those Cinder and Neutron plugins have external
>>>> components and those external components are not baked into the
>>>> project.
>>>>
>>>> What happens if and when the RHOS packages require different
>>>> tweaks in the various containers? What if it requires changes to
>>>> the Ansible playbooks? It begins to balloon out past 5 lines of
>>>> code.
>>>>
>>>> Unfortunately, the community _won't_ get to vote on whether or not
>>>> to implement those changes because RHOS is already in place.
>>>> That's why I am asking the questions now as this _right_ _now_ is
>>>> the significant change you are talking about, regardless of the
>>>> lines of code.
>>>>
>>>> So the question is not whether we are going to integrate 3rd
>>>> party plugins, but whether we are going to allow companies to
>>>> build proprietary products in the Kolla repo. If we allow
>>>> RHEL+RHOS then we would need to allow another distro+company
>>>> packaging and potential Ansible tweaks to get it to work for them.
>>>>
>>>> If you really want to do what Cinder and Neutron do, we need a
>>>> better system for injecting code. That would be much closer to the
>>>> plugins that the other projects have.
>>>>
>>>> I'd like to have a discussion about this rather than immediately
>>>> call for a vote which is why I asked you to raise this question in
>>>> a public forum in the first place.
>>>>
>>>> Sam,
>>>>
>>>> While a true code injection system might be interesting and would
>>>> be more parallel with the plugin model used in cinder and neutron
>>>> (and to some degree nova), those various systems didn't begin
>>>> that way.  Their driver code at one point was completely
>>>> integrated.  Only after 2-3 years was the code broken into a
>>>> fully injectable state.  I think that is an awfully high bar to
>>>> set to sort out the design ahead of time.  One of the reasons
>>>> Neutron has taken so long to mature is that the Neutron community
>>>> attempted to do plugins at too early a stage, which created big
>>>> gaps in unit and functional tests.  A more appropriate design
>>>> would be for that pattern to emerge from the system over time as
>>>> people begin to adapt various distro tech to Kolla.  If you
>>>> looked at the patch in gerrit, there is one clear pattern, "Setup
>>>> distro repos", which at some point in the future could be made
>>>> injectable much as headers and footers are today.
>>>>
>>>> As for building proprietary products in the Kolla repository, the
>>>> license is ASL, which means it is inherently not proprietary.  I
>>>> am fine with the code base integrating with proprietary software
>>>> as long as the license terms are met; someone has to pay the
>>>> mortgages of the thousands of OpenStack developers.  We should
>>>> encourage growth of OpenStack, and one of the ways for that to
>>>> happen is to be business friendly.  This translates into first
>>>> knowing the world is increasingly adopting open source
>>>> methodologies and facilitating that transition, and second
>>>> accepting the world has a whole slew of proprietary software that
>>>> already exists today that requires integration.
>>>>
>>>> Nonetheless, we have a difference of opinion on this matter, and
>>>> I want this work to merge prior to rc1.  Since this is a project
>>>> policy decision and not a technical issue, it makes sense to put
>>>> it to a wider vote to either unblock or kill the work.  It would
>>>> be a shame if we reject all driver and supported distro
>>>> integration because we as a community take an anti-business stance
>>>> on our policies, but I'll live by what the community decides.
>>>> This is not a decision either you or I may dictate which is why it
>>>> has been put to a vote.
>>>>
>>>> Regards
>>>> -steve
>>>>
>>>> For oracle linux, I'd like to keep RDO for oracle linux and
>>>> from source on oracle linux as choices.  RDO also runs on oracle
>>>> linux.  Perhaps the patch set needs some later work here to
>>>> address this point in more detail, but as is 'binary' covers
>>>> oracle linux.
>>>>
>>>> Perhaps what we should do is get rid of the binary type
>>>> entirely.  Ubuntu doesn't really have a binary type, they have
>>>> a cloudarchive type, so binary doesn't make a lot of sense.
>>>> Since Ubuntu to my knowledge doesn't have two distributions of
>>>> OpenStack, the same logic wouldn't apply to providing a full
>>>> support onramp for Ubuntu customers.  Oracle doesn't provide a
>>>> binary type either; their binary type is really RDO.
>>>>
>>>> The binary packages for Ubuntu are _packaged_ by the cloudarchive
>>>> team. But in the case of when OpenStack collides with an LTS
>>>> release (Icehouse and 14.04 was the last one) you do not add a new
>>>> repo because the packages are in the main Ubuntu repo.
>>>>
>>>> Debian provides its own packages as well. I do not want a type
>>>> name per distro. 'binary' catches all packaged OpenStack things by
>>>> a distro.
>>>>
>>>> FWIW I never liked the transition away from rdo in the repo names
>>>> to binary.  I guess I should have -1'ed those reviews back
>>>> then, but I think it's time to either revisit the decision or
>>>> compromise that binary and rdo mean the same thing in a centos and
>>>> rhel world.
>>>>
>>>> Regards
>>>> -steve
>>>>
>>>> Since we implement multiple bases, some of which are not RPM
>>>> based, it doesn't make much sense to me to have rhel and rdo as a
>>>> type which is why we removed rdo in the first place in favor of
>>>> the more generic 'binary'.
>>>>
>>>> As such the implied second question, "How many more do we
>>>> add?", sort of sounds like "how many do we support?".  The
>>>> answer to the second question is none -- again, the Kolla
>>>> community does not support any deployment of OpenStack.  To the
>>>> question as posed, how many we add, the answer is that it is
>>>> really up to community members willing to implement and maintain
>>>> the work.  In this case, I have personally stepped up to implement
>>>> RHOS and maintain it going forward.
>>>>
>>>> Our policy on adding a new type could be simple or onerous.  I
>>>> prefer simple.  If someone is willing to write the code and
>>>> maintain it so that it stays in good working order, I see no harm
>>>> in it remaining in tree.  I don't suspect there will be a lot
>>>> of people interested in adding multiple distributions for a
>>>> particular operating system.  To my knowledge, and I could be
>>>> incorrect, Red Hat is the only OpenStack company with a paid and
>>>> a community version of OpenStack available simultaneously, and the
>>>> paid version is only available on RHEL.  I think the risk of
>>>> RPM-based distributions plus their type count spiraling out of
>>>> manageability is low.  Even if the risk were high, I'd prefer
>>>> to keep an open mind to facilitate an increase in diversity in our
>>>> community (which is already fantastically diverse, btw ;)
>>>>
>>>> I am open to questions, comments or concerns.  Please feel free
>>>> to voice them.
>>>>
>>>> Regards,
>>>> -steve
>>>>
>>>>
>>>>
>>>
>>  _______________________________________________________________________
>>
>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [4]
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> [5]
>>>>
>>>
>>> Both arguments sound valid to me, both have pros and cons.
>>>
>>> I think it's valuable to look to the experiences of Cinder and
>>> Neutron in this area, both of which seem to have the same scenario
>>> and have existed much longer than Kolla. From what I know of how
>>> these operate, proprietary code is allowed to exist in the mainline
>>> so long as a certain set of criteria is met. I'd have to look it up,
>>> but I think it mostly comes down to the relevant parties having to
>>> "play by the rules", e.g. provide a working CI, help with reviews,
>>> attend weekly meetings, etc. If Kolla can craft a similar set of
>>> criteria for proprietary code down the line, I think it should work
>>> well for us.
>>>
>>> Steve has a good point in that it may be too much overhead to
>>> implement a plugin system or similar up front. Instead, we should
>>> actively monitor the overhead in terms of reviews and code size that
>>> these extra implementations add. Perhaps agree to review it at the
>>> end of Mitaka?
>>>
>>> Given the project is young, I think it can also benefit from the
>>> increased usage and exposure from allowing these parties in. I would
>>> hope independent contributors would not feel excluded by not being
>>> able to use/test the pieces that need a license. The libre
>>> distros will remain #1 for us.
>>>
>>> So based on the above explanation, I'm +1.
>>>
>>> -Paul
>>>
>>>
>>>
>>  _______________________________________________________________________
>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [4]
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> [5]
>>>
>>
>> Given Paul's comments I would agree here as well. I would like to
>> nail down the 'criteria' required for Kolla to allow this proprietary
>> code into the main repo as soon as possible though, and I suggest that
>> being able to gate against it be a bare-minimum criterion.
>>
>> As for a plugin system, I also agree with Paul that we should revisit
>> the cost of including these other distros and any types needed once
>> we have had time to see whether they introduce any additional
>> overhead.
>>
>> So for the question 'Do we allow code that relies on proprietary
>> packages?' I would vote +1, with the condition that we define the
>> requirements of allowing that code as soon as possible.
>>
>>
>> Links:
>> ------
>> [1] https://review.openstack.org/#/c/222893/
>> [2] https://wiki.openstack.org/wiki/CinderSupportMatrix
>> [3] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>> [4] http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> [5] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
I'm also +1 on the principle of opening the Kolla doors to distros with paid
licenses and accepting them in the source tree. We shouldn't build barriers
but bridges.
I would like to make sure we all agree, however, that there is no guarantee
that code for a distro will stay in the repo just because it is there today.
I want to reserve the right as a Kolla dev to remove paid distros
from the tree if they become a burden, for example unreliable CI or lack of
commitment from the people backing the distro.

Martin


>    _______________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/10c275f7/attachment.html>

From davanum at gmail.com  Thu Sep 17 14:19:29 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 17 Sep 2015 10:19:29 -0400
Subject: [openstack-dev] [all][gate] grenade jobs failing.
In-Reply-To: <55FAC89E.1020607@linux.vnet.ibm.com>
References: <20150917050153.GE68449@thor.bakeyournoodle.com>
 <20150917053429.GF68449@thor.bakeyournoodle.com>
 <20150917080353.GA72725@thor.bakeyournoodle.com>
 <CANw6fcGS4C6s_WTaJ87mTBxx6BwaMDZsNbDFT5-AuEzkNJZqeA@mail.gmail.com>
 <20150917122354.GB82515@thor.bakeyournoodle.com>
 <55FAC89E.1020607@linux.vnet.ibm.com>
Message-ID: <CANw6fcEOpExD0tNXKz6DF4AfBVjggWE1FK+jyPXn9jR3+vc2oQ@mail.gmail.com>

+1 for range checking and stable team +2's in openstack/releases repo

-- dims

On Thu, Sep 17, 2015 at 10:05 AM, Matt Riedemann <mriedem at linux.vnet.ibm.com
> wrote:

>
>
> On 9/17/2015 7:23 AM, Tony Breeds wrote:
>
>> On Thu, Sep 17, 2015 at 06:22:47AM -0400, Davanum Srinivas wrote:
>>
>>> Tony,
>>> Looks like the ban is holding up:
>>> https://review.openstack.org/#/c/224429/
>>>
>>
>> Sorry, yes.  Robert Collins pointed out that my new quicker plan wasn't
>> going to work, so we went back to the original "ban 1.4.1" solution.
>>
>> It looks like the gate is mostly green again.
>>
>> If people have grenade jobs that failed, a recheck should fix that.
>>
>> Thanks to all those that pushed on this to get things going again.
>>
>> Yours Tony.
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> As noted in the stable/kilo block on 1.4.1, we're good until we do a 1.4.2
> release at which point we either break juno or we break kilo since they
> overlap in supported version ranges.
>
> I've asked Tony to push a requirements change on stable/juno and
> stable/kilo to add a note next to oslo.utils reminding us of this so we
> think twice before breaking it again (we should really make this part of
> the openstack/releases review process - to check that the proposed version
> change isn't going to show up in a branch that we don't intend it to).
> Because if the proposed release has a g-r sync for branch n and the version
> will show up in branch n-1 or n+1, things will likely break.
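The range check being +1'd here could start as simple as the sketch below. The branch ranges are illustrative, not the actual juno/kilo caps:

```python
# Hedged sketch of the proposed openstack/releases check: given each
# stable branch's allowed version range for a library, flag a proposed
# release that would be picked up by branches other than the one it
# targets. Ranges below are made up for illustration.

def parse(v):
    return tuple(int(x) for x in v.split("."))

BRANCH_RANGES = {  # branch -> (min inclusive, max exclusive)
    "stable/juno": ("1.0.0", "1.5.0"),
    "stable/kilo": ("1.4.0", "2.0.0"),
}

def branches_affected(version):
    """List every branch whose requirements range admits this version."""
    v = parse(version)
    return [b for b, (lo, hi) in BRANCH_RANGES.items()
            if parse(lo) <= v < parse(hi)]

# Overlapping ranges mean a 1.4.2 release shows up in both branches:
print(branches_affected("1.4.2"))
print(branches_affected("1.5.1"))
```

A reviewer (or gate job) would reject any release where `branches_affected` returns more than the intended branch.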
>
> Hopefully we just don't need an oslo.utils release on juno or kilo before
> juno-eol happens.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/3025fc51/attachment.html>

From mriedem at linux.vnet.ibm.com  Thu Sep 17 14:22:54 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 17 Sep 2015 09:22:54 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FABF5E.4000204@redhat.com>
References: <55FABF5E.4000204@redhat.com>
Message-ID: <55FACCBE.1080508@linux.vnet.ibm.com>



On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
> PTL Nomination is now over. The official candidate list is available on
> the wiki[0].
>
> There are 5 projects without candidates, so according to this
> resolution[1], the TC will have to appoint a new PTL for Barbican,
> MagnetoDB, Magnum, Murano and Security.

This is playing devil's advocate, but why does a project technically need a
PTL?  Just so that there can be a contact point for cross-project things,
i.e. a lightning rod?  There are projects that do a lot of group
leadership/delegation/etc., so it doesn't seem that a PTL is technically
required in all cases.

>
> There are 7 projects that will have an election: Cinder, Glance, Ironic,
> Keystone, Mistral, Neutron and Oslo. The details for those will be
> posted tomorrow after Tony and I setup the CIVS system.
>
> Thank you,
> Tristan
>
>
> [0]:
> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
> [1]:
> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 

Thanks,

Matt Riedemann



From tony at bakeyournoodle.com  Thu Sep 17 14:23:13 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 18 Sep 2015 00:23:13 +1000
Subject: [openstack-dev] [Nova] Design Summit Topics for Nova
In-Reply-To: <DFB24FBA-F45C-446E-96DE-F05993154BC6@gmail.com>
References: <CABib2_pVxCtF=0hCGtZzg18OmMRv1LNXeHwmdow9vWx+Sw7HMg@mail.gmail.com>
 <DFB24FBA-F45C-446E-96DE-F05993154BC6@gmail.com>
Message-ID: <20150917142312.GC82515@thor.bakeyournoodle.com>

On Wed, Sep 16, 2015 at 11:40:28AM -0700, melanie witt wrote:
 
> Today I was informed that google forms are blocked in China [1], so I wanted
> to mention it here so we can consider an alternate way to collect submissions
> from those who might not be able to access the form.

I'll act as an email-to-google-forms proxy if needed. People that will be at
the summit can fill in the template below.
(stolen from the google forms)

---
Topic Title:
Topic Description:
Submitter IRC handle:
Session leader IRC handle:
 Please note the session leader must be there on the day at the summit. Please
 just leave this blank if you feel unable to find someone to lead the session.
 
Link to nova-spec review:
 Features you want to discuss need to have at least a WIP spec before being
 considered for the design summit track. Ideally we will merge the spec before
 the design summit, so a session would not be required.
 
Link to pre-reading:
 Before the submission is on the final list, we need to have some background
 reading for more complex topics, or topics that have had lots of previous
 discussion, so it's easier for everyone to get involved. This could be a wiki
 page, an etherpad, an ML post, or devref.
---

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/78fffa80/attachment.pgp>

From blak111 at gmail.com  Thu Sep 17 14:45:40 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 07:45:40 -0700
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>

It's not true for all plugins, but an external network should provide the
same semantics as a normal network. The only difference is that it allows
router gateway interfaces to be attached to it. We want to get rid of as
much special casing as possible for the external network.
On Sep 17, 2015 7:02 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com> wrote:

> Thanks to the interesting 'default network model' thread, I now know
> that Neutron allows booting a VM on an external network. :-)  I didn't
> realize that before!
>
> So, I'm now wondering what connectivity semantics are expected (or even
> specified!) for such VMs, and whether they're the same as - or very
> similar to - the 'routed' networking semantics I've described at [1].
>
> [1]
>
> https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
>
> Specifically, I wonder if VMs attached to an external network expect any
> particular L2 characteristics, such as being able to L2 broadcast to
> each other?
>
> By way of context - i.e. why am I asking this?...   The
> networking-calico project [2] provides an implementation of the 'routed'
> semantics at [1], but only if one suspends belief in some of the Neutron
> semantics associated with non-external networks, such as needing a
> virtual router to provide connectivity to the outside world.  (Because
> networking-calico provides that external connectivity without any
> virtual router.)  Therefore we believe that we need to propose some
> enhancement of the Neutron API and data model, so that Neutron can
> describe 'routed' semantics as well as all the traditional ones.  But,
> if what we are doing is semantically equivalent to 'attaching to an
> external network', perhaps no such enhancement is needed...
>
> [2] https://git.openstack.org/cgit/openstack/networking-calico
>
> Many thanks for any input!
>
>     Neil
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/f5d4d6ae/attachment.html>

From rlrossit at linux.vnet.ibm.com  Thu Sep 17 14:47:08 2015
From: rlrossit at linux.vnet.ibm.com (Ryan Rossiter)
Date: Thu, 17 Sep 2015 09:47:08 -0500
Subject: [openstack-dev] [magnum] Is magnum db going to be removed for
 k8s resources?
In-Reply-To: <CAFyztAFXXjyShnavJt2x8FbzBfmq4sa0xdC8nWWpMJg9YZtgZA@mail.gmail.com>
References: <CABJxuZpQU=Pfvft9JQ_JAX6HcV=aUg+mWUvdJCa7FGmQA3P5qw@mail.gmail.com>
 <CAFyztAEb_kG1-dciG=M4tifii17hKTjK3oWSVvLrvaq4cyZ1vw@mail.gmail.com>
 <CAFyztAFXXjyShnavJt2x8FbzBfmq4sa0xdC8nWWpMJg9YZtgZA@mail.gmail.com>
Message-ID: <55FAD26C.2010001@linux.vnet.ibm.com>

Here's what I see if I look at this from a matter-of-fact standpoint.

When Nova works with libvirt, libvirt might have something that Nova 
doesn't know about, but Nova doesn't care. Nova's database is the only 
world that Nova cares about. This allows Nova to have one source of data.

With Magnum, if we take data from both our database and the k8s API, we 
will have a split view of the world. This has both positives and negatives.

It does allow an end-user to do whatever they want with their cluster, 
and they don't necessarily have to use Magnum to do things, but Magnum 
will still have the correct status of stuff. It allows the end-user to 
choose what they want to use. Another positive is that because each 
clustering service is architected slightly different, it allows each 
service to know what it knows, without Magnum trying to guess some 
commonality between them.

A problem I see arising is the complexity added when gathering data from 
separate clusters. If I have one of every cluster, what happens when I 
need to get my list of containers? I would rather do just one call to 
the DB and get them, otherwise I'll have to call k8s, then call swarm, 
then mesos, and then aggregate all of them to return. I don't know if 
the only thing we will be retrieving from k8s are k8s-unique objects, 
but this is a situation that comes to my mind. Another negative is the 
possibility that the API does not perform as well as the DB. If the nova 
instance running the k8s API is super overloaded, the k8s API return 
will take longer than a call to the DB.
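A rough sketch of the fan-out this split view implies -- the client calls below are hypothetical stand-ins, not real magnum/k8s/swarm/mesos APIs:

```python
# Sketch of the split-view concern: with no single DB, listing
# containers means fanning out to every cluster service and
# aggregating, instead of one database query. Backends are fakes.

from concurrent.futures import ThreadPoolExecutor

def list_k8s_pods():          # stand-in for a k8s API client call
    return ["k8s-pod-1", "k8s-pod-2"]

def list_swarm_containers():  # stand-in for a swarm client call
    return ["swarm-c-1"]

def list_mesos_tasks():       # stand-in for a mesos client call
    return ["mesos-t-1"]

def list_all_containers():
    sources = (list_k8s_pods, list_swarm_containers, list_mesos_tasks)
    # Fan out concurrently so one slow, overloaded API does not
    # serialize the listing -- the slowest backend still sets latency.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: f(), sources)
    return [item for chunk in results for item in chunk]

print(list_all_containers())
```

Even with the concurrency, the end-to-end time is bounded by the slowest backend, which is exactly the performance worry with an overloaded k8s API.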

Let me know if I'm way off-base in any of these points. I'm not going to 
give an opinion at this point, this is just how I see things.

On 9/17/2015 7:53 AM, Jay Lau wrote:
> Anyone who have some comments/suggestions on this? Thanks!
>
> On Mon, Sep 14, 2015 at 3:57 PM, Jay Lau <jay.lau.513 at gmail.com> wrote:
>
>> Hi Vikas,
>>
>> Thanks for starting this thread. Here just show some of my comments here.
>>
>> The reasons that Magnum wants to get k8s resources via the k8s API are
>> twofold:
>> 1) Native client support
>> 2) With the current implementation, we cannot get the pods for a
>> replication controller. The reason is that Magnum only persists the
>> replication controller info in the Magnum DB.
>>
>> With the objects-from-bay bp, Magnum will always call the k8s API to
>> get all objects for pod/service/rc. Can you please share some of your
>> concerns about why we need to persist those objects in the Magnum DB? We
>> would need to sync the Magnum DB with k8s periodically if we persist two
>> copies of the objects.
>>
>> Thanks!
>>
>> <https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>
>>
>> 2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvikas16 at gmail.com>:
>>
>>> Hi Team,
>>>
>>> As per object-from-bay blueprint implementation [1], all calls to magnum db
>>> are being skipped for example pod.create() etc.
>>>
>>> Are not we going to use magnum db at all for pods/services/rc ?
>>>
>>>
>>> Thanks
>>> Vikas Choudhary
>>>
>>>
>>> [1] https://review.openstack.org/#/c/213368/
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> --
>> Thanks,
>>
>> Jay Lau (Guangya Liu)
>>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Thanks,

Ryan Rossiter (rlrossit)



From anteaya at anteaya.info  Thu Sep 17 14:50:12 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Thu, 17 Sep 2015 08:50:12 -0600
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FACCBE.1080508@linux.vnet.ibm.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
Message-ID: <55FAD324.2020102@anteaya.info>

On 09/17/2015 08:22 AM, Matt Riedemann wrote:
> 
> 
> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>> PTL Nomination is now over. The official candidate list is available on
>> the wiki[0].
>>
>> There are 5 projects without candidates, so according to this
>> resolution[1], the TC will have to appoint a new PTL for Barbican,
>> MagnetoDB, Magnum, Murano and Security.
> 
> This is playing devil's advocate, but why does a project technically need
> a PTL?  Just so that there can be a contact point for cross-project things,
> i.e. a lightning rod?  There are projects that do a lot of group
> leadership/delegation/etc., so it doesn't seem that a PTL is technically
> required in all cases.

I think that is a great question for the TC to consider when they
evaluate options for action with these projects.

The election officials are fulfilling their obligation according to the
resolution:
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst

If you read the resolution, the verb there is "can", not "must"; I chose
the verb "can" on purpose when I wrote it. The TC has the
option to select an appointee. The TC can do other things as well,
should the TC choose.

Thanks,
Anita.

> 
>>
>> There are 7 projects that will have an election: Cinder, Glance, Ironic,
>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>> posted tomorrow after Tony and I setup the CIVS system.
>>
>> Thank you,
>> Tristan
>>
>>
>> [0]:
>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>
>> [1]:
>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>
>>
>>
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 



From jan.provaznik at gmail.com  Thu Sep 17 14:54:01 2015
From: jan.provaznik at gmail.com (Jan Provaznik)
Date: Thu, 17 Sep 2015 16:54:01 +0200
Subject: [openstack-dev] [tripleo] Upgrade plans for RDO Manager -
 Brainstorming
In-Reply-To: <55F05182.4080906@redhat.com>
References: <55DB6C8A.7040602@redhat.com> <55F05182.4080906@redhat.com>
Message-ID: <55FAD409.5030100@solnet.cz>

On 09/09/2015 05:34 PM, Zane Bitter wrote:
> On 24/08/15 15:12, Emilien Macchi wrote:
>> Hi,
>>
>> So I've been working on OpenStack deployments for 4 years now and so far
>> RDO Manager is the second installer -after SpinalStack [1]- I'm
>> working on.
>>
>> SpinalStack already had interesting features [2] that allowed us to
>> upgrade our customer platforms almost every month, with full testing
>> and automation.
>>
>> Now that we have RDO Manager, I would be happy to share my little experience
>> on the topic and help to make it possible in the next cycle.
>>
>> For that, I created an etherpad [3], which is not too long and focused
>> on basic topics for now. This is technical and focused on Infrastructure
>> upgrade automation.
>>
>> Feel free to continue discussion on this thread or directly in the
>> etherpad.
>>
>> [1] http://spinalstack.enovance.com
>> [2] http://spinalstack.enovance.com/en/latest/dev/upgrade.html
>> [3] https://etherpad.openstack.org/p/rdo-manager-upgrades
>
> I added some notes on the etherpad, but I think this discussion poses a
> larger question: what is TripleO? Why are we using Heat? Because to me
> the major benefit of Heat is that it maintains a record of the current
> state of the system that can be used to manage upgrades. And if we're
> not going to make use of that - if we're going to determine the state of
> the system by introspecting nodes and update it by using Ansible scripts
> without Heat's knowledge, then we probably shouldn't be using Heat at all.
>
> I'm not saying that to close off the option - I think if Heat is not the
> best tool for the job then we should definitely consider other options.
> And right now it really is not the best tool for the job. Adopting
> Puppet (which was a necessary choice IMO) has meant that the
> responsibility for what I call "software orchestration"[1] is split
> awkwardly between Puppet and Heat. For example, the Puppet manifests are
> baked in to images on the servers, so Heat doesn't know when they've
> changed and can't retrigger Puppet to update the configuration when they
> do. We're left trying to reverse-engineer what is supposed to be a
> declarative model from the workflow that we want for things like
> updates/upgrades.
>
> That said, I think there's still some cause for optimism: in a world
> where every service is deployed in a container and every container has
> its own Heat SoftwareDeployment, the boundary between Heat's
> responsibilities and Puppet's would be much clearer. The deployment
> could conceivably fit a declarative model much better, and even offer a
> lot of flexibility in which services run on which nodes. We won't really
> know until we try, but it seems distinctly possible to aspire toward
> Heat actually making things easier rather than just not making them too
> much harder. And there is stuff on the long-term roadmap that could be
> really great if only we had time to devote to it - for example, as I
> mentioned in the etherpad, I'd love to get Heat's user hooks integrated
> with Mistral so that we could have fully-automated, highly-available (in
> a hypothetical future HA undercloud) live migration of workloads off
> compute nodes during updates.
>

TBH I don't expect that using containers will significantly simplify (or 
make clearer) the upgrade process. It would work nicely if an upgrade 
meant just replacing one container with another (where a container is 
represented by a Heat resource). But I'm convinced that a container 
replacement will actually involve a complex workflow of actions which 
have to be done before and after.

> In the meantime, however, I do think that we have all the tools in Heat
> that we need to cobble together what we need to do. In Liberty, Heat
> supports batched rolling updates of ResourceGroups, so we won't need to
> use user hooks to cobble together poor-man's batched update support any
> more. We can use the user hooks for their intended purpose of notifying
> the client when to live-migrate compute workloads off a server that is

Unfortunately rolling_updates supports only a "pause time" between update 
batches, so if any workflow is needed between batches (e.g. pausing 
before the next batch until the user validates that the previous batch 
updated successfully), we still have to use user hooks. But I guess 
adding hook support to rolling_updates wouldn't be too difficult.
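For reference, the Liberty-era batched update support mentioned above is driven by an update_policy on the ResourceGroup. A minimal sketch (resource names, counts, and the nested template are hypothetical, not taken from TripleO):

```yaml
resources:
  compute_group:
    type: OS::Heat::ResourceGroup
    update_policy:
      rolling_update:
        max_batch_size: 2    # update at most 2 members per batch
        min_in_service: 58   # keep 58 of 60 members untouched at any time
        pause_time: 300      # fixed wait (seconds) between batches
    properties:
      count: 60
      resource_def:
        type: compute_node.yaml   # hypothetical nested server template
```

As noted, pause_time is only a fixed delay; there is no built-in way to block the next batch on an external validation step.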

> about to be upgraded. The Heat templates should already tell us exactly
> which services are running on which nodes. We can trigger particular
> software deployments on a stack update with a parameter value change (as
> we already do with the yum update deployment). For operations that
> happen in isolation on a single server, we can model them as
> SoftwareDeployment resources within the individual server templates. For
> operations that are synchronised across a group of servers (e.g.
> disabling services on the controller nodes in preparation for a DB
> migration) we can model them as a SoftwareDeploymentGroup resource in
> the parent template. And for chaining multiple sequential operations
> (e.g. disable services, migrate database, enable services), we can chain
> outputs to inputs to handle both ordering and triggering. I'm sure there
> will be many subtleties, but I don't think we *need* Ansible in the mix.
>
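The chaining of outputs to inputs described above can be sketched roughly as follows (the config and parameter names are hypothetical, for illustration only):

```yaml
  disable_services:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      config: {get_resource: disable_services_config}   # hypothetical config
      servers: {get_param: controller_servers}

  migrate_database:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: db_migration_config}       # hypothetical config
      server: {get_param: bootstrap_node}
      input_values:
        # Consuming the group's output both orders the two actions and
        # ensures the migration only triggers after services are disabled.
        services_disabled: {get_attr: [disable_services, deploy_stdouts]}
```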

I agree that both minor and major upgrades *can* be done with existing 
Heat features. Another question is how well that works in practice. At 
this point, not very well (in my experience only), mainly because of these issues:
- (missing) convergence - suppose a minor rolling upgrade on 
60 nodes takes 60 minutes; during this upgrade I cannot do 
another update of the stack (e.g. I can't add more compute nodes). (I 
know convergence is being worked on, though)
- it's quite easy to get a Heat stack into a state from which it's pretty 
difficult to get it back into a consistent state

Your proposal of modeling actions as resources chained by input/output 
(or maybe just depends_on?) sounds like a good plan. Because of my lack 
of Heat knowledge, I wonder how well this will work in situations where 
a combination of both an inside-node task flow and cross-node 
orchestration is required.

Also, I'm not sure how it will work when a first stack-update 
operation (e.g. a package update) fails on some nodes, leaving the stack 
in a FAILED state, and the user then runs a different stack-update 
operation (because it has higher priority than fixing the failed package 
update - e.g. scaling up nodes). I wonder whether the user will be able 
to successfully finish the scale-up update and then get back to the 
package update.

> So it's really up to the wider TripleO project team to decide which path
> to go down. I am genuinely not bothered whether we choose Heat or
> Ansible. There may even be ways they can work together without
> compromising either model. But I would be pretty uncomfortable with a
> mix where we use Heat for deployment and Ansible for doing upgrades
> behind Heat's back.
>

Based on the idea behind TripleO (deploying OpenStack with OpenStack), I 
think it makes sense for the TripleO project to stick with Heat. Your 
suggestion of using resources fits well into this concept in theory. 
But honestly, I think it would be significantly simpler to use an 
external tool ATM :).

> cheers,
> Zane.
>
>
> [1]
> http://www.zerobanana.com/archive/2014/05/08#heat-configuration-management
>

Jan


From james.slagle at gmail.com  Thu Sep 17 15:03:19 2015
From: james.slagle at gmail.com (James Slagle)
Date: Thu, 17 Sep 2015 11:03:19 -0400
Subject: [openstack-dev] [TripleO] Core reviewers for
 python-tripleoclient and tripleo-common
In-Reply-To: <20150910163414.GA16252@t430slt.redhat.com>
References: <CAHV77z-jrZZ7O+beb98NExannOqsvUJvfgQV=A09k5on1UT+=g@mail.gmail.com>
 <20150910163414.GA16252@t430slt.redhat.com>
Message-ID: <CAHV77z-psfh272QRfwnqBrK9FfjPw_eOUNqTgdO2d08hXnxJwg@mail.gmail.com>

Hi, it's been a week, and I've heard no objections to this proposal:

>> Specifically, the folks I'm proposing are:
>> Brad P. Crochet <brad at redhat.com>
>> Dougal Matthews <dougal at redhat.com>

>> - keep just 1 tripleo acl, and add additional folks there, with a good
>> faith agreement not to +/-2,+A code that is not from the 2 client
>> repos.

I've added them to the core team. Thanks!



-- 
-- James Slagle
--


From douglas.mendizabal at rackspace.com  Thu Sep 17 15:05:20 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Thu, 17 Sep 2015 10:05:20 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAD324.2020102@anteaya.info>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
Message-ID: <55FAD6B0.4080707@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I think someone jumped the gun on this thread.  According to the wiki
[1] the cutoff time is not until 5:59 UTC, which
doesn't happen for another few hours. [2]

Am I missing something?

[1] https://wiki.openstack.org/wiki/PTL_Elections_September_2015
[2] http://time.is/UTC

Douglas Mendizábal

On 9/17/15 9:50 AM, Anita Kuno wrote:
> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>> 
>> 
>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>> PTL Nomination is now over. The official candidate list is 
>>> available on the wiki[0].
>>> 
>>> There are 5 projects without candidates, so according to this 
>>> resolution[1], the TC will have to appoint a new PTL for 
>>> Barbican, MagnetoDB, Magnum, Murano and Security
>> 
>> This is devil's advocate, but why does a project technically
>> need a PTL? Just so that there can be a contact point for 
>> cross-project things, i.e. a lightning rod?  There are projects 
>> that do a lot of group leadership/delegation/etc, so it doesn't 
>> seem that a PTL is technically required in all cases.
> 
> I think that is a great question for the TC to consider when they 
> evaluate options for action with these projects.
> 
> The election officials are fulfilling their obligation according
> to the resolution: 
> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
> 
> If you read the verb there the verb is "can" not "must", I choose
> the verb "can" on purpose for the resolution when I wrote it. The 
> TC has the option to select an appointee. The TC can do other 
> things as well, should the TC choose.
> 
> Thanks, Anita.
> 
>> 
>>> 
>>> There are 7 projects that will have an election: Cinder, 
>>> Glance, Ironic, Keystone, Mistral, Neutron and Oslo. The 
>>> details for those will be posted tomorrow after Tony and I 
>>> setup the CIVS system.
>>> 
>>> Thank you, Tristan
>>> 
>>> 
>>> [0]: 
>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>
>>>
>>>
>>> [1]:
>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> 
>>> OpenStack Development Mailing List (not for usage questions) 
>>> Unsubscribe: 
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
>
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV+tawAAoJEB7Z2EQgmLX7WKIP/RUdGYOkA5dHuNnWKdX8QhaD
VzZSyTOjIebNZshiMIz8FhKGrFUn8wqEScbtUIFHIJj8tKVjZQ19m2Gg6zh42X6V
kpogxGDBwE5a/vuBA1z9eiTocA4KYEypxY+SIh18ho84dj5hDooI9N7ZsCJaaF3+
yKTLvUw7YxMM2iPxjZgQTH1vE1pMUh2fcylv3T3NhpHzIRgeB9dfnfr938rnwUTj
NUTK3DmWJAraAgHcKnwIN+JwOYF1SexXFK1eO2TX0yNYSEzFI3Xina0Hq3bHAON9
KbWlgr4pN19PsqqQnQrPjJbBmBs8TCkXNhTAojHtlA1oIbUiK8c3h32PHEt2AyCx
5g3btXOAqsCqvKtmFH1sj5EACeMUCW4J98u8e212iQPizgG9SD4LZ8FPPHOqWMwV
4haWpWLczZyXf7w7/deP4gndoW7njU/uiwCUNBrgdjI5AeaP5vckHQZ9iQmETsRa
bwwu5Yq7K92rAupVRFx4bofTyG4I8bw+Lg2muYMnwuqvgf++xqsMVs0x97jFYja5
M2qGMgihYDytDoYvdL71WykuN39SzZmPHzxKXKmiOcTXWhAXp10pBwFUFzFJmt+V
/tNjHfzvqt6qvnDEtt65vuwGBEQyiQFqmKmyPFCONCibn8nwoqjwP9hWmG4y1vVa
geegYxrikuEQ3KnPFQWr
=jON5
-----END PGP SIGNATURE-----


From jbelamaric at infoblox.com  Thu Sep 17 15:12:56 2015
From: jbelamaric at infoblox.com (John Belamaric)
Date: Thu, 17 Sep 2015 15:12:56 +0000
Subject: [openstack-dev] [neutron] PTL Non-Candidacy
In-Reply-To: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
References: <CAL3VkVyBiNOfePT30kGi1C9HWfeOwjiL5G_uoj_66L4bg6VZ4g@mail.gmail.com>
Message-ID: <71624F26-C0A2-42B5-A2A0-AAEEB0115BD7@infoblox.com>

Thanks for all your hard work Kyle. Enjoy your more relaxed schedule :)


On Sep 11, 2015, at 5:12 PM, Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>> wrote:

I'm writing to let everyone know that I do not plan to run for Neutron PTL for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan recently put it in his non-candidacy email [1]. But it goes further than that for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time job. In the case of Neutron, it's more than a full time job, it's literally an always-on job.

I've tried really hard over my three cycles as PTL to build a stronger web of trust so the project can grow, and I feel that's been accomplished. We have a strong bench of future PTLs and leaders ready to go; I'm excited to watch them lead and to help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered the concept of rotating PTL duties with each cycle, I'd like to highly encourage Neutron and other projects to do the same. Having a deep bench of leaders supporting each other is important for the future of all projects.

See you all in Tokyo!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/4617c03c/attachment.html>

From Neil.Jerram at metaswitch.com  Thu Sep 17 15:16:14 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Thu, 17 Sep 2015 15:16:14 +0000
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
Message-ID: <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>

Thanks, Kevin.  Some further queries, then:

On 17/09/15 15:49, Kevin Benton wrote:
>
> It's not true for all plugins, but an external network should provide
> the same semantics of a normal network.
>
Yes, that makes sense.  Clearly the core semantic there is IP.  I can
imagine reasonable variation on less core details, e.g. L2 broadcast vs.
NBMA.  Perhaps it would be acceptable, if use cases need it, for such
details to be described by flags on the external network object.

I'm also wondering about what you wrote in the recent thread with Carl
about representing a network connected by routers.  I think you were
arguing that a L3-only network should not be represented by a kind of
Neutron network object, because a Neutron network has so many L2
properties/semantics that it just doesn't make sense, and better to have
a different kind of object for L3-only.  Do those L2
properties/semantics apply to an external network too?

> The only difference is that it allows router gateway interfaces to be
> attached to it.
>
Right.  From a networking-calico perspective, I think that means that
the implementation should (eventually) support that, and hence allow
interconnection between the external network and private Neutron networks.

> We want to get rid of as much special casing as possible for the
> external network.
>
I don't understand here.  What 'special casing' do you mean?

Regards,
    Neil

> On Sep 17, 2015 7:02 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com
> <mailto:Neil.Jerram at metaswitch.com>> wrote:
>
>     Thanks to the interesting 'default network model' thread, I now know
>     that Neutron allows booting a VM on an external network. :-)  I didn't
>     realize that before!
>
>     So, I'm now wondering what connectivity semantics are expected (or
>     even
>     specified!) for such VMs, and whether they're the same as - or very
>     similar to - the 'routed' networking semantics I've described at [1].
>
>     [1]
>     https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
>
>     Specifically I wonder if VM's attached to an external network
>     expect any
>     particular L2 characteristics, such as being able to L2 broadcast to
>     each other?
>
>     By way of context - i.e. why am I asking this?...   The
>     networking-calico project [2] provides an implementation of the
>     'routed'
>     semantics at [1], but only if one suspends belief in some of the
>     Neutron
>     semantics associated with non-external networks, such as needing a
>     virtual router to provide connectivity to the outside world.  (Because
>     networking-calico provides that external connectivity without any
>     virtual router.)  Therefore we believe that we need to propose some
>     enhancement of the Neutron API and data model, so that Neutron can
>     describe 'routed' semantics as well as all the traditional ones.  But,
>     if what we are doing is semantically equivalent to 'attaching to an
>     external network', perhaps no such enhancement is needed...
>
>     [2] https://git.openstack.org/cgit/openstack/networking-calico
>     <https://git.openstack.org/cgit/openstack/networking-calico>
>
>     Many thanks for any input!
>
>         Neil
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From sean at coreitpro.com  Thu Sep 17 15:17:24 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Thu, 17 Sep 2015 15:17:24 +0000
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <0000014fdbe1ab74-e59a0dd9-2032-463c-8a5e-1f4e92b451f6-000000@email.amazonses.com>

On Thu, Sep 17, 2015 at 09:58:29AM EDT, Neil Jerram wrote:
> Specifically I wonder if VM's attached to an external network expect any
> particular L2 characteristics, such as being able to L2 broadcast to
> each other?

I am fairly certain that our definition of a Neutron Network, as an L2
broadcast segment, implies that, yes, this should be possible.
-- 
Sean M. Collins


From fungi at yuggoth.org  Thu Sep 17 15:17:51 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 17 Sep 2015 15:17:51 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAD6B0.4080707@rackspace.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FAD6B0.4080707@rackspace.com>
Message-ID: <20150917151751.GI25159@yuggoth.org>

On 2015-09-17 10:05:20 -0500 (-0500), Douglas Mendizábal wrote:
> I think someone jumped the gun on this thread.  According to the wiki
> [1] the cutoff time is not until 5:59 UTC, which
> doesn't happen for another few hours. [2]

Per that page the deadline for nominations is September 17, 05:59
UTC which was over 9 hours ago now. The deadline for contributions
to count toward being part of the electorate is September 18, 2015
05:59 UTC (a little less than 15 hours from now).
-- 
Jeremy Stanley


From morgan.fainberg at gmail.com  Thu Sep 17 15:19:27 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Thu, 17 Sep 2015 08:19:27 -0700
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <55FAD6B0.4080707@rackspace.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FAD6B0.4080707@rackspace.com>
Message-ID: <6A74F4EA-95AE-4AD5-989E-680C08EF49B5@gmail.com>

Time.is is showing UTC in "PM" format, not on a 24-hour clock. It is past 1500 UTC at the moment. 

Sent via mobile

> On Sep 17, 2015, at 08:05, Douglas Mendizábal <douglas.mendizabal at rackspace.com> wrote:
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
> 
> I think someone jumped the gun on this thread.  According to the wiki
> [1] the cutoff time is not until 5:59 UTC, which
> doesn't happen for another few hours. [2]
> 
> Am I missing something?
> 
> [1] https://wiki.openstack.org/wiki/PTL_Elections_September_2015
> [2] http://time.is/UTC
> 
> Douglas Mendizábal
> 
>> On 9/17/15 9:50 AM, Anita Kuno wrote:
>>> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>> 
>>> 
>>>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>>> PTL Nomination is now over. The official candidate list is 
>>>> available on the wiki[0].
>>>> 
>>>> There are 5 projects without candidates, so according to this 
>>>> resolution[1], the TC will have to appoint a new PTL for 
>>>> Barbican, MagnetoDB, Magnum, Murano and Security
>>> 
>>> This is devil's advocate, but why does a project technically
>>> need a PTL? Just so that there can be a contact point for 
>>> cross-project things, i.e. a lightning rod?  There are projects 
>>> that do a lot of group leadership/delegation/etc, so it doesn't 
>>> seem that a PTL is technically required in all cases.
>> 
>> I think that is a great question for the TC to consider when they 
>> evaluate options for action with these projects.
>> 
>> The election officials are fulfilling their obligation according
>> to the resolution: 
>> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>> If you read the verb there the verb is "can" not "must", I choose
>> the verb "can" on purpose for the resolution when I wrote it. The 
>> TC has the option to select an appointee. The TC can do other 
>> things as well, should the TC choose.
>> 
>> Thanks, Anita.
>> 
>>> 
>>>> 
>>>> There are 7 projects that will have an election: Cinder, 
>>>> Glance, Ironic, Keystone, Mistral, Neutron and Oslo. The 
>>>> details for those will be posted tomorrow after Tony and I 
>>>> setup the CIVS system.
>>>> 
>>>> Thank you, Tristan
>>>> 
>>>> 
>>>> [0]: 
>>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>> [1]:
>>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>> __________________________________________________________________________
>>>> 
>>>> OpenStack Development Mailing List (not for usage questions) 
>>>> Unsubscribe: 
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From tdecacqu at redhat.com  Thu Sep 17 15:20:03 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Thu, 17 Sep 2015 15:20:03 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAD6B0.4080707@rackspace.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FAD6B0.4080707@rackspace.com>
Message-ID: <55FADA23.8050307@redhat.com>

On 09/17/2015 03:05 PM, Douglas Mendizábal wrote:
> I think someone jumped the gun on this thread.  According to the wiki
> [1] the cutoff time is not until 5:59 UTC, which
> doesn't happen for another few hours. [2]
> 
> Am I missing something?
> 
> [1] https://wiki.openstack.org/wiki/PTL_Elections_September_2015
> [2] http://time.is/UTC
> 
> Douglas Mendizábal
> 


Hi Douglas,

UTC time is now:  "Thu Sep 17 15:16:46 UTC 2015".
The deadline was: "Thu Sep 17 05:59:00 UTC 2015".

You can check UTC time using this command line "TZ=UTC date".
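The same check can be scripted; here is a small sketch comparing "now" against the deadline (the variable names are my own, for illustration):

```shell
deadline="2015-09-17 05:59:00"
now=$(TZ=UTC date '+%Y-%m-%d %H:%M:%S')
# This timestamp format sorts chronologically, so a plain
# string comparison is enough to decide which moment is later.
if [ "$now" \> "$deadline" ]; then
  status="nominations closed"
else
  status="nominations open"
fi
echo "$status"
```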

Regards,
Tristan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/f3598b7c/attachment.pgp>

From armamig at gmail.com  Thu Sep 17 15:20:50 2015
From: armamig at gmail.com (Armando M.)
Date: Thu, 17 Sep 2015 08:20:50 -0700
Subject: [openstack-dev] [Neutron] Effective Neutron
Message-ID: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>

Hi fellow developers and reviewers,

Some of you may have noticed that I put together patch [1] up for review.

The intention of this initiative is to capture/share 'pills of wisdom' when
it comes to Neutron development and reviewing. In fact, there are a number
of common patterns (or anti-patterns) that we keep stumbling on, and I came
to the realization that if we all stopped for a second and captured those
in writing for any newcomer (or veteran) to see, we would all benefit
during code reviews and development, because we'd all know what to watch
out for. The wishful thinking is that once this document reaches critical
mass, we will all have learned how to avoid common mistakes and get our
patches merged swiftly.

It is particularly aimed at the Neutron contributor and it is not meant to
replace the wealth of information that is available on docs.openstack.org,
the wiki or the Internet. This is also not meant to be a cross-project
effort, because let's face it...every project is different, and pills of
wisdom in Neutron may as well be everyone's knowledge in Heat, etc.

I aim to add more material over the next couple of days, also with the help
of some of you, so bear with me while the patch is in WIP.

Feedback welcome.

Many thanks,
Armando

[1] https://review.openstack.org/#/c/224419/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/a40d8392/attachment.html>

From tony at bakeyournoodle.com  Thu Sep 17 15:23:49 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 18 Sep 2015 01:23:49 +1000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAD6B0.4080707@rackspace.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FAD6B0.4080707@rackspace.com>
Message-ID: <20150917152348.GD82515@thor.bakeyournoodle.com>

On Thu, Sep 17, 2015 at 10:05:20AM -0500, Douglas Mendizábal wrote:
 
> I think someone jumped the gun on this thread.  According to the wiki
> [1] the cutoff time is not until 5:59 UTC, which
> doesn't happen for another few hours. [2]
> 
> Am I missing something?

From the wiki[0]:
---
Timeline
September 11 - September 17, 05:59 UTC: Open candidacy for PTL positions
September 18 - September 24: PTL elections
---

The time in UTC is: http://www.timeanddate.com/worldclock/timezone/utc
which, at the time of this writing, is: Sept 17th 15:20ish

So the nomination period closed nearly 10 hours ago.

The time-frame to be eligible to vote in the election closes at 5:59am on Sept
18th (UTC).

I hope that clarifies.

Yours Tony.
[0] https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Timeline
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/00646c70/attachment.pgp>

From mordred at inaugust.com  Thu Sep 17 15:26:13 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Thu, 17 Sep 2015 17:26:13 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAD324.2020102@anteaya.info>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
Message-ID: <55FADB95.6090801@inaugust.com>

On 09/17/2015 04:50 PM, Anita Kuno wrote:
> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>
>>
>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>> PTL Nomination is now over. The official candidate list is available on
>>> the wiki[0].
>>>
>>> There are 5 projects without candidates, so according to this
>>> resolution[1], the TC will have to appoint a new PTL for Barbican,
>>> MagnetoDB, Magnum, Murano and Security
>>
>> This is devil's advocate, but why does a project technically need a PTL?
>>   Just so that there can be a contact point for cross-project things,
>> i.e. a lightning rod?  There are projects that do a lot of group
>> leadership/delegation/etc, so it doesn't seem that a PTL is technically
>> required in all cases.
>
> I think that is a great question for the TC to consider when they
> evaluate options for action with these projects.
>
> The election officials are fulfilling their obligation according to the
> resolution:
> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>
> If you read the verb there the verb is "can" not "must", I choose the
> verb "can" on purpose for the resolution when I wrote it. The TC has the
> option to select an appointee. The TC can do other things as well,
> should the TC choose.

I agree - and this is a great example of a place where human judgement is 
better than rules.

For instance - one of the projects had a nominee but it missed the 
deadline, so that's probably an easy one.

One of the projects had been looking dead for a while, so this is 
the final nail in the coffin from my POV.

For the other three - I know they're still active projects with people 
interested in them, so sorting them out will be fun!

>
>>
>>>
>>> There are 7 projects that will have an election: Cinder, Glance, Ironic,
>>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>>> posted tomorrow after Tony and I setup the CIVS system.
>>>
>>> Thank you,
>>> Tristan
>>>
>>>
>>> [0]:
>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>
>>> [1]:
>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From gkotton at vmware.com  Thu Sep 17 15:27:03 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Thu, 17 Sep 2015 15:27:03 +0000
Subject: [openstack-dev] [Neutron] Effective Neutron
In-Reply-To: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>
References: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>
Message-ID: <D220B667.BE24F%gkotton@vmware.com>

Thanks for marking as WPI

From: "Armando M." <armamig at gmail.com<mailto:armamig at gmail.com>>
Reply-To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Thursday, September 17, 2015 at 6:20 PM
To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [Neutron] Effective Neutron

Hi fellow developers and reviewers,

Some of you may have noticed that I put together patch [1] up for review.

The intention of this initiative is to capture/share 'pills of wisdom' when it comes to Neutron development and reviewing. In fact, there are a number of common patterns (or anti-patterns) that we keep stumbling on, and I came to the realization that if we all stopped for a second and captured those in writing for any newcomer (or veteran) to see, we would all benefit during code reviews and development, because we'd all know what to watch out for. The wishful thinking is that once this document reaches critical mass, we will all have learned how to avoid common mistakes and get our patches merged swiftly.

It is particularly aimed at the Neutron contributor and it is not meant to replace the wealth of information that is available on doc.openstack.org<http://doc.openstack.org>, the wiki or the Internet. This is also not meant to be a cross-project effort, because let's face it...every project is different, and pills of wisdom in Neutron may as well be everyone's knowledge in Heat, etc.

I aim to add more material over the next couple of days, also with the help of some of you, so bear with me while the patch is in WIP.

Feedback welcome.

Many thanks,
Armando

[1] https://review.openstack.org/#/c/224419/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/74f669eb/attachment.html>

From robert.clark at hp.com  Thu Sep 17 15:29:34 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Thu, 17 Sep 2015 15:29:34 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FAC682.9050102@rackspace.com>
References: <55FABF5E.4000204@redhat.com>
 <20150917133254.GH29319@redhat.com> <55FAC3A2.6090709@redhat.com>
 <20150917134934.GI29319@redhat.com> <55FAC682.9050102@rackspace.com>
Message-ID: <A0C170085C37664D93EE1604364858A11FE456AE@G9W0763.americas.hpqcorp.net>

Likewise, I'm not sure how I missed the candidacy window; I think our late mid-cycle threw things out of whack slightly.

When I saw the Magnum nomination I made a mental note to apply today. This is a poor show on my part, and I apologise to the TC, the community and the Security team for this apparent lack of awareness.

Five projects without candidates seems like a lot, especially as several of them are very active. I think perhaps this is a busy time of year and, like me, a number of people were not paying close enough attention to the election window.

I understand that the official window for nominations has now closed, but I'd like to understand more about how the process detailed in [1] below will operate. Many of these projects have established PTLs (like me) who are obviously not great at tracking dates (like me) but still very much want to continue to lead in their communities (like me). I'd like to better understand what happens next.

-Rob



> -----Original Message-----
> From: Douglas Mendizábal [mailto:douglas.mendizabal at rackspace.com]
> Sent: 17 September 2015 14:56
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all][elections] PTL nomination period is now over
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
> 
> This is quite unfortunate, as I was intending to submit my candidacy
> for the Barbican project today, but I did not realize the cutoff time
> would be in the morning in CDT.
> 
> I'd like to apologize to the OpenStack community and the Barbican team
> in particular for missing this deadline.
> 
> Thanks,
> 
> Douglas Mendizábal
> 
> On 9/17/15 8:49 AM, Flavio Percoco wrote:
> > On 17/09/15 13:44 +0000, Tristan Cacqueray wrote:
> >> On 09/17/2015 01:32 PM, Flavio Percoco wrote:
> >>> On 17/09/15 13:25 +0000, Tristan Cacqueray wrote:
> >>>> PTL Nomination is now over. The official candidate list is
> >>>> available on the wiki[0].
> >>>>
> >>>> There are 5 projects without candidates, so according to
> >>>> this resolution[1], the TC we'll have to appoint a new PTL
> >>>> for Barbican, MagnetoDB, Magnum, Murano and Security
> >>>
> >>> Magnum had a candidacy on the mailing list. I'd assume this is
> >>> because it wasn't proposed to openstack/election. Right?
> >>
> >> That is correct, but the candidacy was submitted after the
> >> deadlines so we can't validate this candidate.
> >
> > Awesome, thanks for the confirmation. Flavio
> >
> >>
> >>>
> >>> Thanks for the hard work here, Flavio
> >>>
> >>>>
> >>>> There are 7 projects that will have an election: Cinder,
> >>>> Glance, Ironic, Keystone, Mistral, Neutron and Oslo. The
> >>>> details for those will be posted tomorrow after Tony and I
> >>>> setup the CIVS system.
> >>>>
> >>>> Thank you, Tristan
> >>>>
> >>>>
> >>>> [0]:
> >>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confir
> med_Candidates
> >>>>
> >>>>
> >>>>
> >>>>
> [1]:
> >>>> http://governance.openstack.org/resolutions/20141128-elections-proc
> ess-for-leaderless-programs.html
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>>>
> >>>>
> ________________________________________________________________________
> __
> >>>>
> >>>>
> >>>> OpenStack Development Mailing List (not for usage questions)
> >>>> Unsubscribe:
> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>>>
> >>>>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>>
> >>>
> >>> ____________________________________________________________________
> ______
> >>>
> >>>
> >>>
> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >
> >>>
> >
> >
> >
> >
> > ______________________________________________________________________
> ____
> >
> >
> OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -----BEGIN PGP SIGNATURE-----
> Comment: GPGTools - https://gpgtools.org
> 
> iQIcBAEBCgAGBQJV+saCAAoJEB7Z2EQgmLX7XG0P/AqOTGDDbVJrJSTPnCGwtqYd
> 275uDQgWqvbsGMKOrfKO7GBgI/5n/j8hCyiipq6niCfW5eWWH7rYgU1pLKLjiZmR
> 12Ui4PwkqvoJEa0J5NIiM8GOrt2TEDTu7vOQAWMzGEn+8fbs7Z9MRPIAg7bEXuk0
> eQNs5LK6j/INXvuuKm4ZV2MjAxJRbtsSZYVn59U4IxM0GSJIC4MLu8dGaRzf+G8B
> 881hflBskg1Sa5UjEcj/yMUDrtBLOyNkM67dv8M9ojNB0bX3o+US8zmJnsbk6whD
> ox3GrgoxT8he6iMNd/YYycFtBlBZ4fqNN8Uv5Vr/+k8s2umJf7Y3IbM2oQuhM1oJ
> mWuwFbyc440ep9WkBeXeZTm0S0FYwR3MS40nW2D04eHEcTbCHchKIoLp/tO0AKYP
> op116JlzTyWZatywL8rIOner4UJQX6yUqmGRdonACNQ6OAzTLTTaARtwqHacxL81
> UqzOLEQ8nW9p5xCTPWhbPbR7t1T7ngwf5bJAuDKVx9JDUsM+mYjZ7smWdg+PV1yS
> SwW94TzImOV4ujiT7AwzUBz6SZ0jHFt5yXVWscggpj5k7l8lPqFhd4xQVvidqLcZ
> VsHfKwfdfWX22z+97n4/GjEd6B0seZqcxoP4qVsXXgpuFJETVLEifDM9DTOLccxy
> xQR6UpOxTZxl5EdiOpxX
> =nraX
> -----END PGP SIGNATURE-----
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From edgar.magana at workday.com  Thu Sep 17 15:37:35 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Thu, 17 Sep 2015 15:37:35 +0000
Subject: [openstack-dev] [Neutron] Effective Neutron
In-Reply-To: <D220B667.BE24F%gkotton@vmware.com>
References: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>
 <D220B667.BE24F%gkotton@vmware.com>
Message-ID: <A5ADB678-E9F9-4838-9437-CA3BE37B0239@workday.com>

Actually, I am wondering whether replacing the wiki with this great information would be better?
For sure, the docs are well structured and have a great team behind them, but the wikis are not the same; a gerrit review provides a better way to distribute and update this knowledge.

Great initiative Armax!

BTW Gary, is WPI = Work Provided Intentionally, or What the Problem Is? ;-)

Edgar

From: Gary Kotton
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, September 17, 2015 at 8:27 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Neutron] Effective Neutron

Thanks for marking as WPI

From: "Armando M." <armamig at gmail.com<mailto:armamig at gmail.com>>
Reply-To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Thursday, September 17, 2015 at 6:20 PM
To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [Neutron] Effective Neutron

Hi fellow developers and reviewers,

Some of you may have noticed that I put together patch [1] up for review.

The intention of this initiative is to capture/share 'pills of wisdom' when it comes to Neutron development and reviewing. In fact, there are a number of common patterns (or anti-patterns) that we keep stumbling on, and I came to the realization that if we all stopped for a second and captured those in writing for any newcomer (or veteran) to see, we would all benefit during code reviews and development, because we'd all know what to watch out for. The wishful thinking is that once this document reaches critical mass, we will all have learned how to avoid common mistakes and get our patches merged swiftly.

It is particularly aimed at the Neutron contributor and it is not meant to replace the wealth of information that is available on doc.openstack.org<http://doc.openstack.org>, the wiki or the Internet. This is also not meant to be a cross-project effort, because let's face it...every project is different, and pills of wisdom in Neutron may as well be everyone's knowledge in Heat, etc.

I aim to add more material over the next couple of days, also with the help of some of you, so bear with me while the patch is in WIP.

Feedback welcome.

Many thanks,
Armando

[1] https://review.openstack.org/#/c/224419/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/3b241161/attachment.html>

From mestery at mestery.com  Thu Sep 17 15:38:07 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Thu, 17 Sep 2015 10:38:07 -0500
Subject: [openstack-dev] [Neutron] Effective Neutron
In-Reply-To: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>
References: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>
Message-ID: <CAL3VkVyxYJT4vY4en2W_KhR_uzZ6dNPeB_UAvQvN51cPSgjauA@mail.gmail.com>

On Thu, Sep 17, 2015 at 10:20 AM, Armando M. <armamig at gmail.com> wrote:

> Hi fellow developers and reviewers,
>
> Some of you may have noticed that I put together patch [1] up for review.
>
> The intention of this initiative is to capture/share 'pills of wisdom'
> when it comes to Neutron development and reviewing. In fact, there are a
> number of common patterns (or anti-patterns) that we keep stumbling on, and
> I came to the realization that if we all stopped for a second and captured
> those in writing for any newcomer (or veteran) to see, we would all benefit
> during code reviews and development, because we'd all know what to watch
> out for. The wishful thinking is that once this document reaches critical
> mass, we will all have learned how to avoid common mistakes and get our
> patches merged swiftly.
>
> It is particularly aimed at the Neutron contributor and it is not meant to
> replace the wealth of information that is available on doc.openstack.org,
> the wiki or the Internet. This is also not meant to be a cross-project
> effort, because let's face it...every project is different, and pills of
> wisdom in Neutron may as well be everyone's knowledge in Heat, etc.
>
> I aim to add more material over the next couple of days, also with the
> help of some of you, so bear with me while the patch is in WIP.
>
> Feedback welcome.
>
>
Thanks for sending this out and working on this patch Armando. This is
going to greatly assist everyone working on Neutron. Nice job!

Kyle


> Many thanks,
> Armando
>
> [1] https://review.openstack.org/#/c/224419/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/e34a78b0/attachment.html>

From annegentle at justwriteclick.com  Thu Sep 17 15:40:17 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Thu, 17 Sep 2015 10:40:17 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <55FADB95.6090801@inaugust.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
Message-ID: <CAD0KtVHczQhmL7E39ftpm08iHpjNNGW+S90jRnwL+Upi6SaYLA@mail.gmail.com>

On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor <mordred at inaugust.com> wrote:

> On 09/17/2015 04:50 PM, Anita Kuno wrote:
>
>> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>
>>>
>>>
>>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>>
>>>> PTL Nomination is now over. The official candidate list is available on
>>>> the wiki[0].
>>>>
>>>> There are 5 projects without candidates, so according to this
>>>> resolution[1], the TC we'll have to appoint a new PTL for Barbican,
>>>> MagnetoDB, Magnum, Murano and Security
>>>>
>>>
>>> This is devil's advocate, but why does a project technically need a PTL?
>>>   Just so that there can be a contact point for cross-project things,
>>> i.e. a lightning rod?  There are projects that do a lot of group
>>> leadership/delegation/etc, so it doesn't seem that a PTL is technically
>>> required in all cases.
>>>
>>
>> I think that is a great question for the TC to consider when they
>> evaluate options for action with these projects.
>>
>> The election officials are fulfilling their obligation according to the
>> resolution:
>>
>> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>>
>> If you read the verb there the verb is "can" not "must", I choose the
>> verb "can" on purpose for the resolution when I wrote it. The TC has the
>> option to select an appointee. The TC can do other things as well,
>> should the TC choose.
>>
>
> I agree- and this is a great example of places where human judgement is
> better than rules.
>
> For instance - one of the projects had a nominee but it missed the
> deadline, so that's probably an easy on.
>

Honestly I did the "Thursday" alignment math in my head and thought of it
in my non-futuristic timezone. Plus I have a Thursday release mentality
thanks to Thierry's years of excellent release management. :)

Plus, I further confused a couple of candidates when encouraging candidacy
posts, so I apologize for that! I am trying to get ahead of what we'll have
to do on the TC anyway.

I'd like to apply some common sense here: if this many candidates on both
sides of the globe got confused, we can take that into consideration when
deciding next steps.


>
> For one of the projects it had been looking dead for a while, so this is
> the final nail in the coffin from my POV
>

I'm not sure we've defined a coffin really, more of an attic/shelving
system. :)

I'll definitely take some time to follow up with the individuals caught in
this deadline confusion when we take next steps on the TC.

Anne


>
> For the other three - I know they're still active projects with people
> interested in them, so sorting them out will be fun!
>
>
>
>>
>>>
>>>> There are 7 projects that will have an election: Cinder, Glance, Ironic,
>>>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>>>> posted tomorrow after Tony and I setup the CIVS system.
>>>>
>>>> Thank you,
>>>> Tristan
>>>>
>>>>
>>>> [0]:
>>>>
>>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>>
>>>> [1]:
>>>>
>>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/7a7a9320/attachment.html>

From mestery at mestery.com  Thu Sep 17 15:48:23 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Thu, 17 Sep 2015 10:48:23 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <55FADB95.6090801@inaugust.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
Message-ID: <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>

On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor <mordred at inaugust.com> wrote:

> On 09/17/2015 04:50 PM, Anita Kuno wrote:
>
>> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>
>>>
>>>
>>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>>
>>>> PTL Nomination is now over. The official candidate list is available on
>>>> the wiki[0].
>>>>
>>>> There are 5 projects without candidates, so according to this
>>>> resolution[1], the TC we'll have to appoint a new PTL for Barbican,
>>>> MagnetoDB, Magnum, Murano and Security
>>>>
>>>
>>> This is devil's advocate, but why does a project technically need a PTL?
>>>   Just so that there can be a contact point for cross-project things,
>>> i.e. a lightning rod?  There are projects that do a lot of group
>>> leadership/delegation/etc, so it doesn't seem that a PTL is technically
>>> required in all cases.
>>>
>>
>> I think that is a great question for the TC to consider when they
>> evaluate options for action with these projects.
>>
>> The election officials are fulfilling their obligation according to the
>> resolution:
>>
>> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>>
>> If you read the verb there the verb is "can" not "must", I choose the
>> verb "can" on purpose for the resolution when I wrote it. The TC has the
>> option to select an appointee. The TC can do other things as well,
>> should the TC choose.
>>
>
> I agree- and this is a great example of places where human judgement is
> better than rules.
>
> For instance - one of the projects had a nominee but it missed the
> deadline, so that's probably an easy on.
>
> For one of the projects it had been looking dead for a while, so this is
> the final nail in the coffin from my POV
>
> For the other three - I know they're still active projects with people
> interested in them, so sorting them out will be fun!
>
>
This is the right approach. Human judgement #ftw! :)


>
>
>>
>>>
>>>> There are 7 projects that will have an election: Cinder, Glance, Ironic,
>>>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>>>> posted tomorrow after Tony and I setup the CIVS system.
>>>>
>>>> Thank you,
>>>> Tristan
>>>>
>>>>
>>>> [0]:
>>>>
>>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>>
>>>> [1]:
>>>>
>>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/754fe50f/attachment.html>

From amuller at redhat.com  Thu Sep 17 15:50:51 2015
From: amuller at redhat.com (Assaf Muller)
Date: Thu, 17 Sep 2015 11:50:51 -0400
Subject: [openstack-dev] [dragonflow] Low OVS version for Ubuntu
In-Reply-To: <CAG9LJa554aMjAF06G_8U5AiGTOuZ1ODtci+6ykfNO-Vaet_B4Q@mail.gmail.com>
References: <CALFEDVcXE18WAWS=64sOQkyLn6ob2R0Nx+G2jSP3xOo37Zcubg@mail.gmail.com>
 <55FA6AB4.5030605@linux.vnet.ibm.com>
 <CAG9LJa554aMjAF06G_8U5AiGTOuZ1ODtci+6ykfNO-Vaet_B4Q@mail.gmail.com>
Message-ID: <CABARBAYV-UD=ntjBej+aWf5mBHqxEhmMzuGnvKUHhFbqEo-GrA@mail.gmail.com>

Another issue is that the gate is running Ubuntu 14.04, which ships OVS 2.0.
This means we can't test certain features in Neutron (for example, the OVS
ARP responder).

On Thu, Sep 17, 2015 at 4:17 AM, Gal Sagie <gal.sagie at gmail.com> wrote:

> Hello Li Ma,
>
> Dragonflow uses OpenFlow1.3 to communicate with OVS and thats why we need
> OVS 2.3.1.
> As suggested you can build it from source.
> For Fedora 21 OVS2.3.1 is part of the default yum repository.
>
> You can ping me on IRC (gsagie at freenode) if you need any additional
> help how
> to compile OVS.
>
> Thanks
> Gal.
>
> On Thu, Sep 17, 2015 at 10:24 AM, Sudipto Biswas <
> sbiswas7 at linux.vnet.ibm.com> wrote:
>
>>
>>
>> On Thursday 17 September 2015 12:22 PM, Li Ma wrote:
>>
>>> Hi all,
>>>
>>> I tried to run devstack to deploy dragonflow, but I failed with lower
>>> OVS version.
>>>
>>> I used Ubuntu 14.10 server, but the official package of OVS is 2.1.3
>>> which is much lower than the required version 2.3.1+?
>>>
>>> So, can anyone provide a Ubuntu repository that contains the correct
>>> OVS packages?
>>>
>>
>> Why don't you just build the OVS you want from here:
>> http://openvswitch.org/download/
>>
>> Thanks,
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best Regards ,
>
> The G.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/7060dbda/attachment-0001.html>
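The minimum-version constraint discussed in the thread (Dragonflow needs OVS 2.3.1+ for OpenFlow 1.3, while Ubuntu 14.x ships older packages) can be checked up front before falling back to a source build. A minimal sketch — the 2.3.1 floor comes from the thread, but the helper names and the parsing logic are illustrative, not Dragonflow code:

```python
import re
import subprocess

# Minimum version cited in the thread for OpenFlow 1.3 support.
MIN_OVS = (2, 3, 1)

def parse_ovs_version(banner):
    """Extract a version tuple from the first line of 'ovs-vsctl --version'."""
    # The first line looks like: "ovs-vsctl (Open vSwitch) 2.3.1"
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", banner.splitlines()[0])
    if not match:
        raise ValueError("could not find a version in: %r" % banner)
    return tuple(int(part) for part in match.groups())

def ovs_is_new_enough(minimum=MIN_OVS):
    """Return True if the locally installed OVS meets the minimum version."""
    banner = subprocess.check_output(["ovs-vsctl", "--version"]).decode()
    return parse_ovs_version(banner) >= minimum
```

If the check fails, the options suggested in the thread apply: build from source (http://openvswitch.org/download/) or use a distribution whose repository already carries 2.3.1, such as Fedora 21.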

From robert.clark at hp.com  Thu Sep 17 15:54:07 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Thu, 17 Sep 2015 15:54:07 +0000
Subject: [openstack-dev] [Security] Leadership / Participation in PTL
	elections.
Message-ID: <D220A0AE.2C163%robert.clark@hp.com>

Security Folks,

Somehow I missed the window to nominate myself as a PTL candidate for
Security. I have literally no idea how I missed it. I've been working on
Security project things all week (Anchor and OSSNs mainly), so it's not like I
wasn't thinking about the Security team!

Anyway, I missed the nomination window (as did a few others, it would seem)
and this makes me very sad. The PTL position will now be decided by the
OpenStack Technical Committee in line with:

http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html

Although it will get -2 because it is late, I have submitted my candidacy
anyway, because I wanted to express how much I'd like to continue working to
drive security standards up in OpenStack by helping to keep our many security
projects moving along. My candidacy request can be seen here:

https://review.openstack.org/224798

My most sincere apologies,
-Rob


From harlowja at outlook.com  Thu Sep 17 16:00:24 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Thu, 17 Sep 2015 09:00:24 -0700
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
 service
In-Reply-To: <1442489679-sup-2659@lrrr.local>
References: <55FA6B24.5000602@mirantis.com> <1442489679-sup-2659@lrrr.local>
Message-ID: <BLU436-SMTP174C1CB75D1A14C935FEE18D85A0@phx.gbl>

So a few 'event' like constructs/libraries that I know about:

http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.notifier.Notifier 


I'd be happy to extract that and move it somewhere else if needed; it 
provides basic event/pub/sub functionality for taskflow (in-memory, 
not over RPC...).

A second one, which I almost used in taskflow (but didn't, because its 
more global registry didn't suit taskflow's non-global usage), might 
fit this use case just fine:

https://pythonhosted.org/blinker/

Both could probably do the job...

-Josh

Doug Hellmann wrote:
> Excerpts from mhorban's message of 2015-09-17 10:26:28 +0300:
>> Hi Doug,
>>
>>   >  Rather than building hooks into oslo.config, why don't we build them
>>   >  into the thing that is catching the signal. That way the app can do lots
>>   >  of things in response to a signal, and one of them might be reloading
>>   >  the configuration.
>>
>> Hm... Yes... It is really stupid idea to put reloading hook into
>> oslo.config.
>> I'll move that hook mechanism into oslo.service. oslo.service is
>> responsible for catching/handling signals.
>>
>> Is it enough to have one callback function? Or should I must add ability
>> to register many different callback functions?
>>
>> What is your point of view?
>>
>> Marian
>>
>
> We probably want the ability to have multiple callbacks. There are
> already a lot of libraries available on PyPI for handling "events" like
> this, so maybe we can pick one of those that is well maintained and
> integrate it with oslo.service?
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
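The direction Doug and Marian converge on — the service, not oslo.config, catches the signal and runs any number of registered reload callbacks — could be sketched like this. The class and method names are hypothetical, not the eventual oslo.service API:

```python
import signal

class ReloadNotifier:
    """Toy sketch: the service owns the SIGHUP handler and fans the
    event out to however many reload callbacks were registered."""

    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        """Register a zero-argument callable to run on reload."""
        self._callbacks.append(callback)

    def handle_sighup(self, signum=None, frame=None):
        # One of these callbacks might re-read the configuration;
        # others could reopen log files, refresh caches, etc.
        for callback in self._callbacks:
            callback()

    def install(self):
        """Attach the handler; only works in the main thread on POSIX."""
        signal.signal(signal.SIGHUP, self.handle_sighup)
```

An event library such as blinker, or taskflow's Notifier, would replace the hand-rolled callback list here; the point is only that the fan-out lives in the service layer.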


From harlowja at outlook.com  Thu Sep 17 16:01:35 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Thu, 17 Sep 2015 09:01:35 -0700
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
 service
In-Reply-To: <55FA6856.1@mirantis.com>
References: <55FA6856.1@mirantis.com>
Message-ID: <BLU436-SMTP1010CE812E6D54872024886D85A0@phx.gbl>

Agreed, +1 :)

Gotta start somewhere to get to somewhere else ;)

mhorban wrote:
> Hi Josh,
>
>  > Sounds like a useful idea if projects can plug-in themselves into the
>  > reloading process. I definitely think there needs to be a way for
>  > services to plug-in to this, although I'm not quite sure it will be
>  > sufficient at the current time though.
>  >
>  > An example of why:
>  >
>  > -
>  >
> https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/__init__.py#L24
>
>  > (unless this module is purged from python and reloaded it will likely
>  > not reload correctly).
>  >
>  > Likely these can all be easily fixed (I just don't know how many of
>  > those exist in the various projects); but I guess we have to start
>  > somewhere so getting the underlying code able to be reloaded is a first
>  > step of likely many.
>
> Each of openstack component should contain code responsible for
> reloading such objects.
> What objects will be reloaded? It depends of inspire and desire of
> developers/users.
> Writing such code is a second step.
>
> Marian
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From thierry at openstack.org  Thu Sep 17 16:10:26 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 17 Sep 2015 18:10:26 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FADB95.6090801@inaugust.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
Message-ID: <55FAE5F2.6050805@openstack.org>

Monty Taylor wrote:
> I agree- and this is a great example of places where human judgement is
> better than rules.
> 
> For instance - one of the projects had a nominee but it missed the
> deadline, so that's probably an easy one.
> 
> For one of the projects it had been looking dead for a while, so this is
> the final nail in the coffin from my POV
> 
> For the other three - I know they're still active projects with people
> interested in them, so sorting them out will be fun!

Looks like in 4 cases (Magnum, Barbican, Murano, Security) there is
actually a candidate, they just missed the deadline. So that should be
an easy discussion at the next TC meeting.

For the last one, it is not an accident. I think it is indeed the final
nail in the coffin.

-- 
Thierry Carrez (ttx)


From edgar.magana at workday.com  Thu Sep 17 16:11:59 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Thu, 17 Sep 2015 16:11:59 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
Message-ID: <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>

Folks,

Last year I found myself in the same position when I missed a deadline because of my own poor planning and a time-zone nightmare!
However, the rules were very clear and I owned my mistake. So, we should assume that we do not have candidates and follow the already-described process. That said, this should be very easy for the TC to figure out: it is just a matter of finding out who is interested in the PTL role and consulting with the core team of that specific project.

Just my two cents.

Edgar

From: Kyle Mestery
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, September 17, 2015 at 8:48 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [all][elections] PTL nomination period is now over

On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor <mordred at inaugust.com<mailto:mordred at inaugust.com>> wrote:
On 09/17/2015 04:50 PM, Anita Kuno wrote:
On 09/17/2015 08:22 AM, Matt Riedemann wrote:


On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
PTL Nomination is now over. The official candidate list is available on
the wiki[0].

There are 5 projects without candidates, so according to this
resolution[1], the TC will have to appoint a new PTL for Barbican,
MagnetoDB, Magnum, Murano and Security.

This is devil's advocate, but why does a project technically need a PTL?
  Just so that there can be a contact point for cross-project things,
i.e. a lightning rod?  There are projects that do a lot of group
leadership/delegation/etc, so it doesn't seem that a PTL is technically
required in all cases.

I think that is a great question for the TC to consider when they
evaluate options for action with these projects.

The election officials are fulfilling their obligation according to the
resolution:
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst

If you read the verb there, the verb is "can" not "must"; I chose the
verb "can" on purpose for the resolution when I wrote it. The TC has the
option to select an appointee. The TC can do other things as well,
should the TC choose.

I agree- and this is a great example of places where human judgement is better than rules.

For instance - one of the projects had a nominee but it missed the deadline, so that's probably an easy one.

For one of the projects it had been looking dead for a while, so this is the final nail in the coffin from my POV

For the other three - I know they're still active projects with people interested in them, so sorting them out will be fun!


This is the right approach. Human judgement #ftw! :)





There are 7 projects that will have an election: Cinder, Glance, Ironic,
Keystone, Mistral, Neutron and Oslo. The details for those will be
posted tomorrow after Tony and I setup the CIVS system.

Thank you,
Tristan


[0]:
https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates

[1]:
http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html





__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/414ff845/attachment.html>

From stevemar at ca.ibm.com  Thu Sep 17 16:12:54 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Thu, 17 Sep 2015 12:12:54 -0400
Subject: [openstack-dev] [all][elections] PTL nomination period is
	nowover
In-Reply-To: <CAD0KtVHczQhmL7E39ftpm08iHpjNNGW+S90jRnwL+Upi6SaYLA@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAD0KtVHczQhmL7E39ftpm08iHpjNNGW+S90jRnwL+Upi6SaYLA@mail.gmail.com>
Message-ID: <201509171613.t8HGDWiO001230@d03av03.boulder.ibm.com>


>> I'd like to apply some common sense here and if this many candidates on
both sides of the globe got confused, we can still take that into
consideration when taking next steps.

++ to this, it was clearly a slip-up from some folks. Let's use our human
judgement and common sense here :)

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Anne Gentle <annegentle at justwriteclick.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	2015/09/17 11:56 AM
Subject:	Re: [openstack-dev] [all][elections] PTL nomination period is
            now	over





On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor <mordred at inaugust.com>
wrote:
  On 09/17/2015 04:50 PM, Anita Kuno wrote:
   On 09/17/2015 08:22 AM, Matt Riedemann wrote:


     On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
      PTL Nomination is now over. The official candidate list is available
      on
      the wiki[0].

      There are 5 projects without candidates, so according to this
      resolution[1], the TC will have to appoint a new PTL for Barbican,
      MagnetoDB, Magnum, Murano and Security.

     This is devil's advocate, but why does a project technically need a
     PTL?
     Just so that there can be a contact point for cross-project things,
     i.e. a lightning rod?  There are projects that do a lot of group
     leadership/delegation/etc, so it doesn't seem that a PTL is
     technically
     required in all cases.

   I think that is a great question for the TC to consider when they
   evaluate options for action with these projects.

   The election officials are fulfilling their obligation according to the
   resolution:
   http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst


   If you read the verb there, the verb is "can" not "must"; I chose the
   verb "can" on purpose for the resolution when I wrote it. The TC has the
   option to select an appointee. The TC can do other things as well,
   should the TC choose.

  I agree- and this is a great example of places where human judgement is
  better than rules.

  For instance - one of the projects had a nominee but it missed the
  deadline, so that's probably an easy one.

Honestly I did the "Thursday" alignment math in my head and thought of it
in my non-futuristic timezone. Plus I have a Thursday release mentality
thanks to Thierry's years of excellent release management. :)

Plus, I further confused a couple of candidates when encouraging candidacy
posts, so I apologize for that! I am trying to get ahead of what we'll have
to do on the TC anyway.

I'd like to apply some common sense here and if this many candidates on
both sides of the globe got confused, we can still take that into
consideration when taking next steps.


  For one of the projects it had been looking dead for a while, so this is
  the final nail in the coffin from my POV

I'm not sure we've defined a coffin really, more of an attic/shelving
system. :)

I'll definitely take some time to follow up with individuals in this
deadline confusion system when we take next steps on the TC.

Anne


  For the other three - I know they're still active projects with people
  interested in them, so sorting them out will be fun!





      There are 7 projects that will have an election: Cinder, Glance,
      Ironic,
      Keystone, Mistral, Neutron and Oslo. The details for those will be
      posted tomorrow after Tony and I setup the CIVS system.

      Thank you,
      Tristan


      [0]:
      https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates


      [1]:
      http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html






      __________________________________________________________________________


      OpenStack Development Mailing List (not for usage questions)
      Unsubscribe:
      OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
      http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




   __________________________________________________________________________

   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



  __________________________________________________________________________

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/d0ec0a05/attachment.html>

From blak111 at gmail.com  Thu Sep 17 16:26:05 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 09:26:05 -0700
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>

Yes, the L2 semantics apply to the external network as well (at least with
ML2).

One example of the special casing is the external_network_bridge option in
the L3 agent. That would cause the agent to plug directly into a bridge so
none of the normal L2 agent wiring would occur. With the L2 bridge_mappings
option there is no reason for this to exist anymore, because the way it
ignores network attributes makes debugging a nightmare.
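
As a concrete sketch, the two wiring approaches differ only in configuration; the bridge and physnet names below are illustrative:

```ini
# Legacy special case: the L3 agent plugs router gateways straight into
# a bridge, bypassing the normal L2 agent wiring.
# /etc/neutron/l3_agent.ini
[DEFAULT]
external_network_bridge = br-ex

# Preferred: let the L2 agent wire the external network like any other,
# by mapping its physical network name to the bridge.
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = physnet-external:br-ex
```

With the second form, the external network is just a provider network on physnet-external, so all the usual L2 debugging tools apply.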

>Yes, that makes sense.  Clearly the core semantic there is IP.  I can
imagine reasonable variation on less core details, e.g. L2 broadcast vs.
NBMA.  Perhaps it would be acceptable, if use cases need it, for such
details to be described by flags on the external network object.

An external network object is just a regular network object with a
router:external flag set to true. Any changes to it would have to make
sense in the context of all networks. That's why I want to make sure that
whatever we come up with makes sense in all contexts and isn't just a bolt
on corner case.
On Sep 17, 2015 8:21 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com> wrote:

> Thanks, Kevin.  Some further queries, then:
>
> On 17/09/15 15:49, Kevin Benton wrote:
> >
> > It's not true for all plugins, but an external network should provide
> > the same semantics of a normal network.
> >
> Yes, that makes sense.  Clearly the core semantic there is IP.  I can
> imagine reasonable variation on less core details, e.g. L2 broadcast vs.
> NBMA.  Perhaps it would be acceptable, if use cases need it, for such
> details to be described by flags on the external network object.
>
> I'm also wondering about what you wrote in the recent thread with Carl
> about representing a network connected by routers.  I think you were
> arguing that a L3-only network should not be represented by a kind of
> Neutron network object, because a Neutron network has so many L2
> properties/semantics that it just doesn't make sense, and better to have
> a different kind of object for L3-only.  Do those L2
> properties/semantics apply to an external network too?
>
> > The only difference is that it allows router gateway interfaces to be
> > attached to it.
> >
> Right.  From a networking-calico perspective, I think that means that
> the implementation should (eventually) support that, and hence allow
> interconnection between the external network and private Neutron networks.
>
> > We want to get rid of as much special casing as possible for the
> > external network.
> >
> I don't understand here.  What 'special casing' do you mean?
>
> Regards,
>     Neil
>
> > On Sep 17, 2015 7:02 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com
> > <mailto:Neil.Jerram at metaswitch.com>> wrote:
> >
> >     Thanks to the interesting 'default network model' thread, I now know
> >     that Neutron allows booting a VM on an external network. :-)  I
> didn't
> >     realize that before!
> >
> >     So, I'm now wondering what connectivity semantics are expected (or
> >     even
> >     specified!) for such VMs, and whether they're the same as - or very
> >     similar to - the 'routed' networking semantics I've described at [1].
> >
> >     [1]
> >
> https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
> >
> >     Specifically I wonder if VM's attached to an external network
> >     expect any
> >     particular L2 characteristics, such as being able to L2 broadcast to
> >     each other?
> >
> >     By way of context - i.e. why am I asking this?...   The
> >     networking-calico project [2] provides an implementation of the
> >     'routed'
> >     semantics at [1], but only if one suspends belief in some of the
> >     Neutron
> >     semantics associated with non-external networks, such as needing a
> >     virtual router to provide connectivity to the outside world.
> (Because
> >     networking-calico provides that external connectivity without any
> >     virtual router.)  Therefore we believe that we need to propose some
> >     enhancement of the Neutron API and data model, so that Neutron can
> >     describe 'routed' semantics as well as all the traditional ones.
> But,
> >     if what we are doing is semantically equivalent to 'attaching to an
> >     external network', perhaps no such enhancement is needed...
> >
> >     [2] https://git.openstack.org/cgit/openstack/networking-calico
> >     <https://git.openstack.org/cgit/openstack/networking-calico>
> >
> >     Many thanks for any input!
> >
> >         Neil
> >
> >
> >
>  __________________________________________________________________________
> >     OpenStack Development Mailing List (not for usage questions)
> >     Unsubscribe:
> >     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >     <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/7b76bab5/attachment.html>

From sean at coreitpro.com  Thu Sep 17 16:30:15 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Thu, 17 Sep 2015 16:30:15 +0000
Subject: [openstack-dev] [Neutron] Effective Neutron
In-Reply-To: <A5ADB678-E9F9-4838-9437-CA3BE37B0239@workday.com>
References: <CAK+RQeZ32OzKF6nE-DeuUTnF6zsTB8BE77b20aJFQw+rWjSe7w@mail.gmail.com>
 <D220B667.BE24F%gkotton@vmware.com>
 <A5ADB678-E9F9-4838-9437-CA3BE37B0239@workday.com>
Message-ID: <0000014fdc245e9d-44674805-cd4d-4705-b598-08ac4c465e91-000000@email.amazonses.com>

On Thu, Sep 17, 2015 at 11:37:35AM EDT, Edgar Magana wrote:
> Actually, I am wondering whether replacing the wiki with this great information would be better?
> For sure, docs are well structured and they have a great team behind them, but wikis are not the same; Gerrit reviews provide a better way to distribute and update this knowledge.

+1 - I've fallen out of love with Wikis for this exact reason. 

-- 
Sean M. Collins


From itzshamail at gmail.com  Thu Sep 17 17:00:25 2015
From: itzshamail at gmail.com (Shamail Tahir)
Date: Thu, 17 Sep 2015 10:00:25 -0700
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
In-Reply-To: <CAD0KtVHyeuFTTJbMFcyvPD5ALZa3Pi5agxtoHXtFz-8MVP0L0A@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <55F979A9.9040206@openstack.org>
 <CAD0KtVHyeuFTTJbMFcyvPD5ALZa3Pi5agxtoHXtFz-8MVP0L0A@mail.gmail.com>
Message-ID: <CALrdpTXkri+-V96JCjQ61G-GeoNqOYxbDJXx=RaqG1+gcWRaPQ@mail.gmail.com>

On Wed, Sep 16, 2015 at 11:30 AM, Anne Gentle <annegentle at justwriteclick.com
> wrote:

>
>
> On Wed, Sep 16, 2015 at 9:16 AM, Thierry Carrez <thierry at openstack.org>
> wrote:
>
>> Anne Gentle wrote:
>> > [...]
>> > What are some of the problems with each layer?
>> >
>> > 1. weekly meeting: time zones, global reach, size of cross-project
>> > concerns due to multiple projects being affected, another meeting for
>> > PTLs to attend and pay attention to
>>
>> A lot of PTLs (or liaisons/lieutenants) skip the meeting, or will only
>> attend when they have something to ask. Their time is precious and most
>> of the time the meeting is not relevant for them, so why bother? You
>> have a few usual suspects attending all of them, but those people are
>> cross-project-aware already so those are not the people that would
>> benefit the most from the meeting.
>
>
>> This partial attendance makes the meeting completely useless as a way to
>> disseminate information. It makes the meeting mostly useless as a way to
>> get general approval on cross-project specs.
>>
>> The meeting still is very useful IMHO to have more direct discussions on
>> hot topics. So a ML discussion is flagged for direct discussion on IRC
>> and we have a time slot already booked for that.
>>
> I wanted to add a +1 to the idea of using a tag other than [all] to
highlight cross-project communications.

>
>> > 2. specs: don't seem to get much attention unless they're brought up at
>> > weekly meeting, finding owners for the work needing to be done in a spec
>> > is difficult since each project team has its own priorities
>>
>> Right, it's difficult to get them reviewed, and getting consensus and TC
>> rubberstamp on them is also a bit of a thankless job. Basically you're
>> trying to make sure everyone is OK with what you propose and most people
>> ignore you (and then would be unhappy when they are impacted by the
>> implementation a month later). I don't think that system works well and
>> I'd prefer we change it.
>>
>> > 3. direct communications: decisions from these comms are difficult to
>> > then communicate more widely, it's difficult to get time with busy PTLs
>> > 4. Summits: only happens twice a year, decisions made then need to be
>> > widely communicated
>> >
>> > I'm sure there are more details and problems I'm missing -- feel free to
>> > fill in as needed.
>> >
>> > Lastly, what suggestions do you have for solving problems with any of
>> > these layers?
>>
>> I'm starting to think we need to overhaul the whole concept of
>> cross-project initiatives. The current system where an individual drives
>> a specific spec and goes through all the hoops to expose it to the rest
>> of the community is not really working. The current model doesn't
>> support big overall development cycle goals either, since there is no
>> team to implement those.
>>
>
> Completely agree, this is my observation as well from the service catalog
> improvement work. While the keystone team is crucial, so many other teams
> are affected. And I don't have all the key skills to implement the vision,
> nor do I want to be a spec writer who can't implement, ya know? It's a
> tough one.
>

Hi, would it make sense for the product WG to help write and/or track the
specs?  Apologies, in advance, if our workflow does not fit the needs being
discussed.

We have a defined workflow for how we will be working on user stories
through our process[1] and I wonder if a similar workflow could be built
for cross-project specs.  We are already trying to focus on
multi-release/cross-project user stories[2] but we are doing it from a
market perspective as opposed to tracking items that are cross-project
needs based on the current state.  The process could definitely be expanded
to help coordinate these needs as well.   This will allow an individual to
still be associated with a spec but if the person is unable to finish the
work or needs more help, the team could ask for more resources or let
stakeholders know that there might be a delay.

[1]
https://docs.google.com/presentation/d/1dZBm4cfpSyVlvPLpHSReaQiZ7e8SfgS7VLSbSqoWokw/edit?usp=sharing
[2] https://wiki.openstack.org/wiki/ProductTeam#Objectives

>
>
>>
>> Just brainstorming out loud, maybe we need to have a base team of people
>> committed to drive such initiatives to completion, a team that
>> individuals could leverage when they have a cross-project idea, a team
>> that could define a few cycle goals and actively push them during the
>> cycle.
>>
>
This is very similar to how the Product WG structure is setup as well.  We
have cross-project liaisons (CPLs) that participate in the project team
meetings and also user-story owners who cover the overall goal of
completing the user story.  The user story owner leverages the cross
project liaisons to help with tracking component/project specific
dependencies for implementing the user story but the user story owner is
looking at the overall state of the bigger picture.   Our CPLs work with
multiple user-story owners but the user story owner to user story mapping
is 1:1.

>
>>
> Or, to dig into this further, continue along the lines of the TC specialty
> teams we've set up? We ran out of time a few TC meetings ago to dive into
> solutions here, so I'm glad we can continue the conversation.
>
> I'm sure existing cross-project teams have ideas too, liaisons and the
> like may be matrixed somehow? We'll still need accountability and matching
> skill sets for tasks.
>

> Anne
>
>
>> Maybe cross-project initiatives are too important to be left to the
>> energy of an individual and rely on random weekly meetings to make
>> progress. They might need a clear team to own them.
>>
>> --
>> Thierry Carrez (ttx)
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/57f970a4/attachment.html>

From andrew.melton at RACKSPACE.COM  Thu Sep 17 17:10:14 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Thu, 17 Sep 2015 17:10:14 +0000
Subject: [openstack-dev] Pycharm License for OpenStack developers
In-Reply-To: <CAPWkaSXOEX7H99zikScXJX47YpJ_3xrJhZbtupM2-Jy3qK+nWg@mail.gmail.com>
References: <E1FB4937BE24734DAD0D1D4E4E506D788A6FBD63@MAIL703.KDS.KEANE.COM>
 <3895CB36EABD4E49B816E6081F3B001735FD73F3@IRSMSX108.ger.corp.intel.com>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FBDA8@MAIL703.KDS.KEANE.COM>
 <BLU436-SMTP554C7011ADEEED98ACBCFBD85B0@phx.gbl>
 <CAGnj6augsF6DQ9bhuQGdTfLP_PJ_hudKhtPR-uiFbPL0_jfTLA@mail.gmail.com>,
 <CAPWkaSXOEX7H99zikScXJX47YpJ_3xrJhZbtupM2-Jy3qK+nWg@mail.gmail.com>
Message-ID: <1442509819023.35242@RACKSPACE.COM>

Hey Devs,


I'm the guy who's maintained our JetBrains licenses for the past couple of years. License philosophy aside, the IDE you choose to use is entirely up to you. I'm currently waiting to hear back from JetBrains on the actual license. From the sound of it, the opensource license hasn't changed much since last year. Originally we had an unlimited license, which made user management easier. This year it sounds like we will still have a bulk license, they just want an estimation for an initial user count limit, which I've given them. They've assured me that we will be able to increase the user limit on the license if we hit it.


Once I have the new license, I'll send out an email to the list letting everyone know. Please hold off on license requests until then. Unless JetBrains demands more strict control, I'll handle everything like I always have. I'll just need your launchpad-id to confirm you're actually a contributor, then I'll send the license your way.


--Andrew

________________________________
From: John Griffith <john.griffith8 at gmail.com>
Sent: Wednesday, September 16, 2015 3:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Pycharm License for OpenStack developers



On Wed, Sep 16, 2015 at 12:35 PM, Morgan Fainberg <morgan.fainberg at gmail.com<mailto:morgan.fainberg at gmail.com>> wrote:


On Wed, Sep 16, 2015 at 11:12 AM, Joshua Harlow <harlowja at outlook.com<mailto:harlowja at outlook.com>> wrote:
Anyone know about the impact of:

- https://mmilinkov.wordpress.com/2015/09/04/jetbrains-lockin-we-told-you-so/

- http://blog.jetbrains.com/blog/2015/09/03/introducing-jetbrains-toolbox/

I'm pretty sure a lot of openstack-devs are using PyCharm, and I wonder what this change will mean for those devs?

I have historically purchased my own license (because I believed in supporting the companies that produce the tools, even though there was a oss-project license that I didn't need to pay for).
Total tangent, but props on purchasing a license to support something you use on a regular basis!

I am evaluating if I wish to continue with jetbrains based on this change or not. They have said they are evaluating the feedback - I'm willing to see what the end result will be.

There are other IDE options out there for python and I may consider those. I haven't run out my license, I am not sure this is enough to change my POV that pycharm is the best option at the moment for my workflow. It was for other software I historically purchased (business suites/photo editing) but those filled a different space.

How much impact will this make to those pure upstream developers? Very little unless jetbrains ceases the opensource project license.

Just my $0.02.

--Morgan


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/49ac39fe/attachment.html>

From Neil.Jerram at metaswitch.com  Thu Sep 17 17:17:54 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Thu, 17 Sep 2015 17:17:54 +0000
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
Message-ID: <SN1PR02MB16950E11BFF66EB83AC3F361995A0@SN1PR02MB1695.namprd02.prod.outlook.com>

Thanks so much for your continuing answers; they are really helping me.

I see your points now about the special casing, and about the semantic
expectations and internal wiring of a Neutron network being just the
same for an external network as for non-external.  Hence, the model for
an L3-only external network should be the same as it would be for an
L3-only tenant network, except for the router:external flag (and might
be along the lines that you've suggested, of a subnet with a null network).

It still seems that 'router:external true' might be a good match for
some of the other 'routed' semantics in [1], though, so I'd like to
drill down more on exactly what 'router:external true' means.

A summary of the semantics at [1] is:
(a) L3-only connectivity between instances attached to the network.
(b) Data can be routed between this network and the outside, and
between multiple networks of this type, without needing Neutron routers.
(c) Floating IPs are not supported for instances on this network. 
Instead, wherever an instance needs to be routable from, attach it to a
network with a subnet of IP addresses that are routable from that place.

[1] https://review.openstack.org/#/c/198439/
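
To make semantics (b) and (c) concrete, here is a minimal sketch (the
helper name is hypothetical, not part of any Neutron API) of how a plugin
might decide whether a floating IP association makes sense for a port,
given the router:external flag on its network:

```python
def floating_ip_allowed(network):
    """Floating IPs only make sense for instances on internal networks.

    An instance directly on a router:external network is already
    potentially routable without NAT, per semantics (b) and (c) above.
    """
    return not network.get("router:external", False)

# Hypothetical network records, shaped like Neutron API responses.
internal = {"name": "tenant-net", "router:external": False}
external = {"name": "ext-net", "router:external": True}

assert floating_ip_allowed(internal)
assert not floating_ip_allowed(external)
```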

According to [2], router:external "Indicates whether this network is
externally accessible."  Which I think is an exact match for (b) - would
you agree?  (Note: it can't mean that every instance on that network
actually _is_ contactable from outside, because that depends on IP
addressing, and router:external is a network property, not a subnet one.  But
it can mean that every instance is _potentially_ contactable, without
the mediation of a Neutron router.)

[2] http://developer.openstack.org/api-ref-networking-v2-ext.html

Also I believe that (c) is already true for Neutron external networks -
i.e. it doesn't make sense to assign a floating IP to an instance that
is directly on an external network.  Is that correct?

In summary, for the semantics that I'm wanting to model, it sounds like
router:external true already gives me 2 of the 3 main pieces.  There's
still serious work needed for (a), but that's really nice news, if I'm
seeing things correctly (given the discovery that instances can be
attached to an external network).

Regards,
    Neil




On 17/09/15 17:29, Kevin Benton wrote:
>
> Yes, the L2 semantics apply to the external network as well (at least
> with ML2).
>
> One example of the special casing is the external_network_bridge
> option in the L3 agent. That would cause the agent to plug directly
> into a bridge so none of the normal L2 agent wiring would occur. With
> the L2 bridge_mappings option there is no reason for this to exist
> anymore, because the way it ignores network attributes makes debugging a
> nightmare.
>
> >Yes, that makes sense.  Clearly the core semantic there is IP.  I can
> imagine reasonable variation on less core details, e.g. L2 broadcast vs.
> NBMA.  Perhaps it would be acceptable, if use cases need it, for such
> details to be described by flags on the external network object.
>
> An external network object is just a regular network object with a
> router:external flag set to true. Any changes to it would have to make
> sense in the context of all networks. That's why I want to make sure
> that whatever we come up with makes sense in all contexts and isn't
> just a bolt on corner case.
>
> On Sep 17, 2015 8:21 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com
> <mailto:Neil.Jerram at metaswitch.com>> wrote:
>
>     Thanks, Kevin.  Some further queries, then:
>
>     On 17/09/15 15:49, Kevin Benton wrote:
>     >
>     > It's not true for all plugins, but an external network should
>     provide
>     > the same semantics of a normal network.
>     >
>     Yes, that makes sense.  Clearly the core semantic there is IP.  I can
>     imagine reasonable variation on less core details, e.g. L2
>     broadcast vs.
>     NBMA.  Perhaps it would be acceptable, if use cases need it, for such
>     details to be described by flags on the external network object.
>
>     I'm also wondering about what you wrote in the recent thread with Carl
>     about representing a network connected by routers.  I think you were
>     arguing that a L3-only network should not be represented by a kind of
>     Neutron network object, because a Neutron network has so many L2
>     properties/semantics that it just doesn't make sense, and better
>     to have
>     a different kind of object for L3-only.  Do those L2
>     properties/semantics apply to an external network too?
>
>     > The only difference is that it allows router gateway interfaces
>     to be
>     > attached to it.
>     >
>     Right.  From a networking-calico perspective, I think that means that
>     the implementation should (eventually) support that, and hence allow
>     interconnection between the external network and private Neutron
>     networks.
>
>     > We want to get rid of as much special casing as possible for the
>     > external network.
>     >
>     I don't understand here.  What 'special casing' do you mean?
>
>     Regards,
>         Neil
>
>     > On Sep 17, 2015 7:02 AM, "Neil Jerram"
>     <Neil.Jerram at metaswitch.com <mailto:Neil.Jerram at metaswitch.com>
>     > <mailto:Neil.Jerram at metaswitch.com
>     <mailto:Neil.Jerram at metaswitch.com>>> wrote:
>     >
>     >     Thanks to the interesting 'default network model' thread, I
>     now know
>     >     that Neutron allows booting a VM on an external network.
>     :-)  I didn't
>     >     realize that before!
>     >
>     >     So, I'm now wondering what connectivity semantics are
>     expected (or
>     >     even
>     >     specified!) for such VMs, and whether they're the same as -
>     or very
>     >     similar to - the 'routed' networking semantics I've
>     described at [1].
>     >
>     >     [1]
>     >   
>      https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
>     >
>     >     Specifically I wonder if VM's attached to an external network
>     >     expect any
>     >     particular L2 characteristics, such as being able to L2
>     broadcast to
>     >     each other?
>     >
>     >     By way of context - i.e. why am I asking this?...   The
>     >     networking-calico project [2] provides an implementation of the
>     >     'routed'
>     >     semantics at [1], but only if one suspends belief in some of the
>     >     Neutron
>     >     semantics associated with non-external networks, such as
>     needing a
>     >     virtual router to provide connectivity to the outside
>     world.  (Because
>     >     networking-calico provides that external connectivity
>     without any
>     >     virtual router.)  Therefore we believe that we need to
>     propose some
>     >     enhancement of the Neutron API and data model, so that
>     Neutron can
>     >     describe 'routed' semantics as well as all the traditional
>     ones.  But,
>     >     if what we are doing is semantically equivalent to
>     'attaching to an
>     >     external network', perhaps no such enhancement is needed...
>     >
>     >     [2]
>     https://git.openstack.org/cgit/openstack/networking-calico
>     <https://git.openstack.org/cgit/openstack/networking-calico>
>     >     <https://git.openstack.org/cgit/openstack/networking-calico>
>     >
>     >     Many thanks for any input!
>     >
>     >         Neil
>     >
>     >
>



From blak111 at gmail.com  Thu Sep 17 17:26:18 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 10:26:18 -0700
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
Message-ID: <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>

Maybe it would be a good idea to switch to 23:59 AOE deadlines like many
paper submissions use for academic conferences. That way there is never a
need to convert TZs, you just get it in by the end of the day in your own
time zone.
On Sep 17, 2015 9:18 AM, "Edgar Magana" <edgar.magana at workday.com> wrote:

> Folks,
>
> Last year I found myself in the same position when I missed a deadline
> because of my poor planning and a time zones nightmare!
> However, the rules were very clear and I accepted my mistake. So, we should
> assume that we do not have candidates and follow the already described
> process. However, this should be very easy for the TC to figure out; it is
> just a matter of finding out who is interested in the PTL role and consulting
> with the core team of that specific project.
>
> Just my two cents.
>
> Edgar
>
> From: Kyle Mestery
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Thursday, September 17, 2015 at 8:48 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [all][elections] PTL nomination period is
> now over
>
> On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor <mordred at inaugust.com>
> wrote:
>
>> On 09/17/2015 04:50 PM, Anita Kuno wrote:
>>
>>> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>>
>>>>
>>>>
>>>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>>>
>>>>> PTL Nomination is now over. The official candidate list is available on
>>>>> the wiki[0].
>>>>>
>>>>> There are 5 projects without candidates, so according to this
>>>>> resolution[1], the TC will have to appoint a new PTL for Barbican,
>>>>> MagnetoDB, Magnum, Murano and Security
>>>>>
>>>>
>>>> This is devil's advocate, but why does a project technically need a PTL?
>>>>   Just so that there can be a contact point for cross-project things,
>>>> i.e. a lightning rod?  There are projects that do a lot of group
>>>> leadership/delegation/etc, so it doesn't seem that a PTL is technically
>>>> required in all cases.
>>>>
>>>
>>> I think that is a great question for the TC to consider when they
>>> evaluate options for action with these projects.
>>>
>>> The election officials are fulfilling their obligation according to the
>>> resolution:
>>>
>>> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>>>
>>> If you read it, the verb there is "can", not "must"; I chose the
>>> verb "can" on purpose for the resolution when I wrote it. The TC has the
>>> option to select an appointee. The TC can do other things as well,
>>> should the TC choose.
>>>
>>
>> I agree- and this is a great example of places where human judgement is
>> better than rules.
>>
>> For instance - one of the projects had a nominee but it missed the
>> deadline, so that's probably an easy one.
>>
>> For one of the projects it had been looking dead for a while, so this is
>> the final nail in the coffin from my POV
>>
>> For the other three - I know they're still active projects with people
>> interested in them, so sorting them out will be fun!
>>
>>
> This is the right approach. Human judgement #ftw! :)
>
>
>>
>>
>>>
>>>>
>>>>> There are 7 projects that will have an election: Cinder, Glance,
>>>>> Ironic,
>>>>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>>>>> posted tomorrow after Tony and I set up the CIVS system.
>>>>>
>>>>> Thank you,
>>>>> Tristan
>>>>>
>>>>>
>>>>> [0]:
>>>>>
>>>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>>>
>>>>> [1]:
>>>>>
>>>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/1f8fc717/attachment.html>

From brian at python.org  Thu Sep 17 17:30:32 2015
From: brian at python.org (Brian Curtin)
Date: Thu, 17 Sep 2015 12:30:32 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
Message-ID: <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>

On Thu, Sep 17, 2015 at 12:26 PM, Kevin Benton <blak111 at gmail.com> wrote:
> Maybe it would be a good idea to switch to 23:59 AOE deadlines like many
> paper submissions use for academic conferences. That way there is never a
> need to convert TZs, you just get it in by the end of the day in your own
> time zone.

This is somehow going to cause even more confusion because you'll have
to explain AOE (which is not what you described).


From eharney at redhat.com  Thu Sep 17 17:31:11 2015
From: eharney at redhat.com (Eric Harney)
Date: Thu, 17 Sep 2015 13:31:11 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
Message-ID: <55FAF8DF.2070901@redhat.com>

On 09/17/2015 05:00 AM, Duncan Thomas wrote:
> On 16 September 2015 at 23:43, Eric Harney <eharney at redhat.com> wrote:
> 
>> Currently, at least some options set in [DEFAULT] don't apply to
>> per-driver sections, and require you to set them in the driver section
>> as well.
>>
> 
> This is extremely confusing behaviour. Do you have any examples? I'm not
> sure if we can fix it without breaking people's existing configs but I
> think it is worth trying. I'll add it to the list of things to talk about
> briefly in Tokyo.
> 

The most recent place this bit me was with iscsi_helper.

If cinder.conf has:

[DEFAULT]
iscsi_helper = lioadm
enabled_backends = lvm1

[lvm1]
volume_driver = ...LVMISCSIDriver
# no iscsi_helper setting


You end up with c-vol showing "iscsi_helper = lioadm", and
"lvm1.iscsi_helper = tgtadm", which is the default in the code, and not
the default in the configuration file.

I agree that this is confusing; I think it's also blatantly wrong.  I'm
not sure how to fix it, but I think the fix is some combination of your
suggestions above and possibly introducing new option names.
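
The fallback Eric describes can be sketched in plain Python (a simplified
model of the per-group option lookup, not the actual oslo.config code):
a per-backend section that omits an option falls back to the default baked
into the code, not to the operator's [DEFAULT] section:

```python
CODE_DEFAULTS = {"iscsi_helper": "tgtadm"}  # default hard-coded in cinder

# A simplified model of the cinder.conf from the message above.
conf_file = {
    "DEFAULT": {"iscsi_helper": "lioadm", "enabled_backends": "lvm1"},
    "lvm1": {"volume_driver": "...LVMISCSIDriver"},  # no iscsi_helper
}

def lookup(group, option):
    # If the group omits the option, fall back to the code default,
    # NOT to the operator's [DEFAULT] section -- the surprising part.
    if option in conf_file.get(group, {}):
        return conf_file[group][option]
    return CODE_DEFAULTS[option]

assert lookup("DEFAULT", "iscsi_helper") == "lioadm"
assert lookup("lvm1", "iscsi_helper") == "tgtadm"  # not lioadm!
```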


From Neil.Jerram at metaswitch.com  Thu Sep 17 17:35:33 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Thu, 17 Sep 2015 17:35:33 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is
	now	over
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
Message-ID: <SN1PR02MB1695848FBF61230BD33AA06A995A0@SN1PR02MB1695.namprd02.prod.outlook.com>

But on the Internet, no one knows that I'm really a crocodile, and
writing from Midway...

    Neil


On 17/09/15 18:29, Kevin Benton wrote:
>
> Maybe it would be a good idea to switch to 23:59 AOE deadlines like
> many paper submissions use for academic conferences. That way there is
> never a need to convert TZs, you just get it in by the end of the day
> in your own time zone.
>
> On Sep 17, 2015 9:18 AM, "Edgar Magana" <edgar.magana at workday.com
> <mailto:edgar.magana at workday.com>> wrote:
>
>     Folks,
>
>     Last year I found myself in the same position when I missed a
>     deadline because of my poor planning and a time zones nightmare!
>     However, the rules were very clear and I accepted my mistake. So,
>     we should assume that we do not have candidates and follow the
>     already described process. However, this should be very easy for
>     the TC to figure out; it is just a matter of finding out who is
>     interested in the PTL role and consulting with the core team of
>     that specific project.
>
>     Just my two cents.
>
>     Edgar
>
>     From: Kyle Mestery
>     Reply-To: "OpenStack Development Mailing List (not for usage
>     questions)"
>     Date: Thursday, September 17, 2015 at 8:48 AM
>     To: "OpenStack Development Mailing List (not for usage questions)"
>     Subject: Re: [openstack-dev] [all][elections] PTL nomination
>     period is now over
>
>     On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor
>     <mordred at inaugust.com <mailto:mordred at inaugust.com>> wrote:
>
>         On 09/17/2015 04:50 PM, Anita Kuno wrote:
>
>             On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>
>
>
>                 On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>
>                     PTL Nomination is now over. The official candidate
>                     list is available on
>                     the wiki[0].
>
>                     There are 5 projects without candidates, so
>                     according to this
>                     resolution[1], the TC will have to appoint a new
>                     PTL for Barbican,
>                     MagnetoDB, Magnum, Murano and Security
>
>
>                 This is devil's advocate, but why does a project
>                 technically need a PTL?
>                   Just so that there can be a contact point for
>                 cross-project things,
>                 i.e. a lightning rod?  There are projects that do a
>                 lot of group
>                 leadership/delegation/etc, so it doesn't seem that a
>                 PTL is technically
>                 required in all cases.
>
>
>             I think that is a great question for the TC to consider
>             when they
>             evaluate options for action with these projects.
>
>             The election officials are fulfilling their obligation
>             according to the
>             resolution:
>             http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>
>             If you read it, the verb there is "can", not "must"; I
>             chose the
>             verb "can" on purpose for the resolution when I wrote it.
>             The TC has the
>             option to select an appointee. The TC can do other things
>             as well,
>             should the TC choose.
>
>
>         I agree- and this is a great example of places where human
>         judgement is better than rules.
>
>         For instance - one of the projects had a nominee but it missed
>         the deadline, so that's probably an easy one.
>
>         For one of the projects it had been looking dead for a while,
>         so this is the final nail in the coffin from my POV
>
>         For the other three - I know they're still active projects
>         with people interested in them, so sorting them out will be fun!
>
>
>     This is the right approach. Human judgement #ftw! :)
>      
>
>
>
>
>
>                     There are 7 projects that will have an election:
>                     Cinder, Glance, Ironic,
>                     Keystone, Mistral, Neutron and Oslo. The details
>                     for those will be
>                     posted tomorrow after Tony and I set up the CIVS
>                     system.
>
>                     Thank you,
>                     Tristan
>
>
>                     [0]:
>                     https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>
>                     [1]:
>                     http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>
>
>
>
>
>



From clint at fewbar.com  Thu Sep 17 17:39:43 2015
From: clint at fewbar.com (Clint Byrum)
Date: Thu, 17 Sep 2015 10:39:43 -0700
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
Message-ID: <1442511434-sup-8780@fewbar.com>

Excerpts from Alex Schultz's message of 2015-09-16 09:53:10 -0700:
> Hey puppet folks,
> 
> Based on the meeting yesterday[0], I had proposed creating a parser
> function called is_service_default[1] to validate if a variable matched our
> agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking about how
> can we maybe not use the arbitrary string throughout the puppet that can
> not easily be validated.  So I tested creating another puppet function
> named service_default[2] to replace the use of '<SERVICE DEFAULT>'
> throughout all the puppet modules.  My tests seemed to indicate that you
> can use a parser function as parameter default for classes.
> 
> I wanted to send a note to gather comments around the second function.
> When we originally discussed what to use to designate for a service's
> default configuration, I really didn't like using an arbitrary string since
> it's hard to parse and validate. I think leveraging a function might be
> better since it is something that can be validated via tests and a syntax
> checker.  Thoughts?

I'm confused.

Why aren't you omitting the configuration option from the file if you
want to use the default? Isn't that what undef is for?


From edgar.magana at workday.com  Thu Sep 17 17:58:19 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Thu, 17 Sep 2015 17:58:19 +0000
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
 neutron-lbaas core team
In-Reply-To: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
Message-ID: <79408BAC-47AB-4CA6-A8ED-8ABD6C44CC6F@workday.com>

Not a core but I would like to share my +1 about Michael.

Cheers,

Edgar




On 9/16/15, 3:33 PM, "Doug Wiegley" <dougwig at parksidesoftware.com> wrote:

>Hi all,
>
>As the Lieutenant of the advanced services, I nominate Michael Johnson to be a member of the neutron-lbaas core reviewer team.
>
>Review stats are in line with other cores[2], and Michael has been instrumental in both neutron-lbaas and octavia.
>
>Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, Al, and Kyle.)
>
>Thanks,
>doug
>
>1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
>2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>
>

From matt at mattfischer.com  Thu Sep 17 17:58:50 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Thu, 17 Sep 2015 11:58:50 -0600
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <1442511434-sup-8780@fewbar.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <1442511434-sup-8780@fewbar.com>
Message-ID: <CAHr1CO-mNd4cQdUSXM0C94ZATKrzFS=FUCt5kgBuxW=+m_JbZQ@mail.gmail.com>

Clint,

We're solving a different issue. Before, any time someone added an option we
had this logic:

if $setting {
  project_config { 'setting': value => $setting }
} else {
  project_config { 'setting': ensure => absent }
}

This was annoying to have to write for every single setting but without it,
nobody could remove settings that they didn't want and fall back to the
project defaults.

This discussion is about a way for the libraries to do the ensure => absent
while dropping all the else {} clauses from our modules.
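
The sentinel pattern can be modeled in a few lines of Python (an
illustration of the idea, not the actual puppet-openstacklib code; the
function name is made up): a single code path renders sentinel values as
"absent", so the per-setting else clause disappears:

```python
SERVICE_DEFAULT = "<SERVICE DEFAULT>"

def render_setting(name, value):
    # One code path replaces the per-setting if/else: a sentinel value
    # means "remove the option and fall back to the service default".
    if value == SERVICE_DEFAULT:
        return {name: {"ensure": "absent"}}
    return {name: {"value": value}}

assert render_setting("debug", True) == {"debug": {"value": True}}
assert render_setting("debug", SERVICE_DEFAULT) == \
    {"debug": {"ensure": "absent"}}
```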



On Thu, Sep 17, 2015 at 11:39 AM, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Alex Schultz's message of 2015-09-16 09:53:10 -0700:
> > Hey puppet folks,
> >
> > Based on the meeting yesterday[0], I had proposed creating a parser
> > function called is_service_default[1] to validate if a variable matched
> our
> > agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking about how
> > we might avoid using the arbitrary string throughout the puppet code, since
> > it cannot easily be validated.  So I tested creating another puppet function
> > named service_default[2] to replace the use of '<SERVICE DEFAULT>'
> > throughout all the puppet modules.  My tests seemed to indicate that you
> > can use a parser function as parameter default for classes.
> >
> > I wanted to send a note to gather comments around the second function.
> > When we originally discussed what to use to designate a service's
> > default configuration, I really didn't like using an arbitrary string
> since
> > it's hard to parse and validate. I think leveraging a function might be
> > better since it is something that can be validated via tests and a syntax
> > checker.  Thoughts?
>
> I'm confused.
>
> Why aren't you omitting the configuration option from the file if you
> want to use the default? Isn't that what undef is for?
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/8325565b/attachment.html>

From blak111 at gmail.com  Thu Sep 17 18:17:02 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 11:17:02 -0700
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
Message-ID: <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>

How is it not what I described? Time zones become irrelevant if you get it
in by the end of the day in your local time zone.

https://en.wikipedia.org/wiki/Anywhere_on_Earth
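
For reference, "anywhere on earth" is simply the UTC-12 time zone, so a
23:59 AOE deadline can be checked with the stdlib (a sketch; the function
name is made up):

```python
from datetime import date, datetime, timedelta, timezone

AOE = timezone(timedelta(hours=-12))  # "Anywhere on Earth" is UTC-12

def meets_aoe_deadline(submission_utc, deadline_date):
    # A submission meets a 23:59 AOE deadline as long as the deadline
    # date has not yet ended in the UTC-12 zone.
    return submission_utc.astimezone(AOE).date() <= deadline_date

# 11:00 UTC on Sep 18 is still 23:00 Sep 17 AOE -- on time.
on_time = datetime(2015, 9, 18, 11, 0, tzinfo=timezone.utc)
late = datetime(2015, 9, 18, 12, 1, tzinfo=timezone.utc)

assert meets_aoe_deadline(on_time, date(2015, 9, 17))
assert not meets_aoe_deadline(late, date(2015, 9, 17))
```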

On Thu, Sep 17, 2015 at 10:30 AM, Brian Curtin <brian at python.org> wrote:

> On Thu, Sep 17, 2015 at 12:26 PM, Kevin Benton <blak111 at gmail.com> wrote:
> > Maybe it would be a good idea to switch to 23:59 AOE deadlines like many
> > paper submissions use for academic conferences. That way there is never a
> > need to convert TZs, you just get it in by the end of the day in your own
> > time zone.
>
> This is somehow going to cause even more confusion because you'll have
> to explain AOE (which is not what you described).
>
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/9cf02533/attachment.html>

From blak111 at gmail.com  Thu Sep 17 18:17:36 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 11:17:36 -0700
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <SN1PR02MB1695848FBF61230BD33AA06A995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <SN1PR02MB1695848FBF61230BD33AA06A995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CAO_F6JP1kb0icFZ-+ZnH8j8EbBzZFAssRq92eFpEAGRMNsSXGQ@mail.gmail.com>

Midway is still on earth. :)

On Thu, Sep 17, 2015 at 10:35 AM, Neil Jerram <Neil.Jerram at metaswitch.com>
wrote:

> But on the Internet, no one knows that I'm really a crocodile, and
> writing from Midway...
>
>     Neil
>
>
> On 17/09/15 18:29, Kevin Benton wrote:
> >
> > Maybe it would be a good idea to switch to 23:59 AOE deadlines like
> > many paper submissions use for academic conferences. That way there is
> > never a need to convert TZs, you just get it in by the end of the day
> > in your own time zone.
> >
> > On Sep 17, 2015 9:18 AM, "Edgar Magana" <edgar.magana at workday.com> wrote:
> >
> >     Folks,
> >
> >     Last year I found myself in the same position when I missed a
> >     deadline because of my poor planning and a time-zone nightmare!
> >     However, the rules were very clear and I accepted my mistake. So,
> >     we should assume that we do not have candidates and follow the
> >     already described process. That said, this should be very easy for
> >     the TC to figure out; it is just a matter of finding out who is
> >     interested in the PTL role and consulting with the core team of
> >     that specific project.
> >
> >     Just my two cents.
> >
> >     Edgar
> >
> >     From: Kyle Mestery
> >     Reply-To: "OpenStack Development Mailing List (not for usage
> >     questions)"
> >     Date: Thursday, September 17, 2015 at 8:48 AM
> >     To: "OpenStack Development Mailing List (not for usage questions)"
> >     Subject: Re: [openstack-dev] [all][elections] PTL nomination
> >     period is now over
> >
> >     On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor
> >     <mordred at inaugust.com> wrote:
> >
> >         On 09/17/2015 04:50 PM, Anita Kuno wrote:
> >
> >             On 09/17/2015 08:22 AM, Matt Riedemann wrote:
> >
> >
> >
> >                 On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
> >
> >                     PTL Nomination is now over. The official candidate
> >                     list is available on
> >                     the wiki[0].
> >
> >                     There are 5 projects without candidates, so
> >                     according to this
> >                     resolution[1], the TC will have to appoint a new
> >                     PTL for Barbican,
> >                     MagnetoDB, Magnum, Murano and Security.
> >
> >
> >                 This is devil's advocate, but why does a project
> >                 technically need a PTL?
> >                   Just so that there can be a contact point for
> >                 cross-project things,
> >                 i.e. a lightning rod?  There are projects that do a
> >                 lot of group
> >                 leadership/delegation/etc, so it doesn't seem that a
> >                 PTL is technically
> >                 required in all cases.
> >
> >
> >             I think that is a great question for the TC to consider
> >             when they
> >             evaluate options for action with these projects.
> >
> >             The election officials are fulfilling their obligation
> >             according to the
> >             resolution:
> >
> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
> >
> >             If you read it, the verb there is "can", not "must"; I
> >             chose the verb "can" on purpose for the resolution when
> >             I wrote it. The TC has the option to select an appointee.
> >             The TC can do other things as well, should the TC choose.
> >
> >
> >         I agree- and this is a great example of places where human
> >         judgement is better than rules.
> >
> >         For instance - one of the projects had a nominee but it missed
> >         the deadline, so that's probably an easy one.
> >
> >         One of the projects had been looking dead for a while, so this
> >         is the final nail in the coffin from my POV.
> >
> >         For the other three - I know they're still active projects
> >         with people interested in them, so sorting them out will be fun!
> >
> >
> >     This is the right approach. Human judgement #ftw! :)
> >
> >
> >
> >
> >
> >
> >                     There are 7 projects that will have an election:
> >                     Cinder, Glance, Ironic,
> >                     Keystone, Mistral, Neutron and Oslo. The details
> >                     for those will be
> >                     posted tomorrow after Tony and I setup the CIVS
> >                     system.
> >
> >                     Thank you,
> >                     Tristan
> >
> >
> >                     [0]:
> >
> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
> >
> >                     [1]:
> >
> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
> >
> >
> >
> >
> >
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/41ede5e1/attachment.html>

From aj at suse.com  Thu Sep 17 18:22:51 2015
From: aj at suse.com (Andreas Jaeger)
Date: Thu, 17 Sep 2015 20:22:51 +0200
Subject: [openstack-dev] [mistral][requirements] lockfile not in global
	requirements
Message-ID: <55FB04FB.8080903@suse.com>

Syncing requirements from the requirements repository to
mistral-extra fails with:
'lockfile' is not in global-requirements.txt

Mistral team, could you either propose to add lockfile to the global 
requirements file - or remove it from your project, please?

for details see:
https://jenkins.openstack.org/job/propose-requirements-updates/363/consoleFull
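
The failing check is essentially a set difference between the project's requirement names and the global list. A rough Python sketch of that idea (requirement lines are inlined for illustration; this is not the actual sync code):

```python
import re

def req_names(lines):
    """Extract lowercase distribution names from requirements-file lines."""
    names = set()
    for line in lines:
        line = line.split('#')[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Take the distribution name before any version specifier or marker.
        m = re.match(r'[A-Za-z0-9._-]+', line)
        if m:
            names.add(m.group(0).lower())
    return names

global_reqs = req_names(['pbr>=1.6', 'oslo.config>=2.3.0'])
project_reqs = req_names(['pbr>=1.6', 'lockfile>=0.8'])
missing = project_reqs - global_reqs
print(sorted(missing))  # ['lockfile']
```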

Andreas
-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
        HRB 21284 (AG Nürnberg)
     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



From nkinder at redhat.com  Thu Sep 17 18:28:04 2015
From: nkinder at redhat.com (Nathan Kinder)
Date: Thu, 17 Sep 2015 11:28:04 -0700
Subject: [openstack-dev] [OSSN 0052] Python-swiftclient exposes raw token
 values in debug logs
Message-ID: <55FB0634.1090205@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Python-swiftclient exposes raw token values in debug logs
- ---

### Summary ###
The password and authentication token configuration options for the
python-swiftclient are not marked as secret. The values of these options
will be logged to the standard logging output when the controller is run
in debug mode.

### Affected Services / Software ###
Python-swiftclient, Swift, Glance, Juno, Kilo

### Discussion ###
When using the python-swiftclient to connect to Glance, and the
`glance-api.conf` file has the debug option set to True, the
requests sent through the API, including user and token details, will be
captured in the local log mechanism.

### Recommended Actions ###
It is recommended to use the debug level in configurations only when
necessary to troubleshoot an issue. When the debug flag is set, the
resulting logs should be treated as having sensitive information and as
such should have strict permissions around the file and containing
directory set in the operating system. Additionally, the logs should
not be transported off the system in plaintext such as through syslog.

The debug level can be turned off by setting the following option in
the `glance-api.conf` file:

    [DEFAULT]
    debug = false
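
Independently of the configuration change above, deployments that must run with debug enabled can reduce exposure by scrubbing token values before they reach the logs. A minimal standard-library sketch, assuming tokens appear in an `X-Auth-Token` header (illustrative only, not the upstream python-swiftclient fix):

```python
import logging
import re

# Mask anything that looks like an auth token before it reaches debug
# output. The header name and pattern are illustrative assumptions,
# not python-swiftclient internals.
TOKEN_RE = re.compile(r'(X-Auth-Token:\s*)(\S+)', re.IGNORECASE)

class MaskTokenFilter(logging.Filter):
    def filter(self, record):
        # Rewrite the message in place so every handler sees the
        # redacted form.
        record.msg = TOKEN_RE.sub(r'\1***REDACTED***', str(record.msg))
        return True

logger = logging.getLogger('swiftclient-demo')
logger.setLevel(logging.DEBUG)
logger.addFilter(MaskTokenFilter())
logger.addHandler(logging.StreamHandler())
logger.debug('REQ: curl -H "X-Auth-Token: gAAAAsecret123" http://swift/v1')
```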

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0052
Original LaunchPad Bug :
https://bugs.launchpad.net/python-swiftclient/+bug/1470740
OpenStack Security ML : openstack-security at lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJV+wY0AAoJEJa+6E7Ri+EVIDEH/jtjxXOMTiiGMVrlBCwRIbVj
qxqbe8ExtWxz21cMFHOvxxdZgeOerNegTxUqgil1MVQMC9DZuVmkyPt7NLwWhN9l
Xqul/Rrk4UNBlesGuCVlXPCmcaH6HXzZG1Jaty2QDok/0ev1QUynb1i4cktISUJm
Xo0TCt/R7CijyD/AHrLOxjHwh7NUyT+8v9dyD8wdalAA814wmyz31aFW/FCfunaP
v6yx89H0FjPbxksXcB9O+E1ZyGVGJBkU7GuzAGcr6Wa+9/dg14AzQmUb4/AbEyEt
udQ275vGkicGImGOrVANwEIHK7ooliede2sxgG1omhqAd3Ak/QsMcMpbw3gk1BE=
=w1KU
-----END PGP SIGNATURE-----


From sean at coreitpro.com  Thu Sep 17 18:28:54 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Thu, 17 Sep 2015 18:28:54 +0000
Subject: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver
 support in Devstack
In-Reply-To: <CA+O3VAiBdCGZhtfEAysdmSyXfFJhYO0RKKFmxAEjbdtpYUoHDQ@mail.gmail.com>
References: <CA+O3VAiBdCGZhtfEAysdmSyXfFJhYO0RKKFmxAEjbdtpYUoHDQ@mail.gmail.com>
Message-ID: <0000014fdc910177-f8a04316-5340-4e42-948c-f785781fffac-000000@email.amazonses.com>

You need to remove your Workflow-1.

-- 
Sean M. Collins


From brandon.logan at RACKSPACE.COM  Thu Sep 17 18:29:44 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Thu, 17 Sep 2015 18:29:44 +0000
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
 neutron-lbaas core team
In-Reply-To: <79408BAC-47AB-4CA6-A8ED-8ABD6C44CC6F@workday.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>,
 <79408BAC-47AB-4CA6-A8ED-8ABD6C44CC6F@workday.com>
Message-ID: <c818a3f9b1e64888bcd15702dd73284f@544124-OEXCH02.ror-uc.rackspace.com>

I'm off today so my +1 is more like a +2

On Sep 17, 2015 12:59 PM, Edgar Magana <edgar.magana at workday.com> wrote:
Not a core but I would like to share my +1 about Michael.

Cheers,

Edgar




On 9/16/15, 3:33 PM, "Doug Wiegley" <dougwig at parksidesoftware.com> wrote:

>Hi all,
>
>As the Lieutenant of the advanced services, I nominate Michael Johnson to be a member of the neutron-lbaas core reviewer team.
>
>Review stats are in line with other cores[2], and Michael has been instrumental in both neutron-lbaas and octavia.
>
>Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, Al, and Kyle.)
>
>Thanks,
>doug
>
>1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
>2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/de940c69/attachment.html>

From stdake at cisco.com  Thu Sep 17 18:31:31 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Thu, 17 Sep 2015 18:31:31 +0000
Subject: [openstack-dev] [kolla] Followup to review in gerrit relating
 to RHOS + RDO types
In-Reply-To: <CAHD=wReayuSHgN1JEbz=bZ9P293dQaqB+YsuZJitvP7E=Jw5hQ@mail.gmail.com>
References: <D21A5A21.124FA%stdake@cisco.com>
 <CAJ3CzQUb+eyE6DVWe=No2UzbgMQMj5-ddCA8sX6L8khUQ7uZKQ@mail.gmail.com>
 <D21A60E8.12504%stdake@cisco.com>
 <CAJ3CzQXtcpaq2_OFv5GBbGTr9FWdcKFU9QKn5S6vCOQWR3vccw@mail.gmail.com>
 <D21A6FA1.12519%stdake@cisco.com>
 <CAJ3CzQXpaVeY0vS4KEnqme2Odd7HYur7h1WaJXtkBrLrmWsYiQ@mail.gmail.com>
 <D21AFAE0.12587%stdake@cisco.com> <55F6AD3E.9090909@oracle.com>
 <CAJ3CzQWS4O-+V6A9L0GSDMUGcfpJc_3=DdQG9njxO+FBoRBDyw@mail.gmail.com>
 <0ff2ac38c6044b0f039992b0a1f53ecf@weites.com>
 <CAHD=wReayuSHgN1JEbz=bZ9P293dQaqB+YsuZJitvP7E=Jw5hQ@mail.gmail.com>
Message-ID: <D220537E.12C99%stdake@cisco.com>

It appears the core team is mostly in agreement about the idea of having support for commercial distributions of OpenStack in Kolla.  The vote was nearly unanimous.  That said, there was some feedback that came out of the various comments.

As a community:
- We should set standards by which distros of OpenStack should operate in order to stay inside the Kolla tree.
- We should be willing to remove in-tree distros if their engineering backing disappears or their contributors lose interest in maintaining them.
- We should judge each new distro with an eye towards minimizing maintenance burden.
- We should be inclusive and make an effort to facilitate participation in Kolla from commercial distributions of OpenStack.
- Down the road we should sort out our CI requirements, once we actually have effective CI for our free distros of OpenStack.

Thanks everyone for your valued time in responding on this vote.

Regards
-steve


From: Martin André <martin.andre at gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Thursday, September 17, 2015 at 7:07 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] Followup to review in gerrit relating to RHOS + RDO types



On Thu, Sep 17, 2015 at 3:22 AM, <harm at weites.com> wrote:
There is an apparent need for having official RHOS supported from our end, and we just so happen to have the possibility of filling that need. Should the need arise to support some fancy proprietary backend system, or even have Kolla integrate with Oracle Solaris or something, that need would most probably be backed by a company plus developer effort. I believe the burden for our current (great) team would more or less stay the same (e.g. let's assume we don't know anything about Solaris), so this company should ship in developers to aid their 'wish'. The team effort with these additional developers would indeed grow, bigtime.

Keeping our eyes on the matters feels like a fair solution, allowing for these additions while guarding the effort they take. Should Kolla start supporting LXC besides Docker, that would be awesome (uhm...) - but I honestly don't see a need to be thinking about that right now; if someone comes up with a spec about it and wants to invest time+effort, we can at least review it. We shouldn't prepare our Dockerfiles for such a possibility though, whereas the difference between RHOS and CentOS is very little. Hence, support is rather easy to implement.

The question was whether Kolla wants/should support integrating with 3rd party tools, and I think we should. There should be rules, yes. We probably shouldn't be worrying about proprietary stuff that other projects hardly take seriously (even though drivers have been accepted)...
Vote: +1

- harmw

Sam Yaple wrote on 2015-09-14 13:44:
On Mon, Sep 14, 2015 at 11:19 AM, Paul Bourke <paul.bourke at oracle.com>
wrote:

On 13/09/15 18:34, Steven Dake (stdake) wrote:

Response inline.

From: Sam Yaple <samuel at yaple.net>
Reply-To: "sam at yaple.net" <sam at yaple.net>
Date: Sunday, September 13, 2015 at 1:35 AM
To: Steven Dake <stdake at cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [kolla] Followup to review in gerrit relating to
RHOS + RDO types

On Sun, Sep 13, 2015 at 3:01 AM, Steven Dake (stdake)
<stdake at cisco.com> wrote:
Response inline.


Sam Yaple

On Sun, Sep 13, 2015 at 1:15 AM, Steven Dake (stdake)
<stdake at cisco.com> wrote:


On Sun, Sep 13, 2015 at 12:39 AM, Steven Dake (stdake)
<stdake at cisco.com> wrote:
Hey folks,

Sam had asked a reasonable set of questions regarding a patchset:
https://review.openstack.org/#/c/222893/


The purpose of the patchset is to enable both RDO and RHOS as
binary choices on RHEL platforms.  I suspect over time,
from-source deployments have the potential to become the norm, but
the business logistics of such a change are going to take some
significant time to sort out.

Red Hat has two distros of OpenStack, neither of which is from
source.  One is free, called RDO, and the other is paid, called
RHOS.  In order to obtain support for RHEL VMs running in an
OpenStack cloud, you must be running on RHOS RPM binaries.  You
must also be running on RHEL.  It remains to be seen whether Red
Hat will actively support Kolla deployments with a RHEL+RHOS set
of packaging in containers, but my hunch says they will.  It is
in Kolla's best interest to implement this model and not make it
hard on Operators, since many of them do indeed want Red Hat's
support structure for their OpenStack deployments.

Now to Sam's questions:
"Where does 'binary' fit in if we have 'rdo' and 'rhos'? How many
more do we add? What's our policy on adding a new type?"

I'm not immediately clear on how binary fits in.  We could
make binary synonymous with the community supported version (RDO)
while still implementing the binary RHOS version.  Note Kolla
does not "support" any distribution or deployment of OpenStack;
Operators will have to look to their vendors for support.

If everything between centos+rdo and rhel+rhos is mostly the same
then I would think it would make more sense to just use the base
('rhel' in this case) to branch off any differences in the
templates. This would also allow for the least amount of change
and most generic implementation of this vendor specific packaging.
This would also match what we do with oraclelinux, we do not have
a special type for that and any specifics would be handled by an
if statement around 'oraclelinux' and not some special type.
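
The branching Sam describes could be pictured as a small lookup keyed on (base, install type); the commands and repo names below are purely illustrative assumptions, not Kolla's actual templates:

```python
# Illustrative sketch of branching repo setup on (base, install_type);
# the repo commands below are invented for the example, not Kolla's
# actual template contents.
REPO_SETUP = {
    ('centos', 'binary'): ['yum install -y centos-release-openstack'],
    ('rhel', 'binary'): ['yum-config-manager --enable rhos-release-repo'],
    ('ubuntu', 'binary'): ['add-apt-repository cloud-archive:liberty'],
}

def repo_commands(base, install_type):
    # Fall back to no extra repos, e.g. for from-source builds.
    return REPO_SETUP.get((base, install_type), [])

print(repo_commands('ubuntu', 'binary'))
```

A vendor-specific variant then becomes one more dictionary entry rather than a new top-level type.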

I think what you are proposing is RHEL + RHOS and CENTOS + RDO.
RDO also runs on RHEL.  I want to enable Red Hat customers to
make a choice to have a supported  operating system but not a
supported Cloud environment.  The answer here is RHEL + RDO.
This leads to full support down the road if the Operator chooses
to pay Red Hat for it by an easy transition to RHOS.

I am against including vendor specific things like RHOS in Kolla
outright like you are proposing. Suppose another vendor comes
along with a new base and new packages. They are willing to
maintain it, but its something that no one but their customers
with their licensing can use. This is not something that belongs
in Kolla and I am unsure that it is even appropriate to belong in
OpenStack as a whole. Unless RHEL+RHOS can be used by those that
do not have a license for it, I do not agree with adding it at
all.

Sam,

Someone stepping up to maintain a completely independent set of
docker images hasn't happened.  To date nobody has done that.
If someone were to make that offer, and it was a significant
change, I think the community as a whole would have to evaluate
such a drastic change.  That would certainly increase our
implementation and maintenance burden, which we don't want to
do.  I don't think what you propose would be in the best
interest of the Kolla project, but I'd have to see the patch set
to evaluate the scenario appropriately.

What we are talking about is 5 additional lines to enable
RHEL+RHOS specific repositories, which is not very onerous.

The fact that you can't use it directly has little bearing on
whether it's valid technology for OpenStack.  There are already
two well-defined historical precedents for non-licensed unusable
integration in OpenStack.  Cinder has 55 [1] Volume drivers which
they SUPPORT.  At least 80% of them are completely
proprietary hardware which in reality is mostly just software
which, without a license, would be impossible to use.  There
are 41 [2] Neutron drivers registered on the Neutron driver page;
almost the entirety require proprietary licenses for what amounts
to integration to access proprietary software.  The OpenStack
preferred license is ASL for a reason: to be business
friendly.  Licensed software has a place in the world of
OpenStack, even if it only serves as an integration point, which the
proposed patch does.  We are consistent with community values on
this point or I wouldn't have bothered proposing the patch.

We want to encourage people to use Kolla for proprietary
solutions if they so choose.  This is how support manifests,
which increases the strength of the Kolla project.  The presence
of support increases the likelihood that Kolla will be adopted by
Operators.  If you're asking the Operators to maintain a fork for
those 5 RHOS repo lines, that seems unreasonable.

I'd like to hear other Core Reviewer opinions on this matter
and will hold a majority vote on this thread as to whether we will
facilitate integration with third party software such as the
Cinder Block Drivers, the Neutron Network drivers, and various
for-pay versions of OpenStack such as RHOS.  I'd like all core
reviewers to weigh in please.  Without a complete vote it will be
hard to gauge what the Kolla community really wants.

Core reviewers:
Please vote +1 if you ARE satisfied with integration with third
party unusable-without-a-license software, specifically Cinder
volume drivers, Neutron network drivers, and various for-pay
distributions of OpenStack and container runtimes.
Please vote -1 if you ARE NOT satisfied with integration with
third party unusable-without-a-license software, specifically
Cinder volume drivers, Neutron network drivers, and various
for-pay distributions of OpenStack and container runtimes.

A bit of explanation on your vote might be helpful.

My vote is +1.  I have already provided my rationale.

Regards,
-steve

[1] https://wiki.openstack.org/wiki/CinderSupportMatrix
[2] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers


I appreciate you calling a vote so early. But I haven't had my
questions answered yet enough to even vote on the matter at hand.

In this situation the closest thing we have to a plugin type
system as Cinder or Neutron does is our header/footer system. What
you are proposing is integrating a proprietary solution into the
core of Kolla. Those Cinder and Neutron plugins have external
components and those external components are not baked into the
project.

What happens if and when the RHOS packages require different
tweaks in the various containers? What if it requires changes to
the Ansible playbooks? It begins to balloon out past 5 lines of
code.

Unfortunately, the community _won't_ get to vote on whether or not
to implement those changes because RHOS is already in place.
That's why I am asking the questions now as this _right_ _now_ is
the significant change you are talking about, regardless of the
lines of code.

So the question is not whether we are going to integrate 3rd
party plugins, but whether we are going to allow companies to
build proprietary products in the Kolla repo. If we allow
RHEL+RHOS then we would need to allow another distro+company
packaging and potential Ansible tweaks to get it to work for them.

If you really want to do what Cinder and Neutron do, we need a
better system for injecting code. That would be much closer to the
plugins that the other projects have.

I'd like to have a discussion about this rather than immediately
call for a vote which is why I asked you to raise this question in
a public forum in the first place.

Sam,

While a true code injection system might be interesting and would
be more parallel with the plugin model used in cinder and neutron
(and to some degree nova), those various systems didn't begin
that way.  Their driver code at one point was completely
integrated.  Only after 2-3 years was the code broken into a
fully injectable state.  I think that is an awfully high bar to
set to sort out the design ahead of time.  One of the reasons
Neutron has taken so long to mature is the Neutron community
attempted to do plugins at too early a stage, which created big
gaps in unit and functional tests.  A more appropriate design
would be for that pattern to emerge from the system over time as
people begin to adapt various distro tech to Kolla.  If you
looked at the patch in gerrit, there is one clear pattern, "Setup
distro repos", which at some point in the future could be made to
be injectable much as headers and footers are today.

As for building proprietary products in the Kolla repository, the
license is ASL, which means it is inherently not proprietary.  I
am fine with the code base integrating with proprietary software
as long as the license terms are met; someone has to pay the
mortgages of the thousands of OpenStack developers.  We should
encourage growth of OpenStack, and one of the ways for that to
happen is to be business friendly.  This translates into first
knowing the world is increasingly adopting open source
methodologies and facilitating that transition, and second
accepting the world has a whole slew of proprietary software that
already exists today that requires integration.

Nonetheless, we have a difference of opinion on this matter, and
I want this work to merge prior to rc1.  Since this is a project
policy decision and not a technical issue, it makes sense to put
it to a wider vote to either unblock or kill the work.  It would
be a shame if we reject all driver and supported distro
integration because we as a community take an anti-business stance
on our policies, but I'll live by what the community decides.
This is not a decision either you or I may dictate which is why it
has been put to a vote.

Regards
-steve

For oracle linux, I'd like to keep RDO for oracle linux and
from source on oracle linux as choices.  RDO also runs on oracle
linux.  Perhaps the patch set needs some later work here to
address this point in more detail, but as is "binary" covers
oracle linux.

Perhaps what we should do is get rid of the binary type
entirely.  Ubuntu doesn't really have a binary type, they have
a cloudarchive type, so binary doesn't make a lot of sense.
Since Ubuntu to my knowledge doesn't have two distributions of
OpenStack, the same logic wouldn't apply to providing a full
support onramp for Ubuntu customers.  Oracle doesn't provide a
binary type either; their binary type is really RDO.

The binary packages for Ubuntu are _packaged_ by the cloudarchive
team. But in the case of when OpenStack collides with an LTS
release (Icehouse and 14.04 was the last one) you do not add a new
repo because the packages are in the main Ubuntu repo.

Debian provides its own packages as well. I do not want a type
name per distro. 'binary' catches all packaged OpenStack things by
a distro.

FWIW I never liked the transition away from rdo in the repo names
to binary.  I guess I should have -1'ed those reviews back
then, but I think it's time to either revisit the decision or
compromise that binary and rdo mean the same thing in a centos and
rhel world.

Regards
-steve

Since we implement multiple bases, some of which are not RPM
based, it doesn't make much sense to me to have rhel and rdo as a
type which is why we removed rdo in the first place in favor of
the more generic 'binary'.

As such, the implied second question "How many more do we
add?" sort of sounds like "how many do we support?".  The
answer to the second question is none; again, the Kolla
community does not support any deployment of OpenStack.  To the
question as posed, how many we add, the answer is that it is really up
to community members willing to implement and maintain the
work.  In this case, I have personally stepped up to implement
RHOS and maintain it going forward.

Our policy on adding a new type could be simple or onerous.  I
prefer simple.  If someone is willing to write the code and
maintain it so that it stays in good working order, I see no harm
in it remaining in tree.  I don't expect there will be a lot
of people interested in adding multiple distributions for a
particular operating system.  To my knowledge, and I could be
incorrect, Red Hat is the only OpenStack company with a paid and a
community version of OpenStack available simultaneously, and the
paid version is only available on RHEL.  I think the risk of RPM-based
distributions and their type count spiraling out of
manageability is low.  Even if the risk were high, I'd prefer
to keep an open mind to facilitate an increase in diversity in our
community (which is already fantastically diverse, btw ;)

I am open to questions, comments or concerns.  Please feel free
to voice them.

Regards,
-steve



   _______________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [4]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[5]

Both arguments sound valid to me, both have pros and cons.

I think it's valuable to look to the experiences of Cinder and
Neutron in this area, both of which face the same scenario
and have existed much longer than Kolla. From what I know of how
they operate, proprietary code is allowed to exist in the mainline
so long as a certain set of criteria is met. I'd have to look it up,
but I think it mostly comes down to the relevant parties "playing
by the rules", e.g. providing a working CI, helping with reviews,
attending weekly meetings, etc. If Kolla can craft a similar set of
criteria for proprietary code down the line, I think it should work
well for us.

Steve has a good point in that it may be too much overhead to
implement a plugin system or similar up front. Instead, we should
actively monitor the overhead in terms of reviews and code size that
these extra implementations add. Perhaps agree to review it at the
end of Mitaka?

Given the project is young, I think it can also benefit from the
increased usage and exposure that allowing these parties in would
bring. I would hope independent contributors would not feel shut out
by being unable to use or test the pieces that need a license. The
libre distros will remain #1 for us.

So based on the above explanation, I'm +1.

-Paul


   _______________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [4]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[5]

Given Paul's comments, I agree here as well. I would like the
criteria required for Kolla to allow this proprietary code into
the main repo to be nailed down as soon as possible, though, and
suggest that being able to gate against it be a bare minimum
criterion.

As for a plugin system, I also agree with Paul that we should
revisit the overhead of including these other distros, and any
types needed, once we have had time to see whether they introduce
any additional burden.

So for the question 'Do we allow code that relies on proprietary
packages?' I would vote +1, with the condition that we define the
requirements of allowing that code as soon as possible.


Links:
------
[1] https://review.openstack.org/#/c/222893/
[2] https://wiki.openstack.org/wiki/CinderSupportMatrix
[3] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
[4] http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
[5] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I'm also +1 in principle on opening the Kolla doors to distros with paid licenses and accepting them in the source tree. We shouldn't build barriers but bridges.
I would like to make sure we all agree, however, that there is no guarantee that code for a distro will stay in the repo just because it is there now. I want to reserve the right as a Kolla dev to remove paid distros from the tree if they become a burden, for example through unreliable CI or a lack of commitment from the people backing the distro.

Martin

   _______________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/d346002d/attachment.html>

From doc at aedo.net  Thu Sep 17 18:33:24 2015
From: doc at aedo.net (Christopher Aedo)
Date: Thu, 17 Sep 2015 11:33:24 -0700
Subject: [openstack-dev] [app-catalog] App Catalog IRC meeting minutes -
	9/17/2015
Message-ID: <CA+odVQF_z2zCdRPnBWm-P98SshsYn-37e34sGyv466q46v0JEw@mail.gmail.com>

We had a nice meeting today and caught up on some of our
plans/intentions for the website backend.  Long ago we knew that a
basically static site, with assets listed in a YAML file, would not
last for long.  It's already causing issues with respect to versions
and updates, and it makes determining when things were added really
cumbersome.

Over the next few weeks we will be transitioning to using Flask for
the backend.  Initially we'll just replicate exactly what we have now,
then we can slowly start adding the additional features we are planning.
We will also share many of the same design elements (static files
and JavaScript) between the Horizon plugin and the website itself.

There's lots more happening here in the months to come at any rate :)
If you ever think about ways to make OpenStack clouds more useful for
the end-users (or are just curious about what we are up to), please
join us on IRC (#openstack-app-catalog).  Thanks!

-Christopher

=================================
#openstack-meeting-3: app-catalog
=================================
Meeting started by docaedo at 17:00:30 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2015/app_catalog.2015-09-17-17.00.log.html

Meeting summary
---------------
* rollcall  (docaedo, 17:00:50)
* Status updates (docaedo)  (docaedo, 17:02:07)
  * ACTION: docaedo to get SSL cert for app catalog website  (docaedo,
    17:05:52)
* Discuss "new site plans" etherpad (docaedo)  (docaedo, 17:07:48)
  * LINK: https://etherpad.openstack.org/p/app-catalog-v2-backend
    (docaedo, 17:07:55)
  * ACTION: docaedo to request a new repo for common elements shared
    between app-catalog and ui  (docaedo, 17:15:38)
* Open discussion  (docaedo, 17:23:19)
Meeting ended at 17:28:31 UTC.

Action items, by person
-----------------------
* docaedo
  * docaedo to get SSL cert for app catalog website
  * docaedo to request a new repo for common elements shared between
    app-catalog and ui

People present (lines said)
---------------------------
* docaedo (40)
* kfox1111 (29)
* kzaitsev_mb (7)
* pkoros (4)
* openstack (3)

Generated by `MeetBot`_ 0.1.4


From brian at python.org  Thu Sep 17 18:33:39 2015
From: brian at python.org (Brian Curtin)
Date: Thu, 17 Sep 2015 13:33:39 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
Message-ID: <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>

Because it's still day_n AOE during the early hours of day_n+1
local time in a lot of places. I think I have until about 6 AM the
day after an AOE deadline before it's no longer the deadline date
anywhere on earth, as there are still places on earth where the date
hasn't flipped.

Your EOD is not the deadline; it's really only a reference to how
many more hours you have until it's no longer that date anywhere on
earth. People screw themselves out of things by using their EOD as the
definition.

(we've been using this with the PyCon CFP since forever)
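The AOE rule described above is easy to compute: a date D ends, anywhere on earth, at 23:59 on D in UTC-12. A small sketch (illustrative only, not part of any election tooling):

```python
from datetime import datetime, timedelta, timezone

# AOE (Anywhere on Earth) is UTC-12: a deadline date has not passed as
# long as that date is still the current date somewhere on earth.
AOE = timezone(timedelta(hours=-12))

def aoe_deadline_utc(year, month, day):
    """Last instant of the given date AOE, expressed in UTC."""
    end_of_day = datetime(year, month, day, 23, 59, tzinfo=AOE)
    return end_of_day.astimezone(timezone.utc)
```

A 23:59 AOE deadline on Sep 17 therefore runs until 11:59 UTC on Sep 18, which is the extra margin Brian describes.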

On Thu, Sep 17, 2015 at 1:17 PM, Kevin Benton <blak111 at gmail.com> wrote:
> How is it not what I described? Time zones become irrelevant if you get it
> in by the end of the day in your local time zone.
>
> https://en.wikipedia.org/wiki/Anywhere_on_Earth
>
> On Thu, Sep 17, 2015 at 10:30 AM, Brian Curtin <brian at python.org> wrote:
>>
>> On Thu, Sep 17, 2015 at 12:26 PM, Kevin Benton <blak111 at gmail.com> wrote:
>> > Maybe it would be a good idea to switch to 23:59 AOE deadlines like many
>> > paper submissions use for academic conferences. That way there is never
>> > a
>> > need to convert TZs, you just get it in by the end of the day in your
>> > own
>> > time zone.
>>
>> This is somehow going to cause even more confusion because you'll have
>> to explain AOE (which is not what you described).
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kevin Benton
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From blak111 at gmail.com  Thu Sep 17 18:35:24 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 11:35:24 -0700
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <SN1PR02MB16950E11BFF66EB83AC3F361995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
 <SN1PR02MB16950E11BFF66EB83AC3F361995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CAO_F6JMn5Jv74NL1TBpciM7PxdLMT0SXN+6+9WUaju=bGqW5ig@mail.gmail.com>

router:external only affects the behavior of Neutron routers. It allows
them to attach to it with an external gateway interface which implies NAT
and floating IPs.

From an instance's perspective, an external network would be no different
than any other provider network scenario that uses a non-Neutron router.
Nothing different happens with the routing of traffic.

>Also I believe that (c) is already true for Neutron external networks - i.e.
it doesn't make sense to assign a floating IP to an instance that is
directly on an external network.  Is that correct?

Well not floating IPs from the same external network, but you could
conceivably have layers where one external network has an internal Neutron
router interface that leads to another external network via a Neutron
router.
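For reference, the router:external attribute discussed in this thread is set on the network object itself at creation time. A hedged sketch of the Networking v2 request body (the provider:* values are deployment-specific examples, not something this thread prescribes):

```python
import json

# Sketch of a Networking v2 "create network" body with router:external
# set to true. The provider:* values are illustrative examples only;
# they depend entirely on the deployment.
body = {
    "network": {
        "name": "ext-net",
        "router:external": True,
        "provider:network_type": "flat",
        "provider:physical_network": "physnet1",
    }
}
payload = json.dumps(body)
```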


On Thu, Sep 17, 2015 at 10:17 AM, Neil Jerram <Neil.Jerram at metaswitch.com>
wrote:

> Thanks so much for your continuing answers; they are really helping me.
>
> I see your points now about the special casing, and about the semantic
> expectations and internal wiring of a Neutron network being just the
> same for an external network as for non-external.  Hence, the model for
> an L3-only external network should be the same as it would be for an
> L3-only tenant network, except for the router:external flag (and might
> be along the lines that you've suggested, of a subnet with a null network).
>
> It still seems that 'router:external true' might be a good match for
> some of the other 'routed' semantics in [1], though, so I'd like to
> drill down more on exactly what 'router:external true' means.
>
> A summary of the semantics at [1] is:
> (a) L3-only connectivity between instances attached to the network.
> (b) Data can be routed between this network and the outside, and
> between multiple networks of this type, without needing Neutron routers
> (c) Floating IPs are not supported for instances on this network.
> Instead, wherever an instance needs to be routable from, attach it to a
> network with a subnet of IP addresses that are routable from that place.
>
> [1] https://review.openstack.org/#/c/198439/
>
> According to [2], router:external "Indicates whether this network is
> externally accessible."  Which I think is an exact match for (b) - would
> you agree?  (Note: it can't mean that every instance on that network
> actually _is_ contactable from outside, because that depends on IP
> addressing, and router:external is a network property, not subnet.  But
> it can mean that every instance is _potentially_ contactable, without
> the mediation of a Neutron router.)
>
> [2] http://developer.openstack.org/api-ref-networking-v2-ext.html
>
> Also I believe that (c) is already true for Neutron external networks -
> i.e. it doesn't make sense to assign a floating IP to an instance that
> is directly on an external network.  Is that correct?
>
> In summary, for the semantics that I'm wanting to model, it sounds like
> router:external true already gives me 2 of the 3 main pieces.  There's
> still serious work needed for (a), but that's really nice news, if I'm
> seeing things correctly (since discovering that instances can be
> attached to an external network).
>
> Regards,
>     Neil
>
>
>
>
> On 17/09/15 17:29, Kevin Benton wrote:
> >
> > Yes, the L2 semantics apply to the external network as well (at least
> > with ML2).
> >
> > One example of the special casing is the external_network_bridge
> > option in the L3 agent. That would cause the agent to plug directly
> > into a bridge so none of the normal L2 agent wiring would occur. With
> > the L2 bridge_mappings option there is no reason for this to exist
> > anymore, because the way it ignores network attributes makes
> > debugging a nightmare.
> >
> > >Yes, that makes sense.  Clearly the core semantic there is IP.  I can
> > imagine reasonable variation on less core details, e.g. L2 broadcast vs.
> > NBMA.  Perhaps it would be acceptable, if use cases need it, for such
> > details to be described by flags on the external network object.
> >
> > An external network object is just a regular network object with a
> > router:external flag set to true. Any changes to it would have to make
> > sense in the context of all networks. That's why I want to make sure
> > that whatever we come up with makes sense in all contexts and isn't
> > just a bolt on corner case.
> >
> > On Sep 17, 2015 8:21 AM, "Neil Jerram" <Neil.Jerram at metaswitch.com
> > <mailto:Neil.Jerram at metaswitch.com>> wrote:
> >
> >     Thanks, Kevin.  Some further queries, then:
> >
> >     On 17/09/15 15:49, Kevin Benton wrote:
> >     >
> >     > It's not true for all plugins, but an external network should
> >     provide
> >     > the same semantics of a normal network.
> >     >
> >     Yes, that makes sense.  Clearly the core semantic there is IP.  I can
> >     imagine reasonable variation on less core details, e.g. L2
> >     broadcast vs.
> >     NBMA.  Perhaps it would be acceptable, if use cases need it, for such
> >     details to be described by flags on the external network object.
> >
> >     I'm also wondering about what you wrote in the recent thread with
> Carl
> >     about representing a network connected by routers.  I think you were
> >     arguing that a L3-only network should not be represented by a kind of
> >     Neutron network object, because a Neutron network has so many L2
> >     properties/semantics that it just doesn't make sense, and better
> >     to have
> >     a different kind of object for L3-only.  Do those L2
> >     properties/semantics apply to an external network too?
> >
> >     > The only difference is that it allows router gateway interfaces
> >     to be
> >     > attached to it.
> >     >
> >     Right.  From a networking-calico perspective, I think that means that
> >     the implementation should (eventually) support that, and hence allow
> >     interconnection between the external network and private Neutron
> >     networks.
> >
> >     > We want to get rid of as much special casing as possible for the
> >     > external network.
> >     >
> >     I don't understand here.  What 'special casing' do you mean?
> >
> >     Regards,
> >         Neil
> >
> >     > On Sep 17, 2015 7:02 AM, "Neil Jerram"
> >     <Neil.Jerram at metaswitch.com <mailto:Neil.Jerram at metaswitch.com>
> >     > <mailto:Neil.Jerram at metaswitch.com
> >     <mailto:Neil.Jerram at metaswitch.com>>> wrote:
> >     >
> >     >     Thanks to the interesting 'default network model' thread, I
> >     now know
> >     >     that Neutron allows booting a VM on an external network.
> >     :-)  I didn't
> >     >     realize that before!
> >     >
> >     >     So, I'm now wondering what connectivity semantics are
> >     expected (or
> >     >     even
> >     >     specified!) for such VMs, and whether they're the same as -
> >     or very
> >     >     similar to - the 'routed' networking semantics I've
> >     described at [1].
> >     >
> >     >     [1]
> >     >
> >
> https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
> >     >
> >     >     Specifically I wonder if VM's attached to an external network
> >     >     expect any
> >     >     particular L2 characteristics, such as being able to L2
> >     broadcast to
> >     >     each other?
> >     >
> >     >     By way of context - i.e. why am I asking this?...   The
> >     >     networking-calico project [2] provides an implementation of the
> >     >     'routed'
> >     >     semantics at [1], but only if one suspends belief in some of
> the
> >     >     Neutron
> >     >     semantics associated with non-external networks, such as
> >     needing a
> >     >     virtual router to provide connectivity to the outside
> >     world.  (Because
> >     >     networking-calico provides that external connectivity
> >     without any
> >     >     virtual router.)  Therefore we believe that we need to
> >     propose some
> >     >     enhancement of the Neutron API and data model, so that
> >     Neutron can
> >     >     describe 'routed' semantics as well as all the traditional
> >     ones.  But,
> >     >     if what we are doing is semantically equivalent to
> >     'attaching to an
> >     >     external network', perhaps no such enhancement is needed...
> >     >
> >     >     [2]
> >     https://git.openstack.org/cgit/openstack/networking-calico
> >     <https://git.openstack.org/cgit/openstack/networking-calico>
> >     >     <https://git.openstack.org/cgit/openstack/networking-calico>
> >     >
> >     >     Many thanks for any input!
> >     >
> >     >         Neil
> >     >
> >     >
> >     >
> >
> __________________________________________________________________________
> >     >     OpenStack Development Mailing List (not for usage questions)
> >     >     Unsubscribe:
> >     >
> >      OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >     <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >     >
> >      <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >     >
> >      http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >     >
> >
> >
> >
>  __________________________________________________________________________
> >     OpenStack Development Mailing List (not for usage questions)
> >     Unsubscribe:
> >     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >     <
> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> >     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/a57674be/attachment-0001.html>

From msm at redhat.com  Thu Sep 17 18:38:44 2015
From: msm at redhat.com (michael mccune)
Date: Thu, 17 Sep 2015 14:38:44 -0400
Subject: [openstack-dev] [Sahara] FFE request for improved secret storage
In-Reply-To: <55F059BC.9050609@redhat.com>
References: <55F059BC.9050609@redhat.com>
Message-ID: <55FB08B4.2050904@redhat.com>

i am retracting this request, i think this feature would benefit from 
more time to test and review.

thanks for the consideration,
mike

On 09/09/2015 12:09 PM, michael mccune wrote:
> hi all,
>
> i am requesting an FFE for the improved secret storage feature.
>
> this change will allow operators to utilize the key manager service for
> offloading the passwords stored by sahara. this change does not
> implement mandatory usage of barbican, and defaults to a backward
> compatible behavior that requires no change to a stack.
>
> there is currently 1 review up which addresses the main thrust of this
> change, there will be 1 additional review which will include more
> passwords being migrated to use the mechanisms for offloading.
>
> i expect this work to be complete by sept. 25.
>
> review
> https://review.openstack.org/#/c/220680/
>
> blueprint
> https://blueprints.launchpad.net/sahara/+spec/improved-secret-storage
>
> spec
> http://specs.openstack.org/openstack/sahara-specs/specs/liberty/improved-secret-storage.html
>
>
> thanks,
> mike
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From sharis at Brocade.com  Thu Sep 17 18:42:24 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Thu, 17 Sep 2015 18:42:24 +0000
Subject: [openstack-dev] [Congress] CLI equivalent
In-Reply-To: <1442433064831.38322@vmware.com>
References: <ef6b462e3f024c0bbe28bc864ced945b@HQ1WP-EXMB12.corp.brocade.com>
 <1442433064831.38322@vmware.com>
Message-ID: <0d6f652e188240b2827f7f2eb33ad31d@HQ1WP-EXMB12.corp.brocade.com>

Thanks Alex.

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Wednesday, September 16, 2015 12:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] CLI equivalent


Try this:

openstack congress policy row list classification error





________________________________
From: Shiv Haris <sharis at Brocade.com<mailto:sharis at Brocade.com>>
Sent: Wednesday, September 16, 2015 12:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] CLI equivalent

Do we have a CLI way for  doing the equivalent of:

$ curl -X GET localhost:1789/v1/policies/classification/tables/error/rows
As described in the tutorial:

https://github.com/openstack/congress/blob/master/doc/source/tutorial-tenant-sharing.rst#listing-policy-violations

-Shiv
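For scripting, the curl call above is plain HTTP, so the same endpoint can be hit from Python with the standard library. A minimal sketch (host and port taken from the tutorial; this is not an official Congress client):

```python
import urllib.request

def policy_rows_url(host, policy, table):
    """Build the URL behind:
    curl -X GET <host>/v1/policies/<policy>/tables/<table>/rows
    """
    return "http://%s/v1/policies/%s/tables/%s/rows" % (host, policy, table)

def fetch_policy_rows(host, policy, table):
    """Fetch the raw row listing from the Congress API."""
    with urllib.request.urlopen(policy_rows_url(host, policy, table)) as resp:
        return resp.read()
```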

From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Thursday, September 10, 2015 8:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] Ending feature freeze

Hi all,

We're now finished with feature freeze.  We have our first release candidate and the stable/liberty branch.  So master is once again open for new features.  Couple of things to note:

1. Documentation.  We should also look through the docs and update them.  Documentation is really important.  There's one doc patch not yet merged, so be sure to pull that down before editing.  That patch officially deprecates a number of API calls that don't make sense for the new distributed architecture.  If you find places where we don't mention the deprecation, please fix that.

https://review.openstack.org/#/c/220707/

2. Bugs.  We should still all be manually testing, looking for bugs, and fixing them.  This will be true especially as other projects change their clients, which as we've seen can break our datasource drivers.

All bug fixes first go into master, and then we cherry-pick to stable/liberty.  Once you've patched a bug on master and it's been merged, you'll create another change for your bug-fix and push it to review.  Then one of the cores will +2/+1 it (usually without needing another formal round of reviews).  Here's the procedure.

// pull down the latest changes for master
$ git checkout master
$ git pull

// create a local branch for stable/liberty and switch to it
$ git checkout origin/stable/liberty -b stable/liberty

// cherry-pick your change from master onto the local stable/liberty
// The -x records the original <sha1 from master> in the commit msg
$ git cherry-pick -x <sha1 from master>

// Push to review and specify the stable/liberty branch.
// Notice in gerrit that the branch is stable/liberty, not master
$ git review stable/liberty

// Once your change to stable/liberty gets merged, fetch all the new
// changes.

// switch to local version of stable/liberty
$ git checkout stable/liberty

// fetch all the new changes to all the branches
$ git fetch origin

// update your local branch
$ git rebase origin/stable/liberty

Tim





-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/ef991147/attachment.html>

From sbauza at redhat.com  Thu Sep 17 18:43:37 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 17 Sep 2015 20:43:37 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
Message-ID: <55FB09D9.9090506@redhat.com>



On 17/09/2015 at 19:26, Kevin Benton wrote:
>
> Maybe it would be a good idea to switch to 23:59 AOE deadlines like 
> many paper submissions use for academic conferences. That way there is 
> never a need to convert TZs, you just get it in by the end of the day 
> in your own time zone.
>


IMHO, the current process leaves enough time for proposing a candidacy, 
given that it's first advertised at the beginning of the cycle on the 
main release schedule wiki page (e.g. for Liberty [1]) and then 
officially announced 8 days before the deadline. We also know that PTL 
elections come around 6 weeks before the Summit every cycle. One last 
official announcement is made 1 day before the deadline.

Trying to target the very last moment to submit a candidacy just seems 
risky under those conditions, and we should really encourage candidates 
not to wait until the last minute but to propose far earlier.

-Sylvain


[1] 
https://wiki.openstack.org/w/index.php?title=Liberty_Release_Schedule&oldid=78501

> On Sep 17, 2015 9:18 AM, "Edgar Magana" <edgar.magana at workday.com 
> <mailto:edgar.magana at workday.com>> wrote:
>
>     Folks,
>
>     Last year I found myself in the same position when I missed a
>     deadline because of my poor planning and a time-zone nightmare!
>     However, the rules were very clear and I assumed my mistake. So,
>     we should assume that we do not have candidates and follow the
>     already described process. However, this should be very easy for
>     the TC to figure out; it is just a matter of finding out who is
>     interested in the PTL role and consulting with the core team of
>     that specific project.
>
>     Just my two cents...
>
>     Edgar
>
>     From: Kyle Mestery
>     Reply-To: "OpenStack Development Mailing List (not for usage
>     questions)"
>     Date: Thursday, September 17, 2015 at 8:48 AM
>     To: "OpenStack Development Mailing List (not for usage questions)"
>     Subject: Re: [openstack-dev] [all][elections] PTL nomination
>     period is now over
>
>     On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor
>     <mordred at inaugust.com <mailto:mordred at inaugust.com>> wrote:
>
>         On 09/17/2015 04:50 PM, Anita Kuno wrote:
>
>             On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>
>
>
>                 On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>
>                     PTL Nomination is now over. The official candidate
>                     list is available on
>                     the wiki[0].
>
>                     There are 5 projects without candidates, so
>                     according to this
>                     resolution[1], the TC will have to appoint a new
>                     PTL for Barbican,
>                     MagnetoDB, Magnum, Murano and Security
>
>
>                 This is devil's advocate, but why does a project
>                 technically need a PTL?
>                   Just so that there can be a contact point for
>                 cross-project things,
>                 i.e. a lightning rod?  There are projects that do a
>                 lot of group
>                 leadership/delegation/etc, so it doesn't seem that a
>                 PTL is technically
>                 required in all cases.
>
>
>             I think that is a great question for the TC to consider
>             when they
>             evaluate options for action with these projects.
>
>             The election officials are fulfilling their obligation
>             according to the
>             resolution:
>             http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>
>             If you read the verb there the verb is "can" not "must", I
>             choose the
>             verb "can" on purpose for the resolution when I wrote it.
>             The TC has the
>             option to select an appointee. The TC can do other things
>             as well,
>             should the TC choose.
>
>
>         I agree- and this is a great example of places where human
>         judgement is better than rules.
>
>         For instance - one of the projects had a nominee but it missed
>         the deadline, so that's probably an easy one.
>
>         For one of the projects it had been looking dead for a while,
>         so this is the final nail in the coffin from my POV.
>
>         For the other three - I know they're still active projects
>         with people interested in them, so sorting them out will be fun!
>
>
>     This is the right approach. Human judgement #ftw! :)
>
>
>
>
>
>                     There are 7 projects that will have an election:
>                     Cinder, Glance, Ironic,
>                     Keystone, Mistral, Neutron and Oslo. The details
>                     for those will be
>                     posted tomorrow after Tony and I setup the CIVS
>                     system.
>
>                     Thank you,
>                     Tristan
>
>
>                     [0]:
>                     https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>
>                     [1]:
>                     http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>
>
>
>
>
>                     __________________________________________________________________________
>
>                     OpenStack Development Mailing List (not for usage
>                     questions)
>                     Unsubscribe:
>                     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>                     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
>
>
>
>
>
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/8bcdbbcb/attachment.html>

From nkinder at redhat.com  Thu Sep 17 18:47:25 2015
From: nkinder at redhat.com (Nathan Kinder)
Date: Thu, 17 Sep 2015 11:47:25 -0700
Subject: [openstack-dev] [OSSN 0055] Service accounts may have cloud admin
	privileges
Message-ID: <55FB0ABD.8090000@redhat.com>


Service accounts may have cloud admin privileges
---

### Summary ###
OpenStack services (for example Nova and Glance) typically use a
service account in Keystone to perform actions.  In some cases this
service account has full admin privileges, may therefore perform any
action on your cloud, and should be protected appropriately.

### Affected Services / Software ###
Most OpenStack services / all versions

### Discussion ###
In many cases, OpenStack services require an OpenStack account to
perform API actions such as validating Keystone tokens.  Some
deployment tools grant administrative level access to these service
accounts, making these accounts very powerful.

A service account with administrator access could be used to:

  - destroy/modify/access data
  - create or destroy admin accounts
  - potentially escalate to undercloud access
  - log in to Horizon

### Recommended Actions ###
Service accounts can use the "service" role rather than admin.  You
can check what role the service account has by performing the following
steps:

1. List roles:

     openstack role list

2. Check the role assignment for the service user in question:

     openstack role assignment list --user <service_user>

3. Compare the ID listed in the "role" column from step 2 with the role
IDs listed in step 1.  If the role is listed as "admin", the service
account has full admin privileges on the cloud.

It is possible to change the role to "service" for some accounts but
this may have unexpected consequences for services such as Nova and
Neutron, and is therefore not recommended for inexperienced admins.

If a service account does have admin, it's advisable to closely
monitor login events for that user to ensure that it is not used
unexpectedly.  In particular, pay attention to unusual IPs using the
service account.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0055
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1464750
OpenStack Security ML : openstack-security at lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg


From rlrossit at linux.vnet.ibm.com  Thu Sep 17 18:58:22 2015
From: rlrossit at linux.vnet.ibm.com (Ryan Rossiter)
Date: Thu, 17 Sep 2015 13:58:22 -0500
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps (Please
	don't hurt me)
Message-ID: <55FB0D4E.50506@linux.vnet.ibm.com>

I'm going to start out by making this clear: I am not looking to incite 
a flame war.

I've been working in Magnum for a couple of weeks now, and I'm starting 
to get down the processes for contribution. I'm here to talk about the 
process of always needing to have a patch associated with a bug or 
blueprint.

I have a good example of this being too strict. I knew the rules, so I 
opened [1] to say there are some improperly named variables and classes. 
I think it took longer for me to open the bug than it did to actually 
fix it. I think we need to start taking a look at how strict we need to 
be in requiring bugs to be opened and linked to patches. I understand 
it's a fine line between "it's broken" and "it would be nice to make this
better".

I remember the debate when I was originally putting up [2] for review. 
The worry was that these new tests would cut into developer
productivity because they are more strict. The same argument can be applied
to opening these bugs. If we have to open something up for everything we 
want to upload a patch for, that's just another step in the process to 
take up time.

Now, with my opinion out there, if we still want to take the direction 
of opening up bugs for everything, I will comply (I'm not the guy making 
decisions around here). I would like to see clear and present 
documentation explaining this to new contributors, though ([3] would 
probably be a good place to explain this).

Once again, not looking to start an argument. If everyone feels the way 
it works now is the best, I'm more than happy to join in :)

[1] https://bugs.launchpad.net/magnum/+bug/1496568
[2] https://review.openstack.org/#/c/217342/
[3] http://docs.openstack.org/developer/magnum/contributing.html

-- 
Thanks,

Ryan Rossiter (rlrossit)



From blak111 at gmail.com  Thu Sep 17 19:00:15 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 17 Sep 2015 12:00:15 -0700
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
Message-ID: <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>

It guarantees that if you submit by the deadline date in your local time
zone, you won't miss the deadline. It doesn't matter that there are extra
hours afterwards; the idea is that it gets rid of the need to do time zone
conversions.

If we are trying to do some weird optimization where everyone wants to
submit in the last 60 seconds, then sure, AOE isn't great for that because
you still have to convert. That doesn't seem to be what we are trying to
do, though.
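For reference, the AOE (Anywhere on Earth) rule reduces to a date comparison in UTC-12. A short sketch; the timestamps are illustrative:

```python
from datetime import date, datetime, timedelta, timezone

# AOE deadlines use UTC-12: a date deadline has not passed while that
# calendar date is still current somewhere on the planet.
AOE = timezone(timedelta(hours=-12))

def meets_aoe_deadline(submitted, deadline_date):
    """True if an aware datetime falls on or before deadline_date, AOE."""
    return submitted.astimezone(AOE).date() <= deadline_date

# 11:59 UTC on Sep 18 is still 23:59 Sep 17 in UTC-12, so a Sep 17 AOE
# deadline has not yet passed:
print(meets_aoe_deadline(
    datetime(2015, 9, 18, 11, 59, tzinfo=timezone.utc), date(2015, 9, 17)))
# True
```

A submitter who hits the deadline date in their own time zone always passes this check, which is the whole point of the convention.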
On Sep 17, 2015 11:36 AM, "Brian Curtin" <brian at python.org> wrote:

> Because it's still day_n AOE in early hours of day_n+1 local-time in a
> lot of places. I think I have until 6 AM the day after an AOE deadline
> where it's still considered the deadline date anywhere on earth, as
> there are still places on earth where the date hasn't flipped.
>
> Your EOD is not the deadline, as it's really only a reference to how
> many more hours you have until it's no longer that date anywhere on
> earth. People screw themselves out of things by using their EOD as the
> definition.
>
> (we've been using this with the PyCon CFP since forever)
>
> On Thu, Sep 17, 2015 at 1:17 PM, Kevin Benton <blak111 at gmail.com> wrote:
> > How is it not what I described? Time zones become irrelevant if you get
> it
> > in by the end of the day in your local time zone.
> >
> > https://en.wikipedia.org/wiki/Anywhere_on_Earth
> >
> > On Thu, Sep 17, 2015 at 10:30 AM, Brian Curtin <brian at python.org> wrote:
> >>
> >> On Thu, Sep 17, 2015 at 12:26 PM, Kevin Benton <blak111 at gmail.com>
> wrote:
> >> > Maybe it would be a good idea to switch to 23:59 AOE deadlines like
> many
> >> > paper submissions use for academic conferences. That way there is
> never
> >> > a
> >> > need to convert TZs, you just get it in by the end of the day in your
> >> > own
> >> > time zone.
> >>
> >> This is somehow going to cause even more confusion because you'll have
> >> to explain AOE (which is not what you described).
> >>
> >>
> >
> >
> >
> >
> > --
> > Kevin Benton
> >
> >
> >
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/b51973bd/attachment.html>

From andrew.melton at RACKSPACE.COM  Thu Sep 17 19:06:00 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Thu, 17 Sep 2015 19:06:00 +0000
Subject: [openstack-dev] [magnum] Discovery
In-Reply-To: <D21ED83B.67DF4%danehans@cisco.com>
References: <D21ED83B.67DF4%danehans@cisco.com>
Message-ID: <1442516764564.93792@RACKSPACE.COM>

Hey Daneyon,


I'm fairly partial to #2 as well, though I'm wondering if it's possible to take it a step further. Could we run etcd in each bay without using the public discovery endpoint, and then configure Swarm to simply use the internal etcd as its discovery mechanism? This would cut one of our external service dependencies and make it easier to run Magnum in an environment with locked-down public internet access.


Anyways, I think #2 could be a good start that we could iterate on later if need be.


--Andrew


________________________________
From: Daneyon Hansen (danehans) <danehans at cisco.com>
Sent: Wednesday, September 16, 2015 11:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discovery

All,

While implementing the flannel --network-driver for swarm, I have come across an issue that requires feedback from the community. Here is the breakdown of the issue:

  1.  Flannel [1] requires etcd to store network configuration. Meeting this requirement is simple for the kubernetes bay types since kubernetes requires etcd.
  2.  A discovery process is needed for bootstrapping etcd. Magnum implements the public discovery option [2].
  3.  A discovery process is also required to bootstrap a swarm bay type. Again, Magnum implements a publicly hosted (Docker Hub) option [3].
  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm and etcd discovery.
  5.  Etcd discovery cannot be implemented for the swarm bay type because discovery_url is associated with swarm's discovery process, not etcd's.

Here are a few options on how to overcome this obstacle:

  1.  Make the discovery_url more specific, for example etcd_discovery_url and swarm_discovery_url. However, this option would needlessly expose both discovery URLs to all bay types.
  2.  Swarm supports etcd as a discovery backend. This would mean discovery is similar for both bay types. With both bay types using the same mechanism for discovery, it will be easier to provide a private discovery option in the future.
  3.  Do not support flannel as a network-driver for k8s bay types. This would require adding support for a different driver that supports multi-host networking such as libnetwork. Note: libnetwork is only implemented in the Docker experimental release: https://github.com/docker/docker/tree/master/experimental.

I lean towards #2, but there may be other options, so feel free to share your thoughts. I would like to obtain feedback from the community before proceeding in a particular direction.

[1] https://github.com/coreos/flannel
[2] https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
[3] https://docs.docker.com/swarm/discovery/
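For illustration, option #2 amounts to pointing Swarm's discovery at an etcd endpoint instead of Docker Hub's token service. A rough sketch with hypothetical addresses, loosely following the standalone Swarm discovery syntax of the time ([3]); exact flags may differ by release:

```shell
# On the manager node: use an etcd cluster reachable inside the bay
# for discovery, instead of token://<docker-hub-token>.
swarm manage etcd://10.0.0.5:2379/swarm

# On each node: register under the same etcd path.
swarm join --advertise=10.0.0.6:2375 etcd://10.0.0.5:2379/swarm
```

With both bay types reading discovery state from etcd, a private discovery option becomes a matter of where the etcd cluster runs.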

Regards,
Daneyon Hansen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/0ee6f1f1/attachment.html>

From Vijay.Venkatachalam at citrix.com  Thu Sep 17 19:09:06 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Thu, 17 Sep 2015 19:09:06 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <D2202F18.1CAE5%dmccowan@cisco.com>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
 <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
 <D2202F18.1CAE5%dmccowan@cisco.com>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E8A132D@SINPEX01CL02.citrite.net>

Yes Dave, that is what is happening today.

But that approach looks a little untidy, because the tenant admin has to do some infrastructure work.

It would be good, from the user/tenant admin's perspective, to do just two things:

1.      Upload certificate info

2.      Create the LBaaS configuration with the certificates already uploaded

Today, because Barbican and LBaaS do not integrate seamlessly, every tenant admin has to do the following:


1.      Uploads certificate info

2.      Reads documentation or otherwise finds out that there is an LBaaS service user, somehow gets hold of that service user's user ID, and assigns it read rights on the certificate.

3.      Creates the LBaaS configuration with the certificates already uploaded

I feel this does not fit the "as a service" model, where tenants should care only about what they have to.

Thanks,
Vijay V.

From: Dave McCowan (dmccowan) [mailto:dmccowan at cisco.com]
Sent: 17 September 2015 18:20
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


The tenant admin from Step 1, should also do Step 2.

From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Wednesday, September 16, 2015 at 9:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


How does lbaas do step 2?
It does not have the privilege for that secret/container using the service user.
Should it use the keystone token with which the user created the LB config and assign read access for the secret/container to the LBaaS service user?

Thanks,
Vijay V.

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: 16 September 2015 19:24
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Why not have LBaaS do step 2? Even better would be to help with the instance user spec; combined with LBaaS doing step 2, you could restrict secret access to just the amphorae that need the secret.

Thanks,
Kevin

________________________________
From: Vijay Venkatachalam
Sent: Tuesday, September 15, 2015 7:06:39 PM
To: OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates
Hi,
               Is there a way to give a certain user read access to the secrets/containers of all projects'/tenants' certificates?
               This user, with universal "read" privileges, would be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users follow the process below:

1.      The tenant's creator/admin user uploads certificate info as secrets and a container

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3.      The user creates the LB config with the container reference

4.      The LBaaS plugin, using the service user, then accesses the container reference provided in the LB config and proceeds with the implementation.

Ideally we would want to avoid step 2 in the process. Instead add a step 5 where the lbaas plugin's service user checks if the user configuring the LB has read access to the container reference provided.
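For reference, step 2 today corresponds to an ACL call along these lines; this is a sketch with placeholder IDs, and the exact python-barbicanclient syntax may vary by release:

```shell
# Grant the LBaaS service user read access to the container holding the
# certificate (and likewise to each secret the container references):
barbican acl user add --user <lbaas-service-user-id> \
    http://barbican.example.com:9311/v1/containers/<container-uuid>
```

It is this per-certificate, per-service-user bookkeeping that the proposed step 5 would eliminate.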

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/66bd0a72/attachment.html>

From mriedem at linux.vnet.ibm.com  Thu Sep 17 19:13:03 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 17 Sep 2015 14:13:03 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FB09D9.9090506@redhat.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <55FB09D9.9090506@redhat.com>
Message-ID: <55FB10BF.4050100@linux.vnet.ibm.com>



On 9/17/2015 1:43 PM, Sylvain Bauza wrote:
>
>
> Le 17/09/2015 19:26, Kevin Benton a écrit :
>>
>> Maybe it would be a good idea to switch to 23:59 AOE deadlines like
>> many paper submissions use for academic conferences. That way there is
>> never a need to convert TZs, you just get it in by the end of the day
>> in your own time zone.
>>
>
>
> IMHO, the current process leaves enough time for proposing a candidacy,
> given that it's first advertised at the beginning of the cycle on the main
> Release schedule wiki page (eg. for Liberty [1]) and then officially
> announced 8 days before the deadline. We also know that PTL elections
> come around 6 weeks before the Summit every cycle. One last official
> announcement is made 1 day before the deadline.
>
> Trying to target the very last moment to submit a candidacy just seems
> risky to me under those conditions, and we should really encourage
> candidates not to wait until the last minute but to propose far earlier.

Heh, yeah, +1. If running for PTL is something you had in mind to begin 
with, you should probably be looking forward to when the elections start 
and get your ducks in a row.  Part of being PTL, a large part I'd think, 
is the ability to organize and manage things. If you're waiting until 
the 11th hour to do this, I wouldn't have much sympathy.

>
> -Sylvain
>
>
> [1]
> https://wiki.openstack.org/w/index.php?title=Liberty_Release_Schedule&oldid=78501
>
>> On Sep 17, 2015 9:18 AM, "Edgar Magana" <edgar.magana at workday.com
>> <mailto:edgar.magana at workday.com>> wrote:
>>
>>     Folks,
>>
>>     Last year I found myself in the same position when I missed a
>>     deadline because of my poor planning and a time-zone nightmare!
>>     However, the rules were very clear and I accepted my mistake. So,
>>     we should assume that we do not have candidates and follow the
>>     already described process. However, this should be very easy to
>>     figure out for the TC; it is just a matter of finding out who is
>>     interested in the PTL role and consulting with the core team of
>>     that specific project.
>>
>>     Just my two cents.
>>
>>     Edgar
>>
>>     From: Kyle Mestery
>>     Reply-To: "OpenStack Development Mailing List (not for usage
>>     questions)"
>>     Date: Thursday, September 17, 2015 at 8:48 AM
>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>     Subject: Re: [openstack-dev] [all][elections] PTL nomination
>>     period is now over
>>
>>     On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor
>>     <mordred at inaugust.com <mailto:mordred at inaugust.com>> wrote:
>>
>>         On 09/17/2015 04:50 PM, Anita Kuno wrote:
>>
>>             On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>
>>
>>
>>                 On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>
>>                     PTL Nomination is now over. The official candidate
>>                     list is available on
>>                     the wiki[0].
>>
>>                     There are 5 projects without candidates, so
>>                     according to this
>>                     resolution[1], the TC will have to appoint a new
>>                     PTL for Barbican,
>>                     MagnetoDB, Magnum, Murano and Security
>>
>>
>>                 This is devil's advocate, but why does a project
>>                 technically need a PTL?
>>                   Just so that there can be a contact point for
>>                 cross-project things,
>>                 i.e. a lightning rod?  There are projects that do a
>>                 lot of group
>>                 leadership/delegation/etc, so it doesn't seem that a
>>                 PTL is technically
>>                 required in all cases.
>>
>>
>>             I think that is a great question for the TC to consider
>>             when they
>>             evaluate options for action with these projects.
>>
>>             The election officials are fulfilling their obligation
>>             according to the
>>             resolution:
>>             http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>>
>>             If you read it, the verb there is "can", not "must"; I
>>             chose the
>>             verb "can" on purpose for the resolution when I wrote it.
>>             The TC has the
>>             option to select an appointee. The TC can do other things
>>             as well,
>>             should the TC choose.
>>
>>
>>         I agree- and this is a great example of places where human
>>         judgement is better than rules.
>>
>>         For instance - one of the projects had a nominee but it missed
>>         the deadline, so that's probably an easy one.
>>
>>         For one of the projects it had been looking dead for a while,
>>         so this is the final nail in the coffin from my POV.
>>
>>         For the other three - I know they're still active projects
>>         with people interested in them, so sorting them out will be fun!
>>
>>
>>     This is the right approach. Human judgement #ftw! :)
>>
>>
>>
>>
>>
>>                     There are 7 projects that will have an election:
>>                     Cinder, Glance, Ironic,
>>                     Keystone, Mistral, Neutron and Oslo. The details
>>                     for those will be
>>                     posted tomorrow after Tony and I setup the CIVS
>>                     system.
>>
>>                     Thank you,
>>                     Tristan
>>
>>
>>                     [0]:
>>                     https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>
>>                     [1]:
>>                     http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>

-- 

Thanks,

Matt Riedemann



From Kevin.Fox at pnnl.gov  Thu Sep 17 19:28:14 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 17 Sep 2015 19:28:14 +0000
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please	don't hurt me)
In-Reply-To: <55FB0D4E.50506@linux.vnet.ibm.com>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>

I agree. Lots of projects have this issue. I submitted a bug fix once that was literally 3 characters long, and it took:
a short commit message, a long commit message, and a full bug report being filed and cross-linked. The amount of time spent writing it up was orders of magnitude longer than the actual fix.

Seems a bit much...

Looking at this review, I'd go a step farther and argue that code cleanups like this one should be really really easy to get through. No one likes to do them, so we should be encouraging folks that actually do it. Not pile up roadblocks.

Thanks,
Kevin

________________________________________
From: Ryan Rossiter [rlrossit at linux.vnet.ibm.com]
Sent: Thursday, September 17, 2015 11:58 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps (Please     don't hurt me)

I'm going to start out by making this clear: I am not looking to incite
a flame war.

I've been working in Magnum for a couple of weeks now, and I'm starting
to get down the processes for contribution. I'm here to talk about the
process of always needing to have a patch associated with a bug or
blueprint.

I have a good example of this being too strict. I knew the rules, so I
opened [1] to say there are some improperly named variables and classes.
I think it took longer for me to open the bug than it did to actually
fix it. I think we need to start taking a look at how strict we need to
be in requiring bugs to be opened and linked to patches. I understand
it's a fine line between "it's broken" and "it would be nice to make this
better".

I remember the debate when I was originally putting up [2] for review.
The worry was that these new tests would cut into developer
productivity because they are more strict. The same argument can be applied
to opening these bugs. If we have to open something up for everything we
want to upload a patch for, that's just another step in the process to
take up time.

Now, with my opinion out there, if we still want to take the direction
of opening up bugs for everything, I will comply (I'm not the guy making
decisions around here). I would like to see clear and present
documentation explaining this to new contributors, though ([3] would
probably be a good place to explain this).

Once again, not looking to start an argument. If everyone feels the way
it works now is the best, I'm more than happy to join in :)

[1] https://bugs.launchpad.net/magnum/+bug/1496568
[2] https://review.openstack.org/#/c/217342/
[3] http://docs.openstack.org/developer/magnum/contributing.html

--
Thanks,

Ryan Rossiter (rlrossit)


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From carl at ecbaldwin.net  Thu Sep 17 19:41:57 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Thu, 17 Sep 2015 13:41:57 -0600
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
Message-ID: <CALiLy7q-THFf5WVBsLuLPkMLfg61uRR0U9TjVemi7y6G3Z2yEw@mail.gmail.com>

On Thu, Sep 17, 2015 at 10:26 AM, Kevin Benton <blak111 at gmail.com> wrote:
> Yes, the L2 semantics apply to the external network as well (at least with
> ML2).

This is true and should remain so.  I think we've come to the
agreement that a neutron Network, external, shared, or not, should be
an L2 broadcast domain and have these semantics uniformly.

> One example of the special casing is the external_network_bridge option in
> the L3 agent. That would cause the agent to plug directly into a bridge so
> none of the normal L2 agent wiring would occur. With the L2 bridge_mappings
> option there is no reason for this to exist anymore, because its ignoring
> of network attributes makes debugging a nightmare.

+1

>> Yes, that makes sense.  Clearly the core semantic there is IP.  I can
> imagine reasonable variation on less core details, e.g. L2 broadcast vs.
> NBMA.  Perhaps it would be acceptable, if use cases need it, for such
> details to be described by flags on the external network object.
>
> An external network object is just a regular network object with a
> router:external flag set to true. Any changes to it would have to make sense
> in the context of all networks. That's why I want to make sure that whatever
> we come up with makes sense in all contexts and isn't just a bolt on corner
> case.

I have been working on a proposal around adding better L3 modeling to
Neutron.  I will have something up by the end of this week.  As a
preview, my current thinking is that we should add a new object to
represent an L3 domain.  I will propose that floating ips move to
reference this object instead of a network.  I'm also thinking that a
router's external gateway will reference an instance of this new
object instead of a Network object.  To cover current use cases, a
Network would own one of these new instances to define the subnets
that live on the network.  I think we'll also have the flexibility to
create an L3 only domain or one that spans a group of L2 networks like
what is being requested by operators [1].
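
To make the shape of that concrete, here is a hypothetical sketch of the
object relationships being described (illustrative names only; the real
proposal was still being written at the time):

```python
# Hypothetical sketch of the proposed relationships (names are
# illustrative, not the actual Neutron model): floating IPs and router
# gateways would reference an L3 domain rather than a Network directly.
class L3Domain:
    def __init__(self, subnets):
        self.subnets = subnets          # the subnets this L3 domain spans


class Network:
    def __init__(self, l3_domain):
        self.l3_domain = l3_domain      # a Network owns one L3 domain


class FloatingIP:
    def __init__(self, l3_domain):
        self.l3_domain = l3_domain      # previously: a reference to a Network


net = Network(L3Domain(subnets=["203.0.113.0/24"]))
fip = FloatingIP(net.l3_domain)         # floating IP no longer needs the Network
print(fip.l3_domain.subnets[0])
```

Under this shape, an L3-only domain (no owning Network) or one spanning
several L2 networks falls out naturally.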

We can also discuss in the context of this proposal how a Port may be
able to associate with L3-only.  As you know, ports need to provide
certain L2 services to VMs in order for them to operate.  But, does
this mean they need to associate to a neutron Network directly?  I
don't know yet but I tend to think that the model could support this
as long as VM ports have a driver like Calico behind them to support
the VMs' dependence on DHCP and ARP.

This is all going to take a fair amount of work.  I'm hoping a good
chunk of it will fit in the Mitaka cycle.

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1458890


From doug at doughellmann.com  Thu Sep 17 19:47:08 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 17 Sep 2015 15:47:08 -0400
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <55FAE5F2.6050805@openstack.org>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com> <55FAE5F2.6050805@openstack.org>
Message-ID: <1442519175-sup-7818@lrrr.local>

Excerpts from Thierry Carrez's message of 2015-09-17 18:10:26 +0200:
> Monty Taylor wrote:
> > I agree- and this is a great example of places where human judgement is
> > better than rules.
> > 
> > For instance - one of the projects had a nominee but it missed the
> > deadline, so that's probably an easy one.
> > 
> > For one of the projects it had been looking dead for a while, so this is
> > the final nail in the coffin from my POV
> > 
> > For the other three - I know they're still active projects with people
> > interested in them, so sorting them out will be fun!
> 
> Looks like in 4 cases (Magnum, Barbican, Murano, Security) there is
> actually a candidate, they just missed the deadline. So that should be
> an easy discussion at the next TC meeting.
> 
> For the last one, it is not an accident. I think it is indeed the final
> nail in the coffin.
> 

Yes, I was planning to wait until after the summit to propose that we
drop MagnetoDB from the official list of projects due to inactivity. We
can deal with it sooner, obviously.

Doug


From morgan.fainberg at gmail.com  Thu Sep 17 19:51:33 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Thu, 17 Sep 2015 12:51:33 -0700
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
Message-ID: <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>

On Thu, Sep 17, 2015 at 12:00 PM, Kevin Benton <blak111 at gmail.com> wrote:

> It guarantees that if you hit the date deadline local time, that you won't
> miss the deadline. It doesn't matter if there are extra hours afterwards.
> The idea is that it gets rid of the need to do time zone conversions.
>
> If we are trying to do some weird optimization where everyone wants to
> submit in the last 60 seconds, then sure AOE isn't great for that because
> you still have to convert. It doesn't seem to me like that's what we are
> trying to do though.
>
Alternatively, you give a UTC time (which all of our meetings are in anyway)
and set the deadline. Maybe we should set the deadline to 23:59 in the
westernmost timezone (UTC-11/-12?). This would accomplish what you're
describing without having to explain AOE more concretely than "submit by
23:59 your time zone on day X".

I think this is all superfluous however and we should simply encourage
people to not wait until the last minute. Waiting to see who is
running/what the field looks like isn't as important as standing up and
saying you're interested in running.

You shouldn't worry about hurting anyone's feelings by running; more
importantly, most PTLs will be happy to have someone else shoulder some of
the weight. Tossing your name into the ring signals that you're willing to
help out in this regard. I know that as a PTL (an outgoing one at that),
this clear signal would move an individual towards the top of my list when
deciding whom to delegate responsibilities to, since they have already
indicated they want to be part of the project's leadership.

Just a $0.02 on the timing concerns.

--Morgan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/edea856e/attachment.html>

From EGuz at walmartlabs.com  Thu Sep 17 19:58:26 2015
From: EGuz at walmartlabs.com (Egor Guz)
Date: Thu, 17 Sep 2015 19:58:26 +0000
Subject: [openstack-dev] [magnum] Discovery
In-Reply-To: <1442516764564.93792@RACKSPACE.COM>
References: <D21ED83B.67DF4%danehans@cisco.com>
 <1442516764564.93792@RACKSPACE.COM>
Message-ID: <D22068B1.1B7C3%eguz@walmartlabs.com>

+1 for no longer using the public discovery endpoint; most private cloud VMs don't have access to the internet, and the operator must run an etcd instance somewhere just for discovery.

--
Egor

From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Thursday, September 17, 2015 at 12:06
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discovery


Hey Daneyon,


I'm fairly partial towards #2 as well, though I'm wondering if it's possible to take it a step further. Could we run etcd in each Bay without using the public discovery endpoint, and then configure Swarm to simply use the internal etcd as its discovery mechanism? This would cut one of our external service dependencies and make it easier to run Magnum in an environment with locked-down public internet access.


Anyways, I think #2 could be a good start that we could iterate on later if need be.


--Andrew


________________________________
From: Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com>>
Sent: Wednesday, September 16, 2015 11:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discovery

All,

While implementing the flannel --network-driver for swarm, I have come across an issue that requires feedback from the community. Here is the breakdown of the issue:

  1.  Flannel [1] requires etcd to store network configuration. Meeting this requirement is simple for the kubernetes bay types since kubernetes requires etcd.
  2.  A discovery process is needed for bootstrapping etcd. Magnum implements the public discovery option [2].
  3.  A discovery process is also required to bootstrap a swarm bay type. Again, Magnum implements a publicly hosted (Docker Hub) option [3].
  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm and etcd discovery.
  5.  Etcd cannot be implemented in swarm because discovery_url is associated with swarm's discovery process and not etcd.

Here are a few options on how to overcome this obstacle:

  1.  Make the discovery_url more specific, for example etcd_discovery_url and swarm_discovery_url. However, this option would needlessly expose both discovery URLs to all bay types.
  2.  Swarm supports etcd as a discovery backend. This would mean discovery is similar for both bay types. With both bay types using the same mechanism for discovery, it will be easier to provide a private discovery option in the future.
  3.  Do not support flannel as a network-driver for k8s bay types. This would require adding support for a different driver that supports multi-host networking such as libnetwork. Note: libnetwork is only implemented in the Docker experimental release: https://github.com/docker/docker/tree/master/experimental.

I lean towards #2, but there may be other options, so feel free to share your thoughts. I would like to obtain feedback from the community before proceeding in a particular direction.
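
As a rough illustration of option #2 (a hypothetical helper, not actual
Magnum code; the endpoint address is made up), both bay types could derive
their discovery backend from a single etcd-based discovery_url:

```python
# Hypothetical sketch: with etcd as the shared discovery backend, one
# discovery_url scheme serves both the kubernetes and swarm bay types,
# instead of swarm using the public Docker Hub token service.
def discovery_backend(bay_type, etcd_endpoint):
    """Build an etcd:// discovery URL for the given bay type."""
    return "etcd://%s/%s" % (etcd_endpoint, bay_type)


print(discovery_backend("swarm", "10.0.0.2:2379"))   # etcd://10.0.0.2:2379/swarm
print(discovery_backend("kubernetes", "10.0.0.2:2379"))
```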

[1] https://github.com/coreos/flannel
[2] https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
[3] https://docs.docker.com/swarm/discovery/

Regards,
Daneyon Hansen


From doug at doughellmann.com  Thu Sep 17 20:00:14 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 17 Sep 2015 16:00:14 -0400
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
Message-ID: <1442519996-sup-6683@lrrr.local>

Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:

> I think this is all superfluous however and we should simply encourage
> people to not wait until the last minute. Waiting to see who is
> running/what the field looks like isn't as important as standing up and
> saying you're interested in running.

+1

Doug


From doc at aedo.net  Thu Sep 17 20:00:47 2015
From: doc at aedo.net (Christopher Aedo)
Date: Thu, 17 Sep 2015 13:00:47 -0700
Subject: [openstack-dev] [murano] [app-catalog] versions for murano assets
	in the catalog
Message-ID: <CA+odVQHdkocESWDvNhwZbQaMAyBPCJciXCTeDrTcAsYGN7Y4nA@mail.gmail.com>

One big thing missing from the App Catalog right now is the ability to
version assets.  This is especially obvious with the Murano assets
which have some version/release dependencies.  Ideally an app-catalog
user would be able to pick an older version (i.e., "works with Kilo
rather than Liberty"), but we don't have that functionality yet.

We are working on resolving handling versions elegantly from the App
Catalog side but in the short term we believe Murano is going to need
a workaround.  In order to support multiple entries with the same name
(e.g., an Apache Tomcat package for both Kilo and Liberty), we are
proposing that the Liberty release of Murano have a new default URL, like:

MURANO_REPO_URL="http://apps.openstack.org/api/v1/murano_repo/liberty/"

We have a patch ready [1] which would redirect traffic hitting that
URL to http://storage.apps.openstack.org.  If we take this approach,
we will then retain the ability to manage where Murano fetches things
from without requiring clients of the Liberty-Murano release to do
anything.  For instance, if there is a need for Liberty versions of
Murano packages to be different from Kilo, we could set up a
Liberty-specific directory and put those versions there, and then
adjust the redirect appropriately.
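
A minimal sketch of the redirect logic described above (a hypothetical
helper, not the actual patch; all names are illustrative): the
release-specific URL resolves to the shared backing store until a release
needs its own directory:

```python
# Hypothetical resolver: map a /murano_repo/<release>/ request onto the
# backing storage location, so the redirect target can change per release
# without clients ever updating MURANO_REPO_URL.
BACKING_STORE = "http://storage.apps.openstack.org"

# Per-release overrides; empty until a release's packages need to diverge.
RELEASE_PATHS = {}


def resolve_repo_url(release):
    """Return the storage URL that a release-specific repo URL redirects to."""
    return RELEASE_PATHS.get(release, BACKING_STORE)


print(resolve_repo_url("liberty"))  # http://storage.apps.openstack.org
```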

What do you think?  We definitely need feedback here; otherwise we are
likely to break things Murano relies on.  kzaitsev is active on IRC and
was the one who highlighted this issue, but if there are other
compatibility or version concerns as Murano continues to grow and
improve, we could use one or two more people from Murano to stay in
touch with us wherever the project intersects with the App Catalog, so
we don't break something for you :)

[1] https://review.openstack.org/#/c/224869/

-Christopher


From jpeeler at redhat.com  Thu Sep 17 20:09:02 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Thu, 17 Sep 2015 16:09:02 -0400
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please don't hurt me)
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
Message-ID: <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>

On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> I agree. Lots of projects have this issue. I submitted a bug fix once that
> literally was 3 characters long, and it took:
> A short commit message, a long commit message, and a full bug report being
> filed and cross-linked. The amount of time spent writing it up was orders of
> magnitude longer than the actual fix.
>
> Seems a bit much...
>
> Looking at this review, I'd go a step further and argue that code cleanups
> like this one should be really, really easy to get through. No one likes to
> do them, so we should be encouraging folks who actually do them, not piling
> up roadblocks.


It is indeed frustrating. I've had a few similar reviews (in other projects;
hopefully it's okay that I comment here) as well. Honestly, I think if a
given team is willing to draw the line on what is permissible to commit
without filing a bug, then they should be permitted that freedom.

That said, I'm sure somebody is going to point out that, come release
time, having the list of bugs fixed in a given release is handy, spelling
errors included.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/613ad7f6/attachment.html>

From adrian.otto at rackspace.com  Thu Sep 17 20:13:27 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Thu, 17 Sep 2015 20:13:27 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is
	now	over
In-Reply-To: <1442519996-sup-6683@lrrr.local>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
Message-ID: <5F389B6E-2F5E-498E-A8E3-49D987ADB8C6@rackspace.com>

I'd like to extend my apologies to the election officials and the TC for not submitting my Magnum PTL candidacy before the deadline. There was a miscommunication between me and Anne Gentle about the deadline, so I planned my submission incorrectly. It is here:

https://review.openstack.org/224850

Thanks for your consideration.

Adrian Otto

> On Sep 17, 2015, at 1:00 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
> 
>> I think this is all superfluous however and we should simply encourage
>> people to not wait until the last minute. Waiting to see who is
>> running/what the field looks like isn't as important as standing up and
>> saying you're interested in running.
> 
> +1
> 
> Doug
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From john.griffith8 at gmail.com  Thu Sep 17 20:15:51 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Thu, 17 Sep 2015 14:15:51 -0600
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <1442519996-sup-6683@lrrr.local>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
Message-ID: <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>

On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>
> > I think this is all superfluous however and we should simply encourage
> > people to not wait until the last minute. Waiting to see who is
> > running/what the field looks like isn't as important as standing up and
> > saying you're interested in running.
>
> +1
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

My dog ate my homework...
My car wouldn't start...
I couldn't figure out what UTC time was...

The guidelines seemed pretty clear: "Any member of an election electorate
can propose their candidacy for the same election until September 17,
05:59 UTC."

That being said, a big analysis of date/time selection, etc., or harping
on the fact that something 'went wrong', doesn't really seem warranted
here.  As a TC member I have no problem saying "things happen": those who
submitted their candidacy, albeit late, and are unopposed are in... no
muss, no fuss.  I *think* we're all reasonable adults, and I don't believe
anybody had in mind that the TC would arbitrarily assign somebody who
wasn't even listed as a candidate for PTL of one of the mentioned projects.

Moving on,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/182d1042/attachment.html>

From annegentle at justwriteclick.com  Thu Sep 17 20:19:54 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Thu, 17 Sep 2015 15:19:54 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
Message-ID: <CAD0KtVE_GPK4OSPghZLWYYGF8P4L3KLQAquOBJ=iRfd1LKJScQ@mail.gmail.com>

On Thu, Sep 17, 2015 at 3:15 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

>
>
> On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann <doug at doughellmann.com>
> wrote:
>
>> Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>>
>> > I think this is all superfluous however and we should simply encourage
>> > people to not wait until the last minute. Waiting to see who is
>> > running/what the field looks like isn't as important as standing up and
>> > saying you're interested in running.
>>
>> +1
>>
>> Doug
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> My dog ate my homework...
> My car wouldn't start...
> I couldn't figure out what UTC time was...
>
> The guidelines seemed pretty clear:
> Any member of an election electorate can propose their candidacy for the
> same election until September 17, 05:59 UTC
>
> That being said, a big analysis on date/time selection etc doesn't really
> seem warranted here or harping on the fact that something 'went wrong'.  I
> as a TC member have no problem saying "things happen" and those that have
> submitted candidacy albeit late and are unopposed are in.. no muss no
> fuss.  I *think* we're all reasonable adults and don't know that anybody
> had in mind that the TC would arbitrarily assign somebody that wasn't even
> listed as a PTL for one of the mentioned projects.
>
>
It's not so simple for Magnum, with 2 late candidacies. We'll figure it out
but yes, we have work to do.

Anne


> Moving on,
> John
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/3fdeea56/attachment.html>

From mestery at mestery.com  Thu Sep 17 20:23:36 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Thu, 17 Sep 2015 15:23:36 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAD0KtVE_GPK4OSPghZLWYYGF8P4L3KLQAquOBJ=iRfd1LKJScQ@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
 <CAD0KtVE_GPK4OSPghZLWYYGF8P4L3KLQAquOBJ=iRfd1LKJScQ@mail.gmail.com>
Message-ID: <CAL3VkVwG+e_nLwg3it=W3HZsHDo0RBjU3xxgx-FGFXoz0FtdRw@mail.gmail.com>

On Thu, Sep 17, 2015 at 3:19 PM, Anne Gentle <annegentle at justwriteclick.com>
wrote:

>
>
> On Thu, Sep 17, 2015 at 3:15 PM, John Griffith <john.griffith8 at gmail.com>
> wrote:
>
>>
>>
>> On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann <doug at doughellmann.com>
>> wrote:
>>
>>> Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>>>
>>> > I think this is all superfluous however and we should simply encourage
>>> > people to not wait until the last minute. Waiting to see who is
>>> > running/what the field looks like isn't as important as standing up and
>>> > saying you're interested in running.
>>>
>>> +1
>>>
>>> Doug
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> My dog ate my homework...
>> My car wouldn't start...
>> I couldn't figure out what UTC time was...
>>
>> The guidelines seemed pretty clear:
>> Any member of an election electorate can propose their candidacy for the
>> same election until September 17, 05:59 UTC
>>
>> That being said, a big analysis on date/time selection etc doesn't really
>> seem warranted here or harping on the fact that something 'went wrong'.  I
>> as a TC member have no problem saying "things happen" and those that have
>> submitted candidacy albeit late and are unopposed are in.. no muss no
>> fuss.  I *think* we're all reasonable adults and don't know that anybody
>> had in mind that the TC would arbitrarily assign somebody that wasn't even
>> listed as a PTL for one of the mentioned projects.
>>
>>
> It's not so simple for Magnum, with 2 late candidacies. We'll figure it
> out but yes, we have work to do.
>
>
It could be simple: Let magnum have an election with both candidates. As
Monty said:

"... this is a great example of places where human judgement is better than
rules."

Thanks,
Kyle


> Anne
>
>
> Moving on,
>> John
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/f4235c51/attachment.html>

From john.griffith8 at gmail.com  Thu Sep 17 20:28:54 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Thu, 17 Sep 2015 14:28:54 -0600
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAL3VkVwG+e_nLwg3it=W3HZsHDo0RBjU3xxgx-FGFXoz0FtdRw@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
 <CAD0KtVE_GPK4OSPghZLWYYGF8P4L3KLQAquOBJ=iRfd1LKJScQ@mail.gmail.com>
 <CAL3VkVwG+e_nLwg3it=W3HZsHDo0RBjU3xxgx-FGFXoz0FtdRw@mail.gmail.com>
Message-ID: <CAPWkaSXom6sS6XW17ddKkt7VwKkKjAm+qgoXW-pqCzRKKfgnEA@mail.gmail.com>

On Thu, Sep 17, 2015 at 2:23 PM, Kyle Mestery <mestery at mestery.com> wrote:

> On Thu, Sep 17, 2015 at 3:19 PM, Anne Gentle <
> annegentle at justwriteclick.com> wrote:
>
>>
>>
>> On Thu, Sep 17, 2015 at 3:15 PM, John Griffith <john.griffith8 at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann <doug at doughellmann.com>
>>> wrote:
>>>
>>>> Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>>>>
>>>> > I think this is all superfluous however and we should simply encourage
>>>> > people to not wait until the last minute. Waiting to see who is
>>>> > running/what the field looks like isn't as important as standing up
>>>> and
>>>> > saying you're interested in running.
>>>>
>>>> +1
>>>>
>>>> Doug
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>> "My dog ate my homework...
>>> My car wouldn't start...
>>> I couldn't figure out what UTC time was...
>>>
>>> The guidelines seemed pretty clear:
>>> Any member of an election electorate can propose their candidacy for the
>>> same election until September 17, 05:59 UTC"
>>>
>>> That being said, a big analysis on date/time selection etc doesn't
>>> really seem warranted here or harping on the fact that something 'went
>>> wrong'.  I as a TC member have no problem saying "things happen" and those
>>> that have submitted candidacy albeit late and are unopposed are in.. no
>>> muss no fuss.  I *think* we're all reasonable adults and don't know that
>>> anybody had in mind that the TC would arbitrarily assign somebody that
>>> wasn't even listed as a PTL for one of the mentioned projects.
>>>
>>>
>> It's not so simple for Magnum, with 2 late candidacies. We'll figure it
>> out but yes, we have work to do.
>>
>>
> It could be simple: Let magnum have an election with both candidates. As
> Monty said:
>
+1

Also, note the second submitter stated they only submitted because they
noticed nobody else was running. Regardless, it seems easy enough to deal
with.


>
> "... this is a great example of places where human judgement is better
> than rules."
>
> Thanks,
> Kyle
>
>
>> Anne
>>
>>
>> Moving on,
>>> John
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>>
>> --
>> Anne Gentle
>> Rackspace
>> Principal Engineer
>> www.justwriteclick.com
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/2452ac4f/attachment.html>

From amuller at redhat.com  Thu Sep 17 20:53:25 2015
From: amuller at redhat.com (Assaf Muller)
Date: Thu, 17 Sep 2015 16:53:25 -0400
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please don't hurt me)
In-Reply-To: <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
 <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>
Message-ID: <CABARBAZO71wyCWSgk0h7T-rH+yG_N0tV8apKARwgpp42z6uP8Q@mail.gmail.com>

On Thu, Sep 17, 2015 at 4:09 PM, Jeff Peeler <jpeeler at redhat.com> wrote:

>
> On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>
>> I agree. Lots of projects have this issue. I submitted a bug fix once
>> that literally was 3 characters long, and it took:
>> A short commit message, a long commit message, and a full bug report
>> being filed and cross linked. The amount of time writing it up was orders
>> of magnitude longer than the actual fix.
>>
>> Seems a bit much...
>>
>> Looking at this review, I'd go a step farther and argue that code
>> cleanups like this one should be really really easy to get through. No one
>> likes to do them, so we should be encouraging folks that actually do it.
>> Not pile up roadblocks.
>
>
> It is indeed frustrating. I've had a few similar reviews (in other
> projects - hopefully it's okay I comment here) as well. Honestly, I think
> if a given team is willing to draw the line as for what is permissible to
> commit without bug creation, then they should be permitted that freedom.
>
> However, that said, I'm sure somebody is going to point out that come
> release time having the list of bugs fixed in a given release is handy,
> spelling errors included.
>

We've had the same debate in Neutron and we relaxed the rules. We don't
require bugs for trivial changes. In fact, my argument has always been:
Come release
time, when we say that the Neutron community fixed so and so bugs, we would
be lying if we were to include fixing spelling issues in comments. That's
not a bug.


>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/4e006198/attachment.html>

From adrian.otto at rackspace.com  Thu Sep 17 21:01:57 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Thu, 17 Sep 2015 21:01:57 +0000
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please don't hurt me)
In-Reply-To: <CABARBAZO71wyCWSgk0h7T-rH+yG_N0tV8apKARwgpp42z6uP8Q@mail.gmail.com>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
 <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>
 <CABARBAZO71wyCWSgk0h7T-rH+yG_N0tV8apKARwgpp42z6uP8Q@mail.gmail.com>
Message-ID: <B3A42ACB-C486-480C-BA6A-151DF4A815D5@rackspace.com>

Let's apply sensible reason. If it's a new feature or a bug, it should be tracked against an artifact like a bug ticket or a blueprint. If it's truly trivial, we don't care. I can tell you that some of the worst bugs I have ever seen in my career had fixes that were about 4 bytes long. That did not make them any less serious.

If you are fixing an actual legitimate bug that has a three character fix, and you don't want it to be tracked, then as the submitter you can say so in the commit message. We can act accordingly going forward.

Adrian

On Sep 17, 2015, at 1:53 PM, Assaf Muller <amuller at redhat.com<mailto:amuller at redhat.com>> wrote:



On Thu, Sep 17, 2015 at 4:09 PM, Jeff Peeler <jpeeler at redhat.com<mailto:jpeeler at redhat.com>> wrote:

On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
I agree. Lots of projects have this issue. I submitted a bug fix once that literally was 3 characters long, and it took:
A short commit message, a long commit message, and a full bug report being filed and cross linked. The amount of time writing it up was orders of magnitude longer than the actual fix.

Seems a bit much...

Looking at this review, I'd go a step farther and argue that code cleanups like this one should be really really easy to get through. No one likes to do them, so we should be encouraging folks that actually do it. Not pile up roadblocks.

It is indeed frustrating. I've had a few similar reviews (in other projects - hopefully it's okay I comment here) as well. Honestly, I think if a given team is willing to draw the line as for what is permissible to commit without bug creation, then they should be permitted that freedom.

However, that said, I'm sure somebody is going to point out that come release time having the list of bugs fixed in a given release is handy, spelling errors included.

We've had the same debate in Neutron and we relaxed the rules. We don't require bugs for trivial changes. In fact, my argument has always been: Come release
time, when we say that the Neutron community fixed so and so bugs, we would be lying if we were to include fixing spelling issues in comments. That's not a bug.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/199dad53/attachment.html>

From hongbin.lu at huawei.com  Thu Sep 17 21:04:15 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Thu, 17 Sep 2015 21:04:15 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is
	now	over
In-Reply-To: <CAL3VkVwG+e_nLwg3it=W3HZsHDo0RBjU3xxgx-FGFXoz0FtdRw@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
 <CAD0KtVE_GPK4OSPghZLWYYGF8P4L3KLQAquOBJ=iRfd1LKJScQ@mail.gmail.com>
 <CAL3VkVwG+e_nLwg3it=W3HZsHDo0RBjU3xxgx-FGFXoz0FtdRw@mail.gmail.com>
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCE5107@SZXEMI503-MBS.china.huawei.com>

Hi,

I am fine to have an election with Adrian Otto, and potentially with other candidates who are also late.

Best regards,
Hongbin

From: Kyle Mestery [mailto:mestery at mestery.com]
Sent: September-17-15 4:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][elections] PTL nomination period is now over

On Thu, Sep 17, 2015 at 3:19 PM, Anne Gentle <annegentle at justwriteclick.com<mailto:annegentle at justwriteclick.com>> wrote:


On Thu, Sep 17, 2015 at 3:15 PM, John Griffith <john.griffith8 at gmail.com<mailto:john.griffith8 at gmail.com>> wrote:


On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann <doug at doughellmann.com<mailto:doug at doughellmann.com>> wrote:
Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:

> I think this is all superfluous however and we should simply encourage
> people to not wait until the last minute. Waiting to see who is
> running/what the field looks like isn't as important as standing up and
> saying you're interested in running.

+1

Doug

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

"My dog ate my homework...
My car wouldn't start...
I couldn't figure out what UTC time was...

The guidelines seemed pretty clear:
Any member of an election electorate can propose their candidacy for the same election until September 17, 05:59 UTC"

That being said, a big analysis on date/time selection etc doesn't really seem warranted here or harping on the fact that something 'went wrong'.  I as a TC member have no problem saying "things happen" and those that have submitted candidacy albeit late and are unopposed are in.. no muss no fuss.  I *think* we're all reasonable adults and don't know that anybody had in mind that the TC would arbitrarily assign somebody that wasn't even listed as a PTL for one of the mentioned projects.


It's not so simple for Magnum, with 2 late candidacies. We'll figure it out but yes, we have work to do.


It could be simple: Let magnum have an election with both candidates. As Monty said:

"... this is a great example of places where human judgement is better than rules."
Thanks,
Kyle

Anne


Moving on,
John



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/6527528c/attachment.html>

From dolph.mathews at gmail.com  Thu Sep 17 21:04:35 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Thu, 17 Sep 2015 16:04:35 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
Message-ID: <CAC=h7gUGSx4pyZjrefBXuTCgscZ_EzFeD74vQ_j6_KiJu7cJ3g@mail.gmail.com>

On Thu, Sep 17, 2015 at 3:15 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

>
>
> On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann <doug at doughellmann.com>
> wrote:
>
>> Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>>
>> > I think this is all superfluous however and we should simply encourage
>> > people to not wait until the last minute. Waiting to see who is
>> > running/what the field looks like isn't as important as standing up and
>> > saying you're interested in running.
>>
>> +1
>>
>> Doug
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> "My dog ate my homework...
> My car wouldn't start...
> I couldn't figure out what UTC time was...
>
> The guidelines seemed pretty clear:
> Any member of an election electorate can propose their candidacy for the
> same election until September 17, 05:59 UTC"
>
> That being said, a big analysis on date/time selection etc doesn't really
> seem warranted here or harping on the fact that something 'went wrong'.  I
> as a TC member have no problem saying "things happen" and those that have
> submitted candidacy albeit late and are unopposed are in.. no muss no
> fuss.  I *think* we're all reasonable adults and don't know that anybody
> had in mind that the TC would arbitrarily assign somebody that wasn't even
> listed as a PTL for one of the mentioned projects.
>

+1 I don't think there's a problem to be solved here by changing the
deadline for candidacies. We all understand UTC. We have a well advertised
and well understood process for resolving such issues already. No matter
the deadline, that process will still be there.


>
> Moving on,
> John
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/9d94ad9f/attachment.html>

From adrian.otto at rackspace.com  Thu Sep 17 21:08:52 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Thu, 17 Sep 2015 21:08:52 +0000
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please don't hurt me)
In-Reply-To: <B3A42ACB-C486-480C-BA6A-151DF4A815D5@rackspace.com>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
 <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>
 <CABARBAZO71wyCWSgk0h7T-rH+yG_N0tV8apKARwgpp42z6uP8Q@mail.gmail.com>
 <B3A42ACB-C486-480C-BA6A-151DF4A815D5@rackspace.com>
Message-ID: <A2C58676-B9D6-4EFB-B7EC-582BAE8ECFFA@rackspace.com>

For posterity, I have recorded this guidance in our Contributing Wiki:

See the NOTE section under:

https://wiki.openstack.org/wiki/Magnum/Contributing#Identify_bugs

Excerpt:

"NOTE: If you are fixing something trivial, that is not actually a functional defect in the software, you can do that without filing a bug ticket, if you don't want it to be tracked when we tally this work between releases. If you do this, just mention it in the commit message that it's a trivial change that does not require a bug ticket. You can reference this guideline if it comes up in discussion during the review process. Functional defects should be tracked in bug tickets. New features should be tracked in blueprints. Trivial features may be tracked using a bug ticket marked as 'Wishlist' importance."

I hope that helps.

Adrian

On Sep 17, 2015, at 2:01 PM, Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>> wrote:

Let's apply sensible reason. If it's a new feature or a bug, it should be tracked against an artifact like a bug ticket or a blueprint. If it's truly trivial, we don't care. I can tell you that some of the worst bugs I have ever seen in my career had fixes that were about 4 bytes long. That did not make them any less serious.

If you are fixing an actual legitimate bug that has a three character fix, and you don't want it to be tracked, then as the submitter you can say so in the commit message. We can act accordingly going forward.

Adrian

On Sep 17, 2015, at 1:53 PM, Assaf Muller <amuller at redhat.com<mailto:amuller at redhat.com>> wrote:



On Thu, Sep 17, 2015 at 4:09 PM, Jeff Peeler <jpeeler at redhat.com<mailto:jpeeler at redhat.com>> wrote:

On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
I agree. Lots of projects have this issue. I submitted a bug fix once that literally was 3 characters long, and it took:
A short commit message, a long commit message, and a full bug report being filed and cross linked. The amount of time writing it up was orders of magnitude longer than the actual fix.

Seems a bit much...

Looking at this review, I'd go a step farther and argue that code cleanups like this one should be really really easy to get through. No one likes to do them, so we should be encouraging folks that actually do it. Not pile up roadblocks.

It is indeed frustrating. I've had a few similar reviews (in other projects - hopefully it's okay I comment here) as well. Honestly, I think if a given team is willing to draw the line as for what is permissible to commit without bug creation, then they should be permitted that freedom.

However, that said, I'm sure somebody is going to point out that come release time having the list of bugs fixed in a given release is handy, spelling errors included.

We've had the same debate in Neutron and we relaxed the rules. We don't require bugs for trivial changes. In fact, my argument has always been: Come release
time, when we say that the Neutron community fixed so and so bugs, we would be lying if we were to include fixing spelling issues in comments. That's not a bug.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/e0c21dee/attachment-0001.html>

From nkinder at redhat.com  Thu Sep 17 21:09:04 2015
From: nkinder at redhat.com (Nathan Kinder)
Date: Thu, 17 Sep 2015 14:09:04 -0700
Subject: [openstack-dev] [OSSN 0054] Potential Denial of Service in Horizon
	login
Message-ID: <55FB2BF0.70305@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Potential Denial of Service in Horizon login
- ---

### Summary ###
Horizon uses the Python based Django web framework. Older versions of
this framework allow an unauthorized user to fill up the session store
database causing a Horizon denial of service. A fix for Django is
available but works only with Kilo and later versions of Horizon.

### Affected Services / Software ###
Horizon, Django, Essex, Folsom, Grizzly, Havana, Icehouse, Juno

### Discussion ###
Django will record the session ID of web requests even when the request
is from an unauthorized user. This allows an attacker to populate the
session store database with invalid session information, potentially
causing a denial of service condition by filling the database with
useless session information.

### Recommended Actions ###
The Django developers have released a fix for this issue which is
included in software versions 1.4.21, 1.7.9 and 1.8.3. Horizon
administrators should ensure that they are using an up to date version
of Django to avoid being affected by this vulnerability.

Versions of Horizon prior to Kilo cannot run with the fixed version of
Django, and may require updating to a newer version of OpenStack.
Administrators can test if their deployment is affected by attempting to
inject invalid sessions into the session store database using the
following script and then querying the session store database to check
if multiple 'aaaaa' session IDs were recorded.

- ---- begin example ----
for i in {1..100}
do
  curl -b "sessionid=aaaaa;" http://HORIZON_IP/auth/login/ &> /dev/null
done
- ---- end example ----
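The "query the session store" half of that check can be sketched as follows. This is purely illustrative, not part of the OSSN: it uses an in-memory SQLite table shaped like Django's default database-backed session table (django_session); against a real deployment you would run the equivalent COUNT query on the production session database instead, and look for an unexpectedly large number of anonymous sessions.

```python
import sqlite3

# Stand-in for the real session store: an in-memory SQLite DB with a
# table shaped like Django's default django_session table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE django_session ("
    "session_key VARCHAR(40) PRIMARY KEY, "
    "session_data TEXT, expire_date DATETIME)"
)

# Simulate the junk rows the curl loop above could leave behind on a
# vulnerable deployment (each unauthenticated request adds a record).
for i in range(100):
    conn.execute(
        "INSERT INTO django_session VALUES (?, '', '2015-10-01')",
        ("junk-%d" % i,),
    )

# A sudden jump in this count after running the injection script
# suggests the deployment is affected.
count = conn.execute("SELECT COUNT(*) FROM django_session").fetchone()[0]
print(count)
```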

If possible, affected users should upgrade to the Kilo or newer release
of Horizon, allowing them to use the fixed version of Django.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0054
Django fix :
https://www.djangoproject.com/weblog/2015/jul/08/security-releases/
Django CVE : CVE-2015-5143
Original LaunchPad Bug : https://bugs.launchpad.net/horizon/+bug/1457551
OpenStack Security ML : openstack-security at lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJV+yvwAAoJEJa+6E7Ri+EVCREIAIcQLlU0SJeabkcJKy1tBa5A
lNrQzBG4t4XRal0lyOl27hPIJ5JYmTIopnix1mNpuuYFV39zzC2vvPk04Znhz2bX
Tqw07UXu18wN8iI+/nt6V4fCIBSrnBmNv87ilNvCug0CilgnJdjYiBJqnueHZCMR
bdNmHnhDiq3LdumKmtTumEQ2LH1iUx6YJJuoUjdbtA8oE2kQ3wiTkCV2hWZaoDQx
fBEzyGzJt5RfGouzDK4pp1oG5eOMZHx1rGNMwJbta6pt4Gc6NGNNQowBLusap9Ko
W2xRN3fHOFmrjNJi8dK+RDX4lPexVh7TDY4/Jox5fTTZFwMcrCU+1d9QajwhEcU=
=KHI6
-----END PGP SIGNATURE-----


From nik.komawar at gmail.com  Thu Sep 17 21:09:04 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 17 Sep 2015 17:09:04 -0400
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
Message-ID: <55FB2BF0.1000409@gmail.com>



On 9/17/15 3:51 PM, Morgan Fainberg wrote:
>
>
> On Thu, Sep 17, 2015 at 12:00 PM, Kevin Benton <blak111 at gmail.com
> <mailto:blak111 at gmail.com>> wrote:
>
>     It guarantees that if you hit the date deadline local time, that
>     you won't miss the deadline. It doesn't matter if there are extra
>     hours afterwards. The idea is that it gets rid of the need to do
>     time zone conversions.
>
>     If we are trying to do some weird optimization where everyone
>     wants to submit in the last 60 seconds, then sure AOE isn't great
>     for that because you still have to convert. It doesn't seem to me
>     like that's what we are trying to do though.
>
> Alternatively you give a UTC time (which all of our meetings are in
> anyway) and set the deadline. Maybe we should be setting the deadline
> to the western-most timezone (UTC-11/-12?) 23:59 as the deadline. This
> would simply do what you're stating without having to explain AOE more
> concretely than "submit by 23:59 your tz day X".
>

> I think this is all superfluous however and we should simply encourage
> people to not wait until the last minute. Waiting to see who is
> running/what the field looks like isn't as important as standing up
> and saying you're interested in running.
>

I like that you have used the word encourage; however, I will have to
disagree here. Life in general can't permit that for everyone -- important
things can pop up at unexpected times, someone may be on vacation and late
getting back, etc. And on top of that, people can get caught up,
particularly this week. The time-line for proposals seems a good idea in
general.

> You shouldn't worry about hurting anyone's feelings by running and
> more importantly most PTLs will be happy to have someone else shoulder
> some of the weight; by tossing your name into the ring it signals
> you're willing to help out in this regard. I know that as a PTL (an
> outgoing one at that) having this clear signal would raise an
> individual towards the top of the list for asking if they want the
> responsibility delegated to them as it was indicated they already
> wanted to be part of leadership for the project.
>
> Just a $0.02 on the timing concerns.
>
> --Morgan
>  
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 My 2 pennies worth.

-- 

Thanks,
Nikhil



From sharis at Brocade.com  Thu Sep 17 21:10:24 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Thu, 17 Sep 2015 21:10:24 +0000
Subject: [openstack-dev]  [Congress] Congress Usecases VM
Message-ID: <01fd2fae2e244c379f69833afd265c27@HQ1WP-EXMB12.corp.brocade.com>

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air - but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases - I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer the same structure for the other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/14ef678d/attachment.html>

From nik.komawar at gmail.com  Thu Sep 17 21:15:15 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Thu, 17 Sep 2015 17:15:15 -0400
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FABF5E.4000204@redhat.com>
References: <55FABF5E.4000204@redhat.com>
Message-ID: <55FB2D63.3010402@gmail.com>

My 2 cents:

I like to solve problems, and this seems to be a common problem at many
conferences, seminars, etc. The usual way of solving it is to have a
grace period: a last-minute extension of the proposal deadline, possibly
for an unknown period of time and unannounced. However, in this
particular case the voting schedule published on the wiki can be a
spoiler. Maybe there's a workaround I'm not thinking of, but anyway,
it's out there now.

On 9/17/15 9:25 AM, Tristan Cacqueray wrote:
> PTL Nomination is now over. The official candidate list is available on
> the wiki[0].
>
> There are 5 projects without candidates, so according to this
> resolution[1], the TC will have to appoint a new PTL for Barbican,
> MagnetoDB, Magnum, Murano and Security.
>
> There are 7 projects that will have an election: Cinder, Glance, Ironic,
> Keystone, Mistral, Neutron and Oslo. The details for those will be
> posted tomorrow after Tony and I setup the CIVS system.
>
> Thank you,
> Tristan
>
>
> [0]:
> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
> [1]:
> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From nkinder at redhat.com  Thu Sep 17 21:17:09 2015
From: nkinder at redhat.com (Nathan Kinder)
Date: Thu, 17 Sep 2015 14:17:09 -0700
Subject: [openstack-dev] [OSSN 0058] Cinder LVMISCSIDriver allows possible
 unauthenticated mounting of volumes
Message-ID: <55FB2DD5.90604@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Cinder LVMISCSIDriver allows possible unauthenticated mounting of volumes
- ---

### Summary ###
When using the LVMISCSIDriver with Cinder, the credentials for CHAP
authentication are not formatted correctly in the tgtadm configuration
file. This leads to a condition where an operator will expect that
volumes can only be mounted with the authentication credentials when,
in fact, they can be mounted without the credentials.

### Affected Services / Software ###
Cinder, Icehouse

### Discussion ###
When requesting that LVMISCSIDriver based volumes use the CHAP
authentication protocol, Cinder will add the credentials for
authentication to the configuration file for the tgtadm
application. In pre-Juno versions of Cinder the key name for these
credentials is incorrect. This incorrect key name will cause tgtadm
to not properly parse those credentials.

Because the credentials are never successfully applied from the
configuration file, tgtadm will not enforce CHAP authentication on the
volumes that Cinder asks it to export, leaving them open to
unauthenticated access. Any instance on the same network as the volumes
can therefore mount them without providing credentials to the tgtadm
application.

This behavior can be confirmed by displaying the accounts associated
with a volume. For volumes which have authentication enabled, you will
see an account listed in the output of the tgtadm application. The
account names created by Cinder will be randomly generated and will
appear as 20 character strings. To print the information for volumes
the following command can be run on nodes with attached volumes:

    # tgtadm --lld iscsi --op show --mode target

User names will be found in the `Account information:` section.

### Recommended Actions ###
If possible, Cinder should be updated to the Juno release or newer. If
this is not possible, then the following guidance will help mitigate
unwanted traffic to the affected nodes.

1. Identify the nodes that will be exposing Cinder volumes with the
LVMISCSIDriver and the nodes that will need to attach those volumes.

2. Implement either security group port rules or iptables rules on
the nodes exposing the volumes to only allow traffic through port 3260
from nodes that will need to attach volumes.
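
As a sketch of step 2 expressed as host firewall rules (the addresses
are illustrative, not part of this OSSN):

```
# On each node exporting volumes: accept iSCSI traffic (TCP port 3260)
# only from the known attaching nodes, and drop everything else.
iptables -A INPUT -p tcp --dport 3260 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 3260 -j DROP
```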

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0058
Original LaunchPad Bug : https://bugs.launchpad.net/cinder/+bug/1329214
OpenStack Security ML : openstack-security at lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJV+y3VAAoJEJa+6E7Ri+EVgg0IAILU1NC/FtRrbWNuEJg3Yryu
3lpxtYuAfVB9tEkJo9jTDeNjCY3Uz+iAMzI5ztjUrD7XVffR9PKB2X0IMTkGqxgT
l6KKPndmhtaD191yuomFQIn30H1cPaNg45ZTrqAJG4yR5ho4xgArQ9qhCtfOid8Q
gS85XraV56fB9Iw6helVGro6dCohz6S8rZksnSzw5rubHF2r56ZpzxgKQie8soOo
nh+XXNubUx8bY25TqCNEojtc7w2t2Ht6XxLHHq9e9JA13hmeO8t+OqzyyIduCOSl
tq42i+SEHACfn1zKoQw/02qNHsYbYtq94RTdavtZK6lkVdwc5DHPr9U7oAeU1Yw=
=9Kem
-----END PGP SIGNATURE-----


From carl at ecbaldwin.net  Thu Sep 17 21:29:13 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Thu, 17 Sep 2015 15:29:13 -0600
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <CAO_F6JMn5Jv74NL1TBpciM7PxdLMT0SXN+6+9WUaju=bGqW5ig@mail.gmail.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
 <SN1PR02MB16950E11BFF66EB83AC3F361995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMn5Jv74NL1TBpciM7PxdLMT0SXN+6+9WUaju=bGqW5ig@mail.gmail.com>
Message-ID: <CALiLy7qmqrSE=s=brFRw0uY5RV+_ZZ+fDqH8QH6J2TKy=EgA5w@mail.gmail.com>

On Thu, Sep 17, 2015 at 12:35 PM, Kevin Benton <blak111 at gmail.com> wrote:
>>Also I believe that (c) is already true for Neutron external networks -
>> i.e. it doesn't make sense to assign a floating IP to an instance that is
>> directly on an external network. Is that correct?
>
> Well not floating IPs from the same external network, but you could
> conceivably have layers where one external network has an internal Neutron
> router interface that leads to another external network via a Neutron
> router.

Today, a floating IP implies NAT to the instance's private IP. Without it,
the instance won't understand why it's getting traffic destined for some
random public address. Also, today's floating IP implementation in Neutron
requires a router between the external network and the private network with
the instance.

Kris Lindgren described to me something that they do with floating ips that
doesn't use NAT. They inject routes to route traffic straight to the
instance, adjust allowed-address-pairs to allow the traffic in to the port
and then do something to inject the address in to the VM instance (iiuc)
for the instance to accept the traffic directly.

We've also thrown around the idea of doing NAT on the compute host.  I
don't know.  The point is that I think there may be some room to improve
or expand on the idea of the floating ip.

Carl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/874271ad/attachment.html>

From danehans at cisco.com  Thu Sep 17 21:33:15 2015
From: danehans at cisco.com (Daneyon Hansen (danehans))
Date: Thu, 17 Sep 2015 21:33:15 +0000
Subject: [openstack-dev] [magnum] Discovery
In-Reply-To: <1442516764564.93792@RACKSPACE.COM>
References: <D21ED83B.67DF4%danehans@cisco.com>
 <1442516764564.93792@RACKSPACE.COM>
Message-ID: <D2207AEE.67F7B%danehans@cisco.com>


From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Thursday, September 17, 2015 at 12:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discovery


Hey Daneyon,


I'm fairly partial towards #2 as well. Though, I'm wondering if it's possible to take it a step further. Could we run etcd in each Bay without using the public discovery endpoint? And then, configure Swarm to simply use the internal etcd as its discovery mechanism? This could cut one of our external service dependencies and make it easier to run Magnum in an environment with locked down public internet access.

Thanks for your feedback. #2 was the preferred direction of the Magnum Networking Subteam as well. Therefore, I have started working on [1] to move this option forward. As part of this effort, I am slightly refactoring the Swarm heat templates to more closely align them with the k8s templates. Until tcammann completes the larger template refactor [2], I think this will help devs more easily implement features across all bay types, distros, etc.

We can have etcd and Swarm use a local discovery backend. I have filed bp [3] to establish this effort. As a first step towards [3], I will modify Swarm to use etcd for discovery. However, etcd will still use public discovery for [1]. Either I or someone from the community will need to attack etcd local discovery as a follow-on. For the longer term, we may want to consider exposing a --discovery-backend attribute and optionally passing labels to allow users to modify default configurations of the --discovery-backend.

[1] https://review.openstack.org/#/c/224367/
[2] https://blueprints.launchpad.net/magnum/+spec/generate-heat-templates
[3] https://blueprints.launchpad.net/magnum/+spec/bay-type-discovery-options
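
As a rough sketch of what [3] could eventually look like on a swarm bay
node (addresses, ports and the etcd key path are illustrative, not a
working Magnum template):

```
# Point the standalone swarm manager and agents at the bay-local etcd
# instead of Docker Hub token discovery:
swarm manage -H tcp://0.0.0.0:3376 etcd://10.0.0.2:2379/swarm
swarm join --advertise=10.0.0.5:2375 etcd://10.0.0.2:2379/swarm
```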




Anyways, I think #2 could be a good start that we could iterate on later if need be.


--Andrew


________________________________
From: Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com>>
Sent: Wednesday, September 16, 2015 11:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discovery

All,

While implementing the flannel --network-driver for swarm, I have come across an issue that requires feedback from the community. Here is the breakdown of the issue:

  1.  Flannel [1] requires etcd to store network configuration. Meeting this requirement is simple for the kubernetes bay types since kubernetes requires etcd.
  2.  A discovery process is needed for bootstrapping etcd. Magnum implements the public discovery option [2].
  3.  A discovery process is also required to bootstrap a swarm bay type. Again, Magnum implements a publicly hosted (Docker Hub) option [3].
  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm and etcd discovery.
  5.  Etcd cannot be implemented in swarm because discovery_url is associated with swarm's discovery process and not etcd's.

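(For context on item 1: flannel will not start until its network config exists in etcd, which is why etcd bootstrapping matters here; the values below are illustrative only.)

```
# Seed flannel's network config in etcd, then start flanneld against it:
etcdctl set /coreos.com/network/config '{ "Network": "10.100.0.0/16" }'
flanneld -etcd-endpoints=http://127.0.0.1:2379
```
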
Here are a few options on how to overcome this obstacle:

  1.  Make the discovery_url more specific, for example etcd_discovery_url and swarm_discovery_url. However, this option would needlessly expose both discovery URLs to all bay types.
  2.  Swarm supports etcd as a discovery backend. This would mean discovery is similar for both bay types. With both bay types using the same mechanism for discovery, it will be easier to provide a private discovery option in the future.
  3.  Do not support flannel as a network-driver for k8s bay types. This would require adding support for a different driver that supports multi-host networking such as libnetwork. Note: libnetwork is only implemented in the Docker experimental release: https://github.com/docker/docker/tree/master/experimental.

I lean towards #2, but there may be other options, so feel free to share your thoughts. I would like to obtain feedback from the community before proceeding in a particular direction.

[1] https://github.com/coreos/flannel
[2] https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
[3] https://docs.docker.com/swarm/discovery/

Regards,
Daneyon Hansen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/d5348de6/attachment.html>

From dtyzhnenko at mirantis.com  Thu Sep 17 22:02:27 2015
From: dtyzhnenko at mirantis.com (Dmitry Tyzhnenko)
Date: Fri, 18 Sep 2015 01:02:27 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
 fuel-qa(devops) core
In-Reply-To: <CAJWtyAMM-MOgazDkJNAS6FAiioXP3RCeE-W0LCNN+ZJi3-_C_w@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
 <CAFNR43NRtM3FrdBuPBFuEwLAjmGQfKvqfVhMotqnSdYK8YyCsA@mail.gmail.com>
 <CAJWtyAMM-MOgazDkJNAS6FAiioXP3RCeE-W0LCNN+ZJi3-_C_w@mail.gmail.com>
Message-ID: <CAMZD-t-5p6bP2jvWUyofkqnC5D+fjwG=fWkLzBrzJUfzvcYHKA@mail.gmail.com>

+1
On 15 Sep 2015 at 14:26, "Tatyana Leontovich" <
tleontovich at mirantis.com> wrote:

> +1
>
> Regards,
> Tatyana
>
> On Tue, Sep 15, 2015 at 12:16 PM, Alexander Kostrikov <
> akostrikov at mirantis.com> wrote:
>
>> +1
>>
>> On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <
>> aurlapova at mirantis.com> wrote:
>>
>>> Folks,
>>> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>>>
>>> Denis spent three months on the Fuel BugFix team; his velocity was between
>>> 150-200% per week. Thanks to his efforts we have resolved those old issues
>>> with time sync and Ceph's clock skew. Denis's ideas constantly help us to
>>> improve our functional system suite.
>>>
>>> Fuelers, please vote for Denis!
>>>
>>> Nastya.
>>>
>>> [1]
>>> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>> Kind Regards,
>>
>> Alexandr Kostrikov,
>>
>> Mirantis, Inc.
>>
>> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>>
>>
>> Tel.: +7 (495) 640-49-04
>> Tel.: +7 (925) 716-64-52
>>
>> Skype: akostrikov_mirantis
>>
>> E-mail: akostrikov at mirantis.com
>>
>> *www.mirantis.com <http://www.mirantis.ru/>*
>> *www.mirantis.ru <http://www.mirantis.ru/>*
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/b744fa9c/attachment.html>

From john.griffith8 at gmail.com  Thu Sep 17 22:06:51 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Thu, 17 Sep 2015 16:06:51 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FAF8DF.2070901@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
Message-ID: <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>

On Thu, Sep 17, 2015 at 11:31 AM, Eric Harney <eharney at redhat.com> wrote:

> On 09/17/2015 05:00 AM, Duncan Thomas wrote:
> > On 16 September 2015 at 23:43, Eric Harney <eharney at redhat.com> wrote:
> >
> >> Currently, at least some options set in [DEFAULT] don't apply to
> >> per-driver sections, and require you to set them in the driver section
> >> as well.
> >>
> >
> > This is extremely confusing behaviour. Do you have any examples? I'm not
> > sure if we can fix it without breaking people's existing configs but I
> > think it is worth trying. I'll add it to the list of things to talk about
> > briefly in Tokyo.
> >
>
> The most recent place this bit me was with iscsi_helper.
>
> If cinder.conf has:
>
> [DEFAULT]
> iscsi_helper = lioadm
> enabled_backends = lvm1
>
> [lvm1]
> volume_driver = ...LVMISCSIDriver
> # no iscsi_helper setting
>
>
> You end up with c-vol showing "iscsi_helper = lioadm", and
> "lvm1.iscsi_helper = tgtadm", which is the default in the code, and not
> the default in the configuration file.
>
> I agree that this is confusing, I think it's also blatantly wrong.  I'm
> not sure how to fix it, but I think it's some combination of your
> suggestions above and possibly having to introduce new option names.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I'm not sure why that's "blatantly wrong"; this is a side effect of having
multiple backends enabled, and it's by design, really.  Any option that is
defined in driver.py needs to be set in the actual enabled-backend stanza,
IIRC.  This includes iscsi_helper, volume_clear, etc.
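
To spell that out for Eric's example, a working config would look
something like this sketch (driver path per the Icehouse-era LVM
driver; repeat the option in the backend stanza):

```
[DEFAULT]
iscsi_helper = lioadm
enabled_backends = lvm1

[lvm1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# driver-level options are not inherited from [DEFAULT]; set them here
iscsi_helper = lioadm
```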

Having the "global conf" settings intermixed with the backend sections
caused a number of issues when we first started working on this.  That's
part of why we require the "self.configuration" usage all over in the
drivers.  Each driver instantiation is its own independent entity.

I haven't looked at this for a long time, but if something has changed or
I'm missing something my apologies.  We can certainly consider changing it,
but because of the way we do multi-backend I'm not exactly sure how you
would do this, or honestly why you would want to.

John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/cbc409ba/attachment.html>

From davanum at gmail.com  Thu Sep 17 22:48:25 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 17 Sep 2015 18:48:25 -0400
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
Message-ID: <CANw6fcEsSF1khQwHZEn5sT+pq2kvCre2ayoBzqe47pZPfHMSeg@mail.gmail.com>

In the Fuel project, we recently ran into a few issues with Apache2 +
mod_wsgi as we switched Keystone to run in that environment: we started
a large battery of tests and ended up seeing these issues. Please see [1]
and [2] for examples.

Looking deep into the Apache2 issues, specifically around "apache2ctl
graceful", module loading/unloading, and the hooks used by mod_wsgi [3],
I started wondering if Apache2 + mod_wsgi is the "right" solution and if
there is something better that people are already using.

One data point that keeps coming up is, all the CI jobs use Apache2 +
mod_wsgi so it must be the best solution....Is it? If not, what is?
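
For anyone curious what the uWSGI side looks like, a Keystone-ish setup
is roughly this much config (entirely illustrative -- the wsgi entry
point and paths differ per deployment):

```
[uwsgi]
wsgi-file = /usr/local/bin/keystone-wsgi-public
master = true
processes = 4
threads = 2
http-socket = 0.0.0.0:5000
```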

Thanks,
Dims

PS: I will leave issues with memcached + keystone for another email later :)


[1] https://bugs.launchpad.net/mos/+bug/1491576
[2] https://bugs.launchpad.net/fuel/+bug/1493372
[3] https://bugs.launchpad.net/fuel/+bug/1493372/comments/35
-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150917/69d83eca/attachment.html>


From dborodaenko at mirantis.com  Thu Sep 17 23:14:08 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Thu, 17 Sep 2015 16:14:08 -0700
Subject: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on
 slave nodes
In-Reply-To: <CA+vYeFoP_dGpP3xr04Mr-iM00jG7AjnXMx+-f5OTED-BVeB7-g@mail.gmail.com>
References: <CAEg2Y8NL9aLTSu6Lp+=ci1Q7F3M0+X-Yi_Snb3i_q3kT3C0f6A@mail.gmail.com>
 <CAFLqvG5qYpe0HX_vYa96qy+00W9T8LA8waZwG0uqKSsAv37R9g@mail.gmail.com>
 <CA+vYeFq-y7ygoveRRtk9ASVybe8buq2XhrPJeJz6F-yxTqsgpw@mail.gmail.com>
 <CABzFt8N12ADSuafDBZHg+QTHqPGjXPigzCvYZ1LE48KZJSGzyA@mail.gmail.com>
 <CAFLqvG5P2Ckp61nB9woU=AP3e0rFPfVsDg81HJadM=v2bc6=5w@mail.gmail.com>
 <CABzFt8O1VH8DfOCZAP=yaS_UicaSd=6BNGS=46T5LOOa2H++xA@mail.gmail.com>
 <CAFkLEwoXR-C-etHpS-KDZokYkP8CfS8q9UXjKYF0eNYo6yOpcQ@mail.gmail.com>
 <CA+HkNVsj5m4CduyZ9duTgCsp-BKP-Dwt85iZ01=txPBxVasANg@mail.gmail.com>
 <CAHAWLf1SaOK_6RSSdgPkjUMdf+gPxRewQ-ygzLTXNiEN+oWqRg@mail.gmail.com>
 <CA+vYeFoP_dGpP3xr04Mr-iM00jG7AjnXMx+-f5OTED-BVeB7-g@mail.gmail.com>
Message-ID: <20150917231408.GO4451@localhost>

+1 to y'all :)

We already have a blueprint to enable building Fuel packages with
Perestroika:
https://blueprints.launchpad.net/fuel/+spec/build-fuel-packages-using-perestroika

Between that and packaging Perestroika itself as a self-sufficient tool
that a developer can easily set up and run locally (which we also need a
blueprint for), I think we'd have enough tools to distribute
fuel-library to target node as packages both in production, and, without
too much additional effort, as part of development workflow.

-DmitryB

On Fri, Sep 11, 2015 at 10:59:40PM +0300, Andrey Danin wrote:
> I support this proposal, but I just wanted to mention that we'll lose an
> easy way to develop manifests. I agree that manifests in this case are no
> different from Neutron code, for instance. But anyway, I +1 this,
> especially with Vova Kuklin's additions.
> 
> On Thu, Sep 10, 2015 at 12:25 PM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
> 
> > Folks
> >
> > I have a strong +1 for the proposal to decouple master node and slave
> > nodes.
> > Here are the strengths of this approach:
> > 1) We can always decide which particular node runs which particular set of
> > manifests. This will allow us to apply/roll back changes
> > node-by-node. This is very important from an operations perspective.
> > 2) We can decouple master and slave node manifests and not drag a new
> > library version onto the master node when it is not needed. This allows us
> > to decrease the probability of regressions.
> > 3) This makes life easier for the user - you just run 'apt-get/yum
> > install' instead of some difficult to digest `mco` command.
> >
> > The only weakness that I see here is the one mentioned by Andrey. I think we
> > can fix it by providing developers with clean and simple way of building
> > library package on the fly. This will make developers life easier enough to
> > work with proposed approach.
> >
> > Also, we need to provide ways for better UX, like provide one button/api
> > call for:
> >
> > 1) update all manifests on particular nodes (e.g. all or only a part of
> > nodes of the cluster) to particular version
> > 2)  revert all manifests back to the version which is provided by the
> > particular GA release
> > 3) <more things we need to think of>
> >
> > So far I would mark need for simple package-building system for developer
> > as a dependency for the proposed change, but I do not see any other way
> > than proceeding with it.
> >
> >
> >
> > On Thu, Sep 10, 2015 at 11:50 AM, Sergii Golovatiuk <
> > sgolovatiuk at mirantis.com> wrote:
> >
> >> Oleg,
> >>
> >> Alex gave a perfect example regarding support folks when they need to fix
> >> something really quick. It's client's choice what to patch or not. You may
> >> like it or not, but it's client's choice.
> >>
> >> On 10 Sep 2015, at 09:33, Oleg Gelbukh <ogelbukh at mirantis.com> wrote:
> >>
> >> Alex,
> >>
> >> I absolutely understand the point you are making about need for
> >> deployment engineers to modify things 'on the fly' in customer environment.
> >> It's makes things really flexible and lowers the entry barrier for sure.
> >>
> >> However, I would like to note that in my opinion this kind on 'monkey
> >> patching' is actually a bad practice for any environments other than dev
> >> ones. It immediately leads to emergence of unsupportable frankenclouds. I
> >> would greet any modification to the workflow that will discourage people
> >> from doing that.
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz <aschultz at mirantis.com>
> >> wrote:
> >>
> >>> Hey Vladimir,
> >>>
> >>>
> >>>
> >>>> Regarding plugins: plugins are welcome to install specific additional
> >>>> DEB/RPM repos on the master node, or just configure cluster to use
> >>>> additional online repos, where all necessary packages (including plugin
> >>>> specific puppet manifests) are to be available. Current granular deployment
> >>>> approach makes it easy to append specific pre-deployment tasks
> >>>> (master/slave does not matter). Correct me if I am wrong.
> >>>>
> >>>>
> >>> Don't get me wrong, I think it would be good to move to a fuel-library
> >>> distributed via package only.  I'm bringing these points up to indicate
> >>> that there are many other things that live in the fuel-library puppet path
> >>> than just the fuel-library package.  The plugin example is just one place
> >>> that we will need to invest in further design and work to move to the
> >>> package only distribution.  What I don't want is some partially executed
> >>> work that only works for one type of deployment and creates headaches for
> >>> the people actually having to use fuel.  The deployment engineers and
> >>> customers who actually perform these actions should be asked about
> >>> packaging and their comfort level with this type of requirements.  I don't
> >>> have a complete understanding of the all the things supported today by the
> >>> fuel plugin system so it would be nice to get someone who is more familiar
> >>> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> >>> don't think we are building fuel-library debs at this time either.  So
> >>> without some work on both sides, we cannot move to just packages.
> >>>
> >>>
> >>>> Regarding flexibility: having several versioned directories with puppet
> >>>> modules on the master node, having several fuel-libraryX.Y packages
> >>>> installed on the master node makes things "exquisitely convoluted" rather
> >>>> than flexible. Like I said, it is flexible enough to use mcollective, plain
> >>>> rsync, etc. if you really need to do things manually. But we have
> >>>> convenient service (Perestroika) which builds packages in minutes if you
> >>>> need. Moreover, in the near future (by 8.0) Perestroika will be
> >>>> available as an application independent from CI. So, what is wrong with
> >>>> building fuel-library package? What if you want to troubleshoot nova (we
> >>>> install it using packages)? Should we also use rsync for everything else
> >>>> like nova, mysql, etc.?
> >>>>
> >>>>
> >>> Yes, we do have a service like Perestroika to build packages for us.
> >>> That doesn't mean everyone else does or has access to do that today.
> >>> Setting up a build system is a major undertaking and making that a hard
> >>> requirement to interact with our product may be a bit much for some
> >>> customers.  In speaking with some support folks, there are times when files
> >>> have to be munged to get around issues because there is no package or
> >>> things are on fire so they can't wait for a package to become available for
> >>> a fix.  We need to be careful not to impose limits without proper
> >>> justification and due diligence.  We already build the fuel-library
> >>> package, so there's no reason you couldn't try switching the rsync to
> >>> install the package if it's available on a mirror.  I just think you're
> >>> going to run into the issues I mentioned which need to be solved before we
> >>> could just mark it done.
> >>>
> >>> -Alex
> >>>
> >>>
> >>>
> >>>> Vladimir Kozhukalov
> >>>>
> >>>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz <aschultz at mirantis.com>
> >>>> wrote:
> >>>>
> >>>>> I agree that we shouldn't need to sync as we should be able to just
> >>>>> update the fuel-library package. That being said, I think there might be a
> >>>>> few issues with this method. The first issue is with plugins and how to
> >>>>> properly handle the distribution of the plugins as they may also include
> >>>>> puppet code that needs to be installed on the other nodes for a deployment.
> >>>>> Currently I do not believe we install the plugin packages anywhere except
> >>>>> the master and when they do get installed there may be some post-install
> >>>>> actions that are only valid for the master.  Another issue is being
> >>>>> flexible enough to allow for deployment engineers to make custom changes
> >>>>> for a given environment.  Unless we can provide an improved process to
> >>>>> allow for people to provide in place modifications for an environment, we
> >>>>> can't do away with the rsync.
> >>>>>
> >>>>> If we want to go completely down the package route (and we probably
> >>>>> should), we need to make sure that all of the other pieces that currently
> >>>>> go together to make a complete fuel deployment can be updated in the same
> >>>>> way.
> >>>>>
> >>>>> -Alex
> >>>>>
> >>>>>
> >>>
> >>> __________________________________________________________________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com <http://www.mirantis.ru/>
> > www.mirantis.ru
> > vkuklin at mirantis.com
> >
> >
> >
> 
> 
> -- 
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake




From thingee at gmail.com  Thu Sep 17 23:39:19 2015
From: thingee at gmail.com (Mike Perez)
Date: Thu, 17 Sep 2015 16:39:19 -0700
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
 communications
In-Reply-To: <CALrdpTXkri+-V96JCjQ61G-GeoNqOYxbDJXx=RaqG1+gcWRaPQ@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <55F979A9.9040206@openstack.org>
 <CAD0KtVHyeuFTTJbMFcyvPD5ALZa3Pi5agxtoHXtFz-8MVP0L0A@mail.gmail.com>
 <CALrdpTXkri+-V96JCjQ61G-GeoNqOYxbDJXx=RaqG1+gcWRaPQ@mail.gmail.com>
Message-ID: <20150917233919.GA3727@gmail.com>

On 10:00 Sep 17, Shamail Tahir wrote:
> On Wed, Sep 16, 2015 at 11:30 AM, Anne Gentle <annegentle at justwriteclick.com
> >
> > On Wed, Sep 16, 2015 at 9:16 AM, Thierry Carrez <thierry at openstack.org>
> > wrote:

<snip>

> >>
> >> Just brainstorming out loud, maybe we need to have a base team of people
> >> committed to drive such initiatives to completion, a team that
> >> individuals could leverage when they have a cross-project idea, a team
> >> that could define a few cycle goals and actively push them during the
> >> cycle.
> >>
> >
This is very similar to how the Product WG structure is set up as well.  We
> have cross-project liaisons (CPLs) that participate in the project team
> meetings and also user-story owners who cover the overall goal of
> completing the user story.  The user story owner leverages the cross
> project liaisons to help with tracking component/project specific
> dependencies for implementing the user story but the user story owner is
> looking at the overall state of the bigger picture.   Our CPLs work with
> multiple user-story owners but the user story owner to user story mapping
> is 1:1.

+1

Before I got to Shamail's email, I was also thinking this sounds exactly like what
the product working group is doing.


-- 
Mike Perez


From thingee at gmail.com  Thu Sep 17 23:50:35 2015
From: thingee at gmail.com (Mike Perez)
Date: Thu, 17 Sep 2015 16:50:35 -0700
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
 communications
In-Reply-To: <55F979A9.9040206@openstack.org>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <55F979A9.9040206@openstack.org>
Message-ID: <20150917235035.GB3727@gmail.com>

On 16:16 Sep 16, Thierry Carrez wrote:
> Anne Gentle wrote:
> > [...]
> > What are some of the problems with each layer? 
> > 
> > 1. weekly meeting: time zones, global reach, size of cross-project
> > concerns due to multiple projects being affected, another meeting for
> > PTLs to attend and pay attention to
> 
> A lot of PTLs (or liaisons/lieutenants) skip the meeting, or will only
> attend when they have something to ask. Their time is precious and most
> of the time the meeting is not relevant for them, so why bother ? You
> have a few usual suspects attending all of them, but those people are
> cross-project-aware already so those are not the people that would
> benefit the most from the meeting.
> 
> This partial attendance makes the meeting completely useless as a way to
> disseminate information. It makes the meeting mostly useless as a way to
> get general approval on cross-project specs.
> 
> The meeting still is very useful IMHO to have more direct discussions on
> hot topics. So a ML discussion is flagged for direct discussion on IRC
> and we have a time slot already booked for that.

Content for the cross-project meeting is usually:

* Not ready for decisions.
* Lacking solutions.

A proposal in steps of how cross project ideas start, to something ready for
the cross project IRC meeting, then the TC:

1) An idea starts from either or both:
   a) Mailing list discussion.
   b) A patch to a single project (until it's flagged that this patch could be
      beneficial to other projects)
2) OpenStack Spec is proposed - discussions happen in gerrit from here on out.
   Not on the mailing list. Keep encouraging discussions back to gerrit to keep
   everything in one place in order to avoid confusion with having to fish
   for some random discussion elsewhere.
3) Once enough consensus happens, an agenda item is posted to the cross-project IRC
   meeting.
4) Final discussions happen in the meeting. If consensus is still met by
   interested parties who attend, it moves to the TC. If there is a lack of
   consensus, it goes back to gerrit and the process repeats.

With this process, we should have fewer meetings. Fewer meetings:

* Awesome
* Makes this meeting more meaningful when it happens because decisions are
  potentially going to be agreed and passed to the TC!

If a cross project spec is not getting attention, don't post it to the list for
attention. We get enough email and it'll probably be lost. Instead, let the
product working group recognize this and reach out to the projects that this
spec would benefit, to bring meaningful attention to the spec.

For vertical alignment, interaction like IRC is not necessary. A very brief,
bullet point of collected information from projects that have anything
interesting is given in a weekly digest email to the list. If anyone has
questions or wants more information, they can use their own time to ask that
project team.

Potentially, if we kept everything to the spec on gerrit, and had the product
working group bringing needed attention to specs, we could eliminate the cross
project meeting.

-- 
Mike Perez


From sharis at Brocade.com  Fri Sep 18 00:03:06 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Fri, 18 Sep 2015 00:03:06 +0000
Subject: [openstack-dev]  [Congress] Congress Usecases VM
Message-ID: <0ea2b032200b4ac48a8a784de7bcf08a@HQ1WP-EXMB12.corp.brocade.com>

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a macbook air - but it should work on other platforms as well. I chose virtualbox since it is free.

Please send me your use cases - I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first use case; I would prefer the same structure for the other use cases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/5f073e0b/attachment.html>

From Cathy.H.Zhang at huawei.com  Fri Sep 18 00:32:57 2015
From: Cathy.H.Zhang at huawei.com (Cathy Zhang)
Date: Fri, 18 Sep 2015 00:32:57 +0000
Subject: [openstack-dev] [neutron] Service Chain project IRC meeting
 minutes - 09/17/2015
In-Reply-To: <A2C96F6779E6A041BC7023CC207FC9942179FA78@SJCEML702-CHM.china.huawei.com>
References: <A2C96F6779E6A041BC7023CC207FC99421771959@SJCEML701-CHM.china.huawei.com>
 <A2C96F6779E6A041BC7023CC207FC994217775A5@SJCEML701-CHM.china.huawei.com>
 <A2C96F6779E6A041BC7023CC207FC99421793DDB@SJCEML701-CHM.china.huawei.com>
 <A2C96F6779E6A041BC7023CC207FC9942179FA78@SJCEML702-CHM.china.huawei.com>
Message-ID: <A2C96F6779E6A041BC7023CC207FC994217F803E@SJCEML701-CHM.china.huawei.com>

Hi Everyone,

Thanks for joining the service chaining project meeting on 9/17/2015. Here is the link to the meeting logs:
http://eavesdrop.openstack.org/meetings/service_chaining/2015/.

Due to some connection glitch, the HTML format of the meeting log is not quite complete and you may want to refer to the following full log for complete discussion.
http://eavesdrop.openstack.org/meetings/service_chaining/2015/service_chaining.2015-09-17-17.00.log.html

I will be on business trip and will not be able to run the next project IRC meeting. Louis Fourie will host the next project meeting.

Thanks,
Cathy

From ayoung at redhat.com  Fri Sep 18 00:36:23 2015
From: ayoung at redhat.com (Adam Young)
Date: Thu, 17 Sep 2015 20:36:23 -0400
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
Message-ID: <55FB5C87.9080700@redhat.com>

On 09/17/2015 06:48 PM, Davanum Srinivas wrote:
> In the fuel project, we recently ran into a couple of issues with 
> Apache2 + mod_wsgi as we switched Keystone to run under it. Please see [1] and 
> [2].
>
> Looking deep into Apache2 issues specifically around "apache2ctl 
> graceful" and module loading/unloading and the hooks used by mod_wsgi 
> [3]. I started wondering if Apache2 + mod_wsgi is the "right" solution 
> and if there was something else better that people are already using.
>
> One data point that keeps coming up is, all the CI jobs use Apache2 + 
> mod_wsgi so it must be the best solution....Is it? If not, what is?
>
> Thanks,
> Dims

I'd be surprised if switching web servers fixed more problems than it 
causes. The issues with Apache seem solvable; is 
there any reason to think that they are not?

>
>
> [1] https://bugs.launchpad.net/mos/+bug/1491576
> [2] https://bugs.launchpad.net/fuel/+bug/1493372
> [3] https://bugs.launchpad.net/fuel/+bug/1493372/comments/35
> -- 
> Davanum Srinivas :: https://twitter.com/dims
>
>


From mgagne at internap.com  Fri Sep 18 00:38:54 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Thu, 17 Sep 2015 20:38:54 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
Message-ID: <55FB5D1E.2080706@internap.com>

Hi,

While debugging LP bug #1491579 [1], we identified [2] an issue where an
API sitting being a proxy performing SSL termination would not generate
the right redirection. The protocol ends up being the wrong one (http
instead of https) and this could hang your request indefinitely if
tcp/80 is not opened and a firewall drops your connection.

I suggested [3] adding support for the X-Forwarded-Proto header, thinking
Nova didn't support it yet. In fact, someone suggested setting the
public_endpoint config instead.

So today I stumbled across this review [4] which added the
secure_proxy_ssl_header config to Nova. It allows the API to detect SSL
termination based on the (suggested) header X-Forwarded-Proto just like
previously suggested.

I also found this bug report [5] (opened in 2014) which also happens to
complain about bad URLs when API is sitting behind a proxy.

Multiple projects applied patches to try to fix the issue (based on
Launchpad comments):

* Glance added public_endpoint config
* Cinder added public_endpoint config
* Heat added secure_proxy_ssl_header config (through
heat.api.openstack:sslmiddleware_filter)
* Nova added secure_proxy_ssl_header config
* Manila added secure_proxy_ssl_header config (through
oslo_middleware.ssl:SSLMiddleware.factory)
* Ironic added public_endpoint config
* Keystone added secure_proxy_ssl_header config (LP #1370022)

As you can see, there is a lot of inconsistency between projects. (There
are more, but let's start with this one.)

My wish is for a common and consistent way for *ALL* OpenStack APIs to
support the same solution for this common problem. Let me tell you (and
I guess I can speak for all operators), we will be very happy to have
ONE config to remember and set for *ALL* OpenStack services.

How can we get the ball rolling so we can fix it together once and for
all in a timely fashion?

[1] https://bugs.launchpad.net/python-novaclient/+bug/1491579
[2] https://bugs.launchpad.net/python-novaclient/+bug/1491579/comments/15
[3] https://bugs.launchpad.net/python-novaclient/+bug/1491579/comments/17
[4] https://review.openstack.org/#/c/206479/
[5] https://bugs.launchpad.net/glance/+bug/1384379
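
[Editorial note: the secure_proxy_ssl_header approach described above boils
down to a small piece of WSGI middleware. A minimal sketch follows, similar
in spirit to oslo_middleware's SSL middleware; the class name and defaults
are illustrative, not any project's actual API.]

```python
# Minimal sketch of SSL-termination-aware WSGI middleware. If the
# terminating proxy sets X-Forwarded-Proto, rewrite wsgi.url_scheme so
# the application generates redirects/links with the client-facing
# protocol. Names here are illustrative, not a real OpenStack API.
class ForwardedProtoMiddleware(object):
    def __init__(self, app, header='HTTP_X_FORWARDED_PROTO'):
        self.app = app
        self.header = header  # WSGI environ key for the proxy header

    def __call__(self, environ, start_response):
        proto = environ.get(self.header, '').lower()
        if proto in ('http', 'https'):
            # The scheme seen by the app now matches what the client used.
            environ['wsgi.url_scheme'] = proto
        return self.app(environ, start_response)
```

Wrapping any WSGI app with this is enough for url rewriting helpers that
consult `wsgi.url_scheme` to emit https:// URLs behind a terminating proxy.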

-- 
Mathieu


From ben at swartzlander.org  Fri Sep 18 00:48:27 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Thu, 17 Sep 2015 20:48:27 -0400
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
 communications
In-Reply-To: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
Message-ID: <55FB5F5B.3060107@swartzlander.org>

On 09/15/2015 10:50 AM, Anne Gentle wrote:
> Hi all,
>
> What can we do to make the cross-project meeting more helpful and 
> useful for cross-project communications? I started with a proposal to 
> move it to a different time, which morphed into an idea to alternate 
> times. But, knowing that we need to layer communications I wonder if 
> we should troubleshoot cross-project communications further? These are 
> the current ways cross-project communications happen:
>
> 1. The weekly meeting in IRC
> 2. The cross-project specs and reviewing those
> 3. Direct connections between team members
> 4. Cross-project talks at the Summits
>
> What are some of the problems with each layer?
>
> 1. weekly meeting: time zones, global reach, size of cross-project 
> concerns due to multiple projects being affected, another meeting for 
> PTLs to attend and pay attention to

I would actually love to attend the cross-project IRC meeting but it 
falls in a perfectly bad time for my time zone so I can never make it. 
When daylight saving time ends the first week of November, I'll start 
attending because it will be 1 hour earlier for me.

> 2. specs: don't seem to get much attention unless they're brought up 
> at weekly meeting, finding owners for the work needing to be done in a 
> spec is difficult since each project team has its own priorities
> 3. direct communications: decisions from these comms are difficult to 
> then communicate more widely, it's difficult to get time with busy PTLs
> 4. Summits: only happens twice a year, decisions made then need to be 
> widely communicated
>
> I'm sure there are more details and problems I'm missing -- feel free 
> to fill in as needed.
>
> Lastly, what suggestions do you have for solving problems with any of 
> these layers?
>
> Thanks,
> Anne
>
> -- 
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com <http://www.justwriteclick.com>
>
>


From ayip at vmware.com  Fri Sep 18 01:03:45 2015
From: ayip at vmware.com (Alex Yip)
Date: Fri, 18 Sep 2015 01:03:45 +0000
Subject: [openstack-dev] [Congress] hands on lab
Message-ID: <89e38d2f55c04a15bfcf964050203dfc@EX13-MBX-012.vmware.com>

Hi all,
I have created a VirtualBox VM that matches the Vancouver hands-on lab here:

https://drive.google.com/file/d/0B94E7u1TIA8oTEdOQlFERkFwMUE/view?usp=sharing

There's also an updated instruction document here:

https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub

If you have some time, please try it out to see if it all works as expected.
thanks, Alex


From jim at jimrollenhagen.com  Fri Sep 18 01:50:30 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Thu, 17 Sep 2015 18:50:30 -0700
Subject: [openstack-dev] [ironic] Liberty soft freeze
Message-ID: <20150918015030.GP21846@jimrollenhagen.com>

Hi folks,

It's time for our soft freeze for Liberty, as planned. Core reviewers
should do their best to refrain from landing risky code. We'd like to
ship 4.2.0 as the candidate for stable/liberty next Thursday, September
24.

Here's the things we still want to complete in 4.2.0:
https://launchpad.net/ironic/+milestone/4.2.0

Note that zapping is no longer there; sadly, after lots of writing and
reviewing code, we want to rethink how we implement this. We've talked
about being able to go from MANAGEABLE->CLEANING->MANAGEABLE with a list
of clean steps. Same idea, but without the word zapping, the new DB
fields, etc. At any rate, it's been bumped to Mitaka to give us time to
figure it out.

This may also mean in-band RAID configuration may not land; the
interface in general did land, and drivers may do out-of-band
configuration. We assumed that in-band RAID would be done through
zapping. However, if folks can agree on how to do it during automated
cleaning, I'd be happy to get that in Liberty if the code is not too
risky. If it is risky, we'll need to punt it to Mitaka as well.

I'd like to see the rest of the work on the milestone completed during
Liberty, and I hope everyone can jump in and help us to do that.

Thanks in advance!

// jim


From jim at jimrollenhagen.com  Fri Sep 18 02:04:52 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Thu, 17 Sep 2015 19:04:52 -0700
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
Message-ID: <20150918020452.GQ21846@jimrollenhagen.com>

On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> In the fuel project, we recently ran into a couple of issues with Apache2 +
> mod_wsgi as we switched Keystone to run under it. Please see [1] and [2].
> 
> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> something else better that people are already using.
> 
> One data point that keeps coming up is, all the CI jobs use Apache2 +
> mod_wsgi so it must be the best solution....Is it? If not, what is?

Disclaimer: it's been a while since I've cared about performance with a
web server in front of a Python app.

IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
on again. In general, I seem to remember it being thought of as a bit
old and crusty, but mostly working.

At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
and saw a significant performance increase. This was a Django app. uwsgi
is fairly straightforward to operate and comes loaded with a myriad of
options[1] to help folks make the most of it. I've played with Ironic
behind uwsgi and it seemed to work fine, though I haven't done any sort
of load testing. I'd encourage folks to give it a shot. :)

Of course, uwsgi can also be run behind Apache2, if you'd prefer.

gunicorn[2] is another good option that may be worth investigating; I
personally don't have any experience with it, but I seem to remember
hearing it has good eventlet support.

// jim

[0] https://uwsgi-docs.readthedocs.org/en/latest/
[1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
[2] http://gunicorn.org/
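
[Editorial note: for anyone who wants to experiment with the uwsgi setup
described above, a bare-bones configuration might look like the following;
the module and socket values are placeholders, not a tested OpenStack
deployment.]

```ini
[uwsgi]
# Illustrative only: point "module" at your service's WSGI entry point.
module = myservice.wsgi:application
master = true
processes = 4
threads = 2
# TCP (or unix) socket for nginx/Apache to proxy to.
socket = 127.0.0.1:8999
die-on-term = true
```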


From jim at jimrollenhagen.com  Fri Sep 18 02:08:53 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Thu, 17 Sep 2015 19:08:53 -0700
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <55FB5D1E.2080706@internap.com>
References: <55FB5D1E.2080706@internap.com>
Message-ID: <20150918020853.GR21846@jimrollenhagen.com>

On Thu, Sep 17, 2015 at 08:38:54PM -0400, Mathieu Gagn? wrote:
> Hi,
> 
> While debugging LP bug #1491579 [1], we identified [2] an issue where an
> API sitting behind a proxy performing SSL termination would not generate
> the right redirection. The protocol ends up being the wrong one (http
> instead of https) and this could hang your request indefinitely if
> tcp/80 is not opened and a firewall drops your connection.
> 
> I suggested [3] adding support for the X-Forwarded-Proto header, thinking
> Nova didn't support it yet. In fact, someone suggested setting the
> public_endpoint config instead.
> 
> So today I stumbled across this review [4] which added the
> secure_proxy_ssl_header config to Nova. It allows the API to detect SSL
> termination based on the (suggested) header X-Forwarded-Proto just like
> previously suggested.
> 
> I also found this bug report [5] (opened in 2014) which also happens to
> complain about bad URLs when API is sitting behind a proxy.
> 
> Multiple projects applied patches to try to fix the issue (based on
> Launchpad comments):
> 
> * Glance added public_endpoint config
> * Cinder added public_endpoint config
> * Heat added secure_proxy_ssl_header config (through
> heat.api.openstack:sslmiddleware_filter)
> * Nova added secure_proxy_ssl_header config
> * Manila added secure_proxy_ssl_header config (through
> oslo_middleware.ssl:SSLMiddleware.factory)
> * Ironic added public_endpoint config
> * Keystone added secure_proxy_ssl_header config (LP #1370022)
> 
> As you can see, there is a lot of inconsistency between projects. (There
> are more, but let's start with this one.)
> 
> My wish is for a common and consistent way for *ALL* OpenStack APIs to
> support the same solution for this common problem. Let me tell you (and
> I guess I can speak for all operators), we will be very happy to have
> ONE config to remember and set for *ALL* OpenStack services.
> 
> How can we get the ball rolling so we can fix it together once and for
> all in a timely fashion?

Totally agree. This seems like maybe a good thing for the API working
group to put together.

FWIW, in Ironic, we added the public_endpoint config to fix the bug
quickly, but we'd really prefer to support both that and the
secure_proxy_ssl_header option. It would use public_endpoint if it is
set, then fall back to the header config, then fall back to
request_host like it was before.
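
[Editorial note: that fallback order can be sketched as a small helper;
the function and variable names below are hypothetical, not Ironic's
actual implementation.]

```python
def effective_endpoint(conf, environ, request_host_url):
    """Pick the base URL to advertise: public_endpoint if set, then the
    SSL-proxy header, then the request host. Hypothetical sketch only."""
    if conf.get('public_endpoint'):
        return conf['public_endpoint']
    header = conf.get('secure_proxy_ssl_header', 'HTTP_X_FORWARDED_PROTO')
    proto = environ.get(header)
    if proto in ('http', 'https'):
        # Keep the host/port from the request, swap in the real scheme.
        return proto + '://' + environ.get('HTTP_HOST', '')
    return request_host_url
```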

// jim


From nkinder at redhat.com  Fri Sep 18 02:22:23 2015
From: nkinder at redhat.com (Nathan Kinder)
Date: Thu, 17 Sep 2015 19:22:23 -0700
Subject: [openstack-dev] [OSSN 0056] Cached keystone tokens may be accepted
	after revocation
Message-ID: <55FB755F.3070108@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Cached keystone tokens may be accepted after revocation
- ---

### Summary ###
Keystone auth_token middleware token and revocation list caching is used
to reduce the load on the keystone service. The default token cache time
is set to 300 seconds and the default token revocation list cache time
is set to 10 seconds. This creates a misleading expectation that revoked
tokens will not be accepted more than 10 seconds after revocation;
however, the maximum validity of a cached token must be assumed to be the
cache duration. System owners should make a risk based decision to
balance token lifespan with performance requirements and if the use of
revoked tokens is an unacceptable risk then caching should be disabled.

### Affected Services / Software ###
OpenStack Services that use Keystone middleware: Juno, Kilo, Liberty

### Discussion ###
There are multiple options for configuring token caching in the keystone
auth_token middleware. These options include token_cache_time,
revocation_cache_time and check_revocations_for_cached, with each option
affecting the different stages of token caching and revocation.
Depending on the configuration of the previously mentioned options, an
attacker could use a compromised token for up to token_cache_time
seconds before the token becomes disabled. To mitigate this
vulnerability, a
change was issued in Juno where the default Token Revocation List (TRL)
cache time was reduced to 10 seconds and the
check_revocations_for_cached option was added. The addition of a token
to a TRL does not guarantee that cached tokens will be rejected
considering the operational nature of token caching. For instance, if
the check_revocations_for_cached is disabled then tokens are valid after
caching token_cache_time or the designated expiration given to the
token. Otherwise (if check_revocations_for_cached is enabled) then
tokens are rejected after the revocation_cache_time.

System owners should weigh the risk of an attacker using a revoked token
versus the performance implications of reducing the token cache time.

### Recommended Actions ###
Review the implications of the default 300 second token cache time and
any risks associated with the use of revoked tokens for up to that cache
time. If this is unacceptable, reduce the cache time to reduce the
attack window or disable token caching entirely.
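
[Editorial note: as a concrete illustration, the relevant auth_token
options live in each service's [keystone_authtoken] section; the values
below are illustrative only, not recommendations.]

```ini
[keystone_authtoken]
# Worst-case lifetime of a revoked-but-cached token (seconds).
token_cache_time = 60
# How long the cached revocation list is trusted (seconds).
revocation_cache_time = 10
# Check cached tokens against the revocation list on every request.
check_revocations_for_cached = true
```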

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0056
Original LaunchPad Bug :
https://bugs.launchpad.net/python-keystoneclient/+bug/1287301
OpenStack Security ML : openstack-security at lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJV+3VfAAoJEJa+6E7Ri+EVyUkH/iEcTFUsBrCPMm04vUtCXABj
lBT2QOGkpTj5rWC6Al5MsnrU4GeU/1hp3hQur1VJAxcro7w56N0eDQJvrpUc0PsP
yPePeiD9N6+WNVy1CTIYd852zepUlBBexOyBDt4k4g4vDn+xQppcm0QxYP3p3u2A
hp+Q6SfWsTpiPNsrMf/HngbGPtnoI92pGE/SGIXjDSMl/jaADmZzapQLEIaXbuXq
4G/3DbiS6oiGcC/Y5aB/Q7dl/baFoBc9wRxIQNEZQq9nhzOGUOshgrDciYMQcW4U
7hxDcs+W4Jnvdn2kvGPxJ8ZBq6z2pEUJf/7/qAUZtCwW7sGWeWvYs/84kZVH4Oo=
=FU3H
-----END PGP SIGNATURE-----


From slukjanov at mirantis.com  Fri Sep 18 02:39:36 2015
From: slukjanov at mirantis.com (Sergey Lukjanov)
Date: Fri, 18 Sep 2015 05:39:36 +0300
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
Message-ID: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>

Hi folks,

I'd like to announce that we're running the PTL and Component Leads
elections. Detailed information is available on the wiki. [0]

Project Team Lead: Manages day-to-day operations, drives the project
team goals, resolves technical disputes within the project team. [1]

Component Lead: Defines architecture of a module or component in Fuel,
reviews design specs, merges majority of commits and resolves conflicts
between Maintainers or contributors in the area of responsibility. [2]

Fuel has two large sub-teams, with roughly comparable codebases, that
need dedicated component leads: fuel-library and fuel-python. [2]

Nominees propose their candidacy by sending an email to the
openstack-dev at lists.openstack.org mailing list, with the subject:
"[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
(for example, "[fuel] fuel-library lead candidacy").

Time line:

PTL elections
* September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
* September 29 - October 8: PTL elections

Component leads elections (fuel-library and fuel-python)
* October 9 - October 15: Open candidacy for Component leads positions
* October 16 - October 22: Component leads elections

[0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
[1] https://wiki.openstack.org/wiki/Governance
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
[3] https://lwn.net/Articles/648610/

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

From rlooyahoo at gmail.com  Fri Sep 18 02:57:13 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Thu, 17 Sep 2015 22:57:13 -0400
Subject: [openstack-dev] [ironic] Liberty soft freeze
In-Reply-To: <20150918015030.GP21846@jimrollenhagen.com>
References: <20150918015030.GP21846@jimrollenhagen.com>
Message-ID: <CA+5K_1F0hnznhZQ9UTfEgRkBu3qSEJ0OVwvYoMzJqGVf6gtMQg@mail.gmail.com>

On 17 September 2015 at 21:50, Jim Rollenhagen <jim at jimrollenhagen.com>
wrote:<snip>

This may also mean in-band RAID configuration may not land; the
> interface in general did land, and drivers may do out-of-band
> configuration. We assumed that in-band RAID would be done through
> zapping. However, if folks can agree on how to do it during automated
> cleaning, I'd be happy to get that in Liberty if the code is not too
> risky. If it is risky, we'll need to punt it to Mitaka as well.
>

Ramesh had worked on this but removed the part that hooks into automated
cleaning [0]. One open question was: if a target RAID config isn't
specified, should the create-raid-config clean step be skipped
(considered successful), or should the cleaning operation fail?
After a bit of discussion on IRC[1], some of us think that having the clean
operation fail makes sense.

There are two patches that will help with this [2] & [3]. The code was
extracted from [0]. I think it may need more work but I won't be able to
look into it until next week. In the meantime, hopefully someone else
(Ramesh? anyone?) will pick it up :)
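As a rough illustration of the skip-vs-fail semantics above (function and field names here are hypothetical, not Ironic's actual interface), a clean step that fails rather than silently succeeds when no target config was given might look like:

```python
class CleaningError(Exception):
    """Raised to fail the cleaning operation for a node."""

def create_raid_configuration(node):
    """Hypothetical clean step: build RAID per the node's target config.

    Per the IRC discussion, an absent target_raid_config fails the
    clean operation instead of being skipped and reported successful.
    """
    target = node.get("target_raid_config")
    if not target:
        raise CleaningError(
            "No target_raid_config specified for node %s; failing the "
            "clean step rather than silently skipping it." % node["uuid"])
    # ... apply `target` to the hardware here ...
    return {"applied": target}

# A node with no target config fails cleaning.
node = {"uuid": "abc-123"}
try:
    create_raid_configuration(node)
    outcome = "succeeded"
except CleaningError:
    outcome = "failed"
print(outcome)  # -> failed
```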

--ruby

[0] https://review.openstack.org/#/c/198238/, removed from revision 21
[1] 2015-09-17T21:21:54 ish,
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-09-17.log
[2] https://review.openstack.org/#/c/222264/
[3] https://review.openstack.org/#/c/224938/

From skywalker.nick at gmail.com  Fri Sep 18 03:20:28 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Fri, 18 Sep 2015 11:20:28 +0800
Subject: [openstack-dev] [dragonflow] Low OVS version for Ubuntu
In-Reply-To: <CABARBAYV-UD=ntjBej+aWf5mBHqxEhmMzuGnvKUHhFbqEo-GrA@mail.gmail.com>
References: <CALFEDVcXE18WAWS=64sOQkyLn6ob2R0Nx+G2jSP3xOo37Zcubg@mail.gmail.com>
 <55FA6AB4.5030605@linux.vnet.ibm.com>
 <CAG9LJa554aMjAF06G_8U5AiGTOuZ1ODtci+6ykfNO-Vaet_B4Q@mail.gmail.com>
 <CABARBAYV-UD=ntjBej+aWf5mBHqxEhmMzuGnvKUHhFbqEo-GrA@mail.gmail.com>
Message-ID: <CALFEDVeh9O8m122UA4bkXNBa=ZaQvSogQo8APbZr2jucWX5Z1w@mail.gmail.com>

Thanks for all your responses. I just wondered whether there was a quick
path for me. I'll rebuild it from source then.

On Thu, Sep 17, 2015 at 11:50 PM, Assaf Muller <amuller at redhat.com> wrote:
> Another issue is that the gate is running with Ubuntu 14.04, which is
> running OVS 2.0. This means we can't test
> certain features in Neutron (for example, the OVS ARP responder).
>
> On Thu, Sep 17, 2015 at 4:17 AM, Gal Sagie <gal.sagie at gmail.com> wrote:
>>
>> Hello Li Ma,
>>
>> Dragonflow uses OpenFlow 1.3 to communicate with OVS, and that's why we
>> need OVS 2.3.1.
>> As suggested, you can build it from source.
>> For Fedora 21, OVS 2.3.1 is part of the default yum repository.
>>
>> You can ping me on IRC (gsagie at freenode) if you need any additional
>> help compiling OVS.
>>
>> Thanks
>> Gal.
>>
>> On Thu, Sep 17, 2015 at 10:24 AM, Sudipto Biswas
>> <sbiswas7 at linux.vnet.ibm.com> wrote:
>>>
>>>
>>>
>>> On Thursday 17 September 2015 12:22 PM, Li Ma wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I tried to run devstack to deploy dragonflow, but I failed with lower
>>>> OVS version.
>>>>
>>>> I used Ubuntu 14.10 server, but the official package of OVS is 2.1.3,
>>>> which is much lower than the required version (2.3.1+).
>>>>
>>>> So, can anyone provide a Ubuntu repository that contains the correct
>>>> OVS packages?
>>>
>>>
>>> Why don't you just build the OVS you want from here:
>>> http://openvswitch.org/download/
>>>
>>>> Thanks,
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.nick at gmail.com


From adrian.otto at rackspace.com  Fri Sep 18 03:39:11 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Fri, 18 Sep 2015 03:39:11 +0000
Subject: [openstack-dev] [magnum] Discovery
In-Reply-To: <D22068B1.1B7C3%eguz@walmartlabs.com>
References: <D21ED83B.67DF4%danehans@cisco.com>
 <1442516764564.93792@RACKSPACE.COM>,<D22068B1.1B7C3%eguz@walmartlabs.com>
Message-ID: <65278211-710F-4EFB-BA49-422B536CFD71@rackspace.com>

In the case where a private cloud is used without access to the Internet, you do have the option of running your own etcd, and configuring that to be used instead.

Adding etcd to every bay should be optional, as a subsequent feature, but should be controlled by a flag in the Baymodel that defaults to off so the public discovery service is used. It might be nice to be able to configure Magnum in an isolated mode which would change the system level default for that flag from off to on.

Maybe the Baymodel resource attribute should be named local_discovery_service.

Should turning this on also set the minimum node count for the bay to 3? If not, etcd will not be highly available.

Adrian
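To make the flag idea above concrete, here is a sketch of the selection logic. All names here (local_discovery_service, the system-level default, the endpoint strings) are assumptions drawn from this thread, not Magnum code:

```python
PUBLIC_DISCOVERY = "https://discovery.etcd.io/new?size=%d"

def choose_discovery(baymodel, system_default_local=False, size=3):
    """Pick a discovery endpoint for a new bay.

    If the (hypothetical) local_discovery_service flag is on -- either
    explicitly in the Baymodel or via an isolated-mode system default --
    use the bay's own etcd; otherwise fall back to the public service.
    """
    local = baymodel.get("local_discovery_service", system_default_local)
    if local:
        # etcd needs >= 3 members to tolerate a node failure (quorum of 2),
        # hence the minimum-node-count question above.
        size = max(size, 3)
        return ("local-etcd", size)
    return (PUBLIC_DISCOVERY % size, size)

print(choose_discovery({}))                                    # public, size 3
print(choose_discovery({"local_discovery_service": True}, size=1))  # local, bumped to 3
```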

> On Sep 17, 2015, at 1:01 PM, Egor Guz <EGuz at walmartlabs.com> wrote:
> 
> +1 for no longer using the public discovery endpoint; most private
> cloud VMs don't have access to the internet, and the operator must run
> an etcd instance somewhere just for discovery.
> 
> --
> Egor
> 
> From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Date: Thursday, September 17, 2015 at 12:06
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [openstack-dev] [magnum] Discovery
> 
> 
> Hey Daneyon,
> 
> 
> I'm fairly partial towards #2 as well. Though, I'm wondering if it's
> possible to take it a step further. Could we run etcd in each Bay without
> using the public discovery endpoint? And then configure Swarm to simply
> use the internal etcd as its discovery mechanism? This could cut one of
> our external service dependencies and make it easier to run Magnum in an
> environment with locked-down public internet access.
> 
> 
> Anyways, I think #2 could be a good start that we could iterate on later if need be.
> 
> 
> --Andrew
> 
> 
> ________________________________
> From: Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com>>
> Sent: Wednesday, September 16, 2015 11:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum] Discovery
> 
> All,
> 
> While implementing the flannel --network-driver for swarm, I have come across an issue that requires feedback from the community. Here is the breakdown of the issue:
> 
>  1.  Flannel [1] requires etcd to store network configuration. Meeting this requirement is simple for the kubernetes bay types since kubernetes requires etcd.
>  2.  A discovery process is needed for bootstrapping etcd. Magnum implements the public discovery option [2].
>  3.  A discovery process is also required to bootstrap a swarm bay type. Again, Magnum implements a publicly hosted (Docker Hub) option [3].
>  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm and etcd discovery.
>  5.  Etcd cannot be implemented in swarm because discovery_url is associated with swarm's discovery process and not etcd's.
> 
> Here are a few options on how to overcome this obstacle:
> 
>  1.  Make the discovery_url more specific, for example etcd_discovery_url and swarm_discovery_url. However, this option would needlessly expose both discovery URLs to all bay types.
>  2.  Swarm supports etcd as a discovery backend. This would mean discovery is similar for both bay types. With both bay types using the same mechanism for discovery, it will be easier to provide a private discovery option in the future.
>  3.  Do not support flannel as a network-driver for k8s bay types. This would require adding support for a different driver that supports multi-host networking such as libnetwork. Note: libnetwork is only implemented in the Docker experimental release: https://github.com/docker/docker/tree/master/experimental.
> 
> I lean towards #2, but there may be other options, so feel free to share your thoughts. I would like to obtain feedback from the community before proceeding in a particular direction.
> 
> [1] https://github.com/coreos/flannel
> [2] https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
> [3] https://docs.docker.com/swarm/discovery/
> 
> Regards,
> Daneyon Hansen
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From skywalker.nick at gmail.com  Fri Sep 18 04:11:48 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Fri, 18 Sep 2015 12:11:48 +0800
Subject: [openstack-dev] [neutron] [oslo.privsep] Any progress on privsep?
Message-ID: <CALFEDVehrHj+syJFDocOLG30X6xEVM5wApbSWS2y-kc=tq-dFw@mail.gmail.com>

Hi stackers,

Currently we are discussing the possibility of using a pure Python
library to configure networking in Neutron [1]. We have found that it is
impossible to do this without privsep, because we run external commands
that cannot be replaced by Python calls under rootwrap.

Privsep has been merged in the Liberty cycle. I just wonder how it is going on.

[1] https://bugs.launchpad.net/neutron/+bug/1492714
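For context, the privilege-separation pattern privsep implements can be illustrated with plain stdlib Python. This is only the concept -- a helper process executing a whitelisted set of operations on the caller's behalf over a pipe -- not the oslo.privsep API, which (per the replies below) was still settling at the time:

```python
import multiprocessing

def set_link_up(ifname):
    # Stand-in for an operation that would really need elevated privileges.
    return "link %s up" % ifname

# Only whitelisted operations may be executed on the privileged side.
ALLOWED = {"set_link_up": set_link_up}

def helper(conn):
    """'Privileged' helper: run whitelisted calls received over the pipe."""
    while True:
        msg = conn.recv()
        if msg is None:
            break
        name, args = msg
        fn = ALLOWED.get(name)
        if fn is None:
            conn.send(("error", "operation not allowed: %s" % name))
        else:
            conn.send(("ok", fn(*args)))
    conn.close()

# Unix-only: use fork so the helper inherits this module's state.
ctx = multiprocessing.get_context("fork")
parent, child = ctx.Pipe()
proc = ctx.Process(target=helper, args=(child,))
proc.start()

parent.send(("set_link_up", ("eth0",)))
allowed_reply = parent.recv()   # ('ok', 'link eth0 up')
parent.send(("rm_rf", ("/",)))
denied_reply = parent.recv()    # rejected: not in the whitelist
parent.send(None)
proc.join()
print(allowed_reply, denied_reply)
```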

Thanks a lot,
-- 

Li Ma (Nick)
Email: skywalker.nick at gmail.com


From anteaya at anteaya.info  Fri Sep 18 04:12:48 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Thu, 17 Sep 2015 22:12:48 -0600
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
Message-ID: <55FB8F40.5040808@anteaya.info>

On 09/17/2015 11:26 AM, Kevin Benton wrote:
> Maybe it would be a good idea to switch to 23:59 AOE deadlines like many
> paper submissions use for academic conferences. That way there is never a
> need to convert TZs, you just get it in by the end of the day in your own
> time zone.

OpenStack uses UTC for activities overseen by the TC:
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20150512-utc.rst

Thanks,
Anita.
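For anyone who does want to sanity-check a deadline, the conversion is a few lines of stdlib Python. For example, comparing a UTC deadline against the "anywhere on earth" (UTC-12) reading Kevin mentions below (the date used here is just an example):

```python
from datetime import datetime, timedelta, timezone

# The posted deadline, in UTC (as TC-governed activities use).
deadline_utc = datetime(2015, 9, 17, 23, 59, tzinfo=timezone.utc)

# "Anywhere on Earth" reading: 23:59 in UTC-12, the last timezone
# to reach the end of the day.
aoe = timezone(timedelta(hours=-12))
deadline_aoe = datetime(2015, 9, 17, 23, 59, tzinfo=aoe)

extra = deadline_aoe - deadline_utc
print(extra)  # -> 12:00:00, i.e. AOE grants 12 more hours than UTC
```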

> On Sep 17, 2015 9:18 AM, "Edgar Magana" <edgar.magana at workday.com> wrote:
> 
>> Folks,
>>
>> Last year I found myself in the same position when I missed a deadline
>> because of my poor planning and a time zone nightmare!
>> However, the rules were very clear and I accepted my mistake. So, we should
>> assume that we do not have candidates and follow the already described
>> process. That said, this should be very easy for the TC to figure out; it is
>> just a matter of finding out who is interested in the PTL role and consulting
>> with the core team of that specific project.
>>
>> Just my two cents...
>>
>> Edgar
>>
>> From: Kyle Mestery
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: Thursday, September 17, 2015 at 8:48 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: Re: [openstack-dev] [all][elections] PTL nomination period is
>> now over
>>
>> On Thu, Sep 17, 2015 at 10:26 AM, Monty Taylor <mordred at inaugust.com>
>> wrote:
>>
>>> On 09/17/2015 04:50 PM, Anita Kuno wrote:
>>>
>>>> On 09/17/2015 08:22 AM, Matt Riedemann wrote:
>>>>
>>>>>
>>>>>
>>>>> On 9/17/2015 8:25 AM, Tristan Cacqueray wrote:
>>>>>
>>>>>> PTL Nomination is now over. The official candidate list is available on
>>>>>> the wiki[0].
>>>>>>
>>>>>> There are 5 projects without candidates, so according to this
>>>>>> resolution[1], the TC will have to appoint a new PTL for Barbican,
>>>>>> MagnetoDB, Magnum, Murano and Security.
>>>>>>
>>>>>
>>>>> This is devil's advocate, but why does a project technically need a PTL?
>>>>>   Just so that there can be a contact point for cross-project things,
>>>>> i.e. a lightning rod?  There are projects that do a lot of group
>>>>> leadership/delegation/etc, so it doesn't seem that a PTL is technically
>>>>> required in all cases.
>>>>>
>>>>
>>>> I think that is a great question for the TC to consider when they
>>>> evaluate options for action with these projects.
>>>>
>>>> The election officials are fulfilling their obligation according to the
>>>> resolution:
>>>>
>>>> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20141128-elections-process-for-leaderless-programs.rst
>>>>
>>>> If you read the verb there the verb is "can" not "must", I choose the
>>>> verb "can" on purpose for the resolution when I wrote it. The TC has the
>>>> option to select an appointee. The TC can do other things as well,
>>>> should the TC choose.
>>>>
>>>
>>> I agree- and this is a great example of places where human judgement is
>>> better than rules.
>>>
>>> For instance - one of the projects had a nominee but it missed the
>>> deadline, so that's probably an easy one.
>>>
>>> For one of the projects it had been looking dead for a while, so this is
>>> the final nail in the coffin from my POV
>>>
>>> For the other three - I know they're still active projects with people
>>> interested in them, so sorting them out will be fun!
>>>
>>>
>> This is the right approach. Human judgement #ftw! :)
>>
>>
>>>
>>>
>>>>
>>>>>
>>>>>> There are 7 projects that will have an election: Cinder, Glance,
>>>>>> Ironic,
>>>>>> Keystone, Mistral, Neutron and Oslo. The details for those will be
>>>>>> posted tomorrow after Tony and I setup the CIVS system.
>>>>>>
>>>>>> Thank you,
>>>>>> Tristan
>>>>>>
>>>>>>
>>>>>> [0]:
>>>>>>
>>>>>> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>>>>>>
>>>>>> [1]:
>>>>>>
>>>>>> http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> __________________________________________________________________________
>>>>>>
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe:
>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From openstack at lanabrindley.com  Fri Sep 18 04:28:41 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Fri, 18 Sep 2015 14:28:41 +1000
Subject: [openstack-dev] What's Up, Doc? 18 Sep 2015
Message-ID: <55FB92F9.3070504@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi everyone,

First of all, with PTL candidacy now closed, it appears that I will be
continuing to hold the docs reins through to Mitaka. Thank you to all of
you who contacted me personally over the last week or so, expressing
their support. I'm very honoured to be leading such a great team, and
I'm looking forward to continuing to serve you. As always, my door will
continue to be open to ideas, comments, and criticism, so please keep
telling me what you do and don't like!

We're less than a month away from the Liberty release now, and with the
Mitaka PTL question settled, I'll be working on Design Summit session
planning over the next few weeks. I've created an etherpad to gather
your ideas, more on that in this newsletter. In other news, testing is
now well underway on the Install Guide, I've knocked our existing
blueprints into order, been doing some thinking about the DocImpact
script, and worked with the speciality teams on determining the final
work items for Liberty.

== Progress towards Liberty ==

26 days to go

514 bugs closed so far for this release.

* RST conversion:
** Done.

* User Guides information architecture overhaul
** Underway. Some tasks to be held over to Mitaka.

* Greater focus on helping out devs with docs in their repo
** A certain amount of progress has been made here, and some wrinkles
sorted out which will improve this process for the future.

* Improve how we communicate with and support our corporate contributors
** I'm still trying to come up with great ideas for this, please let me
know what you think.

* Improve communication with Docs Liaisons
** I'm very pleased to see liaisons getting more involved in our bugs
and reviews. Keep up the good work!

* Clearing out old bugs
** Great work this week, with two bugs from the last two weeks closed
(thanks Deena and Tom!). Three new bugs this week.

== Mitaka Summit Prep ==

The schedule app for your phone is now available. This is a great tool
once you're on the ground at Summit:
https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=987216292&mt=8
(Apple) and
https://play.google.com/store/apps/details?id=org.sched.openstacksummitmay2015vancouver
(Android)

Docs have been allocated two fishbowls, four workrooms, and a half day
meetup, so now is the time to start gathering ideas for what you would
like to see us cover. Add your ideas to the etherpad here:
https://etherpad.openstack.org/p/Tokyo-DocsSessions

== Outstanding Specs ==

* Third Party Driver Content: This spec has been outstanding for a
while, and we've finally reached a consensus on how to move forward.
Please take a moment to review the spec here before we go ahead and
merge it: https://review.openstack.org/#/c/191041/

* DocImpact: These are my ideas for reducing the noise from the
DocImpact script, now up for discussion and ideas:
https://review.openstack.org/#/c/224420/

== Doc team meeting ==

The APAC meeting was held this week. The minutes are here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-09-16

The next meetings are:
US: Wednesday 23 September, 14:00:00 UTC
APAC: Wednesday 30 September, 00:30:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

== Spotlight bugs for this week ==

Let's show these three that we're never going to give them up, never
going to let them down:

https://bugs.launchpad.net/openstack-manuals/+bug/1293328 Review
Identity & LDAP integration

https://bugs.launchpad.net/openstack-manuals/+bug/1294726 Update Cloud
Administration Guide for OpenDaylight ML2 MechanismDriver

https://bugs.launchpad.net/openstack-manuals/+bug/1300960  Making DB
sanity checking be optional for DB migration

- --

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openstack at lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana
- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV+5L5AAoJELppzVb4+KUyEUcIANEavuRkTwSC44KEMVCvdzBw
Q/luFp6zO/mySSfys0mFnB3/3GmQFQqSqW3FqLc9hWHIQQxxmG8VW9/Bhym5GNmr
UMveZw+0winC4y41en1ZFFBz03uiQTvHVAFDmqjFHtgvWd/L3O9isrCsP3hnQ3JK
Dl4Huaw6ZowbRgYTK7Jaspuu/mS7Me8hUsvOCpBPvxQX3lYFBvOR93lwBEqNHp5B
gu05Cy50KnwbrY+TQfrTEMiYxtyia1nf9SHF+9/5fxkzq9i4/Ine9WKIuYsOYI8u
crJl9IPPKbgyMSp775RLQ9nFIXeh6cRDYWVVW29z1n1zYt9C7zcpZiDngkD8Zqg=
=5yfK
-----END PGP SIGNATURE-----


From gus at inodes.org  Fri Sep 18 04:40:34 2015
From: gus at inodes.org (Angus Lees)
Date: Fri, 18 Sep 2015 04:40:34 +0000
Subject: [openstack-dev] [neutron] [oslo.privsep] Any progress on
	privsep?
In-Reply-To: <CALFEDVehrHj+syJFDocOLG30X6xEVM5wApbSWS2y-kc=tq-dFw@mail.gmail.com>
References: <CALFEDVehrHj+syJFDocOLG30X6xEVM5wApbSWS2y-kc=tq-dFw@mail.gmail.com>
Message-ID: <CAPA_H3dBUaC0Rr-PRXGbJwRnyyj63infspgYrzEcvErh015WEA@mail.gmail.com>

On Fri, 18 Sep 2015 at 14:13 Li Ma <skywalker.nick at gmail.com> wrote:

> Hi stackers,
>
> Currently we are discussing the possibility of using a pure Python
> library to configure networking in Neutron [1]. We have found that it is
> impossible to do this without privsep, because we run external commands
> that cannot be replaced by Python calls under rootwrap.
>
> Privsep has been merged in the Liberty cycle. I just wonder how it is
> going on.
>
> [1] https://bugs.launchpad.net/neutron/+bug/1492714


Thanks for your interest :)  This entire cycle has been spent on the spec.
It looks like it might be approved very soon (got the first +2 overnight),
which will then unblock a string of "create new oslo project" changes.

During the spec discussion, the API was changed (for the better).  Now that
the discussion has settled down, I'm getting to work rewriting it following
the new API.  It took me about two weeks to write it the first time around
(almost all on the testing framework), so I'd expect something of similar
magnitude this time.

I don't make predictions about timelines that rely on the OpenStack review
process, but if you forced me I'd _guess_ it will be ready for projects to
try out early in M.

 - Gus

From gilles at redhat.com  Fri Sep 18 05:22:21 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Fri, 18 Sep 2015 15:22:21 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F9D7F6.2000604@puppetlabs.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
 <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
 <87oah44qtx.fsf@s390.unix4.net>
 <CAGnj6ave7EFDQkaFmZWDVTLOE0DQgkTksqh2QLqJe0aGkCXBpQ@mail.gmail.com>
 <55F9D7F6.2000604@puppetlabs.com>
Message-ID: <55FB9F8D.3080500@redhat.com>



On 17/09/15 06:58, Cody Herriges wrote:
> I wrote my first composite namevar type a few years ago, and all the
> magic is basically a single block of code inside the type...
> 
> https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L145-L169
> 
> It basically boils down to these three things:
> 
> * Pick your namevars
> (https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L49-L64)
> * Pick a delimiter
>   - Personally I'd use @ here since we are talking about domains
> * Build your self.title_patterns method, accounting for delimited names
> and arbitrary names.
> 
> While it looks like the README never got updated, the java_ks example
> supports both meaningful titles and arbitrary ones.
> 
> java_ks { 'activemq_puppetca_keystore':
>   ensure       => latest,
>   name         => 'puppetca',
>   certificate  => '/etc/puppet/ssl/certs/ca.pem',
>   target       => '/etc/activemq/broker.ks',
>   password     => 'puppet',
>   trustcacerts => true,
> }
> 
> java_ks { 'broker.example.com:/etc/activemq/broker.ks':
>   ensure      => latest,
>   certificate =>
> '/etc/puppet/ssl/certs/broker.example.com.pe-internal-broker.pem',
>   private_key =>
> '/etc/puppet/ssl/private_keys/broker.example.com.pe-internal-broker.pem',
>   password    => 'puppet',
> }
> 
> You'll notice the first being an arbitrary title and the second
> utilizing a ":" as a delimiter and omitting the name and target parameters.
> 
> Another code example can be found in the package type.
> 
> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/package.rb#L268-L291.
> 

Hi Cody,

Thank you for the example!

That's going to help, as the array expected back from
#self.title_patterns is really not intuitive!

Cheers,
Gilles
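For what it's worth, the title-splitting idea itself is easy to see outside of Puppet's internals. Here is a sketch in Python of what a title_patterns-style rule does with an '@' delimiter (as Cody suggests for domains), falling back to treating the whole title as an arbitrary name; the example titles are invented:

```python
import re

def split_title(title):
    """Mimic a composite-namevar title pattern.

    'user@domain'  -> {'name': 'user', 'domain': 'domain'}
    anything else  -> {'name': title}   (arbitrary, meaningless title)
    """
    m = re.match(r"^(?P<name>[^@]+)@(?P<domain>.+)$", title)
    if m:
        return m.groupdict()
    return {"name": title}

print(split_title("admin@Default"))     # {'name': 'admin', 'domain': 'Default'}
print(split_title("my_keystone_user"))  # {'name': 'my_keystone_user'}
```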


> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From duncan.thomas at gmail.com  Fri Sep 18 05:40:16 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Fri, 18 Sep 2015 08:40:16 +0300
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <20150918020853.GR21846@jimrollenhagen.com>
References: <55FB5D1E.2080706@internap.com>
 <20150918020853.GR21846@jimrollenhagen.com>
Message-ID: <CAOyZ2aF0KLuuEGaW+V7m3HB9HfpkV0K0NcvaW12yxezMBcV74w@mail.gmail.com>

On 18 Sep 2015 05:13, "Jim Rollenhagen" <jim at jimrollenhagen.com> wrote:

> FWIW, in Ironic, we added the public_endpoint config to fix the bug
> quickly, but we'd really prefer to support both that and the
> secure_proxy_ssl_header option. It would use public_endpoint if it is
> set, then fall back to the header config, then fall back to
> request_host like it was before.

This seems like the most sensible arrangement, and the one I'd be happy
to see for cinder. If the originator would like to file a bug against
cinder for the missing proto header support, then I don't expect any
resistance to it being fixed.

Is there anybody with the time to start analysing different projects'
config files and documenting the likely cross-project ones? I know glance
had a bunch of SSL-related ones that were richer than most projects', for
example.
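In code terms, the fallback chain Jim describes might look like this (option and attribute names here are approximations, not the exact Ironic config):

```python
def base_url(conf, request):
    """Pick the externally visible base URL for generated links.

    Order: explicit public_endpoint, then the scheme forwarded by an
    SSL-terminating proxy via a configurable header, then the host the
    WSGI request actually arrived on.
    """
    if conf.get("public_endpoint"):
        return conf["public_endpoint"]
    header = conf.get("secure_proxy_ssl_header", "X-Forwarded-Proto")
    proto = request["headers"].get(header, request["scheme"])
    return "%s://%s" % (proto, request["host"])

conf = {"secure_proxy_ssl_header": "X-Forwarded-Proto"}
req = {"scheme": "http", "host": "api.example.com",
       "headers": {"X-Forwarded-Proto": "https"}}
print(base_url(conf, req))   # proxy header wins -> https://api.example.com
print(base_url({"public_endpoint": "https://cloud.example.com"}, req))
```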

From flavio at redhat.com  Fri Sep 18 06:56:06 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 18 Sep 2015 08:56:06 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <1442519996-sup-6683@lrrr.local>
References: <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
Message-ID: <20150918065606.GJ29319@redhat.com>

On 17/09/15 16:00 -0400, Doug Hellmann wrote:
>Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>
>> I think this is all superfluous however and we should simply encourage
>> people to not wait until the last minute. Waiting to see who is
>> running/what the field looks like isn't as important as standing up and
>> saying you're interested in running.
>
>+1

Just want to +1 this. I'm probably going to be extreme here and suggest
that we should just shrink the candidacy period to one day (two at most).

The election period is announced much earlier in the cycle. Candidates
have six months to think about what should come next for their project.
Instead of having a full week to send candidacies, I'd just send a
reminder that candidacy day is coming and that everyone should get
their candidacies in.

If, for some reason, someone can't send their candidacy on the day it is
due, then it'd be possible to designate someone to submit it for review
on their behalf. Even better, since it's all done through Gerrit now,
that person can just send it in advance.

Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/b64c4dd2/attachment.pgp>

From flavio at redhat.com  Fri Sep 18 06:58:48 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 18 Sep 2015 08:58:48 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <1442519175-sup-7818@lrrr.local>
References: <55FABF5E.4000204@redhat.com> <55FACCBE.1080508@linux.vnet.ibm.com>
 <55FAD324.2020102@anteaya.info> <55FADB95.6090801@inaugust.com>
 <55FAE5F2.6050805@openstack.org> <1442519175-sup-7818@lrrr.local>
Message-ID: <20150918065848.GK29319@redhat.com>

On 17/09/15 15:47 -0400, Doug Hellmann wrote:
>Excerpts from Thierry Carrez's message of 2015-09-17 18:10:26 +0200:
>> Monty Taylor wrote:
>> > I agree- and this is a great example of places where human judgement is
>> > better than rules.
>> >
>> > For instance - one of the projects had a nominee but it missed the
>> > deadline, so that's probably an easy one.
>> >
>> > For one of the projects it had been looking dead for a while, so this is
>> > the final nail in the coffin from my POV
>> >
>> > For the other three - I know they're still active projects with people
>> > interested in them, so sorting them out will be fun!
>>
>> Looks like in 4 cases (Magnum, Barbican, Murano, Security) there is
>> actually a candidate, they just missed the deadline. So that should be
>> an easy discussion at the next TC meeting.
>>
>> For the last one, it is not an accident. I think it is indeed the final
>> nail on the coffin.
>>
>
>Yes, I was planning to wait until after the summit to propose that we
>drop MagnetoDB from the official list of projects due to inactivity. We
>can deal with it sooner, obviously.

+1

-- 
@flaper87
Flavio Percoco

From sbauza at redhat.com  Fri Sep 18 08:16:11 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Fri, 18 Sep 2015 10:16:11 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FB2BF0.1000409@gmail.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <55FB2BF0.1000409@gmail.com>
Message-ID: <55FBC84B.30104@redhat.com>



On 17/09/2015 at 23:09, Nikhil Komawar wrote:
>
> On 9/17/15 3:51 PM, Morgan Fainberg wrote:
>>
>> On Thu, Sep 17, 2015 at 12:00 PM, Kevin Benton <blak111 at gmail.com
>> <mailto:blak111 at gmail.com>> wrote:
>>
>>      It guarantees that if you hit the date deadline in your local time,
>>      you won't miss the deadline. It doesn't matter if there are extra
>>      hours afterwards. The idea is that it gets rid of the need to do
>>      time zone conversions.
>>
>>      If we are trying to do some weird optimization where everyone
>>      wants to submit in the last 60 seconds, then sure AOE isn't great
>>      for that because you still have to convert. It doesn't seem to me
>>      like that's what we are trying to do though.
>>
>> Alternatively you give a UTC time (which all of our meetings are in
>> anyway) and set the deadline. Maybe we should set the deadline to 23:59
>> in the western-most timezone (UTC-11/-12?). This
>> would simply do what you're stating without having to explain AOE more
>> concretely than "submit by 23:59 your tz day X".
>>
>> I think this is all superfluous however and we should simply encourage
>> people to not wait until the last minute. Waiting to see who is
>> running/what the field looks like isn't as important as standing up
>> and saying you're interested in running.
>>
> I like that you have used the word "encourage"; however, I have to
> disagree here. Life in general doesn't permit that for everyone -- important
> things can pop up at unexpected times, someone may be on vacation and
> late coming back, etc. And on top of that, people can get caught up
> particularly during this week. A time-line for proposals seems like a
> good idea in general.

That's exactly why the schedule is always proposed at the beginning of 
the cycle [1], so that anyone interested in becoming PTL can make sure 
to propose their candidacy in time (there are 7 days for proposing).

Also, the policy allows a candidacy to be proposed by someone else, 
with the candidate +1'ing the change even after the deadline, so 
anyone on vacation can have someone else proxy their candidacy.

Last but not least, I assume that people wanting to be PTLs 
understand that they are here to help the community, so they also 
have to understand how the community works and what its rules are.

-Sylvain

[1] 
https://wiki.openstack.org/w/index.php?title=Liberty_Release_Schedule&oldid=78501
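As a concrete aside on the "anywhere on earth" (AOE) convention discussed above: an AOE date deadline is simply 23:59 at UTC-12, so converting it to a single UTC instant takes a few lines (a sketch using only Python's standard library; the specific date is illustrative):

```python
from datetime import datetime, timedelta, timezone

# AOE ("anywhere on earth") is UTC-12: a date deadline has not passed
# until 23:59 has passed in the westernmost timezone on the planet.
AOE = timezone(timedelta(hours=-12))

def aoe_deadline_utc(year, month, day):
    """Return the UTC instant at which an AOE date deadline expires."""
    local_deadline = datetime(year, month, day, 23, 59, tzinfo=AOE)
    return local_deadline.astimezone(timezone.utc)

# A "September 17 AOE" deadline expires at 11:59 UTC on September 18.
print(aoe_deadline_utc(2015, 9, 17))  # 2015-09-18 11:59:00+00:00
```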

>> You shouldn't worry about hurting anyone's feelings by running and
>> more importantly most PTLs will be happy to have someone else shoulder
>> some of the weight; by tossing your name into the ring it signals
>> you're willing to help out in this regard. I know that as a PTL (an
>> outgoing one at that) having this clear signal would raise an
>> individual towards the top of the list for asking if they want the
>> responsibility delegated to them as it was indicated they already
>> wanted to be part of leadership for the project.
>>
>> Just a $0.02 on the timing concerns.
>>
>> --Morgan
>>   
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>   My 2 pennies worth.
>



From shardy at redhat.com  Fri Sep 18 08:17:53 2015
From: shardy at redhat.com (Steven Hardy)
Date: Fri, 18 Sep 2015 09:17:53 +0100
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150918065606.GJ29319@redhat.com>
References: <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <20150918065606.GJ29319@redhat.com>
Message-ID: <20150918081752.GD16534@t430slt.redhat.com>

On Fri, Sep 18, 2015 at 08:56:06AM +0200, Flavio Percoco wrote:
> On 17/09/15 16:00 -0400, Doug Hellmann wrote:
> >Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
> >
> >>I think this is all superfluous however and we should simply encourage
> >>people to not wait until the last minute. Waiting to see who is
> >>running/what the field looks like isn't as important as standing up and
> >>saying you're interested in running.
> >
> >+1
> 
> Just want to +1 this. I'm going to be, probably, extreme here and
> suggest that we should just shrink the candidacy period to 1 (max 2)
> days.

-1 - the "problem" here (if you want to call it that) is that some folks
evidently found a week nomination period insufficient, for $whatever reason.

The obvious solution to that is to simply adopt the same branch model for
the openstack/election repo as all other projects - create a branch (or
directory) per release in openstack/election, and allow candidates to
propose their candidacy at any time during the preceding release cycle.

Then, if you clearly state the deadline ahead of time, you simply publish
results and/or start elections on that date, with whatever is in the repo
on that date and folks have the whole cycle (say from summit to RC1 time)
to consider running and propose their candidacy whenever they want.

I also think this would encourage discussion within the project teams about
who wants to run for PTL, with transparency about those interested/willing
ahead of time.

Perhaps you might WIP all submissions until a few days before the deadline,
so that if a community decides via mutual agreement that one candidate should
take their turn as PTL, the submissions may be abandoned without any election.

IMHO rotation of PTL responsibilities is healthy, as is discussion
and openness in the community - being PTL isn't some sort of prize, it's a
time-consuming burden, which is mostly about coordination and release
management, not really about "leadership" at all (although it is about
community building and leading by example..)

I guess what I mean is I'm not really sure what the timeboxed nomination
period aims to achieve, particularly if you shrink it to one or two days -
that makes it extremely easy for folks to miss due to illness/travel or
$whatever, and implies some kind of race - which is the opposite, IMHO of
the dynamic we should be encouraging.

Steve


From john at johngarbutt.com  Fri Sep 18 08:38:15 2015
From: john at johngarbutt.com (John Garbutt)
Date: Fri, 18 Sep 2015 09:38:15 +0100
Subject: [openstack-dev] [Nova] Design Summit Topics for Nova
In-Reply-To: <20150917142312.GC82515@thor.bakeyournoodle.com>
References: <CABib2_pVxCtF=0hCGtZzg18OmMRv1LNXeHwmdow9vWx+Sw7HMg@mail.gmail.com>
 <DFB24FBA-F45C-446E-96DE-F05993154BC6@gmail.com>
 <20150917142312.GC82515@thor.bakeyournoodle.com>
Message-ID: <CABib2_pgx6kBML6QKs+7R-8E2MeYGwVB1FhdScysd-mq3jgR_g@mail.gmail.com>

On 17 September 2015 at 15:23, Tony Breeds <tony at bakeyournoodle.com> wrote:
> On Wed, Sep 16, 2015 at 11:40:28AM -0700, melanie witt wrote:
>
>> Today I was informed that google forms are blocked in China [1], so I wanted
>> to mention it here so we can consider an alternate way to collect submissions
>> from those who might not be able to access the form.
>
> I'll act as an email to google forms proxy if needed. People that will be at the
> summit can fill in the template below.
> (stolen from the google forms)
>
> ---
> Topic Title:
> Topic Description:
> Submitter IRC handle:
> Session leader IRC handle
>  Please note the session leader must be there on the day at the summit. Please
>  just leave this blank if you feel unable to find someone to lead the session.
>
> Link to nova-spec review:
>  Features you want to discuss need to have at least a WIP spec before being
>  considered for the design summit track. Ideally we will merge the spec before
>  the design summit, so a session would not be required.
>
> Link to pre-reading:
>  Before the submission is on the final list, we need to have some background
>  reading for more complex topics, or topics that have had lots of previous
>  discussion, so it's easier for everyone to get involved. This could be a wiki
>  page, an etherpad, an ML post, or devref.
> ---

Tony, thanks for taking on the paperwork there.

Sorry for the two-tier system. It's not intentional.
Let me know if there is a China-friendly alternative folks are able to
set up for us.

I have started an etherpad as a fallback catch-all system, mostly just
including the above suggestions:
https://etherpad.openstack.org/p/mitaka-nova-summit-suggestions

Thanks,
John


From flavio at redhat.com  Fri Sep 18 08:55:30 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 18 Sep 2015 10:55:30 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150918081752.GD16534@t430slt.redhat.com>
References: <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <20150918065606.GJ29319@redhat.com>
 <20150918081752.GD16534@t430slt.redhat.com>
Message-ID: <20150918085530.GM29319@redhat.com>

On 18/09/15 09:17 +0100, Steven Hardy wrote:
>On Fri, Sep 18, 2015 at 08:56:06AM +0200, Flavio Percoco wrote:
>> On 17/09/15 16:00 -0400, Doug Hellmann wrote:
>> >Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
>> >
>> >>I think this is all superfluous however and we should simply encourage
>> >>people to not wait until the last minute. Waiting to see who is
>> >>running/what the field looks like isn't as important as standing up and
>> >>saying you're interested in running.
>> >
>> >+1
>>
>> Just want to +1 this. I'm going to be, probably, extreme here and
>> suggest that we should just shrink the candidacy period to 1 (max 2)
>> days.
>
>-1 - the "problem" here (if you want to call it that) is that some folks
>evidently found a week nomination period insufficient, for $whatever reason.
>
>The obvious solution to that is to simply adopt the same branch model for
>the openstack/election repo as all other projects - create a branch (or
>directory) per release in openstack/election, and allow candidates to
>propose their candidacy at any time during the preceding release cycle.
>
>Then, if you clearly state the deadline ahead of time, you simply publish
>results and/or start elections on that date, with whatever is in the repo
>on that date and folks have the whole cycle (say from summit to RC1 time)
>to consider running and propose their candidacy whenever they want.

This is the same thing I said in my previous email (you trimmed that from
your reply), with the only difference that you're suggesting not having
a "candidacy day" but rather just a "start election" day.

I'd argue that a deadline for candidacies is useful to have and that it
brings more formality to the process. In the case of using
`openstack/elections`, it helps to have a deadline for cutting the branch
or freezing reviews, etc.

Setting up the election takes some time, which means there has to be a
date where the election officers stop considering new candidacies.

>
>I also think this would encourage discussion within the project teams about
>who wants to run for PTL, with transparency about those interested/willing
>ahead of time.

+1

>Perhaps you might WIP all submissions until a few days before the deadline,
>such that if communities decide via mutual agreement one candidate should
>take their turn as PTL submissions may be abandoned without any election.

I guess this may work in some cases, but it defeats the whole purpose
of having an election and being able to vote in private, which many
people value.

>IMHO rotation of PTL responsibilities is healthy, as is discussion
>and openness in the community - being PTL isn't some sort of prize, it's a
>time-consuming burden, which is mostly about coordination and release
>management, not really about "leadership" at all (although it is about
>community building and leading by example..)
>
>I guess what I mean is I'm not really sure what the timeboxed nomination
>period aims to achieve, particularly if you shrink it to one or two days -
>that makes it extremely easy for folks to miss due to illness/travel or
>$whatever, and implies some kind of race - which is the opposite, IMHO of
>the dynamic we should be encouraging.

In my previous email I mentioned that folks can simply send their
candidacy in advance or have someone else propose it. Seriously,
it's not about having a single day for sending the candidacy; it's
about having a clear deadline after which no more candidacies are
considered. If a candidacy is sent 4 months in advance, I guess that's
fine. I don't care.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

From anlin.kong at gmail.com  Fri Sep 18 09:08:53 2015
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Fri, 18 Sep 2015 17:08:53 +0800
Subject: [openstack-dev] [mistral][requirements] lockfile not in global
	requirements
In-Reply-To: <55FB04FB.8080903@suse.com>
References: <55FB04FB.8080903@suse.com>
Message-ID: <CALjNAZ3SiTryNb4sLMTt8rOjxWJHZMarA2KuD+o21BG0_O_O2Q@mail.gmail.com>

Hi, Andreas,

Sorry for the late response from Mistral. Anyway, I'm coming :-)

I'll take a look and sort it out if needed - thanks for letting us know.
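For context, the failing sync job essentially checks that every requirement a project declares also appears in global-requirements.txt. The core of that check can be sketched as a set difference (the parsing and the package lists below are simplified assumptions for illustration, not the actual openstack/requirements code):

```python
import re

def req_name(line):
    """Extract the bare package name from a requirements-file line."""
    return re.split(r'[<>=!;\[ ]', line.strip(), maxsplit=1)[0].lower()

def missing_from_global(project_reqs, global_reqs):
    """Names declared by the project but absent from the global list."""
    global_names = {req_name(l) for l in global_reqs
                    if l.strip() and not l.strip().startswith('#')}
    return [req_name(l) for l in project_reqs
            if l.strip() and not l.strip().startswith('#')
            and req_name(l) not in global_names]

# mistral-extra declares lockfile, which the global list lacks:
print(missing_from_global(['lockfile>=0.8', 'pbr>=1.6'],
                          ['pbr>=1.6', 'six>=1.9.0']))  # ['lockfile']
```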

On Fri, Sep 18, 2015 at 2:22 AM, Andreas Jaeger <aj at suse.com> wrote:

> The syncing of requirements fails from the requirements repository to
> mistral-extra with
> 'lockfile' is not in global-requirements.txt
>
> Mistral team, could you either propose to add lockfile to the global
> requirements file - or remove it from your project, please?
>
> for details see:
>
> https://jenkins.openstack.org/job/propose-requirements-updates/363/consoleFull
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>        HRB 21284 (AG Nürnberg)
>     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
>



-- 
*Regards!*
*-----------------------------------*
*Lingxian Kong*

From dtantsur at redhat.com  Fri Sep 18 09:36:52 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 18 Sep 2015 11:36:52 +0200
Subject: [openstack-dev] [ironic] [inspector] Liberty soft freeze
In-Reply-To: <20150918015030.GP21846@jimrollenhagen.com>
References: <20150918015030.GP21846@jimrollenhagen.com>
Message-ID: <55FBDB34.9040000@redhat.com>

Note for inspector folks: this applies to us as well. Let's land whatever 
we have planned for 2.2.0 and fix any issues that arise.

Please see the milestone page for the list of things that we still need 
to review/fix:
https://launchpad.net/ironic-inspector/+milestone/2.2.0

On 09/18/2015 03:50 AM, Jim Rollenhagen wrote:
> Hi folks,
>
> It's time for our soft freeze for Liberty, as planned. Core reviewers
> should do their best to refrain from landing risky code. We'd like to
> ship 4.2.0 as the candidate for stable/liberty next Thursday, September
> 24.
>
> Here are the things we still want to complete in 4.2.0:
> https://launchpad.net/ironic/+milestone/4.2.0
>
> Note that zapping is no longer there; sadly, after lots of writing and
> reviewing code, we want to rethink how we implement this. We've talked
> about being able to go from MANAGEABLE->CLEANING->MANAGEABLE with a list
> of clean steps. Same idea, but without the word zapping, the new DB
> fields, etc. At any rate, it's been bumped to Mitaka to give us time to
> figure it out.
>
> This may also mean in-band RAID configuration may not land; the
> interface in general did land, and drivers may do out-of-band
> configuration. We assumed that in-band RAID would be done through
> zapping. However, if folks can agree on how to do it during automated
> cleaning, I'd be happy to get that in Liberty if the code is not too
> risky. If it is risky, we'll need to punt it to Mitaka as well.
>
> I'd like to see the rest of the work on the milestone completed during
> Liberty, and I hope everyone can jump in and help us to do that.
>
> Thanks in advance!
>
> // jim
>
>
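The MANAGEABLE -> CLEANING -> MANAGEABLE flow described above can be pictured as a tiny state machine driven by an ordered list of clean steps. A minimal sketch (the class, method, and step names below are illustrative only, not Ironic's actual states or API):

```python
class Node:
    """Toy model of manual cleaning: run an ordered list of clean steps."""

    def __init__(self):
        self.state = 'MANAGEABLE'
        self.executed = []

    def clean(self, steps):
        # Cleaning may only be triggered from the MANAGEABLE state.
        if self.state != 'MANAGEABLE':
            raise RuntimeError('cleaning only starts from MANAGEABLE')
        self.state = 'CLEANING'
        for step in steps:              # e.g. erase_devices, build_raid
            self.executed.append(step)
        self.state = 'MANAGEABLE'       # back where we started, node cleaned
        return self.executed

node = Node()
print(node.clean(['erase_devices', 'build_raid']))  # ['erase_devices', 'build_raid']
print(node.state)  # MANAGEABLE
```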



From john at johngarbutt.com  Fri Sep 18 09:39:06 2015
From: john at johngarbutt.com (John Garbutt)
Date: Fri, 18 Sep 2015 10:39:06 +0100
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
In-Reply-To: <20150917235035.GB3727@gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <55F979A9.9040206@openstack.org> <20150917235035.GB3727@gmail.com>
Message-ID: <CABib2_oyAi+GKDU=o+gZpDeia2vmRrDz5_Sgpff_D8Hch17r5w@mail.gmail.com>

On 18 September 2015 at 00:50, Mike Perez <thingee at gmail.com> wrote:
> On 16:16 Sep 16, Thierry Carrez wrote:
>> Anne Gentle wrote:
>> > [...]
>> > What are some of the problems with each layer?
>> >
>> > 1. weekly meeting: time zones, global reach, size of cross-project
>> > concerns due to multiple projects being affected, another meeting for
>> > PTLs to attend and pay attention to
>>
>> A lot of PTLs (or liaisons/lieutenants) skip the meeting, or will only
>> attend when they have something to ask. Their time is precious and most
>> of the time the meeting is not relevant for them, so why bother ? You
>> have a few usual suspects attending all of them, but those people are
>> cross-project-aware already so those are not the people that would
>> benefit the most from the meeting.
>>
>> This partial attendance makes the meeting completely useless as a way to
>> disseminate information. It makes the meeting mostly useless as a way to
>> get general approval on cross-project specs.
>>
>> The meeting still is very useful IMHO to have more direct discussions on
>> hot topics. So a ML discussion is flagged for direct discussion on IRC
>> and we have a time slot already booked for that.
>
> Content for the cross project meeting is usually:
>
> * Not ready for decisions.
> * Lack solutions.
>
> A proposal in steps of how cross project ideas start, to something ready for
> the cross project IRC meeting, then the TC:
>
> 1) An idea starts from either or both:
>    a) Mailing list discussion.
>    b) A patch to a single project (until it's flagged that this patch could be
>       beneficial to other projects)
> 2) OpenStack Spec is proposed - discussions happen in gerrit from here on out.
>    Not on the mailing list. Keep encouraging discussions back to gerrit to keep
>    everything in one place in order to avoid confusion with having to fish
>    for some random discussion elsewhere.
> 3) Once enough consensus happens, an agenda item is posted to the cross
>    project IRC meeting.
> 4) Final discussions happen in the meeting. If consensus is still met by
>    interested parties who attend, it moves to TC.  If there is a lack of
>    consensus it goes back to gerrit and repeat.

+1

That totally seems worth a try.

It's extra process, but that should help drive the right conversations
in a more efficient way.

Thanks,
johnthetubaguy

> With this process, we should have fewer meetings. Fewer meetings:
>
> * Awesome
> * Makes this meeting more meaningful when it happens because decisions are
>   potentially going to be agreed and passed to the TC!
>
> If a cross project spec is not getting attention, don't post it to the list for
> attention. We get enough email and it'll probably be lost. Instead, let the
> product working group recognize this and reach out to the projects that this
> spec would benefit, to bring meaningful attention to the spec.
>
> For vertical alignment, interaction like IRC is not necessary. A very brief,
> bullet-point digest of collected information from projects that have anything
> interesting is sent in a weekly email to the list. If anyone has
> questions or wants more information, they can use their own time to ask that
> project team.
>
> Potentially, if we kept everything to the spec on gerrit, and had the product
> working group bringing needed attention to specs, we could eliminate the cross
> project meeting.


From shardy at redhat.com  Fri Sep 18 09:41:44 2015
From: shardy at redhat.com (Steven Hardy)
Date: Fri, 18 Sep 2015 10:41:44 +0100
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150918085530.GM29319@redhat.com>
References: <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <20150918065606.GJ29319@redhat.com>
 <20150918081752.GD16534@t430slt.redhat.com>
 <20150918085530.GM29319@redhat.com>
Message-ID: <20150918094143.GF16534@t430slt.redhat.com>

On Fri, Sep 18, 2015 at 10:55:30AM +0200, Flavio Percoco wrote:
> On 18/09/15 09:17 +0100, Steven Hardy wrote:
> >On Fri, Sep 18, 2015 at 08:56:06AM +0200, Flavio Percoco wrote:
> >>On 17/09/15 16:00 -0400, Doug Hellmann wrote:
> >>>Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:
> >>>
> >>>>I think this is all superfluous however and we should simply encourage
> >>>>people to not wait until the last minute. Waiting to see who is
> >>>>running/what the field looks like isn't as important as standing up and
> >>>>saying you're interested in running.
> >>>
> >>>+1
> >>
> >>Just want to +1 this. I'm going to be, probably, extreme here and
> >>suggest that we should just shrink the candidacy period to 1 (max 2)
> >>days.
> >
> >-1 - the "problem" here (if you want to call it that) is that some folks
> >evidently found a week nomination period insufficient, for $whatever reason.
> >
> >The obvious solution to that is to simply adopt the same branch model for
> >the openstack/election repo as all other projects - create a branch (or
> >directory) per release in openstack/election, and allow candidates to
> >propose their candidacy at any time during the preceding release cycle.
> >
> >Then, if you clearly state the deadline ahead of time, you simply publish
> >results and/or start elections on that date, with whatever is in the repo
> >on that date and folks have the whole cycle (say from summit to RC1 time)
> >to consider running and propose their candidacy whenever they want.
> 
> This is the same thing I said in my previous email (you cut that off
> of your reply) with the only difference that you're suggesting not
> having a "candidacy day" but rather just have a "start election" day.

Ok, apologies, it wasn't my intention to misquote you, but I interpreted
your comments as meaning that proposals for candidacy could *only* be made
during the "candidacy period", e.g. your remarks about designating
someone to submit a review on a particular day.

> I'd argue saying that a deadline for candidacies is useful to have and
> it brings more formality to the process. It helps, in the case of
> using `openstack/elections` to have a deadline for cutting the branch
> or freezing reviews, etc.
> 
> Setting up the election takes some time, which means there has to be a
> date where the election officers stop considering new candidacies.

I think we're basically in agreement and arguing for the same thing - folks
should be able to look at the release schedule, and see, just like the
clearly communicated "feature freeze" date, a PTL candidacy freeze.

> >I also think this would encourage discussion within the project teams about
> >who wants to run for PTL, with transparency about those interested/willing
> >ahead of time.
> 
> +1
> 
> >Perhaps you might WIP all submissions until a few days before the deadline,
> >such that if communities decide via mutual agreement one candidate should
> >take their turn as PTL submissions may be abandoned without any election.
> 
> I guess this may work in some cases but this defeats the whole purpose
> of having an election and being able to vote, in private, which many
> people value.

Sure, but the dynamic implied by voting in private, an election, and
proposing candidacy at the last minute is one of competition for the role.

That, IMHO, is not all that healthy in this context, and most communities
should already be making all sorts of decisions by consensus, discussion
and mutual agreement.

Nothing I'm proposing defeats the purpose of the existing system - if more
than one candidate is keen to volunteer, they just do so, and the election
happens just as it does now.

> >IMHO rotation of PTL responsibilities is healthy, as is discussion
> >and openness in the community - being PTL isn't some sort of prize, it's a
> >time-consuming burden, which is mostly about coordination and release
> >management, not really about "leadership" at all (although it is about
> >community building and leading by example..)
> >
> >I guess what I mean is I'm not really sure what the timeboxed nomination
> >period aims to achieve, particularly if you shrink it to one or two days -
> >that makes it extremely easy for folks to miss due to illness/travel or
> >$whatever, and implies some kind of race - which is the opposite, IMHO of
> >the dynamic we should be encouraging.
> 
> In my previous email I mentioned that folks can simply send the
> candidacy in advance or have someone else to propose it. Seriously,
> it's not about having a single day for sending the candidacy, it's
> about having a clear deadline where no more candidacies are
> considered. If a candidacy is sent 4 months in advance, I guess that's
> fine. I don't care.

Cool, that wasn't really how I interpreted your previous mail, sounds like
we're in violent agreement! :)

Cheers,

Steve


From alihaider907 at gmail.com  Fri Sep 18 10:21:34 2015
From: alihaider907 at gmail.com (Haider Ali)
Date: Fri, 18 Sep 2015 15:21:34 +0500
Subject: [openstack-dev] Hardware requirements for OpenStack
Message-ID: <CAMDfpCSv8nSKYtjpfyHCsVr5Rxi8rPXgp6Zw_OqKX_=viRd4sg@mail.gmail.com>

Hello

I am new to OpenStack and I am following
http://docs.openstack.org/juno/install-guide/install/apt/content/ch_basic_environment.html
to install OpenStack on my local PC. My question is: do I need separate
machines for the Controller, Network and Compute nodes (connected via
LAN), or can I install all three nodes on a single PC?

Thanks

-- 
Haider Ali
National University of Computer and Emerging Sciences Lahore

From rakhmerov at mirantis.com  Fri Sep 18 10:54:32 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Fri, 18 Sep 2015 13:54:32 +0300
Subject: [openstack-dev] [mistral][requirements] lockfile not in global
	requirements
In-Reply-To: <CALjNAZ3SiTryNb4sLMTt8rOjxWJHZMarA2KuD+o21BG0_O_O2Q@mail.gmail.com>
References: <55FB04FB.8080903@suse.com>
 <CALjNAZ3SiTryNb4sLMTt8rOjxWJHZMarA2KuD+o21BG0_O_O2Q@mail.gmail.com>
Message-ID: <D8B95617-BAD8-4413-A46A-A3C55E1C7B66@mirantis.com>

Thanks Andreas and Lingxian,

Andreas, the corresponding patch has landed: https://review.openstack.org/#/c/225073/ <https://review.openstack.org/#/c/225073/>

Renat Akhmerov
@ Mirantis Inc.



> On 18 Sep 2015, at 12:08, Lingxian Kong <anlin.kong at gmail.com> wrote:
> 
> Hi, Andreas,
> 
> Sorry for the late response from Mistral. Anyway, I'm coming :-)
> 
> I'll take a look and solve it if needed, thanks for letting us know that.
> 
> On Fri, Sep 18, 2015 at 2:22 AM, Andreas Jaeger <aj at suse.com <mailto:aj at suse.com>> wrote:
> The syncing of requirements fails from the requirements repository to mistral-extra with
> 'lockfile' is not in global-requirements.txt
> 
> Mistral team, could you either propose to add lockfile to the global requirements file - or remove it from your project, please?
> 
> for details see:
> https://jenkins.openstack.org/job/propose-requirements-updates/363/consoleFull <https://jenkins.openstack.org/job/propose-requirements-updates/363/consoleFull>
> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com <http://suse.com/>,opensuse.org <http://opensuse.org/>} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>        HRB 21284 (AG Nürnberg)
>     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> 
> 
> 
> -- 
> Regards!
> -----------------------------------
> Lingxian Kong


From vkuklin at mirantis.com  Fri Sep 18 10:56:22 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Fri, 18 Sep 2015 13:56:22 +0300
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
Message-ID: <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>

Sergey, Fuelers

This is awesome news!

By the way, I have a question on who is eligible to vote and to nominate
him/her-self for both PTL and Component Leads. Could you elaborate on that?

And there is no such entity as Component Lead in OpenStack - so we are
actually creating one. What are the new rights and responsibilities of CL?

On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <slukjanov at mirantis.com>
wrote:

> Hi folks,
>
> I'd like to announce that we're running the PTL and Component Leads
> elections. Detailed information available on wiki. [0]
>
> Project Team Lead: Manages day-to-day operations, drives the project
> team goals, resolves technical disputes within the project team. [1]
>
> Component Lead: Defines architecture of a module or component in Fuel,
> reviews design specs, merges majority of commits and resolves conflicts
> between Maintainers or contributors in the area of responsibility. [2]
>
> Fuel has two large sub-teams, with roughly comparable codebases, that
> need dedicated component leads: fuel-library and fuel-python. [2]
>
> Nominees propose their candidacy by sending an email to the
> openstack-dev at lists.openstack.org mailing list, with the subject:
> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
> (for example, "[fuel] fuel-library lead candidacy").
>
> Timeline:
>
> PTL elections
> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
> * September 29 - October 8: PTL elections
>
> Component leads elections (fuel-library and fuel-python)
> * October 9 - October 15: Open candidacy for Component leads positions
> * October 16 - October 22: Component leads elections
>
> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
> [1] https://wiki.openstack.org/wiki/Governance
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> [3] https://lwn.net/Articles/648610/
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/ba76ed8d/attachment.html>

From robert.clark at hp.com  Fri Sep 18 10:55:54 2015
From: robert.clark at hp.com (Clark, Robert Graham)
Date: Fri, 18 Sep 2015 10:55:54 +0000
Subject: [openstack-dev] [Neutron] Separate floating IP pools?
Message-ID: <A0C170085C37664D93EE1604364858A11FE46F5B@G9W0763.americas.hpqcorp.net>

Is it possible to have separate floating-IP pools and grant a tenant access to only some of them?

Thought popped into my head while looking at the rbac-network spec here: https://review.openstack.org/#/c/132661/4/specs/liberty/rbac-networks.rst

Creating individual pools, allowing only some tenants access and having off-cloud network ACLs would get part way to satisfying the use cases that drive the above spec (I'm thinking of this as a more short term solution, certainly not a direct alternative).

I'm sure this is answered elsewhere, but I couldn't find any direct information, so I'm assuming it isn't supported. I wonder how much effort would be required to make it work?
-Rob
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/647df468/attachment.html>

From davanum at gmail.com  Fri Sep 18 11:30:39 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Fri, 18 Sep 2015 07:30:39 -0400
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
Message-ID: <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>

Sergey,

Please see [1]. Did we codify some of these roles and responsibilities as a
community in a spec? There was also a request to use terminology like say
MAINTAINERS in that email as well.

Are we pulling the trigger a bit early for an actual election?

Thanks,
Dims

[1] http://markmail.org/message/2ls5obgac6tvcfss

On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> Sergey, Fuelers
>
> This is awesome news!
>
> By the way, I have a question on who is eligible to vote and to nominate
> him/her-self for both PTL and Component Leads. Could you elaborate on that?
>
> And there is no such entity as Component Lead in OpenStack - so we are
> actually creating one. What are the new rights and responsibilities of CL?
>
> On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <slukjanov at mirantis.com>
> wrote:
>
>> Hi folks,
>>
>> I'd like to announce that we're running the PTL and Component Leads
>> elections. Detailed information available on wiki. [0]
>>
>> Project Team Lead: Manages day-to-day operations, drives the project
>> team goals, resolves technical disputes within the project team. [1]
>>
>> Component Lead: Defines architecture of a module or component in Fuel,
>> reviews design specs, merges majority of commits and resolves conflicts
>> between Maintainers or contributors in the area of responsibility. [2]
>>
>> Fuel has two large sub-teams, with roughly comparable codebases, that
>> need dedicated component leads: fuel-library and fuel-python. [2]
>>
>> Nominees propose their candidacy by sending an email to the
>> openstack-dev at lists.openstack.org mailing list, with the subject:
>> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
>> (for example, "[fuel] fuel-library lead candidacy").
>>
>> Timeline:
>>
>> PTL elections
>> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
>> * September 29 - October 8: PTL elections
>>
>> Component leads elections (fuel-library and fuel-python)
>> * October 9 - October 15: Open candidacy for Component leads positions
>> * October 16 - October 22: Component leads elections
>>
>> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
>> [1] https://wiki.openstack.org/wiki/Governance
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
>> [3] https://lwn.net/Articles/648610/
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/e62968e3/attachment.html>

From pmurray at hpe.com  Fri Sep 18 11:53:05 2015
From: pmurray at hpe.com (Murray, Paul (HP Cloud))
Date: Fri, 18 Sep 2015 11:53:05 +0000
Subject: [openstack-dev] [nova] live migration in Mitaka
Message-ID: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>

Hi All,

There are various efforts going on around live migration at the moment: fixing up CI, bug fixes, additions to cover more corner cases, proposals for new operations....

Generally live migration could do with a little TLC (see: [1]), so I am going to suggest we give some of that care in the next cycle.

Please respond to this post if you have an interest in this and what you would like to see done. Include anything you are already getting on with so we get a clear picture. If there is enough interest I'll put this together as a proposal for a work stream. Something along the lines of "robustify live migration".

Paul

[1]: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/live-migration-at-hp-public-cloud


Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited   |   Registered Office: Cain Road, Bracknell, Berkshire, RG12 1HN   |    Registered No: 690597 England   |    VAT Number: GB 314 1496 79

This e-mail may contain confidential and/or legally privileged material for the sole use of the intended recipient.  If you are not the intended recipient (or authorized to receive for the recipient) please contact the sender by reply e-mail and delete all copies of this message.  If you are receiving this message internally within the Hewlett Packard group of companies, you should consider the contents "CONFIDENTIAL".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/f61be24c/attachment.html>

From aj at suse.com  Fri Sep 18 11:54:47 2015
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 18 Sep 2015 13:54:47 +0200
Subject: [openstack-dev] [mistral][requirements] lockfile not in global
 requirements
In-Reply-To: <D8B95617-BAD8-4413-A46A-A3C55E1C7B66@mirantis.com>
References: <55FB04FB.8080903@suse.com>
 <CALjNAZ3SiTryNb4sLMTt8rOjxWJHZMarA2KuD+o21BG0_O_O2Q@mail.gmail.com>
 <D8B95617-BAD8-4413-A46A-A3C55E1C7B66@mirantis.com>
Message-ID: <55FBFB87.2030709@suse.com>

On 09/18/2015 12:54 PM, Renat Akhmerov wrote:
> Thanks Andreas and Lingxian,
>
> Andreas, the corresponding patch has landed:
> https://review.openstack.org/#/c/225073/


Great, then syncing of requirements should be green again.

thanks,
Andreas
-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
        HRB 21284 (AG Nürnberg)
     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



From tdecacqu at redhat.com  Fri Sep 18 12:00:42 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Fri, 18 Sep 2015 12:00:42 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6BCE5107@SZXEMI503-MBS.china.huawei.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local>
 <CAPWkaSXyjY9+g+ePH6B0iLnME0stKL+vGvicG51nENsCZdodHA@mail.gmail.com>
 <CAD0KtVE_GPK4OSPghZLWYYGF8P4L3KLQAquOBJ=iRfd1LKJScQ@mail.gmail.com>
 <CAL3VkVwG+e_nLwg3it=W3HZsHDo0RBjU3xxgx-FGFXoz0FtdRw@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCE5107@SZXEMI503-MBS.china.huawei.com>
Message-ID: <55FBFCEA.4050105@redhat.com>

On 09/17/2015 09:04 PM, Hongbin Lu wrote:
> Hi,
> 
> I am fine to have an election with Adrian Otto, and potentially with other candidates who are also late.
> 
> Best regards,
> Hongbin

That sounds like an excellent idea. Is there a reason why a current PTL
couldn't be re-elected outside of the official election system?

If the candidates agree, then project contributors should be able to elect
their leader.

Best,
Tristan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/7adef2bf/attachment.pgp>

From tdecacqu at redhat.com  Fri Sep 18 12:10:00 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Fri, 18 Sep 2015 12:10:00 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150918094143.GF16534@t430slt.redhat.com>
References: <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <1442519996-sup-6683@lrrr.local> <20150918065606.GJ29319@redhat.com>
 <20150918081752.GD16534@t430slt.redhat.com>
 <20150918085530.GM29319@redhat.com>
 <20150918094143.GF16534@t430slt.redhat.com>
Message-ID: <55FBFF18.3070808@redhat.com>

On 09/18/2015 09:41 AM, Steven Hardy wrote:
>> In my previous email I mentioned that folks can simply send the
>> > candidacy in advance or have someone else to propose it. Seriously,
>> > it's not about having a single day for sending the candidacy, it's
>> > about having a clear deadline where no more candidacies are
>> > considered. If a candidacy is sent 4 months in advance, I guess that's
>> > fine. I don't care.
> Cool, that wasn't really how I interpreted your previous mail, sounds like
> we're in violent agreement! :)
> 
> Cheers,
> 
> Steve

There is one technical issue if we want to let candidates submit their
candidacy early on. The openstack/election repository needs a cycle name
as well as the final project list.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/741d1dd1/attachment.pgp>

From tdecacqu at redhat.com  Fri Sep 18 12:16:56 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Fri, 18 Sep 2015 12:16:56 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <20150918065848.GK29319@redhat.com>
References: <55FABF5E.4000204@redhat.com>
 <55FACCBE.1080508@linux.vnet.ibm.com> <55FAD324.2020102@anteaya.info>
 <55FADB95.6090801@inaugust.com> <55FAE5F2.6050805@openstack.org>
 <1442519175-sup-7818@lrrr.local> <20150918065848.GK29319@redhat.com>
Message-ID: <55FC00B8.9010907@redhat.com>

On 09/18/2015 06:58 AM, Flavio Percoco wrote:
> On 17/09/15 15:47 -0400, Doug Hellmann wrote:
>> Excerpts from Thierry Carrez's message of 2015-09-17 18:10:26 +0200:
>>> Monty Taylor wrote:
>>> > I agree- and this is a great example of places where human
>>> judgement is
>>> > better than rules.
>>> >
>>> > For instance - one of the projects had a nominee but it missed the
>>> > deadline, so that's probably an easy one.
>>> >
>>> > For one of the projects it had been looking dead for a while, so
>>> this is
>>> > the final nail in the coffin from my POV
>>> >
>>> > For the other three - I know they're still active projects with people
>>> > interested in them, so sorting them out will be fun!
>>>
>>> Looks like in 4 cases (Magnum, Barbican, Murano, Security) there is
>>> actually a candidate, they just missed the deadline. So that should be
>>> an easy discussion at the next TC meeting.

To be more precise, there are 2 candidacies for Magnum and 1 for
Security. I would prefer all candidates have their candidacy statement
proposed and then merged upon TC decision.

>>>
>>> For the last one, it is not an accident. I think it is indeed the final
>>> nail on the coffin.
>>>
>>
>> Yes, I was planning to wait until after the summit to propose that we
>> drop MagnetoDB from the official list of projects due to inactivity. We
>> can deal with it sooner, obviously.
> 
> +1
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/c634d724/attachment.pgp>

From tdecacqu at redhat.com  Fri Sep 18 12:22:31 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Fri, 18 Sep 2015 12:22:31 +0000
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FB2D63.3010402@gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FB2D63.3010402@gmail.com>
Message-ID: <55FC0207.3070305@redhat.com>

On 09/17/2015 09:15 PM, Nikhil Komawar wrote:
> 
> I like to solve problems, and it seems like this is a common problem in many
> conferences, seminars, etc. The usual way of solving this issue is to
> have a grace period with a last-minute extension to the proposal deadline,
> possibly for an unknown period of time and unannounced.

I'm strongly against these extra rules. OpenStack official elections are
run by volunteers, and any rule that adds complexity should be avoided.

Thanks,
Tristan


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/41c7df7d/attachment.pgp>

From flavio at redhat.com  Fri Sep 18 12:27:46 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 18 Sep 2015 14:27:46 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FC0207.3070305@redhat.com>
References: <55FABF5E.4000204@redhat.com> <55FB2D63.3010402@gmail.com>
 <55FC0207.3070305@redhat.com>
Message-ID: <20150918122746.GO29319@redhat.com>

On 18/09/15 12:22 +0000, Tristan Cacqueray wrote:
>On 09/17/2015 09:15 PM, Nikhil Komawar wrote:
>>
>> I like to solve problems, and it seems like this is a common problem in many
>> conferences, seminars, etc. The usual way of solving this issue is to
>> have a grace period with a last-minute extension to the proposal deadline,
>> possibly for an unknown period of time and unannounced.
>
>I'm strongly against these extra rules. OpenStack official elections are
>run by volunteers, and any rule that adds complexity should be avoided.

+1

Also, the schedule is announced 6 months in advance. The candidacy
period is announced when it starts and a reminder is sent a couple of
days before it ends.

This is not to say that people aren't subject to other, unexpected
inconveniences, but I don't think extending the period or bending the
process is the right solution here.


-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/f89b5144/attachment.pgp>

From mengxiandong at gmail.com  Fri Sep 18 12:55:31 2015
From: mengxiandong at gmail.com (Xiandong Meng)
Date: Fri, 18 Sep 2015 07:55:31 -0500
Subject: [openstack-dev] Hardware requirements for OpenStack
In-Reply-To: <CAMDfpCSv8nSKYtjpfyHCsVr5Rxi8rPXgp6Zw_OqKX_=viRd4sg@mail.gmail.com>
References: <CAMDfpCSv8nSKYtjpfyHCsVr5Rxi8rPXgp6Zw_OqKX_=viRd4sg@mail.gmail.com>
Message-ID: <CAGr1u-Xp-6_d5rnenEuKpr9_xH-rSOW9mwBMjK9fK=v7O1OhhA@mail.gmail.com>

Yes, you should be able to install the controller, neutron, and compute nodes
on a single PC (all-in-one).
If you google it, you will find many practical guides on how to do it.
Here is one example:
https://fosskb.wordpress.com/2015/04/18/installing-openstack-kilo-on-ubuntu-15-04-single-machine-setup/



Regards,

Xiandong Meng <mengxiandong at gmail.com>
mengxiandong at gmail.com

On Fri, Sep 18, 2015 at 5:21 AM, Haider Ali <alihaider907 at gmail.com> wrote:

> Hello
>
> I am new to OpenStack and following
> http://docs.openstack.org/juno/install-guide/install/apt/content/ch_basic_environment.html
> guide to install OpenStack on my local PC. My question is: do I need
> separate machines for the Controller, Network, and Compute nodes (connected
> via LAN), or can I install all three nodes on a single PC?
>
> Thanks
>
> --
> Haider Ali
> National University of Computer and Emerging Sciences Lahore
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/5ca03c15/attachment.html>

From eharney at redhat.com  Fri Sep 18 12:57:10 2015
From: eharney at redhat.com (Eric Harney)
Date: Fri, 18 Sep 2015 08:57:10 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
Message-ID: <55FC0A26.4080806@redhat.com>

On 09/17/2015 06:06 PM, John Griffith wrote:
> On Thu, Sep 17, 2015 at 11:31 AM, Eric Harney <eharney at redhat.com> wrote:
> 
>> On 09/17/2015 05:00 AM, Duncan Thomas wrote:
>>> On 16 September 2015 at 23:43, Eric Harney <eharney at redhat.com> wrote:
>>>
>>>> Currently, at least some options set in [DEFAULT] don't apply to
>>>> per-driver sections, and require you to set them in the driver section
>>>> as well.
>>>>
>>>
>>> This is extremely confusing behaviour. Do you have any examples? I'm not
>>> sure if we can fix it without breaking people's existing configs but I
>>> think it is worth trying. I'll add it to the list of things to talk about
>>> briefly in Tokyo.
>>>
>>
>> The most recent place this bit me was with iscsi_helper.
>>
>> If cinder.conf has:
>>
>> [DEFAULT]
>> iscsi_helper = lioadm
>> enabled_backends = lvm1
>>
>> [lvm1]
>> volume_driver = ...LVMISCSIDriver
>> # no iscsi_helper setting
>>
>>
>> You end up with c-vol showing "iscsi_helper = lioadm", and
>> "lvm1.iscsi_helper = tgtadm", which is the default in the code, and not
>> the default in the configuration file.
>>
>> I agree that this is confusing; I think it's also blatantly wrong.  I'm
>> not sure how to fix it, but I think it's some combination of your
>> suggestions above and possibly having to introduce new option names.
>>
> ?
> I'm not sure why that's "blatantly wrong"; this is a side effect of having
> multiple backends enabled, it's by design really.  Any option that is
> defined in driver.py needs to be set in the actual enabled-backend stanza
> IIRC.  This includes iscsi_helper, volume_clear etc.
> 

I think it's wrong because it's not predictable for someone configuring
Cinder.  I understand that this is a side effect of multi-backend, but
I'm not sure what the reasoning is if it's intentional design.  I think
most people would expect a setting set in a [DEFAULT] section to be
treated as a default rather than being ignored.

This is particularly odd in the case of "iscsi_helper", where I want to
ship packages configured to use LIO since tgt doesn't exist on the
platform, and is never the right value for my packages.

This isn't possible without patching the code directly, which seems like
a shortfall in our configuration system.
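
To illustrate the fallback behaviour being asked for here (a sketch only, not how cinder or oslo.config resolve options today), Python's stdlib configparser happens to treat [DEFAULT] in exactly that way:

```python
from configparser import ConfigParser

# Hypothetical cinder.conf fragment based on the thread; configparser treats
# [DEFAULT] as a fallback for every other section, which is the behaviour
# being argued for (not what cinder currently does).
CONF_TEXT = """
[DEFAULT]
iscsi_helper = lioadm
enabled_backends = lvm1

[lvm1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
"""

parser = ConfigParser()
parser.read_string(CONF_TEXT)

# The [lvm1] section has no iscsi_helper of its own, so the [DEFAULT]
# value is inherited instead of falling back to a hard-coded code default.
helper = parser.get("lvm1", "iscsi_helper")
print(helper)  # lioadm
```

Under this model, a package shipping "iscsi_helper = lioadm" in [DEFAULT] would apply to every backend stanza that doesn't override it.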

> Having the "global conf" settings intermixed with the backend sections
> caused a number of issues when we first started working on this.  That's
> part of why we require the "self.configuration" usage all over in the
> drivers.  Each driver instantiation is its own independent entity.
> 

Yes, each driver instantiation is independent, but that would still be
the case if these settings inherited values set in [DEFAULT] when they
aren't set in the backend section.

> I haven't looked at this for a long time, but if something has changed or
> I'm missing something my apologies.  We can certainly consider changing it,
> but because of the way we do multi-backend I'm not exactly sure how you
> would do this, or honestly why you would want to.
> 
> John
> 



From nathan.s.reller at gmail.com  Fri Sep 18 13:01:37 2015
From: nathan.s.reller at gmail.com (Nathan Reller)
Date: Fri, 18 Sep 2015 09:01:37 -0400
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E8A132D@SINPEX01CL02.citrite.net>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
 <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
 <D2202F18.1CAE5%dmccowan@cisco.com>
 <26B082831A2B1A4783604AB89B9B2C080E8A132D@SINPEX01CL02.citrite.net>
Message-ID: <CAMKdHYoj5D=pktVL+PMyPhF9KxzbZctDrkQx24QCw1W42FFH-Q@mail.gmail.com>

> But that approach looks a little untidy, because tenant admin has to do
some infrastructure work.

I would think infrastructure work would be part of the admin role. They are
doing other things such as creating LBaaS, which seems like an
infrastructure job to me. I would think configuring LBaaS and key
management are similar. It seems like you think they are not similar. Can
you explain more?

-Nate
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/70cc816c/attachment.html>

From major at mhtx.net  Fri Sep 18 13:03:06 2015
From: major at mhtx.net (Major Hayden)
Date: Fri, 18 Sep 2015 08:03:06 -0500
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
	that is the question
Message-ID: <55FC0B8A.4060303@mhtx.net>

Hey there,

I started working on a bug[1] last night about adding a managed NTP configuration to openstack-ansible hosts.  My patch[2] gets chrony up and running with configurable NTP servers, but I'm still struggling to meet the "Proposal" section of the bug where the author has asked for non-infra physical nodes to get their time from the infra nodes.  I can't figure out how to make it work for AIO builds when one physical host is part of all of the groups. ;)

I'd argue that time synchronization is critical for a few areas:

  1) Security/auditing when comparing logs
  2) Troubleshooting when comparing logs
  3) I've been told swift is time-sensitive
  4) MySQL/Galera don't like time drift

However, there's a strong argument that this should be done by deployers, and not via openstack-ansible.  I'm still *very* new to the project and I'd like to hear some feedback from other folks.

[1] https://bugs.launchpad.net/openstack-ansible/+bug/1413018
[2] https://review.openstack.org/#/c/225006/
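
For what it's worth, the selection logic the bug's "Proposal" section asks for can be sketched as below; the group and host names are illustrative assumptions, not the real openstack-ansible inventory:

```python
# Sketch: infra nodes sync against public pool servers; everything else
# syncs against the infra nodes. An AIO host is in every group, so it takes
# the infra branch and simply uses the pool, avoiding syncing to itself.
PUBLIC_POOL = ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"]

def ntp_sources(host, groups):
    """Return the list of NTP servers a given host should use."""
    infra = groups.get("infra_hosts", [])
    if host in infra:
        return PUBLIC_POOL
    # Non-infra nodes chain off the infra nodes, if any exist.
    return [h for h in infra if h != host] or PUBLIC_POOL

groups = {"infra_hosts": ["infra1", "infra2"], "compute_hosts": ["compute1"]}
print(ntp_sources("compute1", groups))  # ['infra1', 'infra2']
print(ntp_sources("aio1", {"infra_hosts": ["aio1"]}))  # the public pool
```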

--
Major Hayden


From ayoung at redhat.com  Fri Sep 18 13:12:47 2015
From: ayoung at redhat.com (Adam Young)
Date: Fri, 18 Sep 2015 09:12:47 -0400
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <20150918020452.GQ21846@jimrollenhagen.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
Message-ID: <55FC0DCF.7060307@redhat.com>

On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>> In the fuel project, we recently ran into a couple of issues with Apache2 +
>> mod_wsgi as we switched Keystone to run . Please see [1] and [2].
>>
>> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
>> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
>> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
>> something else better that people are already using.
>>
>> One data point that keeps coming up is, all the CI jobs use Apache2 +
>> mod_wsgi so it must be the best solution....Is it? If not, what is?
> Disclaimer: it's been a while since I've cared about performance with a
> web server in front of a Python app.
>
> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> on again. In general, I seem to remember it being thought of as a bit
> old and crusty, but mostly working.

I am not aware of that.  It has been the workhorse of the Python/wsgi 
world for a while, and we use it heavily.

>
> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> and saw a significant performance increase. This was a Django app. uwsgi
> is fairly straightforward to operate and comes loaded with a myriad of
> options[1] to help folks make the most of it. I've played with Ironic
> behind uwsgi and it seemed to work fine, though I haven't done any sort
> of load testing. I'd encourage folks to give it a shot. :)

Again, switching web servers is as likely to introduce as to solve 
problems.  If there are performance issues:

1.  Identify what causes them
2.  Change configuration settings to deal with them
3.  Fix upstream bugs in the underlying system.


Keystone is not about performance.  Keystone is about security.  The 
cloud is designed to scale horizontally first.  Before advocating 
switching to a difference web server, make sure it supports the 
technologies required.


1. TLS at the latest level
2. Kerberos/GSSAPI/SPNEGO
3. X509 Client cert validation
4. SAML

OpenID connect would be a good one to add to the list;  Its been 
requested for a while.

If Keystone is having performance issues, it is most likely at the 
database layer, not the web server.
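
As a reminder of how small the interface under discussion is: whichever server wins (mod_wsgi, uwsgi, gunicorn), they all host the same plain WSGI callable. A minimal sketch, not Keystone's actual entry point:

```python
# Minimal WSGI application of the kind Apache2+mod_wsgi, uwsgi, or gunicorn
# would all host interchangeably; the server choice is invisible to the app.
def application(environ, start_response):
    body = b"hello from wsgi\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the callable directly, the way a WSGI server would.
def call(app, path="/"):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = dict(headers)
    body = b"".join(app({"PATH_INFO": path, "REQUEST_METHOD": "GET"},
                        start_response))
    return captured["status"], body

status, body = call(application)
print(status, body)  # 200 OK b'hello from wsgi\n'
```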



"Programmers waste enormous amounts of time thinking about, or worrying 
about, the speed of noncritical parts of their programs, and these 
attempts at efficiency actually have a strong negative impact when 
debugging and maintenance are considered. We /should/ forget about small 
efficiencies, say about 97% of the time: *premature optimization is the 
root of all evil.* Yet we should not pass up our opportunities in that 
critical 3%." --Donald Knuth



>
> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>
> gunicorn[2] is another good option that may be worth investigating; I
> personally don't have any experience with it, but I seem to remember
> hearing it has good eventlet support.
>
> // jim
>
> [0] https://uwsgi-docs.readthedocs.org/en/latest/
> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> [2] http://gunicorn.org/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/3cae3b10/attachment.html>

From mmosesohn at mirantis.com  Fri Sep 18 13:21:00 2015
From: mmosesohn at mirantis.com (Matthew Mosesohn)
Date: Fri, 18 Sep 2015 16:21:00 +0300
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <55FC0B8A.4060303@mhtx.net>
References: <55FC0B8A.4060303@mhtx.net>
Message-ID: <CA+CvLD6dR1RaEEvtKazuh1PTi4j=ULb6r06zo4Jduq+2Sg+ytg@mail.gmail.com>

Major,

in Fuel, we've dealt with this problem for a long time in its varying
degrees of unpleasantness. Some virtualization platforms, such as
VirtualBox, are very prone to time drift. Hardware nodes, thankfully, don't
suffer so badly.

Time sync is very important for RabbitMQ, Corosync, and Ceph, in addition
to those items you mentioned above. I haven't seen swift itself break due
to time issues, but you may be right.

The ideal situation is to point all hosts to public NTP pool servers.
Barring that, elect one host to base its time on its hardware clock, and
then direct all other hosts to sync time against it. This approach has
major issues when you're doing virtual deployments with snapshot/revert and
experiencing major time skew, so you may need extra VM management scripts
to manually sync time again after a revert.
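As a hedged sketch of the fallback approach described above (one elected host serving time from its hardware clock, all other hosts syncing against it), a chrony configuration could look like the following. The hostname and subnet are placeholders, not values from any real deployment:

```
# /etc/chrony/chrony.conf on the elected "time master" host
# (illustrative; subnet is a placeholder)
# Serve time to the local subnet even with no upstream source,
# using the local hardware clock as the reference.
local stratum 8
allow 10.0.0.0/24

# /etc/chrony/chrony.conf on every other host
# Sync only against the elected master; step the clock on large
# offsets (e.g. after a snapshot revert) during the first updates.
server time-master.example.local iburst
makestep 1.0 3
```

The `makestep` directive is what helps with the snapshot/revert skew problem: it allows chrony to step the clock rather than slew it when the offset is large.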


Best Regards,
Matthew Mosesohn

On Fri, Sep 18, 2015 at 4:03 PM, Major Hayden <major at mhtx.net> wrote:

> Hey there,
>
> I started working on a bug[1] last night about adding a managed NTP
> configuration to openstack-ansible hosts.  My patch[2] gets chrony up and
> running with configurable NTP servers, but I'm still struggling to meet the
> "Proposal" section of the bug where the author has asked for non-infra
> physical nodes to get their time from the infra nodes.  I can't figure out
> how to make it work for AIO builds when one physical host is part of all of
> the groups. ;)
>
> I'd argue that time synchronization is critical for a few areas:
>
>   1) Security/auditing when comparing logs
>   2) Troubleshooting when comparing logs
>   3) I've been told swift is time-sensitive
>   4) MySQL/Galera don't like time drift
>
> However, there's a strong argument that this should be done by deployers,
> and not via openstack-ansible.  I'm still *very* new to the project and I'd
> like to hear some feedback from other folks.
>
> [1] https://bugs.launchpad.net/openstack-ansible/+bug/1413018
> [2] https://review.openstack.org/#/c/225006/
>
> --
> Major Hayden
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/0f89f272/attachment.html>

From morgan.fainberg at gmail.com  Fri Sep 18 13:44:19 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 18 Sep 2015 06:44:19 -0700
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <55FC0DCF.7060307@redhat.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com> <55FC0DCF.7060307@redhat.com>
Message-ID: <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>

There is and has been desire to support uWSGI and other alternatives to mod_wsgi. There are a variety of operational reasons to consider uWSGI and/or gunicorn behind Apache, most notably to facilitate easier management of the processes independently of the web server itself. With mod_wsgi the processes are directly tied to the Apache server, whereas with uWSGI and gunicorn you can manage the various services independently and/or with differing venvs more easily. 

There are other potential concerns that must be weighed when considering which method of deployment to use. I hope we have clear documentation within the next cycle (and possibly choices for the gate) for utilizing uWSGI and/or gunicorn. 
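To make the process-management point concrete, here is a hedged sketch of running a WSGI service under uWSGI from its own venv, independent of the fronting web server. The paths and socket address are illustrative assumptions, not a documented OpenStack deployment:

```ini
; keystone-uwsgi.ini (illustrative; all paths are assumptions)
[uwsgi]
; run from a dedicated virtualenv, independent of the web server
venv = /opt/openstack/keystone-venv
wsgi-file = /opt/openstack/keystone-venv/bin/keystone-wsgi-public
; speak the uwsgi protocol to a fronting Apache or nginx
socket = 127.0.0.1:5001
master = true
processes = 4
threads = 2
; restart workers without touching the web server at all
touch-reload = /opt/openstack/keystone-uwsgi.ini
```

With this layout, reloading or upgrading the service (touch the ini file, or restart the uWSGI master) never requires an `apache2ctl graceful`, which was exactly the pain point raised upthread.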

--Morgan

Sent via mobile

> On Sep 18, 2015, at 06:12, Adam Young <ayoung at redhat.com> wrote:
> 
>> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>>> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>>> In the fuel project, we recently ran into a couple of issues with Apache2 +
>>> mod_wsgi as we switched Keystone to run . Please see [1] and [2].
>>> 
>>> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
>>> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
>>> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
>>> something else better that people are already using.
>>> 
>>> One data point that keeps coming up is, all the CI jobs use Apache2 +
>>> mod_wsgi so it must be the best solution....Is it? If not, what is?
>> Disclaimer: it's been a while since I've cared about performance with a
>> web server in front of a Python app.
>> 
>> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
>> on again. In general, I seem to remember it being thought of as a bit
>> old and crusty, but mostly working.
> 
> I am not aware of that.  It has been the workhorse of the Python/wsgi world for a while, and we use it heavily.
> 
>> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
>> and saw a significant performance increase. This was a Django app. uwsgi
>> is fairly straightforward to operate and comes loaded with a myriad of
>> options[1] to help folks make the most of it. I've played with Ironic
>> behind uwsgi and it seemed to work fine, though I haven't done any sort
>> of load testing. I'd encourage folks to give it a shot. :)
> 
> Again, switching web servers is as likely to introduce as to solve problems.  If there are performance issues:
> 
> 1.  Identify what causes them
> 2.  Change configuration settings to deal with them
> 3.  Fix upstream bugs in the underlying system.
> 
> 
> Keystone is not about performance.  Keystone is about security.  The cloud is designed to scale horizontally first.  Before advocating switching to a different web server, make sure it supports the technologies required.
> 
> 
> 1. TLS at the latest level
> 2. Kerberos/GSSAPI/SPNEGO
> 3. X509 Client cert validation
> 4. SAML
> 
> OpenID Connect would be a good one to add to the list; it's been requested for a while.
> 
> If Keystone is having performance issues, it is most likely at the database layer, not the web server.
> 
> 
> 
> "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."   --Donald Knuth
>  
> 
> 
>> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>> 
>> gunicorn[2] is another good option that may be worth investigating; I
>> personally don't have any experience with it, but I seem to remember
>> hearing it has good eventlet support.
>> 
>> // jim
>> 
>> [0] https://uwsgi-docs.readthedocs.org/en/latest/
>> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
>> [2] http://gunicorn.org/
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/d2e71fe2/attachment.html>

From vkuklin at mirantis.com  Fri Sep 18 13:54:02 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Fri, 18 Sep 2015 16:54:02 +0300
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <55FC0DCF.7060307@redhat.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <55FC0DCF.7060307@redhat.com>
Message-ID: <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>

Folks

I think we do not need to switch to nginx-only or consider any kind of war
between nginx and apache adherents. Everyone should be able to use the
web server he or she needs without being pinned to an unwanted one. It is
like the Postgres vs MySQL war. Why not support both?

Maybe someone does not need something that apache supports and nginx does
not, and needs nginx features which apache does not support. Let's let our
users decide what they want.

And the first step should be simple here - support for uwsgi. It will allow
for usage of any web server that can work with uwsgi. It will also allow us
to check for the support of all apache-like bindings like SPNEGO or
whatever and provide our users with enough info for making decisions. I did
not personally test nginx modules for SAML and SPNEGO, but I am pretty
confident about the TLS/SSL parts of nginx.

Moreover, nginx will allow you to do things you cannot do with apache, e.g.
smart load balancing, which may be crucial for high-load installations.
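As a hedged illustration of the "any web server that speaks uwsgi" point above, an nginx front end could talk to a uWSGI-hosted service like this. The server name, ports, and upstream name are placeholders, not any project's documented configuration:

```
# nginx site config (illustrative placeholders throughout)
upstream keystone_uwsgi {
    server 127.0.0.1:5001;
}

server {
    listen 5000;
    server_name keystone.example.local;

    location / {
        # speak the binary uwsgi protocol to the app server
        include uwsgi_params;
        uwsgi_pass keystone_uwsgi;
    }
}
```

Because the app server is addressed over a socket, the same uWSGI instance could equally sit behind Apache (mod_proxy_uwsgi) or any other front end that speaks the protocol.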


On Fri, Sep 18, 2015 at 4:12 PM, Adam Young <ayoung at redhat.com> wrote:

> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>
> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>
> In the fuel project, we recently ran into a couple of issues with Apache2 +
> mod_wsgi as we switched Keystone to run . Please see [1] and [2].
>
> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> something else better that people are already using.
>
> One data point that keeps coming up is, all the CI jobs use Apache2 +
> mod_wsgi so it must be the best solution....Is it? If not, what is?
>
> Disclaimer: it's been a while since I've cared about performance with a
> web server in front of a Python app.
>
> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> on again. In general, I seem to remember it being thought of as a bit
> old and crusty, but mostly working.
>
>
> I am not aware of that.  It has been the workhorse of the Python/wsgi
> world for a while, and we use it heavily.
>
>
> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> and saw a significant performance increase. This was a Django app. uwsgi
> is fairly straightforward to operate and comes loaded with a myriad of
> options[1] to help folks make the most of it. I've played with Ironic
> behind uwsgi and it seemed to work fine, though I haven't done any sort
> of load testing. I'd encourage folks to give it a shot. :)
>
>
> Again, switching web servers is as likely to introduce as to solve
> problems.  If there are performance issues:
>
> 1.  Identify what causes them
> 2.  Change configuration settings to deal with them
> 3.  Fix upstream bugs in the underlying system.
>
>
> Keystone is not about performance.  Keystone is about security.  The cloud
> is designed to scale horizontally first.  Before advocating switching to a
> different web server, make sure it supports the technologies required.
>
>
> 1. TLS at the latest level
> 2. Kerberos/GSSAPI/SPNEGO
> 3. X509 Client cert validation
> 4. SAML
>
> OpenID Connect would be a good one to add to the list; it's been requested
> for a while.
>
> If Keystone is having performance issues, it is most likely at the
> database layer, not the web server.
>
>
>
> "Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these attempts
> at efficiency actually have a strong negative impact when debugging and
> maintenance are considered. We *should* forget about small efficiencies,
> say about 97% of the time: *premature optimization is the root of all
> evil.* Yet we should not pass up our opportunities in that critical
> 3%."   --Donald Knuth
>
>
>
>
> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>
> gunicorn[2] is another good option that may be worth investigating; I
> personally don't have any experience with it, but I seem to remember
> hearing it has good eventlet support.
>
> // jim
>
> [0] https://uwsgi-docs.readthedocs.org/en/latest/
> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> [2] http://gunicorn.org/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/1d54f240/attachment.html>

From rakhmerov at mirantis.com  Fri Sep 18 13:55:58 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Fri, 18 Sep 2015 16:55:58 +0300
Subject: [openstack-dev] [mistral] Define better terms for WAITING and
	DELAYED states
Message-ID: <156DBF58-7BBB-49D4-A229-DD1B96896532@mirantis.com>

Hello,

We have a bug [1] that addresses a small semantic issue in the names of the states for workflow and task executions: WAITING and DELAYED.
I'm really interested in your opinion about this, especially native English speakers' opinions because, IMO, they would be better able to challenge what we're discussing.

Problem description

We now have a set of states:
IDLE - Nothing is going on; the object was just created
RUNNING - Workflow/task is running
PAUSED - Workflow/task has been paused
SUCCESS - Workflow/task has completed successfully
ERROR - Workflow/task has completed with an error
WAITING - Task execution object has been created but it is not ready to start because some preconditions were not met. For now it mostly refers to a case when we have a "join" task depending on a number of other tasks, e.g. "task1" depends on "task2" and "task3". But say "task2" has completed and "task3" has not, and hence "task1" has to wait. I may assume that in the future it may be related not only to joins.
DELAYED - Task has been delayed for a certain number of seconds. It may happen, for example, in case of using the "retry" policy.

So the semantic difference between WAITING and DELAYED is the following: unlike WAITING, DELAYED says that we know for sure that the task will run; it's just a matter of time. In the case of WAITING, it may never run, because some of the preconditions may never be met.

And the concern is that we probably don't use good names for WAITING and DELAYED because, from an English language perspective, they look similar to a number of folks (including myself), and it's therefore confusing to look at two tasks with states WAITING and DELAYED.

The latest idea that we had is to rename DELAYED to POSTPONED because the latter expresses the fact of being postponed for a certain period of time slightly better :) But I'm really not sure.
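To make the proposal concrete, here is a minimal sketch of the state set with the suggested rename applied. The enum name and layout are purely illustrative, not Mistral's actual code:

```python
from enum import Enum


class State(Enum):
    """Illustrative workflow/task execution states (not Mistral's real module)."""

    IDLE = "IDLE"            # object created, nothing going on yet
    RUNNING = "RUNNING"      # workflow/task is running
    PAUSED = "PAUSED"        # workflow/task has been paused
    SUCCESS = "SUCCESS"      # completed successfully
    ERROR = "ERROR"          # completed with an error
    WAITING = "WAITING"      # preconditions not met; may never run
    POSTPONED = "POSTPONED"  # proposed rename of DELAYED: will run after a known delay


# The semantic split under discussion: POSTPONED is guaranteed to
# run eventually, WAITING is not.
GUARANTEED_TO_RUN = {State.POSTPONED}
MAY_NEVER_RUN = {State.WAITING}

print(State.POSTPONED.name)  # -> POSTPONED
```

Whether POSTPONED reads as more clearly "guaranteed to run later" than DELAYED is exactly the question for the native speakers on the list.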

Would appreciate your input on this.

Thanks

[1] https://bugs.launchpad.net/mistral/+bug/1470369 <https://bugs.launchpad.net/mistral/+bug/1470369>

Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/6b4d52a7/attachment.html>

From amakarov at mirantis.com  Fri Sep 18 14:28:39 2015
From: amakarov at mirantis.com (Alexander Makarov)
Date: Fri, 18 Sep 2015 17:28:39 +0300
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com> <55FC0DCF.7060307@redhat.com>
 <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>
Message-ID: <CAKb2=12gEBNYhY=uGsyWfv4UCc+c3kmEPF4coUDKqObzb0Eqxg@mail.gmail.com>

Please consider that we use some apache mods - does
nginx/uwsgi/gunicorn have oauth, shibboleth & openid support?

On Fri, Sep 18, 2015 at 4:54 PM, Vladimir Kuklin <vkuklin at mirantis.com> wrote:
> Folks
>
> I think we do not need to switch to nginx-only or consider any kind of war
> between nginx and apache adherents. Everyone should be able to use
> web-server he or she needs without being pinned to the unwanted one. It is
> like Postgres vs MySQL war. Why not support both?
>
> May be someone does not need something that apache supports and nginx not
> and needs nginx features which apache does not support. Let's let our users
> decide what they want.
>
> And the first step should be simple here - support for uwsgi. It will allow
> for usage of any web-server that can work with uwsgi. It will allow also us
> to check for the support of all apache-like bindings like SPNEGO or whatever
> and provide our users with enough info on making decisions. I did not
> personally test nginx modules for SAML and SPNEGO, but I am pretty confident
> about TLS/SSL parts of nginx.
>
> Moreover, nginx will allow you to do things you cannot do with apache, e.g.
> do smart load balancing, which may be crucial for high-loaded installations.
>
>
> On Fri, Sep 18, 2015 at 4:12 PM, Adam Young <ayoung at redhat.com> wrote:
>>
>> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>>
>> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>>
>> In the fuel project, we recently ran into a couple of issues with Apache2
>> +
>> mod_wsgi as we switched Keystone to run . Please see [1] and [2].
>>
>> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
>> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
>> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
>> something else better that people are already using.
>>
>> One data point that keeps coming up is, all the CI jobs use Apache2 +
>> mod_wsgi so it must be the best solution....Is it? If not, what is?
>>
>> Disclaimer: it's been a while since I've cared about performance with a
>> web server in front of a Python app.
>>
>> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
>> on again. In general, I seem to remember it being thought of as a bit
>> old and crusty, but mostly working.
>>
>>
>> I am not aware of that.  It has been the workhorse of the Python/wsgi
>> world for a while, and we use it heavily.
>>
>> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
>> and saw a significant performance increase. This was a Django app. uwsgi
>> is fairly straightforward to operate and comes loaded with a myriad of
>> options[1] to help folks make the most of it. I've played with Ironic
>> behind uwsgi and it seemed to work fine, though I haven't done any sort
>> of load testing. I'd encourage folks to give it a shot. :)
>>
>>
>> Again, switching web servers is as likely to introduce as to solve
>> problems.  If there are performance issues:
>>
>> 1.  Identify what causes them
>> 2.  Change configuration settings to deal with them
>> 3.  Fix upstream bugs in the underlying system.
>>
>>
>> Keystone is not about performance.  Keystone is about security.  The cloud
>> is designed to scale horizontally first.  Before advocating switching to a
>> different web server, make sure it supports the technologies required.
>>
>>
>> 1. TLS at the latest level
>> 2. Kerberos/GSSAPI/SPNEGO
>> 3. X509 Client cert validation
>> 4. SAML
>>
>> OpenID Connect would be a good one to add to the list; it's been requested
>> for a while.
>>
>> If Keystone is having performance issues, it is most likely at the
>> database layer, not the web server.
>>
>>
>>
>> "Programmers waste enormous amounts of time thinking about, or worrying
>> about, the speed of noncritical parts of their programs, and these attempts
>> at efficiency actually have a strong negative impact when debugging and
>> maintenance are considered. We should forget about small efficiencies, say
>> about 97% of the time: premature optimization is the root of all evil. Yet
>> we should not pass up our opportunities in that critical 3%."   --Donald
>> Knuth
>>
>>
>>
>> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>>
>> gunicorn[2] is another good option that may be worth investigating; I
>> personally don't have any experience with it, but I seem to remember
>> hearing it has good eventlet support.
>>
>> // jim
>>
>> [0] https://uwsgi-docs.readthedocs.org/en/latest/
>> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
>> [2] http://gunicorn.org/
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuklin at mirantis.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP


From flavio at redhat.com  Fri Sep 18 14:29:46 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 18 Sep 2015 16:29:46 +0200
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <55FC0DCF.7060307@redhat.com>
 <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>
Message-ID: <20150918142946.GQ29319@redhat.com>

On 18/09/15 06:44 -0700, Morgan Fainberg wrote:
>There is and has been desire to support uWSGI and other alternatives to
>mod_wsgi. There are a variety of operational reasons to consider uWSGI and/or
>gunicorn behind apache most notably to facilitate easier management of the
>processes independently of the webserver itself. With mod_wsgi the processes
>are directly tied to the apache server where as with uWSGI and gunicorn you can
>manage the various services independently and/or with differing VENVs more
>easily. 
>
>There are potential other concerns that must be weighed when considering which
>method of deployment to use. I hope we have clear documentation within the next
>cycle (and possible choices for the gate) for utilizing uWSGI and/or gunicorn. 


+1

FWIW, Zaqar has always shipped as a WSGI app, and the container the
team has recommended ever since it was first put in production is
uWSGI. uWSGI is already used by Zaqar in the gate, but it's being
installed independently.

Flavio

>
>--Morgan
>
>Sent via mobile
>
>On Sep 18, 2015, at 06:12, Adam Young <ayoung at redhat.com> wrote:
>
>
>    On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>
>        On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>
>            In the fuel project, we recently ran into a couple of issues with Apache2 +
>            mod_wsgi as we switched Keystone to run . Please see [1] and [2].
>
>            Looking deep into Apache2 issues specifically around "apache2ctl graceful"
>            and module loading/unloading and the hooks used by mod_wsgi [3]. I started
>            wondering if Apache2 + mod_wsgi is the "right" solution and if there was
>            something else better that people are already using.
>
>            One data point that keeps coming up is, all the CI jobs use Apache2 +
>            mod_wsgi so it must be the best solution....Is it? If not, what is?
>
>        Disclaimer: it's been a while since I've cared about performance with a
>        web server in front of a Python app.
>
>        IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
>        on again. In general, I seem to remember it being thought of as a bit
>        old and crusty, but mostly working.
>
>
>    I am not aware of that.  It has been the workhorse of the Python/wsgi world
>    for a while, and we use it heavily.
>
>
>        At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
>        and saw a significant performance increase. This was a Django app. uwsgi
>        is fairly straightforward to operate and comes loaded with a myriad of
>        options[1] to help folks make the most of it. I've played with Ironic
>        behind uwsgi and it seemed to work fine, though I haven't done any sort
>        of load testing. I'd encourage folks to give it a shot. :)
>
>
>    Again, switching web servers is as likely to introduce as to solve
>    problems.  If there are performance issues:
>
>    1.  Identify what causes them
>    2.  Change configuration settings to deal with them
>    3.  Fix upstream bugs in the underlying system.
>
>
>    Keystone is not about performance.  Keystone is about security.  The cloud
>    is designed to scale horizontally first.  Before advocating switching to a
>    difference web server, make sure it supports the technologies required.
>
>
>    1. TLS at the latest level
>    2. Kerberos/GSSAPI/SPNEGO
>    3. X509 Client cert validation
>    4. SAML
>
>    OpenID Connect would be a good one to add to the list; it's been requested
>    for a while.
>
>    If Keystone is having performance issues, it is most likely at the database
>    layer, not the web server.
>
>
>
>    "Programmers waste enormous amounts of time thinking about, or worrying
>    about, the speed of noncritical parts of their programs, and these attempts
>    at efficiency actually have a strong negative impact when debugging and
>    maintenance are considered. We should forget about small efficiencies, say
>    about 97% of the time: premature optimization is the root of all evil. Yet
>    we should not pass up our opportunities in that critical 3%."   --Donald
>    Knuth
>     
>
>
>
>        Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>
>        gunicorn[2] is another good option that may be worth investigating; I
>        personally don't have any experience with it, but I seem to remember
>        hearing it has good eventlet support.
>
>        // jim
>
>        [0] https://uwsgi-docs.readthedocs.org/en/latest/
>        [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
>        [2] http://gunicorn.org/
>
>        __________________________________________________________________________
>        OpenStack Development Mailing List (not for usage questions)
>        Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>    __________________________________________________________________________
>    OpenStack Development Mailing List (not for usage questions)
>    Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/1c79c5d4/attachment.pgp>

From bbobrov at mirantis.com  Fri Sep 18 14:32:30 2015
From: bbobrov at mirantis.com (Boris Bobrov)
Date: Fri, 18 Sep 2015 19:32:30 +0500
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <55FC0DCF.7060307@redhat.com>
 <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>
Message-ID: <2264316.uPXFtoJfFz@breton-pc>

There are two dimensions this discussion should happen in: web server and
application server. Right now we use apache2 as the web server and mod_wsgi
as the app server.

I don't have a specific opinion on the app server (mod_wsgi vs uwsgi) and I
don't really care.

Regarding apache2 vs nginx: I don't see any reason for the switch. Apache2 is
well known to deployers and sysadmins. It is very rich in modules. I wonder
if there are custom-written modules.

On Friday 18 September 2015 16:54:02 Vladimir Kuklin wrote:
> Folks
> 
> I think we do not need to switch to nginx-only or consider any kind of war
> between nginx and apache adherents. Everyone should be able to use
> web-server he or she needs without being pinned to the unwanted one. It is
> like Postgres vs MySQL war. Why not support both?

Why nginx? Why not lighttpd? OpenLitespeed? Litespeed? <insert your web 
server>?

What do you understand by "support both"? I understand it as "both are tested
in devstack". Apache2 is supported because you can set up devstack and
everything works.

There are things in keystone that work under apache. They are not tested. They
were written to work under apache because it's the simplest and most
standard way to do it. Making them work in nginx means forcing developers to
write some code. Are you ready to do that?

> May be someone does not need something that apache supports and nginx not
> and needs nginx features which apache does not support. Let's let our users
> decide what they want.
> 
> And the first step should be simple here - support for uwsgi.

Why uwsgi? Why not gunicorn? Cherrypy? Twisted?

> It will allow
> for usage of any web-server that can work with uwsgi. It will also allow us
> to check for the support of all apache-like bindings like SPNEGO or
> whatever and provide our users with enough info on making decisions. I did
> not personally test nginx modules for SAML and SPNEGO, but I am pretty
> confident about TLS/SSL parts of nginx.
> 
> Moreover, nginx will allow you to do things you cannot do with apache, e.g.
> do smart load balancing, which may be crucial for high-loaded installations.
> On Fri, Sep 18, 2015 at 4:12 PM, Adam Young <ayoung at redhat.com> wrote:
> > On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
> > 
> > On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> > 
> > In the fuel project, we recently ran into a couple of issues with Apache2
> > +
> > mod_wsgi as we switched Keystone to run under it. Please see [1] and [2].
> > 
> > Looking deeper into Apache2 issues, specifically around "apache2ctl graceful",
> > module loading/unloading, and the hooks used by mod_wsgi [3], I started
> > wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> > something else better that people are already using.
> > 
> > One data point that keeps coming up is, all the CI jobs use Apache2 +
> > mod_wsgi so it must be the best solution....Is it? If not, what is?
> > 
> > Disclaimer: it's been a while since I've cared about performance with a
> > web server in front of a Python app.
> > 
> > IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> > on again. In general, I seem to remember it being thought of as a bit
> > old and crusty, but mostly working.
> > 
> > 
> > I am not aware of that.  It has been the workhorse of the Python/wsgi
> > world for a while, and we use it heavily.
> > 
> > 
> > At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> > and saw a significant performance increase. This was a Django app. uwsgi
> > is fairly straightforward to operate and comes loaded with a myriad of
> > options[1] to help folks make the most of it. I've played with Ironic
> > behind uwsgi and it seemed to work fine, though I haven't done any sort
> > of load testing. I'd encourage folks to give it a shot. :)
> > 
> > 
> > Again, switching web servers is as likely to introduce as to solve
> > problems.  If there are performance issues:
> > 
> > 1.  Identify what causes them
> > 2.  Change configuration settings to deal with them
> > 3.  Fix upstream bugs in the underlying system.
> > 
> > 
> > Keystone is not about performance.  Keystone is about security.  The cloud
> > is designed to scale horizontally first.  Before advocating switching to a
> > different web server, make sure it supports the technologies required.
> > 
> > 
> > 1. TLS at the latest level
> > 2. Kerberos/GSSAPI/SPNEGO
> > 3. X509 Client cert validation
> > 4. SAML
> > 
> > OpenID connect would be a good one to add to the list;  Its been requested
> > for a while.
> > 
> > If Keystone is having performance issues, it is most likely at the
> > database layer, not the web server.
> > 
> > 
> > 
> > "Programmers waste enormous amounts of time thinking about, or worrying
> > about, the speed of noncritical parts of their programs, and these
> > attempts
> > at efficiency actually have a strong negative impact when debugging and
> > maintenance are considered. We *should* forget about small efficiencies,
> > say about 97% of the time: *premature optimization is the root of all
> > evil.* Yet we should not pass up our opportunities in that critical
> > 3%."   --Donald Knuth
> > 
> > 
> > 
> > 
> > Of course, uwsgi can also be run behind Apache2, if you'd prefer.
> > 
> > gunicorn[2] is another good option that may be worth investigating; I
> > personally don't have any experience with it, but I seem to remember
> > hearing it has good eventlet support.
> > 
> > // jim
> > 
> > [0] https://uwsgi-docs.readthedocs.org/en/latest/
> > [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> > [2] http://gunicorn.org/
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
With best wishes,
Boris


From aj at suse.com  Fri Sep 18 14:46:24 2015
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 18 Sep 2015 16:46:24 +0200
Subject: [openstack-dev] [nova][i18n] Is there any point in using _()
 in python-novaclient?
In-Reply-To: <55EF0334.3030606@linux.vnet.ibm.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
 <55EF0334.3030606@linux.vnet.ibm.com>
Message-ID: <55FC23C0.3040105@suse.com>

With the limited resources that the translation team has, we should not 
translate the clients but concentrate on the openstackclient, as 
discussed here:

http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001402.html

Andreas
-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
        HRB 21284 (AG Nürnberg)
     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



From ian.cordasco at RACKSPACE.COM  Fri Sep 18 15:04:25 2015
From: ian.cordasco at RACKSPACE.COM (Ian Cordasco)
Date: Fri, 18 Sep 2015 15:04:25 +0000
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <55FC0B8A.4060303@mhtx.net>
References: <55FC0B8A.4060303@mhtx.net>
Message-ID: <D2219136.1CE8C%ian.cordasco@rackspace.com>



On 9/18/15, 08:03, "Major Hayden" <major at mhtx.net> wrote:

>Hey there,
>
>I started working on a bug[1] last night about adding a managed NTP
>configuration to openstack-ansible hosts.  My patch[2] gets chrony up and
>running with configurable NTP servers, but I'm still struggling to meet
>the "Proposal" section of the bug where the author has asked for
>non-infra physical nodes to get their time from the infra nodes.  I can't
>figure out how to make it work for AIO builds when one physical host is
>part of all of the groups. ;)
>
>I'd argue that time synchronization is critical for a few areas:
>
>  1) Security/auditing when comparing logs
>  2) Troubleshooting when comparing logs
>  3) I've been told swift is time-sensitive
>  4) MySQL/Galera don't like time drift
>
>However, there's a strong argument that this should be done by deployers,
>and not via openstack-ansible.  I'm still *very* new to the project and
>I'd like to hear some feedback from other folks.

Personally, I fall into the camp of "this is a deployer concern".
Specifically, there is already an ansible-galaxy role to enable NTP on
your deployment hosts (https://galaxy.ansible.com/list#/roles/464) which
*could* be expanded to do this very work that you're talking about. Using
specialized roles to achieve this (and contributing back to the larger
ansible community) seems like a bigger win than trying to reimplement some
of this in OSA instead of reusing other roles that already exist.

Compare it to a hypothetical situation where Keystone wrote its own
backing libraries to implement Fernet instead of using the cryptography
library. In that case there would be absolutely no argument that Keystone
should use cryptography (even if it uses cffi and has bindings to OpenSSL
which our infra team doesn't like and some deployers find difficult to
manage when using pure-python deployment tooling). Why should OSA be any
different from another OpenStack project?

Cheers,
Ian


From chris.friesen at windriver.com  Fri Sep 18 15:06:31 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Fri, 18 Sep 2015 09:06:31 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FC0A26.4080806@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com>
Message-ID: <55FC2877.4040903@windriver.com>

On 09/18/2015 06:57 AM, Eric Harney wrote:
> On 09/17/2015 06:06 PM, John Griffith wrote:

>> Having the "global conf" settings intermixed with the backend sections
>> caused a number of issues when we first started working on this.  That's
>> part of why we require the "self.configuration" usage all over in the
>> drivers.  Each driver instantiation is it's own independent entity.
>>
>
> Yes, each driver instantiation is independent, but that would still be
> the case if these settings inherited values set in [DEFAULT] when they
> aren't set in the backend section.

Agreed.  If I explicitly set something in the [DEFAULT] section, that should 
carry through and apply to all the backends unless overridden in the 
backend-specific section.
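
For what it's worth, the fallback behaviour being described matches the
DEFAULT-section semantics of Python's standard configparser. A tiny sketch
(the section and option names are made up purely for illustration):

```python
import configparser

# Sketch of the desired semantics: a value set in [DEFAULT] applies to
# every backend section unless that section explicitly overrides it.
# Section/option names here are illustrative, not real cinder config.
cfg = configparser.ConfigParser()
cfg.read_string("""
[DEFAULT]
volume_clear = zero

[lvm-1]

[lvm-2]
volume_clear = none
""")

print(cfg.get("lvm-1", "volume_clear"))  # inherited from [DEFAULT]
print(cfg.get("lvm-2", "volume_clear"))  # overridden in the backend section
```

Per-group options registered with oslo.config don't appear to fall back to
[DEFAULT] this way, which seems to be exactly the gap under discussion.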

Chris


From carl at ecbaldwin.net  Fri Sep 18 15:18:37 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Fri, 18 Sep 2015 09:18:37 -0600
Subject: [openstack-dev] [Neutron] Separate floating IP pools?
In-Reply-To: <A0C170085C37664D93EE1604364858A11FE46F5B@G9W0763.americas.hpqcorp.net>
References: <A0C170085C37664D93EE1604364858A11FE46F5B@G9W0763.americas.hpqcorp.net>
Message-ID: <CALiLy7oH_UkwNg93SJtUssh9rzkobVdrJC9KDAOs50us7ZMXWQ@mail.gmail.com>

On Fri, Sep 18, 2015 at 4:55 AM, Clark, Robert Graham
<robert.clark at hp.com> wrote:
> Is it possible to have separate floating-IP pools and grant a tenant access
> to only some of them?

It is possible to have multiple floating IP pools by creating multiple
external networks.  However, it is not currently possible to have
multiple pools on a single external network.  This is a modeling
limitation.  Also, it is not possible to do any kind of RBAC on
multiple pools.  Currently the semantics of floating ips are that all
tenants have access to them implicitly.  Essentially, marking a
network as external makes that network visible to any tenant wishing
to attach a router and allows them to also allocate floating IPs.

> Thought popped into my head while looking at the rbac-network spec here:
> https://review.openstack.org/#/c/132661/4/specs/liberty/rbac-networks.rst

This could be a possible future direction after this RBAC work is
completed and released.  However, there are no concrete plans around
this yet.

> Creating individual pools, allowing only some tenants access and having
> off-cloud network ACLs would get part way to satisfying the use cases that
> drive the above spec (I'm thinking of this as a more short-term solution,
> certainly not a direct alternative).

Maybe you could tell us more about the use case you're after so that
we can understand the motivation behind it.  For example, are you
thinking about multiple pools on the same external network or
different external networks? Help us understand what you're trying to
enable and why.

> I'm sure this is answered elsewhere but I couldn't find any direct
> information, so I'm assuming no, it isn't supported, but I wonder how much
> effort would be required to make it work?
>
> -Rob
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From berrange at redhat.com  Fri Sep 18 15:23:46 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Fri, 18 Sep 2015 16:23:46 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
Message-ID: <20150918152346.GI16906@redhat.com>

On Fri, Sep 18, 2015 at 11:53:05AM +0000, Murray, Paul (HP Cloud) wrote:
> Hi All,
> 
> There are various efforts going on around live migration at the moment:
> fixing up CI, bug fixes, additions to cover more corner cases, proposals
> for new operations....
> 
> Generally live migration could do with a little TLC (see: [1]), so I am
> going to suggest we give some of that care in the next cycle.
> 
> Please respond to this post if you have an interest in this and what you
> would like to see done. Include anything you are already getting on with
> so we get a clear picture. If there is enough interest I'll put this
> together as a proposal for a work stream. Something along the lines of
> "robustify live migration".

We merged some robustness improvements for migration during Liberty.
Specifically, with KVM we now track the progress of data transfer
and if it is not making forward progress during a set window of
time, we will abort the migration. This ensures you don't get a
migration that never ends. We also now have code which dynamically
increases the max permitted downtime during switchover, to try and
make it more likely to succeed. We could do with getting feedback
on how well the various tunable settings work in practice for real-world
deployments, to see if we need to change any defaults.
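
The no-progress abort described above can be sketched as a simple watchdog
loop. The callables here are hypothetical stand-ins for the libvirt job-info
and abort-job calls; this is the idea, not the actual nova code:

```python
import time

def watch_migration(get_bytes_remaining, abort, window=30, poll=1):
    """Abort a migration that stops making forward progress.

    If the remaining-data counter has not decreased within `window`
    seconds, cancel the migration rather than let it run forever.
    `get_bytes_remaining` and `abort` are hypothetical callables
    standing in for the libvirt migration job API.
    """
    best = float("inf")
    last_progress = time.monotonic()
    while True:
        remaining = get_bytes_remaining()
        if remaining == 0:
            return True                      # migration completed
        if remaining < best:
            best = remaining                 # forward progress seen
            last_progress = time.monotonic()
        elif time.monotonic() - last_progress > window:
            abort()                          # stuck: cancel the migration
            return False
        time.sleep(poll)
```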

There was a proposal to nova to allow the 'pause' operation to be
invoked while migration was happening. This would turn a live
migration into a coma-migration, thereby ensuring it succeeds.
I can't remember if this merged or not, as I can't find the review
offhand, but it's important to have this ASAP IMHO, as when
evacuating VMs from a host admins need a knob to use to force
successful evacuation, even at the cost of pausing the guest
temporarily.

In libvirt upstream we now have the ability to filter what disks are
migrated during block migration. We need to leverage that new feature
to fix the long standing problems of block migration when non-local
images are attached - eg cinder volumes. We definitely want this
in Mitaka.

We should look at what we need to do to isolate the migration data
network from the main management network. Currently we live
migrate over whatever network is associated with the compute hosts
primary hostname / IP address. This is not necessarily the fastest
NIC on the host. We ought to be able to record an alternative
hostname / IP address against each compute host to indicate the
desired migration interface.

Libvirt/KVM have the ability to turn on compression for migration
which again improves the chances of convergence & thus success.
We should look at leveraging that.

QEMU has a crude "auto-converge" flag you can turn on, which limits
guest CPU execution time, in an attempt to slow down data dirtying
rate to again improve the chance of successful convergence.

I'm working on enhancements to QEMU itself to support TLS encryption
for migration. This will enable openstack to have secure migration
datastream, without having to tunnel via libvirtd. This is useful
as tunneling via libvirtd doesn't work with block migration. It will
also be much faster than tunnelling. This might be merged in QEMU
before the Mitaka cycle ends, but more likely it is Nxxx cycle material.

There is also work on post-copy migration in QEMU. Normally with
live migration, the guest doesn't start executing on the target
host until migration has transferred all data. There are many
workloads where that doesn't work, as the guest is dirtying data
too quickly. With post-copy you can start running the guest on the
target at any time, and when it faults on a missing page that will
be pulled from the source host. This is slightly more fragile as
you risk losing the guest entirely if the source host dies before
migration finally completes. It does guarantee that migration will
succeed no matter what workload is in the guest. This is probably
Nxxxx cycle material.

Testing. Testing. Testing.

Lots more I can't think of right now....

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From carl at ecbaldwin.net  Fri Sep 18 15:25:26 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Fri, 18 Sep 2015 09:25:26 -0600
Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate and
	maintainability
In-Reply-To: <5988f344.fc6.14fd4067572.Coremail.ayshihanzhang@126.com>
References: <0000014fcde02877-55c10164-4eed-4552-ba1a-681c6a75fbcd-000000@email.amazonses.com>
 <5988f344.fc6.14fd4067572.Coremail.ayshihanzhang@126.com>
Message-ID: <CALiLy7q_mcqY0C97dWzZvuVUZ5Wo1YZroJu9tzDe7tXpWwmhyw@mail.gmail.com>

On Tue, Sep 15, 2015 at 8:40 PM, shihanzhang <ayshihanzhang at 126.com> wrote:
> [1] https://bugs.launchpad.net/neutron/+bug/1486795
> [2] https://bugs.launchpad.net/neutron/+bug/1486828
> [3] https://bugs.launchpad.net/neutron/+bug/1496201
> [4] https://bugs.launchpad.net/neutron/+bug/1496204
> [5] https://bugs.launchpad.net/neutron/+bug/1470909

Thanks for highlighting these bugs here.  I have triaged them ensuring
that they have the l3-dvr-backlog tag on them and have an importance
assigned.  We will be reviewing these bugs weekly in the L3 meeting
[1].

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam


From sheeprine-ml at nullplace.com  Fri Sep 18 15:26:30 2015
From: sheeprine-ml at nullplace.com (Stephane Albert)
Date: Fri, 18 Sep 2015 17:26:30 +0200
Subject: [openstack-dev] [rating] CloudKitty's future and Big tent
	application
Message-ID: <20150918152630.GA4899@kryptopad>

Hi,

We've lacked communication on the mailing list regarding CloudKitty.
This mail is basically an interest check regarding CloudKitty and rating
for OpenStack.

During the Vancouver OpenStack Summit we talked to a lot of people about
CloudKitty and its purpose. Many people seemed interested, and willing
to try it. We've recently released version 0.4.1.

I'm pleased to announce that we'll apply for the big tent, hoping that it
will help the project get the visibility it deserves and bring more
people into it at the same time.

If you want to be part of the project feel free to join #cloudkitty or
be part of the next meeting, Monday, Sept 21st on #openstack-meeting-3
at 14:00 UTC.
If you want to discuss some specific topics please add them to the
agenda: https://wiki.openstack.org/wiki/Meetings/CloudKittyMeeting

Cheers


From john at johngarbutt.com  Fri Sep 18 15:47:09 2015
From: john at johngarbutt.com (John Garbutt)
Date: Fri, 18 Sep 2015 16:47:09 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150918152346.GI16906@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
Message-ID: <CABib2_rtNKpbHjxb-Cf23pUJRd2G-DAL+pLhhBKTNkejXq99BA@mail.gmail.com>

On 18 September 2015 at 16:23, Daniel P. Berrange <berrange at redhat.com> wrote:
> On Fri, Sep 18, 2015 at 11:53:05AM +0000, Murray, Paul (HP Cloud) wrote:
>> Hi All,
>>
>> There are various efforts going on around live migration at the moment:
>> fixing up CI, bug fixes, additions to cover more corner cases, proposals
>> for new operations....
>>
>> Generally live migration could do with a little TLC (see: [1]), so I am
>> going to suggest we give some of that care in the next cycle.
>>
>> Please respond to this post if you have an interest in this and what you
>> would like to see done. Include anything you are already getting on with
>> so we get a clear picture. If there is enough interest I'll put this
>> together as a proposal for a work stream. Something along the lines of
>> "robustify live migration".
>
><snip>
>
> Testing. Testing. Testing.

+1 for Testing

The "CI for reliable live-migration" thread was covering some of the
details on the multi-host CI options.

Thanks,
johnthetubaguy


From bob.haddleton at alcatel-lucent.com  Fri Sep 18 15:47:53 2015
From: bob.haddleton at alcatel-lucent.com (HADDLETON, Robert W (Bob))
Date: Fri, 18 Sep 2015 10:47:53 -0500
Subject: [openstack-dev] [mistral] Define better terms for WAITING and
 DELAYED states
In-Reply-To: <156DBF58-7BBB-49D4-A229-DD1B96896532@mirantis.com>
References: <156DBF58-7BBB-49D4-A229-DD1B96896532@mirantis.com>
Message-ID: <55FC3229.5020407@alcatel-lucent.com>

Hi Renat:

[TL;DR] - maybe use multiple words in the state name to avoid confusion

I agree that there is a lot of overlap - WAITING and DELAYED and 
POSTPONED are all very similar.  The context is important when trying to 
decipher what the words mean.

I would normally interpret WAITING as having a known condition:

* I'm WAITING for the baseball game to begin

DELAYED implies WAITING but adds the context that something was supposed 
to have started already, or has already started and is now blocked by 
something out of your control, and you may or may not know when it will 
start again:

* The (start of the) ballgame has been DELAYED (by rain) (until 2:00). 
(So I'm still WAITING for it to begin)

POSTPONED implies DELAYED, but adds that something was "scheduled" to 
start at a certain time and has been re-scheduled for a later time.  It 
may or may not have started already, and the later time may or may not 
be known:

* The ballgame has been POSTPONED (because of rain) (until tomorrow) (so 
the game has been DELAYED and I'm still WAITING for it to start)

So using any of the three words on their own without context or 
additional information will likely be confusing, or at least subject to 
different interpretations.

I would be reluctant to rename DELAYED to POSTPONED, because it raises 
more questions (until when?) than DELAYED without providing more answers.

I think what it comes down to is the need to provide more information in 
the state name than is possible with one English word:

WAITING_FOR_PRECONDITIONS
DELAYED_BY_RETRY

These provide more specific context to the state but the state 
transition table gets to be unmanageable when there is a state for 
everything.

If more waiting/delayed states are added in the future it might make 
sense to create them as sub-states of RUNNING, to keep the transitions 
manageable.
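
A rough sketch of that sub-state idea in Python (purely illustrative, not
Mistral's actual state model):

```python
import enum

class State(enum.Enum):
    """Keep the top-level state machine small; RUNNING-family causes
    become qualifiers rather than new top-level states."""
    IDLE = "IDLE"
    RUNNING = "RUNNING"
    PAUSED = "PAUSED"
    SUCCESS = "SUCCESS"
    ERROR = "ERROR"

class RunningSubState(enum.Enum):
    # Hypothetical sub-states of RUNNING, per the suggestion above.
    WAITING_FOR_PRECONDITIONS = "WAITING_FOR_PRECONDITIONS"
    DELAYED_BY_RETRY = "DELAYED_BY_RETRY"

def describe(state, sub=None):
    # e.g. "RUNNING/DELAYED_BY_RETRY" instead of a separate DELAYED state,
    # so the transition table only has to cover the five top-level states.
    return state.value if sub is None else f"{state.value}/{sub.value}"
```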

Hope this helps

Bob


On 9/18/2015 8:55 AM, Renat Akhmerov wrote:
> Hello,
>
> We have a bug [1] that addresses the small semantical issue in names of
> the states for workflow and task executions: WAITING and DELAYED.
> I'm really interested in your opinion about this, especially native
> English speakers' opinion because, IMO, they would be able to better
> challenge what we're discussing.
>
> *Problem description*
> We now have a set of states:
>
>   * IDLE  - Nothing is going on, object was just created
>   * RUNNING - Workflow/task is running
>   * PAUSED - Workflow/task has been paused
>   * SUCCESS - Workflow/task has completed successfully
>   * ERROR - Workflow/task has completed with an error
>   * WAITING - Task execution object has been created but it is not ready
>     to start because some preconditions were not met. For now it mostly
>     refers to a case when we have a 'join' task depending on a number of
>     other tasks, e.g. 'task1' depends on 'task2' and 'task3'. But say
>     'task2' has completed and 'task3' has not, and hence 'task1' has to
>     wait. I assume that in the future it may be related not only to
>     joins.
>   * DELAYED - Task has been delayed for a certain number of seconds. It
>     may happen, for example, in case of using a 'retry' policy.
>
>
> So the semantic difference between WAITING and DELAYED is the
> following: unlike WAITING, DELAYED says that we know exactly that the
> task will run; it's just a matter of time. In case of WAITING, it may
> never run just because some of the preconditions may never be met.
>
> And the concern is that we probably don't use good names for WAITING and
> DELAYED because, from an English language perspective, they look similar to
> a number of folks (including myself) and it's therefore confusing if we
> look at two tasks with states WAITING and DELAYED.
>
> The latest idea that we had is just to rename DELAYED to POSTPONED
> because the latter sort of expresses the fact of being postponed for a
> certain period of time slightly better :) But I'm really not sure.
>
> Would appreciate your input on this.
>
> Thanks
>
> [1] https://bugs.launchpad.net/mistral/+bug/1470369
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: bob_haddleton.vcf
Type: text/x-vcard
Size: 304 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/d4114433/attachment.vcf>

From Kevin.Fox at pnnl.gov  Fri Sep 18 15:53:15 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 18 Sep 2015 15:53:15 +0000
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <55FB5D1E.2080706@internap.com>
References: <55FB5D1E.2080706@internap.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C51A0@EX10MBOX06.pnnl.gov>

+1
________________________________________
From: Mathieu Gagné [mgagne at internap.com]
Sent: Thursday, September 17, 2015 5:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] Consistent support for SSL termination proxies across all API services

Hi,

While debugging LP bug #1491579 [1], we identified [2] an issue where an
API sitting behind a proxy performing SSL termination would not generate
the right redirection. The protocol ends up being the wrong one (http
instead of https), and this could hang your request indefinitely if
tcp/80 is not open and a firewall drops your connection.

I suggested [3] adding support for the X-Forwarded-Proto header, thinking
Nova didn't support it yet. In fact, someone suggested setting the
public_endpoint config instead.

So today I stumbled across this review [4] which added the
secure_proxy_ssl_header config to Nova. It allows the API to detect SSL
termination based on the (suggested) header X-Forwarded-Proto just like
previously suggested.

I also found this bug report [5] (opened in 2014) which also happens to
complain about bad URLs when API is sitting behind a proxy.

Multiple projects applied patches to try to fix the issue (based on
Launchpad comments):

* Glance added public_endpoint config
* Cinder added public_endpoint config
* Heat added secure_proxy_ssl_header config (through
heat.api.openstack:sslmiddleware_filter)
* Nova added secure_proxy_ssl_header config
* Manila added secure_proxy_ssl_header config (through
oslo_middleware.ssl:SSLMiddleware.factory)
* Ironic added public_endpoint config
* Keystone added secure_proxy_ssl_header config (LP #1370022)

As you can see, there is a lot of inconsistency between projects. (There
are more, but let's start with this one.)
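
For reference, the header-based variants above all boil down to a small
piece of WSGI middleware along these lines (a simplified sketch of the
idea, not the actual oslo.middleware code):

```python
class SSLMiddleware(object):
    """Trust a TLS-terminating proxy's forwarded-protocol header.

    When the proxy forwards a request over plain HTTP, rewrite
    wsgi.url_scheme from the (configurable) header so redirects and
    version documents are built with the right scheme.
    """

    def __init__(self, app, header="HTTP_X_FORWARDED_PROTO"):
        self.app = app
        self.header = header  # WSGI-environ form of X-Forwarded-Proto

    def __call__(self, environ, start_response):
        proto = environ.get(self.header)
        if proto in ("http", "https"):
            environ["wsgi.url_scheme"] = proto
        return self.app(environ, start_response)
```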

My wish is for a common and consistent way for *ALL* OpenStack APIs to
support the same solution for this common problem. Let me tell you (and
I guess I can speak for all operators), we will be very happy to have
ONE config to remember and set for *ALL* OpenStack services.

How can we get the ball rolling so we can fix it together once and for
all in a timely fashion?

[1] https://bugs.launchpad.net/python-novaclient/+bug/1491579
[2] https://bugs.launchpad.net/python-novaclient/+bug/1491579/comments/15
[3] https://bugs.launchpad.net/python-novaclient/+bug/1491579/comments/17
[4] https://review.openstack.org/#/c/206479/
[5] https://bugs.launchpad.net/glance/+bug/1384379

--
Mathieu

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From Kevin.Fox at pnnl.gov  Fri Sep 18 15:56:50 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 18 Sep 2015 15:56:50 +0000
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <20150918020452.GQ21846@jimrollenhagen.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>,
 <20150918020452.GQ21846@jimrollenhagen.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C51B7@EX10MBOX06.pnnl.gov>

Part of the reason to use Apache, though, is the diverse set of authentication mechanisms it supports. Operators want to plug Keystone into their existing authentication systems, and Apache tends to make that easier than the alternatives.

Thanks,
Kevin
________________________________________
From: Jim Rollenhagen [jim at jimrollenhagen.com]
Sent: Thursday, September 17, 2015 7:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Apache2 vs uWSGI vs ...

On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> In the fuel project, we recently ran into a couple of issues with Apache2 +
> mod_wsgi as we switched Keystone to run under it. Please see [1] and [2].
>
> Looking deeper into Apache2 issues, specifically around "apache2ctl graceful",
> module loading/unloading, and the hooks used by mod_wsgi [3], I started
> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> something else better that people are already using.
>
> One data point that keeps coming up is, all the CI jobs use Apache2 +
> mod_wsgi so it must be the best solution....Is it? If not, what is?

Disclaimer: it's been a while since I've cared about performance with a
web server in front of a Python app.

IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
on again. In general, I seem to remember it being thought of as a bit
old and crusty, but mostly working.

At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
and saw a significant performance increase. This was a Django app. uwsgi
is fairly straightforward to operate and comes loaded with a myriad of
options[1] to help folks make the most of it. I've played with Ironic
behind uwsgi and it seemed to work fine, though I haven't done any sort
of load testing. I'd encourage folks to give it a shot. :)

Of course, uwsgi can also be run behind Apache2, if you'd prefer.

gunicorn[2] is another good option that may be worth investigating; I
personally don't have any experience with it, but I seem to remember
hearing it has good eventlet support.
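
One mitigating factor when comparing these options: they all speak WSGI, so
the application side stays unchanged. A minimal app like the following can
be pointed at from mod_wsgi, uwsgi, or gunicorn alike (the conventional
entry-point name is `application`):

```python
def application(environ, start_response):
    """A minimal WSGI app; switching servers needs no app changes,
    since mod_wsgi, uwsgi, and gunicorn all call the same interface."""
    server = environ.get("SERVER_SOFTWARE", "unknown").encode()
    body = b"Hello from %s\n" % server
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```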

// jim

[0] https://uwsgi-docs.readthedocs.org/en/latest/
[1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
[2] http://gunicorn.org/

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From john at johngarbutt.com  Fri Sep 18 16:01:07 2015
From: john at johngarbutt.com (John Garbutt)
Date: Fri, 18 Sep 2015 17:01:07 +0100
Subject: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any
 point in using _() in python-novaclient?
In-Reply-To: <55FC23C0.3040105@suse.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
 <55EF0334.3030606@linux.vnet.ibm.com> <55FC23C0.3040105@suse.com>
Message-ID: <CABib2_qVGkM5ofP2J2dqJ0cU_=mVrHjem83AmvNhWuJdaJWYgg@mail.gmail.com>

On 18 September 2015 at 15:46, Andreas Jaeger <aj at suse.com> wrote:
> With the limited resources that the translation team has, we should not
> translate the clients but concentrate on the openstackclient, as discussed
> here:
>
> http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001402.html

Given the current plans, that makes sense, I think.

Matt, thank you for raising this.

johnthetubaguy


From dstanek at dstanek.com  Fri Sep 18 16:01:20 2015
From: dstanek at dstanek.com (David Stanek)
Date: Fri, 18 Sep 2015 12:01:20 -0400
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <2264316.uPXFtoJfFz@breton-pc>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <55FC0DCF.7060307@redhat.com>
 <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>
 <2264316.uPXFtoJfFz@breton-pc>
Message-ID: <CAO69NdnXju9kVLfcNEQZeDa4Q+wKM9aOTjSFVLSEbUX0+kheVw@mail.gmail.com>

My thoughts below mention Keystone, but in reality I would apply the same
logic to any OpenStack service.


On Fri, Sep 18, 2015 at 10:32 AM, Boris Bobrov <bbobrov at mirantis.com> wrote:

> There are 2 dimensions this discussion should happen in: web server and
> application server. Now we use apache2 as web server and mod_wsgi as app
> server.
>

This is exactly right, and Keystone should be deployable in any
WSGI-compliant application server. If it's not, I would consider that a bug.



>
> I don't have a specific opinion on the app server (mod_wsgi vs uwsgi) and I
> don't really care.
>
> Regarding apache2 vs nginx: I don't see any reason for the switch. Apache2
> is well known to deployers and sysadmins, and it is very rich in modules. I
> wonder if there are customer-written modules.
>

Keystone doesn't use or require Apache. We recommend that it be deployed
using Apache, but there is no requirement to do so if you don't need any
Apache modules. For example, at least one of my devstack nodes
happily runs Keystone's manage_all.

 [snip]



> There are things in keystone that work under apache. They are not tested.
> They were written to work under apache because it's the simplest and most
> standard way to do it. Making them work in nginx means forcing developers
> to write some code. Are you ready to do that?
>

This should only be true for optional features that currently require
Apache modules.



>
> > May be someone does not need something that apache supports and nginx not
> > and needs nginx features which apache does not support. Let's let our
> users
> > decide what they want.
> >
> > And the first step should be simple here - support for uwsgi.
>
> Why uwsgi? Why not gunicorn? Cherrypy? Twisted?
>

uwsgi and gunicorn should both work fine, as should any WSGI application
server. CherryPy and Twisted are more frameworks than application servers,
so I would not expect them to work.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/e207acf6/attachment.html>

From vkuklin at mirantis.com  Fri Sep 18 16:09:26 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Fri, 18 Sep 2015 19:09:26 +0300
Subject: [openstack-dev]  Apache2 vs uWSGI vs ...
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C51B7@EX10MBOX06.pnnl.gov>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7C51B7@EX10MBOX06.pnnl.gov>
Message-ID: <CAHAWLf27ON7goFPnmODoZsDRmg4yy_Z_XhHDu6zvH3+d35mLdw@mail.gmail.com>

Folks

I just suggested untying keystone from wsgi and implementing uwsgi support,
and then letting the user decide what he or she wants.

There are plenty of auth modules for nginx as well.

Nginx is much better as a proxy server and you know it.

Regarding mod_wsgi and Apache, we already saw that it cannot handle a simple
restart. I think this is not in any way acceptable from an operations point
of view.
On 18 Sept 2015 at 18:59, "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
wrote:

> Part of the reason to use Apache, though, is the diverse set of
> authentication mechanisms it supports. Operators want to plug Keystone
> into their existing authentication systems, and Apache tends to make that
> easier than the others do.
>
> Thanks,
> Kevin
> ________________________________________
> From: Jim Rollenhagen [jim at jimrollenhagen.com]
> Sent: Thursday, September 17, 2015 7:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Apache2 vs uWSGI vs ...
>
> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> > In the fuel project, we recently ran into a couple of issues with
> > Apache2 + mod_wsgi as we switched Keystone to run under it. Please see
> > [1] and [2].
> >
> > Looking deeper into Apache2 issues, specifically around "apache2ctl
> > graceful", module loading/unloading, and the hooks used by mod_wsgi [3],
> > I started wondering whether Apache2 + mod_wsgi is the "right" solution
> > and if there was something else better that people are already using.
> >
> > One data point that keeps coming up is, all the CI jobs use Apache2 +
> > mod_wsgi so it must be the best solution....Is it? If not, what is?
>
> Disclaimer: it's been a while since I've cared about performance with a
> web server in front of a Python app.
>
> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> on again. In general, I seem to remember it being thought of as a bit
> old and crusty, but mostly working.
>
> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> and saw a significant performance increase. This was a Django app. uwsgi
> is fairly straightforward to operate and comes loaded with a myriad of
> options[1] to help folks make the most of it. I've played with Ironic
> behind uwsgi and it seemed to work fine, though I haven't done any sort
> of load testing. I'd encourage folks to give it a shot. :)
>
> Of course, uwsgi can also be run behind Apache2, if you'd prefer.
>
> gunicorn[2] is another good option that may be worth investigating; I
> personally don't have any experience with it, but I seem to remember
> hearing it has good eventlet support.
>
> // jim
>
> [0] https://uwsgi-docs.readthedocs.org/en/latest/
> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> [2] http://gunicorn.org/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/5365344c/attachment.html>

From Kevin.Fox at pnnl.gov  Fri Sep 18 16:11:15 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 18 Sep 2015 16:11:15 +0000
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <55FC0B8A.4060303@mhtx.net>
References: <55FC0B8A.4060303@mhtx.net>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C524A@EX10MBOX06.pnnl.gov>

My $0.02
Support configuring ntp. Have a flag that can turn that piece off. Default it on..... Profit. :)

Thanks,
Kevin
________________________________________
From: Major Hayden [major at mhtx.net]
Sent: Friday, September 18, 2015 6:03 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,     that is the question

Hey there,

I started working on a bug[1] last night about adding a managed NTP configuration to openstack-ansible hosts.  My patch[2] gets chrony up and running with configurable NTP servers, but I'm still struggling to meet the "Proposal" section of the bug, where the author has asked for non-infra physical nodes to get their time from the infra nodes.  I can't figure out how to make it work for AIO builds when one physical host is part of all of the groups. ;)

I'd argue that time synchronization is critical for a few areas:

  1) Security/auditing when comparing logs
  2) Troubleshooting when comparing logs
  3) I've been told swift is time-sensitive
  4) MySQL/Galera don't like time drift

However, there's a strong argument that this should be done by deployers, and not via openstack-ansible.  I'm still *very* new to the project and I'd like to hear some feedback from other folks.

[1] https://bugs.launchpad.net/openstack-ansible/+bug/1413018
[2] https://review.openstack.org/#/c/225006/
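As background on why even small drift matters for the items listed above, the clock offset an NTP client estimates from a single request/response exchange can be sketched as follows (a simplified illustration of the standard NTP offset formula; none of this is OSA or chrony code):

```python
def ntp_offset(t0, t1, t2, t3):
    """Estimate the local clock offset from one SNTP exchange.

    t0: client send time (client clock)
    t1: server receive time (server clock)
    t2: server send time (server clock)
    t3: client receive time (client clock)
    """
    # The average of the two one-way deltas cancels out
    # symmetric network delay, leaving only the clock offset.
    return ((t1 - t0) + (t2 - t3)) / 2.0

# Example: server clock ~5s ahead, ~0.1s network delay each way:
# ntp_offset(100.0, 105.1, 105.2, 100.3) is approximately 5.0 seconds.
```

The formula assumes the request and response take roughly the same time on the wire; asymmetric paths show up directly as offset error, which is one reason operators prefer nearby, consistent time sources.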

--
Major Hayden

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From thierry at openstack.org  Fri Sep 18 16:12:45 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 18 Sep 2015 18:12:45 +0200
Subject: [openstack-dev] [all] Cross-project workshop session suggestions
Message-ID: <55FC37FD.5070108@openstack.org>

Hello everyone,

At the Mitaka Design Summit in Tokyo we'll have a cross-project
workshops track on the Tuesday, as usual. The Technical Committee is
responsible for coming up with an agenda there, but we are interested in
your suggestions.

To that effect, we revived an instance of the good old "odsreg" system
for everyone to propose suggestions at:

http://odsreg.openstack.org/

If you have an idea of a topic that we should cover on that track,
please push it there. Please indicate if you're available to moderate
the session in case it's accepted.

Regards,

-- 
Thierry Carrez (ttx)


From tdecacqu at redhat.com  Fri Sep 18 16:29:32 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Fri, 18 Sep 2015 16:29:32 +0000
Subject: [openstack-dev] [all][elections] PTL Voting is now open
Message-ID: <55FC3BEC.7070606@redhat.com>

Elections are underway and will remain open for you to cast your vote
until 13:00 UTC, September 24, 2015.

We are having elections for Cinder, Glance, Ironic, Keystone, Mistral,
Neutron and Oslo.

If you are a Foundation individual member and had a commit in one of the
program's projects[0] over the Kilo-Liberty timeframe (September 18,
2014 06:00 UTC to September 18, 2015 05:59 UTC) then you are eligible to
vote. You should find your email with a link to the Condorcet page to
cast your vote in the inbox of your gerrit preferred email[1].

What to do if you don't see the email and have a commit in at least one
of the programs having an election:
  * check the trash or spam folders of your gerrit Preferred Email
    address, in case it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project
    repos[0] and email me and Tony[2] at the below email addresses. If
    we can confirm that you are entitled to vote, we will add you to
    the voters list for the appropriate election.

Our democratic process is important to the health of OpenStack, please
exercise your right to vote.

Candidate statements/platforms can be found linked to Candidate names on
the wiki[3].

Happy voting,
Tristan


[0] The list of the program projects eligible for electoral status:
https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections

[1] Sign into review.openstack.org:
    Go to Settings > Contact Information.
    Look at the email listed as your Preferred Email.
    That is where the ballot has been sent.

[2] Tony's email: tony at bakeyournoodle dot com
    Tristan's email: tristan dot cacqueray at enovance dot com

[3]
https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_candidates:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/0232bfcb/attachment.pgp>

From jaypipes at gmail.com  Fri Sep 18 16:38:56 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Fri, 18 Sep 2015 12:38:56 -0400
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <D2219136.1CE8C%ian.cordasco@rackspace.com>
References: <55FC0B8A.4060303@mhtx.net>
 <D2219136.1CE8C%ian.cordasco@rackspace.com>
Message-ID: <55FC3E20.4040800@gmail.com>

On 09/18/2015 11:04 AM, Ian Cordasco wrote:
> On 9/18/15, 08:03, "Major Hayden" <major at mhtx.net> wrote:
>
>> Hey there,
>>
>> I started working on a bug[1] last night about adding a managed NTP
>> configuration to openstack-ansible hosts.  My patch[2] gets chrony up and
>> running with configurable NTP servers, but I'm still struggling to meet
>> the "Proposal" section of the bug where the author has asked for
>> non-infra physical nodes to get their time from the infra nodes.  I can't
>> figure out how to make it work for AIO builds when one physical host is
>> part of all of the groups. ;)
>>
>> I'd argue that time synchronization is critical for a few areas:
>>
>>   1) Security/auditing when comparing logs
>>   2) Troubleshooting when comparing logs
>>   3) I've been told swift is time-sensitive
>>   4) MySQL/Galera don't like time drift
>>
>> However, there's a strong argument that this should be done by deployers,
>> and not via openstack-ansible.  I'm still *very* new to the project and
>> I'd like to hear some feedback from other folks.
>
> Personally, I fall into the camp of "this is a deployer concern".
> Specifically, there is already an ansible-galaxy role to enable NTP on
> your deployment hosts (https://galaxy.ansible.com/list#/roles/464) which
> *could* be expanded to do this very work that you're talking about. Using
> specialized roles to achieve this (and contributing back to the larger
> ansible community) seems like a bigger win than trying to reimplement some
> of this in OSA instead of reusing other roles that already exist.
>
> Compare it to a hypothetical situation where Keystone wrote its own
> backing libraries to implement Fernet instead of using the cryptography
> library. In that case there would be absolutely no argument that Keystone
> should use cryptography (even if it uses cffi and has bindings to OpenSSL
> which our infra team doesn't like and some deployers find difficult to
> manage when using pure-python deployment tooling). Why should OSA be any
> different from another OpenStack project?

Have to agree with Ian here. NTP, as Major wrote, is a critical piece of 
the deployment puzzle, but I don't think it's necessary to put anything 
in OSA specifically to configure NTP. As Ian wrote, better to contribute 
to upstream ansible-galaxy playbooks/roles that do this well.

Best,
-jay


From guang.yee at hpe.com  Fri Sep 18 16:53:20 2015
From: guang.yee at hpe.com (Yee, Guang)
Date: Fri, 18 Sep 2015 16:53:20 +0000
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAHAWLf27ON7goFPnmODoZsDRmg4yy_Z_XhHDu6zvH3+d35mLdw@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7C51B7@EX10MBOX06.pnnl.gov>
 <CAHAWLf27ON7goFPnmODoZsDRmg4yy_Z_XhHDu6zvH3+d35mLdw@mail.gmail.com>
Message-ID: <C73B35E94B12854596094F2CA86685496427116D@G4W3292.americas.hpqcorp.net>

I am with Adam; I kinda doubt Apache causes performance issues for Keystone, especially since all we have are just simple REST APIs. For other services with specific needs, like large file streaming, there may be some argument for picking one over the other. We haven't had the need to use Apache for dynamic routing or proxying at deployment. I am sure there are better tools out there that can do the job.

Making Keystone web-server independent is a fine goal. However, since Keystone is moving away from being an identity provider, federation and external auth will play a major part going forward. Some of them depend on Apache at the moment. We may have to refactor Keystone to isolate the authentication service from the rest, then figure out how to gate it with Apache. I don't think that's trivial work, though.

At this point, I don't know what we are really gaining by ripping out Apache, compared to the amount of work needed to make that happen.


Guang


From: Vladimir Kuklin [mailto:vkuklin at mirantis.com]
Sent: Friday, September 18, 2015 9:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Apache2 vs uWSGI vs ...


Folks

I just suggested untying keystone from wsgi and implementing uwsgi support, and then letting the user decide what he or she wants.

There are plenty of auth modules for nginx as well.

Nginx is much better as a proxy server and you know it.

Regarding mod_wsgi and Apache, we already saw that it cannot handle a simple restart. I think this is not in any way acceptable from an operations point of view.
On 18 Sept 2015 at 18:59, "Fox, Kevin M" <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
Part of the reason to use Apache, though, is the diverse set of authentication mechanisms it supports. Operators want to plug Keystone into their existing authentication systems, and Apache tends to make that easier than the others do.

Thanks,
Kevin
________________________________________
From: Jim Rollenhagen [jim at jimrollenhagen.com<mailto:jim at jimrollenhagen.com>]
Sent: Thursday, September 17, 2015 7:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Apache2 vs uWSGI vs ...

On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> In the fuel project, we recently ran into a couple of issues with Apache2 +
> mod_wsgi as we switched Keystone to run under it. Please see [1] and [2].
>
> Looking deeper into Apache2 issues, specifically around "apache2ctl graceful",
> module loading/unloading, and the hooks used by mod_wsgi [3], I started
> wondering whether Apache2 + mod_wsgi is the "right" solution and if there was
> something else better that people are already using.
>
> One data point that keeps coming up is, all the CI jobs use Apache2 +
> mod_wsgi so it must be the best solution....Is it? If not, what is?

Disclaimer: it's been a while since I've cared about performance with a
web server in front of a Python app.

IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
on again. In general, I seem to remember it being thought of as a bit
old and crusty, but mostly working.

At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
and saw a significant performance increase. This was a Django app. uwsgi
is fairly straightforward to operate and comes loaded with a myriad of
options[1] to help folks make the most of it. I've played with Ironic
behind uwsgi and it seemed to work fine, though I haven't done any sort
of load testing. I'd encourage folks to give it a shot. :)

Of course, uwsgi can also be run behind Apache2, if you'd prefer.

gunicorn[2] is another good option that may be worth investigating; I
personally don't have any experience with it, but I seem to remember
hearing it has good eventlet support.

// jim

[0] https://uwsgi-docs.readthedocs.org/en/latest/
[1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
[2] http://gunicorn.org/

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/28a12456/attachment.html>

From john.griffith8 at gmail.com  Fri Sep 18 17:01:06 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 18 Sep 2015 11:01:06 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FC2877.4040903@windriver.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
Message-ID: <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>

On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <chris.friesen at windriver.com>
wrote:

> On 09/18/2015 06:57 AM, Eric Harney wrote:
>
>> On 09/17/2015 06:06 PM, John Griffith wrote:
>>
>
> Having the "global conf" settings intermixed with the backend sections
>>> caused a number of issues when we first started working on this.  That's
>>> part of why we require the "self.configuration" usage all over in the
>>> drivers.  Each driver instantiation is its own independent entity.
>>>
>>>
>> Yes, each driver instantiation is independent, but that would still be
>> the case if these settings inherited values set in [DEFAULT] when they
>> aren't set in the backend section.
>>
>
> Agreed.  If I explicitly set something in the [DEFAULT] section, that
> should carry through and apply to all the backends unless overridden in the
> backend-specific section.
>
> Chris
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Meh, I don't know about the "have to modify the code" part; the config file
works, you just need to add that line to your driver section and configure
the backend correctly.

Regardless, I see your point (but I still certainly don't agree that it's
"blatantly wrong").

Bottom line: "yes", ideally in the case of drivers we would check the
global/default setting, and then override it if something was provided in
the driver-specific setting, or if the driver itself set a different
default.  That seems like the right way to be doing it anyway.  I've looked
at that a bit this morning; the issue is that currently we don't even pass
any of those higher-level conf settings into the drivers' init methods
anywhere.  We need to figure out how to change that; then it should be a
relatively simple fix.

Thanks,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/4acdfd48/attachment.html>

From ed at leafe.com  Fri Sep 18 17:15:30 2015
From: ed at leafe.com (Ed Leafe)
Date: Fri, 18 Sep 2015 12:15:30 -0500
Subject: [openstack-dev] [all][elections] PTL nomination period is now
	over
In-Reply-To: <20150918122746.GO29319@redhat.com>
References: <55FABF5E.4000204@redhat.com> <55FB2D63.3010402@gmail.com>
 <55FC0207.3070305@redhat.com> <20150918122746.GO29319@redhat.com>
Message-ID: <50B99C29-4390-4C04-9821-544E8780590A@leafe.com>

On Sep 18, 2015, at 7:27 AM, Flavio Percoco <flavio at redhat.com> wrote:

>> I'm strongly against this extra rules. OpenStack Officials Elections are
>> ran by volunteers and any rules that adds complexity should be avoided.
> 
> +1
> 
> Also, the schedule is announced 6 months in advance. The candidacy
> period is announced when it starts and a reminder is sent a couple of
> days before it ends.

+1 to sticking to deadlines. A grace period just means a different deadline.

I don't, however, see the need for a firm start date. Comparing this to the feature freeze development cycle, in Nova we started opening up the specs for the N+1 cycle early in cycle N so that people can propose a spec early instead of waiting months and potentially missing the next window. So how about letting candidates declare early in the cycle by adding their names to the election repo at any time during the cycle up to the firm deadline? This might also encourage candidates not to wait until the last minute.

-- Ed Leafe





-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/8b0688b1/attachment.pgp>

From rmeggins at redhat.com  Fri Sep 18 17:29:14 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Fri, 18 Sep 2015 11:29:14 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <55F9D7F6.2000604@puppetlabs.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
 <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
 <87oah44qtx.fsf@s390.unix4.net>
 <CAGnj6ave7EFDQkaFmZWDVTLOE0DQgkTksqh2QLqJe0aGkCXBpQ@mail.gmail.com>
 <55F9D7F6.2000604@puppetlabs.com>
Message-ID: <55FC49EA.7080704@redhat.com>

On 09/16/2015 02:58 PM, Cody Herriges wrote:
> I wrote my first composite namevar type a few years ago, and all the
> magic is basically a single block of code inside the type...
>
> https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L145-L169
>
> It basically boils down to these three things:
>
> * Pick your namevars
> (https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L49-L64)
> * Pick a delimiter
>    - Personally I'd use @ here since we are talking about domains

Unfortunately, not only is "domains" an overloaded term, but "@" is 
already in use as a delimiter for keystone_user_role, and "@" is a legal 
character in usernames.

> * Build your self.title_patterns method, accounting for delimited names
> and arbitrary names.
>
> While it looks like the README never got updated, the java_ks example
> supports both meaningful titles and arbitrary ones.
>
> java_ks { 'activemq_puppetca_keystore':
>    ensure       => latest,
>    name         => 'puppetca',
>    certificate  => '/etc/puppet/ssl/certs/ca.pem',
>    target       => '/etc/activemq/broker.ks',
>    password     => 'puppet',
>    trustcacerts => true,
> }
>
> java_ks { 'broker.example.com:/etc/activemq/broker.ks':
>    ensure      => latest,
>    certificate =>
> '/etc/puppet/ssl/certs/broker.example.com.pe-internal-broker.pem',
>    private_key =>
> '/etc/puppet/ssl/private_keys/broker.example.com.pe-internal-broker.pem',
>    password    => 'puppet',
> }
>
> You'll notice the first being an arbitrary title and the second
> utilizing a ":" as a delimiter and omitting the name and target parameters.
>
> Another code example can be found in the package type.
>
> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/package.rb#L268-L291.

Ok.  I've hacked a lib/puppet/type/keystone_tenant.rb to use name and 
domain with "isnamevar" and added a title_patterns like this:

   def self.title_patterns
     identity = lambda {|x| x}
     [
       [
         /^(.+)::(.+)$/,
         [
           [ :name, identity ],
           [ :domain, identity ]
         ]
       ],
       [
         /^(.+)$/,
         [
           [ :name, identity ]
         ]
       ]
     ]
   end

Then I hacked one of the simple rspec-puppet files to do this:

   let :pre_condition do
     [
      'keystone_tenant { "tenant1": name => "tenant", domain => "domain1" }',
      'keystone_tenant { "tenant2": name => "tenant", domain => "domain2" }'
     ]
   end

because what I'm trying to do is not rely on the title of the resource, 
but to make the combination of 'name' + 'domain' the actual "name" of 
the resource.  This doesn't work.  This is the error I get running spec:

      Failure/Error: it { is_expected.to 
contain_package('python-keystone').with_ensure("present") }
      Puppet::Error:
        Puppet::Parser::AST::Resource failed with error ArgumentError: 
Cannot alias Keystone_tenant[tenant2] to ["tenant"]; resource 
["Keystone_tenant", "tenant"] already declared at line 3 on node 
unused.redhat.com
      # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:137:in 
`alias'
      # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:111:in 
`create_resource_aliases'
      # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:90:in 
`add_one_resource'

Is there any way to accomplish the above?  If not, please tell me now 
and put me out of my misery, and we can go back to the original plan of 
forcing everyone to use "::" in the resource titles and names.
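To illustrate the collision outside of Puppet (a hypothetical Python sketch, not Puppet internals): if the catalog registers resources under the single name param, the two tenants above collide, while keying on the composite (name, domain) pair keeps them distinct:

```python
import re

# Mimics the title_patterns above: "name::domain" or a bare name.
def parse_title(title):
    m = re.match(r"^(.+)::(.+)$", title)
    if m:
        return {"name": m.group(1), "domain": m.group(2)}
    return {"name": title}

class Catalog:
    def __init__(self, composite=False):
        self.composite = composite
        self.resources = {}

    def add(self, title, name, domain):
        # Key on name alone (single-namevar aliasing) or on the
        # composite (name, domain) pair.
        key = (name, domain) if self.composite else name
        if key in self.resources:
            raise ValueError("Cannot alias %r; %r already declared"
                             % (title, key))
        self.resources[key] = title

plain = Catalog(composite=False)
plain.add("tenant1", "tenant", "domain1")
# plain.add("tenant2", "tenant", "domain2")  # raises: "tenant" declared

combo = Catalog(composite=True)
combo.add("tenant1", "tenant", "domain1")
combo.add("tenant2", "tenant", "domain2")  # fine under the composite key
```

Whether Puppet 3.8's catalog can be made to alias on the composite key rather than the bare name is exactly the open question here.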

>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/cdf7b730/attachment.html>

From chris.friesen at windriver.com  Fri Sep 18 17:31:58 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Fri, 18 Sep 2015 11:31:58 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
Message-ID: <55FC4A8E.4070101@windriver.com>

On 09/18/2015 11:01 AM, John Griffith wrote:
>
>
> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <chris.friesen at windriver.com
> <mailto:chris.friesen at windriver.com>> wrote:
>
>     On 09/18/2015 06:57 AM, Eric Harney wrote:
>
>         On 09/17/2015 06:06 PM, John Griffith wrote:
>
>
>             Having the "global conf" settings intermixed with the backend sections
>             caused a number of issues when we first started working on this.  That's
>             part of why we require the "self.configuration" usage all over in the
>             drivers.  Each driver instantiation is its own independent entity.
>
>
>         Yes, each driver instantiation is independent, but that would still be
>         the case if these settings inherited values set in [DEFAULT] when they
>         aren't set in the backend section.
>
>
>     Agreed.  If I explicitly set something in the [DEFAULT] section, that should
>     carry through and apply to all the backends unless overridden in the
>     backend-specific section.


> Bottom line "yes", ideally in the case of drivers we would check global/default
> setting, and then override it if something was provided in the driver specific
> setting, or if the driver itself set a different default.  That seems like the
> right way to be doing it anyway.  I've looked at that a bit this morning, the
> issue is that currently we don't even pass any of those higher level conf
> settings in to the drivers init methods anywhere.  Need to figure out how to
> change that, then it should be a relatively simple fix.

Actually, I think it should be slightly different.  If I set a variable in the 
global/default section of the config file, then that should override any 
defaults in the code for a driver.

So the order of descending precedence would go:

1) setting specified in driver-specific section of config file
2) setting specified in global/default section of config file
3) setting specified in driver-specific code
4) setting specified in global/default code (not sure if this exists)
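That precedence order could be sketched as a hypothetical resolver
(this is not how cinder/oslo.config currently behaves; it illustrates
the proposal):

```python
# Sentinel meaning "not set at this level".
UNSET = object()

def resolve(driver_file=UNSET, default_file=UNSET,
            driver_code=UNSET, default_code=UNSET):
    """Return the first value found, in descending precedence:
    1) driver-specific section of the config file
    2) global/[DEFAULT] section of the config file
    3) driver-specific default in code
    4) global default in code
    """
    for value in (driver_file, default_file, driver_code, default_code):
        if value is not UNSET:
            return value
    raise KeyError("option not set at any level")

# A [DEFAULT] value in the file overrides the driver's in-code default...
assert resolve(default_file="thin", driver_code="thick") == "thin"
# ...but the driver-specific file section still wins over everything.
assert resolve(driver_file="thick", default_file="thin") == "thick"
```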


Chris


From nik.komawar at gmail.com  Fri Sep 18 17:41:59 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 18 Sep 2015 13:41:59 -0400
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <50B99C29-4390-4C04-9821-544E8780590A@leafe.com>
References: <55FABF5E.4000204@redhat.com> <55FB2D63.3010402@gmail.com>
 <55FC0207.3070305@redhat.com> <20150918122746.GO29319@redhat.com>
 <50B99C29-4390-4C04-9821-544E8780590A@leafe.com>
Message-ID: <55FC4CE7.5050509@gmail.com>

I agree with Ed here.

If we have a stipulated time period set for proposals, then a grace
period sounds like a reasonable deal. However, I would also encourage
the idea of opening this up early: keep the folder ready and have
officials review the merge proposals starting 2 months prior to the
final date. It's better to have a final date with a longer open period
than a short, firm window.

my 2 pennies' worth.

On 9/18/15 1:15 PM, Ed Leafe wrote:
> On Sep 18, 2015, at 7:27 AM, Flavio Percoco <flavio at redhat.com> wrote:
>
>>> I'm strongly against this extra rules. OpenStack Officials Elections are
>>> ran by volunteers and any rules that adds complexity should be avoided.
>> +1
>>
>> Also, the schedule is announced 6 months in advance. The candidacy
>> period is announced when it starts and a reminder is sent a couple of
>> days before it ends.
> +1 to sticking to deadlines. A grace period just means a different deadline.
>
> I don't, however, see the need for a firm start date. Comparing this to the feature freeze development cycle, in Nova we started opening up the specs for the N+1 cycle early in cycle N so that people can propose a spec early instead of waiting months and potentially missing the next window. So how about letting candidates declare early in the cycle by adding their names to the election repo at any time during the cycle up to the firm deadline? This might also encourage candidates not to wait until the last minute.
>
> -- Ed Leafe
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/ca1dbaa0/attachment.html>

From sharis at Brocade.com  Fri Sep 18 18:01:27 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Fri, 18 Sep 2015 18:01:27 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
Message-ID: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>

Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air - but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases - I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer it to be the same for other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/c0a35256/attachment.html>

From corpqa at gmail.com  Fri Sep 18 18:03:43 2015
From: corpqa at gmail.com (OpenStack Mailing List Archive)
Date: Fri, 18 Sep 2015 11:03:43 -0700
Subject: [openstack-dev] openstack-dahboard directory is not created
Message-ID: <c9fe2da2e8e63f2f5ae4a99e47aebf23@openstack.nimeyo.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/55aa5431/attachment-0001.html>

From morgan.fainberg at gmail.com  Fri Sep 18 18:06:51 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 18 Sep 2015 11:06:51 -0700
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <C73B35E94B12854596094F2CA86685496427116D@G4W3292.americas.hpqcorp.net>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7C51B7@EX10MBOX06.pnnl.gov>
 <CAHAWLf27ON7goFPnmODoZsDRmg4yy_Z_XhHDu6zvH3+d35mLdw@mail.gmail.com>
 <C73B35E94B12854596094F2CA86685496427116D@G4W3292.americas.hpqcorp.net>
Message-ID: <5E9B7ECC-AADB-4AB0-B5C8-48A89A257FA4@gmail.com>

The conversations around alternative wsgi containers (uwsgi, gunicorn, etc) still would be tied to apache or nginx. While it is possible to deploy without the webservers in some cases, it would not be recommended nor do I see that being tested in the gate. We want to leverage the auth support of the webserver. While uWSGI has interesting features, it wouldn't really replace nginx or apache. It is just an alternative deployment strategy that makes managing the keystone (or any OpenStack service) processes independent of the web server. 

I do not expect any of these choices to have a performance impact. It will ease some operational concerns. There is no reason uwsgi or gunicorn won't work today (I have run keystone with both in a devstack locally). The documentation and communicating the reasons to pick one or the other is all that is needed. From a gate perspective, configuration and restarts of the web server could be made easier in devstack with uWSGI. It would not prevent use of mod_wsgi. 
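For the curious, the deployment strategy described above might look
something like the following uWSGI config (entirely illustrative: the
wsgi-file path, port, and process counts are assumptions, not a
recommended keystone setup):

```ini
; Illustrative sketch only -- paths and counts are made up.
[uwsgi]
wsgi-file = /usr/share/keystone/keystone.wsgi
; speak the uwsgi protocol to a fronting apache/nginx
socket = 127.0.0.1:35357
master = true
processes = 4
threads = 2
; the master process lets you reload workers (SIGHUP) without
; dropping the listening socket -- the restart problem mod_wsgi hits
```

The web server then proxies to 127.0.0.1:35357, keeping the service
processes independent of the web server's own lifecycle.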

--Morgan

Sent via mobile

> On Sep 18, 2015, at 09:53, Yee, Guang <guang.yee at hpe.com> wrote:
> 
> I am with Adam; I kinda doubt Apache causes performance issues for Keystone, especially since all we have are just simple REST APIs. For other services with specific needs, like large file streaming, there may be some arguments to pick one over the other. We haven't had the need to use Apache for dynamic routing or proxying at deployment. I am sure there are better tools out there that can do the job.
>  
> Making the Keystone web service independent is a fine goal. However, since Keystone is moving away from being an identity provider, federation and external auth will play a major part going forward. Some of them are dependent on Apache at the moment. We may have to refactor Keystone to isolate the authentication service from the rest, then figure out how to gate it with Apache. I don't think that's trivial work though.
>  
> At this point, I don't know what we are really gaining by ripping out Apache, compared to the amount of work to make that happen.
>  
>  
> Guang
>  
>  
> From: Vladimir Kuklin [mailto:vkuklin at mirantis.com] 
> Sent: Friday, September 18, 2015 9:09 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Apache2 vs uWSGI vs ...
>  
> Folks
> 
> I just suggested to untie keystone from wsgi and implement uwsgi support. And then let the user decide what he or she wants.
> 
> There is a plenty of auth modules for nginx also.
> 
> Nginx is much better as a proxy server and you know it.
> 
> Regarding mod_wsgi and Apache, we already saw that it cannot handle a simple restart. I think this is not in any way acceptable from an operations point of view.
> 
> On Sep 18, 2015 at 18:59, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
> Part of the reason to use Apache though is the diverse set of authentication mechanisms it supports. Operators have the desire to plug Keystone into their existing authentication systems, and Apache tends to make that easier than the alternatives.
> 
> Thanks,
> Kevin
> ________________________________________
> From: Jim Rollenhagen [jim at jimrollenhagen.com]
> Sent: Thursday, September 17, 2015 7:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Apache2 vs uWSGI vs ...
> 
> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> > In the fuel project, we recently ran into a couple of issues with Apache2 +
> > mod_wsgi as we switched Keystone to run. Please see [1] and [2].
> >
> > Looking deep into Apache2 issues specifically around "apache2ctl graceful"
> > and module loading/unloading and the hooks used by mod_wsgi [3]. I started
> > wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> > something else better that people are already using.
> >
> > One data point that keeps coming up is, all the CI jobs use Apache2 +
> > mod_wsgi so it must be the best solution....Is it? If not, what is?
> 
> Disclaimer: it's been a while since I've cared about performance with a
> web server in front of a Python app.
> 
> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> on again. In general, I seem to remember it being thought of as a bit
> old and crusty, but mostly working.
> 
> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> and saw a significant performance increase. This was a Django app. uwsgi
> is fairly straightforward to operate and comes loaded with a myriad of
> options[1] to help folks make the most of it. I've played with Ironic
> behind uwsgi and it seemed to work fine, though I haven't done any sort
> of load testing. I'd encourage folks to give it a shot. :)
> 
> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
> 
> gunicorn[2] is another good option that may be worth investigating; I
> personally don't have any experience with it, but I seem to remember
> hearing it has good eventlet support.
> 
> // jim
> 
> [0] https://uwsgi-docs.readthedocs.org/en/latest/
> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> [2] http://gunicorn.org/
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/bcb38caf/attachment.html>

From john.griffith8 at gmail.com  Fri Sep 18 18:11:42 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 18 Sep 2015 12:11:42 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FC4A8E.4070101@windriver.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
 <55FC4A8E.4070101@windriver.com>
Message-ID: <CAPWkaSW-URWM6FRcACDUJJdM7qyPYZtnzAONFy4Xm5cgkc48Fg@mail.gmail.com>

On Fri, Sep 18, 2015 at 11:31 AM, Chris Friesen <chris.friesen at windriver.com
> wrote:

> On 09/18/2015 11:01 AM, John Griffith wrote:
>
>>
>>
>> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <
>> chris.friesen at windriver.com
>> <mailto:chris.friesen at windriver.com>> wrote:
>>
>>     On 09/18/2015 06:57 AM, Eric Harney wrote:
>>
>>         On 09/17/2015 06:06 PM, John Griffith wrote:
>>
>>
>>             Having the "global conf" settings intermixed with the backend
>> sections
>>             caused a number of issues when we first started working on
>> this.  That's
>>             part of why we require the "self.configuration" usage all
>> over in the
>>             drivers.  Each driver instantiation is it's own independent
>> entity.
>>
>>
>>         Yes, each driver instantiation is independent, but that would
>> still be
>>         the case if these settings inherited values set in [DEFAULT] when
>> they
>>         aren't set in the backend section.
>>
>>
>>     Agreed.  If I explicitly set something in the [DEFAULT] section, that
>> should
>>     carry through and apply to all the backends unless overridden in the
>>     backend-specific section.
>>
>
>
> Bottom line "yes", ideally in the case of drivers we would check
>> global/default
>> setting, and then override it if something was provided in the driver
>> specific
>> setting, or if the driver itself set a different default.  That seems
>> like the
>> right way to be doing it anyway.  I've looked at that a bit this morning,
>> the
>> issue is that currently we don't even pass any of those higher level conf
>> settings in to the drivers init methods anywhere.  Need to figure out how
>> to
>> change that, then it should be a relatively simple fix.
>>
>
> Actually, I think it should be slightly different.  If I set a variable in
> the global/default section of the config file, then that should override
> any defaults in the code for a driver.
>

Hmm... well, on the bright side that might be easier to implement at least
:). I guess I don't agree on the meaning of "DEFAULT"; to me a "DEFAULT"
section means, "these are the defaults if you don't specify something
else", no?  Your proposal seems really counter-intuitive to me.

Anyway, I've come up with a way to read the DEFAULT section of CONF on
init in the driver so that's good.  The trick now though is when there's a
difference in value between the driver section and the default section;
knowing what was set explicitly and what wasn't.  In other words, I don't
have any way of knowing for sure if the setting came from the defaults
in the option declarations or if it was explicitly set in the
config file.

Still working on it, but curious if anybody might know how to get around
this little sticking point.
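One possible workaround, sketched below with plain configparser rather
than oslo.config (which, as far as I know, doesn't expose "was this set
explicitly?"): re-read the raw file and check whether the option
literally appears in a given section. The section and option names are
just examples.

```python
import configparser  # ConfigParser on Python 2
import io

# Example config file content; names are illustrative only.
RAW = """
[DEFAULT]
volume_clear = zero

[lvmdriver-1]
volume_group = cinder-volumes
"""

def explicitly_set(raw_config, section, option):
    """Return True only if the option appears literally in that section,
    ignoring values inherited from [DEFAULT] or registered in code."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(raw_config))
    if section == "DEFAULT":
        return option in parser.defaults()
    # options() also reports DEFAULT-inherited values, so subtract them.
    own = set(parser.options(section)) - set(parser.defaults())
    return option in own

assert explicitly_set(RAW, "lvmdriver-1", "volume_group") is True
assert explicitly_set(RAW, "lvmdriver-1", "volume_clear") is False  # inherited
assert explicitly_set(RAW, "DEFAULT", "volume_clear") is True
```

Clunky, since it means parsing the file a second time outside
oslo.config, but it does distinguish "explicitly set" from "defaulted".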

Thanks,
John




> So the order of descending precedence would go:
>
> 1) setting specified in driver-specific section of config file
> 2) setting specified in global/default section of config file
> 3) setting specified in driver-specific code
> 4) setting specified in global/default code (not sure if this exists)
>
>
>
> Chris
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/3cb5ceac/attachment.html>

From itzshamail at gmail.com  Fri Sep 18 18:28:33 2015
From: itzshamail at gmail.com (Shamail)
Date: Fri, 18 Sep 2015 14:28:33 -0400
Subject: [openstack-dev] [all] [ptl] Troubleshooting cross-project
	communications
In-Reply-To: <20150917235035.GB3727@gmail.com>
References: <CAD0KtVF=8tw6tw7DZ8kef+fB1ECO=JhnV72L=nLhgtSb+rorwQ@mail.gmail.com>
 <55F979A9.9040206@openstack.org> <20150917235035.GB3727@gmail.com>
Message-ID: <CC17FD2E-77B4-4C17-800C-6DE54A9E540B@gmail.com>



> On Sep 17, 2015, at 7:50 PM, Mike Perez <thingee at gmail.com> wrote:
> 
>> On 16:16 Sep 16, Thierry Carrez wrote:
>> Anne Gentle wrote:
>>> [...]
>>> What are some of the problems with each layer? 
>>> 
>>> 1. weekly meeting: time zones, global reach, size of cross-project
>>> concerns due to multiple projects being affected, another meeting for
>>> PTLs to attend and pay attention to
>> 
>> A lot of PTLs (or liaisons/lieutenants) skip the meeting, or will only
>> attend when they have something to ask. Their time is precious and most
>> of the time the meeting is not relevant for them, so why bother? You
>> have a few usual suspects attending all of them, but those people are
>> cross-project-aware already so those are not the people that would
>> benefit the most from the meeting.
>> 
>> This partial attendance makes the meeting completely useless as a way to
>> disseminate information. It makes the meeting mostly useless as a way to
>> get general approval on cross-project specs.
>> 
>> The meeting still is very useful IMHO to have more direct discussions on
>> hot topics. So a ML discussion is flagged for direct discussion on IRC
>> and we have a time slot already booked for that.
> 
> Content for the cross project meeting is usually:
> 
> * Not ready for decisions.
> * Lacking solutions.
> 
> A proposal in steps of how cross project ideas start, to something ready for
> the cross project IRC meeting, then the TC:
> 
> 1) An idea starts from either or both:
>   a) Mailing list discussion.
>   b) A patch to a single project (until it's flagged that this patch could be
>      beneficial to other projects)
> 2) OpenStack Spec is proposed - discussions happen in gerrit from here on out.
>   Not on the mailing list. Keep encouraging discussions back to gerrit to keep
>   everything in one place in order to avoid confusion with having to fish
>   for some random discussion elsewhere.
> 3) Once enough consensus happens, an agenda item is posted to the cross
>   project IRC meeting.
> 4) Final discussions happen in the meeting. If consensus is still met by
>   interested parties who attend, it moves to TC.  If there is a lack of
>   consensus it goes back to gerrit and repeat.
> 
This approach makes sense.  It will also allow items that don't reach consensus to be captured (once) and rise to the top again when/if they become a pressing need.  There has to be some process to abandon changes eventually too (if the scope changes or the initial ask is no longer relevant).  

> With this process, we should have less meetings. Less meetings is:
> 
> * Awesome
> * Makes this meeting more meaningful when it happens because decisions are
>  potentially going to be agreed and passed to the TC!
> 
> If a cross project spec is not getting attention, don't post it to the list for
> attention. We get enough email and it'll probably be lost. Instead, let the
> product working group recognize this and reach out to the projects that this
> spec would benefit, to bring meaningful attention to the spec.
+1 
If something needs attention, we would be glad to help evaluate/socialize.
> 
> For vertical alignment, interaction like IRC is not necessary. A very brief,
> bullet point of collected information from projects that have anything
> interesting is given in a weekly digest email to the list. If anyone has
> questions or wants more information, they can use their own time to ask that
> project team.
> 
> Potentially, if we kept everything to the spec on gerrit, and had the product
> working group bringing needed attention to specs, we could eliminate the cross
> project meeting.
> 
Does it make sense to propose a continuation of this discussion at the summit (using the tool that Thierry just linked in another message) or a cross-project meeting before the summit?  A few of us from the Product WG will be glad to participate.

> -- 
> Mike Perez
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From gal.sagie at gmail.com  Fri Sep 18 18:30:05 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Fri, 18 Sep 2015 21:30:05 +0300
Subject: [openstack-dev] [Neutron] Kuryr - Spec
Message-ID: <CAG9LJa52qUkuU_X0xaRovX1908zUNaYbX+rx4OWD5yxWTAczQg@mail.gmail.com>

Hello everyone,

We have a spec for project Kuryr in the Neutron repository [1]; we have
been iterating on it internally and with the great help and feedback
from the Magnum team.

I am glad to say that we reached a pretty good point where most of the
Magnum team has +1'd the spec. I personally think all of the items for
the first milestone (which is for the Mitaka release) are well defined
and already in the working process (and the low-level design process).

I would like to thank the Magnum team for working closely with us on this
and for
the valuable feedback.

The reason we put this in the Neutron repository is that we feel Kuryr
is not another Neutron implementation; it is an infrastructure project
that can be used by any Neutron plugin and needs (in my opinion) to go
hand in hand with Neutron. We would like to make it visible to the
Neutron team, and I hope that we can get this spec merged for the
Mitaka release to define our goals in Kuryr.

We also have detailed designs and a blueprint process in the Kuryr
repository for all the items described in the spec.
I hope to see more comments/reviews from Neutron members on this spec.

On a side note, we had a virtual sprint for Kuryr last week; apuimedo
and taku will have a video of a demo thanks to the progress made during
the sprint, so stay tuned to see what's available.

Thanks
Gal.

[1] https://review.openstack.org/#/c/213490/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/90901efc/attachment.html>

From openstack at nemebean.com  Fri Sep 18 18:30:38 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 18 Sep 2015 13:30:38 -0500
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <55FB5D1E.2080706@internap.com>
References: <55FB5D1E.2080706@internap.com>
Message-ID: <55FC584E.80805@nemebean.com>

I've been dealing with this issue lately myself, so here's my two cents:

It seems to me that solving this at the service level is actually kind
of wrong.  As you've discovered, that requires changes in a bunch of
different places to address what is really an external issue.  Since
it's the terminating proxy that converts HTTPS traffic to HTTP, that
feels like the right place for a fix IMHO.

My solution has been to have the proxy (HAProxy in my case) rewrite the
Location header on redirects (one example for the TripleO puppet config
here: https://review.openstack.org/#/c/223330/1/manifests/loadbalancer.pp).

I'm not absolutely opposed to having a way to make the services aware of
external SSL termination to allow use of a proxy that can't do header
rewriting, but I think proxy configuration should be the preferred way
to handle it.
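For reference, the proxy-side fix looks roughly like this in HAProxy
1.5 syntax (the service name, ports, and certificate path are made up
for illustration, and differ from the linked TripleO review):

```haproxy
frontend api-https
    bind *:8774 ssl crt /etc/haproxy/example.pem
    # tell the backend the original scheme was https
    reqadd X-Forwarded-Proto:\ https
    default_backend api-plain

backend api-plain
    server api1 127.0.0.1:18774 check
    # rewrite any plain-http redirect the API emits back to https
    rspirep ^Location:\ http://(.*)  Location:\ https://\1
```

The rspirep line is the header rewriting mentioned above; the reqadd
line is only needed if the service itself honors X-Forwarded-Proto.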

-Ben

On 09/17/2015 07:38 PM, Mathieu Gagn? wrote:
> Hi,
> 
> While debugging LP bug #1491579 [1], we identified [2] an issue where an
> API sitting behind a proxy performing SSL termination would not generate
> the right redirection. The protocol ends up being the wrong one (http
> instead of https) and this could hang your request indefinitely if
> tcp/80 is not opened and a firewall drops your connection.
> 
> I suggested [3] adding support for the X-Forwarded-Proto header, thinking
> Nova didn't support it yet. In fact, someone suggested setting the
> public_endpoint config instead.
> 
> So today I stumbled across this review [4] which added the
> secure_proxy_ssl_header config to Nova. It allows the API to detect SSL
> termination based on the (suggested) header X-Forwarded-Proto just like
> previously suggested.
> 
> I also found this bug report [5] (opened in 2014) which also happens to
> complain about bad URLs when API is sitting behind a proxy.
> 
> Multiple projects applied patches to try to fix the issue (based on
> Launchpad comments):
> 
> * Glance added public_endpoint config
> * Cinder added public_endpoint config
> * Heat added secure_proxy_ssl_header config (through
> heat.api.openstack:sslmiddleware_filter)
> * Nova added secure_proxy_ssl_header config
> * Manila added secure_proxy_ssl_header config (through
> oslo_middleware.ssl:SSLMiddleware.factory)
> * Ironic added public_endpoint config
> * Keystone added secure_proxy_ssl_header config (LP #1370022)
> 
> As you can see, there is a lot of inconsistency between projects (there
> are more, but let's start with these).
> 
> My wish is for a common and consistent way for *ALL* OpenStack APIs to
> support the same solution for this common problem. Let me tell you (and
> I guess I can speak for all operators), we will be very happy to have
> ONE config to remember and set for *ALL* OpenStack services.
> 
> How can we get the ball rolling so we can fix it together once and for
> all in a timely fashion?
> 
> [1] https://bugs.launchpad.net/python-novaclient/+bug/1491579
> [2] https://bugs.launchpad.net/python-novaclient/+bug/1491579/comments/15
> [3] https://bugs.launchpad.net/python-novaclient/+bug/1491579/comments/17
> [4] https://review.openstack.org/#/c/206479/
> [5] https://bugs.launchpad.net/glance/+bug/1384379
> 
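The secure_proxy_ssl_header variant in the quoted list boils down to a
few lines of WSGI middleware. A rough sketch of the idea (not the
actual oslo_middleware code, which makes the header name configurable):

```python
# Sketch only: hard-codes the X-Forwarded-Proto header for brevity.

def ssl_termination_middleware(app):
    """Fix wsgi.url_scheme when an SSL-terminating proxy sits in front,
    so redirects and version documents are built with https URLs."""
    def wrapper(environ, start_response):
        proto = environ.get('HTTP_X_FORWARDED_PROTO')
        if proto in ('http', 'https'):
            environ['wsgi.url_scheme'] = proto
        return app(environ, start_response)
    return wrapper

# Minimal check with a stub app that echoes the scheme it saw.
def echo_scheme(environ, start_response):
    return [environ['wsgi.url_scheme']]

app = ssl_termination_middleware(echo_scheme)
assert app({'HTTP_X_FORWARDED_PROTO': 'https',
            'wsgi.url_scheme': 'http'}, None) == ['https']
```

Doing this once, consistently, in a shared middleware is exactly the
"ONE config for all services" ask.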



From chris.friesen at windriver.com  Fri Sep 18 18:32:57 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Fri, 18 Sep 2015 12:32:57 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAPWkaSW-URWM6FRcACDUJJdM7qyPYZtnzAONFy4Xm5cgkc48Fg@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
 <55FC4A8E.4070101@windriver.com>
 <CAPWkaSW-URWM6FRcACDUJJdM7qyPYZtnzAONFy4Xm5cgkc48Fg@mail.gmail.com>
Message-ID: <55FC58D9.6010201@windriver.com>

On 09/18/2015 12:11 PM, John Griffith wrote:
>
>
> On Fri, Sep 18, 2015 at 11:31 AM, Chris Friesen <chris.friesen at windriver.com
> <mailto:chris.friesen at windriver.com>> wrote:
>
>     On 09/18/2015 11:01 AM, John Griffith wrote:
>
>
>
>         On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen
>         <chris.friesen at windriver.com <mailto:chris.friesen at windriver.com>
>         <mailto:chris.friesen at windriver.com
>         <mailto:chris.friesen at windriver.com>>> wrote:
>
>              On 09/18/2015 06:57 AM, Eric Harney wrote:
>
>                  On 09/17/2015 06:06 PM, John Griffith wrote:
>
>
>                      Having the "global conf" settings intermixed with the
>         backend sections
>                      caused a number of issues when we first started working on
>         this.  That's
>                      part of why we require the "self.configuration" usage all
>         over in the
>                      drivers.  Each driver instantiation is it's own independent
>         entity.
>
>
>                  Yes, each driver instantiation is independent, but that would
>         still be
>                  the case if these settings inherited values set in [DEFAULT]
>         when they
>                  aren't set in the backend section.
>
>
>              Agreed.  If I explicitly set something in the [DEFAULT] section,
>         that should
>              carry through and apply to all the backends unless overridden in the
>              backend-specific section.
>
>
>
>         Bottom line "yes", ideally in the case of drivers we would check
>         global/default
>         setting, and then override it if something was provided in the driver
>         specific
>         setting, or if the driver itself set a different default.  That seems
>         like the
>         right way to be doing it anyway.  I've looked at that a bit this
>         morning, the
>         issue is that currently we don't even pass any of those higher level conf
>         settings in to the drivers init methods anywhere.  Need to figure out how to
>         change that, then it should be a relatively simple fix.
>
>
>     Actually, I think it should be slightly different.  If I set a variable in
>     the global/default section of the config file, then that should override any
>     defaults in the code for a driver.
>
> Hmm... well, on the bright side that might be easier to implement at least :). I
> guess I don't agree on the meaning of "DEFAULT", to me a "DEFAULT" section
> means, "these are the defaults if you don't specify something else", no?  Your
> proposal seems really counter-intuitive to me.

That's what I proposed.

If you specify anything in the driver-specific portion of the config file, that 
takes priority over everything.  If you specify something in the DEFAULT portion 
of the config file, then that takes priority over the in-code defaults.  If you 
specify a default value in the driver-specific code, that takes priority over 
any defaults specified in the global/generic code.

Chris



From dolph.mathews at gmail.com  Fri Sep 18 18:42:44 2015
From: dolph.mathews at gmail.com (Dolph Mathews)
Date: Fri, 18 Sep 2015 13:42:44 -0500
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAHAWLf27ON7goFPnmODoZsDRmg4yy_Z_XhHDu6zvH3+d35mLdw@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7C51B7@EX10MBOX06.pnnl.gov>
 <CAHAWLf27ON7goFPnmODoZsDRmg4yy_Z_XhHDu6zvH3+d35mLdw@mail.gmail.com>
Message-ID: <CAC=h7gWY+UCpxt5vDK_ycWPsku0ODQ7mZ7ij4JJXB3sFMrKKPg@mail.gmail.com>

On Fri, Sep 18, 2015 at 11:09 AM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> I just suggested to untie keystone from wsgi and implement uwsgi support.
> And then let the user decide what he or she wants.
>
Keystone is not tied to Apache or mod_wsgi, if that's what you mean. We
provide a sample configuration for Apache + mod_wsgi because deployers tend
to be more familiar with Apache than other web servers, and there happens
to be a mature ecosystem of auth modules that keystone can utilize. It's
working documentation, just like devstack itself.

Use whatever you want to deploy keystone. Choose the thing that you're
familiar with, can support effectively, run reliably, and has sufficient
performance.

</thread>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/590f9283/attachment.html>

From eharney at redhat.com  Fri Sep 18 18:52:34 2015
From: eharney at redhat.com (Eric Harney)
Date: Fri, 18 Sep 2015 14:52:34 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
Message-ID: <55FC5D72.8040600@redhat.com>

On 09/18/2015 01:01 PM, John Griffith wrote:
> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <chris.friesen at windriver.com>
> wrote:
> 
>> On 09/18/2015 06:57 AM, Eric Harney wrote:
>>
>>> On 09/17/2015 06:06 PM, John Griffith wrote:
>>>
>>
>> Having the "global conf" settings intermixed with the backend sections
>>>> caused a number of issues when we first started working on this.  That's
>>>> part of why we require the "self.configuration" usage all over in the
>>>> drivers.  Each driver instantiation is it's own independent entity.
>>>>
>>>>
>>> Yes, each driver instantiation is independent, but that would still be
>>> the case if these settings inherited values set in [DEFAULT] when they
>>> aren't set in the backend section.
>>>
>>
>> Agreed.  If I explicitly set something in the [DEFAULT] section, that
>> should carry through and apply to all the backends unless overridden in the
>> backend-specific section.
>>
>> Chris
>>
>>
> Meh I don't know about the "have to modify the code", the config file works
> you just need to add that line to your driver section and configure the
> backend correctly.
> 

My point is that there doesn't seem to be a justification for "you just
need to add that line to your driver section", which seems to counter
what most people's expectation would be.

People can and do fail to do that, because they assume that [DEFAULT]
settings are treated as defaults.

To help people who make that assumption, yes, you have to modify the
code, because the code supplies a default value that you cannot supply
in the same way via config files.

> Regardless, I see your point (but I still certainly don't agree that it's
> "blatantly wrong").
> 

You can substitute "very confusing" for "blatantly wrong" but I think
those are about the same thing when talking about usability issues with
how to configure a service.

Look at options like:
 - strict_ssh_host_key_policy
 - sio_verify_server_certificate
 - driver_ssl_cert_verify

All of these default to False, and if turned on, enable protections
against MITM attacks.  All of them also fail to turn on for the relevant
drivers if set in [DEFAULT].  These should, if set in DEFAULT when using
multi-backend, issue a warning so the admin knows that they are not
getting the intended security guarantees.  Instead, nothing happens and
Cinder and the storage keep working.  Confusion is dangerous.
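The warning Eric suggests could look something like the following. The option names are real Cinder options, but the check itself is a hypothetical sketch using plain `configparser`, not actual Cinder code:

```python
import configparser
import warnings

# Real Cinder option names; everything else here is illustrative.
SECURITY_OPTS = (
    "strict_ssh_host_key_policy",
    "sio_verify_server_certificate",
    "driver_ssl_cert_verify",
)

SAMPLE = """
[DEFAULT]
enabled_backends = lvm-1
driver_ssl_cert_verify = True

[lvm-1]
volume_backend_name = lvm-1
"""

def warn_on_ignored_security_opts(conf):
    """Warn when a security option sits in [DEFAULT] under multi-backend,
    where the drivers will silently ignore it."""
    if not conf.get("DEFAULT", "enabled_backends", fallback=""):
        return []
    ignored = [o for o in SECURITY_OPTS if conf.has_option("DEFAULT", o)]
    for opt in ignored:
        warnings.warn(
            "%s is set in [DEFAULT] but multi-backend drivers will not "
            "apply it; set it in each backend section instead" % opt)
    return ignored

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)
ignored = warn_on_ignored_security_opts(conf)
print(ignored)  # ['driver_ssl_cert_verify']
```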

> Bottom line "yes", ideally in the case of drivers we would check
> global/default setting, and then override it if something was provided in
> the driver specific setting, or if the driver itself set a different
> default.  That seems like the right way to be doing it anyway.  I've looked
> at that a bit this morning, the issue is that currently we don't even pass
> any of those higher level conf settings in to the drivers init methods
> anywhere.  Need to figure out how to change that, then it should be a
> relatively simple fix.
> 

What I was getting at earlier though, is that I'm not really sure there
is a simple fix.  It may be simple to change the behavior to more
predictable behavior, but doing that in a way that doesn't introduce
upgrade problems for deployments relying on the current defaults seems
difficult to me.


From nik.komawar at gmail.com  Fri Sep 18 19:01:36 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 18 Sep 2015 15:01:36 -0400
Subject: [openstack-dev] [Glance] glance core rotation part 1
In-Reply-To: <55F6B36D.6000307@catalyst.net.nz>
References: <55F2E3F9.1000907@gmail.com>
 <etPan.55f31e93.35ed68df.5cd@MMR298FD58>
 <CABdthUSsbSxn1Vb5nTyGnLO__8VYn7KvGHUV8MeBD6ZERtD8ew@mail.gmail.com>
 <EA70533067B8F34F801E964ABCA4C4410F4D3E96@G9W0745.americas.hpqcorp.net>
 <CAHPxGAWbN1oFfPM1aS6=BmLcv2ZiDQzjjnBafQf_2n_O=71qqw@mail.gmail.com>
 <55F6B36D.6000307@catalyst.net.nz>
Message-ID: <55FC5F90.9090204@gmail.com>

This is done.

On 9/14/15 7:45 AM, Fei Long Wang wrote:
> +1
>
> Yep, it would be nice if Zhi Yan can promote OpenStack in Alibaba :)
>
> On 14/09/15 22:59, Mikhail Fedosin wrote:
>> +1.
>> I hope that Zhi Yan joined Alibaba to make it use Openstack in the
>> future :)
>>
>> On Mon, Sep 14, 2015 at 11:23 AM, Kuvaja, Erno <kuvaja at hpe.com
>> <mailto:kuvaja at hpe.com>> wrote:
>>
>>     +1
>>
>>      
>>
>>     *From:*Alex Meade [mailto:mr.alex.meade at gmail.com
>>     <mailto:mr.alex.meade at gmail.com>]
>>     *Sent:* Friday, September 11, 2015 7:37 PM
>>     *To:* OpenStack Development Mailing List (not for usage questions)
>>     *Subject:* Re: [openstack-dev] [Glance] glance core rotation part 1
>>
>>      
>>
>>     +1
>>
>>      
>>
>>     On Fri, Sep 11, 2015 at 2:33 PM, Ian Cordasco
>>     <ian.cordasco at rackspace.com <mailto:ian.cordasco at rackspace.com>>
>>     wrote:
>>
>>          
>>
>>         -----Original Message-----
>>         From: Nikhil Komawar <nik.komawar at gmail.com
>>         <mailto:nik.komawar at gmail.com>>
>>         Reply: OpenStack Development Mailing List (not for usage
>>         questions) <openstack-dev at lists.openstack.org
>>         <mailto:openstack-dev at lists.openstack.org>>
>>         Date: September 11, 2015 at 09:30:23
>>         To: openstack-dev at lists.openstack.org
>>         <mailto:openstack-dev at lists.openstack.org>
>>         <openstack-dev at lists.openstack.org
>>         <mailto:openstack-dev at lists.openstack.org>>
>>         Subject:  [openstack-dev] [Glance] glance core rotation part 1
>>
>>         > Hi,
>>         >
>>         > I would like to propose the following removals from
>>         glance-core based on
>>         > the simple criterion of inactivity/limited activity for a
>>         long period (2
>>         > cycles or more) of time:
>>         >
>>         > Alex Meade
>>         > Arnaud Legendre
>>         > Mark Washenberger
>>         > Iccha Sethi
>>
>>         I think these are overdue
>>
>>         > Zhi Yan Liu (Limited activity in Kilo and absent in Liberty)
>>
>>         Sad to see Zhi Yan Liu's activity drop off.
>>
>>         > Please vote +1 or -1 and we will decide by Monday EOD PT.
>>
>>         +1
>>
>>         --
>>         Ian Cordasco
>>
>>         __________________________________________________________________________
>>         OpenStack Development Mailing List (not for usage questions)
>>         Unsubscribe:
>>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>      
>>
>>
>>     __________________________________________________________________________
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe:
>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -- 
> Cheers & Best regards,
> Fei Long Wang (???)
> --------------------------------------------------------------------------
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flwang at catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> -------------------------------------------------------------------------- 
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/cff35b40/attachment.html>

From john.griffith8 at gmail.com  Fri Sep 18 19:01:52 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 18 Sep 2015 13:01:52 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FC58D9.6010201@windriver.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
 <55FC4A8E.4070101@windriver.com>
 <CAPWkaSW-URWM6FRcACDUJJdM7qyPYZtnzAONFy4Xm5cgkc48Fg@mail.gmail.com>
 <55FC58D9.6010201@windriver.com>
Message-ID: <CAPWkaSVnDYqAkQGvAsDCXmNec4fOUDyimreyDKgAR1wWEU4LLA@mail.gmail.com>

On Fri, Sep 18, 2015 at 12:32 PM, Chris Friesen <chris.friesen at windriver.com
> wrote:

> On 09/18/2015 12:11 PM, John Griffith wrote:
>
>>
>>
>> On Fri, Sep 18, 2015 at 11:31 AM, Chris Friesen <
>> chris.friesen at windriver.com
>> <mailto:chris.friesen at windriver.com>> wrote:
>>
>>     On 09/18/2015 11:01 AM, John Griffith wrote:
>>
>>
>>
>>         On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen
>>         <chris.friesen at windriver.com <mailto:chris.friesen at windriver.com>
>>         <mailto:chris.friesen at windriver.com
>>
>>         <mailto:chris.friesen at windriver.com>>> wrote:
>>
>>              On 09/18/2015 06:57 AM, Eric Harney wrote:
>>
>>                  On 09/17/2015 06:06 PM, John Griffith wrote:
>>
>>
>>                      Having the "global conf" settings intermixed with the
>>         backend sections
>>                      caused a number of issues when we first started
>> working on
>>         this.  That's
>>                      part of why we require the "self.configuration"
>> usage all
>>         over in the
>>                      drivers.  Each driver instantiation is it's own
>> independent
>>         entity.
>>
>>
>>                  Yes, each driver instantiation is independent, but that
>> would
>>         still be
>>                  the case if these settings inherited values set in
>> [DEFAULT]
>>         when they
>>                  aren't set in the backend section.
>>
>>
>>              Agreed.  If I explicitly set something in the [DEFAULT]
>> section,
>>         that should
>>              carry through and apply to all the backends unless
>> overridden in the
>>              backend-specific section.
>>
>>
>>
>>         Bottom line "yes", ideally in the case of drivers we would check
>>         global/default
>>         setting, and then override it if something was provided in the
>> driver
>>         specific
>>         setting, or if the driver itself set a different default.  That
>> seems
>>         like the
>>         right way to be doing it anyway.  I've looked at that a bit this
>>         morning, the
>>         issue is that currently we don't even pass any of those higher
>> level conf
>>         settings in to the drivers init methods anywhere.  Need to figure
>> out how to
>>         change that, then it should be a relatively simple fix.
>>
>>
>>     Actually, I think it should be slightly different.  If I set a
>> variable in
>>     the global/default section of the config file, then that should
>> override any
>>     defaults in the code for a driver.
>>
>> Hmm... well, on the bright side that might be easier to implement at
>> least :). I
>> guess I don't agree on the meaning of "DEFAULT", to me a "DEFAULT" section
>> means, "these are the defaults if you don't specify something else", no?
>> Your
>> proposal seems really counter-intuitive to me.
>>
>
> That's what I proposed.
>
> If you specify anything in the driver-specific portion of the config file,
> that takes priority over everything.  If you specify something in the
> DEFAULT portion of the config file, then that takes priority over the
> in-code defaults.  If you specify a default value in the driver-specific
> code, that takes priority over any defaults specified in the global/generic
> code.
>
>
> Chris
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Ahh... ok, yeah; sorry, I interpreted that differently when I read it.
Thanks for clarifying.

So the good news is we most definitely agree there, and that I've got
something that gets us at least part way there.  Now, to just figure out
how to determine if the value in the driver section was set explicitly or
is just parsing out the default value in the opt declaration.
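With plain `configparser`, the "was it set explicitly?" question can be answered by turning off the magic DEFAULT section, as in this sketch. oslo.config, which Cinder actually uses, works differently; this only illustrates the idea, and `CODE_DEFAULTS` is a hypothetical stand-in for an in-code opt declaration:

```python
import configparser

SAMPLE = """
[DEFAULT]
iscsi_helper = lioadm

[lvm-1]
volume_backend_name = lvm-1
"""

# Point default_section at an unused name so [DEFAULT] becomes an ordinary
# section and has_option() only reports explicitly written options.
conf = configparser.ConfigParser(default_section="__unused__")
conf.read_string(SAMPLE)

CODE_DEFAULTS = {"iscsi_helper": "tgtadm"}  # hypothetical in-code default

def effective(opt, backend):
    if conf.has_option(backend, opt):    # explicit in the backend stanza
        return conf.get(backend, opt)
    if conf.has_option("DEFAULT", opt):  # operator-set global value
        return conf.get("DEFAULT", opt)
    return CODE_DEFAULTS[opt]            # fall back to the in-code default

print(effective("iscsi_helper", "lvm-1"))  # lioadm
```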

Thanks!!
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/3976fc54/attachment.html>

From Vijay.Venkatachalam at citrix.com  Fri Sep 18 19:02:31 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Fri, 18 Sep 2015 19:02:31 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
In-Reply-To: <D21DD44E.218C2%adam.harwell@rackspace.com>
References: <26B082831A2B1A4783604AB89B9B2C080E899563@SINPEX01CL02.citrite.net>
 <D21DD44E.218C2%adam.harwell@rackspace.com>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E8A30D2@SINPEX01CL02.citrite.net>


>> This honestly hasn't even been *fully* tested yet, but it SHOULD work.
It did not work. Please read on.
>> User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
I did perform the above step to give read access for the container and secrets to "admin", but it did not work.

Root Cause
==========
The certmanager in lbaas which connects to barbican uses the keystone session obtained from
neutron_lbaas.common.keystone.get_session().
Since the keystone session is scoped to the "admin" tenant, lbaas is not able to get the tenant's container/certificate.

I have filed a bug for the same.

https://bugs.launchpad.net/neutron/+bug/1497410

This is an important fix, since without it tenants won't be able to use SSL offload. I will try to upload a fix for this next week.
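For anyone retracing the ACL step above, the per-user grant in the Barbican ACL quickstart takes roughly this shape (a sketch of the documented call, not a tested request; the service user ID is a placeholder):

```
PUT /v1/containers/{container_uuid}/acl

{
  "read": {
    "users": ["<lbaas-service-user-id>"],
    "project-access": true
  }
}
```

Note the grant is keyed by user ID, not tenant ID.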


Thanks,
Vijay V.

From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 16 September 2015 00:32
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

There is not really good documentation for this yet.
When I say Neutron-LBaaS tenant, I am maybe using the wrong word; I guess I mean the user that is configured as the service-account in neutron.conf.
The user will hit the ACL API themselves to set up the ACLs on their own secrets/containers; we won't do it for them. So, the workflow is:


  *   User creates Secrets in Barbican.
  *   User creates CertificateContainer in Barbican.
  *   User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
  *   User creates a LoadBalancer in Neutron-LBaaS.
  *   LBaaS hits Barbican using its standard configured service-account to retrieve the Container/Secrets from the user's Barbican account.
This honestly hasn't even been *fully* tested yet, but it SHOULD work. The question is whether right now in devstack the admin user is allowed to read all user secrets just because it is the admin user (which I think might be the case), in which case we won't actually know if ACLs are working as intended (but I think we can assume that Barbican has tested that feature and we can just rely on it working).

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 14, 2015 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Is there documentation that records this step by step?

What is Neutron-LBaaS tenant?

Is it the tenant that is configuring the listener? *OR* is it some tenant created for the lbaas plugin that holds all secrets for all tenants configuring lbaas?

>>You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant.
I checked the ACL docs
http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

The ACL API allows "users" (not "tenants") access to secrets/containers. What is the API or CLI that the admin will use to grant the Neutron-LBaaS tenant access to the tenant's secret and container?


From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 15 September 2015 03:00
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be documented, but we are looking into making an API call that would expose the admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system to add that ID as a readable user of the container and all of the secrets. Then Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is granted access to the user's container.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying "could not locate/find container".
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = "admin" and tenant_name = "admin".
              This means the lbaas plugin is looking for the tenant's certificate in the "admin" tenant, where it will never find it.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the "admin" user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username = "admin" and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/5816f632/attachment.html>

From robertc at robertcollins.net  Fri Sep 18 19:32:18 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Sat, 19 Sep 2015 07:32:18 +1200
Subject: [openstack-dev] [releases][requirements][keystone]something
	incompatible with our requirements
Message-ID: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>

I know this is terrible timing with the release and all, but
constraints updates are failing. This is the first evidence - and it
doesn't look like a race to me:
http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902

https://review.openstack.org/#/c/221157/ is the updated review to
bring it all together. I'm worried that the incompatibility is going
to impact distributors and/or may even be from one of our own recent
library releases.



-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From john.griffith8 at gmail.com  Fri Sep 18 19:33:26 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 18 Sep 2015 13:33:26 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FC5D72.8040600@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
 <55FC5D72.8040600@redhat.com>
Message-ID: <CAPWkaSWVO45Ha1oPPPjqy=rTW56-bS7KT7Rfz_hu68Xv2HPahw@mail.gmail.com>

On Fri, Sep 18, 2015 at 12:52 PM, Eric Harney <eharney at redhat.com> wrote:

> On 09/18/2015 01:01 PM, John Griffith wrote:
> > On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <
> chris.friesen at windriver.com>
> > wrote:
> >
> >> On 09/18/2015 06:57 AM, Eric Harney wrote:
> >>
> >>> On 09/17/2015 06:06 PM, John Griffith wrote:
> >>>
> >>
> >> Having the "global conf" settings intermixed with the backend sections
> >>>> caused a number of issues when we first started working on this.
> That's
> >>>> part of why we require the "self.configuration" usage all over in the
> >>>> drivers.  Each driver instantiation is it's own independent entity.
> >>>>
> >>>>
> >>> Yes, each driver instantiation is independent, but that would still be
> >>> the case if these settings inherited values set in [DEFAULT] when they
> >>> aren't set in the backend section.
> >>>
> >>
> >> Agreed.  If I explicitly set something in the [DEFAULT] section, that
> >> should carry through and apply to all the backends unless overridden in
> the
> >> backend-specific section.
> >>
> >> Chris
> >>
> >>
> > Meh I don't know about the "have to modify the code", the config file
> works
> > you just need to add that line to your driver section and configure the
> > backend correctly.
> >
>
> My point is that there doesn't seem to be a justification for "you just
> need to add that line to your driver section", which seems to counter
> what most people's expectation would be.
>
There certainly is: I don't want to force the same options on all
backends.  A perfect example is the issue with some distros in the past that
DID use global settings and stomped over every driver, which in turn broke
those that weren't compatible with that conf setting even though they
overrode it in the driver section.


>
> People can and do fail to do that, because they assume that [DEFAULT]
> settings are treated as defaults.
>

Bad assumption; we should probably document this until we fix it (making a
very large assumption that we'll ever agree on how to fix it).

>
> To help people who make that assumption, yes, you have to modify the
> code, because the code supplies a default value that you cannot supply
> in the same way via config files.
>

Or you could just fill out the config file properly:
    [lvm-1]
    iscsi_helper = lioadm

I didn't have to modify any code.


>
> > Regardless, I see your point (but I still certainly don't agree that it's
> > "blatantly wrong").
> >
>
> You can substitute "very confusing" for "blatantly wrong" but I think
> those are about the same thing when talking about usability issues with
> how to configure a service.
>

Fair enough.  Call it whatever you like.


>
> Look at options like:
>  - strict_ssh_host_key_policy
>  - sio_verify_server_certificate
>  - driver_ssl_cert_verify


> All of these default to False, and if turned on, enable protections
> against MITM attacks.  All of them also fail to turn on for the relevant
> drivers if set in [DEFAULT].  These should, if set in DEFAULT when using
> multi-backend, issue a warning so the admin knows that they are not
> getting the intended security guarantees.  Instead, nothing happens and
> Cinder and the storage works.  Confusion is dangerous.
>

Yeah, so is crappy documentation and lack of understanding.


>
> > Bottom line "yes", ideally in the case of drivers we would check
> > global/default setting, and then override it if something was provided in
> > the driver specific setting, or if the driver itself set a different
> > default.  That seems like the right way to be doing it anyway.  I've
> looked
> > at that a bit this morning, the issue is that currently we don't even
> pass
> > any of those higher level conf settings in to the drivers init methods
> > anywhere.  Need to figure out how to change that, then it should be a
> > relatively simple fix.
> >
>
> What I was getting at earlier though, is that I'm not really sure there
> is a simple fix.  It may be simple to change the behavior to more
> predictable behavior, but doing that in a way that doesn't introduce
> upgrade problems for deployments relying on the current defaults seems
> difficult to me.
>
Agreed, but honestly I'd like to at least try.  Especially when people use
terms like "blatantly wrong" and "dangerous", it kinda prompts one to think
that perhaps it should be looked at.  If nothing else, we shouldn't have
driver settings in the DEFAULT section; we should just create a driver
section.  But we still need to figure out how to deal with things outside of
the "general" section vs the backend stanza.

Also, I'd argue that the behavior you're arguing for is MORE dangerous and
troublesome.  LIO being in the global CONF was a perfect example: it broke
third-party devices on a specific distro because it assumed that EVERYTHING
on the system was using the lio methods, and in that case you really
couldn't do anything but modify code.

I've pinged you a number of times on IRC, maybe we can chat there a bit
real-time and see if we can work together on a solution?

Thanks,
John

>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/69d0f2c5/attachment.html>

From tdurakov at mirantis.com  Fri Sep 18 19:37:36 2015
From: tdurakov at mirantis.com (Timofei Durakov)
Date: Fri, 18 Sep 2015 22:37:36 +0300
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <CABib2_rtNKpbHjxb-Cf23pUJRd2G-DAL+pLhhBKTNkejXq99BA@mail.gmail.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <CABib2_rtNKpbHjxb-Cf23pUJRd2G-DAL+pLhhBKTNkejXq99BA@mail.gmail.com>
Message-ID: <CAHsr+ix685K6+LXkEeuh56QSm_=SjTVpEtxFbH_aSiCmEmUJyg@mail.gmail.com>

Hi,
some work items:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073965.html
- ci coverage for live-migration
https://blueprints.launchpad.net/nova/+spec/split-different-live-migration-types
- compute + drivers code cleanup

On Fri, Sep 18, 2015 at 6:47 PM, John Garbutt <john at johngarbutt.com> wrote:

> On 18 September 2015 at 16:23, Daniel P. Berrange <berrange at redhat.com>
> wrote:
> > On Fri, Sep 18, 2015 at 11:53:05AM +0000, Murray, Paul (HP Cloud) wrote:
> >> Hi All,
> >>
> >> There are various efforts going on around live migration at the moment:
> >> fixing up CI, bug fixes, additions to cover more corner cases, proposals
> >> for new operations....
> >>
> >> Generally live migration could do with a little TLC (see: [1]), so I am
> >> going to suggest we give some of that care in the next cycle.
> >>
> >> Please respond to this post if you have an interest in this and what you
> >> would like to see done. Include anything you are already getting on with
> >> so we get a clear picture. If there is enough interest I'll put this
> >> together as a proposal for a work stream. Something along the lines of
> >> "robustify live migration".
> >
> ><snip>
> >
> > Testing. Testing. Testing.
>
> +1 for Testing
>
> The "CI for reliable live-migration" thread was covering some of the
> details on the multi-host CI options.
>
> Thanks,
> johnthetubaugy
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/b02ecefe/attachment.html>

From mordred at inaugust.com  Fri Sep 18 19:41:31 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 18 Sep 2015 15:41:31 -0400
Subject: [openstack-dev] [releases][requirements][keystone]something
 incompatible with our requirements
In-Reply-To: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
References: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
Message-ID: <55FC68EB.80603@inaugust.com>

On 09/18/2015 03:32 PM, Robert Collins wrote:
> I know this is terrible timing with the release and all, but
> constraints updates are failing. This is the first evidence - and it
> doesn't look like a race to me:
> http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902
>
> https://review.openstack.org/#/c/221157/ is the updated review to
> bring it all together. I'm worried that the incompatibility is going
> to impact distributors and/or may even be from one of our own recent
> library releases.

It's related to an interaction between os-client-config and 
python-openstackclient. A fix just went into os-client-config and was 
released and we're getting the associated patch into 
python-openstackclient now.

Sorry for the disturbance. It's not a requirements issue.



From dstanek at dstanek.com  Fri Sep 18 19:45:51 2015
From: dstanek at dstanek.com (David Stanek)
Date: Fri, 18 Sep 2015 15:45:51 -0400
Subject: [openstack-dev] [releases][requirements][keystone]something
 incompatible with our requirements
In-Reply-To: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
References: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
Message-ID: <CAO69Nd=iS1uqc3HM9prB0mq4veaj2iyCm6mg21oazkokdJ_AsQ@mail.gmail.com>

On Fri, Sep 18, 2015 at 3:32 PM, Robert Collins <robertc at robertcollins.net>
wrote:

> I know this is terrible timing with the release and all, but
> constraints updates are failing. This is the first evidence - and it
> doesn't look like a race to me:
>
> http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902
>
> https://review.openstack.org/#/c/221157/ is the updated review to
> bring it all together. I'm worried that the incompatibility is going
> to impact distributors and/or may even be from one of our own recent
> library releases.
>

This looks like the issue I just ran into. The newest os-client-config
depends on keystoneauth1, and this breaks openstackclient since it registers
its plugins under the keystoneclient entry point and not the keystoneauth1
entry point.
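For context, these auth plugins are discovered through setuptools entry points, and the two libraries look in different groups. A minimal sketch of the difference in setup.cfg terms (the plugin and module paths shown are illustrative, not copied from either project's actual configuration):

```ini
[entry_points]
# Plugins registered under the old keystoneclient group:
keystoneclient.auth.plugin =
    v3password = keystoneclient.auth.identity.v3:Password

# The group the new os-client-config expects:
keystoneauth1.plugin =
    v3password = keystoneauth1.identity.v3:Password
```

A plugin registered only under the first group is invisible to code that iterates the second, which matches the breakage described here.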

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/35f44700/attachment.html>

From adam.harwell at RACKSPACE.COM  Fri Sep 18 19:46:33 2015
From: adam.harwell at RACKSPACE.COM (Adam Harwell)
Date: Fri, 18 Sep 2015 19:46:33 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E8A30D2@SINPEX01CL02.citrite.net>
Message-ID: <D221D443.21D1B%adam.harwell@rackspace.com>

That sounds like the Barbican ACLs are not working properly. The whole point of using Barbican ACLs is that the keystone session marked for tenant "admin" should be able to get access to ANY tenant's container/secrets if the ACLs are set. I am still not convinced this is an issue on the LBaaS side. Unfortunately, I don't have a lot of time to test this right now as we're up against the clock for the gate, so your help in debugging and fixing this issue is greatly appreciated! I just want to make sure the expected workflow is fully understood.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 18, 2015 at 2:02 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?


>> This honestly hasn't even been *fully* tested yet, but it SHOULD work.
It did not work. Please read on.
>> User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
I did perform the above step to give read access for the container and secrets to "admin", but it did not work.

Root Cause
==========
The certmanager in lbaas which connects to barbican uses the keystone session gathered from
neutron_lbaas.common.keystone.get_session()
Since the keystone session is marked for tenant "admin", lbaas is not able to get the tenant's container/certificate.

I have filed a bug for the same.

https://bugs.launchpad.net/neutron/+bug/1497410

This is an important fix required since tenants won't be able to use SSL Offload otherwise. Will try to upload a fix for this next week.


Thanks,
Vijay V.

From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 16 September 2015 00:32
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

There is not really good documentation for this yet...
When I say Neutron-LBaaS tenant, I am maybe using the wrong word; I mean the user that is configured as the service-account in neutron.conf.
The user will hit the ACL API themselves to set up the ACLs on their own secrets/containers; we won't do it for them. So, the workflow is like:


  *   User creates Secrets in Barbican.
  *   User creates CertificateContainer in Barbican.
  *   User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
  *   User creates a LoadBalancer in Neutron-LBaaS.
  *   LBaaS hits Barbican using its standard configured service-account to retrieve the Container/Secrets from the user's Barbican account.
This honestly hasn't even been *fully* tested yet, but it SHOULD work. The question is whether right now in devstack the admin user is allowed to read all user secrets just because it is the admin user (which I think might be the case), in which case we won't actually know if ACLs are working as intended (but I think we assume that Barbican has tested that feature and we can just rely on it working).
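The ACL grant in the workflow above can be sketched against Barbican's v1 ACL API. This is a minimal illustration only: the secret href and user ID below are placeholders, and a real deployment would PUT the payload with an authenticated keystone session rather than print it.

```python
import json

# Placeholder identifiers -- in practice these come from Barbican and Keystone.
SECRET_REF = "https://barbican.example.com/v1/secrets/SECRET-UUID"
LBAAS_SERVICE_USER_ID = "LBAAS-SERVICE-USER-UUID"

def build_read_acl(user_ids):
    """Build the PUT body for {secret_ref}/acl granting read access to the
    given user IDs, keeping normal project access intact (v1 ACL format)."""
    return {"read": {"users": list(user_ids), "project-access": True}}

# The user would PUT this body to SECRET_REF + "/acl" with their own token,
# and repeat for each secret plus the certificate container.
body = json.dumps(build_read_acl([LBAAS_SERVICE_USER_ID]))
print(body)
```

The same shape, with `"project-access": False`, is how a user would lock a secret down to only the listed readers.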

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 14, 2015 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Is there documentation which records this step by step?

What is the Neutron-LBaaS tenant?

Is it the tenant who is configuring the listener? *OR* is it some tenant created for the lbaas plugin that holds all secrets for all tenants configuring lbaas?

>>You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant.
I checked the ACL docs
http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

The ACL API is to allow "users" (not "tenants") access to secrets/containers. What is the API or CLI that the admin will use to allow the Neutron-LBaaS tenant access to the tenant's secret and container?


From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 15 September 2015 03:00
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be documented, but we are looking into making an API call that would expose the admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system to add that ID as a readable user of the container and all of the secrets. Then Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is granted access to the user's container.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying "could not locate/find container".
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = "admin" and tenant_name = "admin".
              This means the lbaas plugin is looking for the tenant's certificate in the "admin" tenant, which it will never be able to find.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the "admin" user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username = "admin" and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/11825185/attachment.html>

From morgan.fainberg at gmail.com  Fri Sep 18 19:47:10 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 18 Sep 2015 12:47:10 -0700
Subject: [openstack-dev] [releases][requirements][keystone]something
	incompatible with our requirements
In-Reply-To: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
References: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
Message-ID: <162EAAE9-CBAA-44FC-B2C3-9929638800FB@gmail.com>

I'm not seeing the source of this at a quick glance (in keystoneclient where I am assuming the plugin is being loaded from?). I'll look a bit more closely after I finish my food. 

--Morgan

Sent via mobile

> On Sep 18, 2015, at 12:32, Robert Collins <robertc at robertcollins.net> wrote:
> 
> I know this is terrible timing with the release and all, but
> constraints updates are failing. This is the first evidence - and it
> doesn't look like a race to me:
> http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902
> 
> https://review.openstack.org/#/c/221157/ is the updated review to
> bring it all together. I'm worried that the incompatibility is going
> to impact distributors and/or may even be from one of our own recent
> library releases.
> 
> 
> 
> -- 
> Robert Collins <rbtcollins at hp.com>
> Distinguished Technologist
> HP Converged Cloud
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From Vijay.Venkatachalam at citrix.com  Fri Sep 18 19:47:30 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Fri, 18 Sep 2015 19:47:30 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <CAMKdHYoj5D=pktVL+PMyPhF9KxzbZctDrkQx24QCw1W42FFH-Q@mail.gmail.com>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
 <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
 <D2202F18.1CAE5%dmccowan@cisco.com>
 <26B082831A2B1A4783604AB89B9B2C080E8A132D@SINPEX01CL02.citrite.net>
 <CAMKdHYoj5D=pktVL+PMyPhF9KxzbZctDrkQx24QCw1W42FFH-Q@mail.gmail.com>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E8A31C1@SINPEX01CL02.citrite.net>


I would think of OpenStack as a self-service portal.
Anyway, the tenant's admin need not play the cloud admin's role.
Only the cloud admin who sets up and manages the OpenStack infrastructure (controller nodes, etc.) would know about the LBaaS service user. As much as possible, the tenant admin should not be required to learn about the LBaaS service user.

From: Nathan Reller [mailto:nathan.s.reller at gmail.com]
Sent: 18 September 2015 18:32
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

> But that approach looks a little untidy, because tenant admin has to do some infrastructure work.

I would think infrastructure work would be part of the admin role. They are doing other things such as creating LBaaS, which seems like an infrastructure job to me. I would think configuring LBaaS and key management are similar. It seems like you think they are not similar. Can you explain more?

-Nate
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/9ad25672/attachment.html>

From toni+openstackml at midokura.com  Fri Sep 18 19:52:28 2015
From: toni+openstackml at midokura.com (Antoni Segura Puimedon)
Date: Fri, 18 Sep 2015 21:52:28 +0200
Subject: [openstack-dev] [Neutron] Kuryr - Spec
In-Reply-To: <CAG9LJa52qUkuU_X0xaRovX1908zUNaYbX+rx4OWD5yxWTAczQg@mail.gmail.com>
References: <CAG9LJa52qUkuU_X0xaRovX1908zUNaYbX+rx4OWD5yxWTAczQg@mail.gmail.com>
Message-ID: <CAP8JW8BNBRhPupYGX0Eb1RJHVqy0_zcdY0P5Tva+LKrJj0JYqQ@mail.gmail.com>

On Fri, Sep 18, 2015 at 8:30 PM, Gal Sagie <gal.sagie at gmail.com> wrote:

> Hello everyone,
>
> We have a spec for project Kuryr in Neutron repository [1] , we have been
> iterating on it
> internally and with the great help and feedback from the Magnum team.
>
> I am glad to say that we reached a pretty good step where we have most of
> the
> Magnum team +1 the spec. I personally think all of the items for the first
> milestone
> (which is for Mitaka release) are well defined and already in working
> process (and low level
> design process).
>
> I would like to thank the Magnum team for working closely with us on this
> and for
> the valuable feedback.
>
> The reason why we put this in the Neutron repository is the fact that we
> feel Kuryr
> is not another Neutron implementation, it is an infrastructure project
> that can be used by
> any Neutron plugin and needs (in my opinion) to go hand in hand with
> Neutron.
> We would like to make it visible to the Neutron team and I hope that we
> can get
> this spec merged for the Mitaka release to define our goals in Kuryr.
>
> We also have detailed designs and blueprints process in Kuryr repository
> for
> all the items described in the spec.
> I hope to see more comments/review from Neutron members on this spec.
>
> On a side note, we had a virtual sprint for Kuryr last week, apuimedo and
> taku will have
> a video of a demo thanks to the progress made on the sprint, so stay tuned
> for that to see what's available.
>

And Gal and I will be demoing the development version live at the Summit
[1], so come see our talk ;-)

[1]
https://openstacksummitoctober2015tokyo.sched.org/event/b90847a5496c0a2454929d95a0afc46e


>
> Thanks
> Gal.
>
> [1] https://review.openstack.org/#/c/213490/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/5b97655b/attachment.html>

From doug at doughellmann.com  Fri Sep 18 20:03:55 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 18 Sep 2015 16:03:55 -0400
Subject: [openstack-dev] [releases][requirements][keystone]something
	incompatible with our requirements
In-Reply-To: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
References: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
Message-ID: <1442606482-sup-8125@lrrr.local>

Excerpts from Robert Collins's message of 2015-09-19 07:32:18 +1200:
> I know this is terrible timing with the release and all, but
> constraints updates are failing. This is the first evidence - and it
> doesn't look like a race to me:
> http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902
> 
> https://review.openstack.org/#/c/221157/ is the updated review to
> bring it all together. I'm worried that the incompatibility is going
> to impact distributors and/or may even be from one of our own recent
> library releases.
> 

It looks like this is a problem from os-client-config 1.7.0 and later.
The constraints file has not been updating on new releases, so we're
still constrained to 1.6.4 in jobs that use the constraints, which is
why it isn't showing up elsewhere.

To debug, I ran:

git clone https://git.openstack.org/openstack/python-openstackclient
cd python-openstackclient
tox -e py27 --notest
.tox/py27/bin/openstack --debug
(error)
.tox/py27/bin/pip install os-client-config==1.7.1
.tox/py27/bin/openstack --debug
(error)
.tox/py27/bin/pip install os-client-config==1.7.0
.tox/py27/bin/openstack --debug
(error)
.tox/py27/bin/pip install os-client-config==1.6.4
.tox/py27/bin/openstack --debug
(works)

Doug


From Vijay.Venkatachalam at citrix.com  Fri Sep 18 20:15:31 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Fri, 18 Sep 2015 20:15:31 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
 <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
 <D2202F18.1CAE5%dmccowan@cisco.com> 
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E8A341E@SINPEX01CL02.citrite.net>

Typos corrected.

From: Vijay Venkatachalam
Sent: 18 September 2015 00:36
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: RE: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Yes Dave, that is what is happening today.

But that approach looks a little untidy, because tenant admin has to do some infrastructure work.

It will be good from the user/tenant admin's perspective to just do 2 things

1.      Upload certificates info

2.      Create LBaaS Configuration with certificates already uploaded

Now, because barbican and LBaaS do *not* work nicely with each other, every tenant admin has to do the following


1.      Uploads certificate info

2.      Reads a document or finds out there is an LBaaS service user, somehow gets hold of the LBaaS service user's user ID, and assigns read rights on that certificate to the LBaaS service user.

3.      Creates LBaaS configuration with the certificates already uploaded

This does not fit the "As a service" model of OpenStack, where tenants just configure whatever they want and the infrastructure takes care of automating the rest.

Thanks,
Vijay V.

From: Dave McCowan (dmccowan) [mailto:dmccowan at cisco.com]
Sent: 17 September 2015 18:20
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


The tenant admin from Step 1, should also do Step 2.

From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Wednesday, September 16, 2015 at 9:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


How does lbaas do step 2?
It does not have the privilege to access that secret/container using the service user.
Should it use the keystone token through which user created LB config and assign read access for the secret/container to the LBaaS service user?

Thanks,
Vijay V.

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: 16 September 2015 19:24
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Why not have lbaas do step 2? Even better would be to help with the instance user spec and combined with lbaas doing step 2, you could restrict secret access to just the amphora that need the secret?

Thanks,
Kevin

________________________________
From: Vijay Venkatachalam
Sent: Tuesday, September 15, 2015 7:06:39 PM
To: OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates
Hi,
               Is there a way to provide a certain user read access to all secrets/containers of all projects'/tenants' certificates?
               This user with universal "read" privileges will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users are following the below mentioned process

1.      The tenant's creator/admin user uploads certificate info as secrets and a container

2.      The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3.      User creates LB config with the container reference

4.      The LBaaS plugin, using the service user, will then access the container reference provided in the LB config and proceed to implement.

Ideally we would want to avoid step 2 in the process. Instead add a step 5 where the lbaas plugin's service user checks if the user configuring the LB has read access to the container reference provided.
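The step-5 check proposed here could look roughly like the sketch below. The ACL document shape follows Barbican's v1 ACL format; the user IDs are hypothetical, and the project-access handling is simplified (in Barbican, project access when that flag is true is scoped to the secret's own project).

```python
# Hypothetical ACL document as returned by GET {container_ref}/acl
# (Barbican v1 ACL format).
acl_doc = {
    "read": {
        "users": ["tenant-user-uuid", "other-user-uuid"],
        "project-access": False,
    }
}

def requester_can_read(acl, user_id):
    """Return True if the user configuring the LB may read the container:
    either project-wide access is enabled, or the user is explicitly listed."""
    read = acl.get("read", {})
    return read.get("project-access", True) or user_id in read.get("users", [])

print(requester_can_read(acl_doc, "tenant-user-uuid"))  # True
print(requester_can_read(acl_doc, "stranger-uuid"))     # False
```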

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/958449cb/attachment.html>

From robertc at robertcollins.net  Fri Sep 18 20:29:51 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Sat, 19 Sep 2015 08:29:51 +1200
Subject: [openstack-dev] [releases][requirements][keystone]something
 incompatible with our requirements
In-Reply-To: <1442606482-sup-8125@lrrr.local>
References: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>
 <1442606482-sup-8125@lrrr.local>
Message-ID: <CAJ3HoZ2obPjf=8_Xwaseozo0Oenyq=d5rzV+ZXUu77YfZBA_9A@mail.gmail.com>

On 19 September 2015 at 08:03, Doug Hellmann <doug at doughellmann.com> wrote:
> Excerpts from Robert Collins's message of 2015-09-19 07:32:18 +1200:
>> I know this is terrible timing with the release and all, but
>> constraints updates are failing. This is the first evidence - and it
>> doesn't look like a race to me:
>> http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902
>>
>> https://review.openstack.org/#/c/221157/ is the updated review to
>> bring it all together. I'm worried that the incompatibility is going
>> to impact distributors and/or may even be from one of our own recent
>> library releases.
>>
>
> It looks like this is a problem from os-client-config 1.7.0 and later.
> The constraints file has not been updating on new releases, so we're
> still constrained to 1.6.4 in jobs that use the constraints, which is
> why it isn't showing up elsewhere.
>
> To debug, I ran:
>
> git clone openstack/python-openstackclient
> tox -e py27 --notest
> .tox/py27/bin/openstack --debug
> (error)
> .tox/py27/bin/pip install os-client-config==1.7.1
> .tox/py27/bin/openstack --debug
> (error)
> .tox/py27/bin/pip install os-client-config==1.7.0
> .tox/py27/bin/openstack --debug
> (error)
> .tox/py27/bin/pip install os-client-config==1.6.4
> .tox/py27/bin/openstack --debug
> (works)

Monty seems to think that this is a case where we can just roll
forward - I'm going to guess it's a grouping thing:
A = os-client-config
B = python-openstackclient

A < x + B < y works
A < x + B >= y breaks
A >= x + B < y breaks
A >= x + B >= y works
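That matrix can be sketched numerically. The breaking release of A is 1.7.0 per Doug's bisection above; the breaking release of B below is a hypothetical stand-in, chosen only to illustrate that compatibility holds exactly when both packages sit on the same side of the schism.

```python
# x: breaking release of A (os-client-config), per the bisection above.
# y: hypothetical breaking release of B (python-openstackclient).
X = (1, 7, 0)
Y = (2, 0, 0)

def compatible(a, b):
    """A pair works only when both versions are below, or both at/above,
    their respective breaking releases."""
    return (a < X) == (b < Y)

print(compatible((1, 6, 4), (1, 9, 0)))  # A < x, B < y: works
print(compatible((1, 6, 4), (2, 0, 0)))  # A < x, B >= y: breaks
print(compatible((1, 7, 0), (1, 9, 0)))  # A >= x, B < y: breaks
print(compatible((1, 7, 1), (2, 0, 0)))  # A >= x, B >= y: works
```

A flattened constraints file pins one range per package, so it can express any single row of this table but not the pairing itself, which is the limitation discussed below.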

And API use of A and B is not itself incompatible at any point:
clients of A and clients of B don't need to change - though one might
argue that A and B are mutually incompatible once A at x is released, and
that really we should have been able to detect that before releases
were cut of them.

That said, PyPI can express that situation (in that A < x can depend
on B <y and A >= x can depend on B >=y) natively...

But we can't express that in OpenStack projects today due to limitations in
pip combined with the g-r syncing process - we take a flattened view
of everything and the algebra for specifiers is per package, not
composable/groupable like this would require.

We don't have a good canned answer here: while the transition is in
progress we're protected (in functional tests, and shortly in unit
tests), but anyone not using constraints will feel the pain.

Even once the transition is complete anyone doing partial upgrades can
be burnt (upgrade only B or only A and it breaks).

Worse, because of limitations in pip (specifically that reverse deps
are not checked when updating packages) there is little we can do to
stop people being broken in the field: we're contributing to pip to
get to the point that those things are possible, but currently its
future work.

So - I think the pragmatic thing here is to:
 - review the CI for A and B here to see if we can prevent the
incompatibilities occurring at all in future transitions like this
[expand-contract is a much safer pattern and should be usable]
 - document in the readme for python-openstackclient that this schism
exists so it's discoverable for the supported lifetime of Liberty

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From Vijay.Venkatachalam at citrix.com  Fri Sep 18 20:41:57 2015
From: Vijay.Venkatachalam at citrix.com (Vijay Venkatachalam)
Date: Fri, 18 Sep 2015 20:41:57 +0000
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible
 using non "admin" tenant?
In-Reply-To: <D221D443.21D1B%adam.harwell@rackspace.com>
References: <26B082831A2B1A4783604AB89B9B2C080E8A30D2@SINPEX01CL02.citrite.net>
 <D221D443.21D1B%adam.harwell@rackspace.com>
Message-ID: <26B082831A2B1A4783604AB89B9B2C080E8A3630@SINPEX01CL02.citrite.net>


Sure, Adam. The pleasure is mine :-).

Also, I don't see any wrongdoing by LBaaS (in fact it is the right thing to do) if the LBaaS plugin is specifying the tenant container's unique URL and also the correct tenant context in the keystone session to fetch the container.
Although, if barbican is fixed to ignore the tenant value in the keystone session and only authenticate the user for verification, that is a bonus and the current LBaaS code will work.

Long term, we need to eliminate the step of assigning access by the tenant's admin and automate it.

I had initiated a thread with Barbican three days earlier on the same issue. Here is the link.
https://www.mail-archive.com/openstack-dev at lists.openstack.org/msg63476.html

Thanks,
Vijay V.


From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 19 September 2015 01:17
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

That sounds like the Barbican ACLs are not working properly. The whole point of using Barbican ACLs is that the keystone session marked for tenant "admin" should be able to get access to ANY tenant's container/secrets if the ACLs are set. I am still not convinced this is an issue on the LBaaS side. Unfortunately, I don't have a lot of time to test this right now as we're up against the clock for the gate, so your help in debugging and fixing this issue is greatly appreciated! I just want to make sure the expected workflow is fully understood.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 18, 2015 at 2:02 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?


>> This honestly hasn't even been *fully* tested yet, but it SHOULD work.
It did not work. Please read on.
>> User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
I did perform the above step to give read access for the container and secrets to "admin", but it did not work.

Root Cause
==========
The certmanager in lbaas which connects to barbican uses the keystone session gathered from
neutron_lbaas.common.keystone.get_session()
Since the keystone session is marked for tenant "admin", lbaas is not able to get the tenant's container/certificate.

I have filed a bug for the same.

https://bugs.launchpad.net/neutron/+bug/1497410

This is an important fix required since tenants won't be able to use SSL Offload otherwise. Will try to upload a fix for this next week.


Thanks,
Vijay V.

From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 16 September 2015 00:32
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

There is not really good documentation for this yet...
When I say Neutron-LBaaS tenant, I may be using the wrong word; I mean the user that is configured as the service-account in neutron.conf.
The user will hit the ACL API themselves to set up the ACLs on their own secrets/containers, we won't do it for them. So, workflow is like:


  *   User creates Secrets in Barbican.
  *   User creates CertificateContainer in Barbican.
  *   User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user (right now using whatever user-id we publish in our docs) to read their data.
  *   User creates a LoadBalancer in Neutron-LBaaS.
  *   LBaaS hits Barbican using its standard configured service-account to retrieve the Container/Secrets from the user's Barbican account.
This honestly hasn't even been *fully* tested yet, but it SHOULD work. The question is whether right now in devstack the admin user is allowed to read all user secrets just because it is the admin user (which I think might be the case), in which case we won't actually know if ACLs are working as intended (but I think we assume that Barbican has tested that feature and we can just rely on it working).
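The ACL step in that workflow can be sketched against Barbican's v1 ACL API. This is only an illustration under stated assumptions: the user id is a placeholder, and the endpoint paths in the comments reflect the Barbican quickstart docs rather than anything tested here.

```python
# Sketch: grant the LBaaS service-account read access to a secret and
# container via Barbican's v1 ACL API. The user id below is hypothetical.

def build_acl_payload(user_ids):
    """Build the JSON body for PUT /v1/secrets/{uuid}/acl (and the
    equivalent containers endpoint): a read ACL listing allowed users."""
    return {"read": {"users": list(user_ids), "project-access": True}}

LBAAS_USER_ID = "1234567890abcdef"  # placeholder for the documented user id

payload = build_acl_payload([LBAAS_USER_ID])
# A client would then send, with the tenant's own token:
#   PUT {barbican}/v1/secrets/{secret_uuid}/acl       json=payload
#   PUT {barbican}/v1/containers/{container_uuid}/acl json=payload
print(payload)
```

After this, LBaaS's configured service-account should be able to GET the container and its secrets, which is the step the thread reports as failing.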

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 14, 2015 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Is there documentation which records this step by step?

What is Neutron-LBaaS tenant?

Is it the tenant who is configuring the listener? *OR* is it some tenant created for the lbaas plugin that holds the secrets of all tenants configuring lbaas?

>>You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant.
I checked the ACL docs
http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

The ACL API is to allow "users" (not "tenants") access to secrets/containers. What is the API or CLI that the admin will use to grant the Neutron-LBaaS tenant access to the tenant's secret+container?


From: Adam Harwell [mailto:adam.harwell at RACKSPACE.COM]
Sent: 15 September 2015 03:00
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

You need to set up ACLs on the Barbican side for that container, to make it readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be documented, but we are looking into making an API call that would expose the admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system to add that ID as a readable user of the container and all of the secrets. Then Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is granted access to the user's container.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

Hi,
              Has anyone tried configuring SSL Offload as a tenant?
              During listener creation there is an error thrown saying "could not locate/find container".
              The lbaas plugin is not able to fetch the tenant's certificate.

              From the code it looks like the lbaas plugin is trying to connect to barbican with the keystone details provided in neutron.conf,
              which by default are username = "admin" and tenant_name = "admin".
              This means the lbaas plugin is looking for the tenant's certificate in the "admin" tenant, which it will never be able to find.

              What is the procedure for the lbaas plugin to get hold of the tenant's certificate?

              Assuming the "admin" user has access to all tenants' certificates, should the lbaas plugin connect to barbican with username = "admin" and tenant_name = the listener's tenant_name?

Is this the way forward? *OR* am I missing something?


Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/a34bc497/attachment-0001.html>

From eharney at redhat.com  Fri Sep 18 20:46:14 2015
From: eharney at redhat.com (Eric Harney)
Date: Fri, 18 Sep 2015 16:46:14 -0400
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <CAPWkaSWVO45Ha1oPPPjqy=rTW56-bS7KT7Rfz_hu68Xv2HPahw@mail.gmail.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
 <55FC5D72.8040600@redhat.com>
 <CAPWkaSWVO45Ha1oPPPjqy=rTW56-bS7KT7Rfz_hu68Xv2HPahw@mail.gmail.com>
Message-ID: <55FC7816.4080709@redhat.com>

On 09/18/2015 03:33 PM, John Griffith wrote:
> On Fri, Sep 18, 2015 at 12:52 PM, Eric Harney <eharney at redhat.com> wrote:
> 
>> On 09/18/2015 01:01 PM, John Griffith wrote:
>>> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <
>> chris.friesen at windriver.com>
>>> wrote:
>>>
>>>> On 09/18/2015 06:57 AM, Eric Harney wrote:
>>>>
>>>>> On 09/17/2015 06:06 PM, John Griffith wrote:
>>>>>
>>>>
>>>> Having the "global conf" settings intermixed with the backend sections
>>>>>> caused a number of issues when we first started working on this.
>> That's
>>>>>> part of why we require the "self.configuration" usage all over in the
>>>>>> drivers.  Each driver instantiation is its own independent entity.
>>>>>>
>>>>>>
>>>>> Yes, each driver instantiation is independent, but that would still be
>>>>> the case if these settings inherited values set in [DEFAULT] when they
>>>>> aren't set in the backend section.
>>>>>
>>>>
>>>> Agreed.  If I explicitly set something in the [DEFAULT] section, that
>>>> should carry through and apply to all the backends unless overridden in
>> the
>>>> backend-specific section.
>>>>
>>>> Chris
>>>>
>>>>
>>> Meh I don't know about the "have to modify the code", the config file
>> works
>>> you just need to add that line to your driver section and configure the
>>> backend correctly.
>>>
>>
>> My point is that there doesn't seem to be a justification for "you just
>> need to add that line to your driver section", which seems to counter
>> what most people's expectation would be.
>>
> There certainly is, I don't want to force the same options against all
> backends.  Perfect example is the issues with some distros in the past that
> DID use global settings and stomp over any driver; which in turn broke
> those that weren't compatible with that conf setting even though in the
> driver section they overrode it.
> 
> 
>>
>> People can and do fail to do that, because they assume that [DEFAULT]
>> settings are treated as defaults.
>>
> 
> Bad assumption, we should probably document this until we fix it (making a
> very large assumption that we'll ever agree on how to fix it).
> 
>>
>> To help people who make that assumption, yes, you have to modify the
>> code, because the code supplies a default value that you cannot supply
>> in the same way via config files.
>>
> 
> Or you could just fill out the config file properly:
>     [lvm-1]
>     iscsi_helper = lioadm
> 
> I didn't have to modify any code.
> 
> 

In the use case I was describing, I'm shipping a package, as a
distribution, with a default configuration file. The deployer (not me)
is the only one that knows about config sections that they want for
multi-backend. I don't think it's fair to require them to fill out
things like iscsi_helper, because there is only one correct value for
iscsi_helper on the platform I support, and defaulting to a different
one is not useful.

The fact that we don't inherit [DEFAULT] settings means that it is not
possible for me to ship a package with the correct defaults without
changing the hard-coded default value, in the code, to customize it for
my platform. I want to set iscsi_helper = lioadm in a configuration file
and have that be the default for any enabled_backend.
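The inheritance being asked for here is, incidentally, how plain Python `configparser` already behaves: a value set in `[DEFAULT]` shows through in every section unless that section overrides it. A minimal sketch (the section and option names mirror the cinder.conf example from earlier in the thread, purely for illustration):

```python
import configparser

conf_text = """
[DEFAULT]
iscsi_helper = lioadm

[lvm-1]
volume_group = vg1

[lvm-2]
volume_group = vg2
iscsi_helper = tgtadm
"""

cfg = configparser.ConfigParser()
cfg.read_string(conf_text)

# [lvm-1] inherits the [DEFAULT] value; [lvm-2] overrides it.
print(cfg["lvm-1"]["iscsi_helper"])  # lioadm
print(cfg["lvm-2"]["iscsi_helper"])  # tgtadm
```

This is the behavior deployers tend to expect from `[DEFAULT]`, which is what makes the per-backend-only semantics surprising.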


>>
>>> Regardless, I see your point (but I still certainly don't agree that it's
>>> "blatantly wrong").
>>>
>>
>> You can substitute "very confusing" for "blatantly wrong" but I think
>> those are about the same thing when talking about usability issues with
>> how to configure a service.
>>
> 
> Fair enough.  Call it whatever you like.
> 
> 
>>
>> Look at options like:
>>  - strict_ssh_host_key_policy
>>  - sio_verify_server_certificate
>>  - driver_ssl_cert_verify
> 
> 
>> All of these default to False, and if turned on, enable protections
>> against MITM attacks.  All of them also fail to turn on for the relevant
>> drivers if set in [DEFAULT].  These should, if set in DEFAULT when using
>> multi-backend, issue a warning so the admin knows that they are not
>> getting the intended security guarantees.  Instead, nothing happens and
>> Cinder and the storage works.  Confusion is dangerous.
>>
> 
> Yeah, so is crappy documentation and lack of understanding.
> 
> 

I can't make my customers read documentation and test them for
understanding.  I can make software that's more robust and less prone to
misuse.  Warning people with "hey, you're using multi-backend and have
set this security-related option in a section where it will never have
an effect in your deployment" is one way to do this that we could do today.

>>
>>> Bottom line "yes", ideally in the case of drivers we would check
>>> global/default setting, and then override it if something was provided in
>>> the driver specific setting, or if the driver itself set a different
>>> default.  That seems like the right way to be doing it anyway.  I've
>> looked
>>> at that a bit this morning, the issue is that currently we don't even
>> pass
>>> any of those higher level conf settings in to the drivers init methods
>>> anywhere.  Need to figure out how to change that, then it should be a
>>> relatively simple fix.
>>>
>>
>> What I was getting at earlier though, is that I'm not really sure there
>> is a simple fix.  It may be simple to change the behavior to more
>> predictable behavior, but doing that in a way that doesn't introduce
>> upgrade problems for deployments relying on the current defaults seems
>> difficult to me.
>>
> Agreed, but honestly I'd like to at least try.  Especially when people use
> terms like "blatantly wrong" and "dangerous", kinda prompts one to think
> that perhaps it should be looked at.  If nothing else, we shouldn't have
> driver settings in the DEFAULT section, we should just create a driver
> section, but we still need to figure out how to deal with things outside of
> the "general" section vs the backend stanza.
> 

Yeah, it should be looked at, that's why I'm talking about all of this...

Moving settings to a drivers section sounds like a good start but it
doesn't fix the issue I'm talking about without also changing how the
inheritance works.

> Also, I'd argue that the behavior you're arguing for is MORE dangerous and
> troublesome.  The LIO being in the global CONF was a perfect example where
> it broke third party devices on a specific distro because it assumed that
> EVERYTHING on the system was using the lio methods and in that case you
> really couldn't do anything but modify code.
> 

I don't really know how to parse this, I was talking about dangerous
software behavior, I think you're talking about dangerous code changes?

When LIO originally landed, all we had was global conf, so I'm not sure
what you're getting at.  If the conf model was wrong for LIO, it was/is
wrong for dozens of other driver-specific options too.

If you're referring to https://bugs.launchpad.net/cinder/+bug/1400804 ,
all I can really say about that is, yeah, that kind of thing was more
likely to happen back then..?  That seems orthogonal other than the fact
that it's config-related code that I worked on once.

I don't really follow what I'm supposed to conclude about my current
proposal in that context...

> I've pinged you a number of times on IRC, maybe we can chat there a bit
> real-time and see if we can work together on a solution?
> 
> Thanks,
> John?
> 



From doug at doughellmann.com  Fri Sep 18 20:51:14 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 18 Sep 2015 16:51:14 -0400
Subject: [openstack-dev] [oslo.messaging][zmq]
In-Reply-To: <55F989C5.3040200@mirantis.com>
References: <55F989C5.3040200@mirantis.com>
Message-ID: <1442609404-sup-3399@lrrr.local>

Excerpts from ozamiatin's message of 2015-09-16 18:24:53 +0300:
> Hi All,
> 
> I'm excited to report that today we have merged [1] new zmq driver into 
> oslo.messaging master branch.
> The driver is not completely done yet, so we are going to continue 
> developing it on the master branch now.
> 
> What we've reached for now is passing functional tests gate (we are 
> going to turn it on in the master [2]).
> And we also have devstack up and running (almost 80% tempest tests 
> passed when I've tested it since last commit into feature/zmq). I need 
> to check all this after merge, to ensure that I didn't break something 
> resolving conflicts.
> 
> I'm going to put all ongoing tasks on launchpad and provide some 
> documentation soon, so anyone is welcome to develop new zmq driver.
> I also would like to thank Viktor Serhieiev and Doug Royal who already 
> contributed to feature/zmq.
> 
> [1] - https://review.openstack.org/#/c/223877
> [2] - https://review.openstack.org/#/c/224035
> 
> Thanks,
> Oleksii
> 

Congratulations on merging those changes into master! I know it has been
a long road, but I'm glad to see zmq reaching a usable state. Many
thanks to the entire team contributing to the work!

Doug


From davanum at gmail.com  Fri Sep 18 21:02:35 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Fri, 18 Sep 2015 17:02:35 -0400
Subject: [openstack-dev] [oslo.messaging][zmq]
In-Reply-To: <1442609404-sup-3399@lrrr.local>
References: <55F989C5.3040200@mirantis.com>
	<1442609404-sup-3399@lrrr.local>
Message-ID: <CANw6fcH_Ru0FXqhf-s3GFQovfjDp=qJ0vk2cEAw4joj9bq5t0g@mail.gmail.com>

Hear hear! nice work Oleksii and team!

-- Dims

On Fri, Sep 18, 2015 at 4:51 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> Excerpts from ozamiatin's message of 2015-09-16 18:24:53 +0300:
> > Hi All,
> >
> > I'm excited to report that today we have merged [1] new zmq driver into
> > oslo.messaging master branch.
> > The driver is not completely done yet, so we are going to continue
> > developing it on the master branch now.
> >
> > What we've reached for now is passing functional tests gate (we are
> > going to turn it on in the master [2]).
> > And we also have devstack up and running (almost 80% tempest tests
> > passed when I've tested it since last commit into feature/zmq). I need
> > to check all this after merge, to ensure that I didn't break something
> > resolving conflicts.
> >
> > I'm going to put all ongoing tasks on launchpad and provide some
> > documentation soon, so anyone is welcome to develop new zmq driver.
> > I also would like to thank Viktor Serhieiev and Doug Royal who already
> > contributed to feature/zmq.
> >
> > [1] - https://review.openstack.org/#/c/223877
> > [2] - https://review.openstack.org/#/c/224035
> >
> > Thanks,
> > Oleksii
> >
>
> Congratulations on merging those changes into master! I know it has been
> a long road, but I'm glad to see zmq reaching a usable state. Many
> thanks to the entire team contributing to the work!
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/ff89edc0/attachment.html>

From mordred at inaugust.com  Fri Sep 18 21:17:17 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 18 Sep 2015 17:17:17 -0400
Subject: [openstack-dev] [releases][requirements][keystone]something
 incompatible with our requirements
In-Reply-To: <CAJ3HoZ2obPjf=8_Xwaseozo0Oenyq=d5rzV+ZXUu77YfZBA_9A@mail.gmail.com>
References: <CAJ3HoZ06G2z5Eaqn34dWbdqWr4AdcCz+S7+2SjBb9o3E=0K_hQ@mail.gmail.com>	<1442606482-sup-8125@lrrr.local>
 <CAJ3HoZ2obPjf=8_Xwaseozo0Oenyq=d5rzV+ZXUu77YfZBA_9A@mail.gmail.com>
Message-ID: <55FC7F5D.6010805@inaugust.com>

On 09/18/2015 04:29 PM, Robert Collins wrote:
> On 19 September 2015 at 08:03, Doug Hellmann <doug at doughellmann.com> wrote:
>> Excerpts from Robert Collins's message of 2015-09-19 07:32:18 +1200:
>>> I know this is terrible timing with the release and all, but
>>> constraints updates are failing. This is the first evidence - and it
>>> doesn't look like a race to me:
>>> http://logs.openstack.org/57/221157/10/check/gate-tempest-dsvm-full/18eb440/logs/devstacklog.txt.gz#_2015-09-18_13_51_46_902
>>>
>>> https://review.openstack.org/#/c/221157/ is the updated review to
>>> bring it all together. I'm worried that the incompatibility is going
>>> to impact distributors and/or may even be from one of our own recent
>>> library releases.
>>>
>>
>> It looks like this is a problem from os-client-config 1.7.0 and later.
>> The constraints file has not been updating on new releases, so we're
>> still constrained to 1.6.4 in jobs that use the constraints, which is
>> why it isn't showing up elsewhere.
>>
>> To debug, I ran:
>>
>> git clone openstack/python-openstackclient
>> tox -e py27 --notest
>> .tox/py27/bin/openstack --debug
>> (error)
>> .tox/py27/bin/pip install os-client-config==1.7.1
>> .tox/py27/bin/openstack --debug
>> (error)
>> .tox/py27/bin/pip install os-client-config==1.7.0
>> .tox/py27/bin/openstack --debug
>> (error)
>> .tox/py27/bin/pip install os-client-config==1.6.4
>> .tox/py27/bin/openstack --debug
>> (works)
>
> Monty seems to think that this is a case where we can just roll
> forward - I'm going to guess it's a grouping thing:
> A = os-client-config
> B = python-openstackclient
>
> A < x + B < y works
> A < x + B >= y breaks
> A >= x + B < y breaks
> A >= x + B >= y works

Yeah. Actually, I believe we've found a case where the system is doing 
its job - it just took us a sec to see that.

I've got a new patch up to os-client-config which fixes the problem 
without requiring modifications to openstackclient- which is how this 
should work. It's an ugly patch - but meh, life is ugly.
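The compatibility matrix Robert laid out above can be modeled as a tiny predicate: A (os-client-config) and B (python-openstackclient) work together only when both sit on the same side of their respective thresholds x and y. The concrete version tuples below are illustrative, not the real release numbers:

```python
def compatible(a_version, b_version, x=(1, 7, 0), y=(1, 0, 0)):
    """True iff both libraries are below, or both at/above, their
    thresholds -- the 'works / breaks' pattern from the matrix."""
    return (a_version < x) == (b_version < y)

# The four cases from the matrix:
print(compatible((1, 6, 4), (0, 9, 0)))  # A < x,  B < y  -> True (works)
print(compatible((1, 6, 4), (1, 2, 0)))  # A < x,  B >= y -> False (breaks)
print(compatible((1, 7, 0), (0, 9, 0)))  # A >= x, B < y  -> False (breaks)
print(compatible((1, 7, 0), (1, 2, 0)))  # A >= x, B >= y -> True (works)
```

This is exactly the pairwise constraint that a flattened global-requirements view cannot express, since it has no way to group "A >= x" with "B >= y".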

> And API use of A and B is not itself incompatible at any point:
> clients of A and clients of B don't need to change - though one might
> argue that A and B are mutually incompatible once A at x released and
> that really we should have been able to detect that before releases
> were cut of them.
>
> That said, PyPI can express that situation (in that A < x can depend
> on B <y and A >= x can depend on B >=y) natively...
>
> But we can't express that in OpenStack projects today due to limitations in
> pip combined with the g-r syncing process - we take a flattened view
> of everything and the algebra for specifiers is per package, not
> composable/groupable like this would require.
>
> We don't have a good canned answer here: while the transition is in
> progress we're protected (in functional tests, and shortly in unit
> tests), but anyone not using constraints will feel the pain.
>
> Even once the transition is complete anyone doing partial upgrades can
> be burnt (upgrade only B or only A and it breaks).
>
> Worse, because of limitations in pip (specifically that reverse deps
> are not checked when updating packages) there is little we can do to
> stop people being broken in the field: we're contributing to pip to
> get to the point that those things are possible, but currently its
> future work.
>
> So - I think the pragmatic thing here is to:
>   - review the CI for A and B here to see if we can prevent the
> incompatibilities occurring at all in future transitions like this
> [expand-contract is a much safer pattern and should be usable]
>   - document in the readme for python-openstackclient that this schism
> exists so it's discoverable for the supported lifetime of liberty
>
> -Rob
>



From john.griffith8 at gmail.com  Fri Sep 18 21:30:50 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 18 Sep 2015 15:30:50 -0600
Subject: [openstack-dev] [cinder] LVM snapshot performance issue -- why
 isn't thin provisioning the default?
In-Reply-To: <55FC7816.4080709@redhat.com>
References: <55F84EA7.9080902@windriver.com> <55F857A2.3020603@redhat.com>
 <D21DF6C7.589BB%xing.yang@emc.com> <55F9A4C7.8060901@redhat.com>
 <D21F2025.58CD7%xing.yang@emc.com>
 <CAOyZ2aGZn6a-eRUv_TmB2r4s2FD719NONvBihzS4hNEKhNC=Nw@mail.gmail.com>
 <55F9D472.5000505@redhat.com>
 <CAOyZ2aFFnsrY6AaMe7bYrff5iGfQHcM7ZZtP=_yXYxY-75aqSw@mail.gmail.com>
 <55FAF8DF.2070901@redhat.com>
 <CAPWkaSXPYwGuc0HfM_3etTqnApzj=mP1AfLsAWJUdHXJfEP3og@mail.gmail.com>
 <55FC0A26.4080806@redhat.com> <55FC2877.4040903@windriver.com>
 <CAPWkaSWqbY17WfZKzg_RfPEctK6DCFT2Hc0foMcbHAb1aptmbw@mail.gmail.com>
 <55FC5D72.8040600@redhat.com>
 <CAPWkaSWVO45Ha1oPPPjqy=rTW56-bS7KT7Rfz_hu68Xv2HPahw@mail.gmail.com>
 <55FC7816.4080709@redhat.com>
Message-ID: <CAPWkaSW8fAZnrpahOfkCqBRvVAGU62uLyBfwVNsWbpmTFPNUbA@mail.gmail.com>

On Fri, Sep 18, 2015 at 2:46 PM, Eric Harney <eharney at redhat.com> wrote:

> On 09/18/2015 03:33 PM, John Griffith wrote:
> > On Fri, Sep 18, 2015 at 12:52 PM, Eric Harney <eharney at redhat.com>
> wrote:
> >
> >> On 09/18/2015 01:01 PM, John Griffith wrote:
> >>> On Fri, Sep 18, 2015 at 9:06 AM, Chris Friesen <
> >> chris.friesen at windriver.com>
> >>> wrote:
> >>>
> >>>> On 09/18/2015 06:57 AM, Eric Harney wrote:
> >>>>
> >>>>> On 09/17/2015 06:06 PM, John Griffith wrote:
> >>>>>
> >>>>
> >>>> Having the "global conf" settings intermixed with the backend sections
> >>>>>> caused a number of issues when we first started working on this.
> >> That's
> >>>>>> part of why we require the "self.configuration" usage all over in
> the
> >>>>>> drivers.  Each driver instantiation is its own independent entity.
> >>>>>>
> >>>>>>
> >>>>> Yes, each driver instantiation is independent, but that would still
> be
> >>>>> the case if these settings inherited values set in [DEFAULT] when
> they
> >>>>> aren't set in the backend section.
> >>>>>
> >>>>
> >>>> Agreed.  If I explicitly set something in the [DEFAULT] section, that
> >>>> should carry through and apply to all the backends unless overridden
> in
> >> the
> >>>> backend-specific section.
> >>>>
> >>>> Chris
> >>>>
> >>>>
> >>> Meh I don't know about the "have to modify the code", the config file
> >> works
> >>> you just need to add that line to your driver section and configure the
> >>> backend correctly.
> >>>
> >>
> >> My point is that there doesn't seem to be a justification for "you just
> >> need to add that line to your driver section", which seems to counter
> >> what most people's expectation would be.
> >>
> > There certainly is, I don't want to force the same options against all
> > backends.  Perfect example is the issues with some distros in the past
> that
> > DID use global settings and stomp over any driver; which in turn broke
> > those that weren't compatible with that conf setting even though in the
> > driver section they overrode it.
> >
> >
> >>
> >> People can and do fail to do that, because they assume that [DEFAULT]
> >> settings are treated as defaults.
> >>
> >
> > Bad assumption, we should probably document this until we fix it
> (making a
> > very large assumption that we'll ever agree on how to fix it).
> >
> >>
> >> To help people who make that assumption, yes, you have to modify the
> >> code, because the code supplies a default value that you cannot supply
> >> in the same way via config files.
> >>
> >
> > Or you could just fill out the config file properly:
> >     [lvm-1]
> >     iscsi_helper = lioadm
> >
> > I didn't have to modify any code.
> >
> >
>
> In the use case I was describing, I'm shipping a package, as a
> distribution, with a default configuration file. The deployer (not me)
> is the only one that knows about config sections that they want for
> multi-backend. I don't think it's fair to require them to fill out
> things like iscsi_helper, because there is only one correct value for
> iscsi_helper on the platform I support, and defaulting to a different
> one is not useful.
>
Ahh, ok; so back to one of the problems with OpenStack IMO: too many
options/choices.  Regardless, yes, I can see where you're coming from
now.  In your case there is only one supported/correct option here, so that
creates a problem.


>
> The fact that we don't inherit [DEFAULT] settings means that it is not
> possible for me to ship a package with the correct defaults without
> changing the hard-coded default value, in the code, to customize it for
> my platform. I want to set iscsi_helper = lioadm in a configuration file
> and have that be the default for any enabled_backend.
>
Yes, now I see the case you're referring to, thanks!  This is why I tried
to grab you on IRC to make sure I actually followed what your particular
case was.



>
> >>
> >>> Regardless, I see your point (but I still certainly don't agree that
> it's
> >>> "blatantly wrong").
> >>>
> >>
> >> You can substitute "very confusing" for "blatantly wrong" but I think
> >> those are about the same thing when talking about usability issues with
> >> how to configure a service.
> >>
> >
> > Fair enough.  Call it whatever you like.
> >
> >
> >>
> >> Look at options like:
> >>  - strict_ssh_host_key_policy
> >>  - sio_verify_server_certificate
> >>  - driver_ssl_cert_verify
> >
> >
> >> All of these default to False, and if turned on, enable protections
> >> against MITM attacks.  All of them also fail to turn on for the relevant
> >> drivers if set in [DEFAULT].  These should, if set in DEFAULT when using
> >> multi-backend, issue a warning so the admin knows that they are not
> >> getting the intended security guarantees.  Instead, nothing happens and
> >> Cinder and the storage works.  Confusion is dangerous.
> >>
> >
> > Yeah, so is crappy documentation and lack of understanding.
> >
> >
>
> I can't make my customers read documentation and test them for
> understanding.  I can make software that's more robust and less prone to
> misuse.  Warning people with "hey, you're using multi-backend and have
> set this security-related option in a section where it will never have
> an effect in your deployment" is one way to do this that we could do today.
>

I get it now; thanks.



> >>
> >>> Bottom line "yes", ideally in the case of drivers we would check
> >>> global/default setting, and then override it if something was provided
> in
> >>> the driver specific setting, or if the driver itself set a different
> >>> default.  That seems like the right way to be doing it anyway.  I've
> >> looked
> >>> at that a bit this morning, the issue is that currently we don't even
> >> pass
> >>> any of those higher level conf settings in to the drivers init methods
> >>> anywhere.  Need to figure out how to change that, then it should be a
> >>> relatively simple fix.
> >>>
> >>
> >> What I was getting at earlier though, is that I'm not really sure there
> >> is a simple fix.  It may be simple to change the behavior to more
> >> predictable behavior, but doing that in a way that doesn't introduce
> >> upgrade problems for deployments relying on the current defaults seems
> >> difficult to me.
> >>
> > Agreed, but honestly I'd like to at least try.  Especially when people
> use
> > terms like "blatantly wrong" and "dangerous", kinda prompts one to think
> >
> > that perhaps it should be looked at.  If nothing else, we shouldn't have
> > driver settings in the DEFAULT section, we should just create a driver
> > section, but we still need to figure out how to deal with things outside
> of
> > the "general" section vs the backend stanza.
> >
>
> Yeah, it should be looked at, that's why I'm talking about all of this...
>
> Moving settings to a drivers section sounds like a good start but it
> doesn't fix the issue I'm talking about without also changing how the
> inheritance works.
>
Right, the sort of thing you're talking about indeed needs more than that.
As you may have already mentioned, there's a more significant change needed
to how we parse / use config altogether.



> > Also, I'd argue that the behavior you're arguing for is MORE dangerous and
> > troublesome.  The LIO being in the global CONF was a perfect example
> where
> > it broke third party devices on a specific distro because it assumed that
> > EVERYTHING on the system was using the lio methods and in that case you
> > really couldn't do anything but modify code.
> >
>
> I don't really know how to parse this, I was talking about dangerous
> software behavior, I think you're talking about dangerous code changes?
>
> When LIO originally landed, all we had was global conf, so I'm not sure
> what you're getting at.  If the conf model was wrong for LIO, it was/is
> wrong for dozens of other driver-specific options too.
>
> If you're referring to https://bugs.launchpad.net/cinder/+bug/1400804 ,
> all I can really say about that is, yeah, that kind of thing was more
> likely to happen back then..?  That seems orthogonal other than the fact
> that it's config-related code that I worked on once.
>
Yes, that's one example of what I was considering in terms of the issues
with just applying globals.  Now that I understand your points a bit more
clearly, I understand this isn't quite the same thing.



> I don't really follow what I'm supposed to conclude about my current
> proposal in that context...
>
Nothing at this point :)


>
> > I've pinged you a number of times on IRC, maybe we can chat there a bit
> > real-time and see if we can work together on a solution?
> >
> > Thanks,
> > John?
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Sounds like a great topic for the Summit?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/7034420c/attachment.html>

From rakhmerov at mirantis.com  Fri Sep 18 21:35:49 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Sat, 19 Sep 2015 00:35:49 +0300
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <55F87083.6080408@gmail.com>
References: <1442234537-sup-4636@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>
 <55F87083.6080408@gmail.com>
Message-ID: <BF5D2E8A-A623-4CF2-9555-A054984D673C@mirantis.com>

Doug,

python-mistralclient-1.1.0 (also on PyPI) is the final release for Liberty. Here's the patch updating global-requirements.txt: https://review.openstack.org/#/c/225330/ (upper-constraints.txt should soon be updated automatically, in my understanding)

I really apologize because I should probably have followed the ML better and attended the corresponding meetings in order to know all of this release management stuff. But I still have a number of questions on release management:

- So far I have been doing release management for Mistral myself (~2 years), and for the last year I've been trying to stay aligned with the OpenStack schedule. In May 2015 Mistral was accepted into the Big Tent, so does that mean I'm no longer responsible for doing that? Or can I still do it on my own? Even the final Mistral client for Liberty I released myself (didn't create a stable branch yet, though); maybe I shouldn't have. Clarifications would be helpful.
- Same question about stable branches.
- Does this all apply to all Big Tent projects?
- What exactly is upper-constraints.txt for? I'm still not sure why global-requirements.txt is not enough.
- What's the best source of info about release management? Is it complete?

Sorry for asking this probably basic stuff.

Let me know if some of what I've done is wrong. It's late at night here, but I'll check the ML first thing in the morning just in case.

Thanks

Renat Akhmerov
@ Mirantis Inc.



> On 15 Sep 2015, at 22:24, Nikhil Komawar <nik.komawar at gmail.com> wrote:
> 
> Hi Doug,
> 
> And it would be good to lock in on glance_store (if it applies to this
> email) 0.9.1 too. (that's on pypi)
> 
> On 9/14/15 9:26 AM, Kuvaja, Erno wrote:
>> Hi Doug,
>> 
>> Please find python-glanceclient 1.0.1 release request https://review.openstack.org/#/c/222716/
>> 
>> - Erno
>> 
>>> -----Original Message-----
>>> From: Doug Hellmann [mailto:doug at doughellmann.com]
>>> Sent: Monday, September 14, 2015 1:46 PM
>>> To: openstack-dev
>>> Subject: [openstack-dev] [all][ptl][release] final liberty cycle client library
>>> releases needed
>>> 
>>> PTLs and release liaisons,
>>> 
>>> In order to keep the rest of our schedule for the end-of-cycle release tasks,
>>> we need to have final releases for all client libraries in the next day or two.
>>> 
>>> If you have not already submitted your final release request for this cycle,
>>> please do that as soon as possible.
>>> 
>>> If you *have* already submitted your final release request for this cycle,
>>> please reply to this email and let me know that you have so I can create your
>>> stable/liberty branch.
>>> 
>>> Thanks!
>>> Doug
>>> 
>>> __________________________________________________________
>>> ________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-
>>> request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> -- 
> 
> Thanks,
> Nikhil
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/5fff489c/attachment.html>

From rakhmerov at mirantis.com  Fri Sep 18 21:37:03 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Sat, 19 Sep 2015 00:37:03 +0300
Subject: [openstack-dev] [mistral] Define better terms for WAITING and
	DELAYED states
In-Reply-To: <55FC3229.5020407@alcatel-lucent.com>
References: <156DBF58-7BBB-49D4-A229-DD1B96896532@mirantis.com>
 <55FC3229.5020407@alcatel-lucent.com>
Message-ID: <5CD93D0F-9874-4640-BE2C-156AC9B80670@mirantis.com>


> On 18 Sep 2015, at 18:47, HADDLETON, Robert W (Bob) <bob.haddleton at alcatel-lucent.com> wrote:
> 
> Hi Renat:
> 
> [TL;DR] - maybe use multiple words in the state name to avoid confusion
> 
> I agree that there is a lot of overlap - WAITING, DELAYED and POSTPONED are all very similar.  The context is important when trying to decipher what the words mean.
> 
> I would normally interpret WAITING as having a known condition:
> 
> * I'm WAITING for the baseball game to begin
> 
> DELAYED implies WAITING but adds the context that something was supposed to have started already, or has already started, is now blocked by something out of your control, and you may or may not know when it will start again:
> 
> * The (start of the) ballgame has been DELAYED (by rain) (until 2:00). (So I'm still WAITING for it to begin)
> 
> POSTPONED implies DELAYED, but adds that something was "scheduled" to start at a certain time and has been re-scheduled for a later time.  It may or may not have started already, and the later time may or may not be known:
> 
> * The ballgame has been POSTPONED (because of rain) (until tomorrow) (so the game has been DELAYED and I'm still WAITING for it to start)
> 
> So using any of the three words on their own without context or additional information will likely be confusing, or at least subject to different interpretations.
> 
> I would be reluctant to rename DELAYED to POSTPONED, because it raises more questions (until when?) than DELAYED without providing more answers.

Yeah, this makes perfect sense to me. Really good analysis. I agree that context is really important to eliminate ambiguity.

> I think what it comes down to is the need to provide more information in the state name than is possible with one English word:
> 
> WAITING_FOR_PRECONDITIONS
> DELAYED_BY_RETRY
> 
> These provide more specific context to the state but the state transition table gets to be unmanageable when there is a state for everything.
> 
> If more waiting/delayed states are added in the future, it might make sense to create them as sub-states of RUNNING, to keep the transitions manageable.


This all makes me think that we probably just need to clarify one of these names so they look like:

- RUNNING_DELAYED - a substate of RUNNING with exactly this meaning: it's generally running but delayed till some later time.
- WAITING - not a substate of RUNNING, and hence it means the task has not started yet.

And at the same time these names still remain not too verbose.
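To make the proposal concrete, the distinction could be sketched as a tiny state model (a hypothetical illustration only: the enum and helper below are not Mistral's actual code, and the names are assumptions based on this thread):

```python
from enum import Enum

class TaskState(Enum):
    """Hypothetical Mistral-like task states (illustration only)."""
    WAITING = "WAITING"                  # not started: preconditions not yet met
    RUNNING = "RUNNING"                  # actively executing
    RUNNING_DELAYED = "RUNNING_DELAYED"  # substate of RUNNING: paused until later
    SUCCESS = "SUCCESS"
    ERROR = "ERROR"

def is_running(state: TaskState) -> bool:
    # RUNNING_DELAYED counts as running for coarse-grained checks,
    # which is exactly what makes it a "substate" of RUNNING.
    return state in (TaskState.RUNNING, TaskState.RUNNING_DELAYED)

assert not is_running(TaskState.WAITING)        # task hasn't started yet
assert is_running(TaskState.RUNNING_DELAYED)    # started, merely delayed
```

The point of the sketch is that a consumer asking "has this task started?" gets the right answer from both names without extra context.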

What do you think?

P.S. Thank you very much Robert!

Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/fd09163e/attachment.html>

From arun.kant at hpe.com  Fri Sep 18 21:35:26 2015
From: arun.kant at hpe.com (Kant, Arun)
Date: Fri, 18 Sep 2015 21:35:26 +0000
Subject: [openstack-dev] [Barbican] Providing service user read access
 to all tenant's certificates
In-Reply-To: <26B082831A2B1A4783604AB89B9B2C080E8A341E@SINPEX01CL02.citrite.net>
References: <26B082831A2B1A4783604AB89B9B2C080E89C2D5@SINPEX01CL02.citrite.net>
 <1A3C52DFCD06494D8528644858247BF01B7BF949@EX10MBOX06.pnnl.gov>
 <26B082831A2B1A4783604AB89B9B2C080E89F58B@SINPEX01CL02.citrite.net>
 <D2202F18.1CAE5%dmccowan@cisco.com>
 <26B082831A2B1A4783604AB89B9B2C080E8A341E@SINPEX01CL02.citrite.net>
Message-ID: <73F9E923B857FC4685A32B062C964A5D81C57FBA@G4W3223.americas.hpqcorp.net>

From the description of the use case, it looks like you want the 'service user' to access any tenant's resources regardless of whether that user has a tenant role, and without an explicit read assignment on each resource. This can be done via a customized policy where the related 'get' calls are allowed for a specific role, and that role is then assigned to the 'service user'. The role check can be made more restrictive by also looking for a specific 'service' tenant or 'service' domain.
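A sketch of the kind of policy override described above (hedged: the rule targets and the role name here are hypothetical placeholders, not Barbican's actual policy file entries):

```json
{
    "secret:get": "rule:secret_project_match or role:cert-observer",
    "container:get": "rule:container_project_match or role:cert-observer"
}
```

The idea is that the extra `role:` alternative lets a dedicated service role pass the check even without a per-tenant role or an explicit ACL, and the rule could additionally be scoped to a specific 'service' project or domain.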

-Arun


From: Vijay Venkatachalam [mailto:Vijay.Venkatachalam at citrix.com]
Sent: Friday, September 18, 2015 1:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Typos corrected.

From: Vijay Venkatachalam
Sent: 18 September 2015 00:36
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: RE: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Yes Dave, that is what is happening today.

But that approach looks a little untidy, because the tenant admin has to do some infrastructure work.

It will be good from the user/tenant admin's perspective to just do 2 things:

1. Upload certificates info

2. Create the LBaaS configuration with the certificates already uploaded

Now, because Barbican and LBaaS do *not* work nicely with each other, every tenant admin has to do the following:

1. Upload certificates info

2. Read a document or find out that there is an LBaaS service user, somehow get hold of the LBaaS service user's user id, and assign read rights on that certificate to the LBaaS service user

3. Create the LBaaS configuration with the certificates already uploaded

This does not fit the "as a service" model of OpenStack, where tenants just configure whatever they want and the infrastructure takes care of automating the rest.

Thanks,
Vijay V.

From: Dave McCowan (dmccowan) [mailto:dmccowan at cisco.com]
Sent: 17 September 2015 18:20
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


The tenant admin from Step 1, should also do Step 2.

From: Vijay Venkatachalam <Vijay.Venkatachalam at citrix.com<mailto:Vijay.Venkatachalam at citrix.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Wednesday, September 16, 2015 at 9:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates


How does lbaas do step 2?
It does not have the privilege for that secret/container using the service user.
Should it use the keystone token with which the user created the LB config, and assign read access for the secret/container to the LBaaS service user?

Thanks,
Vijay V.

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: 16 September 2015 19:24
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

Why not have lbaas do step 2? Even better would be to help with the instance user spec; combined with lbaas doing step 2, you could restrict secret access to just the amphorae that need the secret.

Thanks,
Kevin

________________________________
From: Vijay Venkatachalam
Sent: Tuesday, September 15, 2015 7:06:39 PM
To: OpenStack Development Mailing List (openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>)
Subject: [openstack-dev] [Barbican] Providing service user read access to all tenant's certificates
Hi,
               Is there a way to provide read access for a certain user to all secrets/containers of all projects'/tenants' certificates?
               This user with universal "read" privileges will be used as a service user by the LBaaS plugin to read tenants' certificates during LB configuration implementation.

               Today's LBaaS users follow the process below:

1. The tenant's creator/admin user uploads certificate info as secrets and a container

2. The user then has to create ACLs for the LBaaS service user to access the containers and secrets

3. The user creates an LB config with the container reference

4. The LBaaS plugin, using the service user, then accesses the container reference provided in the LB config and proceeds to implement it

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 where the LBaaS plugin's service user checks whether the user configuring the LB has read access to the container reference provided.
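The step-5 check could look roughly like the sketch below (a hypothetical helper, not python-barbicanclient's real API; how the ACL user list and project membership are fetched is deliberately left out):

```python
def user_can_read(acl_read_users, owner_project_users, requester_id):
    """Return True if the requesting user may read the secret/container.

    acl_read_users: set of user ids granted read access via the resource's ACL.
    owner_project_users: set of user ids belonging to the owning project.
    """
    return requester_id in acl_read_users or requester_id in owner_project_users

# The LBaaS plugin, acting as the privileged service user, would run this
# check on behalf of the tenant user who submitted the LB configuration,
# instead of requiring that user to grant the service user an ACL first.
assert user_can_read({"u-123"}, {"u-999"}, "u-123")
assert not user_can_read(set(), {"u-999"}, "u-123")
```
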

Thanks,
Vijay V.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/5584a5d6/attachment.html>

From hongbin.lu at huawei.com  Fri Sep 18 23:01:21 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Fri, 18 Sep 2015 23:01:21 +0000
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please don't hurt me)
In-Reply-To: <A2C58676-B9D6-4EFB-B7EC-582BAE8ECFFA@rackspace.com>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
 <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>
 <CABARBAZO71wyCWSgk0h7T-rH+yG_N0tV8apKARwgpp42z6uP8Q@mail.gmail.com>
 <B3A42ACB-C486-480C-BA6A-151DF4A815D5@rackspace.com>
 <A2C58676-B9D6-4EFB-B7EC-582BAE8ECFFA@rackspace.com>
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCE5F0C@SZXEMI503-MBS.china.huawei.com>

Regarding the guidance, I find the judgement a bit subjective. It could happen that a contributor thinks his/her patch is trivial (or that it is not fixing a functional defect), but a reviewer thinks the opposite. For example, I found it hard to judge when I reviewed the following patches:

https://review.openstack.org/#/c/224183/
https://review.openstack.org/#/c/224198/
https://review.openstack.org/#/c/224184/

It would be helpful if the guide provided some examples of what is a trivial patch and what is not. OpenStack uses this approach to define what is a good/bad commit message, which I find quite helpful.

https://wiki.openstack.org/wiki/GitCommitMessages#Examples_of_bad_practice
https://wiki.openstack.org/wiki/GitCommitMessages#Examples_of_good_practice

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.otto at rackspace.com]
Sent: September-17-15 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Associating patches with bugs/bps (Please don't hurt me)

For posterity, I have recorded this guidance in our Contributing Wiki:

See the NOTE section under:

https://wiki.openstack.org/wiki/Magnum/Contributing#Identify_bugs

Excerpt:

"NOTE: If you are fixing something trivial, that is not actually a functional defect in the software, you can do that without filing a bug ticket, if you don't want it to be tracked when we tally this work between releases. If you do this, just mention it in the commit message that it's a trivial change that does not require a bug ticket. You can reference this guideline if it comes up in discussion during the review process. Functional defects should be tracked in bug tickets. New features should be tracked in blueprints. Trivial features may be tracked using a bug ticket marked as 'Wishlist' importance."

I hope that helps.

Adrian

On Sep 17, 2015, at 2:01 PM, Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>> wrote:

Let's apply sensible reason. If it's a new feature or a bug, it should be tracked against an artifact like a bug ticket or a blueprint. If it's truly trivia, we don't care. I can tell you that some of the worst bugs I have ever seen in my career had fixes that were about 4 bytes long. That did not make them any less serious.

If you are fixing an actual legitimate bug that has a three-character fix, and you don't want it to be tracked, then you can say so in the commit message. We can act accordingly going forward.

Adrian

On Sep 17, 2015, at 1:53 PM, Assaf Muller <amuller at redhat.com<mailto:amuller at redhat.com>> wrote:


On Thu, Sep 17, 2015 at 4:09 PM, Jeff Peeler <jpeeler at redhat.com<mailto:jpeeler at redhat.com>> wrote:

On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
I agree. Lots of projects have this issue. I submitted a bug fix once that literally was 3 characters long, and it took:
A short commit message, a long commit message, and a full bug report being filed and cross-linked. The amount of time spent writing it up was orders of magnitude longer than the actual fix.

Seems a bit much...

Looking at this review, I'd go a step farther and argue that code cleanups like this one should be really really easy to get through. No one likes to do them, so we should be encouraging folks that actually do it. Not pile up roadblocks.

It is indeed frustrating. I've had a few similar reviews (in other projects - hopefully it's okay I comment here) as well. Honestly, I think if a given team is willing to draw the line as to what is permissible to commit without bug creation, then they should be permitted that freedom.

However, that said, I'm sure somebody is going to point out that come release time having the list of bugs fixed in a given release is handy, spelling errors included.

We've had the same debate in Neutron, and we relaxed the rules: we don't require bugs for trivial changes. In fact, my argument has always been: come release
time, when we say that the Neutron community fixed so-and-so many bugs, we would be lying if we were to include fixing spelling issues in comments. That's not a bug.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/021cf754/attachment.html>

From mangalaman93 at gmail.com  Fri Sep 18 23:54:00 2015
From: mangalaman93 at gmail.com (aman mangal)
Date: Fri, 18 Sep 2015 23:54:00 +0000
Subject: [openstack-dev] [Novadocker] Resizing docker containers
Message-ID: <CAL6Z3hxh4UMU2E4vOEBQ=kuYuvVB2ZbObaqR6KNKfJ6hxeds_A@mail.gmail.com>

Hi,

I was wondering if it is possible to add support for resizing Docker
containers without disrupting their execution. Is it even possible without
any direct support from OpenStack, with modifications only to
novadocker? Any insights?

Aman
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/783e46da/attachment.html>

From tim at styra.com  Sat Sep 19 00:14:08 2015
From: tim at styra.com (Tim Hinrichs)
Date: Sat, 19 Sep 2015 00:14:08 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>

It's great to have this available!  I think it'll help people understand
what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this
big?  I think we should finish this as a VM but then look into doing it
with containers to make it EVEN easier for people to get started.

- It gave me an error about a missing shared directory when I started up.

- I expected devstack to be running when I launched the VM.  devstack
startup time is substantial, and if there's a problem, it's good to assume
the user won't know how to fix it.  Is it possible to have devstack up and
running when we start the VM?  That said, it started up fine for me.

- It'd be good to have a README to explain how to use the use-case
structure. It wasn't obvious to me.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases
folder within it.  I assume the inner one shouldn't be there?

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization
problems.

But otherwise I think the setup looks reasonable.  Will there be an undo
script so that we can run the use cases one after another without worrying
about interactions?

Tim


On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:

> Hi Congress folks,
>
>
>
> BTW the login/password for the VM is vagrant/vagrant
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Shiv Haris [mailto:sharis at Brocade.com]
> *Sent:* Thursday, September 17, 2015 5:03 PM
> *To:* openstack-dev at lists.openstack.org
> *Subject:* [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Hi All,
>
>
>
> I have put my VM (virtualbox) at:
>
>
>
> http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova
>
>
>
> I usually run this on a MacBook Air, but it should work on other
> platforms as well. I chose VirtualBox since it is free.
>
>
>
> Please send me your use cases, and I can incorporate them in the VM and send you
> an updated image. Please take a look at the structure I have in place for the
> first use case; I would prefer it be the same for the other use cases. (However,
> I am still open to suggestions for changes.)
>
>
>
> Thanks,
>
>
>
> -Shiv
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/e9a8f096/attachment.html>

From adrian.otto at rackspace.com  Sat Sep 19 00:44:11 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Sat, 19 Sep 2015 00:44:11 +0000
Subject: [openstack-dev] [magnum] Associating patches with bugs/bps
 (Please don't hurt me)
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6BCE5F0C@SZXEMI503-MBS.china.huawei.com>
References: <55FB0D4E.50506@linux.vnet.ibm.com>
 <1A3C52DFCD06494D8528644858247BF01B7C48DD@EX10MBOX06.pnnl.gov>
 <CALesnTy_fbzFa6=1KUWdOO+Y+G3hbGNpJV7ShY_8DA4-DZMwZQ@mail.gmail.com>
 <CABARBAZO71wyCWSgk0h7T-rH+yG_N0tV8apKARwgpp42z6uP8Q@mail.gmail.com>
 <B3A42ACB-C486-480C-BA6A-151DF4A815D5@rackspace.com>
 <A2C58676-B9D6-4EFB-B7EC-582BAE8ECFFA@rackspace.com>,
 <0957CD8F4B55C0418161614FEC580D6BCE5F0C@SZXEMI503-MBS.china.huawei.com>
Message-ID: <C38BA25A-0032-4622-9AE3-1A0A48523513@rackspace.com>

Although I do think this is a good suggestion, let's resist the temptation to overthink this. No matter what guidance is offered, each exception to a policy needs to be judged individually. I suggest that when we encounter situations like this, that we allow the submitter to simply label the change as trivial and we trust that commit to go untracked unless there is a clear consensus to the contrary. Really the only downside to not tracking trivial changes is that our bug fix statistics are slightly lower. That's okay with me, as long as we are actually tracking the meaningful contributions.

--
Adrian

On Sep 18, 2015, at 4:02 PM, Hongbin Lu <hongbin.lu at huawei.com<mailto:hongbin.lu at huawei.com>> wrote:

Regarding the guidance, I find the judgement a bit subjective. It could happen that a contributor thinks his/her patch is trivial (or that it is not fixing a functional defect), but a reviewer thinks the opposite. For example, I found it hard to judge when I reviewed the following patches:

https://review.openstack.org/#/c/224183/
https://review.openstack.org/#/c/224198/
https://review.openstack.org/#/c/224184/

It would be helpful if the guide provided some examples of what is a trivial patch and what is not. OpenStack uses this approach to define what is a good/bad commit message, which I find quite helpful.

https://wiki.openstack.org/wiki/GitCommitMessages#Examples_of_bad_practice
https://wiki.openstack.org/wiki/GitCommitMessages#Examples_of_good_practice

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.otto at rackspace.com]
Sent: September-17-15 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Associating patches with bugs/bps (Please don't hurt me)

For posterity, I have recorded this guidance in our Contributing Wiki:

See the NOTE section under:

https://wiki.openstack.org/wiki/Magnum/Contributing#Identify_bugs

Excerpt:

"NOTE: If you are fixing something trivial, that is not actually a functional defect in the software, you can do that without filing a bug ticket, if you don't want it to be tracked when we tally this work between releases. If you do this, just mention it in the commit message that it's a trivial change that does not require a bug ticket. You can reference this guideline if it comes up in discussion during the review process. Functional defects should be tracked in bug tickets. New features should be tracked in blueprints. Trivial features may be tracked using a bug ticket marked as 'Wishlist' importance."

I hope that helps.

Adrian

On Sep 17, 2015, at 2:01 PM, Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>> wrote:

Let's apply sensible reason. If it's a new feature or a bug, it should be tracked against an artifact like a bug ticket or a blueprint. If it's truly trivia, we don't care. I can tell you that some of the worst bugs I have ever seen in my career had fixes that were about 4 bytes long. That did not make them any less serious.

If you are fixing an actual legitimate bug that has a three-character fix, and you don't want it to be tracked, then you can say so in the commit message. We can act accordingly going forward.

Adrian

On Sep 17, 2015, at 1:53 PM, Assaf Muller <amuller at redhat.com<mailto:amuller at redhat.com>> wrote:


On Thu, Sep 17, 2015 at 4:09 PM, Jeff Peeler <jpeeler at redhat.com<mailto:jpeeler at redhat.com>> wrote:

On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
I agree. Lots of projects have this issue. I submitted a bug fix once that literally was 3 characters long, and it took:
A short commit message, a long commit message, and a full bug report being filed and cross-linked. The amount of time spent writing it up was orders of magnitude longer than the actual fix.

Seems a bit much...

Looking at this review, I'd go a step farther and argue that code cleanups like this one should be really really easy to get through. No one likes to do them, so we should be encouraging folks that actually do it. Not pile up roadblocks.

It is indeed frustrating. I've had a few similar reviews (in other projects - hopefully it's okay I comment here) as well. Honestly, I think if a given team is willing to draw the line as to what is permissible to commit without bug creation, then they should be permitted that freedom.

However, that said, I'm sure somebody is going to point out that come release time having the list of bugs fixed in a given release is handy, spelling errors included.

We've had the same debate in Neutron, and we relaxed the rules: we don't require bugs for trivial changes. In fact, my argument has always been: come release
time, when we say that the Neutron community fixed so-and-so many bugs, we would be lying if we were to include fixing spelling issues in comments. That's not a bug.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/5aad1302/attachment.html>

From dborodaenko at mirantis.com  Sat Sep 19 01:07:35 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Fri, 18 Sep 2015 18:07:35 -0700
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
Message-ID: <20150919010735.GB16012@localhost>

Dims,

Thanks for the reminder!

I've summarized the uncontroversial parts of that thread in a policy
proposal as per your suggestion [0]; please review and comment. I've
renamed SMEs to maintainers since Mike has agreed with that part, and I
omitted code review SLAs from the policy since that's the part that has
generated the most discussion.

[0] https://review.openstack.org/225376

I don't think we should postpone the election: the PTL election follows
the same rules as OpenStack so we don't need a Fuel-specific policy for
that, and the component leads election doesn't start until October 9,
which gives us 3 weeks to confirm consensus on that aspect of the
policy.

-- 
Dmitry Borodaenko


On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
> Sergey,
> 
> Please see [1]. Did we codify some of these roles and responsibilities as a
> community in a spec? There was also a request to use terminology like say
> MAINTAINERS in that email as well.
> 
> Are we pulling the trigger a bit early for an actual election?
> 
> Thanks,
> Dims
> 
> [1] http://markmail.org/message/2ls5obgac6tvcfss
> 
> On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
> 
> > Sergey, Fuelers
> >
> > This is awesome news!
> >
> > By the way, I have a question on who is eligible to vote and to nominate
> > themselves for both PTL and Component Leads. Could you elaborate on that?
> >
> > And there is no such entity as Component Lead in OpenStack - so we are
> > actually creating one. What are the new rights and responsibilities of CL?
> >
> > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <slukjanov at mirantis.com>
> > wrote:
> >
> >> Hi folks,
> >>
> >> I'd like to announce that we're running the PTL and Component Leads
> >> elections. Detailed information is available on the wiki. [0]
> >>
> >> Project Team Lead: Manages day-to-day operations, drives the project
> >> team goals, resolves technical disputes within the project team. [1]
> >>
> >> Component Lead: Defines architecture of a module or component in Fuel,
> >> reviews design specs, merges majority of commits and resolves conflicts
> >> between Maintainers or contributors in the area of responsibility. [2]
> >>
> >> Fuel has two large sub-teams, with roughly comparable codebases, that
> >> need dedicated component leads: fuel-library and fuel-python. [2]
> >>
> >> Nominees propose their candidacy by sending an email to the
> >> openstack-dev at lists.openstack.org mailing list, with the subject:
> >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
> >> (for example, "[fuel] fuel-library lead candidacy").
> >>
> >> Timeline:
> >>
> >> PTL elections
> >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
> >> * September 29 - October 8: PTL elections
> >>
> >> Component leads elections (fuel-library and fuel-python)
> >> * October 9 - October 15: Open candidacy for Component leads positions
> >> * October 16 - October 22: Component leads elections
> >>
> >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
> >> [1] https://wiki.openstack.org/wiki/Governance
> >> [2]
> >> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> >> [3] https://lwn.net/Articles/648610/
> >>
> >> --
> >> Sincerely yours,
> >> Sergey Lukjanov
> >> Sahara Technical Lead
> >> (OpenStack Data Processing)
> >> Principal Software Engineer
> >> Mirantis Inc.
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com <http://www.mirantis.ru/>
> > www.mirantis.ru
> > vkuklin at mirantis.com
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From davanum at gmail.com  Sat Sep 19 02:42:56 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Fri, 18 Sep 2015 22:42:56 -0400
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <20150919010735.GB16012@localhost>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
 <20150919010735.GB16012@localhost>
Message-ID: <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>

+1 Dmitry

-- Dims

On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <dborodaenko at mirantis.com
> wrote:

> Dims,
>
> Thanks for the reminder!
>
> I've summarized the uncontroversial parts of that thread in a policy
> proposal as per your suggestion [0]; please review and comment. I've
> renamed SMEs to maintainers since Mike has agreed with that part, and I
> omitted code review SLAs from the policy since that's the part that has
> generated the most discussion.
>
> [0] https://review.openstack.org/225376
>
> I don't think we should postpone the election: the PTL election follows
> the same rules as OpenStack so we don't need a Fuel-specific policy for
> that, and the component leads election doesn't start until October 9,
> which gives us 3 weeks to confirm consensus on that aspect of the
> policy.
>
> --
> Dmitry Borodaenko
>
>
> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
> > Sergey,
> >
> > Please see [1]. Did we codify some of these roles and responsibilities
> as a
> > community in a spec? There was also a request to use terminology like say
> > MAINTAINERS in that email as well.
> >
> > Are we pulling the trigger a bit early for an actual election?
> >
> > Thanks,
> > Dims
> >
> > [1] http://markmail.org/message/2ls5obgac6tvcfss
> >
> > On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <vkuklin at mirantis.com>
> > wrote:
> >
> > > Sergey, Fuelers
> > >
> > > This is awesome news!
> > >
> > > By the way, I have a question on who is eligible to vote and to
> > > nominate themselves for both PTL and Component Leads. Could you
> > > elaborate on that?
> > >
> > > And there is no such entity as Component Lead in OpenStack - so we are
> > > actually creating one. What are the new rights and responsibilities of
> > > CL?
> > >
> > > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <
> slukjanov at mirantis.com>
> > > wrote:
> > >
> > >> Hi folks,
> > >>
> > >> I'd like to announce that we're running the PTL and Component Leads
> > >> elections. Detailed information is available on the wiki. [0]
> > >>
> > >> Project Team Lead: Manages day-to-day operations, drives the project
> > >> team goals, resolves technical disputes within the project team. [1]
> > >>
> > >> Component Lead: Defines architecture of a module or component in Fuel,
> > >> reviews design specs, merges majority of commits and resolves
> conflicts
> > >> between Maintainers or contributors in the area of responsibility. [2]
> > >>
> > >> Fuel has two large sub-teams, with roughly comparable codebases, that
> > >> need dedicated component leads: fuel-library and fuel-python. [2]
> > >>
> > >> Nominees propose their candidacy by sending an email to the
> > >> openstack-dev at lists.openstack.org mailing list, with the subject:
> > >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
> > >> (for example, "[fuel] fuel-library lead candidacy").
> > >>
> > >> Timeline:
> > >>
> > >> PTL elections
> > >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
> position
> > >> * September 29 - October 8: PTL elections
> > >>
> > >> Component leads elections (fuel-library and fuel-python)
> > >> * October 9 - October 15: Open candidacy for Component leads positions
> > >> * October 16 - October 22: Component leads elections
> > >>
> > >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
> > >> [1] https://wiki.openstack.org/wiki/Governance
> > >> [2]
> > >>
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> > >> [3] https://lwn.net/Articles/648610/
> > >>
> > >> --
> > >> Sincerely yours,
> > >> Sergey Lukjanov
> > >> Sahara Technical Lead
> > >> (OpenStack Data Processing)
> > >> Principal Software Engineer
> > >> Mirantis Inc.
> > >>
> > >>
> __________________________________________________________________________
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > >>
> > >
> > >
> > > --
> > > Yours Faithfully,
> > > Vladimir Kuklin,
> > > Fuel Library Tech Lead,
> > > Mirantis, Inc.
> > > +7 (495) 640-49-04
> > > +7 (926) 702-39-68
> > > Skype kuklinvv
> > > 35bk3, Vorontsovskaya Str.
> > > Moscow, Russia,
> > > www.mirantis.com <http://www.mirantis.ru/>
> > > www.mirantis.ru
> > > vkuklin at mirantis.com
> > >
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> >
> >
> > --
> > Davanum Srinivas :: https://twitter.com/dims
>
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/e3db33b8/attachment.html>

From skywalker.nick at gmail.com  Sat Sep 19 03:06:38 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Sat, 19 Sep 2015 11:06:38 +0800
Subject: [openstack-dev] [neutron] [oslo.privsep] Any progress on
	privsep?
In-Reply-To: <CAPA_H3dBUaC0Rr-PRXGbJwRnyyj63infspgYrzEcvErh015WEA@mail.gmail.com>
References: <CALFEDVehrHj+syJFDocOLG30X6xEVM5wApbSWS2y-kc=tq-dFw@mail.gmail.com>
 <CAPA_H3dBUaC0Rr-PRXGbJwRnyyj63infspgYrzEcvErh015WEA@mail.gmail.com>
Message-ID: <CALFEDVd6Dg-nPosu5G6SD8knwnrq=SmqxPPT-iFiYwmG-3x8VA@mail.gmail.com>

Thanks for your reply, Gus. That's awesome. I'd like to have a look at
it, or test it if possible.

Is any source code available upstream yet?
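
For context, the privsep model Gus describes below can be sketched roughly
like this (a hypothetical illustration only; the decorator and function names
here are invented and are not the real oslo.privsep API):

```python
# Hypothetical sketch of the privsep idea: instead of shelling out to
# "ip ..." through rootwrap, a function is marked as privileged and
# dispatched to a separate privileged helper. Names are illustrative.

PRIVILEGED = {}

def entrypoint(func):
    """Register func as one that must run in the privileged helper."""
    PRIVILEGED[func.__name__] = func
    def wrapper(*args, **kwargs):
        # A real implementation would serialize the call and send it over
        # a unix socket to a daemon running with elevated capabilities;
        # here we just invoke the registered function directly.
        return PRIVILEGED[func.__name__](*args, **kwargs)
    return wrapper

@entrypoint
def set_link_up(ifname):
    # Pure-python netlink work (e.g. via pyroute2) would happen here,
    # with CAP_NET_ADMIN, instead of exec'ing "ip link set ... up".
    return "%s: up" % ifname

print(set_link_up("eth0"))  # -> eth0: up
```

The point of the pattern is that only the registered entrypoints cross the
privilege boundary, rather than arbitrary external commands.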

On Fri, Sep 18, 2015 at 12:40 PM, Angus Lees <gus at inodes.org> wrote:
> On Fri, 18 Sep 2015 at 14:13 Li Ma <skywalker.nick at gmail.com> wrote:
>>
>> Hi stackers,
>>
>> Currently we are discussing the possibility of using a pure python
>> library to configure network in neutron [1]. We find out that it is
>> impossible to do it without privsep, because we run external commands
>> which cannot be replaced by python calls via rootwrap.
>>
>> Privsep has been merged in the Liberty cycle. I just wonder how it is
>> going on.
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1492714
>
>
> Thanks for your interest :)  This entire cycle has been spent on the spec.
> It looks like it might be approved very soon (got the first +2 overnight),
> which will then unblock a string of "create new oslo project" changes.
>
> During the spec discussion, the API was changed (for the better).  Now it
> looks like the discussion has settled down, I'm getting to work rewriting it
> following the new API.  It took me about 2 weeks to write it the first time
> around (almost all on testing framework), so I'd expect something of similar
> magnitude this time.
>
> I don't make predictions about timelines that rely on the OpenStack review
> process, but if you forced me I'd _guess_ it will be ready for projects to
> try out early in M.
>
>  - Gus
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.nick at gmail.com


From choudharyvikas16 at gmail.com  Sat Sep 19 03:56:57 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Sat, 19 Sep 2015 09:26:57 +0530
Subject: [openstack-dev] [Magnum] Implementing a baymodel attributes
	validation restful api
Message-ID: <CABJxuZoK4YU3DUSs4uu1kq+LPuqbSjg0Diy9wN2TBVSrvXB_4g@mail.gmail.com>

Hi Team,

Any thoughts on this?

https://blueprints.launchpad.net/magnum/+spec/baymodel-all-attributes-validation


Thanks
Vikas Choudhary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/fa42fd2d/attachment.html>

From dhvanan at gmail.com  Sat Sep 19 04:55:02 2015
From: dhvanan at gmail.com (Dhvanan Shah)
Date: Sat, 19 Sep 2015 10:25:02 +0530
Subject: [openstack-dev] (NOVA) Is there a Queue maintained for instance
	requests
Message-ID: <CANovBq612E-iXGtv627ow0Cqg0t0-z5mp=mfeQnHnPZGEEd=pg@mail.gmail.com>

Hi,

I had a question regarding the process of spawning an instance.
In the whole process from requesting an instance through scheduling to
spawning, is there a queue where the requests go first, like the job queue
an OS scheduler maintains?
I ask because I want to look at handling multiple instance requests at a
time, and wanted to see whether there is a common place where all the
instances get registered first and are spawned after that.

I went through the code base and tried to find such a queue. I tried to
trace back from the client side:
-> create (in nova/api/openstack/compute/servers.py): the API server that
handles all the requests
-> create and _create_instance (in nova/compute/api.py): handle all the
requests regarding the compute resources
-> build_instances (nova/conductor/manager.py): handles all the db operations;
from here the request is sent to the scheduler, which returns a host for
the instance.


So I'm not sure if I missed it, but I was not able to find any queue where
the requests are registered. Could someone please help me understand how
this works?
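
The hand-offs traced above go over a message bus rather than an explicit
scheduler job queue; a toy sketch of that dispatch pattern (illustrative
only, not Nova's actual RPC code; in Nova the bus is AMQP via
oslo.messaging):

```python
# Toy sketch of the hand-off chain: api -> conductor -> scheduler, with
# each step passed over a message bus instead of a scheduler job queue.
import queue

bus = queue.Queue()

def api_create(instance):
    # The API service casts a message onto the bus and returns immediately.
    bus.put(("conductor.build_instances", instance))

def scheduler_select(instance):
    # Placeholder host selection; the real scheduler applies filters/weights.
    return "compute-1"

def conductor_run():
    # The conductor picks the request off the bus and asks the scheduler
    # (a synchronous RPC call in reality) for a host.
    topic, instance = bus.get()
    host = scheduler_select(instance)
    return "%s scheduled on %s" % (instance, host)

api_create("vm-42")
print(conductor_run())  # -> vm-42 scheduled on compute-1
```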

Cheers!
-- 
Dhvanan Shah
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/15071379/attachment.html>

From Varun_Lodaya at symantec.com  Sat Sep 19 05:13:20 2015
From: Varun_Lodaya at symantec.com (Varun Lodaya)
Date: Fri, 18 Sep 2015 22:13:20 -0700
Subject: [openstack-dev] [neutron][lbaas] Barbican container lookup fron
	lbaas
Message-ID: <D2223D00.814C%Lodaya_VarunMukesh@symantec.com>

Hi Guys,

With lbaasv2, I noticed that when we try to associate TLS containers with lbaas listeners, lbaas tries to validate the container, and while doing so it gets a keystone token based on the tenant/user credentials in the neutron.conf file. However, the Barbican containers could belong to different users in different tenants; in that case, wouldn't the container lookup always fail? Am I missing something?

Thanks,
Varun
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150918/51640bc1/attachment.html>

From douglas.mendizabal at rackspace.com  Sat Sep 19 05:53:48 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Sat, 19 Sep 2015 00:53:48 -0500
Subject: [openstack-dev] [neutron][lbaas] Barbican container lookup fron
 lbaas
In-Reply-To: <D2223D00.814C%Lodaya_VarunMukesh@symantec.com>
References: <D2223D00.814C%Lodaya_VarunMukesh@symantec.com>
Message-ID: <55FCF86C.7050106@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Hi Varun,

I believe the expected workflow for this use case is:

1. User uploads cert + key to Barbican
2. User grants lbaas access to the Barbican certificate container
using the ACL API [1]
3. User requests tls container by providing Barbican container reference

Since the user grants the lbaas user access in step 2, the token
generated using the conf file credentials will be accepted by Barbican
and the certificate will be made available to lbaas.
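
A toy sketch of the access check behind the workflow above (illustrative
only, not Barbican's implementation):

```python
# Illustrative sketch of the ACL grant in step 2: the container owner adds
# the lbaas service user to the container's ACL, after which a token for
# that user is accepted even though it belongs to a different project.
class Container:
    def __init__(self, owner):
        self.owner = owner
        self.acl_users = set()

    def readable_by(self, user):
        # Access is granted to the owner or to anyone on the ACL.
        return user == self.owner or user in self.acl_users

c = Container(owner="tenant-a-user")
assert not c.readable_by("lbaas-service-user")   # lookup fails before grant
c.acl_users.add("lbaas-service-user")            # step 2: ACL grant
assert c.readable_by("lbaas-service-user")       # step 3 now succeeds
```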

- - Douglas Mendizábal

[1] http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

On 9/19/15 12:13 AM, Varun Lodaya wrote:
> Hi Guys,
> 
> With lbaasv2, I noticed that when we try to associate tls
> containers with lbaas listeners, lbaas tries to validate the
> container and while doing so, tries to get keystone token based on
> tenant/user credentials in neutron.conf file. However, the barbican
> containers could belong to different users in different tenants, in
> that case, container look up would always fail? Am I missing
> something?
> 
> Thanks, Varun
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV/PhsAAoJEB7Z2EQgmLX7yYQQAJLI+njJaIyDhG8uyJZiq9Rp
KIHFppR0HT10muGxAcUGcDlAFpH6+Ww62fxs6WIbPnGXutK0iwmNOvef3S3+HKLj
0jE4RHcrDQK8dCZ+FRslC3RuF8oxppTOUVHq/IcD9g6JAsFPvmFaPNf5+XLE5z+P
a7T+ycfrtoG8ZKDFIv8XJcb4knDKNUT3JLGtLZ8UuEBoQiSZcpm33UUQcUsZgdSE
EZPi4GSC9pwfDe3ujxOlPoAgEjKUApMMA+WtdMINLleJrw7FH9YWFXzHGv93Uwrl
BBNpZ5QDMCKXd/q2n1IMVj0ejC8EoOL9Wv5ZTvkRFZjDfA2x7P3U24gKGaERj+Lu
t4Llsn4PHIaZ+DFchI4SjPblApYQ4CGDYDzh6xqvOFAv3Gfi8strNzSdu4aHOQZM
TeaRd6A06nI/J/lA9YzEgZFaOhLlU8iWPfYEAqAHVZTZQrbaTTMwVxbttD++qK/q
VJ4jcUfxPyoPuY78sNiJ7W8HuZgaPVxMi/s5rfjcR8NREjOrSkJSQ4eG5OMR3LmA
Tem2/pF50a0Awb+RbSIDzDO2nBJzarKYONih+dCF/fgk66BKQC7D8vyujKYRhk5z
dHDUhFNnuLg9pmS0rtS9Rthc4bpz2gTph35ZFsjMNm55DfsGcsUoHge1w9HQHjXL
edqEMWH4eAZvO5cmioeH
=O44k
-----END PGP SIGNATURE-----


From doug at doughellmann.com  Sat Sep 19 13:04:19 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Sat, 19 Sep 2015 09:04:19 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <BF5D2E8A-A623-4CF2-9555-A054984D673C@mirantis.com>
References: <1442234537-sup-4636@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>
 <55F87083.6080408@gmail.com>
 <BF5D2E8A-A623-4CF2-9555-A054984D673C@mirantis.com>
Message-ID: <1442667067-sup-828@lrrr.local>

Excerpts from Renat Akhmerov's message of 2015-09-19 00:35:49 +0300:
> Doug,
> 
> python-mistralclient-1.1.0 (also on PyPI) is the final release for Liberty. Here's the patch updating global-requirements.txt: https://review.openstack.org/#/c/225330/ (upper-constraints.txt should be updated automatically soon, in my understanding)

Because we're in requirements freeze, we're trying not to update any of
the global-requirements.txt entries unless absolutely necessary. At this
point, no projects can be depending on the previously unreleased
features in 1.1.0, so as long as python-mistralclient doesn't have a cap
on the major version allowed in the requirements list, it should only be
necessary to update the constraints.

Please update the constraints file by hand, only changing
python-mistralclient. That will allow us to land the update without
changing any other libraries in the test infrastructure (the automated
update submits all of the changes together, and we have several
outstanding right now).

> 
> I really apologize because I should probably have followed the ML better and attended the corresponding meetings in order to know all of this release management stuff. But I still have a number of questions on release management, like:

Yes, clearly we're going to have to do a better job of communicating
with project teams next cycle. I welcome suggestions for that.

> So far I have been doing release management for Mistral myself (~2 years), and for the last year I've been trying to stay aligned with the OpenStack schedule. In May 2015 Mistral was accepted into the Big Tent, so does that mean I'm no longer responsible for doing that? Or can I still do it on my own? Even the final Mistral client for Liberty I released myself (didn't create a stable branch yet, though); maybe I shouldn't have. Clarifications would be helpful.

It means you can now ask the release management team to take over for
the library, but that is not an automatic change.

> Same question about stable branches.

Same, for the stable maintenance team.

> Does this all apply to all Big Tent projects?

Yes, and to all horizontal teams. Every project team is expected
to provide liaisons to all horizontal teams now. The degree to which
a horizontal team does the work for you is up to each pair of teams
to negotiate.

> What exactly is upper-constraints.txt for? I'm still not sure why global-requirements.txt is not enough.

global-requirements.txt tells what versions of libraries we are
compatible with. upper-constraints.txt tells the most recent version of
libraries actually being used in tests running in our CI system.
Currently that only applies to integration tests, but the work to apply
it to unit tests is coming along.
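
A toy sketch of how the two files described above interact (the version
numbers are illustrative only):

```python
# Toy illustration of the split: global-requirements.txt records the
# *range* a project is compatible with, while upper-constraints.txt pins
# the exact version exercised in CI. Bumping only the pin is safe during
# a freeze as long as it still satisfies the declared range.
def parse(version):
    return tuple(int(part) for part in version.split("."))

requirement_floor = "1.0.0"   # e.g. "python-mistralclient>=1.0.0" in g-r
constraint_pin = "1.1.0"      # e.g. the pin in upper-constraints.txt

# The constraint update is acceptable because the pin satisfies the range.
assert parse(constraint_pin) >= parse(requirement_floor)
print("constraint %s satisfies >=%s" % (constraint_pin, requirement_floor))
```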

> What's the best source of info about release management? Is it complete?

Documentation is one of the weak points right now, and we'll be
addressing that with clearer documentation for milestone dates and
expectations and probably some centralization so there is only one
place folks need to go. Right now we have a section in the project
team guide [1], descriptions of the tools in the release-tools
README [2], and for managed teams we have instructions in the
releases repository [3].  Deadlines are listed in the wiki [4].

[1] http://docs.openstack.org/project-team-guide/release-management.html
[2] http://git.openstack.org/cgit/openstack-infra/release-tools/tree/README.rst
[3] http://git.openstack.org/cgit/openstack/releases/tree/README.rst
[4] https://wiki.openstack.org/wiki/Liberty_Release_Schedule

> 
> Sorry for asking this probably basic stuff.
> 
> Let me know if some of what I've done is wrong. It's late at night here, but I'll check the ML first thing in the morning just in case.
> 
> Thanks
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> > On 15 Sep 2015, at 22:24, Nikhil Komawar <nik.komawar at gmail.com> wrote:
> > 
> > Hi Doug,
> > 
> > And it would be good to lock in on glance_store (if it applies to this
> > email) 0.9.1 too. (that's on pypi)
> > 
> > On 9/14/15 9:26 AM, Kuvaja, Erno wrote:
> >> Hi Doug,
> >> 
> >> Please find python-glanceclient 1.0.1 release request https://review.openstack.org/#/c/222716/
> >> 
> >> - Erno
> >> 
> >>> -----Original Message-----
> >>> From: Doug Hellmann [mailto:doug at doughellmann.com]
> >>> Sent: Monday, September 14, 2015 1:46 PM
> >>> To: openstack-dev
> >>> Subject: [openstack-dev] [all][ptl][release] final liberty cycle client library
> >>> releases needed
> >>> 
> >>> PTLs and release liaisons,
> >>> 
> >>> In order to keep the rest of our schedule for the end-of-cycle release tasks,
> >>> we need to have final releases for all client libraries in the next day or two.
> >>> 
> >>> If you have not already submitted your final release request for this cycle,
> >>> please do that as soon as possible.
> >>> 
> >>> If you *have* already submitted your final release request for this cycle,
> >>> please reply to this email and let me know that you have so I can create your
> >>> stable/liberty branch.
> >>> 
> >>> Thanks!
> >>> Doug
> >>> 
> >>> __________________________________________________________
> >>> ________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: OpenStack-dev-
> >>> request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > -- 
> > 
> > Thanks,
> > Nikhil
> > 
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From rvasilets at mirantis.com  Sat Sep 19 17:30:17 2015
From: rvasilets at mirantis.com (Roman Vasilets)
Date: Sat, 19 Sep 2015 20:30:17 +0300
Subject: [openstack-dev]  [Rally][Meeting][Agenda]
Message-ID: <CABmajVVdb2KbNCNB94OkT=K=cjoxmzk7=FpPpicGVK-J1qkLEg@mail.gmail.com>

Hi, this is a friendly reminder that if you want to discuss some topics at
the Rally meetings, please add your topic to our meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic, and add some information about it (links,
etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/524f1dd1/attachment.html>

From mordred at inaugust.com  Sat Sep 19 17:57:18 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Sat, 19 Sep 2015 13:57:18 -0400
Subject: [openstack-dev] openstack-dahboard directory is not created
In-Reply-To: <c9fe2da2e8e63f2f5ae4a99e47aebf23@openstack.nimeyo.com>
References: <c9fe2da2e8e63f2f5ae4a99e47aebf23@openstack.nimeyo.com>
Message-ID: <55FDA1FE.5060801@inaugust.com>

On 09/18/2015 02:03 PM, OpenStack Mailing List Archive wrote:
> Link: https://openstack.nimeyo.com/59453/?show=59453#q59453
> From: vidya <niveeya at gmail.com>
>
> I am trying to set up openstack-dashboard on my VM, and
> /usr/share/openstack-dashboard is not created. Please help me figure out
> what I am missing here.
>
> here is what i tried
> 1 yum install openstack-selinux
> 2 yum install yum-plugin-priorities
> 3 yum install
> http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
> 4 yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
> 5 yum install python-pip

I HIGHLY recommend not doing this. The python-pip packages in distros are 
broken and old, and it is unlikely this will change due to the nature of 
the problem. For pip, I recommend:

wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py

> 6 yum groupinstall 'Development Tools'
> 7 yum install python-devel
> 8 yum install libffi-devel
> 9 yum install openssl-devel
> 10 pip install dep/horizon-2014.1.1.tar.gz
> 11 yum install openstack-dashboard
> 12 yum upgrade
> 13 reboot
> 14 history
> 15 yum install openstack-dashboard
> 16 pip install horizon-2014.1.1.tar.gz

Great. So - try again with a more modern pip and let's see how it works.

Also - are you sure you want to install horizon 2014.1.1? That's already 
end of life - if I were you, I'd not work on installing it now.

If you DO want to install it now, I HIGHLY recommend installing it from 
a linux distro - since you seem to be using CentOS 7 - perhaps you can use:

yum install 
http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-7.noarch.rpm

Also - I just noticed that you are yum installing openstack-dashboard 
and then pip installing horizon. This is a very bad idea. Either install 
the software from yum or from pip - do not mix the two. Literally nobody 
supports this (neither OpenStack nor the python pip maintainers nor Red Hat).

So - please try again either all from a distro, or with a more modern 
openstack with a more modern pip and let's see where you are.

>     16 execution gave me this
>
> Processing ./dep/horizon-2014.1.1.tar.gz
> Requirement already satisfied (use --upgrade to upgrade):
> horizon==2014.1.1 from file:///home/centos/dep/horizon-2014.1.1.tar.gz
> in /usr/lib/python2.7/site-packages
> Requirement already satisfied (use --upgrade to upgrade):
> Django<1.7,>=1.4 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> django-compressor>=1.3 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> django-openstack-auth>=1.1.4 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> eventlet>=0.13.0 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): iso8601>=0.1.9
> in /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): kombu>=2.4.8
> in /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): lesscpy>=0.9j
> in /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): lockfile>=0.8
> in /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): netaddr>=0.7.6
> in /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): pbr<1.0,>=0.6
> in /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-ceilometerclient>=1.0.6 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-cinderclient>=1.0.6 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-glanceclient>=0.9.0 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-heatclient>=0.2.3 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-keystoneclient>=0.7.0 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-neutronclient<3,>=2.3.4 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-novaclient>=2.17.0 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-swiftclient>=1.6 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> python-troveclient>=1.0.3 in /usr/lib/python2.7/site-packages (from
> horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): pytz>=2010h in
> /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): six>=1.6.0 in
> /usr/lib/python2.7/site-packages (from horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> django-appconf>=0.4 in /usr/lib/python2.7/site-packages (from
> django-compressor>=1.3->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.policy>=0.5.0 in /usr/lib/python2.7/site-packages (from
> django-openstack-auth>=1.1.4->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.config>=2.3.0 in /usr/lib/python2.7/site-packages (from
> django-openstack-auth>=1.1.4->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): greenlet>=0.3
> in /usr/lib64/python2.7/site-packages (from
> eventlet>=0.13.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> amqp<2.0,>=1.4.5 in /usr/lib/python2.7/site-packages (from
> kombu>=2.4.8->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): anyjson>=0.3.3
> in /usr/lib/python2.7/site-packages (from kombu>=2.4.8->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): ply in
> /usr/lib/python2.7/site-packages (from lesscpy>=0.9j->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): pip in
> /usr/lib/python2.7/site-packages (from pbr<1.0,>=0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.serialization>=1.4.0 in /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.i18n>=1.5.0 in /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): argparse in
> /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> requests>=2.5.2 in /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> stevedore>=1.5.0 in /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.utils>=2.0.0 in /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> PrettyTable<0.8,>=0.7 in /usr/lib/python2.7/site-packages (from
> python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): Babel>=1.3 in
> /usr/lib/python2.7/site-packages (from
> python-cinderclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> simplejson>=2.2.0 in /usr/lib64/python2.7/site-packages (from
> python-cinderclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> warlock<2,>=1.0.1 in /usr/lib/python2.7/site-packages (from
> python-glanceclient>=0.9.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): PyYAML>=3.1.0
> in /usr/lib64/python2.7/site-packages (from
> python-heatclient>=0.2.3->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> debtcollector>=0.3.0 in /usr/lib/python2.7/site-packages (from
> python-keystoneclient>=0.7.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): cliff>=1.10.0
> in /usr/lib/python2.7/site-packages (from
> python-neutronclient<3,>=2.3.4->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): futures>=2.1.3
> in /usr/lib/python2.7/site-packages (from
> python-swiftclient>=1.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> msgpack-python>=0.4.0 in /usr/lib64/python2.7/site-packages (from
> oslo.serialization>=1.4.0->python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): monotonic>=0.3
> in /usr/lib/python2.7/site-packages (from
> oslo.utils>=2.0.0->python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> netifaces>=0.10.4 in /usr/lib64/python2.7/site-packages (from
> oslo.utils>=2.0.0->python-ceilometerclient>=1.0.6->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> jsonschema<3,>=0.7 in /usr/lib/python2.7/site-packages (from
> warlock<2,>=1.0.1->python-glanceclient>=0.9.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> jsonpatch<2,>=0.10 in /usr/lib/python2.7/site-packages (from
> warlock<2,>=1.0.1->python-glanceclient>=0.9.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): wrapt>=1.7.0
> in /usr/lib64/python2.7/site-packages (from
> debtcollector>=0.3.0->python-keystoneclient>=0.7.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): cmd2>=0.6.7 in
> /usr/lib/python2.7/site-packages (from
> cliff>=1.10.0->python-neutronclient<3,>=2.3.4->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> pyparsing>=2.0.1 in /usr/lib/python2.7/site-packages (from
> cliff>=1.10.0->python-neutronclient<3,>=2.3.4->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> unicodecsv>=0.8.0 in /usr/lib/python2.7/site-packages (from
> cliff>=1.10.0->python-neutronclient<3,>=2.3.4->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade): functools32 in
> /usr/lib/python2.7/site-packages (from
> jsonschema<3,>=0.7->warlock<2,>=1.0.1->python-glanceclient>=0.9.0->horizon==2014.1.1)
> Requirement already satisfied (use --upgrade to upgrade):
> jsonpointer>=1.0 in /usr/lib/python2.7/site-packages (from
> jsonpatch<2,>=0.10->warlock<2,>=1.0.1->python-glanceclient>=0.9.0->horizon==2014.1.1)
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From jim at geekdaily.org  Sat Sep 19 22:03:44 2015
From: jim at geekdaily.org (Jim Meyer)
Date: Sat, 19 Sep 2015 15:03:44 -0700
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
	that is the question
In-Reply-To: <55FC3E20.4040800@gmail.com>
References: <55FC0B8A.4060303@mhtx.net>
 <D2219136.1CE8C%ian.cordasco@rackspace.com> <55FC3E20.4040800@gmail.com>
Message-ID: <40F3CF4A-DD12-4263-973C-E499D9D6A39D@geekdaily.org>

> On Sep 18, 2015, at 9:38 AM, Jay Pipes <jaypipes at gmail.com> wrote:
> 
>> On 09/18/2015 11:04 AM, Ian Cordasco wrote:
>>> On 9/18/15, 08:03, "Major Hayden" <major at mhtx.net> wrote:
>>> 
>>> Hey there,
>>> 
>>> I started working on a bug[1] last night about adding a managed NTP
>>> configuration to openstack-ansible hosts.  My patch[2] gets chrony up and
>>> running with configurable NTP servers, but I'm still struggling to meet
>>> the "Proposal" section of the bug where the author has asked for
>>> non-infra physical nodes to get their time from the infra nodes.  I can't
>>> figure out how to make it work for AIO builds when one physical host is
>>> part of all of the groups. ;)
>>> 
>>> I'd argue that time synchronization is critical for a few areas:
>>> 
>>>  1) Security/auditing when comparing logs
>>>  2) Troubleshooting when comparing logs
>>>  3) I've been told Swift is time-sensitive
>>>  4) MySQL/Galera don't like time drift
>>> 
>>> However, there's a strong argument that this should be done by deployers,
>>> and not via openstack-ansible.  I'm still *very* new to the project and
>>> I'd like to hear some feedback from other folks.
>> 
>> Personally, I fall into the camp of "this is a deployer concern".
>> Specifically, there is already an ansible-galaxy role to enable NTP on
>> your deployment hosts (https://galaxy.ansible.com/list#/roles/464) which
>> *could* be expanded to do this very work that you're talking about. Using
>> specialized roles to achieve this (and contributing back to the larger
>> ansible community) seems like a bigger win than trying to reimplement some
>> of this in OSA instead of reusing other roles that already exist.
>> 
>> Compare it to a hypothetical situation where Keystone wrote its own
>> backing libraries to implement Fernet instead of using the cryptography
>> library. In that case there would be absolutely no argument that Keystone
>> should use cryptography (even if it uses cffi and has bindings to OpenSSL
>> which our infra team doesn't like and some deployers find difficult to
>> manage when using pure-python deployment tooling). Why should OSA be any
>> different from another OpenStack project?
> 
> Have to agree with Ian here. NTP, as Major wrote, is a critical piece of the deployment puzzle, but I don't think it's necessary to put anything in OSA specifically to configure NTP. As Ian wrote, better to contribute to upstream ansible-galaxy playbooks/roles that do this well.

I have a nuanced agreement with this which borders on disagreement. 

An agreed-upon time tick is as crucial to a distributed system as oxygen is to a human. It's not only those components that care, it's the humans who have to understand and operate it. As such, an OpenStack cloud should come with a time source that all services listen to; even if it's wildly off from the real world, the value of all services sharing the same tick is immeasurable. For me, it's part of "batteries included."

I'd argue that we should pick a tool and configuration for this by default and allow others to change it. And, while I love Major*, I don't think the deployment tools are the right place for this.

--j

* and I do. Been too long, Major. We should fix that. =]
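As an aside, the value of that shared tick for points 1 and 2 in Major's list is easy to underestimate. A minimal Python sketch (hypothetical timestamps, not from any real deployment) of how a mere 90-second clock skew distorts event ordering and intervals when correlating logs across two hosts:

```python
from datetime import datetime, timedelta

# Hypothetical log entries from two hosts; host B's clock runs 90s fast.
skew = timedelta(seconds=90)

events = [
    ("host-a", datetime(2015, 9, 19, 12, 0, 0), "request sent"),
    ("host-b", datetime(2015, 9, 19, 12, 0, 1) + skew, "request received"),
]

# The real causal gap is 1 second, but merging the logs by raw timestamp
# reports 91 seconds -- and with skew in the other direction, the
# "received" event would even appear to precede the "sent" event.
merged = sorted(events, key=lambda e: e[1])
observed_gap = (merged[1][1] - merged[0][1]).total_seconds()
print(observed_gap)  # 91.0
```

Whether the fix ships as a default or is left to the deployer, every service has to agree on the tick before logs from different hosts can be compared at all.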

From Neil.Jerram at metaswitch.com  Sat Sep 19 22:09:13 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Sat, 19 Sep 2015 22:09:13 +0000
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
 <SN1PR02MB16950E11BFF66EB83AC3F361995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMn5Jv74NL1TBpciM7PxdLMT0SXN+6+9WUaju=bGqW5ig@mail.gmail.com>
Message-ID: <SN1PR02MB1695D86F23755AA6FF33240099580@SN1PR02MB1695.namprd02.prod.outlook.com>

On 17/09/15 19:38, Kevin Benton wrote:
> router:external only affects the behavior of Neutron routers. It
> allows them to attach to it with an external gateway interface which
> implies NAT and floating IPs. 

I presume you're talking about the reference implementation here.  OK,
but my concern first is about what it _means_, in particular for
instances attached to such a network.  Which your next paragraph addresses:

>
> From an instance's perspective, an external network would be no
> different than any other provider network scenario that uses a
> non-Neutron router. Nothing different happens with the routing of
> traffic.

Right, thanks.  It seems to me that you're saying the same thing here as
my (b) below.  In case not, please do say more precisely what isn't
correct about my (b) statement.

>
> >Also I believe that (c) is already true for Neutron external networks
> - i.e. it doesn't make sense to assign a floating IP to an instance
> that is directly on an external network.  Is that correct?
>
> Well not floating IPs from the same external network, but you could
> conceivably have layers where one external network has an internal
> Neutron router interface that leads to another external network via a
> Neutron router.

Agreed.

Many thanks for your input in this thread!

Regards,
    Neil

>
>
> On Thu, Sep 17, 2015 at 10:17 AM, Neil Jerram
> <Neil.Jerram at metaswitch.com <mailto:Neil.Jerram at metaswitch.com>> wrote:
>
>     Thanks so much for your continuing answers; they are really
>     helping me.
>
>     I see your points now about the special casing, and about the semantic
>     expectations and internal wiring of a Neutron network being just the
>     same for an external network as for non-external.  Hence, the
>     model for
>     an L3-only external network should be the same as it would be for an
>     L3-only tenant network, except for the router:external flag (and might
>     be along the lines that you've suggested, of a subnet with a null
>     network).
>
>     It still seems that 'router:external true' might be a good match for
>     some of the other 'routed' semantics in [1], though, so I'd like to
>     drill down more on exactly what 'router:external true' means.
>
>     A summary of the semantics at [1] is:
>     (a) L3-only connectivity between instances attached to the network.
>     (b) Data can be routed between this network and the
>     outside, and
>     between multiple networks of this type, without needing Neutron
>     routers
>     (c) Floating IPs are not supported for instances on this network.
>     Instead, wherever an instance needs to be routable from, attach it
>     to a
>     network with a subnet of IP addresses that are routable from that
>     place.
>
>     [1] https://review.openstack.org/#/c/198439/
>     <https://review.openstack.org/#/c/198439/>
>
>     According to [2], router:external "Indicates whether this network is
>     externally accessible."  Which I think is an exact match for (b) -
>     would
>     you agree?  (Note: it can't mean that every instance on that network
>     actually _is_ contactable from outside, because that depends on IP
>     addressing, and router:external is a network property, not
>     subnet.  But
>     it can mean that every instance is _potentially_ contactable, without
>     the mediation of a Neutron router.)
>
>     [2] http://developer.openstack.org/api-ref-networking-v2-ext.html
>
>     Also I believe that (c) is already true for Neutron external
>     networks -
>     i.e. it doesn't make sense to assign a floating IP to an instance that
>     is directly on an external network.  Is that correct?
>
>     In summary, for the semantics that I'm wanting to model, it sounds
>     like
>     router:external true already gives me 2 of the 3 main pieces.  There's
>     still serious work needed for (a), but that's really nice news, if I'm
>     seeing things correctly (since discovering that instances can be
>     attached to an external network).
>
>     Regards,
>         Neil
>
>
>
>
>     On 17/09/15 17:29, Kevin Benton wrote:
>     >
>     > Yes, the L2 semantics apply to the external network as well (at
>     least
>     > with ML2).
>     >
>     > One example of the special casing is the external_network_bridge
>     > option in the L3 agent. That would cause the agent to plug directly
>     > into a bridge so none of the normal L2 agent wiring would occur.
>     With
>     > the L2 bridge_mappings option there is no reason for this to exist
>     > anymore, because the way it ignores network attributes makes debugging a
>     > nightmare.
>     >
>     > >Yes, that makes sense.  Clearly the core semantic there is IP. 
>     I can
>     > imagine reasonable variation on less core details, e.g. L2
>     broadcast vs.
>     > NBMA.  Perhaps it would be acceptable, if use cases need it, for
>     such
>     > details to be described by flags on the external network object.
>     >
>     > An external network object is just a regular network object with a
>     > router:external flag set to true. Any changes to it would have
>     to make
>     > sense in the context of all networks. That's why I want to make sure
>     > that whatever we come up with makes sense in all contexts and isn't
>     > just a bolt on corner case.
>     >
>     > On Sep 17, 2015 8:21 AM, "Neil Jerram"
>     <Neil.Jerram at metaswitch.com <mailto:Neil.Jerram at metaswitch.com>
>     > <mailto:Neil.Jerram at metaswitch.com
>     <mailto:Neil.Jerram at metaswitch.com>>> wrote:
>     >
>     >     Thanks, Kevin.  Some further queries, then:
>     >
>     >     On 17/09/15 15:49, Kevin Benton wrote:
>     >     >
>     >     > It's not true for all plugins, but an external network should
>     >     provide
>     >     > the same semantics of a normal network.
>     >     >
>     >     Yes, that makes sense.  Clearly the core semantic there is
>     IP.  I can
>     >     imagine reasonable variation on less core details, e.g. L2
>     >     broadcast vs.
>     >     NBMA.  Perhaps it would be acceptable, if use cases need it,
>     for such
>     >     details to be described by flags on the external network object.
>     >
>     >     I'm also wondering about what you wrote in the recent thread
>     with Carl
>     >     about representing a network connected by routers.  I think
>     you were
>     >     arguing that a L3-only network should not be represented by
>     a kind of
>     >     Neutron network object, because a Neutron network has so many L2
>     >     properties/semantics that it just doesn't make sense, and better
>     >     to have
>     >     a different kind of object for L3-only.  Do those L2
>     >     properties/semantics apply to an external network too?
>     >
>     >     > The only difference is that it allows router gateway
>     interfaces
>     >     to be
>     >     > attached to it.
>     >     >
>     >     Right.  From a networking-calico perspective, I think that
>     means that
>     >     the implementation should (eventually) support that, and
>     hence allow
>     >     interconnection between the external network and private Neutron
>     >     networks.
>     >
>     >     > We want to get rid of as much special casing as possible
>     for the
>     >     > external network.
>     >     >
>     >     I don't understand here.  What 'special casing' do you mean?
>     >
>     >     Regards,
>     >         Neil
>     >
>     >     > On Sep 17, 2015 7:02 AM, "Neil Jerram"
>     >     <Neil.Jerram at metaswitch.com
>     <mailto:Neil.Jerram at metaswitch.com>
>     <mailto:Neil.Jerram at metaswitch.com
>     <mailto:Neil.Jerram at metaswitch.com>>
>     >     > <mailto:Neil.Jerram at metaswitch.com
>     <mailto:Neil.Jerram at metaswitch.com>
>     >     <mailto:Neil.Jerram at metaswitch.com
>     <mailto:Neil.Jerram at metaswitch.com>>>> wrote:
>     >     >
>     >     >     Thanks to the interesting 'default network model'
>     thread, I
>     >     now know
>     >     >     that Neutron allows booting a VM on an external network.
>     >     :-)  I didn't
>     >     >     realize that before!
>     >     >
>     >     >     So, I'm now wondering what connectivity semantics are
>     >     expected (or
>     >     >     even
>     >     >     specified!) for such VMs, and whether they're the same
>     as -
>     >     or very
>     >     >     similar to - the 'routed' networking semantics I've
>     >     described at [1].
>     >     >
>     >     >     [1]
>     >     >
>     >     
>     https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
>     >     >
>     >     >     Specifically I wonder if VM's attached to an external
>     network
>     >     >     expect any
>     >     >     particular L2 characteristics, such as being able to L2
>     >     broadcast to
>     >     >     each other?
>     >     >
>     >     >     By way of context - i.e. why am I asking this?...   The
>     >     >     networking-calico project [2] provides an
>     implementation of the
>     >     >     'routed'
>     >     >     semantics at [1], but only if one suspends belief in
>     some of the
>     >     >     Neutron
>     >     >     semantics associated with non-external networks, such as
>     >     needing a
>     >     >     virtual router to provide connectivity to the outside
>     >     world.  (Because
>     >     >     networking-calico provides that external connectivity
>     >     without any
>     >     >     virtual router.)  Therefore we believe that we need to
>     >     propose some
>     >     >     enhancement of the Neutron API and data model, so that
>     >     Neutron can
>     >     >     describe 'routed' semantics as well as all the traditional
>     >     ones.  But,
>     >     >     if what we are doing is semantically equivalent to
>     >     'attaching to an
>     >     >     external network', perhaps no such enhancement is
>     needed...
>     >     >
>     >     >     [2]
>     >     https://git.openstack.org/cgit/openstack/networking-calico
>     >     <https://git.openstack.org/cgit/openstack/networking-calico>
>     >     >   
>      <https://git.openstack.org/cgit/openstack/networking-calico>
>     >     >
>     >     >     Many thanks for any input!
>     >     >
>     >     >         Neil
>
> -- 
> Kevin Benton
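To keep the semantics discussed above straight, here is a toy Python model (my own names, not Neutron's API or data model) of the floating-IP rule in (c) together with Kevin's caveat about layered external networks:

```python
# Toy model of the floating-IP semantics for external networks:
# an instance attached directly to an external network cannot take a
# floating IP from that same network, but a *different* external network
# reachable via a Neutron router could still supply one.

def can_assign_floating_ip(instance_network, floating_ip_network):
    # No FIP from the network the instance sits on directly (semantic (c)).
    if instance_network["router:external"] and instance_network is floating_ip_network:
        return False
    # Otherwise a FIP is only available from an external network.
    return floating_ip_network["router:external"]

ext_net = {"name": "external", "router:external": True}
upstream_net = {"name": "upstream", "router:external": True}

print(can_assign_floating_ip(ext_net, ext_net))       # False
print(can_assign_floating_ip(ext_net, upstream_net))  # True
```

This is only a sketch of the rule as stated in the thread, not how the reference implementation enforces it.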



From mscherbakov at mirantis.com  Sat Sep 19 23:37:01 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Sat, 19 Sep 2015 23:37:01 +0000
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
 <20150919010735.GB16012@localhost>
 <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>
Message-ID: <CAKYN3rMgm8iKmTCb6BknV=UsM6W-zBeW1Ca9JzZ+a=Y80taODQ@mail.gmail.com>

Let's move on.
I started work on MAINTAINERS files, proposed two patches:
https://review.openstack.org/#/c/225457/1
https://review.openstack.org/#/c/225458/1

These can be used as templates for other repos / folders.

Thanks,

On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas <davanum at gmail.com> wrote:

> +1 Dmitry
>
> -- Dims
>
> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
> dborodaenko at mirantis.com> wrote:
>
>> Dims,
>>
>> Thanks for the reminder!
>>
>> I've summarized the uncontroversial parts of that thread in a policy
>> proposal as per your suggestion [0]; please review and comment. I've
>> renamed SMEs to maintainers since Mike has agreed with that part, and I
>> omitted code review SLAs from the policy since that's the part that has
>> generated the most discussion.
>>
>> [0] https://review.openstack.org/225376
>>
>> I don't think we should postpone the election: the PTL election follows
>> the same rules as OpenStack so we don't need a Fuel-specific policy for
>> that, and the component leads election doesn't start until October 9,
>> which gives us 3 weeks to confirm consensus on that aspect of the
>> policy.
>>
>> --
>> Dmitry Borodaenko
>>
>>
>> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
>> > Sergey,
>> >
>> > Please see [1]. Did we codify some of these roles and responsibilities
>> as a
>> > community in a spec? There was also a request to use terminology like
>> say
>> > MAINTAINERS in that email as well.
>> >
>> > Are we pulling the trigger a bit early for an actual election?
>> >
>> > Thanks,
>> > Dims
>> >
>> > [1] http://markmail.org/message/2ls5obgac6tvcfss
>> >
>> > On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <vkuklin at mirantis.com>
>> > wrote:
>> >
>> > > Sergey, Fuelers
>> > >
>> > > This is awesome news!
>> > >
>> > > By the way, I have a question on who is eligible to vote and to
>> nominate
>> > > themselves for both PTL and Component Leads. Could you elaborate on
>> that?
>> > >
>> > > And there is no such entity as Component Lead in OpenStack - so we are
>> > > actually creating one. What are the new rights and responsibilities
>> of CL?
>> > >
>> > > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <
>> slukjanov at mirantis.com>
>> > > wrote:
>> > >
>> > >> Hi folks,
>> > >>
>> > >> I'd like to announce that we're running the PTL and Component Leads
>> > >> elections. Detailed information available on wiki. [0]
>> > >>
>> > >> Project Team Lead: Manages day-to-day operations, drives the project
>> > >> team goals, resolves technical disputes within the project team. [1]
>> > >>
>> > >> Component Lead: Defines architecture of a module or component in
>> Fuel,
>> > >> reviews design specs, merges majority of commits and resolves
>> conflicts
>> > >> between Maintainers or contributors in the area of responsibility.
>> [2]
>> > >>
>> > >> Fuel has two large sub-teams, with roughly comparable codebases, that
>> > >> need dedicated component leads: fuel-library and fuel-python. [2]
>> > >>
>> > >> Nominees propose their candidacy by sending an email to the
>> > >> openstack-dev at lists.openstack.org mailing list, with the subject:
>> > >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
>> > >> (for example, "[fuel] fuel-library lead candidacy").
>> > >>
>> > >> Time line:
>> > >>
>> > >> PTL elections
>> > >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
>> position
>> > >> * September 29 - October 8: PTL elections
>> > >>
>> > >> Component leads elections (fuel-library and fuel-python)
>> > >> * October 9 - October 15: Open candidacy for Component leads
>> positions
>> > >> * October 16 - October 22: Component leads elections
>> > >>
>> > >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
>> > >> [1] https://wiki.openstack.org/wiki/Governance
>> > >> [2]
>> > >>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
>> > >> [3] https://lwn.net/Articles/648610/
>> > >>
>> > >> --
>> > >> Sincerely yours,
>> > >> Sergey Lukjanov
>> > >> Sahara Technical Lead
>> > >> (OpenStack Data Processing)
>> > >> Principal Software Engineer
>> > >> Mirantis Inc.
>> > >>
>> > >>
>> > >>
>> > >>
>> > >
>> > >
>> > > --
>> > > Yours Faithfully,
>> > > Vladimir Kuklin,
>> > > Fuel Library Tech Lead,
>> > > Mirantis, Inc.
>> > > +7 (495) 640-49-04
>> > > +7 (926) 702-39-68
>> > > Skype kuklinvv
>> > > 35bk3, Vorontsovskaya Str.
>> > > Moscow, Russia,
>> > > www.mirantis.com <http://www.mirantis.ru/>
>> > > www.mirantis.ru
>> > > vkuklin at mirantis.com
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> > --
>> > Davanum Srinivas :: https://twitter.com/dims
>>
>> >
>>
>>
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150919/20f78d1b/attachment.html>

From gal.sagie at gmail.com  Sun Sep 20 06:26:23 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Sun, 20 Sep 2015 09:26:23 +0300
Subject: [openstack-dev] [Neutron] Port Forwarding API
Message-ID: <CAG9LJa7gNwDauUahRNi1MRckppt=kdAbdEop6CV8vO9v62fH8A@mail.gmail.com>

Hello All,

I have sent a spec [1] to resume the work on the port forwarding API and
reference implementation.

It's currently marked as "WIP"; however, I raised some "TBD" questions
for the community.
The way I see it, port forwarding is an API that is very similar to the
floating IP API and implementation, with a few changes:

1) Port forwarding can only be defined on the router's external gateway
IP (or on additional public IPs that are located on the router), similar
to the case of centralized DNAT.

2) The same FIP address can be used for different mappings; for example,
a FIP with IP X can be used with different ports to map to different
VMs: X:4001 -> VM1 IP, X:4002 -> VM2 IP. (This is the essence of port
forwarding.) So we also need the port mapping configuration fields.

Everything else should probably behave (in my opinion) very similarly to
FIPs (for example, not being able to remove the external gateway if port
forwarding entries are configured; if the VM is deleted, its port
forwarding entry is deleted as well, and so on).
All of these points are mentioned in the spec, and I am waiting for
community feedback on them.

Implementation-wise, I am trying to figure out whether it would be smart
to reuse the floating IP implementation and extend it for this (given
that the mechanism described above already works for floating IPs), to
add a new implementation that behaves very similarly to floating IPs in
most aspects (but still differs in some), or something else...

Would love to hear the community's feedback on the spec, even though
it's still WIP.

Thanks
Gal.

[1] https://review.openstack.org/#/c/224727/
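For concreteness, a minimal Python sketch (illustrative names and addresses, not the proposed Neutron API) of the mapping table such an API implies, with the uniqueness constraint that one (FIP, external port) pair maps to at most one internal endpoint:

```python
class PortForwardingTable:
    """Illustrative model of FIP port-forwarding entries (not Neutron code)."""

    def __init__(self):
        # (fip, external_port) -> (internal_ip, internal_port)
        self._entries = {}

    def add(self, fip, external_port, internal_ip, internal_port):
        key = (fip, external_port)
        if key in self._entries:
            # The same FIP may be reused, but each external port only once.
            raise ValueError("mapping already exists for %s:%d" % key)
        self._entries[key] = (internal_ip, internal_port)

    def lookup(self, fip, external_port):
        return self._entries.get((fip, external_port))


table = PortForwardingTable()
# The same FIP X is reused with different external ports:
table.add("198.51.100.7", 4001, "10.0.0.11", 22)  # X:4001 -> VM1
table.add("198.51.100.7", 4002, "10.0.0.12", 22)  # X:4002 -> VM2
print(table.lookup("198.51.100.7", 4002))  # ('10.0.0.12', 22)
```

Whatever implementation route is chosen, this key structure is what distinguishes port forwarding from the one-to-one floating IP mapping.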

From him.funkyy at gmail.com  Sun Sep 20 07:33:40 2015
From: him.funkyy at gmail.com (himanshu sharma)
Date: Sun, 20 Sep 2015 13:03:40 +0530
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
Message-ID: <CAB+y+8Qtp+veJvzoTh6HftZus_Sk4tgpTEtyJhWKyQSM=tQ_iA@mail.gmail.com>

Hi,

Greetings for the day.

I am having trouble finding the CLI commands for Congress with which I
can create and delete a rule within a policy, and view the different
data sources.
Can you please provide me with the list of CLI commands for these
operations?
Looking forward to your reply.


Regards
Himanshu Sharma

On Sat, Sep 19, 2015 at 5:44 AM, Tim Hinrichs <tim at styra.com> wrote:

> It's great to have this available!  I think it'll help people understand
> what's going on MUCH more quickly.
>
> Some thoughts.
> - The image is 3GB, which took me 30 minutes to download.  Are all VMs
> this big?  I think we should finish this as a VM but then look into doing
> it with containers to make it EVEN easier for people to get started.
>
> - It gave me an error about a missing shared directory when I started up.
>
> - I expected devstack to be running when I launched the VM.  devstack
> startup time is substantial, and if there's a problem, it's good to assume
> the user won't know how to fix it.  Is it possible to have devstack up and
> running when we start the VM?  That said, it started up fine for me.
>
> - It'd be good to have a README to explain how to use the use-case
> structure. It wasn't obvious to me.
>
> - The top-level dir of the Congress_Usecases folder has a
> Congress_Usecases folder within it.  I assume the inner one shouldn't be
> there?
>
> - When I ran the 10_install_policy.sh, it gave me a bunch of authorization
> problems.
>
> But otherwise I think the setup looks reasonable.  Will there be an undo
> script so that we can run the use cases one after another without worrying
> about interactions?
>
> Tim
>
>
> On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:
>
>> Hi Congress folks,
>>
>>
>>
>> BTW the login/password for the VM is vagrant/vagrant
>>
>>
>>
>> -Shiv
>>
>>
>>
>>
>>
>> *From:* Shiv Haris [mailto:sharis at Brocade.com]
>> *Sent:* Thursday, September 17, 2015 5:03 PM
>> *To:* openstack-dev at lists.openstack.org
>> *Subject:* [openstack-dev] [Congress] Congress Usecases VM
>>
>>
>>
>> Hi All,
>>
>>
>>
>> I have put my VM (virtualbox) at:
>>
>>
>>
>> http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova
>>
>>
>>
>> I usually run this on a MacBook Air, but it should work on other
>> platforms as well. I chose VirtualBox since it is free.
>>
>>
>>
>> Please send me your usecases, and I can incorporate them in the VM and send
>> you an updated image. Please take a look at the structure I have in place for
>> the first usecase; I would prefer it be the same for other usecases. (However,
>> I am still open to suggestions for changes)
>>
>>
>>
>> Thanks,
>>
>>
>>
>> -Shiv
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/0d7a1eee/attachment.html>

From tengqim at linux.vnet.ibm.com  Sun Sep 20 08:24:57 2015
From: tengqim at linux.vnet.ibm.com (Qiming Teng)
Date: Sun, 20 Sep 2015 16:24:57 +0800
Subject: [openstack-dev] [Heat] Integration Test Questions
In-Reply-To: <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
References: <D218618E.ADE81%sabeen.syed@rackspace.com>
 <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
Message-ID: <20150920082456.GA11642@qiming-ThinkCentre-M58p>

Speaking of adding tests, we need hands on improving the Heat API tests in
Tempest [1]. The current test cases there are a weird combination of API
tests, resource type tests, template tests, etc. If we decide to move
functional tests back to individual projects, some test cases may need
to be deleted from Tempest.

Another important reason for adding API tests to Tempest is that the
orchestration service is assessed [2] by the DefCore team using
tests in Tempest, not in-tree test cases.

The Heat team has done a lot of work (and killed a lot of it) to make the API
as stable as possible. Most of the time, there will be nothing new to test.
The API surface tests may become nothing but a waste of time if
we keep running them for every single patch.

So... my suggestions:

- Remove unnecessary tests in Tempest;
- Stop adding API tests to Heat locally;
- Add API tests to Tempest instead, in an organized way. (refer to [3])

[1]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/orchestration/
[2] https://review.openstack.org/#/c/216983/
[3] https://review.openstack.org/#/c/210080/



From duncan.thomas at gmail.com  Sun Sep 20 12:16:44 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Sun, 20 Sep 2015 15:16:44 +0300
Subject: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any
 point in using _() inpython-novaclient?
In-Reply-To: <55FC23C0.3040105@suse.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
 <55EF0334.3030606@linux.vnet.ibm.com> <55FC23C0.3040105@suse.com>
Message-ID: <CAOyZ2aGT5O_K1nT6OKWd-TGkLoX0_H0XAz0KvPqErzSYexEDJg@mail.gmail.com>

Certainly for Cinder, and I suspect many other projects, the openstack
client is a wrapper for the python-cinderclient libraries, so if you want
translated exceptions then you need to translate python-cinderclient too,
unless I'm missing something?

On 18 September 2015 at 17:46, Andreas Jaeger <aj at suse.com> wrote:

> With the limited resources that the translation team has, we should not
> translate the clients but concentrate on the openstackclient, as discussed
> here:
>
>
> http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001402.html
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>        HRB 21284 (AG Nürnberg)
>     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
>
> _______________________________________________
> Openstack-i18n mailing list
> Openstack-i18n at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
>



-- 
-- 
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/b8aaadd3/attachment.html>

From aj at suse.com  Sun Sep 20 13:06:55 2015
From: aj at suse.com (Andreas Jaeger)
Date: Sun, 20 Sep 2015 15:06:55 +0200
Subject: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any
 point in using _() inpython-novaclient?
In-Reply-To: <CAOyZ2aGT5O_K1nT6OKWd-TGkLoX0_H0XAz0KvPqErzSYexEDJg@mail.gmail.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
 <55EF0334.3030606@linux.vnet.ibm.com> <55FC23C0.3040105@suse.com>
 <CAOyZ2aGT5O_K1nT6OKWd-TGkLoX0_H0XAz0KvPqErzSYexEDJg@mail.gmail.com>
Message-ID: <55FEAF6F.80805@suse.com>

On 09/20/2015 02:16 PM, Duncan Thomas wrote:
> Certainly for Cinder, and I suspect many other projects, the openstack
> client is a wrapper for python-cinderclient libraries, so if you want
> translated exceptions then you need to translate python-cinderclient
> too, unless I'm missing something?

Ah - let's investigate some more here.

Looking at python-cinderclient, I see translations only for the help
strings of the client, like in cinderclient/shell.py. Are there strings
in the cinder library that will be displayed to the user as well?

Andreas

> On 18 September 2015 at 17:46, Andreas Jaeger <aj at suse.com
> <mailto:aj at suse.com>> wrote:
>
>     With the limited resources that the translation team has, we should
>     not translate the clients but concentrate on the openstackclient, as
>     discussed here:
>
>     http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001402.html
>
>     Andreas
>     --
>       Andreas Jaeger aj@{suse.com <http://suse.com>,opensuse.org
>     <http://opensuse.org>} Twitter/Identica: jaegerandi
>        SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>         GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>             HRB 21284 (AG Nürnberg)
>          GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272
>     A126
>
>
>
>     _______________________________________________
>     Openstack-i18n mailing list
>     Openstack-i18n at lists.openstack.org
>     <mailto:Openstack-i18n at lists.openstack.org>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
>
>
>
>
> --
> --
> Duncan Thomas


-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
        HRB 21284 (AG Nürnberg)
     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



From rakhmerov at mirantis.com  Sun Sep 20 16:06:20 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Sun, 20 Sep 2015 19:06:20 +0300
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <1442667067-sup-828@lrrr.local>
References: <1442234537-sup-4636@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>
 <55F87083.6080408@gmail.com>
 <BF5D2E8A-A623-4CF2-9555-A054984D673C@mirantis.com>
 <1442667067-sup-828@lrrr.local>
Message-ID: <7C478C34-3A03-4EF7-BE9E-2624754B923C@mirantis.com>




> On 19 Sep 2015, at 16:04, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> Excerpts from Renat Akhmerov's message of 2015-09-19 00:35:49 +0300:
>> Doug,
>> 
>> python-mistralclient-1.1.0 (also on PyPI) is the final release for Liberty. Here's the patch updating global-requirements.txt: https://review.openstack.org/#/c/225330/ (upper-constraints.txt should be updated automatically soon, in my understanding)
> 
> Because we're in requirements freeze, we're trying not to update any of
> the global-requirements.txt entries unless absolutely necessary. At this
> point, no projects can be depending on the previously unreleased
> features in 1.1.0, so as long as python-mistralclient doesn't have a cap
> on the major version allowed in the requirements list, it should only be
> necessary to update the constraints.
> 
> Please update the constraints file by hand, only changing
> python-mistralclient. That will allow us to land the update without
> changing any other libraries in the test infrastructure (the automated
> update submits all of the changes together, and we have several
> outstanding right now).

Ok, understood.

https://review.openstack.org/#/c/225491/
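For reference, the hand edit described above amounts to pinning a single line
in upper-constraints.txt, along these lines (`===` being the constraints-file
pin convention; the version is whatever is being released):

```
python-mistralclient===1.1.0
```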
>> So far I have been doing release management for Mistral myself (~2 years), and for the last year I've been trying to stay aligned with the OpenStack schedule. In May 2015 Mistral was accepted into the Big Tent, so does that mean I'm no longer responsible for doing that? Or can I still do it on my own? Even with the final Mistral client for Liberty I did it just myself (didn't create a stable branch yet, though; maybe I shouldn't have). Clarifications would be helpful.
> 
> It means you can now ask the release management team to take over for
> the library, but that is not an automatic change.
>> Does this all apply to all Big Tent projects?
> 
> Yes, and to all horizontal teams. Every project team is expected
> to provide liaisons to all horizontal teams now. The degree to which
> a horizontal team does the work for you is up to each pair of teams
> to negotiate.

I'd prefer to take care of the Liberty releases myself. We don't have much time till the end of Liberty and we may not manage to establish all the required connections with the horizontal teams. Is that ok?

Renat Akhmerov
@ Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/9797aed3/attachment.html>

From nodir.qodirov at gmail.com  Sun Sep 20 16:17:49 2015
From: nodir.qodirov at gmail.com (Nodir Kodirov)
Date: Sun, 20 Sep 2015 09:17:49 -0700
Subject: [openstack-dev] [neutron] Neutron debugging tool
Message-ID: <CADL6tVORSnamoEuGAzQ45BNvKUBRvBfrQx25YMfa4aQnCaPHEg@mail.gmail.com>

Hello,

I am planning to develop a tool for network debugging. Initially, it
will handle the DVR case, which can later be extended to others too. Based
on my OpenStack deployment/operations experience, I am planning to
handle common pitfalls/misconfigurations, such as:
1) check external gateway validity
2) check if appropriate qrouter/qdhcp/fip namespaces are created in
compute/network hosts
3) execute probing commands inside namespaces, to verify reachability
4) etc.
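Check (2) above can be sketched as a small helper that inspects `ip netns`
output on a host. This is only a sketch of the idea: the qrouter-/qdhcp-
naming follows the standard Neutron agent convention, and the ssh transport
to remote hosts is left out:

```python
import subprocess

def list_namespaces(run=subprocess.check_output):
    """Return the network namespace names present on this host."""
    out = run(["ip", "netns", "list"]).decode()
    # Newer iproute2 appends "(id: N)"; keep only the namespace name.
    return [line.split()[0] for line in out.splitlines() if line.strip()]

def missing_namespaces(router_id, network_id, namespaces):
    """Return the expected qrouter/qdhcp namespaces that are absent."""
    expected = ["qrouter-%s" % router_id, "qdhcp-%s" % network_id]
    return [ns for ns in expected if ns not in namespaces]
```

On a DVR deployment the same check would be repeated per compute host (plus
fip- namespaces on nodes hosting floating IPs), with `run` replaced by an
ssh-backed callable built from the host credentials.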

I came across neutron-debug [1], which mostly focuses on namespace
debugging. Its coverage is limited to the OpenStack side, while I am planning
to cover the compute/network nodes as well. In my experience, I had to ssh
to the host(s) to accurately diagnose the failure (e.g., cases 1 and 2
above). The tool I am considering will handle these, given the host
credentials.

I'd like to get the community's feedback on the utility of such a debugging
tool. Do people use neutron-debug in their OpenStack environments? Does the
tool I am planning to develop, with complete diagnosis coverage, sound
useful? Is anyone interested in joining the development? All feedback is
welcome.

Thanks,

- Nodir

[1] http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html


From doug at doughellmann.com  Sun Sep 20 17:15:47 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Sun, 20 Sep 2015 13:15:47 -0400
Subject: [openstack-dev] [all][ptl][release] final liberty cycle client
	library releases needed
In-Reply-To: <7C478C34-3A03-4EF7-BE9E-2624754B923C@mirantis.com>
References: <1442234537-sup-4636@lrrr.local>
 <EA70533067B8F34F801E964ABCA4C4410F4D6345@G9W0745.americas.hpqcorp.net>
 <55F87083.6080408@gmail.com>
 <BF5D2E8A-A623-4CF2-9555-A054984D673C@mirantis.com>
 <1442667067-sup-828@lrrr.local>
 <7C478C34-3A03-4EF7-BE9E-2624754B923C@mirantis.com>
Message-ID: <1442769296-sup-6229@lrrr.local>

Excerpts from Renat Akhmerov's message of 2015-09-20 19:06:20 +0300:
> 
> > On 19 Sep 2015, at 16:04, Doug Hellmann <doug at doughellmann.com> wrote:
> > 
> > Excerpts from Renat Akhmerov's message of 2015-09-19 00:35:49 +0300:
> >> Doug,
> >> 
> >> python-mistralclient-1.1.0 (also on PyPI) is the final release for Liberty. Here's the patch updating global-requirements.txt: https://review.openstack.org/#/c/225330/ (upper-constraints.txt should be updated automatically soon, in my understanding)
> > 
> > Because we're in requirements freeze, we're trying not to update any of
> > the global-requirements.txt entries unless absolutely necessary. At this
> > point, no projects can be depending on the previously unreleased
> > features in 1.1.0, so as long as python-mistralclient doesn't have a cap
> > on the major version allowed in the requirements list, it should only be
> > necessary to update the constraints.
> > 
> > Please update the constraints file by hand, only changing
> > python-mistralclient. That will allow us to land the update without
> > changing any other libraries in the test infrastructure (the automated
> > update submits all of the changes together, and we have several
> > outstanding right now).
> 
> Ok, understood.
> 
> https://review.openstack.org/#/c/225491/

+2

> >> So far I have been doing release management for Mistral myself (~2 years), and for the last year I've been trying to stay aligned with the OpenStack schedule. In May 2015 Mistral was accepted into the Big Tent, so does that mean I'm no longer responsible for doing that? Or can I still do it on my own? Even with the final Mistral client for Liberty I did it just myself (didn't create a stable branch yet, though; maybe I shouldn't have). Clarifications would be helpful.
> > 
> > It means you can now ask the release management team to take over for
> > the library, but that is not an automatic change.
> >> Does this all apply to all Big Tent projects?
> > 
> > Yes, and to all horizontal teams. Every project team is expected
> > to provide liaisons to all horizontal teams now. The degree to which
> > a horizontal team does the work for you is up to each pair of teams
> > to negotiate.
> 
> I'd prefer to take care of the Liberty releases myself. We don't have much time till the end of Liberty and we may not manage to establish all the required connections with the horizontal teams. Is that ok?

That makes a lot of sense.

Doug

> 
> Renat Akhmerov
> @ Mirantis Inc.


From john.griffith8 at gmail.com  Sun Sep 20 17:30:15 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Sun, 20 Sep 2015 11:30:15 -0600
Subject: [openstack-dev] [CINDER] [PTL Candidates] Questions
Message-ID: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>

PTL nomination emails are good, but I have a few questions that I'd like
to ask to help me in making my vote.  Some of these are covered in the
general proposal announcements, but I'd love to hear some more detail.

It would be awesome if the Cinder candidates could spend some time and
answer these to help me (and maybe others) make an informed choice:

1. Do you actually have the time to spend to be PTL?

I don't think many people realize the time commitment. It ranges from being
on top of reviews and keeping a pretty consistent view of what's going on and
in process, to meetings, questions on IRC, program management type stuff, etc.
Do you feel you'll have the ability for PTL to be your full-time job?
Don't forget you're working with folks in a community that spans multiple
time zones.

2. What are your plans to make the Cinder project as a core component
better (no... really, what specifically and how does it make Cinder better)?

Most candidates are representing a storage vendor naturally.  Everyone says
"make Cinder better"; But how do you intend to balance vendor interest and
the interest of the general project?  Where will your focus in the M
release be?  On your vendor code or on Cinder as a whole?  Note; I'm not
suggesting that anybody isn't doing the "right" thing here, I'm just asking
for specifics.

3. Why do you want to be PTL for Cinder?

Seems like a silly question, but really, when you start asking it, the
answers can be surprising and somewhat enlightening.  There are different
motivators for people; what's yours?  By the way, "my employer pays me a
big bonus if I win" is a perfectly acceptable answer in my opinion; I'd
prefer honesty over anything else.  You may not get my vote, but you'd get
respect.

Thanks,
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/a82281bc/attachment.html>

From ton at us.ibm.com  Sun Sep 20 18:08:23 2015
From: ton at us.ibm.com (Ton Ngo)
Date: Sun, 20 Sep 2015 11:08:23 -0700
Subject: [openstack-dev]  [magnum] Handling password for k8s
Message-ID: <201509201808.t8KI8VoA018814@d03av05.boulder.ibm.com>



Hi everyone,
    I am running into a potential issue in implementing the support for
load balancers in k8s services.  After a chat with sdake, I would like to
run this by the team for feedback/suggestions.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
	type: LoadBalancer
   Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor.  The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
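For reference, the attribute mentioned above sits in the service spec; a
minimal manifest (standard Kubernetes v1 Service fields, with illustrative
names) looks roughly like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend        # illustrative name
spec:
  type: LoadBalancer    # tells k8s to provision a cloud load balancer
  selector:
    app: frontend
  ports:
  - port: 80            # VIP port exposed by the load balancer
    targetPort: 8080    # container port on the pods
```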
   To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node.  This includes the username, tenant name,
and password.  When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
    The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
    Ideally, the best solution is to pass the authenticated token to k8s to
use, but this would require a sizeable change upstream in k8s.  We have good
reason to pursue this, but it will take time.
    For now, my current implementation is as follows:
   In a bay-create, magnum client adds the password to the API call
   (normally it authenticates and sends the token)
   The conductor picks it up and uses it as an input parameter to the heat
   templates
   When configuring the master node, the password is saved in the config
   file for k8s services.
   Magnum does not store the password internally.
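One small hardening note on the flow above: when the password is passed as a
Heat template input, the parameter can at least be declared `hidden` so Heat
masks it when stack parameters are listed. A sketch of such a parameter
(`user_password` is an illustrative name, not the actual template field):

```yaml
parameters:
  user_password:
    type: string
    hidden: true   # masked in parameter listings
    description: Credential written into the k8s config on the master node
```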

    This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So, leaving aside
the issue of how k8s should be changed, the question is: is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/0873ea77/attachment.html>

From e0ne at e0ne.info  Sun Sep 20 20:49:45 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Sun, 20 Sep 2015 23:49:45 +0300
Subject: [openstack-dev] [CINDER] [PTL Candidates] Questions
In-Reply-To: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
Message-ID: <CAGocpaHuJvxsaiSm+0pzjT0+GPw6Q3CFSWNRL3pdT2NQAw4aBQ@mail.gmail.com>

Hi John,

Thank you for these questions. Such questions, with answers, could be a good
part of PTL proposals in the future.

Please, see my answers inline.

Regards,
Ivan Kolodyazhny

On Sun, Sep 20, 2015 at 8:30 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

> PTL nomination emails are good, but I have a few questions that I'd like
> to ask to help me in making my vote.  Some of these are covered in the
> general proposal announcements, but I'd love to hear some more detail.
>
> It would be awesome if the Cinder candidates could spend some time and
> answer these to help me (and maybe others) make an informed choice:
>
> 1. Do you actually have the time to spend to be PTL?
>
> I don't think many people realize the time commitment. Between being on
> top of reviews and having a pretty consistent view of what's going on and
> in process; to meetings, questions on IRC, program management type stuff
> etc.
>

I sincerely admire any PTL who can stay on top of reviews, commits, etc.
Cross-project meetings and activities will take a lot of time. IRC
participation is required for every active contributor, especially for
PTLs. Talking about Cinder, we need to remember that not only the community
is involved in the project. Many vendors have their drivers and don't
contribute to other parts of the project. The Cinder PTL is also responsible
for communication and collaboration with vendors to make their drivers
work with Cinder.


> Do you feel you'll have the ability for PTL to be your FULL Time job?
>
It was the first question that I asked myself before the nomination.


> Don't forget you're working with folks in a community that spans multiple
> time zones.
>

Sure, I can't forget it, because I spend time almost every night in our
#openstack-cinder channel. Talking about something more measurable, I would
like to point to this report: http://stackalytics.com/report/users/e0ne.



>
> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder better)?
>
> Most candidates are representing a storage vendor naturally.  Everyone
> says "make Cinder better"; But how do you intend to balance vendor interest
> and the interest of the general project?  Where will your focus in the M
> release be?  On your vendor code or on Cinder as a whole?  Note; I'm not
> suggesting that anybody isn't doing the "right" thing here, I'm just asking
> for specifics.
>

My company doesn't have its own driver. I don't want to talk about the Block
Device Driver now. I'll be the person who will create the patch removing it
after the M-2 milestone if this driver doesn't have CI and the minimum
required feature set.

As a Cinder user and contributor, I'm interested in making the Cinder core
more flexible (e.g. working without Nova), better tested (e.g. functional
tests, 3rd-party CI, unit test coverage, etc.), and in making our users
happier with it.


>
> 3. Why do you want to be PTL for Cinder?
>
> Seems like a silly question, but really when you start asking that
> question the answers can be surprising and somewhat enlightening.  There's
> different motivators for people, what's yours?  By the way, "my employer
> pays me a big bonus if I win" is a perfectly acceptable answer in my
> opinion, I'd prefer honesty over anything else.  You may not get my vote,
> but you'd get respect.
>

OpenStack itself is a very dynamic project. It makes big progress each
release and varies greatly from one release to another. It's a real
community-driven project, and I think that PTLs are not dictators; PTLs
only help the community work on Cinder to make it better. Each person, as a
PTL, can bring something new to the community. Sometimes "something new"
may mean "something bad", but after each mistake we'll do our work better.
Being PTL helps one to understand not only Cinder developers' needs; it
helps to understand the needs of other OpenStack projects. A PTL should take
care not only of Cinder or Nova or Heat. IMO, the main task of each PTL is to
coordinate the developers of one project, other OpenStack developers, and
vendors to work together on a regular basis. It will be a very big challenge
for me, and I'll do my best to make Cinder better as PTL. I'm sure the
Cinder community will help our new PTL a lot with it.


> Thanks,
> John
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Ivan.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/d5e078a8/attachment.html>

From mscherbakov at mirantis.com  Sun Sep 20 20:56:26 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Sun, 20 Sep 2015 20:56:26 +0000
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
Message-ID: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>

Hi all,
as part of my larger proposal on improvements to the code review workflow
[1], we need to have cores per repository, not for the whole of Fuel. This is
the path we have been taking for a while, with new core reviewers added to
specific repos only. Now we need to complete this work.

My proposal is:

   1. Get rid of one common fuel-core [2] group, members of which can merge
   code anywhere in Fuel. Some members of this group may cover a couple of
   repositories, but can't really be cores in all repos.
   2. Extend existing groups, such as fuel-library [3], with members from
   fuel-core who are keeping up with large number of reviews / merges. This
   data can be queried at Stackalytics.
   3. Establish a new group "fuel-infra", and ensure that it's included
   into any other core group. This is for maintenance purposes, it is expected
   to be used only in exceptional cases. Fuel Infra team will have to decide
   whom to include into this group.
   4. Ensure that fuel-plugin-* repos will not be affected by removal of
   fuel-core group.

#2 needs specific details. Stackalytics can show active cores easily, we
can look at people with *:
http://stackalytics.com/report/contribution/fuel-web/180. This is for
fuel-web; change the link for other repos accordingly. If people were added
specifically to a particular group, they are left as is (some of them are no
longer active, but let's clean them up separately from this group
restructure process).

   - fuel-library-core [3] group will have following members: Bogdan D.,
   Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
   - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
   Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
   - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
   - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
   - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
   Urlapova
   - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
   Konstantinov, Olga Gusarenko
   - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry Pyzhov,
   Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
   - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
   - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
   Sledzinsky, Dmitry Shulyak
   - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
   - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
   Urlapova
   - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly Kramskikh
   - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey Kasatkin
   (this project seems to be dead; let's consider ripping it out)
   - fuel-specs-core: there is no such a group at the moment. I propose to
   create one with following members, based on stackalytics data [16]: Vitaly
   Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir Kuklin,
   Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko, Mike
   Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge after
   Fuel PTL/Component Leads elections
   - fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
   Gelbukh, Ilya Kharin
   - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly Parakhin
   - fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
   Schultz, Evgeny Li, Igor Kalnitsky
   - fuel-provision: repo seems to be outdated, needs to be removed.

I suggest making the changes to the groups first, and then separately
addressing specific issues like removing someone from cores (not doing
enough reviews anymore, or too many positive reviews, let's say > 95%).

I hope I haven't missed anyone / anything. Please check carefully.
Comments / objections?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
[2] https://review.openstack.org/#/admin/groups/209,members
[3] https://review.openstack.org/#/admin/groups/658,members
[4] https://review.openstack.org/#/admin/groups/664,members
[5] https://review.openstack.org/#/admin/groups/655,members
[6] https://review.openstack.org/#/admin/groups/646,members
[7] https://review.openstack.org/#/admin/groups/656,members
[8] https://review.openstack.org/#/admin/groups/657,members
[9] https://review.openstack.org/#/admin/groups/659,members
[10] https://review.openstack.org/#/admin/groups/1000,members
[11] https://review.openstack.org/#/admin/groups/660,members
[12] https://review.openstack.org/#/admin/groups/661,members
[13] https://review.openstack.org/#/admin/groups/662,members
[14] https://review.openstack.org/#/admin/groups/663,members
[15] https://review.openstack.org/#/admin/groups/624,members
[16] http://stackalytics.com/report/contribution/fuel-specs/180


-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/f0006c9e/attachment.html>

From hongbin.lu at huawei.com  Sun Sep 20 23:26:19 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Sun, 20 Sep 2015 23:26:19 +0000
Subject: [openstack-dev] [magnum] Handling password for k8s
In-Reply-To: <201509201808.t8KI8VoA018814@d03av05.boulder.ibm.com>
References: <201509201808.t8KI8VoA018814@d03av05.boulder.ibm.com>
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCE62F1@SZXEMI503-MBS.china.huawei.com>

Hi Ton,

If I understand your proposal correctly, it means the input password will be exposed to users in the same tenant (since the password is passed as a stack parameter, which is visible within the tenant). If users are not admins, they don't have the privilege to create a temp user. As a result, users would have to expose their own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user dedicated to communication between k8s and the neutron load balancer service. The password of that user can be written into the config file, picked up by the conductor, and passed to heat. The drawback is that there is no multi-tenancy for the openstack load balancer service, since all bays would share the same credential.

Another solution I can think of is to have magnum create a keystone domain [1] for each bay (using the admin credential in the config file), and assign the bay's owner to that domain. As a result, the user will have the privilege to create a bay user within that domain. It seems Heat supports native keystone resources [2], which makes the administration of keystone users much easier. The drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
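A rough sketch of how the per-bay domain flow could look, assuming a keystoneclient v3 Client; all names and helper signatures here are illustrative, not an actual Magnum implementation:

```python
def create_bay_credentials(keystone, bay_id, owner_id, bay_password):
    """Illustrative per-bay keystone domain flow (not Magnum code).

    Creates a dedicated domain for the bay, grants the bay's owner
    admin on that domain, and creates the bay user whose credentials
    would go into the k8s config file.
    """
    # One domain per bay isolates the bay user from the rest of the cloud.
    domain = keystone.domains.create(name='magnum-bay-%s' % bay_id)

    # The bay's owner gets admin on the domain so they can manage its users.
    admin_role = keystone.roles.find(name='admin')
    keystone.roles.grant(admin_role, user=owner_id, domain=domain.id)

    # Dedicated user whose password is handed to k8s on the master node.
    bay_user = keystone.users.create(name='bay-%s-k8s' % bay_id,
                                     domain=domain.id,
                                     password=bay_password)
    return domain, bay_user
```

With the Heat native keystone resources [2], the same steps could instead be expressed as resources in the bay's stack template.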

Best regards,
Hongbin

From: Ton Ngo [mailto:ton at us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load balancer in k8s services. After a chat with sdake, I would like to run this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, all k8s pods and services run within a private subnet (on Flannel); they can access each other, but they cannot be accessed from the external network. The way to publish an endpoint to the external network is by specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, members, VIP, monitor. The user would associate the VIP with a floating IP and then the endpoint of the service would be accessible from the external internet.
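For context, a complete (illustrative) service manifest using this attribute might look like the following; the service name and selector labels are hypothetical:

```yaml
# Illustrative k8s v1 Service manifest; 'my-web' names are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: LoadBalancer   # triggers the cloud-provider (Neutron) load balancer
  selector:
    app: my-web
  ports:
  - port: 80           # port exposed on the Neutron VIP
    targetPort: 8080   # container port on the pods
```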
To talk to Neutron, k8s needs the user credential and this is stored in a config file on the master node. This includes the username, tenant name, password. When k8s starts up, it will load the config file and create an authenticated client with Keystone.
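The config file in question is the OpenStack cloud-provider configuration that k8s loads at startup; roughly (key names as used by the k8s OpenStack provider, values illustrative, and, to the point of this thread, the password sits here in plain text):

```ini
; Illustrative k8s OpenStack cloud-provider config on the master node.
[Global]
auth-url = http://keystone.example.com:5000/v2.0
username = k8s-user
password = s3cret
tenant-name = demo
region = RegionOne
```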
The issue we need to find a good solution for is how to handle the password. With the current effort on security to make Magnum production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s, but this would require a sizeable change upstream in k8s. We have good reason to pursue this, but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call (normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat templates
  3.  When configuring the master node, the password is saved in the config file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can deprecate it later when we have a better solution. So, leaving aside the issue of how k8s should be changed, the question is: is this approach reasonable for the time being, or is there a better approach?

Ton Ngo,
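One partial mitigation for step 2 of the flow Ton describes: HOT templates can mark a parameter as hidden, which masks the value in heat's API output (though, per Hongbin's point above, it remains a stack parameter). A minimal sketch, with an illustrative parameter name:

```yaml
# Illustrative HOT fragment; 'user_password' is a hypothetical name.
heat_template_version: 2014-10-16

parameters:
  user_password:
    type: string
    hidden: true    # masked in stack-show / parameter listings
```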
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/b7141e9e/attachment.html>

From skywalker.nick at gmail.com  Mon Sep 21 01:40:01 2015
From: skywalker.nick at gmail.com (Li Ma)
Date: Mon, 21 Sep 2015 09:40:01 +0800
Subject: [openstack-dev] [neutron] Neutron debugging tool
In-Reply-To: <CADL6tVORSnamoEuGAzQ45BNvKUBRvBfrQx25YMfa4aQnCaPHEg@mail.gmail.com>
References: <CADL6tVORSnamoEuGAzQ45BNvKUBRvBfrQx25YMfa4aQnCaPHEg@mail.gmail.com>
Message-ID: <CALFEDVdMdkjB1X002AA0DhvdLN4zDgeY-MyGOC_guopX8Gp1EA@mail.gmail.com>

AFAIK, there is a project available on GitHub that does the same thing:
https://github.com/yeasy/easyOVS

I used it before.

On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodirov at gmail.com> wrote:
> Hello,
>
> I am planning to develop a tool for network debugging. Initially, it
> will handle the DVR case, which can also be extended to others. Based
> on my OpenStack deployment/operations experience, I am planning to
> handle common pitfalls/misconfigurations, such as:
> 1) check external gateway validity
> 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> compute/network hosts
> 3) execute probing commands inside namespaces, to verify reachability
> 4) etc.
>
> I came across neutron-debug [1], which mostly focuses on namespace
> debugging. Its coverage is limited to the OpenStack side, while I am
> planning to cover compute/network nodes as well. In my experience, I
> had to ssh to the host(s) to accurately diagnose the failure (e.g.,
> cases 1 and 2 above). The tool I am considering will handle these,
> given the host credentials.
>
> I'd like to get the community's feedback on the utility of such a
> debugging tool. Do people use neutron-debug in their OpenStack
> environments? Does the tool I am planning to develop, with complete
> diagnosis coverage, sound useful? Is anyone interested in joining the
> development? All feedback is welcome.
>
> Thanks,
>
> - Nodir
>
> [1] http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
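Checks (2) and (3) in the list above essentially reduce to comparing which namespaces exist on each host and probing from inside them. A minimal sketch of how such a tool could assemble those commands (function names hypothetical; in practice they would run over ssh as root):

```python
def netns_probe_cmd(namespace, target_ip, count=3):
    """Build the command a debugging tool would run to test
    reachability from inside a Neutron network namespace."""
    return ['ip', 'netns', 'exec', namespace,
            'ping', '-c', str(count), target_ip]


def expected_namespaces(router_id, network_id):
    """Namespace names Neutron conventionally creates; a checker can
    compare these against `ip netns list` output on each host."""
    return ['qrouter-%s' % router_id, 'qdhcp-%s' % network_id]
```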
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.nick at gmail.com


From sean.mcginnis at gmx.com  Mon Sep 21 01:52:46 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Sun, 20 Sep 2015 20:52:46 -0500
Subject: [openstack-dev] [CINDER] [PTL Candidates] Questions
In-Reply-To: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
Message-ID: <20150921015244.GA14517@gmx.com>

On Sun, Sep 20, 2015 at 11:30:15AM -0600, John Griffith wrote:
> PTL nomination emails are good, but I have a few questions that I'd like
> to ask to help me in making my vote.  Some of these are covered in the
> general proposal announcements, but I'd love to hear some more detail.
> 
> It would be awesome if the Cinder candidates could spend some time and
> answer these to help me (and maybe others) make an informed choice:

Great idea John. We have a lot of candidates this time around, so it's
probably a good idea to get a little more info before the election is
over.

> 
> 1. Do you actually have the time to spend to be PTL
> 

Yes. Prior to submitting my name I had a few conversations with my
management to make sure this would be something they would support.

I have been assured I could make Cinder my primary and full-time
responsibility should I be elected.

> 
> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder better)?
> 
> Most candidates are representing a storage vendor naturally.  Everyone says
> "make Cinder better"; But how do you intend to balance vendor interest and
> the interest of the general project?  Where will your focus in the M
> release be?  On your vendor code or on Cinder as a whole?  Note; I'm not
> suggesting that anybody isn't doing the "right" thing here, I'm just asking
> for specifics.

Even though we have some vendor code in Cinder, I'm lucky enough to have
a couple of folks on my team who take care of anything to do with our
driver. My role would be to focus specifically on core and overall
contributions (multi-vendor contributions, cross-project collaboration,
etc.).

I can't say I have a specific, actionable plan for exactly what I would
do to "make Cinder better". I think we already have several initiatives
under way in that respect, and I would help make those happen. I would
see my role as more of a facilitator, helping to provide support and
focus resources on accomplishing these goals.

I would also work with OpenStack operators, other projects that are
consumers of Cinder services, and the community at large to make sure
Cinder is meeting their block storage needs.

> 
> 3. Why do you want to be PTL for Cinder?
> 
> Seems like a silly question, but really when you start asking that question
> the answers can be surprising and somewhat enlightening.  There are different
> motivators for people; what's yours?  By the way, "my employer pays me a
> big bonus if I win" is a perfectly acceptable answer in my opinion, I'd
> prefer honesty over anything else.  You may not get my vote, but you'd get
> respect.

I won't get a big bonus, and I doubt I would get any kind of promotion
or increase out of this. What I would get is the ability to focus full
time on OpenStack and Cinder. Right now it is one of several of my 
responsibilities, and something that I spend a lot of my own time
on because I enjoy working on the project and with the folks
involved, and I believe in the future of OpenStack.

I want to be PTL because I feel I could be a "facilitator" of all the
different efforts underway, helping to drive them to completion and to
keep distractions away from all the smart folks who are
getting things done. I can organize, communicate, and simplify our efforts.

> 
> Thanks,
> John



From ayshihanzhang at 126.com  Mon Sep 21 01:57:42 2015
From: ayshihanzhang at 126.com (shihanzhang)
Date: Mon, 21 Sep 2015 09:57:42 +0800 (CST)
Subject: [openstack-dev] [Neutron] Port Forwarding API
In-Reply-To: <CAG9LJa7gNwDauUahRNi1MRckppt=kdAbdEop6CV8vO9v62fH8A@mail.gmail.com>
References: <CAG9LJa7gNwDauUahRNi1MRckppt=kdAbdEop6CV8vO9v62fH8A@mail.gmail.com>
Message-ID: <34a7617b.375e.14fed9ef851.Coremail.ayshihanzhang@126.com>



     2) The same FIP address can be used for different mappings, for example FIP with IP X

          can be used with different ports to map to different VM's X:4001  -> VM1 IP    

          X:4002 -> VM2 IP (This is the essence of port forwarding).

         So we also need the port mapping configuration fields


For the second use case, I have a question: does it support DVR? If VM1 and VM2 are on
different compute nodes, how does it work?





On 2015-09-20 14:26:23, "Gal Sagie" <gal.sagie at gmail.com> wrote:

Hello All,


I have sent a spec [1] to resume the work on port forwarding API and reference implementation.


It's currently marked as "WIP"; however, I raised some "TBD" questions for the community.

The way I see it, port forwarding is an API that is very similar to the floating IP API and implementation,

with a few changes:


1) Port forwarding can only be defined on the router's external gateway IP (or additional public IPs

   that are located on the router).  (Similar to the case of centralized DNAT)


2) The same FIP address can be used for different mappings, for example FIP with IP X

    can be used with different ports to map to different VM's X:4001  -> VM1 IP   

    X:4002 -> VM2 IP (This is the essence of port forwarding).

    So we also need the port mapping configuration fields
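The mapping described in (2) can be modeled as a table keyed by (FIP, external port), with each key pointing at exactly one (VM IP, internal port) target. A small illustrative sketch (not the proposed API):

```python
def add_port_forwarding(table, fip, external_port, vm_ip, internal_port):
    """Record one port forwarding entry; each (FIP, external port)
    pair maps to exactly one (VM IP, internal port) target.
    Names and structure are illustrative only."""
    key = (fip, external_port)
    if key in table:
        raise ValueError('port %d on %s is already forwarded'
                         % (external_port, fip))
    table[key] = (vm_ip, internal_port)
    return table


# X:4001 -> VM1 and X:4002 -> VM2 can share the same FIP:
t = {}
add_port_forwarding(t, '198.51.100.10', 4001, '10.0.0.11', 8080)
add_port_forwarding(t, '198.51.100.10', 4002, '10.0.0.12', 8080)
```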


All the rest should probably behave (in my opinion) very similarly to FIPs (for example,

not being able to remove the external gateway if port forwarding entries are configured;

if the VM is deleted, the port forwarding entry is deleted as well, and so on..)

All of these points are mentioned in the spec, and I am waiting for the community's feedback

on them.


I am trying to figure out whether, implementation-wise, it would be smart to try and use the floating IP

implementation and extend it for this (given all the above mechanism described above already

works for floating IP's)

Or add a new implementation which behaves very similarly to floating IPs in most aspects

(But still differ in some)

Or something else...


Would love to hear the community's feedback on the spec, even though it's WIP


Thanks
Gal.

[1] https://review.openstack.org/#/c/224727/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/77432e9b/attachment.html>

From sbaker at redhat.com  Mon Sep 21 02:01:06 2015
From: sbaker at redhat.com (Steve Baker)
Date: Mon, 21 Sep 2015 14:01:06 +1200
Subject: [openstack-dev] [Heat] Integration Test Questions
In-Reply-To: <20150920082456.GA11642@qiming-ThinkCentre-M58p>
References: <D218618E.ADE81%sabeen.syed@rackspace.com>
 <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
 <20150920082456.GA11642@qiming-ThinkCentre-M58p>
Message-ID: <55FF64E2.6000607@redhat.com>

On 20/09/15 20:24, Qiming Teng wrote:
> Speaking of adding tests, we need help improving the Heat API tests in
> Tempest [1]. The current test cases there are a weird combination of API
> tests, resource type tests, template tests, etc. If we decide to move
> functional tests back to individual projects, some test cases may need
> to be deleted from tempest.
>
> Another important reason for adding API tests to Tempest is that
> the orchestration service is assessed [2] by the DefCore team using
> tests in Tempest, not in-tree test cases.
>
> The heat team has done a lot of work (and killed a lot) to make the API as
> stable as possible. Most of the time, there would be nothing new to
> test. The API surface tests may become nothing but a waste of time if
> we keep running them for every single patch.
Thanks for raising this. Wherever they live we do need a dedicated set 
of tests which ensure the REST API is fully exercised.
> So... my suggestions:
>
> - Remove unnecessary tests in Tempest;
agreed
> - Stop adding API tests to Heat locally;
> - Add API tests to Tempest instead, in an organized way. (refer to [3])
I would prefer an alternative approach which would result in the same 
end state:
- port heat_integrationtests to tempest-lib
- build a suite of REST API tests in heat_integrationtests
- work with defcore to identify which heat_integrationtests tests to 
move to tempest
> [1]
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/orchestration/
> [2] https://review.openstack.org/#/c/216983/
> [3] https://review.openstack.org/#/c/210080/
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From stdake at cisco.com  Mon Sep 21 02:34:18 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Mon, 21 Sep 2015 02:34:18 +0000
Subject: [openstack-dev] [magnum] Handling password for k8s
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6BCE62F1@SZXEMI503-MBS.china.huawei.com>
References: <201509201808.t8KI8VoA018814@d03av05.boulder.ibm.com>
 <0957CD8F4B55C0418161614FEC580D6BCE62F1@SZXEMI503-MBS.china.huawei.com>
Message-ID: <D224BA39.12F90%stdake@cisco.com>

Hongbin,

I believe the domain approach is the preferred approach for the solution long term.  It will require more R&D to execute than other options, but it will also be completely secure.

Regards
-steve


From: Hongbin Lu <hongbin.lu at huawei.com<mailto:hongbin.lu at huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the inputted password will be exposed to users in the same tenant (since the password is passed as stack parameter, which is exposed within tenant). If users are not admin, they don't have privilege to create a temp user. As a result, users have to expose their own password to create a bay, which is suboptimal.

A slightly amendment is to have operator to create a user that is dedicated for communication between k8s and neutron load balancer service. The password of the user can be written into config file, picked up by conductor and passed to heat. The drawback is that there is no multi-tenancy for openstack load balancer service, since all bays will share the same credential.

Another solution I can think of is to have magnum to create a keystone domain [1] for each bay (using admin credential in config file), and assign bay's owner to that domain. As a result, the user will have privilege to create a bay user within that domain. It seems Heat supports native keystone resource [2], which makes the administration of keystone users much easier. The drawback is the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:ton at us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load balancer in k8s services. After a chat with sdake, I would like to run this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, all k8s pods and services run within a private subnet (on Flannel) and they can access each other but they cannot be accessed from external network. The way to publish an endpoint to the external network is by specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, members, VIP, monitor. The user would associate the VIP with a floating IP and then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a config file on the master node. This includes the username, tenant name, password. When k8s starts up, it will load the config file and create an authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. With the current effort on security to make Magnum production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, but this will require sizeable change upstream in k8s. We have good reason to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call (normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat templates
  3.  When configuring the master node, the password is saved in the config file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can deprecate it later when we have a better solution. So leaving aside the issue of how k8s should be changed, the question is: is this approach reasonable for the time, or is there a better approach?

Ton Ngo,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/75b9ab9e/attachment.html>

From choudharyvikas16 at gmail.com  Mon Sep 21 03:59:27 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Mon, 21 Sep 2015 09:29:27 +0530
Subject: [openstack-dev] [magnum] Handling password for k8s
Message-ID: <CABJxuZoBeK=rrBfqLy=VOx4Ff8AQObyhMhg6wLO7ZUcq2aUPbQ@mail.gmail.com>

Hi Ton,

Kube-masters will be nova instances only, and because any access to
nova instances is already secured using keystone, I am not able
to understand the concerns with storing the password on
master nodes.

Can you please list the concerns with our current approach?

-Vikas Choudhary

Hi everyone,
I am running into a potential issue in implementing the support for
load balancer in k8s services.  After a chat with sdake, I would like to
run this by the team for feedback/suggestion.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from the external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
        type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor.  The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node.  This includes the username, tenant name,
password.  When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require a sizeable change upstream in k8s.  We have good
reason to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the
heat templates
  3.  When configuring the master node, the password is saved in the
config file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now.  We can
deprecate it later when we have a better solution.  So leaving aside the
issue of how k8s should be changed, the question is: is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/435da034/attachment.html>

From donald.d.dugger at intel.com  Mon Sep 21 04:10:43 2015
From: donald.d.dugger at intel.com (Dugger, Donald D)
Date: Mon, 21 Sep 2015 04:10:43 +0000
Subject: [openstack-dev] [nova-scheduler] no IRC meeting this week
Message-ID: <6AF484C0160C61439DE06F17668F3BCB53FFB504@ORSMSX114.amr.corp.intel.com>

As discussed last week we won't have a meeting this Mon., 9/21.  Everyone can concentrate on getting Liberty out the door and we'll meet again next week, 9/28, to talk about Mitaka planning a little.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/5554c405/attachment.html>

From gal.sagie at gmail.com  Mon Sep 21 04:28:25 2015
From: gal.sagie at gmail.com (Gal Sagie)
Date: Mon, 21 Sep 2015 07:28:25 +0300
Subject: [openstack-dev] [Neutron] Port Forwarding API
In-Reply-To: <34a7617b.375e.14fed9ef851.Coremail.ayshihanzhang@126.com>
References: <CAG9LJa7gNwDauUahRNi1MRckppt=kdAbdEop6CV8vO9v62fH8A@mail.gmail.com>
 <34a7617b.375e.14fed9ef851.Coremail.ayshihanzhang@126.com>
Message-ID: <CAG9LJa4X4U-PGXDFKPCtpPbyZt=uTVq34R5ZhN3RFuOoJQKoEQ@mail.gmail.com>

Hi shihanzhang,

As mentioned in the spec, this doesn't support distributed FIPs; it will
still work
if the VMs are on different compute nodes, similar to the way centralized
DNAT works (from the network node).

Distributing port forwarding entries is, in my opinion, similar to
distributing SNAT, and when
there is a consensus in the community regarding SNAT distribution (if
it's really fully needed),
I think that any solution will also fit port forwarding distribution.
(But that's not the scope of this proposed spec.)

Gal.

On Mon, Sep 21, 2015 at 4:57 AM, shihanzhang <ayshihanzhang at 126.com> wrote:

>
>      2) The same FIP address can be used for different mappings, for
> example FIP with IP X
>           can be used with different ports to map to different VM's
> X:4001  -> VM1 IP
>           X:4002 -> VM2 IP (This is the essence of port forwarding).
>          So we also need the port mapping configuration fields
>
> For the second use case, I have a question, does it support DVR?  if VM1
> and VM2 are on
> different compute nodes, how does it work?
>
>
>
>
> On 2015-09-20 14:26:23, "Gal Sagie" <gal.sagie at gmail.com> wrote:
>
> Hello All,
>
> I have sent a spec [1] to resume the work on port forwarding API and
> reference implementation.
>
> Its currently marked as "WIP", however i raised some "TBD" questions for
> the community.
> The way i see port forwarding is an API that is very similar to floating
> IP API and implementation
> with few changes:
>
> 1) Can only define port forwarding on the router external gateway IP (or
> additional public IPs
>    that are located on the router.  (Similar to the case of centralized
> DNAT)
>
> 2) The same FIP address can be used for different mappings, for example
> FIP with IP X
>     can be used with different ports to map to different VM's X:4001  ->
> VM1 IP
>     X:4002 -> VM2 IP (This is the essence of port forwarding).
>     So we also need the port mapping configuration fields
>
> All the rest should probably behave (in my opinion) very similar to FIP's
> (for example
> not being able to remove external gateway if port forwarding entries are
> configured,
> if the VM is deletd the port forwarding entry is deleted as well and so
> on..)
> All of these points are mentioned in the spec and i am waiting for the
> community feedback
> on them.
>
> I am trying to figure out if implementation wise, it would be smart to try
> and use the floating IP
> implementation and extend it for this (given all the above mechanism
> described above already
> works for floating IP's)
> Or, add another new implementation which behaves very similar to floating
> IP's in most aspects
> (But still differ in some)
> Or something else...
>
> Would love to hear the community feedback on the spec, even that its WIP
>
> Thanks
> Gal.
>
> [1] https://review.openstack.org/#/c/224727/
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,

The G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/131544ab/attachment.html>

From choudharyvikas16 at gmail.com  Mon Sep 21 04:52:17 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Mon, 21 Sep 2015 10:22:17 +0530
Subject: [openstack-dev] [magnum] Handling password for k8s
Message-ID: <CABJxuZrVJXgb4tnBPKfX0KpUZc6LAa5WNv-gdAix0m=A6_5rAg@mail.gmail.com>

Thanks Hongbin.

I was not aware of the visibility of stack parameters, so I was not able to
figure out the actual concerns with Ton's initial approach.

The keystone domain approach seems secure enough.

-Vikas

________________________________________________________________

Hongbin,

I believe the domain approach is the preferred approach for the
solution long term.  It will require more R&D to execute then other
options but also be completely secure.

Regards
-steve


From: Hongbin Lu <hongbin.lu at huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev at lists.openstack.org>
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the inputted
password will be exposed to users in the same tenant (since the
password is passed as stack parameter, which is exposed within
tenant). If users are not admin, they don't have privilege to create a
temp user. As a result, users have to expose their own password to
create a bay, which is suboptimal.

A slightly amendment is to have operator to create a user that is
dedicated for communication between k8s and neutron load balancer
service. The password of the user can be written into config file,
picked up by conductor and passed to heat. The drawback is that there
is no multi-tenancy for openstack load balancer service, since all
bays will share the same credential.

Another solution I can think of is to have magnum to create a keystone
domain [1] for each bay (using admin credential in config file), and
assign bay's owner to that domain. As a result, the user will have
privilege to create a bay user within that domain. It seems Heat
supports native keystone resource [2], which makes the administration
of keystone users much easier. The drawback is the implementation is
more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:ton at us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for
load balancer in k8s services. After a chat with sdake, I would like
to run this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on
Flannel) and they can access each other but they cannot be accessed
from external network. The way to publish an endpoint to the external
network is by specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor. The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible
from the external internet.
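The manifest attribute described above can be illustrated with a minimal k8s Service definition, expressed here as a Python dict for the sake of a runnable example (field names follow the k8s v1 Service schema; the exact schema in the k8s version under discussion may differ slightly):

```python
def make_service_manifest(name, port):
    """Build a minimal k8s Service manifest requesting an external LB."""
    return {
        "kind": "Service",
        "apiVersion": "v1",
        "metadata": {"name": name},
        "spec": {
            # This is the attribute that triggers the Neutron calls: k8s
            # creates the LB pool, members, VIP and monitor on our behalf.
            "type": "LoadBalancer",
            "ports": [{"port": port}],
            "selector": {"app": name},
        },
    }


manifest = make_service_manifest("web", 80)
```

In YAML form this corresponds to the `type: LoadBalancer` line quoted above; the user then associates the resulting VIP with a floating IP to reach the service externally.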
To talk to Neutron, k8s needs the user credential and this is stored
in a config file on the master node. This includes the username,
tenant name, password. When k8s starts up, it will load the config
file and create an authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the
password. With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password
properly.
Ideally, the best solution is to pass the authenticated token to k8s
to use, but this will require sizeable change upstream in k8s. We have
good reason to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to
the heat templates
  3.  When configuring the master node, the password is saved in the
config file for k8s services.
  4.  Magnum does not store the password internally.
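Step 3 of the list above can be sketched as follows. The INI key names are illustrative, loosely modeled on the k8s OpenStack provider's cloud config; the real file layout should be checked against the k8s version in use:

```python
import configparser
import os
import tempfile


def write_cloud_config(path, auth_url, username, tenant, password):
    # Persist the credential on the master node so the k8s OpenStack
    # integration can build an authenticated Keystone client at startup.
    cfg = configparser.ConfigParser()
    cfg["Global"] = {
        "auth-url": auth_url,
        "username": username,
        "tenant-name": tenant,
        "password": password,  # plaintext -- the weakness under discussion
    }
    with open(path, "w") as f:
        cfg.write(f)
    # At minimum, restrict the file to the k8s service user.
    os.chmod(path, 0o600)


path = os.path.join(tempfile.mkdtemp(), "cloud.conf")
write_cloud_config(path, "http://keystone:5000/v2.0", "demo", "demo", "s3cret")
```

The `chmod` limits exposure on the node itself, but as the thread notes, the password has already traversed the magnum API and the heat stack parameters by this point.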

This is probably not ideal, but it would let us proceed for now. We
can deprecate it later when we have a better solution. So leaving
aside the issue of how k8s should be changed, the question is: is this
approach reasonable for the time being, or is there a better approach?

Ton Ngo,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/ad6c7745/attachment.html>

From ton at us.ibm.com  Mon Sep 21 04:53:34 2015
From: ton at us.ibm.com (Ton Ngo)
Date: Sun, 20 Sep 2015 21:53:34 -0700
Subject: [openstack-dev] [magnum] Handling password for k8s
In-Reply-To: <CABJxuZoBeK=rrBfqLy=VOx4Ff8AQObyhMhg6wLO7ZUcq2aUPbQ@mail.gmail.com>
References: <CABJxuZoBeK=rrBfqLy=VOx4Ff8AQObyhMhg6wLO7ZUcq2aUPbQ@mail.gmail.com>
Message-ID: <201509210453.t8L4rnpB018144@d03av02.boulder.ibm.com>


Hi Vikas,
     It's correct that once the password is saved on the k8s master node,
it would have the same security as the nova instance.  The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between magnum and heat.  Users in the same tenant can potentially see the
password of the user who creates the cluster.  The current k8s mode of
operation is k8s-centric, where the cluster is assumed to be managed
manually, so it is reasonable to configure it with one OpenStack user
credential.  With Magnum managing the k8s cluster, we add another layer of
management, hence the complication.

Thanks Hongbin, Steve for the suggestion.  If we don't see any fundamental
flaw, we can proceed with the initial sub-optimal implementation and refine
it later with the service domain implementation.

Ton Ngo,




From:	Vikas Choudhary <choudharyvikas16 at gmail.com>
To:	openstack-dev at lists.openstack.org
Date:	09/20/2015 09:02 PM
Subject:	[openstack-dev] [magnum] Handling password for k8s



Hi Ton,
kube-masters will be nova instances only and because any access to
nova-instances is already being secured using keystone, I am not able to
understand what are the concerns in storing password on master-nodes.
Can you please list down concerns in our current approach?
-Vikas Choudhary
Hi everyone,
    I am running into a potential issue in implementing the support for
load balancer in k8s services.  After a chat with sdake, I would like to
run this by the team for feedback/suggestion.
    First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from external
network.  The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
        type: LoadBalancer
    Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor.  The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
    To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node.  This includes the username, tenant name,
password.  When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
    The issue we need to find a good solution for is how to handle the
password.  With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
    Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require sizeable change upstream in k8s.  We have good
reason to pursue this but it will take time.
    For now, my current implementation is as follows:
   1. In a bay-create, magnum client adds the password to the API call
      (normally it authenticates and sends the token)
   2. The conductor picks it up and uses it as an input parameter to the
      heat templates
   3. When configuring the master node, the password is saved in the
      config file for k8s services.
   4. Magnum does not store the password internally.
    This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So leaving aside
the issue of how k8s should be changed, the question is: is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/08067128/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/08067128/attachment.gif>

From boris at pavlovic.me  Mon Sep 21 05:08:33 2015
From: boris at pavlovic.me (Boris Pavlovic)
Date: Sun, 20 Sep 2015 22:08:33 -0700
Subject: [openstack-dev] [openstack-operators][tc][tags] Rally tags
Message-ID: <CAD85om0e5Fwc08xcmce1h-BC0i9i6AyZiUP6-6__J5qitg9Yzg@mail.gmail.com>

Hi stackers,

Rally project is becoming more and more used by Operators to check that
live OpenStack clouds perform well and that they are ready for production.

Results of the PAO OPS meeting showed that there is interest in Rally-related
tags for projects:
http://www.gossamer-threads.com/lists/openstack/operators/49466

3) "works in rally" - new tag suggestion
> There was general interest in asking the Rally team to consider making a
> "works in rally" tag, since the rally tests were deemed 'good'.


I have a few ideas about the rally tags:

- covered-by-rally
   It means that there are official (inside the rally repo) plugins for
testing the particular project

- has-rally-gates
   It means that Rally is run against every patch proposed to the project

- certified-by-rally [wip]
   We are also starting work on a certification task:
https://review.openstack.org/#/c/225176/5
   which will be the standard way to check whether a cloud is ready for
production, based on volume, performance & scale testing.


Thoughts?


Best regards,
Boris Pavlovic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/7526350f/attachment.html>

From yorik.sar at gmail.com  Mon Sep 21 05:45:43 2015
From: yorik.sar at gmail.com (Yuriy Taraday)
Date: Mon, 21 Sep 2015 05:45:43 +0000
Subject: [openstack-dev] [neutron] [oslo.privsep] Any progress on
	privsep?
In-Reply-To: <CALFEDVd6Dg-nPosu5G6SD8knwnrq=SmqxPPT-iFiYwmG-3x8VA@mail.gmail.com>
References: <CALFEDVehrHj+syJFDocOLG30X6xEVM5wApbSWS2y-kc=tq-dFw@mail.gmail.com>
 <CAPA_H3dBUaC0Rr-PRXGbJwRnyyj63infspgYrzEcvErh015WEA@mail.gmail.com>
 <CALFEDVd6Dg-nPosu5G6SD8knwnrq=SmqxPPT-iFiYwmG-3x8VA@mail.gmail.com>
Message-ID: <CABocrW6BBj8YkD8LcwGVLH_eDZfTePenPGwME6jo2X7=h7cgFQ@mail.gmail.com>

Hello, Li.

On Sat, Sep 19, 2015 at 6:15 AM Li Ma <skywalker.nick at gmail.com> wrote:

> Thanks for your reply, Gus. That's awesome. I'd like to have a look at
> it or test if possible.
>
> Any source code available in the upstream?
>

You can find the latest (almost approved, from the looks of it) version of
the blueprint here: https://review.openstack.org/204073
It links to the current implementation (not the API described in the blueprint
though): https://review.openstack.org/155631
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/d531ccb7/attachment.html>

From bpavlovic at mirantis.com  Mon Sep 21 06:32:27 2015
From: bpavlovic at mirantis.com (Boris Pavlovic)
Date: Sun, 20 Sep 2015 23:32:27 -0700
Subject: [openstack-dev] [rally][releases] New Rally release model
Message-ID: <CAD85om01K=dJ7xOZTU1dR17iUOZhOOyGFMP3gNrAHhQSbWu5kQ@mail.gmail.com>

Hi stackers,

As you probably know, Rally uses an independent release model.

We do this so that we can release as fast as possible.
Our goal is to have a release at least once every two weeks.

The major reason why we need a separate release model is that we
should ship plugins as soon as possible, and plugins are in the same repo
as the tool, framework and docs.

The previous model was quite simple: we planned changes for the next
release, reviewed and merged those changes, and cut new versions.

This model worked well until we started doing things that are not fully
backward compatible and require migrations.

For example, the 0.1.0 release will take us more than 100 days, which is
terribly long for people who are waiting for new plugins.

I would like to propose a new release model that will allow us to do 2
things:
* Do regular, fast releases with new plugins
* Have months for developing new features and changes that are not fully
backward compatible

The main idea is as follows:
*) The master branch will be used for development of new major Rally
versions, e.g. the 0.x.y -> 0.x+1.0 switch,
   which can include non-backward-compatible changes.
*) Latest version - we will port plugins, bug fixes and some features to
it
*) Stable version - we will port only high & critical bug fixes, where
possible

Here is the diagram that explains the release cycle:

[image: Inline image 1]

Thoughts?

Best regards,
Boris Pavlovic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/4c69d849/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Rally Release Cycles.png
Type: image/png
Size: 15481 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/4c69d849/attachment.png>

From ton at us.ibm.com  Mon Sep 21 06:44:18 2015
From: ton at us.ibm.com (Ton Ngo)
Date: Sun, 20 Sep 2015 23:44:18 -0700
Subject: [openstack-dev] [magnum] Handling password for k8s
In-Reply-To: <201509210453.t8L4rnpB018144@d03av02.boulder.ibm.com>
References: <CABJxuZoBeK=rrBfqLy=VOx4Ff8AQObyhMhg6wLO7ZUcq2aUPbQ@mail.gmail.com>
 <201509210453.t8L4rnpB018144@d03av02.boulder.ibm.com>
Message-ID: <201509210644.t8L6iQko002624@d03av04.boulder.ibm.com>


Another option is for Magnum to do all the necessary setup and leave to
the user the final step of editing the config file with the password.  Then
the load balancer feature would be disabled by default and we would provide
instructions for the user to enable it.  This would circumvent the issue
of handling the password and would actually match the intended usage in
k8s.
Ton Ngo,



From:	Ton Ngo/Watson/IBM at IBMUS
To:	"OpenStack Development Mailing List \(not for usage questions\)"
            <openstack-dev at lists.openstack.org>
Date:	09/20/2015 09:57 PM
Subject:	Re: [openstack-dev] [magnum] Handling password for k8s



Hi Vikas,
It's correct that once the password is saved on the k8s master node, it
would have the same security as the nova instance. The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between magnum and heat. Users in the same tenant can potentially see the
password of the user who creates the cluster. The current k8s mode of
operation is k8s-centric, where the cluster is assumed to be managed
manually, so it is reasonable to configure it with one OpenStack user
credential. With Magnum managing the k8s cluster, we add another layer of
management, hence the complication.

Thanks Hongbin, Steve for the suggestion. If we don't see any fundamental
flaw, we can proceed with the initial sub-optimal implementation and refine
it later with the service domain implementation.

Ton Ngo,



From: Vikas Choudhary <choudharyvikas16 at gmail.com>
To: openstack-dev at lists.openstack.org
Date: 09/20/2015 09:02 PM
Subject: [openstack-dev] [magnum] Handling password for k8s



Hi Ton,
kube-masters will be nova instances only and because any access to
nova-instances is already being secured using keystone, I am not able to
understand what are the concerns in storing password on master-nodes.
Can you please list down concerns in our current approach?
-Vikas Choudhary
Hi everyone,
    I am running into a potential issue in implementing the support for
load balancer in k8s services. After a chat with sdake, I would like to
run this by the team for feedback/suggestion.
    First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel)
and they can access each other but they cannot be accessed from external
network. The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
        type: LoadBalancer
    Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, monitor. The user would associate the VIP with a
floating IP and then the endpoint of the service would be accessible from
the external internet.
    To talk to Neutron, k8s needs the user credential and this is stored in
a config file on the master node. This includes the username, tenant name,
password. When k8s starts up, it will load the config file and create an
authenticated client with Keystone.
    The issue we need to find a good solution for is how to handle the
password. With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
    Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require sizeable change upstream in k8s. We have good
reason to pursue this but it will take time.
    For now, my current implementation is as follows:
   1. In a bay-create, magnum client adds the password to the API call
      (normally it authenticates and sends the token)
   2. The conductor picks it up and uses it as an input parameter to the
      heat templates
   3. When configuring the master node, the password is saved in the
      config file for k8s services.
   4. Magnum does not store the password internally.
    This is probably not ideal, but it would let us proceed for now. We
can deprecate it later when we have a better solution. So leaving aside
the issue of how k8s should be changed, the question is: is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/d847be09/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150920/d847be09/attachment.gif>

From EGuz at walmartlabs.com  Mon Sep 21 06:52:01 2015
From: EGuz at walmartlabs.com (Egor Guz)
Date: Mon, 21 Sep 2015 06:52:01 +0000
Subject: [openstack-dev] [magnum] [Kuryr] Handling password for k8s
Message-ID: <D224F4EE.1BB15%eguz@walmartlabs.com>

+1 to Hongbin's concerns about exposing passwords. I think we should start with a dedicated kub user in the magnum config and move to keystone domains later.

I am just wondering how the Kuryr team is planning to solve a similar issue (I believe the libnetwork driver requires Neutron's credentials). Can someone comment on it?

--
Egor

From: "Steven Dake (stdake)" <stdake at cisco.com<mailto:stdake at cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Sunday, September 20, 2015 at 19:34
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hongbin,

I believe the domain approach is the preferred approach for the solution long term.  It will require more R&D to execute than other options but will also be completely secure.

Regards
-steve


From: Hongbin Lu <hongbin.lu at huawei.com<mailto:hongbin.lu at huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the input password will be exposed to users in the same tenant (since the password is passed as a stack parameter, which is visible within the tenant). If users are not admins, they don't have the privilege to create a temp user. As a result, users would have to expose their own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user that is dedicated to communication between k8s and the neutron load balancer service. The password of that user can be written into a config file, picked up by the conductor and passed to heat. The drawback is that there is no multi-tenancy for the openstack load balancer service, since all bays will share the same credential.

Another solution I can think of is to have magnum create a keystone domain [1] for each bay (using the admin credential in the config file), and assign the bay's owner to that domain. As a result, the user will have the privilege to create a bay user within that domain. It seems Heat supports native keystone resources [2], which makes the administration of keystone users much easier. The drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html

Best regards,
Hongbin

From: Ton Ngo [mailto:ton at us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load balancer in k8s services. After a chat with sdake, I would like to run this by the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, all k8s pods and services run within a private subnet (on Flannel) and they can access each other but they cannot be accessed from external network. The way to publish an endpoint to the external network is by specifying this attribute in your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, members, VIP, monitor. The user would associate the VIP with a floating IP and then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a config file on the master node. This includes the username, tenant name, password. When k8s starts up, it will load the config file and create an authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. With the current effort on security to make Magnum production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, but this will require sizeable change upstream in k8s. We have good reason to pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call (normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat templates
  3.  When configuring the master node, the password is saved in the config file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can deprecate it later when we have a better solution. So leaving aside the issue of how k8s should be changed, the question is: is this approach reasonable for the time being, or is there a better approach?

Ton Ngo,


From mrunge at redhat.com  Mon Sep 21 06:54:53 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Mon, 21 Sep 2015 08:54:53 +0200
Subject: [openstack-dev] openstack-dahboard directory is not created
In-Reply-To: <c9fe2da2e8e63f2f5ae4a99e47aebf23@openstack.nimeyo.com>
References: <c9fe2da2e8e63f2f5ae4a99e47aebf23@openstack.nimeyo.com>
Message-ID: <55FFA9BD.2030505@redhat.com>

On 18/09/15 20:03, OpenStack Mailing List Archive wrote:
> Link: https://openstack.nimeyo.com/59453/?show=59453#q59453
> From: vidya <niveeya at gmail.com>
> 
> I am trying to install openstack-dashboard on my VM and
> /usr/share/openstack-dashboard is not created. Please help me figure out
> what I am missing here.
> 
> here is what i tried
> 1 yum install openstack-selinux
> 2 yum install yum-plugin-priorities
> 3 yum install
> http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
> 4 yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
> 5 yum install python-pip
> 6 yum groupinstall 'Development Tools'
> 7 yum install python-devel
> 8 yum install libffi-devel
> 9 yum install openssl-devel
> 10 pip install dep/horizon-2014.1.1.tar.gz
> 11 yum install openstack-dashboard
> 12 yum upgrade
> 13 reboot
> 14 history
> 15 yum install openstack-dashboard
> 16 pip install horizon-2014.1.1.tar.gz
> 

Ugh,

you're mixing pip install and yum install.

For a distro, I recommend using distro packages. We're not providing
horizon packages on PyPI (and if we do, they are outdated).

horizon-2014.1.1.tar.gz is a good year old. If you're *really* looking
to install that version, I'd suggest going with horizon-2014.1.4
(or later) instead.


You can get a more recent version of horizon (and OpenStack) from
https://www.rdoproject.org

After following repo install instructions at
https://www.rdoproject.org/Quickstart
you can do yum install openstack-dashboard

Matthias



From wanghua.humble at gmail.com  Mon Sep 21 07:17:16 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Mon, 21 Sep 2015 15:17:16 +0800
Subject: [openstack-dev] [magnum] Handling password for k8s
In-Reply-To: <D224BA39.12F90%stdake@cisco.com>
References: <201509201808.t8KI8VoA018814@d03av05.boulder.ibm.com>
 <0957CD8F4B55C0418161614FEC580D6BCE62F1@SZXEMI503-MBS.china.huawei.com>
 <D224BA39.12F90%stdake@cisco.com>
Message-ID: <CAH5-jC_39-Ev=R3gTYsNb2opu-pjQFQ-_ffM3bm=XRwL5vdmTw@mail.gmail.com>

I think it is the same case as docker registry v2. User credentials are
needed in the docker registry v2 config file. We can use the same user in all
bays, but a different trust [1] to it for each bay. The user should have no
roles of its own; it can only work through trusts.

[1] https://wiki.openstack.org/wiki/Keystone/Trusts
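The trust-based delegation suggested above could be sketched roughly like this. This is a hypothetical sketch: `RecordingTrusts` stands in for keystoneclient's trust manager (the real call is something like `client.trusts.create`, whose exact signature should be verified), and the role name is made up for illustration:

```python
class RecordingTrusts:
    """Stand-in for keystoneclient's trust manager; records create() calls."""
    def __init__(self):
        self.calls = []

    def create(self, **kwargs):
        self.calls.append(kwargs)
        return kwargs


def create_bay_trust(trusts, trustor_id, trustee_id, project_id):
    # The shared service user (trustee) has no roles of its own; each bay
    # owner (trustor) delegates to it, via a trust, only the role needed
    # to drive the Neutron load balancer API, scoped to their project.
    # "lbaas_member" is a hypothetical role name.
    return trusts.create(
        trustor_user=trustor_id,
        trustee_user=trustee_id,
        project=project_id,
        role_names=["lbaas_member"],
        impersonation=True,
    )


trusts = RecordingTrusts()
trust = create_bay_trust(trusts, "ton-id", "kub-service-id", "bay-project")
```

The design point is that a single shared credential in the config file no longer grants tenant-wide access: each bay's trust scopes what the service user can do on whose behalf, and revoking one trust disables only that bay.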

Regards
Wanghua

On Mon, Sep 21, 2015 at 10:34 AM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

> Hongbin,
>
> I believe the domain approach is the preferred approach for the solution
> long term.  It will require more R&D to execute than other options but also
> be completely secure.
>
> Regards
> -steve
>
>
> From: Hongbin Lu <hongbin.lu at huawei.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Date: Sunday, September 20, 2015 at 4:26 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Handling password for k8s
>
> Hi Ton,
>
>
>
> If I understand your proposal correctly, it means the inputted password
> will be exposed to users in the same tenant (since the password is passed
> as stack parameter, which is exposed within tenant). If users are not
> admin, they don't have privilege to create a temp user. As a result, users
> have to expose their own password to create a bay, which is suboptimal.
>
>
>
> A slightly amendment is to have operator to create a user that is
> dedicated for communication between k8s and neutron load balancer service.
> The password of the user can be written into config file, picked up by
> conductor and passed to heat. The drawback is that there is no
> multi-tenancy for openstack load balancer service, since all bays will
> share the same credential.
>
>
>
> Another solution I can think of is to have magnum to create a keystone
> domain [1] for each bay (using admin credential in config file), and assign
> bay's owner to that domain. As a result, the user will have privilege to
> create a bay user within that domain. It seems Heat supports native
> keystone resource [2], which makes the administration of keystone users
> much easier. The drawback is the implementation is more complicated.
>
>
>
> [1] https://wiki.openstack.org/wiki/Domains
>
> [2]
> http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Ton Ngo [mailto:ton at us.ibm.com <ton at us.ibm.com>]
> *Sent:* September-20-15 2:08 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [magnum] Handling password for k8s
>
>
>
> Hi everyone,
> I am running into a potential issue in implementing the support for load
> balancer in k8s services. After a chat with sdake, I would like to run this
> by the team for feedback/suggestion.
> First let me give a little background for context. In the current k8s
> cluster, all k8s pods and services run within a private subnet (on Flannel)
> and they can access each other but they cannot be accessed from external
> network. The way to publish an endpoint to the external network is by
> specifying this attribute in your service manifest:
> type: LoadBalancer
> Then k8s will talk to OpenStack Neutron to create the load balancer pool,
> members, VIP, monitor. The user would associate the VIP with a floating IP
> and then the endpoint of the service would be accessible from the external
> internet.
> To talk to Neutron, k8s needs the user credential and this is stored in a
> config file on the master node. This includes the username, tenant name,
> password. When k8s starts up, it will load the config file and create an
> authenticated client with Keystone.
> The issue we need to find a good solution for is how to handle the
> password. With the current effort on security to make Magnum
> production-ready, we want to make sure to handle the password properly.
> Ideally, the best solution is to pass the authenticated token to k8s to
> use, but this will require sizeable change upstream in k8s. We have good
> reason to pursue this but it will take time.
> For now, my current implementation is as follows:
>
>    1. In a bay-create, magnum client adds the password to the API call
>    (normally it authenticates and sends the token)
>    2. The conductor picks it up and uses it as an input parameter to the
>    heat templates
>    3. When configuring the master node, the password is saved in the
>    config file for k8s services.
>    4. Magnum does not store the password internally.
>
>
> This is probably not ideal, but it would let us proceed for now. We can
> deprecate it later when we have a better solution. So leaving aside the
> issue of how k8s should be changed, the question is: is this approach
> reasonable for the time, or is there a better approach?
>
> Ton Ngo,
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/a21f6916/attachment-0001.html>

From flavio at redhat.com  Mon Sep 21 08:00:56 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Mon, 21 Sep 2015 10:00:56 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FC4CE7.5050509@gmail.com>
References: <55FABF5E.4000204@redhat.com> <55FB2D63.3010402@gmail.com>
 <55FC0207.3070305@redhat.com> <20150918122746.GO29319@redhat.com>
 <50B99C29-4390-4C04-9821-544E8780590A@leafe.com>
 <55FC4CE7.5050509@gmail.com>
Message-ID: <20150921080056.GS29319@redhat.com>

On 18/09/15 13:41 -0400, Nikhil Komawar wrote:
>I agree with Ed here.
>
>If we have a stipulated time period set for proposals, then a grace period sounds
>like a real-life deal. However, I would also encourage the idea of opening this
>up early: keep the folder ready and have officials review the merge proposals 2
>months prior to the final date. It's better to have a final date with a longer
>open period than a short, firm deadline.
>
>my 2 pennies worth.
>
>On 9/18/15 1:15 PM, Ed Leafe wrote:
>
>    On Sep 18, 2015, at 7:27 AM, Flavio Percoco <flavio at redhat.com> wrote:
>
>
>            I'm strongly against these extra rules. OpenStack official elections are
>            run by volunteers, and any rule that adds complexity should be avoided.
>
>        +1
>
>        Also, the schedule is announced 6 months in advance. The candidacy
>        period is announced when it starts and a reminder is sent a couple of
>        days before it ends.
>
>    +1 to sticking to deadlines. A grace period just means a different deadline.
>
>    I don't, however, see the need for a firm start date. Comparing this to the feature freeze development cycle, in Nova we started opening up the specs for the N+1 cycle early in cycle N so that people can propose a spec early instead of waiting months and potentially missing the next window. So how about letting candidates declare early in the cycle by adding their names to the election repo at any time during the cycle up to the firm deadline? This might also encourage candidates not to wait until the last minute.

Agreed,

Just wanted to mention that this was also proposed here: http://lists.openstack.org/pipermail/openstack-dev/2015-September/074888.html

Ideally, we'd have the repo open as soon as the code name for the next
cycle is chosen (since the code name is needed).

Flavio

>
>    -- Ed Leafe
>
>
>
>
>
>
>
>   
>    __________________________________________________________________________
>    OpenStack Development Mailing List (not for usage questions)
>    Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>--
>
>Thanks,
>Nikhil
>

>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/52be47db/attachment.pgp>

From thierry at openstack.org  Mon Sep 21 08:14:56 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 21 Sep 2015 10:14:56 +0200
Subject: [openstack-dev] [openstack-operators][tc][tags] Rally tags
In-Reply-To: <CAD85om0e5Fwc08xcmce1h-BC0i9i6AyZiUP6-6__J5qitg9Yzg@mail.gmail.com>
References: <CAD85om0e5Fwc08xcmce1h-BC0i9i6AyZiUP6-6__J5qitg9Yzg@mail.gmail.com>
Message-ID: <55FFBC80.7030805@openstack.org>

Boris Pavlovic wrote:
> I have a few ideas about the Rally tags:
> 
> - covered-by-rally
>    It means that there are official (inside the rally repo) plugins for
> testing a particular project.
> 
> - has-rally-gates
>    It means that Rally is run against every patch proposed to the project.
> 
> - certified-by-rally [wip]
>    We are also starting work on a certification
> task: https://review.openstack.org/#/c/225176/5
>    which will be the standard way to check whether a cloud is ready for
> production based on volume, performance & scale testing.
> 
> Thoughts?

Hi Boris,

The "next-tags" workgroup at the Technical Committee came up with a
number of families where I think your proposed tags could fit:

http://lists.openstack.org/pipermail/openstack-dev/2015-July/070651.html

The "integration" family of tags defines cross-project support. We want
to have tags that say that a specific service has a horizon dashboard
plugin, or a devstack integration, or heat templates... So I would say
that the "covered-by-rally" tag could be part of that family
('integration:rally' maybe ?). We haven't defined our first tag in that
family yet: sdague was working on the devstack ones[1] as a template for
the family but that effort stalled a bit:

https://review.openstack.org/#/c/203785/

As far as the 'has-rally-gates' tag goes, that would be part of the 'QA'
family ("qa:has-rally-gates" for example).

So I think those totally make sense as upstream-maintained tags and are
perfectly aligned with the families we already had in mind but haven't
had time to push yet. Feel free to propose those tags to the governance
repository. An example of such submission lives at:

https://review.openstack.org/#/c/207467/

The 'certified-by-rally' tag is a bit farther away I think (less
objective and needs your certification program to be set up first). You
should start with the other two.

-- 
Thierry Carrez (ttx)


From wanghua.humble at gmail.com  Mon Sep 21 08:18:00 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Mon, 21 Sep 2015 16:18:00 +0800
Subject: [openstack-dev] [magnum] Discovery
In-Reply-To: <65278211-710F-4EFB-BA49-422B536CFD71@rackspace.com>
References: <D21ED83B.67DF4%danehans@cisco.com>
 <1442516764564.93792@RACKSPACE.COM>
 <D22068B1.1B7C3%eguz@walmartlabs.com>
 <65278211-710F-4EFB-BA49-422B536CFD71@rackspace.com>
Message-ID: <CAH5-jC9iVOMKS5DX+shRZny4dnY9sQFck_Odccv+2+Dc9Rjb9g@mail.gmail.com>

Swarm already supports etcd as a discovery backend [1], so we can implement
both hosted discovery with Docker Hub and etcd-based discovery, and make
hosted discovery with Docker Hub the default if discovery_url is not given.

If we run etcd in the bay, etcd also needs discovery [2]. The operator should
set up an etcd cluster for other etcd clusters to discover, or use the public
discovery service. I think it is not necessary to run etcd in a swarm cluster
just for discovery. In a private cloud, the operator should set up a local
etcd cluster for discovery, and all the bays can use it.

[1] https://docs.docker.com/swarm/discovery/
[2] https://github.com/coreos/etcd/blob/master/Documentation/clustering.md
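The fallback behaviour described above (use the operator-supplied
discovery_url when present, otherwise fall back to the hosted service)
could be sketched as below. This is only an illustration of the idea, not
Magnum's actual code; the function name and the default endpoint templates
are hypothetical:

```python
# Hypothetical sketch of the discovery-URL selection discussed above:
# prefer an operator-supplied discovery_url (e.g. a local etcd cluster),
# otherwise fall back to a public hosted discovery service.

# Illustrative defaults -- the real endpoints/templates may differ.
PUBLIC_ETCD_DISCOVERY = 'https://discovery.etcd.io/new?size=%d'
PUBLIC_SWARM_DISCOVERY = 'token://<token-from-docker-hub>'


def select_discovery_url(discovery_url, cluster_size, backend='etcd'):
    """Return the discovery URL a new bay should use."""
    if discovery_url:
        # Operator supplied one, e.g. 'etcd://10.0.0.5:2379/cluster1'
        return discovery_url
    if backend == 'etcd':
        return PUBLIC_ETCD_DISCOVERY % cluster_size
    return PUBLIC_SWARM_DISCOVERY


print(select_discovery_url('etcd://10.0.0.5:2379/cluster1', 3))
# -> etcd://10.0.0.5:2379/cluster1
print(select_discovery_url(None, 3))
# -> https://discovery.etcd.io/new?size=3
```

A private cloud would then only need to populate discovery_url (or a
system-level default) to avoid the public endpoint entirely.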

Regards,
Wanghua

On Fri, Sep 18, 2015 at 11:39 AM, Adrian Otto <adrian.otto at rackspace.com>
wrote:

> In the case where a private cloud is used without access to the Internet,
> you do have the option of running your own etcd, and configuring that to be
> used instead.
>
> Adding etcd to every bay should be optional, as a subsequent feature, but
> should be controlled by a flag in the Baymodel that defaults to off so the
> public discovery service is used. It might be nice to be able to configure
> Magnum in an isolated mode which would change the system level default for
> that flag from off to on.
>
> Maybe the Baymodel resource attribute should be named
> local_discovery_service.
>
> Should turning this on also set the minimum node count for the bay to 3?
> If not, etcd will not be highly available.
>
> Adrian
>
> > On Sep 17, 2015, at 1:01 PM, Egor Guz <EGuz at walmartlabs.com> wrote:
> >
> > +1 for stopping use of the public discovery endpoint; most private cloud
> > VMs don't have access to the internet and the operator must run an etcd
> > instance somewhere just for discovery.
> >
> > --
> > Egor
> >
> > From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:
> andrew.melton at RACKSPACE.COM>>
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org<mailto:
> openstack-dev at lists.openstack.org>>
> > Date: Thursday, September 17, 2015 at 12:06
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> > Subject: Re: [openstack-dev] [magnum] Discovery
> >
> >
> > Hey Daneyon,
> >
> >
> > I'm fairly partial towards #2 as well. Though, I'm wondering if it's
> > possible to take it a step further. Could we run etcd in each Bay without
> > using the public discovery endpoint? And then configure Swarm to simply
> > use the internal etcd as its discovery mechanism? This could cut one of
> > our external service dependencies and make it easier to run Magnum in an
> > environment with locked-down public internet access.
> >
> >
> > Anyways, I think #2 could be a good start that we could iterate on later
> if need be.
> >
> >
> > --Andrew
> >
> >
> > ________________________________
> > From: Daneyon Hansen (danehans) <danehans at cisco.com<mailto:
> danehans at cisco.com>>
> > Sent: Wednesday, September 16, 2015 11:26 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] [magnum] Discovery
> >
> > All,
> >
> > While implementing the flannel --network-driver for swarm, I have come
> across an issue that requires feedback from the community. Here is the
> breakdown of the issue:
> >
> >  1.  Flannel [1] requires etcd to store network configuration. Meeting
> this requirement is simple for the kubernetes bay types since kubernetes
> requires etcd.
> >  2.  A discovery process is needed for bootstrapping etcd. Magnum
> implements the public discovery option [2].
> >  3.  A discovery process is also required to bootstrap a swarm bay type.
> Again, Magnum implements a publicly hosted (Docker Hub) option [3].
> >  4.  Magnum API exposes the discovery_url attribute that is leveraged by
> swarm and etcd discovery.
> >  5.  Etcd cannot be implemented in swarm because discovery_url is
> > associated with swarm's discovery process and not etcd's.
> >
> > Here are a few options on how to overcome this obstacle:
> >
> >  1.  Make the discovery_url more specific, for example
> etcd_discovery_url and swarm_discovery_url. However, this option would
> needlessly expose both discovery URLs to all bay types.
> >  2.  Swarm supports etcd as a discovery backend. This would mean
> discovery is similar for both bay types. With both bay types using the same
> mechanism for discovery, it will be easier to provide a private discovery
> option in the future.
> >  3.  Do not support flannel as a network-driver for k8s bay types. This
> would require adding support for a different driver that supports
> multi-host networking such as libnetwork. Note: libnetwork is only
> implemented in the Docker experimental release:
> https://github.com/docker/docker/tree/master/experimental.
> >
> > I lean towards #2 but there may be other options, so feel free to share
> your thoughts. I would like to obtain feedback from the community before
> proceeding in a particular direction.
> >
> > [1] https://github.com/coreos/flannel
> > [2]
> https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
> > [3] https://docs.docker.com/swarm/discovery/
> >
> > Regards,
> > Daneyon Hansen
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/f89429b5/attachment.html>

From thierry at openstack.org  Mon Sep 21 08:26:58 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 21 Sep 2015 10:26:58 +0200
Subject: [openstack-dev] [rally][releases] New Rally release model
In-Reply-To: <CAD85om01K=dJ7xOZTU1dR17iUOZhOOyGFMP3gNrAHhQSbWu5kQ@mail.gmail.com>
References: <CAD85om01K=dJ7xOZTU1dR17iUOZhOOyGFMP3gNrAHhQSbWu5kQ@mail.gmail.com>
Message-ID: <55FFBF52.3080009@openstack.org>

Boris Pavlovic wrote:
> The main idea is as follows:
> *) Master branch will be used for new major Rally version development,
> e.g. a 0.x.y -> 0.x+1.0 switch,
>    which can include backward-incompatible changes.

You mean x.y.0 -> x+1.0.0, right ?

> *) Latest version - we will port plugins, bug fixes and part of features
> to it
> *) Stable version - we will port only high & critical bug fixes if it is
> possible 

So... this is pretty close to what we're doing elsewhere in OpenStack,
except that we do:

Feature branches: not backward compatible changes
Master: bug fixes, backward-compatible features, release regularly
Stable: High/Critical bugfixes backports, release on-demand

The only difference with your model is how you split feature development
between master and feature branches. In your model you do most of the
feature development in the experimental branch (master) and port pieces
of it in the release branch (latest). In our case only the
backward-incompatible work lands in the experimental branch (feature/*),
and the release branch (master) contains everything else.

I am just not sure it's different enough to justify being different :)

-- 
Thierry Carrez (ttx)


From kchamart at redhat.com  Mon Sep 21 08:27:10 2015
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 21 Sep 2015 10:27:10 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FB2BF0.1000409@gmail.com>
References: <55FADB95.6090801@inaugust.com>
 <CAL3VkVxRjp0ytqL3s5ZW7UjKqQfrqiLcYpkxkCaD55=yw2e_sg@mail.gmail.com>
 <D72C85CD-56CA-45F1-86F8-0959869D6DC4@workday.com>
 <CAO_F6JP+GrtUiLYxYzAsid0Pp_nxKnnLQ=GujTL3s-9Kwu+Okg@mail.gmail.com>
 <CAD+XWwoKEBsHCvwUnF5JVQ78kBV9AdZWgp1_R4XT46EpLcfuDw@mail.gmail.com>
 <CAO_F6JOd-Kszsc6DiJtDoJ-aWCGfkzq771E0bJjLQfNMPc04uA@mail.gmail.com>
 <CAD+XWwq-BebFUYmcM3yiFK4cSiKR6A2WpdkWoeC=+C9OJJunLg@mail.gmail.com>
 <CAO_F6JNQgLKucSRUFW-kh-4GF9YZg7vfbHDB+SbdV9RG7a1wHw@mail.gmail.com>
 <CAGnj6asQ7sWzFAYRLVbJxmwWkGFrH3cCEpcPKrnbAdceToLgzg@mail.gmail.com>
 <55FB2BF0.1000409@gmail.com>
Message-ID: <20150921082710.GA21500@tesla.redhat.com>

On Thu, Sep 17, 2015 at 05:09:04PM -0400, Nikhil Komawar wrote:

[. . .]

> > I think this is all superfluous however and we should simply encourage
> > people to not wait until the last minute. Waiting to see who is
> > running/what the field looks like isn't as important as standing up
> > and saying you're interested in running.
> 
> I like that you have used the word "encourage"; however, I will have to
> disagree here. Life in general doesn't permit that for everyone -- important
> things can pop up at unexpected times, someone may be on vacation and late
> coming back, etc. And on top of that, people can get caught up during this
> particular week. The time-line for proposals seems a good idea in general.

Morgan is absolutely right.

The risk of unexpected things cropping up is always there.  If one has
the _intention_ to run for PTL, then they should make it a priority to
send the nomination as early as possible once timelines are announced
(more so if you're an incumbent PTL), rather than waiting for the last
moment.

Also, don't miss the unambiguously clear comment from Matt Riedemann:

    "If running for PTL is something you had in mind to begin with, you
    should probably be looking forward to when the elections start and
    get your ducks in a row.  Part of being PTL, a large part I'd think,
    is the ability to organize and manage things. If you're waiting
    until the 11th hour to do this, I wouldn't have much sympathy."

-- 
/kashyap


From apevec at gmail.com  Mon Sep 21 08:43:11 2015
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 21 Sep 2015 10:43:11 +0200
Subject: [openstack-dev] [stable] 2015.1.2 readiness check WAS :
 gate-trove-functional-dsvm-mysql needs some stable branch love
Message-ID: <CAGi==UUhSgR+tgv0PKd9J=ucd6B+2FqGmiignd=Ox9A=Gw0c1w@mail.gmail.com>

>> [1] https://bugs.launchpad.net/trove-integration/+bug/1479358
>> [2] https://review.openstack.org/#/c/207193/

This fix has been merged and bug closed but Trove on stable/kilo is
not green yet, summary from
https://etherpad.openstack.org/p/stable-tracker
* Trove
 * various proboscis.case.MethodTest errors
 * Mock issues too, some fixed (but not all) at
https://review.openstack.org/202015 MERGED but
periodic-trove-python27-kilo is still failing:
  * http://logs.openstack.org/periodic-stable/periodic-trove-python27-kilo/3a6b8c7/console.html

In https://wiki.openstack.org/wiki/StableBranchRelease#Rinse.2C_Repeat
we planned 2015.1.2 for mid-September 2015.
Are we ready to freeze stable/kilo now?

Cheers,
Alan


From tengqim at linux.vnet.ibm.com  Mon Sep 21 08:54:07 2015
From: tengqim at linux.vnet.ibm.com (Qiming Teng)
Date: Mon, 21 Sep 2015 16:54:07 +0800
Subject: [openstack-dev] [Heat] Integration Test Questions
In-Reply-To: <55FF64E2.6000607@redhat.com>
References: <D218618E.ADE81%sabeen.syed@rackspace.com>
 <CACfB1uutGXqUbd2D5rRAjRvVMT=H2qTn0myxOS6eJLdNQ=nbsg@mail.gmail.com>
 <20150920082456.GA11642@qiming-ThinkCentre-M58p>
 <55FF64E2.6000607@redhat.com>
Message-ID: <20150921085406.GA14032@qiming-ThinkCentre-M58p>

On Mon, Sep 21, 2015 at 02:01:06PM +1200, Steve Baker wrote:
> On 20/09/15 20:24, Qiming Teng wrote:
> >Speaking of adding tests, we need hands to improve the Heat API tests in
> >Tempest [1]. The current test cases there are a weird combination of API
> >tests, resource-type tests, template tests, etc. If we decide to move
> >functional tests back to individual projects, some test cases may need
> >to be deleted from tempest.
> >
> >Another important reason for adding API tests to Tempest is that
> >the orchestration service is assessed [2] by the DefCore team using
> >tests in Tempest, not in-tree test cases.
> >
> >The heat team has done a lot of work (and killed a lot of it) to make the
> >API as stable as possible. Most of the time, there would be nothing new to
> >test. The API surface tests may become nothing but a waste of time if
> >we keep running them for every single patch.
> Thanks for raising this. Wherever they live we do need a dedicated
> set of tests which ensure the REST API is fully exercised.
> >So... my suggestions:
> >
> >- Remove unnecessary tests in Tempest;
> agreed
> >- Stop adding API tests to Heat locally;
> >- Add API tests to Tempest instead, in an organized way. (refer to [3])
> I would prefer an alternative approach which would result in the
> same end state:
> - port heat_integrationtests to tempest-lib
> - build a suite of REST API tests in heat_integrationtests
> - work with defcore to identify which heat_integrationtests tests to
> move to tempest

Sounds like a reasonable approach. Thanks.

Regards,
 Qiming

> >[1]
> >http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/orchestration/
> >[2] https://review.openstack.org/#/c/216983/
> >[3] https://review.openstack.org/#/c/210080/
> >



From berrange at redhat.com  Mon Sep 21 08:56:09 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Mon, 21 Sep 2015 09:56:09 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <B987741E651FDE4584B7A9C0F7180DEB1CDC2D02@G4W3208.americas.hpqcorp.net>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <B987741E651FDE4584B7A9C0F7180DEB1CDC2D02@G4W3208.americas.hpqcorp.net>
Message-ID: <20150921085609.GC28520@redhat.com>

On Fri, Sep 18, 2015 at 05:47:31PM +0000, Carlton, Paul (Cloud Services) wrote:
> However the most significant impediment we encountered was customer
> complaints about performance of instances during migration.  We did a little
> bit of work to identify the cause of this and concluded that the main issue
> was disk I/O contention.  I wonder if this is something you or others have
> encountered?  I'd be interested in any ideas for managing the rate of the
> migration processing to prevent it from adversely impacting the customer
> application performance.  I appreciate that if we throttle the migration
> processing it will take longer and may not be able to keep up with the rate
> of disk/memory change in the instance.

I would not expect live migration to have an impact on disk I/O, unless
your storage is network based and using the same network as the migration
data. While migration is taking place you'll see a small impact on the
guest compute performance, due to page table dirty bitmap tracking, but
that shouldn't appear directly as disk I/O problem. There is no throttling
of guest I/O at all during migration.

> Could you point me somewhere I can get details of the tunable settings
> relating to cutover downtime please?  I'm assuming that these are
> libvirt/qemu settings?  I'd like to play with them in our test environment
> to see if we can simulate busy instances and determine what works.  I'd also
> be happy to do some work to expose these in nova so the cloud operator can
> tweak if necessary?

It is already exposed as 'live_migration_downtime' along with
live_migration_downtime_steps, and live_migration_downtime_delay.
Again, it shouldn't have any impact on guest performance while
live migration is taking place. It only comes into effect when
checking whether the guest is ready to switch to the new host.
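As a rough illustration of how such a stepped-downtime scheme can behave,
the sketch below ramps the permitted switchover downtime up in fixed steps,
with a configurable wait between increases. This is only a plausible
schedule under assumed semantics of those settings, not the actual Nova
libvirt driver code:

```python
def downtime_steps(max_downtime_ms, steps, delay_s):
    """Yield (wait_seconds, permitted_downtime_ms) pairs that ramp the
    allowed switchover downtime from a fraction of the maximum up to the
    configured maximum -- an illustrative schedule only, not Nova's."""
    base = max_downtime_ms // steps
    for n in range(1, steps + 1):
        yield (n * delay_s, base * n)


# With e.g. live_migration_downtime=500, _steps=5, _delay=75, the
# permitted downtime grows 100ms -> 500ms, one step every 75 seconds.
schedule = list(downtime_steps(500, 5, 75))
print(schedule)
# -> [(75, 100), (150, 200), (225, 300), (300, 400), (375, 500)]
```

The point of such a ramp is that a quickly-converging guest migrates with
minimal downtime, while a stubborn one is gradually allowed a longer pause.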

> I understand that you have added some functionality to the nova compute
> manager to collect data on migration progress and emit this to the log file.
> I'd like to propose that we extend this to emit notification message
> containing progress information so a cloud operator's orchestration can
> consume these events and use them to monitor progress of individual
> migrations.  This information could be used to generate alerts or tickets so
> that support staff can intervene.  The smarts in qemu to help it make
> progress are very welcome and necessary but in my experience the cloud
> operator needs to be able to manage these and if it is necessary to slow
> down or even pause a customer's instance to complete the migration the cloud
> operator may need to gain customer consent before proceeding.

We already update the Nova instance object's 'progress' value with the
info on the migration progress. IIRC, this is visible via 'nova show <instance>'
or something like that.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From thierry at openstack.org  Mon Sep 21 09:05:17 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 21 Sep 2015 11:05:17 +0200
Subject: [openstack-dev] [stable] 2015.1.2 readiness check WAS :
 gate-trove-functional-dsvm-mysql needs some stable branch love
In-Reply-To: <CAGi==UUhSgR+tgv0PKd9J=ucd6B+2FqGmiignd=Ox9A=Gw0c1w@mail.gmail.com>
References: <CAGi==UUhSgR+tgv0PKd9J=ucd6B+2FqGmiignd=Ox9A=Gw0c1w@mail.gmail.com>
Message-ID: <55FFC84D.1010400@openstack.org>

Alan Pevec wrote:
> [..]
> In https://wiki.openstack.org/wiki/StableBranchRelease#Rinse.2C_Repeat
> we planned 2015.1.2 for mid-September 2015.
> Are we ready to freeze stable/kilo now?

I don't think we can expect a lot of focus on stable/kilo in the coming
weeks, so now is as good a time as any.

-- 
Thierry Carrez (ttx)


From salv.orlando at gmail.com  Mon Sep 21 09:25:40 2015
From: salv.orlando at gmail.com (Salvatore Orlando)
Date: Mon, 21 Sep 2015 10:25:40 +0100
Subject: [openstack-dev] [neutron] Neutron debugging tool
In-Reply-To: <CALFEDVdMdkjB1X002AA0DhvdLN4zDgeY-MyGOC_guopX8Gp1EA@mail.gmail.com>
References: <CADL6tVORSnamoEuGAzQ45BNvKUBRvBfrQx25YMfa4aQnCaPHEg@mail.gmail.com>
 <CALFEDVdMdkjB1X002AA0DhvdLN4zDgeY-MyGOC_guopX8Gp1EA@mail.gmail.com>
Message-ID: <CAP0B2WMOFLMdH7t9hNc0uw8xqNcUvvrLBZa7D4hHU06c2bLMuA@mail.gmail.com>

It indeed sounds like easyOVS covers what you're aiming at.
However, from what I gather there is still plenty to do in easyOVS, so
perhaps rather than starting a new toolset from scratch you might build on
the existing one.

Personally I'd welcome its adoption into the Neutron stadium as debugging
control plane/data plane issues in the neutron reference impl is becoming
difficult even for expert users and developers.
I'd just suggest renaming it because calling it "OVS" is just plain wrong.
The neutron reference implementation and OVS are two distinct things.

As concerns neutron-debug, this is a tool that was developed in the early
stages of the project to verify connectivity using "probes" in namespaces.
These probes are simply tap interfaces associated with neutron ports. The
neutron-debug tool is still used in some devstack exercises. Nevertheless,
I'd rather keep building something like easyOVS and then deprecate
neutron-debug rather than develop it further.
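The namespace checks Nodir proposes (quoted below) can start very simply:
compare the qrouter/qdhcp namespaces expected on a host against the output
of `ip netns list`. The helper here is a hypothetical illustration, not
part of neutron-debug or easyOVS:

```python
def missing_namespaces(expected_ids, netns_output, prefix='qrouter-'):
    """Given the router/network UUIDs expected on this host and the raw
    output of `ip netns list`, return the IDs whose namespace is absent."""
    present = set()
    for line in netns_output.splitlines():
        fields = line.split()
        name = fields[0] if fields else ''
        if name.startswith(prefix):
            present.add(name[len(prefix):])
    return [i for i in expected_ids if i not in present]


# In practice netns_output would come from the host, e.g.
#   subprocess.check_output(['ip', 'netns', 'list'], text=True)
sample = "qrouter-abc123\nqdhcp-def456\n"
print(missing_namespaces(['abc123', 'zzz999'], sample))
# -> ['zzz999']
```

Separating the parsing from the SSH/subprocess transport like this also
keeps the check unit-testable without access to a real compute/network host.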

Salvatore


On 21 September 2015 at 02:40, Li Ma <skywalker.nick at gmail.com> wrote:

> AFAIK, there is a project available on GitHub that does the same thing.
> https://github.com/yeasy/easyOVS
>
> I used it before.
>
> On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodirov at gmail.com>
> wrote:
> > Hello,
> >
> > I am planning to develop a tool for network debugging. Initially, it
> > will handle the DVR case, which can also be extended to others too. Based
> > on my OpenStack deployment/operations experience, I am planning to
> > handle common pitfalls/misconfigurations, such as:
> > 1) check external gateway validity
> > 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> > compute/network hosts
> > 3) execute probing commands inside namespaces, to verify reachability
> > 4) etc.
> >
> > I came across neutron-debug [1], which mostly focuses on namespace
> > debugging. Its coverage is limited to OpenStack, while I am planning
> > to cover compute/network nodes as well. In my experience, I had to ssh
> > to the host(s) to accurately diagnose the failure (e.g., 1, 2 cases
> > above). The tool I am considering will handle these, given the host
> > credentials.
> >
> > I'd like to get the community's feedback on the utility of such a
> > debugging tool. Do people use neutron-debug in their OpenStack
> > environments? Does the tool I am planning to develop, with complete
> > diagnosis coverage, sound useful? Is anyone interested in joining the
> > development? All feedback is welcome.
> >
> > Thanks,
> >
> > - Nodir
> >
> > [1]
> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
>
> Li Ma (Nick)
> Email: skywalker.nick at gmail.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/83895586/attachment.html>

From pawel.koniszewski at intel.com  Mon Sep 21 09:43:58 2015
From: pawel.koniszewski at intel.com (Koniszewski, Pawel)
Date: Mon, 21 Sep 2015 09:43:58 +0000
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150918152346.GI16906@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
Message-ID: <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>

> -----Original Message-----
> From: Daniel P. Berrange [mailto:berrange at redhat.com]
> Sent: Friday, September 18, 2015 5:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] live migration in Mitaka
>
> On Fri, Sep 18, 2015 at 11:53:05AM +0000, Murray, Paul (HP Cloud) wrote:
> > Hi All,
> >
> > There are various efforts going on around live migration at the moment:
> > fixing up CI, bug fixes, additions to cover more corner cases,
> > proposals for new operations....
> >
> > Generally live migration could do with a little TLC (see: [1]), so I
> > am going to suggest we give some of that care in the next cycle.
> >
> > Please respond to this post if you have an interest in this and what
> > you would like to see done. Include anything you are already getting
> > on with so we get a clear picture. If there is enough interest I'll
> > put this together as a proposal for a work stream. Something along the
> > lines of "robustify live migration".
>
> We merged some robustness improvements for migration during Liberty.
> Specifically, with KVM we now track the progress of data transfer and if it
> is not making forward progress during a set window of time, we will abort
> the migration. This ensures you don't get a migration that never ends. We
> also now have code which dynamically increases the max permitted downtime
> during switchover, to try and make it more likely to succeed. We could do
> with getting feedback on how well the various tunable settings work in
> practice for real world deployments, to see if we need to change any
> defaults.
>
> There was a proposal to nova to allow the 'pause' operation to be invoked
> while migration was happening. This would turn a live migration into a
> coma-migration, thereby ensuring it succeeds. I can't remember if this
> merged or not, as I can't find the review offhand, but it's important to
> have this ASAP IMHO, as when evacuating VMs from a host admins need a knob
> to use to force successful evacuation, even at the cost of pausing the
> guest temporarily.

There are two different proposals - cancel an on-going live migration and
pause the VM during live migration. Both are very important. Right now there
is no way to interact with an on-going live migration through Nova.

The specification for 'cancel on-going live migration' is up for review [1].
'Pause VM during live migration' (it might be something like
force-live-migration) depends on this change, so I'm waiting with the
specification until the 'cancel' spec is merged. I'll try to prepare it
before the summit so both specs can be discussed in Tokyo.

> In libvirt upstream we now have the ability to filter which disks are
> migrated during block migration. We need to leverage that new feature to
> fix the long-standing problems of block migration when non-local images
> are attached - e.g. cinder volumes. We definitely want this in Mitaka.
>
> We should look at what we need to do to isolate the migration data network
> from the main management network. Currently we live migrate over
> whatever network is associated with the compute host's primary hostname /
> IP address. This is not necessarily the fastest NIC on the host. We ought
> to be able to record an alternative hostname / IP address against each
> compute host to indicate the desired migration interface.
>
> Libvirt/KVM have the ability to turn on compression for migration, which
> again improves the chances of convergence & thus success.
> We would look at leveraging that.

It is merged in QEMU (version 2.4); however, it isn't merged in Libvirt
yet [2] (patches 1-9 from ShaoHe Feng). The simplest solution shouldn't
require any work in Nova, as it's just another live migration flag. To
extend this we will probably need to add an API call to nova, e.g., to
change the compression ratio or the number of compression threads.

However, this work is for the O cycle (or even later) IMHO. The latest QEMU
in use is 2.3 (in Ubuntu 15.10). Adoption of QEMU 2.4 and a Libvirt with
compression support will take some time, so we don't need to focus on it
right now.

> QEMU has a crude "auto-converge" flag you can turn on, which limits guest
> CPU execution time, in an attempt to slow down data dirtying rate to again
> improve the chance of successful convergence.
>
> I'm working on enhancements to QEMU itself to support TLS encryption for
> migration. This will enable openstack to have a secure migration datastream
> without having to tunnel via libvirtd. This is useful as tunneling via
> libvirtd doesn't work with block migration. It will also be much faster
> than tunnelling. This might be merged in QEMU before the Mitaka cycle ends,
> but more likely it is Nxxx cycle material.

+++ Looking forward to see it!

> There is also work on post-copy migration in QEMU. Normally with live
> migration, the guest doesn't start executing on the target host until
> migration
> has transferred all data. There are many workloads where that doesn't
> work, as the guest is dirtying data too quickly. With post-copy you can
> start running the guest on the target at any time, and when it faults on
> a missing page, that page will be pulled from the source host. This is
> slightly more fragile, as you risk losing the guest entirely if the
> source host dies before migration finally
> completes. It does guarantee that migration will succeed no matter what
> workload is in the guest. This is probably Nxxxx cycle material.
>
> Testing. Testing. Testing.

+++ We need functional tests for LM.

> Lots more I can't think of right now....
>

One more thing - there is a lot of effort around OpenStack upgradeability.
However, if a nova-compute upgrade happens while a live migration is in
progress, it will leave things in a very messy state. We should consider,
e.g., a soft restart that waits for the current live migration (or probably
any other long-running action) to finish. A long-term solution would be to
implement some kind of live migration recovery/cleanup mechanism in nova.
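To make the soft-restart idea concrete, here is a minimal drain-loop
sketch; get_active_migrations is a hypothetical hook, not an existing
Nova API, and the clock/sleep parameters exist only to make the loop
testable.

```python
import time


def wait_for_migrations(get_active_migrations, timeout=600, poll=5.0,
                        clock=time.monotonic, sleep=time.sleep):
    """Block until no live migrations remain, or raise on timeout.

    get_active_migrations() is assumed to return the list of in-flight
    migrations on this compute host.
    """
    deadline = clock() + timeout
    while get_active_migrations():
        if clock() >= deadline:
            # Give up rather than hang the upgrade forever; the operator
            # would then have to decide whether to abort the migrations.
            raise TimeoutError('live migrations still running at shutdown')
        sleep(poll)
```

A service manager would call this between "stop accepting new work" and
"terminate the process", so an in-flight migration is never cut off
mid-transfer.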

[1] https://review.openstack.org/#/c/179149/
[2] https://libvirt.org/pending.html

Kind Regards,
Pawel Koniszewski
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6499 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/2834abc2/attachment.bin>

From duncan.thomas at gmail.com  Mon Sep 21 09:49:55 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Mon, 21 Sep 2015 12:49:55 +0300
Subject: [openstack-dev] [CINDER] [PTL Candidates] Questions
In-Reply-To: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
Message-ID: <CAOyZ2aEq6+Yf7fbZkzW=prqs21ErrV5cjX3L=e=NkeEOhMA3yg@mail.gmail.com>

Hi John. Thanks for the questions.


> 1. Do you actually have the time to spend to be PTL?
>

I'm very much aware, and discussed with my management prior to standing,
that being PTL is a pretty much full time job. I realise I'm somewhat
limited by not being in a US time zone, however I'm pretty flexible with
working hours, and already spend a few evenings a week working US hours. I'd
also like to use my time-zone shift as an advantage - I'm aware of how
difficult it is for non-US contributors to get really involved in cinder
due to our (generally very efficient) IRC-centric nature. I'd like to see
if we can make better use of the tools we have for getting attention on
bugs, features and reviews.


> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder better)?
>

My main worry with Cinder is that we're drifting away from the core vision
of both OpenStack and the original Cinder team - a really good cloud, with
really good block storage, no matter the technology behind it. We've so
many half-finished features, APIs that only work under limited
circumstances and general development debt that is seriously hurting us
going forward. The new features being proposed are getting more niche, more
'everything and the kitchen sink' and less 'top quality, rock solid
service'. I'd like to shift the focus back to basics, and work on removing
the road blocks to fixing these issues - we have plenty of competent,
motivated people, but communication and bureaucratic issues both
within our team and between cinder and other projects (primarily but not
limited to nova and glance) have gotten in the way.

Things I'd like to see done this cycle:
- Python3 work - let's just push through it and get it done. Maybe focus on
it exclusively for a few days or a week some time this cycle. It's dragging
on, and since we aren't at the point where cinder actually runs under
python3, new problems slip in regularly.

- Replication, CGs, online backup etc. rolled out to more drivers. Let's
limit the amount of new things drivers need to add this cycle until we've
caught up on the backlog.

- Nova <-> cinder API. Fixing this in a way that works for the nova team
appears to need micro-versions. This API has been a thorn in our side for
all sorts of new features and bugs many times, let's tame it.

- Making CI failures easier to understand. I really struggle to read most
CI failures, and so don't follow up on them as often as I should. I'm sure
I'm not alone. I'm convinced that a small amount of work with white space,
headings etc in devstack and tempest logs could give a really big boost.
I'd also like to see a state other than 'failed' for situations where there
was a problem with the CI system itself and so it didn't get as far as
trying to deploy devstack. As I mentioned, we've enough smart people to
make improvements that should allow us all to be more productive.

- Reducing review noise. I suspect that some policing and emailing people
to improve etiquette on reviews (don't -1 for spelling and grammar, don't
post a review until it is ready to be reviewed, give people time to batch
comments rather than posting a new version for every nit, etc) will pay
off, but it needs time dedicated to it.

- Less out-of-band discussion on community decisions. I'm a big believer
that discussion on record and in public, either on IRC or email, has much
more value than private discussions and public statements. It also reduces
accusations of bias and unfairness.


> 3. Why do you want to be PTL for Cinder?
>

I want to see cinder continue to succeed. My code contributions have, for
various reasons, reduced in quantity and value against my efforts on
mentoring, designs, reviews and communications. I'd like to free up the
people who are actually writing good code to do more of that, by taking on
more of the non-code burden and working to remove road blocks that are
stopping people from making progress - be those internally with-in the
team, between openstack teams or even helping people solve problems
(managerial, legal or educational) within their own companies. I've had a
fair bit of success at that in the past, and I believe that now is the time
when those skills are the most effective ones to move cinder forward. We've
a great technical team, so I want to enable them to do more, while keeping
on top of scope creep and non-standardisation enough to enable cinder to be
what I and many others would like it to be.




I hope this helps people with their decision. Whoever wins, I have high
hopes for the future: there is nobody standing who hasn't been a pleasure
to work with in the past, and I don't expect that to change in the future.

-- 
Duncan Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/0773e619/attachment.html>

From emilien at redhat.com  Mon Sep 21 12:44:26 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 21 Sep 2015 08:44:26 -0400
Subject: [openstack-dev] [puppet] weekly meeting #52
Message-ID: <55FFFBAA.9050806@redhat.com>

Hello!

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150922

Feel free to add any additional items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

Regards,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/6ddb5cc0/attachment.pgp>

From paul.carlton2 at hpe.com  Mon Sep 21 12:50:32 2015
From: paul.carlton2 at hpe.com (Paul Carlton)
Date: Mon, 21 Sep 2015 13:50:32 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150921085609.GC28520@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <B987741E651FDE4584B7A9C0F7180DEB1CDC2D02@G4W3208.americas.hpqcorp.net>
 <20150921085609.GC28520@redhat.com>
Message-ID: <55FFFD18.5070203@hpe.com>

Daniel

Thanks.

We will need to do some work to recreate the instance performance and
disk i/o issues and investigate further.

My original message did not go out to the mailing list due to a
subscription issue, so I am including it here:


I'm just starting work on Nova upstream having been focused on live
migration orchestration in our large Public Cloud environment.  We were
trying to use live migration to do rolling reboots of compute nodes in 
order to apply software patches that required node or virtual machine 
restarts to apply.  For this sort of activity to work on a large scale 
the orchestration needs to be highly automated and integrate with the 
operations monitoring and issue tracking systems.  It also needs the 
mechanism used to move instances to be highly robust.

However the most significant impediment we encountered was customer
complaints about performance of instances during migration.  We did a 
little bit of work to identify the cause of this and concluded that the 
main issue was disk i/o contention.  I wonder if this is something you 
or others have encountered?  I'd be interested in any ideas for managing 
the rate of the migration processing to prevent it from adversely 
impacting the customer application performance.  I appreciate that if we 
throttle the migration processing it will take longer and may not be 
able to keep up with the rate of disk/memory change in the instance.

Could you point me at somewhere I can get details of the tuneable 
settings relating to cutover downtime, please?  I'm assuming that these 
are libvirt/qemu settings?  I'd like to play with them in our test 
environment to see if we can simulate busy instances and determine what 
works.  I'd also be happy to do some work to expose these in nova so the 
cloud operator can tweak if necessary?

I understand that you have added some functionality to the nova compute
manager to collect data on migration progress and emit this to the log file.

I'd like to propose that we extend this to emit a notification message
containing progress information so a cloud operator's orchestration can
consume these events and use them to monitor progress of individual
migrations.  This information could be used to generate alerts or 
tickets so that support staff can intervene.  The smarts in qemu to help 
it make progress are very welcome and necessary but in my experience the 
cloud operator needs to be able to manage these and if it is necessary 
to slow down or even pause a customer's instance to complete the 
migration the cloud operator may need to gain customer consent before 
proceeding.
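As a sketch of what such a per-migration progress notification might
carry (the field names here are illustrative, not a proposed schema):

```python
def migration_progress_payload(instance_uuid, memory_total, memory_remaining,
                               elapsed_s):
    """Build an illustrative notification payload for one live migration."""
    processed = memory_total - memory_remaining
    return {
        'instance_uuid': instance_uuid,
        'memory_total_bytes': memory_total,
        'memory_remaining_bytes': memory_remaining,
        # How far along the memory copy is; the dirty rate can push this
        # backwards between samples, which is itself a useful signal.
        'progress_percent': round(100.0 * processed / memory_total, 1),
        'elapsed_seconds': elapsed_s,
    }
```

An orchestrator consuming a stream of these could alert when progress
stalls or regresses for several consecutive samples.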

I am also considering submitting a proposal to build on the current spec 
for monitoring and cancelling migrations to make the migration status 
information available to users (based on a policy setting) and include an 
estimated time to complete in the response.  I appreciate 
that this would only be an 'estimate' but it may give the user some idea 
of how long they will need to wait until they can perform operations on 
their instance that are not currently permitted during migration.  To 
cater for the scenario where a customer urgently needs to perform an 
inhibited operation (like attach or detach a volume) then I would 
propose that we allow for a user to cancel the migration of their own 
instances.  This would be enabled for authorized users based on granting 
them a specific role.
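A naive version of that estimate could be remaining data divided by the
net drain rate, returning no estimate when the guest dirties memory as
fast as it is copied (this is my own back-of-envelope formula, not an
existing Nova calculation):

```python
def estimate_seconds_remaining(remaining_bytes, transfer_bps, dirty_bps=0):
    """Naive ETA for a live migration.

    Returns None when the dirty rate matches or exceeds the transfer
    rate, i.e. the migration will not converge at the current pace.
    """
    net_rate = transfer_bps - dirty_bps
    if net_rate <= 0:
        return None
    return remaining_bytes / net_rate
```

Returning None rather than a huge number makes the "will not converge"
case explicit to the user, which fits the 'estimate only' caveat above.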

More thoughts Monday!




-----Original Message-----
From: Daniel P. Berrange [mailto:berrange at redhat.com]
Sent: 21 September 2015 09:56
To: Carlton, Paul (Cloud Services)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] live migration in Mitaka

On Fri, Sep 18, 2015 at 05:47:31PM +0000, Carlton, Paul (Cloud Services) 
wrote:
> However the most significant impediment we encountered was customer
> complaints about performance of instances during migration.  We did a
> little bit of work to identify the cause of this and concluded that
> the main issue was disk i/o contention.  I wonder if this is
> something you or others have encountered?  I'd be interested in any
> ideas for managing the rate of the migration processing to prevent it
> from adversely impacting the customer application performance.  I
> appreciate that if we throttle the migration processing it will take
> longer and may not be able to keep up with the rate of disk/memory change in
> the instance.

I would not expect live migration to have an impact on disk I/O, unless 
your storage is network based and using the same network as the 
migration data. While migration is taking place you'll see a small 
impact on the guest compute performance, due to page table dirty bitmap 
tracking, but that shouldn't appear directly as a disk I/O problem. There 
is no throttling of guest I/O at all during migration.

> Could you point me at somewhere I can get details of the tuneable
> settings relating to cutover downtime, please?  I'm assuming that
> these are libvirt/qemu settings?  I'd like to play with them in our
> test environment to see if we can simulate busy instances and
> determine what works.  I'd also be happy to do some work to expose
> these in nova so the cloud operator can tweak if necessary?

It is already exposed as 'live_migration_downtime' along with 
live_migration_downtime_steps, and live_migration_downtime_delay.
Again, it shouldn't have any impact on guest performance while live 
migration is taking place. It only comes into effect when checking 
whether the guest is ready to switch to the new host.
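For intuition, here is a rough sketch of how those three options could
interact - the allowed cutover downtime ramps up in steps, with a
per-step delay scaled by guest data size. This mirrors the general idea,
not Nova's exact algorithm:

```python
def downtime_schedule(max_downtime_ms, steps, delay_s, data_gb):
    """Return (when_seconds, allowed_downtime_ms) pairs.

    max_downtime_ms ~ live_migration_downtime
    steps           ~ live_migration_downtime_steps
    delay_s         ~ live_migration_downtime_delay (scaled by guest size)
    """
    schedule = []
    for n in range(1, steps + 1):
        when = n * delay_s * max(data_gb, 1)   # bigger guests wait longer
        allowed = max_downtime_ms * n // steps  # linear ramp to the maximum
        schedule.append((when, allowed))
    return schedule
```

The point of the ramp is that the guest only pays the full downtime
budget if the migration has failed to converge at the smaller values.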

> I understand that you have added some functionality to the nova
> compute manager to collect data on migration progress and emit this to the
> log file.
> I'd like to propose that we extend this to emit notification message
> containing progress information so a cloud operator's orchestration
> can consume these events and use them to monitor progress of
> individual migrations.  This information could be used to generate
> alerts or tickets so that support staff can intervene.  The smarts in
> qemu to help it make progress are very welcome and necessary but in my
> experience the cloud operator needs to be able to manage these and if
> it is necessary to slow down or even pause a customer's instance to
> complete the migration the cloud operator may need to gain customer consent
> before proceeding.

We already update the Nova instance object's 'progress' value with the 
info on the migration progress. IIRC, this is visible via 'nova show 
<instance>'
or something like that.

Regards,
Daniel
-- 
|: http://berrange.com      -o- 
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o- 
http://virt-manager.org :|
|: http://autobuild.org       -o- 
http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o- 
http://live.gnome.org/gtk-vnc :|

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4722 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/c11cdc43/attachment.bin>

From rakhmerov at mirantis.com  Mon Sep 21 13:08:54 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Mon, 21 Sep 2015 16:08:54 +0300
Subject: [openstack-dev] [mistral] Cancelling team meeting - 09/21/2015
Message-ID: <81645307-33C8-4F8A-BB7E-A4E5BBB849E2@mirantis.com>

Mistral Team,

We're cancelling today's team meeting because a number of key members won't be able to attend.

The next one is scheduled for 28 Sep.

Renat Akhmerov
@ Mirantis Inc.





From paul.carlton2 at hpe.com  Mon Sep 21 13:09:48 2015
From: paul.carlton2 at hpe.com (Paul Carlton)
Date: Mon, 21 Sep 2015 14:09:48 +0100
Subject: [openstack-dev] Migrating offline instances
Message-ID: <5600019C.6070706@hpe.com>

Live migration using qemu only operates on running instances.  However,
when a cloud operator wants to move all instances off a hypervisor they
need to be able to migrate stopped and suspended instances too.  We
achieved this by bringing these instances to a paused state while they
are migrated, using the 'VIR_DOMAIN_START_PAUSED' flag.  I see from
https://review.openstack.org/#/c/85048/ that this idea has been rejected
in the past, but my understanding is there is still no solution to this
issue in libvirt?  Is there work in progress to implement a capability
for libvirt to migrate offline instances?
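The workaround described above can be sketched as follows; start_paused,
migrate and power_off here are hypothetical driver hooks standing in for
the real virt-driver calls, not actual Nova or libvirt APIs:

```python
def migrate_offline_instance(instance, dest, driver):
    """Move a stopped instance via a paused live migration."""
    was_stopped = instance['power_state'] == 'SHUTOFF'
    if was_stopped:
        # Boot the domain without running vCPUs (cf. VIR_DOMAIN_START_PAUSED)
        # so the normal live-migration path can operate on it.
        driver.start_paused(instance)
    try:
        driver.migrate(instance, dest)
    finally:
        if was_stopped:
            # Restore the original power state on the destination.
            driver.power_off(instance, host=dest)
```

Running instances take the plain migrate path unchanged; only stopped
ones get the pause/migrate/power-off wrapper.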

-- 
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:    +44 (0)7768 994283
Email:    mailto:paul.carlton2 at hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error, you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated you should consider this message and attachments as "HP CONFIDENTIAL".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/0c088498/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4722 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/0c088498/attachment.bin>

From doug at doughellmann.com  Mon Sep 21 13:44:06 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 21 Sep 2015 09:44:06 -0400
Subject: [openstack-dev] [release][ptl][all] creating stable/liberty
	branches for non-oslo libraries today
Message-ID: <1442842916-sup-9093@lrrr.local>


All,

We are doing final releases, constraints updates, and creating
stable/liberty branches for all of the non-Oslo libraries (clients
as well as glance_store, os-brick, etc.) today. I have contacted
the designate, neutron, nova, and zaqar teams about final releases
for their clients today based on the list of unreleased changes.
All of the other libs looked like their most recent release would
be fine as a stable branch, so we'll be using those.

Doug


From tim at styra.com  Mon Sep 21 13:48:09 2015
From: tim at styra.com (Tim Hinrichs)
Date: Mon, 21 Sep 2015 13:48:09 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <CAB+y+8Qtp+veJvzoTh6HftZus_Sk4tgpTEtyJhWKyQSM=tQ_iA@mail.gmail.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <CAB+y+8Qtp+veJvzoTh6HftZus_Sk4tgpTEtyJhWKyQSM=tQ_iA@mail.gmail.com>
Message-ID: <CAJjxPAD9M+6ZkRzvJLWfAm+tznz8Hq_zTVtpgeD=4UsQdpWUGg@mail.gmail.com>

Did the client install properly?  If so, you can use:

$ openstack help congress

Tim

On Sun, Sep 20, 2015 at 12:35 AM himanshu sharma <him.funkyy at gmail.com>
wrote:

> Hi,
>
> Greetings for the day.
>
> I am having trouble finding the CLI commands for Congress with which I
> can create and delete a rule within a policy and view different data
> sources.
> Can you please provide me the list of CLI commands for the same?
> Waiting for the reply.
>
>
> Regards
> Himanshu Sharma
>
> On Sat, Sep 19, 2015 at 5:44 AM, Tim Hinrichs <tim at styra.com> wrote:
>
>> It's great to have this available!  I think it'll help people understand
>> what's going on MUCH more quickly.
>>
>> Some thoughts.
>> - The image is 3GB, which took me 30 minutes to download.  Are all VMs
>> this big?  I think we should finish this as a VM but then look into doing
>> it with containers to make it EVEN easier for people to get started.
>>
>> - It gave me an error about a missing shared directory when I started up.
>>
>> - I expected devstack to be running when I launched the VM.  devstack
>> startup time is substantial, and if there's a problem, it's good to assume
>> the user won't know how to fix it.  Is it possible to have devstack up and
>> running when we start the VM?  That said, it started up fine for me.
>>
>> - It'd be good to have a README to explain how to use the use-case
>> structure. It wasn't obvious to me.
>>
>> - The top-level dir of the Congress_Usecases folder has a
>> Congress_Usecases folder within it.  I assume the inner one shouldn't be
>> there?
>>
>> - When I ran the 10_install_policy.sh, it gave me a bunch of
>> authorization problems.
>>
>> But otherwise I think the setup looks reasonable.  Will there be an undo
>> script so that we can run the use cases one after another without worrying
>> about interactions?
>>
>> Tim
>>
>>
>> On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:
>>
>>> Hi Congress folks,
>>>
>>>
>>>
>>> BTW the login/password for the VM is vagrant/vagrant
>>>
>>>
>>>
>>> -Shiv
>>>
>>>
>>>
>>>
>>>
>>> *From:* Shiv Haris [mailto:sharis at Brocade.com]
>>> *Sent:* Thursday, September 17, 2015 5:03 PM
>>> *To:* openstack-dev at lists.openstack.org
>>> *Subject:* [openstack-dev] [Congress] Congress Usecases VM
>>>
>>>
>>>
>>> Hi All,
>>>
>>>
>>>
>>> I have put my VM (virtualbox) at:
>>>
>>>
>>>
>>> http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova
>>>
>>>
>>>
>>> I usually run this on a MacBook Air, but it should work on other
>>> platforms as well. I chose VirtualBox since it is free.
>>>
>>>
>>>
>>> Please send me your usecases, and I can incorporate them in the VM and
>>> send you an updated image. Please take a look at the structure I have in
>>> place for the first usecase; I would prefer it be the same for the other
>>> usecases. (However, I am still open to suggestions for changes.)
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> -Shiv
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/818b3441/attachment.html>

From jesse.pretorius at gmail.com  Mon Sep 21 14:11:25 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Mon, 21 Sep 2015 15:11:25 +0100
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <55FC0B8A.4060303@mhtx.net>
References: <55FC0B8A.4060303@mhtx.net>
Message-ID: <CAGSrQvzERPpAGb0e6NxOZZSSz-K9EKa1tRsPC=oNqWTD5DW2Xg@mail.gmail.com>

On 18 September 2015 at 14:03, Major Hayden <major at mhtx.net> wrote:

> Hey there,
>
> I started working on a bug[1] last night about adding a managed NTP
> configuration to openstack-ansible hosts.  My patch[2] gets chrony up and
> running with configurable NTP servers, but I'm still struggling to meet the
> "Proposal" section of the bug where the author has asked for non-infra
> physical nodes to get their time from the infra nodes.  I can't figure out
> how to make it work for AIO builds when one physical host is part of all of
> the groups. ;)
>
> I'd argue that time synchronization is critical for a few areas:
>
>   1) Security/auditing when comparing logs
>   2) Troubleshooting when comparing logs
>   3) I've been told swift is time-sensitive
>   4) MySQL/Galera don't like time drift
>
> However, there's a strong argument that this should be done by deployers,
> and not via openstack-ansible.  I'm still *very* new to the project and I'd
> like to hear some feedback from other folks.
>
> [1] https://bugs.launchpad.net/openstack-ansible/+bug/1413018
> [2] https://review.openstack.org/#/c/225006/


We have historically taken the stance of leaving something like this as a
deployer concern - much like setting up host networking and setting host
repositories. That said, there's value in opinionation based on best
practices learned from hard-won lessons in the trenches.

I'm somewhat on the fence with this. As-is I don't think the review should
go in. That said, I'd be more open to an individual role being used to
implement an appropriate network time configuration - whether that role be
something that exists within Ansible Galaxy, or whether it's a new role in
the current repository, or as its own repository in the OpenStack-Ansible
'big tent' as proposed in https://review.openstack.org/213779

I do definitely think that there's value in preparing some documentation
which will help prospective deployers understand how they can consume roles
from Ansible Galaxy (or some role in an arbitrary repository) to solve
common problems like this. The tooling is already in the OpenStack-Ansible
repository, so all it needs is a guiding document which describes how to
use it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/95e00b0c/attachment.html>

From mordred at inaugust.com  Mon Sep 21 14:12:52 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Mon, 21 Sep 2015 09:12:52 -0500
Subject: [openstack-dev] Patches coming for .coveragerc
Message-ID: <56001064.3040909@inaugust.com>

Hey all!

Coverage released a 4.0 today (yay!) which changes how the config file 
is read. As a result, "ignore-errors" in the .coveragerc file needs to 
be "ignore_errors".
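For anyone applying the rename by hand, a minimal .coveragerc would
change like this (assuming the option lives in the usual [report]
section):

```ini
# .coveragerc -- coverage 4.0 renamed the hyphenated option
[report]
# old (coverage < 4.0): ignore-errors = True
ignore_errors = True
```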

We're running a script right now to submit a change to every project 
with this change. The topic will be coverage-v4

Enjoy the patch spam

Monty


From nik.komawar at gmail.com  Mon Sep 21 14:23:20 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Mon, 21 Sep 2015 10:23:20 -0400
Subject: [openstack-dev] [release][ptl][all] creating stable/liberty
 branches for non-oslo libraries today
In-Reply-To: <1442842916-sup-9093@lrrr.local>
References: <1442842916-sup-9093@lrrr.local>
Message-ID: <560012D8.3020809@gmail.com>

glance_store 0.9.1 sounds good. Thanks Doug!

On 9/21/15 9:44 AM, Doug Hellmann wrote:
> All,
>
> We are doing final releases, constraints updates, and creating
> stable/liberty branches for all of the non-Oslo libraries (clients
> as well as glance_store, os-brick, etc.) today. I have contacted
> the designate, neutron, nova, and zaqar teams about final releases
> for their clients today based on the list of unreleased changes.
> All of the other libs looked like their most recent release would
> be fine as a stable branch, so we'll be using those.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From andrew.melton at RACKSPACE.COM  Mon Sep 21 14:54:11 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Mon, 21 Sep 2015 14:54:11 +0000
Subject: [openstack-dev] New PyCharm License
Message-ID: <1442847252531.42564@RACKSPACE.COM>

Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/73c317dc/attachment.html>

From sombrafam at gmail.com  Mon Sep 21 15:02:25 2015
From: sombrafam at gmail.com (Erlon Cruz)
Date: Mon, 21 Sep 2015 12:02:25 -0300
Subject: [openstack-dev] [CINDER] [PTL Candidates] Questions
In-Reply-To: <CAOyZ2aEq6+Yf7fbZkzW=prqs21ErrV5cjX3L=e=NkeEOhMA3yg@mail.gmail.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
 <CAOyZ2aEq6+Yf7fbZkzW=prqs21ErrV5cjX3L=e=NkeEOhMA3yg@mail.gmail.com>
Message-ID: <CAF+Cadu-LG-uNcJbeUjpz8rp8GM-nZsxEYre+wqNMJDFLyc5QQ@mail.gmail.com>

John,

Thanks for the questions; they will really help me make the best choice.
I hadn't pondered the first question. +1 to making those and any other
suggested questions part of the candidate proposals.

Erlon

On Mon, Sep 21, 2015 at 6:49 AM, Duncan Thomas <duncan.thomas at gmail.com>
wrote:

> Hi John. Thanks for the questions.
>
>
>> 1. Do you actually have the time to spend to be PTL?
>>
>
> I'm very much aware, and discussed with my management prior to standing,
> that being PTL is a pretty much full time job. I realise I'm somewhat
> limited by not being in a US time zone, however I'm pretty flexible with
> working hours, and already spend a few evenings a week working US hours. I'd
> also like to use my time-zone shift as an advantage - I'm aware of how
> difficult it is for non-US contributors to get really involved in cinder
> due to our (generally very efficient) IRC-centric nature. I'd like to see
> if we can make better use of the tools we have for getting attention on
> bugs, features and reviews.
>
>
>> 2. What are your plans to make the Cinder project as a core component
>> better (no... really, what specifically and how does it make Cinder better)?
>>
>
> My main worry with Cinder is that we're drifting away from the core vision
> of both OpenStack and the original Cinder team - a really good cloud, with
> really good block storage, no matter the technology behind it. We've so
> many half-finished features, APIs that only work under limited
> circumstances and general development debt that is seriously hurting us
> going forward. The new features being proposed are getting more niche, more
> 'everything and the kitchen sink' and less 'top quality, rock solid
> service'. I'd like to shift the focus back to basics, and work on fixing
> the road blocks to fixing these issues - we have plenty of competent
> motivated people, but communication and bureaucratic issues both
> within our team and between cinder and other projects (primarily but not
> limited to nova and glance) have gotten in the way.
>
> Things I'd like to see done this cycle:
> - Python3 work - let's just push through it and get it done. Maybe focus
> on it exclusively for a few days or a week some time this cycle. It's
> dragging on, and since we aren't at the point where cinder actually runs
> under python3, new problems slip in regularly.
>
> - Replication, CGs, online backup etc. rolled out to more drivers. Let's
> limit the amount of new things drivers need to add this cycle until we've
> caught up on the backlog.
>
> - Nova <-> cinder API. Fixing this in a way that works for the nova team
> appears to need micro-versions. This API has been a thorn in our side for
> all sorts of new features and bugs many times, let's tame it.
>
> - Making CI failures easier to understand. I really struggle to read most
> CI failures, and so don't follow up on them as often as I should. I'm sure
> I'm not alone. I'm convinced that a small amount of work with white space,
> headings etc in devstack and tempest logs could give a really big boost.
> I'd also like to see a state other than 'failed' for situations where there
> was a problem with the CI system itself and so it didn't get as far as
> trying to deploy devstack. As I mentioned, we've enough smart people to
> make improvements that should allow us all to be more productive.
>
> - Reducing review noise. I suspect that some policing and emailing people
> to improve etiquette on reviews (don't -1 for spelling and grammar, don't
> post a review until it is ready to be reviewed, give people time to batch
> comments rather than posting a new version for every nit, etc) will pay
> off, but it needs time dedicated to it.
>
> - Less out-of-band discussion on community decisions. I'm a big believer
> that discussion on record and in public, either on IRC or email, has much
> more value than private discussions and public statements. It also reduces
> accusations of bias and unfairness.
>
>
>> 3. Why do you want to be PTL for Cinder?
>>
>
> I want to see cinder continue to succeed. My code contributions have, for
> various reasons, declined in quantity and value compared with my efforts on
> mentoring, designs, reviews and communications. I'd like to free up the
> people who are actually writing good code to do more of that, by taking on
> more of the non-code burden and working to remove road blocks that are
> stopping people from making progress - be those internally within the
> team, between openstack teams or even helping people solve problems
> (managerial, legal or educational) within their own companies. I've had a
> fair bit of success at that in the past, and I believe that now is the time
> when those skills are the most effective ones to move cinder forward. We've
> a great technical team, so I want to enable them to do more, while keeping
> on top of scope creep and non-standardisation enough to enable cinder to be
> what I and many others would like it to be.
>
>
>
>
> I hope this helps people with their decision. Whoever wins, I have high
> hopes for the future, there is nobody standing who hasn't been a pleasure
> to work with in the past, and I don't expect that to change in the future.
>
> --
> Duncan Thomas
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/2cb05313/attachment.html>

From deepika at cisco.com  Mon Sep 21 15:19:01 2015
From: deepika at cisco.com (Deepika Gupta (deepika))
Date: Mon, 21 Sep 2015 15:19:01 +0000
Subject: [openstack-dev] New PyCharm License
Message-ID: <D225982B.51E9%deepika@cisco.com>

Hi Andrew,

My launchpad-id is 'deepika-j'.

Thanks,



Deepika Gupta
TECHNICAL LEADER.ENGINEERING
CVG
deepika at cisco.com<mailto:deepika at cisco.com>
Phone: +1 978 936 8295


Cisco.com<http://www.cisco.com>



[http://www.cisco.com/assets/swa/img/thinkbeforeyouprint.gif] Think before you print.

This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message.

Please click here<http://www.cisco.com/web/about/doing_business/legal/cri/index.html> for Company Registration Information.




From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 10:54 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] New PyCharm License


Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/5789e7d0/attachment.html>

From vkommadi at redhat.com  Mon Sep 21 15:19:59 2015
From: vkommadi at redhat.com (Venkata Anil)
Date: Mon, 21 Sep 2015 20:49:59 +0530
Subject: [openstack-dev] [neutron] [floatingip] Selecting router for
 floatingip when subnet is connected to multiple routers
In-Reply-To: <CAF+Cadu-LG-uNcJbeUjpz8rp8GM-nZsxEYre+wqNMJDFLyc5QQ@mail.gmail.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
 <CAOyZ2aEq6+Yf7fbZkzW=prqs21ErrV5cjX3L=e=NkeEOhMA3yg@mail.gmail.com>
 <CAF+Cadu-LG-uNcJbeUjpz8rp8GM-nZsxEYre+wqNMJDFLyc5QQ@mail.gmail.com>
Message-ID: <5600201F.20002@redhat.com>

Hi All

I need your opinion on selecting router for floatingip when subnet is 
connected to multiple routers.

When multiple routers are connected to a subnet, a vm on that subnet will only
send packets destined for an external network to the router with the subnet's
default gateway.
Should we always choose this router (i.e. the router with the subnet's default
gateway) for the floatingip?

We have two scenarios -

1) Multiple routers connected to the same subnet and also the same external network.
    In this case, which router should we select for the floatingip?
    Should we choose the first router in the db list, or the router with the
default gateway? What if the router with the subnet's default gateway is not present?

2) Multiple routers connected to the same subnet and different external
networks.
    In this case, the user has the choice to create the floatingip on any
external network (and on the router connected to that external network).
    But this router may not be the one having the subnet's default
gateway. Should we allow this?

Thanks
Anil Venkata


From deepika at cisco.com  Mon Sep 21 15:23:05 2015
From: deepika at cisco.com (Deepika Gupta (deepika))
Date: Mon, 21 Sep 2015 15:23:05 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <D225982B.51E9%deepika@cisco.com>
References: <D225982B.51E9%deepika@cisco.com>
Message-ID: <D225992A.51EE%deepika@cisco.com>

Sorry for the mass distribution



Deepika Gupta
TECHNICAL LEADER.ENGINEERING
CVG
deepika at cisco.com<mailto:deepika at cisco.com>
Phone: +1 978 936 8295


Cisco.com<http://www.cisco.com>







From: deepika Gupta <deepika at cisco.com<mailto:deepika at cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 11:19 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] New PyCharm License

Hi Andrew,

My launchpad-id is 'deepika-j'.

Thanks,



Deepika Gupta
TECHNICAL LEADER.ENGINEERING
CVG
deepika at cisco.com<mailto:deepika at cisco.com>
Phone: +1 978 936 8295


Cisco.com<http://www.cisco.com>







From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 10:54 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] New PyCharm License


Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/f214b996/attachment.html>

From tengqim at linux.vnet.ibm.com  Mon Sep 21 15:26:44 2015
From: tengqim at linux.vnet.ibm.com (Qiming Teng)
Date: Mon, 21 Sep 2015 23:26:44 +0800
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442847252531.42564@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
Message-ID: <20150921152643.GB14359@qiming-ThinkCentre-M58p>

launchpad-id: tengqim

Thanks.
 Qiming



From jesse.pretorius at gmail.com  Mon Sep 21 15:31:16 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Mon, 21 Sep 2015 16:31:16 +0100
Subject: [openstack-dev] [openstack-ansible] Mitaka Summit sessions
Message-ID: <CAGSrQvwNf6a563KrC=g22JM6uQnjzfe6Vs4H-QFQd65juX+wqg@mail.gmail.com>

Hi everyone,

As the Mitaka summit draws nearer, I'd like a broad view of what people
would like to discuss at the summit. This can include anyone's input!
Obviously our space and time will be limited, so any sessions that we don't
get to formally do at the summit we'll either try to do informally during
the summit (in a workgroup session or something like that), or we'll try to
engage at an alternative date (like a midcycle).

Some sessions have already been proposed - please feel free to add to the
list. Try to use the same format so that we know whether you're
facilitating or whether you want someone else to, what preparation
attendees would need, etc.

https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit

It's probably a good idea to review the currently registered blueprints [0]
and specs [1] before adding any proposed sessions - just to make sure that
you're not covering something that's already on the go. For any current
blueprints/specs we're certainly open to more discussion, but it'd be great
to see that discussion happen in the spec reviews. :)

[0] https://blueprints.launchpad.net/openstack-ansible
[1]
https://review.openstack.org/#/q/project:openstack/openstack-ansible-specs+status:open,n,z

-- 
Jesse Pretorius
IRC: odyssey4me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/a0e5bfba/attachment.html>

From josh at pcsforeducation.com  Mon Sep 21 15:49:01 2015
From: josh at pcsforeducation.com (Josh Gachnang)
Date: Mon, 21 Sep 2015 15:49:01 +0000
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
Message-ID: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>

Hey y'all, it's with a heavy heart I have to announce I'll be stepping down
from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
healthcare startup (Triggr Health) and won't have the time to dedicate to
being an effective OpenStack reviewer.

Ever since the OnMetal team proposed IPA allllll the way back in the
Icehouse midcycle, this community has been welcoming, helpful, and all
around great. You've all helped me grow as a developer with your in depth
and patient reviews, for which I am eternally grateful. I'm really sad I
won't get to see everyone in Tokyo.

I'll still be on IRC after leaving, so feel free to ping me for any reason
:)

- JoshNang
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/1ef8ac50/attachment.html>

From akasatkin at mirantis.com  Mon Sep 21 16:07:09 2015
From: akasatkin at mirantis.com (Aleksey Kasatkin)
Date: Mon, 21 Sep 2015 11:07:09 -0500
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
In-Reply-To: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
References: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
Message-ID: <CA+anEOB6Pdk4o50Kpgyd6Qezv+dH=WOGkWTaiAh8iNMmWdt2Lw@mail.gmail.com>

Hi,

Just a remark: python-fuelclient is missing here.


Aleksey Kasatkin


On Sun, Sep 20, 2015 at 3:56 PM, Mike Scherbakov <mscherbakov at mirantis.com>
wrote:

> Hi all,
> as part of my larger proposal on improvements to code review workflow [1], we
> need to have cores for repositories, not for the whole Fuel. It is the path
> we have been taking for a while, with new core reviewers added to specific repos
> only. Now we need to complete this work.
>
> My proposal is:
>
>    1. Get rid of one common fuel-core [2] group, members of which can
>    merge code anywhere in Fuel. Some members of this group may cover a couple
>    of repositories, but can't really be cores in all repos.
>    2. Extend existing groups, such as fuel-library [3], with members from
>    fuel-core who are keeping up with large number of reviews / merges. This
>    data can be queried at Stackalytics.
>    3. Establish a new group "fuel-infra", and ensure that it's included
>    into any other core group. This is for maintenance purposes, it is expected
>    to be used only in exceptional cases. Fuel Infra team will have to decide
>    whom to include into this group.
>    4. Ensure that fuel-plugin-* repos will not be affected by removal of
>    fuel-core group.
>
> #2 needs specific details. Stackalytics can show active cores easily, we
> can look at people with *:
> http://stackalytics.com/report/contribution/fuel-web/180. This is for
> fuel-web; change the link for other repos accordingly. If people were added
> specifically to a particular group, I'm leaving them as is (some of them are no
> longer active, but let's clean them up separately from this group
> restructure process).
>
>    - fuel-library-core [3] group will have following members: Bogdan D.,
>    Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>    - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
>    Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>    - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>    - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>    - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
>    Urlapova
>    - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>    Konstantinov, Olga Gusarenko
>    - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry
>    Pyzhov, Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>    - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>    - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>    Sledzinsky, Dmitry Shulyak
>    - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>    - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
>    Urlapova
>    - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly
>    Kramskikh
>    - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey
>    Kasatkin (this project seems to be dead, let's consider ripping it out)
>    - fuel-specs-core: there is no such group at the moment. I propose
>    to create one with following members, based on stackalytics data [16]:
>    Vitaly Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir
>    Kuklin, Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko,
>    Mike Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge
>    after Fuel PTL/Component Leads elections
>    - fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
>    Gelbukh, Ilya Kharin
>    - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly
>    Parakhin
>    - fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
>    Schultz, Evgeny Li, Igor Kalnitsky
>    - fuel-provision: repo seems to be outdated, needs to be removed.
>
> I suggest making changes in groups first, and then separately addressing
> specific issues like removing someone from cores (not doing enough reviews
> anymore, or too many positive reviews - let's say > 95%).
>
> I hope I didn't miss anyone / anything. Please check carefully.
> Comments / objections?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> [2] https://review.openstack.org/#/admin/groups/209,members
> [3] https://review.openstack.org/#/admin/groups/658,members
> [4] https://review.openstack.org/#/admin/groups/664,members
> [5] https://review.openstack.org/#/admin/groups/655,members
> [6] https://review.openstack.org/#/admin/groups/646,members
> [7] https://review.openstack.org/#/admin/groups/656,members
> [8] https://review.openstack.org/#/admin/groups/657,members
> [9] https://review.openstack.org/#/admin/groups/659,members
> [10] https://review.openstack.org/#/admin/groups/1000,members
> [11] https://review.openstack.org/#/admin/groups/660,members
> [12] https://review.openstack.org/#/admin/groups/661,members
> [13] https://review.openstack.org/#/admin/groups/662,members
> [14] https://review.openstack.org/#/admin/groups/663,members
> [15] https://review.openstack.org/#/admin/groups/624,members
> [16] http://stackalytics.com/report/contribution/fuel-specs/180
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/dcaafbdc/attachment.html>

From emilien at redhat.com  Mon Sep 21 16:20:05 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 21 Sep 2015 12:20:05 -0400
Subject: [openstack-dev] [puppet] feedback request about puppet-keystone
Message-ID: <56002E35.2010807@redhat.com>

Hi,

Puppet OpenStack group would like to know your feedback about using
puppet-keystone module.

Please take two minutes and fill out the form [1], which contains a few
questions. The answers will help us define our roadmap for the next
cycle and make Keystone deployments stronger for our users.

The results of the form should be visible online; otherwise I'll make
sure the results are 100% public and transparent.

Thank you for your time,

[1] http://goo.gl/forms/eiGWFkkXLZ
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/ed82e2a2/attachment.pgp>

From agordeev at mirantis.com  Mon Sep 21 16:25:13 2015
From: agordeev at mirantis.com (Alexander Gordeev)
Date: Mon, 21 Sep 2015 19:25:13 +0300
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
Message-ID: <CAFneLEebn6D6tEnNa=ferXE7wvxhS492LopCiFi+NTxjohUkBQ@mail.gmail.com>

Hi,

Mike, fuel-agent is missing here too.

http://stackalytics.com/report/contribution/fuel-agent/180


From Varun_Lodaya at symantec.com  Mon Sep 21 16:41:10 2015
From: Varun_Lodaya at symantec.com (Varun Lodaya)
Date: Mon, 21 Sep 2015 09:41:10 -0700
Subject: [openstack-dev] [neutron][lbaas] Barbican container lookup from
 lbaas
In-Reply-To: <55FCF86C.7050106@rackspace.com>
References: <D2223D00.814C%Lodaya_VarunMukesh@symantec.com>
 <55FCF86C.7050106@rackspace.com>
Message-ID: <D2258064.81E2%Lodaya_VarunMukesh@symantec.com>

Hey Douglas,

Thanks for the reply. Will look into barbican ACLs and test it out. Also, I
had one more follow-up question:
1) Currently the HAProxy LBaaS instance sits on the controller. The
certificate download happens on the controller too.
2) Once we move to the service-vm model, where service-vms could reside on
compute hypervisors, where will the cert download happen? Still on the
controller in that flow?

Thanks,
Varun

On 9/18/15, 10:53 PM, "Douglas Mendizábal"
<douglas.mendizabal at rackspace.com> wrote:

>* PGP Signed by an unknown key
>
>Hi Varun,
>
>I believe the expected workflow for this use case is:
>
>1. User uploads cert + key to Barbican
>2. User grants lbaas access to the barbican certificate container
>using the ACL API [1]
>3. User requests tls container by providing Barbican container reference
>
>Since the user grants the lbaas user access in step 2, the token
>generated using the conf file credentials will be accepted by Barbican
>and the certificate will be made available to lbaas.
>
>- Douglas Mendizábal
>
>[1] http://docs.openstack.org/developer/barbican/api/quickstart/acls.html
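The ACL grant in step 2 boils down to a single REST call on the container's /acl sub-resource. A minimal sketch of building that request body follows; the endpoint, container UUID and user id are placeholders for illustration, not values from this thread:

```python
import json

# Placeholders -- substitute real values from your deployment.
BARBICAN_ENDPOINT = "http://localhost:9311"
CONTAINER_UUID = "00000000-0000-0000-0000-000000000000"
LBAAS_USER_ID = "lbaas-service-user-id"

def build_acl_payload(user_ids, project_access=False):
    """Body for PUT /v1/containers/{uuid}/acl: grant read to the listed users."""
    return {"read": {"users": list(user_ids),
                     "project-access": project_access}}

payload = build_acl_payload([LBAAS_USER_ID])
print(json.dumps(payload, sort_keys=True))
# -> {"read": {"project-access": false, "users": ["lbaas-service-user-id"]}}

# The actual grant would then be (needs a valid keystone token; not executed here):
# import requests
# requests.put("%s/v1/containers/%s/acl" % (BARBICAN_ENDPOINT, CONTAINER_UUID),
#              headers={"X-Auth-Token": token, "Content-Type": "application/json"},
#              data=json.dumps(payload))
```

With "project-access" left false, only the listed users (here, the lbaas service user) can read the container.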
>
>On 9/19/15 12:13 AM, Varun Lodaya wrote:
>> Hi Guys,
>> 
>> With lbaasv2, I noticed that when we try to associate tls
>> containers with lbaas listeners, lbaas tries to validate the
>> container and while doing so, tries to get keystone token based on
>> tenant/user credentials in neutron.conf file. However, the barbican
>> containers could belong to different users in different tenants, in
>> that case, container look up would always fail? Am I missing
>> something?
>> 
>> Thanks, Varun
>> 
>> 



From douglas.mendizabal at rackspace.com  Mon Sep 21 16:57:43 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Mon, 21 Sep 2015 11:57:43 -0500
Subject: [openstack-dev] [neutron][lbaas] Barbican container lookup from
 lbaas
In-Reply-To: <D2258064.81E2%Lodaya_VarunMukesh@symantec.com>
References: <D2223D00.814C%Lodaya_VarunMukesh@symantec.com>
 <55FCF86C.7050106@rackspace.com>
 <D2258064.81E2%Lodaya_VarunMukesh@symantec.com>
Message-ID: <56003707.4020700@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I'm not familiar with the low level details of the lbaas
implementation, so hopefully someone from the lbaas team will be able
to answer this.

The URL I sent last week for the API docs has been updated though.
Here's the current URL:

http://docs.openstack.org/developer/barbican/api/index.html

- - Douglas

On 9/21/15 11:41 AM, Varun Lodaya wrote:
> Hey Douglas,
> 
> Thanks for the reply. Will look into barbican ACLs and test it out.
> Also, I had one more follow-up question: 1) Currently the HAProxy LBaaS
> instance sits on the controller. The certificate download happens
> on the controller too. 2) Once we move to the service-vm model, where
> service-vms could reside on compute hypervisors, where will the
> cert download happen? Still on the controller in that flow?
> 
> Thanks, Varun
> 
> On 9/18/15, 10:53 PM, "Douglas Mendizábal" 
> <douglas.mendizabal at rackspace.com> wrote:
> 
>> * PGP Signed by an unknown key
>> 
>> Hi Varun,
>> 
>> I believe the expected workflow for this use case is:
>> 
>> 1. User uploads cert + key to Barbican 2. User grants lbaas
>> access to the barbican certificate container using the ACL API
>> [1] 3. User requests tls container by providing Barbican
>> container reference
>> 
>> Since the user grants the lbaas user access in step 2, the token 
>> generated using the conf file credentials will be accepted by
>> Barbican and the certificate will be made available to lbaas.
>> 
>> - Douglas Mendizábal
>> 
>> [1] http://docs.openstack.org/developer/barbican/api/quickstart/acls.html
>> 
>> On 9/19/15 12:13 AM, Varun Lodaya wrote:
>>> Hi Guys,
>>> 
>>> With lbaasv2, I noticed that when we try to associate tls 
>>> containers with lbaas listeners, lbaas tries to validate the 
>>> container and while doing so, tries to get keystone token based
>>> on tenant/user credentials in neutron.conf file. However, the
>>> barbican containers could belong to different users in
>>> different tenants, in that case, container look up would always
>>> fail? Am I missing something?
>>> 
>>> Thanks, Varun
>>> 
> 
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJWADcHAAoJEB7Z2EQgmLX7Me8QAJ1gTTMecCoWZBReLe+k5t98
8YIdoMjWgcavVTB+v08r5UYlsyLb5CkUQdWVagb+af9fQFThGvrEZKycffI078cb
KoNW/ow0MQTTBEhrVDr2x800NuG3uitUAFKdNfPkhiB+4NWXrRnlIYD+XVMAJQ0L
2n7PFIC/F2VckSdUofhTJwAYBVGTRS/OL1G6dsxKh1LD3DEswKxyXb7TgVKaI2AO
os5z0BRCiP4Y1Dl+vLN9C4Hj5/juFF9aVe8wmNTCwUUb/auXhjhNiy75BKmNwu1r
kL2iPBCjjFFhx4JItZ/WJFhdGkceG+F5C4TeqJM7SUPM7SNXlXbhi2sTeb+WxvQE
SjrdjEiRlzM/JCzsj1s634TwgJvLPmmRhxVnOgVm1mlXwgPaAk7b8PMXDik1Wkrq
JzIorRb83XnV14yoJAh7kOrxxOlnB1UjnYh7YPr0KwYACkP8QQFkXxuzcePGUkOa
cLDmu3kfofASOQEpLsbbn2Eu9/FIzwvJDXVbdr/nDYtzDUJiBi6AitMVal0H7kJs
0IdXZcaR7vt73Ln9RPCr6+3nMC57odB06cgDalLeG1Kn5pPY/MWkYZol7d+v2H7y
c+nN7tAGaCsLzyhnhUffvns/ogSjTTW+JH2tfVDwf2pSTQhPvppcXBGXi8w95Ood
KFZ5W9p/tAP4BEsWGNtS
=6fJ9
-----END PGP SIGNATURE-----


From lucasagomes at gmail.com  Mon Sep 21 16:59:53 2015
From: lucasagomes at gmail.com (Lucas Alvares Gomes)
Date: Mon, 21 Sep 2015 17:59:53 +0100
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
In-Reply-To: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
References: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
Message-ID: <CAB1EZBoBnKi_ahZnNreArqcQNTcLr4iUknzRmjCS2kknkNBXhg@mail.gmail.com>

> Hey y'all, it's with a heavy heart I have to announce I'll be stepping down
> from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
> healthcare startup (Triggr Health) and won't have the time to dedicate to
> being an effective OpenStack reviewer.
>

-2

Sad to see you go, Josh. I wish you good luck in your new job, and
thanks for all the work you've done in Ironic; the project will
certainly miss you!

Lucas


From dtantsur at redhat.com  Mon Sep 21 17:04:49 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 21 Sep 2015 19:04:49 +0200
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
In-Reply-To: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
References: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
Message-ID: <560038B1.70007@redhat.com>

On 09/21/2015 05:49 PM, Josh Gachnang wrote:
> Hey y'all, it's with a heavy heart I have to announce I'll be stepping
> down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
> healthcare startup (Triggr Health) and won't have the time to dedicate
> to being an effective OpenStack reviewer.
>
> Ever since the OnMetal team proposed IPA allllll the way back in the
> Icehouse midcycle, this community has been welcoming, helpful, and all
> around great. You've all helped me grow as a developer with your in
> depth and patient reviews, for which I am eternally grateful. I'm really
> sad I won't get to see everyone in Tokyo.

I'm a bit sad to hear it :) it was a big pleasure to work with you. Have 
the best of luck in your new challenges!

>
> I'll still be on IRC after leaving, so feel free to ping me for any
> reason :)
>
> - JoshNang
>
>
>



From Abhishek.Kekane at nttdata.com  Mon Sep 21 17:10:04 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Mon, 21 Sep 2015 17:10:04 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442847252531.42564@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>

Hi Andrew,

My launchpad id is abhishek-kekane

Thank you,

Abhishek
________________________________________
From: Andrew Melton [andrew.melton at RACKSPACE.COM]
Sent: Monday, September 21, 2015 10:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] New PyCharm License

Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew


______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

From danehans at cisco.com  Mon Sep 21 17:25:32 2015
From: danehans at cisco.com (Daneyon Hansen (danehans))
Date: Mon, 21 Sep 2015 17:25:32 +0000
Subject: [openstack-dev] [magnum] Discovery
In-Reply-To: <CAH5-jC9iVOMKS5DX+shRZny4dnY9sQFck_Odccv+2+Dc9Rjb9g@mail.gmail.com>
References: <D21ED83B.67DF4%danehans@cisco.com>
 <1442516764564.93792@RACKSPACE.COM> <D22068B1.1B7C3%eguz@walmartlabs.com>
 <65278211-710F-4EFB-BA49-422B536CFD71@rackspace.com>
 <CAH5-jC9iVOMKS5DX+shRZny4dnY9sQFck_Odccv+2+Dc9Rjb9g@mail.gmail.com>
Message-ID: <D22588FA.681B0%danehans@cisco.com>

All,

Thanks for the feedback and additional ideas related to discovery. For clarity, I would like to circle back to the specific issue that I am experiencing while implementing Flannel for Swarm. Flannel cannot be implemented in Swarm bay types without making changes to discovery for Swarm. This is because:

  1.  Flannel requires etcd, which is not implemented in Magnum's Swarm bay type.
  2.  The discovery_url is implemented differently between the Kubernetes and Swarm bay types, making it impossible for Swarm and etcd discovery to coexist within the same bay type.

I am in the process of moving forward with option 2 of my original email so flannel can be implemented in swarm bay types [1]. I have created a bp [2] to address discovery more holistically. It would be helpful if you could provide your ideas in the whiteboard of the bp.

[1] https://review.openstack.org/#/c/224367/
[2] https://blueprints.launchpad.net/magnum/+spec/bay-type-discovery-options
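
As background, the difference shows up in the discovery_url itself: the Kubernetes bay type obtains one from etcd's public discovery service, while the Swarm bay type uses a Docker Hub cluster token. A minimal sketch of the two forms (the helper names are made up for illustration and are not Magnum API; the endpoints are the public services referenced in this thread):

```python
# Sketch of how the two hosted discovery mechanisms hand out a
# discovery_url. Helper names are illustrative, not Magnum code.

ETCD_DISCOVERY = "https://discovery.etcd.io"

def etcd_discovery_request(size):
    """URL a client GETs to obtain a fresh etcd discovery_url
    for a cluster of `size` members."""
    return "%s/new?size=%d" % (ETCD_DISCOVERY, size)

def swarm_discovery_url(cluster_token):
    """discovery_url a Swarm agent consumes once Docker Hub has
    issued a cluster token."""
    return "token://%s" % cluster_token

print(etcd_discovery_request(3))      # https://discovery.etcd.io/new?size=3
print(swarm_discovery_url("abc123"))  # token://abc123
```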

Regards,
Daneyon Hansen
Software Engineer
Email: danehans at cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen

From: Wanghua <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 1:18 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discovery

Swarm already supports etcd as a discovery backend [1]. So we can implement both hosted discovery with Docker Hub and discovery using etcd, and make hosted discovery with Docker Hub the default if discovery_url is not given.

If we run etcd in the bay, etcd also needs discovery [2]. The operator would have to set up an etcd cluster for other etcd clusters to discover, or use the public discovery service. I think it is not necessary to run etcd in a swarm cluster just for the discovery service. In a private cloud, the operator should set up a local etcd cluster for the discovery service, and all the bays can use it.

[1] https://docs.docker.com/swarm/discovery/
[2] https://github.com/coreos/etcd/blob/master/Documentation/clustering.md
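
Since Swarm picks its discovery backend from the scheme of the URL it is given, a single discovery_url attribute could drive either mechanism. A rough sketch of that dispatch (purely illustrative, not Magnum or Swarm code):

```python
# Illustrative only: classic Swarm selects a discovery backend by the
# discovery URL's scheme (token://, etcd://, consul://, ...), which is
# why one discovery_url attribute can serve both bay types.

def discovery_backend(discovery_url):
    """Return the backend implied by a discovery_url's scheme."""
    scheme = discovery_url.split("://", 1)[0]
    backends = {"token": "docker-hub", "etcd": "etcd", "consul": "consul"}
    return backends.get(scheme, "unknown")

print(discovery_backend("token://abc123"))               # docker-hub
print(discovery_backend("etcd://10.0.0.10:2379/swarm"))  # etcd
```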

Regards,
Wanghua

On Fri, Sep 18, 2015 at 11:39 AM, Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>> wrote:
In the case where a private cloud is used without access to the Internet, you do have the option of running your own etcd, and configuring that to be used instead.

Adding etcd to every bay should be optional, as a subsequent feature, but should be controlled by a flag in the Baymodel that defaults to off so the public discovery service is used. It might be nice to be able to configure Magnum in an isolated mode which would change the system level default for that flag from off to on.

Maybe the Baymodel resource attribute should be named local_discovery_service.

Should turning this on also set the minimum node count for the bay to 3? If not, etcd will not be highly available.
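
The three-node minimum follows from etcd's quorum rule: a cluster of n members needs a majority (n // 2 + 1) alive to stay writable, so it tolerates (n - 1) // 2 failures, and one- or two-member clusters tolerate none. A quick check of the arithmetic:

```python
# etcd availability arithmetic: a cluster of n members keeps serving
# writes only while a majority (quorum) of members is up.

def quorum(n):
    """Members required for a majority."""
    return n // 2 + 1

def fault_tolerance(n):
    """Members that can fail while the cluster keeps a quorum."""
    return n - quorum(n)

for n in (1, 2, 3, 5):
    print(n, quorum(n), fault_tolerance(n))
# 3 members is the smallest cluster that survives a single failure.
```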

Adrian

> On Sep 17, 2015, at 1:01 PM, Egor Guz <EGuz at walmartlabs.com<mailto:EGuz at walmartlabs.com>> wrote:
>
> +1 for no longer using the public discovery endpoint; most private cloud VMs don't have access to the internet, and the operator must run an etcd instance somewhere just for discovery.
>
> --
> Egor
>
> From: Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM><mailto:andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>>
> Date: Thursday, September 17, 2015 at 12:06
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>>
> Subject: Re: [openstack-dev] [magnum] Discovery
>
>
> Hey Daneyon,
>
>
> I'm fairly partial towards #2 as well. Though, I'm wondering if it's possible to take it a step further. Could we run etcd in each Bay without using the public discovery endpoint? And then configure Swarm to simply use the internal etcd as its discovery mechanism? This could cut one of our external service dependencies and make it easier to run Magnum in an environment with locked-down public internet access.
>
>
> Anyways, I think #2 could be a good start that we could iterate on later if need be.
>
>
> --Andrew
>
>
> ________________________________
> From: Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com><mailto:danehans at cisco.com<mailto:danehans at cisco.com>>>
> Sent: Wednesday, September 16, 2015 11:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum] Discovery
>
> All,
>
> While implementing the flannel --network-driver for swarm, I have come across an issue that requires feedback from the community. Here is the breakdown of the issue:
>
>  1.  Flannel [1] requires etcd to store network configuration. Meeting this requirement is simple for the kubernetes bay types since kubernetes requires etcd.
>  2.  A discovery process is needed for bootstrapping etcd. Magnum implements the public discovery option [2].
>  3.  A discovery process is also required to bootstrap a swarm bay type. Again, Magnum implements a publicly hosted (Docker Hub) option [3].
>  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm and etcd discovery.
>  5.  Etcd cannot be implemented in swarm because discovery_url is associated with swarm's discovery process and not etcd's.
>
> Here are a few options on how to overcome this obstacle:
>
>  1.  Make the discovery_url more specific, for example etcd_discovery_url and swarm_discovery_url. However, this option would needlessly expose both discovery URLs to all bay types.
>  2.  Swarm supports etcd as a discovery backend. This would mean discovery is similar for both bay types. With both bay types using the same mechanism for discovery, it will be easier to provide a private discovery option in the future.
>  3.  Do not support flannel as a network-driver for k8s bay types. This would require adding support for a different driver that supports multi-host networking such as libnetwork. Note: libnetwork is only implemented in the Docker experimental release: https://github.com/docker/docker/tree/master/experimental.
>
> I lean towards #2, but there may be other options, so feel free to share your thoughts. I would like to obtain feedback from the community before proceeding in a particular direction.
>
> [1] https://github.com/coreos/flannel
> [2] https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
> [3] https://docs.docker.com/swarm/discovery/
>
> Regards,
> Daneyon Hansen
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/2fd9aecb/attachment.html>

From openstack.org at sodarock.com  Mon Sep 21 17:31:03 2015
From: openstack.org at sodarock.com (John Villalovos)
Date: Mon, 21 Sep 2015 10:31:03 -0700
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
In-Reply-To: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
References: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
Message-ID: <CAGh2O4KAjedsd=Tt2TfUwGq5guSN5WJZVgx-c+eZpBcHFhU0-A@mail.gmail.com>

Sorry to see you go Josh :( Always great working with you.

Best of luck with the new job!

John

On Mon, Sep 21, 2015 at 8:49 AM, Josh Gachnang <josh at pcsforeducation.com>
wrote:

> Hey y'all, it's with a heavy heart I have to announce I'll be stepping
> down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
> healthcare startup (Triggr Health) and won't have the time to dedicate to
> being an effective OpenStack reviewer.
>
> Ever since the OnMetal team proposed IPA allllll the way back in the
> Icehouse midcycle, this community has been welcoming, helpful, and all
> around great. You've all helped me grow as a developer with your in depth
> and patient reviews, for which I am eternally grateful. I'm really sad I
> won't get to see everyone in Tokyo.
>
> I'll still be on IRC after leaving, so feel free to ping me for any reason
> :)
>
> - JoshNang
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/f3c5b5d0/attachment.html>

From doug at doughellmann.com  Mon Sep 21 17:35:55 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 21 Sep 2015 13:35:55 -0400
Subject: [openstack-dev] [rally][releases] New Rally release model
In-Reply-To: <55FFBF52.3080009@openstack.org>
References: <CAD85om01K=dJ7xOZTU1dR17iUOZhOOyGFMP3gNrAHhQSbWu5kQ@mail.gmail.com>
 <55FFBF52.3080009@openstack.org>
Message-ID: <1442856686-sup-6563@lrrr.local>

Excerpts from Thierry Carrez's message of 2015-09-21 10:26:58 +0200:
> Boris Pavlovic wrote:
> > The main idea is next: 
> > *) Master branch will be used for new major Rally versions development
> > e.g. 0.x.y -> 0.x+1.0 switch
> >    that can include not backward compatible changes. 
> 
> You mean x.y.0 -> x+1.0.0, right ?
> 
> > *) Latest version - we will port plugins, bug fixes and part of features
> > to it
> > *) Stable version - we will port only high & critical bug fixes if it is
> > possible 
> 
> So... this is pretty close from what we're doing elsewhere in OpenStack,
> except that we do:
> 
> Feature branches: not backward compatible changes
> Master: bug fixes, backward-compatible features, release regularly
> Stable: High/Critical bugfixes backports, release on-demand
> 
> The only difference with your model is how you split feature development
> between master and feature branches. In your model you do most of the
> feature development in the experimental branch (master) and port pieces
> of it in the release branch (latest). In our case only the
> backward-incompatible work lands in the experimental branch (feature/*),
> and the release branch (master) contains everything else.
> 
> I am just not sure it's different enough to justify being different :)
> 

I agree. The Oslo team releases as often as every week, and uses feature
branches for ongoing work like the zmq driver that shouldn't land
partially completed. Other teams use feature branches as well and release
less frequently (neutron is one I can think of off the top of my head
from liberty, but there have been others in the past).

Parts of our CI system rely on this model being the same for all
projects. For example, testing source versions of projects (rather
than packaged versions) together works much better if they are
on the same branch, since the fallback for a missing branch is to
use the 'master' branch. That means feature branches in one project can
be tested against master in another automatically. The release tools
also make some assumptions about the main release development work
happening on master instead of a separate release branch.

I'm concerned about the amount of tooling changes we would need to
implement in order to support one project being different in this
way.  Can you elaborate on how the current system won't meet your
needs?

Doug



From walter.boring at hpe.com  Mon Sep 21 17:45:13 2015
From: walter.boring at hpe.com (Walter A. Boring IV)
Date: Mon, 21 Sep 2015 10:45:13 -0700
Subject: [openstack-dev] [CINDER] [PTL Candidates] Questions
In-Reply-To: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
Message-ID: <56004229.8050701@hpe.com>

>
> 1. Do you actually have the time to spend to be PTL
>
> I don't think many people realize the time commitment. Between being
> on top of reviews and having a pretty consistent view of what's going
> on and in process; to meetings, questions on IRC, program management
> type stuff, etc.  Do you feel you'll have the ability to make PTL
> your FULL-time job?  Don't forget you're working with folks in a
> community that spans multiple time zones.
The short answer to this is yes.   Prior to even putting up my candidacy
I spoke with my management and informed them of what would be involved
with being PTL for Cinder, and that meant it was an upstream job.  I've
been working on Cinder for 3 years now and have seen the amount of time
that you and Mike have spent on the project, and it's significant to say
the least.   The wiki has a good guide for PTL candidates here:
https://wiki.openstack.org/wiki/PTL_Guide.   It's a decent start and
more of a "PTL for dummies" guide and is by no means everything a PTL is
and has to do.  Being a PTL means more than just attending meetings,
doing reviews, and communication.  It means being the lead evangelist
and ambassador for Cinder.   As PTL of a project, it's also important
not to forget about the future of the community and encourage new
members to contribute code to Cinder core itself, to help make Cinder a
better project.  For example, the recent additions by Kendall Nelson to
work on the cinder.conf.sample file
(https://review.openstack.org/#/c/219700).  The patch itself might have
more follow up work, as noted in the review, but she was very responsive
and was on top of the code to try and get it to land.  Sean, John and
myself all helped with reviews on that patch and worked together as a
team to help Kendall with her efforts.  We need more new contributors
like her.  The more inclusive and encouraging of new members in the
community the better.   I remember starting out working on Cinder back
in the Grizzly time frame and I also remember John, as the PTL, being
very helpful and encouraging of my efforts to learn how to write a
driver and how to contribute in general.  It was a very welcoming
experience at the time.  That is the type of PTL I'd like to be to help
repay the community.
>
> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder
> better)?
>
> Most candidates are representing a storage vendor naturally.  Everyone
> says "make Cinder better"; But how do you intend to balance vendor
> interest and the interest of the general project?  Where will your
> focus in the M release be?  On your vendor code or on Cinder as a
> whole?  Note; I'm not suggesting that anybody isn't doing the "right"
> thing here, I'm just asking for specifics.
  I believe I detailed some of these in my candidacy letter.   I firmly
believe that there are some Nova and Cinder interactions that need to
get fixed.  This will be a good first step along the way to allowing
active/active c-vol services.   Making Cinder better means not only
guiding the direction of features and fixes, but it also means
encouraging the community of driver developers to get involved and
informed about Cinder core itself.  We need a Cinder driver developer
how-to guide.  There are some items that driver developers need
to be aware of, and it would be great to be able to point folks to one
place.  For example, Fibre Channel drivers need to use the Fibre Channel
Zone Manager utils decorators during initialize_connection and
terminate_connection.  Also, during terminate_connection, a
driver should not always return the initiator_target_map.  Where is
that documented?  It's not, and it's only being caught in reviews.  The
trick, as always, is keeping that guide relevant with updates.
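
To make the zoning convention above concrete, here is a simplified, self-contained illustration. The real decorators live in cinder.zonemanager.utils (names recalled from memory and may differ by release); the stand-ins below are mocks so the sketch runs alone, and the driver is entirely hypothetical:

```python
# Simplified illustration of the FC Zone Manager convention described
# above: decorate initialize_connection/terminate_connection, and only
# return initiator_target_map on the last detach, so zones are only
# torn down when no attachment remains.

def add_fc_zone(fn):
    # Stand-in for the real "add zone" decorator.
    def wrapper(*args, **kwargs):
        conn_info = fn(*args, **kwargs)
        # A real implementation would program FC switch zones here.
        conn_info['zoned'] = True
        return conn_info
    return wrapper

def remove_fc_zone(fn):
    # Stand-in for the real "remove zone" decorator.
    def wrapper(*args, **kwargs):
        conn_info = fn(*args, **kwargs)
        # Zones are torn down only when the driver returns an
        # initiator_target_map, i.e. on the last detach.
        if conn_info.get('data', {}).get('initiator_target_map'):
            conn_info['zoned'] = False
        return conn_info
    return wrapper

class FakeFCDriver(object):
    """Hypothetical driver showing where the decorators apply."""

    @add_fc_zone
    def initialize_connection(self, volume, connector):
        return {'driver_volume_type': 'fibre_channel', 'data': {}}

    @remove_fc_zone
    def terminate_connection(self, volume, connector, last_attach=True):
        data = {}
        if last_attach:  # return the map only on the last detach
            data['initiator_target_map'] = {'wwpn1': ['tgt1']}
        return {'driver_volume_type': 'fibre_channel', 'data': data}
```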

   I've been pretty fortunate at HP to be able to convince my
management to make Cinder-specific issues a priority, such
as multi-attach, os-brick, live migration, and Nova <--> Cinder
interactions, to name a few.  My team at HP isn't just responsible for
maintaining the 3PAR/LeftHand drivers in Cinder.  We are also involved
in making Cinder a more robust, scalable project, so that we can make a
better Helion product for our customers.  Helion is OpenStack, and the
way we work on Helion is to first and foremost work on OpenStack Cinder
and Nova.  So, from my perspective, HP's interests allow me to work on
Cinder core first and foremost.

>
> 3. Why do you want to be PTL for Cinder?
>
> Seems like a silly question, but really when you start asking that
> question the answers can be surprising and somewhat enlightening. 
> There are different motivators for people; what's yours?  By the way,
> "my employer pays me a big bonus if I win" is a perfectly acceptable
> answer in my opinion, I'd prefer honesty over anything else.  You may
> not get my vote, but you'd get respect.
I've been working on various Open Source projects since I got out of
college in 1992.   Since my early days using Linux, I've always wanted
to work full time on Open Source projects, and OpenStack has fit that
bill for me in a big way.  OpenStack is the single best project/product
I've ever worked on, and I feel very fortunate that HP is willing to pay
me to have this much fun.  I personally won't get any additional salary,
monetary benefits, or a promotion from being PTL for Cinder.  What
motivates me is trying to become a better engineer to the point of being
a leader in the community.   I would be honoured and humbled to earn
your vote.


Cheers,
Walt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/c4a5ce4c/attachment.html>

From louis.fourie at huawei.com  Mon Sep 21 17:50:24 2015
From: louis.fourie at huawei.com (Henry Fourie)
Date: Mon, 21 Sep 2015 17:50:24 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
Message-ID: <0F8583BBE82FA449A8B78417CC07559A093E0A7A@SJCEML701-CHM.china.huawei.com>

Andrew,
   My launchpad id is lfourie
 - Louis

-----Original Message-----
From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com] 
Sent: Monday, September 21, 2015 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

Hi Andrew,

My launchpad id is abhishek-kekane

Thank you,

Abhishek
________________________________________
From: Andrew Melton [andrew.melton at RACKSPACE.COM]
Sent: Monday, September 21, 2015 10:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] New PyCharm License

Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew


______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ogelbukh at mirantis.com  Mon Sep 21 18:05:26 2015
From: ogelbukh at mirantis.com (Oleg Gelbukh)
Date: Mon, 21 Sep 2015 21:05:26 +0300
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
In-Reply-To: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
References: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
Message-ID: <CAFkLEwrzzAWjS=_v3kjOCHQyPFr5M5pboamjQDKUpkPBfGQCEQ@mail.gmail.com>

FYI, we have a separate core group for stackforge/fuel-octane repository
[1].

I'm supporting the move to modularization of Fuel with cleaner separation
of authority and better defined interfaces. Thus, I'm +1 to such a change
as a part of that move.

[1] https://review.openstack.org/#/admin/groups/1020,members

--
Best regards,
Oleg Gelbukh

On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov <mscherbakov at mirantis.com>
wrote:

> Hi all,
> as part of my larger proposal on improvements to code review workflow [1], we
> need to have cores per repository, not for the whole of Fuel. This is the path
> we have been taking for a while, with new core reviewers added to specific
> repos only. Now we need to complete this work.
>
> My proposal is:
>
>    1. Get rid of one common fuel-core [2] group, members of which can
>    merge code anywhere in Fuel. Some members of this group may cover a couple
>    of repositories, but can't really be cores in all repos.
>    2. Extend existing groups, such as fuel-library [3], with members from
>    fuel-core who are keeping up with large number of reviews / merges. This
>    data can be queried at Stackalytics.
>    3. Establish a new group "fuel-infra", and ensure that it's included
>    into any other core group. This is for maintenance purposes, it is expected
>    to be used only in exceptional cases. Fuel Infra team will have to decide
>    whom to include into this group.
>    4. Ensure that fuel-plugin-* repos will not be affected by removal of
>    fuel-core group.
>
> #2 needs specific details. Stackalytics can show active cores easily; we
> can look at people marked with *:
> http://stackalytics.com/report/contribution/fuel-web/180. This is for
> fuel-web; change the link for other repos accordingly. If people were added
> specifically to a particular group, I'm leaving them as is (some of them are
> no longer active, but let's clean them up separately from this group
> restructure process).
>
>    - fuel-library-core [3] group will have following members: Bogdan D.,
>    Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>    - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
>    Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>    - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>    - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>    - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
>    Urlapova
>    - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>    Konstantinov, Olga Gusarenko
>    - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry
>    Pyzhov, Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>    - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>    - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>    Sledzinsky, Dmitry Shulyak
>    - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>    - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
>    Urlapova
>    - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly
>    Kramskikh
>    - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey
>    Kasatkin (this project seems to be dead; let's consider ripping it out)
>    - fuel-specs-core: there is no such a group at the moment. I propose
>    to create one with following members, based on stackalytics data [16]:
>    Vitaly Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir
>    Kuklin, Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko,
>    Mike Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge
>    after Fuel PTL/Component Leads elections
>    - fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
>    Gelbukh, Ilya Kharin
>    - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly
>    Parakhin
>    - fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
>    Schultz, Evgeny Li, Igor Kalnitsky
>    - fuel-provision: repo seems to be outdated, needs to be removed.
>
> I suggest making changes to the groups first, and then separately addressing
> specific issues like removing someone from core (not doing enough reviews
> anymore, or too many positive reviews, let's say > 95%).
>
> I hope I don't miss anyone / anything. Please check carefully.
> Comments / objections?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> [2] https://review.openstack.org/#/admin/groups/209,members
> [3] https://review.openstack.org/#/admin/groups/658,members
> [4] https://review.openstack.org/#/admin/groups/664,members
> [5] https://review.openstack.org/#/admin/groups/655,members
> [6] https://review.openstack.org/#/admin/groups/646,members
> [7] https://review.openstack.org/#/admin/groups/656,members
> [8] https://review.openstack.org/#/admin/groups/657,members
> [9] https://review.openstack.org/#/admin/groups/659,members
> [10] https://review.openstack.org/#/admin/groups/1000,members
> [11] https://review.openstack.org/#/admin/groups/660,members
> [12] https://review.openstack.org/#/admin/groups/661,members
> [13] https://review.openstack.org/#/admin/groups/662,members
> [14] https://review.openstack.org/#/admin/groups/663,members
> [15] https://review.openstack.org/#/admin/groups/624,members
> [16] http://stackalytics.com/report/contribution/fuel-specs/180
>
>
> --
> Mike Scherbakov
> #mihgen
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/45b614f4/attachment.html>

From andrew.melton at RACKSPACE.COM  Mon Sep 21 18:27:53 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Mon, 21 Sep 2015 18:27:53 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <0F8583BBE82FA449A8B78417CC07559A093E0A7A@SJCEML701-CHM.china.huawei.com>
References: <1442847252531.42564@RACKSPACE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>,
 <0F8583BBE82FA449A8B78417CC07559A093E0A7A@SJCEML701-CHM.china.huawei.com>
Message-ID: <1442860074829.59721@RACKSPACE.COM>

Please follow this link to request a license: https://account.jetbrains.com/a/4c4ojw.

You will need a JetBrains account to request the license. This link is open for anyone to use, so please do not share it in the public. You may share it with other OpenStack contributors on your team, but if you do, please send me their launchpad-ids. Lastly, if you decide to stop using PyCharm, please send me an email so I can revoke the license and open it up for use by someone else.

Thanks!
Andrew
________________________________________
From: Henry Fourie <louis.fourie at huawei.com>
Sent: Monday, September 21, 2015 1:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

Andrew,
   My launchpad id is lfourie
 - Louis

-----Original Message-----
From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com]
Sent: Monday, September 21, 2015 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

Hi Andrew,

My launchpad id is abhishek-kekane

Thank you,

Abhishek
________________________________________
From: Andrew Melton [andrew.melton at RACKSPACE.COM]
Sent: Monday, September 21, 2015 10:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] New PyCharm License

Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew


______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From andrew.melton at RACKSPACE.COM  Mon Sep 21 18:30:30 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Mon, 21 Sep 2015 18:30:30 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
References: <1442847252531.42564@RACKSPACE.COM>,
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
Message-ID: <1442860231482.95559@RACKSPACE.COM>

Please follow this link to request a license: https://account.jetbrains.com/a/4c4ojw.

You will need a JetBrains account to request the license. This link is open for anyone to use, so please do not share it in the public. You may share it with other OpenStack contributors on your team, but if you do, please send me their launchpad-ids. Lastly, if you decide to stop using PyCharm, please send me an email so I can revoke the license and open it up for use by someone else.

Thanks!
Andrew
________________________________________
From: Kekane, Abhishek <Abhishek.Kekane at nttdata.com>
Sent: Monday, September 21, 2015 1:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

Hi Andrew,

My launchpad id is abhishek-kekane

Thank you,

Abhishek
________________________________________
From: Andrew Melton [andrew.melton at RACKSPACE.COM]
Sent: Monday, September 21, 2015 10:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] New PyCharm License

Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew



From jim at jimrollenhagen.com  Mon Sep 21 19:19:18 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Mon, 21 Sep 2015 12:19:18 -0700
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442860231482.95559@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
 <1442860231482.95559@RACKSPACE.COM>
Message-ID: <20150921191918.GS21846@jimrollenhagen.com>

On Mon, Sep 21, 2015 at 06:30:30PM +0000, Andrew Melton wrote:
> Please follow this link to request a license: https://account.jetbrains.com/a/4c4ojw.
> 
> You will need a JetBrains account to request the license. This link is open for anyone to use, so please do not share it in the public. You may share it with other OpenStack contributors on your team, but if you do, please send me their launchpad-ids. Lastly, if you decide to stop using PyCharm, please send me an email so I can revoke the license and open it up for use by someone else.

Welp, it's in the public now. :(

// jim


From victoria at vmartinezdelacruz.com  Mon Sep 21 19:27:57 2015
From: victoria at vmartinezdelacruz.com (Victoria Martínez de la Cruz)
Date: Mon, 21 Sep 2015 16:27:57 -0300
Subject: [openstack-dev] [Outreachy] New coordinator announcement and list
 of current applicants and mentors
Message-ID: <CAJ_e2gAUDWm4zzTYnH_ngvWS=YjH=GfJ9=Y0KbfXTpa73KU3ZQ@mail.gmail.com>

Hi all,

I'm glad to announce that Mahati Chamarthy (mahatic) will join the current
coordination efforts for the Outreachy internships. Thanks Mahati!

Also, I wanted to share this Etherpad [0] with you, containing the current
list of applicants and mentors for this round.

Applicants -> If you want to apply and don't see your name on this list,
please add your name and the project/s you are interested in.

Mentors -> If you want to mentor someone and don't see your name on this
list, please add your name and the project/s you are willing to mentor for.

Everyone else -> If you want to give us a hand and don't know how, help
us by spreading the word! Maybe a friend of yours wants to join as an
applicant, or a coworker wants to join as a mentor.

All help is appreciated.

Cheers,

Victoria

[0] https://etherpad.openstack.org/p/outreachy

From chris.friesen at windriver.com  Mon Sep 21 19:31:19 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Mon, 21 Sep 2015 13:31:19 -0600
Subject: [openstack-dev] Migrating offline instances
In-Reply-To: <5600019C.6070706@hpe.com>
References: <5600019C.6070706@hpe.com>
Message-ID: <56005B07.2030708@windriver.com>

On 09/21/2015 07:09 AM, Paul Carlton wrote:
> Live migration using qemu only operates on running instances.  However when a
> cloud operator wants to move all instances off a hypervisor they need to be
> able to migrate stopped and suspended instances too.  We achieved this by
> bringing these instances to a paused state while they are migrated using the
> 'VIR_DOMAIN_START_PAUSED'.  I see from https://review.openstack.org/#/c/85048/
> that this idea has been rejected in the past but my understanding is there is
> still no solution to this issue in libvirt?  Is there work in progress to
> implement a capability for libvirt to migrate offline instances?

We've run into the same limitations and would also like to see a solution.

Rather than starting it up in a paused state, it should be possible to use a 
variant of the cold migration code.  Suspending the instance currently results 
in calling libvirt's virDomainManagedSave() routine which ends up saving the 
memory contents to a file managed by libvirt itself.  I don't see any way to 
tell libvirt to migrate that file to another host.  It might be possible to 
re-work nova's suspend code to use virDomainSave() which would allow nova to 
specify the file to save to (and possibly then allow it to be migrated).  As far 
as I can tell nothing in nova uses that API currently.
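A rough sketch of that rework, assuming the libvirt Python bindings (virDomain.save() wraps virDomainSave(), virConnect.restore() wraps virDomainRestore()); the helper names and the copy step are hypothetical:

```python
# Hypothetical sketch of the suggested rework: suspend via virDomainSave()
# to a file nova chooses, rather than managedSave(), whose save file lives
# in a libvirt-managed location that is hard to copy between hosts.
# `dom` / `conn` stand in for libvirt virDomain / virConnect handles.

def suspend_to_file(dom, save_path):
    """Save guest memory to an explicit file (libvirt: virDomainSave)."""
    dom.save(save_path)
    return save_path  # this file can then be copied to the target host

def resume_from_file(conn, save_path):
    """Restore the guest from the copied file (libvirt: virDomainRestore)."""
    conn.restore(save_path)
```

Whether restoring on a different host actually works still depends on the guest XML and storage being visible there, which is part of what makes this non-trivial.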

The downside of using the cold migration code is that it requires a 
passwordless ssh tunnel to the destination when not using shared instance 
storage (in order to copy the image file).

Chris


From robertc at robertcollins.net  Mon Sep 21 19:51:02 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 22 Sep 2015 07:51:02 +1200
Subject: [openstack-dev] [release][all] Release help needed - we are
	incompatible with ourselves
Message-ID: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>

Constraint updates are still failing: you can see this on
https://review.openstack.org/#/c/221157/ or more generally
https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:openstack/requirements/constraints,n,z

Now, the constraints system is *doing its job* - it's made the presence
of an incompatible thing not-a-firedrill. However, we need to do our
part of the job too: we need to fix the incompatibility that exists so
that we can roll forward and start using the new releases that are
being made.

Right now the release team are picking individual components and
proposing them as merges to move things forward, but it's fairly
fundamentally unsafe to cut the full liberty release while there is a
known incompatibility bug out there.

So - I'm manually ringing the fire-drill alarm now: we need to get
this fixed so that the released liberty is actually compatible with
the entire ecosystem at time of release.

What issues are there?

Firstly,
2015-09-21 06:24:00.911 | + openstack --os-token
3dc712d5120b436ebb7d554405b7c15f --os-url http://127.0.0.1:9292 image
create cirros-0.3.4-x86_64-uec --public --container-format ami
--disk-format ami
2015-09-21 06:24:01.396 | openstack: 'image' is not an openstack
command. See 'openstack --help'.

(See the dsvm run from review 221157 -
http://logs.openstack.org/57/221157/12/check/gate-tempest-dsvm-full/17941bd/logs/devstacklog.txt.gz#_2015-09-21_06_24_00_911
)

Secondly, it's likely that once that's fixed there will be more things to unwind.

What will help most is if a few folk familiar with devstack can pull
down review 221157 and do a binary search on the changes in it to
determine which ones are safe and which ones trigger the breakage:
then we can at least land all the safe ones at once and zero in on the
incompatibility - and get it addressed.
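The binary search suggested above can be sketched as follows; `passes(prefix)` is a hypothetical stand-in for actually running the devstack/tempest job with only that prefix of the constraint bumps applied, assuming an empty set passes, the full set fails, and there is a single culprit:

```python
# Illustrative sketch of bisecting a batch of constraint bumps to find the
# one that breaks the gate.  `passes(prefix)` runs the job with only that
# prefix of changes applied and returns True when it succeeds.

def find_breaking_change(changes, passes):
    lo, hi = 0, len(changes)      # invariant: changes[:lo] passes,
    while hi - lo > 1:            #            changes[:hi] fails
        mid = (lo + hi) // 2
        if passes(changes[:mid]):
            lo = mid
        else:
            hi = mid
    return changes[hi - 1]        # first change whose inclusion breaks the job
```

Each probe costs one full job run, so this needs O(log n) runs rather than one per change.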

To repeat: this is effectively a release blocker IMO, and the release
is happening - well, $now.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From emilien at redhat.com  Mon Sep 21 19:59:11 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 21 Sep 2015 15:59:11 -0400
Subject: [openstack-dev] [puppet] Tokyo Summit - dev + ops
Message-ID: <5600618F.2050504@redhat.com>

Hello,

The summit is in a few weeks, and we are still defining our agenda [1].
Some topics have already been written, but it would be good to get more
topics; we have some room resources allocated for that purpose.
Both devs & ops, feel free to create a topic by providing a
description, an owner (you, if you can), and the approximate time needed.

Thanks for your help,

[1] https://etherpad.openstack.org/p/HND-puppet
-- 
Emilien Macchi


From doug at doughellmann.com  Mon Sep 21 20:02:50 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 21 Sep 2015 16:02:50 -0400
Subject: [openstack-dev] [release][ptl][all] creating stable/liberty
	branches for non-oslo libraries today
In-Reply-To: <1442842916-sup-9093@lrrr.local>
References: <1442842916-sup-9093@lrrr.local>
Message-ID: <1442865622-sup-8579@lrrr.local>

Excerpts from Doug Hellmann's message of 2015-09-21 09:44:06 -0400:
> 
> All,
> 
> We are doing final releases, constraints updates, and creating
> stable/liberty branches for all of the non-Oslo libraries (clients
> as well as glance_store, os-brick, etc.) today. I have contacted
> the designate, neutron, nova, and zaqar teams about final releases
> for their clients today based on the list of unreleased changes.
> All of the other libs looked like their most recent release would
> be fine as a stable branch, so we'll be using those.
> 
> Doug
> 

I have created stable/liberty branches from these versions:

ceilometermiddleware 0.3.0
cliff 1.15.0
django_openstack_auth 2.0.0
glance_store 0.9.1
keystoneauth 1.1.0
keystonemiddleware 2.3.0
os-client-config 1.7.4
pycadf 1.1.0
python-barbicanclient 3.3.0
python-ceilometerclient 1.5.0
python-cinderclient 1.4.0
python-glanceclient 1.1.0
python-heatclient 0.8.0
python-ironicclient 0.8.1
python-keystoneclient 1.7.1
python-manilaclient 1.4.0
python-neutronclient 3.0.0
python-novaclient 2.30.0
python-saharaclient 0.11.0
python-swiftclient 2.6.0
python-troveclient 1.3.0
python-zaqarclient 0.2.0

The updates to the .gitreview files are available for review in
https://review.openstack.org/#/q/topic:create-liberty,n,z

We have 3 projects we're waiting to branch:

os-brick
  wait for https://review.openstack.org/#/c/220902/ (merged)

python-designateclient
  https://review.openstack.org/#/c/224667/ (merged)

python-openstackclient
  https://review.openstack.org/#/c/225443/
  https://review.openstack.org/#/c/225505/

Doug


From nik.komawar at gmail.com  Mon Sep 21 20:09:29 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Mon, 21 Sep 2015 16:09:29 -0400
Subject: [openstack-dev] [release][all] Release help needed - we are
 incompatible with ourselves
In-Reply-To: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
References: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
Message-ID: <560063F9.9030704@gmail.com>

(fyi) That seems to have been fixed by
https://review.openstack.org/#/c/225443/

On 9/21/15 3:51 PM, Robert Collins wrote:
> Constraint updates are still failing: you can see this on
> https://review.openstack.org/#/c/221157/ or more generally
> https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:openstack/requirements/constraints,n,z
>
> Now, the constraints system is *doing its job* - its made the presence
> of an incompatible thing not-a-firedrill. However, we need to do our
> part of the job too: we need to fix the incompatibility that exists so
> that we can roll forward and start using the new releases that are
> being made.
>
> Right now the release team are picking individual components and
> proposing them as merges to move things forward, but its fairly
> fundamentally unsafe to cut the full liberty release while there is a
> known incompatibility bug out there.
>
> So - I'm manually ringing the fire-drill alarm now: we need to get
> this fixed so that the released liberty is actually compatible with
> the entire ecosystem at time of release.
>
> What issues are there ?
>
> Firstly,
> 2015-09-21 06:24:00.911 | + openstack --os-token
> 3dc712d5120b436ebb7d554405b7c15f --os-url http://127.0.0.1:9292 image
> create cirros-0.3.4-x86_64-uec --public --container-format ami
> --disk-format ami
> 2015-09-21 06:24:01.396 | openstack: 'image' is not an openstack
> command. See 'openstack --help'.
>
> (See the dvsm run from review 221157 -
> http://logs.openstack.org/57/221157/12/check/gate-tempest-dsvm-full/17941bd/logs/devstacklog.txt.gz#_2015-09-21_06_24_00_911
> )
>
> [...]

-- 

Thanks,
Nikhil



From boris at pavlovic.me  Mon Sep 21 20:13:14 2015
From: boris at pavlovic.me (Boris Pavlovic)
Date: Mon, 21 Sep 2015 13:13:14 -0700
Subject: [openstack-dev] [openstack-operators][tc][tags] Rally tags
In-Reply-To: <55FFBC80.7030805@openstack.org>
References: <CAD85om0e5Fwc08xcmce1h-BC0i9i6AyZiUP6-6__J5qitg9Yzg@mail.gmail.com>
 <55FFBC80.7030805@openstack.org>
Message-ID: <CAD85om0fdf5ZE_gBCFhVeuRLZrJpKPshDPfVZBSm27E0nc2u6Q@mail.gmail.com>

Thierry,

Okay, great. I will propose patches.

Best regards,
Boris Pavlovic

On Mon, Sep 21, 2015 at 1:14 AM, Thierry Carrez <thierry at openstack.org>
wrote:

> Boris Pavlovic wrote:
> > I have few ideas about the rally tags:
> >
> > - covered-by-rally
> >    It means that there are official (inside the rally repo) plugins for
> > testing of particular project
> >
> > - has-rally-gates
> >    It means that Rally is run against every patch proposed to the project
> >
> > - certified-by-rally [wip]
> >    As well we are starting working on certification
> > task: https://review.openstack.org/#/c/225176/5
> >    which will be the standard way to check whatever cloud is ready for
> > production based on volume, performance & scale testing.
> >
> > Thoughts?
>
> Hi Boris,
>
> The "next-tags" workgroup at the Technical Committee came up with a
> number of families where I think your proposed tags could fit:
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070651.html
>
> The "integration" family of tags defines cross-project support. We want
> to have tags that say that a specific service has a horizon dashboard
> plugin, or a devstack integration, or heat templates... So I would say
> that the "covered-by-rally" tag could be part of that family
> ('integration:rally' maybe ?). We haven't defined our first tag in that
> family yet: sdague was working on the devstack ones[1] as a template for
> the family but that effort stalled a bit:
>
> https://review.openstack.org/#/c/203785/
>
> As far as the 'has-rally-gates' tag goes, that would be part of the 'QA'
> family ("qa:has-rally-gates" for example).
>
> So I think those totally make sense as upstream-maintained tags and are
> perfectly aligned with the families we already had in mind but haven't
> had time to push yet. Feel free to propose those tags to the governance
> repository. An example of such submission lives at:
>
> https://review.openstack.org/#/c/207467/
>
> The 'certified-by-rally' tag is a bit farther away I think (less
> objective and needs your certification program to be set up first). You
> should start with the other two.
>
> --
> Thierry Carrez (ttx)
>
>

From rlooyahoo at gmail.com  Mon Sep 21 20:20:05 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Mon, 21 Sep 2015 16:20:05 -0400
Subject: [openstack-dev] [ironic] weekly subteam status report
Message-ID: <CA+5K_1F6dCHn7wMkZMBshQYpTbSmfb5fT0KO2zga6AFqZVeDQA@mail.gmail.com>

Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
============
As of Mon, Sep 21 (diff with Sep 14)
- Open: 136 (-6). 7 new (-4), 41 in progress (-7), 0 critical, 12 high (+1)
and 9 incomplete
- Nova bugs with Ironic tag: 23. 0 new, 0 critical, 1 high

I've started going through the dashboard and pinging people about their
stale (>1 month of inactivity) bugs.
    - Next Monday I'll unassign/abandon everything where I haven't gotten a
response.
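The staleness cut-off described above amounts to a filter like this (illustrative only; the dict keys are made up, not Launchpad's real API):

```python
from datetime import datetime, timedelta

def stale_bugs(bugs, now=None, max_idle_days=31):
    """Return assigned bugs with no activity for over max_idle_days.

    `bugs` is a list of dicts; the "assignee"/"last_activity" keys here
    are illustrative stand-ins for whatever the bug tracker exposes.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_idle_days)
    return [b for b in bugs
            if b.get("assignee") and b["last_activity"] < cutoff]
```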


Neutron/Ironic work (jroll)
====================
No updates // bumped to Mitaka


ironic-lib adoption (dtantsur)
======================
- library released, but we didn't switch to it in time (DepFreeze)
    - therefore, there will not be a liberty/stable branch of ironic-lib
- plan to switch as soon as Mitaka opens


Nova Liaisons (jlvillal & mrda)
=======================
- No updates


Oslo (lintan)
==========
- oslo.versionedobjects library adoption is done; oslo library changes
should no longer easily break Ironic.
    - need reviews on https://review.openstack.org/#/c/224079/
- The Oslo team is working on a way to reload a service's configuration.
It's still under heavy construction, but may be interesting to some:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html
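For context, the classic Unix pattern such work builds on is re-reading configuration on SIGHUP; a minimal sketch (the oslo work linked above is considerably more involved):

```python
import signal

class ReloadableService:
    """Re-load configuration when the process receives SIGHUP -- the
    conventional pattern behind reloading a running service's config."""

    def __init__(self, load_conf):
        self._load_conf = load_conf
        self.conf = load_conf()                    # initial load
        signal.signal(signal.SIGHUP, self._on_hup) # POSIX only

    def _on_hup(self, signum, frame):
        # Re-read configuration without restarting the process.
        self.conf = self._load_conf()
```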


Doc (pshige)
==========
No updates


Testing/Quality (jlvillal/lekha)
======================
- lekha: has submitted a patch to do functional testing of
python-ironicclient using mimic: https://review.openstack.org/225375
    - Blocked until mimic is added to global-requirements, which has to wait
for the freeze to end
- krtaylor and lekha have both said they would like to be part of the
Testing/Quality subteam.
- Still need tempest to accept the API microversion testing patch
https://review.openstack.org/#/c/166386/
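To illustrate the mimic approach in miniature: a functional test runs the client against a local server that returns canned API responses. A stdlib-only sketch (the endpoint path and payload are made up for illustration, not the real Ironic API):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeIronicAPI(BaseHTTPRequestHandler):
    """Serve a canned response, the way mimic fakes cloud API endpoints."""

    def do_GET(self):
        body = json.dumps({"nodes": [{"uuid": "fake-uuid-1"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def list_nodes(endpoint):
    """Tiny stand-in for a client call hitting the (fake) API."""
    with urlopen(endpoint + "/v1/nodes") as resp:
        return json.load(resp)["nodes"]
```

The real test then starts the fake server on a free port and asserts on what the client returns, with no real cloud involved.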


Inspector (dtantsur)
===============
- stable/liberty branch created for python-ironic-inspector-client at 1.2.0
release
- IPA is a fully supported inspection ramdisk now:
    - doc: https://github.com/openstack/ironic-inspector#using-ipa
    - one extra patch is awaiting review:
https://review.openstack.org/#/c/225092/ (upstreaming a couple of plugins)
- ironic-inspector 2.2.0 with stable/liberty expected this Thursday, stuff
to finish is tracked here:
    - https://launchpad.net/ironic-inspector/+milestone/2.2.0


Bifrost (TheJulia)
=============
- Bifrost's gate is currently broken due to in-flight work on shade's
authentication support.


webclient (krotscheck / betherly)
=========================
- in debates with horizon ....


Drivers
======

IPA (jroll/JayF/JoshNang)
----------------------------------
- jroll and trown working on packaging releases

iRMC (naohirot)
---------------------
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z
- Status: Active (solicit core team's spec review for Ironic 4.2.0)
    - New boot driver interface for iRMC drivers (bp/new-boot-interface)
- Status: Reactive
    - Enhance Power Interface for Soft Reboot and NMI
(bp/enhance-power-interface-for-soft-reboot-and-nmi)
    - iRMC out of band inspection (bp/ironic-node-properties-discovery)

OneView (gabriel-bezerra/thiagop/sinval)
-------------------------------------------------------
- https://review.openstack.org/#/c/191822/
- Driver status: Code complete. Waiting for reviews.
- python-oneviewclient lib status: Code complete; publishing to PyPI as
soon as infra creates the "-release" team.
- 3rd Party CI status: power and management integration tests running. Full
deployment tests for the iSCSI and Agent drivers are making good progress.

........

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

From doug at doughellmann.com  Mon Sep 21 20:36:18 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 21 Sep 2015 16:36:18 -0400
Subject: [openstack-dev] [release][all] Release help needed - we are
	incompatible with ourselves
In-Reply-To: <1442865890-sup-8740@lrrr.local>
References: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
 <1442865890-sup-8740@lrrr.local>
Message-ID: <1442867754-sup-4511@lrrr.local>

[resending, the first copy was lost in transmission]

Excerpts from Doug Hellmann's message of 2015-09-21 16:08:51 -0400:
> Excerpts from Robert Collins's message of 2015-09-22 07:51:02 +1200:
> > Constraint updates are still failing: you can see this on
> > https://review.openstack.org/#/c/221157/ or more generally
> > https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:openstack/requirements/constraints,n,z
> > 
> > Now, the constraints system is *doing its job* - its made the presence
> > of an incompatible thing not-a-firedrill. However, we need to do our
> > part of the job too: we need to fix the incompatibility that exists so
> > that we can roll forward and start using the new releases that are
> > being made.
> > 
> > Right now the release team are picking individual components and
> > proposing them as merges to move things forward, but its fairly
> > fundamentally unsafe to cut the full liberty release while there is a
> > known incompatibility bug out there.
> > 
> > So - I'm manually ringing the fire-drill alarm now: we need to get
> > this fixed so that the released liberty is actually compatible with
> > the entire ecosystem at time of release.
> > 
> > What issues are there ?
> > 
> > Firstly,
> > 2015-09-21 06:24:00.911 | + openstack --os-token
> > 3dc712d5120b436ebb7d554405b7c15f --os-url http://127.0.0.1:9292 image
> > create cirros-0.3.4-x86_64-uec --public --container-format ami
> > --disk-format ami
> > 2015-09-21 06:24:01.396 | openstack: 'image' is not an openstack
> > command. See 'openstack --help'.
> > 
> > (See the dvsm run from review 221157 -
> > http://logs.openstack.org/57/221157/12/check/gate-tempest-dsvm-full/17941bd/logs/devstacklog.txt.gz#_2015-09-21_06_24_00_911
> > )
> 
> This looks like the error we were seeing before the most recent
> os-client-config release. I wonder if it would help to update that
> patch to remove the os-client-config change.  There's a separate
> patch up to change that constraint in https://review.openstack.org/225363
> but it depends on some devstack changes.
> 
> Doug
> 
> > [...]


From rlooyahoo at gmail.com  Mon Sep 21 20:41:02 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Mon, 21 Sep 2015 16:41:02 -0400
Subject: [openstack-dev] [ironic] subteam leads
Message-ID: <CA+5K_1EQCfW9aPFU=oR17njDQX8ry1uPOb0X+Reb6i3_i2QJZQ@mail.gmail.com>

Hi Subteam Leads,

In today's weekly meeting[1], it was requested that, if you have no updates,
you please indicate that in the status report, so that we can distinguish
between no-update and not-providing-a-status-report.

If you aren't sure whether you're a lead, we think you are if your name is
next to one of the subteams listed in the etherpad[2] (under 'Subteam
status reports').

Thanks,
--ruby

[1] around 17:27:01,
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-09-21-17.01.log.html
[2] https://etherpad.openstack.org/p/IronicWhiteBoard

From schowdh at us.ibm.com  Mon Sep 21 20:47:07 2015
From: schowdh at us.ibm.com (Sisir Chowdhury)
Date: Mon, 21 Sep 2015 15:47:07 -0500
Subject: [openstack-dev] [networking-ovn]  Neutron-DVR feature on OVN/L3
In-Reply-To: <OF86C79150.8AA886D0-ON00257EC2.00625760-86257EC2.0063717A@LocalDomain>
References: <OF86C79150.8AA886D0-ON00257EC2.00625760-86257EC2.0063717A@LocalDomain>
Message-ID: <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>

Hi All -

    I have a proposal regarding the networking-ovn project within OpenStack.

#1.  Make the Neutron-DVR feature intelligent enough that we can
completely remove the Network Node (NN).

        Right now, even with DVR, egress traffic originating from VMs
going outbound is SNAT'ed by the Network Node, but ingress traffic
coming from the Internet to the VMs goes directly through the Compute
Node and is DNAT'ed by the L3 agent on the Compute Node.

Any thoughts/comments?

Thanks..Sisir
Cloud Innovation Lab, IBM




From robertc at robertcollins.net  Mon Sep 21 20:57:42 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 22 Sep 2015 08:57:42 +1200
Subject: [openstack-dev] [testing] Python 3.4, eventlet and subunit
Message-ID: <CAJ3HoZ24T9_Kczk5hOKBDPZwDSHSX-RRYf1iePX3VL=9gA3sUg@mail.gmail.com>

Some of you may have noticed weird failures from testr running tests
on Python 3.4.

The cause seems to be eventlet monkey_patching. I've filed
https://github.com/eventlet/eventlet/issues/248 to get some discussion
about this.

In the interim I'm working up a patch to at least make this a
non-silent failure, though due to the evolution of the IO layer across
different Python versions it's a little tricky :/.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From prometheanfire at gentoo.org  Mon Sep 21 21:03:02 2015
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Mon, 21 Sep 2015 16:03:02 -0500
Subject: [openstack-dev] [release][ptl][all] creating stable/liberty
 branches for non-oslo libraries today
In-Reply-To: <1442865622-sup-8579@lrrr.local>
References: <1442842916-sup-9093@lrrr.local> <1442865622-sup-8579@lrrr.local>
Message-ID: <56007086.7040204@gentoo.org>

On 09/21/2015 03:02 PM, Doug Hellmann wrote:
> [...]
> 

Thanks for the updated list.  I'll get to packaging these now so I'll
hopefully have less work when liberty is tagged.

-- 
-- Matthew Thode (prometheanfire)


From amit.gandhi at RACKSPACE.COM  Mon Sep 21 21:20:16 2015
From: amit.gandhi at RACKSPACE.COM (Amit Gandhi)
Date: Mon, 21 Sep 2015 21:20:16 +0000
Subject: [openstack-dev] [poppy] Nominate Sriram Madupasi Vasudevan for
	Poppy (CDN) to Core
Message-ID: <D225ECDD.53371%amit.gandhi@rackspace.com>

All,

I would like to nominate Sriram Madupasi Vasudevan (thesriram) [1] to Core for Poppy (CDN) [2].

Sriram has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org

From bradley.klein at twcable.com  Mon Sep 21 21:20:42 2015
From: bradley.klein at twcable.com (Klein, Bradley)
Date: Mon, 21 Sep 2015 21:20:42 +0000
Subject: [openstack-dev]  [puppet] monasca,murano,mistral governance
Message-ID: <D225D0C7.B322%bradley.klein@twcable.com>

Matt/Emilien,

I'm cool with removing the one-off core group for puppet-monasca and falling into line with the other modules...

Let me know if you need me to take any action or push a patch.

Thanks,

Brad


Emilien,

I've discussed this with some of the Monasca puppet guys here who are doing
most of the work. I think it probably makes sense to move to that model
now, especially since the pace of development has slowed substantially. One
blocker to having it in the "big tent" before was the lack of test coverage,
so as long as we know that's a work in progress... I'd also like to get Brad
Klein's thoughts on this, but he's out of town this week. I'll ask him to
reply when he is back.


On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi <emilien at redhat.com> wrote:

> Hi,
>
> As a reminder, Puppet modules that are part of OpenStack are documented
> here [1].
>
> I can see puppet-murano & puppet-mistral Gerrit permissions different
> from other modules, because Mirantis helped to bootstrap the module a
> few months ago.
>
> I think [2] the modules should be consistent in governance and only
> Puppet OpenStack group should be able to merge patches for these modules.
>
> Same question for puppet-monasca: if Monasca team wants their module
> under the big tent, I think they'll have to change Gerrit permissions to
> only have Puppet OpenStack able to merge patches.
>
> [1]
> http://governance.openstack.org/reference/projects/puppet-openstack.html
> [2] https://review.openstack.org/223313
>
> Any feedback is welcome,
> --
> Emilien Macchi
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


From amit.gandhi at RACKSPACE.COM  Mon Sep 21 21:22:45 2015
From: amit.gandhi at RACKSPACE.COM (Amit Gandhi)
Date: Mon, 21 Sep 2015 21:22:45 +0000
Subject: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core
Message-ID: <D225ED6C.53377%amit.gandhi@rackspace.com>

All,

I would like to nominate Tony Tan (tonytan4ever) [1] to Core for Poppy (CDN) [2].

Tony has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.  He has written the majority of the Akamai driver and has been working hard to bring SSL integration to the workflow.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org

From tim at styra.com  Mon Sep 21 21:35:37 2015
From: tim at styra.com (Tim Hinrichs)
Date: Mon, 21 Sep 2015 21:35:37 +0000
Subject: [openstack-dev] [Congress] stable/kilo
Message-ID: <CAJjxPABhxS0_hnb1zPkYSGPZ7f_KSTYfFq3krK27nmpqOn38TQ@mail.gmail.com>

Could someone look into why we can't get this change to merge into
stable/kilo?  Note that kilo was last cycle's release.

https://review.openstack.org/#/c/222698/

I tried a couple of obvious things but still no luck:
https://review.openstack.org/#/c/225332/

Tim

From carl at ecbaldwin.net  Mon Sep 21 22:30:12 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Mon, 21 Sep 2015 16:30:12 -0600
Subject: [openstack-dev] [networking-ovn] Neutron-DVR feature on OVN/L3
In-Reply-To: <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>
References: <OF86C79150.8AA886D0-ON00257EC2.00625760-86257EC2.0063717A@LocalDomain>
 <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>
Message-ID: <CALiLy7rb8fm43=8vT2Rd58f=Vf=w24ioO4HFtRm8w6x74LrhNA@mail.gmail.com>

On Mon, Sep 21, 2015 at 2:47 PM, Sisir Chowdhury <schowdh at us.ibm.com> wrote:
> Hi All -
>
>     I have some proposals regarding the ovn-networking project within OpenStack.
>
> #1.   Making the Neutron-DVR feature intelligent enough so that we can
> completely remove the Network Node (NN).
>
>         Right now even with DVR, the egress traffic originating from VMs
> going outbound is SNAT'ed by the

This is only true for VMs which do not have a floating IP associated.
If a floating IP is associated, both ingress and egress traffic will be
DNATed and SNATed using the floating IP.

The network node will be involved in the "shared SNAT" case.  If a VM
does not have its own floating IP from which to originate traffic, the
traffic will go to the network node and be SNATed using the shared
address.  The "shared" part is often left out in conversation, which is
how this confusion comes up.
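The routing decision above can be modeled in a few lines. This is an illustrative sketch only, not OpenStack code; the addresses are documentation examples (RFC 5737):

```python
# Illustrative model (not OpenStack code) of the DVR NAT behaviour
# described above: a VM with a floating IP is NATed locally on its
# compute node; a VM without one falls back to "shared SNAT" on the
# network node.

def egress_path(vm_floating_ip, shared_snat_ip):
    """Return (node that performs NAT, source IP seen externally)."""
    if vm_floating_ip is not None:
        # 1:1 DNAT/SNAT happens locally on the compute node's router.
        return ("compute-node", vm_floating_ip)
    # No floating IP: traffic detours via the network node and is
    # SNATed from the shared address.
    return ("network-node", shared_snat_ip)

print(egress_path("203.0.113.10", "203.0.113.1"))
print(egress_path(None, "203.0.113.1"))
```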

Carl

>         Network Node, but the Ingress traffic coming from the Internet to the
> VMs are directly going through the
>         Compute Node and DNAT'ed by the L3 Agent of the Compute Node.
>
> Any Thoughts/Comments ?
>
> Thanks..Sisir
> Cloud Innovation Lab, IBM
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From sriram.madapusivasud at RACKSPACE.COM  Mon Sep 21 22:36:20 2015
From: sriram.madapusivasud at RACKSPACE.COM (Sriram Madapusi Vasudevan)
Date: Mon, 21 Sep 2015 22:36:20 +0000
Subject: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core
In-Reply-To: <D225ED6C.53377%amit.gandhi@rackspace.com>
References: <D225ED6C.53377%amit.gandhi@rackspace.com>
Message-ID: <EBA0CD78-0310-48A8-ACBB-F844691B7A30@RACKSPACE.COM>

+1.

Sent from my iPhone

On Sep 21, 2015, at 5:26 PM, Amit Gandhi <amit.gandhi at RACKSPACE.COM> wrote:

All,

I would like to nominate Tony Tan (tonytan4ever) [1] to Core for Poppy (CDN) [2].

Tony has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.  He has written the majority of the Akamai driver and has been working hard to bring SSL integration to the workflow.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dougwig at parksidesoftware.com  Mon Sep 21 22:48:16 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Mon, 21 Sep 2015 16:48:16 -0600
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
	neutron-lbaas core team
In-Reply-To: <c818a3f9b1e64888bcd15702dd73284f@544124-OEXCH02.ror-uc.rackspace.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
 <79408BAC-47AB-4CA6-A8ED-8ABD6C44CC6F@workday.com>
 <c818a3f9b1e64888bcd15702dd73284f@544124-OEXCH02.ror-uc.rackspace.com>
Message-ID: <5ED89A5E-56CA-45DC-B056-6E06324DF45F@parksidesoftware.com>

HI all,

Since all cores have responded, this passes. Welcome, Michael!

Thanks,
doug


> On Sep 17, 2015, at 12:29 PM, Brandon Logan <brandon.logan at rackspace.com> wrote:
> 
> I'm off today so my +1 is more like a +2
> 
> On Sep 17, 2015 12:59 PM, Edgar Magana <edgar.magana at workday.com> wrote:
> Not a core but I would like to share my +1 about Michael.
> 
> Cheers,
> 
> Edgar
> 
> 
> 
> 
> On 9/16/15, 3:33 PM, "Doug Wiegley" <dougwig at parksidesoftware.com> wrote:
> 
> >Hi all,
> >
> >As the Lieutenant of the advanced services, I nominate Michael Johnson to be a member of the neutron-lbaas core reviewer team.
> >
> >Review stats are in line with other cores[2], and Michael has been instrumental in both neutron-lbaas and octavia.
> >
> >Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, Al, and Kyle.)
> >
> >Thanks,
> >doug
> >
> >1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
> >2. http://stackalytics.com/report/contribution/neutron-lbaas/90
> >
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From mscherbakov at mirantis.com  Mon Sep 21 23:17:50 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Mon, 21 Sep 2015 23:17:50 +0000
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
In-Reply-To: <CAFkLEwrzzAWjS=_v3kjOCHQyPFr5M5pboamjQDKUpkPBfGQCEQ@mail.gmail.com>
References: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
 <CAFkLEwrzzAWjS=_v3kjOCHQyPFr5M5pboamjQDKUpkPBfGQCEQ@mail.gmail.com>
Message-ID: <CAKYN3rP1ipHxSkVPeUvkcNst3DWnbiKcCAHXJVQ_odjZLJLj3Q@mail.gmail.com>

Thanks guys.
So for fuel-octane, then, no actions are needed.

For the fuel-agent-core group [1], it looks like we are already good (it
doesn't have the fuel-core group nested). But it would need to include the
fuel-infra group and remove Aleksandra Fedorova (she will be a part of the
fuel-infra group).

python-fuelclient-core [2] is good as well (no nested fuel-core). However,
there is another group, python-fuelclient-release [3], which has to be
eliminated; the main python-fuelclient-core would then just have the
fuel-infra group included for maintenance purposes.

[1] https://review.openstack.org/#/admin/groups/995,members
[2] https://review.openstack.org/#/admin/groups/551,members
[3] https://review.openstack.org/#/admin/groups/552,members
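As an aside, membership of groups like those linked above can be checked programmatically through Gerrit's REST API. A small sketch of parsing such a response (the sample payload below is made up; a real call would fetch `/groups/<id>/members/` with suitable credentials):

```python
# Sketch: parse a Gerrit "list group members" REST response.  Gerrit
# prefixes every JSON response with )]}' to prevent XSSI, so strip the
# first line before decoding.  The sample payload here is fabricated.
import json

def parse_gerrit_json(raw):
    """Strip Gerrit's XSSI guard line, then decode the JSON body."""
    if raw.startswith(")]}'"):
        raw = raw.split("\n", 1)[1]
    return json.loads(raw)

def member_names(raw_response):
    return [m["name"] for m in parse_gerrit_json(raw_response)]

sample = ")]}'\n" + json.dumps([{"name": "Mike Scherbakov"},
                                {"name": "Roman Vyalov"}])
print(member_names(sample))
```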


On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh <ogelbukh at mirantis.com> wrote:

> FYI, we have a separate core group for stackforge/fuel-octane repository
> [1].
>
> I'm supporting the move to modularization of Fuel with cleaner separation
> of authority and better defined interfaces. Thus, I'm +1 to such a change
> as a part of that move.
>
> [1] https://review.openstack.org/#/admin/groups/1020,members
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov <
> mscherbakov at mirantis.com> wrote:
>
>> Hi all,
>> as part of my larger proposal on improvements to the code review workflow [1],
>> we need to have cores per repository, not for the whole of Fuel. It is the path
>> we have been taking for a while, with new core reviewers added to specific
>> repos only. Now we need to complete this work.
>>
>> My proposal is:
>>
>>    1. Get rid of one common fuel-core [2] group, members of which can
>>    merge code anywhere in Fuel. Some members of this group may cover a couple
>>    of repositories, but can't really be cores in all repos.
>>    2. Extend existing groups, such as fuel-library [3], with members
>>    from fuel-core who are keeping up with large number of reviews / merges.
>>    This data can be queried at Stackalytics.
>>    3. Establish a new group "fuel-infra", and ensure that it's included
>>    into any other core group. This is for maintenance purposes, it is expected
>>    to be used only in exceptional cases. Fuel Infra team will have to decide
>>    whom to include into this group.
>>    4. Ensure that fuel-plugin-* repos will not be affected by removal of
>>    fuel-core group.
>>
>> #2 needs specific details. Stackalytics can show active cores easily, we
>> can look at people with *:
>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
>> fuel-web, change the link for other repos accordingly. If people are added
>> specifically to the particular group, leaving as is (some of them are no
>> longer active. But let's clean them up separately from this group
>> restructure process).
>>
>>    - fuel-library-core [3] group will have following members: Bogdan D.,
>>    Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>>    - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
>>    Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>>    - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>>    - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>>    - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
>>    Urlapova
>>    - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>>    Konstantinov, Olga Gusarenko
>>    - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry
>>    Pyzhov, Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>>    - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>>    - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>>    Sledzinsky, Dmitry Shulyak
>>    - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>>    - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
>>    Urlapova
>>    - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly
>>    Kramskikh
>>    - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey
>>    Kasatkin (this project seems to be dead, let's consider to rip it off)
>>    - fuel-specs-core: there is no such a group at the moment. I propose
>>    to create one with following members, based on stackalytics data [16]:
>>    Vitaly Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir
>>    Kuklin, Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko,
>>    Mike Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge
>>    after Fuel PTL/Component Leads elections
>>    - fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
>>    Gelbukh, Ilya Kharin
>>    - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly
>>    Parakhin
>>    - fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
>>    Schultz, Evgeny Li, Igor Kalnitsky
>>    - fuel-provision: repo seems to be outdated, needs to be removed.
>>
>> I suggest making changes to the groups first, and then separately addressing
>> specific issues like removing someone from cores (not doing enough reviews
>> anymore, or too many positive reviews, let's say > 95%).
>>
>> I hope I haven't missed anyone or anything. Please check carefully.
>> Comments / objections?
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
>> [2] https://review.openstack.org/#/admin/groups/209,members
>> [3] https://review.openstack.org/#/admin/groups/658,members
>> [4] https://review.openstack.org/#/admin/groups/664,members
>> [5] https://review.openstack.org/#/admin/groups/655,members
>> [6] https://review.openstack.org/#/admin/groups/646,members
>> [7] https://review.openstack.org/#/admin/groups/656,members
>> [8] https://review.openstack.org/#/admin/groups/657,members
>> [9] https://review.openstack.org/#/admin/groups/659,members
>> [10] https://review.openstack.org/#/admin/groups/1000,members
>> [11] https://review.openstack.org/#/admin/groups/660,members
>> [12] https://review.openstack.org/#/admin/groups/661,members
>> [13] https://review.openstack.org/#/admin/groups/662,members
>> [14] https://review.openstack.org/#/admin/groups/663,members
>> [15] https://review.openstack.org/#/admin/groups/624,members
>> [16] http://stackalytics.com/report/contribution/fuel-specs/180
>>
>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen

From lin.tan at intel.com  Mon Sep 21 23:22:44 2015
From: lin.tan at intel.com (Tan, Lin)
Date: Mon, 21 Sep 2015 23:22:44 +0000
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
In-Reply-To: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
References: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
Message-ID: <FA7C581A679C7C4AB32F87614B0A41542AA58B3A@CDSMSX102.ccr.corp.intel.com>

It's great to work with you, Josh. Thanks for your valuable comments and suggestions.
Wish you luck in your new job.

Tan
From: Josh Gachnang [mailto:josh at pcsforeducation.com]
Sent: Monday, September 21, 2015 11:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] Stepping down from IPA core

Hey y'all, it's with a heavy heart I have to announce I'll be stepping down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a healthcare startup (Triggr Health) and won't have the time to dedicate to being an effective OpenStack reviewer.

Ever since the OnMetal team proposed IPA allllll the way back in the Icehouse midcycle, this community has been welcoming, helpful, and all around great. You've all helped me grow as a developer with your in depth and patient reviews, for which I am eternally grateful. I'm really sad I won't get to see everyone in Tokyo.

I'll still be on IRC after leaving, so feel free to ping me for any reason :)

- JoshNang

From ayip at vmware.com  Mon Sep 21 23:26:40 2015
From: ayip at vmware.com (Alex Yip)
Date: Mon, 21 Sep 2015 23:26:40 +0000
Subject: [openstack-dev] [Congress] hands on lab
In-Reply-To: <B8D164BED956C5439875951895CB4B223BF057D3@CAFRFD1MSGUSRIA.ITServices.sbc.com>
References: <89e38d2f55c04a15bfcf964050203dfc@EX13-MBX-012.vmware.com>,
 <B8D164BED956C5439875951895CB4B223BF057D3@CAFRFD1MSGUSRIA.ITServices.sbc.com>
Message-ID: <1442878012143.26837@vmware.com>

Thanks for the feedback David!

This VM does use the network, but only so that you can use the browser to connect to Horizon, and your own terminal to connect via SSH.  That is actually not necessary, but it does make the user experience more responsive and makes cutting and pasting work.

I don't know why you encountered the error in your browser.  Were you able to debug that problem at all?

thanks, Alex

________________________________________
From: KARR, DAVID <dk068x at att.com>
Sent: Monday, September 21, 2015 1:50 PM
To: Alex Yip
Subject: RE: [Congress] hands on lab

Alex, I'm going to step through this.  I'll mention some issues or questions here.

I imported this OVA on a CentOS7 laptop, and when I tried to start it, it failed with:

"Could not start the machine devstack-congress
because the following physical network interfaces
were not found:

en0: Wi-Fi (AirPort)(adapter 1)

..."

Note that this laptop was not connected to the internet at this point in time.

I then connected it to a Wi-Fi hotspot (all I have right now is my cell phone as a hotspot) and reopened it, which succeeded.  If a network connection is required, the doc should probably mention this.

I then was able to ssh into the VM, using the IP I got from ifconfig in the VM, which was 192.168.122.1.  However, when I tried "You can also point your browser to the IP address to use Horizon on the devstack VM", by trying to visit "http://192.168.122.1" (or "https://192.168.122.1" or "https://192.168.122.1/dashboard") from the same laptop, it either failed immediately or eventually timed out.

I haven't yet run the "rejoin-stack.sh" script.

> -----Original Message-----
> From: Alex Yip [mailto:ayip at vmware.com]
> Sent: Thursday, September 17, 2015 6:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Congress] hands on lab
>
> Hi all,
> I have created a VirtualBox VM that matches the Vancouver hands-on
> lab here:
>
> https://drive.google.com/file/d/0B94E7u1TIA8oTEdOQlFERkFwMUE/view?usp=sharing
>
> There's also an updated instruction document here:
>
> https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
>
> If you have some time, please try it out to see if it all works as
> expected.
> thanks, Alex
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sleipnir012 at gmail.com  Mon Sep 21 23:27:03 2015
From: sleipnir012 at gmail.com (sleipnir012 at gmail.com)
Date: Mon, 21 Sep 2015 19:27:03 -0400
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
	neutron-lbaas core team
In-Reply-To: <5ED89A5E-56CA-45DC-B056-6E06324DF45F@parksidesoftware.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
 <79408BAC-47AB-4CA6-A8ED-8ABD6C44CC6F@workday.com>
 <c818a3f9b1e64888bcd15702dd73284f@544124-OEXCH02.ror-uc.rackspace.com>
 <5ED89A5E-56CA-45DC-B056-6E06324DF45F@parksidesoftware.com>
Message-ID: <C79240AD-010B-4C21-A0FD-02B47A5F1B7D@gmail.com>

Congrats, Michael. It is well deserved.

Susanne

Sent from my iPhone

> On Sep 21, 2015, at 6:48 PM, Doug Wiegley <dougwig at parksidesoftware.com> wrote:
> 
> HI all,
> 
> Since all cores have responded, this passes. Welcome, Michael!
> 
> Thanks,
> doug
> 
> 
>> On Sep 17, 2015, at 12:29 PM, Brandon Logan <brandon.logan at rackspace.com> wrote:
>> 
>> I'm off today so my +1 is more like a +2
>> 
>> On Sep 17, 2015 12:59 PM, Edgar Magana <edgar.magana at workday.com> wrote:
>> Not a core but I would like to share my +1 about Michael.
>> 
>> Cheers,
>> 
>> Edgar
>> 
>> 
>> 
>> 
>> On 9/16/15, 3:33 PM, "Doug Wiegley" <dougwig at parksidesoftware.com> wrote:
>> 
>> >Hi all,
>> >
>> >As the Lieutenant of the advanced services, I nominate Michael Johnson to be a member of the neutron-lbaas core reviewer team.
>> >
>> >Review stats are in line with other cores[2], and Michael has been instrumental in both neutron-lbaas and octavia.
>> >
>> >Existing cores, please vote +1/-1 for his addition to the team (that's Brandon, Phil, Al, and Kyle.)
>> >
>> >Thanks,
>> >doug
>> >
>> >1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
>> >2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>> >
>> >
>> >__________________________________________________________________________
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From davanum at gmail.com  Mon Sep 21 23:37:54 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Mon, 21 Sep 2015 19:37:54 -0400
Subject: [openstack-dev] [release][all] Release help needed - we are
 incompatible with ourselves
In-Reply-To: <1442867754-sup-4511@lrrr.local>
References: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
 <1442865890-sup-8740@lrrr.local> <1442867754-sup-4511@lrrr.local>
Message-ID: <CANw6fcF8xMAh+_Np5WyMwE78gSRHsbP5BfbQS-7zMkuMemArgA@mail.gmail.com>

+1 Doug. Done. Separated the os-client-config change from that review.

On Mon, Sep 21, 2015 at 4:36 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> [resending, the first copy was lost in transmission]
>
> Excerpts from Doug Hellmann's message of 2015-09-21 16:08:51 -0400:
> > Excerpts from Robert Collins's message of 2015-09-22 07:51:02 +1200:
> > > Constraint updates are still failing: you can see this on
> > > https://review.openstack.org/#/c/221157/ or more generally
> > >
> > > https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:openstack/requirements/constraints,n,z
> > >
> > > Now, the constraints system is *doing its job* - it's made the presence
> > > of an incompatible thing not-a-firedrill. However, we need to do our
> > > part of the job too: we need to fix the incompatibility that exists so
> > > that we can roll forward and start using the new releases that are
> > > being made.
> > >
> > > Right now the release team are picking individual components and
> > > proposing them as merges to move things forward, but it's fairly
> > > fundamentally unsafe to cut the full liberty release while there is a
> > > known incompatibility bug out there.
> > >
> > > So - I'm manually ringing the fire-drill alarm now: we need to get
> > > this fixed so that the released liberty is actually compatible with
> > > the entire ecosystem at time of release.
> > >
> > > What issues are there ?
> > >
> > > Firstly,
> > > 2015-09-21 06:24:00.911 | + openstack --os-token
> > > 3dc712d5120b436ebb7d554405b7c15f --os-url http://127.0.0.1:9292 image
> > > create cirros-0.3.4-x86_64-uec --public --container-format ami
> > > --disk-format ami
> > > 2015-09-21 06:24:01.396 | openstack: 'image' is not an openstack
> > > command. See 'openstack --help'.
> > >
> > > (See the dsvm run from review 221157 -
> > > http://logs.openstack.org/57/221157/12/check/gate-tempest-dsvm-full/17941bd/logs/devstacklog.txt.gz#_2015-09-21_06_24_00_911
> > > )
> >
> > This looks like the error we were seeing before the most recent
> > os-client-config release. I wonder if it would help to update that
> > patch to remove the os-client-config change.  There's a separate
> > patch up to change that constraint in
> > https://review.openstack.org/225363
> > but it depends on some devstack changes.
> >
> > Doug
> >
> > >
> > > Secondly, it's likely that once that's fixed there will be more things
> > > to unwind.
> > >
> > > What will help most is if a few folk familiar with devstack can pull
> > > down review 221157 and do a binary search on the changes in it to
> > > determine which ones are safe and which ones trigger the breakage:
> > > then we can at least land all the safe ones at once and zero in on the
> > > incompatibility - and get it addressed.
> > >
> > > To repeat: this is effectively a release blocker IMO, and the release
> > > is happening - well, $now.
> > >
> > > -Rob
> > >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
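
The binary search Robert asks for above can be sketched as follows. `job_passes` stands in for an actual devstack run against a prefix of the constraint bumps, and the package names are hypothetical examples; the sketch assumes failures are monotonic (any prefix containing the culprit fails):

```python
# Sketch of the suggested bisection: given the ordered list of
# constraint bumps in a review like 221157, find the single bump that
# first makes the devstack job fail, in O(log n) job runs.

def find_breaking_change(changes, job_passes):
    """Return the first change whose inclusion breaks the job, or None."""
    lo, hi = 0, len(changes)
    while lo < hi:
        mid = (lo + hi) // 2
        # Apply the first mid+1 bumps and run the job once.
        if job_passes(changes[:mid + 1]):
            lo = mid + 1  # breakage lies in a later bump
        else:
            hi = mid      # this prefix already fails
    return changes[lo] if lo < len(changes) else None

# Toy run: pretend bumping python-openstackclient is what breaks things.
bumps = ["oslo.config", "python-openstackclient", "os-client-config"]
print(find_breaking_change(bumps,
                           lambda applied: "python-openstackclient" not in applied))
```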

From corpqa at gmail.com  Mon Sep 21 23:41:36 2015
From: corpqa at gmail.com (OpenStack Mailing List Archive)
Date: Mon, 21 Sep 2015 16:41:36 -0700
Subject: [openstack-dev] openstack-dashboard directory is not created
Message-ID: <f03855f395fa3d910d32cfbf2a172888@openstack.nimeyo.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/6c5f3d30/attachment.html>

From michael at the-davies.net  Mon Sep 21 23:43:25 2015
From: michael at the-davies.net (Michael Davies)
Date: Tue, 22 Sep 2015 09:13:25 +0930
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
In-Reply-To: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
References: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
Message-ID: <CAPKtezx2rZ7kGVHvzY3kY9BLTs6sT9A8PLCSB3yW57pzbQ7mdg@mail.gmail.com>

On Tue, Sep 22, 2015 at 1:19 AM, Josh Gachnang <josh at pcsforeducation.com>
wrote:

> Hey y'all, it's with a heavy heart I have to announce I'll be stepping
> down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
> healthcare startup (Triggr Health) and won't have the time to dedicate to
> being an effective OpenStack reviewer.
>

Thanks Josh for everything you've done!  I've really appreciated how you're
always upbeat - we'll miss having you around.

All the best for the new adventure,

Michael...
-- 
Michael Davies   michael at the-davies.net
Rackspace Cloud Builders Australia

From sgolovatiuk at mirantis.com  Tue Sep 22 00:14:00 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Tue, 22 Sep 2015 02:14:00 +0200
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <CAGSrQvzERPpAGb0e6NxOZZSSz-K9EKa1tRsPC=oNqWTD5DW2Xg@mail.gmail.com>
References: <55FC0B8A.4060303@mhtx.net>
 <CAGSrQvzERPpAGb0e6NxOZZSSz-K9EKa1tRsPC=oNqWTD5DW2Xg@mail.gmail.com>
Message-ID: <CA+HkNVt=td0=3sEnCZ34VtkYKpQDjLcxyF8COssjg4UG8h_0mg@mail.gmail.com>

Hi,

Is there any chance of configuring chrony instead of ntpd? It behaves more
predictably in virtual environments.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Sep 21, 2015 at 4:11 PM, Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

> On 18 September 2015 at 14:03, Major Hayden <major at mhtx.net> wrote:
>
>> Hey there,
>>
>> I started working on a bug[1] last night about adding a managed NTP
>> configuration to openstack-ansible hosts.  My patch[2] gets chrony up and
>> running with configurable NTP servers, but I'm still struggling to meet the
>> "Proposal" section of the bug where the author has asked for non-infra
>> physical nodes to get their time from the infra nodes.  I can't figure out
>> how to make it work for AIO builds when one physical host is part of all of
>> the groups. ;)
>>
>> I'd argue that time synchronization is critical for a few areas:
>>
>>   1) Security/auditing when comparing logs
>>   2) Troubleshooting when comparing logs
>>   3) I've been told swift is time-sensitive
>>   4) MySQL/Galera don't like time drift
>>
>> However, there's a strong argument that this should be done by deployers,
>> and not via openstack-ansible.  I'm still *very* new to the project and I'd
>> like to hear some feedback from other folks.
>>
>> [1] https://bugs.launchpad.net/openstack-ansible/+bug/1413018
>> [2] https://review.openstack.org/#/c/225006/
>
>
> We have historically taken the stance of leaving something like this as a
> deployer concern - much like setting up host networking and setting host
> repositories. That said, there's value in opinionation based on best
> practices learned from hard-won lessons in the trenches.
>
> I'm somewhat on the fence with this. As-is I don't think the review should
> go in. That said, I'd be more open to an individual role being used to
> implement an appropriate network time configuration - whether that role be
> something that exists within Ansible Galaxy, or whether it's a new role in
> the current repository, or as its own repository in the OpenStack-Ansible
> 'big tent' as proposed in https://review.openstack.org/213779
>
> I do definitely think that there's value in preparing some documentation
> which will help prospective deployers understand how they can consume roles
> from Ansible Galaxy (or some role in an arbitrary repository) to solve
> common problems like this. The tooling is already in the OpenStack-Ansible
> repository, so all it needs is a guiding document which describes how to
> use it.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/acded76f/attachment.html>

From major at mhtx.net  Tue Sep 22 00:45:03 2015
From: major at mhtx.net (Major Hayden)
Date: Mon, 21 Sep 2015 19:45:03 -0500
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <CA+HkNVt=td0=3sEnCZ34VtkYKpQDjLcxyF8COssjg4UG8h_0mg@mail.gmail.com>
References: <55FC0B8A.4060303@mhtx.net>
 <CAGSrQvzERPpAGb0e6NxOZZSSz-K9EKa1tRsPC=oNqWTD5DW2Xg@mail.gmail.com>
 <CA+HkNVt=td0=3sEnCZ34VtkYKpQDjLcxyF8COssjg4UG8h_0mg@mail.gmail.com>
Message-ID: <5600A48F.60507@mhtx.net>

On 09/21/2015 07:14 PM, Sergii Golovatiuk wrote:
> Is there any chance of configuring chrony instead of ntpd? It behaves more predictably in virtual environments.

That's my plan, if I can find an upstream Ansible galaxy role to use. ;)

--
Major Hayden
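As a footnote to the drift concerns raised in this thread, measuring a host's offset against an NTP server takes only a few lines of Python. This is a sketch, not part of openstack-ansible; the default server name and timeout are illustrative assumptions:

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800


def ntp_to_unix(ntp_seconds):
    """Convert NTP timestamp seconds to Unix epoch seconds."""
    return ntp_seconds - NTP_EPOCH_OFFSET


def query_sntp(server="pool.ntp.org", timeout=2.0):
    """Send a minimal SNTPv3 client request and return the server time
    as Unix epoch seconds."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp's seconds field occupies bytes 40-43.
    transmit = struct.unpack("!I", data[40:44])[0]
    return ntp_to_unix(transmit)
```

A drift check would then be `query_sntp() - time.time()`; anything beyond a second or so on a Galera or swift node is worth fixing.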


From malini.kamalambal at RACKSPACE.COM  Tue Sep 22 00:50:27 2015
From: malini.kamalambal at RACKSPACE.COM (Malini Kamalambal)
Date: Tue, 22 Sep 2015 00:50:27 +0000
Subject: [openstack-dev] [poppy] Nominate Sriram Madupasi Vasudevan for
 Poppy (CDN) to Core
In-Reply-To: <D225ECDD.53371%amit.gandhi@rackspace.com>
References: <D225ECDD.53371%amit.gandhi@rackspace.com>
Message-ID: <D2261E3D.3A3B6%malini.kamalambal@rackspace.com>

+2

From: Amit Gandhi <amit.gandhi at RACKSPACE.COM<mailto:amit.gandhi at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 5:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [poppy] Nominate Sriram Madupasi Vasudevan for Poppy (CDN) to Core

All,

I would like to nominate Sriram Madupasi Vasudevan (thesriram) [1] to Core for Poppy (CDN) [2].

Sriram has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/371e1b7d/attachment.html>

From malini.kamalambal at RACKSPACE.COM  Tue Sep 22 00:50:40 2015
From: malini.kamalambal at RACKSPACE.COM (Malini Kamalambal)
Date: Tue, 22 Sep 2015 00:50:40 +0000
Subject: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core
Message-ID: <D2261E54.3A3B7%malini.kamalambal@rackspace.com>

+2

From: Amit Gandhi <amit.gandhi at RACKSPACE.COM<mailto:amit.gandhi at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 5:22 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core

All,

I would like to nominate Tony Tan (tonytan4ever) [1] to Core for Poppy (CDN) [2].

Tony has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.  He has written the majority of the Akamai driver and has been working hard to bring SSL integration to the workflow.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/3400012a/attachment.html>

From banveerad at gmail.com  Tue Sep 22 00:57:24 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Mon, 21 Sep 2015 17:57:24 -0700
Subject: [openstack-dev]  [neutron][lbaas] - Heat support for LbaasV2
Message-ID: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>

Hi All,
I was thinking of starting the work on heat to support LBaasV2. Are there
any concerns about that?

I don't know if it is the right time to bring this up :D .

Thanks,
Banashankar (bana_k)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150921/7f2fefbc/attachment.html>

From xiangfeiz at vmware.com  Tue Sep 22 00:58:34 2015
From: xiangfeiz at vmware.com (Xiangfei Zhu)
Date: Tue, 22 Sep 2015 00:58:34 +0000
Subject: [openstack-dev] New PyCharm License
Message-ID: <D226C887.16766%xiangfeiz@vmware.com>

Hi Andrew,

Do I still need to send you my launchpad ID after using the link to get
the license?

Thank you.
Xiangfei

On 9/22/15 2:30 AM, "Andrew Melton" <andrew.melton at RACKSPACE.COM> wrote:

>Please follow this link to request a license:
>https://urldefense.proofpoint.com/v2/url?u=https-3A__account.jetbrains.com
>_a_4c4ojw&d=BQIGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=OGorF65
>2R-rlzkrhTf-HG1EJV0TOb5Z7GjvWSJiGSCs&m=pHAMhy6mc7DbAL_vjreoK4PBtwF6IFHWi_b
>0UQkduNo&s=XKeYJ7JzMukfQaiX7-RqlsSIuKupObi7hZt7Z3UsCyo&e= .
>
>You will need a JetBrains account to request the license. This link is
>open for anyone to use, so please do not share it in public. You may
>share it with other OpenStack contributors on your team, but if you do,
>please send me their launchpad-ids. Lastly, if you decide to stop using
>PyCharm, please send me an email so I can revoke the license and open it
>up for use by someone else.
>
>Thanks!
>Andrew
>________________________________________
>From: Kekane, Abhishek <Abhishek.Kekane at nttdata.com>
>Sent: Monday, September 21, 2015 1:10 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] New PyCharm License
>
>Hi Andrew,
>
>My launchpad id is abhishek-kekane
>
>Thank you,
>
>Abhishek
>________________________________________
>From: Andrew Melton [andrew.melton at RACKSPACE.COM]
>Sent: Monday, September 21, 2015 10:54 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] New PyCharm License
>
>Hi devs,
>
>
>I've got the new license for the next year. As always, please reply to
>this email with your launchpad-id if you would like a license.
>
>
>Also, if there are other JetBrains products you use to contribute to
>OpenStack, please let me know and I will request licenses.
>
>
>
>--Andrew
>
>
>______________________________________________________________________
>Disclaimer: This email and any attachments are sent in strictest
>confidence
>for the sole use of the addressee and may contain legally privileged,
>confidential, and proprietary data. If you are not the intended recipient,
>please advise the sender by replying promptly to this email and then
>delete
>and destroy this email and any attachments without any further use,
>copying
>or forwarding.
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From yangbaohua at gmail.com  Tue Sep 22 01:59:01 2015
From: yangbaohua at gmail.com (Baohua Yang)
Date: Tue, 22 Sep 2015 09:59:01 +0800
Subject: [openstack-dev] New Pycharm License
In-Reply-To: <1380210077.13436630@apps.rackspace.com>
References: <1380210077.13436630@apps.rackspace.com>
Message-ID: <CACbo-ECCp_fpOEj4b5sF2i77Lct3N8DJLVZi_E_H5FK4skZgew@mail.gmail.com>

Hi andrew
Would greatly appreciate a PyCharm license key!
Thanks a lot!

On Thu, Sep 26, 2013 at 11:41 PM, Andrew Melton <andrew.melton at rackspace.com
> wrote:

> Hey Devs,
>
>
>
> It's almost been a year since I sent out the first email and I've been
> getting a few emails lately about alerts that the current license is about
> to expire. Well, I've got a hold of our new license, good for another year.
> This'll give you access to the new Pro edition of Pycharm and any updates
> for a year.
>
>
>
> As this list is public, I can't email the license out to everyone, so
> please reply to this email and I'll get you the license.
>
>
>
> Also, please note that if your current license expires, Pycharm will
> continue to work. You will just stop receiving updates until you've entered
> this new license.
>
>
>
> Thanks,
>
> Andrew Melton
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/062b9c36/attachment.html>

From robertc at robertcollins.net  Tue Sep 22 03:16:26 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 22 Sep 2015 15:16:26 +1200
Subject: [openstack-dev] [releases] semver and dependency changes
Message-ID: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>

Currently we don't provide specific guidance on what should happen
when the only changes in a project are dependency changes and a
release is made.

The releases team has been treating a dependency change as 'feature'
rather than 'bugfix' in semver modelling - so if the last release was
1.2.3, and a requirements sync happens to fix a bug (e.g. a too-low
minimum dependency), then the next release would be 1.3.0.

Reasoning about this can be a little complex to do on the fly, so I'd
like to put together some concrete guidance - which essentially means
being able to provide a heuristic to answer the questions:

'Is this requirements change an API break' or 'is this requirements
change feature work' or 'is this requirements change a bugfix'.

It seems clear to me that all three can be true. For example, consider
a library X that exposes library Y as part of its API, and whose
dependency on Y changes from
Y>=1
to
Y>=2

Suppose that's happening due to an API break - e.g. Y has removed some
old backwards-compatibility cruft. X won't break or need changing, and
it's possible that none of X's callers will need to change either. But
some of them might have been using some of the things that went away in
Y==2, and so will break. So it's an API break in X. But why would X do
that - surely it's doing its own API break? Well, no: let's say it's
adding a feature that was only added in Y==2; then setting the minimum
to 2 is necessary, and entirely unrelated to the fact that an API
break is involved.

So the sequence there would be something like:
update X's requirements to Y >= 2
use new feature from Y >= 2 [ this is a 'feature' patch, not an api-break].
release X, and it should be a new major version.

Now, if Y is not exposed, a change in X's dependency on Y clearly
has nothing to do with X's version... but users of X that
independently use Y will still be impacted, since upgrading X will
upgrade their Y [ignoring the intricacies surrounding pip here :)].

So, one answer we can use is "The version impact of a requirements
change is never less than the largest version change in the change."
That is:
nothing -> a requirement -> major version change
1.x.y -> 2.0.0 -> major version change
1.2.y -> 1.3.0 -> minor version change
1.2.3 -> 1.2.4 -> patch version change

We could calculate the needed change programmatically for this
approach in the requirements syncing process.
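The mapping above is mechanical enough to compute during a requirements sync. A rough sketch, assuming plain `x.y.z` minimum versions and ignoring pip's richer specifier grammar:

```python
def bump_level(old, new):
    """Return the semver impact ('major', 'minor', 'patch', or None) of
    moving a single requirement's minimum version from `old` to `new`.
    `old` is None when the requirement is newly added."""
    if old is None:
        return "major"  # nothing -> a requirement
    old_parts = [int(p) for p in old.split(".")]
    new_parts = [int(p) for p in new.split(".")]
    if new_parts[0] != old_parts[0]:
        return "major"
    if new_parts[1] != old_parts[1]:
        return "minor"
    if new_parts[2] != old_parts[2]:
        return "patch"
    return None


def requirements_impact(changes):
    """Largest impact across a set of (old, new) minimum-version changes,
    per the 'never less than the largest version change' rule."""
    order = {None: 0, "patch": 1, "minor": 2, "major": 3}
    worst = None
    for old, new in changes:
        level = bump_level(old, new)
        if order[level] > order[worst]:
            worst = level
    return worst
```

A requirements-sync job could run this over the diff and propose the release level automatically, leaving humans to override only the unusual cases.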

Another approach would be to say that only explicitly exposed
interfaces matter, but I think this is a disservice to our consumers.

A third approach would be to always pick a minor version, as the
releases team's evolving process does, but because requirements
changes *can* be API breaks to users of components, I think that
is too conservative.

A fourth one would be to pick patch level for every change, but that
too is too conservative for exactly the same reasons.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From doc at aedo.net  Tue Sep 22 03:23:48 2015
From: doc at aedo.net (Christopher Aedo)
Date: Mon, 21 Sep 2015 20:23:48 -0700
Subject: [openstack-dev] [app-catalog] App Catalog core cleanup
Message-ID: <CA+odVQHOp+e8QWyzRm+xzztnVe=1SyEnDAQcyjDT4xpbKNzf3Q@mail.gmail.com>

In an effort to do some housekeeping, I plan to clean up the list of
core reviewers in the App Catalog.  Currently the members are:
Alexander Tivelkov, Angus Salkeld, Christopher Aedo, Georgy
Okrokvertskhov, Herman Narkaytis, Kevin Fox, Pavlo Shchelokovskyy,
Serg Melikyan and Tom Fifield

Based on Stackalytics[1] only Kevin Fox, Tom Fifield and myself have
maintained an ongoing effort of reviews and contributions.  Though
everyone on that list was absolutely instrumental in getting the App
Catalog off the ground, it seems like priorities have shifted since
then for many of the early contributors.

My intention is to remove the inactive members at the end of this week
unless they're interested in renewing their efforts in the very near
future.  If you do intend to get involved again, please speak up,
thanks!

[1] http://stackalytics.com/report/contribution/app-catalog/90

-Christopher


From ganeshna at cisco.com  Tue Sep 22 03:25:38 2015
From: ganeshna at cisco.com (Ganesh Narayanan (ganeshna))
Date: Tue, 22 Sep 2015 03:25:38 +0000
Subject: [openstack-dev] [neutron] Neutron debugging tool
In-Reply-To: <CAP0B2WMOFLMdH7t9hNc0uw8xqNcUvvrLBZa7D4hHU06c2bLMuA@mail.gmail.com>
Message-ID: <D226C7C9.7CEB9%ganeshna@cisco.com>

Another project for diagnosing OVS in Neutron:

https://github.com/CiscoSystems/don

Thanks,
Ganesh

From: Salvatore Orlando <salv.orlando at gmail.com<mailto:salv.orlando at gmail.com>>
Reply-To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, 21 September 2015 2:55 pm
To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron] Neutron debugging tool

It does indeed sound like easyOVS covers what you're aiming for.
However, from what I gather there is still plenty to do in easyOVS, so perhaps rather than starting a new toolset from scratch you might build on the existing one.

Personally I'd welcome its adoption into the Neutron stadium, as debugging control-plane/data-plane issues in the neutron reference implementation is becoming difficult even for expert users and developers.
I'd just suggest renaming it, because calling it "OVS" is just plain wrong. The neutron reference implementation and OVS are two distinct things.

As concerns neutron-debug: this is a tool that was developed in the early stages of the project to verify connectivity using "probes" in namespaces. These probes are simply tap interfaces associated with neutron ports. The neutron-debug tool is still used in some devstack exercises. Nevertheless, I'd rather keep building something like easyOVS and then deprecate neutron-debug than develop it further.

Salvatore


On 21 September 2015 at 02:40, Li Ma <skywalker.nick at gmail.com<mailto:skywalker.nick at gmail.com>> wrote:
AFAIK, there is a project available on GitHub that does the same thing.
https://github.com/yeasy/easyOVS

I used it before.

On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodirov at gmail.com<mailto:nodir.qodirov at gmail.com>> wrote:
> Hello,
>
> I am planning to develop a tool for network debugging. Initially, it
> will handle the DVR case, and can be extended to others later. Based
> on my OpenStack deployment/operations experience, I am planning to
> handle common pitfalls/misconfigurations, such as:
> 1) check external gateway validity
> 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> compute/network hosts
> 3) execute probing commands inside namespaces, to verify reachability
> 4) etc.
>
> I came across neutron-debug [1], which mostly focuses on namespace
> debugging. Its coverage is limited to the OpenStack side, while I am
> planning to cover the compute/network hosts as well. In my experience, I
> had to ssh to the host(s) to accurately diagnose the failure (e.g., cases
> 1 and 2 above). The tool I am considering will handle these, given the
> host credentials.
>
> I'd like to get the community's feedback on the utility of such a
> debugging tool. Do people use neutron-debug in their OpenStack
> environments? Does the tool I am planning to develop, with complete
> diagnosis coverage, sound useful? Is anyone interested in joining the
> development? All feedback is welcome.
>
> Thanks,
>
> - Nodir
>
> [1] http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Li Ma (Nick)
Email: skywalker.nick at gmail.com<mailto:skywalker.nick at gmail.com>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/73c28b0b/attachment.html>
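The namespace checks listed earlier in this thread (verifying that the expected qrouter/qdhcp/fip namespaces exist on a host) can be sketched in a few lines of Python. The function and sample names below are illustrative only, not part of neutron-debug, easyOVS, or don:

```python
import subprocess


def parse_namespaces(ip_netns_output):
    """Extract namespace names from `ip netns list` output.
    Newer iproute2 appends ' (id: N)', which is stripped here."""
    names = []
    for line in ip_netns_output.splitlines():
        line = line.strip()
        if line:
            names.append(line.split()[0])
    return names


def missing_namespaces(expected, ip_netns_output):
    """Return the expected namespaces that are absent on this host."""
    present = set(parse_namespaces(ip_netns_output))
    return [ns for ns in expected if ns not in present]


def list_host_namespaces():
    """Run `ip netns list` locally; a multi-host checker would run this
    over ssh with the host credentials instead."""
    out = subprocess.check_output(["ip", "netns", "list"]).decode()
    return parse_namespaces(out)
```

Given a router or network UUID from the API, the expected names (`qrouter-<uuid>`, `qdhcp-<uuid>`, `fip-<uuid>`) can be constructed and checked per host.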

From judy_yujie at 126.com  Tue Sep 22 03:28:25 2015
From: judy_yujie at 126.com (yujie)
Date: Tue, 22 Sep 2015 11:28:25 +0800
Subject: [openstack-dev] [neutron][sriov] SRIOV-VM could not work well with
	normal VM
Message-ID: <5600CAD9.2070706@126.com>

Hi all,
I am using neutron kilo without DVR to create an SR-IOV instance, VM-A;
it works well and can reach its gateway fine.
But when I have the normal instance VM-B, which is on the same compute
node as VM-A, ping its gateway, it fails. I captured packets on the
network node and found that the gateway already sends the ARP reply to
VM-B, but the compute node hosting VM-B never delivers the packet to VM-B.
If I delete VM-A and run: echo 0 >
/sys/class/enp5s0f0/device/sriov_numvfs, the problem is solved.

Is this the same issue as the bug "SR-IOV port doesn't reach OVS port
on same compute node"?
https://bugs.launchpad.net/neutron/+bug/1492228
Any suggestions would be appreciated.

Thanks,
Yujie



From rameshg87.openstack at gmail.com  Tue Sep 22 03:59:55 2015
From: rameshg87.openstack at gmail.com (Ramakrishnan G)
Date: Tue, 22 Sep 2015 09:29:55 +0530
Subject: [openstack-dev] [Ironic] Stepping down from IPA core
In-Reply-To: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
References: <CAOrc4CwzLEPDkbJwHQAwCsmowR=N2Jxzr-0V1P2jHRkseWp2rw@mail.gmail.com>
Message-ID: <CA+OfGHrzWj5x2tw6+j61JHv5C3db+2yvsuT=LgOCuZPOHAk8aQ@mail.gmail.com>

Josh, we will surely miss you in Ironic :(.  Thanks for all the work and
all the best on your new job.

On Mon, Sep 21, 2015 at 9:19 PM, Josh Gachnang <josh at pcsforeducation.com>
wrote:

> Hey y'all, it's with a heavy heart I have to announce I'll be stepping
> down from the IPA core team on Thurs, 9/24. I'm leaving Rackspace for a
> healthcare startup (Triggr Health) and won't have the time to dedicate to
> being an effective OpenStack reviewer.
>
> Ever since the OnMetal team proposed IPA allllll the way back in the
> Icehouse midcycle, this community has been welcoming, helpful, and all
> around great. You've all helped me grow as a developer with your in depth
> and patient reviews, for which I am eternally grateful. I'm really sad I
> won't get to see everyone in Tokyo.
>
> I'll still be on IRC after leaving, so feel free to ping me for any reason
> :)
>
> - JoshNang
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/381dd9a0/attachment.html>

From noorul at noorul.com  Tue Sep 22 04:14:18 2015
From: noorul at noorul.com (Noorul Islam K M)
Date: Tue, 22 Sep 2015 09:44:18 +0530
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442847252531.42564@RACKSPACE.COM> (Andrew Melton's message of
 "Mon, 21 Sep 2015 14:54:11 +0000")
References: <1442847252531.42564@RACKSPACE.COM>
Message-ID: <m2oagvytt1.fsf@noorul.com>

Andrew Melton <andrew.melton at RACKSPACE.COM> writes:

> Hi devs,
>
>
> I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.
>
>
> Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.
>

My id is noorul

Thanks and Regards
Noorul


From moshele at mellanox.com  Tue Sep 22 05:24:53 2015
From: moshele at mellanox.com (Moshe Levi)
Date: Tue, 22 Sep 2015 05:24:53 +0000
Subject: [openstack-dev] [neutron][sriov] SRIOV-VM could not work well
 with	normal VM
In-Reply-To: <5600CAD9.2070706@126.com>
References: <5600CAD9.2070706@126.com>
Message-ID: <AM4PR05MB1523979B5C1500571B0A3A2DD0450@AM4PR05MB1523.eurprd05.prod.outlook.com>

Hi Yujie,

There is a patch https://review.openstack.org/#/c/198736/ which I wrote to add the MAC of the
normal instance to the SR-IOV embedded switch, so that the packet goes to the PF instead of
going out to the wire. This is done with the bridge tool, using the command
"bridge fdb add <mac> dev <interface>".

I was able to test it on a Mellanox ConnectX-3 card with both VLAN and flat networks, and it
worked fine. I wasn't able to test it on any of the Intel cards, but I was told it only works on
flat networks; on VLAN networks the Intel card drops the tagged packets and they do not go
up to the VF.

What NIC are you using? Can you try running "bridge fdb add <mac> dev <interface>", where
<mac> is the MAC of the normal VM and <interface> is the PF, and see if that resolves the issue?
Also, can you check it with both flat and VLAN networks?
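For checking several VMs at once, the fdb manipulation can be scripted. This is only a sketch: the MAC and PF name in the example are placeholders, and the real command needs root plus iproute2's `bridge` tool:

```python
import subprocess


def fdb_add_command(mac, interface):
    """Build the `bridge fdb add` argv that steers a MAC to the PF."""
    return ["bridge", "fdb", "add", mac, "dev", interface]


def add_fdb_entries(macs, interface, dry_run=True):
    """Add one fdb entry per MAC. With dry_run=True, just return the
    commands without executing them (execution requires root)."""
    commands = [fdb_add_command(mac, interface) for mac in macs]
    if not dry_run:
        for cmd in commands:
            subprocess.check_call(cmd)
    return commands


# Dry-run example; the MAC and PF name are illustrative placeholders.
cmds = add_fdb_entries(["fa:16:3e:aa:bb:cc"], "enp5s0f0")
```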


-----Original Message-----
From: yujie [mailto:judy_yujie at 126.com] 
Sent: Tuesday, September 22, 2015 6:28 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [neutron][sriov] SRIOV-VM could not work well with normal VM

Hi all,
I am using neutron kilo without DVR to create an SR-IOV instance, VM-A; it works well and can reach its gateway fine.
But when I have the normal instance VM-B, which is on the same compute node as VM-A, ping its gateway, it fails. I captured packets on the network node and found that the gateway already sends the ARP reply to VM-B, but the compute node hosting VM-B never delivers the packet to VM-B.
If I delete VM-A and run: echo 0 >
/sys/class/enp5s0f0/device/sriov_numvfs, the problem is solved.

Is this the same issue as the bug "SR-IOV port doesn't reach OVS port on same compute node"?
https://bugs.launchpad.net/neutron/+bug/1492228
Any suggestions would be appreciated.

Thanks,
Yujie


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From hejie.xu at intel.com  Tue Sep 22 06:03:16 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Tue, 22 Sep 2015 14:03:16 +0800
Subject: [openstack-dev] [nova] Nova API sub-team meeting
Message-ID: <2114BDCF-24CE-44F6-AB12-46943EA005BF@intel.com>

Hi,

We have weekly Nova API meeting this week. The meeting is being held Tuesday UTC1200.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI <https://wiki.openstack.org/wiki/Meetings/NovaAPI>

Please feel free to add items to the agenda.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/ef3dbd51/attachment.html>

From bharath.thiruveedula at imaginea.com  Tue Sep 22 06:08:46 2015
From: bharath.thiruveedula at imaginea.com (Bharath Thiruveedula)
Date: Tue, 22 Sep 2015 11:38:46 +0530
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442847252531.42564@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
Message-ID: <CAO0guEf3EFdQW6aBVOKg3GbTTpV1ORLrDG6UB9KgQgLyfBHkYg@mail.gmail.com>

Hi  Andrew,

My Launchpad ID is "bharath-ves"

Regards
Bharath T

On Mon, Sep 21, 2015 at 8:24 PM, Andrew Melton <andrew.melton at rackspace.com>
wrote:

> Hi devs,
>
>
> I've got the new license for the next year. As always, please reply to
> this email with your launchpad-id if you would like a license.
>
>
> Also, if there are other JetBrains products you use to contribute to
> OpenStack, please let me know and I will request licenses.
>
>
>
> --Andrew
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/1a6d2666/attachment.html>

From gkotton at vmware.com  Tue Sep 22 06:21:01 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Tue, 22 Sep 2015 06:21:01 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <CAO0guEf3EFdQW6aBVOKg3GbTTpV1ORLrDG6UB9KgQgLyfBHkYg@mail.gmail.com>
References: <1442847252531.42564@RACKSPACE.COM>
 <CAO0guEf3EFdQW6aBVOKg3GbTTpV1ORLrDG6UB9KgQgLyfBHkYg@mail.gmail.com>
Message-ID: <D226CDE6.BF035%gkotton@vmware.com>

Hi,
Mine is garyk
Thanks
Gary

From: Bharath Thiruveedula <bharath.thiruveedula at imaginea.com<mailto:bharath.thiruveedula at imaginea.com>>
Reply-To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 22, 2015 at 9:08 AM
To: OpenStack List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] New PyCharm License

Hi  Andrew,

My Launchpad ID is "bharath-ves"

Regards
Bharath T

On Mon, Sep 21, 2015 at 8:24 PM, Andrew Melton <andrew.melton at rackspace.com<mailto:andrew.melton at rackspace.com>> wrote:

Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/bb114ddc/attachment.html>

From xuanlangjian at gmail.com  Tue Sep 22 07:05:43 2015
From: xuanlangjian at gmail.com (Ethan Lynn)
Date: Tue, 22 Sep 2015 15:05:43 +0800
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
Message-ID: <CAH_1FkAytQwRA3viL0P06SsN3xwACJGgm5pozZbUBPTW0Oi6sg@mail.gmail.com>

Hi Banashankar,
  There's a BP for this
https://blueprints.launchpad.net/heat/+spec/support-neutron-lb-v2-model-definition
.
  And I plan to submit a spec for it, but I haven't figured out how to
implement it yet. Maybe we can work together with huangtianhua.
There are two choices for implementing it:
  1. Add totally new resources for LBaaS v2, like OS::Neutron::LoadBalancerV2.
  2. Modify existing resources to support LBaaS v2, e.g. add a new property
'version' to control which properties should be used for each version.

Hope to hear more feedback.
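The cost of choice 2 can be sketched roughly as follows. This is a hypothetical illustration, not actual Heat plugin code; the property names and the validate_lb_properties helper are invented for the example:

```python
# Hypothetical sketch of choice 2 (not actual Heat code): a single resource
# selects its property schema based on a 'version' property.

V1_PROPERTIES = {"protocol", "lb_method", "subnet_id"}
V2_PROPERTIES = {"protocol", "lb_algorithm", "vip_subnet_id"}


def validate_lb_properties(props):
    """Reject properties that do not belong to the requested LBaaS version."""
    version = props.get("version", 1)
    allowed = V2_PROPERTIES if version == 2 else V1_PROPERTIES
    unknown = set(props) - allowed - {"version"}
    if unknown:
        raise ValueError("properties not valid for LBaaS v%s: %s"
                         % (version, sorted(unknown)))
    return version
```

Having to thread this kind of version dispatch through every property handler is the main argument against choice 2, and for the separate V2 resources of choice 1.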

2015-09-22 8:57 GMT+08:00 Banashankar KV <banveerad at gmail.com>:

> Hi All,
> I was thinking of starting the work on Heat to support LBaaS v2. Are there
> any concerns about that?
>
> I don't know if it is the right time to bring this up :D .
>
> Thanks,
> Banashankar (bana_k)
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/68d1db9a/attachment.html>

From banveerad at gmail.com  Tue Sep 22 07:38:08 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 00:38:08 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CAH_1FkAytQwRA3viL0P06SsN3xwACJGgm5pozZbUBPTW0Oi6sg@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <CAH_1FkAytQwRA3viL0P06SsN3xwACJGgm5pozZbUBPTW0Oi6sg@mail.gmail.com>
Message-ID: <CABkBM5G3_7rhsqk15NsDPy=EZCSxiuxuz3uRz3nuuEoMJMcH8Q@mail.gmail.com>

Hi Ethan,
Oh, that's cool, I didn't see that BP.
Sure, we can work together on it.
As there will be no support for LBaaS v1 in Liberty and upcoming releases, I
think we can stick to the first approach that you mentioned.

But we need to discuss whether to name the types
OS::Neutron::[lb_element]V2 or OS::Neutron::[lb_element],

where lb_element is one of:
1. Loadbalancer.
2. Listener.
3. Pool.
4. Members.
5. Health monitors.
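Concretely, the V2-suffixed option would produce type names along these lines (element names here are illustrative only; the exact set is part of what needs discussing):

```python
# Illustrative only: the resource type names a V2 suffix would produce.
LB_ELEMENTS = ["LoadBalancer", "Listener", "Pool", "PoolMember",
               "HealthMonitor"]
V2_TYPES = ["OS::Neutron::%sV2" % e for e in LB_ELEMENTS]
```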

Thanks,
Banashankar


On Tue, Sep 22, 2015 at 12:05 AM, Ethan Lynn <xuanlangjian at gmail.com> wrote:

> Hi Banashankar,
>   There's a BP for this
> https://blueprints.launchpad.net/heat/+spec/support-neutron-lb-v2-model-definition
> .
>   And I plan to submit a spec for it, but I haven't figured out how to
> implement it yet. Maybe we can work together with huangtianhua.
> There are two choices for implementing it:
>   1. Add totally new resources for LBaaS v2, like
> OS::Neutron::LoadBalancerV2.
>   2. Modify existing resources to support LBaaS v2, e.g. add a new property
> 'version' to control which properties should be used for each version.
>
> Hope to hear more feedback.
>
> 2015-09-22 8:57 GMT+08:00 Banashankar KV <banveerad at gmail.com>:
>
>> Hi All,
>> I was thinking of starting the work on Heat to support LBaaS v2. Are there
>> any concerns about that?
>>
>> I don't know if it is the right time to bring this up :D .
>>
>> Thanks,
>> Banashankar (bana_k)
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/c3ee6b5c/attachment.html>

From salv.orlando at gmail.com  Tue Sep 22 07:51:08 2015
From: salv.orlando at gmail.com (Salvatore Orlando)
Date: Tue, 22 Sep 2015 08:51:08 +0100
Subject: [openstack-dev] [neutron] Neutron debugging tool
In-Reply-To: <D226C7C9.7CEB9%ganeshna@cisco.com>
References: <CAP0B2WMOFLMdH7t9hNc0uw8xqNcUvvrLBZa7D4hHU06c2bLMuA@mail.gmail.com>
 <D226C7C9.7CEB9%ganeshna@cisco.com>
Message-ID: <CAP0B2WPHNGpPCY07d9abeuYwBZqk3XUcCK7wkfbNhSGGTV+k0A@mail.gmail.com>

Thanks Ganesh!

I did not know about this tool.
I also quite like the network visualization bits, though I wonder how
practical that would be when one debugs very large deployments.

I think it wouldn't be a bad idea to list these tools in the networking
guide, in neutron's devref, or both.

Salvatore

On 22 September 2015 at 04:25, Ganesh Narayanan (ganeshna) <
ganeshna at cisco.com> wrote:

> Another project for diagnosing OVS in Neutron:
>
> https://github.com/CiscoSystems/don
>
> Thanks,
> Ganesh
>
> From: Salvatore Orlando <salv.orlando at gmail.com>
> Reply-To: OpenStack Development Mailing List <
> openstack-dev at lists.openstack.org>
> Date: Monday, 21 September 2015 2:55 pm
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] Neutron debugging tool
>
> It sounds like easyOVS indeed covers what you're aiming at.
> However, from what I gather there is still plenty to do in easyOVS, so
> perhaps rather than starting a new toolset from scratch you might build on
> the existing one.
>
> Personally I'd welcome its adoption into the Neutron stadium as debugging
> control plane/data plane issues in the neutron reference impl is becoming
> difficult also for expert users and developers.
> I'd just suggest renaming it because calling it "OVS" is just plain wrong.
> The neutron reference implementation and OVS are two distinct things.
>
> As concerns neutron-debug, this is a tool that was developed in the early
> stages of the project to verify connectivity using "probes" in namespaces.
> These probes are simply tap interfaces associated with neutron ports. The
> neutron-debug tool is still used in some devstack exercises. Nevertheless,
> I'd rather keep building something like easyOVS and then deprecate
> neutron-debug rather than develop it further.
>
> Salvatore
>
>
> On 21 September 2015 at 02:40, Li Ma <skywalker.nick at gmail.com> wrote:
>
>> AFAIK, there is a project available in the github that does the same
>> thing.
>> https://github.com/yeasy/easyOVS
>>
>> I used it before.
>>
>> On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodirov at gmail.com>
>> wrote:
>> > Hello,
>> >
>> > I am planning to develop a tool for network debugging. Initially, it
>> > will handle the DVR case, which can also be extended to others too. Based
>> > on my OpenStack deployment/operations experience, I am planning to
>> > handle common pitfalls/misconfigurations, such as:
>> > 1) check external gateway validity
>> > 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
>> > compute/network hosts
>> > 3) execute probing commands inside namespaces, to verify reachability
>> > 4) etc.
>> >
>> > I came across neutron-debug [1], which mostly focuses on namespace
>> > debugging. Its coverage is limited to OpenStack, while I am planning
>> > to cover compute/network nodes as well. In my experience, I had to ssh
>> > to the host(s) to accurately diagnose the failure (e.g., 1, 2 cases
>> > above). The tool I am considering will handle these, given the host
>> > credentials.
>> >
>> > I'd like to get the community's feedback on the utility of such a
>> > debugging tool. Do people use neutron-debug in their OpenStack
>> > environments? Does the tool I am planning to develop, with complete
>> > diagnosis coverage, sound useful? Is anyone interested in joining the
>> > development? All feedback is welcome.
>> >
>> > Thanks,
>> >
>> > - Nodir
>> >
>> > [1]
>> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>> >
>> >
>>
>>
>>
>> --
>>
>> Li Ma (Nick)
>> Email: skywalker.nick at gmail.com
>>
>>
>
>
>
>
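Check (2) in Nodir's list - verifying that the expected qrouter/qdhcp/fip namespaces actually exist on a compute or network host - could be sketched along these lines. The helper names are invented for the example, and it assumes the tool can run `ip netns` on the host:

```python
# Hypothetical sketch of namespace check (2); commands and names assumed.
import subprocess


def list_namespaces(run=subprocess.check_output):
    """Return the network namespace names present on this host."""
    out = run(["ip", "netns", "list"]).decode()
    return [line.split()[0] for line in out.splitlines() if line.strip()]


def missing_namespaces(expected, present):
    """Namespaces we expected (e.g. qrouter-<router-id>) but did not find."""
    return sorted(set(expected) - set(present))
```

The `run` parameter is there so the same check can be pointed at a remote host (e.g. wrapped in ssh), which is the gap neutron-debug leaves open.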
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/1d01f76b/attachment.html>

From blak111 at gmail.com  Tue Sep 22 08:48:14 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Tue, 22 Sep 2015 04:48:14 -0400
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <SN1PR02MB1695D86F23755AA6FF33240099580@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
 <SN1PR02MB16950E11BFF66EB83AC3F361995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMn5Jv74NL1TBpciM7PxdLMT0SXN+6+9WUaju=bGqW5ig@mail.gmail.com>
 <SN1PR02MB1695D86F23755AA6FF33240099580@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CAO_F6JMLyD0yu-HH881FuWy=5YCrqcHT=3AQ2arCvH4UBbK9nQ@mail.gmail.com>

There is no guarantee that two separate external networks will have routing
connectivity to each other. It seems like your (b) statement implies that.
On Sep 19, 2015 3:12 PM, "Neil Jerram" <Neil.Jerram at metaswitch.com> wrote:

> On 17/09/15 19:38, Kevin Benton wrote:
> > router:external only affects the behavior of Neutron routers. It
> > allows them to attach to it with an external gateway interface which
> > implies NAT and floating IPs.
>
> I presume you're talking about the reference implementation here.  OK,
> but my concern first is about what it _means_, in particular for
> instances attached to such a network.  Which your next paragraph addresses:
>
> >
> > From an instance's perspective, an external network would be no
> > different than any other provider network scenario that uses a
> > non-Neutron router. Nothing different happens with the routing of
> > traffic.
>
> Right, thanks.  It seems to me that you're saying the same thing here as
> my (b) below.  In case not, please do say more precisely what isn't
> correct about my (b) statement.
>
> >
> > >Also I believe that (c) is already true for Neutron external networks
> > - i.e. it doesn't make sense to assign a floating IP to an instance
> > that is directly on an external network.  Is that correct?
> >
> > Well not floating IPs from the same external network, but you could
> > conceivably have layers where one external network has an internal
> > Neutron router interface that leads to another external network via a
> > Neutron router.
>
> Agreed.
>
> Many thanks for your input in this thread!
>
> Regards,
>     Neil
>
> >
> >
> > On Thu, Sep 17, 2015 at 10:17 AM, Neil Jerram
> > <Neil.Jerram at metaswitch.com <mailto:Neil.Jerram at metaswitch.com>> wrote:
> >
> >     Thanks so much for your continuing answers; they are really
> >     helping me.
> >
> >     I see your points now about the special casing, and about the
> >     semantic expectations and internal wiring of a Neutron network being
> >     just the same for an external network as for non-external.  Hence,
> >     the model for an L3-only external network should be the same as it
> >     would be for an L3-only tenant network, except for the
> >     router:external flag (and might be along the lines that you've
> >     suggested, of a subnet with a null network).
> >
> >     It still seems that 'router:external true' might be a good match for
> >     some of the other 'routed' semantics in [1], though, so I'd like to
> >     drill down more on exactly what 'router:external true' means.
> >
> >     A summary of the semantics at [1] is:
> >     (a) L3-only connectivity between instances attached to the network.
> >     (b) Data can be routed between this network and the outside, and
> >     between multiple networks of this type, without needing Neutron
> >     routers.
> >     (c) Floating IPs are not supported for instances on this network.
> >     Instead, wherever an instance needs to be routable from, attach it
> >     to a
> >     network with a subnet of IP addresses that are routable from that
> >     place.
> >
> >     [1] https://review.openstack.org/#/c/198439/
> >     <https://review.openstack.org/#/c/198439/>
> >
> >     According to [2], router:external "Indicates whether this network is
> >     externally accessible."  Which I think is an exact match for (b) -
> >     would you agree?  (Note: it can't mean that every instance on that
> >     network actually _is_ contactable from outside, because that depends
> >     on IP addressing, and router:external is a network property, not a
> >     subnet property.  But it can mean that every instance is
> >     _potentially_ contactable, without the mediation of a Neutron
> >     router.)
> >
> >     [2] http://developer.openstack.org/api-ref-networking-v2-ext.html
> >
> >     Also I believe that (c) is already true for Neutron external
> >     networks -
> >     i.e. it doesn't make sense to assign a floating IP to an instance
> that
> >     is directly on an external network.  Is that correct?
> >
> >     In summary, for the semantics that I'm wanting to model, it sounds
> >     like router:external true already gives me 2 of the 3 main pieces.
> >     There's still serious work needed for (a), but that's really nice
> >     news, if I'm seeing things correctly (since discovering that
> >     instances can be attached to an external network).
> >
> >     Regards,
> >         Neil
> >
> >
> >
> >
> >     On 17/09/15 17:29, Kevin Benton wrote:
> >     >
> >     > Yes, the L2 semantics apply to the external network as well (at
> >     least
> >     > with ML2).
> >     >
> >     > One example of the special casing is the external_network_bridge
> >     > option in the L3 agent. That would cause the agent to plug directly
> >     > into a bridge so none of the normal L2 agent wiring would occur.
> >     > With the L2 bridge_mappings option there is no reason for this to
> >     > exist anymore, because its ignoring of network attributes makes
> >     > debugging a nightmare.
> >     >
> >     > >Yes, that makes sense.  Clearly the core semantic there is IP.
> >     I can
> >     > imagine reasonable variation on less core details, e.g. L2
> >     broadcast vs.
> >     > NBMA.  Perhaps it would be acceptable, if use cases need it, for
> >     such
> >     > details to be described by flags on the external network object.
> >     >
> >     > An external network object is just a regular network object with a
> >     > router:external flag set to true. Any changes to it would have
> >     to make
> >     > sense in the context of all networks. That's why I want to make
> sure
> >     > that whatever we come up with makes sense in all contexts and isn't
> >     > just a bolt on corner case.
> >     >
> >     > On Sep 17, 2015 8:21 AM, "Neil Jerram"
> >     <Neil.Jerram at metaswitch.com <mailto:Neil.Jerram at metaswitch.com>
> >     > <mailto:Neil.Jerram at metaswitch.com
> >     <mailto:Neil.Jerram at metaswitch.com>>> wrote:
> >     >
> >     >     Thanks, Kevin.  Some further queries, then:
> >     >
> >     >     On 17/09/15 15:49, Kevin Benton wrote:
> >     >     >
> >     >     > It's not true for all plugins, but an external network should
> >     >     provide
> >     >     > the same semantics of a normal network.
> >     >     >
> >     >     Yes, that makes sense.  Clearly the core semantic there is
> >     IP.  I can
> >     >     imagine reasonable variation on less core details, e.g. L2
> >     >     broadcast vs.
> >     >     NBMA.  Perhaps it would be acceptable, if use cases need it,
> >     for such
> >     >     details to be described by flags on the external network
> object.
> >     >
> >     >     I'm also wondering about what you wrote in the recent thread
> >     >     with Carl about representing a network connected by routers.
> >     >     I think you were arguing that an L3-only network should not be
> >     >     represented by a kind of Neutron network object, because a
> >     >     Neutron network has so many L2 properties/semantics that it
> >     >     just doesn't make sense, and it is better to have a different
> >     >     kind of object for L3-only.  Do those L2 properties/semantics
> >     >     apply to an external network too?
> >     >
> >     >     > The only difference is that it allows router gateway
> >     interfaces
> >     >     to be
> >     >     > attached to it.
> >     >     >
> >     >     Right.  From a networking-calico perspective, I think that
> >     >     means that the implementation should (eventually) support
> >     >     that, and hence allow interconnection between the external
> >     >     network and private Neutron networks.
> >     >
> >     >     > We want to get rid of as much special casing as possible
> >     for the
> >     >     > external network.
> >     >     >
> >     >     I don't understand here.  What 'special casing' do you mean?
> >     >
> >     >     Regards,
> >     >         Neil
> >     >
> >     >     > On Sep 17, 2015 7:02 AM, "Neil Jerram"
> >     >     <Neil.Jerram at metaswitch.com
> >     <mailto:Neil.Jerram at metaswitch.com>
> >     <mailto:Neil.Jerram at metaswitch.com
> >     <mailto:Neil.Jerram at metaswitch.com>>
> >     >     > <mailto:Neil.Jerram at metaswitch.com
> >     <mailto:Neil.Jerram at metaswitch.com>
> >     >     <mailto:Neil.Jerram at metaswitch.com
> >     <mailto:Neil.Jerram at metaswitch.com>>>> wrote:
> >     >     >
> >     >     >     Thanks to the interesting 'default network model'
> >     thread, I
> >     >     now know
> >     >     >     that Neutron allows booting a VM on an external network.
> >     >     :-)  I didn't
> >     >     >     realize that before!
> >     >     >
> >     >     >     So, I'm now wondering what connectivity semantics are
> >     >     expected (or
> >     >     >     even
> >     >     >     specified!) for such VMs, and whether they're the same
> >     as -
> >     >     or very
> >     >     >     similar to - the 'routed' networking semantics I've
> >     >     described at [1].
> >     >     >
> >     >     >     [1]
> >     >     >
> >     >
> >
> https://review.openstack.org/#/c/198439/5/doc/source/devref/routed_networks.rst
> >     >     >
> >     >     >     Specifically I wonder if VM's attached to an external
> >     network
> >     >     >     expect any
> >     >     >     particular L2 characteristics, such as being able to L2
> >     >     broadcast to
> >     >     >     each other?
> >     >     >
> >     >     >     By way of context - i.e. why am I asking this?...   The
> >     >     >     networking-calico project [2] provides an
> >     implementation of the
> >     >     >     'routed'
> >     >     >     semantics at [1], but only if one suspends belief in
> >     some of the
> >     >     >     Neutron
> >     >     >     semantics associated with non-external networks, such as
> >     >     needing a
> >     >     >     virtual router to provide connectivity to the outside
> >     >     world.  (Because
> >     >     >     networking-calico provides that external connectivity
> >     >     without any
> >     >     >     virtual router.)  Therefore we believe that we need to
> >     >     propose some
> >     >     >     enhancement of the Neutron API and data model, so that
> >     >     Neutron can
> >     >     >     describe 'routed' semantics as well as all the
> traditional
> >     >     ones.  But,
> >     >     >     if what we are doing is semantically equivalent to
> >     >     'attaching to an
> >     >     >     external network', perhaps no such enhancement is
> >     needed...
> >     >     >
> >     >     >     [2]
> >     >     https://git.openstack.org/cgit/openstack/networking-calico
> >     >     <https://git.openstack.org/cgit/openstack/networking-calico>
> >     >     >
> >      <https://git.openstack.org/cgit/openstack/networking-calico>
> >     >     >
> >     >     >     Many thanks for any input!
> >     >     >
> >     >     >         Neil
> >     >     >
> >     >     >
> >     >     >
> >     >
> >
> >     >
> >     >
> >     >
> >
> >     >
> >
> >
> >
> >
> >
> >
> >
> > --
> > Kevin Benton
>
>
>
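Semantic (c) from Neil's summary - floating IPs don't make sense for an instance whose port is directly on the external network - can be stated compactly. This is a hypothetical illustration only; the Net class and predicate are invented for the example:

```python
# Hypothetical illustration of semantic (c): a floating IP drawn from an
# external network should not be assigned to a port that is itself directly
# on that same network.
from dataclasses import dataclass


@dataclass(frozen=True)
class Net:
    name: str
    router_external: bool = False


def may_assign_floating_ip(port_net, fip_net):
    # Floating IPs only come from router:external networks, and only make
    # sense for ports that are not already on that network.
    return fip_net.router_external and port_net != fip_net
```

As Kevin notes, layered setups (a floating IP from one external network assigned to a port on a different external network) are still conceivable, which this predicate permits.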
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/c0f0274d/attachment-0001.html>

From blak111 at gmail.com  Tue Sep 22 08:49:10 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Tue, 22 Sep 2015 04:49:10 -0400
Subject: [openstack-dev] [neutron] What semantics are expected when
 booting a VM on an external network?
In-Reply-To: <CALiLy7q-THFf5WVBsLuLPkMLfg61uRR0U9TjVemi7y6G3Z2yEw@mail.gmail.com>
References: <SN1PR02MB16952CE4CC50D6D2F5E1BB83995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JNLrM4BMCUNZ+XcePjSkk2OAQWJiuH5UU4njYi9+aaZRg@mail.gmail.com>
 <SN1PR02MB169592E08620777E123F6D34995A0@SN1PR02MB1695.namprd02.prod.outlook.com>
 <CAO_F6JMKtRD6G60=NGMGi-mAndy=yThRdzCgX9iF8R6BQk3cAw@mail.gmail.com>
 <CALiLy7q-THFf5WVBsLuLPkMLfg61uRR0U9TjVemi7y6G3Z2yEw@mail.gmail.com>
Message-ID: <CAO_F6JOxhuX1e9w5kDcKK_pEJ8kThdQJnihAarFEwyGa6tePmA@mail.gmail.com>

Great. I'll try to review as soon as possible since these will be big
changes.
On Sep 17, 2015 12:44 PM, "Carl Baldwin" <carl at ecbaldwin.net> wrote:

> On Thu, Sep 17, 2015 at 10:26 AM, Kevin Benton <blak111 at gmail.com> wrote:
> > Yes, the L2 semantics apply to the external network as well (at least
> > with ML2).
>
> This is true and should remain so.  I think we've come to the
> agreement that a neutron Network, external, shared, or not, should be
> an L2 broadcast domain and have these semantics uniformly.
>
> > One example of the special casing is the external_network_bridge option
> > in the L3 agent. That would cause the agent to plug directly into a
> > bridge so none of the normal L2 agent wiring would occur. With the L2
> > bridge_mappings option there is no reason for this to exist anymore,
> > because its ignoring of network attributes makes debugging a nightmare.
>
> +1
>
> >>Yes, that makes sense.  Clearly the core semantic there is IP.  I can
> > imagine reasonable variation on less core details, e.g. L2 broadcast vs.
> > NBMA.  Perhaps it would be acceptable, if use cases need it, for such
> > details to be described by flags on the external network object.
> >
> > An external network object is just a regular network object with a
> > router:external flag set to true. Any changes to it would have to make
> > sense in the context of all networks. That's why I want to make sure that
> > whatever we come up with makes sense in all contexts and isn't just a
> > bolt-on corner case.
>
> I have been working on a proposal around adding better L3 modeling to
> Neutron.  I will have something up by the end of this week.  As a
> preview, my current thinking is that we should add a new object to
> represent an L3 domain.  I will propose that floating ips move to
> reference this object instead of a network.  I'm also thinking that a
> router's external gateway will reference an instance of this new
> object instead of a Network object.  To cover current use cases, a
> Network would own one of these new instances to define the subnets
> that live on the network.  I think we'll also have the flexibility to
> create an L3 only domain or one that spans a group of L2 networks like
> what is being requested by operators [1].
>
> We can also discuss in the context of this proposal how a Port may be
> able to associate with L3-only.  As you know, ports need to provide
> certain L2 services to VMs in order for them to operate.  But, does
> this mean they need to associate to a neutron Network directly?  I
> don't know yet but I tend to think that the model could support this
> as long as VM ports have a driver like Calico behind them to support
> the VMs' dependence on DHCP and ARP.
>
> This is all going to take a fair amount of work.  I'm hoping a good
> chunk of it will fit in the Mitaka cycle.
>
> Carl
>
> [1] https://bugs.launchpad.net/neutron/+bug/1458890
>
>
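A very rough sketch of the model direction Carl describes, with floating IPs and router gateways referencing a new L3 object instead of a Network. All class and field names are hypothetical; the actual spec was still unwritten at this point:

```python
# Hypothetical sketch of the proposed L3 modeling: floating IPs and router
# external gateways reference an L3 domain rather than a Network directly.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class L3Domain:
    subnets: List[str] = field(default_factory=list)


@dataclass
class Network:
    """Still an L2 broadcast domain; for current use cases it owns one
    L3 domain that defines the subnets living on the network."""
    l3_domain: L3Domain = field(default_factory=L3Domain)


@dataclass
class FloatingIP:
    domain: L3Domain  # proposed: reference the L3 domain, not a Network


@dataclass
class Router:
    external_gateway: Optional[L3Domain] = None  # likewise
```

The point of the indirection is that an L3Domain could also exist without any owning Network (L3-only), or span several L2 networks, covering the operator request in [1].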
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/30ecfb9e/attachment.html>

From andrea.rosa at hpe.com  Tue Sep 22 08:53:36 2015
From: andrea.rosa at hpe.com (Rosa, Andrea (HP Cloud Services))
Date: Tue, 22 Sep 2015 08:53:36 +0000
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
Message-ID: <5F389C517756284F80239F55BCA5DDDD74CAC368@G4W3298.americas.hpqcorp.net>

Hi all,

> Please respond to this post if you have an interest in this and what you would like to see done. Include anything you are already getting on with so we get a clear picture. 

I have put up a new spec about "allow more instance actions during the live
migration" [0].
Please note that this is a follow-up to a spec proposed for Liberty [1]. I
put up a new patch because the author of the original spec is not going to
work on it anymore.

Regards
--
Andrea Rosa

[0] https://review.openstack.org/226199
[1] https://review.openstack.org/179346


From rvasilets at mirantis.com  Tue Sep 22 09:25:26 2015
From: rvasilets at mirantis.com (Roman Vasilets)
Date: Tue, 22 Sep 2015 12:25:26 +0300
Subject: [openstack-dev] [openstack-operators][tc][tags] Rally tags
In-Reply-To: <CAD85om0fdf5ZE_gBCFhVeuRLZrJpKPshDPfVZBSm27E0nc2u6Q@mail.gmail.com>
References: <CAD85om0e5Fwc08xcmce1h-BC0i9i6AyZiUP6-6__J5qitg9Yzg@mail.gmail.com>
 <55FFBC80.7030805@openstack.org>
 <CAD85om0fdf5ZE_gBCFhVeuRLZrJpKPshDPfVZBSm27E0nc2u6Q@mail.gmail.com>
Message-ID: <CABmajVVd3YNfunD-T6FpLHVGGwT=JK0Rw-tsWCBAy-XBTQ1qQQ@mail.gmail.com>

For me, the two ideal candidates for tags are "works-in-rally" and
"covered-by-rally". The others sound ugly, but these two are meaningful and
well-chosen names.


On Mon, Sep 21, 2015 at 11:13 PM, Boris Pavlovic <boris at pavlovic.me> wrote:

> Thierry,
>
> Okay great I will propose patches.
>
> Best regards,
> Boris Pavlovic
>
> On Mon, Sep 21, 2015 at 1:14 AM, Thierry Carrez <thierry at openstack.org>
> wrote:
>
>> Boris Pavlovic wrote:
>> > I have few ideas about the rally tags:
>> >
>> > - covered-by-rally
>> >    It means that there are official (inside the rally repo) plugins for
>> > testing of a particular project
>> >
>> > - has-rally-gates
>> >    It means that Rally is run against every patch proposed to the
>> project
>> >
>> > - certified-by-rally [wip]
>> >    As well we are starting working on certification
>> > task: https://review.openstack.org/#/c/225176/5
>> >    which will be the standard way to check whether a cloud is ready for
>> > production based on volume, performance & scale testing.
>> >
>> > Thoughts?
>>
>> Hi Boris,
>>
>> The "next-tags" workgroup at the Technical Committee came up with a
>> number of families where I think your proposed tags could fit:
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070651.html
>>
>> The "integration" family of tags defines cross-project support. We want
>> to have tags that say that a specific service has a horizon dashboard
>> plugin, or a devstack integration, or heat templates... So I would say
>> that the "covered-by-rally" tag could be part of that family
>> ('integration:rally' maybe ?). We haven't defined our first tag in that
>> family yet: sdague was working on the devstack ones[1] as a template for
>> the family but that effort stalled a bit:
>>
>> https://review.openstack.org/#/c/203785/
>>
>> As far as the 'has-rally-gates' tag goes, that would be part of the 'QA'
>> family ("qa:has-rally-gates" for example).
>>
>> So I think those totally make sense as upstream-maintained tags and are
>> perfectly aligned with the families we already had in mind but haven't
>> had time to push yet. Feel free to propose those tags to the governance
>> repository. An example of such submission lives at:
>>
>> https://review.openstack.org/#/c/207467/
>>
>> The 'certified-by-rally' tag is a bit farther away I think (less
>> objective and needs your certification program to be set up first). You
>> should start with the other two.
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/2db63a17/attachment.html>

From apevec at gmail.com  Tue Sep 22 09:59:44 2015
From: apevec at gmail.com (Alan Pevec)
Date: Tue, 22 Sep 2015 11:59:44 +0200
Subject: [openstack-dev] Patches coming for .coveragerc
In-Reply-To: <56001064.3040909@inaugust.com>
References: <56001064.3040909@inaugust.com>
Message-ID: <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>

2015-09-21 16:12 GMT+02:00 Monty Taylor <mordred at inaugust.com>:
> We're running a script right now to submit a change to every project with
> this change. The topic will be coverage-v4

stable/kilo has an uncapped coverage>=3.6; do we patch-spam it or cap coverage?
stable/juno has coverage>=3.6,<=3.7.1

Cheers,
Alan


From sean at dague.net  Tue Sep 22 11:00:02 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 22 Sep 2015 07:00:02 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <55FC584E.80805@nemebean.com>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
Message-ID: <560134B2.300@dague.net>

On 09/18/2015 02:30 PM, Ben Nemec wrote:
> I've been dealing with this issue lately myself, so here's my two cents:
> 
> It seems to me that solving this at the service level is actually kind
> of wrong.  As you've discovered, that requires changes in a bunch of
> different places to address what is really an external issue.  Since
> it's the terminating proxy that is converting HTTPS traffic to HTTP that
> feels like the right place for a fix IMHO.
> 
> My solution has been to have the proxy (HAProxy in my case) rewrite the
> Location header on redirects (one example for the TripleO puppet config
> here: https://review.openstack.org/#/c/223330/1/manifests/loadbalancer.pp).
> 
> I'm not absolutely opposed to having a way to make the services aware of
> external SSL termination to allow use of a proxy that can't do header
> rewriting, but I think proxy configuration should be the preferred way
> to handle it.

My feeling on this one is that we've got this thing in OpenStack... the
Service Catalog. It definitively tells the world what the service
addresses are.

We should use that in the services themselves to reflect back their
canonical addresses. Doing point-solution rewriting of URLs seems odd
when we could just have Nova/Cinder/etc. return documents with URLs that
match what's in the service catalog for that service.
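For context, the "make the services aware of external SSL termination" option Ben mentions usually boils down to honouring the de facto X-Forwarded-Proto header set by the terminating proxy. A minimal, purely illustrative WSGI sketch (not code from any actual OpenStack service):

```python
def https_aware(app):
    """WSGI middleware: if the terminating proxy says the original
    request was HTTPS, fix wsgi.url_scheme so any absolute URLs the
    service builds (e.g. Location redirects) use https://."""
    def middleware(environ, start_response):
        if environ.get("HTTP_X_FORWARDED_PROTO") == "https":
            environ["wsgi.url_scheme"] = "https"
        return app(environ, start_response)
    return middleware

def redirecting_app(environ, start_response):
    # Stand-in for a service that builds an absolute redirect URL.
    url = "%s://%s/v2/" % (environ["wsgi.url_scheme"], environ["HTTP_HOST"])
    start_response("302 Found", [("Location", url)])
    return [b""]

app = https_aware(redirecting_app)

captured = {}
def start_response(status, headers):
    captured["status"], captured["headers"] = status, dict(headers)

environ = {"wsgi.url_scheme": "http",          # what the service sees (plain HTTP)
           "HTTP_HOST": "api.example.com",
           "HTTP_X_FORWARDED_PROTO": "https"}  # what the proxy terminated
app(environ, start_response)
print(captured["headers"]["Location"])  # https://api.example.com/v2/
```

With something like this in place the proxy no longer needs to rewrite Location headers, at the cost of trusting X-Forwarded-Proto (so the proxy must strip that header from client requests).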

	-Sean

-- 
Sean Dague
http://dague.net


From tony.a.wang at alcatel-lucent.com  Tue Sep 22 11:55:44 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Tue, 22 Sep 2015 11:55:44 +0000
Subject: [openstack-dev] Does neutron ovn plugin support to setup multiple
 neutron networks for one container?
In-Reply-To: <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78C9FE@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Dear all,

Regarding the neutron OVN plugin's support for containers in one VM: my understanding is that one container can't be assigned two network interfaces in different neutron networks. Is that right?
The reasons:
1. The host VM only has one network interface.
2. All the VLAN tags are stripped out when a packet leaves the VM.

If this is true, does the neutron OVN plugin or OVN plan to support it?

Thanks,
Tony


From jesse.pretorius at gmail.com  Tue Sep 22 12:04:06 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Tue, 22 Sep 2015 13:04:06 +0100
Subject: [openstack-dev] [openstack-ansible] To NTP, or not to NTP,
 that is the question
In-Reply-To: <5600A48F.60507@mhtx.net>
References: <55FC0B8A.4060303@mhtx.net>
 <CAGSrQvzERPpAGb0e6NxOZZSSz-K9EKa1tRsPC=oNqWTD5DW2Xg@mail.gmail.com>
 <CA+HkNVt=td0=3sEnCZ34VtkYKpQDjLcxyF8COssjg4UG8h_0mg@mail.gmail.com>
 <5600A48F.60507@mhtx.net>
Message-ID: <CAGSrQvyX4Kiz0p6N464PPjH+etUJn4T8uickNnjs88oUgUNeXg@mail.gmail.com>

On 22 September 2015 at 01:45, Major Hayden <major at mhtx.net> wrote:

> On 09/21/2015 07:14 PM, Sergii Golovatiuk wrote:
> > Is there any chance of configuring chrony instead of ntpd? It acts more
> > predictably in virtual environments.
>
> That's my plan, if I can find an upstream Ansible galaxy role to use. ;)
>

Now that we have the spec for independent role repositories approved [1],
an option is for us to register a role which implements chrony as the
network time mechanism if there isn't a suitable one already in Ansible
Galaxy. This role can be an optional add-on to the supported use-cases in
OpenStack-Ansible and can also be registered in Ansible Galaxy once it's
ready. :)

[1] https://review.openstack.org/213779
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/c90e3313/attachment.html>

From tony.a.wang at alcatel-lucent.com  Tue Sep 22 12:08:47 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Tue, 22 Sep 2015 12:08:47 +0000
Subject: [openstack-dev] [neutron] Does neutron ovn plugin support to setup
 multiple neutron networks for one container?
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com> 
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Dear all,

Regarding the neutron OVN plugin's support for containers in one VM: my understanding is that one container can't be assigned two network interfaces in different neutron networks. Is that right?
The reasons:
1. The host VM only has one network interface.
2. All the VLAN tags are stripped out when a packet leaves the VM.

If this is true, does the neutron OVN plugin or OVN plan to support it?

Thanks,
Tony


From Neil.Jerram at metaswitch.com  Tue Sep 22 12:19:47 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Tue, 22 Sep 2015 12:19:47 +0000
Subject: [openstack-dev] [release][all]Release help needed - we
	are	incompatible with ourselves
References: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
Message-ID: <SN1PR02MB16955002E7E9726025285C2E99450@SN1PR02MB1695.namprd02.prod.outlook.com>

To most people who write on this list about requirements, constraints
and gate deadlocks, not just Robert...

FWIW - this email is tagged [all], and asks for help, but I don't
understand it.  Even though I've been fairly heavily engaged with
OpenStack for more than a cycle now.

Obviously we can't avoid having lots of project-specific terminology and
jargon in various areas of OpenStack.  When those areas are
domain-specific, say Cinder or Sahara, that doesn't bother me.  But this
thread (and others like it) seems more about some of OpenStack's
vertical integration, that I ought to understand.  But the jargon and
imprecision of the language and description used are making it
impossible for me to start understanding.

So, please, if you really mean [all], say what you're saying in terms
that everyone can understand, and not just the (apparently, and
unfortunately) small group of people who already understand all this in
detail.

To emphasize again, I really don't mean to target this email
specifically - but it is a good example of lots of emails that (AFAICT)
are about this sort of thing.

Regards,
    Neil


On 21/09/15 20:53, Robert Collins wrote:
>Constraint updates are still failing: you can see this on
>https://review.openstack.org/#/c/221157/ or more generally
>https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:openstack/requirements/constraints,n,z
>
>Now, the constraints system is *doing its job* - it's made the presence
>of an incompatible thing not-a-firedrill. However, we need to do our
>part of the job too: we need to fix the incompatibility that exists so
>that we can roll forward and start using the new releases that are
>being made.
>
>Right now the release team is picking individual components and
>proposing them as merges to move things forward, but it's fairly
>fundamentally unsafe to cut the full liberty release while there is a
>known incompatibility bug out there.
>
>So - I'm manually ringing the fire-drill alarm now: we need to get
>this fixed so that the released liberty is actually compatible with
>the entire ecosystem at time of release.
>
>What issues are there?
>
>Firstly,
>2015-09-21 06:24:00.911 | + openstack --os-token
>3dc712d5120b436ebb7d554405b7c15f --os-url http://127.0.0.1:9292 image
>create cirros-0.3.4-x86_64-uec --public --container-format ami
>--disk-format ami
>2015-09-21 06:24:01.396 | openstack: 'image' is not an openstack
>command. See 'openstack --help'.
>
>(See the dsvm run from review 221157 -
>http://logs.openstack.org/57/221157/12/check/gate-tempest-dsvm-full/17941bd/logs/devstacklog.txt.gz#_2015-09-21_06_24_00_911
>)
>
>Secondly, it's likely that once that's fixed there will be more things to unwind.
>
>What will help most is if a few folk familiar with devstack can pull
>down review 221157 and do a binary search on the changes in it to
>determine which ones are safe and which ones trigger the breakage:
>then we can at least land all the safe ones at once and zero in on the
>incompatibility - and get it addressed.
>
>To repeat: this is effectively a release blocker IMO, and the release
>is happening - well, $now.
>
>-Rob
>

From caboucha at cisco.com  Tue Sep 22 12:27:04 2015
From: caboucha at cisco.com (Carol Bouchard (caboucha))
Date: Tue, 22 Sep 2015 12:27:04 +0000
Subject: [openstack-dev] [neutron] Neutron debugging tool
In-Reply-To: <CAP0B2WPHNGpPCY07d9abeuYwBZqk3XUcCK7wkfbNhSGGTV+k0A@mail.gmail.com>
References: <CAP0B2WMOFLMdH7t9hNc0uw8xqNcUvvrLBZa7D4hHU06c2bLMuA@mail.gmail.com>
 <D226C7C9.7CEB9%ganeshna@cisco.com>
 <CAP0B2WPHNGpPCY07d9abeuYwBZqk3XUcCK7wkfbNhSGGTV+k0A@mail.gmail.com>
Message-ID: <d4edb48e00784aae809c732c6cb33c04@XCH-ALN-005.cisco.com>

There was a presentation on DON at the Vancouver summit.  Here is the link:

https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/don-diagnosing-ovs-in-neutron

From: Salvatore Orlando [mailto:salv.orlando at gmail.com]
Sent: Tuesday, September 22, 2015 3:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron debugging tool

Thanks Ganesh!

I did not know about this tool.
I also quite like the network visualization bits, though I wonder how practical that would be when one debugs very large deployments.

I think it wouldn't be a bad idea to list these tools in the networking guide, in neutron's devref, or both.

Salvatore

On 22 September 2015 at 04:25, Ganesh Narayanan (ganeshna) <ganeshna at cisco.com<mailto:ganeshna at cisco.com>> wrote:
Another project for diagnosing OVS in Neutron:

https://github.com/CiscoSystems/don

Thanks,
Ganesh

From: Salvatore Orlando <salv.orlando at gmail.com<mailto:salv.orlando at gmail.com>>
Reply-To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, 21 September 2015 2:55 pm
To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron] Neutron debugging tool

It sounds like easyOVS indeed covers what you're aiming at.
However, from what I gather there is still plenty to do in easyOVS, so perhaps rather than starting a new toolset from scratch you might build on the existing one.

Personally I'd welcome its adoption into the Neutron stadium, as debugging control plane/data plane issues in the neutron reference implementation is becoming difficult even for expert users and developers.
I'd just suggest renaming it, because calling it "OVS" is just plain wrong. The neutron reference implementation and OVS are two distinct things.

As concerns neutron-debug, this is a tool that was developed in the early stages of the project to verify connectivity using "probes" in namespaces. These probes are simply tap interfaces associated with neutron ports. The neutron-debug tool is still used in some devstack exercises. Nevertheless, I'd rather keep building something like easyOVS, and then deprecate neutron-debug rather than develop it further.

Salvatore
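
As a tiny illustration of the kind of check Nodir lists in the quoted message (item 2, verifying that the expected qrouter namespaces exist on a host), assuming one already has the output of `ip netns list` from the node; the helper name here is made up:

```python
def missing_qrouter_namespaces(netns_output, router_ids):
    """Given the text output of `ip netns list` from a network node,
    return the router IDs whose qrouter namespace is absent.
    Handles the '(id: N)' suffix newer iproute2 versions print."""
    present = {line.split()[0] for line in netns_output.splitlines()
               if line.strip()}
    return [r for r in router_ids if "qrouter-%s" % r not in present]

sample = "qrouter-abc (id: 0)\nqdhcp-net1 (id: 1)\n"
print(missing_qrouter_namespaces(sample, ["abc", "def"]))  # ['def']
```

A real tool would of course fetch the namespace list over ssh and also probe inside the namespaces, as Nodir describes.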


On 21 September 2015 at 02:40, Li Ma <skywalker.nick at gmail.com<mailto:skywalker.nick at gmail.com>> wrote:
AFAIK, there is a project available in the github that does the same thing.
https://github.com/yeasy/easyOVS

I used it before.

On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodirov at gmail.com<mailto:nodir.qodirov at gmail.com>> wrote:
> Hello,
>
> I am planning to develop a tool for network debugging. Initially, it
> will handle the DVR case, and can also be extended to others. Based
> on my OpenStack deployment/operations experience, I am planning to
> handle common pitfalls/misconfigurations, such as:
> 1) check external gateway validity
> 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> compute/network hosts
> 3) execute probing commands inside namespaces, to verify reachability
> 4) etc.
>
> I came across neutron-debug [1], which mostly focuses on namespace
> debugging. Its coverage is limited to OpenStack, while I am planning
> to cover compute/network nodes as well. In my experience, I had to ssh
> to the host(s) to accurately diagnose the failure (e.g., 1, 2 cases
> above). The tool I am considering will handle these, given the host
> credentials.
>
> I'd like to get the community's feedback on the utility of such a debugging
> tool. Do people use neutron-debug on their OpenStack environment? Does the
> tool I am planning to develop, with complete diagnosis coverage, sound
> useful? Is anyone interested in joining the development? All feedback is
> welcome.
>
> Thanks,
>
> - Nodir
>
> [1] http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Li Ma (Nick)
Email: skywalker.nick at gmail.com<mailto:skywalker.nick at gmail.com>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/9ff5aa91/attachment.html>

From davanum at gmail.com  Tue Sep 22 12:42:48 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 22 Sep 2015 08:42:48 -0400
Subject: [openstack-dev] [release][all]Release help needed - we are
 incompatible with ourselves
In-Reply-To: <SN1PR02MB16955002E7E9726025285C2E99450@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
 <SN1PR02MB16955002E7E9726025285C2E99450@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CANw6fcHXkORgBkEaKz=tFGikYETcQMxGRdOmnrK9OT8RP9R9sA@mail.gmail.com>

Neil,

The effort is to spread the knowledge and skills beyond the "small group of
people who already understand all this" :)

Let me explain a bit, working backwards from what we see.

Most of us have seen the Proposal bot update reviews that get logged
periodically that updates the requirements.txt and test-requirements.txt in
all projects. Now the proposal bot (actually just a jenkins job not a bot)
picks these up from global requirements repo:
http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt

Now who updates that repo? It's a bunch of folks who keep an eye on the
different Python libraries out in the ecosystem, or our own libraries, and
figure out if any new version is out or if an old version has problems. So
remember that global-requirements and the project requirements are ranges,
for example:

jsonschema>=2.0.0,<3.0.0,!=2.5.0


The next problem is how we figure out a basic set of pinned
requirements (not ranges) so all jobs pick up the same specific,
hopefully latest, set of libraries. That is maintained in

http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt


So who figures out the latest versions etc. and updates
upper-constraints.txt? It's again the same job/bot. For example:

https://review.openstack.org/#/c/226182/


Some of this is documented here:

http://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html


In this specific review that lifeless is pointing out, the bot
calculated a set of specific versions and logged a review for a
specific set of libraries and they failed to work with each other and
the ask was to figure out why they are failing and suggest fixes where
necessary.
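
To make the ranges-vs-pins distinction concrete, here is a toy checker (naive x.y.z parsing only; real tooling uses pip/pbr, and the function names here are made up for illustration):

```python
import re

def parse(v):
    # "2.5.0" -> (2, 5, 0); naive, ignores pre-release/dev tags
    return tuple(int(p) for p in v.split("."))

OPS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "!=": lambda a, b: a != b,
    "==": lambda a, b: a == b,
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
}

def satisfies(version, range_spec):
    """True if `version` falls inside a global-requirements style
    range such as '>=2.0.0,<3.0.0,!=2.5.0' (requirement name stripped)."""
    for clause in range_spec.split(","):
        m = re.match(r"(>=|<=|!=|==|>|<)(.+)", clause.strip())
        op, bound = m.group(1), m.group(2)
        if not OPS[op](parse(version), parse(bound)):
            return False
    return True

# A range admits many versions; an upper-constraints pin admits exactly one.
print(satisfies("2.4.0", ">=2.0.0,<3.0.0,!=2.5.0"))  # True
print(satisfies("2.5.0", ">=2.0.0,<3.0.0,!=2.5.0"))  # False (excluded)
print(satisfies("2.4.0", "==2.4.0"))                 # True (a pin)
```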


Hope that helps?


Thanks,

Dims


PS: For that specific problem from lifeless, mordred already filed
reviews with os-client-config and python-openstackclient so we are
good.





On Tue, Sep 22, 2015 at 8:19 AM, Neil Jerram <Neil.Jerram at metaswitch.com>
wrote:

> To most people who write on this list about requirements, constraints
> and gate deadlocks, not just Robert...
>
> FWIW - this email is tagged [all], and asks for help, but I don't
> understand it.  Even though I've been fairly heavily engaged with
> OpenStack for more than a cycle now.
>
> Obviously we can't avoid having lots of project-specific terminology and
> jargon in various areas of OpenStack.  When those areas are
> domain-specific, say Cinder or Sahara, that doesn't bother me.  But this
> thread (and others like it) seems more about some of OpenStack's
> vertical integration, that I ought to understand.  But the jargon and
> imprecision of the language and description used are making it
> impossible for me to start understanding.
>
> So, please, if you really mean [all], say what you're saying in terms
> that everyone can understand, and not just the (apparently, and
> unfortunately) small group of people who already understand all this in
> detail.
>
> To emphasize again, I really don't mean to target this email
> specifically - but it is a good example of lots of emails that (AFAICT)
> are about this sort of thing.
>
> Regards,
>     Neil
>
>
> On 21/09/15 20:53, Robert Collins wrote:
> >Constraint updates are still failing: you can see this on
> >https://review.openstack.org/#/c/221157/ or more generally
> >https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:openstack/requirements/constraints,n,z
> >
> >Now, the constraints system is *doing its job* - it's made the presence
> >of an incompatible thing not-a-firedrill. However, we need to do our
> >part of the job too: we need to fix the incompatibility that exists so
> >that we can roll forward and start using the new releases that are
> >being made.
> >
> >Right now the release team is picking individual components and
> >proposing them as merges to move things forward, but it's fairly
> >fundamentally unsafe to cut the full liberty release while there is a
> >known incompatibility bug out there.
> >
> >So - I'm manually ringing the fire-drill alarm now: we need to get
> >this fixed so that the released liberty is actually compatible with
> >the entire ecosystem at time of release.
> >
> >What issues are there?
> >
> >Firstly,
> >2015-09-21 06:24:00.911 | + openstack --os-token
> >3dc712d5120b436ebb7d554405b7c15f --os-url http://127.0.0.1:9292 image
> >create cirros-0.3.4-x86_64-uec --public --container-format ami
> >--disk-format ami
> >2015-09-21 06:24:01.396 | openstack: 'image' is not an openstack
> >command. See 'openstack --help'.
> >
> >(See the dsvm run from review 221157 -
> >http://logs.openstack.org/57/221157/12/check/gate-tempest-dsvm-full/17941bd/logs/devstacklog.txt.gz#_2015-09-21_06_24_00_911
> >)
> >
> >Secondly, it's likely that once that's fixed there will be more things to
> >unwind.
> >
> >What will help most is if a few folk familiar with devstack can pull
> >down review 221157 and do a binary search on the changes in it to
> >determine which ones are safe and which ones trigger the breakage:
> >then we can at least land all the safe ones at once and zero in on the
> >incompatibility - and get it addressed.
> >
> >To repeat: this is effectively a release blocker IMO, and the release
> >is happening - well, $now.
> >
> >-Rob
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/a1d78368/attachment.html>

From doug at doughellmann.com  Tue Sep 22 12:43:37 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 22 Sep 2015 08:43:37 -0400
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
Message-ID: <1442924599-sup-9078@lrrr.local>

Excerpts from Robert Collins's message of 2015-09-22 15:16:26 +1200:
> Currently we don't provide specific guidance on what should happen
> when the only changes in a project are dependency changes and a
> release is made.
> 
> The releases team has been treating a dependency change as 'feature'
> rather than 'bugfix' in semver modelling - so if the last release was
> 1.2.3, and a requirements sync happens to fix a bug (e.g. a too-low
> minimum dependency), then the next release would be 1.3.0.
> 
> Reasoning about this can be a little complex to do on the fly, so I'd
> like to put together some concrete guidance - which essentially means
> being able to provide a heuristic to answer the questions:
> 
> 'Is this requirements change an API break' or 'is this requirements
> change feature work' or 'is this requirements change a bugfix'.

Also, for our projects, "is this requirements change being triggered by
a need in some other project that also syncs with the g-r list"?

> 
> It seems clear to me that all three can be true. For example, consider
> if library X exposes library Y as part of its API, and library Y's
> dependency changes from
> Y>=1
> to
> Y>=2
> 
> then that's happening due to an API break - e.g. Y has removed some old
> backwards compatibility cruft - X won't break or need changing, and
> it's possible that none of X's callers will need to change either. But
> some of them might have been using some of the things that went away in
> Y==2, and so will break. So it's an API break in X. But why would X do
> that - surely it's doing its own API break? Well no: let's say X is
> adding a feature that was only added in Y==2; then setting the minimum
> to 2 is necessary, and entirely unrelated to the fact that an API
> break is involved.
> 
> So the sequence there would be something like:
> update X's requirements to Y >= 2
> use new feature from Y >= 2 [ this is a 'feature' patch, not an api-break].
> release X, and it should be a new major version.
> 
> Now, if Y is not exposed, a change in Y's dependencies for X clearly
> has nothing to do with X's version... but users of X that
> independently use Y will still be impacted, since upgrading X will
> upgrade their Y [ignoring the intricacies surrounding pip here :)].
> 
> So, one answer we can use is "The version impact of a requirements
> change is never less than the largest version change in the change."
> That is:
> nothing -> a requirement -> major version change
> 1.x.y -> 2.0.0 -> major version change
> 1.2.y -> 1.3.0 -> minor version change
> 1.2.3 -> 1.2.4 -> patch version change
> 
> We could calculate the needed change programmatically for this
> approach in the requirements syncing process.

We also have to consider that we can't assume that the dependency
is using semver itself, so we might not be able to tell from the
outside whether the API is in fact breaking. So, we would need something
other than the version number to make that determination.

> 
> Another approach would be to say that only explicitly exposed
> interfaces matter, but I think this is a disservice to our consumers.
> 
> A third approach would be to pick minor versions always as the
> evolving process in the releases team does, but because requirements
> changes *can* be API breaks to users of components, I think that that
> is too conservative.
> 
> A fourth one would be to pick patch level for every change, but that
> too is too conservative for exactly the same reasons.
> 
> -Rob
> 

I've been encouraging the application of a simple rule precisely
because this problem is so complicated. The 4 reasons for updates
can get lost over time between a requirements update landing and a
release being created, especially with automatic updates mixing
with updates a project actually cares about.  We aren't yet correctly
identifying our own API breaking changes and differentiating between
features and bug fixes in all cases, so until we're better at that
analysis I would rather continue over-simplifying the analysis of
requirements updates.

Doug
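
For illustration only, Robert's "never less than the largest version change in the change" heuristic could be computed mechanically along these lines (a hypothetical sketch assuming plain x.y.z versions; not an actual releases-team tool):

```python
def bump_level(old, new):
    """Classify the change between two x.y.z versions as
    'major', 'minor', 'patch', or None for no change (naive parsing)."""
    o, n = (tuple(int(p) for p in v.split(".")) for v in (old, new))
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    if n[2:] != o[2:]:
        return "patch"
    return None

def requirement_impact(changes):
    """Robert's rule: the version impact of a requirements change is
    never less than the largest version change it contains.
    `changes` is a list of (old, new) pairs; (None, new) models a
    newly added requirement, which counts as major."""
    rank = {"major": 3, "minor": 2, "patch": 1, None: 0}
    worst = None
    for old, new in changes:
        level = "major" if old is None else bump_level(old, new)
        if rank[level] > rank[worst]:
            worst = level
    return worst

# A sync that bumps one dep 1.2.0 -> 1.3.0 and another 1.2.3 -> 1.2.4
# would make the next release of the consuming project a minor release.
print(requirement_impact([("1.2.0", "1.3.0"), ("1.2.3", "1.2.4")]))  # minor
```

As Doug notes, this assumes the dependencies themselves follow semver, which is exactly the assumption one cannot make from the outside.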


From dbelova at mirantis.com  Tue Sep 22 12:57:19 2015
From: dbelova at mirantis.com (Dina Belova)
Date: Tue, 22 Sep 2015 15:57:19 +0300
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
	informal working group suggestion
Message-ID: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>

Hey, OpenStackers!

I'm writing to propose organising a new informal team to work specifically
on OpenStack performance issues. This will be a subteam of the already
existing Large Deployments Team, and I think it will be a good idea to
gather people interested in OpenStack performance in one room, identify
which issues are worrying contributors, see what can be done, and share the
results of performance research :)

So please volunteer to take part in this initiative. I hope many people
will be interested, and we'll be able to use the cross-project session slot
<http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and hold a
kick-off meeting.

I would like to apologise for writing to two mailing lists at the same
time, but I want to make sure that all potentially interested people will
notice the email.

Thanks and see you in Tokyo :)

Cheers,
Dina

-- 

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/c07035cb/attachment.html>

From thingee at gmail.com  Tue Sep 22 13:16:23 2015
From: thingee at gmail.com (Mike Perez)
Date: Tue, 22 Sep 2015 06:16:23 -0700
Subject: [openstack-dev] Cross-Project meeting, Tue Sept 22nd, 21:00 UTC
Message-ID: <20150922131623.GA3328@gmail.com>

Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the following
agenda:

* Review past action items
* Team announcements (horizontal, vertical, diagonal)
* Open discussion

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and have
something to communicate to the other teams, feel free to abuse the relevant
sections of that meeting and make sure it gets #info-ed by the meetbot in the
meeting summary.

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Mike Perez


From sathlang at redhat.com  Tue Sep 22 13:27:20 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Tue, 22 Sep 2015 15:27:20 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <55FC49EA.7080704@redhat.com> (Rich Megginson's message of "Fri, 
 18 Sep 2015 11:29:14 -0600")
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com>
 <CAGnj6atXbpuzpNR6aF63cZ26WE-cbwUGozb9bvdxtaUaA7B1Ow@mail.gmail.com>
 <87oah44qtx.fsf@s390.unix4.net>
 <CAGnj6ave7EFDQkaFmZWDVTLOE0DQgkTksqh2QLqJe0aGkCXBpQ@mail.gmail.com>
 <55F9D7F6.2000604@puppetlabs.com> <55FC49EA.7080704@redhat.com>
Message-ID: <87vbb28tzb.fsf@s390.unix4.net>

Hi Rich,

I've got the hang of it.

It boils down to this: for an attribute of a puppet resource to be taken
into account as part of the resource's key, it has to be a parameter, not
a property:

For the code to work you change

  newproperty(:domain) do

to

  newparam(:domain) do

As a side note, the test you set up won't cut it, as self.title_patterns
is *not* called when using the :pre_condition rspec trick.  I provide
sample tests in the next section.

TLDR follows.

To be able to have a resource whose key is spread across multiple
parameters you have to:
 1. add isnamevar to each *parameter* defined with 'newparam' in the
 type (not 'newproperty');
 2. create the self.title_patterns

In this case this is enough (the identity function is applied by default
if no proc is given):

  def self.title_patterns
    [
      [
        /^(.+)::(.+)$/,
        [
          [:name],
          [:domain]
        ]
      ],
      [
        /^(.+)$/,
        [
          [:name]
        ]
      ]
    ]
 end

To test this thing you can do it in the associated type test like
(spec/unit/type/keystone_tenant.rb):

  r = []
  r << Puppet::Type.type(:keystone_tenant).new(:title => 'one', :name => 'foo', :domain => 'one');
  r << Puppet::Type.type(:keystone_tenant).new(:title => 'two', :name => 'foo', :domain => 'two');
  r << Puppet::Type.type(:keystone_tenant).new(:title => 'foo::three');

  catalog = Puppet::Resource::Catalog.new; 
  expect{r.each {|r| catalog.add_resource(r) }}.not_to raise_error

This is just inspiration.  If you remove the title_patterns or isnamevar
definitions, the code above will blow up.

Other tests can be:

  expect(r[2].uniqueness_key).to contain_exactly(['foo', 'three'])

This test the regex parsing.

Ok, I didn't look into the consequences of changing domain from property
to parameter.  But it looks like it does imply other changes in the
code.

Rich Megginson <rmeggins at redhat.com> writes:

> On 09/16/2015 02:58 PM, Cody Herriges wrote:
>
>     I wrote my first composite namevar type a few years ago, and all the
> magic is basically a single block of code inside the type...
>
> https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L145-L169
>
> It basically boils down to these three things:
>
> * Pick your namevars
> (https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L49-L64)
> * Pick a delimiter
>   - Personally I'd use @ here since we are talking about domains
>
> Unfortunately, not only is "domains" an overloaded term, but "@" is
> already in use as a delimiter for keystone_user_role, and "@" is a
> legal character in usernames.
>
>     
> * Build your self.title_patterns method, accounting for delimited names
> and arbitrary names.
>
> While it looks like the README never got updated, the java_ks example
> supports both meaningful titles and arbitrary ones.
>
> java_ks { 'activemq_puppetca_keystore':
>   ensure       => latest,
>   name         => 'puppetca',
>   certificate  => '/etc/puppet/ssl/certs/ca.pem',
>   target       => '/etc/activemq/broker.ks',
>   password     => 'puppet',
>   trustcacerts => true,
> }
>
> java_ks { 'broker.example.com:/etc/activemq/broker.ks':
>   ensure      => latest,
>   certificate =>
> '/etc/puppet/ssl/certs/broker.example.com.pe-internal-broker.pem',
>   private_key =>
> '/etc/puppet/ssl/private_keys/broker.example.com.pe-internal-broker.pem',
>   password    => 'puppet',
> }
>
> You'll notice the first being an arbitrary title and the second
> utilizing a ":" as a delimiter and omitting the name and target parameters.
>
> Another code example can be found in the package type.
>
> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/package.rb#L268-L291.
>
> Ok. I've hacked a lib/puppet/type/keystone_tenant.rb to use name and
> domain with "isnamevar" and added a title_patterns like this:
>
> def self.title_patterns
> identity = lambda {|x| x}
> [
> [
> /^(.+)::(.+)$/,
> [
> [ :name, identity ],
> [ :domain, identity ]
> ]
> ],
> [
> /^(.+)$/,
> [
> [ :name, identity ]
> ]
> ]
> ]
> end
>
> Then I hacked one of the simple rspec-puppet files to do this:
>
> let :pre_condition do
>   [
>     'keystone_tenant { "tenant1": name => "tenant", domain => "domain1" }',
>     'keystone_tenant { "tenant2": name => "tenant", domain => "domain2" }'
>   ]
> end
>
> because what I'm trying to do is not rely on the title of the
> resource, but to make the combination of 'name' + 'domain' the actual
> "name" of the resource. This doesn't work. This is the error I get
> running spec:
>
> Failure/Error: it { is_expected.to contain_package('python-keystone').with_ensure
> ("present") }
> Puppet::Error:
> Puppet::Parser::AST::Resource failed with error ArgumentError: Cannot
> alias Keystone_tenant[tenant2] to ["tenant"]; resource
> ["Keystone_tenant", "tenant"] already declared at line 3 on node
> unused.redhat.com
> # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:137:in
> `alias'
> # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:111:in
> `create_resource_aliases'
> # ./vendor/gems/puppet-3.8.2/lib/puppet/resource/catalog.rb:90:in
> `add_one_resource'
>
> Is there any way to accomplish the above? If not, please tell me now
> and put me out of my misery, and we can go back to the original plan
> of forcing everyone to use "::" in the resource titles and names.
>
>     
>
>     __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Sofer Athlan-Guyot


From mvanwink at rackspace.com  Tue Sep 22 13:46:01 2015
From: mvanwink at rackspace.com (Matt Van Winkle)
Date: Tue, 22 Sep 2015 13:46:01 +0000
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
 informal working group suggestion
In-Reply-To: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
References: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
Message-ID: <D226C509.17B892%vw@rackspace.com>

Thanks, Dina!

For context to the rest of the LDT folks, Dina reached out to me about working on this under our umbrella for now.  That makes sense until we understand whether it's a large enough effort to live as its own working group, because most of us have various performance concerns too.  So, like Public Clouds, we'll have to figure out how to integrate this sub group.

I suspect the time slot for Tokyo is already packed, so the work for the Performance subgroup may have to be informal or in other sessions, but I'll start working with Tom and the folks covering the session for me (since I won't be able to make it) on what we might be able to do.  I've also asked Dina to join the Oct meeting prior to the Summit so we can further discuss the sub team.

Thanks!
VW

From: Dina Belova <dbelova at mirantis.com<mailto:dbelova at mirantis.com>>
Date: Tuesday, September 22, 2015 7:57 AM
To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>" <openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>>
Subject: [Large Deployments Team][Performance Team] New informal working group suggestion

Hey, OpenStackers!

I'm writing to propose organising a new informal team to work specifically on OpenStack performance issues. This will be a sub-team within the already existing Large Deployments Team, and I think it will be a good idea to gather people interested in OpenStack performance in one room, identify what issues are worrying contributors and what can be done, and share the results of performance research :)

So please volunteer to take part in this initiative. I hope many people will be interested, and that we'll be able to use a cross-project session slot<http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and hold a kick-off meeting.

I would like to apologise for writing to two mailing lists at the same time, but I want to make sure that all potentially interested people notice the email.

Thanks and see you in Tokyo :)

Cheers,
Dina

--

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/9edf3d40/attachment.html>

From Neil.Jerram at metaswitch.com  Tue Sep 22 13:46:03 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Tue, 22 Sep 2015 13:46:03 +0000
Subject: [openstack-dev] [Outreachy]New coordinator announcement and
 list of current applicants and mentors
References: <CAJ_e2gAUDWm4zzTYnH_ngvWS=YjH=GfJ9=Y0KbfXTpa73KU3ZQ@mail.gmail.com>
Message-ID: <SN1PR02MB16954C9FE361AB3C92DC390D99450@SN1PR02MB1695.namprd02.prod.outlook.com>

On 21/09/15 20:31, Victoria Martínez de la Cruz wrote:
Hi all,

I'm glad to announce that Mahati Chamarthy (mahatic) will join the current coordination efforts for the Outreachy internships. Thanks Mahati!

Also, I wanted to share with you this Etherpad [0] containing the current list of applicants and mentors for this round.

Applicants -> If you want to apply and don't see your name on this list, please add your name and the project/s you are interested in.

Mentors -> If you want to mentor someone and don't see your name on this list, please add your name and the project/s you are willing to mentor for.

Everyone else -> If you want to give us a hand and don't know how, help us by spreading the word! Maybe a friend of yours wants to join as an applicant, or a coworker as a mentor.

All help is appreciated.

Cheers,

Victoria

[0] https://etherpad.openstack.org/p/outreachy

FYI I just added myself as a possible mentor for Neutron.

Regards,
    Neil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/6cb2268c/attachment.html>

From thierry at openstack.org  Tue Sep 22 14:03:32 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 22 Sep 2015 16:03:32 +0200
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
Message-ID: <56015FB4.1030406@openstack.org>

Robert Collins wrote:
> [...]
> So, one answer we can use is "The version impact of a requirements
> change is never less than the largest version change in the change."
> That is:
> nothing -> a requirement -> major version change

That feels a bit too much. In a lot of cases, the added requirement will
be used in a new, backward-compatible feature (requiring a y bump), or
will serve to remove code without changing functionality (requiring a z
bump). I would think that the cases where a new requirement requires an
x bump are rare.

> 1.x.y -> 2.0.0 -> major version change
> 1.2.y -> 1.3.0 -> minor version change
> 1.2.3 -> 1.2.4 -> patch version change

The last two sound like good rules of thumb.

-- 
Thierry Carrez (ttx)


From vkramskikh at mirantis.com  Tue Sep 22 14:26:23 2015
From: vkramskikh at mirantis.com (Vitaly Kramskikh)
Date: Tue, 22 Sep 2015 17:26:23 +0300
Subject: [openstack-dev] [Fuel][UI] Bower is gone
Message-ID: <CABX1ep4MLi_NQ3LD5CS5Oeh6sz4R0J7r4CswWPxSF-Cpm_Otzg@mail.gmail.com>

Hi folks,

We used Bower to manage client-side dependencies of Fuel UI, but as of
today they are managed by NPM, which is already used to manage server-side
dependencies. This change reduces the complexity of Fuel UI and is also
preparation for some important changes.

For those who use fake mode for development, this change means that you
no longer need to run "gulp bower" after pulling the latest changes;
"npm install" is enough.

-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/2e5513f6/attachment.html>

From sean at coreitpro.com  Tue Sep 22 14:34:39 2015
From: sean at coreitpro.com (Sean M. Collins)
Date: Tue, 22 Sep 2015 14:34:39 +0000
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
 informal working group suggestion
In-Reply-To: <D226C509.17B892%vw@rackspace.com>
References: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
 <D226C509.17B892%vw@rackspace.com>
Message-ID: <0000014ff57a5699-3c96f383-0d24-4d0d-9347-5dfbdd2e88f2-000000@email.amazonses.com>

To help kick off this topic, the Neutron team has been tracking some
performance profiling in this etherpad.

https://etherpad.openstack.org/p/router-scheduling-performance

-- 
Sean M. Collins


From fungi at yuggoth.org  Tue Sep 22 14:48:12 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 22 Sep 2015 14:48:12 +0000
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
Message-ID: <20150922144812.GN25159@yuggoth.org>

On 2015-09-22 15:16:26 +1200 (+1200), Robert Collins wrote:
[...]
> 'Is this requirements change an API break' or 'is this requirements
> change feature work' or 'is this requirements change a bugfix'.
[...]

It may also be worth considering whether this logic only applies to
increasing minimums (and perhaps in some very rare cases, lowering
maximums). Raising or removing a maximum (cap) or lowering a minimum
further means that downstream consumers are expected to still be
able to continue using the same dependency versions as before the
change, which means at worst a patchlevel increase in the next
release.
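As a sketch of the rule of thumb being discussed in this sub-thread (the
category names below are mine for illustration, not project policy):

```python
# The release bump is never less than the largest bump implied by any
# requirements change.  The mapping is illustrative, not settled policy.

BUMP_ORDER = ['patch', 'minor', 'major']

def requirement_change_bump(change):
    """Minimum semver bump implied by a single requirements change."""
    if change in ('lower-min', 'raise-cap', 'drop-cap'):
        # Downstream consumers can keep their existing dependency
        # versions, so at worst a patch-level release is needed.
        return 'patch'
    if change in ('add-requirement', 'raise-min'):
        # Usually accompanies new backward-compatible work.
        return 'minor'
    raise ValueError('unknown change kind: %s' % change)

def release_bump(changes):
    """Largest bump implied by a set of requirements changes."""
    bumps = [requirement_change_bump(c) for c in changes]
    return max(bumps, key=BUMP_ORDER.index)
```

So a release that only drops a cap stays a patch release, while one that
also raises a minimum becomes at least a minor release.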
-- 
Jeremy Stanley


From chris.friesen at windriver.com  Tue Sep 22 15:05:11 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 22 Sep 2015 09:05:11 -0600
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150921085609.GC28520@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <B987741E651FDE4584B7A9C0F7180DEB1CDC2D02@G4W3208.americas.hpqcorp.net>
 <20150921085609.GC28520@redhat.com>
Message-ID: <56016E27.7070106@windriver.com>

On 09/21/2015 02:56 AM, Daniel P. Berrange wrote:
> On Fri, Sep 18, 2015 at 05:47:31PM +0000, Carlton, Paul (Cloud Services) wrote:
>> However the most significant impediment we encountered was customer
>> complaints about performance of instances during migration.  We did a little
>> bit of work to identify the cause of this and concluded that the main issues
>> was disk i/o contention.  I wonder if this is something you or others have
>> encountered?  I'd be interested in any idea for managing the rate of the
>> migration processing to prevent it from adversely impacting the customer
>> application performance.  I appreciate that if we throttle the migration
>> processing it will take longer and may not be able to keep up with the rate
>> of disk/memory change in the instance.
>
> I would not expect live migration to have an impact on disk I/O, unless
> your storage is network based and using the same network as the migration
> data. While migration is taking place you'll see a small impact on the
> guest compute performance, due to page table dirty bitmap tracking, but
> that shouldn't appear directly as disk I/O problem. There is no throttling
> of guest I/O at all during migration.

Technically, if you're doing a lot of disk I/O, couldn't you end up with a case 
where you're thrashing the page cache enough to interfere with migration?  So 
it's actually memory change that is the problem, but it might not be memory that 
the application is modifying directly, but rather memory allocated by the kernel.

>> Could you point me at somewhere I can get details of the tuneable setting
>> relating to cutover down time please?  I'm assuming that at these are
>> libvirt/qemu settings?  I'd like to play with them in our test environment
>> to see if we can simulate busy instances and determine what works.  I'd also
>> be happy to do some work to expose these in nova so the cloud operator can
>> tweak if necessary?
>
> It is already exposed as 'live_migration_downtime' along with
> live_migration_downtime_steps, and live_migration_downtime_delay.
> Again, it shouldn't have any impact on guest performance while
> live migration is taking place. It only comes into effect when
> checking whether the guest is ready to switch to the new host.

Has anyone given thought to exposing some of these new parameters to the 
end-user?  I could see a scenario where an image might want to specify the 
acceptable downtime over migration.  (On the other hand that might be tricky 
from the operator perspective.)
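For anyone wanting to experiment, these are nova.conf options (in the
[libvirt] section at time of writing; the values below are purely
illustrative, not recommendations):

```ini
[libvirt]
# maximum permitted downtime during cutover, in milliseconds
live_migration_downtime = 500
# number of incremental steps used to reach the maximum downtime value
live_migration_downtime_steps = 10
# time to wait between each step increase, in seconds
live_migration_downtime_delay = 75
```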

Chris


From berrange at redhat.com  Tue Sep 22 15:20:12 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Tue, 22 Sep 2015 16:20:12 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <56016E27.7070106@windriver.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <B987741E651FDE4584B7A9C0F7180DEB1CDC2D02@G4W3208.americas.hpqcorp.net>
 <20150921085609.GC28520@redhat.com>
 <56016E27.7070106@windriver.com>
Message-ID: <20150922152012.GA28888@redhat.com>

On Tue, Sep 22, 2015 at 09:05:11AM -0600, Chris Friesen wrote:
> On 09/21/2015 02:56 AM, Daniel P. Berrange wrote:
> >On Fri, Sep 18, 2015 at 05:47:31PM +0000, Carlton, Paul (Cloud Services) wrote:
> >>However the most significant impediment we encountered was customer
> >>complaints about performance of instances during migration.  We did a little
> >>bit of work to identify the cause of this and concluded that the main issues
> >>was disk i/o contention.  I wonder if this is something you or others have
> >>encountered?  I'd be interested in any idea for managing the rate of the
> >>migration processing to prevent it from adversely impacting the customer
> >>application performance.  I appreciate that if we throttle the migration
> >>processing it will take longer and may not be able to keep up with the rate
> >>of disk/memory change in the instance.
> >
> >I would not expect live migration to have an impact on disk I/O, unless
> >your storage is network based and using the same network as the migration
> >data. While migration is taking place you'll see a small impact on the
> >guest compute performance, due to page table dirty bitmap tracking, but
> >that shouldn't appear directly as disk I/O problem. There is no throttling
> >of guest I/O at all during migration.
> 
> Technically if you're doing a lot of disk I/O couldn't you end up with a
> case where you're thrashing the page cache enough to interfere with
> migration?  So it's actually memory change that is the problem, but it might
> not be memory that the application is modifying directly but rather memory
> allocated by the kernel.
> 
> >>Could you point me at somewhere I can get details of the tuneable setting
> >>relating to cutover down time please?  I'm assuming that at these are
> >>libvirt/qemu settings?  I'd like to play with them in our test environment
> >>to see if we can simulate busy instances and determine what works.  I'd also
> >>be happy to do some work to expose these in nova so the cloud operator can
> >>tweak if necessary?
> >
> >It is already exposed as 'live_migration_downtime' along with
> >live_migration_downtime_steps, and live_migration_downtime_delay.
> >Again, it shouldn't have any impact on guest performance while
> >live migration is taking place. It only comes into effect when
> >checking whether the guest is ready to switch to the new host.
> 
> Has anyone given thought to exposing some of these new parameters to the
> end-user?  I could see a scenario where an image might want to specify the
> acceptable downtime over migration.  (On the other hand that might be tricky
> from the operator perspective.)

I'm of the opinion that we should really try to avoid exposing *any*
migration tunables to the tenant user. All the tunables are pretty
hypervisor specific and low level and not very friendly to expose
to tenants. Instead our focus should be on ensuring that it will
always "just work" from the tenants POV.

When QEMU gets 'post copy' migration working, we'll want to adopt
that asap, as that will give us the means to guarantee that migration
will always complete with very little need for tuning.

At most I could see the users being able to given some high level
indication as to whether their images tolerate some level of
latency, so Nova can decide what migration characteristic is
acceptable.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From chris.friesen at windriver.com  Tue Sep 22 15:29:46 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 22 Sep 2015 09:29:46 -0600
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>
Message-ID: <560173EA.2060506@windriver.com>

Apologies for the indirect quote, some of the earlier posts got deleted before I 
noticed the thread.

On 09/21/2015 03:43 AM, Koniszewski, Pawel wrote:
>> -----Original Message-----
>> From: Daniel P. Berrange [mailto:berrange at redhat.com]

>> There was a proposal to nova to allow the 'pause' operation to be invoked
>> while migration was happening. This would turn a live migration into a
>> coma-migration, thereby ensuring it succeeds. I can't remember if this
>> merged or not, as I can't find the review offhand, but it's important to
>> have this ASAP IMHO, as when evacuating VMs from a host admins need a knob
>> to use to force successful evacuation, even at the cost of pausing the
>> guest temporarily.

It's not strictly "live" migration, but for the same reason of pushing VMs off a 
host for maintenance it would be nice to have some way of migrating suspended 
instances.  (As brought up in 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075042.html)

>> In libvirt upstream we now have the ability to filter what disks are
>> migrated during block migration. We need to leverage that new feature to
>> fix the long standing problems of block migration when non-local images are
>> attached - eg cinder volumes. We definitely want this in Mitaka.

Agreed, this would be a very useful addition.

>> We should look at what we need to do to isolate the migration data network
>> from the main management network. Currently we live migrate over whatever
>> network is associated with the compute hosts primary Hostname / IP address.
>> This is not neccessarily the fastest NIC on the host. We ought to be able
>> to record an alternative hostname / IP address against each compute host to
>> indicate the desired migration interface.

Yes, this would be good to have upstream.  We've added this sort of thing 
locally (though with a hardcoded naming scheme) to allow migration over 10G 
links with management over 1G links.

>> There is also work on post-copy migration in QEMU. Normally with live
>> migration, the guest doesn't start executing on the target host until
>> migration has transferred all data. There are many workloads where that
>> doesn't work, as the guest is dirtying data too quickly, With post-copy you
>> can start running the guest on the target at any time, and when it faults
>> on a missing page that will be pulled from the source host. This is
>> slightly more fragile as you risk losing the guest entirely if the source
>> host dies before migration finally completes. It does guarantee that
>> migration will succeed no matter what workload is in the guest. This is
>> probably Nxxxx cycle material.

It seems to me that the ideal solution would be to start doing pre-copy 
migration, then if that doesn't converge with the specified downtime value then 
maybe have the option to just cut over to the destination and do a post-copy 
migration of the remaining data.

Chris


From openstack at nemebean.com  Tue Sep 22 15:34:19 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 22 Sep 2015 10:34:19 -0500
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <560134B2.300@dague.net>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net>
Message-ID: <560174FB.9050108@nemebean.com>

On 09/22/2015 06:00 AM, Sean Dague wrote:
> On 09/18/2015 02:30 PM, Ben Nemec wrote:
>> I've been dealing with this issue lately myself, so here's my two cents:
>>
>> It seems to me that solving this at the service level is actually kind
>> of wrong.  As you've discovered, that requires changes in a bunch of
>> different places to address what is really an external issue.  Since
>> it's the terminating proxy that is converting HTTPS traffic to HTTP that
>> feels like the right place for a fix IMHO.
>>
>> My solution has been to have the proxy (HAProxy in my case) rewrite the
>> Location header on redirects (one example for the TripleO puppet config
>> here: https://review.openstack.org/#/c/223330/1/manifests/loadbalancer.pp).
>>
>> I'm not absolutely opposed to having a way to make the services aware of
>> external SSL termination to allow use of a proxy that can't do header
>> rewriting, but I think proxy configuration should be the preferred way
>> to handle it.
> 
> My feeling on this one is that we've got this thing in OpenStack... the
> Service Catalog. It definitively tells the world what the service
> addresses are.
> 
> We should use that in the services themselves to reflect back their
> canonical addresses. Doing point solution rewriting of urls seems odd
> when we could just have Nova/Cinder/etc return documents with URLs that
> match what's in the service catalog for that service.
> 
> 	-Sean
> 

That also seems perfectly reasonable, although it looks like we're not
using the service catalog internally now?  I see hard-coded endpoints in
nova.conf for the services it talks to.


From berrange at redhat.com  Tue Sep 22 15:44:31 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Tue, 22 Sep 2015 16:44:31 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <560173EA.2060506@windriver.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>
 <560173EA.2060506@windriver.com>
Message-ID: <20150922154431.GC28888@redhat.com>

On Tue, Sep 22, 2015 at 09:29:46AM -0600, Chris Friesen wrote:
> >>There is also work on post-copy migration in QEMU. Normally with live
> >>migration, the guest doesn't start executing on the target host until
> >>migration has transferred all data. There are many workloads where that
> >>doesn't work, as the guest is dirtying data too quickly, With post-copy you
> >>can start running the guest on the target at any time, and when it faults
> >>on a missing page that will be pulled from the source host. This is
> >>slightly more fragile as you risk losing the guest entirely if the source
> >>host dies before migration finally completes. It does guarantee that
> >>migration will succeed no matter what workload is in the guest. This is
> >>probably Nxxxx cycle material.
> 
> It seems to me that the ideal solution would be to start doing pre-copy
> migration, then if that doesn't converge with the specified downtime value
> then maybe have the option to just cut over to the destination and do a
> post-copy migration of the remaining data.

Yes, that is precisely what the QEMU developers working on this
feature suggest we should do. The lazy page faulting on the target
host has a performance hit on the guest, so you definitely need
to give pre-copy a little time to start off with, and then
switch to post-copy once some benchmark is reached, or if progress
info shows the transfer is not making progress.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From fungi at yuggoth.org  Tue Sep 22 15:47:01 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 22 Sep 2015 15:47:01 +0000
Subject: [openstack-dev] Patches coming for .coveragerc
In-Reply-To: <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
Message-ID: <20150922154701.GP25159@yuggoth.org>

On 2015-09-22 11:59:44 +0200 (+0200), Alan Pevec wrote:
> stable/kilo has uncapped coverage>=3.6 do we patch-spam it or cap coverage?
> stable/juno has coverage>=3.6,<=3.7.1

There's nothing wrong with updating it there, but unless we're
actively running coverage jobs on those branches I'm not convinced
it's worth the effort/churn.
-- 
Jeremy Stanley


From cboylan at sapwetik.org  Tue Sep 22 15:55:14 2015
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 22 Sep 2015 08:55:14 -0700
Subject: [openstack-dev] Patches coming for .coveragerc
In-Reply-To: <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
Message-ID: <1442937314.2470383.390588497.0A2E2B51@webmail.messagingengine.com>

On Tue, Sep 22, 2015, at 02:59 AM, Alan Pevec wrote:
> 2015-09-21 16:12 GMT+02:00 Monty Taylor <mordred at inaugust.com>:
> > We're running a script right now to submit a change to every project with
> > this change. The topic will be coverage-v4
> 
> stable/kilo has uncapped coverage>=3.6 do we patch-spam it or cap
> coverage?
> stable/juno has coverage>=3.6,<=3.7.1
If you do cap coverage you should use a rule like 'coverage>=3.6,<4.0'
to allow for bug fix releases. I know it's not likely that those will
happen, but we do need to get into a better habit of not using <= on
upper bounds, as they effectively prevent any bug fix releases from
being used.
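In requirements.txt terms, the suggested cap reads:

```
coverage>=3.6,<4.0  # exclusive upper bound; 3.x bug-fix releases stay usable
```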

Clark


From emilien at redhat.com  Tue Sep 22 16:05:31 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 22 Sep 2015 12:05:31 -0400
Subject: [openstack-dev] [puppet] weekly meeting #52
In-Reply-To: <55FFFBAA.9050806@redhat.com>
References: <55FFFBAA.9050806@redhat.com>
Message-ID: <56017C4B.3070500@redhat.com>



On 09/21/2015 08:44 AM, Emilien Macchi wrote:
> Hello!
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150922
> 
> Feel free to add any additional items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.

We did our meeting, you can read the notes here:

http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-22-15.00.html

Thanks for attending!

> Regards,
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/135224c0/attachment.pgp>

From fungi at yuggoth.org  Tue Sep 22 16:05:37 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 22 Sep 2015 16:05:37 +0000
Subject: [openstack-dev] [release][all]Release help needed - we are
 incompatible with ourselves
In-Reply-To: <SN1PR02MB16955002E7E9726025285C2E99450@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <CAJ3HoZ3XF37499bnurXZBdDbrPcY_gP4+BfnnbBPheaXyRxrEw@mail.gmail.com>
 <SN1PR02MB16955002E7E9726025285C2E99450@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <20150922160537.GQ25159@yuggoth.org>

On 2015-09-22 12:19:47 +0000 (+0000), Neil Jerram wrote:
> FWIW - this email is tagged [all], and asks for help, but I don't
> understand it.  Even though I've been fairly heavily engaged with
> OpenStack for more than a cycle now.
[...]
> So, please, if you really mean [all], say what you're saying in terms
> that everyone can understand, and not just the (apparently, and
> unfortunately) small group of people who already understand all this in
> detail.
[...]

To clarify, the [all] tag is actually a shortcut to identify
"Cross-project coordination" topics (see
http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev for
details). The people who engage in these threads are generally going
to be those who are up to speed on/interested in cross-project
efforts and concerns, attend the weekly cross-project meeting (today
at 21:00 UTC!), sit in on cross-project sessions at the design
summit, review and/or implement specs proposed to the
openstack-specs repo, et cetera.

While I really wish cross-project discussions were of interest to
"all" subscribers on this mailing list, that's unfortunately not
likely to ever be the case. In that regard, its official subject tag
may be slightly misnamed.
-- 
Jeremy Stanley


From mgagne at internap.com  Tue Sep 22 16:12:23 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 22 Sep 2015 12:12:23 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <560134B2.300@dague.net>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net>
Message-ID: <56017DE7.7030209@internap.com>

On 2015-09-22 7:00 AM, Sean Dague wrote:
> 
> My feeling on this one is that we've got this thing in OpenStack... the
> Service Catalog. It definitively tells the world what the service
> addresses are.
> 
> We should use that in the services themselves to reflect back their
> canonical addresses. Doing point solution rewriting of urls seems odd
> when we could just have Nova/Cinder/etc return documents with URLs that
> match what's in the service catalog for that service.
> 

Sorry, this won't work for us. We have a "split view" in our service
catalog where internal management nodes have a specific catalog and
public nodes (for users) have a different one.

Implementing the secure_proxy_ssl_header config would require very
little code change across projects and would accommodate our use case
and others we might not have thought of. For example, how do you know
from which of the following URLs (publicURL, internalURL, adminURL) the
user is coming? Each might be different, and not all of them may use SSL.

The oslo.middleware project already has the SSL middleware [1]. It would
only be a matter of enabling this middleware by default in the paste
config of all projects.

[1]
https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/ssl.py
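
In spirit, that middleware does something like the following (a minimal
sketch, not the actual oslo.middleware code; the header name follows the
common X-Forwarded-Proto convention, and `scheme_app` is a made-up demo
application):

```python
# Minimal sketch of an SSL-termination middleware: trust a header set
# by the TLS-terminating proxy and rewrite wsgi.url_scheme so the
# application builds https:// URLs even though it received plain HTTP.
class SSLMiddleware(object):
    def __init__(self, application,
                 secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO'):
        self.application = application
        self.header = secure_proxy_ssl_header

    def __call__(self, environ, start_response):
        # Only rewrite the scheme when the proxy says the client spoke HTTPS.
        if environ.get(self.header, '').lower() == 'https':
            environ['wsgi.url_scheme'] = 'https'
        return self.application(environ, start_response)


def scheme_app(environ, start_response):
    # Trivial app that reports the scheme it believes it is served on.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['wsgi.url_scheme'].encode()]
```

Enabling something like this by default would then just be a paste
pipeline entry per project, as suggested above.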

-- 
Mathieu


From brandon.logan at RACKSPACE.COM  Tue Sep 22 16:18:17 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Tue, 22 Sep 2015 16:18:17 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
Message-ID: <1442938697.30604.1.camel@localhost>

Hi Banashankar,
I think it'd be great if you got this going.  It's one of those things
we want to have and that people ask for, but it has always gotten a
lower priority due to more critical work.

Thanks,
Brandon
On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
> Hi All,
> I was thinking of starting the work on heat to support LBaasV2,  Is
> there any concerns about that?
> 
> 
> I don't know if it is the right time to bring this up :D . 
> 
> Thanks,
> Banashankar (bana_k)
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From mriedem at linux.vnet.ibm.com  Tue Sep 22 16:41:05 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Tue, 22 Sep 2015 11:41:05 -0500
Subject: [openstack-dev] Patches coming for .coveragerc
In-Reply-To: <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
Message-ID: <560184A1.5080806@linux.vnet.ibm.com>



On 9/22/2015 4:59 AM, Alan Pevec wrote:
> 2015-09-21 16:12 GMT+02:00 Monty Taylor <mordred at inaugust.com>:
>> We're running a script right now to submit a change to every project with
>> this change. The topic will be coverage-v4
>
> stable/kilo has uncapped coverage>=3.6; do we patch-spam it or cap coverage?
> stable/juno has coverage>=3.6,<=3.7.1
>
> Cheers,
> Alan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I'd vote for just capping at <4.0 on stable/kilo rather than doing a 
bunch of patches there.
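
The cap Matt suggests would be a one-line change per project, along
these lines (file name and exact specifier are illustrative):

```diff
 # test-requirements.txt on stable/kilo
-coverage>=3.6
+coverage>=3.6,<4.0
```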

-- 

Thanks,

Matt Riedemann



From rbryant at redhat.com  Tue Sep 22 16:42:55 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Tue, 22 Sep 2015 12:42:55 -0400
Subject: [openstack-dev] [networking-ovn] Neutron-DVR feature on OVN/L3
In-Reply-To: <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>
References: <OF86C79150.8AA886D0-ON00257EC2.00625760-86257EC2.0063717A@LocalDomain>
 <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>
Message-ID: <5601850F.4070700@redhat.com>

On 09/21/2015 04:47 PM, Sisir Chowdhury wrote:
> Hi All -
> 
>     I have some proposal regarding ovn-networking project within Open-Stack.
> 
> #1.   Making Neutron-DVR feature intelligent enough so that we can
> completely remove Network Node(NN).
> 
>         Right now, even with DVR, egress traffic originating from VMs
> going outbound is SNAT'ed by the
>         Network Node, but ingress traffic coming from the Internet to
> the VMs goes directly through the
>         Compute Node and is DNAT'ed by the L3 Agent of the Compute Node.
> 
> Any Thoughts/Comments ?

The Neutron L3 agent is only used with networking-ovn temporarily while
we work through the L3 design and implementation in OVN itself.  OVN
will not use the L3 agent (or DVR) quite soon.  Some initial L3 design
notes are being discussed on the ovs dev list now.  L3 in OVN will be
distributed.

-- 
Russell Bryant


From rbryant at redhat.com  Tue Sep 22 16:46:20 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Tue, 22 Sep 2015 12:46:20 -0400
Subject: [openstack-dev] [neutron] Does neutron ovn plugin support to
 setup multiple neutron networks for one container?
In-Reply-To: <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
Message-ID: <560185DC.4060103@redhat.com>

On 09/22/2015 08:08 AM, WANG, Ming Hao (Tony T) wrote:
> Dear all,
> 
> Regarding the neutron OVN plugin's support for containers in one VM: my understanding is that one container can't be assigned two network interfaces on different neutron networks. Is that right?
> The reason:
> 1. One host VM only has one network interface.
> 2. all the VLAN tags are stripped out when the packet goes out the VM.
> 
> If that is true, does the neutron OVN plugin or OVN plan to support this?

You should be able to assign multiple interfaces to a container on
different networks.  The traffic for each interface will be tagged with
a unique VLAN ID on its way in and out of the VM, the same way it is
done for each container with a single interface.
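
As a rough sketch (purely illustrative, not OVN's actual data
structures), the per-interface tagging amounts to allocating a unique
VLAN ID per (container, interface) pair on the VM's single trunk port:

```python
# Hypothetical per-VM VLAN allocator: each container interface inside
# the VM gets its own tag, regardless of which neutron network it
# joins, so traffic can be demultiplexed at the VM's single interface.
class VlanAllocator(object):
    def __init__(self, first=1, last=4094):
        self._next = first
        self._last = last
        self.tags = {}  # (container_id, interface_name) -> VLAN tag

    def allocate(self, container_id, interface_name):
        key = (container_id, interface_name)
        if key not in self.tags:
            if self._next > self._last:
                raise RuntimeError('VLAN range exhausted')
            self.tags[key] = self._next
            self._next += 1
        return self.tags[key]
```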

-- 
Russell Bryant


From banveerad at gmail.com  Tue Sep 22 17:07:02 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 10:07:02 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1442938697.30604.1.camel@localhost>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
Message-ID: <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>

Hi Brandon,
Work in progress, but I need some input on how we want them: should they
replace the existing LBaaS v1 resources, or do we still need to support those?




Thanks
Banashankar


On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Hi Banashankar,
> I think it'd be great if you got this going.  One of those things we
> want to have and people ask for but has always gotten a lower priority
> due to the critical things needed.
>
> Thanks,
> Brandon
> On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
> > Hi All,
> > I was thinking of starting the work on heat to support LBaasV2,  Is
> > there any concerns about that?
> >
> >
> > I don't know if it is the right time to bring this up :D .
> >
> > Thanks,
> > Banashankar (bana_k)
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/e436f39f/attachment.html>

From carl at ecbaldwin.net  Tue Sep 22 17:32:36 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 22 Sep 2015 11:32:36 -0600
Subject: [openstack-dev] [neutron] [floatingip] Selecting router for
 floatingip when subnet is connected to multiple routers
In-Reply-To: <5600201F.20002@redhat.com>
References: <CAPWkaSVTOCekQ3JdXEj8g3oygM4AZ_4YOWc4Y4CXbTsjD6n5jg@mail.gmail.com>
 <CAOyZ2aEq6+Yf7fbZkzW=prqs21ErrV5cjX3L=e=NkeEOhMA3yg@mail.gmail.com>
 <CAF+Cadu-LG-uNcJbeUjpz8rp8GM-nZsxEYre+wqNMJDFLyc5QQ@mail.gmail.com>
 <5600201F.20002@redhat.com>
Message-ID: <CALiLy7oT+s=xQLh58+9o6kQ4WS-qgSjb4nbr0T_eqPa646ygEg@mail.gmail.com>

On Sep 21, 2015 9:21 AM, "Venkata Anil" <vkommadi at redhat.com> wrote:
>
> Hi All
>
> I need your opinion on selecting router for floatingip when subnet is
connected to multiple routers.
>
> When multiple routers connected to a subnet, vm on that subnet will only
send packets destined for external network to the router with subnet's
default gateway.
> Should we always choose this router(i.e router with subnet's default
gateway) for floatingip?

Egress traffic from a vm goes to the default gateway.  This traffic needs
to be either SNATed to the floating ip address as the source or matched
with an existing DNAT connection with conntrack (assuming the current
stateful implementation).  It follows that the floating ip should always be
hosted on the router having the default gateway interface on the vm port's
subnet.

To choose a different, non-default gateway, router would mean that you'll
need extra routes in the vm in order for the floating ip to work.  If we
want to allow this, we could accept any router connected to both the port's
internal network and the external network.  That is what we do today.  The
problem remains when there are multiple routers connected this way making
them all eligible [1].  Assuming we stick with this, I think it makes sense
to *prefer* the one that has the default gateway interface on the private
network.  I think that is the solution given by this patch set [2].

The big question in my mind is do we want to continue allowing the case
where the only eligible router doesn't have the default gateway interface
on the private network?  Do we want to continue allowing this knowing that
something else must be done (e.g. DHCP static routes or something like
that) to the VM to make the floating ip work?

> We have two scenarios -
>
> 1) Multiple routers connected to same subnet and also same external
network.
>    In this case, which router should we select for floatingip?
>    Do we choose the first router in the db list, or the router with the
default gateway? What if the router with the subnet's default gateway is not present?

I think prefer the default gateway.  The question still remains whether we
fail if it isn't present or select another.

> 2) Multiple routers connected to same subnet and different external
networks.
>    In this case, user has the choice to create floatingip on any external
network( and on the router connected to that external network).

Only the router with the default gateway will work out of the box, right?

>    But this router may not be the one having subnet's default gateway.
 Should we allow this?

This case should fail unless we choose to continue allowing floating ips to
be hosted on a non-default gateway router.

Carl

[1] https://launchpad.net/bugs/1470765

[2] https://review.openstack.org/#/c/220135/
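
Carl's "prefer the default-gateway router" rule could be sketched like
this (a hypothetical helper, not Neutron code; `routers` maps a router
id to the set of its interface IPs on the internal subnet):

```python
# Prefer the router whose interface is the subnet's default gateway;
# optionally fall back to any other connected router, knowing the VM
# then needs extra routes (e.g. DHCP static routes) for the floating
# IP to work.
def pick_router_for_floatingip(routers, gateway_ip, allow_fallback=True):
    preferred = [rid for rid, ips in sorted(routers.items())
                 if gateway_ip in ips]
    if preferred:
        return preferred[0]
    if allow_fallback and routers:
        # Any eligible router; deterministic but arbitrary choice.
        return sorted(routers)[0]
    raise ValueError('no eligible router for floating IP')
```

Setting `allow_fallback=False` corresponds to the stricter option Carl
raises: failing when only a non-default-gateway router is available.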
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/91659a35/attachment.html>

From carl at ecbaldwin.net  Tue Sep 22 17:33:20 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 22 Sep 2015 11:33:20 -0600
Subject: [openstack-dev] [Neutron] Port Forwarding API
In-Reply-To: <CAG9LJa7gNwDauUahRNi1MRckppt=kdAbdEop6CV8vO9v62fH8A@mail.gmail.com>
References: <CAG9LJa7gNwDauUahRNi1MRckppt=kdAbdEop6CV8vO9v62fH8A@mail.gmail.com>
Message-ID: <CALiLy7pgUUKcUQhsC1WLLqyuWf4aEJUCirPLvx-sEQSeW1uu=g@mail.gmail.com>

Interesting, I'll have a look.  We should get this on the neutron drivers'
agenda.  The drivers team has been dormant for a couple of weeks but I'm
sure it will pick up again very soon.

Carl
On Sep 20, 2015 12:28 AM, "Gal Sagie" <gal.sagie at gmail.com> wrote:

> Hello All,
>
> I have sent a spec [1] to resume the work on port forwarding API and
> reference implementation.
>
> It's currently marked as "WIP"; however, I raised some "TBD" questions for
> the community.
> The way I see it, port forwarding is an API that is very similar to the floating
> IP API and implementation,
> with a few changes:
>
> 1) Port forwarding can only be defined on the router's external gateway IP (or
> additional public IPs
>    that are located on the router).  (Similar to the case of centralized
> DNAT)
>
> 2) The same FIP address can be used for different mappings; for example,
> FIP with IP X
>     can be used with different ports to map to different VMs: X:4001 ->
> VM1 IP,
>     X:4002 -> VM2 IP (this is the essence of port forwarding).
>     So we also need the port mapping configuration fields.
>
> All the rest should probably behave (in my opinion) very similarly to FIPs
> (for example,
> not being able to remove the external gateway if port forwarding entries are
> configured;
> if the VM is deleted, the port forwarding entry is deleted as well, and so
> on...)
> All of these points are mentioned in the spec and I am waiting for the
> community's feedback
> on them.
>
> I am trying to figure out if implementation wise, it would be smart to try
> and use the floating IP
> implementation and extend it for this (given all the above mechanism
> described above already
> works for floating IP's)
> Or, add another new implementation which behaves very similar to floating
> IP's in most aspects
> (But still differ in some)
> Or something else...
>
> Would love to hear the community's feedback on the spec, even though it's WIP.
>
> Thanks
> Gal.
>
> [1] https://review.openstack.org/#/c/224727/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/f8b772fe/attachment.html>
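
The mapping Gal describes (one floating IP, many external-port to
internal-endpoint entries) can be sketched as follows (illustrative
only, not the proposed API; names are made up):

```python
# One floating IP with a table of (protocol, external port) ->
# (internal IP, internal port) rules, which is the essence of port
# forwarding as described in the spec.
class PortForwardingTable(object):
    def __init__(self, floating_ip):
        self.floating_ip = floating_ip
        self._rules = {}

    def add(self, protocol, external_port, internal_ip, internal_port):
        key = (protocol, external_port)
        if key in self._rules:
            # The same external port can't map to two internal endpoints.
            raise ValueError('external port already mapped')
        self._rules[key] = (internal_ip, internal_port)

    def lookup(self, protocol, external_port):
        return self._rules.get((protocol, external_port))
```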

From sean at dague.net  Tue Sep 22 17:34:42 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 22 Sep 2015 13:34:42 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <560174FB.9050108@nemebean.com>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <560174FB.9050108@nemebean.com>
Message-ID: <56019132.4010400@dague.net>

On 09/22/2015 11:34 AM, Ben Nemec wrote:
> On 09/22/2015 06:00 AM, Sean Dague wrote:
>> On 09/18/2015 02:30 PM, Ben Nemec wrote:
>>> I've been dealing with this issue lately myself, so here's my two cents:
>>>
>>> It seems to me that solving this at the service level is actually kind
>>> of wrong.  As you've discovered, that requires changes in a bunch of
>>> different places to address what is really an external issue.  Since
>>> it's the terminating proxy that is converting HTTPS traffic to HTTP that
>>> feels like the right place for a fix IMHO.
>>>
>>> My solution has been to have the proxy (HAProxy in my case) rewrite the
>>> Location header on redirects (one example for the TripleO puppet config
>>> here: https://review.openstack.org/#/c/223330/1/manifests/loadbalancer.pp).
>>>
>>> I'm not absolutely opposed to having a way to make the services aware of
>>> external SSL termination to allow use of a proxy that can't do header
>>> rewriting, but I think proxy configuration should be the preferred way
>>> to handle it.
>>
>> My feeling on this one is that we've got this thing in OpenStack... the
>> Service Catalog. It definitively tells the world what the service
>> addresses are.
>>
>> We should use that in the services themselves to reflect back their
>> canonical addresses. Doing point solution rewriting of urls seems odd
>> when we could just have Nova/Cinder/etc return documents with URLs that
>> match what's in the service catalog for that service.
>>
>> 	-Sean
>>
> 
> That also seems perfectly reasonable, although it looks like we're not
> using the service catalog internally now?  I see hard-coded endpoints in
> nova.conf for the services it talks to.

Nova uses it for cinder, and can for neutron, not for glance. A big part
of this is that people kept doing end runs around it instead of thinking
about how we make service discovery a base thing that all services use
in talking to each other or reflecting back to the user.

https://review.openstack.org/#/c/181393/ is an attempt to try to get a
handle on the whole situation.

	-Sean

-- 
Sean Dague
http://dague.net


From robertc at robertcollins.net  Tue Sep 22 17:42:29 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 23 Sep 2015 05:42:29 +1200
Subject: [openstack-dev] Patches coming for .coveragerc
In-Reply-To: <560184A1.5080806@linux.vnet.ibm.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <560184A1.5080806@linux.vnet.ibm.com>
Message-ID: <CAJ3HoZ2awFM-1t0FTbwUrLbqbmTGuV1h1SQsif_UznSnHGmryw@mail.gmail.com>

It's the same number of patches.
On 23 Sep 2015 4:42 am, "Matt Riedemann" <mriedem at linux.vnet.ibm.com> wrote:

>
>
> On 9/22/2015 4:59 AM, Alan Pevec wrote:
>
>> 2015-09-21 16:12 GMT+02:00 Monty Taylor <mordred at inaugust.com>:
>>
>>> We're running a script right now to submit a change to every project with
>>> this change. The topic will be coverage-v4
>>>
>>
>> stable/kilo has uncapped coverage>=3.6 do we patch-spam it or cap
>> coverage?
>> stable/juno has coverage>=3.6,<=3.7.1
>>
>> Cheers,
>> Alan
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> I'd vote for just capping at <4.0 on stable/kilo rather than doing a bunch
> of patches there.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/75d464d2/attachment.html>

From andrew.melton at RACKSPACE.COM  Tue Sep 22 17:43:28 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Tue, 22 Sep 2015 17:43:28 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <20150921191918.GS21846@jimrollenhagen.com>
References: <1442847252531.42564@RACKSPACE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
 <1442860231482.95559@RACKSPACE.COM>,
 <20150921191918.GS21846@jimrollenhagen.com>
Message-ID: <1442943810555.78167@RACKSPACE.COM>

Hey Devs,

Reply-All strikes again...

I've had to invalidate the old link. If you still need a PyCharm License, please reach out to me with your launchpad-id and I'll get you the updated link.

I'm also working on a WebStorm license. I'll let the list know when I have it.

--Andrew
________________________________________
From: Jim Rollenhagen <jim at jimrollenhagen.com>
Sent: Monday, September 21, 2015 3:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

On Mon, Sep 21, 2015 at 06:30:30PM +0000, Andrew Melton wrote:
> Please follow this link to request a license: https://account.jetbrains.com/a/4c4ojw.
>
> You will need a JetBrains account to request the license. This link is open for anyone to use, so please do not share it in the public. You may share it with other OpenStack contributors on your team, but if you do, please send me their launchpad-ids. Lastly, if you decide to stop using PyCharm, please send me an email so I can revoke the license and open it up for use by someone else.

Welp, it's in the public now. :(

// jim

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sean at dague.net  Tue Sep 22 17:46:10 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 22 Sep 2015 13:46:10 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <56017DE7.7030209@internap.com>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
Message-ID: <560193E2.7090603@dague.net>

On 09/22/2015 12:12 PM, Mathieu Gagné wrote:
> On 2015-09-22 7:00 AM, Sean Dague wrote:
>>
>> My feeling on this one is that we've got this thing in OpenStack... the
>> Service Catalog. It definitively tells the world what the service
>> addresses are.
>>
>> We should use that in the services themselves to reflect back their
>> canonical addresses. Doing point solution rewriting of urls seems odd
>> when we could just have Nova/Cinder/etc return documents with URLs that
>> match what's in the service catalog for that service.
>>
> 
> Sorry, this won't work for us. We have a "split view" in our service
> catalog where internal management nodes have a specific catalog and
> public nodes (for users) have a different one.
> 
> Implementing the secure_proxy_ssl_header config would require close to
> little code change to all projects and accommodate our use case and
> other ones we might not think of. For example, how do you know "from"
> which of the following URLs (publicURL, internalURL, adminURL) the users
> is coming? Each might be different and even not all be SSL.
> 
> The oslo.middleware project already has the SSL middleware [1]. It would
> only be a matter of enabling this middleware by default in the paste
> config of all projects.
> 
> [1]
> https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/ssl.py

The split view definitely needs to be considered, but a big question
here is whether we should really be doing this with multiple urls per
catalog entry, or dedicated catalog entries for internal usage.

There are a lot of things to work through to get our use of the service
catalog consistent and useful going forward. I just don't relish another
layer of workarounds that decide the service catalog is not a good way
to keep track of what our service URLs are, and that will have to be
unwound later.

	-Sean

-- 
Sean Dague
http://dague.net


From carl at ecbaldwin.net  Tue Sep 22 18:35:52 2015
From: carl at ecbaldwin.net (Carl Baldwin)
Date: Tue, 22 Sep 2015 12:35:52 -0600
Subject: [openstack-dev] [networking-ovn] Neutron-DVR feature on OVN/L3
In-Reply-To: <5601850F.4070700@redhat.com>
References: <OF86C79150.8AA886D0-ON00257EC2.00625760-86257EC2.0063717A@LocalDomain>
 <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>
 <5601850F.4070700@redhat.com>
Message-ID: <CALiLy7q6eUJ1c-t0xYDprouf-OuuZoQDk1_b0eihm8b0GtfW2g@mail.gmail.com>

On Tue, Sep 22, 2015 at 10:42 AM, Russell Bryant <rbryant at redhat.com> wrote:
> The Neutron L3 agent is only used with networking-ovn temporarily while
> we work through the L3 design and implementation in OVN itself.  OVN
> will not use the L3 agent (or DVR) quite soon.  Some initial L3 design
> notes are being discussed on the ovs dev list now.  L3 in OVN will be
> distributed.

I'm curious: when this is true, what models will be supported?  Will it
use NAT, as the current Neutron reference implementation does?

Carl


From Kevin.Fox at pnnl.gov  Tue Sep 22 18:38:47 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 22 Sep 2015 18:38:47 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>,
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>

We're using the v1 resources...

If the v2 ones are compatible and can seamlessly upgrade, great

Otherwise, make new ones please.

Thanks,
Kevin
________________________________
From: Banashankar KV [banveerad at gmail.com]
Sent: Tuesday, September 22, 2015 10:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

Hi Brandon,
Work in progress, but need some input on the way we want them, like replace the existing lbaasv1 or we still need to support them ?




Thanks
Banashankar


On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
Hi Banashankar,
I think it'd be great if you got this going.  One of those things we
want to have and people ask for but has always gotten a lower priority
due to the critical things needed.

Thanks,
Brandon
On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
> Hi All,
> I was thinking of starting the work on heat to support LBaasV2,  Is
> there any concerns about that?
>
>
> I don't know if it is the right time to bring this up :D .
>
> Thanks,
> Banashankar (bana_k)
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/828dbed9/attachment.html>

From robertc at robertcollins.net  Tue Sep 22 18:40:19 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 23 Sep 2015 06:40:19 +1200
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <1442924599-sup-9078@lrrr.local>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
 <1442924599-sup-9078@lrrr.local>
Message-ID: <CAJ3HoZ3_PqUxwrxxKO_6FUPk5t43-YELCMxQ9zPRNY65dSr4sw@mail.gmail.com>

On 23 September 2015 at 00:43, Doug Hellmann <doug at doughellmann.com> wrote:
> Excerpts from Robert Collins's message of 2015-09-22 15:16:26 +1200:
>> Currently we don't provide specific guidance on what should happen
>> when the only changes in a project are dependency changes and a
>> release is made.
>>
>> The releases team has been treating a dependency change as 'feature'
>> rather than 'bugfix' in semver modelling - so if the last release was
>> 1.2.3, and a requirements sync happens to fix a bug (e.g. a too-low
>> minimum dependency), then the next release would be 1.3.0.
>>
>> Reasoning about this can be a little complex to do on the fly, so I'd
>> like to put together some concrete guidance - which essentially means
>> being able to provide a heuristic to answer the questions:
>>
>> 'Is this requirements change an API break' or 'is this requirements
>> change feature work' or 'is this requirements change a bugfix'.
>
> Also, for our projects, "is this requirements change being triggered by
> a need in some other project that also syncs with the g-r list"?

I don't think that maps to semver though: which is why I didn't list
it. I agree that the /reason/ for a change may be 'consistency with
other OpenStack projects'. Until we've finished fixing up the pip
facilities around resolution we can't really make the requirements
syncing process more flexible - and even then it's going to be really
very tricky [e.g. how do you test 'oslo.messaging works with
oslo.config version X' when some other thing in devstack needs version
Y > X] - proving individually varied lower bounds is going to be
awkward at best if it depends on integration tests.
...
>> We could calculate the needed change programmatically for this
>> approach in the requirements syncing process.
>
> We also have to consider that we can't assume that the dependency
> is using semver itself, so we might not be able to tell from the
> outside whether the API is in fact breaking. So, we would need something
> other than the version number to make that determination.

Agreed - while it's unusual, a project can do both: major version
bumps without backwards incompatibilities, and minor or patch version
bumps that do include [deliberate] backwards incompatibilities.

> I've been encouraging the application of a simple rule precisely
> because this problem is so complicated. The 4 reasons for updates
> can get lost over time between a requirements update landing and a
> release being created, especially with automatic updates mixing
> with updates a project actually cares about.  We aren't yet correctly
> identifying our own API breaking changes and differentiating between
> features and bug fixes in all cases, so until we're better at that
> analysis I would rather continue over-simplifying the analysis of
> requirements updates.

So how about we [from a releases perspective] just don't comment on
requirements syncs - let projects make their own assessment?

I don't think 'pick minor' is a very useful heuristic - for many of
our 0.x.y projects we're overstating the impact [since x here is the
major version before stable has happened], while for consumers of
those projects we're underestimating it.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From robertc at robertcollins.net  Tue Sep 22 18:43:13 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 23 Sep 2015 06:43:13 +1200
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <56015FB4.1030406@openstack.org>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
 <56015FB4.1030406@openstack.org>
Message-ID: <CAJ3HoZ10+wA99SVxcYxAtThx50dgM7Cs+XM3qQaBY0=Bdd3htA@mail.gmail.com>

On 23 September 2015 at 02:03, Thierry Carrez <thierry at openstack.org> wrote:
> Robert Collins wrote:
>> [...]
>> So, one answer we can use is "The version impact of a requirements
>> change is never less than the largest version change in the change."
>> That is:
>> nothing -> a requirement -> major version change
>
> That feels a bit too much. In a lot of cases, the added requirement will
> be used in a new, backward-compatible feature (requiring y bump), or
> will serve to remove code without changing functionality (requiring z
> bump). I would think that the cases where a new requirement requires a x
> bump are rare.

So the question is 'will requiring this new thing force any users of
the library to upgrade past a major version of that new thing'. If it's
new to OpenStack, then the answer is clearly no for users of the
library within OpenStack, but we don't know about users outside of
OpenStack. I am entirely happy to concede that this should be case by
case though.

>> 1.x.y -> 2.0.0 -> major version change
>> 1.2.y -> 1.3.0 -> minor version change
>> 1.2.3 -> 1.2.4 -> patch version change
>
> The last two sound like good rules of thumb.

What about the one you didn't comment on? :)
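The rule under discussion here (the version impact of a requirements
change is never less than the largest version change contained in it)
can be sketched as a small helper. This is purely illustrative, not any
project's actual tooling; it assumes plain "x.y.z" version strings and
treats a newly added requirement as a major bump, per the first rule:

```python
# Illustrative sketch of the "never less than the largest change" rule.
# Assumes plain "x.y.z" strings; real version parsing is messier.

BUMPS = ["patch", "minor", "major"]

def bump_kind(old, new):
    """Classify the bump implied by moving a minimum from old to new."""
    if old is None:          # requirement added: worst case, per the rule
        return "major"
    o, n = old.split("."), new.split(".")
    if o[0] != n[0]:
        return "major"
    if o[1] != n[1]:
        return "minor"
    return "patch"

def release_impact(changes):
    """changes: iterable of (old_minimum_or_None, new_minimum) pairs."""
    kinds = [bump_kind(old, new) for old, new in changes]
    return max(kinds, key=BUMPS.index) if kinds else "patch"
```

Note that, as Rob points out below, this heuristic misreads 0.x.y
projects, where the middle digit is effectively the major version.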

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From robertc at robertcollins.net  Tue Sep 22 18:44:42 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 23 Sep 2015 06:44:42 +1200
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <20150922144812.GN25159@yuggoth.org>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
 <20150922144812.GN25159@yuggoth.org>
Message-ID: <CAJ3HoZ0jy+5aNfeU=OFyhOH1NH3Qqgd5UZyHeUzqbbuSkTXBmQ@mail.gmail.com>

On 23 September 2015 at 02:48, Jeremy Stanley <fungi at yuggoth.org> wrote:
> On 2015-09-22 15:16:26 +1200 (+1200), Robert Collins wrote:
> [...]
>> 'Is this requirements change an API break' or 'is this requirements
>> change feature work' or 'is this requirements change a bugfix'.
> [...]
>
> It may also be worth considering whether this logic only applies to
> increasing minimums (and perhaps in some very rare cases, lowering
> maximums). Raising or removing a maximum (cap) or lowering a minimum
> further means that downstream consumers are expected to still be
> able to continue using the same dependency versions as before the
> change, which means at worst a patchlevel increase in the next
> release.

Ah yes - I was only analyzing minimums, but indeed, similar logic can
be applied to maximums and the constraints they place on consumers.
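Jeremy's distinction can be sketched as a small predicate: only
tightening a requirement (raising a minimum, or adding/lowering a cap)
can force consumers to change dependency versions; loosening is at worst
a patch-level change. A hypothetical helper, assuming "x.y.z" strings:

```python
# Illustrative only: classify whether a requirements change tightens the
# allowed range (and so may force consumers to move) or merely loosens it.

def vtuple(v):
    """Parse "x.y.z" into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

def is_tightening(old_min, new_min, old_max=None, new_max=None):
    # Raising (or introducing) a minimum tightens the range.
    if old_min is None and new_min is not None:
        return True
    if old_min is not None and new_min is not None and vtuple(new_min) > vtuple(old_min):
        return True
    # Adding or lowering a maximum (cap) also tightens it.
    if new_max is not None and (old_max is None or vtuple(new_max) < vtuple(old_max)):
        return True
    # Removing a cap or lowering a minimum only loosens the range.
    return False
```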

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From wdec.ietf at gmail.com  Tue Sep 22 18:53:32 2015
From: wdec.ietf at gmail.com (Wojciech Dec)
Date: Tue, 22 Sep 2015 20:53:32 +0200
Subject: [openstack-dev] [Neutron] ImportError: cannot import name access
Message-ID: <CAFFjW4hkKNkwhDxyPwB0PpRcUHMQ0MXYOUXj6SG01zDVxNSkEw@mail.gmail.com>

Hi Folks,

I'm trying to use tox to run some unit tests on the latest Neutron
stable/icehouse, and things pretty much grind to a halt with a series of
errors around "cannot import name access":

  File
"/Users/wdec/Downloads/openstack/neutron/.tox/py27/lib/python2.7/site-packages/keystoneclient/__init__.py",
line 27, in <module>
    from keystoneclient import access
ImportError: cannot import name access

Any suggestions on a possible fix?

Thanks,
Wojciech.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/7a82d586/attachment.html>

From smelikyan at mirantis.com  Tue Sep 22 19:06:38 2015
From: smelikyan at mirantis.com (Serg Melikyan)
Date: Tue, 22 Sep 2015 22:06:38 +0300
Subject: [openstack-dev] [murano] [app-catalog] versions for murano
 assets in the catalog
In-Reply-To: <CA+odVQHdkocESWDvNhwZbQaMAyBPCJciXCTeDrTcAsYGN7Y4nA@mail.gmail.com>
References: <CA+odVQHdkocESWDvNhwZbQaMAyBPCJciXCTeDrTcAsYGN7Y4nA@mail.gmail.com>
Message-ID: <CAOnDsYNYi7zgmxCe57mf9cC7ma7m9pt5Rqx9aMGzjDn3eoGPUg@mail.gmail.com>

Hi Chris,

The concern regarding asset versioning in the Community App Catalog
indeed affects Murano, because we are constantly improving our language
and adding new features. For example, in Liberty we added the ability to
select an existing Neutron network for a particular application; if a
user wants to use this feature, their application will be incompatible
with Kilo. I think this is also valid for Heat, because the HOT language
is also improving with each release.

Thank you for proposing a workaround; I think it is a good way to solve
the immediate blocker while the Community App Catalog team works on
handling versions elegantly on their side. Kirill proposed two changes
in Murano to follow this approach that I've already +2'ed:

* https://review.openstack.org/225251 - openstack/murano-dashboard
* https://review.openstack.org/225249 - openstack/python-muranoclient

Looks like the corresponding commit to the Community App Catalog has
already merged [0]; our next step is to prepare new versions of the
applications from openstack/murano-apps and then figure out how to
publish them properly.

P.S. I've also talked with Alexander and Kirill about better ways to
handle versioning for assets in the Community App Catalog. They shared
that they are starting work on a PoC using the Glance Artifact
Repository, and they can probably share more details about that work
here. We would be happy to work on this together, given that in Liberty
we implemented experimental support for package versioning inside
Murano (e.g. having two versions of the same app working side by side)
[1].

References:
[0] https://review.openstack.org/224869
[1] http://murano-specs.readthedocs.org/en/latest/specs/liberty/murano-versioning.html
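The per-release URL scheme proposed in the quoted message below boils
down to a trivial mapping: clients get a stable, release-specific entry
point that the catalog side can redirect wherever packages for that
release actually live. The base URL comes from the proposal itself; the
helper is purely illustrative:

```python
# Illustrative sketch of the release-specific repo URL workaround.
# Each release gets its own stable URL, which the app-catalog side can
# redirect (e.g. to http://storage.apps.openstack.org) as needed.

BASE = "http://apps.openstack.org/api/v1/murano_repo/"

def repo_url(release):
    """Build the MURANO_REPO_URL default for a given release name."""
    return BASE + release.lower() + "/"
```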

On Thu, Sep 17, 2015 at 11:00 PM, Christopher Aedo <doc at aedo.net> wrote:
> One big thing missing from the App Catalog right now is the ability to
> version assets.  This is especially obvious with the Murano assets
> which have some version/release dependencies.  Ideally an app-catalog
> user would be able to pick an older version (ie "works with kilo
> rather than liberty"), but we don't have that functionality yet.
>
> We are working on resolving handling versions elegantly from the App
> Catalog side but in the short term we believe Murano is going to need
> a workaround.  In order to support multiple entries with the same name
> (i.e. Apache Tomcat package for both Kilo and Liberty) we are
> proposing the Liberty release of Murano have a new default URL, like:
>
> MURANO_REPO_URL="http://apps.openstack.org/api/v1/murano_repo/liberty/"
>
> We have a patch ready [1] which would redirect traffic hitting that
> URL to http://storage.apps.openstack.org.  If we take this approach,
> we will then retain the ability to manage where Murano fetches things
> from without requiring clients of the Liberty-Murano release to do
> anything.  For instance, if there is a need for Liberty versions of
> Murano packages to be different from Kilo, we could set up a
> Liberty-specific directory and put those versions there, and then
> adjust the redirect appropriately.
>
> What do you think?  We definitely need feedback here, otherwise we are
> likely to break things Murano relies on.  kzaitsev is active on IRC
> and was the one who highlighted this issue, but if there are other
> compatibility or version concerns as Murano continues to grow and
> improve, we could use one or two more people from Murano to stay in
> touch with us wherever you intersect with the App Catalog so we don't
> break something for you :)
>
> [1] https://review.openstack.org/#/c/224869/
>
> -Christopher
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelikyan at mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836


From jckasper at linux.vnet.ibm.com  Tue Sep 22 19:14:06 2015
From: jckasper at linux.vnet.ibm.com (John Kasperski)
Date: Tue, 22 Sep 2015 14:14:06 -0500
Subject: [openstack-dev] [Neutron] ImportError: cannot import name access
In-Reply-To: <CAFFjW4hkKNkwhDxyPwB0PpRcUHMQ0MXYOUXj6SG01zDVxNSkEw@mail.gmail.com>
References: <CAFFjW4hkKNkwhDxyPwB0PpRcUHMQ0MXYOUXj6SG01zDVxNSkEw@mail.gmail.com>
Message-ID: <5601A87E.6040108@linux.vnet.ibm.com>

Ran into this same situation a week or two ago.   I updated to the latest
code from the icehouse-eol branch and then pinned the client libraries
to an earlier release level.   After that, tox ran fine.  The version
levels that I pinned are:

   python-neutronclient==2.3.4
   python-keystoneclient==0.9.0
   python-novaclient==2.17.0

Other versions would probably have also worked, but those versions
matched what we were already using for icehouse.

John

On 09/22/2015 01:53 PM, Wojciech Dec wrote:
> Hi Folks,
>
> I'm trying to use tox to run some unit tests on the latest Neutron
> stable/icehouse, and things pretty much grind to a halt with a series
> of errors around "cannot import name access":
>
>   File
> "/Users/wdec/Downloads/openstack/neutron/.tox/py27/lib/python2.7/site-packages/keystoneclient/__init__.py",
> line 27, in <module>
>     from keystoneclient import access
> ImportError: cannot import name access
>
> Any suggestions on a possible fix?
>
> Thanks,
> Wojciech.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
John Kasperski

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/104c0ec3/attachment.html>

From mgagne at internap.com  Tue Sep 22 19:16:33 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 22 Sep 2015 15:16:33 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <560193E2.7090603@dague.net>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net>
Message-ID: <5601A911.2030504@internap.com>

On 2015-09-22 1:46 PM, Sean Dague wrote:
> On 09/22/2015 12:12 PM, Mathieu Gagn? wrote:
>> On 2015-09-22 7:00 AM, Sean Dague wrote:
>>>
>>> My feeling on this one is that we've got this thing in OpenStack... the
>>> Service Catalog. It definitively tells the world what the service
>>> addresses are.
>>>
>>> We should use that in the services themselves to reflect back their
>>> canonical addresses. Doing point solution rewriting of urls seems odd
>>> when we could just have Nova/Cinder/etc return documents with URLs that
>>> match what's in the service catalog for that service.
>>>
>>
>> Sorry, this won't work for us. We have a "split view" in our service
>> catalog where internal management nodes have a specific catalog and
>> public nodes (for users) have a different one.
>>
>> Implementing the secure_proxy_ssl_header config would require close to
>> no code change in all projects and would accommodate our use case and
>> other ones we might not think of. For example, how do you know "from"
>> which of the following URLs (publicURL, internalURL, adminURL) the user
>> is coming? Each might be different and even not all be SSL.
>>
>> The oslo.middleware project already has the SSL middleware [1]. It would
>> only be a matter of enabling this middleware by default in the paste
>> config of all projects.
>>
>> [1]
>> https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/ssl.py
> 
> The split view definitely needs to be considered, but a big question
> here is whether we should really be doing this with multiple urls per
> catalog entry, or dedicated catalog entries for internal usage.

We are using a dedicated catalog for internal usage and override the
service endpoint wherever possible in OpenStack services. We don't use
publicURL, internalURL or adminURL.


> There are a lot of things to work through to get our use of the service
> catalog consistent and useful going forward. I just don't relish another
> layer of work arounds that decide the service catalog is not a good way
> to keep track of what our service urls are, that has to be unwound later.

The oslo_middleware.ssl middleware appears to add little overhead while
offering maximum flexibility. I appreciate the wish to use the Keystone
catalog, but I don't feel this is the right answer.

For example, if I deploy Bifrost without Keystone, I won't have a
catalog to rely on and will still have the same lack of SSL termination
proxy support.

The simplest solution is often the right one.
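For reference, the approach described above can be sketched as a minimal
WSGI middleware in the spirit of oslo_middleware.ssl: trust a header set
by the TLS-terminating proxy and rewrite the request scheme so the app
generates https:// links. The class and header handling here are
illustrative; see the linked oslo.middleware source for the real
implementation.

```python
# Minimal sketch of SSL-termination-proxy support as WSGI middleware.
# The proxy sets X-Forwarded-Proto (CGI key HTTP_X_FORWARDED_PROTO);
# when it says "https", we fix up wsgi.url_scheme before the app runs.

class SSLTerminationMiddleware(object):
    def __init__(self, app, header="HTTP_X_FORWARDED_PROTO"):
        self.app = app
        self.header = header  # environ key for the trusted proxy header

    def __call__(self, environ, start_response):
        if environ.get(self.header, "").lower() == "https":
            environ["wsgi.url_scheme"] = "https"
        return self.app(environ, start_response)
```

Because the middleware only inspects the request environ, it works the
same whether or not a service catalog (or Keystone at all) is deployed.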

-- 
Mathieu


From brandon.logan at RACKSPACE.COM  Tue Sep 22 19:27:41 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Tue, 22 Sep 2015 19:27:41 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>	,
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
Message-ID: <1442950062.30604.3.camel@localhost>

There is some overlap, but there were some incompatible differences when
we started designing v2.  I'm sure the same issues will arise this time
around so new resources sounds like the path to go.  However, I do not
know much about Heat and the resources so I'm speaking on a very
uneducated level here.

Thanks,
Brandon
On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
> We're using the v1 resources...
> 
> If the v2 ones are compatible and can seamlessly upgrade, great
> 
> Otherwise, make new ones please.
> 
> Thanks,
> Kevin
> 
> ______________________________________________________________________
> From: Banashankar KV [banveerad at gmail.com]
> Sent: Tuesday, September 22, 2015 10:07 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> LbaasV2
> 
> 
> 
> Hi Brandon, 
> Work in progress, but need some input on the way we want them, like
> replace the existing lbaasv1 or we still need to support them ?
> 
> 
> 
> 
> 
> 
> 
> Thanks  
> Banashankar
> 
> 
> 
> On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
>         Hi Banashankar,
>         I think it'd be great if you got this going.  One of those
>         things we
>         want to have and people ask for but has always gotten a lower
>         priority
>         due to the critical things needed.
>         
>         Thanks,
>         Brandon
>         On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
>         > Hi All,
>         > I was thinking of starting the work on heat to support
>         LBaasV2,  Is
>         > there any concerns about that?
>         >
>         >
>         > I don't know if it is the right time to bring this up :D .
>         >
>         > Thanks,
>         > Banashankar (bana_k)
>         >
>         >
>         
>         >
>         __________________________________________________________________________
>         > OpenStack Development Mailing List (not for usage questions)
>         > Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         >
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From rbryant at redhat.com  Tue Sep 22 19:48:51 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Tue, 22 Sep 2015 15:48:51 -0400
Subject: [openstack-dev] [networking-ovn] Neutron-DVR feature on OVN/L3
In-Reply-To: <CALiLy7q6eUJ1c-t0xYDprouf-OuuZoQDk1_b0eihm8b0GtfW2g@mail.gmail.com>
References: <OF86C79150.8AA886D0-ON00257EC2.00625760-86257EC2.0063717A@LocalDomain>
 <201509212047.t8LKlK3g015392@d01av05.pok.ibm.com>
 <5601850F.4070700@redhat.com>
 <CALiLy7q6eUJ1c-t0xYDprouf-OuuZoQDk1_b0eihm8b0GtfW2g@mail.gmail.com>
Message-ID: <5601B0A3.80302@redhat.com>

On 09/22/2015 02:35 PM, Carl Baldwin wrote:
> On Tue, Sep 22, 2015 at 10:42 AM, Russell Bryant <rbryant at redhat.com> wrote:
>> The Neutron L3 agent is only used with networking-ovn temporarily while
>> we work through the L3 design and implementation in OVN itself.  OVN
>> will not use the L3 agent (or DVR) quite soon.  Some initial L3 design
>> notes are being discussed on the ovs dev list now.  L3 in OVN will be
>> distributed.
> 
> I'm curious: when this is true, what models will be supported?  Will it
> use NAT like the current Neutron reference implementation?

Good questions.  We're aiming to have at least some initial L3 support
available by the Tokyo summit.  That will be distributed, but likely
won't include NAT at all.  Someone is working on figuring out the best
way to support NAT natively in OVS, and then we'll be using that.

Similar to the way we're doing security groups, NAT will be using the
new ovs conntrack integration, but details TBD.

I copied Ben Pfaff in case he wants to provide some more insight.

-- 
Russell Bryant


From doug at doughellmann.com  Tue Sep 22 19:52:38 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 22 Sep 2015 15:52:38 -0400
Subject: [openstack-dev] [release][ptl][all] creating stable/liberty
	branches for non-oslo libraries today
In-Reply-To: <1442865622-sup-8579@lrrr.local>
References: <1442842916-sup-9093@lrrr.local> <1442865622-sup-8579@lrrr.local>
Message-ID: <1442951502-sup-7183@lrrr.local>

Excerpts from Doug Hellmann's message of 2015-09-21 16:02:50 -0400:
> Excerpts from Doug Hellmann's message of 2015-09-21 09:44:06 -0400:
> > 
> > All,
> > 
> > We are doing final releases, contraints updates, and creating
> > stable/liberty branches for all of the non-Oslo libraries (clients
> > as well as glance_store, os-brick, etc.) today. I have contacted
> > the designate, neutron, nova, and zaqar teams about final releases
> > for their clients today based on the list of unreleased changes.
> > All of the other libs looked like their most recent release would
> > be fine as a stable branch, so we'll be using those.
> > 
> > Doug
> > 
> 
> I have created stable/liberty branches from these versions:
> 
> ceilometermiddleware 0.3.0
> cliff 1.15.0
> django_openstack_auth 2.0.0
> glance_store 0.9.1
> keystoneauth 1.1.0
> keystonemiddleware 2.3.0
> os-client-config 1.7.4
> pycadf 1.1.0
> python-barbicanclient 3.3.0
> python-ceilometerclient 1.5.0
> python-cinderclient 1.4.0
> python-glanceclient 1.1.0
> python-heatclient 0.8.0
> python-ironicclient 0.8.1
> python-keystoneclient 1.7.1
> python-manilaclient 1.4.0
> python-neutronclient 3.0.0
> python-novaclient 2.30.0
> python-saharaclient 0.11.0
> python-swiftclient 2.6.0
> python-troveclient 1.3.0
> python-zaqarclient 0.2.0
> 
> The updates to the .gitreview files are available for review in
> https://review.openstack.org/#/q/topic:create-liberty,n,z
> 
> We have 3 projects we're waiting to branch:
> 
> os-brick
>   wait for https://review.openstack.org/#/c/220902/ (merged)

os-brick 0.5.0

> 
> python-designateclient
>   https://review.openstack.org/#/c/224667/ (merged)

python-designateclient 1.5.0

> 
> python-openstackclient
>   https://review.openstack.org/#/c/225443/
>   https://review.openstack.org/#/c/225505/

python-openstackclient 1.7.0

I believe that covers all of the managed libraries for this cycle.

Doug


From skraynev at mirantis.com  Tue Sep 22 19:52:43 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Tue, 22 Sep 2015 22:52:43 +0300
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1442950062.30604.3.camel@localhost>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
Message-ID: <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>

Brandon.

As I understand it, v1 and v2 also differ in their list of objects and
in the relationships between them.
So I don't think it will be easy to upgrade the old resources
(unfortunately).
I'd agree with Kevin's second suggestion, implementing new resources,
in this case.

I see that a lot of people want to help with it :) I suppose that Rabi
Mishra and I may try to help as well, because we were involved in the
implementation of the v1 resources in Heat.
Here is the list of v1 LBaaS resources in Heat:

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor

Also, I suppose it may be discussed during the summit talks :)
I will add it to the etherpad of potential sessions.


Regards,
Sergey.

On 22 September 2015 at 22:27, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> There is some overlap, but there were some incompatible differences when
> we started designing v2.  I'm sure the same issues will arise this time
> around so new resources sounds like the path to go.  However, I do not
> know much about Heat and the resources so I'm speaking on a very
> uneducated level here.
>
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
> > We're using the v1 resources...
> >
> > If the v2 ones are compatible and can seamlessly upgrade, great
> >
> > Otherwise, make new ones please.
> >
> > Thanks,
> > Kevin
> >
> > ______________________________________________________________________
> > From: Banashankar KV [banveerad at gmail.com]
> > Sent: Tuesday, September 22, 2015 10:07 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> > LbaasV2
> >
> >
> >
> > Hi Brandon,
> > Work in progress, but need some input on the way we want them, like
> > replace the existing lbaasv1 or we still need to support them ?
> >
> >
> >
> >
> >
> >
> >
> > Thanks
> > Banashankar
> >
> >
> >
> > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
> > <brandon.logan at rackspace.com> wrote:
> >         Hi Banashankar,
> >         I think it'd be great if you got this going.  One of those
> >         things we
> >         want to have and people ask for but has always gotten a lower
> >         priority
> >         due to the critical things needed.
> >
> >         Thanks,
> >         Brandon
> >         On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
> >         > Hi All,
> >         > I was thinking of starting the work on heat to support
> >         LBaasV2,  Is
> >         > there any concerns about that?
> >         >
> >         >
> >         > I don't know if it is the right time to bring this up :D .
> >         >
> >         > Thanks,
> >         > Banashankar (bana_k)
> >         >
> >         >
> >
> >         >
> >
>  __________________________________________________________________________
> >         > OpenStack Development Mailing List (not for usage questions)
> >         > Unsubscribe:
> >         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >         >
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>  __________________________________________________________________________
> >         OpenStack Development Mailing List (not for usage questions)
> >         Unsubscribe:
> >         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/bdbaaba2/attachment-0001.html>

From tsufiev at mirantis.com  Tue Sep 22 20:13:50 2015
From: tsufiev at mirantis.com (Timur Sufiev)
Date: Tue, 22 Sep 2015 20:13:50 +0000
Subject: [openstack-dev] [Horizon] [Cinder] [Keystone] Showing Cinder
 quotas for non-admin users in Horizon
In-Reply-To: <CAGocpaHEd+VwXZ35GWq-L=y5z0zTm9G8-nj7n+hoRzjx04EupA@mail.gmail.com>
References: <CAEHC1ztdNfz_YPs4PzTEqwsdoCt6_R6PpoGVW+XDqqtgCtTNmw@mail.gmail.com>
 <CAGocpaHEd+VwXZ35GWq-L=y5z0zTm9G8-nj7n+hoRzjx04EupA@mail.gmail.com>
Message-ID: <CAEHC1zt0fdUKsJShQ=-t+NXJzG=CBUY2zQdfRey7yqCA1Oi5-A@mail.gmail.com>

If I understand correctly, the issue has been properly fixed with
Ivan's patch [1]. Thanks, Ivan!

[1] https://review.openstack.org/#/c/225891/

On Wed, Sep 16, 2015 at 2:53 PM Ivan Kolodyazhny <e0ne at e0ne.info> wrote:

> Hi Timur,
>
> To get quotas, we need to retrieve project information from
> Keystone. Unfortunately, Keystone sets the "admin_required" rule by default [1]
> in its API. We can handle it and raise 403 only if Keystone returns this
> error.
>
> [1] https://github.com/openstack/keystone/blob/master/etc/policy.json#L37
>
> Regards,
> Ivan Kolodyazhny
>
> On Mon, Sep 14, 2015 at 1:49 PM, Timur Sufiev <tsufiev at mirantis.com>
> wrote:
>
>> Hi all!
>>
>> It seems that recent changes in Cinder policies [1] forbade non-admin
>> users to see the disk quotas. Yet the volume creation is allowed for
>> non-admins, which effectively means that from now on a volume creation in
>> Horizon is free for non-admins (as soon as quotas:show rule is propagated
>> into Horizon policies). Along with understanding that this is not a desired
>> UX for Volumes panel in Horizon, I know as well that [1] wasn't responsible
>> for this quota behavior change on its own. It merely tried to alleviate the
>> situation caused by [2], which changed the requirements of quota show being
>> authorized. From this point I'm starting to sense that my knowledge of
>> Cinder and Keystone (because the hierarchical feature is involved) is
>> insufficient to suggest the proper solution from the Horizon point of view.
>> Yet hiding quota values from non-admin users makes no sense to me.
>> Suggestions?
>>
>> [1] https://review.openstack.org/#/c/219231/7/etc/cinder/policy.json line
>> 36
>> [2]
>> https://review.openstack.org/#/c/205369/29/cinder/api/contrib/quotas.py line
>> 135
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/52bda3a8/attachment.html>

From sean at dague.net  Tue Sep 22 20:52:57 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 22 Sep 2015 16:52:57 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <5601A911.2030504@internap.com>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
Message-ID: <5601BFA9.7000902@dague.net>

On 09/22/2015 03:16 PM, Mathieu Gagn? wrote:
> On 2015-09-22 1:46 PM, Sean Dague wrote:
>> On 09/22/2015 12:12 PM, Mathieu Gagn? wrote:
>>> On 2015-09-22 7:00 AM, Sean Dague wrote:
>>>>
>>>> My feeling on this one is that we've got this thing in OpenStack... the
>>>> Service Catalog. It definitively tells the world what the service
>>>> addresses are.
>>>>
>>>> We should use that in the services themselves to reflect back their
>>>> canonical addresses. Doing point solution rewriting of urls seems odd
>>>> when we could just have Nova/Cinder/etc return documents with URLs that
>>>> match what's in the service catalog for that service.
>>>>
>>>
>>> Sorry, this won't work for us. We have a "split view" in our service
>>> catalog where internal management nodes have a specific catalog and
>>> public nodes (for users) have a different one.
>>>
>>> Implementing the secure_proxy_ssl_header config would require close to
>>> no code change in all projects and would accommodate our use case and
>>> other ones we might not think of. For example, how do you know "from"
>>> which of the following URLs (publicURL, internalURL, adminURL) the user
>>> is coming? Each might be different and even not all be SSL.
>>>
>>> The oslo.middleware project already has the SSL middleware [1]. It would
>>> only be a matter of enabling this middleware by default in the paste
>>> config of all projects.
>>>
>>> [1]
>>> https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/ssl.py
>>
>> The split view definitely needs to be considered, but a big question
>> here is whether we should really be doing this with multiple urls per
>> catalog entry, or dedicated catalog entries for internal usage.
> 
> We are using a dedicated catalog for internal usage and override service
> endpoint wherever possible in OpenStack services. We don't use
> publicURL, internalURL or adminURL.
> 
> 
>> There are a lot of things to work through to get our use of the service
>> catalog consistent and useful going forward. I just don't relish another
>> layer of work arounds that decide the service catalog is not a good way
>> to keep track of what our service urls are, that has to be unwound later.
> 
> The oslo_middleware.ssl middleware appears to add little overhead while
> offering maximum flexibility. I appreciate the wish to use the Keystone
> catalog, but I don't feel this is the right answer.
> 
> For example, if I deploy Bifrost without Keystone, I won't have a
> catalog to rely on and will still have the same lack of SSL termination
> proxy support.
> 
> The simplest solution is often the right one.

I do get there are specific edge cases here, but I don't think that in
the general case we should be pushing a mode where Keystone is optional.
It is a bedrock of our system.

	-Sean

-- 
Sean Dague
http://dague.net


From thierry at openstack.org  Tue Sep 22 21:03:16 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 22 Sep 2015 23:03:16 +0200
Subject: [openstack-dev] [Keystone] [Manila] Liberty RC1 available
Message-ID: <5601C214.3050206@openstack.org>

Hello everyone,

Manila and Keystone are the first projects to produce a release
candidate for the end of the Liberty cycle! The RC1 tarballs, as well as
a list of last-minute features and fixed bugs since liberty-1, are
available at:

https://launchpad.net/manila/liberty/liberty-rc1
https://launchpad.net/keystone/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test
and validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/manila/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/manila/+filebug
or
https://bugs.launchpad.net/keystone/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Manila and Keystone are now open for
Mitaka development, and feature freeze restrictions no longer apply there!

Regards,

-- 
Thierry Carrez (ttx)


From doug at doughellmann.com  Tue Sep 22 21:21:56 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 22 Sep 2015 17:21:56 -0400
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <CAJ3HoZ3_PqUxwrxxKO_6FUPk5t43-YELCMxQ9zPRNY65dSr4sw@mail.gmail.com>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
 <1442924599-sup-9078@lrrr.local>
 <CAJ3HoZ3_PqUxwrxxKO_6FUPk5t43-YELCMxQ9zPRNY65dSr4sw@mail.gmail.com>
Message-ID: <1442956191-sup-1432@lrrr.local>

Excerpts from Robert Collins's message of 2015-09-23 06:40:19 +1200:
> On 23 September 2015 at 00:43, Doug Hellmann <doug at doughellmann.com> wrote:
> > Excerpts from Robert Collins's message of 2015-09-22 15:16:26 +1200:
> >> Currently we don't provide specific guidance on what should happen
> >> when the only changes in a project are dependency changes and a
> >> release is made.
> >>
> >> The releases team has been treating a dependency change as 'feature'
> >> rather than 'bugfix' in semver modelling - so if the last release was
> >> 1.2.3, and a requirements sync happens to fix a bug (e.g. a too-low
> >> minimum dependency), then the next release would be 1.3.0.
> >>
> >> Reasoning about this can be a little complex to do on the fly, so I'd
> >> like to put together some concrete guidance - which essentially means
> >> being able to provide a heuristic to answer the questions:
> >>
> >> 'Is this requirements change an API break' or 'is this requirements
> >> change feature work' or 'is this requirements change a bugfix'.
> >
> > Also, for our projects, "is this requirements change being triggered by
> > a need in some other project that also syncs with the g-r list"?
> 
> I don't think that maps to semver though, which is why I didn't list
> it. I agree that the /reason/ for a change may be 'consistency with
> other OpenStack projects'. Until we've finished fixing up the pip
> facilities around resolution we can't really make the requirements
> syncing process more flexible - and even then it's going to be very
> tricky [e.g. how do you test 'oslo.messaging works with oslo.config
> version X' when some other thing in devstack needs version Y > X] -
> proving individually varied lower bounds is going to be awkward at
> best if it depends on integration tests.
> ...
> >> We could calculate the needed change programmatically for this
> >> approach in the requirements syncing process.
> >
> > We also have to consider that we can't assume that the dependency
> > is using semver itself, so we might not be able to tell from the
> > outside whether the API is in fact breaking. So, we would need something
> > other than the version number to make that determination.
> 
> Agreed - while it's unusual, a project can do both major version
> bumps without backwards incompatibilities, and minor or patch version
> bumps that do include [deliberate] backwards incompatibilities.
> 
> > I've been encouraging the application of a simple rule precisely
> > because this problem is so complicated. The 4 reasons for updates
> > can get lost over time between a requirements update landing and a
> > release being created, especially with automatic updates mixing
> > with updates a project actually cares about.  We aren't yet correctly
> > identifying our own API breaking changes and differentiating between
> > features and bug fixes in all cases, so until we're better at that
> > analysis I would rather continue over-simplifying the analysis of
> > requirements updates.
> 
> So how about we [from a releases perspective] just don't comment on
> requirements syncs - let projects make their own assessment?

Most folks can't follow the relatively simple rules we have for
that now. We regularly have requests for patch releases for libs with
new features (with commit messages like "Add fancy new blah blah
feature") and we have, at least once, completely missed a backwards
incompatibility that required a major version bump.

> 
> I don't think 'pick minor' is a very useful heuristic - for many of
> our 0.x.y projects we're overstating the impact [since x here is
> major-before-stable-has-happened], for consumers of those we're
> underestimating.

Our use of these versioning rules is relatively new, and everyone
is still figuring it out.  I've been trying to encourage
simple-to-understand interpretations for all of the rules so we're
at least consistent, if not 100% correct. For requirements changes,
I considered that since one could not simply install the new version
of the library without also updating its dependencies, it was not
a patch release. And since the requirements updates haven't been
backwards-incompatible (as far as I have been able to tell in most
cases), they didn't require a major version update. So, the minor
version seemed right. That also produced a rule we could evaluate
for the case where the project itself did not request the requirements
update which, as you point out, isn't covered by semver directly.
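
The rule described here — a requirements sync counts as a feature, so it
gets a minor bump — can be sketched as a small helper (a hypothetical
illustration of the heuristic, not an actual releases-team tool):

```python
def next_version(current, change):
    """Return the next semver string for a release.

    change is one of 'api-break', 'feature', or 'bugfix'; under the
    heuristic discussed here, a requirements sync is treated as 'feature'.
    """
    major, minor, patch = (int(part) for part in current.split('.'))
    if change == 'api-break':
        return '%d.0.0' % (major + 1)
    if change == 'feature':  # includes requirements syncs
        return '%d.%d.0' % (major, minor + 1)
    return '%d.%d.%d' % (major, minor, patch + 1)
```

So a requirements sync on top of 1.2.3 would produce 1.3.0, matching the
practice described above.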

Doug


From mgagne at internap.com  Tue Sep 22 21:30:36 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 22 Sep 2015 17:30:36 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <5601BFA9.7000902@dague.net>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net>
Message-ID: <5601C87C.2010609@internap.com>

On 2015-09-22 4:52 PM, Sean Dague wrote:
> On 09/22/2015 03:16 PM, Mathieu Gagn? wrote:
>>
>> The oslo_middleware.ssl middleware looks to offer little overhead and
>> offer the maximum flexibility. I appreciate the wish to use the Keystone
>> catalog but I don't feel this is the right answer.
>>
>> For example, if I deploy Bifrost without Keystone, I won't have a
>> catalog to rely on and will still have the same lack of SSL termination
>> proxy support.
>>
>> The simplest solution is often the right one.
> 
> I do get there are specific edge cases here, but I don't think that in
> the general case we should be pushing a mode where Keystone is optional.
> It is a bedrock of our system.
> 

I understand that Keystone is "a bedrock of our system". This however
does not make it THE solution when simpler, proven ones already exist. I
fail to understand why other solutions can't be considered.

I opened a dialog with the community to express my concerns about the
lack of universal support for SSL termination proxies, so that together
we can find a solution which covers ALL use cases.

I proposed using an existing solution (oslo_middleware.ssl) (which I
didn't know of until now) that takes little to no time to implement and
covers a lot of use cases and special edge cases.

Operators encounter and *HAVE* to handle a lot of edge cases. We are
trying *very hard* to open up a dialog with the developer community by
sharing our real world use cases, so that they can be heard, understood
and addressed.

In this specific case, my impression is that those use cases
unfortunately happen not to be a priority when evaluating solutions,
which will actually make my job harder. I'm sad to see we can't come to
an agreement on this one. I feel like I failed to properly communicate
my needs as an operator and make them understood by others.
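
For readers unfamiliar with what such middleware does, here is a minimal
WSGI sketch in the spirit of oslo_middleware.ssl — an illustration only,
not the library's actual code:

```python
def ssl_termination_middleware(app, header='HTTP_X_FORWARDED_PROTO'):
    """Honor the X-Forwarded-Proto header set by an SSL-terminating proxy.

    If the proxy says the original request was https, rewrite
    wsgi.url_scheme so the service behind the proxy builds https:// URLs.
    """
    def wrapper(environ, start_response):
        forwarded = environ.get(header)
        if forwarded in ('http', 'https'):
            environ['wsgi.url_scheme'] = forwarded
        return app(environ, start_response)
    return wrapper
```

Enabling something along these lines in each service's paste config is
the kind of "little code change" being argued for here.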

-- 
Mathieu


From banveerad at gmail.com  Tue Sep 22 21:49:12 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 14:49:12 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
Message-ID: <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>

So are we thinking of making them like this?
OS::Neutron::LoadBalancerV2
OS::Neutron::ListenerV2
OS::Neutron::PoolV2
OS::Neutron::PoolMemberV2
OS::Neutron::HealthMonitorV2

and adding all of those into loadbalancer.py of the heat engine?
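
If resources under those names were to land, a template fragment could
look something like this — purely hypothetical, since neither the
resource type names nor their properties have been agreed on yet:

```yaml
# Hypothetical HOT fragment; the resource type names and properties are
# assumptions based on the proposal above, not merged Heat resources.
resources:
  lb:
    type: OS::Neutron::LoadBalancerV2
    properties:
      vip_subnet: private-subnet
  listener:
    type: OS::Neutron::ListenerV2
    properties:
      loadbalancer: { get_resource: lb }
      protocol: HTTP
      protocol_port: 80
```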

Thanks
Banashankar


On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev <skraynev at mirantis.com>
wrote:

> Brandon.
>
> As I understand it, v1 and v2 have differences both in the list of
> objects and in the relationships between them.
> So I don't think that it will be easy to upgrade the old resources
> (unfortunately).
> I'd agree with Kevin's second suggestion about implementing new
> resources in this case.
>
> I see that a lot of guys want to help with it :) And I suppose that
> Rabi Mishra and I may try to help with it, because we were involved
> in the implementation of the v1 resources in Heat.
> Here is the list of v1 lbaas resources in Heat:
>
>
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>
> Also, I suppose it may be discussed during the summit talks :)
> I will add it to the etherpad of potential sessions.
>
>
> Regards,
> Sergey.
>
> On 22 September 2015 at 22:27, Brandon Logan <brandon.logan at rackspace.com>
> wrote:
>
>> There is some overlap, but there were some incompatible differences when
>> we started designing v2.  I'm sure the same issues will arise this time
>> around, so new resources sound like the path to go.  However, I do not
>> know much about Heat and the resources, so I'm speaking on a very
>> uneducated level here.
>>
>> Thanks,
>> Brandon
>> On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>> > We're using the v1 resources...
>> >
>> > If the v2 ones are compatible and can seamlessly upgrade, great
>> >
>> > Otherwise, make new ones please.
>> >
>> > Thanks,
>> > Kevin
>> >
>> > ______________________________________________________________________
>> > From: Banashankar KV [banveerad at gmail.com]
>> > Sent: Tuesday, September 22, 2015 10:07 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> > LbaasV2
>> >
>> >
>> >
>> > Hi Brandon,
>> > Work in progress, but need some input on the way we want them, like
>> > replace the existing lbaasv1 or we still need to support them ?
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Thanks
>> > Banashankar
>> >
>> >
>> >
>> > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>> > <brandon.logan at rackspace.com> wrote:
>> >         Hi Banashankar,
>> >         I think it'd be great if you got this going.  One of those
>> >         things we
>> >         want to have and people ask for but has always gotten a lower
>> >         priority
>> >         due to the critical things needed.
>> >
>> >         Thanks,
>> >         Brandon
>> >         On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
>> >         > Hi All,
>> >         > I was thinking of starting the work on heat to support
>> >         LBaasV2,  Is
>> >         > there any concerns about that?
>> >         >
>> >         >
>> >         > I don't know if it is the right time to bring this up :D .
>> >         >
>> >         > Thanks,
>> >         > Banashankar (bana_k)
>> >         >
>> >         >
>> >
>> >         >
>> >
>>  __________________________________________________________________________
>> >         > OpenStack Development Mailing List (not for usage questions)
>> >         > Unsubscribe:
>> >         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >         >
>> >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>>  __________________________________________________________________________
>> >         OpenStack Development Mailing List (not for usage questions)
>> >         Unsubscribe:
>> >         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From chris.friesen at windriver.com  Tue Sep 22 21:52:16 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 22 Sep 2015 15:52:16 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
	config file?
Message-ID: <5601CD90.2050102@windriver.com>

Hi,

I recently had an issue with one file out of a dozen or so in 
"/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm running 
stable/kilo if it makes a difference.

Looking at the code in volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm 
wondering if we should do a fsync() before the close().  The way it stands now, 
it seems like it might be possible to write the file, start making use of it, 
and then take a power outage before it actually gets written to persistent 
storage.  When we come back up we could have an instance expecting to make use 
of it, but no target information in the on-disk copy of the file.

Even then, the fsync() man page explicitly says that it doesn't ensure 
that the directory entry has reached disk... for that, another explicit 
fsync() on a file descriptor for the directory is needed.

So I think for robustness we'd want to add

f.flush()
os.fsync(f.fileno())

between the existing f.write() and f.close(), and then add something like:

dirfd = os.open(volumes_dir, os.O_RDONLY)
os.fsync(dirfd)
os.close(dirfd)
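
Putting those pieces together, a self-contained sketch of such a durable
write (assuming POSIX semantics, with the directory opened read-only just
so it can be fsync()ed) might look like:

```python
import os

def durable_write(path, data):
    """Write data to path so that both the file contents and the
    directory entry survive a power loss immediately afterwards."""
    with open(path, 'w') as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # flush file contents to stable storage
    dirfd = os.open(os.path.dirname(path) or '.', os.O_RDONLY)
    try:
        os.fsync(dirfd)           # flush the directory entry itself
    finally:
        os.close(dirfd)
```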

Thoughts?

Chris


From fungi at yuggoth.org  Tue Sep 22 22:00:35 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 22 Sep 2015 22:00:35 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442943810555.78167@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
 <1442860231482.95559@RACKSPACE.COM>
 <20150921191918.GS21846@jimrollenhagen.com>
 <1442943810555.78167@RACKSPACE.COM>
Message-ID: <20150922220035.GR25159@yuggoth.org>

On 2015-09-22 17:43:28 +0000 (+0000), Andrew Melton wrote:
[...]
> I've had to invalidate the old link. If you still need a PyCharm
> License, please reach out to me with your launchpad-id and I'll
> get you the updated link.
[...]

Further, please reply to Andrew directly, not to this mailing list.
The other thousands of people subscribed to this list don't really
need to know whether or not you wish to obtain a license.
-- 
Jeremy Stanley


From brandon.logan at RACKSPACE.COM  Tue Sep 22 22:22:40 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Tue, 22 Sep 2015 22:22:40 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
Message-ID: <1442960561.30604.5.camel@localhost>

Well I'd hate to have the V2 postfix on it because V1 will be deprecated
and removed, which means the V2 being there would be lame.  Is there any
kind of precedent set for how to handle this?

Thanks,
Brandon
On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> So are we thinking of making it as ?
> OS::Neutron::LoadBalancerV2
> 
> OS::Neutron::ListenerV2
> 
> OS::Neutron::PoolV2
> 
> OS::Neutron::PoolMemberV2
> 
> OS::Neutron::HealthMonitorV2
> 
> 
> 
> and add all those into the loadbalancer.py of heat engine ?
> 
> Thanks 
> Banashankar
> 
> 
> 
> On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> <skraynev at mirantis.com> wrote:
>         Brandon.
>         
>         
>         As I understand we v1 and v2 have differences also in list of
>         objects and also in relationships between them.
>         So I don't think that it will be easy to upgrade old resources
>         (unfortunately). 
>         I'd agree with second Kevin's suggestion about implementation
>         new resources in this case.
>         
>         
>         I see, that a lot of guys, who wants to help  with it :) And I
>         suppose, that me and Rabi Mishra may try to help with it,
>         because we was involvement in implementation of v1 resources
>         in Heat.
>         Follow the list of v1 lbaas resources in Heat:
>         
>         
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>         
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>         
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>         
>         
>         
>         Also, I suppose, that it may be discussed during summit
>         talks :)
>         Will add to etherpad with potential sessions.
>         
>         
>         
>         Regards,
>         Sergey.
>         
>         On 22 September 2015 at 22:27, Brandon Logan
>         <brandon.logan at rackspace.com> wrote:
>                 There is some overlap, but there was some incompatible
>                 differences when
>                 we started designing v2.  I'm sure the same issues
>                 will arise this time
>                 around so new resources sounds like the path to go.
>                 However, I do not
>                 know much about Heat and the resources so I'm speaking
>                 on a very
>                 uneducated level here.
>                 
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>                 > We're using the v1 resources...
>                 >
>                 > If the v2 ones are compatible and can seamlessly
>                 upgrade, great
>                 >
>                 > Otherwise, make new ones please.
>                 >
>                 > Thanks,
>                 > Kevin
>                 >
>                 >
>                 ______________________________________________________________________
>                 > From: Banashankar KV [banveerad at gmail.com]
>                 > Sent: Tuesday, September 22, 2015 10:07 AM
>                 > To: OpenStack Development Mailing List (not for
>                 usage questions)
>                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for
>                 > LbaasV2
>                 >
>                 >
>                 >
>                 > Hi Brandon,
>                 > Work in progress, but need some input on the way we
>                 want them, like
>                 > replace the existing lbaasv1 or we still need to
>                 support them ?
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>                 > <brandon.logan at rackspace.com> wrote:
>                 >         Hi Banashankar,
>                 >         I think it'd be great if you got this going.
>                 One of those
>                 >         things we
>                 >         want to have and people ask for but has
>                 always gotten a lower
>                 >         priority
>                 >         due to the critical things needed.
>                 >
>                 >         Thanks,
>                 >         Brandon
>                 >         On Mon, 2015-09-21 at 17:57 -0700,
>                 Banashankar KV wrote:
>                 >         > Hi All,
>                 >         > I was thinking of starting the work on
>                 heat to support
>                 >         LBaasV2,  Is
>                 >         > there any concerns about that?
>                 >         >
>                 >         >
>                 >         > I don't know if it is the right time to
>                 bring this up :D .
>                 >         >
>                 >         > Thanks,
>                 >         > Banashankar (bana_k)
>                 >         >
>                 >         >
>                 >
>                 >         >
>                 >
>                  __________________________________________________________________________
>                 >         > OpenStack Development Mailing List (not
>                 for usage questions)
>                 >         > Unsubscribe:
>                 >
>                  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 >         >
>                 >
>                  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>                 >
>                 >
>                  __________________________________________________________________________
>                 >         OpenStack Development Mailing List (not for
>                 usage questions)
>                 >         Unsubscribe:
>                 >
>                  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 >
>                  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>                 >
>                 >
>                 >
>                 >
>                 __________________________________________________________________________
>                 > OpenStack Development Mailing List (not for usage
>                 questions)
>                 > Unsubscribe:
>                 OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 >
>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>                 
>                 __________________________________________________________________________
>                 OpenStack Development Mailing List (not for usage
>                 questions)
>                 Unsubscribe:
>                 OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>                 
>         
>         
>         
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From Kevin.Fox at pnnl.gov  Tue Sep 22 22:39:12 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 22 Sep 2015 22:39:12 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1442960561.30604.5.camel@localhost>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>,
 <1442960561.30604.5.camel@localhost>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>

There needs to be a way to have both v1 and v2 supported in one engine....

Say I have templates that use v1 already in existence (I do), and I want to be able to heat stack update on them one at a time to v2. This will replace the v1 lb with v2, migrating the floating ip from the v1 lb to the v2 one. This gives a smoothish upgrade path.

Thanks,
Kevin
________________________________________
From: Brandon Logan [brandon.logan at RACKSPACE.COM]
Sent: Tuesday, September 22, 2015 3:22 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

Well I'd hate to have the V2 postfix on it because V1 will be deprecated
and removed, which means the V2 being there would be lame.  Is there any
kind of precedent set for how to handle this?

Thanks,
Brandon
On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> So are we thinking of making it as ?
> OS::Neutron::LoadBalancerV2
>
> OS::Neutron::ListenerV2
>
> OS::Neutron::PoolV2
>
> OS::Neutron::PoolMemberV2
>
> OS::Neutron::HealthMonitorV2
>
>
>
> and add all those into the loadbalancer.py of heat engine ?
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> <skraynev at mirantis.com> wrote:
>         Brandon.
>
>
>         As I understand we v1 and v2 have differences also in list of
>         objects and also in relationships between them.
>         So I don't think that it will be easy to upgrade old resources
>         (unfortunately).
>         I'd agree with second Kevin's suggestion about implementation
>         new resources in this case.
>
>
>         I see, that a lot of guys, who wants to help  with it :) And I
>         suppose, that me and Rabi Mishra may try to help with it,
>         because we was involvement in implementation of v1 resources
>         in Heat.
>         Follow the list of v1 lbaas resources in Heat:
>
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>
>
>
>         Also, I suppose, that it may be discussed during summit
>         talks :)
>         Will add to etherpad with potential sessions.
>
>
>
>         Regards,
>         Sergey.
>
>         On 22 September 2015 at 22:27, Brandon Logan
>         <brandon.logan at rackspace.com> wrote:
>                 There is some overlap, but there was some incompatible
>                 differences when
>                 we started designing v2.  I'm sure the same issues
>                 will arise this time
>                 around so new resources sounds like the path to go.
>                 However, I do not
>                 know much about Heat and the resources so I'm speaking
>                 on a very
>                 uneducated level here.
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>                 > We're using the v1 resources...
>                 >
>                 > If the v2 ones are compatible and can seamlessly
>                 upgrade, great
>                 >
>                 > Otherwise, make new ones please.
>                 >
>                 > Thanks,
>                 > Kevin
>                 >
>                 >
>                 ______________________________________________________________________
>                 > From: Banashankar KV [banveerad at gmail.com]
>                 > Sent: Tuesday, September 22, 2015 10:07 AM
>                 > To: OpenStack Development Mailing List (not for
>                 usage questions)
>                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for
>                 > LbaasV2
>                 >
>                 >
>                 >
>                 > Hi Brandon,
>                 > Work in progress, but need some input on the way we
>                 want them, like
>                 > replace the existing lbaasv1 or we still need to
>                 support them ?
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>                 > <brandon.logan at rackspace.com> wrote:
>                 >         Hi Banashankar,
>                 >         I think it'd be great if you got this going.
>                 One of those
>                 >         things we
>                 >         want to have and people ask for but has
>                 always gotten a lower
>                 >         priority
>                 >         due to the critical things needed.
>                 >
>                 >         Thanks,
>                 >         Brandon
>                 >         On Mon, 2015-09-21 at 17:57 -0700,
>                 Banashankar KV wrote:
>                 >         > Hi All,
>                 >         > I was thinking of starting the work on
>                 heat to support
>                 >         LBaasV2,  Is
>                 >         > there any concerns about that?
>                 >         >
>                 >         >
>                 >         > I don't know if it is the right time to
>                 bring this up :D .
>                 >         >
>                 >         > Thanks,
>                 >         > Banashankar (bana_k)
>                 >         >
>                 >         >
>                 >
>                 >         >
>                 >

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From banveerad at gmail.com  Tue Sep 22 22:42:04 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 15:42:04 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1442960561.30604.5.camel@localhost>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
Message-ID: <CABkBM5HpdfD-M+VOEBhbFFrxJh5AddOByAiq+Yfd_EJn-kD+kQ@mail.gmail.com>

So if that's the case, I think we can just remove the old V1 resource
implementation and support only the V2 resources:

OS::Neutron::LoadBalancer
OS::Neutron::Listener
OS::Neutron::Pool
OS::Neutron::PoolMember
OS::Neutron::HealthMonitor

Need input from Heat experts :).

Thanks
Banashankar
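If new v2 resource classes do land in Heat (whether replacing v1 or living alongside it), they would presumably be exposed through Heat's usual module-level resource_mapping() plugin hook. A rough, self-contained sketch of that registration pattern, with placeholder classes standing in for real heat.engine.resource.Resource subclasses and illustrative type names:

```python
# Sketch of how LBaaS v2 resource plugins could be registered in Heat.
# In a real plugin these classes would subclass
# heat.engine.resource.Resource; plain stubs are used here so the
# sketch is self-contained. The type names are illustrative.

class LoadBalancer(object):
    """Placeholder for an OS::Neutron::LoadBalancer v2 plugin."""

class Listener(object):
    """Placeholder for an OS::Neutron::Listener plugin."""

class Pool(object):
    """Placeholder for an OS::Neutron::Pool v2 plugin."""

class PoolMember(object):
    """Placeholder for an OS::Neutron::PoolMember v2 plugin."""

class HealthMonitor(object):
    """Placeholder for an OS::Neutron::HealthMonitor v2 plugin."""


def resource_mapping():
    # Heat calls this module-level hook on plugin modules to discover
    # which template resource type names map to which classes.
    return {
        'OS::Neutron::LoadBalancer': LoadBalancer,
        'OS::Neutron::Listener': Listener,
        'OS::Neutron::Pool': Pool,
        'OS::Neutron::PoolMember': PoolMember,
        'OS::Neutron::HealthMonitor': HealthMonitor,
    }
```

Whether these names replace the v1 mapping or sit next to V2-suffixed ones is exactly the open question in this thread.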


On Tue, Sep 22, 2015 at 3:22 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Well I'd hate to have the V2 postfix on it because V1 will be deprecated
> and removed, which means the V2 being there would be lame.  Is there any
> kind of precedent set for how to handle this?
>
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> > So are we thinking of making it as ?
> > OS::Neutron::LoadBalancerV2
> >
> > OS::Neutron::ListenerV2
> >
> > OS::Neutron::PoolV2
> >
> > OS::Neutron::PoolMemberV2
> >
> > OS::Neutron::HealthMonitorV2
> >
> >
> >
> > and add all those into the loadbalancer.py of heat engine ?
> >
> > Thanks
> > Banashankar
> >
> >
> >
> > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> > <skraynev at mirantis.com> wrote:
> >         Brandon.
> >
> >
> >         As I understand, v1 and v2 also differ in the list of
> >         objects and in the relationships between them.
> >         So I don't think that it will be easy to upgrade the old
> >         resources (unfortunately).
> >         I'd agree with Kevin's second suggestion about implementing
> >         new resources in this case.
> >
> >
> >         I see that a lot of guys want to help with it :) And I
> >         suppose that Rabi Mishra and I may try to help with it,
> >         because we were involved in the implementation of the v1
> >         resources in Heat.
> >         Following is the list of v1 lbaas resources in Heat:
> >
> >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
> >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
> >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
> >
> >
> >
> >         Also, I suppose, that it may be discussed during summit
> >         talks :)
> >         Will add to etherpad with potential sessions.
> >
> >
> >
> >         Regards,
> >         Sergey.
> >
> >         On 22 September 2015 at 22:27, Brandon Logan
> >         <brandon.logan at rackspace.com> wrote:
> >                 There is some overlap, but there were some incompatible
> >                 differences when
> >                 we started designing v2.  I'm sure the same issues
> >                 will arise this time
> >                 around so new resources sounds like the path to go.
> >                 However, I do not
> >                 know much about Heat and the resources so I'm speaking
> >                 on a very
> >                 uneducated level here.
> >
> >                 Thanks,
> >                 Brandon
> >                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
> >                 > We're using the v1 resources...
> >                 >
> >                 > If the v2 ones are compatible and can seamlessly
> >                 upgrade, great
> >                 >
> >                 > Otherwise, make new ones please.
> >                 >
> >                 > Thanks,
> >                 > Kevin
> >                 >
> >                 >
> >
>  ______________________________________________________________________
> >                 > From: Banashankar KV [banveerad at gmail.com]
> >                 > Sent: Tuesday, September 22, 2015 10:07 AM
> >                 > To: OpenStack Development Mailing List (not for
> >                 usage questions)
> >                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
> >                 support for
> >                 > LbaasV2
> >                 >
> >                 >
> >                 >
> >                 > Hi Brandon,
> >                 > Work in progress, but need some input on the way we
> >                 want them, like
> >                 > replace the existing lbaasv1 or we still need to
> >                 support them ?
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 > Thanks
> >                 > Banashankar
> >                 >
> >                 >
> >                 >
> >                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
> >                 > <brandon.logan at rackspace.com> wrote:
> >                 >         Hi Banashankar,
> >                 >         I think it'd be great if you got this going.
> >                 One of those
> >                 >         things we
> >                 >         want to have and people ask for but has
> >                 always gotten a lower
> >                 >         priority
> >                 >         due to the critical things needed.
> >                 >
> >                 >         Thanks,
> >                 >         Brandon
> >                 >         On Mon, 2015-09-21 at 17:57 -0700,
> >                 Banashankar KV wrote:
> >                 >         > Hi All,
> >                 >         > I was thinking of starting the work on
> >                 heat to support
> >                 >         LBaasV2,  Is
> >                 >         > there any concerns about that?
> >                 >         >
> >                 >         >
> >                 >         > I don't know if it is the right time to
> >                 bring this up :D .
> >                 >         >
> >                 >         > Thanks,
> >                 >         > Banashankar (bana_k)
> >                 >         >
> >                 >         >
> >                 >
> >                 >         >
> >                 >
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/19f0ec45/attachment.html>

From robertc at robertcollins.net  Tue Sep 22 22:43:47 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Wed, 23 Sep 2015 10:43:47 +1200
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <5601CD90.2050102@windriver.com>
References: <5601CD90.2050102@windriver.com>
Message-ID: <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>

On 23 September 2015 at 09:52, Chris Friesen
<chris.friesen at windriver.com> wrote:
> Hi,
>
> I recently had an issue with one file out of a dozen or so in
> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
> running stable/kilo if it makes a difference.
>
> Looking at the code in volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm
> wondering if we should do a fsync() before the close().  The way it stands
> now, it seems like it might be possible to write the file, start making use
> of it, and then take a power outage before it actually gets written to
> persistent storage.  When we come back up we could have an instance
> expecting to make use of it, but no target information in the on-disk copy
> of the file.

If it's being kept in sync with DB records, and won't self-heal from
this situation, then yes. E.g. if the overall workflow is something
like

receive RPC request
do some work, including writing the file
reply to the RPC with 'ok'
-> gets written to the DB and the state recorded as ACTIVE or whatever..

then yes, we need to robustly behave as though it's active even if the
machine is power cycled.

> Even then, fsync() explicitly says that it doesn't ensure that the directory
> changes have reached disk...for that another explicit fsync() on the file
> descriptor for the directory is needed.
> So I think for robustness we'd want to add
>
> f.flush()
> os.fsync(f.fileno())

fdatasync would be better - we don't care about the metadata.

> between the existing f.write() and f.close(), and then add something like:
>
> f = open(volumes_dir, 'w+')
> os.fsync(f.fileno())
> f.close()

Yup, but again - fdatasync here I believe.

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud
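For reference, the write sequence being discussed can be sketched as below. This is a generic illustration, not the actual Cinder patch; the helper name and paths are made up. Note that a directory has to be opened read-only via os.open() to fsync it — a plain open(volumes_dir, 'w+') on a directory would raise IsADirectoryError.

```python
import os

def durable_write(path, data):
    """Write `data` to `path` so it survives a power cycle.

    Flush Python's userspace buffer, push the file contents to disk
    with fdatasync (file metadata is not needed, per the thread), then
    fsync the containing directory so the new directory entry itself
    is durable.
    """
    with open(path, 'w') as f:
        f.write(data)
        f.flush()                    # flush Python's buffer to the kernel
        os.fdatasync(f.fileno())     # force file data to stable storage

    # Directories cannot be opened for writing; open read-only and
    # fsync that descriptor to persist the directory entry.
    dir_fd = os.open(os.path.dirname(path) or '.', os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

With this, a zero-length file after a power outage (the symptom reported above) should no longer be possible once the RPC reply has been sent.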


From johnsomor at gmail.com  Tue Sep 22 22:54:46 2015
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 22 Sep 2015 15:54:46 -0700
Subject: [openstack-dev] [neutron][lbaas] Proposing Michael Johnson for
 neutron-lbaas core team
In-Reply-To: <C79240AD-010B-4C21-A0FD-02B47A5F1B7D@gmail.com>
References: <FB4D40AE-3C76-4069-81D7-6593C5AFEB89@parksidesoftware.com>
 <79408BAC-47AB-4CA6-A8ED-8ABD6C44CC6F@workday.com>
 <c818a3f9b1e64888bcd15702dd73284f@544124-OEXCH02.ror-uc.rackspace.com>
 <5ED89A5E-56CA-45DC-B056-6E06324DF45F@parksidesoftware.com>
 <C79240AD-010B-4C21-A0FD-02B47A5F1B7D@gmail.com>
Message-ID: <CAMH0MgKad+cp-8gMcgxWhFnevhu88OWXfzJ0=tUnYhb_yaZN3A@mail.gmail.com>

Thank you team.  I look forward to contributing to neutron-LBaaS!

Michael

On Mon, Sep 21, 2015 at 4:27 PM,  <sleipnir012 at gmail.com> wrote:
> Congrats Michael. It is well deserved
>
> Susanne
>
> Sent from my iPhone
>
> On Sep 21, 2015, at 6:48 PM, Doug Wiegley <dougwig at parksidesoftware.com>
> wrote:
>
> HI all,
>
> Since all cores have responded, this passes. Welcome, Michael!
>
> Thanks,
> doug
>
>
> On Sep 17, 2015, at 12:29 PM, Brandon Logan <brandon.logan at rackspace.com>
> wrote:
>
> I'm off today so my +1 is more like a +2
>
> On Sep 17, 2015 12:59 PM, Edgar Magana <edgar.magana at workday.com> wrote:
> Not a core but I would like to share my +1 about Michael.
>
> Cheers,
>
> Edgar
>
>
>
>
> On 9/16/15, 3:33 PM, "Doug Wiegley" <dougwig at parksidesoftware.com> wrote:
>
>>Hi all,
>>
>>As the Lieutenant of the advanced services, I nominate Michael Johnson to
>> be a member of the neutron-lbaas core reviewer team.
>>
>>Review stats are in line with other cores[2], and Michael has been
>> instrumental in both neutron-lbaas and octavia.
>>
>>Existing cores, please vote +1/-1 for his addition to the team (that's
>> Brandon, Phil, Al, and Kyle.)
>>
>>Thanks,
>>doug
>>
>>1.
>> http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
>>2. http://stackalytics.com/report/contribution/neutron-lbaas/90
>>
>>
>>__________________________________________________________________________
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From jim at jimrollenhagen.com  Tue Sep 22 23:00:05 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 22 Sep 2015 16:00:05 -0700
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <5601C87C.2010609@internap.com>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
Message-ID: <20150922230005.GT21846@jimrollenhagen.com>

On Tue, Sep 22, 2015 at 05:30:36PM -0400, Mathieu Gagné wrote:
> On 2015-09-22 4:52 PM, Sean Dague wrote:
> > On 09/22/2015 03:16 PM, Mathieu Gagn? wrote:
> >>
> >> The oslo_middleware.ssl middleware looks to offer little overhead and
> >> offer the maximum flexibility. I appreciate the wish to use the Keystone
> >> catalog but I don't feel this is the right answer.
> >>
> >> For example, if I deploy Bifrost without Keystone, I won't have a
> >> catalog to rely on and will still have the same lack of SSL termination
> >> proxy support.
> >>
> >> The simplest solution is often the right one.
> > 
> > I do get there are specific edge cases here, but I don't think that in
> > the general case we should be pushing a mode where Keystone is optional.
> > It is a bedrock of our system.
> > 
> 
> I understand that Keystone is "a bedrock of our system". This however
> does not make it THE solution when simpler proven existing ones exist. I
> fail to understand why other solutions can't be considered.
> 
> I opened a dialog with the community to express my concerns about the
> lack of universal support for SSL termination proxy so we can find a
> solution together which will cover ALL use cases.
> 
> I proposed using an existing solution (oslo_middleware.ssl) (that I
> didn't know of until now) which takes little to no time to implement and
> covers a lot of use cases and special edge cases.
> 
> Operators encounter and *HAVE* to handle a lot of edge cases. We are
> trying *very hard* to open up a dialog with the developer community so
> they can be heard, understood and addressed by sharing our real world
> use cases.
> 
> In that specific case, my impression is that they unfortunately happen
> to not be a priority when evaluating solutions and will actually make my
> job harder. I'm sad to see we can't come with an agreement on that one.
> I feel like I failed to properly communicate my needs as an operator and
> make them understood to others.

FWIW, Ironic fully supports a standalone deployment, and will continue
to do that for the foreseeable future. If it means we need a config that
is inconsistent with the rest of OpenStack, that's what we'll be doing.

// jim
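For context, the middleware approach discussed in this thread works roughly as sketched below: the SSL-terminating proxy forwards the request over plain HTTP but sets a header such as X-Forwarded-Proto, and the middleware rewrites wsgi.url_scheme so the API service generates https:// URLs. This is an illustrative WSGI sketch, not the actual oslo_middleware.ssl code; the class name and default header key are assumptions.

```python
class ProxiedSchemeMiddleware(object):
    """WSGI middleware that trusts a proxy-supplied scheme header.

    When an SSL-terminating proxy sits in front of the service, the
    service itself only ever sees http; honoring the forwarded-proto
    header lets it build correct https links without any knowledge of
    the Keystone catalog. Only deploy behind a trusted proxy, since
    clients could otherwise spoof the header.
    """

    def __init__(self, app, header='HTTP_X_FORWARDED_PROTO'):
        self.app = app
        self.header = header  # WSGI environ key for X-Forwarded-Proto

    def __call__(self, environ, start_response):
        scheme = environ.get(self.header, '').lower()
        if scheme in ('http', 'https'):
            # Rewrite the scheme the wrapped app sees, so any URLs it
            # generates (versions document, pagination links, etc.)
            # use the external scheme.
            environ['wsgi.url_scheme'] = scheme
        return self.app(environ, start_response)
```

Because this inspects only the request, it works the same whether or not Keystone is deployed, which is the standalone (e.g. Bifrost/Ironic) case raised above.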


From banveerad at gmail.com  Tue Sep 22 23:16:42 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 16:16:42 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
Message-ID: <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>

But I think, even though V2 has introduced some new components and the
whole association of the resources with each other has changed, we should
still be able to do what Kevin has mentioned?

Thanks
Banashankar


On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> There needs to be a way to have both v1 and v2 supported in one engine....
>
> Say I have templates that use v1 already in existence (I do), and I want
> to be able to heat stack update on them one at a time to v2. This will
> replace the v1 lb with v2, migrating the floating ip from the v1 lb to the
> v2 one. This gives a smoothish upgrade path.
>
> Thanks,
> Kevin
> ________________________________________
> From: Brandon Logan [brandon.logan at RACKSPACE.COM]
> Sent: Tuesday, September 22, 2015 3:22 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>
> Well I'd hate to have the V2 postfix on it because V1 will be deprecated
> and removed, which means the V2 being there would be lame.  Is there any
> kind of precedent set for how to handle this?
>
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> > So are we thinking of making it as ?
> > OS::Neutron::LoadBalancerV2
> >
> > OS::Neutron::ListenerV2
> >
> > OS::Neutron::PoolV2
> >
> > OS::Neutron::PoolMemberV2
> >
> > OS::Neutron::HealthMonitorV2
> >
> >
> >
> > and add all those into the loadbalancer.py of heat engine ?
> >
> > Thanks
> > Banashankar
> >
> >
> >
> > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> > <skraynev at mirantis.com> wrote:
> >         Brandon.
> >
> >
> >         As I understand, v1 and v2 also differ in the list of
> >         objects and in the relationships between them.
> >         So I don't think that it will be easy to upgrade the old
> >         resources (unfortunately).
> >         I'd agree with Kevin's second suggestion about implementing
> >         new resources in this case.
> >
> >
> >         I see that a lot of guys want to help with it :) And I
> >         suppose that Rabi Mishra and I may try to help with it,
> >         because we were involved in the implementation of the v1
> >         resources in Heat.
> >         Following is the list of v1 lbaas resources in Heat:
> >
> >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
> >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
> >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
> >
> >
> >
> >         Also, I suppose, that it may be discussed during summit
> >         talks :)
> >         Will add to etherpad with potential sessions.
> >
> >
> >
> >         Regards,
> >         Sergey.
> >
> >         On 22 September 2015 at 22:27, Brandon Logan
> >         <brandon.logan at rackspace.com> wrote:
> >                 There is some overlap, but there were some incompatible
> >                 differences when
> >                 we started designing v2.  I'm sure the same issues
> >                 will arise this time
> >                 around so new resources sounds like the path to go.
> >                 However, I do not
> >                 know much about Heat and the resources so I'm speaking
> >                 on a very
> >                 uneducated level here.
> >
> >                 Thanks,
> >                 Brandon
> >                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
> >                 > We're using the v1 resources...
> >                 >
> >                 > If the v2 ones are compatible and can seamlessly
> >                 upgrade, great
> >                 >
> >                 > Otherwise, make new ones please.
> >                 >
> >                 > Thanks,
> >                 > Kevin
> >                 >
> >                 >
> >
>  ______________________________________________________________________
> >                 > From: Banashankar KV [banveerad at gmail.com]
> >                 > Sent: Tuesday, September 22, 2015 10:07 AM
> >                 > To: OpenStack Development Mailing List (not for
> >                 usage questions)
> >                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
> >                 support for
> >                 > LbaasV2
> >                 >
> >                 >
> >                 >
> >                 > Hi Brandon,
> >                 > Work in progress, but need some input on the way we
> >                 want them, like
> >                 > replace the existing lbaasv1 or we still need to
> >                 support them ?
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 > Thanks
> >                 > Banashankar
> >                 >
> >                 >
> >                 >
> >                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
> >                 > <brandon.logan at rackspace.com> wrote:
> >                 >         Hi Banashankar,
> >                 >         I think it'd be great if you got this going.
> >                 One of those
> >                 >         things we
> >                 >         want to have and people ask for but has
> >                 always gotten a lower
> >                 >         priority
> >                 >         due to the critical things needed.
> >                 >
> >                 >         Thanks,
> >                 >         Brandon
> >                 >         On Mon, 2015-09-21 at 17:57 -0700,
> >                 Banashankar KV wrote:
> >                 >         > Hi All,
> >                 >         > I was thinking of starting the work on
> >                 heat to support
> >                 >         LBaasV2,  Is
> >                 >         > there any concerns about that?
> >                 >         >
> >                 >         >
> >                 >         > I don't know if it is the right time to
> >                 bring this up :D .
> >                 >         >
> >                 >         > Thanks,
> >                 >         > Banashankar (bana_k)
> >                 >         >
> >                 >         >
> >                 >
> >                 >         >
> >                 >
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/e67d8869/attachment.html>

From Kevin.Fox at pnnl.gov  Tue Sep 22 23:33:03 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 22 Sep 2015 23:33:03 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>,
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>

Yes, hence the need to support the v2 resources as separate things. Then I can rewrite the templates to include the new resources rather than the old resources as appropriate; i.e., it will be a porting effort to rewrite them. Then do a heat update on the stack to migrate it from lbv1 to lbv2. Since they are different resources, it should create the new and delete the old.

Thanks,
Kevin
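
A sketch of the porting step described above, using the V2 type names proposed later in this thread (the actual v2 resource names and properties were still undecided at this point, so everything here is illustrative):

```yaml
# Before: a v1 fragment using the existing OS::Neutron::LoadBalancer /
# Pool resources. After: the same role expressed with hypothetical v2
# resources; a subsequent `heat stack-update` would create these and
# delete the v1 ones, since Heat treats them as different resource types.
resources:
  my_lb:
    type: OS::Neutron::LoadBalancerV2     # illustrative name
    properties:
      vip_subnet: private-subnet          # assumed property
  my_listener:
    type: OS::Neutron::ListenerV2         # illustrative name
    properties:
      loadbalancer: {get_resource: my_lb}
      protocol: HTTP
      protocol_port: 80
```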

________________________________
From: Banashankar KV [banveerad at gmail.com]
Sent: Tuesday, September 22, 2015 4:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

But I think, even though V2 has introduced some new components and the whole association of the resources with each other has changed, we should still be able to do what Kevin has mentioned?

Thanks
Banashankar


On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
There needs to be a way to have both v1 and v2 supported in one engine....

Say I have templates that use v1 already in existence (I do), and I want to be able to heat stack update on them one at a time to v2. This will replace the v1 lb with v2, migrating the floating ip from the v1 lb to the v2 one. This gives a smoothish upgrade path.

Thanks,
Kevin
________________________________________
From: Brandon Logan [brandon.logan at RACKSPACE.COM<mailto:brandon.logan at RACKSPACE.COM>]
Sent: Tuesday, September 22, 2015 3:22 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

Well I'd hate to have the V2 postfix on it because V1 will be deprecated
and removed, which means the V2 being there would be lame.  Is there any
kind of precedent set for how to handle this?

Thanks,
Brandon
On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> So are we thinking of making it as ?
> OS::Neutron::LoadBalancerV2
>
> OS::Neutron::ListenerV2
>
> OS::Neutron::PoolV2
>
> OS::Neutron::PoolMemberV2
>
> OS::Neutron::HealthMonitorV2
>
>
>
> and add all those into the loadbalancer.py of heat engine ?
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> <skraynev at mirantis.com<mailto:skraynev at mirantis.com>> wrote:
>         Brandon.
>
>
>         As I understand it, v1 and v2 differ both in their list of
>         objects and in the relationships between them, so I don't
>         think it will be easy to upgrade old resources (unfortunately).
>         I'd agree with Kevin's second suggestion about implementing
>         new resources in this case.
>
>
>         I see that a lot of people want to help with it :) And I
>         suppose that Rabi Mishra and I may try to help, because we
>         were involved in the implementation of the v1 resources in
>         Heat.
>         Here is the list of v1 lbaas resources in Heat:
>
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>
>
>
>         Also, I suppose that it may be discussed during the summit
>         talks :)
>         I will add it to the etherpad of potential sessions.
>
>
>
>         Regards,
>         Sergey.
>
>         On 22 September 2015 at 22:27, Brandon Logan
>         <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
>                 There is some overlap, but there were some incompatible
>                 differences when
>                 we started designing v2.  I'm sure the same issues
>                 will arise this time
>                 around so new resources sounds like the path to go.
>                 However, I do not
>                 know much about Heat and the resources so I'm speaking
>                 on a very
>                 uneducated level here.
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>                 > We're using the v1 resources...
>                 >
>                 > If the v2 ones are compatible and can seamlessly
>                 upgrade, great
>                 >
>                 > Otherwise, make new ones please.
>                 >
>                 > Thanks,
>                 > Kevin
>                 >
>                 >
>                 ______________________________________________________________________
>                 > From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
>                 > Sent: Tuesday, September 22, 2015 10:07 AM
>                 > To: OpenStack Development Mailing List (not for
>                 usage questions)
>                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for
>                 > LbaasV2
>                 >
>                 >
>                 >
>                 > Hi Brandon,
>                 > Work in progress, but need some input on the way we
>                 want them, like
>                 > replace the existing lbaasv1 or we still need to
>                 support them ?
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>                 > <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
>                 >         Hi Banashankar,
>                 >         I think it'd be great if you got this going.
>                 One of those
>                 >         things we
>                 >         want to have and people ask for but has
>                 always gotten a lower
>                 >         priority
>                 >         due to the critical things needed.
>                 >
>                 >         Thanks,
>                 >         Brandon
>                 >         On Mon, 2015-09-21 at 17:57 -0700,
>                 Banashankar KV wrote:
>                 >         > Hi All,
>                 >         > I was thinking of starting the work on
>                 heat to support
>                 >         LBaasV2,  Is
>                 >         > there any concerns about that?
>                 >         >
>                 >         >
>                 >         > I don't know if it is the right time to
>                 bring this up :D .
>                 >         >
>                 >         > Thanks,
>                 >         > Banashankar (bana_k)
>                 >         >
>                 >         >
>                 >
>                 >         >
>                 >

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/d7e496e3/attachment.html>

From banveerad at gmail.com  Tue Sep 22 23:40:20 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 16:40:20 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
Message-ID: <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>

Ok, sounds good. So now the question is: how should we name the new V2
resources?


Thanks
Banashankar


On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> Yes, hence the need to support the v2 resources as separate things. Then I
> can rewrite the templates to include the new resources rather than the old
> resources as appropriate. IE, it will be a porting effort to rewrite them.
> Then do a heat update on the stack to migrate it from lbv1 to lbv2. Since
> they are different resources, it should create the new and delete the old.
>
> [earlier quoted thread and list footers snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/40aa68c6/attachment.html>

From brandon.logan at RACKSPACE.COM  Tue Sep 22 23:45:22 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Tue, 22 Sep 2015 23:45:22 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
Message-ID: <1442965523.30604.7.camel@localhost>

Forgive my ignorance on this, but by resources are we talking about API
resources? If so, I would hate for V2 to be in the names, because
backwards compatibility means keeping that around.  If they're not API
resources, then if we appended V2 to the resource names right now, would
we be able to remove the V2 once V1 gets removed?

Thanks,
Brandon
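
One relevant detail, sketched below (the classes are hypothetical): Heat resource plug-ins expose their template type names through a module-level `resource_mapping()` function, so the name is a template-registry key rather than a Neutron API resource. That suggests a V2 suffix could later be remapped or aliased without an API change.

```python
# Sketch only -- NeutronResource is a stand-in for Heat's real resource
# base class, and the V2 classes are hypothetical.

class NeutronResource:
    """Minimal stand-in for Heat's Neutron resource base class."""
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties


class LoadBalancerV2(NeutronResource):
    """Hypothetical lbaas v2 load balancer resource."""


class ListenerV2(NeutronResource):
    """Hypothetical lbaas v2 listener resource."""


def resource_mapping():
    # Heat calls this to learn which template type names the plug-in
    # provides; changing a key here renames the template-facing type
    # without touching the underlying Neutron API.
    return {
        'OS::Neutron::LoadBalancerV2': LoadBalancerV2,
        'OS::Neutron::ListenerV2': ListenerV2,
    }
```

If the suffix ever needed to go away, the same classes could simply be registered under new (or additional) keys.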
On Tue, 2015-09-22 at 16:40 -0700, Banashankar KV wrote:
> Ok, sounds good. So now the question is how should we name the new V2
> resources ?
> 
> 
> 
> Thanks 
> Banashankar
> 
> 
> 
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
> wrote:
>         Yes, hence the need to support the v2 resources as seperate
>         things. Then I can rewrite the templates to include the new
>         resources rather then the old resources as appropriate. IE, it
>         will be a porting effort to rewrite them. Then do a heat
>         update on the stack to migrate it from lbv1 to lbv2. Since
>         they are different resources, it should create the new and
>         delete the old.
>         
>         Thanks,
>         Kevin
>         
>         
>         ______________________________________________________________
>         From: Banashankar KV [banveerad at gmail.com]
>         Sent: Tuesday, September 22, 2015 4:16 PM
>         
>         To: OpenStack Development Mailing List (not for usage
>         questions)
>         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>         for LbaasV2
>         
>         
>         
>         
>         But I think, V2 has introduced some new components and whole
>         association of the resources with each other is changed, we
>         should be still able to do what Kevin has mentioned ?
>         
>         Thanks  
>         Banashankar
>         
>         
>         
>         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>         <Kevin.Fox at pnnl.gov> wrote:
>                 There needs to be a way to have both v1 and v2
>                 supported in one engine....
>                 
>                 Say I have templates that use v1 already in existence
>                 (I do), and I want to be able to heat stack update on
>                 them one at a time to v2. This will replace the v1 lb
>                 with v2, migrating the floating ip from the v1 lb to
>                 the v2 one. This gives a smoothish upgrade path.
>                 
>                 Thanks,
>                 Kevin
>                 ________________________________________
>                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>                 Sent: Tuesday, September 22, 2015 3:22 PM
>                 To: openstack-dev at lists.openstack.org
>                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for LbaasV2
>                 
>                 Well I'd hate to have the V2 postfix on it because V1
>                 will be deprecated
>                 and removed, which means the V2 being there would be
>                 lame.  Is there any
>                 kind of precedent set for for how to handle this?
>                 
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>                 wrote:
>                 > So are we thinking of making it as ?
>                 > OS::Neutron::LoadBalancerV2
>                 >
>                 > OS::Neutron::ListenerV2
>                 >
>                 > OS::Neutron::PoolV2
>                 >
>                 > OS::Neutron::PoolMemberV2
>                 >
>                 > OS::Neutron::HealthMonitorV2
>                 >
>                 >
>                 >
>                 > and add all those into the loadbalancer.py of heat
>                 engine ?
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev


From Kevin.Fox at pnnl.gov  Tue Sep 22 23:48:05 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 22 Sep 2015 23:48:05 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>,
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>

That's the problem. :/

I can't think of a way to have them coexist without breaking old templates, including v2 in the name, or adding a flag on the resource saying the version is v2. And as an app developer I'd rather not have my existing templates break.

I haven't compared the APIs at all, but is there a required field in v2 that is different enough from v1 that, by its simple existence in the resource, you can tell a v2 object from a v1 object? Would something like that work? PoolMember wouldn't have to change; the same resource could probably work for whatever lb it was pointing at, I'm guessing.

Thanks,
Kevin
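[Editor's sketch of the duck-typing check described above. The discriminating field name ("listeners", which only a v2 load balancer would carry) is a made-up assumption for illustration, not taken from the actual LBaaS v1/v2 schemas:]

```python
# Hypothetical sketch: guess the LBaaS API version from the properties of a
# load-balancer resource by the presence of a v2-only required field.
# The field name "listeners" is an illustrative assumption, not a verified
# schema fact.
def lbaas_version(properties):
    """Return 2 if the properties look like a v2 load balancer, else 1."""
    return 2 if 'listeners' in properties else 1

# Made-up property dicts in the two styles:
v1_props = {'vip': {'address': '10.0.0.5'}}
v2_props = {'listeners': [{'protocol': 'HTTP'}]}
```

If such a field exists, a single resource class could dispatch to the right client call based on this check.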


________________________________
From: Banashankar KV [banveerad at gmail.com]
Sent: Tuesday, September 22, 2015 4:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

OK, sounds good. So now the question is: how should we name the new V2 resources?


Thanks
Banashankar


On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
Yes, hence the need to support the v2 resources as separate things. Then I can rewrite the templates to include the new resources rather than the old resources as appropriate. I.e., it will be a porting effort to rewrite them. Then do a heat update on the stack to migrate it from lbv1 to lbv2. Since they are different resources, it should create the new and delete the old.

Thanks,
Kevin

________________________________
From: Banashankar KV [banveerad at gmail.com]
Sent: Tuesday, September 22, 2015 4:16 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

But given that V2 has introduced some new components and the whole association between the resources has changed, would we still be able to do what Kevin has mentioned?

Thanks
Banashankar


On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
There needs to be a way to have both v1 and v2 supported in one engine....

Say I have templates that use v1 already in existence (I do), and I want to be able to heat stack update on them one at a time to v2. This will replace the v1 lb with v2, migrating the floating ip from the v1 lb to the v2 one. This gives a smoothish upgrade path.

Thanks,
Kevin
________________________________________
From: Brandon Logan [brandon.logan at RACKSPACE.COM]
Sent: Tuesday, September 22, 2015 3:22 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

Well I'd hate to have the V2 postfix on it because V1 will be deprecated
and removed, which means the V2 being there would be lame.  Is there any
kind of precedent set for how to handle this?

Thanks,
Brandon
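[For context on how such types get wired in: Heat resource plugins expose new template type names through a resource_mapping() hook. A sketch of what registering V2-suffixed types might look like; the class bodies here are hypothetical stand-ins, only the resource_mapping() convention is Heat's own:]

```python
# Hypothetical sketch of registering V2-suffixed LBaaS resources in Heat's
# loadbalancer.py.  The resource classes are stand-ins (real ones would
# subclass Heat's neutron resource base); only the resource_mapping()
# hook is the actual Heat plugin convention.
class LoadBalancerV2(object):
    pass

class ListenerV2(object):
    pass

def resource_mapping():
    # Heat scans plugin modules for this function to learn which template
    # type names map to which resource classes.
    return {
        'OS::Neutron::LoadBalancerV2': LoadBalancerV2,
        'OS::Neutron::ListenerV2': ListenerV2,
    }
```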
On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> So are we thinking of making it as ?
> OS::Neutron::LoadBalancerV2
>
> OS::Neutron::ListenerV2
>
> OS::Neutron::PoolV2
>
> OS::Neutron::PoolMemberV2
>
> OS::Neutron::HealthMonitorV2
>
>
>
> and add all those into the loadbalancer.py of heat engine ?
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>         <skraynev at mirantis.com> wrote:
>         Brandon.
>
>
>         As I understand it, v1 and v2 have differences in the list of
>         objects and also in the relationships between them.
>         So I don't think that it will be easy to upgrade old resources
>         (unfortunately).
>         I'd agree with Kevin's second suggestion about implementing
>         new resources in this case.
>
>
>         I see that a lot of people want to help with it :) And I
>         suppose that Rabi Mishra and I may try to help with it,
>         because we were involved in the implementation of the v1
>         resources in Heat.
>         Following is the list of v1 LBaaS resources in Heat:
>
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>
>
>
>         Also, I suppose, that it may be discussed during summit
>         talks :)
>         Will add to etherpad with potential sessions.
>
>
>
>         Regards,
>         Sergey.
>
>         On 22 September 2015 at 22:27, Brandon Logan
>         <brandon.logan at rackspace.com> wrote:
>                 There is some overlap, but there were some incompatible
>                 differences when
>                 we started designing v2.  I'm sure the same issues
>                 will arise this time
>                 around so new resources sounds like the path to go.
>                 However, I do not
>                 know much about Heat and the resources so I'm speaking
>                 on a very
>                 uneducated level here.
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>                 > We're using the v1 resources...
>                 >
>                 > If the v2 ones are compatible and can seamlessly
>                 upgrade, great
>                 >
>                 > Otherwise, make new ones please.
>                 >
>                 > Thanks,
>                 > Kevin
>                 >
>                 >
>                 ______________________________________________________________________
>                 > From: Banashankar KV [banveerad at gmail.com]
>                 > Sent: Tuesday, September 22, 2015 10:07 AM
>                 > To: OpenStack Development Mailing List (not for
>                 usage questions)
>                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for
>                 > LbaasV2
>                 >
>                 >
>                 >
>                 > Hi Brandon,
>                 > Work in progress, but need some input on the way we
>                 want them, like
>                 > replace the existing lbaasv1 or we still need to
>                 support them ?
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>                 > <brandon.logan at rackspace.com> wrote:
>                 >         Hi Banashankar,
>                 >         I think it'd be great if you got this going.
>                 One of those
>                 >         things we
>                 >         want to have and people ask for but has
>                 always gotten a lower
>                 >         priority
>                 >         due to the critical things needed.
>                 >
>                 >         Thanks,
>                 >         Brandon
>                 >         On Mon, 2015-09-21 at 17:57 -0700,
>                 Banashankar KV wrote:
>                 >         > Hi All,
>                 >         > I was thinking of starting the work on
>                 heat to support
>                 >         LBaasV2,  Is
>                 >         > there any concerns about that?
>                 >         >
>                 >         >
>                 >         > I don't know if it is the right time to
>                 bring this up :D .
>                 >         >
>                 >         > Thanks,
>                 >         > Banashankar (bana_k)
>                 >         >
>                 >         >


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From harlowja at outlook.com  Tue Sep 22 23:48:49 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Tue, 22 Sep 2015 16:48:49 -0700
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
Message-ID: <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>

A present:

>>> import contextlib
>>> import os
>>>
>>> @contextlib.contextmanager
... def synced_file(path, mode='wb'):
...     with open(path, mode) as fh:
...         yield fh
...         os.fdatasync(fh.fileno())
...
>>> with synced_file("/tmp/b.txt") as fh:
...     fh.write("b")
...

Have fun :-P

-Josh

Robert Collins wrote:
> On 23 September 2015 at 09:52, Chris Friesen
> <chris.friesen at windriver.com>  wrote:
>> Hi,
>>
>> I recently had an issue with one file out of a dozen or so in
>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
>> running stable/kilo if it makes a difference.
>>
>> Looking at the code in volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm
>> wondering if we should do a fsync() before the close().  The way it stands
>> now, it seems like it might be possible to write the file, start making use
>> of it, and then take a power outage before it actually gets written to
>> persistent storage.  When we come back up we could have an instance
>> expecting to make use of it, but no target information in the on-disk copy
>> of the file.
>
> If its being kept in sync with DB records, and won't self-heal from
> this situation, then yes. e.g. if the overall workflow is something
> like
>
> receive RPC request
> do some work, including writing the file
> reply to the RPC with 'ok'
> ->  gets written to the DB and the state recorded as ACTIVE or whatever..
>
> then yes, we need to behave as though its active even if the machine
> is power cycled robustly.
>
>> Even then, fsync() explicitly says that it doesn't ensure that the directory
>> changes have reached disk...for that another explicit fsync() on the file
>> descriptor for the directory is needed.
>> So I think for robustness we'd want to add
>>
>> f.flush()
>> os.fsync(f.fileno())
>
> fdatasync would be better - we don't care about the metadata.
>
>> between the existing f.write() and f.close(), and then add something like:
>>
>> f = open(volumes_dir, 'w+')
>> os.fsync(f.fileno())
>> f.close()
>
> Yup, but again - fdatasync here I believe.
>
> -Rob
>
>
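[Editor's note: the full pattern discussed in this thread — write, flush, fdatasync the file, then fsync the containing directory so the entry itself is durable — can be sketched as below. One correction to the proposal quoted above: a directory cannot be opened for writing with plain open(); os.open() with O_RDONLY is needed. This is a sketch of the idea under POSIX semantics, not the actual Cinder change:]

```python
import os

def durable_write(path, data):
    """Write bytes to path so both the file contents and the directory
    entry survive a power failure (POSIX/Linux semantics)."""
    with open(path, 'wb') as f:
        f.write(data)
        f.flush()                    # flush Python's userspace buffer
        os.fdatasync(f.fileno())     # push file data (not metadata) to disk
    # Sync the containing directory so the entry for `path` is durable too.
    dir_fd = os.open(os.path.dirname(path) or '.', os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

For a file that is rewritten in place, writing to a temporary name and os.rename()-ing over the target (then fsyncing the directory) additionally avoids readers ever seeing a half-written file.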


From dougwig at parksidesoftware.com  Tue Sep 22 23:53:26 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Tue, 22 Sep 2015 17:53:26 -0600
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
Message-ID: <04E7E723-90F2-4FC3-BF69-8179E2215FD3@parksidesoftware.com>

The other option would be to change the namespace (OS::LBaaS instead of OS::Neutron). The neutron CLI does something similar with neutron-lb-* versus neutron-lbaas-*, e.g.

One wrinkle with Heat supporting both is that neutron doesn't support both running at the same time, which certainly hurts the migration strategy. I think the answer at the time was that you could have different API servers running each version. Is that something that Heat can deal with?

(I still don't like that I can't run both at the same time, and would love to re-litigate that argument.  :-)  ).

Thanks,
doug



> On Sep 22, 2015, at 5:40 PM, Banashankar KV <banveerad at gmail.com> wrote:
> 
> OK, sounds good. So now the question is: how should we name the new V2 resources?
> 
> 
> Thanks 
> Banashankar
> 
> 
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> Yes, hence the need to support the v2 resources as separate things. Then I can rewrite the templates to include the new resources rather than the old resources as appropriate. I.e., it will be a porting effort to rewrite them. Then do a heat update on the stack to migrate it from lbv1 to lbv2. Since they are different resources, it should create the new and delete the old.
> 
> Thanks,
> Kevin
> 
> From: Banashankar KV [banveerad at gmail.com]
> Sent: Tuesday, September 22, 2015 4:16 PM
> 
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
> 
> But given that V2 has introduced some new components and the whole association between the resources has changed, would we still be able to do what Kevin has mentioned?
> 
> Thanks 
> Banashankar
> 
> 
> On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> There needs to be a way to have both v1 and v2 supported in one engine....
> 
> Say I have templates that use v1 already in existence (I do), and I want to be able to heat stack update on them one at a time to v2. This will replace the v1 lb with v2, migrating the floating ip from the v1 lb to the v2 one. This gives a smoothish upgrade path.
> 
> Thanks,
> Kevin
> ________________________________________
> From: Brandon Logan [brandon.logan at RACKSPACE.COM]
> Sent: Tuesday, September 22, 2015 3:22 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
> 
> Well I'd hate to have the V2 postfix on it because V1 will be deprecated
> and removed, which means the V2 being there would be lame.  Is there any
> kind of precedent set for how to handle this?
> 
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> > So are we thinking of making it as ?
> > OS::Neutron::LoadBalancerV2
> >
> > OS::Neutron::ListenerV2
> >
> > OS::Neutron::PoolV2
> >
> > OS::Neutron::PoolMemberV2
> >
> > OS::Neutron::HealthMonitorV2
> >
> >
> >
> > and add all those into the loadbalancer.py of heat engine ?
> >
> > Thanks
> > Banashankar
> >
> >
> >
> > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> > <skraynev at mirantis.com> wrote:
> >         Brandon.
> >
> >
> >         As I understand it, v1 and v2 have differences in the list of
> >         objects and also in the relationships between them.
> >         So I don't think that it will be easy to upgrade old resources
> >         (unfortunately).
> >         I'd agree with Kevin's second suggestion about implementing
> >         new resources in this case.
> >
> >
> >         I see that a lot of people want to help with it :) And I
> >         suppose that Rabi Mishra and I may try to help with it,
> >         because we were involved in the implementation of the v1
> >         resources in Heat.
> >         Following is the list of v1 LBaaS resources in Heat:
> >
> >
> >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
> >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
> >
> >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
> >
> >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
> >
> >
> >
> >         Also, I suppose it may be discussed during the summit
> >         talks :)
> >         I will add it to the etherpad of potential sessions.
> >
> >
> >
> >         Regards,
> >         Sergey.
> >
> >         On 22 September 2015 at 22:27, Brandon Logan
> >         <brandon.logan at rackspace.com <mailto:brandon.logan at rackspace.com>> wrote:
> >                 There is some overlap, but there were some incompatible
> >                 differences when
> >                 we started designing v2.  I'm sure the same issues
> >                 will arise this time
> >                 around, so new resources sound like the path to go.
> >                 However, I do not
> >                 know much about Heat and the resources so I'm speaking
> >                 on a very
> >                 uneducated level here.
> >
> >                 Thanks,
> >                 Brandon
> >                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
> >                 > We're using the v1 resources...
> >                 >
> >                 > If the v2 ones are compatible and can seamlessly
> >                 upgrade, great
> >                 >
> >                 > Otherwise, make new ones please.
> >                 >
> >                 > Thanks,
> >                 > Kevin
> >                 >
> >                 >
> >                 ______________________________________________________________________
> >                 > From: Banashankar KV [banveerad at gmail.com <mailto:banveerad at gmail.com>]
> >                 > Sent: Tuesday, September 22, 2015 10:07 AM
> >                 > To: OpenStack Development Mailing List (not for
> >                 usage questions)
> >                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
> >                 support for
> >                 > LbaasV2
> >                 >
> >                 >
> >                 >
> >                 > Hi Brandon,
> >                 > Work in progress, but I need some input on how we
> >                 > want them: replace the existing LBaaS v1 resources,
> >                 > or do we still need to support them?
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 >
> >                 > Thanks
> >                 > Banashankar
> >                 >
> >                 >
> >                 >
> >                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
> >                 > <brandon.logan at rackspace.com <mailto:brandon.logan at rackspace.com>> wrote:
> >                 >         Hi Banashankar,
> >                 >         I think it'd be great if you got this going.
> >                 One of those
> >                 >         things we
> >                 >         want to have and people ask for but has
> >                 always gotten a lower
> >                 >         priority
> >                 >         due to the critical things needed.
> >                 >
> >                 >         Thanks,
> >                 >         Brandon
> >                 >         On Mon, 2015-09-21 at 17:57 -0700,
> >                 Banashankar KV wrote:
> >                 >         > Hi All,
> >                 >         > I was thinking of starting the work on
> >                 >         > Heat to support LBaaS v2. Are there any
> >                 >         > concerns about that?
> >                 >         >
> >                 >         >
> >                 >         > I don't know if it is the right time to
> >                 bring this up :D.
> >                 >         >
> >                 >         > Thanks,
> >                 >         > Banashankar (bana_k)
> >                 >         >
> >                 >         >
> >                 >
> >                 >         >
> >                 >
> >                  __________________________________________________________________________
> >                 >         > OpenStack Development Mailing List (not
> >                 for usage questions)
> >                 >         > Unsubscribe:
> >                 >
> >                  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> >                 >         >
> >                 >
> >                  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> >                 >
> >                 >
> >

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/59699e0d/attachment.html>

From banveerad at gmail.com  Tue Sep 22 23:54:06 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 16:54:06 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1442965523.30604.7.camel@localhost>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1442965523.30604.7.camel@localhost>
Message-ID: <CABkBM5FibE-aVhQP6aPCf4hwZv6+90eSKZP7vQjLVt1KoRxBuw@mail.gmail.com>

By resource I mean the resource type, as highlighted:

description: A simple lb pool test
heat_template_version: '2015-04-30'
resources:
 my_listener:
    type: OS::Neutron::*ListenerV2*
    properties:
      protocol_port: 88
      protocol: HTTP
      loadbalancer_id: e3e0b1d2-12b2-4855-a5cb-6faf121b2d10
      name: holy_listener
      description: Just a listener man

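For context, here is a rough sketch of what a fuller v2 stack could look like under the proposed names. This is only a sketch: the V2-suffixed resource types and their property names are proposals under discussion in this thread, not an existing Heat API:

```yaml
# Hypothetical sketch only: these V2 resource types do not exist in Heat yet.
heat_template_version: '2015-04-30'
description: Sketch of a full LBaaS v2 stack using the proposed resource names
resources:
  my_lb:
    type: OS::Neutron::LoadBalancerV2
    properties:
      vip_subnet: private-subnet
  my_listener:
    type: OS::Neutron::ListenerV2
    properties:
      loadbalancer_id: { get_resource: my_lb }
      protocol: HTTP
      protocol_port: 80
  my_pool:
    type: OS::Neutron::PoolV2
    properties:
      listener: { get_resource: my_listener }
      protocol: HTTP
      lb_algorithm: ROUND_ROBIN
  my_member:
    type: OS::Neutron::PoolMemberV2
    properties:
      pool: { get_resource: my_pool }
      address: 10.0.0.5
      protocol_port: 80
  my_monitor:
    type: OS::Neutron::HealthMonitorV2
    properties:
      pool: { get_resource: my_pool }
      type: HTTP
      delay: 5
      timeout: 5
      max_retries: 3
```

This mirrors the v2 object model (loadbalancer -> listener -> pool -> member/monitor) that the thread is discussing.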

Thanks
Banashankar


On Tue, Sep 22, 2015 at 4:45 PM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Forgive my ignorance on this, but by resources are we talking about API
> resources? If so I would hate V2 to be in the names because backwards
> compatibility means keeping that around.  If they're not API resources,
> then if we appended V2 to the resource names right now, will we be
> able to remove the V2 once V1 gets removed?
>
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 16:40 -0700, Banashankar KV wrote:
> > Ok, sounds good. So now the question is how should we name the new V2
> > resources?
> >
> >
> >
> > Thanks
> > Banashankar
> >
> >
> >
> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
> > wrote:
> >         Yes, hence the need to support the v2 resources as separate
> >         things. Then I can rewrite the templates to include the new
> >         resources rather than the old resources as appropriate. I.e., it
> >         will be a porting effort to rewrite them. Then do a heat
> >         update on the stack to migrate it from lbv1 to lbv2. Since
> >         they are different resources, it should create the new and
> >         delete the old.
> >
> >         Thanks,
> >         Kevin
> >
> >
> >         ______________________________________________________________
> >         From: Banashankar KV [banveerad at gmail.com]
> >         Sent: Tuesday, September 22, 2015 4:16 PM
> >
> >         To: OpenStack Development Mailing List (not for usage
> >         questions)
> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
> >         for LbaasV2
> >
> >
> >
> >
> >         But I think V2 has introduced some new components, and the whole
> >         association of the resources with each other has changed. Should
> >         we still be able to do what Kevin has mentioned?
> >
> >         Thanks
> >         Banashankar
> >
> >
> >
> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
> >         <Kevin.Fox at pnnl.gov> wrote:
> >                 There needs to be a way to have both v1 and v2
> >                 supported in one engine....
> >
> >                 Say I have templates that use v1 already in existence
> >                 (I do), and I want to be able to heat stack update on
> >                 them one at a time to v2. This will replace the v1 lb
> >                 with v2, migrating the floating ip from the v1 lb to
> >                 the v2 one. This gives a smoothish upgrade path.
> >
> >                 Thanks,
> >                 Kevin
> >                 ________________________________________
> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
> >                 Sent: Tuesday, September 22, 2015 3:22 PM
> >                 To: openstack-dev at lists.openstack.org
> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
> >                 support for LbaasV2
> >
> >                 Well I'd hate to have the V2 postfix on it because V1
> >                 will be deprecated
> >                 and removed, which means the V2 being there would be
> >                 lame.  Is there any
> >                 kind of precedent set for how to handle this?
> >
> >                 Thanks,
> >                 Brandon
> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
> >                 wrote:
> >                 > So are we thinking of naming them like this?
> >                 > OS::Neutron::LoadBalancerV2
> >                 >
> >                 > OS::Neutron::ListenerV2
> >                 >
> >                 > OS::Neutron::PoolV2
> >                 >
> >                 > OS::Neutron::PoolMemberV2
> >                 >
> >                 > OS::Neutron::HealthMonitorV2
> >                 >
> >                 >
> >                 >
> >                 > and add all those into the loadbalancer.py of heat
> >                 engine ?
> >                 >
> >                 > Thanks
> >                 > Banashankar
> >                 >
> >                 >
> >                 >
> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> >                 > <skraynev at mirantis.com> wrote:
> >                 >         Brandon.
> >                 >
> >                 >
> >                 >         As I understand it, v1 and v2 differ both in
> >                 >         the list of objects and in the relationships
> >                 >         between them. So I don't think it will be easy
> >                 >         to upgrade the old resources (unfortunately).
> >                 >         I'd agree with Kevin's second suggestion about
> >                 >         implementing new resources in this case.
> >                 >
> >                 >
> >                 >         I see that a lot of people want to help with
> >                 >         it :) And I suppose that Rabi Mishra and I may
> >                 >         try to help with it, because we were involved
> >                 >         in the implementation of the v1 resources in
> >                 >         Heat.
> >                 >         Following is the list of v1 LBaaS resources in
> >                 >         Heat:
> >                 >
> >                 >
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
> >                 >
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
> >                 >
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
> >                 >
> >                 >
> >                 >
> >                 >         Also, I suppose it may be discussed during the
> >                 >         summit talks :)
> >                 >         I will add it to the etherpad of potential
> >                 >         sessions.
> >                 >
> >                 >
> >                 >
> >                 >         Regards,
> >                 >         Sergey.
> >                 >
> >                 >         On 22 September 2015 at 22:27, Brandon Logan
> >                 >         <brandon.logan at rackspace.com> wrote:
> >                 >                 There is some overlap, but there were
> >                 >                 some incompatible differences when
> >                 >                 we started designing v2.  I'm sure
> >                 the same issues
> >                 >                 will arise this time
> >                 >                 around, so new resources sound like
> >                 the path to go.
> >                 >                 However, I do not
> >                 >                 know much about Heat and the
> >                 resources so I'm speaking
> >                 >                 on a very
> >                 >                 uneducated level here.
> >                 >
> >                 >                 Thanks,
> >                 >                 Brandon
> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
> >                 Fox, Kevin M wrote:
> >                 >                 > We're using the v1 resources...
> >                 >                 >
> >                 >                 > If the v2 ones are compatible and
> >                 can seamlessly
> >                 >                 upgrade, great
> >                 >                 >
> >                 >                 > Otherwise, make new ones please.
> >                 >                 >
> >                 >                 > Thanks,
> >                 >                 > Kevin
> >                 >                 >
> >                 >                 >
> >                 >
> >
> ______________________________________________________________________
> >                 >                 > From: Banashankar KV
> >                 [banveerad at gmail.com]
> >                 >                 > Sent: Tuesday, September 22, 2015
> >                 10:07 AM
> >                 >                 > To: OpenStack Development Mailing
> >                 List (not for
> >                 >                 usage questions)
> >                 >                 > Subject: Re: [openstack-dev]
> >                 [neutron][lbaas] - Heat
> >                 >                 support for
> >                 >                 > LbaasV2
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 > Hi Brandon,
> >                 >                 > Work in progress, but I need some
> >                 >                 > input on how we want them: replace
> >                 >                 > the existing LBaaS v1 resources, or
> >                 >                 > do we still need to support them?
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 > Thanks
> >                 >                 > Banashankar
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
> >                 Brandon Logan
> >                 >                 > <brandon.logan at rackspace.com>
> >                 wrote:
> >                 >                 >         Hi Banashankar,
> >                 >                 >         I think it'd be great if
> >                 you got this going.
> >                 >                 One of those
> >                 >                 >         things we
> >                 >                 >         want to have and people
> >                 ask for but has
> >                 >                 always gotten a lower
> >                 >                 >         priority
> >                 >                 >         due to the critical things
> >                 needed.
> >                 >                 >
> >                 >                 >         Thanks,
> >                 >                 >         Brandon
> >                 >                 >         On Mon, 2015-09-21 at
> >                 17:57 -0700,
> >                 >                 Banashankar KV wrote:
> >                 >                 >         > Hi All,
> >                 >                 >         > I was thinking of starting
> >                 >                 >         > the work on Heat to support
> >                 >                 >         > LBaaS v2. Are there any
> >                 >                 >         > concerns about that?
> >                 >                 >         >
> >                 >                 >         >
> >                 >                 >         > I don't know if it is
> >                 the right time to
> >                 >                 bring this up :D.
> >                 >                 >         >
> >                 >                 >         > Thanks,
> >                 >                 >         > Banashankar (bana_k)
> >                 >                 >         >
> >                 >                 >         >
> >                 >                 >
> >                 >                 >         >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/59fd8f98/attachment.html>

From banveerad at gmail.com  Wed Sep 23 00:00:53 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 17:00:53 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <04E7E723-90F2-4FC3-BF69-8179E2215FD3@parksidesoftware.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <04E7E723-90F2-4FC3-BF69-8179E2215FD3@parksidesoftware.com>
Message-ID: <CABkBM5G0MN_RKg_R7VJTnE-sRYHQ4uvfNJeS+AKEFRmmHZu6rA@mail.gmail.com>

In any case, I think the current Heat templates can't be run as-is for
v2; those things must be ported no matter what.
So I think our main concern here is the coexistence of v1 and v2 Heat support.

Please correct me if I am wrong.
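To make the porting effort concrete, a v1 pool definition and a rough v2 counterpart might look like the following. This is only a sketch: the v2 resource types and their property names are the ones proposed in this thread, not an existing API:

```yaml
# v1 (today): the pool owns the VIP directly
v1_pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    subnet: private-subnet
    lb_method: ROUND_ROBIN
    vip: { protocol_port: 80 }

# v2 (proposed, hypothetical): the loadbalancer owns the VIP, and a
# listener sits between it and the pool
v2_lb:
  type: OS::Neutron::LoadBalancerV2
  properties:
    vip_subnet: private-subnet
v2_listener:
  type: OS::Neutron::ListenerV2
  properties:
    loadbalancer_id: { get_resource: v2_lb }
    protocol: HTTP
    protocol_port: 80
v2_pool:
  type: OS::Neutron::PoolV2
  properties:
    listener: { get_resource: v2_listener }
    protocol: HTTP
    lb_algorithm: ROUND_ROBIN
```

Since the v1 and v2 definitions would be different resources, a stack update that swaps one set for the other would create the new resources and delete the old, as Kevin describes.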

Thanks
Banashankar


On Tue, Sep 22, 2015 at 4:53 PM, Doug Wiegley <dougwig at parksidesoftware.com>
wrote:

> The other option would be to change the namespace.  (Os::Lbaas instead of
> Os::Neutron).  The neutron CLI does something similar with neutron-lb-*
> versus neutron-lbaas-*, e.g.
>
> One wrinkle with heat supporting both is that neutron doesn't support both
> running at the same time, which certainly hurts the migration strategy. I
> think the answer at the time was that you could have different api servers
> running each version. Is that something that heat can deal with?
>
> (I still don't like that I can't run both at the same time, and would love
> to re-litigate that argument.  :-)  ).
>
> Thanks,
> doug
>
>
>
> On Sep 22, 2015, at 5:40 PM, Banashankar KV <banveerad at gmail.com> wrote:
>
> Ok, sounds good. So now the question is how should we name the new V2
> resources?
>
>
> Thanks
> Banashankar
>
>
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>
>> Yes, hence the need to support the v2 resources as separate things. Then
>> I can rewrite the templates to include the new resources rather than the
>> old resources as appropriate. I.e., it will be a porting effort to rewrite
>> them. Then do a heat update on the stack to migrate it from lbv1 to lbv2.
>> Since they are different resources, it should create the new and delete the
>> old.
>>
>> Thanks,
>> Kevin
>>
>> ------------------------------
>> *From:* Banashankar KV [banveerad at gmail.com]
>> *Sent:* Tuesday, September 22, 2015 4:16 PM
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> LbaasV2
>>
>> But I think V2 has introduced some new components, and the whole association
>> of the resources with each other has changed. Should we still be able to do
>> what Kevin has mentioned?
>>
>> Thanks
>> Banashankar
>>
>>
>> On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>>
>>> There needs to be a way to have both v1 and v2 supported in one
>>> engine....
>>>
>>> Say I have templates that use v1 already in existence (I do), and I want
>>> to be able to heat stack update on them one at a time to v2. This will
>>> replace the v1 lb with v2, migrating the floating ip from the v1 lb to the
>>> v2 one. This gives a smoothish upgrade path.
>>>
>>> Thanks,
>>> Kevin
>>> ________________________________________
>>> From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>>> Sent: Tuesday, September 22, 2015 3:22 PM
>>> To: openstack-dev at lists.openstack.org
>>> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>>>
>>> Well I'd hate to have the V2 postfix on it because V1 will be deprecated
>>> and removed, which means the V2 being there would be lame.  Is there any
>>> kind of precedent set for how to handle this?
>>>
>>> Thanks,
>>> Brandon
>>> On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
>>> > So are we thinking of making it as ?
>>> > OS::Neutron::LoadBalancerV2
>>> >
>>> > OS::Neutron::ListenerV2
>>> >
>>> > OS::Neutron::PoolV2
>>> >
>>> > OS::Neutron::PoolMemberV2
>>> >
>>> > OS::Neutron::HealthMonitorV2
>>> >
>>> >
>>> >
>>> > and add all those into the loadbalancer.py of heat engine ?
>>> >
>>> > Thanks
>>> > Banashankar
>>> >
>>> >
>>> >
>>> > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>>> > <skraynev at mirantis.com> wrote:
>>> >         Brandon.
>>> >
>>> >
>>> >         As I understand it, v1 and v2 have differences both in the
>>> >         list of objects and in the relationships between them.
>>> >         So I don't think that it will be easy to upgrade old resources
>>> >         (unfortunately).
>>> >         I'd agree with Kevin's second suggestion about implementing
>>> >         new resources in this case.
>>> >
>>> >
>>> >         I see that a lot of folks want to help with it :) And I
>>> >         suppose that Rabi Mishra and I may try to help with it,
>>> >         because we were involved in the implementation of the v1
>>> >         resources in Heat.
>>> >         Here is the list of v1 lbaas resources in Heat:
>>> >
>>> >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>>> >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>>> >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>>> >
>>> >
>>> >
>>> >         Also, I suppose, that it may be discussed during summit
>>> >         talks :)
>>> >         Will add to etherpad with potential sessions.
>>> >
>>> >
>>> >
>>> >         Regards,
>>> >         Sergey.
>>> >
>>> >         On 22 September 2015 at 22:27, Brandon Logan
>>> >         <brandon.logan at rackspace.com> wrote:
>>> >                 There is some overlap, but there was some incompatible
>>> >                 differences when
>>> >                 we started designing v2.  I'm sure the same issues
>>> >                 will arise this time
>>> >                 around so new resources sounds like the path to go.
>>> >                 However, I do not
>>> >                 know much about Heat and the resources so I'm speaking
>>> >                 on a very
>>> >                 uneducated level here.
>>> >
>>> >                 Thanks,
>>> >                 Brandon
>>> >                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>>> >                 > We're using the v1 resources...
>>> >                 >
>>> >                 > If the v2 ones are compatible and can seamlessly
>>> >                 upgrade, great
>>> >                 >
>>> >                 > Otherwise, make new ones please.
>>> >                 >
>>> >                 > Thanks,
>>> >                 > Kevin
>>> >                 >
>>> >                 >
>>> >
>>>  ______________________________________________________________________
>>> >                 > From: Banashankar KV [banveerad at gmail.com]
>>> >                 > Sent: Tuesday, September 22, 2015 10:07 AM
>>> >                 > To: OpenStack Development Mailing List (not for
>>> >                 usage questions)
>>> >                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>>> >                 support for
>>> >                 > LbaasV2
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > Hi Brandon,
>>> >                 > Work in progress, but need some input on the way we
>>> >                 want them, like
>>> >                 > replace the existing lbaasv1 or we still need to
>>> >                 support them ?
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > Thanks
>>> >                 > Banashankar
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>>> >                 > <brandon.logan at rackspace.com> wrote:
>>> >                 >         Hi Banashankar,
>>> >                 >         I think it'd be great if you got this going.
>>> >                 One of those
>>> >                 >         things we
>>> >                 >         want to have and people ask for but has
>>> >                 always gotten a lower
>>> >                 >         priority
>>> >                 >         due to the critical things needed.
>>> >                 >
>>> >                 >         Thanks,
>>> >                 >         Brandon
>>> >                 >         On Mon, 2015-09-21 at 17:57 -0700,
>>> >                 Banashankar KV wrote:
>>> >                 >         > Hi All,
>>> >                 >         > I was thinking of starting the work on
>>> >                 heat to support
>>> >                 >         LBaasV2,  Is
>>> >                 >         > there any concerns about that?
>>> >                 >         >
>>> >                 >         >
>>> >                 >         > I don't know if it is the right time to
>>> >                 bring this up :D .
>>> >                 >         >
>>> >                 >         > Thanks,
>>> >                 >         > Banashankar (bana_k)
>>> >                 >         >
>>> >                 >         >
>>> >                 >
>>> >                 >         >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 >         > OpenStack Development Mailing List (not
>>> >                 for usage questions)
>>> >                 >         > Unsubscribe:
>>> >                 >
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >         >
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/8f4dfba7/attachment.html>

From Kevin.Fox at pnnl.gov  Wed Sep 23 00:13:44 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 23 Sep 2015 00:13:44 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <04E7E723-90F2-4FC3-BF69-8179E2215FD3@parksidesoftware.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>,
 <04E7E723-90F2-4FC3-BF69-8179E2215FD3@parksidesoftware.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C90E2@EX10MBOX06.pnnl.gov>

Ugh, that's news to me. :/ Hopefully there's a way around it.

A different namespace sounds reasonable as a workaround.

Also consider the use case where you have a newer heat than neutron. During an upgrade there may be a window when we may well want to support a different version of one than the other. So the heat engine should be able to work with whichever is enabled on a particular cloud.
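To make the options concrete, a template fragment under the two naming schemes discussed here might look like the sketch below. Every property name is invented purely for illustration; no v2 resource schema existed in Heat at the time of this thread:

```yaml
heat_template_version: 2015-04-30

resources:
  # Option A: stay in the Neutron namespace, add a V2 suffix
  lb:
    type: OS::Neutron::LoadBalancerV2
    properties:
      vip_subnet: private-subnet        # hypothetical property name

  # Option B: move LBaaS resources into their own namespace
  # lb:
  #   type: OS::Lbaas::LoadBalancer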

Thanks,
Kevin

________________________________
From: Doug Wiegley [dougwig at parksidesoftware.com]
Sent: Tuesday, September 22, 2015 4:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

The other option would be to change the namespace.  (Os::Lbaas instead of Os::Neutron).  The neutron CLI does something similar with neutron-lb-* versus neutron-lbaas-*, e.g.

One wrinkle with heat supporting both is that neutron doesn't support both running at the same time, which certainly hurts the migration strategy. I think the answer at the time was that you could have different api servers running each version. Is that something that heat can deal with?

(I still don't like that I can't run both at the same time, and would love to re-litigate that argument.  :-)  ).

Thanks,
doug



On Sep 22, 2015, at 5:40 PM, Banashankar KV <banveerad at gmail.com<mailto:banveerad at gmail.com>> wrote:

Ok, sounds good. So now the question is how should we name the new V2 resources?


Thanks
Banashankar


On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
Yes, hence the need to support the v2 resources as separate things. Then I can rewrite the templates to include the new resources rather than the old resources as appropriate. I.e., it will be a porting effort to rewrite them. Then do a heat update on the stack to migrate it from lbv1 to lbv2. Since they are different resources, it should create the new and delete the old.

Thanks,
Kevin

________________________________
From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
Sent: Tuesday, September 22, 2015 4:16 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

But I think, since V2 has introduced some new components and the whole association between the resources has changed, should we still be able to do what Kevin has mentioned?

Thanks
Banashankar


On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
There needs to be a way to have both v1 and v2 supported in one engine....

Say I have templates that use v1 already in existence (I do), and I want to be able to heat stack update on them one at a time to v2. This will replace the v1 lb with v2, migrating the floating ip from the v1 lb to the v2 one. This gives a smoothish upgrade path.

Thanks,
Kevin
________________________________________
From: Brandon Logan [brandon.logan at RACKSPACE.COM<mailto:brandon.logan at RACKSPACE.COM>]
Sent: Tuesday, September 22, 2015 3:22 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

Well I'd hate to have the V2 postfix on it because V1 will be deprecated
and removed, which means the V2 being there would be lame.  Is there any
kind of precedent set for how to handle this?

Thanks,
Brandon
On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV wrote:
> So are we thinking of making it as ?
> OS::Neutron::LoadBalancerV2
>
> OS::Neutron::ListenerV2
>
> OS::Neutron::PoolV2
>
> OS::Neutron::PoolMemberV2
>
> OS::Neutron::HealthMonitorV2
>
>
>
> and add all those into the loadbalancer.py of heat engine ?
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> <skraynev at mirantis.com<mailto:skraynev at mirantis.com>> wrote:
>         Brandon.
>
>
>         As I understand it, v1 and v2 have differences both in the list
>         of objects and in the relationships between them.
>         So I don't think that it will be easy to upgrade old resources
>         (unfortunately).
>         I'd agree with Kevin's second suggestion about implementing
>         new resources in this case.
>
>
>         I see that a lot of folks want to help with it :) And I
>         suppose that Rabi Mishra and I may try to help with it,
>         because we were involved in the implementation of the v1
>         resources in Heat.
>         Here is the list of v1 lbaas resources in Heat:
>
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>
>         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>
>
>
>         Also, I suppose, that it may be discussed during summit
>         talks :)
>         Will add to etherpad with potential sessions.
>
>
>
>         Regards,
>         Sergey.
>
>         On 22 September 2015 at 22:27, Brandon Logan
>         <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
>                 There is some overlap, but there was some incompatible
>                 differences when
>                 we started designing v2.  I'm sure the same issues
>                 will arise this time
>                 around so new resources sounds like the path to go.
>                 However, I do not
>                 know much about Heat and the resources so I'm speaking
>                 on a very
>                 uneducated level here.
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 18:38 +0000, Fox, Kevin M wrote:
>                 > We're using the v1 resources...
>                 >
>                 > If the v2 ones are compatible and can seamlessly
>                 upgrade, great
>                 >
>                 > Otherwise, make new ones please.
>                 >
>                 > Thanks,
>                 > Kevin
>                 >
>                 >
>                 ______________________________________________________________________
>                 > From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
>                 > Sent: Tuesday, September 22, 2015 10:07 AM
>                 > To: OpenStack Development Mailing List (not for
>                 usage questions)
>                 > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for
>                 > LbaasV2
>                 >
>                 >
>                 >
>                 > Hi Brandon,
>                 > Work in progress, but need some input on the way we
>                 want them, like
>                 > replace the existing lbaasv1 or we still need to
>                 support them ?
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>                 > <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
>                 >         Hi Banashankar,
>                 >         I think it'd be great if you got this going.
>                 One of those
>                 >         things we
>                 >         want to have and people ask for but has
>                 always gotten a lower
>                 >         priority
>                 >         due to the critical things needed.
>                 >
>                 >         Thanks,
>                 >         Brandon
>                 >         On Mon, 2015-09-21 at 17:57 -0700,
>                 Banashankar KV wrote:
>                 >         > Hi All,
>                 >         > I was thinking of starting the work on
>                 heat to support
>                 >         LBaasV2,  Is
>                 >         > there any concerns about that?
>                 >         >
>                 >         >
>                 >         > I don't know if it is the right time to
>                 bring this up :D .
>                 >         >
>                 >         > Thanks,
>                 >         > Banashankar (bana_k)
>                 >         >
>                 >         >
>                 >
>                 >         >
>                 >


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/1a11ea0f/attachment.html>

From chris.friesen at windriver.com  Wed Sep 23 00:14:45 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 22 Sep 2015 18:14:45 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
Message-ID: <5601EEF5.9090301@windriver.com>

On 09/22/2015 05:48 PM, Joshua Harlow wrote:
> A present:
>
>  >>> import contextlib
>  >>> import os
>  >>>
>  >>> @contextlib.contextmanager
> ... def synced_file(path, mode='wb'):
> ...   with open(path, mode) as fh:
> ...      yield fh
> ...      os.fdatasync(fh.fileno())
> ...
>  >>> with synced_file("/tmp/b.txt") as fh:
> ...    fh.write("b")

Isn't that missing an "fh.flush()" somewhere before the fdatasync()?
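For what it's worth, a version of that context manager with the flush added might look like this sketch (illustrative only, not tested against cinder itself):

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def synced_file(path, mode='wb'):
    # Open the file, let the caller write to it, then on normal exit
    # drain Python's userspace buffer and push the data blocks to disk.
    with open(path, mode) as fh:
        yield fh
        fh.flush()                 # userspace buffer -> kernel page cache
        os.fdatasync(fh.fileno())  # page cache -> persistent storage (data only)

path = os.path.join(tempfile.mkdtemp(), 'b.txt')
with synced_file(path) as fh:
    fh.write(b'b')
```

Without the flush(), anything still sitting in Python's buffer would only reach the kernel at close(), i.e. after the fdatasync() has already run.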

Chris





From john.griffith8 at gmail.com  Wed Sep 23 00:17:13 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 22 Sep 2015 18:17:13 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
Message-ID: <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>

On Tue, Sep 22, 2015 at 5:48 PM, Joshua Harlow <harlowja at outlook.com> wrote:

> A present:
>
> >>> import contextlib
> >>> import os
> >>>
> >>> @contextlib.contextmanager
> ... def synced_file(path, mode='wb'):
> ...   with open(path, mode) as fh:
> ...      yield fh
> ...      os.fdatasync(fh.fileno())
> ...
> >>> with synced_file("/tmp/b.txt") as fh:
> ...    fh.write("b")
> ...
>
> Have fun :-P
>
> -Josh
>
>
> Robert Collins wrote:
>
>> On 23 September 2015 at 09:52, Chris Friesen
>> <chris.friesen at windriver.com>  wrote:
>>
>>> Hi,
>>>
>>> I recently had an issue with one file out of a dozen or so in
>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
>>> running stable/kilo if it makes a difference.
>>>
>>> Looking at the code in volume.targets.tgt.TgtAdm.create_iscsi_target(),
>>> I'm
>>> wondering if we should do a fsync() before the close().  The way it
>>> stands
>>> now, it seems like it might be possible to write the file, start making
>>> use
>>> of it, and then take a power outage before it actually gets written to
>>> persistent storage.  When we come back up we could have an instance
>>> expecting to make use of it, but no target information in the on-disk
>>> copy
>>> of the file.
>>>
>>
>> If it's being kept in sync with DB records, and won't self-heal from
>> this situation, then yes. e.g. if the overall workflow is something
>> like
>>
>> receive RPC request
>> do some work, including writing the file
>> reply to the RPC with 'ok'
>> ->  gets written to the DB and the state recorded as ACTIVE or whatever..
>>
>> then yes, we need to behave as though it's active even if the machine
>> is power cycled robustly.
>>
>> Even then, fsync() explicitly says that it doesn't ensure that the
>>> directory
>>> changes have reached disk...for that another explicit fsync() on the file
>>> descriptor for the directory is needed.
>>> So I think for robustness we'd want to add
>>>
>>> f.flush()
>>> os.fsync(f.fileno())
>>>
>>
>> fdatasync would be better - we don't care about the metadata.
>>
>> between the existing f.write() and f.close(), and then add something like:
>>>
>>> f = open(volumes_dir, 'w+')
>>> os.fsync(f.fileno())
>>> f.close()
>>>
>>
>> Yup, but again - fdatasync here I believe.
>>
>> -Rob
>>
>>
>>
>
That target file pretty much "is" the persistence record; the db entry is
the iqn and provider info only.  I think that adding the fdatasync isn't a
bad idea at all.  At the very least it doesn't hurt.  Power losses on
attach I would expect to be problematic regardless.

Keep in mind that file is built as part of the attach process stemming from
initialize-connection.
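Pulling the thread's suggestions together (flush before syncing, fdatasync for the file data, and an explicit fsync on the directory so the new entry itself is durable), a sketch could look like the following; the function name and paths are illustrative, not cinder's actual code:

```python
import os
import tempfile

def write_durably(volumes_dir, name, data):
    # Write a file so that both its contents and its directory entry
    # survive a power loss.
    path = os.path.join(volumes_dir, name)
    with open(path, 'wb') as f:
        f.write(data)
        f.flush()                  # userspace buffer -> kernel page cache
        os.fdatasync(f.fileno())   # file data -> disk (skips metadata like mtime)
    # A newly created file also needs its directory entry persisted.
    dir_fd = os.open(volumes_dir, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

volumes_dir = tempfile.mkdtemp()
write_durably(volumes_dir, 'volume-0001', b'target config contents')
```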

John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/6029d997/attachment.html>

From cody at puppetlabs.com  Wed Sep 23 00:17:44 2015
From: cody at puppetlabs.com (Cody Herriges)
Date: Tue, 22 Sep 2015 17:17:44 -0700
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
Message-ID: <5601EFA8.8040204@puppetlabs.com>

Alex Schultz wrote:
> Hey puppet folks,
> 
> Based on the meeting yesterday[0], I had proposed creating a parser
> function called is_service_default[1] to validate if a variable matched
> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
> about how can we maybe not use the arbitrary string throughout the
> puppet that can not easily be validated.  So I tested creating another
> puppet function named service_default[2] to replace the use of '<SERVICE
> DEFAULT>' throughout all the puppet modules.  My tests seemed to
> indicate that you can use a parser function as a parameter default for
> classes. 
> 
> I wanted to send a note to gather comments around the second function. 
> When we originally discussed what to use to designate for a service's
> default configuration, I really didn't like using an arbitrary string
> since it's hard to parse and validate. I think leveraging a function
> might be better since it is something that can be validated via tests
> and a syntax checker.  Thoughts?
> 
> 
> Thanks,
> -Alex
> 
> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
> [1] https://review.openstack.org/#/c/223672
> [2] https://review.openstack.org/#/c/224187
> 

I've been mulling this over the last several days and I just can't
accept an entire Ruby function which would be run for every parameter
with the desired static value of "<SERVICE DEFAULT>" when the class is
declared and parsed.  I am not generally against using functions as a
parameter default, just not a fan in this case, because running Ruby
just to return a static string seems inappropriate and suboptimal.

In this specific case I think the params pattern and inheritance can
achieve the same goals.  I also find this a valid use of inheritance
across module namespaces, but...only because all our modules must
depend on puppet-openstacklib.

http://paste.openstack.org/show/473655
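The trade-off being weighed here (a magic string versus something tooling can validate) is the classic sentinel-value problem. A rough Python analogy, purely illustrative since the thread is about Puppet, with hypothetical helper names rather than the puppet-openstacklib API:

```python
# Illustrative Python analogy of the '<SERVICE DEFAULT>' sentinel debate.
# The names below are hypothetical, not an actual puppet-openstacklib API.

SERVICE_DEFAULT = "<SERVICE DEFAULT>"  # the agreed-upon marker value


def is_service_default(value):
    """Validate whether a value means 'let the service pick its default'."""
    return value == SERVICE_DEFAULT


def render_option(name, value):
    """Emit a config line, or None to leave the option unset."""
    if is_service_default(value):
        return None  # omit the option so the service default applies
    return "{} = {}".format(name, value)
```

Centralizing the marker behind one constant and one predicate gives tests and syntax checkers a single place to validate, which is the goal of both proposed functions.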


-- 
Cody

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 931 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/5a551a91/attachment.pgp>

From john.griffith8 at gmail.com  Wed Sep 23 00:19:49 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 22 Sep 2015 18:19:49 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
Message-ID: <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>

On Tue, Sep 22, 2015 at 6:17 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

>
>
> On Tue, Sep 22, 2015 at 5:48 PM, Joshua Harlow <harlowja at outlook.com>
> wrote:
>
>> A present:
>>
>> >>> import contextlib
>> >>> import os
>> >>>
>> >>> @contextlib.contextmanager
>> ... def synced_file(path, mode='wb'):
>> ...   with open(path, mode) as fh:
>> ...      yield fh
>> ...      os.fdatasync(fh.fileno())
>> ...
>> >>> with synced_file("/tmp/b.txt") as fh:
>> ...    fh.write("b")
>> ...
>>
>> Have fun :-P
>>
>> -Josh
>>
>>
>> Robert Collins wrote:
>>
>>> On 23 September 2015 at 09:52, Chris Friesen
>>> <chris.friesen at windriver.com>  wrote:
>>>
>>>> Hi,
>>>>
>>>> I recently had an issue with one file out of a dozen or so in
>>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
>>>> running stable/kilo if it makes a difference.
>>>>
>>>> Looking at the code in volume.targets.tgt.TgtAdm.create_iscsi_target(),
>>>> I'm
>>>> wondering if we should do a fsync() before the close().  The way it
>>>> stands
>>>> now, it seems like it might be possible to write the file, start making
>>>> use
>>>> of it, and then take a power outage before it actually gets written to
>>>> persistent storage.  When we come back up we could have an instance
>>>> expecting to make use of it, but no target information in the on-disk
>>>> copy
>>>> of the file.
>>>>
>>>
>>> If it's being kept in sync with DB records, and won't self-heal from
>>> this situation, then yes. e.g. if the overall workflow is something
>>> like
>>>
>>> receive RPC request
>>> do some work, including writing the file
>>> reply to the RPC with 'ok'
>>> ->  gets written to the DB and the state recorded as ACTIVE or whatever..
>>>
>>> then yes, we need to behave as though it's active even if the machine
>>> is power cycled robustly.
>>>
>>> Even then, fsync() explicitly says that it doesn't ensure that the
>>>> directory
>>>> changes have reached disk...for that another explicit fsync() on the
>>>> file
>>>> descriptor for the directory is needed.
>>>> So I think for robustness we'd want to add
>>>>
>>>> f.flush()
>>>> os.fsync(f.fileno())
>>>>
>>>
>>> fdatasync would be better - we don't care about the metadata.
>>>
>>> between the existing f.write() and f.close(), and then add something
>>>> like:
>>>>
>>>> f = open(volumes_dir, 'w+')
>>>> os.fsync(f.fileno())
>>>> f.close()
>>>>
>>>
>>> Yup, but again - fdatasync here I believe.
>>>
>>> -Rob
>>>
>>>
>>>
> That target file pretty much "is" the persistence record, the db entry is
> the iqn and provider info only.  I think that adding the fdatasync isn't a
> bad idea at all.  At the very least it doesn't hurt.  Power losses on
> attach I would expect to be problematic regardless.
>

Let me clarify the statement above: if you lose power to the node in the
middle of an attach process and the file wasn't written properly, you're
most likely 'stuck' and will have to detach (which deletes the file), or it
will be in an error state and rebuild the file when you try the attach
again anyway, IIRC. It's been a while since we've mucked with that code
(thank goodness)!!
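The write-then-sync pattern discussed in this thread (flush, fdatasync the file, then fsync the containing directory so the new directory entry itself is durable) can be sketched in Python. This is a hedged sketch of the general technique on Linux, not the actual Cinder code path:

```python
import os


def write_durably(path, data):
    """Write bytes to `path` so the contents survive a power loss.

    Sketch of the pattern from this thread: flush Python's buffer,
    fdatasync the file (contents only, skipping metadata like mtime),
    then fsync the containing directory so the new directory entry
    itself reaches disk.
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()                 # push Python's userspace buffer to the kernel
        os.fdatasync(f.fileno())  # push the file data to stable storage
    # The file's name lives in the directory, not the file; on Linux a
    # directory can be opened read-only and fsync()ed to persist the entry.
    dir_fd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

fdatasync is used for the file contents since, as noted above, the metadata doesn't matter here; the directory fsync covers the case where the file data is durable but the entry pointing at it is not.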


>
> Keep in mind that file is built as part of the attach process stemming
> from initialize-connection.
>
> John
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/5f1483f4/attachment.html>

From brandon.logan at RACKSPACE.COM  Wed Sep 23 00:36:10 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Wed, 23 Sep 2015 00:36:10 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5G0MN_RKg_R7VJTnE-sRYHQ4uvfNJeS+AKEFRmmHZu6rA@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <04E7E723-90F2-4FC3-BF69-8179E2215FD3@parksidesoftware.com>
 <CABkBM5G0MN_RKg_R7VJTnE-sRYHQ4uvfNJeS+AKEFRmmHZu6rA@mail.gmail.com>
Message-ID: <1442968570.30604.10.camel@localhost>

I do like Doug's suggestion of a namespace change.  One day lbaas will
no longer be an extension of neutron (at least that is my hope, and it
seems to be going in that direction), and thus the only reason for it
to be considered part of neutron is the logical grouping of networking
services.

Cutting the rant short, is that namespace change enough to avoid all
naming collisions between v1 and v2?

Thanks,
Brandon
On Tue, 2015-09-22 at 17:00 -0700, Banashankar KV wrote:
> But whatever it is, I think we can't run the current heat templates as
> they are for v2; those things must be ported no matter what.
> So I think our main concern here is coexistence of v1 and v2 heat
> support. 
> 
> 
> Please correct me if I am wrong.
> 
> Thanks 
> Banashankar
> 
> 
> 
> On Tue, Sep 22, 2015 at 4:53 PM, Doug Wiegley
> <dougwig at parksidesoftware.com> wrote:
>         The other option would be to change the namespace.  (Os::Lbaas
>         instead of Os::Neutron).  The neutron CLI does something
>         similar with neutron-lb-* versus neutron-lbaas-*, e.g.
>         
>         
>         One wrinkle with heat supporting both is that neutron doesn't
>         support both running at the same time, which certainly hurts
>         the migration strategy. I think the answer at the time was
>         that you could have different api servers running each
>         version. Is that something that heat can deal with?
>         
>         
>         (I still don't like that I can't run both at the same time,
>         and would love to re-litigate that argument.  :-)  ).
>         
>         
>         Thanks,
>         doug
>         
>         
>         
>         
>         > On Sep 22, 2015, at 5:40 PM, Banashankar KV
>         > <banveerad at gmail.com> wrote:
>         > 
>         > Ok, sounds good. So now the question is how should we name
>         > the new V2 resources ?
>         > 
>         > 
>         > 
>         > Thanks 
>         > Banashankar
>         > 
>         > 
>         > 
>         > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M
>         > <Kevin.Fox at pnnl.gov> wrote:
>         >         Yes, hence the need to support the v2 resources as
>         >         separate things. Then I can rewrite the templates to
>         >         include the new resources rather than the old
>         >         resources as appropriate. IE, it will be a porting
>         >         effort to rewrite them. Then do a heat update on the
>         >         stack to migrate it from lbv1 to lbv2. Since they
>         >         are different resources, it should create the new
>         >         and delete the old.
>         >         
>         >         Thanks,
>         >         Kevin
>         >         
>         >         
>         >         ____________________________________________________
>         >         From: Banashankar KV [banveerad at gmail.com]
>         >         Sent: Tuesday, September 22, 2015 4:16 PM
>         >         
>         >         To: OpenStack Development Mailing List (not for
>         >         usage questions)
>         >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>         >         support for LbaasV2
>         >         
>         >         
>         >         
>         >         
>         >         But I think, even though V2 has introduced some new
>         >         components and the whole association of the resources
>         >         with each other has changed, we should still be able to
>         >         do what Kevin has mentioned?
>         >         
>         >         Thanks  
>         >         Banashankar
>         >         
>         >         
>         >         
>         >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>         >         <Kevin.Fox at pnnl.gov> wrote:
>         >                 There needs to be a way to have both v1 and
>         >                 v2 supported in one engine....
>         >                 
>         >                 Say I have templates that use v1 already in
>         >                 existence (I do), and I want to be able to
>         >                 heat stack update on them one at a time to
>         >                 v2. This will replace the v1 lb with v2,
>         >                 migrating the floating ip from the v1 lb to
>         >                 the v2 one. This gives a smoothish upgrade
>         >                 path.
>         >                 
>         >                 Thanks,
>         >                 Kevin
>         >                 ________________________________________
>         >                 From: Brandon Logan
>         >                 [brandon.logan at RACKSPACE.COM]
>         >                 Sent: Tuesday, September 22, 2015 3:22 PM
>         >                 To: openstack-dev at lists.openstack.org
>         >                 Subject: Re: [openstack-dev]
>         >                 [neutron][lbaas] - Heat support for LbaasV2
>         >                 
>         >                 Well I'd hate to have the V2 postfix on it
>         >                 because V1 will be deprecated
>         >                 and removed, which means the V2 being there
>         >                 would be lame.  Is there any
>         >                 kind of precedent set for how to handle
>         >                 this?
>         >                 
>         >                 Thanks,
>         >                 Brandon
>         >                 On Tue, 2015-09-22 at 14:49 -0700,
>         >                 Banashankar KV wrote:
>         >                 > So are we thinking of making it as ?
>         >                 > OS::Neutron::LoadBalancerV2
>         >                 >
>         >                 > OS::Neutron::ListenerV2
>         >                 >
>         >                 > OS::Neutron::PoolV2
>         >                 >
>         >                 > OS::Neutron::PoolMemberV2
>         >                 >
>         >                 > OS::Neutron::HealthMonitorV2
>         >                 >
>         >                 >
>         >                 >
>         >                 > and add all those into the loadbalancer.py
>         >                 of heat engine ?
>         >                 >
>         >                 > Thanks
>         >                 > Banashankar
>         >                 >
>         >                 >
>         >                 >
>         >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey
>         >                 Kraynev
>         >                 > <skraynev at mirantis.com> wrote:
>         >                 >         Brandon.
>         >                 >
>         >                 >
>         >                 >         As I understand, v1 and v2 differ
>         >                 >         both in their list of objects and
>         >                 >         in the relationships between them.
>         >                 >         So I don't think that it will be
>         >                 >         easy to upgrade the old resources
>         >                 >         (unfortunately).
>         >                 >         I'd agree with Kevin's second
>         >                 >         suggestion about implementing new
>         >                 >         resources in this case.
>         >                 >
>         >                 >
>         >                 >         I see that a lot of folks want to
>         >                 >         help with it :) I suppose that Rabi
>         >                 >         Mishra and I can try to help,
>         >                 >         because we were involved in the
>         >                 >         implementation of the v1 resources
>         >                 >         in Heat.
>         >                 >         Here is the list of the v1 lbaas
>         >                 >         resources in Heat:
>         >                 >
>         >                 >
>         >                 >
>         >                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>         >                 >
>         >                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>         >                 >
>         >                 >
>         >                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>         >                 >
>         >                 >
>         >                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>         >                 >
>         >                 >
>         >                 >
>         >                 >         Also, I suppose it may be
>         >                 >         discussed during the summit
>         >                 >         talks :)
>         >                 >         Will add it to the etherpad of
>         >                 >         potential sessions.
>         >                 >
>         >                 >
>         >                 >
>         >                 >         Regards,
>         >                 >         Sergey.
>         >                 >
>         >                 >         On 22 September 2015 at 22:27,
>         >                 Brandon Logan
>         >                 >         <brandon.logan at rackspace.com>
>         >                 wrote:
>         >                 >                 There is some overlap, but
>         >                 there were some incompatible
>         >                 >                 differences when
>         >                 >                 we started designing v2.
>         >                 I'm sure the same issues
>         >                 >                 will arise this time
>         >                 >                 around so new resources
>         >                 sounds like the path to go.
>         >                 >                 However, I do not
>         >                 >                 know much about Heat and
>         >                 the resources so I'm speaking
>         >                 >                 on a very
>         >                 >                 uneducated level here.
>         >                 >
>         >                 >                 Thanks,
>         >                 >                 Brandon
>         >                 >                 On Tue, 2015-09-22 at
>         >                 18:38 +0000, Fox, Kevin M wrote:
>         >                 >                 > We're using the v1
>         >                 resources...
>         >                 >                 >
>         >                 >                 > If the v2 ones are
>         >                 compatible and can seamlessly
>         >                 >                 upgrade, great
>         >                 >                 >
>         >                 >                 > Otherwise, make new ones
>         >                 please.
>         >                 >                 >
>         >                 >                 > Thanks,
>         >                 >                 > Kevin
>         >                 >                 >
>         >                 >                 >
>         >                 >
>         >                  ______________________________________________________________________
>         >                 >                 > From: Banashankar KV
>         >                 [banveerad at gmail.com]
>         >                 >                 > Sent: Tuesday, September
>         >                 22, 2015 10:07 AM
>         >                 >                 > To: OpenStack
>         >                 Development Mailing List (not for
>         >                 >                 usage questions)
>         >                 >                 > Subject: Re:
>         >                 [openstack-dev] [neutron][lbaas] - Heat
>         >                 >                 support for
>         >                 >                 > LbaasV2
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 > Hi Brandon,
>         >                 >                 > Work in progress, but
>         >                 need some input on the way we
>         >                 >                 want them, like
>         >                 >                 > replace the existing
>         >                 lbaasv1 or we still need to
>         >                 >                 support them ?
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 > Thanks
>         >                 >                 > Banashankar
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 > On Tue, Sep 22, 2015 at
>         >                 9:18 AM, Brandon Logan
>         >                 >                 >
>         >                 <brandon.logan at rackspace.com> wrote:
>         >                 >                 >         Hi Banashankar,
>         >                 >                 >         I think it'd be
>         >                 great if you got this going.
>         >                 >                 One of those
>         >                 >                 >         things we
>         >                 >                 >         want to have and
>         >                 people ask for but has
>         >                 >                 always gotten a lower
>         >                 >                 >         priority
>         >                 >                 >         due to the
>         >                 critical things needed.
>         >                 >                 >
>         >                 >                 >         Thanks,
>         >                 >                 >         Brandon
>         >                 >                 >         On Mon,
>         >                 2015-09-21 at 17:57 -0700,
>         >                 >                 Banashankar KV wrote:
>         >                 >                 >         > Hi All,
>         >                 >                 >         > I was thinking
>         >                 of starting the work on
>         >                 >                 heat to support
>         >                 >                 >         LBaasV2,  Is
>         >                 >                 >         > there any
>         >                 concerns about that?
>         >                 >                 >         >
>         >                 >                 >         >
>         >                 >                 >         > I don't know
>         >                 if it is the right time to
>         >                 >                 bring this up :D .
>         >                 >                 >         >
>         >                 >                 >         > Thanks,
>         >                 >                 >         > Banashankar
>         >                 (bana_k)
>         >                 >                 >         >
>         >                 >                 >         >
>         >                 >                 >
>         >                 >                 >         >
>         >                 >                 >
>         >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >                 >
>         >                 >
>         >                 >
>         >                 >
>         >                 >
>         >                 >
>         >                 
>         >         
>         >         
>         >         
>         >         
>         > 
>         > 
>         > 
>         
>         
>         
>         
> 
> 


From brandon.logan at RACKSPACE.COM  Wed Sep 23 00:39:03 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Wed, 23 Sep 2015 00:39:03 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>	,
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
Message-ID: <1442968743.30604.13.camel@localhost>

So the v1 API is of the structure:

<neutron-endpoint>/lb/(vip|pool|member|health_monitor)

V2's is:
<neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)

member is a child of pool, so it would go down one level.

The only difference is the lb for v1 and lbaas for v2.  Not sure if that
is enough of a difference.

Thanks,
Brandon
On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
> Thats the problem. :/
> 
> I can't think of a way to have them coexist without: breaking old
> templates, including v2 in the name, or having a flag on the resource
> saying the version is v2. And as an app developer I'd rather not have
> my existing templates break.
> 
> I haven't compared the api's at all, but is there a required field of
> v2 that is different enough from v1 that by its simple existence in
> the resource you can tell a v2 from a v1 object? Would something like
> that work? PoolMember wouldn't have to change, the same resource could
> probably work for whatever lb it was pointing at I'm guessing.
> 
> Thanks,
> Kevin
> 
> 
> 
> ______________________________________________________________________
> From: Banashankar KV [banveerad at gmail.com]
> Sent: Tuesday, September 22, 2015 4:40 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> LbaasV2
> 
> 
> 
> Ok, sounds good. So now the question is how should we name the new V2
> resources ? 
> 
> 
> 
> Thanks  
> Banashankar
> 
> 
> 
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
> wrote:
>         Yes, hence the need to support the v2 resources as separate
>         things. Then I can rewrite the templates to include the new
>         resources rather than the old resources as appropriate. I.e., it
>         will be a porting effort to rewrite them. Then do a heat
>         update on the stack to migrate it from lbv1 to lbv2. Since
>         they are different resources, it should create the new and
>         delete the old.
>         
>         Thanks,
>         Kevin
>         
>         
>         ______________________________________________________________
>         From: Banashankar KV [banveerad at gmail.com]
>         Sent: Tuesday, September 22, 2015 4:16 PM 
>         
>         To: OpenStack Development Mailing List (not for usage
>         questions)
>         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>         for LbaasV2
>         
>         
>         
>         
>         But I think, since V2 has introduced some new components and the
>         whole association of the resources with each other has changed,
>         should we still be able to do what Kevin has mentioned?
>         
>         Thanks  
>         Banashankar
>         
>         
>         
>         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>         <Kevin.Fox at pnnl.gov> wrote:
>                 There needs to be a way to have both v1 and v2
>                 supported in one engine....
>                 
>                 Say I have templates that use v1 already in existence
>                 (I do), and I want to be able to heat stack update on
>                 them one at a time to v2. This will replace the v1 lb
>                 with v2, migrating the floating ip from the v1 lb to
>                 the v2 one. This gives a smoothish upgrade path.
>                 
>                 Thanks,
>                 Kevin
>                 ________________________________________
>                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>                 Sent: Tuesday, September 22, 2015 3:22 PM
>                 To: openstack-dev at lists.openstack.org
>                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for LbaasV2
>                 
>                 Well I'd hate to have the V2 postfix on it because V1
>                 will be deprecated
>                 and removed, which means the V2 being there would be
>                 lame.  Is there any
>                 kind of precedent set for how to handle this?
>                 
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>                 wrote:
>                 > So are we thinking of making it as ?
>                 > OS::Neutron::LoadBalancerV2
>                 >
>                 > OS::Neutron::ListenerV2
>                 >
>                 > OS::Neutron::PoolV2
>                 >
>                 > OS::Neutron::PoolMemberV2
>                 >
>                 > OS::Neutron::HealthMonitorV2
>                 >
>                 >
>                 >
>                 > and add all those into the loadbalancer.py of heat
>                 engine ?
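
If the naming above were adopted, registration could follow Heat's usual
plugin pattern of a module-level resource_mapping(). The class bodies
below are empty placeholders, not a real implementation:

```python
# Hypothetical sketch of how the proposed V2 resources could be wired up
# in Heat; in practice each class would subclass
# heat.engine.resource.Resource and define a properties_schema.

class LoadBalancerV2(object):
    pass

class ListenerV2(object):
    pass

class PoolV2(object):
    pass

class PoolMemberV2(object):
    pass

class HealthMonitorV2(object):
    pass

def resource_mapping():
    # Heat resource plugins expose a resource_mapping() that maps
    # template type names to resource classes.
    return {
        'OS::Neutron::LoadBalancerV2': LoadBalancerV2,
        'OS::Neutron::ListenerV2': ListenerV2,
        'OS::Neutron::PoolV2': PoolV2,
        'OS::Neutron::PoolMemberV2': PoolMemberV2,
        'OS::Neutron::HealthMonitorV2': HealthMonitorV2,
    }
```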
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>                 > <skraynev at mirantis.com> wrote:
>                 >         Brandon.
>                 >
>                 >
>                 >         As I understand it, v1 and v2 have
>                 >         differences in the list of objects and
>                 >         also in the relationships between them.
>                 >         So I don't think that it will be easy to
>                 >         upgrade old resources (unfortunately).
>                 >         I'd agree with Kevin's second suggestion
>                 >         about implementing new resources in this
>                 >         case.
>                 >
>                 >
>                 >         I see that a lot of guys want to help
>                 >         with it :) And I suppose that me and Rabi
>                 >         Mishra may try to help with it, because
>                 >         we were involved in the implementation of
>                 >         v1 resources in Heat.
>                 >         Follow the list of v1 lbaas resources in
>                 >         Heat:
>                 >         Follow the list of v1 lbaas resources in
>                 Heat:
>                 >
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>                 >
>                 >
>                 >
>                 >         Also, I suppose that it may be discussed
>                 >         during summit talks :)
>                 >         Will add it to the etherpad with potential
>                 >         sessions.
>                 >
>                 >
>                 >
>                 >         Regards,
>                 >         Sergey.
>                 >
>                 >         On 22 September 2015 at 22:27, Brandon Logan
>                 >         <brandon.logan at rackspace.com> wrote:
>                 >                 There is some overlap, but there
>                 >                 were some incompatible differences
>                 >                 when we started designing v2.  I'm
>                 >                 sure the same issues will arise
>                 >                 this time around, so new resources
>                 >                 sound like the path to go.
>                 >                 However, I do not know much about
>                 >                 Heat and the resources, so I'm
>                 >                 speaking on a very uneducated
>                 >                 level here.
>                 >
>                 >                 Thanks,
>                 >                 Brandon
>                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>                 Fox, Kevin M wrote:
>                 >                 > We're using the v1 resources...
>                 >                 >
>                 >                 > If the v2 ones are compatible and
>                 can seamlessly
>                 >                 upgrade, great
>                 >                 >
>                 >                 > Otherwise, make new ones please.
>                 >                 >
>                 >                 > Thanks,
>                 >                 > Kevin
>                 >                 >
>                 >                 >
>                 >
>                  ______________________________________________________________________
>                 >                 > From: Banashankar KV
>                 [banveerad at gmail.com]
>                 >                 > Sent: Tuesday, September 22, 2015
>                 10:07 AM
>                 >                 > To: OpenStack Development Mailing
>                 List (not for
>                 >                 usage questions)
>                 >                 > Subject: Re: [openstack-dev]
>                 [neutron][lbaas] - Heat
>                 >                 support for
>                 >                 > LbaasV2
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Hi Brandon,
>                 >                 > Work in progress, but I need some
>                 >                 > input on the way we want them:
>                 >                 > replace the existing lbaasv1, or
>                 >                 > do we still need to support them?
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Thanks
>                 >                 > Banashankar
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
>                 Brandon Logan
>                 >                 > <brandon.logan at rackspace.com>
>                 wrote:
>                 >                 >         Hi Banashankar,
>                 >                 >         I think it'd be great if
>                 >                 >         you got this going. It's
>                 >                 >         one of those things we
>                 >                 >         want to have and people
>                 >                 >         ask for, but it has
>                 >                 >         always gotten a lower
>                 >                 >         priority due to the
>                 >                 >         critical things needed.
>                 >                 >
>                 >                 >         Thanks,
>                 >                 >         Brandon
>                 >                 >         On Mon, 2015-09-21 at
>                 17:57 -0700,
>                 >                 Banashankar KV wrote:
>                 >                 >         > Hi All,
>                 >                 >         > I was thinking of
>                 >                 >         > starting the work on
>                 >                 >         > heat to support
>                 >                 >         > LBaasV2. Is there any
>                 >                 >         > concern about that?
>                 >                 >         >
>                 >                 >         >
>                 >                 >         > I don't know if it is
>                 >                 >         > the right time to
>                 >                 >         > bring this up :D .
>                 >                 >         >
>                 >                 >         > Thanks,
>                 >                 >         > Banashankar (bana_k)
>                 >                 >         >
>                 >                 >         >
>                 >                 >
>                 >                 >         >
>                 >                 >
>                 >
>                 __________________________________________________________________________
>                 >         > OpenStack Development Mailing List (not for usage questions)
>                 >         > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 >         > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From Kevin.Fox at pnnl.gov  Wed Sep 23 01:01:53 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 23 Sep 2015 01:01:53 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1442968743.30604.13.camel@localhost>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>

As I understand it, loadbalancer in v2 is more like pool was in v1. Can we make it so that if you are using the loadbalancer resource and have the mandatory v2 properties, it tries to use the v2 API; otherwise it's a v1 resource? PoolMember should be OK being the same: it just needs to call v1 or v2 depending on whether the lb it's pointing at is v1 or v2. Is the monitor's API different between them? Can it be like pool member?
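
Kevin's version-detection idea could be sketched roughly like this. The
property names below are invented for illustration, and a real Heat
resource would do this inside its handlers rather than in free
functions:

```python
# Illustrative sketch of detecting the LBaaS API version from the
# properties the template author supplied. The v2-only property names
# here are hypothetical, not the real schema.

V2_ONLY_PROPERTIES = {"vip_subnet", "listeners"}

def detect_api_version(properties):
    """Treat the resource as v2 if any v2-only property is present."""
    if V2_ONLY_PROPERTIES & set(properties):
        return 2
    return 1

def handle_create(properties):
    # A resource handler could then pick the URL prefix (and client
    # calls) based on the detected version.
    version = detect_api_version(properties)
    prefix = "lbaas" if version == 2 else "lb"
    return version, prefix
```

The trade-off, as noted in the thread, is that this only works if some
mandatory v2 property never appears in valid v1 templates.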

Thanks,
Kevin

________________________________
From: Brandon Logan
Sent: Tuesday, September 22, 2015 5:39:03 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

So for the API v1s api is of the structure:

<neutron-endpoint>/lb/(vip|pool|member|health_monitor)

V2s is:
<neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)

member is a child of pool, so it would go down one level.

The only difference is the lb for v1 and lbaas for v2.  Not sure if that
is enough of a different.

Thanks,
Brandon
On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
> Thats the problem. :/
>
> I can't think of a way to have them coexist without: breaking old
> templates, including v2 in the name, or having a flag on the resource
> saying the version is v2. And as an app developer I'd rather not have
> my existing templates break.
>
> I haven't compared the api's at all, but is there a required field of
> v2 that is different enough from v1 that by its simple existence in
> the resource you can tell a v2 from a v1 object? Would something like
> that work? PoolMember wouldn't have to change, the same resource could
> probably work for whatever lb it was pointing at I'm guessing.
>
> Thanks,
> Kevin
>
>
>
> ______________________________________________________________________
> From: Banashankar KV [banveerad at gmail.com]
> Sent: Tuesday, September 22, 2015 4:40 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> LbaasV2
>
>
>
> Ok, sounds good. So now the question is how should we name the new V2
> resources ?
>
>
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
> wrote:
>         Yes, hence the need to support the v2 resources as seperate
>         things. Then I can rewrite the templates to include the new
>         resources rather then the old resources as appropriate. IE, it
>         will be a porting effort to rewrite them. Then do a heat
>         update on the stack to migrate it from lbv1 to lbv2. Since
>         they are different resources, it should create the new and
>         delete the old.
>
>         Thanks,
>         Kevin
>
>
>         ______________________________________________________________
>         From: Banashankar KV [banveerad at gmail.com]
>         Sent: Tuesday, September 22, 2015 4:16 PM
>
>         To: OpenStack Development Mailing List (not for usage
>         questions)
>         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>         for LbaasV2
>
>
>
>
>         But I think, V2 has introduced some new components and whole
>         association of the resources with each other is changed, we
>         should be still able to do what Kevin has mentioned ?
>
>         Thanks
>         Banashankar
>
>
>
>         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>         <Kevin.Fox at pnnl.gov> wrote:
>                 There needs to be a way to have both v1 and v2
>                 supported in one engine....
>
>                 Say I have templates that use v1 already in existence
>                 (I do), and I want to be able to heat stack update on
>                 them one at a time to v2. This will replace the v1 lb
>                 with v2, migrating the floating ip from the v1 lb to
>                 the v2 one. This gives a smoothish upgrade path.
>
>                 Thanks,
>                 Kevin
>                 ________________________________________
>                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>                 Sent: Tuesday, September 22, 2015 3:22 PM
>                 To: openstack-dev at lists.openstack.org
>                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for LbaasV2
>
>                 Well I'd hate to have the V2 postfix on it because V1
>                 will be deprecated
>                 and removed, which means the V2 being there would be
>                 lame.  Is there any
>                 kind of precedent set for for how to handle this?
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>                 wrote:
>                 > So are we thinking of making it as ?
>                 > OS::Neutron::LoadBalancerV2
>                 >
>                 > OS::Neutron::ListenerV2
>                 >
>                 > OS::Neutron::PoolV2
>                 >
>                 > OS::Neutron::PoolMemberV2
>                 >
>                 > OS::Neutron::HealthMonitorV2
>                 >
>                 >
>                 >
>                 > and add all those into the loadbalancer.py of heat
>                 engine ?
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>                 > <skraynev at mirantis.com> wrote:
>                 >         Brandon.
>                 >
>                 >
>                 >         As I understand we v1 and v2 have
>                 differences also in list of
>                 >         objects and also in relationships between
>                 them.
>                 >         So I don't think that it will be easy to
>                 upgrade old resources
>                 >         (unfortunately).
>                 >         I'd agree with second Kevin's suggestion
>                 about implementation
>                 >         new resources in this case.
>                 >
>                 >
>                 >         I see, that a lot of guys, who wants to help
>                 with it :) And I
>                 >         suppose, that me and Rabi Mishra may try to
>                 help with it,
>                 >         because we was involvement in implementation
>                 of v1 resources
>                 >         in Heat.
>                 >         Follow the list of v1 lbaas resources in
>                 Heat:
>                 >
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>                 >
>                 >
>                 >
>                 >         Also, I suppose, that it may be discussed
>                 during summit
>                 >         talks :)
>                 >         Will add to etherpad with potential
>                 sessions.
>                 >
>                 >
>                 >
>                 >         Regards,
>                 >         Sergey.
>                 >
>                 >         On 22 September 2015 at 22:27, Brandon Logan
>                 >         <brandon.logan at rackspace.com> wrote:
>                 >                 There is some overlap, but there was
>                 some incompatible
>                 >                 differences when
>                 >                 we started designing v2.  I'm sure
>                 the same issues
>                 >                 will arise this time
>                 >                 around so new resources sounds like
>                 the path to go.
>                 >                 However, I do not
>                 >                 know much about Heat and the
>                 resources so I'm speaking
>                 >                 on a very
>                 >                 uneducated level here.
>                 >
>                 >                 Thanks,
>                 >                 Brandon
>                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>                 Fox, Kevin M wrote:
>                 >                 > We're using the v1 resources...
>                 >                 >
>                 >                 > If the v2 ones are compatible and
>                 can seamlessly
>                 >                 upgrade, great
>                 >                 >
>                 >                 > Otherwise, make new ones please.
>                 >                 >
>                 >                 > Thanks,
>                 >                 > Kevin
>                 >                 >
>                 >                 >
>                 >
>                  ______________________________________________________________________
>                 >                 > From: Banashankar KV
>                 [banveerad at gmail.com]
>                 >                 > Sent: Tuesday, September 22, 2015
>                 10:07 AM
>                 >                 > To: OpenStack Development Mailing
>                 List (not for
>                 >                 usage questions)
>                 >                 > Subject: Re: [openstack-dev]
>                 [neutron][lbaas] - Heat
>                 >                 support for
>                 >                 > LbaasV2
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Hi Brandon,
>                 >                 > Work in progress, but need some
>                 input on the way we
>                 >                 want them, like
>                 >                 > replace the existing lbaasv1 or we
>                 still need to
>                 >                 support them ?
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Thanks
>                 >                 > Banashankar
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
>                 Brandon Logan
>                 >                 > <brandon.logan at rackspace.com>
>                 wrote:
>                 >                 >         Hi Banashankar,
>                 >                 >         I think it'd be great if
>                 you got this going.
>                 >                 One of those
>                 >                 >         things we
>                 >                 >         want to have and people
>                 ask for but has
>                 >                 always gotten a lower
>                 >                 >         priority
>                 >                 >         due to the critical things
>                 needed.
>                 >                 >
>                 >                 >         Thanks,
>                 >                 >         Brandon
>                 >                 >         On Mon, 2015-09-21 at
>                 17:57 -0700,
>                 >                 Banashankar KV wrote:
>                 >                 >         > Hi All,
>                 >                 >         > I was thinking of
>                 starting the work on
>                 >                 heat to support
>                 >                 >         LBaasV2,  Is
>                 >                 >         > there any concerns about
>                 that?
>                 >                 >         >
>                 >                 >         >
>                 >                 >         > I don't know if it is
>                 the right time to
>                 >                 bring this up :D .
>                 >                 >         >
>                 >                 >         > Thanks,
>                 >                 >         > Banashankar (bana_k)
>                 >                 >         >
>                 >                 >         >
>                 >                 >
>                 >                 >         >
>                 >                 >
>                 >
>                 __________________________________________________________________________
>                 >                 >         > OpenStack Development
>                 Mailing List (not
>                 >                 for usage questions)
>                 >                 >         > Unsubscribe:
>                 >                 >
>                 >
>                 OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>                 >                 >         >
>                 >                 >
>                 >
>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhengzhenyulixi at gmail.com  Wed Sep 23 01:18:54 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Wed, 23 Sep 2015 09:18:54 +0800
Subject: [openstack-dev] [nova] [api] Nova currently handles list with
 limit=0 quite different for different objects.
In-Reply-To: <CAO0b__9z7sjS3Gqm1uj2z=6X9Sz9uG9V1eUYQWSeCpkchxroEQ@mail.gmail.com>
References: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>
 <1441987655.14645.36.camel@einstein.kev>
 <CAO0b__9z7sjS3Gqm1uj2z=6X9Sz9uG9V1eUYQWSeCpkchxroEQ@mail.gmail.com>
Message-ID: <CAO0b___x667t-DnpaRysp0=GB-+5YPej4mTjpci6RCGNZnwDrA@mail.gmail.com>

Any thoughts on this?
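
To make the divergence concrete, here is a minimal standalone sketch (not actual Nova code; the function names and the max-limit value are invented) of the two interpretations of limit=0 described in the quoted thread below:

```python
# Minimal standalone sketch (not actual Nova code; names invented) of the
# two interpretations of limit=0 discussed in this thread.

MAX_LIMIT = 1000  # stands in for a configured osapi max limit


def limited_api_comment_style(items, limit=None):
    """API-layer comment semantics: a falsy limit (None or 0) means
    'apply the configured maximum limit'."""
    if not limit:
        limit = MAX_LIMIT
    return items[:limit]


def limited_db_servers_style(items, limit=None):
    """DB-layer behavior for servers: limit=0 short-circuits to []."""
    if limit == 0:
        return []
    return items if limit is None else items[:limit]


items = list(range(5))
print(limited_api_comment_style(items, limit=0))  # [0, 1, 2, 3, 4]
print(limited_db_servers_style(items, limit=0))   # []
```

The reported bug is that different Nova objects currently land on different sides of this split.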

On Mon, Sep 14, 2015 at 11:53 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
wrote:

> Hi, thanks for your reply. After checking again, I agree with you. I think
> we should come up with a conclusion about how we should treat limit=0
> across nova, and that's also why I sent out this mail. I will register this
> topic in the API meeting open discussion section; maybe a BP in M to fix
> this.
>
> BR,
>
> Zheng
>
> On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <
> kevin.mitchell at rackspace.com> wrote:
>
>> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
>> > Hi, I found out that nova currently handles list with limit=0 quite
>> > differently for different objects.
>> >
>> > Especially when list servers:
>> >
>> > According to the code:
>> >
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
>> >
>> > when limit = 0, it should apply as max_limit, but currently, in:
>> >
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
>> >
>> > we directly return [], which is quite different from the comment in the
>> > API code.
>> >
>> >
>> > I checked other objects:
>> >
>> > When listing security groups and server groups, it returns as if no
>> > limit had been set. And for flavors it returns []. I will continue to
>> > try out other APIs if needed.
>> >
>> > I think maybe we should make a rule for all objects, and at least fix
>> > servers so the API and DB code behave the same.
>> >
>> > I have reported a bug in launchpad:
>> >
>> > https://bugs.launchpad.net/nova/+bug/1494617
>> >
>> >
>> > Any suggestions?
>>
>> After seeing the test failures that showed up on your proposed fix, I'm
>> thinking that the proposed change reads like an API change, requiring a
>> microversion bump.  That said, I approve of increased consistency across
>> the API, and perhaps the behavior on limit=0 is something the API group
>> needs to discuss a guideline for?
>> --
>> Kevin L. Mitchell <kevin.mitchell at rackspace.com>
>> Rackspace
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>

From tony.a.wang at alcatel-lucent.com  Wed Sep 23 02:01:42 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Wed, 23 Sep 2015 02:01:42 +0000
Subject: [openstack-dev] [neutron] Does neutron ovn plugin support to
 setup multiple neutron networks for one container?
In-Reply-To: <560185DC.4060103@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78CE20@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Russell,

Thanks for your info.
If I want to assign multiple interfaces to a container on different neutron networks (for example, netA and netB), is it mandatory for the VM hosting the containers to have network interfaces in netA and netB, with OVN directing the container traffic to the corresponding VM network interfaces?

from https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md :
"This VLAN tag is stripped out in the hypervisor by OVN."
I suppose that by the time the traffic leaves the VM, the VLAN tag has already been stripped out.
When the traffic arrives at the OVS instance on the physical host, it will be tagged with the neutron local VLAN. Is that right?

Thanks in advance,
Tony

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Wednesday, September 23, 2015 12:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?

On 09/22/2015 08:08 AM, WANG, Ming Hao (Tony T) wrote:
> Dear all,
> 
> Regarding the neutron OVN plugin's support for containers in one VM, my understanding is that one container can't be assigned two network interfaces on different neutron networks. Is that right?
> The reason:
> 1. One host VM only has one network interface.
> 2. All the VLAN tags are stripped out when the packet leaves the VM.
> 
> If that is true, does the neutron OVN plugin or OVN plan to support this?

You should be able to assign multiple interfaces to a container on different networks.  The traffic for each interface will be tagged with a unique VLAN ID on its way in and out of the VM, the same way it is done for each container with a single interface.
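
As a toy illustration (hypothetical Python model, not OVN code; the tag numbers and interface names are invented), the per-interface tagging described above amounts to a tag-to-network lookup at the hypervisor:

```python
# Hypothetical toy model (not OVN code) of the scheme described above:
# each container interface inside the VM gets a unique VLAN tag, and the
# hypervisor strips the tag and maps the frame to the right logical network.

# tag -> (container interface, neutron network); values are invented.
child_ports = {
    42: ("container1-eth0", "netA"),
    43: ("container1-eth1", "netB"),
}


def hypervisor_receive(tag):
    """Strip the VLAN tag and classify the frame onto a logical network."""
    iface, network = child_ports[tag]
    return {"interface": iface, "network": network, "tag_stripped": True}


print(hypervisor_receive(42)["network"])  # netA
print(hypervisor_receive(43)["network"])  # netB
```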

--
Russell Bryant

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From zhengzhenyulixi at gmail.com  Wed Sep 23 03:24:20 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Wed, 23 Sep 2015 11:24:20 +0800
Subject: [openstack-dev]  [nova] About availability zones
Message-ID: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>

Hi, all

I have a question about availability zones when performing live-migration.

Currently, when performing live-migration, the AZ of the instance isn't
updated. Consider a use case like this:
Instance_1 is on host1, which is in az1, and we live-migrate it to host2
(providing host2 in the API request), which is in az2. The operation will
succeed, but the availability zone stored for instance_1 is still az1, which
may cause an inconsistency between the AZ data stored in the instance DB
record and the actual AZ. I think updating the AZ information in the
instance using the host's AZ can solve this.
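
The suggested fix could be sketched roughly like this (hypothetical helper, not actual Nova code; the host-to-AZ lookup stands in for Nova's host-aggregate metadata):

```python
# Hypothetical helper (not actual Nova code) for the fix suggested above:
# after a live-migration completes, refresh the instance's stored AZ from
# the destination host's AZ. get_host_az stands in for a lookup against
# host-aggregate metadata.

def sync_instance_az(instance, dest_host, get_host_az):
    """Make instance['availability_zone'] match the destination host's AZ."""
    host_az = get_host_az(dest_host)
    if instance.get("availability_zone") != host_az:
        instance["availability_zone"] = host_az
    return instance


host_to_az = {"host1": "az1", "host2": "az2"}  # invented topology
inst = {"name": "instance_1", "availability_zone": "az1"}

# After live-migrating inst from host1 to host2:
sync_instance_az(inst, "host2", host_to_az.get)
print(inst["availability_zone"])  # az2
```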

Also, I have heard from my colleague that in the future we are planning to
use host AZ information for instances. I couldn't find information about
this; could anyone provide me some information about it if that's true?

Thanks,

Best Regards,

Zheng

From vilobhmeshram.openstack at gmail.com  Wed Sep 23 03:57:07 2015
From: vilobhmeshram.openstack at gmail.com (Vilobh Meshram)
Date: Tue, 22 Sep 2015 20:57:07 -0700
Subject: [openstack-dev]  [magnum] Unit Tests Objects from Bay
Message-ID: <CAPJ8RRVLfxFXi7J+yywosOQ6byD0kS8ZZNFR3gWtvBT7OOGy3g@mail.gmail.com>

Hi All,

As discussed in today's weekly meeting, here is the list of unit tests that
need to be modified as part of the "Objects from Bay" feature [1].

Right now, after the switch to the k8s v1 API yesterday, the changes for the
Replication Controller are up [2]; I will update the patches for the Service
object and the Pod object by tomorrow.

The unit tests that will be affected for the Replication Controller, Service,
and Pod objects are covered in this etherpad:

https://etherpad.openstack.org/p/objects-from-bay-tests

-Vilobh

[1] https://blueprints.launchpad.net/magnum/+spec/objects-from-bay
[2] https://review.openstack.org/#/c/213368/

From banveerad at gmail.com  Wed Sep 23 04:14:35 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 22 Sep 2015 21:14:35 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
Message-ID: <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>

What do you think about separating both of them with the naming Doug
mentioned? In the future, if we want to get rid of v1 we can just remove
that namespace. Everything will be clean.

Thanks
Banashankar


On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> As I understand it, loadbalancer in v2 is more like pool was in v1. Can we
> make it such that if you are using the loadbalancer resource and have the
> mandatory v2 properties, it tries to use the v2 API; otherwise it's a v1
> resource? PoolMember should be OK staying the same. It just needs to call v1
> or v2 depending on whether the lb it's pointing at is v1 or v2. Is the monitor API
> different between them? Can it be like pool member?
>
> Thanks,
> Kevin
>
> ------------------------------
> *From:* Brandon Logan
> *Sent:* Tuesday, September 22, 2015 5:39:03 PM
>
> *To:* openstack-dev at lists.openstack.org
> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>
> So the v1 API is of the structure:
>
> <neutron-endpoint>/lb/(vip|pool|member|health_monitor)
>
> V2s is:
> <neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)
>
> member is a child of pool, so it would go down one level.
>
> The only difference is the lb prefix for v1 and lbaas for v2.  Not sure if
> that is enough of a difference.
>
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
> > Thats the problem. :/
> >
> > I can't think of a way to have them coexist without: breaking old
> > templates, including v2 in the name, or having a flag on the resource
> > saying the version is v2. And as an app developer I'd rather not have
> > my existing templates break.
> >
> > I haven't compared the APIs at all, but is there a required field of
> > v2 that is different enough from v1 that by its simple existence in
> > the resource you can tell a v2 from a v1 object? Would something like
> > that work? PoolMember wouldn't have to change, the same resource could
> > probably work for whatever lb it was pointing at, I'm guessing.
> >
> > Thanks,
> > Kevin
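
Kevin's property-sniffing idea above could be sketched like this (hypothetical, not Heat code; the property names are invented placeholders, not Heat's real resource schemas):

```python
# Hypothetical sketch of the idea above: infer the LBaaS version from
# which properties a template supplies, instead of adding a V2 name
# suffix. The property names are invented placeholders.

V2_ONLY_PROPS = {"loadbalancer_id", "listener"}  # assumed v2-mandatory
V1_ONLY_PROPS = {"vip"}                          # assumed v1-only


def detect_lbaas_version(properties):
    keys = set(properties)
    if keys & V2_ONLY_PROPS:
        return "v2"
    if keys & V1_ONLY_PROPS:
        return "v1"
    return "ambiguous"  # would need an explicit flag or a default


print(detect_lbaas_version({"vip": {}, "members": []}))               # v1
print(detect_lbaas_version({"listener": {}, "loadbalancer_id": "x"}))  # v2
```

The "ambiguous" branch is exactly the risk with this approach: a template supplying only shared properties gives no signal, which is one argument for separate resource types instead.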
> >
> >
> >
> > ______________________________________________________________________
> > From: Banashankar KV [banveerad at gmail.com]
> > Sent: Tuesday, September 22, 2015 4:40 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> > LbaasV2
> >
> >
> >
> > Ok, sounds good. So now the question is how should we name the new V2
> > resources?
> >
> >
> >
> > Thanks
> > Banashankar
> >
> >
> >
> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
> > wrote:
> >         Yes, hence the need to support the v2 resources as separate
> >         things. Then I can rewrite the templates to include the new
> >         resources rather than the old resources as appropriate. I.e., it
> >         will be a porting effort to rewrite them. Then do a heat
> >         update on the stack to migrate it from lbv1 to lbv2. Since
> >         they are different resources, it should create the new and
> >         delete the old.
> >
> >         Thanks,
> >         Kevin
> >
> >
> >         ______________________________________________________________
> >         From: Banashankar KV [banveerad at gmail.com]
> >         Sent: Tuesday, September 22, 2015 4:16 PM
> >
> >         To: OpenStack Development Mailing List (not for usage
> >         questions)
> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
> >         for LbaasV2
> >
> >
> >
> >
>                 >         But I think, even though V2 has introduced some new components and the whole
>                 >         association of the resources with each other has changed, we
>                 >         should still be able to do what Kevin has mentioned, right?
> >
> >         Thanks
> >         Banashankar
> >
> >
> >
> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
> >         <Kevin.Fox at pnnl.gov> wrote:
> >                 There needs to be a way to have both v1 and v2
> >                 supported in one engine....
> >
> >                 Say I have templates that use v1 already in existence
> >                 (I do), and I want to be able to heat stack update on
> >                 them one at a time to v2. This will replace the v1 lb
> >                 with v2, migrating the floating ip from the v1 lb to
> >                 the v2 one. This gives a smoothish upgrade path.
> >
> >                 Thanks,
> >                 Kevin
> >                 ________________________________________
> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
> >                 Sent: Tuesday, September 22, 2015 3:22 PM
> >                 To: openstack-dev at lists.openstack.org
> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
> >                 support for LbaasV2
> >
> >                 Well I'd hate to have the V2 postfix on it because V1
> >                 will be deprecated
> >                 and removed, which means the V2 being there would be
> >                 lame.  Is there any
>                 >                 kind of precedent set for how to handle this?
> >
> >                 Thanks,
> >                 Brandon
> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
> >                 wrote:
>                 > So are we thinking of naming them as follows?
> >                 > OS::Neutron::LoadBalancerV2
> >                 >
> >                 > OS::Neutron::ListenerV2
> >                 >
> >                 > OS::Neutron::PoolV2
> >                 >
> >                 > OS::Neutron::PoolMemberV2
> >                 >
> >                 > OS::Neutron::HealthMonitorV2
> >                 >
> >                 >
> >                 >
> >                 > and add all those into the loadbalancer.py of heat
> >                 engine ?
> >                 >
> >                 > Thanks
> >                 > Banashankar
> >                 >
> >                 >
> >                 >
> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> >                 > <skraynev at mirantis.com> wrote:
> >                 >         Brandon.
> >                 >
> >                 >
>                 >                 >         As I understand it, v1 and v2 have
>                 differences in the list of
>                 >                 >         objects and in the relationships between
>                 them.
> >                 >         So I don't think that it will be easy to
> >                 upgrade old resources
> >                 >         (unfortunately).
>                 >                 >         I'd agree with Kevin's second suggestion
>                 about implementing
> >                 >         new resources in this case.
> >                 >
> >                 >
>                 >                 >         I see that a lot of guys want to help
>                 with it :) And I
>                 >                 >         suppose that Rabi Mishra and I may try to
>                 help with it,
>                 >                 >         because we were involved in the implementation
>                 of the v1 resources
>                 >                 >         in Heat.
>                 >                 >         Following is the list of v1 LBaaS resources in
>                 Heat:
> >                 >
> >                 >
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
> >                 >
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
> >                 >
> >                 >
> >
> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
> >                 >
> >                 >
> >                 >
> >                 >         Also, I suppose, that it may be discussed
> >                 during summit
> >                 >         talks :)
> >                 >         Will add to etherpad with potential
> >                 sessions.
> >                 >
> >                 >
> >                 >
> >                 >         Regards,
> >                 >         Sergey.
> >                 >
> >                 >         On 22 September 2015 at 22:27, Brandon Logan
> >                 >         <brandon.logan at rackspace.com> wrote:
>                 >                 >                 There is some overlap, but there were
> >                 some incompatible
> >                 >                 differences when
> >                 >                 we started designing v2.  I'm sure
> >                 the same issues
> >                 >                 will arise this time
>                 >                 >                 around, so new resources sound like
> >                 the path to go.
> >                 >                 However, I do not
> >                 >                 know much about Heat and the
> >                 resources so I'm speaking
> >                 >                 on a very
> >                 >                 uneducated level here.
> >                 >
> >                 >                 Thanks,
> >                 >                 Brandon
> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
> >                 Fox, Kevin M wrote:
> >                 >                 > We're using the v1 resources...
> >                 >                 >
> >                 >                 > If the v2 ones are compatible and
> >                 can seamlessly
> >                 >                 upgrade, great
> >                 >                 >
> >                 >                 > Otherwise, make new ones please.
> >                 >                 >
> >                 >                 > Thanks,
> >                 >                 > Kevin
> >                 >                 >
> >                 >                 >
> >                 >
> >
> ______________________________________________________________________
> >                 >                 > From: Banashankar KV
> >                 [banveerad at gmail.com]
> >                 >                 > Sent: Tuesday, September 22, 2015
> >                 10:07 AM
> >                 >                 > To: OpenStack Development Mailing
> >                 List (not for
> >                 >                 usage questions)
> >                 >                 > Subject: Re: [openstack-dev]
> >                 [neutron][lbaas] - Heat
> >                 >                 support for
> >                 >                 > LbaasV2
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 > Hi Brandon,
> >                 >                 > Work in progress, but need some
> >                 input on the way we
> >                 >                 want them, like
>                 >                 > replace the existing lbaasv1, or do we
>                 still need to
>                 >                 support them?
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 > Thanks
> >                 >                 > Banashankar
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
> >                 Brandon Logan
> >                 >                 > <brandon.logan at rackspace.com>
> >                 wrote:
> >                 >                 >         Hi Banashankar,
> >                 >                 >         I think it'd be great if
> >                 you got this going.
> >                 >                 One of those
> >                 >                 >         things we
> >                 >                 >         want to have and people
> >                 ask for but has
> >                 >                 always gotten a lower
> >                 >                 >         priority
> >                 >                 >         due to the critical things
> >                 needed.
> >                 >                 >
> >                 >                 >         Thanks,
> >                 >                 >         Brandon
> >                 >                 >         On Mon, 2015-09-21 at
> >                 17:57 -0700,
> >                 >                 Banashankar KV wrote:
> >                 >                 >         > Hi All,
> >                 >                 >         > I was thinking of
> >                 starting the work on
> >                 >                 heat to support
> >                 >                 >         LBaasV2,  Is
> >                 >                 >         > there any concerns about
> >                 that?
> >                 >                 >         >
> >                 >                 >         >
> >                 >                 >         > I don't know if it is
> >                 the right time to
> >                 >                 bring this up :D .
> >                 >                 >         >
> >                 >                 >         > Thanks,
> >                 >                 >         > Banashankar (bana_k)
> >                 >                 >         >
> >                 >                 >         >
> >                 >                 >
> >                 >                 >         >
> >                 >                 >
> >                 >
> >
> __________________________________________________________________________
> >                 >                 >         > OpenStack Development
> >                 Mailing List (not
> >                 >                 for usage questions)
> >                 >                 >         > Unsubscribe:
> >                 >                 >
> >                 >
> >
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >                 >                 >         >
> >                 >                 >
> >                 >
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >                 >                 >
> >                 >                 >
> >                 >
> >
> __________________________________________________________________________
> >                 >                 >         OpenStack Development
> >                 Mailing List (not for
> >                 >                 usage questions)
> >                 >                 >         Unsubscribe:
> >                 >                 >
> >                 >
> >
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >                 >                 >
> >                 >
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >                 >
> >                 >
> >
> __________________________________________________________________________
> >                 >                 > OpenStack Development Mailing List
> >                 (not for usage
> >                 >                 questions)
> >                 >                 > Unsubscribe:
> >                 >
> >
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >                 >                 >
> >                 >
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >                 >
> >                 >
> >
> __________________________________________________________________________
> >                 >                 OpenStack Development Mailing List
> >                 (not for usage
> >                 >                 questions)
> >                 >                 Unsubscribe:
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150922/4333fd9f/attachment.html>

From harlowja at outlook.com  Wed Sep 23 04:44:23 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Tue, 22 Sep 2015 21:44:23 -0700
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <5601EEF5.9090301@windriver.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <5601EEF5.9090301@windriver.com>
Message-ID: <BLU436-SMTP45BB8208D2DE33E1B782D2D8440@phx.gbl>

Chris Friesen wrote:
> On 09/22/2015 05:48 PM, Joshua Harlow wrote:
>> A present:
>>
>> >>> import contextlib
>> >>> import os
>> >>>
>> >>> @contextlib.contextmanager
>> ... def synced_file(path, mode='wb'):
>> ...     with open(path, mode) as fh:
>> ...         yield fh
>> ...         os.fdatasync(fh.fileno())
>> ...
>> >>> with synced_file("/tmp/b.txt") as fh:
>> ...     fh.write("b")
>
> Isn't that missing an "fh.flush()" somewhere before the fdatasync()?

I was testing you, obviously, lol, congrats you passed ;)
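For anyone reading this in the archive, here is the pattern with the missing flush restored (an illustrative sketch only, not the code that ultimately landed in Cinder; the file path is just for demonstration):

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def synced_file(path, mode='wb'):
    """Open a file; on a clean exit, flush and fdatasync it."""
    with open(path, mode) as fh:
        yield fh
        fh.flush()                 # drain Python's userspace buffer first
        os.fdatasync(fh.fileno())  # then force the kernel to write the data

# Usage: write a small file durably (bytes, since the mode is 'wb').
path = os.path.join(tempfile.gettempdir(), 'synced_example.txt')
with synced_file(path) as fh:
    fh.write(b'b')
```

Note that fdatasync skips syncing metadata such as mtime; os.fsync would be the stricter choice if the file's metadata matters too.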

>
> Chris
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From hejie.xu at intel.com  Wed Sep 23 04:56:40 2015
From: hejie.xu at intel.com (Alex Xu)
Date: Wed, 23 Sep 2015 12:56:40 +0800
Subject: [openstack-dev] [nova] [api] Nova currently handles list with
	limit=0 quite different for different objects.
In-Reply-To: <CAO0b___x667t-DnpaRysp0=GB-+5YPej4mTjpci6RCGNZnwDrA@mail.gmail.com>
References: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>
 <1441987655.14645.36.camel@einstein.kev>
 <CAO0b__9z7sjS3Gqm1uj2z=6X9Sz9uG9V1eUYQWSeCpkchxroEQ@mail.gmail.com>
 <CAO0b___x667t-DnpaRysp0=GB-+5YPej4mTjpci6RCGNZnwDrA@mail.gmail.com>
Message-ID: <87D7392F-8BC7-4A28-8CDF-644F2956DA0A@intel.com>

Hi, Zhenyu,

We discussed this in yesterday's Nova API meeting. We think the behavior should be made consistent through the API-WG.

There is already a patch for a pagination guideline, https://review.openstack.org/190743 , and there is also some discussion there about limits.
So we had better wait for the guideline to settle before fixing this.

Thanks
Alex

> On Sep 23, 2015, at 9:18 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com> wrote:
> 
> Any thoughts on this?
> 
> On Mon, Sep 14, 2015 at 11:53 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com> wrote:
> Hi, thanks for your reply; after checking again, I agree with you. I think we should come up with a conclusion about how we should treat this limit=0 across nova, and that's also why I sent out this mail. I will register this topic in the API meeting open discussion section; maybe a BP in M to fix this.
> 
> BR,
> 
> Zheng
> 
> On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <kevin.mitchell at rackspace.com> wrote:
> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
> > Hi, I found out that nova currently handles list with limit=0 quite
> > different for different objects.
> >
> > Especially when list servers:
> >
> > According to the code:
> > http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
> >
> > when limit = 0, it should apply as max_limit, but currently, in:
> > http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
> >
> > we directly return [], this is quite different with comment in the api
> > code.
> >
> >
> > I checked other objects:
> >
> > when list security groups and server groups, it will return as no
> > limit has been set. And for flavors it returns []. I will continue to
> > try out other APIs if needed.
> >
> > I think maybe we should make a rule for all objects, at least fix the
> > servers to make it same in api and db code.
> >
> > I have reported a bug in launchpad:
> >
> > https://bugs.launchpad.net/nova/+bug/1494617
> >
> >
> > Any suggestions?
> 
> After seeing the test failures that showed up on your proposed fix, I'm
> thinking that the proposed change reads like an API change, requiring a
> microversion bump.  That said, I approve of increased consistency across
> the API, and perhaps the behavior on limit=0 is something the API group
> needs to discuss a guideline for?
> --
> Kevin L. Mitchell <kevin.mitchell at rackspace.com>
> Rackspace
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
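For readers of the archive, the inconsistency under discussion boils down to something like the following (a simplified sketch with invented helper names, not the actual nova code paths):

```python
# Simplified sketch of the two behaviours described in this thread.

def api_limited(items, limit=None, max_limit=1000):
    # nova/api/openstack/common.py behaviour: a falsy limit (0 or None)
    # falls back to the configured maximum.
    if not limit:
        limit = max_limit
    return items[:limit]

def db_limited(items, limit=None):
    # nova/db/sqlalchemy/api.py behaviour for servers: limit=0
    # short-circuits to an empty result.
    if limit == 0:
        return []
    return items if limit is None else items[:limit]

servers = list(range(5))
assert api_limited(servers, limit=0) == [0, 1, 2, 3, 4]  # "no limit"
assert db_limited(servers, limit=0) == []                # "nothing"
```

The two layers disagree on what limit=0 means, which is exactly why the thread argues for an API-WG guideline (and a microversion bump for any behaviour change).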

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/5a4d985c/attachment.html>

From tony.a.wang at alcatel-lucent.com  Wed Sep 23 05:58:21 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Wed, 23 Sep 2015 05:58:21 +0000
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com> 
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78CF6B@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Hi Russell,

Is there any material to explain how OVN parent port work?

Thanks,
Tony

-----Original Message-----
From: WANG, Ming Hao (Tony T) 
Sent: Wednesday, September 23, 2015 10:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?

Russell,

Thanks for your info.
If I want to assign multiple interfaces to a container on different neutron networks (for example, netA and netB), is it mandatory for the VM hosting the containers to have network interfaces in netA and netB, with OVN directing the container traffic to the corresponding VM network interfaces?

From https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md :
"This VLAN tag is stripped out in the hypervisor by OVN."
I suppose that when the traffic goes out of the VM, the VLAN tag has already been stripped out.
When the traffic arrives at the OVS on the physical host, it will be tagged with the Neutron local VLAN. Is that right?

Thanks in advance,
Tony

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Wednesday, September 23, 2015 12:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?

On 09/22/2015 08:08 AM, WANG, Ming Hao (Tony T) wrote:
> Dear all,
> 
> Regarding the neutron OVN plugin's support for containers in one VM: my understanding is that one container can't be assigned two network interfaces on different neutron networks. Is that right?
> The reason:
> 1. One host VM only has one network interface.
> 2. all the VLAN tags are stripped out when the packet goes out the VM.
> 
> If it is True, does neutron ovn plugin or ovn has plan to support this?

You should be able to assign multiple interfaces to a container on different networks.  The traffic for each interface will be tagged with a unique VLAN ID on its way in and out of the VM, the same way it is done for each container with a single interface.

--
Russell Bryant

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From zhengzhenyulixi at gmail.com  Wed Sep 23 06:14:33 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Wed, 23 Sep 2015 14:14:33 +0800
Subject: [openstack-dev] [nova] [api] Nova currently handles list with
 limit=0 quite different for different objects.
In-Reply-To: <87D7392F-8BC7-4A28-8CDF-644F2956DA0A@intel.com>
References: <CAO0b____pvyYBSz7EzWrS--T9HSWbEBv5c-frbFT6NQ46ve-nQ@mail.gmail.com>
 <1441987655.14645.36.camel@einstein.kev>
 <CAO0b__9z7sjS3Gqm1uj2z=6X9Sz9uG9V1eUYQWSeCpkchxroEQ@mail.gmail.com>
 <CAO0b___x667t-DnpaRysp0=GB-+5YPej4mTjpci6RCGNZnwDrA@mail.gmail.com>
 <87D7392F-8BC7-4A28-8CDF-644F2956DA0A@intel.com>
Message-ID: <CAO0b___cQ5FV1XPangaaeGjBOcDa8A0wrWJGJSr5ucgNF26jQw@mail.gmail.com>

Hi, Alex,

Thanks for the information, I was unable to join the conference yesterday.
Then lets get the dicision done before fix it.

BR,

Zheng

On Wed, Sep 23, 2015 at 12:56 PM, Alex Xu <hejie.xu at intel.com> wrote:

> Hi, Zhengyu,
>
> We discussed this in yesterday Nova API meeting. We think it should get
> consistent in API-WG.
>
> And there already have patch for pagination guideline
> https://review.openstack.org/190743 , and there also have some discussion
> on limits.
> So we are better waiting the guideline get consistent before fix it.
>
> Thanks
> Alex
>
> On Sep 23, 2015, at 9:18 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
> wrote:
>
> Any thoughts on this?
>
> On Mon, Sep 14, 2015 at 11:53 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
> wrote:
>
>> Hi, Thanks for your reply, after check again and I agree with you. I
>> think we should come up with a conclusion about how we should treat this
>> limit=0 across nova. And that's also why I sent out this mail. I will
>> register this topic in the API meeting open discussion section, my be a BP
>> in M to fix this.
>>
>> BR,
>>
>> Zheng
>>
>> On Sat, Sep 12, 2015 at 12:07 AM, Kevin L. Mitchell <
>> kevin.mitchell at rackspace.com> wrote:
>>
>>> On Fri, 2015-09-11 at 15:41 +0800, Zhenyu Zheng wrote:
>>> > Hi, I found out that nova currently handles list with limit=0 quite
>>> > different for different objects.
>>> >
>>> > Especially when list servers:
>>> >
>>> > According to the code:
>>> >
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n206
>>> >
>>> > when limit = 0, it should apply as max_limit, but currently, in:
>>> >
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n1930
>>> >
>>> > we directly return [], this is quite different with comment in the api
>>> > code.
>>> >
>>> >
>>> > I checked other objects:
>>> >
>>> > when list security groups and server groups, it will return as no
>>> > limit has been set. And for flavors it returns []. I will continue to
>>> > try out other APIs if needed.
>>> >
>>> > I think maybe we should make a rule for all objects, at least fix the
>>> > servers to make it same in api and db code.
>>> >
>>> > I have reported a bug in launchpad:
>>> >
>>> > https://bugs.launchpad.net/nova/+bug/1494617
>>> >
>>> >
>>> > Any suggestions?
>>>
>>> After seeing the test failures that showed up on your proposed fix, I'm
>>> thinking that the proposed change reads like an API change, requiring a
>>> microversion bump.  That said, I approve of increased consistency across
>>> the API, and perhaps the behavior on limit=0 is something the API group
>>> needs to discuss a guideline for?
>>> --
>>> Kevin L. Mitchell <kevin.mitchell at rackspace.com>
>>> Rackspace
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/0c8567b9/attachment.html>

From ftersin at hotmail.com  Wed Sep 23 06:56:50 2015
From: ftersin at hotmail.com (Feodor Tersin)
Date: Wed, 23 Sep 2015 09:56:50 +0300
Subject: [openstack-dev] [nova] About availability zones
In-Reply-To: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>
References: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>
Message-ID: <COL130-W5472D8C7EEFF201157A157BE440@phx.gbl>

Hi.

> Currently, when performing live-migration the AZ of the instance isn't updated. In a use case like this: Instance_1 is on host1, which is in az1; we live-migrate it to host2 (providing host2 in the API request), which is in az2. The operation will succeed, but the availability zone stored for instance1 is still az1, which may cause an inconsistency between the AZ data stored in the instance DB and the actual AZ. I think updating the AZ information in the instance using the host's AZ can solve this.

Since a host can be included in different AZs, you probably need to allow a caller to optionally specify a target AZ. And as I understand it, the same problem occurs for resize, evacuate and other similar operations, doesn't it?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/363f90fb/attachment.html>

From liuxinguo at huawei.com  Wed Sep 23 08:06:04 2015
From: liuxinguo at huawei.com (liuxinguo)
Date: Wed, 23 Sep 2015 08:06:04 +0000
Subject: [openstack-dev] [cinder] How to make a mock effective for all
	methods of a testclass
Message-ID: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>

Hi,

In a.py we have a function:

def _change_file_mode(filepath):
    utils.execute('chmod', '600', filepath, run_as_root=True)

In test_xxx.py, there is a test class:

class xxxxDriverTestCase(test.TestCase):
    def test_a(self):
        ...
        call a._change_file_mode
        ...

    def test_b(self):
        ...
        call a._change_file_mode
        ...

I have tried to mock out the function _change_file_mode like this:

@mock.patch.object(a, '_change_file_mode', return_value=None)
class xxxxDriverTestCase(test.TestCase):
    def test_a(self):
        ...
        call a._change_file_mode
        ...

    def test_b(self):
        ...
        call a._change_file_mode
        ...

But the mock has no effect; the real function _change_file_mode is still executed.
So how do I make a mock effective for all methods of a test class?
Thanks for any input!

Wilson Liu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/d577470a/attachment.html>

From sbauza at redhat.com  Wed Sep 23 08:08:33 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 23 Sep 2015 10:08:33 +0200
Subject: [openstack-dev] [nova] About availability zones
In-Reply-To: <COL130-W5472D8C7EEFF201157A157BE440@phx.gbl>
References: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>
 <COL130-W5472D8C7EEFF201157A157BE440@phx.gbl>
Message-ID: <56025E01.5040106@redhat.com>



Le 23/09/2015 08:56, Feodor Tersin a écrit :
> Hi.
>
>     Currently, when performing live-migration the AZ of the instance
>     didn't update. In usecase like this:
>     Instance_1 is in host1 which is in az1, we live-migrate it to
>     host2 (provide host2 in API request) which is in az2. The
>     operation will secusess but the availability zone data stored in
>     instance1 is still az1, this may cause inconsistency with the az
>     data stored in instance db and the actual az. I think update the
>     az information in instance using the host az can solve this.
>
>
> Since a host can be included into different AZs, you probably need to
> allow a caller optionally to specify a target AZ. And as i understand
> the same problem occurs for resize, evacuate and other similar
> operations, doesn't it?

No, a host can't be in two different AZs. It can be in two or more
aggregates, but only one of those aggregates can have an AZ
metadata key.
https://github.com/openstack/nova/blob/1df8248b6ad7982174c417abf80070107eac8909/nova/compute/api.py#L3645-L3670

FWIW, I provided a devref change for explaining that : 
https://review.openstack.org/#/c/223802/
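The constraint can be pictured with a toy check (illustrative Python only, not the nova implementation; the aggregate dicts are invented for the example):

```python
def host_az(aggregates):
    """Derive a host's AZ from its aggregates.

    A host may sit in many aggregates, but at most one of them may carry
    the availability_zone metadata key; more than one is a config error.
    """
    azs = {agg['metadata']['availability_zone']
           for agg in aggregates
           if 'availability_zone' in agg.get('metadata', {})}
    if len(azs) > 1:
        raise ValueError('host belongs to more than one AZ')
    return azs.pop() if azs else None

aggs = [{'name': 'agg1', 'metadata': {'availability_zone': 'az1'}},
        {'name': 'agg2', 'metadata': {}}]   # second aggregate has no AZ key
assert host_az(aggs) == 'az1'
```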

-Sylvain

>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/5ff80055/attachment.html>

From geguileo at redhat.com  Wed Sep 23 08:25:50 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 23 Sep 2015 10:25:50 +0200
Subject: [openstack-dev] [cinder] How to make a mock effective for all
 methods of a testclass
In-Reply-To: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
References: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
Message-ID: <20150923082550.GJ3713@localhost>

On 23/09, liuxinguo wrote:
> Hi,
> 
> In a.py we have a function:
> def _change_file_mode(filepath):
> utils.execute('chmod', '600', filepath, run_as_root=True)
> 
> In test_xxx.py, there is a testclass:
> class xxxxDriverTestCase(test.TestCase):
> def test_a(self)
>     ...
>     Call a. _change_file_mode
> ...
> 
> def test_b(self)
>     ...
>     Call a. _change_file_mode
> ...
> 
> I have tried to mock like mock out function _change_file_mode like this:
> @mock.patch.object(a, '_change_file_mode', return_value=None)
> class xxxxDriverTestCase(test.TestCase):
> def test_a(self)
>     ...
>     Call a. _change_file_mode
> ...
> 
> def test_b(self)
>     ...
>     Call a. _change_file_mode
> ...
> 
> But the mock takes no effort, the real function _change_file_mode is still executed.
> So how to make a mock effactive for all method of a testclass?
> Thanks for any input!
> 
> Wilson Liu

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi,

There is something else going on with your code, as the code you are
showing us here should not even execute at all.  You should be getting a
couple of errors:

 TypeError: test_a() takes exactly 1 argument (2 given)
 TypeError: test_b() takes exactly 1 argument (2 given)

Because mocking the whole class will still pass the mock object as an
argument to the methods.

Anyway, if you accept the mocked object as an argument in your test
methods it would work.

Cheers,
Gorka.
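To illustrate Gorka's point, a minimal self-contained sketch (using unittest.mock and a stand-in for the module "a", since the original driver code isn't shown):

```python
import unittest
from unittest import mock

class a:  # stand-in for the module "a" from the thread (hypothetical)
    @staticmethod
    def _change_file_mode(filepath):
        raise RuntimeError('real chmod executed')  # must never run in tests

# Patching the class decorates every test_* method, and each patched
# method receives the mock object as an extra positional argument.
@mock.patch.object(a, '_change_file_mode', return_value=None)
class DriverTestCase(unittest.TestCase):
    def test_a(self, mock_change):      # accept the injected mock here
        a._change_file_mode('/tmp/x')   # calls the mock, not the real code
        mock_change.assert_called_once_with('/tmp/x')
```

Without the extra `mock_change` parameter, the tests fail with exactly the TypeError quoted above, which is usually the first sign that class-level patching is in effect.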


From sbauza at redhat.com  Wed Sep 23 08:30:06 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 23 Sep 2015 10:30:06 +0200
Subject: [openstack-dev] [nova] About availability zones
In-Reply-To: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>
References: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>
Message-ID: <5602630E.1090007@redhat.com>



Le 23/09/2015 05:24, Zhenyu Zheng a écrit :
> Hi, all
>
> I have a question about availability zones when performing live-migration.
>
> Currently, when performing live-migration the AZ of the instance 
> didn't update. In usecase like this:
> Instance_1 is in host1 which is in az1, we live-migrate it to host2 
> (provide host2 in API request) which is in az2. The operation will 
> secusess but the availability zone data stored in instance1 is still 
> az1, this may cause inconsistency with the az data stored in instance 
> db and the actual az. I think update the az information in instance 
> using the host az can solve this.
>

Well, no. Instance.AZ is only a reflection of what the user asked for,
not of the AZ of the host the instance currently belongs to. In other
words, instance.az is set once and for all by taking the --az hint from
the API request and persisting it in the DB.

That means that if you create a new VM without explicitly specifying an
AZ in the CLI, it will take the default value of
CONF.default_schedule_az, which is None (unless you modify that flag).

Consequently, when the request goes to the scheduler, the AZFilter will
not check the related AZs of any host because you didn't ask for an AZ.
That means that the instance is considered "AZ-free".

Now, when live-migrating, *if you specify a destination*, you totally
bypass the scheduler and thus the AZFilter. By doing that, you can put
your instance on another host without the AZ really being checked.

That said, if you *don't specify a destination*, the scheduler will be
called and will enforce the instance.az field against the host AZ. That
should still work (again, depending on whether you explicitly set an AZ
at boot time).

To be clear, there is no reason to update that instance AZ field. We
could, though, consider it a new "request" field and potentially move it
to the RequestSpec object, but for the moment this is a bit too early
since we don't really use that new RequestSpec object yet.
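The scheduling behaviour described above can be summed up in a toy predicate (illustrative only; the real filter is nova's AvailabilityZoneFilter):

```python
def az_filter_passes(host_az, requested_az):
    # An instance that never asked for an AZ ("AZ-free") passes any host;
    # an instance pinned to an AZ only passes hosts in that AZ.
    if requested_az is None:   # CONF.default_schedule_az defaults to None
        return True
    return host_az == requested_az

assert az_filter_passes('az2', None)        # AZ-free: any host is fine
assert az_filter_passes('az1', 'az1')       # pinned and matching
assert not az_filter_passes('az2', 'az1')   # pinned and mismatched
```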



> Also, I have heard from my collegue that in the future we are planning 
> to use host az information for instances. I couldn't find informations 
> about this, could anyone provide me some information about it if thats 
> true?
>

See my point above: I'd rather fix how live-migrations check the
scheduler (and not bypass it when specifying a destination) and
possibly move the instance AZ field to the RequestSpec object once that
object is persisted, but I don't think we should check the host instead
of the instance in the AZFilter.


I assume all of that can be very confusing and mostly tribal knowledge, 
that's why we need to document that properly and one first shot is 
https://review.openstack.org/#/c/223802/

-Sylvain

> Thanks,
>
> Best Regards,
>
> Zheng
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/f32e0f78/attachment.html>

From flavio at redhat.com  Wed Sep 23 08:45:56 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Wed, 23 Sep 2015 10:45:56 +0200
Subject: [openstack-dev] [all] Cross-Project track topic proposals
Message-ID: <20150923084556.GF26372@redhat.com>

Greetings,

The community is in the process of collecting topics for the
cross-project track that we'll have at the Mitaka summit.

The good ol' ODSREG has been set up[0] to help collect these topics,
and we'd like to encourage the community to propose sessions there.

During the TC meeting last night, it was pointed out that some people
have already proposed sessions on this[1] etherpad. I'd like to ask
these folks to move their proposals to ODSREG, as that's the tool the
cross-project track committee will be using as the reference for
proposals.

The deadline for proposing topics for the cross-project track is
October 9th. That will leave the committee roughly two weeks to review
the proposals and schedule them before the summit.

Bests,
Flavio

[0] http://odsreg.openstack.org/
[1] https://etherpad.openstack.org/p/mitaka-cross-project-session-planning

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/5fe3f8fe/attachment.pgp>

From zhengzhenyulixi at gmail.com  Wed Sep 23 08:49:53 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Wed, 23 Sep 2015 16:49:53 +0800
Subject: [openstack-dev] [nova] About availability zones
In-Reply-To: <5602630E.1090007@redhat.com>
References: <CAO0b__-38=aj2+rDd1POEsQkJ+_dGUAnskD8LzKjKKCKoMicRg@mail.gmail.com>
 <5602630E.1090007@redhat.com>
Message-ID: <CAO0b__-yYtuXM4Ye945fbAhzHQoh=-x5HOaNaJHhRy7y76MMBQ@mail.gmail.com>

Hi,

Thanks for the reply. One possible use case is that the user wants to
live-migrate to az2, so he specifies host2. Since we don't update
instance.az, if the user live-migrates again without specifying a
destination host, the instance will be migrated back to az1, which
might differ from what the user expects. Any thoughts on this?

BR,

Zheng

On Wed, Sep 23, 2015 at 4:30 PM, Sylvain Bauza <sbauza at redhat.com> wrote:

>
>
> Le 23/09/2015 05:24, Zhenyu Zheng a écrit :
>
> Hi, all
>
> I have a question about availability zones when performing live-migration.
>
> Currently, when performing live-migration the AZ of the instance didn't
> update. In usecase like this:
> Instance_1 is in host1 which is in az1, we live-migrate it to host2
> (provide host2 in API request) which is in az2. The operation will secusess
> but the availability zone data stored in instance1 is still az1, this may
> cause inconsistency with the az data stored in instance db and the actual
> az. I think update the az information in instance using the host az can
> solve this.
>
>
> Well, no. Instance.AZ is only the reflect of what the user asked, not what
> the current AZ is from the host the instance belongs to. In other words,
> instance.az is set once forever by taking the --az hint from the API
> request and persisting it in DB.
>
> That means that if you want to create a new VM without explicitly
> specifying one AZ in the CLI, it will take the default value of
> CONF.default_schedule_az which is None (unless you modify that flag).
>
> Consequently, when it will go to the scheduler, the AZFilter will not
> check the related AZs from any host because you didn't asked for an AZ.
> That means that the instance is considered "AZ-free".
>
> Now, when live-migrating, *if you specify a destination*, you totally
> bypass the scheduler and thus the AZFilter. By doing that, you can put your
> instance to another host without really checking the AZ.
>
> That said, if you *don't specify a destination*, then the scheduler will
> be called and will enforce the instance.az field with regards to the host
> AZ. That should still work (again, depending on whether you explicitly set
> an AZ at the boot time)
>
> To be clear, there is no reason of updating that instance AZ field. We can
> tho consider it's a new "request"' field and could be potentially moved to
> the RequestSpec object, but for the moment, this is a bit too early since
> we don't really use that new RequestSpec object yet.
>
>
>
> Also, I have heard from my collegue that in the future we are planning to
> use host az information for instances. I couldn't find informations about
> this, could anyone provide me some information about it if thats true?
>
>
> See my point above, I'd rather prefer to fix how live-migrations check the
> scheduler (and not bypass it when specifying a destination) and possibly
> move the instance AZ field to the RequestSpec object once that object is
> persisted, but I don't think we should check the host instead of the
> instance in the AZFilter.
>
>
> I assume all of that can be very confusing and mostly tribal knowledge,
> that's why we need to document that properly and one first shot is
> https://review.openstack.org/#/c/223802/
>
> -Sylvain
>
> Thanks,
>
> Best Regards,
>
> Zheng
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/d77c5270/attachment.html>

From Abhishek.Kekane at nttdata.com  Wed Sep 23 09:04:48 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Wed, 23 Sep 2015 09:04:48 +0000
Subject: [openstack-dev] [oslo] incubator move to private modules
In-Reply-To: <CANw6fcEnVqZ1bN8dKOg3J1K=CfNzVZhdaYndjFTALw7aCjfcmg@mail.gmail.com>
References: <C3B15E1C-D3DF-424B-9C09-A3B2588A5CFD@gmail.com>
 <CANw6fcFi8yKUMvEv1f_gPG368Xsq3-Vs6m9otgFO5gmWsWQVUQ@mail.gmail.com>
 <20150826072045.GP10047@redhat.com> <55DD9288.4040907@openstack.org>
 <CANw6fcEnVqZ1bN8dKOg3J1K=CfNzVZhdaYndjFTALw7aCjfcmg@mail.gmail.com>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D170BA@MAIL703.KDS.KEANE.COM>

Hi All,

I am working on "Returning request-id to caller".
Please refer, https://review.openstack.org/#/c/156508

To implement this spec across projects, it is also required to move oslo-incubator code to private modules.

IMO, moving oslo-incubator/openstack to a private module oslo-incubator/_openstack will require a lot of changes (import statements) in every project that uses it, which will be time consuming.
I also want to know if anyone is proposing a blueprint/spec for the same.
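
In case it helps frame the scale of that churn: the change is largely mechanical, so a small script could rewrite the import statements across a source tree. This is only an illustrative sketch assuming the proposed oslo-incubator/_openstack rename; the helper below is hypothetical, not an existing oslo tool.

```python
import re

# Rewrite imports of the incubated code from 'openstack.common' to the
# proposed private '_openstack.common' package, e.g.:
#   from nova.openstack.common import log -> from nova._openstack.common import log
_IMPORT_RE = re.compile(r'(\bfrom\s+\w+\.)openstack\.common\b')

def rewrite_imports(source):
    """Return the source text with openstack.common imports renamed."""
    return _IMPORT_RE.sub(r'\1_openstack.common', source)
```

Running something like this over each consuming project would still leave the manual work of reviewing the diffs, which is where the real time cost lies.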

Thank you,

Abhishek Kekane

From: Davanum Srinivas [mailto:davanum at gmail.com]
Sent: 26 August 2015 15:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo] incubator move to private modules

yep ttx. this will be for Mitaka.

-- Dims

On Wed, Aug 26, 2015 at 6:18 AM, Thierry Carrez <thierry at openstack.org<mailto:thierry at openstack.org>> wrote:
Flavio Percoco wrote:
> On 25/08/15 06:01 -0400, Davanum Srinivas wrote:
>> Morgan,
>>
>> Bit more radical :) I am inclined to just yank all code from
>> oslo-incubator and
>> let the projects modify/move what they have left into their own
>> package/module
>> structure (and change the contracts however they see fit).
>
> Glad this conversation is happening, I've started to think about this
> as well. I think we're at a point where we could just let projects move
> from where they are.
>
> However, I'd like this to be a bit more organized. For instance, if we
> dismiss oslo-incubator and let projects move forward on their own,
> it'd be better to have all the `openstack/common/` packages renamed so
> that it'll create less confusion to newcomers. At the very least, as
> Morgan mentioned, these packages could be prefixed with an `_` and
> become 'private' and 'owned' by the project.
>
> We still need a 'deprecation' process for the code in the
> oslo-incubator repository and we would still have to accept fixes for
> previous releases.

The Death of the Incubator. Sounds like a great thing to discuss at the
Design Summit. I don't think we would kill it before the start of Mitaka
anyway?

--
Thierry Carrez (ttx)


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/4310aeb3/attachment.html>

From tony at bakeyournoodle.com  Wed Sep 23 09:26:24 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 23 Sep 2015 19:26:24 +1000
Subject: [openstack-dev] Election tools work session in Tokyo
Message-ID: <20150923092624.GK707@thor.bakeyournoodle.com>

Hi All,
    As most of you will have seen we used a gerrit based workflow for this PTL
election (and will use the same system for the upcoming TC election).

As a trial it went well, Tristan and I wrote a few tools to help the process
along.  However I'd like to take advantage of a work session in Tokyo to get a
few interested parties together to enhance the system before the N release.

To be fair, none of this *requires* people to be in a room together, but the
momentum would be helpful.

What is the process for requesting a room?

Once we have a room, it'd be good to work through how we want the 'N' election
season to work, so that we can start implementing that.

Thoughts?

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/2787a883/attachment.pgp>

From thierry at openstack.org  Wed Sep 23 09:41:24 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 23 Sep 2015 11:41:24 +0200
Subject: [openstack-dev] Election tools work session in Tokyo
In-Reply-To: <20150923092624.GK707@thor.bakeyournoodle.com>
References: <20150923092624.GK707@thor.bakeyournoodle.com>
Message-ID: <560273C4.8050302@openstack.org>

Tony Breeds wrote:
> Hi All,
>     As most of you will have seen we used a gerrit based workflow for this PTL
> election (and will use the same system for the upcoming TC election).
> 
> As a trial it went well, Tristan and I wrote a few tools to help the process
> along.  However I'd like to take advantage of a work session in Tokyo to get a
> few interested parties together to enhance the system before the N release.
> 
> To be fair, none of this *requires* people to be in a room together, but the
> momentum would be helpful.
> 
> What is the process for requesting a room?
> 
> Once we have a room, it'd be good to work through how we want the 'N' election
> season to work, so that we can start implementing that.
> 
> Thoughts?

This is a bit cross-project, so the natural fit would be a cross-project
workshop, which you can suggest at http://odsreg.openstack.org (see
Flavio's recent email on that). It's a bit specific though, so I'm not
sure it will be approved for a full session.

Alternatively, that could fit in an infra work session; you may want to
suggest it there (not sure if they have opened an etherpad for session
suggestions yet).

Otherwise we could also discuss it as part of the Infra/QA/RelMgt
contributors meetup on Friday.

-- 
Thierry Carrez (ttx)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/2398ecb9/attachment.pgp>

From tony.a.wang at alcatel-lucent.com  Wed Sep 23 09:57:38 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Wed, 23 Sep 2015 09:57:38 +0000
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>  
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Hi Russell,

I just realized the OVN plugin is a plugin independent of the OVS plugin.
In that case, how do we handle the provider network connections between compute nodes? Is that actually handled by OVN?

Thanks,
Tony 

-----Original Message-----
From: WANG, Ming Hao (Tony T) 
Sent: Wednesday, September 23, 2015 1:58 PM
To: WANG, Ming Hao (Tony T); 'OpenStack Development Mailing List (not for usage questions)'
Subject: RE: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

Hi Russell,

Is there any material to explain how OVN parent port work?

Thanks,
Tony

-----Original Message-----
From: WANG, Ming Hao (Tony T) 
Sent: Wednesday, September 23, 2015 10:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?

Russell,

Thanks for your info.
If I want to assign multiple interfaces to a container on different neutron networks (for example, netA and netB), is it mandatory for the VM hosting the containers to have network interfaces in netA and netB, with OVN directing the container traffic to its corresponding VM network interfaces?

from https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md :
"This VLAN tag is stripped out in the hypervisor by OVN."
I suppose that when the traffic goes out of the VM, the VLAN tag has already been stripped out.
When the traffic arrives at OVS on the physical host, it will be tagged with the Neutron local VLAN. Is that right?

Thanks in advance,
Tony

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Wednesday, September 23, 2015 12:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?

On 09/22/2015 08:08 AM, WANG, Ming Hao (Tony T) wrote:
> Dear all,
> 
> Regarding the neutron OVN plugin supporting containers in one VM: my understanding is that one container can't be assigned two network interfaces on different neutron networks. Is that right?
> The reason:
> 1. One host VM only has one network interface.
> 2. all the VLAN tags are stripped out when the packet goes out the VM.
> 
> If it is True, does neutron ovn plugin or ovn has plan to support this?

You should be able to assign multiple interfaces to a container on different networks.  The traffic for each interface will be tagged with a unique VLAN ID on its way in and out of the VM, the same way it is done for each container with a single interface.
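
Russell's description can be sketched as a tiny lookup model. This is purely illustrative, with hypothetical names rather than the real OVN schema: each container interface becomes a logical port with a parent VM port plus a VLAN tag that is unique within that parent, and the hypervisor maps (parent port, tag) back to the container's logical port after stripping the tag.

```python
# Illustrative model of OVN's parent-port scheme for containers in a VM
# (hypothetical names, not the actual OVN northbound schema).
class ContainerPorts:
    def __init__(self):
        # (parent_vm_port, vlan_tag) -> container logical port name
        self._by_parent_tag = {}
        self._next_tag = {}

    def add_interface(self, parent_port, logical_port):
        """Allocate a VLAN tag, unique per parent port, for a new interface."""
        tag = self._next_tag.get(parent_port, 1)
        self._next_tag[parent_port] = tag + 1
        self._by_parent_tag[(parent_port, tag)] = logical_port
        return tag

    def classify(self, parent_port, vlan_tag):
        """Map an incoming tagged frame back to the container's logical port."""
        return self._by_parent_tag[(parent_port, vlan_tag)]
```

So a container with interfaces on netA and netB simply gets two tags on the same parent VM port, which is why a single VM network interface suffices.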

--
Russell Bryant

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From thierry at openstack.org  Wed Sep 23 10:07:54 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 23 Sep 2015 12:07:54 +0200
Subject: [openstack-dev] [all][elections] PTL nomination period is now
 over
In-Reply-To: <55FABF5E.4000204@redhat.com>
References: <55FABF5E.4000204@redhat.com>
Message-ID: <560279FA.7020307@openstack.org>

Tristan Cacqueray wrote:
> [...]
> There are 5 projects without candidates, so according to this
> resolution[1], the TC will have to appoint a new PTL for Barbican,
> MagnetoDB, Magnum, Murano and Security
> [...]

Following our policy[1], the Technical Committee decided[2] the
following for project teams without PTL candidates in the nomination
timeframe:

- Robert Clark was nominated PTL for the Security Team.

- Serg Melikyan was nominated PTL for the Murano Team.

- Douglas Mendizabal was nominated PTL for the Barbican Team.

- An election should be held for Magnum contributors to pick their PTL
between Adrian Otto and Hongbin Lu. It will be organized by election
officials at their earliest convenience.

- MagnetoDB being abandoned, no PTL was chosen. Instead, we decided to
fast-track the removal[3] of MagnetoDB from the official list of
OpenStack projects.

[1]
http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html

[2] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-09-22-20.01.html

[3] https://review.openstack.org/#/c/224743/

Regards,

-- 
Thierry Carrez (ttx)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/ce45457a/attachment.pgp>

From vkuklin at mirantis.com  Wed Sep 23 10:10:04 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Wed, 23 Sep 2015 13:10:04 +0300
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
In-Reply-To: <CAKYN3rP1ipHxSkVPeUvkcNst3DWnbiKcCAHXJVQ_odjZLJLj3Q@mail.gmail.com>
References: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
 <CAFkLEwrzzAWjS=_v3kjOCHQyPFr5M5pboamjQDKUpkPBfGQCEQ@mail.gmail.com>
 <CAKYN3rP1ipHxSkVPeUvkcNst3DWnbiKcCAHXJVQ_odjZLJLj3Q@mail.gmail.com>
Message-ID: <CAHAWLf188CUXU=WsSNgeAL_GcygtvVxj7KaZE4CBR_ww69nncg@mail.gmail.com>

+1


On Tue, Sep 22, 2015 at 2:17 AM, Mike Scherbakov <mscherbakov at mirantis.com>
wrote:

> Thanks guys.
> So for fuel-octane then there are no actions needed.
>
> For fuel-agent-core group [1], looks like we are already good (it doesn't
> have fuel-core group nested). But it would need to include fuel-infra group
> and remove Aleksandra Fedorova (she will be a part of fuel-infra group).
>
> python-fuel-client-core [2] is good as well (no nested fuel-core).
> However, there is another group python-fuelclient-release [3], which has to
> be eliminated, and main python-fuelclient-core would just have fuel-infra
> group included for maintenance purposes.
>
> [1] https://review.openstack.org/#/admin/groups/995,members
> [2] https://review.openstack.org/#/admin/groups/551,members
> [3] https://review.openstack.org/#/admin/groups/552,members
>
>
> On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh <ogelbukh at mirantis.com>
> wrote:
>
>> FYI, we have a separate core group for stackforge/fuel-octane repository
>> [1].
>>
>> I'm supporting the move to modularization of Fuel with cleaner separation
>> of authority and better defined interfaces. Thus, I'm +1 to such a change
>> as a part of that move.
>>
>> [1] https://review.openstack.org/#/admin/groups/1020,members
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov <
>> mscherbakov at mirantis.com> wrote:
>>
>>> Hi all,
>>> as part of my larger proposal on improvements to the code review workflow [1], we
>>> need to have cores for repositories, not for the whole of Fuel. It is the path
>>> we have been taking for a while, with new core reviewers added to specific repos
>>> only. Now we need to complete this work.
>>>
>>> My proposal is:
>>>
>>>    1. Get rid of one common fuel-core [2] group, members of which can
>>>    merge code anywhere in Fuel. Some members of this group may cover a couple
>>>    of repositories, but can't really be cores in all repos.
>>>    2. Extend existing groups, such as fuel-library [3], with members
>>>    from fuel-core who are keeping up with large number of reviews / merges.
>>>    This data can be queried at Stackalytics.
>>>    3. Establish a new group "fuel-infra", and ensure that it's included
>>>    into any other core group. This is for maintenance purposes, it is expected
>>>    to be used only in exceptional cases. Fuel Infra team will have to decide
>>>    whom to include into this group.
>>>    4. Ensure that fuel-plugin-* repos will not be affected by removal
>>>    of fuel-core group.
>>>
>>> #2 needs specific details. Stackalytics can show active cores easily; we
>>> can look at people with *:
>>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
>>> fuel-web; change the link for other repos accordingly. If people were added
>>> specifically to the particular group, leave them as is (some of them are no
>>> longer active, but let's clean them up separately from this group
>>> restructure process).
>>>
>>>    - fuel-library-core [3] group will have following members: Bogdan
>>>    D., Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>>>    - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
>>>    Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>>>    - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>>>    - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>>>    - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky,
>>>    Nastya Urlapova
>>>    - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>>>    Konstantinov, Olga Gusarenko
>>>    - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry
>>>    Pyzhov, Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>>>    - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>>>    - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>>>    Sledzinsky, Dmitry Shulyak
>>>    - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>>>    - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
>>>    Urlapova
>>>    - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly
>>>    Kramskikh
>>>    - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey
>>>    Kasatkin (this project seems to be dead; let's consider ripping it out)
>>>    - fuel-specs-core: there is no such group at the moment. I propose
>>>    to create one with the following members, based on stackalytics data [16]:
>>>    Vitaly Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir
>>>    Kuklin, Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko,
>>>    Mike Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge
>>>    after Fuel PTL/Component Leads elections
>>>    - fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
>>>    Gelbukh, Ilya Kharin
>>>    - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly
>>>    Parakhin
>>>    - fuel-upgrade-core: needs to be created. Sebastian Kalinowski, Alex
>>>    Schultz, Evgeny Li, Igor Kalnitsky
>>>    - fuel-provision: repo seems to be outdated, needs to be removed.
>>>
>>> I suggest making changes in the groups first, and then separately addressing
>>> specific issues like removing someone from the cores (not doing enough reviews
>>> anymore, or too many positive reviews, let's say > 95%).
>>>
>>> I hope I didn't miss anyone / anything. Please check carefully.
>>> Comments / objections?
>>>
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
>>> [2] https://review.openstack.org/#/admin/groups/209,members
>>> [3] https://review.openstack.org/#/admin/groups/658,members
>>> [4] https://review.openstack.org/#/admin/groups/664,members
>>> [5] https://review.openstack.org/#/admin/groups/655,members
>>> [6] https://review.openstack.org/#/admin/groups/646,members
>>> [7] https://review.openstack.org/#/admin/groups/656,members
>>> [8] https://review.openstack.org/#/admin/groups/657,members
>>> [9] https://review.openstack.org/#/admin/groups/659,members
>>> [10] https://review.openstack.org/#/admin/groups/1000,members
>>> [11] https://review.openstack.org/#/admin/groups/660,members
>>> [12] https://review.openstack.org/#/admin/groups/661,members
>>> [13] https://review.openstack.org/#/admin/groups/662,members
>>> [14] https://review.openstack.org/#/admin/groups/663,members
>>> [15] https://review.openstack.org/#/admin/groups/624,members
>>> [16] http://stackalytics.com/report/contribution/fuel-specs/180
>>>
>>>
>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/e8a4976b/attachment.html>

From thierry at openstack.org  Wed Sep 23 10:15:01 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 23 Sep 2015 12:15:01 +0200
Subject: [openstack-dev] [Heat] [Zaqar] Liberty RC1 available
Message-ID: <56027BA5.7030703@openstack.org>

Hello everyone,

Heat and Zaqar just produced their first release candidate for the end
of the Liberty cycle. The RC1 tarballs, as well as a list of last-minute
features and fixed bugs since liberty-1 are available at:

https://launchpad.net/heat/liberty/liberty-rc1
https://launchpad.net/zaqar/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/heat/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/zaqar/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/heat/+filebug
or
https://bugs.launchpad.net/zaqar/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Heat and Zaqar are now officially
open for Mitaka development, so feature freeze restrictions no longer
apply there.

Regards,

-- 
Thierry Carrez (ttx)


From vkuklin at mirantis.com  Wed Sep 23 10:27:07 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Wed, 23 Sep 2015 13:27:07 +0300
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CAKYN3rMgm8iKmTCb6BknV=UsM6W-zBeW1Ca9JzZ+a=Y80taODQ@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
 <20150919010735.GB16012@localhost>
 <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>
 <CAKYN3rMgm8iKmTCb6BknV=UsM6W-zBeW1Ca9JzZ+a=Y80taODQ@mail.gmail.com>
Message-ID: <CAHAWLf11d3Krn3Y9_EpSeR_07OsHfp6TEHcVmtLh+7vpD00ShA@mail.gmail.com>

Dmitry, Mike

Thank you for the list of usable links.

But still - we do not have a clearly defined procedure for determining who is
eligible to nominate and vote for PTL and Component Leads. Remember that
Fuel still has a different release cycle, and the Kilo+Liberty contributors list
is not exactly the same as the "365days" contributors list.

Can we finally come up with the list of people eligible to nominate and
vote?

On Sun, Sep 20, 2015 at 2:37 AM, Mike Scherbakov <mscherbakov at mirantis.com>
wrote:

> Let's move on.
> I started work on MAINTAINERS files, proposed two patches:
> https://review.openstack.org/#/c/225457/1
> https://review.openstack.org/#/c/225458/1
>
> These can be used as templates for other repos / folders.
>
> Thanks,
>
> On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas <davanum at gmail.com>
> wrote:
>
>> +1 Dmitry
>>
>> -- Dims
>>
>> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
>> dborodaenko at mirantis.com> wrote:
>>
>>> Dims,
>>>
>>> Thanks for the reminder!
>>>
>>> I've summarized the uncontroversial parts of that thread in a policy
>>> proposal as per your suggestion [0]; please review and comment. I've
>>> renamed SMEs to maintainers since Mike has agreed with that part, and I
>>> omitted code review SLAs from the policy since that's the part that has
>>> generated the most discussion.
>>>
>>> [0] https://review.openstack.org/225376
>>>
>>> I don't think we should postpone the election: the PTL election follows
>>> the same rules as OpenStack so we don't need a Fuel-specific policy for
>>> that, and the component leads election doesn't start until October 9,
>>> which gives us 3 weeks to confirm consensus on that aspect of the
>>> policy.
>>>
>>> --
>>> Dmitry Borodaenko
>>>
>>>
>>> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
>>> > Sergey,
>>> >
>>> > Please see [1]. Did we codify some of these roles and responsibilities
>>> as a
>>> > community in a spec? There was also a request to use terminology like
>>> say
>>> > MAINTAINERS in that email as well.
>>> >
>>> > Are we pulling the trigger a bit early for an actual election?
>>> >
>>> > Thanks,
>>> > Dims
>>> >
>>> > [1] http://markmail.org/message/2ls5obgac6tvcfss
>>> >
>>> > On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <vkuklin at mirantis.com
>>> >
>>> > wrote:
>>> >
>>> > > Sergey, Fuelers
>>> > >
>>> > > This is awesome news!
>>> > >
>>> > > By the way, I have a question on who is eligible to vote and to
>>> nominate
>>> > > him/her-self for both PTL and Component Leads. Could you elaborate
>>> on that?
>>> > >
>>> > > And there is no such entity as Component Lead in OpenStack - so we
>>> are
>>> > > actually creating one. What are the new rights and responsibilities
>>> of CL?
>>> > >
>>> > > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <
>>> slukjanov at mirantis.com>
>>> > > wrote:
>>> > >
>>> > >> Hi folks,
>>> > >>
>>> > >> I'd like to announce that we're running the PTL and Component Leads
>>> > >> elections. Detailed information available on wiki. [0]
>>> > >>
>>> > >> Project Team Lead: Manages day-to-day operations, drives the project
>>> > >> team goals, resolves technical disputes within the project team. [1]
>>> > >>
>>> > >> Component Lead: Defines architecture of a module or component in
>>> Fuel,
>>> > >> reviews design specs, merges majority of commits and resolves
>>> conflicts
>>> > >> between Maintainers or contributors in the area of responsibility.
>>> [2]
>>> > >>
>>> > >> Fuel has two large sub-teams, with roughly comparable codebases,
>>> that
>>> > >> need dedicated component leads: fuel-library and fuel-python. [2]
>>> > >>
>>> > >> Nominees propose their candidacy by sending an email to the
>>> > >> openstack-dev at lists.openstack.org mailing list, with the subject:
>>> > >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
>>> > >> (for example, "[fuel] fuel-library lead candidacy").
>>> > >>
>>> > >> Time line:
>>> > >>
>>> > >> PTL elections
>>> > >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
>>> position
>>> > >> * September 29 - October 8: PTL elections
>>> > >>
>>> > >> Component leads elections (fuel-library and fuel-python)
>>> > >> * October 9 - October 15: Open candidacy for Component leads
>>> positions
>>> > >> * October 16 - October 22: Component leads elections
>>> > >>
>>> > >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
>>> > >> [1] https://wiki.openstack.org/wiki/Governance
>>> > >> [2]
>>> > >>
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
>>> > >> [3] https://lwn.net/Articles/648610/
>>> > >>
>>> > >> --
>>> > >> Sincerely yours,
>>> > >> Sergey Lukjanov
>>> > >> Sahara Technical Lead
>>> > >> (OpenStack Data Processing)
>>> > >> Principal Software Engineer
>>> > >> Mirantis Inc.
>>> > >>
>>> > >>
>>> __________________________________________________________________________
>>> > >> OpenStack Development Mailing List (not for usage questions)
>>> > >> Unsubscribe:
>>> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> > >>
>>> > >>
>>> > >
>>> > >
>>> > > --
>>> > > Yours Faithfully,
>>> > > Vladimir Kuklin,
>>> > > Fuel Library Tech Lead,
>>> > > Mirantis, Inc.
>>> > > +7 (495) 640-49-04
>>> > > +7 (926) 702-39-68
>>> > > Skype kuklinvv
>>> > > 35bk3, Vorontsovskaya Str.
>>> > > Moscow, Russia,
>>> > > www.mirantis.com <http://www.mirantis.ru/>
>>> > > www.mirantis.ru
>>> > > vkuklin at mirantis.com
>>> > >
>>> > >
>>> __________________________________________________________________________
>>> > > OpenStack Development Mailing List (not for usage questions)
>>> > > Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> > >
>>> > >
>>> >
>>> >
>>> > --
>>> > Davanum Srinivas :: https://twitter.com/dims
>>>
>>> >
>>> __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/43a9bccb/attachment.html>

From pc at michali.net  Wed Sep 23 11:02:53 2015
From: pc at michali.net (Paul Michali)
Date: Wed, 23 Sep 2015 11:02:53 +0000
Subject: [openstack-dev] [neutron] Stumped...need help with neutronclient
	job failure
Message-ID: <CA+ikoRN0-GNCw+RDT87vUZAnDq0_LbKa72JrS-EmcuiHueMsvw@mail.gmail.com>

Hi,

I created a pair of experimental jobs for python-neutronclient that will
run functional tests on core and advanced services, respectively. In the
python-neutronclient repo, I have a commit [1] that splits the tests into
two directories for core/adv-svcs, enables the VPN devstack plugin for the
advanced services tests, and removes the skip decorator for the VPN tests.

When these two jobs run, the core job passes (as expected). The advanced
services job shows all four advanced services tests (testing REST LIST
requests for IKE policy, IPSec policy, IPSec site-to-site connection, and
VPN service resources) failing, with this traceback:

ft1.1: neutronclient.tests.functional.adv-svcs.test_readonly_neutron_vpn.SimpleReadOnlyNeutronVpnClientTest.test_neutron_vpn_ipsecpolicy_list_StringException:
Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutronclient/tests/functional/adv-svcs/test_readonly_neutron_vpn.py",
line 37, in test_neutron_vpn_ipsecpolicy_list
    ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
  File "neutronclient/tests/functional/base.py", line 78, in neutron
    **kwargs)
  File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
line 292, in neutron
    'neutron', action, flags, params, fail_ok, merge_stderr)
  File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
line 361, in cmd_with_auth
    self.cli_dir)
  File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
line 61, in execute
    proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
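[Editor's note: an `OSError: [Errno 2] No such file or directory` raised from `subprocess.Popen` means the executable itself could not be found — here most likely the `neutron` CLI binary is missing from the `cli_dir` that tempest_lib was configured with, rather than anything failing inside the test. A small illustrative check (the paths and names below are assumptions, not the actual gate layout):]

```python
import os
import shutil

def cli_available(cli_dir, prog):
    """Check what subprocess.Popen effectively requires: the program
    must exist at cli_dir/prog and be executable; otherwise Popen
    raises OSError with errno 2 (ENOENT)."""
    path = os.path.join(cli_dir, prog)
    return os.path.isfile(path) and os.access(path, os.X_OK)

# '/bin/sh' exists on any POSIX system; a CLI like 'neutron' may well
# be absent from the tox venv used by the gate job (hypothetical name).
print(cli_available('/bin', 'sh'))
print(shutil.which('neutron-cli-that-does-not-exist'))
```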


When I look at the other logs on this run [2], I see these things:
- The VPN agent is running (so the DevStack plugin started up VPN)
- screen-q-svc.log shows only two of the four REST GET requests
- Initially there were no testr results, but I modified the post-test hook
script (similar to what Neutron does), so it shows results now
- No other errors seen, including nothing about the StringException

When I run this locally, all four tests pass, and I see four REST requests
in the screen-q-svc.log.

I tried a hack to enable the NEUTRONCLIENT_DEBUG environment variable, but
no additional information was shown.

Does anyone have any thoughts on what may be going wrong here?
Any ideas on how to troubleshoot this issue?

Thanks in advance!

Paul Michali (pc_m)

Refs
[1] https://review.openstack.org/#/c/214587/
[2]
http://logs.openstack.org/87/214587/8/experimental/gate-neutronclient-test-dsvm-functional-adv-svcs/5dfa152/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/b16552d4/attachment.html>

From sgolovatiuk at mirantis.com  Wed Sep 23 11:09:46 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Wed, 23 Sep 2015 13:09:46 +0200
Subject: [openstack-dev] [Fuel] Core Reviewers groups restructure
In-Reply-To: <CAHAWLf188CUXU=WsSNgeAL_GcygtvVxj7KaZE4CBR_ww69nncg@mail.gmail.com>
References: <CAKYN3rOpnBniOkHp6MtqfXnVxkxkV=mQNRRjQWLvwm5c9eEwzA@mail.gmail.com>
 <CAFkLEwrzzAWjS=_v3kjOCHQyPFr5M5pboamjQDKUpkPBfGQCEQ@mail.gmail.com>
 <CAKYN3rP1ipHxSkVPeUvkcNst3DWnbiKcCAHXJVQ_odjZLJLj3Q@mail.gmail.com>
 <CAHAWLf188CUXU=WsSNgeAL_GcygtvVxj7KaZE4CBR_ww69nncg@mail.gmail.com>
Message-ID: <CA+HkNVu-3fyP8beZ7WhJiPzcnd-9_TsGfQAAM_9yJaSdKVZUCw@mail.gmail.com>

+1

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Sep 23, 2015 at 12:10 PM, Vladimir Kuklin <vkuklin at mirantis.com>
wrote:

> +1
>
>
> On Tue, Sep 22, 2015 at 2:17 AM, Mike Scherbakov <mscherbakov at mirantis.com
> > wrote:
>
>> Thanks guys.
>> So for fuel-octane then there are no actions needed.
>>
>> For the fuel-agent-core group [1], it looks like we are already good (it
>> doesn't have the fuel-core group nested). But it would need to include the
>> fuel-infra group and remove Aleksandra Fedorova (she will be a part of the
>> fuel-infra group).
>>
>> python-fuelclient-core [2] is good as well (no nested fuel-core).
>> However, there is another group, python-fuelclient-release [3], which has to
>> be eliminated, and the main python-fuelclient-core would just have the
>> fuel-infra group included for maintenance purposes.
>>
>> [1] https://review.openstack.org/#/admin/groups/995,members
>> [2] https://review.openstack.org/#/admin/groups/551,members
>> [3] https://review.openstack.org/#/admin/groups/552,members
>>
>>
>> On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh <ogelbukh at mirantis.com>
>> wrote:
>>
>>> FYI, we have a separate core group for stackforge/fuel-octane repository
>>> [1].
>>>
>>> I'm supporting the move to modularization of Fuel with cleaner
>>> separation of authority and better defined interfaces. Thus, I'm +1 to such
>>> a change as a part of that move.
>>>
>>> [1] https://review.openstack.org/#/admin/groups/1020,members
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov <
>>> mscherbakov at mirantis.com> wrote:
>>>
>>>> Hi all,
>>>> as part of my larger proposal on improvements to the code review workflow
>>>> [1], we need to have cores for repositories, not for the whole Fuel. It is
>>>> the path we have been taking for a while, with new core reviewers added to
>>>> specific repos only. Now we need to complete this work.
>>>>
>>>> My proposal is:
>>>>
>>>>    1. Get rid of one common fuel-core [2] group, members of which can
>>>>    merge code anywhere in Fuel. Some members of this group may cover a couple
>>>>    of repositories, but can't really be cores in all repos.
>>>>    2. Extend existing groups, such as fuel-library [3], with members
>>>>    from fuel-core who are keeping up with a large number of reviews /
>>>>    merges. This data can be queried at Stackalytics.
>>>>    3. Establish a new group "fuel-infra", and ensure that it is
>>>>    included in every other core group. This is for maintenance purposes;
>>>>    it is expected to be used only in exceptional cases. The Fuel Infra
>>>>    team will have to decide whom to include in this group.
>>>>    4. Ensure that fuel-plugin-* repos will not be affected by removal
>>>>    of fuel-core group.
>>>>
>>>> #2 needs specific details. Stackalytics can show active cores easily;
>>>> we can look at people with *:
>>>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
>>>> fuel-web; change the link for other repos accordingly. If people were added
>>>> specifically to a particular group, leave them as is (some of them are no
>>>> longer active, but let's clean them up separately from this group
>>>> restructure process).
>>>>
>>>>    - fuel-library-core [3] group will have following members: Bogdan
>>>>    D., Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>>>>    - fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin,
>>>>    Vitaly Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>>>>    - fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>>>>    - fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>>>>    - fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky,
>>>>    Nastya Urlapova
>>>>    - fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>>>>    Konstantinov, Olga Gusarenko
>>>>    - fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry
>>>>    Pyzhov, Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>>>>    - fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>>>>    - fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>>>>    Sledzinsky, Dmitry Shulyak
>>>>    - fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>>>>    - fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya
>>>>    Urlapova
>>>>    - fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly
>>>>    Kramskikh
>>>>    - fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey
>>>>    Kasatkin (this project seems to be dead; let's consider retiring it)
>>>>    - fuel-specs-core: there is no such group at the moment. I
>>>>    propose to create one with the following members, based on Stackalytics
>>>>    data [16]: Vitaly Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk,
>>>>    Vladimir Kuklin, Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry
>>>>    Borodaenko, Mike Scherbakov, Dmitry Pyzhov. We would need to reconsider who
>>>>    can merge after the Fuel PTL/Component Leads elections
>>>>    - fuel-octane-core: needs to be created. Members: Yury Taraday,
>>>>    Oleg Gelbukh, Ilya Kharin
>>>>    - fuel-mirror-core: needs to be created. Sergey Kulanov, Vitaly
>>>>    Parakhin
>>>>    - fuel-upgrade-core: needs to be created. Sebastian Kalinowski,
>>>>    Alex Schultz, Evgeny Li, Igor Kalnitsky
>>>>    - fuel-provision: repo seems to be outdated, needs to be removed.
>>>>
>>>> I suggest making changes to the groups first, and then separately
>>>> addressing specific issues like removing someone from cores (not doing
>>>> enough reviews anymore, or too many positive reviews, let's say > 95%).
>>>>
>>>> I hope I haven't missed anyone / anything. Please check carefully.
>>>> Comments / objections?
>>>>
>>>> [1]
>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
>>>> [2] https://review.openstack.org/#/admin/groups/209,members
>>>> [3] https://review.openstack.org/#/admin/groups/658,members
>>>> [4] https://review.openstack.org/#/admin/groups/664,members
>>>> [5] https://review.openstack.org/#/admin/groups/655,members
>>>> [6] https://review.openstack.org/#/admin/groups/646,members
>>>> [7] https://review.openstack.org/#/admin/groups/656,members
>>>> [8] https://review.openstack.org/#/admin/groups/657,members
>>>> [9] https://review.openstack.org/#/admin/groups/659,members
>>>> [10] https://review.openstack.org/#/admin/groups/1000,members
>>>> [11] https://review.openstack.org/#/admin/groups/660,members
>>>> [12] https://review.openstack.org/#/admin/groups/661,members
>>>> [13] https://review.openstack.org/#/admin/groups/662,members
>>>> [14] https://review.openstack.org/#/admin/groups/663,members
>>>> [15] https://review.openstack.org/#/admin/groups/624,members
>>>> [16] http://stackalytics.com/report/contribution/fuel-specs/180
>>>>
>>>>
>>>> --
>>>> Mike Scherbakov
>>>> #mihgen
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuklin at mirantis.com
>
>
>

From sean at dague.net  Wed Sep 23 11:17:31 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 23 Sep 2015 07:17:31 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <5601C87C.2010609@internap.com>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
Message-ID: <56028A4B.9010203@dague.net>

On 09/22/2015 05:30 PM, Mathieu Gagné wrote:
> On 2015-09-22 4:52 PM, Sean Dague wrote:
> On 09/22/2015 03:16 PM, Mathieu Gagné wrote:
>>>
>>> The oslo_middleware.ssl middleware looks to offer little overhead and
>>> offer the maximum flexibility. I appreciate the wish to use the Keystone
>>> catalog but I don't feel this is the right answer.
>>>
>>> For example, if I deploy Bifrost without Keystone, I won't have a
>>> catalog to rely on and will still have the same lack of SSL termination
>>> proxy support.
>>>
>>> The simplest solution is often the right one.
>>
>> I do get there are specific edge cases here, but I don't think that in
>> the general case we should be pushing a mode where Keystone is optional.
>> It is a bedrock of our system.
>>
> 
> I understand that Keystone is "a bedrock of our system". This however
> does not make it THE solution when simpler proven existing ones exist. I
> fail to understand why other solutions can't be considered.
> 
> I opened a dialog with the community to express my concerns about the
> lack of universal support for SSL termination proxy so we can find a
> solution together which will cover ALL use cases.
> 
> I proposed using an existing solution (oslo_middleware.ssl) (that I
> didn't know of until now) which takes little to no time to implement and
> covers a lot of use cases and special edge cases.

Does that solution work in the HA Proxy case, where there is one
terminating address for multiple backend servers? Because there is the
concern that this impacts not only the Location header, but also the link
documents inside the responses, which clients are expected to be able to
follow. This is an honest question; I don't know how
oslo_middleware.ssl acts in these cases. And an HA Proxy 1-to-N mapping
is a very common deployment model.

The dialog isn't over, this is still an exploration. Nothing's going to
happen in code right now, because everything is in deep freeze for the
RC spins, the release, and summit planning. But we do need all the cards
on the table about options and trade offs if we are going to get 20+
projects aligned on a thing.

> Operators encounter and *HAVE* to handle a lot of edge cases. We are
> trying *very hard* to open up a dialog with the developer community so
> they can be heard, understood and addressed by sharing our real world
> use cases.
> 
> In that specific case, my impression is that they unfortunately happen
> to not be a priority when evaluating solutions and will actually make my
> job harder. I'm sad to see we can't come to an agreement on that one.
> I feel like I failed to properly communicate my needs as an operator and
> make them understood to others.

Here is the parameter space we are playing with:

1. Are the "Location:" headers correct, or understood in a way that
clients will do the right thing:

a. when direct connected
b. when connected in a 1 to 1 SSL termination proxy
c. when connected in a 1 to N HA Proxy

2. Are the links provided in REST documents correct, or understood in a
way that libraries like requests / phantomjs can follow links natively:

a. when direct connected
b. when connected in a 1 to 1 SSL termination proxy
c. when connected in a 1 to N HA Proxy

3. Are the minority of services that have "operate without keystone" as
a design tenet able to function:

a. when direct connected
b. when connected in a 1 to 1 SSL termination proxy
c. when connected in a 1 to N HA proxy

Today {1,2,3}{a} are handled by services auto-figuring out their URLs.
That might mean that you jump across from internalURL to publicURL when
following links, because the addresses in the documents are what the
server believes its URL is.

{1,2,3}{b,c} are handled by manually configuring each API daemon with
what its URL should be. Recent experiences with a novaclient release
demonstrated that a lot of people aren't actually doing that.

The answer to 3,c seems like it's always going to be manual.

The X-Forwarded-* headers seem like they address {1,3}{b} fine. But the
implications for {1,2,3}{c} are unclear. It's also unclear how {2}{b}
functions in that model and whether clients will interpret that correctly
in all scenarios, even for document links.
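[Editor's note: to make the trade-off concrete, the middleware approach under discussion amounts to rewriting the WSGI environ from the forwarded header before the application builds any URL. The following is a minimal sketch assuming only the standard X-Forwarded-Proto header — it is illustrative, not oslo_middleware.ssl's actual code:]

```python
class ForwardedSchemeMiddleware(object):
    """Rewrite wsgi.url_scheme from X-Forwarded-Proto so every URL the
    app generates (Location headers, links in REST documents) carries
    the scheme the client used against the TLS-terminating proxy."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        proto = environ.get('HTTP_X_FORWARDED_PROTO')
        if proto in ('http', 'https'):
            environ['wsgi.url_scheme'] = proto
        # This addresses the scheme half of {1,2}{b}; the 1-to-N HA Proxy
        # case additionally needs Host/X-Forwarded-Host to be correct.
        return self.app(environ, start_response)


def echo_scheme(environ, start_response):
    """Tiny demo app that reports the scheme it would build URLs with."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['wsgi.url_scheme'].encode()]


wrapped = ForwardedSchemeMiddleware(echo_scheme)
environ = {'wsgi.url_scheme': 'http', 'HTTP_X_FORWARDED_PROTO': 'https'}
body = wrapped(environ, lambda status, headers: None)
print(b''.join(body))  # b'https'
```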

Upgrading the service catalog usage would address {1,2}{a,b,c}, but make
{3}{b} live in the land of manual configs.

We need to get the full map out here, and help fill in details of where
our holes are, which solutions plug which ones, and what set of
tradeoffs we're going to be working around in future releases. Once
we've got that, and our solution going forward, we can start banging the
drum to get all the projects headed in a direction here.

	-Sean

-- 
Sean Dague
http://dague.net


From aurlapova at mirantis.com  Wed Sep 23 11:21:06 2015
From: aurlapova at mirantis.com (Anastasia Urlapova)
Date: Wed, 23 Sep 2015 14:21:06 +0300
Subject: [openstack-dev] [Fuel] Nominate Denis Dmitriev for
	fuel-qa(devops) core
In-Reply-To: <CAAh3_4G6_FVcP_hFFVUHkZH+P7p6jeptr_fVktrJRygkgUvc9A@mail.gmail.com>
References: <CAC+Xjbb7thAdcrZfrHWzACzLrEVzks0pBoMBW9UH-tCWX=PP_Q@mail.gmail.com>
 <CAE1YzriPm16vPUSfv0bMEEUwiTs37a7xAoWNAFCE+enVCo8F6A@mail.gmail.com>
 <CAGRGKG5E9LM9z4YTQU14H6VsnxjWzWrqWxuf+jATb2c+A7ZrVg@mail.gmail.com>
 <CADnVcXqN0zWzkK5hk29mpnmL=d0qmufFpT9Q5yypQ6vobpvatA@mail.gmail.com>
 <CAAh3_4G6_FVcP_hFFVUHkZH+P7p6jeptr_fVktrJRygkgUvc9A@mail.gmail.com>
Message-ID: <CAC+Xjba_Zfb=-GSdk0HRViknD5GUE0vQxjsD4vW76_1rufwVZA@mail.gmail.com>

Congratulations to Denis!



On Fri, Sep 18, 2015 at 2:55 PM, Yuriy Shamray <ishamrai at mirantis.com>
wrote:

> +1
>
> On Fri, Sep 18, 2015 at 1:34 PM, Yegor Kotko <ykotko at mirantis.com> wrote:
>
>> +1
>>
>> On Tue, Sep 15, 2015 at 6:29 AM, Sebastian Kalinowski <
>> skalinowski at mirantis.com> wrote:
>>
>>> +1
>>>
>>> 2015-09-14 22:37 GMT+02:00 Ivan Kliuk <ikliuk at mirantis.com>:
>>>
>>>> +1
>>>>
>>>> On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <
>>>> aurlapova at mirantis.com> wrote:
>>>>
>>>>> Folks,
>>>>> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops
>>>>> core.
>>>>>
>>>>> Denis spent three months on the Fuel BugFix team; his velocity was
>>>>> between 150-200% per week. Thanks to his efforts we have overcome
>>>>> long-standing issues with time sync and Ceph's clock skew. Denis's ideas
>>>>> constantly help us improve our functional test suite.
>>>>>
>>>>> Fuelers, please vote for Denis!
>>>>>
>>>>> Nastya.
>>>>>
>>>>> [1]
>>>>> http://stackalytics.com/?user_id=ddmitriev&release=all&project_type=all&module=fuel-qa
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "fuel-core-team" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to fuel-core-team+unsubscribe at mirantis.com.
>>>>> For more options, visit
>>>>> https://groups.google.com/a/mirantis.com/d/optout.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Ivan Kliuk
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>
> --
> --
> Best regards,
> Shamray Yuriy
> QA/DevOps, Mirantis, Inc.
> Skype: ss-yuriy
> +38 (098) 400 48 41
> 38 Lenina ave., Kharkov, Ukraine
> www.mirantis.com <http://www.mirantis.ru/>
>

From gong_ys2004 at aliyun.com  Wed Sep 23 11:32:04 2015
From: gong_ys2004 at aliyun.com (gong_ys2004)
Date: Wed, 23 Sep 2015 19:32:04 +0800
Subject: [openstack-dev] [nova] where the aggregate and cell document is
Message-ID: <----1a------ucW1a$8be1b18a-865c-4dcb-9f98-5b86b1fb254c@aliyun.com>

Hi stackers,

I want to set up a cell and aggregate environment, but I failed to find the
documentation about them. Could you please help me find it?

Regards,
Yong Sheng Gong

From julien at danjou.info  Wed Sep 23 11:36:04 2015
From: julien at danjou.info (Julien Danjou)
Date: Wed, 23 Sep 2015 13:36:04 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
	proxies across all API services
In-Reply-To: <56028A4B.9010203@dague.net> (Sean Dague's message of "Wed, 23
 Sep 2015 07:17:31 -0400")
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net>
Message-ID: <m0fv25fjvf.fsf@danjou.info>

On Wed, Sep 23 2015, Sean Dague wrote:

> Does that solution work in the HA Proxy case where there is one
> terminating address for multiple backend servers?

Yep.

> Because there is the concern that this impacts not only the Location
> header, but the link documents inside the responses which clients are
> expected to be able to link.follow. This is an honest question, I
> don't know how the oslo_middleware.ssl acts in these cases. And HA
> Proxy 1 to N mapping is very common deployment model.

It should, but some projects like Keystone do not handle that
correctly. I just submitted a patch that fixes this kind of thing by
correctly using the WSGI environment variables to build a correct URL.
It also fixes the use case where Keystone does not run on / but on
e.g. /identity (the bug I initially wanted to fix).

  https://review.openstack.org/#/c/226464/

If you use `wsgiref.util.application_uri(environ)' it should do
everything correctly. With the SSL middleware that Mathieu talked about
enabled, it will correctly translate http to https too.
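[Editor's note: for illustration, `application_uri` rebuilds the base URL purely from the environ, so whatever the proxy (or the SSL middleware) puts there comes out in the resulting URL. The environ values below are made up:]

```python
from wsgiref.util import application_uri

# Environ as it might look behind an SSL-terminating proxy that mounts
# Keystone under /identity (illustrative values).
environ = {
    'wsgi.url_scheme': 'https',        # e.g. set by the SSL middleware
    'HTTP_HOST': 'proxy.example.com',  # the proxy-facing host
    'SCRIPT_NAME': '/identity',        # the mount prefix
    'SERVER_NAME': 'backend',          # ignored when HTTP_HOST is set
    'SERVER_PORT': '5000',
}
print(application_uri(environ))  # https://proxy.example.com/identity
```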

The {public,admin}_endpoint options are only useful in the case where you
map http://myproxy/identity -> http://mykeystone/ using a proxy, because
the prefix is not passed to Keystone. If you map the path part 1:1, we
could also leverage X-Forwarded-Host and X-Forwarded-Port to avoid having
the {public,admin}_endpoint options.


-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info

From Kevin.Fox at pnnl.gov  Wed Sep 23 11:37:48 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 23 Sep 2015 11:37:48 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>,
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>

Separate ns would work great.

Thanks,
Kevin

________________________________
From: Banashankar KV
Sent: Tuesday, September 22, 2015 9:14:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

What do you think about separating both of them, with the names as Doug mentioned? In the future, if we want to get rid of v1, we can just remove that namespace. Everything will be clean.

Thanks
Banashankar


On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
As I understand it, loadbalancer in v2 is more like pool was in v1. Can we make it such that if you are using the loadbalancer resource and have the mandatory v2 properties, it tries to use the v2 API, and otherwise it's a v1 resource? PoolMember should be OK being the same. It just needs to call v1 or v2 depending on whether the lb it's pointing at is v1 or v2. Is the monitor's API different between them? Can it be like pool member?

Thanks,
Kevin

________________________________
From: Brandon Logan
Sent: Tuesday, September 22, 2015 5:39:03 PM

To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

So the v1 API is of the structure:

<neutron-endpoint>/lb/(vip|pool|member|health_monitor)

V2s is:
<neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)

member is a child of pool, so it would go down one level.

The only difference is the lb for v1 and lbaas for v2.  Not sure if that
is enough of a difference.

Thanks,
Brandon
On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
> Thats the problem. :/
>
> I can't think of a way to have them coexist without: breaking old
> templates, including v2 in the name, or having a flag on the resource
> saying the version is v2. And as an app developer I'd rather not have
> my existing templates break.
>
> I haven't compared the api's at all, but is there a required field of
> v2 that is different enough from v1 that by its simple existence in
> the resource you can tell a v2 from a v1 object? Would something like
> that work? PoolMember wouldn't have to change, the same resource could
> probably work for whatever lb it was pointing at I'm guessing.
>
> Thanks,
> Kevin
>
>
>
> ______________________________________________________________________
> From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
> Sent: Tuesday, September 22, 2015 4:40 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> LbaasV2
>
>
>
> Ok, sounds good. So now the question is how should we name the new V2
> resources ?
>
>
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>>
> wrote:
>         Yes, hence the need to support the v2 resources as seperate
>         things. Then I can rewrite the templates to include the new
>         resources rather than the old resources as appropriate. IE, it
>         will be a porting effort to rewrite them. Then do a heat
>         update on the stack to migrate it from lbv1 to lbv2. Since
>         they are different resources, it should create the new and
>         delete the old.
>
>         Thanks,
>         Kevin
>
>
>         ______________________________________________________________
>         From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
>         Sent: Tuesday, September 22, 2015 4:16 PM
>
>         To: OpenStack Development Mailing List (not for usage
>         questions)
>         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>         for LbaasV2
>
>
>
>
>         But I think, although V2 has introduced some new components and
>         the whole association of the resources with each other has
>         changed, we should still be able to do what Kevin has mentioned?
>
>         Thanks
>         Banashankar
>
>
>
>         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>         <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
>                 There needs to be a way to have both v1 and v2
>                 supported in one engine....
>
>                 Say I have templates that use v1 already in existence
>                 (I do), and I want to be able to heat stack update on
>                 them one at a time to v2. This will replace the v1 lb
>                 with v2, migrating the floating ip from the v1 lb to
>                 the v2 one. This gives a smoothish upgrade path.
>
>                 Thanks,
>                 Kevin
>                 ________________________________________
>                 From: Brandon Logan [brandon.logan at RACKSPACE.COM<mailto:brandon.logan at RACKSPACE.COM>]
>                 Sent: Tuesday, September 22, 2015 3:22 PM
>                 To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
>                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for LbaasV2
>
>                 Well I'd hate to have the V2 postfix on it because V1
>                 will be deprecated
>                 and removed, which means the V2 being there would be
>                 lame.  Is there any
>                 kind of precedent set for how to handle this?
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>                 wrote:
>                 > So are we thinking of making it as ?
>                 > OS::Neutron::LoadBalancerV2
>                 >
>                 > OS::Neutron::ListenerV2
>                 >
>                 > OS::Neutron::PoolV2
>                 >
>                 > OS::Neutron::PoolMemberV2
>                 >
>                 > OS::Neutron::HealthMonitorV2
>                 >
>                 >
>                 >
>                 > and add all those into the loadbalancer.py of heat
>                 engine ?
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>                 > <skraynev at mirantis.com<mailto:skraynev at mirantis.com>> wrote:
>                 >         Brandon.
>                 >
>                 >
>                 >         As I understand, v1 and v2 have
>                 differences in the list of
>                 >         objects and also in the relationships between
>                 them.
>                 >         So I don't think that it will be easy to
>                 upgrade old resources
>                 >         (unfortunately).
>                 >         I'd agree with second Kevin's suggestion
>                 about implementation
>                 >         new resources in this case.
>                 >
>                 >
>                 >         I see that a lot of guys want to help
>                 with it :) And I
>                 >         suppose that Rabi Mishra and I may try to
>                 help with it,
>                 >         because we were involved in the implementation
>                 of the v1 resources
>                 >         in Heat.
>                 >         Following is the list of v1 lbaas resources
>                 in Heat:
>                 >
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>                 >
>                 >
>                 >
>                 >         Also, I suppose, that it may be discussed
>                 during summit
>                 >         talks :)
>                 >         Will add to etherpad with potential
>                 sessions.
>                 >
>                 >
>                 >
>                 >         Regards,
>                 >         Sergey.
>                 >
>                 >         On 22 September 2015 at 22:27, Brandon Logan
>                 >         <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
>                 >                 There is some overlap, but there were
>                 some incompatible
>                 >                 differences when
>                 >                 we started designing v2.  I'm sure
>                 the same issues
>                 >                 will arise this time
>                 >                 around so new resources sounds like
>                 the path to go.
>                 >                 However, I do not
>                 >                 know much about Heat and the
>                 resources so I'm speaking
>                 >                 on a very
>                 >                 uneducated level here.
>                 >
>                 >                 Thanks,
>                 >                 Brandon
>                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>                 Fox, Kevin M wrote:
>                 >                 > We're using the v1 resources...
>                 >                 >
>                 >                 > If the v2 ones are compatible and
>                 can seamlessly
>                 >                 upgrade, great
>                 >                 >
>                 >                 > Otherwise, make new ones please.
>                 >                 >
>                 >                 > Thanks,
>                 >                 > Kevin
>                 >                 >
>                 >                 >
>                 >
>                  ______________________________________________________________________
>                 >                 > From: Banashankar KV
>                 [banveerad at gmail.com<mailto:banveerad at gmail.com>]
>                 >                 > Sent: Tuesday, September 22, 2015
>                 10:07 AM
>                 >                 > To: OpenStack Development Mailing
>                 List (not for
>                 >                 usage questions)
>                 >                 > Subject: Re: [openstack-dev]
>                 [neutron][lbaas] - Heat
>                 >                 support for
>                 >                 > LbaasV2
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Hi Brandon,
>                 >                 > Work in progress, but I need some
>                 input on the way we
>                 >                 want them: should they
>                 >                 > replace the existing lbaasv1 resources, or do we
>                 still need to
>                 >                 support them?
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Thanks
>                 >                 > Banashankar
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
>                 Brandon Logan
>                 >                 > <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>>
>                 wrote:
>                 >                 >         Hi Banashankar,
>                 >                 >         I think it'd be great if
>                 you got this going.
>                 >                 One of those
>                 >                 >         things we
>                 >                 >         want to have and people
>                 ask for but has
>                 >                 always gotten a lower
>                 >                 >         priority
>                 >                 >         due to the critical things
>                 needed.
>                 >                 >
>                 >                 >         Thanks,
>                 >                 >         Brandon
>                 >                 >         On Mon, 2015-09-21 at
>                 17:57 -0700,
>                 >                 Banashankar KV wrote:
>                 >                 >         > Hi All,
>                 >                 >         > I was thinking of
>                 starting the work on
>                 >                 heat to support
>                 >                 >         LBaasV2,  Is
>                 >                 >         > there any concerns about
>                 that?
>                 >                 >         >
>                 >                 >         >
>                 >                 >         > I don't know if it is
>                 the right time to
>                 >                 bring this up :D .
>                 >                 >         >
>                 >                 >         > Thanks,
>                 >                 >         > Banashankar (bana_k)
>                 >                 >         >
>                 >                 >         >
>                 >                 >
>                 >                 >         >
>                 >                 >
>                 >
>                 __________________________________________________________________________
>                 >                 >         > OpenStack Development
>                 Mailing List (not
>                 >                 for usage questions)
>                 >                 >         > Unsubscribe:
>                 >                 >
>                 >
>                 OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>                 >                 >         >
>                 >                 >
>                 >
>                 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From zzelle at gmail.com  Wed Sep 23 12:01:15 2015
From: zzelle at gmail.com (ZZelle)
Date: Wed, 23 Sep 2015 14:01:15 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <56028A4B.9010203@dague.net>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net>
Message-ID: <CAMS-DWgr0-7RQ-hb5xcqYPeRidmQPz6xatn4uvn2JzDgHFK6KQ@mail.gmail.com>

Hi,

SSLMiddleware takes into account a header[1] to set wsgi.url_scheme,
which allows a proxy to provide the original protocol to Heat/Neutron/...


Does that solution work in the HA Proxy case where there is one
> terminating address for multiple backend servers? Because there is the
> concern that this impacts not only the Location header, but the link
> documents inside the responses which clients are expected to be able to
> link.follow. This is an honest question, I don't know how the
> oslo_middleware.ssl acts in these cases. And HA Proxy 1 to N mapping is
> very common deployment model.
>

It ensures the protocol provided in headers will be used to generate
correct Location Headers and links.

BUT there are some limitations:

* It doesn't work when the service itself acts as a proxy (typically nova
  image-list).
* It doesn't work when you rewrite from
  https://<proxy-host>:<proxy-port>/<base>/... to http://<host>:<port>/...,
  because the <base> information is not provided in the headers (except if
  you exploit a webob limitation).

Cédric/ZZelle at IRC

From julien at danjou.info  Wed Sep 23 12:12:19 2015
From: julien at danjou.info (Julien Danjou)
Date: Wed, 23 Sep 2015 14:12:19 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
	proxies across all API services
In-Reply-To: <CAMS-DWgr0-7RQ-hb5xcqYPeRidmQPz6xatn4uvn2JzDgHFK6KQ@mail.gmail.com>
 (zzelle@gmail.com's message of "Wed, 23 Sep 2015 14:01:15 +0200")
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net>
 <CAMS-DWgr0-7RQ-hb5xcqYPeRidmQPz6xatn4uvn2JzDgHFK6KQ@mail.gmail.com>
Message-ID: <m06131fi70.fsf@danjou.info>

On Wed, Sep 23 2015, ZZelle wrote:

> * It doesn't work when the service itself acts as a proxy (typically nova
> image-list)
> * it doesn't work when you rewrite from
> https://<proxy-host>:<proxy-port>/<base>/...
> to http://<host>:<port>/...
>   because the <base> information is not provided in the headers (except if
> you exploit a webob limitation)

Yup, that's what I wrote in my previous mail: that's the only case not
covered correctly, except if you have a specific oslo.config option à la
Keystone.

Though we could also use and document a header that we would use inside
OpenStack to pass the <base> around. That would avoid having anything to
configure in each service, just setting a proper header in your proxy.
Which means less configuration to do for OpenStack, always a good thing
IMHO.
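To illustrate the suggestion (the header name "X-Forwarded-Prefix" is
hypothetical here, not an agreed convention), such a shared middleware could
simply fold the proxy-supplied <base> into SCRIPT_NAME:

```python
# Hypothetical sketch of the idea above: the proxy sets a header carrying
# the stripped <base> prefix, and one shared middleware folds it into
# SCRIPT_NAME so every service reconstructs correct links with no
# per-service configuration. The header name is an assumption.

class ForwardedPrefixMiddleware:
    def __init__(self, app, header='HTTP_X_FORWARDED_PREFIX'):
        self.app = app
        self.header = header

    def __call__(self, environ, start_response):
        prefix = environ.get(self.header, '')
        if prefix:
            # Prepend the proxy's base path to whatever mount point the
            # service already has.
            environ['SCRIPT_NAME'] = (prefix.rstrip('/') +
                                      environ.get('SCRIPT_NAME', ''))
        return self.app(environ, start_response)
```

The proxy would then only need one `proxy_set_header`-style line, and services
building URLs from SCRIPT_NAME would emit the externally visible paths.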

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info

From thierry at openstack.org  Wed Sep 23 12:18:18 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 23 Sep 2015 14:18:18 +0200
Subject: [openstack-dev] [releases] semver and dependency changes
In-Reply-To: <CAJ3HoZ10+wA99SVxcYxAtThx50dgM7Cs+XM3qQaBY0=Bdd3htA@mail.gmail.com>
References: <CAJ3HoZ3L8oxsC3HZhaNtZJU-BwTNviFVSvxJid8zxxjbYWP0CQ@mail.gmail.com>
 <56015FB4.1030406@openstack.org>
 <CAJ3HoZ10+wA99SVxcYxAtThx50dgM7Cs+XM3qQaBY0=Bdd3htA@mail.gmail.com>
Message-ID: <5602988A.60003@openstack.org>

Robert Collins wrote:
> On 23 September 2015 at 02:03, Thierry Carrez <thierry at openstack.org> wrote:
>> Robert Collins wrote:
>>> [...]
>>> So, one answer we can use is "The version impact of a requirements
>>> change is never less than the largest version change in the change."
>>> That is:
>>> nothing -> a requirement -> major version change
>>
>> That feels a bit too much. In a lot of cases, the added requirement will
>> be used in a new, backward-compatible feature (requiring y bump), or
>> will serve to remove code without changing functionality (requiring z
>> bump). I would think that the cases where a new requirement requires a x
>> bump are rare.
> 
> So the question is 'will requiring this new thing force any users of
> the library to upgrade past a major version of that new thing'. If it's
> new to OpenStack, then the answer is clearly no, for users of the
> library within OpenStack, but we don't know about users outside of
> OpenStack. I am entirely happy to concede that this should be case by
> case though.
> 
>>> 1.x.y -> 2.0.0 -> major version change
>>> 1.2.y -> 1.3.0 -> minor version change
>>> 1.2.3. -> 1.2.4 -> patch version change
>>
>> The last two sound like good rules of thumb.
> 
> What about the one you didn't comment on ? :)

Damn, you got me. Hmm... "it depends" ? I think we can't really have a
default rule for a major bump. Depending on how the library is used, it
could trigger anything. Defaulting to a major bump as a result sounds a
bit overkill.
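Robert's rule ("the version impact of a requirements change is never less
than the largest version change in the change") can be sketched as a small
helper; the semver parsing below is deliberately simplified and illustrative:

```python
# Illustrative sketch of the proposed rule: the semver impact of a
# requirements change is the largest impact of any individual requirement
# bump. Levels: 0 = patch (z), 1 = minor (y), 2 = major (x).

def bump_level(old, new):
    """Impact of moving one dependency from version `old` to `new`.

    `old` is None for a newly added requirement, which under the strictest
    reading of the rule counts as a major impact (the thread concedes this
    should really be decided case by case).
    """
    if old is None:
        return 2
    old_parts = [int(p) for p in old.split('.')]
    new_parts = [int(p) for p in new.split('.')]
    if new_parts[0] != old_parts[0]:
        return 2  # 1.x.y -> 2.0.0: major version change
    if new_parts[1] != old_parts[1]:
        return 1  # 1.2.y -> 1.3.0: minor version change
    return 0      # 1.2.3 -> 1.2.4: patch version change

def requirements_impact(changes):
    """`changes` is a list of (old_version_or_None, new_version) pairs."""
    return max((bump_level(o, n) for o, n in changes), default=0)
```

The open question in the thread is exactly the `old is None` branch: whether
a brand-new requirement should force a major bump by default.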

-- 
Thierry Carrez (ttx)


From paul.carlton2 at hpe.com  Wed Sep 23 12:36:09 2015
From: paul.carlton2 at hpe.com (Paul Carlton)
Date: Wed, 23 Sep 2015 13:36:09 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150922152012.GA28888@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <B987741E651FDE4584B7A9C0F7180DEB1CDC2D02@G4W3208.americas.hpqcorp.net>
 <20150921085609.GC28520@redhat.com> <56016E27.7070106@windriver.com>
 <20150922152012.GA28888@redhat.com>
Message-ID: <56029CB9.6020308@hpe.com>


On 22/09/15 16:20, Daniel P. Berrange wrote:
> On Tue, Sep 22, 2015 at 09:05:11AM -0600, Chris Friesen wrote:
>> On 09/21/2015 02:56 AM, Daniel P. Berrange wrote:
>>> On Fri, Sep 18, 2015 at 05:47:31PM +0000, Carlton, Paul (Cloud Services) wrote:
>>>> However the most significant impediment we encountered was customer
>>>> complaints about performance of instances during migration.  We did a little
>>>> bit of work to identify the cause of this and concluded that the main issue
>>>> was disk i/o contention.  I wonder if this is something you or others have
>>>> encountered?  I'd be interested in any idea for managing the rate of the
>>>> migration processing to prevent it from adversely impacting the customer
>>>> application performance.  I appreciate that if we throttle the migration
>>>> processing it will take longer and may not be able to keep up with the rate
>>>> of disk/memory change in the instance.
>>> I would not expect live migration to have an impact on disk I/O, unless
>>> your storage is network based and using the same network as the migration
>>> data. While migration is taking place you'll see a small impact on the
>>> guest compute performance, due to page table dirty bitmap tracking, but
>>> that shouldn't appear directly as disk I/O problem. There is no throttling
>>> of guest I/O at all during migration.
>> Technically if you're doing a lot of disk I/O couldn't you end up with a
>> case where you're thrashing the page cache enough to interfere with
>> migration?  So it's actually memory change that is the problem, but it might
>> not be memory that the application is modifying directly but rather memory
>> allocated by the kernel.
>>
>>>> Could you point me at somewhere I can get details of the tuneable setting
>>>> relating to cutover down time please?  I'm assuming that at these are
>>>> libvirt/qemu settings?  I'd like to play with them in our test environment
>>>> to see if we can simulate busy instances and determine what works.  I'd also
>>>> be happy to do some work to expose these in nova so the cloud operator can
>>>> tweak if necessary?
>>> It is already exposed as 'live_migration_downtime' along with
>>> live_migration_downtime_steps, and live_migration_downtime_delay.
>>> Again, it shouldn't have any impact on guest performance while
>>> live migration is taking place. It only comes into effect when
>>> checking whether the guest is ready to switch to the new host.
>> Has anyone given thought to exposing some of these new parameters to the
>> end-user?  I could see a scenario where an image might want to specify the
>> acceptable downtime over migration.  (On the other hand that might be tricky
>> from the operator perspective.)
> I'm of the opinion that we should really try to avoid exposing *any*
> migration tunables to the tenant user. All the tunables are pretty
> hypervisor specific and low level and not very friendly to expose
> to tenants. Instead our focus should be on ensuring that it will
> always "just work" from the tenants POV.
>
> When QEMU gets 'post copy' migration working, we'll want to adopt
> that asap, as that will give us the means to guarantee that migration
> will always complete with very little need for tuning.
>
> At most I could see the users being able to given some high level
> indication as to whether their images tolerate some level of
> latency, so Nova can decide what migration characteristic is
> acceptable.
>
> Regards,
> Daniel
Actually I was not envisaging the controls on migration tuning being
made available to the user.  I was thinking we should provide the cloud
administrator with the facility to increase the live_migration_downtime
setting to increase the chance of a migration being able to complete.
I would expect that this would be used in consultation with the instance
owner.  It seems to me, it might be a viable alternative to pausing the
instance to allow the migration to complete.
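The tunables Daniel names live in nova.conf on the compute nodes; a sketch of
what an operator might adjust (values are examples only, and the exact option
group and defaults vary by release):

```ini
# Illustrative nova.conf fragment (option names are from the thread;
# the values here are examples, not recommendations).
[DEFAULT]
# Maximum permitted guest pause at cutover, in milliseconds.
live_migration_downtime = 500
# Number of incremental steps used to reach that maximum.
live_migration_downtime_steps = 10
# Delay between downtime increases, scaled per GiB of guest RAM (seconds).
live_migration_downtime_delay = 75
```

Raising the maximum downtime increases the chance a busy guest converges, at
the cost of a longer pause at switchover.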

-- 
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:    +44 (0)7768 994283
Email:    mailto:paul.carlton2 at hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error, you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated you should consider this message and attachments as "HP CONFIDENTIAL".



From paul.carlton2 at hpe.com  Wed Sep 23 12:48:17 2015
From: paul.carlton2 at hpe.com (Paul Carlton)
Date: Wed, 23 Sep 2015 13:48:17 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150922154431.GC28888@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>
 <560173EA.2060506@windriver.com> <20150922154431.GC28888@redhat.com>
Message-ID: <56029F91.8000803@hpe.com>



On 22/09/15 16:44, Daniel P. Berrange wrote:
> On Tue, Sep 22, 2015 at 09:29:46AM -0600, Chris Friesen wrote:
>>>> There is also work on post-copy migration in QEMU. Normally with live
>>>> migration, the guest doesn't start executing on the target host until
>>>> migration has transferred all data. There are many workloads where that
>>>> doesn't work, as the guest is dirtying data too quickly, With post-copy you
>>>> can start running the guest on the target at any time, and when it faults
>>>> on a missing page that will be pulled from the source host. This is
>>>> slightly more fragile as you risk losing the guest entirely if the source
>>>> host dies before migration finally completes. It does guarantee that
>>>> migration will succeed no matter what workload is in the guest. This is
>>>> probably Nxxxx cycle material.
>> It seems to me that the ideal solution would be to start doing pre-copy
>> migration, then if that doesn't converge with the specified downtime value
>> then maybe have the option to just cut over to the destination and do a
>> post-copy migration of the remaining data.
> Yes, that is precisely what the QEMU developers working on this
> feature suggest we should do. The lazy page faulting on the target
> host has a performance hit on the guest, so you definitely need
> to give a little time for pre-copy to start off with, and then
> switch to post-copy once some benchmark is reached, or if progress
> info shows the transfer is not making progress.
>
> Regards,
> Daniel
I'd be a bit concerned about automatically switching to the post-copy
mode.  As Daniel commented previously, if something goes wrong on the
source node the customer's instance could be lost.  Many cloud operators
will want to control the use of this mode.  As per my previous message,
this could be something that is set on or off by default, with a PUT
operation on os-migration to update the setting for a specific
migration.

-- 
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:    +44 (0)7768 994283
Email:    mailto:paul.carlton2 at hpe.com



From berrange at redhat.com  Wed Sep 23 13:11:28 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Wed, 23 Sep 2015 14:11:28 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <56029F91.8000803@hpe.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>
 <560173EA.2060506@windriver.com>
 <20150922154431.GC28888@redhat.com> <56029F91.8000803@hpe.com>
Message-ID: <20150923131128.GZ15421@redhat.com>

On Wed, Sep 23, 2015 at 01:48:17PM +0100, Paul Carlton wrote:
> 
> 
> On 22/09/15 16:44, Daniel P. Berrange wrote:
> >On Tue, Sep 22, 2015 at 09:29:46AM -0600, Chris Friesen wrote:
> >>>>There is also work on post-copy migration in QEMU. Normally with live
> >>>>migration, the guest doesn't start executing on the target host until
> >>>>migration has transferred all data. There are many workloads where that
> >>>>doesn't work, as the guest is dirtying data too quickly, With post-copy you
> >>>>can start running the guest on the target at any time, and when it faults
> >>>>on a missing page that will be pulled from the source host. This is
> >>slightly more fragile as you risk losing the guest entirely if the source
> >>>>host dies before migration finally completes. It does guarantee that
> >>>>migration will succeed no matter what workload is in the guest. This is
> >>>>probably Nxxxx cycle material.
> >>It seems to me that the ideal solution would be to start doing pre-copy
> >>migration, then if that doesn't converge with the specified downtime value
> >>then maybe have the option to just cut over to the destination and do a
> >>post-copy migration of the remaining data.
> >Yes, that is precisely what the QEMU developers working on this
> >feature suggest we should do. The lazy page faulting on the target
> >host has a performance hit on the guest, so you definitely need
> >to give a little time for pre-copy to start off with, and then
> >switch to post-copy once some benchmark is reached, or if progress
> >info shows the transfer is not making progress.
> >
> >Regards,
> >Daniel
> I'd be a bit concerned about automatically switching to the post copy
> mode.  As Daniel commented previously, if something goes wrong on the
> source node the customer's instance could be lost.  Many cloud operators
> will want to control the use of this mode.  As per my previous message
> this could be something that could be set on or off by default but
> provide a PUT operation on os-migration to update setting on for a
> specific migration

NB, if you are concerned about the source host going down while
migration is still taking place, you will lose the VM even with
pre-copy mode too, since the VM will of course still be running
on the source.

The new failure scenario is essentially about the network
connection between the source & destination hosts - if the network
layer fails while post-copy is running, then you lose the
VM.

In some sense post-copy will reduce the window of failure,
because it should ensure that the VM migration completes
in a faster & finite amount of time. I think this is
probably particularly important for host evacuation so
the admin can guarantee to get all the VMs off a host in
a reasonable amount of time.

As such I don't think you need expose post-copy as a concept in the
API, but I could see a nova.conf value to say whether use of post-copy
was acceptable, so those who want to have stronger resilience against
network failure can turn off post-copy.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From paul.carlton2 at hpe.com  Wed Sep 23 13:27:11 2015
From: paul.carlton2 at hpe.com (Paul Carlton)
Date: Wed, 23 Sep 2015 14:27:11 +0100
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <20150923131128.GZ15421@redhat.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <20150918152346.GI16906@redhat.com>
 <191B00529A37FA4F9B1CAE61859E4D6E5AB44DC2@IRSMSX101.ger.corp.intel.com>
 <560173EA.2060506@windriver.com> <20150922154431.GC28888@redhat.com>
 <56029F91.8000803@hpe.com> <20150923131128.GZ15421@redhat.com>
Message-ID: <5602A8AF.2040001@hpe.com>



On 23/09/15 14:11, Daniel P. Berrange wrote:
> On Wed, Sep 23, 2015 at 01:48:17PM +0100, Paul Carlton wrote:
>>
>> On 22/09/15 16:44, Daniel P. Berrange wrote:
>>> On Tue, Sep 22, 2015 at 09:29:46AM -0600, Chris Friesen wrote:
>>>>>> There is also work on post-copy migration in QEMU. Normally with live
>>>>>> migration, the guest doesn't start executing on the target host until
>>>>>> migration has transferred all data. There are many workloads where that
>>>>>> doesn't work, as the guest is dirtying data too quickly. With post-copy you
>>>>>> can start running the guest on the target at any time, and when it faults
>>>>>> on a missing page that will be pulled from the source host. This is
>>>>>> slightly more fragile as you risk losing the guest entirely if the source
>>>>>> host dies before migration finally completes. It does guarantee that
>>>>>> migration will succeed no matter what workload is in the guest. This is
>>>>>> probably Nxxxx cycle material.
>>>> It seems to me that the ideal solution would be to start doing pre-copy
>>>> migration, then if that doesn't converge with the specified downtime value
>>>> then maybe have the option to just cut over to the destination and do a
>>>> post-copy migration of the remaining data.
>>> Yes, that is precisely what the QEMU developers working on this
>>> feature suggest we should do. The lazy page faulting on the target
>>> host has a performance hit on the guest, so you definitely need
>>> to give a little time for pre-copy to start off with, and then
>>> switch to post-copy once some benchmark is reached, or if progress
>>> info shows the transfer is not making progress.
>>>
>>> Regards,
>>> Daniel
>> I'd be a bit concerned about automatically switching to the post-copy
>> mode.  As Daniel commented previously, if something goes wrong on the
>> source node the customer's instance could be lost.  Many cloud operators
>> will want to control the use of this mode.  As per my previous message,
>> this could be something that is set on or off by default, with a PUT
>> operation on os-migration to update the setting for a
>> specific migration.
> NB, if you are concerned about the source host going down while
> migration is still taking place, you will lose the VM even with
> pre-copy mode, since the VM will of course still be running
> on the source.
>
> The new failure scenario is essentially about the network
> connection between the source & destination hosts - if the network
> layer fails while post-copy is running, then you lose the
> VM.
>
> In some sense post-copy will reduce the window of failure,
> because it should ensure that the VM migration completes
> in a faster & finite amount of time. I think this is
> probably particularly important for host evacuation so
> the admin can guarantee to get all the VMs off a host in
> a reasonable amount of time.
>
> As such I don't think you need to expose post-copy as a concept in the
> API, but I could see a nova.conf value to say whether use of post-copy
> is acceptable, so those who want stronger resilience against
> network failure can turn off post-copy.
>
> Regards,
> Daniel

If the source node fails during a pre-copy migration then, when that node
is restored, the instance is usually intact.  With the post-copy
approach the risk is that the instance will be corrupted, which many
cloud operators would consider an unacceptable risk.

However, let's start by exposing it as a nova.conf setting and see how
that goes.

-- 
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:    +44 (0)7768 994283
Email:    mailto:paul.carlton2 at hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error, you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated you should consider this message and attachments as "HP CONFIDENTIAL".


-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4722 bytes
Desc: S/MIME Cryptographic Signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/670383c8/attachment.bin>

From mriedem at linux.vnet.ibm.com  Wed Sep 23 13:31:26 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 08:31:26 -0500
Subject: [openstack-dev] [nova] How to properly detect and fence a
 compromised host (and why I dislike TrustedFilter)
In-Reply-To: <558BC304.2080006@redhat.com>
References: <558937E1.4090103@redhat.com>
 <CAHXdxOedgyWi_uAtrPhzTo59XSKshbQ1ExK473qVpdPeR32XPw@mail.gmail.com>
 <558BC304.2080006@redhat.com>
Message-ID: <5602A9AE.2090604@linux.vnet.ibm.com>



On 6/25/2015 3:59 AM, Sylvain Bauza wrote:
>
>
> Le 24/06/2015 19:56, Joe Gordon a écrit :
>>
>>
>> On Tue, Jun 23, 2015 at 3:41 AM, Sylvain Bauza <sbauza at redhat.com
>> <mailto:sbauza at redhat.com>> wrote:
>>
>>     Hi team,
>>
>>     Some discussion occurred over IRC about a bug which was publicly
>>     open related to TrustedFilter [1]
>>     I want to take the opportunity to raise my concerns about that
>>     specific filter, why I dislike it, and how I think we could improve
>>     the situation - and clarify everyone's thoughts.
>>
>>     The current situation is this: Nova only checks whether a host
>>     is compromised when the scheduler is called, ie. only when
>>     booting/migrating/evacuating/unshelving an instance (well, not
>>     exactly all the evacuate/live-migrate cases, but let's not discuss
>>     about that now). When the request goes in the scheduler, all the
>>     hosts are checked against all the enabled filters and the
>>     TrustedFilter is making an external HTTP(S) call to the
>>     Attestation API service (not handled by Nova) for *each host* to
>>     see if the host is valid (not compromised) or not.
>>
>>     To be clear, that's the only in-tree scheduler filter which
>>     explicitly does an external call to a separate service that Nova
>>     is not managing. I can see at least 3 reasons for thinking about
>>     why it's bad :
>>
>>     #1 : it's a terrible bottleneck for performance, because we're
>>     IO-blocking N times given N hosts (and we're not even multiplexing the
>>     HTTP requests)
>>     #2 : all the filters check an internal Nova state for the
>>     host (called HostState) except the TrustedFilter, which means
>>     that conceptually we defer the decision to a 3rd-party engine
>>     #3 : the Attestation API service becomes a de facto dependency
>>     for Nova (since it's an in-tree filter) while it's not listed as a
>>     dependency and thus not gated.
>>
>>
>>     All of these reasons could be acceptable if that would cover the
>>     exposed usecase given in [1] (ie. I want to make sure that if my
>>     host gets compromised, my instances will not be running on that
>>     host) but that just doesn't work, due to the situation I mentioned
>>     above.
>>
>>     So, given that, here are my thoughts :
>>     a/ if a host gets compromised, we can just disable its service to
>>     prevent its election as a valid destination host. There is no need
>>     for a specialised filter.
>>     b/ if a host is compromised, we can assume that the instances have
>>     to resurrect elsewhere, ie. we can call a nova evacuate
>>     c/ checking whether a host is compromised is not a Nova
>>     responsibility, since it's already perfectly done by [2]
>>
>>     In other words, I'm considering that "security" usecase as
>>     something analog as the HA usecase [3] where we need a 3rd-party
>>     tool responsible for periodically checking the state of the hosts,
>>     and if compromised then call the Nova API for fencing the host and
>>     evacuating the compromised instances.
>>
>>     Given that, I'm proposing to deprecate TrustedFilter and explicitly
>>     mention to drop it from in-tree in a later cycle
>>     https://review.openstack.org/194592
>>
>>
>> Given people are using this, it is a negligible maintenance burden.  I
>> think deprecating with the intention of removing is not worth it.
>>
>> Although it would be very useful to further document the risks with
>> this filter (live migration, possible performance issues etc.)
>
> Well, I can understand that customers might not agree to removing
> the filter because there is no clear alternative for them. That said, I
> think saying that the filter is deprecated, without saying when it would
> be removed, would help get some contributors thinking about that and working
> on a better solution, exactly like we did for the EC2 API.
>
> To be clear, I want to freeze the filter by deprecating it and
> explaining why it's wrong (by amending the devref section and adding a
> LOG warning saying it's deprecated) and then leave the filter
> in-tree unless we are sure that there is a good solution outside of Nova.
>
> -Sylvain
>
>
>>
>>
>>     Thoughts ?
>>     -Sylvain
>>
>>
>>
>>     [1] https://bugs.launchpad.net/nova/+bug/1456228
>>     [2] https://github.com/OpenAttestation/OpenAttestation
>>     [3]
>>     http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/
>>
>>
>>     __________________________________________________________________________
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe:
>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I just reviewed the change https://review.openstack.org/#/c/194592/ and 
agree with Joe.

We can't justify deprecation and removal due to lack of CI testing - 
there are many scheduler filters which aren't tested in the gate.  Or if 
we can justify it that way, then we're setting a precedent.  So if 
testing is the sore spot, then maybe we want Intel to look at setting up 
3rd party CI?  Maybe they could work it into their existing PCI CI?

I also don't think we can justify the external dependency as grounds for 
removal.  There are many possible configurations that require external 
dependencies.  90% of cinder/neutron configurations probably fall into 
this camp.

From other parts of this thread it also sounds like there are
potentially alternatives to this filter, but they aren't implemented, or
even written up in a spec.  Given there are users of this, I'd think
we'd want to see an agreed-upon alternative proposal to replace this filter.

I'm all for logging a warning that this filter is experimental (meaning 
it's not tested in our CI system).  I don't think there is a good reason 
to deprecate it right now though with an open-ended removal date.
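
As an aside on Sylvain's point #1 in the quoted thread: even if the filter
stays, the N serial HTTP round-trips could be multiplexed.  A minimal
sketch, where attest_one is a hypothetical callable wrapping the
Attestation API request for a single host:

```python
from concurrent.futures import ThreadPoolExecutor


def check_hosts_trusted(hosts, attest_one, max_workers=10):
    # Issue the per-host attestation checks concurrently instead of
    # IO-blocking N times in sequence; attest_one(host) returns True
    # if the attestation service reports the host as trusted.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(hosts, pool.map(attest_one, hosts)))
```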

-- 

Thanks,

Matt Riedemann



From mmagr at redhat.com  Wed Sep 23 13:42:59 2015
From: mmagr at redhat.com (=?UTF-8?Q?Martin_M=c3=a1gr?=)
Date: Wed, 23 Sep 2015 15:42:59 +0200
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <5601EFA8.8040204@puppetlabs.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com>
Message-ID: <5602AC63.302@redhat.com>



On 09/23/2015 02:17 AM, Cody Herriges wrote:
> Alex Schultz wrote:
>> Hey puppet folks,
>>
>> Based on the meeting yesterday[0], I had proposed creating a parser
>> function called is_service_default[1] to validate if a variable matched
>> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
>> about how can we maybe not use the arbitrary string throughout the
>> puppet that can not easily be validated.  So I tested creating another
>> puppet function named service_default[2] to replace the use of '<SERVICE
>> DEFAULT>' throughout all the puppet modules.  My tests seemed to
>> indicate that you can use a parser function as parameter default for
>> classes.
>>
>> I wanted to send a note to gather comments around the second function.
>> When we originally discussed what to use to designate for a service's
>> default configuration, I really didn't like using an arbitrary string
>> since it's hard to parse and validate. I think leveraging a function
>> might be better since it is something that can be validated via tests
>> and a syntax checker.  Thoughts?
>>
>>
>> Thanks,
>> -Alex
>>
>> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
>> [1] https://review.openstack.org/#/c/223672
>> [2] https://review.openstack.org/#/c/224187
>>
> I've been mulling this over the last several days and I just can't
> accept an entire ruby function which would be run for every parameter
> with the desired static value of "<SERVICE DEFAULT>" when the class is
> declared and parsed.  I am not generally against using functions as a
> parameter default, just not a fan in this case, because running ruby just
> to return a static string seems inappropriate and not optimal.
>
> In this specific case I think the params pattern and inheritance can
> get us to the same goals.  I also find this a valid use of inheritance
> across module namespaces, but... only because all our modules must depend
> on puppet-openstacklib.
>
> http://paste.openstack.org/show/473655

+1 for the implementation in the pastebin above. Much better solution than 
running a function.

Regards,
Martin

>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/359316a6/attachment.html>

From blk at acm.org  Wed Sep 23 13:46:40 2015
From: blk at acm.org (Brant Knudson)
Date: Wed, 23 Sep 2015 08:46:40 -0500
Subject: [openstack-dev] [cinder] How to make a mock effactive for all
 method of a testclass
In-Reply-To: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
References: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
Message-ID: <CAHjeE=TwYgWsw3uJCF-LyJ5DuPdZz0iNUdyspVUsQ+xaVzJn9A@mail.gmail.com>

On Wed, Sep 23, 2015 at 3:06 AM, liuxinguo <liuxinguo at huawei.com> wrote:

> Hi,
>
>
>
> In a.py we have a function:
>
>     def _change_file_mode(filepath):
>         utils.execute('chmod', '600', filepath, run_as_root=True)
>
> In test_xxx.py, there is a test class:
>
>     class xxxxDriverTestCase(test.TestCase):
>         def test_a(self):
>             ...
>             call a._change_file_mode
>             ...
>
>         def test_b(self):
>             ...
>             call a._change_file_mode
>             ...
>
> I have tried to mock out the function _change_file_mode like this:
>
>     @mock.patch.object(a, '_change_file_mode', return_value=None)
>     class xxxxDriverTestCase(test.TestCase):
>         def test_a(self):
>             ...
>             call a._change_file_mode
>             ...
>
>         def test_b(self):
>             ...
>             call a._change_file_mode
>             ...
>
> But the mock takes no effect; the real function _change_file_mode is
> still executed.
>
> So how can I make a mock effective for all methods of a test class?
>
> Thanks for any input!
>

Use oslotest's mockpatch.PatchObject fixture:
http://docs.openstack.org/developer/oslotest/api.html#oslotest.mockpatch.PatchObject


> Wilson Liu
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/45e9dd82/attachment.html>

From degorenko at mirantis.com  Wed Sep 23 14:04:00 2015
From: degorenko at mirantis.com (Denis Egorenko)
Date: Wed, 23 Sep 2015 17:04:00 +0300
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <5602AC63.302@redhat.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com> <5602AC63.302@redhat.com>
Message-ID: <CAN2iLr4SDSk1Ht21wtWZ0D+dEjtGymkhLuyGRFmfxqz9nq1zJg@mail.gmail.com>

+1 to the approach from the paste above.

2015-09-23 16:42 GMT+03:00 Martin Mágr <mmagr at redhat.com>:

>
>
> On 09/23/2015 02:17 AM, Cody Herriges wrote:
>
> Alex Schultz wrote:
>
> Hey puppet folks,
>
> Based on the meeting yesterday[0], I had proposed creating a parser
> function called is_service_default[1] to validate if a variable matched
> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
> about how can we maybe not use the arbitrary string throughout the
> puppet that can not easily be validated.  So I tested creating another
> puppet function named service_default[2] to replace the use of '<SERVICE
> DEFAULT>' throughout all the puppet modules.  My tests seemed to
> indicate that you can use a parser function as parameter default for
> classes.
>
> I wanted to send a note to gather comments around the second function.
> When we originally discussed what to use to designate for a service's
> default configuration, I really didn't like using an arbitrary string
> since it's hard to parse and validate. I think leveraging a function
> might be better since it is something that can be validated via tests
> and a syntax checker.  Thoughts?
>
>
> Thanks,
> -Alex
>
> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
> [1] https://review.openstack.org/#/c/223672
> [2] https://review.openstack.org/#/c/224187
>
> I've been mulling this over the last several days and I just can't
> accept an entire ruby function which would be run for every parameter
> with the desired static value of "<SERVICE DEFAULT>" when the class is
> declared and parsed.  I am not generally against using functions as a
> parameter default, just not a fan in this case, because running ruby just
> to return a static string seems inappropriate and not optimal.
>
> In this specific case I think the params pattern and inheritance can
> get us to the same goals.  I also find this a valid use of inheritance
> across module namespaces, but... only because all our modules must depend
> on puppet-openstacklib.
> http://paste.openstack.org/show/473655
>
>
> +1 for implementation in pastebin above. Much better solution than running
> function.
>
> Regards,
> Martin
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Egorenko Denis,
Deployment Engineer
Mirantis
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/ba8c4f0f/attachment.html>

From rbryant at redhat.com  Wed Sep 23 14:21:36 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Wed, 23 Sep 2015 10:21:36 -0400
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
Message-ID: <5602B570.9000207@redhat.com>

I'll reply to each of your 3 messages here:

On 09/23/2015 05:57 AM, WANG, Ming Hao (Tony T) wrote:
> Hi Russell,
> 
> I just realized the OVN plugin is independent of the OVS plugin.

Yes, it's a plugin developed in the "networking-ovn" project.

http://git.openstack.org/cgit/openstack/networking-ovn/

> In this case, how do we handle the provider network connections between compute nodes? Is it handled by OVN actually?

I'm going to start by explaining the status of OVN itself, and then I'll
come back and address the Neutron integration:

 -- OVN --

OVN implements logical networks as overlays using the Geneve protocol.
Connecting from logical to physical networks is done by one of two ways.

The first is using VTEP gateways.  This could be hardware or software
gateways that implement the hardware_vtep schema.  This is typically a
TOR switch that supports the vtep schema, but I believe someone is going
to build a software version based on ovs and dpdk.  OVN includes a
daemon called "ovn-controller-vtep" that is run for each vtep gateway to
manage connectivity between OVN networks and the gateway.  It could run
on the switch itself, or some other management host.  The last set of
patches to get this working initially were merged just 8 days ago.

The ovn-architecture document describes "Life Cycle of a VTEP gateway":


https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml#L820

or you can find a temporary copy of a rendered version here:

  http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf

The second is what Neutron refers to as "provider networks".  OVN does
support this, as well.  It was merged just a couple of weeks ago.  The
commit message for OVN "localnet" ports goes into quite a bit of detail
about how this works in OVN:


https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905

 -- Neutron --

Both of these things are freshly implemented in OVN so the Neutron
integration is a WIP.

For vtep gateways, there's not an established API.  networking-l2gw is
the closest thing, but I've got some concerns with both the API and
implementation.  As a first baby step, we're just going to provide a
hack that lets an admin create a connection between a network and
gateway using a neutron port with a special binding:profile.  We'll also
be continuing to look at providing a proper API.

For provider networks, working with them in Neutron will be no different
than it is today with the current OVS support.  I just have to finish
the Neutron plugin integration, which I just started on yesterday.

> 
> Thanks,
> Tony 
> 
> -----Original Message-----
> From: WANG, Ming Hao (Tony T) 
> Sent: Wednesday, September 23, 2015 1:58 PM
> To: WANG, Ming Hao (Tony T); 'OpenStack Development Mailing List (not for usage questions)'
> Subject: RE: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?
> 
> Hi Russell,
> 
> Is there any material to explain how OVN parent port work?

Note that while this uses a binding:profile hack for now, we're going to
update the plugin to support the vlan-aware-vms API for this use case
once that is completed.

http://docs.openstack.org/developer/networking-ovn/containers.html

http://specs.openstack.org/openstack/neutron-specs/specs/liberty/vlan-aware-vms.html

https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md

https://github.com/shettyg/ovn-docker

> Thanks,
> Tony
> 
> -----Original Message-----
> From: WANG, Ming Hao (Tony T) 
> Sent: Wednesday, September 23, 2015 10:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?
> 
> Russell,
> 
> Thanks for your info.
> If I want to assign multiple interfaces to a container on different
> neutron networks (for example, netA and netB), is it mandatory to let
> the VM hosting the containers have network interfaces in netA and netB,
> with OVN directing the container traffic to the
> corresponding VM network interfaces?
> 
> from https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md :
> "This VLAN tag is stripped out in the hypervisor by OVN."
> I suppose that when the traffic goes out of the VM, the VLAN tag has already been stripped out. 
> When the traffic arrives at the OVS on the physical host, it will be tagged with the neutron local VLAN. Is that right?

Hopefully the links provided in response to the above mail help explain
it.  In short, the VM only needs one network interface and all traffic
for all containers goes over that network interface.  To put each
container on a different Neutron network, the hypervisor needs to be able
to differentiate the traffic from each container even though it's all
going over the same network interface to/from the VM.  That's where VLAN
ids are used: a VLAN id is a simple way to tag traffic as it goes over
the VM's network interface.  As traffic arrives in the VM, the tag is
stripped and the traffic is sent to the right container.  As traffic
leaves the VM, the tag is stripped and the traffic is forwarded to the
proper Neutron network (which could itself be a VLAN network, but the
tags are not related, and the traffic would be re-tagged at that point).

Does that make sense?
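
A toy sketch of that demultiplexing (purely illustrative, not OVN code):
each VLAN tag seen on the VM's single interface maps to one container,
and the tag is stripped before delivery:

```python
def demux_frames(frames, tag_to_container):
    # frames: (vlan_tag, payload) pairs as seen on the VM's single
    # network interface; tag_to_container maps a VLAN id to a container.
    # The tag is stripped before delivery, mirroring what OVN does.
    delivered = {}
    for tag, payload in frames:
        delivered.setdefault(tag_to_container[tag], []).append(payload)
    return delivered
```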

> Thanks in advance,
> Tony
> 
> -----Original Message-----
> From: Russell Bryant [mailto:rbryant at redhat.com] 
> Sent: Wednesday, September 23, 2015 12:46 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?
> 
> On 09/22/2015 08:08 AM, WANG, Ming Hao (Tony T) wrote:
>> Dear all,
>>
>> For neutron ovn plugin supports containers in one VM, My understanding is one container can't be assigned two network interfaces in different neutron networks. Is it right?
>> The reason:
>> 1. One host VM only has one network interface.
>> 2. all the VLAN tags are stripped out when the packet goes out the VM.
>>
>> If it is True, does neutron ovn plugin or ovn has plan to support this?
> 
> You should be able to assign multiple interfaces to a container on different networks.  The traffic for each interface will be tagged with a unique VLAN ID on its way in and out of the VM, the same way it is done for each container with a single interface.
> 
> --
> Russell Bryant
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Russell Bryant


From eharney at redhat.com  Wed Sep 23 14:40:54 2015
From: eharney at redhat.com (Eric Harney)
Date: Wed, 23 Sep 2015 10:40:54 -0400
Subject: [openstack-dev] [cinder] How to make a mock effactive for all
 method of a testclass
In-Reply-To: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
References: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
Message-ID: <5602B9F6.9000906@redhat.com>

On 09/23/2015 04:06 AM, liuxinguo wrote:
> Hi,
> 
> In a.py we have a function:
>
>     def _change_file_mode(filepath):
>         utils.execute('chmod', '600', filepath, run_as_root=True)
>
> In test_xxx.py, there is a test class:
>
>     class xxxxDriverTestCase(test.TestCase):
>         def test_a(self):
>             ...
>             call a._change_file_mode
>             ...
>
>         def test_b(self):
>             ...
>             call a._change_file_mode
>             ...
>
> I have tried to mock out the function _change_file_mode like this:
>
>     @mock.patch.object(a, '_change_file_mode', return_value=None)
>     class xxxxDriverTestCase(test.TestCase):
>         def test_a(self):
>             ...
>             call a._change_file_mode
>             ...
>
>         def test_b(self):
>             ...
>             call a._change_file_mode
>             ...
>
> But the mock takes no effect; the real function _change_file_mode is
> still executed.
> So how can I make a mock effective for all methods of a test class?
> Thanks for any input!
>
> Wilson Liu

The simplest way I found to do this was to use mock.patch in the test
class's setUp() method, and tear it down again in tearDown().

There may be cleaner ways to do this with tools in oslotest etc. (I'm
not sure), but this is fairly straightforward.

See here -- self._clear_patch stores the mock:
http://git.openstack.org/cgit/openstack/cinder/tree/cinder/tests/unit/test_volume.py?id=8de60a8b#n257
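
A self-contained sketch of that pattern using the standard library's
unittest.mock (with addCleanup rather than tearDown, so the patch is
undone even if setUp fails partway; the module-level helper here is a
stand-in for a.py's _change_file_mode):

```python
import unittest
from unittest import mock


def _change_file_mode(filepath):
    # Stand-in for a.py's helper, which shells out via
    # utils.execute('chmod', '600', filepath, run_as_root=True).
    raise AssertionError("real helper should not run in tests")


class FakeDriverTestCase(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # Patch once here so every test method in the class sees the mock.
        patcher = mock.patch(__name__ + "._change_file_mode",
                             return_value=None)
        self.mock_chmod = patcher.start()
        self.addCleanup(patcher.stop)

    def test_a(self):
        _change_file_mode("/tmp/foo")  # resolves to the mock, not the helper
        self.mock_chmod.assert_called_once_with("/tmp/foo")
```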



From fungi at yuggoth.org  Wed Sep 23 14:44:44 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 23 Sep 2015 14:44:44 +0000
Subject: [openstack-dev] Election tools work session in Tokyo
In-Reply-To: <560273C4.8050302@openstack.org>
References: <20150923092624.GK707@thor.bakeyournoodle.com>
 <560273C4.8050302@openstack.org>
Message-ID: <20150923144444.GU25159@yuggoth.org>

On 2015-09-23 11:41:24 +0200 (+0200), Thierry Carrez wrote:
[...]
> This is a bit cross-project, so the natural fit would be a
> cross-project workshop [...] Alternatively that could fit in a
> infra work session, you may want to suggest it there

Yep, I agree both are a possible fit. Cross-project tooling is
basically another term for our community infrastructure, after all.

> (not sure if they have opened an etherpad for session suggestion
> yet).
[...]

Not yet. Don't hate me, but I was hoping odsreg would be useful for
this. ;)

If you want to stick with only using odsreg for the cross-project
session proposals, Infra can just start ours straight from an
etherpad. We don't usually get more than a dozen sessions proposed,
but because of the wide-reaching impact of our work throughout the
project we do tend to get at least a few from outside our immediate
team.
-- 
Jeremy Stanley


From sbauza at redhat.com  Wed Sep 23 15:00:06 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 23 Sep 2015 17:00:06 +0200
Subject: [openstack-dev] [nova] How to properly detect and fence a
 compromised host (and why I dislike TrustedFilter)
In-Reply-To: <5602A9AE.2090604@linux.vnet.ibm.com>
References: <558937E1.4090103@redhat.com>
 <CAHXdxOedgyWi_uAtrPhzTo59XSKshbQ1ExK473qVpdPeR32XPw@mail.gmail.com>
 <558BC304.2080006@redhat.com> <5602A9AE.2090604@linux.vnet.ibm.com>
Message-ID: <5602BE76.4040504@redhat.com>



Le 23/09/2015 15:31, Matt Riedemann a écrit :
>
>
> On 6/25/2015 3:59 AM, Sylvain Bauza wrote:
>>
>>
>> Le 24/06/2015 19:56, Joe Gordon a écrit :
>>>
>>>
>>> On Tue, Jun 23, 2015 at 3:41 AM, Sylvain Bauza <sbauza at redhat.com
>>> <mailto:sbauza at redhat.com>> wrote:
>>>
>>>     Hi team,
>>>
>>>     Some discussion occurred over IRC about a bug which was publicly
>>>     open related to TrustedFilter [1]
>>>     I want to take the opportunity to raise my concerns about that
>>>     specific filter, why I dislike it and how I think we could improve
>>>     the situation (and clarify everyone's thoughts).
>>>
>>>     The current situation is this: Nova checks whether a host
>>>     is compromised only when the scheduler is called, i.e. only when
>>>     booting/migrating/evacuating/unshelving an instance (well, not
>>>     exactly all the evacuate/live-migrate cases, but let's not discuss
>>>     that now). When the request goes to the scheduler, all the
>>>     hosts are checked against all the enabled filters, and the
>>>     TrustedFilter makes an external HTTP(S) call to the
>>>     Attestation API service (not handled by Nova) for *each host* to
>>>     see if the host is valid (not compromised) or not.
>>>
>>>     To be clear, that's the only in-tree scheduler filter which
>>>     explicitly makes an external call to a separate service that Nova
>>>     does not manage. I can see at least 3 reasons why it's bad:
>>>
>>>     #1 : that's a terrible bottleneck for performance, because we're
>>>     IO-blocking N times given N hosts (we're not even multiplexing the
>>>     HTTP requests)
>>>     #2 : all the filters check an internal Nova state for the
>>>     host (called HostState) except the TrustedFilter, which means
>>>     that conceptually we defer the decision to a 3rd-party engine
>>>     #3 : the Attestation API service becomes a de facto dependency
>>>     for Nova (since it's an in-tree filter) while it's not listed as a
>>>     dependency and thus not gated.
>>>
>>>
>>>     All of these reasons could be acceptable if the filter covered
>>>     the use case exposed in [1] (i.e. I want to make sure that if my
>>>     host gets compromised, my instances will not be running on that
>>>     host), but it just doesn't work, due to the situation I mentioned
>>>     above.
>>>
>>>     So, given that, here are my thoughts :
>>>     a/ if a host gets compromised, we can just disable its service to
>>>     prevent its election as a valid destination host. There is no need
>>>     for a specialised filter.
>>>     b/ if a host is compromised, we can assume that the instances have
>>>     to resurrect elsewhere, ie. we can call a nova evacuate
>>>     c/ checking if a host is compromised or not is not a Nova
>>>     responsibility since it's already perfectly done by [2]
>>>
>>>     In other words, I'm considering that "security" use case as
>>>     something analogous to the HA use case [3], where we need a 3rd-party
>>>     tool responsible for periodically checking the state of the hosts,
>>>     and if one is compromised, calling the Nova API to fence the host
>>>     and evacuate the compromised instances.
>>>
>>>     Given that, I'm proposing to deprecate TrustedFilter and explicitly
>>>     mention dropping it from in-tree in a later cycle
>>>     https://review.openstack.org/194592
>>>
>>>
>>> Given people are using this, it is a negligible maintenance burden.  I
>>> think deprecating with the intention of removing is not worth it.
>>>
>>> Although it would be very useful to further document the risks with
>>> this filter (live migration, possible performance issues etc.)
>>
>> Well, I can understand that customers might not agree to removing
>> the filter because there is no clear alternative for them. That said, I
>> think saying that the filter is deprecated without saying when it would
>> be removed would help some contributors think about that and work
>> on a better solution, exactly like we did for the EC2 API.
>>
>> To be clear, I want to freeze the filter by deprecating it and
>> explaining why it's wrong (by amending the devref section and adding a
>> LOG warning saying it's deprecated) and then leave the filter
>> in-tree unless we are sure that there is a good solution outside of Nova.
>>
>> -Sylvain
>>
>>
>>>
>>>
>>>     Thoughts ?
>>>     -Sylvain
>>>
>>>
>>>
>>>     [1] https://bugs.launchpad.net/nova/+bug/1456228
>>>     [2] https://github.com/OpenAttestation/OpenAttestation
>>>     [3]
>>> http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/ 
>>>
>>>
>>>
>
> I just reviewed the change https://review.openstack.org/#/c/194592/ 
> and agree with Joe.
>
> We can't justify deprecation and removal due to lack of CI testing - 
> there are many scheduler filters which aren't tested in the gate.  Or 
> if we can justify it that way, then we're setting a precedent.  So if 
> testing is the sore spot, then maybe we want Intel to look at setting 
> up 3rd party CI?  Maybe they could work it into their existing PCI CI?
>

Well, there is a difference between that filter and the others: we 
could provide functional testing for the other filters just by adding 
Tempest tests, while it would require far more than that for the 
TrustedFilter (i.e. either pulling OAT in as a dependency for Nova, or 
setting up a 3rd-party CI).
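For illustration, the difference between a filter that consults only the
cached HostState and one that blocks on an external service per host can
be sketched in a few lines of Python. This is schematic only -- the class
and field names are illustrative, not Nova's actual filter code:

```python
# Schematic contrast: in-memory filtering vs. per-host external calls.
# Names here are illustrative, not Nova's real implementation.

class RamFilter:
    """Decides using only the locally cached HostState."""
    def host_passes(self, host_state, requested_ram):
        # Pure in-memory check: no I/O per candidate host.
        return host_state["free_ram_mb"] >= requested_ram

class TrustedLikeFilter:
    """Decides by asking an external attestation service per host."""
    def __init__(self, attest):
        self.attest = attest  # callable standing in for an HTTP client
    def host_passes(self, host_state, requested_ram):
        # One blocking round-trip per candidate host: N hosts -> N calls.
        return self.attest(host_state["name"]) == "trusted"

calls = []
def fake_attest(host):
    calls.append(host)  # record each simulated HTTP round-trip
    return "trusted"

hosts = [{"name": f"node{i}", "free_ram_mb": 4096} for i in range(5)]
local_ok = [h for h in hosts if RamFilter().host_passes(h, 2048)]
remote = TrustedLikeFilter(fake_attest)
remote_ok = [h for h in hosts if remote.host_passes(h, 2048)]
print(len(local_ok), len(remote_ok), len(calls))  # 5 5 5
```

With 5 candidate hosts the external-call filter makes 5 blocking
round-trips where the in-state filter makes none, which is the bottleneck
described in point #1 above.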

For sure, I'd love to see some efforts for providing an integration with 
OAT if that filter stays in-tree.

> I also don't think we can justify the external dependency as grounds 
> for removal.  There are many possible configurations that require 
> external dependencies.  90% of cinder/neutron configurations probably 
> fall into this camp.
>
Fair enough, I just want to stress the point that some work has to be 
done before we can consider that this filter has the same level of 
confidence as the others.

> From other parts of this thread it also sounds like there are 
> potentially alternatives to this filter but they aren't implemented, 
> or even written up in a spec.  Given there are users of this, I'd 
> think we'd want to see an agreed to alternative proposal to replace 
> this filter.
>

I totally support that. Like I said in my original email, this is not 
only a dependency problem, but rather a design problem. If we want to 
cover the given use cases, it requires more than just a filter, and IMHO 
all of this needs to be done outside Nova.
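As a sketch of that out-of-Nova flow (a monitor that fences a compromised
host and evacuates its instances), the client below is a stub; a real
deployment would use python-novaclient and an OAT client, and every name
here is illustrative:

```python
# Sketch of an external "fence and evacuate" monitor, kept outside Nova.
# FakeNova stands in for a real Nova client; all names are hypothetical.

class FakeNova:
    def __init__(self, hosts):
        self.hosts = hosts      # host name -> list of instance ids
        self.disabled = set()
    def disable_service(self, host):
        self.disabled.add(host)  # stop scheduling new instances there
    def evacuate(self, host):
        moved, self.hosts[host] = self.hosts[host], []
        return moved             # instances rebuilt elsewhere

def fence_compromised(nova, attestation):
    """One monitoring pass: fence every host that fails attestation."""
    evacuated = {}
    for host in list(nova.hosts):
        if attestation(host) != "trusted":
            nova.disable_service(host)
            evacuated[host] = nova.evacuate(host)
    return evacuated

nova = FakeNova({"node1": ["vm-a"], "node2": ["vm-b", "vm-c"]})
result = fence_compromised(
    nova, lambda h: "untrusted" if h == "node2" else "trusted")
print(result)  # {'node2': ['vm-b', 'vm-c']}
```

The point of the sketch is that the attestation poll, the service
disable, and the evacuate all happen in a loop outside the scheduler,
so no per-request external call is needed in any filter.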


> I'm all for logging a warning that this filter is experimental 
> (meaning it's not tested in our CI system).  I don't think there is a 
> good reason to deprecate it right now though with an open-ended 
> removal date.
>

That's a very valid point, I'm fine with that. Thanks for the idea.

-Sylvain


From doug at doughellmann.com  Wed Sep 23 15:25:06 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 23 Sep 2015 11:25:06 -0400
Subject: [openstack-dev] [sahara] fixing python-saharaclient gate jobs
Message-ID: <1443021785-sup-3143@lrrr.local>

It looks like all of the python-saharaclient gate jobs are failing
because of some new checks added to devstack to ensure the LIBS_FROM_GIT
feature is properly configured in a given job. The jobs for
python-saharaclient do not install the client from source because
they're running in a configuration without sahara at all.

There is some discussion about how to fix that in the openstack-qa
channel logs starting from [1]. The solution is complex enough that
the Sahara team infra liaison should be involved to ensure that the
correct jobs are updated.

Doug

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2015-09-23.log.html#t2015-09-23T15:04:52


From aschultz at mirantis.com  Wed Sep 23 15:30:36 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 23 Sep 2015 10:30:36 -0500
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <5601EFA8.8040204@puppetlabs.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com>
Message-ID: <CABzFt8NT0vDvkoUNzpCYdSCaxgrtz3xjw6CAooyJ8eMOvJp7OA@mail.gmail.com>

>
> I've been mulling this over the last several days and I just can't
> accept an entire ruby function which would be run for every parameter
> with the desired static value of "<SERVICE DEFAULT>" when the class is
> declared and parsed.  I am not generally against using functions as a
> parameter default, just not a fan in this case because running ruby just
> to return a static string seems inappropriate and not optimal.
>
> In this specific case I think the params pattern and inheritance can
> get us to the same goals.  I also find this a valid use of inheritance
> across module namespaces but...only because all our modules must depend
> on puppet-openstacklib.
>
> http://paste.openstack.org/show/473655
>

Yes, after thinking it over, I agree that the function for a parameter
is probably not the best route.  I think going the inheritance route
is a much more established pattern and would make more sense.

Just throwing this out there, another option could be using a fact in
puppet-openstacklib. We could create an os_service_default fact or
something named similarly that would be a static constant that could
be used for a parameter default and we could leverage it as part of
the is_service_default() function.  We would still require
puppet-openstacklib be included but we wouldn't need to do all the
inherits in the classes.  Just a thought.
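For what it's worth, the facts.d variant could be as small as a one-line
external fact shipped by puppet-openstacklib; the file path and fact name
below are hypothetical, just to show the shape of it:

```
# /etc/facter/facts.d/os_service_default.txt  (hypothetical path)
os_service_default=<SERVICE DEFAULT>
```

External facts like this are plain key=value files, so no ruby runs at
parse time at all, which was the original objection to the function.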

Thanks,
-Alex


>
>
> --
> Cody
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From doug at doughellmann.com  Wed Sep 23 15:32:49 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 23 Sep 2015 11:32:49 -0400
Subject: [openstack-dev] [sahara][trove] fixing client gate jobs
In-Reply-To: <1443021785-sup-3143@lrrr.local>
References: <1443021785-sup-3143@lrrr.local>
Message-ID: <A016B43F-0EDB-4688-923A-3E70AD4C2376@doughellmann.com>


> On Sep 23, 2015, at 11:25 AM, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> It looks like all of the python-saharaclient gate jobs are failing
> because of some new checks added to devstack to ensure the LIBS_FROM_GIT
> feature is fully configured properly in a given job. The jobs for
> python-saharaclient do not install the client from source because
> they're running in a configuration without sahara at all.
> 
> There is some discussion about how to fix that in the openstack-qa
> channel logs starting from [1]. The solution is complex enough that
> the Sahara team infra liaison should be involved to ensure that the
> correct jobs are updated.
> 
> Doug
> 
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2015-09-23.log.html#t2015-09-23T15:04:52

The same problem appears to affect python-troveclient, so adding their project tag to the subject.

Doug



From klindgren at godaddy.com  Wed Sep 23 15:55:08 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 23 Sep 2015 15:55:08 +0000
Subject: [openstack-dev] [ops] Operator Local Patches
Message-ID: <96EEC404-293D-40A6-9B64-D3025B335ACF@godaddy.com>


Cross-posting to the dev list as well for better coverage.

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Tuesday, September 22, 2015 at 4:21 PM
To: openstack-operators
Subject: Re: Operator Local Patches

Hello all,

Friendly reminder: If you have local patches and haven't yet done so, please contribute to the etherpad at: https://etherpad.openstack.org/p/operator-local-patches

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Friday, September 18, 2015 at 4:35 PM
To: openstack-operators
Cc: Tom Fifield
Subject: Operator Local Patches

Hello Operators!

During the ops meetup in Palo Alto we were talking about sessions for Tokyo. A session that I proposed, which got a bunch of +1's, was about local patches that operators are carrying.  From my experience this is done to either implement business logic, fix assumptions in projects that do not apply to your implementation, implement business requirements that are not yet implemented in openstack, or fix scale-related bugs.  What I would like to do is get a working group together to do the following:

1.) Document local patches that operators have (even those that are in gerrit right now waiting to be committed upstream)
2.) Figure out commonality in those patches
3.) Either upstream the common fixes to the appropriate projects or figure out if a hook can be added to allow people to run their code at that specific point
4.) ????
5.) Profit

To start this off, I have documented every patch that GoDaddy is running, along with a description of what it does and why we did it (where needed) [1].  What I am asking is that the operator community please update the etherpad with the patches that you are running, so that we have a good starting point for discussions in Tokyo and beyond.

[1] - https://etherpad.openstack.org/p/operator-local-patches
___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From mriedem at linux.vnet.ibm.com  Wed Sep 23 16:11:04 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 11:11:04 -0500
Subject: [openstack-dev] [nova] How to properly detect and fence a
 compromised host (and why I dislike TrustedFilter)
In-Reply-To: <5602BE76.4040504@redhat.com>
References: <558937E1.4090103@redhat.com>
 <CAHXdxOedgyWi_uAtrPhzTo59XSKshbQ1ExK473qVpdPeR32XPw@mail.gmail.com>
 <558BC304.2080006@redhat.com> <5602A9AE.2090604@linux.vnet.ibm.com>
 <5602BE76.4040504@redhat.com>
Message-ID: <5602CF18.9030901@linux.vnet.ibm.com>



On 9/23/2015 10:00 AM, Sylvain Bauza wrote:
>
>
> Le 23/09/2015 15:31, Matt Riedemann wrote:
> [...]
>
> Well, there is a difference between that filter and others since we
> could just provide some functional testing against the other filters
> just by adding Tempest tests while it would require far more than that
> for the TrustedFilter (ie. either pulling OAT as a dependency for Nova,
> or considering a 3rd-party CI).

Tempest tests the API, so adding tests to Tempest would be tricky, 
unless the test added is a scenario that would only be expected to 
behave a certain way if a given filter is enabled in the configuration.

The scheduler_default_filters option is really all we have in the gate runs today.

> [...]

-- 

Thanks,

Matt Riedemann



From amit.gandhi at RACKSPACE.COM  Wed Sep 23 16:33:13 2015
From: amit.gandhi at RACKSPACE.COM (Amit Gandhi)
Date: Wed, 23 Sep 2015 16:33:13 +0000
Subject: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core
In-Reply-To: <D2261E54.3A3B7%malini.kamalambal@rackspace.com>
References: <D2261E54.3A3B7%malini.kamalambal@rackspace.com>
Message-ID: <D2284C5D.53596%amit.gandhi@rackspace.com>

I have received a majority vote for Tony, and would like to congratulate Tony Tan on being elevated to Core status on Poppy.

Amit.


From: Malini Kamalambal <malini.kamalambal at RACKSPACE.COM<mailto:malini.kamalambal at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 8:50 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core

+2

From: Amit Gandhi <amit.gandhi at RACKSPACE.COM<mailto:amit.gandhi at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 5:22 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [poppy] Nominate Tony Tan for Poppy (CDN) Core

All,

I would like to nominate Tony Tan (tonytan4ever) [1] to Core for Poppy (CDN) [2].

Tony has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.  He has written the majority of the Akamai driver and has been working hard to bring SSL integration to the workflow.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org

From amit.gandhi at RACKSPACE.COM  Wed Sep 23 16:33:47 2015
From: amit.gandhi at RACKSPACE.COM (Amit Gandhi)
Date: Wed, 23 Sep 2015 16:33:47 +0000
Subject: [openstack-dev] [poppy] Nominate Sriram Madupasi Vasudevan for
 Poppy (CDN) to Core
In-Reply-To: <D2261E3D.3A3B6%malini.kamalambal@rackspace.com>
References: <D225ECDD.53371%amit.gandhi@rackspace.com>
 <D2261E3D.3A3B6%malini.kamalambal@rackspace.com>
Message-ID: <D2284CAE.53599%amit.gandhi@rackspace.com>

I have received a majority vote for Sriram, and would like to congratulate Sriram on being elevated to Core status on Poppy.

Amit.


From: Malini Kamalambal <malini.kamalambal at RACKSPACE.COM<mailto:malini.kamalambal at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 8:50 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [poppy] Nominate Sriram Madupasi Vasudevan for Poppy (CDN) to Core

+2

From: Amit Gandhi <amit.gandhi at RACKSPACE.COM<mailto:amit.gandhi at RACKSPACE.COM>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 21, 2015 at 5:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [poppy] Nominate Sriram Madupasi Vasudevan for Poppy (CDN) to Core

All,

I would like to nominate Sriram Madupasi Vasudevan (thesriram) [1] to Core for Poppy (CDN) [2].

Sriram has worked on the project for the past 12 months, and has been instrumental in building out various features and resolving bugs for the team.

Please respond with your votes.

Thanks
Amit.

[1] http://stackalytics.com/?release=all&project_type=stackforge&module=poppy
[2] http://www.poppycdn.org

From cody at puppetlabs.com  Wed Sep 23 16:46:58 2015
From: cody at puppetlabs.com (Cody Herriges)
Date: Wed, 23 Sep 2015 09:46:58 -0700
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <CABzFt8NT0vDvkoUNzpCYdSCaxgrtz3xjw6CAooyJ8eMOvJp7OA@mail.gmail.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com>
 <CABzFt8NT0vDvkoUNzpCYdSCaxgrtz3xjw6CAooyJ8eMOvJp7OA@mail.gmail.com>
Message-ID: <5602D782.9080906@puppetlabs.com>

Alex Schultz wrote:
>> I've been mulling this over the last several days and I just can't
>> accept an entire ruby function which would be run for every parameter
>> with the desired static value of "<SERVICE DEFAULT>" when the class is
>> declared and parsed.  I am not generally against using functions as a
>> parameter default, just not a fan in this case because running ruby just
>> to return a static string seems inappropriate and not optimal.
>>
>> In this specific case I think the params pattern and inheritance can
>> get us to the same goals.  I also find this a valid use of inheritance
>> across module namespaces but...only because all our modules must depend
>> on puppet-openstacklib.
>>
>> http://paste.openstack.org/show/473655
>>
> 
> Yes after thinking it over, I agree that the function for a parameter
> is probably not the best route.  I think going the inheritance route
> is a much more established pattern and would make more sense.
> 
> Just throwing this out there, another option could be using a fact in
> puppet-openstacklib. We could create an os_service_default fact or
> something named similarly that would be a static constant that could
> be used for a parameter default and we could leverage it as part of
> the is_service_default() function.  We would still require
> puppet-openstacklib be included but we wouldn't need to do all the
> inherits in the classes.  Just a thought.
> 

That is a viable option.  I can't recall which versions of puppet
support facts.d, but if all the versions we support do, we could even
make it 100% static and just put a file on disk with the contents
"servicedefault='<SERVICE DEFAULT>'".

-- 
Cody


From mriedem at linux.vnet.ibm.com  Wed Sep 23 16:52:34 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 11:52:34 -0500
Subject: [openstack-dev] [nova] where the aggregate and cell document is
In-Reply-To: <----1a------ucW1a$8be1b18a-865c-4dcb-9f98-5b86b1fb254c@aliyun.com>
References: <----1a------ucW1a$8be1b18a-865c-4dcb-9f98-5b86b1fb254c@aliyun.com>
Message-ID: <5602D8D2.5090005@linux.vnet.ibm.com>



On 9/23/2015 6:32 AM, gong_ys2004 wrote:
> Hi stackers,
> I want to set up cell and aggregator env, but I failed to find the
> document about them.
> could you please help me to find the document?
>
> regards,
> yong sheng gong
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Go here:

http://docs.openstack.org/

There is a search box.  Enter 'aggregate' or 'cells' and hit Enter. 
You'll get results.

You can also find things about aggregates and cells in the nova devref:

http://docs.openstack.org/developer/nova/

-- 

Thanks,

Matt Riedemann



From aschultz at mirantis.com  Wed Sep 23 17:28:14 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 23 Sep 2015 12:28:14 -0500
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <5602D782.9080906@puppetlabs.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com>
 <CABzFt8NT0vDvkoUNzpCYdSCaxgrtz3xjw6CAooyJ8eMOvJp7OA@mail.gmail.com>
 <5602D782.9080906@puppetlabs.com>
Message-ID: <CABzFt8NkL5ARMxnfsTMJCP5DDunpNBD55N=X76g-CLf5PcRvTA@mail.gmail.com>

On Wed, Sep 23, 2015 at 11:46 AM, Cody Herriges <cody at puppetlabs.com> wrote:
> Alex Schultz wrote:
>>> I've been mulling this over the last several days and I just can't
>>> accept an entire ruby function which would be run for every parameter
>>> with the desired static value of "<SERVICE DEFAULT>" when the class is
>>> declared and parsed.  I am not generally against using functions as a
>>> parameter default, just not a fan in this case, because running ruby just
>>> to return a static string seems inappropriate and not optimal.
>>>
>>> In this specific case I think the params pattern and inheritance can
>>> achieve the same goals.  I also find this a valid use of inheritance
>>> across module namespaces, but only because all our modules must depend
>>> on puppet-openstacklib.
>>>
>>> http://paste.openstack.org/show/473655
>>>
>>
>> Yes after thinking it over, I agree that the function for a parameter
>> is probably not the best route.  I think going the inheritance route
>> is a much more established pattern and would make more sense.
>>
>> Just throwing this out there, another option could be using a fact in
>> puppet-openstacklib. We could create an os_service_default fact or
>> something named similarly that would be a static constant that could
>> be used for a parameter default and we could leverage it as part of
>> the is_service_default() function.  We would still require
>> puppet-openstacklib be included but we wouldn't need to do all the
>> inherits in the classes.  Just a thought.
>>
>
> That is a viable option.  I can't recall which versions of puppet
> support facts.d, but if all the versions we support do, we could even make
> it 100% static and just put a file on disk with the contents
> "servicedefault='<SERVICE DEFAULT>'".
>


I think it was 3.3 or 3.4, which is our minimum.
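Just to make the facts.d idea concrete (a sketch only: the fact name `os_service_default` and the use of an executable external fact are assumptions, not an agreed design), an external fact only has to emit a key=value pair on stdout:

```python
#!/usr/bin/env python
# Hypothetical facts.d external fact for puppet-openstacklib.
# Facter treats an executable in facts.d as a fact source: each
# "key=value" line it prints becomes a fact, so this would surface
# as $::os_service_default in Puppet manifests.

SERVICE_DEFAULT = '<SERVICE DEFAULT>'

if __name__ == '__main__':
    print('os_service_default=%s' % SERVICE_DEFAULT)
```

The fully static variant Cody describes would be the equivalent plain-text file in facts.d containing just that single key=value line, with no code run at all.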

-Alex

> --
> Cody
>
>
>


From David_Paterson at Dell.com  Wed Sep 23 17:47:43 2015
From: David_Paterson at Dell.com (David_Paterson at Dell.com)
Date: Wed, 23 Sep 2015 12:47:43 -0500
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442847252531.42564@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
Message-ID: <12D53C41B45C0B45B009937DA1D8229805E65C1267@AUSX7MCPC103.AMER.DELL.COM>


Launchpad id: davpat2112

Thanks,
David
From: Andrew Melton [mailto:andrew.melton at RACKSPACE.COM]
Sent: Monday, September 21, 2015 10:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] New PyCharm License


Hi devs,



I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.



Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/dc1d6693/attachment.html>

From mriedem at linux.vnet.ibm.com  Wed Sep 23 18:00:53 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 13:00:53 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
Message-ID: <5602E8D5.9080407@linux.vnet.ibm.com>

I came across bug 1496235 [1] today.  In this case the user is booting 
an instance from a volume using source=image, so nova actually does the 
volume create call to the volume API.  They are booting the instance 
into a valid nova availability zone, but that same AZ isn't defined in 
Cinder, so the volume create request fails (since nova passes the 
instance AZ to cinder [2]).

I marked this as invalid given how the code works.

I'm posting here since I'm wondering if there are alternatives worth 
pursuing.  For example, nova could get the list of AZs from the volume 
API and if the nova AZ isn't in that list, don't provide it on the 
volume create request.  That's essentially the same as first creating 
the volume outside of nova and not specifying an AZ, then when doing the 
boot from volume, provide the volume_id as the source.
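A minimal sketch of that alternative (the helper name and arguments are invented for illustration; this is not nova's actual code):

```python
# Sketch of the alternative described above: only forward the instance's
# AZ on the volume create request when cinder actually knows about it;
# otherwise omit it, which matches creating the volume outside of nova
# without an AZ and then booting from the volume_id.

def volume_create_az(instance_az, cinder_azs):
    """Return the AZ to pass to the volume create call, or None to omit it."""
    if instance_az in cinder_azs:
        return instance_az
    return None

# Example from the bug: the nova AZ is valid in nova but not defined in
# cinder, so no AZ would be sent and the create would not fail.
print(volume_create_az('nova-az1', ['cinder-az1', 'cinder-az2']))
```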

The question is, is it worth doing that?  I'm not familiar enough with 
how availability zones are meant to work between nova and cinder so it's 
hard for me to have much of an opinion here.

[1] https://bugs.launchpad.net/nova/+bug/1496235
[2] 
https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L381-L383

-- 

Thanks,

Matt Riedemann



From mtreinish at kortar.org  Wed Sep 23 18:06:26 2015
From: mtreinish at kortar.org (Matthew Treinish)
Date: Wed, 23 Sep 2015 14:06:26 -0400
Subject: [openstack-dev] [QA] Mitaka Summit Session Planning
Message-ID: <20150923180626.GA19740@sazabi.kortar.org>

Hi Everyone,

I started an etherpad to track ideas for summit sessions:

https://etherpad.openstack.org/p/mitaka-qa-summit-topics

If you have an idea for a session feel free to add it to the etherpad.

As we get closer to summit we'll dedicate a QA meeting to selecting
which topics we'll have sessions on.

Thanks,

Matt Treinish
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/9dd8bcd6/attachment.pgp>

From skraynev at mirantis.com  Wed Sep 23 18:09:00 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Wed, 23 Sep 2015 21:09:00 +0300
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>
Message-ID: <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>

Guys, I'm happy that you have already discussed it here :)
However, I'd like to raise the same question at our Heat IRC meeting.
Probably we should define some common concepts, because I don't think
lbaas is the only example of a service with several APIs.
I will post an update in this thread later (after the meeting).

Regards,
Sergey.

On 23 September 2015 at 14:37, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> Separate ns would work great.
>
> Thanks,
> Kevin
>
> ------------------------------
> *From:* Banashankar KV
> *Sent:* Tuesday, September 22, 2015 9:14:35 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>
> What do you think about separating both of them, with the names as Doug
> mentioned? In the future, if we want to get rid of v1 we can just remove
> that namespace. Everything will be clean.
>
> Thanks
> Banashankar
>
>
> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>
>> As I understand it, loadbalancer in v2 is more like pool was in v1. Can
>> we make it such that if you are using the loadbalancer resource and have
>> the mandatory v2 properties that it tries to use the v2 api, otherwise it's a v1
>> resource? PoolMember should be ok being the same. It just needs to call v1
>> or v2 depending on if the lb its pointing at is v1 or v2. Is monitor's api
>> different between them? Can it be like pool member?
>>
>> Thanks,
>> Kevin
>>
>> ------------------------------
>> *From:* Brandon Logan
>> *Sent:* Tuesday, September 22, 2015 5:39:03 PM
>>
>> *To:* openstack-dev at lists.openstack.org
>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> LbaasV2
>>
>> So the v1 API is of the structure:
>>
>> <neutron-endpoint>/lb/(vip|pool|member|health_monitor)
>>
>> V2s is:
>> <neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)
>>
>> member is a child of pool, so it would go down one level.
>>
>> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
>> is enough of a difference.
>>
>> Thanks,
>> Brandon
>> On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
>> > Thats the problem. :/
>> >
>> > I can't think of a way to have them coexist without: breaking old
>> > templates, including v2 in the name, or having a flag on the resource
>> > saying the version is v2. And as an app developer I'd rather not have
>> > my existing templates break.
>> >
>> > I haven't compared the api's at all, but is there a required field of
>> > v2 that is different enough from v1 that by its simple existence in
>> > the resource you can tell a v2 from a v1 object? Would something like
>> > that work? PoolMember wouldn't have to change, the same resource could
>> > probably work for whatever lb it was pointing at I'm guessing.
>> >
>> > Thanks,
>> > Kevin
>> >
>> >
>> >
>> > ______________________________________________________________________
>> > From: Banashankar KV [banveerad at gmail.com]
>> > Sent: Tuesday, September 22, 2015 4:40 PM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> > LbaasV2
>> >
>> >
>> >
>> > Ok, sounds good. So now the question is how should we name the new V2
>> > resources ?
>> >
>> >
>> >
>> > Thanks
>> > Banashankar
>> >
>> >
>> >
>> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
>> > wrote:
>> >         Yes, hence the need to support the v2 resources as separate
>> >         things. Then I can rewrite the templates to include the new
>> >         resources rather than the old resources as appropriate. IE, it
>> >         will be a porting effort to rewrite them. Then do a heat
>> >         update on the stack to migrate it from lbv1 to lbv2. Since
>> >         they are different resources, it should create the new and
>> >         delete the old.
>> >
>> >         Thanks,
>> >         Kevin
>> >
>> >
>> >         ______________________________________________________________
>> >         From: Banashankar KV [banveerad at gmail.com]
>> >         Sent: Tuesday, September 22, 2015 4:16 PM
>> >
>> >         To: OpenStack Development Mailing List (not for usage
>> >         questions)
>> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>> >         for LbaasV2
>> >
>> >
>> >
>> >
>> >         But I think, even though V2 has introduced some new components
>> >         and the whole association of the resources with each other has
>> >         changed, we should still be able to do what Kevin has mentioned?
>> >
>> >         Thanks
>> >         Banashankar
>> >
>> >
>> >
>> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>> >         <Kevin.Fox at pnnl.gov> wrote:
>> >                 There needs to be a way to have both v1 and v2
>> >                 supported in one engine....
>> >
>> >                 Say I have templates that use v1 already in existence
>> >                 (I do), and I want to be able to heat stack update on
>> >                 them one at a time to v2. This will replace the v1 lb
>> >                 with v2, migrating the floating ip from the v1 lb to
>> >                 the v2 one. This gives a smoothish upgrade path.
>> >
>> >                 Thanks,
>> >                 Kevin
>> >                 ________________________________________
>> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>> >                 Sent: Tuesday, September 22, 2015 3:22 PM
>> >                 To: openstack-dev at lists.openstack.org
>> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>> >                 support for LbaasV2
>> >
>> >                 Well I'd hate to have the V2 postfix on it because V1
>> >                 will be deprecated
>> >                 and removed, which means the V2 being there would be
>> >                 lame.  Is there any
>> >                 kind of precedent set for how to handle this?
>> >
>> >                 Thanks,
>> >                 Brandon
>> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>> >                 wrote:
>> >                 > So are we thinking of making it as ?
>> >                 > OS::Neutron::LoadBalancerV2
>> >                 >
>> >                 > OS::Neutron::ListenerV2
>> >                 >
>> >                 > OS::Neutron::PoolV2
>> >                 >
>> >                 > OS::Neutron::PoolMemberV2
>> >                 >
>> >                 > OS::Neutron::HealthMonitorV2
>> >                 >
>> >                 >
>> >                 >
>> >                 > and add all those into the loadbalancer.py of heat
>> >                 engine ?
>> >                 >
>> >                 > Thanks
>> >                 > Banashankar
>> >                 >
>> >                 >
>> >                 >
>> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>> >                 > <skraynev at mirantis.com> wrote:
>> >                 >         Brandon.
>> >                 >
>> >                 >
>> >                 >         As I understand, v1 and v2 have
>> >                 differences also in the list of
>> >                 >         objects and in the relationships between
>> >                 them.
>> >                 >         So I don't think that it will be easy to
>> >                 upgrade old resources
>> >                 >         (unfortunately).
>> >                 >         I'd agree with second Kevin's suggestion
>> >                 about implementation
>> >                 >         new resources in this case.
>> >                 >
>> >                 >
>> >                 >         I see that a lot of guys want to help
>> >                 with it :) And I
>> >                 >         suppose that Rabi Mishra and I may try to
>> >                 help with it,
>> >                 >         because we were involved in the implementation
>> >                 of v1 resources
>> >                 >         in Heat.
>> >                 >         Follow the list of v1 lbaas resources in
>> >                 Heat:
>> >                 >
>> >                 >
>> >                 >
>> >
>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>> >                 >
>> >
>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>> >                 >
>> >                 >
>> >
>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>> >                 >
>> >                 >
>> >
>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>> >                 >
>> >                 >
>> >                 >
>> >                 >         Also, I suppose, that it may be discussed
>> >                 during summit
>> >                 >         talks :)
>> >                 >         Will add to etherpad with potential
>> >                 sessions.
>> >                 >
>> >                 >
>> >                 >
>> >                 >         Regards,
>> >                 >         Sergey.
>> >                 >
>> >                 >         On 22 September 2015 at 22:27, Brandon Logan
>> >                 >         <brandon.logan at rackspace.com> wrote:
>> >                 >                 There is some overlap, but there was
>> >                 some incompatible
>> >                 >                 differences when
>> >                 >                 we started designing v2.  I'm sure
>> >                 the same issues
>> >                 >                 will arise this time
>> >                 >                 around so new resources sounds like
>> >                 the path to go.
>> >                 >                 However, I do not
>> >                 >                 know much about Heat and the
>> >                 resources so I'm speaking
>> >                 >                 on a very
>> >                 >                 uneducated level here.
>> >                 >
>> >                 >                 Thanks,
>> >                 >                 Brandon
>> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>> >                 Fox, Kevin M wrote:
>> >                 >                 > We're using the v1 resources...
>> >                 >                 >
>> >                 >                 > If the v2 ones are compatible and
>> >                 can seamlessly
>> >                 >                 upgrade, great
>> >                 >                 >
>> >                 >                 > Otherwise, make new ones please.
>> >                 >                 >
>> >                 >                 > Thanks,
>> >                 >                 > Kevin
>> >                 >                 >
>> >                 >                 >
>> >                 >
>> >
>> ______________________________________________________________________
>> >                 >                 > From: Banashankar KV
>> >                 [banveerad at gmail.com]
>> >                 >                 > Sent: Tuesday, September 22, 2015
>> >                 10:07 AM
>> >                 >                 > To: OpenStack Development Mailing
>> >                 List (not for
>> >                 >                 usage questions)
>> >                 >                 > Subject: Re: [openstack-dev]
>> >                 [neutron][lbaas] - Heat
>> >                 >                 support for
>> >                 >                 > LbaasV2
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 > Hi Brandon,
>> >                 >                 > Work in progress, but need some
>> >                 input on the way we
>> >                 >                 want them, like
>> >                 >                 > replace the existing lbaasv1 or we
>> >                 still need to
>> >                 >                 support them ?
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 > Thanks
>> >                 >                 > Banashankar
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
>> >                 Brandon Logan
>> >                 >                 > <brandon.logan at rackspace.com>
>> >                 wrote:
>> >                 >                 >         Hi Banashankar,
>> >                 >                 >         I think it'd be great if
>> >                 you got this going.
>> >                 >                 One of those
>> >                 >                 >         things we
>> >                 >                 >         want to have and people
>> >                 ask for but has
>> >                 >                 always gotten a lower
>> >                 >                 >         priority
>> >                 >                 >         due to the critical things
>> >                 needed.
>> >                 >                 >
>> >                 >                 >         Thanks,
>> >                 >                 >         Brandon
>> >                 >                 >         On Mon, 2015-09-21 at
>> >                 17:57 -0700,
>> >                 >                 Banashankar KV wrote:
>> >                 >                 >         > Hi All,
>> >                 >                 >         > I was thinking of
>> >                 starting the work on
>> >                 >                 heat to support
>> >                 >                 >         LBaasV2,  Is
>> >                 >                 >         > there any concerns about
>> >                 that?
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         > I don't know if it is
>> >                 the right time to
>> >                 >                 bring this up :D .
>> >                 >                 >         >
>> >                 >                 >         > Thanks,
>> >                 >                 >         > Banashankar (bana_k)
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >
>> >                 >                 >         >
>> >                 >                 >
>> >                 >
>> >
>> >                 >                 >
>> >                 >                 >
>> >                 >
>> >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >
>> >
>> >                 >
>> >                 >
>> >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>>
>>
>>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/882b16fd/attachment.html>

From vilobhmeshram.openstack at gmail.com  Wed Sep 23 18:11:30 2015
From: vilobhmeshram.openstack at gmail.com (Vilobh Meshram)
Date: Wed, 23 Sep 2015 11:11:30 -0700
Subject: [openstack-dev] [nova] Servicegroup refactoring for the Control
 Plane - Mitaka
Message-ID: <CAPJ8RRXQaPHRS7d8SytK52838agAq7RHCfTnjPfed7W0089ykA@mail.gmail.com>

Hi All,

As part of Liberty, spec [1] was approved with the conclusion that the
nova.services data be stored and managed by the respective driver backend
selected by CONF.servicegroup_driver (which can be DB/Zookeeper/Memcache).

When this spec was proposed again for Mitaka [3], the idea that came up is
that the nova.services data will remain in the nova database itself, and
the servicegroup zookeeper/memcache drivers will be used solely for the
liveness (up/down state) of the service. The idea is to use the best of
both worlds: some operations, for example getting the min/max for a
service id, can be quicker as a DB query than against the ZK/Memcache
backends, while the ZK driver is worthwhile for state management as it
minimizes the burden on the nova db of storing the additional *periodic*
(depending on service_down_time) liveness information.

Please note that [4] depends on [3], and a conclusion on [3] can pave the
way forward for [4] (similarly, [1] was a dependency for [2]). A detailed
document [5] covers all the possible options, using different permutations
of the various drivers (db/zk/mc). Once we have a conclusion on one of the
approaches proposed in [5], I will update spec [3] to reflect these
changes.

So in short

*Accepted in Liberty [1] [2] :*
[1] Services information be stored in respective backend configured by
CONF.servicegroup_driver
and all the interfaces which plan to access service information go through
servicegroup layer.
[2] Add tooz specific drivers e.g. replace existing nova servicegroup
zookeeper driver with a new zookeeper driver backed by Tooz zookeeper
driver.

*Proposal for Mitaka [3][4] :*
[3] Services information be stored in nova.services (nova database) and
liveliness information be managed by CONF.servicegroup_driver
(DB/Zookeeper/Memcache)
[4] Stick to what is accepted for #2. Just that the scope will be decided
based on whether we go with #1 (as accepted for Liberty) or #3 (what is
proposed for Mitaka)


- Vilobh

[1] Servicegroup foundational refactoring for Control Plane *(Liberty)* -
https://review.openstack.org/#/c/190322/

[2] Add tooz service group driver* (Liberty)* -
https://review.openstack.org/#/c/138607/

[3] Servicegroup foundational refactoring for Control Plane *(Mitaka)* -
https://review.openstack.org/#/c/222423/

[4] Add tooz service group driver *(Mitaka) *-
https://review.openstack.org/#/c/222422/

[5] *Various options and their impact* :
https://etherpad.openstack.org/p/servicegroup-refactoring
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/59a71e69/attachment-0001.html>

From sean at dague.net  Wed Sep 23 18:17:35 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 23 Sep 2015 14:17:35 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <m0fv25fjvf.fsf@danjou.info>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
Message-ID: <5602ECBF.2020900@dague.net>

On 09/23/2015 07:36 AM, Julien Danjou wrote:
> On Wed, Sep 23 2015, Sean Dague wrote:
> 
>> Does that solution work in the HA Proxy case where there is one
>> terminating address for multiple backend servers?
> 
> Yep.

Ok, how exactly does that work? Because it seems like
oslo_middleware.ssl is only changing the protocol if the proxy sets it.

But the host in the urls will still be the individual host, which isn't
the proxy hostname/ip. Sorry if I'm being daft here, just want to
understand how that flow ends up working.

>> Because there is the concern that this impacts not only the Location
>> header, but the link documents inside the responses which clients are
>> expected to be able to link.follow. This is an honest question, I
>> don't know how the oslo_middleware.ssl acts in these cases. And HA
>> Proxy 1 to N mapping is a very common deployment model.
> 
> It should, but some projects like Keystone do not handle that
> correctly. I just submitted a patch that fixes this kind of thing by
> correctly using the WSGI environment variables to build a correct URL.
> That also fixes the use case where Keystone does not run on / but on
> e.g. /identity (the bug I initially wanted to fix).
> 
>   https://review.openstack.org/#/c/226464/
> 
> If you use `wsgiref.util.application_uri(environment)' it should do
> everything correctly. With the SSL middleware enabled that Mathieu
> talked about, it will translate correctly http to https too.

Will that cover the case of webob's request.application_uri? If so I
think that covers the REST documents in at least Nova (one good data
point, and one that I know has been copied around). At least as far as
the protocol is concerned, it's still got a potential url issue.

> The {public,admin}_endpoint are only useful in the case where you map
> http://myproxy/identity -> http://mykeystone/ using a proxy
> 
> Because the prefix is not passed to Keystone. If you map 1:1 the path
> part, we could also leverage X-Forwarded-Host and X-Forwarded-Port to
> avoid having {public,admin}_endpoint options.

It also looks like there are new standards for Forwarded headers, so the
middleware should probably support those as well.
http://tools.ietf.org/html/rfc7239.
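To illustrate what such a middleware has to do in the HAProxy case: trust the proxy's forwarding headers (either the de-facto X-Forwarded-* pair or the RFC 7239 Forwarded header), rewrite the WSGI environ, and then any standard URL reconstruction yields the proxy-facing URL. A minimal sketch, not the actual oslo_middleware code:

```python
import wsgiref.util


def external_base_url(environ):
    """Sketch: rebuild the client-facing base URL behind an SSL-terminating
    proxy.  Trust the proxy's forwarding headers, patch a copy of the WSGI
    environ, then let wsgiref.util.application_uri do the rest.
    (Illustrative only, not the oslo_middleware implementation.)"""
    env = dict(environ)

    # Legacy de-facto headers set by HAProxy and friends.
    proto = env.get("HTTP_X_FORWARDED_PROTO")
    host = env.get("HTTP_X_FORWARDED_HOST")

    # RFC 7239 standardizes the same data in one Forwarded header, e.g.
    #   Forwarded: for=192.0.2.60;proto=https;host=api.example.org
    forwarded = env.get("HTTP_FORWARDED")
    if forwarded:
        first_hop = forwarded.split(",")[0]
        for pair in first_hop.split(";"):
            key, _, value = pair.strip().partition("=")
            if key.lower() == "proto":
                proto = value
            elif key.lower() == "host":
                host = value.strip('"')

    if proto:
        env["wsgi.url_scheme"] = proto
    if host:
        env["HTTP_HOST"] = host
    return wsgiref.util.application_uri(env)


environ = {
    "wsgi.url_scheme": "http",
    "HTTP_HOST": "backend-3:8774",   # the individual backend, not the proxy
    "SCRIPT_NAME": "/v2",
    "HTTP_FORWARDED": "for=192.0.2.60;proto=https;host=api.example.org",
}
print(external_base_url(environ))  # https://api.example.org/v2
```

This is also where the 1-to-N concern gets answered: the backend-specific host is overwritten by whatever host the proxy reports, so every backend generates links with the terminating address.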

	-Sean

-- 
Sean Dague
http://dague.net

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/1b1ea40e/attachment.pgp>

From Nate_Johnston at cable.comcast.com  Wed Sep 23 18:26:32 2015
From: Nate_Johnston at cable.comcast.com (Johnston, Nate)
Date: Wed, 23 Sep 2015 18:26:32 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442847252531.42564@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
Message-ID: <4EF8B9F2-ECD2-4F29-BB61-907949320C39@cable.comcast.com>

Yes please - my launchpad ID is 'nate-johnston'.

Thanks!

--N.

On Sep 21, 2015, at 10:54 AM, Andrew Melton <andrew.melton at RACKSPACE.COM<mailto:andrew.melton at RACKSPACE.COM>> wrote:


Hi devs,


I've got the new license for the next year. As always, please reply to this email with your launchpad-id if you would like a license.


Also, if there are other JetBrains products you use to contribute to OpenStack, please let me know and I will request licenses.


--Andrew

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/b4474bef/attachment.html>

From klindgren at godaddy.com  Wed Sep 23 18:32:36 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 23 Sep 2015 18:32:36 +0000
Subject: [openstack-dev] [Openstack-operators] [Large Deployments
 Team][Performance Team] New informal working group suggestion
Message-ID: <D75DC775-5E3E-44E9-BC86-EB724DE253F5@godaddy.com>

Dina,

Do we have a place to put things (an etherpad) that we are seeing performance issues with?  I know we are seeing issues with CPU load under nova-conductor, as well as some issues with the neutron API timing out (it seems like it never responds to the request; there is no log entry on the neutron side).

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Matt Van Winkle
Date: Tuesday, September 22, 2015 at 7:46 AM
To: Dina Belova, OpenStack Development Mailing List, "openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>"
Subject: Re: [Openstack-operators] [Large Deployments Team][Performance Team] New informal working group suggestion

Thanks, Dina!

For context to the rest of the LDT folks, Dina reached out to me about working on this under our umbrella for now.  It makes sense until we understand whether it's a large enough effort to live as its own working group, because most of us have various performance concerns too.  So, like Public Clouds, we'll have to figure out how to integrate this sub group.

I suspect the time slot for Tokyo is already packed, so the work for the Performance subgroup may have to be informal or in other sessions, but I'll start working with Tom and the folks covering the session for me (since I won't be able to make it) on what we might be able to do.  I've also asked Dina to join the Oct meeting prior to the Summit so we can further discuss the sub team.

Thanks!
VW

From: Dina Belova <dbelova at mirantis.com<mailto:dbelova at mirantis.com>>
Date: Tuesday, September 22, 2015 7:57 AM
To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>" <openstack-operators at lists.openstack.org<mailto:openstack-operators at lists.openstack.org>>
Subject: [Large Deployments Team][Performance Team] New informal working group suggestion

Hey, OpenStackers!

I'm writing to propose organising a new informal team to work specifically on OpenStack performance issues. This will be a sub team of the already existing Large Deployments Team, and I suppose it will be a good idea to gather people interested in OpenStack performance in one room, identify what issues are worrying contributors, see what can be done, and share the results of performance research :)

So please volunteer to take part in this initiative. I hope many people will be interested and we'll be able to use a cross-project session slot<http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and hold a kick-off meeting.

I would like to apologise for writing to two mailing lists at the same time, but I want to make sure that all potentially interested people will notice the email.

Thanks and see you in Tokyo :)

Cheers,
Dina

--

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/471b3b22/attachment.html>

From fungi at yuggoth.org  Wed Sep 23 18:32:49 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 23 Sep 2015 18:32:49 +0000
Subject: [openstack-dev] [all] Criteria for applying
	vulnerability:managed tag
In-Reply-To: <20150901185638.GB7955@yuggoth.org>
References: <20150901185638.GB7955@yuggoth.org>
Message-ID: <20150923183248.GW25159@yuggoth.org>

On 2015-09-01 18:56:38 +0000 (+0000), Jeremy Stanley wrote:
[...]
> In the spirit of proper transparency, I'm initiating a frank and
> open dialogue on what our criteria for direct vulnerability
> management within the VMT would require of a deliverable and its
> controlling project-team.
[...]

Since there's been no further discussion on the thread for at least
a couple weeks, I've proposed https://review.openstack.org/226869 to
formalize this in governance. Please direct further followup
comments to the review. Thanks!
-- 
Jeremy Stanley


From Kevin.Fox at pnnl.gov  Wed Sep 23 18:39:57 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Wed, 23 Sep 2015 18:39:57 +0000
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>,
 <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7C9755@EX10MBOX06.pnnl.gov>

One of the weird things about lbaas v1 vs v2, which is different from just about every other v1->v2 change I've seen, is that v1 and v2 LBs are totally separate things. Unlike, say, cinder, where a volume would show up in both APIs' list calls, so upgrading is smooth.

Thanks,
Kevin
________________________________
From: Sergey Kraynev [skraynev at mirantis.com]
Sent: Wednesday, September 23, 2015 11:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

Guys, I'm happy that you already discussed it here :)
However, I'd like to raise the same question at our Heat IRC meeting.
Probably we should define some common concepts, because I think that lbaas is not the only example of a service with
several APIs.
I will post an update in this thread later (after the meeting).

Regards,
Sergey.

On 23 September 2015 at 14:37, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
Separate ns would work great.

Thanks,
Kevin

________________________________
From: Banashankar KV
Sent: Tuesday, September 22, 2015 9:14:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

What do you think about separating both of them with the names Doug mentioned? In the future, if we want to get rid of v1, we can just remove that namespace. Everything will be clean.

Thanks
Banashankar


On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
As I understand it, loadbalancer in v2 is more like what pool was in v1. Can we make it such that if you are using the loadbalancer resource and have the mandatory v2 properties, it tries to use the v2 api, otherwise it's a v1 resource? PoolMember should be ok being the same. It just needs to call v1 or v2 depending on whether the lb it's pointing at is v1 or v2. Is the monitor's api different between them? Can it be like pool member?
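A sketch of the detection Kevin describes, with invented property names (the real v1/v2 Heat schemas would need checking): if any property that only exists in v2 is present, dispatch to the v2 api, otherwise treat it as a v1 resource.

```python
# Hypothetical sketch of dispatching one Heat resource to either the
# lbaas v1 or v2 API based on which properties appear in the template.
# The marker property names below are invented for illustration.
V2_ONLY_PROPERTIES = {"listener", "loadbalancer_id"}


def lbaas_api_version(properties):
    """Pick the API version from the resource's declared properties."""
    return "v2" if V2_ONLY_PROPERTIES & set(properties) else "v1"


print(lbaas_api_version({"protocol": "HTTP", "lb_method": "ROUND_ROBIN"}))  # v1
print(lbaas_api_version({"protocol": "HTTP", "listener": "web"}))           # v2
```

The scheme only works if at least one mandatory v2 property has no v1 counterpart, which is exactly the open question in the thread.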

Thanks,
Kevin

________________________________
From: Brandon Logan
Sent: Tuesday, September 22, 2015 5:39:03 PM

To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

So the v1 API is of the structure:

<neutron-endpoint>/lb/(vip|pool|member|health_monitor)

V2's is:
<neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)

member is a child of pool, so it would go down one level.

The only difference is the lb for v1 and lbaas for v2.  Not sure if that
is enough of a difference.
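The two layouts above differ only in that prefix; as a toy helper (the endpoint value is hypothetical):

```python
# Toy illustration of the v1 vs v2 URL layouts described above.
V1_RESOURCES = ("vip", "pool", "member", "health_monitor")
V2_RESOURCES = ("loadbalancer", "listener", "pool", "healthmonitor")
# (v2 members live one level down, under /lbaas/pools/<pool_id>/members)


def lbaas_path(version, resource, endpoint="http://neutron:9696"):
    """Build the request path for an lbaas resource collection."""
    prefix = {"v1": "lb", "v2": "lbaas"}[version]
    return "%s/%s/%s" % (endpoint, prefix, resource)


print(lbaas_path("v1", "pool"))          # http://neutron:9696/lb/pool
print(lbaas_path("v2", "loadbalancer"))  # http://neutron:9696/lbaas/loadbalancer
```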

Thanks,
Brandon
On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
> Thats the problem. :/
>
> I can't think of a way to have them coexist without: breaking old
> templates, including v2 in the name, or having a flag on the resource
> saying the version is v2. And as an app developer I'd rather not have
> my existing templates break.
>
> I haven't compared the api's at all, but is there a required field of
> v2 that is different enough from v1 that by its simple existence in
> the resource you can tell a v2 from a v1 object? Would something like
> that work? PoolMember wouldn't have to change, the same resource could
> probably work for whatever lb it was pointing at I'm guessing.
>
> Thanks,
> Kevin
>
>
>
> ______________________________________________________________________
> From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
> Sent: Tuesday, September 22, 2015 4:40 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> LbaasV2
>
>
>
> Ok, sounds good. So now the question is how should we name the new V2
> resources ?
>
>
>
> Thanks
> Banashankar
>
>
>
> On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>>
> wrote:
>         Yes, hence the need to support the v2 resources as separate
>         things. Then I can rewrite the templates to include the new
>         resources rather than the old resources as appropriate. IE, it
>         will be a porting effort to rewrite them. Then do a heat
>         update on the stack to migrate it from lbv1 to lbv2. Since
>         they are different resources, it should create the new and
>         delete the old.
>
>         Thanks,
>         Kevin
>
>
>         ______________________________________________________________
>         From: Banashankar KV [banveerad at gmail.com<mailto:banveerad at gmail.com>]
>         Sent: Tuesday, September 22, 2015 4:16 PM
>
>         To: OpenStack Development Mailing List (not for usage
>         questions)
>         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>         for LbaasV2
>
>
>
>
>         But I think, even though V2 has introduced some new components and the
>         whole association of the resources with each other has changed, we
>         should still be able to do what Kevin has mentioned?
>
>         Thanks
>         Banashankar
>
>
>
>         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>         <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
>                 There needs to be a way to have both v1 and v2
>                 supported in one engine....
>
>                 Say I have templates that use v1 already in existence
>                 (I do), and I want to be able to heat stack update on
>                 them one at a time to v2. This will replace the v1 lb
>                 with v2, migrating the floating ip from the v1 lb to
>                 the v2 one. This gives a smoothish upgrade path.
>
>                 Thanks,
>                 Kevin
>                 ________________________________________
>                 From: Brandon Logan [brandon.logan at RACKSPACE.COM<mailto:brandon.logan at RACKSPACE.COM>]
>                 Sent: Tuesday, September 22, 2015 3:22 PM
>                 To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
>                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>                 support for LbaasV2
>
>                 Well I'd hate to have the V2 postfix on it because V1
>                 will be deprecated
>                 and removed, which means the V2 being there would be
>                 lame.  Is there any
>                 kind of precedent set for how to handle this?
>
>                 Thanks,
>                 Brandon
>                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>                 wrote:
>                 > So are we thinking of making it as ?
>                 > OS::Neutron::LoadBalancerV2
>                 >
>                 > OS::Neutron::ListenerV2
>                 >
>                 > OS::Neutron::PoolV2
>                 >
>                 > OS::Neutron::PoolMemberV2
>                 >
>                 > OS::Neutron::HealthMonitorV2
>                 >
>                 >
>                 >
>                 > and add all those into the loadbalancer.py of heat
>                 engine ?
>                 >
>                 > Thanks
>                 > Banashankar
>                 >
>                 >
>                 >
>                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>                 > <skraynev at mirantis.com<mailto:skraynev at mirantis.com>> wrote:
>                 >         Brandon.
>                 >
>                 >
>                 >         As I understand, v1 and v2 have
>                 differences in the list of
>                 >         objects and also in the relationships between
>                 them.
>                 >         So I don't think that it will be easy to
>                 upgrade old resources
>                 >         (unfortunately).
>                 >         I'd agree with second Kevin's suggestion
>                 about implementation
>                 >         new resources in this case.
>                 >
>                 >
>                 >         I see that a lot of guys want to help
>                 with it :) And I
>                 >         suppose that Rabi Mishra and I may try to
>                 help with it,
>                 >         because we were involved in the implementation
>                 of v1 resources
>                 >         in Heat.
>                 >         Follow the list of v1 lbaas resources in
>                 Heat:
>                 >
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>                 >
>                 >
>                  http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>                 >
>                 >
>                 >
>                 >         Also, I suppose, that it may be discussed
>                 during summit
>                 >         talks :)
>                 >         Will add to etherpad with potential
>                 sessions.
>                 >
>                 >
>                 >
>                 >         Regards,
>                 >         Sergey.
>                 >
>                 >         On 22 September 2015 at 22:27, Brandon Logan
>                 >         <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>> wrote:
>                 >                 There is some overlap, but there was
>                 some incompatible
>                 >                 differences when
>                 >                 we started designing v2.  I'm sure
>                 the same issues
>                 >                 will arise this time
>                 >                 around so new resources sounds like
>                 the path to go.
>                 >                 However, I do not
>                 >                 know much about Heat and the
>                 resources so I'm speaking
>                 >                 on a very
>                 >                 uneducated level here.
>                 >
>                 >                 Thanks,
>                 >                 Brandon
>                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>                 Fox, Kevin M wrote:
>                 >                 > We're using the v1 resources...
>                 >                 >
>                 >                 > If the v2 ones are compatible and
>                 can seamlessly
>                 >                 upgrade, great
>                 >                 >
>                 >                 > Otherwise, make new ones please.
>                 >                 >
>                 >                 > Thanks,
>                 >                 > Kevin
>                 >                 >
>                 >                 >
>                 >
>                  ______________________________________________________________________
>                 >                 > From: Banashankar KV
>                 [banveerad at gmail.com<mailto:banveerad at gmail.com>]
>                 >                 > Sent: Tuesday, September 22, 2015
>                 10:07 AM
>                 >                 > To: OpenStack Development Mailing
>                 List (not for
>                 >                 usage questions)
>                 >                 > Subject: Re: [openstack-dev]
>                 [neutron][lbaas] - Heat
>                 >                 support for
>                 >                 > LbaasV2
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Hi Brandon,
>                 >                 > Work in progress, but need some
>                 input on the way we
>                 >                 want them, like
>                 >                 > replace the existing lbaasv1 or we
>                 still need to
>                 >                 support them ?
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > Thanks
>                 >                 > Banashankar
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
>                 Brandon Logan
>                 >                 > <brandon.logan at rackspace.com<mailto:brandon.logan at rackspace.com>>
>                 wrote:
>                 >                 >         Hi Banashankar,
>                 >                 >         I think it'd be great if
>                 you got this going.
>                 >                 One of those
>                 >                 >         things we
>                 >                 >         want to have and people
>                 ask for but has
>                 >                 always gotten a lower
>                 >                 >         priority
>                 >                 >         due to the critical things
>                 needed.
>                 >                 >
>                 >                 >         Thanks,
>                 >                 >         Brandon
>                 >                 >         On Mon, 2015-09-21 at
>                 17:57 -0700,
>                 >                 Banashankar KV wrote:
>                 >                 >         > Hi All,
>                 >                 >         > I was thinking of
>                 starting the work on
>                 >                 heat to support
>                 >                 >         LBaasV2,  Is
>                 >                 >         > there any concerns about
>                 that?
>                 >                 >         >
>                 >                 >         >
>                 >                 >         > I don't know if it is
>                 the right time to
>                 >                 bring this up :D .
>                 >                 >         >
>                 >                 >         > Thanks,
>                 >                 >         > Banashankar (bana_k)







-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/4802401c/attachment.html>

From andrew.melton at RACKSPACE.COM  Wed Sep 23 18:40:56 2015
From: andrew.melton at RACKSPACE.COM (Andrew Melton)
Date: Wed, 23 Sep 2015 18:40:56 +0000
Subject: [openstack-dev] New PyCharm License
In-Reply-To: <1442943810555.78167@RACKSPACE.COM>
References: <1442847252531.42564@RACKSPACE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D788A6FE88E@MAIL703.KDS.KEANE.COM>
 <1442860231482.95559@RACKSPACE.COM>,
 <20150921191918.GS21846@jimrollenhagen.com>,
 <1442943810555.78167@RACKSPACE.COM>
Message-ID: <1443033655622.77961@RACKSPACE.COM>

Hey Devs,

I now have a set of WebStorm licenses. Please reply with your launchpad-id and I'll get you the invitation link for WebStorm.

--Andrew
________________________________________
From: Andrew Melton <andrew.melton at RACKSPACE.COM>
Sent: Tuesday, September 22, 2015 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

Hey Devs,

Reply-All strikes again...

I've had to invalidate the old link. If you still need a PyCharm License, please reach out to me with your launchpad-id and I'll get you the updated link.

I'm also working on a WebStorm license. I'll let the list know when I have it.

--Andrew
________________________________________
From: Jim Rollenhagen <jim at jimrollenhagen.com>
Sent: Monday, September 21, 2015 3:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] New PyCharm License

On Mon, Sep 21, 2015 at 06:30:30PM +0000, Andrew Melton wrote:
> Please follow this link to request a license: https://account.jetbrains.com/a/4c4ojw.
>
> You will need a JetBrains account to request the license. This link is open for anyone to use, so please do not share it in the public. You may share it with other OpenStack contributors on your team, but if you do, please send me their launchpad-ids. Lastly, if you decide to stop using PyCharm, please send me an email so I can revoke the license and open it up for use by someone else.

Welp, it's in the public now. :(

// jim

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From e0ne at e0ne.info  Wed Sep 23 18:46:46 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 23 Sep 2015 21:46:46 +0300
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <5602E8D5.9080407@linux.vnet.ibm.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
Message-ID: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>

Hi Matt,

In Liberty, we introduced the allow_availability_zone_fallback [1] option in
the Cinder config as a fix for bug [2]. If you set this option, Cinder will
create the volume in a default AZ instead of putting the volume into the
error state.

[1]
https://github.com/openstack/cinder/commit/b85d2812a8256ff82934d150dbc4909e041d8b31
[2] https://bugs.launchpad.net/cinder/+bug/1489575
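
For reference, the option lives in cinder.conf; a minimal sketch (option
name per the commit in [1], the True value here is just for illustration):

```ini
# cinder.conf -- sketch; see the linked commit for authoritative behavior
[DEFAULT]
# If the AZ requested on volume create is unknown to Cinder, fall back
# to the configured default availability zone instead of erroring out.
allow_availability_zone_fallback = True
```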

Regards,
Ivan Kolodyazhny

On Wed, Sep 23, 2015 at 9:00 PM, Matt Riedemann <mriedem at linux.vnet.ibm.com>
wrote:

> I came across bug 1496235 [1] today.  In this case the user is booting an
> instance from a volume using source=image, so nova actually does the volume
> create call to the volume API.  They are booting the instance into a valid
> nova availability zone, but that same AZ isn't defined in Cinder, so the
> volume create request fails (since nova passes the instance AZ to cinder
> [2]).
>
> I marked this as invalid given how the code works.
>
> I'm posting here since I'm wondering if there are alternatives worth
> pursuing.  For example, nova could get the list of AZs from the volume API
> and if the nova AZ isn't in that list, don't provide it on the volume
> create request.  That's essentially the same as first creating the volume
> outside of nova and not specifying an AZ, then when doing the boot from
> volume, provide the volume_id as the source.
>
> The question is, is it worth doing that?  I'm not familiar enough with how
> availability zones are meant to work between nova and cinder so it's hard
> for me to have much of an opinion here.
>
> [1] https://bugs.launchpad.net/nova/+bug/1496235
> [2]
> https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L381-L383
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
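The alternative sketched above (only pass the instance AZ through when
Cinder actually knows it) could look roughly like this; the helper name and
shape are hypothetical, not nova's actual code:

```python
def build_volume_create_kwargs(instance_az, cinder_azs):
    """Return kwargs for the volume create call.

    Pass the instance's AZ only when it appears in Cinder's AZ list;
    otherwise omit it so Cinder falls back to its default AZ.
    (Hypothetical helper -- nova today passes the AZ unconditionally.)
    """
    kwargs = {}
    if instance_az is not None and instance_az in cinder_azs:
        kwargs['availability_zone'] = instance_az
    return kwargs

# cinder_azs would come from the volume API's AZ listing, e.g. roughly
# [az.zoneName for az in cinderclient.availability_zones.list()]
print(build_volume_create_kwargs('nova-az1', ['nova-az1', 'nova-az2']))
print(build_volume_create_kwargs('nova-az3', ['nova-az1', 'nova-az2']))
```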
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/dadd308c/attachment.html>

From robert.clark at hpe.com  Wed Sep 23 18:56:40 2015
From: robert.clark at hpe.com (Clark, Robert Graham)
Date: Wed, 23 Sep 2015 18:56:40 +0000
Subject: [openstack-dev] [Security] Weekly Meeting Agenda
Message-ID: <A0C170085C37664D93EE1604364858A11FE58B25@G4W3229.americas.hpqcorp.net>

Hi All,

I won't be available to run the weekly meeting tomorrow as I'm out travelling, Michael McCune (elmiko) has volunteered to lead the meeting.

There's IRC information on our wiki page : https://wiki.openstack.org/wiki/Security

Agenda items (Please reply to add any more):

*        PTL Shenanigans : https://review.openstack.org/#/c/224798/

*        VMT Project Tracking : https://review.openstack.org/#/c/226869/

*        Anchor (Ephemeral PKI)

*        Bandit (Security Linter)

*        Developer Guidance (Don't do this)

*        Security Documents (Do do this)

*        Security Notes (Think about not doing this)

*        Syntribos (API Fuzzing)

*        Any Other Business

Have a good meeting all!

-Rob
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/d4a637da/attachment.html>

From msm at redhat.com  Wed Sep 23 19:01:59 2015
From: msm at redhat.com (michael mccune)
Date: Wed, 23 Sep 2015 15:01:59 -0400
Subject: [openstack-dev] [Security] Weekly Meeting Agenda
In-Reply-To: <A0C170085C37664D93EE1604364858A11FE58B25@G4W3229.americas.hpqcorp.net>
References: <A0C170085C37664D93EE1604364858A11FE58B25@G4W3229.americas.hpqcorp.net>
Message-ID: <5602F727.10108@redhat.com>

On 09/23/2015 02:56 PM, Clark, Robert Graham wrote:
> Hi All,
>
> I won't be available to run the weekly meeting tomorrow as I'm out
> travelling, Michael McCune (elmiko) has volunteered to lead the meeting.
>
> There's IRC information on our wiki page :
> https://wiki.openstack.org/wiki/Security
>
> Agenda items (Please reply to add any more):
>
> * PTL Shenanigans : https://review.openstack.org/#/c/224798/
>
> * VMT Project Tracking : https://review.openstack.org/#/c/226869/
>
> * Anchor (Ephemeral PKI)
>
> * Bandit (Security Linter)
>
> * Developer Guidance (Don't do this)
>
> * Security Documents (Do do this)
>
> * Security Notes (Think about not doing this)
>
> * Syntribos (API Fuzzing)
>
> * Any Other Business
>
> Have a good meeting all!
>
> -Rob

thanks Rob



From mriedem at linux.vnet.ibm.com  Wed Sep 23 19:15:46 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 14:15:46 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
Message-ID: <5602FA62.4000904@linux.vnet.ibm.com>



On 9/23/2015 1:46 PM, Ivan Kolodyazhny wrote:
> Hi Matt,
>
> In Liberty, we introduced allow_availability_zone_fallback [1] option in
> Cinder config as fix for bug [2]. If you set this option, Cinder will
> create volume in a default AZ instead of set volume into the error state
>
> [1]
> https://github.com/openstack/cinder/commit/b85d2812a8256ff82934d150dbc4909e041d8b31
> [2] https://bugs.launchpad.net/cinder/+bug/1489575
>
> Regards,
> Ivan Kolodyazhny
>
> On Wed, Sep 23, 2015 at 9:00 PM, Matt Riedemann
> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
>
>     I came across bug 1496235 [1] today.  In this case the user is
>     booting an instance from a volume using source=image, so nova
>     actually does the volume create call to the volume API.  They are
>     booting the instance into a valid nova availability zone, but that
>     same AZ isn't defined in Cinder, so the volume create request fails
>     (since nova passes the instance AZ to cinder [2]).
>
>     I marked this as invalid given how the code works.
>
>     I'm posting here since I'm wondering if there are alternatives worth
>     pursuing.  For example, nova could get the list of AZs from the
>     volume API and if the nova AZ isn't in that list, don't provide it
>     on the volume create request.  That's essentially the same as first
>     creating the volume outside of nova and not specifying an AZ, then
>     when doing the boot from volume, provide the volume_id as the source.
>
>     The question is, is it worth doing that?  I'm not familiar enough
>     with how availability zones are meant to work between nova and
>     cinder so it's hard for me to have much of an opinion here.
>
>     [1] https://bugs.launchpad.net/nova/+bug/1496235
>     [2]
>     https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L381-L383
>
>     --
>
>     Thanks,
>
>     Matt Riedemann
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Sorry but that seems like a hack.

I'm trying to figure out the relationship between AZs in nova and cinder 
and so far no one seems to really know.  In the cinder IRC channel I was 
told there isn't one, which would mean we shouldn't even try creating 
the volume using the server instance AZ.

Also, if there is no relationship, I was trying to figure out why there 
is the cinder.cross_az_attach config option.  That was added in grizzly 
[1].  I was thinking maybe it was a legacy artifact from nova-volume, 
but that was dropped in grizzly.

So is cinder.cross_az_attach even useful?

[1] https://review.openstack.org/#/c/21672/
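For context, cinder.cross_az_attach gates an AZ-equality check on the nova
side at attach time; a simplified sketch of that logic (illustrative only,
not nova's literal code) is:

```python
class InvalidVolume(Exception):
    """Raised when a volume cannot be attached to an instance."""


def check_availability_zone(cross_az_attach, instance_az, volume_az):
    """Reject an attach across AZs when cross_az_attach is disabled.

    Simplified sketch of the nova-side check gated by the
    cinder.cross_az_attach option; argument names are illustrative.
    """
    if cross_az_attach:
        return  # any AZ combination is allowed
    if instance_az != volume_az:
        raise InvalidVolume(
            'Instance in AZ %r cannot attach volume in AZ %r'
            % (instance_az, volume_az))


check_availability_zone(True, 'az1', 'az2')   # allowed: option is True
check_availability_zone(False, 'az1', 'az1')  # allowed: same AZ
```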

-- 

Thanks,

Matt Riedemann



From corpqa at gmail.com  Wed Sep 23 19:24:16 2015
From: corpqa at gmail.com (OpenStack Mailing List Archive)
Date: Wed, 23 Sep 2015 12:24:16 -0700
Subject: [openstack-dev] New PyCharm License
Message-ID: <47457da50caf461af9bd63c2b44b9074@openstack.nimeyo.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/5923d868/attachment.html>

From aschultz at mirantis.com  Wed Sep 23 19:32:48 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 23 Sep 2015 14:32:48 -0500
Subject: [openstack-dev] [puppet][swift] Applying security recommendations
	within puppet-swift
Message-ID: <CABzFt8Pxd9kAf=TNv8t1FyT5vSk4iewbLJqeR8Qp6uV4P-k43A@mail.gmail.com>

Hey all,

So as part of the Puppet mid-cycle, we did bug triage.  One of the
bugs that was looked into was bug 1289631[0].  This bug is about
applying the recommendations from the security guide[1] within the
puppet-swift module.  So I'm sending a note out to get other feedback
on whether this is a good idea or not.  Should we be applying these kinds of
security items within the puppet modules by default? Should we make
this optional?  Thoughts?


Thanks,
-Alex


[0] https://bugs.launchpad.net/puppet-swift/+bug/1289631
[1] http://docs.openstack.org/security-guide/object-storage.html#securing-services-general


From emilien at redhat.com  Wed Sep 23 19:33:44 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 23 Sep 2015 15:33:44 -0400
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <5601EFA8.8040204@puppetlabs.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com>
Message-ID: <5602FE98.6000407@redhat.com>



On 09/22/2015 08:17 PM, Cody Herriges wrote:
> Alex Schultz wrote:
>> Hey puppet folks,
>>
>> Based on the meeting yesterday[0], I had proposed creating a parser
>> function called is_service_default[1] to validate if a variable matched
>> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
>> about how can we maybe not use the arbitrary string throughout the
>> puppet that can not easily be validated.  So I tested creating another
>> puppet function named service_default[2] to replace the use of '<SERVICE
>> DEFAULT>' throughout all the puppet modules.  My tests seemed to
>> indicate that you can use a parser function as parameter default for
>> classes. 
>>
>> I wanted to send a note to gather comments around the second function. 
>> When we originally discussed what to use to designate for a service's
>> default configuration, I really didn't like using an arbitrary string
>> since it's hard to parse and validate. I think leveraging a function
>> might be better since it is something that can be validated via tests
>> and a syntax checker.  Thoughts?
>>
>>
>> Thanks,
>> -Alex
>>
>> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
>> [1] https://review.openstack.org/#/c/223672
>> [2] https://review.openstack.org/#/c/224187
>>
> 
> I've been mulling this over the last several days and I just can't
> accept an entire ruby function which would be run for every parameter
> with the desired static value of "<SERVICE DEFAULT>" when the class is
> declared and parsed.  I am not generally against using functions as a
> parameter default just not a fan in this case because running ruby just
> to return a static string seems inappropriate and not optimal.
> 
> In this specific case I think the params pattern and inheritance can
> obtain us the same goals.  I also find this a valid use of inheritance
> cross module namespaces but...only because all our modules must depend
> on puppet-openstacklib.
> 
> http://paste.openstack.org/show/473655

+1 for this solution, which is straightforward and easy to
maintain/implement.
-1 for a fact, which is (to me) more code; relying on puppet versions
is a bit dangerous, even if facts.d are supported by our CI jobs.
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/7ced0912/attachment.pgp>

From mriedem at linux.vnet.ibm.com  Wed Sep 23 19:34:10 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 14:34:10 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <5602FA62.4000904@linux.vnet.ibm.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
Message-ID: <5602FEB2.6030804@linux.vnet.ibm.com>



On 9/23/2015 2:15 PM, Matt Riedemann wrote:
>
>
> On 9/23/2015 1:46 PM, Ivan Kolodyazhny wrote:
>> Hi Matt,
>>
>> In Liberty, we introduced allow_availability_zone_fallback [1] option in
>> Cinder config as fix for bug [2]. If you set this option, Cinder will
>> create volume in a default AZ instead of set volume into the error state
>>
>> [1]
>> https://github.com/openstack/cinder/commit/b85d2812a8256ff82934d150dbc4909e041d8b31
>>
>> [2] https://bugs.launchpad.net/cinder/+bug/1489575
>>
>> Regards,
>> Ivan Kolodyazhny
>>
>> On Wed, Sep 23, 2015 at 9:00 PM, Matt Riedemann
>> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
>>
>>     I came across bug 1496235 [1] today.  In this case the user is
>>     booting an instance from a volume using source=image, so nova
>>     actually does the volume create call to the volume API.  They are
>>     booting the instance into a valid nova availability zone, but that
>>     same AZ isn't defined in Cinder, so the volume create request fails
>>     (since nova passes the instance AZ to cinder [2]).
>>
>>     I marked this as invalid given how the code works.
>>
>>     I'm posting here since I'm wondering if there are alternatives worth
>>     pursuing.  For example, nova could get the list of AZs from the
>>     volume API and if the nova AZ isn't in that list, don't provide it
>>     on the volume create request.  That's essentially the same as first
>>     creating the volume outside of nova and not specifying an AZ, then
>>     when doing the boot from volume, provide the volume_id as the source.
>>
>>     The question is, is it worth doing that?  I'm not familiar enough
>>     with how availability zones are meant to work between nova and
>>     cinder so it's hard for me to have much of an opinion here.
>>
>>     [1] https://bugs.launchpad.net/nova/+bug/1496235
>>     [2]
>>
>> https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L381-L383
>>
>>
>>     --
>>
>>     Thanks,
>>
>>     Matt Riedemann
>>
>>
>>
>> __________________________________________________________________________
>>
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe:
>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>
>> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Sorry but that seems like a hack.
>
> I'm trying to figure out the relationship between AZs in nova and cinder
> and so far no one seems to really know.  In the cinder IRC channel I was
> told there isn't one, which would mean we shouldn't even try creating
> the volume using the server instance AZ.
>
> Also, if there is no relationship, I was trying to figure out why there
> is the cinder.cross_az_attach config option.  That was added in grizzly
> [1].  I was thinking maybe it was a legacy artifact from nova-volume,
> but that was dropped in grizzly.
>
> So is cinder.cross_az_attach even useful?
>
> [1] https://review.openstack.org/#/c/21672/
>

The plot thickens.

I was checking to see what change was made to start passing the server 
instance az on the volume create call during boot from volume, and that 
was [1] which was added in kilo to fix a bug where boot from volume into 
a nova az will fail if cinder.cross_az_attach=False and 
storage_availability_zone is set in cinder.conf.

So I guess we can't just stop passing the instance az to the volume 
create call.

But what I'd really like to know is how this is all used between cinder 
and nova, or was this all some work done as part of a larger effort that 
was never completed?  Basically, can we deprecate the 
cinder.cross_az_attach config option in nova and start decoupling this code?

[1] https://review.openstack.org/#/c/157041/

-- 

Thanks,

Matt Riedemann



From nkinder at redhat.com  Wed Sep 23 19:35:20 2015
From: nkinder at redhat.com (Nathan Kinder)
Date: Wed, 23 Sep 2015 12:35:20 -0700
Subject: [openstack-dev] [OSSN 0053] Keystone token disclosure may result in
 malicious trust creation
Message-ID: <5602FEF8.4070401@redhat.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Keystone token disclosure may result in malicious trust creation
- ---

### Summary ###
Keystone tokens are the foundation of authentication and authorization
in OpenStack. When a service node is compromised, it is possible that
an attacker would have access to all tokens passing through that node.
With a valid token an attacker will be able to issue new tokens that
may be used to create trusts between the originating user and a new
user.

### Affected Services / Software ###
Keystone, Grizzly, Havana, Icehouse, Juno, Kilo

### Discussion ###
If a service node is compromised, an attacker now has access to every
token that passes through that node. By default, a Keystone token can
be exchanged for another token, and there is no restriction on scoping
of the new token. With the trust API, these tokens can be used to
delegate roles between the original user and a new user.

Trusts allow a user to set up a long term delegation that permits
another user to perform operations on their behalf. While tokens
created through trusts are limited in what they can do, the
limitations are only on things like changing passwords or creating
new tokens. This would grant an attacker access to all the operations
available to the originating user in their projects, and the roles that
are delegated through the trust.

There are other ways that a compromised token can be misused beyond the
methods described here. This note addresses one possible path for
vulnerabilities based on the unintended access that could be gained
from trusts created through intercepted tokens.

This behavior is intrinsic to the bearer token model used within
Keystone / OpenStack.

### Recommended Actions ###
The following steps are recommended to reduce exposure, based on the
granularity and accepted level of risk in a given environment:

1. Monitor and audit trust creation events within your environment.
Keystone emits notifications on trust creation and deletion that are
accessible through system logs or, if configured, the CADF
data/security/trust resource extension.

2. Offer roles that cannot create trusts / delegate permissions /
assign new roles via Keystone to users. This limits the vector of
attack to compromising Keystone directly or man-in-the-middle capture
of a separate token that has the authorization to create
trusts/delegate/assign roles.

3. Retain the default token lifespan of 1 hour.  Some workloads require
a single token for the whole run and take more than one hour, so some
installations have increased the token lifespan back to the old value of
24 hours, increasing their exposure to this issue.
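
Recommendation 3 corresponds to the token expiration setting in
keystone.conf; a sketch (3600 seconds is the 1-hour default referenced
above):

```ini
# keystone.conf -- sketch; 3600 seconds (1 hour) is the default lifespan
[token]
expiration = 3600
```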

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0053
Original LaunchPad Bug : https://bugs.launchpad.net/keystone/+bug/1455582
OpenStack Security ML : openstack-security at lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Hierarchical Roles : https://review.openstack.org/#/c/125704
Policy by URL : https://review.openstack.org/#/c/192422
Unified policy file : https://review.openstack.org/#/c/134656
Endpoint_ID from URL : https://review.openstack.org/#/c/199844
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJWAv74AAoJEJa+6E7Ri+EV3IcH/jv2OGH4fcPz6ftTLbvDgS2T
+5j+Os43ME5KRPIzqgcsQwga3Vse8dSIf8OAiJehqsfuB5wt/nmooFikE56WA/ah
m7fn6g20KmHdGF9EVBaOwhSBFStN9bGDffmR1tEdJ4Z/9rGDYQCOl3/KbUdXyLMr
/WrrBPu2MgeM9XcnyxN+fXhRWp4W2t5MmQCsXky14grtyY1hPmC03wZ98qUZR9CE
KT3UEmtLqG7rfy6UN8msndNeHTj2ZdWiZUc5Og2F/DROIh3KHAbHxl+oi/AqkbXX
ABoVGY2g0PSI1par25mYpOMX1D5k/Pe1DAcMfG07f1xvYwSZfieTDTCSL6yuwq8=
=O6MG
-----END PGP SIGNATURE-----


From dbelova at mirantis.com  Wed Sep 23 19:42:34 2015
From: dbelova at mirantis.com (Dina Belova)
Date: Wed, 23 Sep 2015 22:42:34 +0300
Subject: [openstack-dev] [Openstack-operators] [Large Deployments
 Team][Performance Team] New informal working group suggestion
In-Reply-To: <D75DC775-5E3E-44E9-BC86-EB724DE253F5@godaddy.com>
References: <D75DC775-5E3E-44E9-BC86-EB724DE253F5@godaddy.com>
Message-ID: <CACsCO2zJiUoNYCDWwn5V52dBmXjVq9k9uQNxXz2CrG_F5-e8Lg@mail.gmail.com>

Kris,

I've created an etherpad - we can fill it with data before the summit and
discuss it in Tokyo.
https://etherpad.openstack.org/p/openstack-performance-issues

Cheers,
Dina

On Wed, Sep 23, 2015 at 9:32 PM, Kris G. Lindgren <klindgren at godaddy.com>
wrote:

> Dina,
>
> Do we have a place to put things (etherpad) that we are seeing performance
> issues with?  I know we are seeing issues with CPU load under
> nova-conductor, as well as the neutron API timing out (it seems like it
> never responds to the request; there is no log entry on the neutron side).
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> From: Matt Van Winkle
> Date: Tuesday, September 22, 2015 at 7:46 AM
> To: Dina Belova, OpenStack Development Mailing List, "
> openstack-operators at lists.openstack.org"
> Subject: Re: [Openstack-operators] [Large Deployments Team][Performance
> Team] New informal working group suggestion
>
> Thanks, Dina!
>
> For context to the rest of the LDT folks, Dina reached out to me about
> working on this under our umbrella for now.  It made sense until we
> understand if it's a large enough thing to live as its own working group
> because most of us have various performance concerns too.  So, like Public
> Clouds, we'll have to figure out how to integrate this sub group.
>
> I suspect the time slot for Tokyo is already packed, so the work for the
> Performance subgroup may have to be informal or in other sessions, but I'll
> start working with Tom and the folks covering the session for me (since I
> won't be able to make it) on what we might be able to do.  I've also asked
> Dina to join the Oct meeting prior to the Summit so we can further discuss
> the sub team.
>
> Thanks!
> VW
>
> From: Dina Belova <dbelova at mirantis.com>
> Date: Tuesday, September 22, 2015 7:57 AM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>,
> "openstack-operators at lists.openstack.org" <
> openstack-operators at lists.openstack.org>
> Subject: [Large Deployments Team][Performance Team] New informal working
> group suggestion
>
> Hey, OpenStackers!
>
> I'm writing to propose to organise new informal team to work specifically
> on the OpenStack performance issues. This will be a sub team in already
> existing Large Deployments Team, and I suppose it will be a good idea to
> gather people interested in OpenStack performance in one room and identify
> what issues are worrying contributors, what can be done and share results
> of performance researches :)
>
> So please volunteer to take part in this initiative. I hope many people
> will be interested, and we'll be able to use the cross-project session
> slot <http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and
> hold a kick-off meeting.
>
> I would like to apologise for writing to two mailing lists at the same
> time, but I want to make sure that all possibly interested people will
> notice the email.
>
> Thanks and see you in Tokyo :)
>
> Cheers,
> Dina
>
> --
>
> Best regards,
>
> Dina Belova
>
> Senior Software Engineer
>
> Mirantis Inc.
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/a7180a45/attachment.html>

From dms at redhat.com  Wed Sep 23 19:44:53 2015
From: dms at redhat.com (David Moreau Simard)
Date: Wed, 23 Sep 2015 15:44:53 -0400
Subject: [openstack-dev] Mitaka travel tips ?
Message-ID: <CAH7C+Pq6eCJCvRXYs9C5AdzOdEJk+dAXX+Deu7-q+=e9iWsV_Q@mail.gmail.com>

There was a travel tips document for the Kilo summit in Paris [1].
Lots of helpful information in there that isn't covered on the OpenStack
Summit page [2], like where to get SIM cards.

Is there one for Mitaka yet? I can't find it.

[1] https://wiki.openstack.org/wiki/Design_Summit/Kilo/Travel_Tips
[2] https://www.openstack.org/summit/tokyo-2015/tokyo-and-travel/

David Moreau Simard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/aa89070b/attachment.html>

From john.griffith8 at gmail.com  Wed Sep 23 19:45:51 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Wed, 23 Sep 2015 13:45:51 -0600
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <5602FEB2.6030804@linux.vnet.ibm.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
Message-ID: <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>

On Wed, Sep 23, 2015 at 1:34 PM, Matt Riedemann <mriedem at linux.vnet.ibm.com>
wrote:

>
>
> On 9/23/2015 2:15 PM, Matt Riedemann wrote:
>
>>
>>
>> On 9/23/2015 1:46 PM, Ivan Kolodyazhny wrote:
>>
>>> Hi Matt,
>>>
>>> In Liberty, we introduced allow_availability_zone_fallback [1] option in
>>> Cinder config as fix for bug [2]. If you set this option, Cinder will
>>> create volume in a default AZ instead of set volume into the error state
>>>
>>> [1]
>>>
>>> https://github.com/openstack/cinder/commit/b85d2812a8256ff82934d150dbc4909e041d8b31
>>>
>>> [2] https://bugs.launchpad.net/cinder/+bug/1489575
>>>
>>> Regards,
>>> Ivan Kolodyazhny
>>>
>>> On Wed, Sep 23, 2015 at 9:00 PM, Matt Riedemann
>>> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
>>>
>>>     I came across bug 1496235 [1] today.  In this case the user is
>>>     booting an instance from a volume using source=image, so nova
>>>     actually does the volume create call to the volume API.  They are
>>>     booting the instance into a valid nova availability zone, but that
>>>     same AZ isn't defined in Cinder, so the volume create request fails
>>>     (since nova passes the instance AZ to cinder [2]).
>>>
>>>     I marked this as invalid given how the code works.
>>>
>>>     I'm posting here since I'm wondering if there are alternatives worth
>>>     pursuing.  For example, nova could get the list of AZs from the
>>>     volume API and if the nova AZ isn't in that list, don't provide it
>>>     on the volume create request.  That's essentially the same as first
>>>     creating the volume outside of nova and not specifying an AZ, then
>>>     when doing the boot from volume, provide the volume_id as the source.
>>>
>>>     The question is, is it worth doing that?  I'm not familiar enough
>>>     with how availability zones are meant to work between nova and
>>>     cinder so it's hard for me to have much of an opinion here.
>>>
>>>     [1] https://bugs.launchpad.net/nova/+bug/1496235
>>>     [2]
>>>
>>>
>>> https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L381-L383
>>>
>>>
>>>     --
>>>
>>>     Thanks,
>>>
>>>     Matt Riedemann
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>>
>>>     OpenStack Development Mailing List (not for usage questions)
>>>     Unsubscribe:
>>>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>
>>> <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> Sorry but that seems like a hack.
>>
>> I'm trying to figure out the relationship between AZs in nova and cinder
>> and so far no one seems to really know.  In the cinder IRC channel I was
>> told there isn't one, which would mean we shouldn't even try creating
>> the volume using the server instance AZ.
>>
>> Also, if there is no relationship, I was trying to figure out why there
>> is the cinder.cross_az_attach config option.  That was added in grizzly
>> [1].  I was thinking maybe it was a legacy artifact from nova-volume,
>> but that was dropped in grizzly.
>>
>> So is cinder.cross_az_attach even useful?
>>
>> [1] https://review.openstack.org/#/c/21672/
>>
>>
> The plot thickens.
>
> I was checking to see what change was made to start passing the server
> instance az on the volume create call during boot from volume, and that was
> [1] which was added in kilo to fix a bug where boot from volume into a nova
> az will fail if cinder.cross_az_attach=False and storage_availability_zone
> is set in cinder.conf.
>
> So I guess we can't just stop passing the instance az to the volume create
> call.
>
> But what I'd really like to know is how this is all used between cinder
> and nova, or was this all some work done as part of a larger effort that
> was never completed?  Basically, can we deprecate the
> cinder.cross_az_attach config option in nova and start decoupling this code?
>
> [1] https://review.openstack.org/#/c/157041/
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
To be honest this is probably my fault: AZs were pulled in as part of the
nova-volume migration to Cinder and then just sort of died.  Quite frankly
I wasn't sure "what" to do with them, but I brought over the concept and
the zones that existed in Nova-Volume.  It's been an issue since day 1 of
Cinder, and as you note there are little hacks here and there over the
years to do different things.

I think your question about whether they should be there at all is a good
one.  We have had some interest from folks lately who want to couple Nova
and Cinder AZs (I'm really not sure of any details or use cases here).

My opinion would be that, until somebody proposes a clear use case and a
need that actually works, we should consider deprecating it.

While we're on the subject (kinda), I've never been very fond of having
Nova create the volume during the boot process either; there are a number
of things that can go wrong here (timeouts are almost guaranteed for a
"real" image) and, last I looked, some things are missing, like volume
type selection.

We do have a proposal to talk about this at the Summit, so maybe we'll have
a decent primer before we get there :)

Thanks,

John
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/8f12cafc/attachment.html>

From mriedem at linux.vnet.ibm.com  Wed Sep 23 19:55:35 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 14:55:35 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
Message-ID: <560303B7.6080906@linux.vnet.ibm.com>



On 9/23/2015 2:45 PM, John Griffith wrote:
>
>
> On Wed, Sep 23, 2015 at 1:34 PM, Matt Riedemann
> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
>
>     <snip>
>
> To be honest this is probably my fault, AZ's were pulled in as part of
> the nova-volume migration to Cinder and just sort of died.  Quite
> frankly I wasn't sure "what" to do with them but brought over the
> concept and the zones that existing in Nova-Volume.  It's been an issue
> since day 1 of Cinder, and as you note there are little hacks here and
> there over the years to do different things.
>
> I think your question about whether they should be there at all or not
> is a good one.  We have had some interest from folks lately that want to
> couple Nova and Cinder AZ's (I'm really not sure of any details or
> use-cases here).
>
> My opinion would be until somebody proposes a clear use case and need
> that actually works that we consider deprecating it.
>
> While we're on the subject (kinda) I've never been a very fond of having
> Nova create the volume during boot process either; there's a number of
> things that go wrong here (timeouts almost guaranteed for a "real"
> image) and some things that are missing last I looked like type
> selection etc.
>
> We do have a proposal to talk about this at the Summit, so maybe we'll
> have a descent primer before we get there :)
>
> Thanks,
>
> John
>
>
>

Heh, so when I just asked in the cinder channel if we can just deprecate 
nova boot from volume with source=(image|snapshot|blank) (which 
automatically creates the volume and polls for it to be available) and 
then add a microversion that doesn't allow it, I was half joking, but I 
see we're on the same page.  This scenario seems to introduce a lot of 
orchestration work that nova shouldn't necessarily be in the business of 
handling.
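For what it's worth, the alternative being described (create the volume via
the volume API, poll until it's available, then boot with volume_id as the
source) is exactly the orchestration loop nova currently buries inside the
boot request.  A minimal sketch, using hypothetical callables standing in
for the real cinder/nova client calls (none of these names are actual
OpenStack APIs):

```python
import time

def create_volume_then_boot(create_volume, get_volume_status, boot_server,
                            timeout=300, interval=2):
    """Client-side orchestration: create a volume, wait for it to reach
    'available', then boot a server from it.  This is the work nova does
    internally today for source=image boot-from-volume requests."""
    volume_id = create_volume()
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_volume_status(volume_id)
        if status == 'available':
            # Volume is ready; hand it to the compute API as the boot source.
            return boot_server(volume_id)
        if status == 'error':
            raise RuntimeError('volume %s went to error state' % volume_id)
        time.sleep(interval)
    raise TimeoutError('volume %s not available after %ss' % (volume_id, timeout))
```

With a "real" image the copy into the volume dominates this loop, which is
why timeouts are so likely when nova does it inline during boot.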

-- 

Thanks,

Matt Riedemann



From sbauza at redhat.com  Wed Sep 23 19:57:58 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 23 Sep 2015 21:57:58 +0200
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
Message-ID: <56030446.7060901@redhat.com>



Le 23/09/2015 21:45, John Griffith a écrit :
>
>
> On Wed, Sep 23, 2015 at 1:34 PM, Matt Riedemann 
> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
>
>     <snip>
>
> To be honest this is probably my fault, AZ's were pulled in as part
> of the nova-volume migration to Cinder and just sort of died. Quite 
> frankly I wasn't sure "what" to do with them but brought over the 
> concept and the zones that existing in Nova-Volume.  It's been an 
> issue since day 1 of Cinder, and as you note there are little hacks 
> here and there over the years to do different things.
>
> I think your question about whether they should be there at all or not 
> is a good one.  We have had some interest from folks lately that want 
> to couple Nova and Cinder AZ's (I'm really not sure of any details or 
> use-cases here).
>
> My opinion would be until somebody proposes a clear use case and need 
> that actually works that we consider deprecating it.
>

Given what AZs currently are in Nova (explicit, user-visible aggregates 
of compute nodes, see [1]), I tend to say that there is no reason to 
provide the AZ information to Cinder: AZs are not failure domains, just 
very specific placement information used for scheduling instances.

Based on some conversation we had on IRC, MHO is that we should at least 
deprecate the config flag that Matt discussed above, and we should also 
plan to remove the AZ information from the Cinder call we make in Nova 
when creating a volume.

I'll leave it to the Cinder community to discuss whether to keep AZs or 
not, but I also think that any reboot of the idea would necessarily need 
a renaming, no longer using the "availability zone" wording.


[1] 
http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs
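To illustrate the point that a Nova AZ is just aggregate metadata rather
than a failure domain, resolving a host's AZ roughly amounts to the lookup
below (the helper name and data shapes are assumptions for illustration,
not Nova's actual internals):

```python
def host_availability_zone(host, aggregates, default_az='nova'):
    """Sketch of how a Nova AZ resolves: it is simply a host aggregate
    carrying 'availability_zone' metadata, with a configured default
    used when the host belongs to no such aggregate."""
    for agg in aggregates:
        if host in agg['hosts'] and 'availability_zone' in agg['metadata']:
            return agg['metadata']['availability_zone']
    # Hosts outside any AZ-tagged aggregate fall back to the default AZ.
    return default_az
```

Nothing in that lookup says anything about shared power, network, or
storage, which is why treating it as a failure domain on the Cinder side
is misleading.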

> While we're on the subject (kinda) I've never been a very fond of 
> having Nova create the volume during boot process either; there's a 
> number of things that go wrong here (timeouts almost guaranteed for a 
> "real" image) and some things that are missing last I looked like type 
> selection etc.
>
> We do have a proposal to talk about this at the Summit, so maybe we'll 
> have a descent primer before we get there :)
>
> Thanks,
>
> John
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/a3c43e66/attachment.html>

From zzelle at gmail.com  Wed Sep 23 20:11:31 2015
From: zzelle at gmail.com (ZZelle)
Date: Wed, 23 Sep 2015 22:11:31 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <5602ECBF.2020900@dague.net>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
 <5602ECBF.2020900@dague.net>
Message-ID: <CAMS-DWgN-91_R-BT4Cw_8Owz7TV0F+7xEhZzHpVHP1X5GY4T1Q@mail.gmail.com>

Hi


> Ok, how exactly does that work? Because it seems like
> oslo_middleware.ssl is only changing the protocol if the proxy sets it.
>
> But the host in the urls will still be the individual host, which isn't
> the proxy hostname/ip. Sorry if I'm being daft here, just want to
> understand how that flow ends up working.
>

The Host and X-Forwarded-Proto headers are provided by the proxy to the
service; they are either set by the proxy itself or forwarded along
unchanged (when there is a chain of proxies).


> Will that cover the case of webob's request.application_uri? If so I
> think that covers the REST documents in at least Nova (one good data
> point, and one that I know has been copied around). At least as far as
> the protocol is concerned, it's still got a potential url issue.


I'll let Julien answer :)


> It also looks like there are new standards for Forwarded headers, so the
> middleware should probably support those as well.
> http://tools.ietf.org/html/rfc7239.
>

Good to know! I can update SSLMiddleware to handle it, as the RFC uses the
format:

  "Forwarded: proto=https"

which is different from the de facto standard (supported by SSLMiddleware):

  "X-Forwarded-Proto: https"
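A minimal sketch of resolving the client-facing scheme from either header
form (this is an illustration, not the actual oslo_middleware.ssl code;
the helper name is made up):

```python
def forwarded_proto(headers):
    """Return the original request scheme behind an SSL-terminating proxy,
    preferring the RFC 7239 'Forwarded' header and falling back to the
    de facto 'X-Forwarded-Proto' header.

    Simplified: a real RFC 7239 value is a comma-separated list of
    elements (one per proxy), each a semicolon-separated set of pairs
    (for=, by=, host=, proto=); here we only read the first element.
    """
    fwd = headers.get('Forwarded')
    if fwd:
        first = fwd.split(',')[0]  # element added by the proxy nearest the client
        for pair in first.split(';'):
            name, _, value = pair.strip().partition('=')
            if name.lower() == 'proto':
                return value.strip('"').lower()
    xfp = headers.get('X-Forwarded-Proto')
    if xfp:
        # Some proxies append values comma-separated; take the first.
        return xfp.split(',')[0].strip().lower()
    return None
```

Supporting both lets the middleware work with RFC 7239-compliant proxies
and with the widely deployed X-Forwarded-Proto convention at the same time.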

Cédric
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/7e731b73/attachment.html>

From andrew at lascii.com  Wed Sep 23 20:12:51 2015
From: andrew at lascii.com (Andrew Laski)
Date: Wed, 23 Sep 2015 16:12:51 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560303B7.6080906@linux.vnet.ibm.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com>
Message-ID: <20150923201251.GA8745@crypt>

On 09/23/15 at 02:55pm, Matt Riedemann wrote:
>
>
><snip>
>
>Heh, so when I just asked in the cinder channel if we can just 
>deprecate nova boot from volume with source=(image|snapshot|blank) 
>(which automatically creates the volume and polls for it to be 
>available) and then add a microversion that doesn't allow it, I was 
>half joking, but I see we're on the same page.  This scenario seems 
>to introduce a lot of orchestration work that nova shouldn't 
>necessarily be in the business of handling.

I am very much in support of this.  This has been a source of 
frustration for our users because it is prone to timeouts and to 
failures we can't properly expose to users.  There are much better 
places than Nova to handle the orchestration of creating a volume and 
then booting from it.
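As a rough illustration of what that orchestration involves, here is a sketch of the create-then-poll step. The polling helper below is generic, runnable code; the cinder/nova client calls in the trailing comment are assumed method names for illustration, not verified against the actual clients:

```python
import time

def wait_for_status(get_status, target="available", timeout=300, interval=2):
    """Poll get_status() until it returns `target`; fail on "error" or timeout.

    This is essentially the polling loop nova runs internally today when it
    creates the boot volume itself during boot from volume.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == target:
            return status
        if status == "error":
            raise RuntimeError("volume went into error state")
        time.sleep(interval)
    raise TimeoutError("timed out waiting for status %r" % target)

# Hypothetical client-side flow (API names assumed, not verified):
#   vol = cinder.volumes.create(size=10, imageRef=image_id)
#   wait_for_status(lambda: cinder.volumes.get(vol.id).status)
#   nova.servers.create(name, image=None, flavor=flavor,
#                       block_device_mapping_v2=[{"uuid": vol.id,
#                                                 "source_type": "volume",
#                                                 "destination_type": "volume",
#                                                 "boot_index": 0}])
```

Moving this loop out of nova would let an orchestrator (or the client) own the timeout policy instead of nova guessing one.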

>
>-- 
>
>Thanks,
>
>Matt Riedemann
>
>


From andrew at lascii.com  Wed Sep 23 20:15:54 2015
From: andrew at lascii.com (Andrew Laski)
Date: Wed, 23 Sep 2015 16:15:54 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
Message-ID: <20150923201554.GB8745@crypt>

On 09/23/15 at 01:45pm, John Griffith wrote:
<snip>
>
>To be honest this is probably my fault, AZ's were pulled in as part of the
>nova-volume migration to Cinder and just sort of died.  Quite frankly I
>wasn't sure "what" to do with them but brought over the concept and the
>zones that existed in Nova-Volume.  It's been an issue since day 1 of
>Cinder, and as you note there are little hacks here and there over the
>years to do different things.
>
>I think your question about whether they should be there at all or not is a
>good one.  We have had some interest from folks lately that want to couple
>Nova and Cinder AZ's (I'm really not sure of any details or use-cases here).
>
>My opinion would be that until somebody proposes a clear use case and a
>need that actually works, we should consider deprecating it.

I've heard some discussion about trying to use coupled AZs in order to 
schedule volumes close to instances.  However I think that is occurring 
because it's possible to do that, not because that would be a good way 
to handle the coordinated scheduling problem.


From mgagne at internap.com  Wed Sep 23 20:30:14 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Wed, 23 Sep 2015 16:30:14 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <20150923201251.GA8745@crypt>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
Message-ID: <56030BD6.10500@internap.com>

On 2015-09-23 4:12 PM, Andrew Laski wrote:
> On 09/23/15 at 02:55pm, Matt Riedemann wrote:
>>
>> Heh, so when I just asked in the cinder channel if we can just
>> deprecate nova boot from volume with source=(image|snapshot|blank)
>> (which automatically creates the volume and polls for it to be
>> available) and then add a microversion that doesn't allow it, I was
>> half joking, but I see we're on the same page.  This scenario seems to
>> introduce a lot of orchestration work that nova shouldn't necessarily
>> be in the business of handling.
> 
> I am very much in support of this.  This has been a source of
> frustration for our users because it is prone to timeouts and to
> failures we can't properly expose to users.  There are much better
> places than Nova to handle the orchestration of creating a volume and
> then booting from it.
> 

Unfortunately, this is a feature our users *heavily* rely on and we
worked very hard to make it happen. We had a private patch on our side
for years to optimize boot-from-volume before John Griffith came up with
an upstream solution for SolidFire [2] and others with a generic
solution [3] [4].

Being able to "nova boot" and have everything done for you is awesome.
Just see what Monty Taylor mentioned in his thread about sane default
networking [1]. Having orchestration on the client side is just
something our users don't want to have to do and often complain about.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074527.html
[2] https://review.openstack.org/#/c/142859/
[3] https://review.openstack.org/#/c/195795/
[4] https://review.openstack.org/#/c/201754/

-- 
Mathieu


From mriedem at linux.vnet.ibm.com  Wed Sep 23 20:29:56 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 23 Sep 2015 15:29:56 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <56030446.7060901@redhat.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <56030446.7060901@redhat.com>
Message-ID: <56030BC4.6080409@linux.vnet.ibm.com>



On 9/23/2015 2:57 PM, Sylvain Bauza wrote:
>
>
> Le 23/09/2015 21:45, John Griffith a ?crit :
>>
>>
>> On Wed, Sep 23, 2015 at 1:34 PM, Matt Riedemann
>> <mriedem at linux.vnet.ibm.com <mailto:mriedem at linux.vnet.ibm.com>> wrote:
>>
>>
>>
>>     On 9/23/2015 2:15 PM, Matt Riedemann wrote:
>>
>>
>>
>>         On 9/23/2015 1:46 PM, Ivan Kolodyazhny wrote:
>>
>>             Hi Matt,
>>
>>             In Liberty, we introduced allow_availability_zone_fallback
>>             [1] option in
>>             Cinder config as a fix for bug [2]. If you set this option,
>>             Cinder will
>>             create the volume in a default AZ instead of setting the
>>             volume to the error state
>>
>>             [1]
>>             https://github.com/openstack/cinder/commit/b85d2812a8256ff82934d150dbc4909e041d8b31
>>
>>             [2] https://bugs.launchpad.net/cinder/+bug/1489575
>>
>>             Regards,
>>             Ivan Kolodyazhny
>>
>>             On Wed, Sep 23, 2015 at 9:00 PM, Matt Riedemann
>>             <mriedem at linux.vnet.ibm.com
>>             <mailto:mriedem at linux.vnet.ibm.com>
>>             <mailto:mriedem at linux.vnet.ibm.com
>>             <mailto:mriedem at linux.vnet.ibm.com>>> wrote:
>>
>>                 I came across bug 1496235 [1] today.  In this case the
>>             user is
>>                 booting an instance from a volume using source=image,
>>             so nova
>>                 actually does the volume create call to the volume
>>             API.  They are
>>                 booting the instance into a valid nova availability
>>             zone, but that
>>                 same AZ isn't defined in Cinder, so the volume create
>>             request fails
>>                 (since nova passes the instance AZ to cinder [2]).
>>
>>                 I marked this as invalid given how the code works.
>>
>>                 I'm posting here since I'm wondering if there are
>>             alternatives worth
>>                 pursuing.  For example, nova could get the list of AZs
>>             from the
>>                 volume API and if the nova AZ isn't in that list,
>>             don't provide it
>>                 on the volume create request.  That's essentially the
>>             same as first
>>                 creating the volume outside of nova and not specifying
>>             an AZ, then
>>                 when doing the boot from volume, provide the volume_id
>>             as the source.
>>
>>                 The question is, is it worth doing that?  I'm not
>>             familiar enough
>>                 with how availability zones are meant to work between
>>             nova and
>>                 cinder so it's hard for me to have much of an opinion
>>             here.
>>
>>                 [1] https://bugs.launchpad.net/nova/+bug/1496235
>>                 [2]
>>
>>             https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L381-L383
>>
>>
>>                 --
>>
>>                 Thanks,
>>
>>                 Matt Riedemann
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>         Sorry but that seems like a hack.
>>
>>         I'm trying to figure out the relationship between AZs in nova
>>         and cinder
>>         and so far no one seems to really know.  In the cinder IRC
>>         channel I was
>>         told there isn't one, which would mean we shouldn't even try
>>         creating
>>         the volume using the server instance AZ.
>>
>>         Also, if there is no relationship, I was trying to figure out
>>         why there
>>         is the cinder.cross_az_attach config option.  That was added
>>         in grizzly
>>         [1].  I was thinking maybe it was a legacy artifact from
>>         nova-volume,
>>         but that was dropped in grizzly.
>>
>>         So is cinder.cross_az_attach even useful?
>>
>>         [1] https://review.openstack.org/#/c/21672/
>>
>>
>>     The plot thickens.
>>
>>     I was checking to see what change was made to start passing the
>>     server instance az on the volume create call during boot from
>>     volume, and that was [1] which was added in kilo to fix a bug
>>     where boot from volume into a nova az will fail if
>>     cinder.cross_az_attach=False and storage_availability_zone is set
>>     in cinder.conf.
>>
>>     So I guess we can't just stop passing the instance az to the
>>     volume create call.
>>
>>     But what I'd really like to know is how this is all used between
>>     cinder and nova, or was this all some work done as part of a
>>     larger effort that was never completed? Basically, can we
>>     deprecate the cinder.cross_az_attach config option in nova and
>>     start decoupling this code?
>>
>>     [1] https://review.openstack.org/#/c/157041/
>>
>>
>>     --
>>
>>     Thanks,
>>
>>     Matt Riedemann
>>
>>
>>
>> To be honest this is probably my fault, AZ's were pulled in as part
>> of the nova-volume migration to Cinder and just sort of died. Quite
>> frankly I wasn't sure "what" to do with them but brought over the
>> concept and the zones that existed in Nova-Volume.  It's been an
>> issue since day 1 of Cinder, and as you note there are little hacks
>> here and there over the years to do different things.
>>
>> I think your question about whether they should be there at all or not
>> is a good one.  We have had some interest from folks lately that want
>> to couple Nova and Cinder AZ's (I'm really not sure of any details or
>> use-cases here).
>>
>> My opinion would be that until somebody proposes a clear use case and
>> a need that actually works, we should consider deprecating it.
>>
>
> Given what are currently AZs in Nova (explicit user-visible aggregates
> of compute nodes, see [1]), I tend to say that there is no reason to
> provide the AZ information to Cinder because AZs are not failure
> domains, just very specific placement information used for scheduling
> instances.
>
> Based on some conversation we had on IRC, MHO is that we should at least
> deprecate the config flag that Matt discussed above and we should also
> plan to remove the AZ information from the Cinder call we do in Nova
> when creating a volume.
>
> I leave it to the Cinder community to discuss the plans about having AZs
> or not, but I also think that any reboot of the idea would necessarily
> need a renaming and should no longer use the "availability zone" wording.
>
>
> [1]
> http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs
>
>> While we're on the subject (kinda), I've never been very fond of
>> having Nova create the volume during the boot process either; there
>> are a number of things that go wrong here (timeouts almost guaranteed
>> for a "real" image) and, last I looked, some things that are missing,
>> like type selection etc.
>>
>> We do have a proposal to talk about this at the Summit, so maybe we'll
>> have a decent primer before we get there :)
>>
>> Thanks,
>>
>> John
>>
>>
>
>
>
>

Here is the change to deprecate the cinder.cross_az_attach option:

https://review.openstack.org/#/c/226977/

-- 

Thanks,

Matt Riedemann



From sbauza at redhat.com  Wed Sep 23 20:31:18 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Wed, 23 Sep 2015 22:31:18 +0200
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <20150923201554.GB8745@crypt>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <20150923201554.GB8745@crypt>
Message-ID: <56030C16.4040805@redhat.com>



Le 23/09/2015 22:15, Andrew Laski a ?crit :
> On 09/23/15 at 01:45pm, John Griffith wrote:
> <snip>
>>
>> To be honest this is probably my fault, AZ's were pulled in as part 
>> of the
>> nova-volume migration to Cinder and just sort of died.  Quite frankly I
>> wasn't sure "what" to do with them but brought over the concept and the
>> zones that existed in Nova-Volume.  It's been an issue since day 1 of
>> Cinder, and as you note there are little hacks here and there over the
>> years to do different things.
>>
>> I think your question about whether they should be there at all or 
>> not is a
>> good one.  We have had some interest from folks lately that want to 
>> couple
>> Nova and Cinder AZ's (I'm really not sure of any details or use-cases 
>> here).
>>
>> My opinion would be that until somebody proposes a clear use case and
>> a need that actually works, we should consider deprecating it.
>
> I've heard some discussion about trying to use coupled AZs in order to 
> schedule volumes close to instances.  However I think that is 
> occurring because it's possible to do that, not because that would be 
> a good way to handle the coordinated scheduling problem.
>

So, while I think it's understandable that this is being done, since Nova 
AZs are related to compute nodes and Cinder AZs could be related to 
volumes, I'd tend to ask Cinder to rename the AZ concept to something 
else that is less confusing.

Also, there is a long story about trying to have Cinder provide 
resources to the Nova scheduler so that we could have volume affinity 
when booting, so I would prefer to go that way instead of trying to 
misuse AZs.

I'm about to ask for a Nova/Cinder/Neutron room at the Summit to discuss 
how Cinder and Neutron could provide resources to the scheduler, I'd 
love to get feedback from those teams there.

  -Sylvain




From andrew at lascii.com  Wed Sep 23 20:50:20 2015
From: andrew at lascii.com (Andrew Laski)
Date: Wed, 23 Sep 2015 16:50:20 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <56030BD6.10500@internap.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com>
Message-ID: <20150923205020.GC8745@crypt>

On 09/23/15 at 04:30pm, Mathieu Gagn? wrote:
>On 2015-09-23 4:12 PM, Andrew Laski wrote:
>> On 09/23/15 at 02:55pm, Matt Riedemann wrote:
>>>
>>> Heh, so when I just asked in the cinder channel if we can just
>>> deprecate nova boot from volume with source=(image|snapshot|blank)
>>> (which automatically creates the volume and polls for it to be
>>> available) and then add a microversion that doesn't allow it, I was
>>> half joking, but I see we're on the same page.  This scenario seems to
>>> introduce a lot of orchestration work that nova shouldn't necessarily
>>> be in the business of handling.
>>
>> I am very much in support of this.  This has been a source of
>> frustration for our users because it is prone to timeouts and to
>> failures we can't properly expose to users.  There are much better
>> places than Nova to handle the orchestration of creating a volume and
>> then booting from it.
>>
>
>Unfortunately, this is a feature our users *heavily* rely on and we
>worked very hard to make it happen. We had a private patch on our side
>for years to optimize boot-from-volume before John Griffith came up with
>an upstream solution for SolidFire [2] and others with a generic
>solution [3] [4].
>
>Being able to "nova boot" and have everything done for you is awesome.
>Just see what Monty Taylor mentioned in his thread about sane default
>networking [1]. Having orchestration on the client side is just
>something our users don't want to have to do and often complain about.

At the risk of getting too off-topic, I think there's an alternate solution to 
doing this in Nova or on the client side.  I think we're missing some 
sort of OpenStack API and service that can handle this.  Nova is a low 
level infrastructure API and service, it is not designed to handle these 
orchestrations.  I haven't checked in on Heat in a while but perhaps 
this is a role that it could fill.

I think that too many people consider Nova to be *the* OpenStack API 
when considering instances/volumes/networking/images and that's not 
something I would like to see continue.  Or at the very least I would 
like to see a split between the orchestration/proxy pieces and the 
"manage my VM/container/baremetal" bits.

>
>[1]
>http://lists.openstack.org/pipermail/openstack-dev/2015-September/074527.html
>[2] https://review.openstack.org/#/c/142859/
>[3] https://review.openstack.org/#/c/195795/
>[4] https://review.openstack.org/#/c/201754/
>
>-- 
>Mathieu
>


From aschultz at mirantis.com  Wed Sep 23 21:10:02 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Wed, 23 Sep 2015 16:10:02 -0500
Subject: [openstack-dev] [puppet][swift] Applying security
	recommendations within puppet-swift
In-Reply-To: <CABzFt8Pxd9kAf=TNv8t1FyT5vSk4iewbLJqeR8Qp6uV4P-k43A@mail.gmail.com>
References: <CABzFt8Pxd9kAf=TNv8t1FyT5vSk4iewbLJqeR8Qp6uV4P-k43A@mail.gmail.com>
Message-ID: <CABzFt8MVCwht_7JNDe5MVMO7ju4OXyWiV-WAkBBPhFFRB66VLQ@mail.gmail.com>

On Wed, Sep 23, 2015 at 2:32 PM, Alex Schultz <aschultz at mirantis.com> wrote:
> Hey all,
>
> So as part of the Puppet mid-cycle, we did bug triage.  One of the
> bugs that was looked into was bug 1289631[0].  This bug is about
> applying the recommendations from the security guide[1] within the
> puppet-swift module.  So I'm sending a note out to get other feedback
> on whether this is a good idea or not.  Should we be applying these
> types of security items within the puppet modules by default? Should we make
> this optional?  Thoughts?
>
>
> Thanks,
> -Alex
>
>
> [0] https://bugs.launchpad.net/puppet-swift/+bug/1289631
> [1] http://docs.openstack.org/security-guide/object-storage.html#securing-services-general

Also, on the puppet side of this conversation, the change for the
security items [0] seems to conflict with bug 1458915 [1], which is
about removing the POSIX users/groups/file modes.  So which direction
should we go?

[0] https://review.openstack.org/#/c/219883/
[1] https://bugs.launchpad.net/puppet-swift/+bug/1458915


From tdecacqu at redhat.com  Wed Sep 23 21:22:09 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Wed, 23 Sep 2015 21:22:09 +0000
Subject: [openstack-dev] [elections] Last day to elect your PTL for Cinder,
 Glance, Ironic, Keystone, Mistral, Neutron and Oslo!
Message-ID: <56031801.5000702@redhat.com>

Hello Cinder, Glance, Ironic, Keystone, Mistral, Neutron and Oslo
contributors,

Just a quick reminder that elections are closing soon, if you haven't
already you should use your right to vote and pick your favourite candidate!

Thanks for your time,
Tristan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/8584d81a/attachment.pgp>

From tdecacqu at redhat.com  Wed Sep 23 21:36:35 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Wed, 23 Sep 2015 21:36:35 +0000
Subject: [openstack-dev] [elections] Last day to elect your PTL for
 Cinder, Glance, Ironic, Keystone, Mistral, Neutron and Oslo!
In-Reply-To: <56031801.5000702@redhat.com>
References: <56031801.5000702@redhat.com>
Message-ID: <56031B63.1010000@redhat.com>

On 09/23/2015 09:22 PM, Tristan Cacqueray wrote:
> Hello Cinder, Glance, Ironic, Keystone, Mistral, Neutron and Oslo
> contributors,
> 
> Just a quick reminder that elections are closing soon, if you haven't
> already you should use your right to vote and pick your favourite candidate!
> 

The start of the election period was delayed by approximately 24 hours,
however we overlooked that when announcing the close date for the
election [1].

To ensure we conduct a valid election as outlined in the OpenStack
charter [2], we are extending the voting deadline until
2015-09-25 23:59 UTC [3].

Thanks for your time and understanding.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074938.html
[2]
http://git.openstack.org/cgit/openstack/governance/tree/reference/charter.rst#n97
[3] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150925T2359


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/eb5ceea6/attachment.pgp>

From mgagne at internap.com  Wed Sep 23 21:43:26 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Wed, 23 Sep 2015 17:43:26 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <20150923205020.GC8745@crypt>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
Message-ID: <56031CFE.2020906@internap.com>

On 2015-09-23 4:50 PM, Andrew Laski wrote:
> On 09/23/15 at 04:30pm, Mathieu Gagn? wrote:
>> On 2015-09-23 4:12 PM, Andrew Laski wrote:
>>> On 09/23/15 at 02:55pm, Matt Riedemann wrote:
>>>>
>>>> Heh, so when I just asked in the cinder channel if we can just
>>>> deprecate nova boot from volume with source=(image|snapshot|blank)
>>>> (which automatically creates the volume and polls for it to be
>>>> available) and then add a microversion that doesn't allow it, I was
>>>> half joking, but I see we're on the same page.  This scenario seems to
>>>> introduce a lot of orchestration work that nova shouldn't necessarily
>>>> be in the business of handling.
>>>
>>> I am very much in support of this.  This has been a source of
>>> frustration for our users because it is prone to timeouts and to
>>> failures we can't properly expose to users.  There are much better
>>> places than Nova to handle the orchestration of creating a volume and
>>> then booting from it.
>>>
>>
>> Unfortunately, this is a feature our users *heavily* rely on and we
>> worked very hard to make it happen. We had a private patch on our side
>> for years to optimize boot-from-volume before John Griffith came up with
>> an upstream solution for SolidFire [2] and others with a generic
>> solution [3] [4].
>>
>> Being able to "nova boot" and have everything done for you is awesome.
>> Just see what Monty Taylor mentioned in his thread about sane default
>> networking [1]. Having orchestration on the client side is just
>> something our users don't want to have to do and often complain about.
> 
> At the risk of getting too off-topic, I think there's an alternate solution to
> doing this in Nova or on the client side.  I think we're missing some
> sort of OpenStack API and service that can handle this.  Nova is a low
> level infrastructure API and service, it is not designed to handle these
> orchestrations.  I haven't checked in on Heat in a while but perhaps
> this is a role that it could fill.
> 
> I think that too many people consider Nova to be *the* OpenStack API
> when considering instances/volumes/networking/images and that's not
> something I would like to see continue.  Or at the very least I would
> like to see a split between the orchestration/proxy pieces and the
> "manage my VM/container/baremetal" bits.
> 

"Too many people" happens to include a lot of 3rd-party tools supporting
OpenStack, which our users complain a lot about. Just see all the
possible ways to get an external IP [5]. Introducing yet another service
would increase the pain for our users, who would see even more of their
tools and products stop working.

Just see how EC2 is doing it [6]: you won't see them suggest using yet
another service to orchestrate what I consider a fundamental feature, "I
wish to boot an instance on a volume".

The current ease to boot from volume is THE selling feature our users
want and heavily/actively use. We fought very hard to make it work and
reading about how it should be removed is frustrating.

Issues we identified shouldn't be a reason to drop this feature. Other
providers are making it work and I don't see why we couldn't. I'm
convinced we can do better.

[5]
https://github.com/openstack-infra/shade/blob/03c1556a12aabfc21de60a9fac97aea7871485a3/shade/meta.py#L106-L173
[6]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html

Mathieu

>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074527.html
>>
>> [2] https://review.openstack.org/#/c/142859/
>> [3] https://review.openstack.org/#/c/195795/
>> [4] https://review.openstack.org/#/c/201754/
>>
>> -- 
>> Mathieu



From julien at danjou.info  Wed Sep 23 21:51:35 2015
From: julien at danjou.info (Julien Danjou)
Date: Wed, 23 Sep 2015 23:51:35 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
	proxies across all API services
In-Reply-To: <5602ECBF.2020900@dague.net> (Sean Dague's message of "Wed, 23
 Sep 2015 14:17:35 -0400")
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
 <5602ECBF.2020900@dague.net>
Message-ID: <m0a8scerdk.fsf@danjou.info>

On Wed, Sep 23 2015, Sean Dague wrote:

> Ok, how exactly does that work? Because it seems like
> oslo_middleware.ssl is only changing the protocol if the proxy sets it.
>
> But the host in the urls will still be the individual host, which isn't
> the proxy hostname/ip. Sorry if I'm being daft here, just want to
> understand how that flow ends up working.

No problem, you're not supposed to know everything. :)

As ZZelle said too, we can set the correct host and port expected by
honoring X-Forwarded-Host and X-Forwarded-Port, which are set by HTTP
proxies when they act as reverse-proxies and forward requests.
That will make the WSGI application unaware of the fact that there is a
reverse proxy in front of it. Magic!

We could do that in the SSL middleware (and maybe rename it?) or in
another middleware, and enable them by default. So we'd have that
working by default, which would be great IMHO.
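To make the idea concrete, here is a minimal sketch of such a middleware. The class name and exact behavior are my assumption of what the middleware would do, not the actual oslo_middleware implementation:

```python
class ForwardedMiddleware(object):
    """Rewrite the WSGI environ from X-Forwarded-* headers set by a
    reverse proxy, so URL reconstruction uses the proxy's scheme/host/port.

    Sketch only; the real oslo_middleware code may differ.
    """

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # WSGI exposes "X-Forwarded-Proto" as HTTP_X_FORWARDED_PROTO, etc.
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto:
            environ["wsgi.url_scheme"] = proto
        host = environ.get("HTTP_X_FORWARDED_HOST")
        if host:
            environ["HTTP_HOST"] = host
        port = environ.get("HTTP_X_FORWARDED_PORT")
        if port:
            environ["SERVER_PORT"] = port
        return self.app(environ, start_response)
```

With something like this in the pipeline, webob's request.application_uri would be rebuilt from the forwarded values rather than the backend's own host and port.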

> Will that cover the case of webob's request.application_uri? If so I
> think that covers the REST documents in at least Nova (one good data
> point, and one that I know has been copied around). At least as far as
> the protocol is concerned, it's still got a potential url issue.

That should work with any WSGI request, so I'd say yes.

>> The {public,admin}_endpoint are only useful in the case where you map
>> http://myproxy/identity -> http://mykeystone/ using a proxy
>> 
>> Because the prefix is not passed to Keystone. If you map 1:1 the path
>> part, we could also leverage X-Forwarded-Host and X-Forwarded-Port to
>> avoid having {public,admin}_endpoint options.
>
> It also looks like there are new standards for Forwarded headers, so the
> middleware should probably support those as well.
> http://tools.ietf.org/html/rfc7239.

Good point, we should update the middleware as needed.

Though they still do not cover the use case where the base URL differs
between the proxy and the application. I don't think it's a
widely used case, but still, there are at least two ways to support it:
1. Having a config option (like Keystone currently has)
2. Having a special header, e.g. X-Forwarded-BaseURL, set by the proxy
   that we would catch in our middleware and prepend to
   environment['SCRIPT_NAME'].

The two options are even compatible, though I'd say option 2 is probably
simpler in the long run and more "unified".

I'm willing to sort that out and come up with specs and patches if that 
can help. :)

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/201fcf91/attachment.pgp>

From emilien at redhat.com  Wed Sep 23 21:56:25 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 23 Sep 2015 17:56:25 -0400
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
Message-ID: <56032009.5020103@redhat.com>

Background
==========

Current rspec tests run against the modules listed in the .fixtures.yaml
file of each module.

* the file is not consistent across all modules
* it hardcodes module names & versions
* this approach does not allow us to use the "Depends-On" feature, which
would let us test cross-module patches

Proposal
========

* Like we do in beaker & integration jobs, use zuul-cloner to clone
modules in our CI jobs.
* Use r10k to prepare fixtures modules.
* Use Puppetfile hosted by openstack/puppet-openstack-integration

In that way:
* we will have consistent module names + versions across all modules
* the same Puppetfile will be used by unit/beaker/integration testing
* a patch that passes tests on your laptop will pass tests in upstream CI
* if you don't have zuul-cloner on your laptop, don't worry: it will fall
back to git clone. You won't have the Depends-On feature working on your
laptop (technically not possible).
* Your patch will still support Depends-On in OpenStack Infra for unit
tests. If you submit a patch in puppet-openstacklib that wrongly drops
something, you can send a patch in puppet-nova that will test it, and the
unit tests will fail.
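
The fallback logic could look something like this (sketched in Python for
brevity; the real job would most likely be a shell script, and `clone_module`
is a hypothetical name — the zuul-cloner flags shown should be checked against
the installed version):

```python
# Sketch of the tool-selection logic described above: prefer
# zuul-cloner when available (upstream CI, Depends-On aware),
# otherwise fall back to a plain git clone on a developer laptop.

import shutil


def clone_module(repo, dest):
    """Build the command used to fetch a fixture module."""
    cloner = shutil.which('zuul-cloner')
    if cloner:
        # zuul-cloner checks out the exact refs prepared by Zuul,
        # which is what makes Depends-On work in upstream CI.
        cmd = [cloner, 'https://git.openstack.org', repo,
               '--workspace', dest]
    else:
        # Plain clone: works locally, but no Depends-On support.
        cmd = ['git', 'clone',
               'https://git.openstack.org/' + repo, dest]
    return cmd  # returned rather than executed, to keep the sketch testable
```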

Drawbacks
=========
* cloning from .fixtures.yaml takes ~10 seconds
* using r10k + zuul-cloner takes ~50 seconds (more modules to clone)

I think the extra ~40 seconds is acceptable given the benefit.


Next steps
==========

* PoC in puppet-nova: https://review.openstack.org/#/c/226830/
* Patch openstack/puppet-modulesync-config to be consistent across all
our modules.

Bonus
=====
We might need (ASAP) a canary job for the puppet-openstack-integration
repository that would run tests on a puppet-* module (since we're using
the install_modules.sh & Puppetfile files in puppet-* modules).
Nothing has been done yet for this work.


Thoughts?
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/fec21cc5/attachment.pgp>

From xarses at gmail.com  Wed Sep 23 21:59:28 2015
From: xarses at gmail.com (Andrew Woodward)
Date: Wed, 23 Sep 2015 21:59:28 +0000
Subject: [openstack-dev] [fuel] Weekly IRC meeting 9/24
Message-ID: <CACEfbZijoUuxu03v5w0G6sLb1O2-U9_8_W676uH_1Jg14dcNLg@mail.gmail.com>

As a reminder, the weekly IRC meeting is scheduled for 16:00 UTC Tomorrow
in #openstack-meeting-alt

Please review meeting agenda and update if there is something you wish to
discuss.

https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
-- 

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/dc8d5719/attachment.html>

From davanum at gmail.com  Wed Sep 23 23:18:15 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Wed, 23 Sep 2015 19:18:15 -0400
Subject: [openstack-dev] [oslo][doc] Oslo doc sprint 9/24-9/25
In-Reply-To: <CAHjMoV0vx=mEcnsJHyq29hwoAA6oq9fwdXYqVFGttVCwLEeicg@mail.gmail.com>
References: <CAHjMoV0vx=mEcnsJHyq29hwoAA6oq9fwdXYqVFGttVCwLEeicg@mail.gmail.com>
Message-ID: <CANw6fcFgQUGnkUW7x7CkyDeza8c+VXAYtosXqOmvK6-ug6_hjg@mail.gmail.com>

Reminder: we are doing the Doc Sprint tomorrow. Please help out with
whatever item or items you can.

Thanks,
Dims

On Wed, Sep 16, 2015 at 5:40 PM, James Carey <bellerophon at flyinghorsie.com>
wrote:

> In order to improve the Oslo libraries documentation, the Oslo team is
> having a documentation sprint from 9/24 to 9/25.
>
> We'll kick things off at 14:00 UTC on 9/24 in the
> #openstack-oslo-docsprint IRC channel and we'll use an etherpad [0].
>
> All help is appreciated.   If you can help or have suggestions for
> areas of focus, please update the etherpad.
>
> [0] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/6af46d57/attachment.html>

From anteaya at anteaya.info  Wed Sep 23 23:26:11 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Wed, 23 Sep 2015 19:26:11 -0400
Subject: [openstack-dev] [oslo][doc] Oslo doc sprint 9/24-9/25
In-Reply-To: <CANw6fcFgQUGnkUW7x7CkyDeza8c+VXAYtosXqOmvK6-ug6_hjg@mail.gmail.com>
References: <CAHjMoV0vx=mEcnsJHyq29hwoAA6oq9fwdXYqVFGttVCwLEeicg@mail.gmail.com>
 <CANw6fcFgQUGnkUW7x7CkyDeza8c+VXAYtosXqOmvK6-ug6_hjg@mail.gmail.com>
Message-ID: <56033513.90108@anteaya.info>

On 09/23/2015 07:18 PM, Davanum Srinivas wrote:
> Reminder: we are doing the Doc Sprint tomorrow. Please help out with
> whatever item or items you can.
> 
> Thanks,
> Dims
> 
> On Wed, Sep 16, 2015 at 5:40 PM, James Carey <bellerophon at flyinghorsie.com>
> wrote:
> 
>> In order to improve the Oslo libraries documentation, the Oslo team is
>> having a documentation sprint from 9/24 to 9/25.
>>
>> We'll kick things off at 14:00 UTC on 9/24 in the
>> #openstack-oslo-docsprint IRC channel and we'll use an etherpad [0].

Have you considered using the #openstack-sprint channel, which can be
booked here: https://wiki.openstack.org/wiki/VirtualSprints

It was created for just this kind of occasion. It also has channel
logging, which is helpful for those trying to coordinate across timezones.

May you have a good sprint,
Anita.

>>
>> All help is appreciated.   If you can help or have suggestions for
>> areas of focus, please update the etherpad.
>>
>> [0] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From sorrison at gmail.com  Wed Sep 23 23:34:08 2015
From: sorrison at gmail.com (Sam Morrison)
Date: Thu, 24 Sep 2015 09:34:08 +1000
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <56031CFE.2020906@internap.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
Message-ID: <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>

Just got alerted to this on the operator list.

We very much rely on this.

We have multiple availability zones in nova and each zone has a corresponding cinder-volume service(s) in the same availability zone.

We don't want people attaching a volume from one zone to another; the network won't allow that, as the zones are in different network domains and different data centres.

I wonder if you guys could reconsider deprecating this option, as it is very useful to us.

Cheers,
Sam



> On 24 Sep 2015, at 7:43 am, Mathieu Gagné <mgagne at internap.com> wrote:
> 
> On 2015-09-23 4:50 PM, Andrew Laski wrote:
>> On 09/23/15 at 04:30pm, Mathieu Gagné wrote:
>>> On 2015-09-23 4:12 PM, Andrew Laski wrote:
>>>> On 09/23/15 at 02:55pm, Matt Riedemann wrote:
>>>>> 
>>>>> Heh, so when I just asked in the cinder channel if we can just
>>>>> deprecate nova boot from volume with source=(image|snapshot|blank)
>>>>> (which automatically creates the volume and polls for it to be
>>>>> available) and then add a microversion that doesn't allow it, I was
>>>>> half joking, but I see we're on the same page.  This scenario seems to
>>>>> introduce a lot of orchestration work that nova shouldn't necessarily
>>>>> be in the business of handling.
>>>> 
>>>> I am very much in support of this.  This has been a source of
>>>> frustration for our users because it is prone to failures we can't
>>>> properly expose to users and timeouts.  There are much better places to
>>>> handle the orchestration of creating a volume and then booting from it
>>>> than Nova.
>>>> 
>>> 
>>> Unfortunately, this is a feature our users *heavily* rely on and we
>>> worked very hard to make it happen. We had a private patch on our side
>>> for years to optimize boot-from-volume before John Griffith came up with
>>> an upstream solution for SolidFire [2] and others with a generic
>>> solution [3] [4].
>>> 
>>> Being able to "nova boot" and have everything done for you is awesome.
>>> Just see what Monty Taylor mentioned in his thread about sane default
>>> networking [1]. Having orchestration on the client side is just
>>> something our users don't want to have to do and often complain about.
>> 
>> At risk of getting too offtopic I think there's an alternate solution to
>> doing this in Nova or on the client side.  I think we're missing some
>> sort of OpenStack API and service that can handle this.  Nova is a low
>> level infrastructure API and service, it is not designed to handle these
>> orchestrations.  I haven't checked in on Heat in a while but perhaps
>> this is a role that it could fill.
>> 
>> I think that too many people consider Nova to be *the* OpenStack API
>> when considering instances/volumes/networking/images and that's not
>> something I would like to see continue.  Or at the very least I would
>> like to see a split between the orchestration/proxy pieces and the
>> "manage my VM/container/baremetal" bits.
>> 
> 
> "too many people" happens to include a lot of 3rd party tools supporting
> OpenStack which our users complain a lot about. Just see all the
> possible ways to get an external IP [5]. Introducing yet another service
> would increase the pain on our users which will see their tools and
> products not working even more.
> 
> Just see how EC2 is doing it [6], you won't see them suggest to use yet
> another service to orchestrate what I consider a fundamental feature "I
> wish to boot an instance on a volume".
> 
> The current ease to boot from volume is THE selling feature our users
> want and heavily/actively use. We fought very hard to make it work and
> reading about how it should be removed is frustrating.
> 
> Issues we identified shouldn't be a reason to drop this feature. Other
> providers are making it work and I don't see why we couldn't. I'm
> convinced we can do better.
> 
> [5]
> https://github.com/openstack-infra/shade/blob/03c1556a12aabfc21de60a9fac97aea7871485a3/shade/meta.py#L106-L173
> [6]
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
> 
> Mathieu
> 
>>> 
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074527.html
>>> 
>>> [2] https://review.openstack.org/#/c/142859/
>>> [3] https://review.openstack.org/#/c/195795/
>>> [4] https://review.openstack.org/#/c/201754/
>>> 
>>> -- 
>>> Mathieu
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From andrew at lascii.com  Wed Sep 23 23:59:53 2015
From: andrew at lascii.com (Andrew Laski)
Date: Wed, 23 Sep 2015 19:59:53 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
Message-ID: <20150923235953.GD8745@crypt>

On 09/24/15 at 09:34am, Sam Morrison wrote:
>Just got alerted to this on the operator list.
>
>We very much rely on this.
>
>We have multiple availability zones in nova and each zone has a corresponding cinder-volume service(s) in the same availability zone.
>
>We don't want people attaching a volume from one zone to another as the network won't allow that as the zones are in different network domains and different data centres.
>
>I wonder if you guys can reconsider deprecating this option as it is very useful to us.

I was perhaps hasty in approving that patch and didn't realize that Matt 
had reached out for operator feedback at the same time that he proposed 
it.  Since this is being used in production I wouldn't want it to be 
removed without at least having an alternative, and hopefully better, 
method of achieving your goal.  Reverting the deprecation seems 
reasonable to me for now while we work out the details around 
Cinder/Nova AZ interactions.



>
>Cheers,
>Sam
>
>
>
>> On 24 Sep 2015, at 7:43 am, Mathieu Gagné <mgagne at internap.com> wrote:
>>
>> On 2015-09-23 4:50 PM, Andrew Laski wrote:
>>> On 09/23/15 at 04:30pm, Mathieu Gagné wrote:
>>>> On 2015-09-23 4:12 PM, Andrew Laski wrote:
>>>>> On 09/23/15 at 02:55pm, Matt Riedemann wrote:
>>>>>>
>>>>>> Heh, so when I just asked in the cinder channel if we can just
>>>>>> deprecate nova boot from volume with source=(image|snapshot|blank)
>>>>>> (which automatically creates the volume and polls for it to be
>>>>>> available) and then add a microversion that doesn't allow it, I was
>>>>>> half joking, but I see we're on the same page.  This scenario seems to
>>>>>> introduce a lot of orchestration work that nova shouldn't necessarily
>>>>>> be in the business of handling.
>>>>>
>>>>> I am very much in support of this.  This has been a source of
>>>>> frustration for our users because it is prone to failures we can't
>>>>> properly expose to users and timeouts.  There are much better places to
>>>>> handle the orchestration of creating a volume and then booting from it
>>>>> than Nova.
>>>>>
>>>>
>>>> Unfortunately, this is a feature our users *heavily* rely on and we
>>>> worked very hard to make it happen. We had a private patch on our side
>>>> for years to optimize boot-from-volume before John Griffith came up with
>>>> an upstream solution for SolidFire [2] and others with a generic
>>>> solution [3] [4].
>>>>
>>>> Being able to "nova boot" and have everything done for you is awesome.
>>>> Just see what Monty Taylor mentioned in his thread about sane default
>>>> networking [1]. Having orchestration on the client side is just
>>>> something our users don't want to have to do and often complain about.
>>>
>>> At risk of getting too offtopic I think there's an alternate solution to
>>> doing this in Nova or on the client side.  I think we're missing some
>>> sort of OpenStack API and service that can handle this.  Nova is a low
>>> level infrastructure API and service, it is not designed to handle these
>>> orchestrations.  I haven't checked in on Heat in a while but perhaps
>>> this is a role that it could fill.
>>>
>>> I think that too many people consider Nova to be *the* OpenStack API
>>> when considering instances/volumes/networking/images and that's not
>>> something I would like to see continue.  Or at the very least I would
>>> like to see a split between the orchestration/proxy pieces and the
>>> "manage my VM/container/baremetal" bits.
>>>
>>
>> "too many people" happens to include a lot of 3rd party tools supporting
>> OpenStack which our users complain a lot about. Just see all the
>> possible ways to get an external IP [5]. Introducing yet another service
>> would increase the pain on our users which will see their tools and
>> products not working even more.
>>
>> Just see how EC2 is doing it [6], you won't see them suggest to use yet
>> another service to orchestrate what I consider a fundamental feature "I
>> wish to boot an instance on a volume".
>>
>> The current ease to boot from volume is THE selling feature our users
>> want and heavily/actively use. We fought very hard to make it work and
>> reading about how it should be removed is frustrating.
>>
>> Issues we identified shouldn't be a reason to drop this feature. Other
>> providers are making it work and I don't see why we couldn't. I'm
>> convinced we can do better.
>>
>> [5]
>> https://github.com/openstack-infra/shade/blob/03c1556a12aabfc21de60a9fac97aea7871485a3/shade/meta.py#L106-L173
>> [6]
>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
>>
>> Mathieu
>>
>>>>
>>>> [1]
>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074527.html
>>>>
>>>> [2] https://review.openstack.org/#/c/142859/
>>>> [3] https://review.openstack.org/#/c/195795/
>>>> [4] https://review.openstack.org/#/c/201754/
>>>>
>>>> --
>>>> Mathieu
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From jamielennox at redhat.com  Thu Sep 24 00:04:34 2015
From: jamielennox at redhat.com (Jamie Lennox)
Date: Thu, 24 Sep 2015 10:04:34 +1000
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <m0a8scerdk.fsf@danjou.info>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net>
 <5601A911.2030504@internap.com> <5601BFA9.7000902@dague.net>
 <5601C87C.2010609@internap.com> <56028A4B.9010203@dague.net>
 <m0fv25fjvf.fsf@danjou.info> <5602ECBF.2020900@dague.net>
 <m0a8scerdk.fsf@danjou.info>
Message-ID: <CAHiyDT8_2_3Y72On6RxCQmk+tJo6JDdKAdM_680MLAPz_hLdaQ@mail.gmail.com>

This is a long thread and I may have missed something in it, but this
exact topic came up as a blocker on a devstack patch to get TLS testing
in the gate with HAProxy.

The long-term solution we had come up with (granted, not proposed
anywhere public) is that we should transition services to use relative
links.

As far as I'm aware, this is only a problem within the services
themselves, as the URL they receive is not what was actually requested
if it went via HAProxy. It is not a problem for interservice requests,
because those should get URLs from the service catalog (or otherwise not
display them to the user). This means the issue generally affects the
version discovery page and the "links" in resources, such as the next,
prev, and base URLs.

Is there a reason we can't transition this to use relative URLs,
possibly with a Django-style WEBROOT, so that a discovery response
returns /v2.0 and /v3 rather than fully qualified URLs, with the
clients being smart enough to figure this out?
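
A minimal sketch of what that could look like (assumed code, not any
project's actual discovery handler; the document shape loosely follows the
common OpenStack versions format):

```python
# Build a version discovery document with relative links: hrefs are
# derived from the WSGI SCRIPT_NAME (the Django-style WEBROOT) rather
# than from a configured fully qualified endpoint, so the same
# response is correct behind any proxy prefix.

def discovery_document(environ):
    root = environ.get('SCRIPT_NAME', '').rstrip('/')
    return {
        'versions': [
            {'id': 'v2.0',
             'links': [{'rel': 'self', 'href': root + '/v2.0'}]},
            {'id': 'v3',
             'links': [{'rel': 'self', 'href': root + '/v3'}]},
        ]
    }
```

Deployed at the root, this returns /v2.0 and /v3; behind a proxy mapping
/identity, the same code returns /identity/v2.0 and /identity/v3 with no
endpoint configuration at all.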



On 24 September 2015 at 07:51, Julien Danjou <julien at danjou.info> wrote:
> On Wed, Sep 23 2015, Sean Dague wrote:
>
>> Ok, how exactly does that work? Because it seems like
>> oslo_middleware.ssl is only changing the protocol if the proxy sets it.
>>
>> But the host in the urls will still be the individual host, which isn't
>> the proxy hostname/ip. Sorry if I'm being daft here, just want to
>> understand how that flow ends up working.
>
> No problem, you're not supposed to know everything. :)
>
> As ZZelle said too, we can set the correct host and port expected by
> honoring X-Forwarded-Host and X-Forwarded-Port, which are set by HTTP
> proxies when they act as reverse-proxies and forward requests.
> That will make the WSGI application unaware of the fact that there is a
> request proxy in front of them. Magic!
>
> We could do that in the SSL middleware (and maybe rename it?) or in
> another middleware, and enable them by default. So we'd have that
> working by default, which would be great IMHO.
>
>> Will that cover the case of webob's request.application_uri? If so I
>> think that covers the REST documents in at least Nova (one good data
>> point, and one that I know has been copied around). At least as far as
>> the protocol is concerned, it's still got a potential url issue.
>
> That should work with any WSGI request, so I'd say yes.
>
>>> The {public,admin}_endpoint are only useful in the case where you map
>>> http://myproxy/identity -> http://mykeystone/ using a proxy
>>>
>>> Because the prefix is not passed to Keystone. If you map 1:1 the path
>>> part, we could also leverage X-Forwarded-Host and X-Forwarded-Port to
>>> avoid having {public,admin}_endpoint options.
>>
>> It also looks like there are new standards for Forwarded headers, so the
>> middleware should probably support those as well.
>> http://tools.ietf.org/html/rfc7239.
>
> Good point, we should update the middleware as needed.
>
> Though they still don't cover the use case where you have a base URL that
> is different between the proxy and the application. I don't think it's a
> widely used case, but still, there are at least two ways to support it:
> 1. Having a config option (like Keystone currently has)
> 2. Having a special e.g. X-Forwarded-BaseURL header set by the proxy
>    that we would catch in our middleware and would prepend to
>    environment['SCRIPT_NAME'].
>
> The two options are even compatible, though I'd say 2. is probably simpler
> in the long run and more "unified".
>
> I'm willing to clear that out and come with specs and patches if that
> can help. :)
>
> --
> Julien Danjou
> # Free Software hacker
> # http://julien.danjou.info
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From nodir.qodirov at gmail.com  Thu Sep 24 00:23:30 2015
From: nodir.qodirov at gmail.com (Nodir Kodirov)
Date: Wed, 23 Sep 2015 17:23:30 -0700
Subject: [openstack-dev] [neutron] Neutron debugging tool
In-Reply-To: <d4edb48e00784aae809c732c6cb33c04@XCH-ALN-005.cisco.com>
References: <CAP0B2WMOFLMdH7t9hNc0uw8xqNcUvvrLBZa7D4hHU06c2bLMuA@mail.gmail.com>
 <D226C7C9.7CEB9%ganeshna@cisco.com>
 <CAP0B2WPHNGpPCY07d9abeuYwBZqk3XUcCK7wkfbNhSGGTV+k0A@mail.gmail.com>
 <d4edb48e00784aae809c732c6cb33c04@XCH-ALN-005.cisco.com>
Message-ID: <CADL6tVMmBg72KM+JVFBJsS1ZJ7CyGr7yArmWLzRceriFi+2gYA@mail.gmail.com>

Thanks for the great suggestions and pointers, everyone!

easyOVS seems to cover many of the use cases I had in mind. I'll give
it a try and see if/how I can extend it.

I do agree with Salvatore about putting references to all these tools
in the OpenStack docs. I filed a docs bug [1] suggesting that we add a
(sub-)section to Chapter 12, Network Troubleshooting, in the OpenStack
Operations Guide [2]. This way, the tools will be easy to find in the
docs, and one will not need to spend hours discovering them in
scattered places.

Again, thanks for all the input!

Nodir

[1] https://bugs.launchpad.net/openstack-manuals/+bug/1499114
[2] http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html

On 22 September 2015 at 05:27, Carol Bouchard (caboucha)
<caboucha at cisco.com> wrote:
> There was a presentation on DON at the Vancouver summit.  Here is the link:
>
>
>
> https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/don-diagnosing-ovs-in-neutron
>
>
>
> From: Salvatore Orlando [mailto:salv.orlando at gmail.com]
> Sent: Tuesday, September 22, 2015 3:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
>
>
> Subject: Re: [openstack-dev] [neutron] Neutron debugging tool
>
>
>
> Thanks Ganesh!
>
>
>
> I did not know about this tool.
>
> I also quite like the network visualization bits, though I wonder how
> practical that would be when one debugs very large deployments.
>
>
>
> I think it won't be a bad idea to list these tools in the networking guide
> or in neutron's devref, or both.
>
>
>
> Salvatore
>
>
>
> On 22 September 2015 at 04:25, Ganesh Narayanan (ganeshna)
> <ganeshna at cisco.com> wrote:
>
> Another project for diagnosing OVS in Neutron:
>
>
>
> https://github.com/CiscoSystems/don
>
>
>
> Thanks,
>
> Ganesh
>
>
>
> From: Salvatore Orlando <salv.orlando at gmail.com>
> Reply-To: OpenStack Development Mailing List
> <openstack-dev at lists.openstack.org>
> Date: Monday, 21 September 2015 2:55 pm
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] Neutron debugging tool
>
>
>
> It sounds indeed like easyOVS covers what you're aiming for.
>
> However, from what I gather there is still plenty to do in easyOVS, so
> perhaps rather than starting a new toolset from scratch you might build on
> the existing one.
>
>
>
> Personally I'd welcome its adoption into the Neutron stadium as debugging
> control plane/data plane issues in the neutron reference impl is becoming
> difficult also for expert users and developers.
>
> I'd just suggest renaming it because calling it "OVS" is just plain wrong.
> The neutron reference implementation and OVS are two distinct things.
>
>
>
> As concern neutron-debug, this is a tool that was developed in the early
> stages of the project to verify connectivity using "probes" in namespaces.
> These probes are simply tap interfaces associated with neutron ports. The
> neutron-debug tool is still used in some devstack exercises. Nevertheless,
> I'd rather keep building something like easyOVS and then deprecate
> neutron-debug rather than develop it further.
>
>
>
> Salvatore
>
>
>
>
>
> On 21 September 2015 at 02:40, Li Ma <skywalker.nick at gmail.com> wrote:
>
> AFAIK, there is a project available in the github that does the same thing.
> https://github.com/yeasy/easyOVS
>
> I used it before.
>
>
> On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodirov at gmail.com>
> wrote:
>> Hello,
>>
>> I am planning to develop a tool for network debugging. Initially, it
>> will handle DVR case, which can also be extended to other too. Based
>> on my OpenStack deployment/operations experience, I am planning to
>> handle common pitfalls/misconfigurations, such as:
>> 1) check external gateway validity
>> 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
>> compute/network hosts
>> 3) execute probing commands inside namespaces, to verify reachability
>> 4) etc.
>>
>> I came across neutron-debug [1], which mostly focuses on namespace
>> debugging. Its coverage is limited to OpenStack, while I am planning
>> to cover compute/network nodes as well. In my experience, I had to ssh
>> to the host(s) to accurately diagnose the failure (e.g., 1, 2 cases
>> above). The tool I am considering will handle these, given the host
>> credentials.
>>
>> I'd like get community's feedback on utility of such debugging tool.
>> Do people use neutron-debug on their OpenStack environment? Does the
>> tool I am planning to develop with complete diagnosis coverage sound
>> useful? Anyone is interested to join the development? All feedback are
>> welcome.
>>
>> Thanks,
>>
>> - Nodir
>>
>> [1]
>> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
>
> Li Ma (Nick)
> Email: skywalker.nick at gmail.com
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From chuck.short at canonical.com  Thu Sep 24 00:47:31 2015
From: chuck.short at canonical.com (Chuck Short)
Date: Wed, 23 Sep 2015 20:47:31 -0400
Subject: [openstack-dev] [all][stable][release] 2015.1.2
Message-ID: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>

Hi,

We would like to do a stable/kilo branch release next Thursday. In order
to do that, I would like to freeze the branches on Friday, cut some test
tarballs on Tuesday, and release on Thursday. Does anyone have an opinion
on this?

Thanks
chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150923/e81b426c/attachment.html>

From sorrison at gmail.com  Thu Sep 24 01:05:49 2015
From: sorrison at gmail.com (Sam Morrison)
Date: Thu, 24 Sep 2015 11:05:49 +1000
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <20150923235953.GD8745@crypt>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
Message-ID: <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>


> On 24 Sep 2015, at 9:59 am, Andrew Laski <andrew at lascii.com> wrote:
> 
> I was perhaps hasty in approving that patch and didn't realize that Matt had reached out for operator feedback at the same time that he proposed it. Since this is being used in production I wouldn't want it to be removed without at least having an alternative, and hopefully better, method of achieving your goal.  Reverting the deprecation seems reasonable to me for now while we work out the details around Cinder/Nova AZ interactions.

Thanks Andrew,

What we basically want is for our users to be able to have instances and volumes in one section of hardware, and other instances and volumes in another section of hardware.

If one section dies, the other section is fine. We use availability zones for this. If this is not the intended use for AZs, what is a better way for us to do this?

Cheers,
Sam




From jim at jimrollenhagen.com  Thu Sep 24 01:07:25 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 23 Sep 2015 18:07:25 -0700
Subject: [openstack-dev] [all] Cross-Project track topic proposals
In-Reply-To: <20150923084556.GF26372@redhat.com>
References: <20150923084556.GF26372@redhat.com>
Message-ID: <20150924010725.GB14957@jimrollenhagen.com>

On Wed, Sep 23, 2015 at 10:45:56AM +0200, Flavio Percoco wrote:
> Greetings,
> 
> The community is in the process of collecting topics for the
> cross-project track that we'll have in the Mitaka summit.
> 
> The good ol' OSDREG has been set up[0] to help collecting these topics
> and we'd like to encourage the community to propose sessions there.

As a note, ODSREG appears to require a valid oauth login, even for read
access. Though all developers should have a valid login, this isn't
awesome in the spirit of openness. Is this intentional / should that be
fixed?

// jim



From dborodaenko at mirantis.com  Thu Sep 24 01:49:19 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Wed, 23 Sep 2015 18:49:19 -0700
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CAHAWLf11d3Krn3Y9_EpSeR_07OsHfp6TEHcVmtLh+7vpD00ShA@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
 <20150919010735.GB16012@localhost>
 <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>
 <CAKYN3rMgm8iKmTCb6BknV=UsM6W-zBeW1Ca9JzZ+a=Y80taODQ@mail.gmail.com>
 <CAHAWLf11d3Krn3Y9_EpSeR_07OsHfp6TEHcVmtLh+7vpD00ShA@mail.gmail.com>
Message-ID: <20150924014919.GA6291@localhost>

Vladimir,

Sergey's initial email from this thread has a link to the Fuel elections
wiki page that describes the exact procedure to determine the electorate
and the candidates [0]:

    The electorate for a given PTL and Component Leads election are the
    Foundation individual members that are also committers for one of
    the Fuel team's repositories over the last year timeframe (September
    18, 2014 06:00 UTC to September 18, 2015 05:59 UTC).

    ...

    Any member of an election electorate can propose their candidacy for
    the same election.

[0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015#Electorate

If you follow more links from that page, you will find the Governance
page [1] and from there the Election Officiating Guidelines [2] that
provide a specific shell one-liner to generate that list:

    git log --pretty=%aE --since '1 year ago' | sort -u

[1] https://wiki.openstack.org/wiki/Governance
[2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
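For illustration, the deduplication that one-liner performs can be sketched in Python (a hedged example: `unique_emails` is a hypothetical helper name, and the repo path is whatever checkout you point it at):

```python
import subprocess

def unique_emails(log_lines):
    # Equivalent of piping the %aE output through `sort -u`.
    return sorted({line.strip() for line in log_lines if line.strip()})

def committer_emails(repo_path, since="1 year ago"):
    # Shells out to git exactly like the one-liner above.
    out = subprocess.check_output(
        ["git", "log", "--pretty=%aE", "--since", since],
        cwd=repo_path, text=True)
    return unique_emails(out.splitlines())
```

Running `committer_emails` over each of the Fuel team's repositories and taking the union would yield the electorate list.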

As I have specified in the proposed Team Structure policy document [3],
this is the same process that is used by other OpenStack projects.

[3] https://review.openstack.org/225376

Having a different release schedule is not a sufficient reason for Fuel
to reinvent the wheel. For example, the OpenStack Infrastructure project
doesn't even have a release schedule for many of its deliverables, and
still follows the same elections schedule as the rest of OpenStack:

[4] http://governance.openstack.org/reference/projects/infrastructure.html

Let's keep things simple.

-- 
Dmitry Borodaenko


On Wed, Sep 23, 2015 at 01:27:07PM +0300, Vladimir Kuklin wrote:
> Dmitry, Mike
> 
> Thank you for the list of usable links.
> 
> But still - we do not have a clearly defined procedure for determining who is
> eligible to nominate and vote for PTL and Component Leads. Remember that
> Fuel still has a different release cycle, and the Kilo+Liberty contributors list
> is not exactly the same as the "365 days" contributors list.
> 
> Can we finally come up with the list of people eligible to nominate and
> vote?
> 
> On Sun, Sep 20, 2015 at 2:37 AM, Mike Scherbakov <mscherbakov at mirantis.com>
> wrote:
> 
> > Let's move on.
> > I started work on MAINTAINERS files, proposed two patches:
> > https://review.openstack.org/#/c/225457/1
> > https://review.openstack.org/#/c/225458/1
> >
> > These can be used as templates for other repos / folders.
> >
> > Thanks,
> >
> > On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas <davanum at gmail.com>
> > wrote:
> >
> >> +1 Dmitry
> >>
> >> -- Dims
> >>
> >> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
> >> dborodaenko at mirantis.com> wrote:
> >>
> >>> Dims,
> >>>
> >>> Thanks for the reminder!
> >>>
> >>> I've summarized the uncontroversial parts of that thread in a policy
> >>> proposal as per you suggestion [0], please review and comment. I've
> >>> renamed SMEs to maintainers since Mike has agreed with that part, and I
> >>> omitted code review SLAs from the policy since that's the part that has
> >>> generated the most discussion.
> >>>
> >>> [0] https://review.openstack.org/225376
> >>>
> >>> I don't think we should postpone the election: the PTL election follows
> >>> the same rules as OpenStack so we don't need a Fuel-specific policy for
> >>> that, and the component leads election doesn't start until October 9,
> >>> which gives us 3 weeks to confirm consensus on that aspect of the
> >>> policy.
> >>>
> >>> --
> >>> Dmitry Borodaenko
> >>>
> >>>
> >>> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
> >>> > Sergey,
> >>> >
> >>> > Please see [1]. Did we codify some of these roles and responsibilities
> >>> as a
> >>> > community in a spec? There was also a request to use terminology like
> >>> say
> >>> > MAINTAINERS in that email as well.
> >>> >
> >>> > Are we pulling the trigger a bit early for an actual election?
> >>> >
> >>> > Thanks,
> >>> > Dims
> >>> >
> >>> > [1] http://markmail.org/message/2ls5obgac6tvcfss
> >>> >
> >>> > On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <vkuklin at mirantis.com
> >>> >
> >>> > wrote:
> >>> >
> >>> > > Sergey, Fuelers
> >>> > >
> >>> > > This is awesome news!
> >>> > >
> >>> > > By the way, I have a question on who is eligible to vote and to
> >>> nominate
> >>> > > him/her-self for both PTL and Component Leads. Could you elaborate
> >>> on that?
> >>> > >
> >>> > > And there is no such entity as Component Lead in OpenStack - so we
> >>> are
> >>> > > actually creating one. What are the new rights and responsibilities
> >>> of CL?
> >>> > >
> >>> > > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <
> >>> slukjanov at mirantis.com>
> >>> > > wrote:
> >>> > >
> >>> > >> Hi folks,
> >>> > >>
> >>> > >> I'd like to announce that we're running the PTL and Component Leads
> >>> > >> elections. Detailed information available on wiki. [0]
> >>> > >>
> >>> > >> Project Team Lead: Manages day-to-day operations, drives the project
> >>> > >> team goals, resolves technical disputes within the project team. [1]
> >>> > >>
> >>> > >> Component Lead: Defines architecture of a module or component in
> >>> Fuel,
> >>> > >> reviews design specs, merges majority of commits and resolves
> >>> conflicts
> >>> > >> between Maintainers or contributors in the area of responsibility.
> >>> [2]
> >>> > >>
> >>> > >> Fuel has two large sub-teams, with roughly comparable codebases,
> >>> that
> >>> > >> need dedicated component leads: fuel-library and fuel-python. [2]
> >>> > >>
> >>> > >> Nominees propose their candidacy by sending an email to the
> >>> > >> openstack-dev at lists.openstack.org mailing-list, which the subject:
> >>> > >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
> >>> > >> (for example, "[fuel] fuel-library lead candidacy").
> >>> > >>
> >>> > >> Time line:
> >>> > >>
> >>> > >> PTL elections
> >>> > >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
> >>> position
> >>> > >> * September 29 - October 8: PTL elections
> >>> > >>
> >>> > >> Component leads elections (fuel-library and fuel-python)
> >>> > >> * October 9 - October 15: Open candidacy for Component leads
> >>> positions
> >>> > >> * October 16 - October 22: Component leads elections
> >>> > >>
> >>> > >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
> >>> > >> [1] https://wiki.openstack.org/wiki/Governance
> >>> > >> [2]
> >>> > >>
> >>> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> >>> > >> [3] https://lwn.net/Articles/648610/
> >>> > >>
> >>> > >> --
> >>> > >> Sincerely yours,
> >>> > >> Sergey Lukjanov
> >>> > >> Sahara Technical Lead
> >>> > >> (OpenStack Data Processing)
> >>> > >> Principal Software Engineer
> >>> > >> Mirantis Inc.
> >>> > >>
> >>> > >>
> >>> __________________________________________________________________________
> >>> > >> OpenStack Development Mailing List (not for usage questions)
> >>> > >> Unsubscribe:
> >>> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> > >>
> >>> > >>
> >>> > >
> >>> > >
> >>> > > --
> >>> > > Yours Faithfully,
> >>> > > Vladimir Kuklin,
> >>> > > Fuel Library Tech Lead,
> >>> > > Mirantis, Inc.
> >>> > > +7 (495) 640-49-04
> >>> > > +7 (926) 702-39-68
> >>> > > Skype kuklinvv
> >>> > > 35bk3, Vorontsovskaya Str.
> >>> > > Moscow, Russia,
> >>> > > www.mirantis.com <http://www.mirantis.ru/>
> >>> > > www.mirantis.ru
> >>> > > vkuklin at mirantis.com
> >>> > >
> >>> > >
> >>> __________________________________________________________________________
> >>> > > OpenStack Development Mailing List (not for usage questions)
> >>> > > Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> > >
> >>> > >
> >>> >
> >>> >
> >>> > --
> >>> > Davanum Srinivas :: https://twitter.com/dims
> >>>
> >>> >
> >>> __________________________________________________________________________
> >>> > OpenStack Development Mailing List (not for usage questions)
> >>> > Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>>
> >>>
> >>> __________________________________________________________________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> >> --
> >> Davanum Srinivas :: https://twitter.com/dims
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> > --
> > Mike Scherbakov
> > #mihgen
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> 
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com <http://www.mirantis.ru/>
> www.mirantis.ru
> vkuklin at mirantis.com

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From i-kumagai at bit-isle.co.jp  Thu Sep 24 02:34:54 2015
From: i-kumagai at bit-isle.co.jp (Ikuo Kumagai)
Date: Thu, 24 Sep 2015 11:34:54 +0900
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
Message-ID: <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>

Hi All,

I'm sorry, I was on vacation yesterday (in JST), and I did not notice this
discussion.
I registered "bug 1496235".

In our case, Nova has 2 AZs (az1, az2) and Cinder has 1 AZ (default).
The Cinder backend is Ceph: a cluster spanning the compute nodes in Nova's
az1 and az2. Nova's 2 AZs always use Cinder's default zone.

When I registered the bug, the option I wanted was to be able to select
"sync" or "async" AZ behaviour between Nova and Cinder.

Regards,
IKUO Kumagai


2015-09-24 10:05 GMT+09:00 Sam Morrison <sorrison at gmail.com>:

>
> > On 24 Sep 2015, at 9:59 am, Andrew Laski <andrew at lascii.com> wrote:
> >
> > I was perhaps hasty in approving that patch and didn't realize that Matt
> had reached out for operator feedback at the same time that he proposed it.
> Since this is being used in production I wouldn't want it to be removed
> without at least having an alternative, and hopefully better, method of
> achieving your goal.  Reverting the deprecation seems reasonable to me for
> now while we work out the details around Cinder/Nova AZ interactions.
>
> Thanks Andrew,
>
> What we basically want is for our users to have instances and volumes on a
> section of hardware and then for them to be able to have other instances
> and volumes in another section of hardware.
>
> If one section dies then the other section is fine. For us we use
> availability-zones for this. If this is not the intended use for AZs what
> is a better way for us to do this.
>
> Cheers,
> Sam
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/e547ccef/attachment.html>

From abiswas at us.ibm.com  Thu Sep 24 05:17:04 2015
From: abiswas at us.ibm.com (Amitabha Biswas)
Date: Thu, 24 Sep 2015 05:17:04 +0000
Subject: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support
	for vtep-gateway in ovn
Message-ID: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/821c8bbd/attachment.html>

From irenab.dev at gmail.com  Thu Sep 24 06:06:29 2015
From: irenab.dev at gmail.com (Irena Berezovsky)
Date: Thu, 24 Sep 2015 09:06:29 +0300
Subject: [openstack-dev] [neutron][nova][qos] network QoS support driven by
 VM flavor/image requirements
Message-ID: <CALqgCCrdhe0OTG4ZDsZ4Pv8wn8P2ndrO=WMXzLFN362nsa=1xQ@mail.gmail.com>

I would like to start a discussion regarding user experience when a certain
level of network QoS is expected to be applied on VM ports. As you may know,
basic networking QoS support was introduced during the Liberty release,
following the spec in Ref [1].
As discussed during the last networking-QoS meeting, Ref [2], the nova team
is driving toward the approach where a neutron port is created with all
required settings and the VM is then created with the pre-created port, not
with a requested network. While this approach serves decoupling and separation
of compute and networking concerns, it will require smarter client
orchestration, and we may lose some functionality we have today. One of
the usage scenarios currently supported is that a Cloud Provider may
associate certain requirements with nova flavors. Once a Tenant requests a VM
with this flavor, nova (nova-scheduler) will make sure to fulfill the
requirements. A possible way to make this work for networking-qos is to set:
 nova-manage flavor set_key --name m1.small --key quota:vif_qos_policy
--value <the-policy-id>

With the current VM creation workflow, this would require nova to ask
neutron to create the port and apply the QoS policy with the specified
policy_id. This would require changes on the nova side.
I am not sure how to support the above user scenario with the pre-created
port approach.

I would like to ask your opinion regarding the direction for QoS in
particular, but the question is general for nova-neutron integration:
should the explicitly decoupled networking/compute approach replace the
current way that nova delegates networking requirements to neutron?

BR,
Irena


[1]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html
[2]
http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-16-14.02.log.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/88d05b77/attachment.html>

From flavio at redhat.com  Thu Sep 24 06:55:59 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Thu, 24 Sep 2015 08:55:59 +0200
Subject: [openstack-dev] [all] Cross-Project track topic proposals
In-Reply-To: <20150924010725.GB14957@jimrollenhagen.com>
References: <20150923084556.GF26372@redhat.com>
 <20150924010725.GB14957@jimrollenhagen.com>
Message-ID: <20150924065559.GK26372@redhat.com>

On 23/09/15 18:07 -0700, Jim Rollenhagen wrote:
>On Wed, Sep 23, 2015 at 10:45:56AM +0200, Flavio Percoco wrote:
>> Greetings,
>>
>> The community is in the process of collecting topics for the
>> cross-project track that we'll have in the Mitaka summit.
>>
>> The good ol' OSDREG has been set up[0] to help collecting these topics
>> and we'd like to encourage the community to propose sessions there.
>
>As a note, ODSREG appears to require a valid oauth login, even for read
>access. Though all developers should have a valid login, this isn't
>awesome in the spirit of openness. Is this intentional / should that be
>fixed?

I keep calling it "osdreg" (facepalm).

It would be great to have odsreg not require an oauth login for
reading. However, if I recall correctly, it's always been like this
(my memory isn't clear here). If it doesn't take much time to fix, it
would be great. Otherwise, considering that using odsreg was a late
decision and we are all stuck with RCs, I'd recommend changing this
for the next cycle.

Thanks for noticing, Jim.
Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/b3085563/attachment.pgp>

From duncan.thomas at gmail.com  Thu Sep 24 07:04:52 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Thu, 24 Sep 2015 10:04:52 +0300
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt> <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
Message-ID: <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>

Hi

I thought I was late on this thread, but looking at the time stamps, it is
just something that escalated very quickly. I am honestly surprised that a
cross-project interaction option went from 'we don't seem to understand
this' to 'deprecation merged' in 4 hours, with only a 12-hour discussion on
the mailing list, right at the end of a cycle when we're supposed to be
stabilising features.

I proposed a session at the Tokyo summit for a discussion of Cinder AZs,
since there was clear confusion about what they are intended for and how
they should be configured. Since then I've reached out to and gotten good
feedback from, a number of operators. There are two distinct configurations
for AZ behaviour in cinder, and both sort-of worked until very recently.

1) No AZs in cinder
This is the config where there is a single 'blob' of storage (most of the
operators who responded so far are using Ceph, though that isn't required).
The storage takes care of availability concerns, and any AZ info from nova
should just be ignored.

2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples
storage to nova AZs. It may be that an AZ is used as a unit of scaling,
or it could be a real storage failure domain. Either way, there are a
number of operators who have this configuration and want to keep it.
Storage can certainly have a failure domain, and limiting the scalability
problem of storage to a single compute AZ can have definite advantages in
failure scenarios. These people do not want cross-az attach.

My hope for the summit session is to agree on these two configurations,
discuss any scenarios not covered by them, and nail down the changes we
need to get them to work properly. There's definitely been interest and
activity in the operator community in making nova and cinder AZs interact,
and every desired interaction I've gotten details about so far matches one
of the above models.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/32f15b55/attachment.html>

From thierry at openstack.org  Thu Sep 24 07:28:20 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 24 Sep 2015 09:28:20 +0200
Subject: [openstack-dev] [Cinder] [Designate] Liberty RC1 available
Message-ID: <5603A614.507@openstack.org>

Hello everyone,

Cinder and Designate just produced their first release candidate for the
end of the Liberty cycle. The RC1 tarballs, as well as a list of
last-minute features and fixed bugs since liberty-1 are available at:

https://launchpad.net/cinder/liberty/liberty-rc1
https://launchpad.net/designate/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs !

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/designate/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/cinder/+filebug
or
https://bugs.launchpad.net/designate/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Cinder and Designate are now
officially open for Mitaka development, so feature freeze restrictions
no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)


From mrunge at redhat.com  Thu Sep 24 07:31:07 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Thu, 24 Sep 2015 09:31:07 +0200
Subject: [openstack-dev] [all][stable][release] 2015.1.2
In-Reply-To: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
Message-ID: <20150924073107.GF24386@sofja.berg.ol>

On Wed, Sep 23, 2015 at 08:47:31PM -0400, Chuck Short wrote:
> Hi,
> 
> We would like to do a stable/kilo branch release, next Thursday. In order
> to do that I would like to freeze the branches on Friday. Cut some test
> tarballs on Tuesday and release on Thursday. Does anyone have an opinnon on
> this?

For Horizon, it would make sense to move this a week back. We discovered
a few issues in Liberty, which are present in current kilo, too. I'd
love to cherry-pick a few of them to kilo.

Unfortunately, it takes a while until Kilo (or stable in general)
reviews get done.
-- 
Matthias Runge <mrunge at redhat.com>


From julien at danjou.info  Thu Sep 24 07:40:48 2015
From: julien at danjou.info (Julien Danjou)
Date: Thu, 24 Sep 2015 09:40:48 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
	proxies across all API services
In-Reply-To: <CAHiyDT8_2_3Y72On6RxCQmk+tJo6JDdKAdM_680MLAPz_hLdaQ@mail.gmail.com>
 (Jamie Lennox's message of "Thu, 24 Sep 2015 10:04:34 +1000")
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
 <5602ECBF.2020900@dague.net> <m0a8scerdk.fsf@danjou.info>
 <CAHiyDT8_2_3Y72On6RxCQmk+tJo6JDdKAdM_680MLAPz_hLdaQ@mail.gmail.com>
Message-ID: <m0pp18clj3.fsf@danjou.info>

On Thu, Sep 24 2015, Jamie Lennox wrote:

Hi Jamie,

> So this is a long thread and i may have missed something in it,
> however this exact topic came up as a blocker on a devstack patch to
> get TLS testing in the gate with HAproxy.
>
> The long term solution we had come up with (but granted not proposed
> anywhere public) is that we should transition services to use relative
> links.

This would be a good solution too indeed, but I'm not sure it's *always*
doable.

> As far as i'm aware this is only a problem within the services
> themselves as the URL they receive is not what was actually requested
> if it went via HAproxy. It is not a problem with interservice requests
> because they should get URLs from the service catalog (or otherwise
> not display them to the user). Which means that this generally affects
> the version discovery page, and "links" from resources to like a next,
> prev, and base url.

Yes, but what we were saying is that this is fixable by using HTTP
headers that the proxy sets and translating them into a correct WSGI
environment. Basically, that will make WSGI think it is the front-end,
so it'll build URLs correctly for the outer world.
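As a sketch of that idea (header names follow the de-facto X-Forwarded-* convention; this is illustrative middleware, not any particular project's implementation):

```python
def forwarded_middleware(app):
    """Rewrite the WSGI environ from proxy-set headers so the wrapped
    app builds external URLs as if it were the front-end itself."""
    def wrapper(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        host = environ.get("HTTP_X_FORWARDED_HOST")
        if proto:
            # The app will now emit https:// links behind a TLS proxy.
            environ["wsgi.url_scheme"] = proto
        if host:
            # The app will now emit the externally visible host name.
            environ["HTTP_HOST"] = host
        return app(environ, start_response)
    return wrapper
```

Services behind HAproxy would then generate discovery and pagination links with the external scheme and host, without needing relative URLs.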

> Is there a reason we can't transition this to use a relative URL
> possibly with a django style WEBROOT so that a discovery response
> returned /v2.0 and /v3 rather than the fully qualified URL and the
> clients be smart enough to figure this out?

We definitely can do that, but there is still a use case that would not
be covered without a configuration somewhere which is:
  e.g. http://foobar/myservice/v3 -> http://myservice/v3

If you return an absolute /v3, it won't work. :)

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/9087bcae/attachment.pgp>

From tlashchova at mirantis.com  Thu Sep 24 07:45:32 2015
From: tlashchova at mirantis.com (Tetiana Lashchova)
Date: Thu, 24 Sep 2015 10:45:32 +0300
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
Message-ID: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>

Hi folks!

Some tests in murano code use incorrect order of arguments in assertEqual.
The correct order expected by the testtools is

        def assertEqual(self, expected, observed, message=''):
            """Assert that 'expected' is equal to 'observed'.

            :param expected: The expected value.
            :param observed: The observed value.
            :param message: An optional message to include in the error.
            """

Error message has the following format:

raise mismatch_error
    testtools.matchers._impl.MismatchError: !=:
    reference = <expected value>
    actual    = <observed value>

Use of arguments in incorrect order could make debug output very confusing.
Let's fix it to make debugging easier.
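As a quick illustration (a toy stand-in that mimics the testtools message format quoted above, not the real implementation; the values 200/404 are hypothetical):

```python
def assert_equal(expected, observed, message=''):
    # Toy stand-in reproducing the testtools failure format shown above.
    if expected != observed:
        raise AssertionError(
            "!=:\nreference = %r\nactual    = %r" % (expected, observed))

# With the correct order, the "reference" line shows the expected value:
try:
    assert_equal(200, 404)   # expected first, observed second
except AssertionError as err:
    failure = str(err)
# Swapping the arguments would misleadingly report 404 as the reference,
# which is exactly the confusion described above.
```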

Best regards,
Tetiana Lashchova
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/c627798b/attachment.html>

From bgaifullin at mirantis.com  Thu Sep 24 07:58:43 2015
From: bgaifullin at mirantis.com (Bulat Gaifullin)
Date: Thu, 24 Sep 2015 10:58:43 +0300
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
In-Reply-To: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
References: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
Message-ID: <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>

+1

> On 24 Sep 2015, at 10:45, Tetiana Lashchova <tlashchova at mirantis.com> wrote:
> 
> Hi folks!
> 
> Some tests in murano code use incorrect order of arguments in assertEqual.
> The correct order expected by the testtools is
> 
>         def assertEqual(self, expected, observed, message=''):
>             """Assert that 'expected' is equal to 'observed'.
> 
>             :param expected: The expected value.
>             :param observed: The observed value.
>             :param message: An optional message to include in the error.
>             """
> 
> Error message has the following format:
> 
> raise mismatch_error
>     testtools.matchers._impl.MismatchError: !=:
>     reference = <expected value>
>     actual    = <observed value>
> 
> Use of arguments in incorrect order could make debug output very confusing.
> Let's fix it to make debugging easier.
> 
> Best regards,
> Tetiana Lashchova
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From sbauza at redhat.com  Thu Sep 24 08:19:52 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 24 Sep 2015 10:19:52 +0200
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
Message-ID: <5603B228.3070502@redhat.com>



Le 24/09/2015 09:04, Duncan Thomas a ?crit :
> Hi
>
> I thought I was late on this thread, but looking at the time stamps, 
> it is just something that escalated very quickly. I am honestly 
> surprised an cross-project interaction option went from 'we don't seem 
> to understand this' to 'deprecation merged' in 4 hours, with only a 12 
> hour discussion on the mailing list, right at the end of a cycle when 
> we're supposed to be stabilising features.
>

So, I agree it was maybe a bit too quick, hence the revert. That said, 
Nova master is now Mitaka, which means that the deprecation change was 
provided for the next cycle, not the one currently stabilising.

Anyway, I'm entirely open to discussing why Cinder needs to know the 
Nova AZs.

> I proposed a session at the Tokyo summit for a discussion of Cinder 
> AZs, since there was clear confusion about what they are intended for 
> and how they should be configured.

Cool, count me in from the Nova standpoint.

> Since then I've reached out to, and gotten good feedback from, a number 
> of operators. There are two distinct configurations for AZ behaviour 
> in cinder, and both sort-of worked until very recently.
>
> 1) No AZs in cinder
> This is the config with a single 'blob' of storage (most of the 
> operators who responded so far are using Ceph, though that isn't 
> required). The storage takes care of availability concerns, and any AZ 
> info from nova should just be ignored.
>
> 2) Cinder AZs map to Nova AZs
> In this case, some combination of storage / networking / etc couples 
> storage to nova AZs. It may be that an AZ is used as a unit of 
> scaling, or it could be a real storage failure domain. Either way, 
> there are a number of operators who have this configuration and want 
> to keep it. Storage can certainly have a failure domain, and limiting 
> the scalability problem of storage to a single compute AZ can have 
> definite advantages in failure scenarios. These people do not want 
> cross-az attach.
>

Ahem, Nova AZs are not failure domains - at least not in the current 
implementation, and not in the sense many people understand a failure 
domain, i.e. a physical unit of machines (a bay, a room, a floor, a 
datacenter).
All the AZs in Nova share the same control plane, with the same message 
queue and database, which means that one failure can propagate to the 
other AZs.

To be honest, there is one very specific usecase where AZs *are* failure 
domains: when cells exactly match AZs (i.e. one AZ grouping all the 
hosts behind one cell). That's the very specific usecase that Sam is 
mentioning in his email, and I certainly understand we need to keep that.

What AZs are in Nova is explained pretty well in a fairly old blog post: 
http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

We also added a few comments in our developer docs here: 
http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs

tl;dr: AZs are aggregate metadata that makes those aggregates of compute 
nodes visible to the users. Nothing more than that, no magic sauce. 
It's just a logical abstraction that can map to your physical 
deployment but, like I said, one that still shares the same bus and DB.
Of course, you could still provide distinct networks between AZs, but 
that just gives you L2 isolation, not a real failure domain in a 
Business Continuity Plan sense.
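
To make the aggregate-metadata point concrete, here is a toy sketch (the
class, names, and hosts are illustrative, not Nova's actual code): an AZ
exists purely because an aggregate carries the "availability_zone"
metadata key, with no extra failure-domain machinery behind it.

```python
# Toy model: an AZ is just the "availability_zone" metadata key
# on a host aggregate; nothing else distinguishes it.
class Aggregate:
    def __init__(self, name, metadata=None):
        self.name = name
        self.hosts = set()
        self.metadata = metadata or {}

def availability_zones(aggregates):
    """Derive the user-visible AZ -> hosts mapping purely from
    aggregate metadata, mirroring the behaviour described above."""
    zones = {}
    for agg in aggregates:
        az = agg.metadata.get("availability_zone")
        if az:  # aggregates without the key are not visible as AZs
            zones.setdefault(az, set()).update(agg.hosts)
    return zones

rack1 = Aggregate("rack1-agg", {"availability_zone": "rack1-az"})
rack1.hosts.add("compute-01")
plain = Aggregate("ssd-agg")  # ordinary aggregate, no AZ metadata
plain.hosts.add("compute-02")
print(availability_zones([rack1, plain]))  # {'rack1-az': {'compute-01'}}
```

Note that nothing in this model isolates one zone from another, which is
exactly the point: the grouping is logical, not a failure boundary.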

What puzzles me is how Cinder manages a datacenter level of isolation 
given there is no cells concept AFAIK. I assume that cinder-volumes 
belong to a specific datacenter, but how is its control plane managed? 
I can certainly understand the need for affinity placement between 
physical units, but I'm missing that piece, and consequently I wonder 
why Nova needs to provide AZs to Cinder in the general case.



> My hope at the summit session was to agree these two configurations, 
> discuss any scenarios not covered by these two configuration, and nail 
> down the changes we need to get these to work properly. There's 
> definitely been interest and activity in the operator community in 
> making nova and cinder AZs interact, and every desired interaction 
> I've gotten details about so far matches one of the above models.
>

I'm all with you on providing a way for users to get volume affinity 
in Nova. That's a long story I've been trying to consider, and we are 
constantly trying to improve the nova scheduler interfaces so that other 
projects can provide resources to the nova scheduler for decision 
making. I just want to consider whether AZs are the best concept for 
that, or whether we should do things another way (again, because AZs 
are not what people expect).

Again, count me in for the Cinder session; just let me know when the 
session is planned so I can attend it.

-Sylvain


>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/843411fb/attachment.html>

From thierry at openstack.org  Thu Sep 24 08:43:56 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 24 Sep 2015 10:43:56 +0200
Subject: [openstack-dev] Mitaka travel tips ?
In-Reply-To: <CAH7C+Pq6eCJCvRXYs9C5AdzOdEJk+dAXX+Deu7-q+=e9iWsV_Q@mail.gmail.com>
References: <CAH7C+Pq6eCJCvRXYs9C5AdzOdEJk+dAXX+Deu7-q+=e9iWsV_Q@mail.gmail.com>
Message-ID: <5603B7CC.40009@openstack.org>

David Moreau Simard wrote:
> There was a travel tips document for the Kilo summit in Paris [1].
> Lots of great helpful information in there not covered on the Openstack
> Summit page [2] like where to get SIM cards and stuff.
> 
> Is there one for Mitaka yet? I can't find it.

There isn't one yet (that I know of). In Paris (and Hong Kong) it was
created by the local OpenStack user group, so hopefully the Japanese
user group will set up something :)

-- 
Thierry Carrez (ttx)


From efedorova at mirantis.com  Thu Sep 24 08:46:44 2015
From: efedorova at mirantis.com (Ekaterina Chernova)
Date: Thu, 24 Sep 2015 11:46:44 +0300
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
In-Reply-To: <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
References: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
 <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
Message-ID: <CAOFFu8Z06t4Xjedvd4LRRJEah=dRS7Htbcom+fxEQcZdAm-RsA@mail.gmail.com>

Hi!

Good catch. I have no objections to fixing it right now.

Regards,
Kate.


On Thu, Sep 24, 2015 at 10:58 AM, Bulat Gaifullin <bgaifullin at mirantis.com>
wrote:

> +1
>
> > On 24 Sep 2015, at 10:45, Tetiana Lashchova <tlashchova at mirantis.com>
> wrote:
> >
> > Hi folks!
> >
> > Some tests in murano code use incorrect order of arguments in
> assertEqual.
> > The correct order expected by the testtools is
> >
> >         def assertEqual(self, expected, observed, message=''):
> >             """Assert that 'expected' is equal to 'observed'.
> >
> >             :param expected: The expected value.
> >             :param observed: The observed value.
> >             :param message: An optional message to include in the error.
> >             """
> >
> > Error message has the following format:
> >
> > raise mismatch_error
> >     testtools.matchers._impl.MismatchError: !=:
> >     reference = <expected value>
> >     actual    = <observed value>
> >
> > Use of arguments in incorrect order could make debug output very
> confusing.
> > Let's fix it to make debugging easier.
> >
> > Best regards,
> > Tetiana Lashchova
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/ef4bf33b/attachment.html>

From Abhishek.Kekane at nttdata.com  Thu Sep 24 08:48:59 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Thu, 24 Sep 2015 08:48:59 +0000
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
In-Reply-To: <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
References: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
 <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D1794D@MAIL703.KDS.KEANE.COM>

Hi,

There is a bug for this; you can add the murano projects to it.

https://bugs.launchpad.net/heat/+bug/1259292

Thanks,

Abhishek Kekane

-----Original Message-----
From: Bulat Gaifullin [mailto:bgaifullin at mirantis.com] 
Sent: 24 September 2015 13:29
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

+1

> On 24 Sep 2015, at 10:45, Tetiana Lashchova <tlashchova at mirantis.com> wrote:
> 
> Hi folks!
> 
> Some tests in murano code use incorrect order of arguments in assertEqual.
> The correct order expected by the testtools is
> 
>         def assertEqual(self, expected, observed, message=''):
>             """Assert that 'expected' is equal to 'observed'.
> 
>             :param expected: The expected value.
>             :param observed: The observed value.
>             :param message: An optional message to include in the error.
>             """
> 
> Error message has the following format:
> 
> raise mismatch_error
>     testtools.matchers._impl.MismatchError: !=:
>     reference = <expected value>
>     actual    = <observed value>
> 
> Use of arguments in incorrect order could make debug output very confusing.
> Let's fix it to make debugging easier.
> 
> Best regards,
> Tetiana Lashchova
> ______________________________________________________________________
> ____ OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


From tom at openstack.org  Thu Sep 24 09:04:57 2015
From: tom at openstack.org (Tom Fifield)
Date: Thu, 24 Sep 2015 17:04:57 +0800
Subject: [openstack-dev] Mitaka travel tips ?
In-Reply-To: <5603B7CC.40009@openstack.org>
References: <CAH7C+Pq6eCJCvRXYs9C5AdzOdEJk+dAXX+Deu7-q+=e9iWsV_Q@mail.gmail.com>
 <5603B7CC.40009@openstack.org>
Message-ID: <5603BCB9.1030201@openstack.org>

On 24/09/15 16:43, Thierry Carrez wrote:
> David Moreau Simard wrote:
>> There was a travel tips document for the Kilo summit in Paris [1].
>> Lots of great helpful information in there not covered on the Openstack
>> Summit page [2] like where to get SIM cards and stuff.
>>
>> Is there one for Mitaka yet? I can't find it.
>
> There isn't one yet (that I know of). In Paris (and Hong-Kong) it was
> created by the local OpenStack user group, so hopefully the Japanese
> user group will set up something :)
>

I found some, buried in the FAQ!

https://www.openstack.org/summit/tokyo-2015/faq/#Category-5

But maybe we need a wiki page to collect more. I suggest:

https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Travel_Tips


Regards,


Tom


From rakhmerov at mirantis.com  Thu Sep 24 09:08:21 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Thu, 24 Sep 2015 15:08:21 +0600
Subject: [openstack-dev] [mistral] Etherpad for Tokyo design summit topics
Message-ID: <FD8DA244-4B03-4746-BF0C-1C7824F6FEA0@mirantis.com>

Hi,

I created an etherpad where you can suggest summit topics for Mistral: https://etherpad.openstack.org/p/mistral-tokyo-summit-2015 <https://etherpad.openstack.org/p/mistral-tokyo-summit-2015>


Renat Akhmerov
@ Mirantis Inc.



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/56c5d2f1/attachment.html>

From thierry at openstack.org  Thu Sep 24 10:06:11 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 24 Sep 2015 12:06:11 +0200
Subject: [openstack-dev] [Nova] [Trove] Liberty RC1 available
Message-ID: <5603CB13.8030204@openstack.org>

Hello everyone,

Nova and Trove just produced their first release candidate for the end
of the Liberty cycle. The RC1 tarballs, as well as a list of last-minute
features and fixed bugs since liberty-1 are available at:

https://launchpad.net/nova/liberty/liberty-rc1
https://launchpad.net/trove/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/nova/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/trove/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug
or
https://bugs.launchpad.net/trove/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Nova and Trove are now officially
open for Mitaka development, so feature freeze restrictions no longer
apply there.

Regards,

-- 
Thierry Carrez (ttx)


From vkuklin at mirantis.com  Thu Sep 24 10:17:58 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Thu, 24 Sep 2015 13:17:58 +0300
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <20150924014919.GA6291@localhost>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
 <20150919010735.GB16012@localhost>
 <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>
 <CAKYN3rMgm8iKmTCb6BknV=UsM6W-zBeW1Ca9JzZ+a=Y80taODQ@mail.gmail.com>
 <CAHAWLf11d3Krn3Y9_EpSeR_07OsHfp6TEHcVmtLh+7vpD00ShA@mail.gmail.com>
 <20150924014919.GA6291@localhost>
Message-ID: <CAHAWLf05fOy_NJKAwgmRKZSGu1ELeVYvn_V=ttqqjNPnb=d8Sw@mail.gmail.com>

Dmitry

Thank you for the clarification, but my questions still remain unanswered,
unfortunately. It seems I did not phrase them correctly.

1) For each of the positions, which set of git repositories should I run
this command against? E.g., contributors to which stackforge/fuel-*
projects are electing the PTL or CLs?
2) Who votes for component leads? Mike's email says it is core
reviewers. Our previous IRC meeting mentioned all contributors to the
particular components. The documentation link you sent mentions all
contributors to Fuel projects. Whom should I trust? What is the final
version? Is it fine that a documentation contributor is eligible to
nominate himself and vote for the Library Component Lead?

Until there is a clear and sealed answer to these questions, we do not
have a list of people who can vote and who can nominate. Let's get this
clear at least before the PTL election starts.

On Thu, Sep 24, 2015 at 4:49 AM, Dmitry Borodaenko <dborodaenko at mirantis.com
> wrote:

> Vladimir,
>
> Sergey's initial email from this thread has a link to the Fuel elections
> wiki page that describes the exact procedure to determine the electorate
> and the candidates [0]:
>
>     The electorate for a given PTL and Component Leads election are the
>     Foundation individual members that are also committers for one of
>     the Fuel team's repositories over the last year timeframe (September
>     18, 2014 06:00 UTC to September 18, 2015 05:59 UTC).
>
>     ...
>
>     Any member of an election electorate can propose their candidacy for
>     the same election.
>
> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015#Electorate
>
> If you follow more links from that page, you will find the Governance
> page [1] and from there the Election Officiating Guidelines [2] that
> provide a specific shell one-liner to generate that list:
>
>     git log --pretty=%aE --since '1 year ago' | sort -u
>
> [1] https://wiki.openstack.org/wiki/Governance
> [2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
>
> As I have specified in the proposed Team Structure policy document [3],
> this is the same process that is used by other OpenStack projects.
>
> [3] https://review.openstack.org/225376
>
> Having a different release schedule is not a sufficient reason for Fuel
> to reinvent the wheel; for example, the OpenStack Infrastructure project
> doesn't even have a release schedule for many of its deliverables, and
> still follows the same election schedule as the rest of OpenStack:
>
> [4] http://governance.openstack.org/reference/projects/infrastructure.html
>
> Let's keep things simple.
>
> --
> Dmitry Borodaenko
>
>
> On Wed, Sep 23, 2015 at 01:27:07PM +0300, Vladimir Kuklin wrote:
> > Dmitry, Mike
> >
> > Thank you for the list of usable links.
> >
> > But still - we do not have clearly defined procedure on determening who
> is
> > eligible to nominate and vote for PTL and Component Leads. Remember, that
> > Fuel still has different release cycle and Kilo+Liberty contributors list
> > is not exactly the same for "365days" contributors list.
> >
> > Can we finally come up with the list of people eligible to nominate and
> > vote?
> >
> > On Sun, Sep 20, 2015 at 2:37 AM, Mike Scherbakov <
> mscherbakov at mirantis.com>
> > wrote:
> >
> > > Let's move on.
> > > I started work on MAINTAINERS files, proposed two patches:
> > > https://review.openstack.org/#/c/225457/1
> > > https://review.openstack.org/#/c/225458/1
> > >
> > > These can be used as templates for other repos / folders.
> > >
> > > Thanks,
> > >
> > > On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas <davanum at gmail.com>
> > > wrote:
> > >
> > >> +1 Dmitry
> > >>
> > >> -- Dims
> > >>
> > >> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
> > >> dborodaenko at mirantis.com> wrote:
> > >>
> > >>> Dims,
> > >>>
> > >>> Thanks for the reminder!
> > >>>
> > >>> I've summarized the uncontroversial parts of that thread in a policy
> > >>> proposal as per you suggestion [0], please review and comment. I've
> > >>> renamed SMEs to maintainers since Mike has agreed with that part,
> and I
> > >>> omitted code review SLAs from the policy since that's the part that
> has
> > >>> generated the most discussion.
> > >>>
> > >>> [0] https://review.openstack.org/225376
> > >>>
> > >>> I don't think we should postpone the election: the PTL election
> follows
> > >>> the same rules as OpenStack so we don't need a Fuel-specific policy
> for
> > >>> that, and the component leads election doesn't start until October 9,
> > >>> which gives us 3 weeks to confirm consensus on that aspect of the
> > >>> policy.
> > >>>
> > >>> --
> > >>> Dmitry Borodaenko
> > >>>
> > >>>
> > >>> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
> > >>> > Sergey,
> > >>> >
> > >>> > Please see [1]. Did we codify some of these roles and
> responsibilities
> > >>> as a
> > >>> > community in a spec? There was also a request to use terminology
> like
> > >>> say
> > >>> > MAINTAINERS in that email as well.
> > >>> >
> > >>> > Are we pulling the trigger a bit early for an actual election?
> > >>> >
> > >>> > Thanks,
> > >>> > Dims
> > >>> >
> > >>> > [1] http://markmail.org/message/2ls5obgac6tvcfss
> > >>> >
> > >>> > On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <
> vkuklin at mirantis.com
> > >>> >
> > >>> > wrote:
> > >>> >
> > >>> > > Sergey, Fuelers
> > >>> > >
> > >>> > > This is awesome news!
> > >>> > >
> > >>> > > By the way, I have a question on who is eligible to vote and to
> > >>> nominate
> > >>> > > him/her-self for both PTL and Component Leads. Could you
> elaborate
> > >>> on that?
> > >>> > >
> > >>> > > And there is no such entity as Component Lead in OpenStack - so
> we
> > >>> are
> > >>> > > actually creating one. What are the new rights and
> responsibilities
> > >>> of CL?
> > >>> > >
> > >>> > > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <
> > >>> slukjanov at mirantis.com>
> > >>> > > wrote:
> > >>> > >
> > >>> > >> Hi folks,
> > >>> > >>
> > >>> > >> I'd like to announce that we're running the PTL and Component
> Leads
> > >>> > >> elections. Detailed information available on wiki. [0]
> > >>> > >>
> > >>> > >> Project Team Lead: Manages day-to-day operations, drives the
> project
> > >>> > >> team goals, resolves technical disputes within the project
> team. [1]
> > >>> > >>
> > >>> > >> Component Lead: Defines architecture of a module or component in
> > >>> Fuel,
> > >>> > >> reviews design specs, merges majority of commits and resolves
> > >>> conflicts
> > >>> > >> between Maintainers or contributors in the area of
> responsibility.
> > >>> [2]
> > >>> > >>
> > >>> > >> Fuel has two large sub-teams, with roughly comparable codebases,
> > >>> that
> > >>> > >> need dedicated component leads: fuel-library and fuel-python.
> [2]
> > >>> > >>
> > >>> > >> Nominees propose their candidacy by sending an email to the
> > >>> > >> openstack-dev at lists.openstack.org mailing-list, with the
> subject:
> > >>> > >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
> > >>> > >> (for example, "[fuel] fuel-library lead candidacy").
> > >>> > >>
> > >>> > >> Time line:
> > >>> > >>
> > >>> > >> PTL elections
> > >>> > >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
> > >>> position
> > >>> > >> * September 29 - October 8: PTL elections
> > >>> > >>
> > >>> > >> Component leads elections (fuel-library and fuel-python)
> > >>> > >> * October 9 - October 15: Open candidacy for Component leads
> > >>> positions
> > >>> > >> * October 16 - October 22: Component leads elections
> > >>> > >>
> > >>> > >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
> > >>> > >> [1] https://wiki.openstack.org/wiki/Governance
> > >>> > >> [2]
> > >>> > >>
> > >>>
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> > >>> > >> [3] https://lwn.net/Articles/648610/
> > >>> > >>
> > >>> > >> --
> > >>> > >> Sincerely yours,
> > >>> > >> Sergey Lukjanov
> > >>> > >> Sahara Technical Lead
> > >>> > >> (OpenStack Data Processing)
> > >>> > >> Principal Software Engineer
> > >>> > >> Mirantis Inc.
> > >>> > >>
> > >>> > >>
> > >>>
> __________________________________________________________________________
> > >>> > >> OpenStack Development Mailing List (not for usage questions)
> > >>> > >> Unsubscribe:
> > >>> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >>> > >>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>> > >>
> > >>> > >>
> > >>> > >
> > >>> > >
> > >>> > > --
> > >>> > > Yours Faithfully,
> > >>> > > Vladimir Kuklin,
> > >>> > > Fuel Library Tech Lead,
> > >>> > > Mirantis, Inc.
> > >>> > > +7 (495) 640-49-04
> > >>> > > +7 (926) 702-39-68
> > >>> > > Skype kuklinvv
> > >>> > > 35bk3, Vorontsovskaya Str.
> > >>> > > Moscow, Russia,
> > >>> > > www.mirantis.com <http://www.mirantis.ru/>
> > >>> > > www.mirantis.ru
> > >>> > > vkuklin at mirantis.com
> > >>> > >
> > >>> > >
> > >>>
> __________________________________________________________________________
> > >>> > > OpenStack Development Mailing List (not for usage questions)
> > >>> > > Unsubscribe:
> > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >>> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>> > >
> > >>> > >
> > >>> >
> > >>> >
> > >>> > --
> > >>> > Davanum Srinivas :: https://twitter.com/dims
> > >>>
> > >>> >
> > >>>
> __________________________________________________________________________
> > >>> > OpenStack Development Mailing List (not for usage questions)
> > >>> > Unsubscribe:
> > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>>
> > >>>
> > >>>
> > >>>
> __________________________________________________________________________
> > >>> OpenStack Development Mailing List (not for usage questions)
> > >>> Unsubscribe:
> > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> Davanum Srinivas :: https://twitter.com/dims
> > >>
> __________________________________________________________________________
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > > --
> > > Mike Scherbakov
> > > #mihgen
> > >
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com <http://www.mirantis.ru/>
> > www.mirantis.ru
> > vkuklin at mirantis.com
>
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/79ec7a2e/attachment.html>

From akurilin at mirantis.com  Thu Sep 24 10:18:50 2015
From: akurilin at mirantis.com (Andrey Kurilin)
Date: Thu, 24 Sep 2015 13:18:50 +0300
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D7890D1794D@MAIL703.KDS.KEANE.COM>
References: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
 <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
 <E1FB4937BE24734DAD0D1D4E4E506D7890D1794D@MAIL703.KDS.KEANE.COM>
Message-ID: <CAEVmkayNpEtY8mKapyPRDg6wfApR6SfBORPx71yMDhefbv92eQ@mail.gmail.com>

Hi everyone!

I agree that the wrong order of arguments is misleading when debugging
errors, BUT how can we prevent regressions? IMO, it is not a good idea to
make patches like https://review.openstack.org/#/c/64415/ in each release
(without a check in CI, such patches are redundant).


PS: This question applies not only to murano.
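
One way such a CI check could look (a sketch, not an existing hacking
rule; the heuristic and function name are mine): flag assertEqual calls
whose only literal argument sits in the second, i.e. observed, position,
which usually means expected/observed were swapped.

```python
import ast

def find_suspect_assert_equal(source):
    """Heuristically flag assertEqual calls that pass a literal as the
    second (observed) argument while the first is not a literal,
    suggesting the expected/observed order was reversed."""
    suspects = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "assertEqual"
                and len(node.args) >= 2):
            first, second = node.args[:2]
            # ast.Constant covers numbers, strings, None, booleans (py3.8+)
            if (not isinstance(first, ast.Constant)
                    and isinstance(second, ast.Constant)):
                suspects.append(node.lineno)
    return suspects

code = (
    "self.assertEqual(result, 42)\n"   # likely swapped: flagged
    "self.assertEqual(42, result)\n"   # conventional order: not flagged
)
print(find_suspect_assert_equal(code))  # [1]
```

Wired into a flake8/hacking plugin, this would catch most regressions
without having to re-sweep the tree every release, at the cost of some
false negatives when both arguments are variables.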

On Thu, Sep 24, 2015 at 11:48 AM, Kekane, Abhishek <
Abhishek.Kekane at nttdata.com> wrote:

> Hi,
>
> There is a bug for this, you can add murano projects to this bug.
>
> https://bugs.launchpad.net/heat/+bug/1259292
>
> Thanks,
>
> Abhishek Kekane
>
> -----Original Message-----
> From: Bulat Gaifullin [mailto:bgaifullin at mirantis.com]
> Sent: 24 September 2015 13:29
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [murano] Fix order of arguments in assertEqual
>
> +1
>
> > On 24 Sep 2015, at 10:45, Tetiana Lashchova <tlashchova at mirantis.com>
> wrote:
> >
> > Hi folks!
> >
> > Some tests in murano code use incorrect order of arguments in
> assertEqual.
> > The correct order expected by the testtools is
> >
> >         def assertEqual(self, expected, observed, message=''):
> >             """Assert that 'expected' is equal to 'observed'.
> >
> >             :param expected: The expected value.
> >             :param observed: The observed value.
> >             :param message: An optional message to include in the error.
> >             """
> >
> > Error message has the following format:
> >
> > raise mismatch_error
> >     testtools.matchers._impl.MismatchError: !=:
> >     reference = <expected value>
> >     actual    = <observed value>
> >
> > Use of arguments in incorrect order could make debug output very
> confusing.
> > Let's fix it to make debugging easier.
> >
> > Best regards,
> > Tetiana Lashchova
> > ______________________________________________________________________
> > ____ OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ______________________________________________________________________
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/2663dce2/attachment.html>

From davanum at gmail.com  Thu Sep 24 10:27:31 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 24 Sep 2015 06:27:31 -0400
Subject: [openstack-dev] [oslo][doc] Oslo doc sprint 9/24-9/25
In-Reply-To: <56033513.90108@anteaya.info>
References: <CAHjMoV0vx=mEcnsJHyq29hwoAA6oq9fwdXYqVFGttVCwLEeicg@mail.gmail.com>
 <CANw6fcFgQUGnkUW7x7CkyDeza8c+VXAYtosXqOmvK6-ug6_hjg@mail.gmail.com>
 <56033513.90108@anteaya.info>
Message-ID: <CANw6fcHGjGF+OnQFuZww56rf6onaiQct2XKpQdKixfdAvDk7MA@mail.gmail.com>

Thanks Anita, +1 to switching to the #openstack-sprint channel - I've
updated the wiki page.

-- Dims

On Wed, Sep 23, 2015 at 7:26 PM, Anita Kuno <anteaya at anteaya.info> wrote:

> On 09/23/2015 07:18 PM, Davanum Srinivas wrote:
> > Reminder, we are doing the Doc Sprint tomorrow. Please help out with what
> > ever item or items you can.
> >
> > Thanks,
> > Dims
> >
> > On Wed, Sep 16, 2015 at 5:40 PM, James Carey <
> bellerophon at flyinghorsie.com>
> > wrote:
> >
> >> In order to improve the Oslo libraries documentation, the Oslo team is
> >> having a documentation sprint from 9/24 to 9/25.
> >>
> >> We'll kick things off at 14:00 UTC on 9/24 in the
> >> #openstack-oslo-docsprint IRC channel and we'll use an etherpad [0].
>
> Have you considered using the #openstack-sprint channel, which can be
> booked here: https://wiki.openstack.org/wiki/VirtualSprints
>
> and was created for just this kind of occasion. Also it has channel
> logging, helpful for those trying to co-ordinate across timezones.
>
> May you have a good sprint,
> Anita.
>
> >>
> >> All help is appreciated.   If you can help or have suggestions for
> >> areas of focus, please update the etherpad.
> >>
> >> [0] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> >
> >
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/45017d44/attachment.html>

From sean at dague.net  Thu Sep 24 10:59:06 2015
From: sean at dague.net (Sean Dague)
Date: Thu, 24 Sep 2015 06:59:06 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <m0pp18clj3.fsf@danjou.info>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
 <5602ECBF.2020900@dague.net> <m0a8scerdk.fsf@danjou.info>
 <CAHiyDT8_2_3Y72On6RxCQmk+tJo6JDdKAdM_680MLAPz_hLdaQ@mail.gmail.com>
 <m0pp18clj3.fsf@danjou.info>
Message-ID: <5603D77A.6020803@dague.net>

On 09/24/2015 03:40 AM, Julien Danjou wrote:
> On Thu, Sep 24 2015, Jamie Lennox wrote:
> 
> Hi Jamie,
> 
>> So this is a long thread and I may have missed something in it,
>> however this exact topic came up as a blocker on a devstack patch to
>> get TLS testing in the gate with HAproxy.
>>
>> The long term solution we had come up with (but granted not proposed
>> anywhere public) is that we should transition services to use relative
>> links.
> 
> This would be a good solution too indeed, but I'm not sure it's *always*
> doable.
> 
>> As far as I'm aware this is only a problem within the services
>> themselves as the URL they receive is not what was actually requested
>> if it went via HAproxy. It is not a problem with interservice requests
>> because they should get URLs from the service catalog (or otherwise
>> not display them to the user). Which means that this generally affects
>> the version discovery page, and "links" from resources to things like a
>> next, prev, and base URL.
> 
> Yes, but what we were saying is that this is fixable by using HTTP
> headers that the proxy sets, and translating them to a correct WSGI
> environment. Basically, that will make WSGI think it's the front-end,
> so it'll build URLs correctly for the outer world.
> 
>> Is there a reason we can't transition this to use a relative URL
>> possibly with a django style WEBROOT so that a discovery response
>> returned /v2.0 and /v3 rather than the fully qualified URL and the
>> clients be smart enough to figure this out?
> 
> We definitely can do that, but there is still a use case that would not
> be covered without a configuration somewhere which is:
>   e.g. http://foobar/myservice/v3 -> http://myservice/v3
> 
> If you return an absolute /v3, it won't work. :)

It's also a pretty serious change in document content. We've been
returning absolute URLs forever, so assuming that all the client code
out there would work with relative URLs is a really big assumption.
That's a major API bump for sure.

And it seems like we have enough pieces here to get something better
with the proxy headers (which could happen early in Mitaka) and to fill
in the remaining bits if we clean up the service catalogue use.
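For illustration, the header-to-WSGI translation Julien describes might be sketched as a small middleware (an illustrative sketch, not an actual OpenStack module; it assumes the proxy sets the conventional X-Forwarded-Proto / X-Forwarded-Host headers):

```python
from wsgiref.util import application_uri


class ProxyHeadersMiddleware(object):
    """Rewrite the WSGI environment from X-Forwarded-* headers so the
    wrapped application builds absolute URLs as the outside world sees
    them, even behind an SSL-terminating proxy."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        proto = environ.get('HTTP_X_FORWARDED_PROTO')
        if proto in ('http', 'https'):
            environ['wsgi.url_scheme'] = proto
        host = environ.get('HTTP_X_FORWARDED_HOST')
        if host:
            environ['HTTP_HOST'] = host
        return self.app(environ, start_response)


def base_url_seen_by_app(environ):
    """Run a trivial app behind the middleware and report the base URL
    it would put into, e.g., a version-discovery document."""
    seen = {}

    def app(env, start_response):
        seen['base'] = application_uri(env)
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'']

    list(ProxyHeadersMiddleware(app)(environ, lambda *args: None))
    return seen['base']
```

Wrapped this way, an app that builds discovery URLs from the WSGI environment would advertise the proxy's scheme and host rather than its internal backend address.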

	-Sean


-- 
Sean Dague
http://dague.net

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/55d0456f/attachment.pgp>

From dougal at redhat.com  Thu Sep 24 11:12:04 2015
From: dougal at redhat.com (Dougal Matthews)
Date: Thu, 24 Sep 2015 12:12:04 +0100
Subject: [openstack-dev] [TripleO] Remove Tuskar from tripleo-common and
	python-tripleoclient
In-Reply-To: <CAPMB-2TmL=ZdmA7mJjTBs5SjPqEfNi-pzf09dMGbay5AYj8Rew@mail.gmail.com>
References: <CAPMB-2TmL=ZdmA7mJjTBs5SjPqEfNi-pzf09dMGbay5AYj8Rew@mail.gmail.com>
Message-ID: <CAPMB-2RrJKAeWHXfW3rsma6Oqu4eFP4jh+XkfemX9dbQbbh5mg@mail.gmail.com>

On 15 September 2015 at 17:33, Dougal Matthews <dougal at redhat.com> wrote:

>  [snip]
>
>
[2]: https://review.openstack.org/223527
> [3]: https://review.openstack.org/223535
> [4]: https://review.openstack.org/223605
>

For anyone interested in reviewing this work, the above reviews are now
ready for feedback and should be otherwise complete.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/dcfecd82/attachment.html>

From sathlang at redhat.com  Thu Sep 24 11:52:55 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Thu, 24 Sep 2015 13:52:55 +0200
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <56032009.5020103@redhat.com> (Emilien Macchi's message of "Wed, 
 23 Sep 2015 17:56:25 -0400")
References: <56032009.5020103@redhat.com>
Message-ID: <87mvwc6nl4.fsf@s390.unix4.net>

Emilien Macchi <emilien at redhat.com> writes:

> Background
> ==========
>
> Current rspec tests are run with the modules mentioned in the
> .fixtures.yaml file of each module.
>
> * the file is not consistent across all modules
> * it hardcodes module names & versions

IMHO, this alone justifies it.

> * this way does not allow using the "Depends-On" feature, which would
> allow testing cross-module patches
>
> Proposal
> ========
>
> * Like we do in beaker & integration jobs, use zuul-cloner to clone
> modules in our CI jobs.
> * Use r10k to prepare fixtures modules.
> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>
> In that way:
> * we will have modules name + versions testing consistency across all
> modules
> * the same Puppetfile would be used by unit/beaker/integration testing.
> * the patch that pass tests on your laptop would pass tests in upstream CI
> * if you don't have zuul-cloner on your laptop, don't worry: it will
> fall back to git clone, though you won't have the Depends-On feature
> working on your laptop (technically not possible).
> * your patch will still support Depends-On in OpenStack Infra for unit
> tests: if you submit a patch in puppet-openstacklib that drops something
> by mistake, you can send a patch in puppet-nova that depends on it, and
> its unit tests will fail.

+1

>
> Drawbacks
> =========
> * cloning from .fixtures.yaml takes ~10 seconds
> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>
> I think the extra 40 seconds is acceptable given the benefit.

Especially if one is using this workflow:
 1. rake spec_prep and then:
    - rake spec_standalone;
    - rake spec_standalone;
    - rake spec_standalone;
    - ...

So it's a one time 40 seconds.

>
> Next steps
> ==========
>
> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
> * Patch openstack/puppet-modulesync-config to be consistent across all
> our modules.
>
> Bonus
> =====
> we might need (asap) a canary job for puppet-openstack-integration
> repository, that would run tests on a puppet-* module (since we're using
> install_modules.sh & Puppetfile files in puppet-* modules).
> Nothing has been done yet for this work.
>
>
> Thoughts?

-- 
Sofer Athlan-Guyot


From rbryant at redhat.com  Thu Sep 24 12:05:29 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Thu, 24 Sep 2015 08:05:29 -0400
Subject: [openstack-dev] [neutron][networking-ovn][vtep] Proposal:
 support for vtep-gateway in ovn
In-Reply-To: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>
References: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>
Message-ID: <5603E709.2020602@redhat.com>

On 09/24/2015 01:17 AM, Amitabha Biswas wrote:
> Hi everyone,
> 
> I want to open up the discussion regarding how to support OVN
> VTEP gateway deployment and its lifecycle in Neutron. 

Thanks a lot for looking into this!

> In the "Life Cycle of a VTEP gateway" part in the OVN architecture
> document (http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf),
> step 3 is where the Neutron OVN plugin is involved. At a minimum, the
> Neutron OVN plugin will enable setting the type as "vtep" and the
> vtep-logical-switch and vtep-physical-switch options in the
> OVN_Northbound database.

I have the docs published there just to make it easier to read the
rendered version.  The source of that document is:

https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml

> There are 2 parts to the proposal/discussion - a short term solution and
> a long term one:
> 
> A short term solution (proposed by Russell Bryant) is similar to the
> work that was done for container support in OVN - using a binding
> profile http://networking-ovn.readthedocs.org/en/latest/containers.html.
> A ovn logical network/switch can be mapped to a vtep logical gateway by
> creating a port in that logical network and creating a binding profile
> for that port in the following manner:
> 
> neutron port-create --binding-profile
> '{"vtep-logical-switch":"vtep_lswitch_key",
> "vtep-physical-switch":"vtep_pswitch_key"}' private.
> 
> Where vtep-logical-switch and vtep-physical-switch should have been
> defined in the OVN_Southbound database by the previous steps (1,2) in
> the life cycle. 

Yes, this sounds great to me.  Since there's not a clear well accepted
API to use, we should go this route to get the functionality exposed
more quickly.  We should also include in our documentation that this is
not expected to be how this is done long term.
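For anyone scripting this rather than using the CLI, the short-term approach boils down to putting the two vtep keys into the port's binding profile. A minimal sketch (the helper name and values are illustrative; the dict shape is what python-neutronclient's create_port() takes):

```python
# Sketch of building the port body for the short-term binding:profile
# approach.  Helper name and example values are hypothetical;
# 'binding:profile' is the real Neutron port attribute.
def vtep_port_body(network_id, vtep_logical_switch, vtep_physical_switch):
    return {
        'port': {
            'network_id': network_id,
            'binding:profile': {
                # Both keys must already exist in the OVN_Southbound
                # database (steps 1-2 of the VTEP gateway life cycle).
                'vtep-logical-switch': vtep_logical_switch,
                'vtep-physical-switch': vtep_physical_switch,
            },
        },
    }
```

which would then be passed as neutron_client.create_port(vtep_port_body(...)), mirroring the neutron port-create command above.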

The comparison to the containers-in-VMs support is a good one.  In that
case we used binding:profile as a quick way to expose it, but we're
aiming to support a proper API.  For that feature, we've identified the
"VLAN aware VMs" API as the way forward, which will hopefully be
available next cycle.

> For the longer term solution, there needs to be a discussion:
> 
> Should the knowledge about the physical and logical vtep gateway be
> exposed to Neutron - if yes, how? This would allow a Neutron NB
> API/extension to bind a "known" vtep gateway to the neutron logical
> network. This would be similar to the workflow done in the
> networking-l2gw extension
> https://review.openstack.org/#/c/144173/3/specs/kilo/l2-gateway-api.rst
> 
> 1. Allow the admin to define and manage the vtep gateway through Neutron
> REST API.
> 
> 2. Define connections between Neutron networks and gateways. This is
> conceptually similar to Step 3 of the vtep gateway life cycle performed
> by the OVN Plugin in the short term solution.

networking-l2gw does seem to be the closest thing to what's needed, but
it's not a small amount of work.  I think the API might need to be
extended a bit for our needs.  A bigger concern for me is actually with
some of the current implementation details.

One particular issue is that the project implements the ovsdb protocol
from scratch.  The ovs project provides a Python library for this.  Both
Neutron and networking-ovn use it, at least.  From some discussion, I've
gathered that the ovs Python library lacked one feature that was needed,
but has since been added because we wanted the same thing in networking-ovn.

The networking-l2gw route will require some pretty significant work.
It's still the closest existing effort, so I think we should explore it
until it's absolutely clear that it *can't* work for what we need.

> OR
> 
> Should OVN pursue its own Neutron extension (including vtep gateway
> support)?

I don't think this option provides a lot of value over the short term
binding:profile solution.  Both are OVN specific.  I think I'd rather
just stick to binding:profile as the OVN specific stopgap because it's a
*lot* less work.

Thanks again,

-- 
Russell Bryant


From gkotton at vmware.com  Thu Sep 24 12:10:56 2015
From: gkotton at vmware.com (Gary Kotton)
Date: Thu, 24 Sep 2015 12:10:56 +0000
Subject: [openstack-dev] [all] Gerrit performance
Message-ID: <D229C302.BF978%gkotton@vmware.com>

Hi,
Anyone else experiencing bad performance with Gerrit at the moment? Accessing files in a review takes ages. So now the review cycle will be months instead of weeks ....
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/dd581fb2/attachment.html>

From mangelajo at redhat.com  Thu Sep 24 12:18:44 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Thu, 24 Sep 2015 14:18:44 +0200
Subject: [openstack-dev] [all] Gerrit performance
In-Reply-To: <D229C302.BF978%gkotton@vmware.com>
References: <D229C302.BF978%gkotton@vmware.com>
Message-ID: <5603EA24.2060907@redhat.com>

I am experiencing it, yes :/

Gary Kotton wrote:
> Hi,
> Anyone else experiencing bad performance with Gerrit at the moment? Accessing files in a review takes ages. So now the review cycle will be months instead of weeks ....
> Thanks
> Gary
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From Neil.Jerram at metaswitch.com  Thu Sep 24 12:28:25 2015
From: Neil.Jerram at metaswitch.com (Neil Jerram)
Date: Thu, 24 Sep 2015 12:28:25 +0000
Subject: [openstack-dev] [all] Gerrit performance
References: <D229C302.BF978%gkotton@vmware.com><5603EA24.2060907@redhat.com>
Message-ID: <SN1PR02MB16953FFF3974743FA6BA19E899430@SN1PR02MB1695.namprd02.prod.outlook.com>

Yes, a 'git review' just took around 20s for me.

On 24/09/15 13:21, Miguel Angel Ajo wrote:
>I am experiencing it, yes :/
>>Gary Kotton wrote:
>>Hi,
>>Anyone else experiencing bad performance with Gerrit at the moment? Accessing files in a review takes ages. So now the review cycle will be months instead of weeks ....
>>Thanks
>>Gary
>>>>__________________________________________________________________________
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From vikschw at gmail.com  Thu Sep 24 12:42:19 2015
From: vikschw at gmail.com (Vikram Choudhary)
Date: Thu, 24 Sep 2015 18:12:19 +0530
Subject: [openstack-dev] [all] Gerrit performance
In-Reply-To: <SN1PR02MB16953FFF3974743FA6BA19E899430@SN1PR02MB1695.namprd02.prod.outlook.com>
References: <D229C302.BF978%gkotton@vmware.com> <5603EA24.2060907@redhat.com>
 <SN1PR02MB16953FFF3974743FA6BA19E899430@SN1PR02MB1695.namprd02.prod.outlook.com>
Message-ID: <CAFeBh8t7T6b3=fRexGHEa7ACyYXKmRjEy=TxZNktr9mOhq70CQ@mail.gmail.com>

+1 for me as well :(

On Thu, Sep 24, 2015 at 5:58 PM, Neil Jerram <Neil.Jerram at metaswitch.com>
wrote:

> Yes, a 'git review' just took around 20s for me.
>
> On 24/09/15 13:21, Miguel Angel Ajo wrote:
> >I am experiencing it, yes :/
> >>Gary Kotton wrote:
> >>Hi,
> >>Anyone else experiencing bad performance with Gerrit at the moment?
> Accessing files in a review takes ages. So now the review cycle will be months
> instead of weeks ....
> >>Thanks
> >>Gary
>
> >>>>__________________________________________________________________________
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> >>__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/229c58c1/attachment.html>

From tony at bakeyournoodle.com  Thu Sep 24 12:55:47 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 24 Sep 2015 22:55:47 +1000
Subject: [openstack-dev] [all][stable][release] 2015.1.2
In-Reply-To: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
Message-ID: <20150924125547.GA11465@thor.bakeyournoodle.com>

On Wed, Sep 23, 2015 at 08:47:31PM -0400, Chuck Short wrote:
> Hi,
> 
> We would like to do a stable/kilo branch release, next Thursday. In order
> to do that I would like to freeze the branches on Friday. Cut some test
> tarballs on Tuesday and release on Thursday. Does anyone have an opinion on
> this?

I'm trying to fix a series of issues in Juno, and it is resulting in
global-requirements changes for kilo.  I hope to have them settled by this
time next week.

I think it'd be good, but not essential, for them to be in 2015.1.2.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/93de2cdc/attachment.pgp>

From jspray at redhat.com  Thu Sep 24 13:49:17 2015
From: jspray at redhat.com (John Spray)
Date: Thu, 24 Sep 2015 14:49:17 +0100
Subject: [openstack-dev] [Manila] CephFS native driver
Message-ID: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John


From mriedem at linux.vnet.ibm.com  Thu Sep 24 14:06:12 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 24 Sep 2015 09:06:12 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <5603B228.3070502@redhat.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <5603B228.3070502@redhat.com>
Message-ID: <56040354.50404@linux.vnet.ibm.com>



On 9/24/2015 3:19 AM, Sylvain Bauza wrote:
>
>
> On 24/09/2015 09:04, Duncan Thomas wrote:
>> Hi
>>
>> I thought I was late on this thread, but looking at the time stamps,
>> it is just something that escalated very quickly. I am honestly
>> surprised a cross-project interaction option went from 'we don't seem
>> to understand this' to 'deprecation merged' in 4 hours, with only a 12
>> hour discussion on the mailing list, right at the end of a cycle when
>> we're supposed to be stabilising features.
>>
>
> So, I agree it was maybe a bit too quick hence the revert. That said,
> Nova master is now Mitaka, which means that the deprecation change was
> provided for the next cycle, not the one currently stabilising.
>
> Anyway, I'm really all up with discussing why Cinder needs to know the
> Nova AZs.
>
>> I proposed a session at the Tokyo summit for a discussion of Cinder
>> AZs, since there was clear confusion about what they are intended for
>> and how they should be configured.
>
> Cool, count me in from the Nova standpoint.
>
>> Since then I've reached out to and gotten good feedback from, a number
>> of operators. There are two distinct configurations for AZ behaviour
>> in cinder, and both sort-of worked until very recently.
>>
>> 1) No AZs in cinder
>> This is the config where a single 'blob' of storage (most of the
>> operators who responded so far are using Ceph, though that isn't
>> required). The storage takes care of availability concerns, and any AZ
>> info from nova should just be ignored.
>>
>> 2) Cinder AZs map to Nova AZs
>> In this case, some combination of storage / networking / etc couples
>> storage to nova AZs. It may be that an AZ is used as a unit of
>> scaling, or it could be a real storage failure domain. Either way,
>> there are a number of operators who have this configuration and want
>> to keep it. Storage can certainly have a failure domain, and limiting
>> the scalability problem of storage to a single compute AZ can have
>> definite advantages in failure scenarios. These people do not want
>> cross-az attach.
>>
>
> Ahem, Nova AZs are not failure domains - I mean in the current
> implementation, in the sense in which many people understand a failure
> domain, ie. a physical unit of machines (a bay, a room, a floor, a
> datacenter).
> All the AZs in Nova share the same controlplane with the same message
> queue and database, which means that one failure can be propagated to
> the other AZ.
>
> To be honest, there is one very specific use case where AZs *are* failure
> domains: when cells exactly match AZs (ie. one AZ grouping all the
> hosts behind one cell). That's the very specific use case that Sam is
> mentioning in his email, and I certainly understand we need to keep that.
>
> What are AZs in Nova is pretty well explained in a quite old blogpost :
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
>
> We also added a few comments in our developer doc here
> http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs
>
> tl;dr: AZs are aggregate metadata that makes those aggregates of compute
> nodes visible to the users. Nothing more than that, no magic sauce.
> That's just a logical abstraction that can be mapping your physical
> deployment, but like I said, which would share the same bus and DB.
> Of course, you could still provide networks distinct between AZs but
> that just gives you the L2 isolation, not the real failure domain in a
> Business Continuity Plan way.
>
> What puzzles me is how Cinder manages a datacenter level of
> isolation given there is no cells concept AFAIK. I assume that
> cinder-volumes belong to a specific datacenter, but how is its
> control plane managed? I can certainly understand the need for affinity
> placement between physical units, but I'm missing that piece, and
> consequently I wonder why Nova needs to provide AZs to Cinder in the
> general case.
>
>
>
>> My hope at the summit session was to agree these two configurations,
>> discuss any scenarios not covered by these two configuration, and nail
>> down the changes we need to get these to work properly. There's
>> definitely been interest and activity in the operator community in
>> making nova and cinder AZs interact, and every desired interaction
>> I've gotten details about so far matches one of the above models.
>>
>
> I'm all with you about providing a way for users to get volume affinity
> for Nova. That's a long story I'm trying to consider and we are
> constantly trying to improve the nova scheduler interfaces so that other
> projects could provide resources to the nova scheduler for decision
> making. I just want to consider whether AZs are the best concept for
> that or we should do thing by other ways (again, because AZs are not
> what people expect).
>
> Again, count me in for the Cinder session, and just lemme know when the
> session is planned so I could attend it.
>
> -Sylvain
>
>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I plan on reverting the deprecation change (which was a mitaka change, 
not a liberty change, as Sylvain pointed out).

However, the fact that so many nova and cinder cores were talking about 
this yesterday and thought deprecation was the right thing to do speaks 
to how poorly understood (and undocumented) this use case is.  So as 
part of reverting the deprecation I also want to see improved docs for 
the cross_az_attach option itself, and probably a nova devref change 
explaining the use cases and issues with it.

I think the volume attach case is pretty straightforward.  You create a 
nova instance in some nova AZ x and create a cinder volume in some 
cinder AZ y and try to attach the volume to the server instance.  If 
cinder.cross_az_attach=True this is OK, else it fails.

The problem I have is with the boot from volume case where 
source=(blank/image/snapshot).  In those cases nova is creating the 
volume and passing the server instance AZ to the volume create API.  How 
are people that are using cinder.cross_az_attach=False handling the BFV 
case?

Per bug 1496235 that started this, the user is booting a nova instance 
in a nova AZ with bdm source=image and when nova tries to create the 
volume it fails because that AZ doesn't exist in cinder.  This fails in 
the compute manager when building the instance, so this results in a 
NoValidHost error for the user - which we all know and love as a super 
useful error.  So how do we handle this case?  If 
cinder.cross_az_attach=True in nova we could just not pass the instance 
AZ to the volume create, or only pass it if cinder has that AZ available.

But if cinder.cross_az_attach=False when creating the volume, what do we 
do?  I guess we can just leave the code as-is and if the AZ isn't in 
cinder (or your admin hasn't set allow_availability_zone_fallback=True 
in cinder.conf), then it fails and you open a support ticket.  That 
seems gross to me.  I'd like to at least see some of this validated in 
the nova API layer before it gets to the scheduler and compute so we can 
avoid NoValidHost.  My thinking is, in the BFV case where source != 
volume, if cinder.cross_az_attach is False and instance.az is not None, 
then we check the list of AZs from the volume API.  If the instance.az 
is not in that list, we fail fast (400 response to the user).  However, 
if allow_availability_zone_fallback=True in cinder.conf, we'd be 
rejecting the request even though the actual volume create would 
succeed.  These are just details that we don't have in the nova API 
since it's all policy driven gorp using config options that the user 
doesn't know about, which makes it really hard to write applications 
against this - and was part of the reason I moved to deprecate that option.
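To make the proposed check concrete, here is a rough sketch of the validation described above (hypothetical helper and argument names; this is not actual nova code):

```python
# Sketch of the fail-fast check described above.  The function name and
# arguments are illustrative stand-ins, not nova's real API code.

def bfv_request_is_valid(source_type, instance_az, cross_az_attach,
                         cinder_azs):
    """Return True if a boot-from-volume request should proceed past
    the API layer instead of failing later with NoValidHost."""
    if source_type == 'volume':
        # Nova is not creating the volume; plain attach rules apply.
        return True
    if cross_az_attach:
        # Nova could simply omit the AZ, or pass it only if cinder has
        # it, so there is nothing to reject here.
        return True
    if instance_az is None:
        # Nothing to validate against.
        return True
    # cross_az_attach=False: reject early with a 400 if cinder does not
    # know the instance's AZ.  Caveat from the mail: if cinder has
    # allow_availability_zone_fallback=True, this check is stricter
    # than the actual volume create would be.
    return instance_az in cinder_azs
```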

Am I off in the weeds?  It sounds like Duncan is going to try and get a 
plan together in Tokyo about how to handle this and decouple nova and 
cinder in this case, which is the right long-term goal.

-- 

Thanks,

Matt Riedemann



From geguileo at redhat.com  Thu Sep 24 14:11:26 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 24 Sep 2015 16:11:26 +0200
Subject: [openstack-dev] [cinder] How to make a mock effective for all
 methods of a test class
In-Reply-To: <5602B9F6.9000906@redhat.com>
References: <E97AE00C7CAEC3489EF8B5A429A8B2842DB14F29@szxeml504-mbx.china.huawei.com>
 <5602B9F6.9000906@redhat.com>
Message-ID: <20150924141126.GL3713@localhost>

On 23/09, Eric Harney wrote:
> On 09/23/2015 04:06 AM, liuxinguo wrote:
> > Hi,
> > 
> > In a.py we have a function:
> > 
> >     def _change_file_mode(filepath):
> >         utils.execute('chmod', '600', filepath, run_as_root=True)
> > 
> > In test_xxx.py, there is a test class:
> > 
> >     class xxxxDriverTestCase(test.TestCase):
> >         def test_a(self):
> >             ...
> >             # calls a._change_file_mode
> > 
> >         def test_b(self):
> >             ...
> >             # calls a._change_file_mode
> > 
> > I have tried to mock out the function _change_file_mode like this:
> > 
> >     @mock.patch.object(a, '_change_file_mode', return_value=None)
> >     class xxxxDriverTestCase(test.TestCase):
> >         def test_a(self):
> >             ...
> >             # calls a._change_file_mode
> > 
> >         def test_b(self):
> >             ...
> >             # calls a._change_file_mode
> > 
> > But the mock takes no effect; the real function _change_file_mode is still executed.
> > So how do I make a mock effective for all methods of a test class?
> > Thanks for any input!
> > 
> > Wilson Liu
> 
> The simplest way I found to do this was to use mock.patch in the test
> class's setUp() method, and tear it down again in tearDown().
> 
> There may be cleaner ways to do this with tools in oslotest etc. (I'm
> not sure), but this is fairly straightforward.
> 
> See here -- self._clear_patch stores the mock:
> http://git.openstack.org/cgit/openstack/cinder/tree/cinder/tests/unit/test_volume.py?id=8de60a8b#n257
> 

When doing the mock in setUp() it is recommended to register the stop
with addCleanup() instead of doing it in tearDown(); in that code it
would be: self.addCleanup(self._clear_patch.stop)
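Combining the two suggestions, a minimal self-contained sketch (the driver class here is a stand-in, not cinder's actual code):

```python
import unittest
from unittest import mock


class FakeDriver:
    """Stand-in for the real driver whose helper we want to mock."""

    def change_mode(self, path):
        raise RuntimeError("real call")  # would run chmod via rootwrap


class DriverTestCase(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # Patch once for every test method; stop() runs automatically
        # via addCleanup, even if a test fails mid-way.
        self._patcher = mock.patch.object(
            FakeDriver, 'change_mode', return_value=None)
        self.mock_change = self._patcher.start()
        self.addCleanup(self._patcher.stop)

    def test_a(self):
        FakeDriver().change_mode('/tmp/x')
        self.mock_change.assert_called_once_with('/tmp/x')

    def test_b(self):
        FakeDriver().change_mode('/tmp/y')
        self.mock_change.assert_called_once_with('/tmp/y')
```

Because a fresh patcher is started in each setUp(), the call counts do not leak between test methods.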


From aschultz at mirantis.com  Thu Sep 24 14:14:06 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Thu, 24 Sep 2015 09:14:06 -0500
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <56032009.5020103@redhat.com>
References: <56032009.5020103@redhat.com>
Message-ID: <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>

On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi <emilien at redhat.com> wrote:
> Background
> ==========
>
> Current rspec tests are run against the modules mentioned in the
> .fixtures.yaml file of each module.
>
> * the file is not consistent across all modules
> * it hardcodes module names & versions
> * this approach does not allow use of the "Depends-On" feature, which
> would allow testing cross-module patches
>
> Proposal
> ========
>
> * Like we do in beaker & integration jobs, use zuul-cloner to clone
> modules in our CI jobs.
> * Use r10k to prepare fixtures modules.
> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>
> In that way:
> * we will have module name + version consistency in testing across all
> modules
> * the same Puppetfile would be used by unit/beaker/integration testing.
> * a patch that passes tests on your laptop would pass tests in upstream CI
> * if you don't have zuul-cloner on your laptop, don't worry, it will fall
> back to git clone. Though you won't have the Depends-On feature working
> on your laptop (technically not possible).
> * your patch will still support Depends-On in OpenStack Infra for unit
> tests. If you submit a patch in puppet-openstacklib that drops something
> wrong, you can send a patch in puppet-nova that will test it, and unit
> tests will fail.
>
> Drawbacks
> =========
> * cloning from .fixtures.yaml takes ~10 seconds
> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>
> I think 40 extra seconds is acceptable given the benefit.
>

As someone who consumes these modules downstream and runs our own CI
setup for the rspec tests, this ties things too closely to the
openstack infrastructure. If we replace the .fixtures.yml with
zuul-cloner, it assumes I always want the openstack version of the
modules. This is not necessarily true. I like being able to replace
items within fixtures.yml when doing dev work. For example, if I want
to test upgrading another module not related to openstack, like
inifile, how does that work with the proposed solution?  This is also
moving away from general puppet module conventions for testing. My
preference would be that this be a different task and we have both
.fixtures.yml (for general use/development) and the zuul method of
cloning (for CI).  You have to also think about this from a consumer
standpoint and this is adding an external dependency on the OpenStack
infrastructure for anyone trying to run rspec or trying to consume the
published versions from the forge.  Would I be able to run these tests
in an offline mode with this change? With the .fixtures.yml it's a
minor edit to switch to local versions. Is the same true for the
zuul-cloner version?

>
> Next steps
> ==========
>
> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
> * Patch openstack/puppet-modulesync-config to be consistent across all
> our modules.
>
> Bonus
> =====
> we might need (asap) a canary job for puppet-openstack-integration
> repository, that would run tests on a puppet-* module (since we're using
> install_modules.sh & Puppetfile files in puppet-* modules).
> Nothing has been done yet for this work.
>
>
> Thoughts?
> --
> Emilien Macchi
>
>

I think we need this functionality; I just don't think it's a
replacement for the .fixtures.yml.

Thanks,
-Alex


From salv.orlando at gmail.com  Thu Sep 24 14:18:51 2015
From: salv.orlando at gmail.com (Salvatore Orlando)
Date: Thu, 24 Sep 2015 16:18:51 +0200
Subject: [openstack-dev] [neutron][networking-ovn][vtep] Proposal:
 support for vtep-gateway in ovn
In-Reply-To: <5603E709.2020602@redhat.com>
References: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>
 <5603E709.2020602@redhat.com>
Message-ID: <CAP0B2WOYFDk17dX73vJKHYz5HbijDTNYGC=4t2_zq6XFy6L_3A@mail.gmail.com>

Random comments inline.

Salvatore

On 24 September 2015 at 14:05, Russell Bryant <rbryant at redhat.com> wrote:

> On 09/24/2015 01:17 AM, Amitabha Biswas wrote:
> > Hi everyone,
> >
> > I want to open up the discussion regarding how to support OVN
> > VTEP gateway deployment and its lifecycle in Neutron.
>
> Thanks a lot for looking into this!
>
> > In the "Life Cycle of a VTEP gateway" part in the OVN architecture
> > document (http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf),
> > step 3 is where the Neutron OVN plugin is involved. At a minimum, the
> > Neutron OVN plugin will enable setting the type as "vtep" and the
> > vtep-logical-switch and vtep-physical-switch options in the
> > OVN_Northbound database.
>
> I have the docs published there just to make it easier to read the
> rendered version.  The source of that document is:
>
> https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml
>
> > There are 2 parts to the proposal/discussion - a short term solution and
> > a long term one:
> >
> > A short term solution (proposed by Russell Bryant) is similar to the
> > work that was done for container support in OVN - using a binding
> > profile http://networking-ovn.readthedocs.org/en/latest/containers.html.
> > A ovn logical network/switch can be mapped to a vtep logical gateway by
> > creating a port in that logical network and creating a binding profile
> > for that port in the following manner:
> >
> > neutron port-create --binding-profile
> > '{"vtep-logical-switch":"vtep_lswitch_key",
> > "vtep-physical-switch":"vtep_pswitch_key"}' private.
> >
> > Where vtep-logical-switch and vtep-physical-switch should have been
> > defined in the OVN_Southbound database by the previous steps (1,2) in
> > the life cycle.
>
> Yes, this sounds great to me.  Since there's not a clear well accepted
> API to use, we should go this route to get the functionality exposed
> more quickly.  We should also include in our documentation that this is
> not expected to be how this is done long term.
>
> The comparison to the containers-in-VMs support is a good one.  In that
> case we used binding:profile as a quick way to expose it, but we're
> aiming to support a proper API.  For that feature, we've identified the
> "VLAN aware VMs" API as the way forward, which will hopefully be
> available next cycle.
>
> > For the longer term solution, there needs to be a discussion:
> >
> > Should the knowledge about the physical and logical vtep gateway be
> > exposed to Neutron - if yes, how? This would allow a Neutron NB
> > API/extension to bind a "known" vtep gateway to the neutron logical
> > network. This would be similar to the workflow done in the
> > networking-l2gw extension
> > https://review.openstack.org/#/c/144173/3/specs/kilo/l2-gateway-api.rst
> >
> > 1. Allow the admin to define and manage the vtep gateway through Neutron
> > REST API.
> >
> > 2. Define connections between Neutron networks and gateways. This is
> > conceptually similar to Step 3 of the vtep gateway life cycle performed
> > by the OVN Plugin in the short term solution.
>
> networking-l2gw does seem to be the closest thing to what's needed, but
> it's not a small amount of work.  I think the API might need to be
> extended a bit for our needs.  A bigger concern for me is actually with
> some of the current implementation details.
>

It is indeed. While I like the binding-profile solution very much, it
does not work very well from a UX perspective in environments where
operators control the whole cloud with OpenStack tools.


>
> One particular issue is that the project implements the ovsdb protocol
> from scratch.  The ovs project provides a Python library for this.  Both
> Neutron and networking-ovn use it, at least.  From some discussion, I've
> gathered that the ovs Python library lacked one feature that was needed,
> but has since been added because we wanted the same thing in
> networking-ovn.
>

My take here is that we don't need to use the whole implementation of
networking-l2gw, but only the APIs and the DB management layer it exposes.
Networking-l2gw provides a VTEP network gateway solution that, if you want,
will eventually be part of Neutron's "reference" control plane.
OVN provides its implementation; I think it should be possible to leverage
networking-l2gw either by pushing an OVN driver there, or implementing the
same driver in openstack/networking-ovn.


>
> The networking-l2gw route will require some pretty significant work.
> It's still the closest existing effort, so I think we should explore it
> until it's absolutely clear that it *can't* work for what we need.
>

I would say that it is definitely not trivial but probably a bit less than
"significant". abhraut from my team has done something quite similar for
openstack/vmware-nsx [1]


> > OR
> >
> > Should OVN pursue its own Neutron extension (including vtep gateway
> > support)?
>
> I don't think this option provides a lot of value over the short term
> binding:profile solution.  Both are OVN specific.  I think I'd rather
> just stick to binding:profile as the OVN specific stopgap because it's a
> *lot* less work.
>

I totally agree. The solution based on the binding profile is indeed a
decent one in my opinion.
If OVN cannot converge on the extension proposed by networking-l2gw then
I'd keep using the binding profile for specifying gateway ports.

[1] https://review.openstack.org/#/c/210623/


>
> Thanks again,
>
> --
> Russell Bryant
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/9dabc0f0/attachment.html>

From zigo at debian.org  Thu Sep 24 14:25:31 2015
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 24 Sep 2015 16:25:31 +0200
Subject: [openstack-dev] repairing so many OpenStack components writing
 configuration files in /usr/etc
Message-ID: <560407DB.3040602@debian.org>

Hi,

It's about the 3rd time just this week that I'm repairing an OpenStack
component which is trying to write config files in /usr/etc. Could this
nonsense stop, please?

FYI, this time, it's with os-brick... but it happened with so many
components already:
- bandit (with an awesome reply from upstream to my launchpad bug,
basically saying he doesn't care about downstream distros...)
- neutron
- neutron-fwaas
- tempest
- lots of Neutron drivers (ie: networking-FOO)
- pycadf
- and probably more which I forgot.

Yes, I can repair things at the packaging level, but I just hope I won't
have to do this for each and every OpenStack component, and I suppose
everyone understands how frustrating it is...

I also wonder where this /usr/etc is coming from. If it was
/usr/local/etc, I could somehow get it. But here... ?!?

Cheers,

Thomas Goirand (zigo)


From fungi at yuggoth.org  Thu Sep 24 14:29:30 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 24 Sep 2015 14:29:30 +0000
Subject: [openstack-dev] [all] gerrit performance
In-Reply-To: <D229C302.BF978%gkotton@vmware.com>
References: <D229C302.BF978%gkotton@vmware.com>
Message-ID: <20150924142930.GY25159@yuggoth.org>

On 2015-09-24 12:10:56 +0000 (+0000), Gary Kotton wrote:
> Anyone else experiencing bad performance with Gerrit at the moment?
> Accessing files in a review takes ages. So now the review cycle will
> be months instead of weeks .... Thanks

The version we're currently running seems to get into a state where
it's continually garbage collecting within the JVM, system load
spikes up into the teens and performance suffers. So far we've been
restarting Gerrit when it gets itself into this state, which is of
course a terrible non-solution to the problem, but I'll see if I can
find any upstream bug reports and confirm whether this is solved in
the version for which we've been preparing to upgrade.
-- 
Jeremy Stanley


From tony.a.wang at alcatel-lucent.com  Thu Sep 24 14:37:31 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Thu, 24 Sep 2015 14:37:31 +0000
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <5602B570.9000207@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Russell,

Thanks for your detailed explanation and kind help!
I now understand how a container in a VM can acquire network interfaces in different neutron networks.
For the connections between compute nodes, I think I need to study the Geneve protocol and VTEP first.
If I have any further questions, I may need to continue consulting you. :-) 

Thanks for your help again, 
Tony

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Wednesday, September 23, 2015 10:22 PM
To: OpenStack Development Mailing List (not for usage questions); WANG, Ming Hao (Tony T)
Subject: Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

I'll reply to each of your 3 messages here:

On 09/23/2015 05:57 AM, WANG, Ming Hao (Tony T) wrote:
> Hi Russell,
> 
> I just realized OVN plugin is an independent plugin of OVS plugin.

Yes, it's a plugin developed in the "networking-ovn" project.

http://git.openstack.org/cgit/openstack/networking-ovn/

> In this case, how do we handle the provider network connections between compute nodes? Is it handled by OVN actually?

I'm going to start by explaining the status of OVN itself, and then I'll come back and address the Neutron integration:

 -- OVN --

OVN implements logical networks as overlays using the Geneve protocol.
Connecting from logical to physical networks is done by one of two ways.

The first is using VTEP gateways.  This could be hardware or software gateways that implement the hardware_vtep schema.  This is typically a TOR switch that supports the vtep schema, but I believe someone is going to build a software version based on ovs and dpdk.  OVN includes a daemon called "ovn-controller-vtep" that is run for each vtep gateway to manage connectivity between OVN networks and the gateway.  It could run on the switch itself, or some other management host.  The last set of patches to get this working initially were merged just 8 days ago.

The ovn-architecture document describes "Life Cycle of a VTEP gateway":


https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml#L820

or you can find a temporary copy of a rendered version here:

  http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf

The second is what Neutron refers to as "provider networks".  OVN does support this as well.  It was merged just a couple of weeks ago.  The commit message for OVN "localnet" ports goes into quite a bit of detail about how this works in OVN:


https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905

 -- Neutron --

Both of these things are freshly implemented in OVN so the Neutron integration is a WIP.

For vtep gateways, there's not an established API.  networking-l2gw is the closest thing, but I've got some concerns with both the API and implementation.  As a first baby step, we're just going to provide a hack that lets an admin create a connection between a network and gateway using a neutron port with a special binding:profile.  We'll also be continuing to look at providing a proper API.

For provider networks, working with them in Neutron will be no different than it is today with the current OVS support.  I just have to finish the Neutron plugin integration, which I just started on yesterday.

> 
> Thanks,
> Tony
> 
> -----Original Message-----
> From: WANG, Ming Hao (Tony T)
> Sent: Wednesday, September 23, 2015 1:58 PM
> To: WANG, Ming Hao (Tony T); 'OpenStack Development Mailing List (not for usage questions)'
> Subject: RE: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?
> 
> Hi Russell,
> 
> Is there any material to explain how the OVN parent port works?

Note that while this uses a binding:profile hack for now, we're going to update the plugin to support the vlan-aware-vms API for this use case once that is completed.

http://docs.openstack.org/developer/networking-ovn/containers.html

http://specs.openstack.org/openstack/neutron-specs/specs/liberty/vlan-aware-vms.html

https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md

https://github.com/shettyg/ovn-docker

> Thanks,
> Tony
> 
> -----Original Message-----
> From: WANG, Ming Hao (Tony T)
> Sent: Wednesday, September 23, 2015 10:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?
> 
> Russell,
> 
> Thanks for your info.
> If I want to assign multiple interfaces to a container on different 
> neutron networks (for example, netA and netB), is it mandatory to let 
> the VM hosting containers have network interfaces in netA and netB, 
> and ovn will help to direct the container traffic to its corresponding 
> VM network interfaces?
> 
> from https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md :
> "This VLAN tag is stripped out in the hypervisor by OVN."
> I suppose when the traffic goes out of the VM, the VLAN tag has already been stripped out. 
> When the traffic arrives at the OVS on the physical host, it will be tagged with the neutron local VLAN. Is that right?

Hopefully the links provided in response to the above mail help explain it.  In short, the VM only needs one network interface and all traffic for all containers goes over that network interface.  To put each container on different Neutron networks, the hypervisor needs to be able to differentiate the traffic from each container even though it's all going over the same network interface to/from the VM.  That's where VLAN IDs are used: as a simple way to tag traffic as it goes over the VM's network interface.  As it arrives in the VM, the tag is stripped and the traffic is sent to the right container.  As it leaves the VM, the tag is stripped and the traffic is then forwarded to the proper Neutron network (which could itself be a VLAN network, but the tags are not related, and the traffic would be re-tagged at that point).

Does that make sense?

> Thanks in advance,
> Tony
> 
> -----Original Message-----
> From: Russell Bryant [mailto:rbryant at redhat.com]
> Sent: Wednesday, September 23, 2015 12:46 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] Does neutron ovn plugin support to setup multiple neutron networks for one container?
> 
> On 09/22/2015 08:08 AM, WANG, Ming Hao (Tony T) wrote:
>> Dear all,
>>
>> Regarding the neutron OVN plugin's support for containers in one VM: my understanding is that one container can't be assigned two network interfaces in different neutron networks. Is that right?
>> The reason:
>> 1. One host VM only has one network interface.
>> 2. all the VLAN tags are stripped out when the packet goes out the VM.
>>
>> If it is True, does neutron ovn plugin or ovn has plan to support this?
> 
> You should be able to assign multiple interfaces to a container on different networks.  The traffic for each interface will be tagged with a unique VLAN ID on its way in and out of the VM, the same way it is done for each container with a single interface.
> 
> --
> Russell Bryant
> 
> ______________________________________________________________________
> ____ OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ______________________________________________________________________
> ____ OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


--
Russell Bryant


From julien at danjou.info  Thu Sep 24 14:54:24 2015
From: julien at danjou.info (Julien Danjou)
Date: Thu, 24 Sep 2015 16:54:24 +0200
Subject: [openstack-dev] repairing so many OpenStack components writing
	configuration files in /usr/etc
In-Reply-To: <560407DB.3040602@debian.org> (Thomas Goirand's message of "Thu, 
 24 Sep 2015 16:25:31 +0200")
References: <560407DB.3040602@debian.org>
Message-ID: <m0a8sbdg0v.fsf@danjou.info>

On Thu, Sep 24 2015, Thomas Goirand wrote:

Hi Thomas,

> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?

Do you have a way to reproduce that, or a backtrace maybe?

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/9448e493/attachment.pgp>

From mtreinish at kortar.org  Thu Sep 24 14:54:28 2015
From: mtreinish at kortar.org (Matthew Treinish)
Date: Thu, 24 Sep 2015 10:54:28 -0400
Subject: [openstack-dev] repairing so many OpenStack components writing
 configuration files in /usr/etc
In-Reply-To: <560407DB.3040602@debian.org>
References: <560407DB.3040602@debian.org>
Message-ID: <20150924145428.GA1473@sazabi.kortar.org>

On Thu, Sep 24, 2015 at 04:25:31PM +0200, Thomas Goirand wrote:
> Hi,
> 
> It's about the 3rd time just this week, that I'm repairing an OpenStack
> component which is trying to write config files in /usr/etc. Could this
> non-sense stop please?

So I'm almost 100% sure that the intent for everyone doing this is for the
files to be written to /etc when system-installing the package. It's being
caused by data_files lines in the setup.cfg putting things in etc/foo. Like
in neutron:

http://git.openstack.org/cgit/openstack/neutron/tree/setup.cfg#n23

The PBR docs [1] say this will go to /etc if installing into the system
python, which obviously isn't the case. They are instead being installed to
sys.prefix/etc, which works well for the venv case but not so much for
system-installing a package.

The issue is with the use of data_files. I'm sure dstufft can elaborate on
all the prickly bits, but IIRC it's the use of setuptools or distutils
depending on how the package is being installed (either via a wheel or
sdist). I think the distutils behavior is to install relative to sys.prefix
and setuptools puts it relative to site-packages. But neither of those is
really the desired behavior...
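A quick way to see which prefix is in play for a given interpreter (a small sketch, not from the original mail):

```python
# Print where relative data_files targets would land for this
# interpreter: distutils resolves them against the install prefix, so a
# distro python whose prefix is /usr turns "etc/foo" into /usr/etc/foo,
# while a venv turns it into <venv>/etc/foo.
import sys
import sysconfig

print(sys.prefix)                  # e.g. /usr for a distro python
print(sysconfig.get_path('data'))  # base directory for data files
```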

> 
> FYI, this time, it's with os-brick... but it happened with so many
> components already:
> - bandit (with an awesome reply from upstream to my launchpad bug,
> basically saying he doesn't care about downstream distros...)
> - neutron
> - neutron-fwaas
> - tempest
> - lots of Neutron drivers (ie: networking-FOO)
> - pycadf
> - and probably more which I forgot.
> 
> Yes, I can repair things at the packaging level, but I just hope I wont
> have to do this for each and every OpenStack component, and I suppose
> everyone understands how frustrating it is...

It's an issue with python packaging that we need to fix, likely in PBR first.
But, I doubt this is isolated to PBR, we'll probably have to work on fixes to
distutils and/or setuptools too.

> 
> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?

IIRC if you set the python sys.prefix to /usr/local it'll put the etc files
in /usr/local.

-Matt Treinish

[1] http://docs.openstack.org/developer/pbr/#files
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/dc1c0ec2/attachment.pgp>

From stevemar at ca.ibm.com  Thu Sep 24 14:57:48 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Thu, 24 Sep 2015 10:57:48 -0400
Subject: [openstack-dev] repairing so many OpenStack components
	writing	configuration files in /usr/etc
In-Reply-To: <m0a8sbdg0v.fsf@danjou.info>
References: <560407DB.3040602@debian.org> <m0a8sbdg0v.fsf@danjou.info>
Message-ID: <OF179E3D68.34091123-ON00257ECA.005CD35C-85257ECA.0052323C@notes.na.collabserv.com>


just a general FYI - so for pyCADF (and I'm guessing others), it was a very
subtle error:

https://github.com/openstack/pycadf/commit/4e70ff2e6204f74767c5cab13f118d72c2594760

Essentially the entry points in setup.cfg were missing a leading slash.
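For reference, the shape of that kind of fix in a pbr-managed setup.cfg looks roughly like this (an illustrative sketch with placeholder paths, not pycadf's exact file):

```ini
[files]
# Without a leading slash, the target directory is resolved relative to
# sys.prefix, so a system install can end up writing under /usr/etc.
# With the leading slash the destination is absolute.
data_files =
    /etc/pycadf = etc/pycadf/*.conf
```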

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Julien Danjou <julien at danjou.info>
To:	Thomas Goirand <zigo at debian.org>
Cc:	"openstack-dev at lists.openstack.org"
            <openstack-dev at lists.openstack.org>
Date:	2015/09/24 10:55 AM
Subject:	Re: [openstack-dev] repairing so many OpenStack components
            writing	configuration files in /usr/etc



On Thu, Sep 24 2015, Thomas Goirand wrote:

Hi Thomas,

> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?

Do you have a way to reproduce that, or a backtrace maybe?

--
Julien Danjou
# Free Software hacker
# http://julien.danjou.info
[attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/863f5ebe/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/863f5ebe/attachment.gif>

From mriedem at linux.vnet.ibm.com  Thu Sep 24 14:59:07 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 24 Sep 2015 09:59:07 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <56040354.50404@linux.vnet.ibm.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <5603B228.3070502@redhat.com> <56040354.50404@linux.vnet.ibm.com>
Message-ID: <56040FBB.70708@linux.vnet.ibm.com>



On 9/24/2015 9:06 AM, Matt Riedemann wrote:
>
>
> On 9/24/2015 3:19 AM, Sylvain Bauza wrote:
>>
>>
>> Le 24/09/2015 09:04, Duncan Thomas a écrit :
>>> Hi
>>>
>>> I thought I was late on this thread, but looking at the time stamps,
>>> it is just something that escalated very quickly. I am honestly
>>> surprised an cross-project interaction option went from 'we don't seem
>>> to understand this' to 'deprecation merged' in 4 hours, with only a 12
>>> hour discussion on the mailing list, right at the end of a cycle when
>>> we're supposed to be stabilising features.
>>>
>>
>> So, I agree it was maybe a bit too quick hence the revert. That said,
>> Nova master is now Mitaka, which means that the deprecation change was
>> provided for the next cycle, not the one currently stabilising.
>>
>> Anyway, I'm really all up with discussing why Cinder needs to know the
>> Nova AZs.
>>
>>> I proposed a session at the Tokyo summit for a discussion of Cinder
>>> AZs, since there was clear confusion about what they are intended for
>>> and how they should be configured.
>>
>> Cool, count me in from the Nova standpoint.
>>
>>> Since then I've reached out to and gotten good feedback from, a number
>>> of operators. There are two distinct configurations for AZ behaviour
>>> in cinder, and both sort-of worked until very recently.
>>>
>>> 1) No AZs in cinder
>>> This is the config where a single 'blob' of storage (most of the
>>> operators who responded so far are using Ceph, though that isn't
>>> required). The storage takes care of availability concerns, and any AZ
>>> info from nova should just be ignored.
>>>
>>> 2) Cinder AZs map to Nova AZs
>>> In this case, some combination of storage / networking / etc couples
>>> storage to nova AZs. It may be that an AZ is used as a unit of
>>> scaling, or it could be a real storage failure domain. Either way,
>>> there are a number of operators who have this configuration and want
>>> to keep it. Storage can certainly have a failure domain, and limiting
>>> the scalability problem of storage to a single compute AZ can have
>>> definite advantages in failure scenarios. These people do not want
>>> cross-az attach.
>>>
>>
>> Ahem, Nova AZs are not failure domains - I mean the current
>> implementation, in the sense in which many people understand a failure
>> domain, i.e. a physical unit of machines (a bay, a room, a floor, a
>> datacenter).
>> All the AZs in Nova share the same controlplane with the same message
>> queue and database, which means that one failure can be propagated to
>> the other AZ.
>>
>> To be honest, there is one very specific usecase where AZs *are* failure
>> domains: when cells exactly match AZs (i.e. one AZ grouping all the
>> hosts behind one cell). That's the very specific usecase that Sam is
>> mentioning in his email, and I certainly understand we need to keep that.
>>
>> What AZs are in Nova is pretty well explained in a quite old blog post:
>> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
>>
>>
>> We also added a few comments in our developer doc here
>> http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs
>>
>>
>> tl;dr: AZs are aggregate metadata that makes those aggregates of compute
>> nodes visible to the users. Nothing more than that, no magic sauce.
>> That's just a logical abstraction that can be mapping your physical
>> deployment, but like I said, which would share the same bus and DB.
>> Of course, you could still provide networks distinct between AZs but
>> that just gives you the L2 isolation, not the real failure domain in a
>> Business Continuity Plan way.
>>
>> What puzzles me is how Cinder is managing a datacenter-level of
>> isolation given there is no cells concept AFAIK. I assume that
>> cinder-volumes belong to a specific datacenter, but how is its
>> control plane managed? I can certainly understand the need of affinity
>> placement between physical units, but I'm missing that piece, and
>> consequently I wonder why Nova needs to provide AZs to Cinder in the
>> general case.
>>
>>
>>
>>> My hope at the summit session was to agree these two configurations,
>>> discuss any scenarios not covered by these two configuration, and nail
>>> down the changes we need to get these to work properly. There's
>>> definitely been interest and activity in the operator community in
>>> making nova and cinder AZs interact, and every desired interaction
>>> I've gotten details about so far matches one of the above models.
>>>
>>
>> I'm all with you about providing a way for users to get volume affinity
>> for Nova. That's a long story I'm trying to consider and we are
>> constantly trying to improve the nova scheduler interfaces so that other
>> projects could provide resources to the nova scheduler for decision
>> making. I just want to consider whether AZs are the best concept for
>> that or we should do thing by other ways (again, because AZs are not
>> what people expect).
>>
>> Again, count me in for the Cinder session, and just lemme know when the
>> session is planned so I could attend it.
>>
>> -Sylvain
>>
>>
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> I plan on reverting the deprecation change (which was a mitaka change,
> not a liberty change, as Sylvain pointed out).
>
> However, given how many nova and cinder cores were talking about this
> yesterday and thought it was the right thing to do speaks to the fact
> that this is not a well understood use case (or documented at all).  So
> as part of reverting the deprecation I also want to see improved docs
> for the cross_az_attach option itself and probably a nova devref change
> explaining the use cases and issues with this.
>
> I think the volume attach case is pretty straightforward.  You create a
> nova instance in some nova AZ x and create a cinder volume in some
> cinder AZ y and try to attach the volume to the server instance.  If
> cinder.cross_az_attach=True this is OK, else it fails.
>
> The problem I have is with the boot from volume case where
> source=(blank/image/snapshot).  In those cases nova is creating the
> volume and passing the server instance AZ to the volume create API.  How
> are people that are using cinder.cross_az_attach=False handling the BFV
> case?
>
> Per bug 1496235 that started this, the user is booting a nova instance
> in a nova AZ with bdm source=image and when nova tries to create the
> volume it fails because that AZ doesn't exist in cinder.  This fails in
> the compute manager when building the instance, so this results in a
> NoValidHost error for the user - which we all know and love as a super
> useful error.  So how do we handle this case?  If
> cinder.cross_az_attach=True in nova we could just not pass the instance
> AZ to the volume create, or only pass it if cinder has that AZ available.
>
> But if cinder.cross_az_attach=False when creating the volume, what do we
> do?  I guess we can just leave the code as-is and if the AZ isn't in
> cinder (or your admin hasn't set allow_availability_zone_fallback=True
> in cinder.conf), then it fails and you open a support ticket.  That
> seems gross to me.  I'd like to at least see some of this validated in
> the nova API layer before it gets to the scheduler and compute so we can
> avoid NoValidHost.  My thinking is, in the BFV case where source !=
> volume, if cinder.cross_az_attach is False and instance.az is not None,
> then we check the list of AZs from the volume API.  If the instance.az
> is not in that list, we fail fast (400 response to the user).  However,
> if allow_availability_zone_fallback=True in cinder.conf, we'd be
> rejecting the request even though the actual volume create would
> succeed.  These are just details that we don't have in the nova API
> since it's all policy driven gorp using config options that the user
> doesn't know about, which makes it really hard to write applications
> against this - and was part of the reason I moved to deprecate that option.
>
> Am I off in the weeds?  It sounds like Duncan is going to try and get a
> plan together in Tokyo about how to handle this and decouple nova and
> cinder in this case, which is the right long-term goal.
>
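[Editor's note: the fail-fast validation sketched in the paragraph above could look roughly like the following. This is an illustrative sketch only; the function name, exception type, and the source of the Cinder AZ list are placeholders, not actual nova code.]

```python
def validate_bfv_az(instance_az, cross_az_attach, cinder_azs):
    """Fail fast when a boot-from-volume request names an AZ unknown to Cinder.

    Mirrors the proposal above: applies only when nova would create the
    volume (bdm source != volume) and cinder.cross_az_attach is False.
    Caveat from the thread: if Cinder is configured with
    allow_availability_zone_fallback=True, this check would reject a
    request that the volume create would actually accept.
    """
    if cross_az_attach or instance_az is None:
        return  # nothing to validate
    if instance_az not in cinder_azs:
        # Stand-in for returning an HTTP 400 to the user, instead of a
        # late NoValidHost from the compute manager.
        raise ValueError(
            "Availability zone %r is not available in Cinder" % instance_az)
```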

Revert is approved: https://review.openstack.org/#/c/227340/

-- 

Thanks,

Matt Riedemann



From mtreinish at kortar.org  Thu Sep 24 15:03:28 2015
From: mtreinish at kortar.org (Matthew Treinish)
Date: Thu, 24 Sep 2015 11:03:28 -0400
Subject: [openstack-dev] repairing so many OpenStack components writing
 configuration files in /usr/etc
In-Reply-To: <OF179E3D68.34091123-ON00257ECA.005CD35C-85257ECA.0052323C@notes.na.collabserv.com>
References: <560407DB.3040602@debian.org> <m0a8sbdg0v.fsf@danjou.info>
 <OF179E3D68.34091123-ON00257ECA.005CD35C-85257ECA.0052323C@notes.na.collabserv.com>
Message-ID: <20150924150328.GA31010@sazabi.kortar.org>

On Thu, Sep 24, 2015 at 10:57:48AM -0400, Steve Martinelli wrote:
> 
> just a general FYI - so for pyCADF (and I'm guessing others), it was a very
> subtle error:
> 
> https://github.com/openstack/pycadf/commit/4e70ff2e6204f74767c5cab13f118d72c2594760
> 
> Essentially the entry points in setup.cfg were missing a leading slash.

I would actually view adding the leading slash as a bug in the setup.cfg. You
don't want your package trying to write to /etc when you're installing it inside
a venv as a user who doesn't have write access to /etc.

Which is exactly why that commit was reverted over a year ago:

https://github.com/openstack/pycadf/commit/39a99398ce79067b1ae98e7273a8b47eb576bb54
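[Editor's note: for context, the difference being discussed is a single leading slash in the `data_files` mapping of a pbr-style setup.cfg. The paths below are illustrative, not pycadf's actual entries.]

```ini
[files]
# Relative destination: installed under the prefix (e.g. the venv),
# which is the behavior Matt is defending.
data_files =
    etc/pycadf = etc/pycadf/*.conf

# Absolute destination (note the leading slash): tries to write to the
# system /etc, which fails without root and pollutes the host when
# installing into a venv.
# data_files =
#     /etc/pycadf = etc/pycadf/*.conf
```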

-Matt Treinish

> 
> 
> 
> From:	Julien Danjou <julien at danjou.info>
> To:	Thomas Goirand <zigo at debian.org>
> Cc:	"openstack-dev at lists.openstack.org"
>             <openstack-dev at lists.openstack.org>
> Date:	2015/09/24 10:55 AM
> Subject:	Re: [openstack-dev] repairing so many OpenStack components
>             writing	configuration files in /usr/etc
> 
> 
> 
> On Thu, Sep 24 2015, Thomas Goirand wrote:
> 
> Hi Thomas,
> 
> > I also wonder where this /usr/etc is coming from. If it was
> > /usr/local/etc, I could somehow get it. But here... ?!?
> 
> Do you have a way to reproduce that, or a backtrace maybe?
> 
> --
> Julien Danjou
> # Free Software hacker
> # http://julien.danjou.info
> [attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 



> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/6c78fe94/attachment.pgp>

From robert.clark at hpe.com  Thu Sep 24 15:12:47 2015
From: robert.clark at hpe.com (Clark, Robert Graham)
Date: Thu, 24 Sep 2015 15:12:47 +0000
Subject: [openstack-dev] [Barbican][Security] Automatic Certificate
	Management Environment
Message-ID: <D229D17E.2D4E7%robert.clark@hpe.com>

Hi All,

So I did a bit of tyre kicking with Letsencrypt today; one of the things I thought was interesting was the adherence to the burgeoning Automatic Certificate Management Environment (ACME) standard.

https://letsencrypt.github.io/acme-spec/

It's one of the more readable crypto-related standards drafts out there. Reading it has me wondering how this might be useful for Anchor, or indeed for Barbican, where things get quite interesting both at the front end (enabling ACME clients to engage with Barbican) and at the back end (enabling Barbican to talk to any number of ACME-enabled CA endpoints).

I was wondering if there's been any discussion/review here; I'm new to ACME, but I'm not sure if I'm late to the party?

Cheers
-Rob


From corvus at inaugust.com  Thu Sep 24 15:24:53 2015
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 24 Sep 2015 08:24:53 -0700
Subject: [openstack-dev] Do not modify (or read) ERROR_ON_CLONE in devstack
	gate jobs
Message-ID: <87k2rfc01m.fsf@meyer.lemoncheese.net>

Hi,

Recently we noted some projects modifying the ERROR_ON_CLONE environment
variable in devstack gate jobs.  It is never acceptable to do that.  It
is also not acceptable to read its value and alter a program's behavior.

Devstack is used by developers and users to set up a simple OpenStack
environment.  It does this by cloning all of the projects' git repos and
installing them.

It is also used by our CI system to test changes.  Because the logic
regarding what state each of the repositories should be in is
complicated, that is offloaded to Zuul and the devstack-gate project.
They ensure that all of the repositories involved in a change are set up
correctly before devstack runs.  However, they need to be identified in
advance, and to ensure that we don't accidentally miss one, the
ERROR_ON_CLONE variable is checked by devstack and if it is asked to
clone a repository because it does not already exist (i.e., because it
was not set up in advance by devstack-gate), it fails with an error
message.

If you encounter this, simply add the missing project to the $PROJECTS
variable in your job definition.  There is no need to detect whether
your program is being tested and alter its behavior (a practice which I
gather may be popular but is falling out of favor).
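[Editor's note: concretely, the fix Jim describes is a one-line addition to the job definition. The project name below is made up for illustration.]

```shell
# Hypothetical devstack-gate job snippet: add the repo that devstack
# complained about to $PROJECTS so devstack-gate clones and sets it up
# in advance, instead of devstack hitting ERROR_ON_CLONE at runtime.
PROJECTS="openstack/my-missing-project $PROJECTS"
export PROJECTS
```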

-Jim


From ibalutoiu at cloudbasesolutions.com  Thu Sep 24 15:38:39 2015
From: ibalutoiu at cloudbasesolutions.com (Ionut Balutoiu)
Date: Thu, 24 Sep 2015 15:38:39 +0000
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server
Message-ID: <F8D0A41F2F840442963726CFDA7D173E251C27@CBSEX1.cloudbase.local>

Hello, guys!

I'm starting a new implementation for a dhcp provider,
mainly to be used for Ironic standalone. I'm planning to
push it upstream. I'm using the isc-dhcp-server service from
Linux. So, when an Ironic node is started, the ironic-conductor
writes the MAC-IP reservation for that node into the config file and
reloads the dhcp service. I'm using a SQL database as a backend to store
the dhcp reservations (I think it is cleaner and it should allow us
to have more than one DHCP server). What do you think about my
implementation?
Also, I'm not sure how I can scale this out to provide HA/failover.
Do you guys have any ideas?
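[Editor's note: the per-node reservation such a provider would write is a standard isc-dhcp-server `host` block. A minimal sketch of rendering one follows; the function name and stanza layout are illustrative, not Ionut's actual code.]

```python
def host_stanza(node_name, mac, ip):
    """Render an isc-dhcp-server host reservation for one Ironic node."""
    return (
        "host %s {\n"
        "  hardware ethernet %s;\n"   # the node's NIC MAC address
        "  fixed-address %s;\n"       # the IP reserved for that MAC
        "}\n" % (node_name, mac, ip)
    )
```

The conductor would append stanzas like this to dhcpd's config (or an included file) and then reload the service.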

Regards,
Ionut Balutoiu


From walter.boring at hpe.com  Thu Sep 24 15:53:04 2015
From: walter.boring at hpe.com (Walter A. Boring IV)
Date: Thu, 24 Sep 2015 08:53:04 -0700
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560303B7.6080906@linux.vnet.ibm.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com>
Message-ID: <56041C60.2040709@hpe.com>


>> To be honest this is probably my fault; AZs were pulled in as part of
>> the nova-volume migration to Cinder and just sort of died.  Quite
>> frankly I wasn't sure "what" to do with them but brought over the
>> concept and the zones that existed in Nova-Volume.  It's been an issue
>> since day 1 of Cinder, and as you note there are little hacks here and
>> there over the years to do different things.
>>
>> I think your question about whether they should be there at all or not
>> is a good one.  We have had some interest from folks lately that want to
>> couple Nova and Cinder AZ's (I'm really not sure of any details or
>> use-cases here).
>>
>> My opinion would be until somebody proposes a clear use case and need
>> that actually works that we consider deprecating it.
>>
>> While we're on the subject (kinda) I've never been very fond of having
>> Nova create the volume during boot process either; there's a number of
>> things that go wrong here (timeouts almost guaranteed for a "real"
>> image) and some things that are missing last I looked like type
>> selection etc.
>>
>> We do have a proposal to talk about this at the Summit, so maybe we'll
>> have a decent primer before we get there :)
>>
>> Thanks,
>>
>> John
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Heh, so when I just asked in the cinder channel if we can just
> deprecate nova boot from volume with source=(image|snapshot|blank)
> (which automatically creates the volume and polls for it to be
> available) and then add a microversion that doesn't allow it, I was
> half joking, but I see we're on the same page.  This scenario seems to
> introduce a lot of orchestration work that nova shouldn't necessarily
> be in the business of handling.
I tend to agree with this.   I believe the ability to boot from a volume
with source=image was just a convenience thing and shortcut for users. 
As John stated, we know that we have issues with large images and/or
volumes here with timeouts.  If we want to continue to support this,
then the only way to make sure we don't run into timeout issues is to
look into a callback mechanism from Cinder to Nova, but that seems
awfully heavy handed, just to continue to support Nova orchestrating
this.   The good thing about the Nova and Cinder clients/APIs is that
anyone can write a quick python script to do the orchestration
themselves, if we want to deprecate this.  I'm all for deprecating this.

Walt



From rbryant at redhat.com  Thu Sep 24 16:04:23 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Thu, 24 Sep 2015 12:04:23 -0400
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
Message-ID: <56041F07.9080705@redhat.com>

On 09/24/2015 10:37 AM, WANG, Ming Hao (Tony T) wrote:
> Russell,
> 
> Thanks for your detailed explanation and kind help!
> I now understand how a container in a VM can acquire network interfaces in different neutron networks.
> For the connections between compute nodes, I think I need to study the Geneve protocol and VTEP first.
> For any further questions, I may need to continue consulting you. :-) 

OVN uses Geneve conceptually the same way that the Neutron
reference implementation (ML2+OVS) uses VXLAN: to create overlay networks
among the compute nodes for tenant traffic.

VTEP gateways or provider networks come into play when you want to
connect these overlay networks to physical, or "underlay" networks.

Hope that helps,

-- 
Russell Bryant


From jay at jvf.cc  Thu Sep 24 16:05:46 2015
From: jay at jvf.cc (Jay Faulkner)
Date: Thu, 24 Sep 2015 16:05:46 +0000
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server
In-Reply-To: <F8D0A41F2F840442963726CFDA7D173E251C27@CBSEX1.cloudbase.local>
References: <F8D0A41F2F840442963726CFDA7D173E251C27@CBSEX1.cloudbase.local>
Message-ID: <DM2PR02MB1305AA9D5F561715918E4AC1DC430@DM2PR02MB1305.namprd02.prod.outlook.com>

Hi Ionut,

I like the idea -- I think there's only going to be one potential hiccup with getting this upstream: the use of an additional external database.

My suggestion is to go ahead and post what you have up to Gerrit -- even if there's no spec and it's not ready to merge, everyone will be able to see what you're working on. If it's important for you to merge this upstream, I'd suggest starting on a spec for Ironic (https://wiki.openstack.org/wiki/Ironic/Specs_Process). 

Also as always, feel free to drop by #openstack-ironic on Freenode and chat about this as well. It sounds like you have a big use case for Ironic and we'd love to have you in the IRC community.

Thanks,
Jay Faulkner

________________________________________
From: Ionut Balutoiu <ibalutoiu at cloudbasesolutions.com>
Sent: Thursday, September 24, 2015 8:38 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

Hello, guys!

I'm starting a new implementation for a dhcp provider,
mainly to be used for Ironic standalone. I'm planning to
push it upstream. I'm using the isc-dhcp-server service from
Linux. So, when an Ironic node is started, the ironic-conductor
writes the MAC-IP reservation for that node into the config file and
reloads the dhcp service. I'm using a SQL database as a backend to store
the dhcp reservations (I think it is cleaner and it should allow us
to have more than one DHCP server). What do you think about my
implementation?
Also, I'm not sure how I can scale this out to provide HA/failover.
Do you guys have any ideas?

Regards,
Ionut Balutoiu

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From rbryant at redhat.com  Thu Sep 24 16:12:47 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Thu, 24 Sep 2015 12:12:47 -0400
Subject: [openstack-dev] [neutron][networking-ovn][vtep] Proposal:
 support for vtep-gateway in ovn
In-Reply-To: <CAP0B2WOYFDk17dX73vJKHYz5HbijDTNYGC=4t2_zq6XFy6L_3A@mail.gmail.com>
References: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>
 <5603E709.2020602@redhat.com>
 <CAP0B2WOYFDk17dX73vJKHYz5HbijDTNYGC=4t2_zq6XFy6L_3A@mail.gmail.com>
Message-ID: <560420FF.5070903@redhat.com>

On 09/24/2015 10:18 AM, Salvatore Orlando wrote:
>     One particular issue is that the project implements the ovsdb protocol
>     from scratch.  The ovs project provides a Python library for this.  Both
>     Neutron and networking-ovn use it, at least.  From some discussion, I've
>     gathered that the ovs Python library lacked one feature that was needed,
>     but has since been added because we wanted the same thing in
>     networking-ovn.
> 
> 
> My take here is that we don't need to use the whole implementation of
> networking-l2gw, but only the APIs and the DB management layer it exposes.
> Networking-l2gw provides a VTEP network gateway solution that, if you
> want, will eventually be part of Neutron's "reference" control plane.
> OVN provides its implementation; I think it should be possible to
> leverage networking-l2gw either by pushing an OVN driver there, or
> implementing the same driver in openstack/networking-ovn.

From a quick look, it seemed like networking-l2gw was doing 2 things.

  1) Management of vtep switches themselves

  2) Management of connectivity between Neutron networks and VTEP
     gateways

I figured the implementation of #1 would be the same whether you were
using ML2+OVS, OVN, or whatever else.  This part is not addressed in
OVN.  You point OVN at VTEP gateways, but it's expected you manage the
gateway provisioning some other way.

It's #2 that has a very different implementation.  For OVN, it's just
creating a row in OVN's northbound database.

Or did I misinterpret what networking-l2gw is doing?

>     The networking-l2gw route will require some pretty significant work.
>     It's still the closest existing effort, so I think we should explore it
>     until it's absolutely clear that it *can't* work for what we need.
> 
> 
> I would say that it is definitely not trivial but probably a bit less
> than "significant". abhraut from my team has done something quite
> similar for openstack/vmware-nsx [1]

but specific to nsx.  :(

Does it look like networking-l2gw could be a common API for what's
needed for NSX?

> 
> 
>     > OR
>     >
>     > Should OVN pursue its own Neutron extension (including vtep gateway
>     > support).
> 
>     I don't think this option provides a lot of value over the short term
>     binding:profile solution.  Both are OVN specific.  I think I'd rather
>     just stick to binding:profile as the OVN specific stopgap because it's a
>     *lot* less work.
> 
> 
> I totally agree. The solution based on the binding profile is indeed a
> decent one in my opinion.
> If OVN cannot converge on the extension proposed by networking-l2gw then
> I'd keep using the binding profile for specifying gateway ports.

Great, thanks for the feedback!

> [1] https://review.openstack.org/#/c/210623/

-- 
Russell Bryant


From Tim.Bell at cern.ch  Thu Sep 24 16:13:23 2015
From: Tim.Bell at cern.ch (Tim Bell)
Date: Thu, 24 Sep 2015 16:13:23 +0000
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <56040FBB.70708@linux.vnet.ibm.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <5603B228.3070502@redhat.com> <56040354.50404@linux.vnet.ibm.com>
 <56040FBB.70708@linux.vnet.ibm.com>
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010A466C5C@CERNXCHG41.cern.ch>

> -----Original Message-----
> From: Matt Riedemann [mailto:mriedem at linux.vnet.ibm.com]
> Sent: 24 September 2015 16:59
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
> 
> 
> 
> On 9/24/2015 9:06 AM, Matt Riedemann wrote:
> >
> >
> > On 9/24/2015 3:19 AM, Sylvain Bauza wrote:
> >>
> >>
> >> Le 24/09/2015 09:04, Duncan Thomas a écrit :
> >>> Hi
> >>>
> >>> I thought I was late on this thread, but looking at the time stamps,
> >>> it is just something that escalated very quickly. I am honestly
> >>> surprised a cross-project interaction option went from 'we don't
> >>> seem to understand this' to 'deprecation merged' in 4 hours, with
> >>> only a 12 hour discussion on the mailing list, right at the end of a
> >>> cycle when we're supposed to be stabilising features.
> >>>
> >>
> >> So, I agree it was maybe a bit too quick hence the revert. That said,
> >> Nova master is now Mitaka, which means that the deprecation change
> >> was provided for the next cycle, not the one currently stabilising.
> >>
> >> Anyway, I'm really all up with discussing why Cinder needs to know
> >> the Nova AZs.
> >>
> >>> I proposed a session at the Tokyo summit for a discussion of Cinder
> >>> AZs, since there was clear confusion about what they are intended
> >>> for and how they should be configured.
> >>
> >> Cool, count me in from the Nova standpoint.
> >>
> >>> Since then I've reached out to and gotten good feedback from, a
> >>> number of operators. There are two distinct configurations for AZ
> >>> behaviour in cinder, and both sort-of worked until very recently.
> >>>
> >>> 1) No AZs in cinder
> >>> This is the config where a single 'blob' of storage (most of the
> >>> operators who responded so far are using Ceph, though that isn't
> >>> required). The storage takes care of availability concerns, and any
> >>> AZ info from nova should just be ignored.
> >>>
> >>> 2) Cinder AZs map to Nova AZs
> >>> In this case, some combination of storage / networking / etc couples
> >>> storage to nova AZs. It may be that an AZ is used as a unit of
> >>> scaling, or it could be a real storage failure domain. Either way,
> >>> there are a number of operators who have this configuration and want
> >>> to keep it. Storage can certainly have a failure domain, and
> >>> limiting the scalability problem of storage to a single compute AZ
> >>> can have definite advantages in failure scenarios. These people do
> >>> not want cross-az attach.
> >>>
> >>
> >> Ahem, Nova AZs are not failure domains - I mean the current
> >> implementation, in the sense in which many people understand a
> >> failure domain, i.e. a physical unit of machines (a bay, a room, a
> >> floor, a datacenter).
> >> All the AZs in Nova share the same controlplane with the same message
> >> queue and database, which means that one failure can be propagated to
> >> the other AZ.
> >>
> >> To be honest, there is one very specific usecase where AZs *are*
> >> failure domains: when cells exactly match AZs (i.e. one AZ
> >> grouping all the hosts behind one cell). That's the very specific
> >> usecase that Sam is mentioning in his email, and I certainly understand
> >> we need to keep that.
> >>
> >> What AZs are in Nova is pretty well explained in a quite old blog post:
> >> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-
> >> aggregates-in-openstack-compute-nova/
> >>
> >>
> >> We also added a few comments in our developer doc here
> >> http://docs.openstack.org/developer/nova/aggregates.html#availability
> >> -zones-azs
> >>
> >>
> >> tl;dr: AZs are aggregate metadata that makes those aggregates of
> >> compute nodes visible to the users. Nothing more than that, no magic
> sauce.
> >> That's just a logical abstraction that can be mapping your physical
> >> deployment, but like I said, which would share the same bus and DB.
> >> Of course, you could still provide networks distinct between AZs but
> >> that just gives you the L2 isolation, not the real failure domain in
> >> a Business Continuity Plan way.
> >>
> >> What puzzles me is how Cinder is managing a datacenter-level of
> >> isolation given there is no cells concept AFAIK. I assume that
> >> cinder-volumes belong to a specific datacenter, but how is its
> >> control plane managed? I can certainly understand the need
> >> of affinity placement between physical units, but I'm missing that
> >> piece, and consequently I wonder why Nova needs to provide AZs to
> >> Cinder in the general case.
> >>
> >>
> >>
> >>> My hope at the summit session was to agree these two configurations,
> >>> discuss any scenarios not covered by these two configuration, and
> >>> nail down the changes we need to get these to work properly. There's
> >>> definitely been interest and activity in the operator community in
> >>> making nova and cinder AZs interact, and every desired interaction
> >>> I've gotten details about so far matches one of the above models.
> >>>
> >>
> >> I'm all with you about providing a way for users to get volume
> >> affinity for Nova. That's a long story I'm trying to consider and we
> >> are constantly trying to improve the nova scheduler interfaces so
> >> that other projects could provide resources to the nova scheduler for
> >> decision making. I just want to consider whether AZs are the best
> >> concept for that or we should do thing by other ways (again, because
> >> AZs are not what people expect).
> >>
> >> Again, count me in for the Cinder session, and just lemme know when
> >> the session is planned so I could attend it.
> >>
> >> -Sylvain
> >>
> >>
> >>>
> >>>
> >>> __________________________________________________________________________
> >>>
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>>
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >> __________________________________________________________________________
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > I plan on reverting the deprecation change (which was a mitaka change,
> > not a liberty change, as Sylvain pointed out).
> >
> > However, the fact that so many nova and cinder cores were talking about
> > this yesterday and thought it was the right thing to do speaks to how
> > poorly understood (and undocumented) this use case is.
> > So as part of reverting the deprecation I also want to see improved
> > docs for the cross_az_attach option itself and probably a nova devref
> > change explaining the use cases and issues with this.
> >
> > I think the volume attach case is pretty straightforward.  You create
> > a nova instance in some nova AZ x and create a cinder volume in some
> > cinder AZ y and try to attach the volume to the server instance.  If
> > cinder.cross_az_attach=True this is OK, else it fails.
> >
> > The problem I have is with the boot from volume case where
> > source=(blank/image/snapshot).  In those cases nova is creating the
> > volume and passing the server instance AZ to the volume create API.
> > How are people that are using cinder.cross_az_attach=False handling
> > the BFV case?
> >
> > Per bug 1496235 that started this, the user is booting a nova instance
> > in a nova AZ with bdm source=image and when nova tries to create the
> > volume it fails because that AZ doesn't exist in cinder.  This fails
> > in the compute manager when building the instance, so this results in
> > a NoValidHost error for the user - which we all know and love as a
> > super useful error.  So how do we handle this case?  If
> > cinder.cross_az_attach=True in nova we could just not pass the
> > instance AZ to the volume create, or only pass it if cinder has that AZ
> available.
> >
> > But if cinder.cross_az_attach=False when creating the volume, what do
> > we do?  I guess we can just leave the code as-is and if the AZ isn't
> > in cinder (or your admin hasn't set
> > allow_availability_zone_fallback=True
> > in cinder.conf), then it fails and you open a support ticket.  That
> > seems gross to me.  I'd like to at least see some of this validated in
> > the nova API layer before it gets to the scheduler and compute so we
> > can avoid NoValidHost.  My thinking is, in the BFV case where source
> > != volume, if cinder.cross_az_attach is False and instance.az is not
> > None, then we check the list of AZs from the volume API.  If the
> > instance.az is not in that list, we fail fast (400 response to the
> > user).  However, if allow_availability_zone_fallback=True in
> > cinder.conf, we'd be rejecting the request even though the actual
> > volume create would succeed.  These are just details that we don't
> > have in the nova API since it's all policy driven gorp using config
> > options that the user doesn't know about, which makes it really hard
> > to write applications against this - and was part of the reason I moved
> > to deprecate that option.
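For illustration only, the fail-fast check described above (BFV with source != volume, cinder.cross_az_attach=False, instance.az not in cinder's AZ list) could be sketched roughly as follows. Every name here is invented, not actual nova code, and as noted, allow_availability_zone_fallback=True in cinder.conf would make such a check too strict:

```python
# Hypothetical sketch of the proposed nova API-layer check; all names are
# illustrative, not real nova code.

class InvalidAvailabilityZone(Exception):
    """Stand-in for the 400 response the nova API would return."""


def validate_bfv_az(instance_az, bdm_source, cross_az_attach, cinder_azs):
    """Fail fast when nova would create a volume in an AZ cinder lacks.

    Only applies to boot-from-volume with source != volume, when
    cinder.cross_az_attach is False and the instance has an AZ.
    """
    if bdm_source == 'volume' or cross_az_attach or instance_az is None:
        return  # nothing to validate in these cases
    if instance_az not in cinder_azs:
        # 400 to the user instead of a late NoValidHost from the scheduler
        raise InvalidAvailabilityZone(instance_az)
```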
> >
> > Am I off in the weeds?  It sounds like Duncan is going to try and get
> > a plan together in Tokyo about how to handle this and decouple nova
> > and cinder in this case, which is the right long-term goal.
> >
> 
> Revert is approved: https://review.openstack.org/#/c/227340/
> 

Matt, 

Thanks for reverting the change.

Is there a process description for deprecating features? It would be good
to include:

- notification of operators (in operator's list) and agreed time to reply
- documentation of workaround for those who are using a deprecated feature
in production

Thanks
Tim

> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __________________________________________________________
> ________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 7349 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/c0517dbf/attachment.bin>

From mgagne at internap.com  Thu Sep 24 16:16:43 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Thu, 24 Sep 2015 12:16:43 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <56041C60.2040709@hpe.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <56041C60.2040709@hpe.com>
Message-ID: <560421EB.2040808@internap.com>

On 2015-09-24 11:53 AM, Walter A. Boring IV wrote:
> The good thing about the Nova and Cinder clients/APIs is that
> anyone can write a quick python script to do the orchestration
> themselves, if we want to deprecate this.  I'm all for deprecating this.

I don't like this kind of reasoning, which can be used to justify almost
anything. It's easy to make those suggestions when you know Python. Please
consider non-technical/non-developer users when suggesting deprecating
features or proposing alternative solutions.

I could also say (in bad faith, I know): why have Heat when you can
write your own Python script. And yet, I don't think we would appreciate
anyone making such a controversial statement.

Our users don't know Python, and they use 3rd-party tools (which often
don't support orchestration) or the Horizon dashboard. They don't want
to have to learn Heat or Python so they can orchestrate volume creation
in place of Nova for a single instance. You don't write CloudFormation
templates on AWS just to boot an instance on volume. That's not the UX I
want to offer to my users.

-- 
Mathieu


From divius.inside at gmail.com  Thu Sep 24 16:40:53 2015
From: divius.inside at gmail.com (Dmitry Tantsur)
Date: Thu, 24 Sep 2015 18:40:53 +0200
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server
In-Reply-To: <F8D0A41F2F840442963726CFDA7D173E251C27@CBSEX1.cloudbase.local>
References: <F8D0A41F2F840442963726CFDA7D173E251C27@CBSEX1.cloudbase.local>
Message-ID: <CAAL3yRs4Ss1t_O_YBVxeiMznT9Yd6btc-Neqy_Y6ZhmptYs9Wg@mail.gmail.com>

2015-09-24 17:38 GMT+02:00 Ionut Balutoiu <ibalutoiu at cloudbasesolutions.com>
:

> Hello, guys!
>
> I'm starting a new implementation for a dhcp provider,
> mainly to be used for Ironic standalone. I'm planning to
> push it upstream. I'm using isc-dhcp-server service from
> Linux. So, when an Ironic node is started, the ironic-conductor
> writes the MAC-IP reservation for that node to the config file and
> reloads the dhcp service. I'm using a SQL database as a backend to store
> the dhcp reservations (I think it is cleaner and it should allow us
> to have more than one DHCP server). What do you think about my
> implementation?
>

What you describe slightly resembles how ironic-inspector works. It needs
to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
rules giving (or not giving) access to the dnsmasq instance. I wonder if we
may find some common code between these 2, but I definitely don't want to
reinvent Neutron :) I'll think about it after seeing your spec and/or code,
I'm already looking forward to them!
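For illustration, the conductor-side flow Ionut describes (render the reservations from the SQL backend, rewrite the config file, reload the service) might be sketched like this. The file path, reservation schema, and reload command are assumptions, not the actual proposed driver:

```python
# Hypothetical sketch of the described flow; the config path, the
# (hostname, mac, ip) schema, and the restart command are assumptions.
import subprocess


def render_reservations(reservations):
    """Render (hostname, mac, ip) rows as isc-dhcp-server host blocks."""
    blocks = []
    for hostname, mac, ip in reservations:
        blocks.append('host %s {\n'
                      '  hardware ethernet %s;\n'
                      '  fixed-address %s;\n'
                      '}' % (hostname, mac, ip))
    return '\n'.join(blocks) + '\n'


def update_dhcp(reservations, conf_path='/etc/dhcp/ironic-hosts.conf'):
    """Rewrite the reservations file, then restart the dhcp service."""
    with open(conf_path, 'w') as f:
        f.write(render_reservations(reservations))
    # isc-dhcp-server re-reads its config on restart
    subprocess.check_call(['systemctl', 'restart', 'isc-dhcp-server'])
```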


> Also, I'm not sure how I can scale this out to provide HA/failover.
> Do you guys have any ideas?
>
> Regards,
> Ionut Balutoiu
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
-- Dmitry Tantsur
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/b4c47c64/attachment.html>

From mgagne at internap.com  Thu Sep 24 16:50:12 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Thu, 24 Sep 2015 12:50:12 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
Message-ID: <560429C4.8060601@internap.com>

On 2015-09-24 3:04 AM, Duncan Thomas wrote:
> 
> I proposed a session at the Tokyo summit for a discussion of Cinder AZs,
> since there was clear confusion about what they are intended for and how
> they should be configured. Since then I've reached out to, and gotten
> good feedback from, a number of operators.

Thanks for your proposition. I will make sure to attend this session.


> There are two distinct
> configurations for AZ behaviour in cinder, and both sort-of worked until
> very recently.
> 
> 1) No AZs in cinder
> This is the config with a single 'blob' of storage (most of the
> operators who responded so far are using Ceph, though that isn't
> required). The storage takes care of availability concerns, and any AZ
> info from nova should just be ignored.

Unless I'm very mistaken, I think it's the main "feature" missing from
OpenStack itself. The concept of AZ isn't global and anyone can still
make it so Nova AZ != Cinder AZ.

In my opinion, AZ should be a global concept where they are available
and the same for all services so Nova AZ == Cinder AZ. This could result
in a behavior similar to "regions within regions".

We should survey and ask how AZs are actually used by operators and
users. Some might create an AZ for each server rack, others for each
power segment in their datacenter, or even for business units so they can
segregate workloads to specific physical servers. Some AZ use cases might
just be a "perverted" way of bypassing shortcomings in OpenStack itself. We
should find out those use cases and see if we should still support them
or offer existing or new alternatives.

(I don't run Ceph yet, only SolidFire but I guess the same could apply)

For people running Ceph (or other big clustered block storage), they
will have one big Cinder backend. For resources or business reasons,
they can't afford to create as many clusters (and Cinder AZs) as there
are AZs in Nova. So they end up with one big Cinder AZ (let's call it
az-1) in Cinder. Nova won't be able to create volumes in Cinder az-2 if
an instance is created in Nova az-2.

May I suggest the following solutions:

1) Add ability to disable this whole AZ concept in Cinder so it doesn't
fail to create volumes when Nova asks for a specific AZ. This could
result in the same behavior as cinder.cross_az_attach config.

2) Add ability for a volume backend to be in multiple AZ. Of course,
this would defeat the whole AZ concept. This could however be something
our operators/users might accept.


> 2) Cinder AZs map to Nova AZs
> In this case, some combination of storage / networking / etc couples
> storage to nova AZs. It may be that an AZ is used as a unit of
> scaling, or it could be a real storage failure domain. Either way, there
> are a number of operators who have this configuration and want to keep
> it. Storage can certainly have a failure domain, and limiting the
> scalability problem of storage to a single compute AZ can have definite
> advantages in failure scenarios. These people do not want cross-az attach.
> 
> My hope at the summit session was to agree on these two configurations,
> discuss any scenarios not covered by these two configurations, and nail
> down the changes we need to get these to work properly. There's
> definitely been interest and activity in the operator community in
> making nova and cinder AZs interact, and every desired interaction I've
> gotten details about so far matches one of the above models.


-- 
Mathieu


From sharis at Brocade.com  Thu Sep 24 16:53:06 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Thu, 24 Sep 2015 16:53:06 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
Message-ID: <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you posed; however, I am still working on some of the subtle issues raised. Once I have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20 GB, but the OVA compresses the image and disk to 3 GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up the VM. I agree a snapshot would be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose virtualbox since it is free.

Please send me your use cases; I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first use case; I would prefer it be the same for other use cases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/e1936fa5/attachment.html>

From emilien at redhat.com  Thu Sep 24 16:54:50 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 24 Sep 2015 12:54:50 -0400
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
Message-ID: <56042ADA.2060508@redhat.com>



On 09/24/2015 10:14 AM, Alex Schultz wrote:
> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi <emilien at redhat.com> wrote:
>> Background
>> ==========
>>
>> Current rspec tests are tested with modules mentioned in .fixtures.yaml
>> file of each module.
>>
>> * the file is not consistent across all modules
>> * it hardcodes module names & versions
>> * this way does not allow use of the "Depends-On" feature, which would
>> allow testing cross-module patches
>>
>> Proposal
>> ========
>>
>> * Like we do in beaker & integration jobs, use zuul-cloner to clone
>> modules in our CI jobs.
>> * Use r10k to prepare fixtures modules.
>> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>>
>> In that way:
>> * we will have modules name + versions testing consistency across all
>> modules
>> * the same Puppetfile would be used by unit/beaker/integration testing.
>> * the patch that pass tests on your laptop would pass tests in upstream CI
>> * if you don't have zuul-cloner on your laptop, don't worry it will use
>> git clone. Though you won't have Depends-On feature working on your
>> laptop (technically not possible).
>> * Though your patch will support Depends-On in OpenStack Infra for unit
>> tests. If you submit a patch in puppet-openstacklib that drop something
>> wrong, you can send a patch in puppet-nova that will test it, and unit
>> tests will fail.
>>
>> Drawbacks
>> =========
>> * cloning from .fixtures.yaml takes ~ 10 seconds
>> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>>
>> I think 40 extra seconds is acceptable given the benefit.
>>
> 
> As someone who consumes these modules downstream and has our own CI
> setup to run the rspec items, this ties it too closely to the
> openstack infrastructure. If we replace the .fixtures.yml with
> zuul-cloner, it assumes I always want the openstack version of the
> modules. This is not necessarily true. I like being able to replace
> items within fixtures.yml when doing dev work. For example If i want
> to test upgrading another module not related to openstack, like
> inifile, how does that work with the proposed solution?  This is also
> moving away from general puppet module conventions for testing. My
> preference would be that this be a different task and we have both
> .fixtures.yml (for general use/development) and the zuul method of
> cloning (for CI).  You have to also think about this from a consumer
> standpoint and this is adding an external dependency on the OpenStack
> infrastructure for anyone trying to run rspec or trying to consume the
> published versions from the forge.  Would I be able to run these tests
> in an offline mode with this change? With the .fixtures.yml it's a
> minor edit to switch to local versions. Is the same true for the
> zuul-cloner version?

What you did before:
* Edit .fixtures.yaml and put the version you like.

What you would do with the current proposal:
* Edit openstack/puppet-openstack-integration/Puppetfile and put the
version you like.

What you're suggesting has a huge downside:
people will still use fixtures by default and not test what is actually
tested by our CI.
A few people will know about the specific Rake task, so only a few people will
test exactly what upstream does. That will cause frustration for most
people, who will see tests failing in our CI but not on their laptops.
I'm not sure we want that.

I think most people that run tests on their laptops want to
see them passing in upstream CI.
The few people that want to tweak versions & modules will have to run
Rake, tweak the Puppetfile and run Rake again. It's not a big deal and
I'm sure those few people can deal with that.
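As a sketch of the fallback the proposal mentions (zuul-cloner in CI so Depends-On works, plain git clone on a laptop), the command selection might look like the following. The exact zuul-cloner arguments are illustrative, not the proposed rake task's actual code:

```python
# Illustrative only: the real logic would live in the proposed rake task,
# and the exact zuul-cloner arguments may differ.
import shutil


def clone_command(project, git_base='git://git.openstack.org'):
    """Build the clone command: prefer zuul-cloner (Depends-On aware in
    CI), fall back to a plain git clone on developer machines."""
    if shutil.which('zuul-cloner'):
        return ['zuul-cloner', '--workspace', '.', git_base, project]
    return ['git', 'clone',
            '%s/%s' % (git_base, project),
            project.split('/')[-1]]
```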

>>
>> Next steps
>> ==========
>>
>> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
>> * Patch openstack/puppet-modulesync-config to be consistent across all
>> our modules.
>>
>> Bonus
>> =====
>> we might need (asap) a canary job for puppet-openstack-integration
>> repository, that would run tests on a puppet-* module (since we're using
>> install_modules.sh & Puppetfile files in puppet-* modules).
>> Nothing has been done yet for this work.
>>
>>
>> Thoughts?
>> --
>> Emilien Macchi
>>
>>
> 
> I think we need this functionality, I just don't think it's a
> replacement for the .fixtures.yml.
> 
> Thanks,
> -Alex
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/fcc6b941/attachment.pgp>

From chris.friesen at windriver.com  Thu Sep 24 16:54:58 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Thu, 24 Sep 2015 10:54:58 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
Message-ID: <56042AE2.6000707@windriver.com>

On 09/22/2015 06:19 PM, John Griffith wrote:
> On Tue, Sep 22, 2015 at 6:17 PM, John Griffith <john.griffith8 at gmail.com

>     That target file pretty much "is" the persistence record; the db entry is
>     the iqn and provider info only.  I think that adding the fdatasync isn't a
>     bad idea at all.  At the very least it doesn't hurt.  Power losses on attach
>     I would expect to be problematic regardless.
>
> Let me clarify the statement above: if you lose power to the node in the
> middle of an attach process and the file wasn't written properly, you're most
> likely 'stuck' and will have to detach (which deletes the file), or it will be in
> an error state and rebuild the file when you try the attach again anyway, IIRC;
> it's been a while since we've mucked with that code (thank goodness)!!
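For reference, the kind of durable write being discussed (so a power loss leaves either the old file or the complete new one, never an empty or truncated one) can be sketched as below. This is the generic write-fsync-rename pattern, not cinder's actual target-file code:

```python
# Generic durable-write pattern, not cinder's actual implementation.
import os
import tempfile


def write_durably(path, contents):
    """Write contents so a crash leaves either the old or the new file,
    never an empty/truncated one."""
    dirname = os.path.dirname(path) or '.'
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        os.write(fd, contents.encode())
        os.fdatasync(fd)  # flush file data to disk before the rename
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomic replace on POSIX filesystems
    dir_fd = os.open(dirname, os.O_DIRECTORY)
    try:
        os.fsync(dir_fd)  # persist the directory entry as well
    finally:
        os.close(dir_fd)
```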

I took another look at the code and realized that the file *should* get rebuilt 
on restart after a power outage--if the file already exists it will print a 
warning message in the logs but it should still overwrite the contents of the 
file with the desired contents.  However, that didn't happen in my case.

That left me confused about how I ever ended up with an empty persistence file.
I went back to my logs and found this:

File "./usr/lib64/python2.7/site-packages/cinder/volume/manager.py", line 334, 
in init_host
File "/usr/lib64/python2.7/site-packages/osprofiler/profiler.py", line 105, in 
wrapper
File "./usr/lib64/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 
603, in ensure_export
File "./usr/lib64/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 
296, in ensure_export
File "./usr/lib64/python2.7/site-packages/cinder/volume/targets/tgt.py", line 
185, in create_iscsi_target
TypeError: not enough arguments for format string


So it seems like we might have a bug in the handling of an empty file.


In kilo/stable, line 185 code looks like this:

	chap_str = 'incominguser %s %s' % chap_auth

This means that "chap_auth" must not be None coming into this function.  In 
volume.targets.iscsi.ISCSITarget.ensure_export() we call

	chap_auth = self._get_target_chap_auth(context, iscsi_name)

which I think would return None given the empty file.
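A defensive version of that code path (a hedged guess at a fix, not the actual patch that landed) would only emit the incominguser line when credentials were actually recovered:

```python
# Sketch of a possible guard for the TypeError above; illustrative only,
# not the actual cinder fix.

def build_chap_line(chap_auth):
    """chap_auth is a (username, password) tuple, or None when the
    persistence file was empty or missing."""
    if chap_auth:
        return 'incominguser %s %s' % chap_auth
    return ''  # omit CHAP rather than crash formatting a None tuple
```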

I tried to reproduce with devstack running stable/kilo by booting from volume
and then removing the persistence file and restarting cinder-volume.  It
recreated the persistence file, but the "incominguser" line was missing
completely from the regenerated file.

I think I need to try to reproduce this in our load with some extra debugging
to see if I can figure out exactly what's going on.
	

Chris


From ramy.asselin at hpe.com  Thu Sep 24 17:04:28 2015
From: ramy.asselin at hpe.com (Asselin, Ramy)
Date: Thu, 24 Sep 2015 17:04:28 +0000
Subject: [openstack-dev] [third-party] Nodepool: OpenStackCloudException:
 Image creation failed: 403 Forbidden: Attribute 'is_public' is reserved.
 (HTTP 403)
Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965B076A2@G4W3223.americas.hpqcorp.net>

If anyone is getting the following stack trace from nodepool when trying to upload images to their providers:

2015-09-24 09:16:53,639 ERROR nodepool.DiskImageUpdater: Exception updating image dpc in p222fc:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nodepool/nodepool.py", line 979, in _run
    self.updateImage(session)
  File "/usr/local/lib/python2.7/dist-packages/nodepool/nodepool.py", line 1023, in updateImage
    self.image.meta)
  File "/usr/local/lib/python2.7/dist-packages/nodepool/provider_manager.py", line 531, in uploadImage
    **meta)
  File "/usr/local/lib/python2.7/dist-packages/shade/__init__.py", line 1401, in create_image
    "Image creation failed: {message}".format(message=str(e)))
OpenStackCloudException: Image creation failed: 403 Forbidden: Attribute 'is_public' is reserved. (HTTP 403)


This was worked around / fixed in this shade patch, which merged a few hours ago: https://review.openstack.org/#/c/226492/

It will be released to PyPI "soon", but I don't know when. In the meantime you can pip install it from openstack-infra/shade master.

Ramy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/8f958864/attachment.html>

From walter.boring at hpe.com  Thu Sep 24 17:10:51 2015
From: walter.boring at hpe.com (Walter A. Boring IV)
Date: Thu, 24 Sep 2015 10:10:51 -0700
Subject: [openstack-dev] repairing so many OpenStack components writing
 configuration files in /usr/etc
In-Reply-To: <560407DB.3040602@debian.org>
References: <560407DB.3040602@debian.org>
Message-ID: <56042E9B.1010501@hpe.com>

Hi Thomas,
  I can't speak to the other packages, but as far as os-brick goes,  the
/usr/local/etc stuff is simply for the embedded
rootwrap filter that os-brick is currently exporting.   You can see it here:
https://github.com/openstack/os-brick/blob/master/etc/os-brick/rootwrap.d/os-brick.filters
https://github.com/openstack/os-brick/blob/master/setup.cfg#L30-L31

The intention was to have devstack pull that file and dump it into
/etc/nova and /etc/cinder
for usage, but this has several problems.  We have plans to talk about
a workable solution in Tokyo, which will most likely embed the
rootwrap filter file into the package itself so that it won't go into
/usr/local/etc.

As of the current release of os-brick, no project is currently using
that rootwrap filter file directly.
For Liberty, we decided just to manually update the nova and cinder
filter files for all of the entries.


Walt

On 09/24/2015 07:25 AM, Thomas Goirand wrote:
> Hi,
>
> It's about the 3rd time just this week that I'm repairing an OpenStack
> component which is trying to write config files in /usr/etc. Could this
> nonsense stop please?
>
> FYI, this time, it's with os-brick... but it happened with so many
> components already:
> - bandit (with an awesome reply from upstream to my launchpad bug,
> basically saying he doesn't care about downstream distros...)
> - neutron
> - neutron-fwaas
> - tempest
> - lots of Neutron drivers (ie: networking-FOO)
> - pycadf
> - and probably more which I forgot.
>
> Yes, I can repair things at the packaging level, but I just hope I wont
> have to do this for each and every OpenStack component, and I suppose
> everyone understands how frustrating it is...
>
> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> .
>



From doug at doughellmann.com  Thu Sep 24 17:12:48 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 24 Sep 2015 13:12:48 -0400
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
Message-ID: <1443114453-sup-7374@lrrr.local>

Oslo team,

I am nominating Brant Knudson for Oslo core.

As liaison from the Keystone team Brant has participated in meetings,
summit sessions, and other discussions at a level higher than some
of our own core team members.  He is already core on oslo.policy
and oslo.cache, and given his track record I am confident that he would
make a good addition to the team.

Please indicate your opinion by responding with +1/-1 as usual.

Doug


From davanum at gmail.com  Thu Sep 24 17:14:16 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Thu, 24 Sep 2015 13:14:16 -0400
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <CANw6fcHqfmy+8ruO4URj3XtWOBk1RwU6G4ugx8-v4Fop3cWhdA@mail.gmail.com>

+1 from me.

-- Dims

On Thu, Sep 24, 2015 at 1:12 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> Oslo team,
>
> I am nominating Brant Knudson for Oslo core.
>
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
>
> Please indicate your opinion by responding with +1/-1 as usual.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/9582ba17/attachment.html>

From armamig at gmail.com  Thu Sep 24 17:25:12 2015
From: armamig at gmail.com (Armando M.)
Date: Thu, 24 Sep 2015 10:25:12 -0700
Subject: [openstack-dev] [neutron][networking-ovn][vtep] Proposal:
 support for vtep-gateway in ovn
In-Reply-To: <560420FF.5070903@redhat.com>
References: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>
 <5603E709.2020602@redhat.com>
 <CAP0B2WOYFDk17dX73vJKHYz5HbijDTNYGC=4t2_zq6XFy6L_3A@mail.gmail.com>
 <560420FF.5070903@redhat.com>
Message-ID: <CAK+RQeYqyvznDJQibPp9g0dsgeORUCXsUJQoeM6cV8Tz_S+AMg@mail.gmail.com>

On 24 September 2015 at 09:12, Russell Bryant <rbryant at redhat.com> wrote:

> On 09/24/2015 10:18 AM, Salvatore Orlando wrote:
> >     One particular issue is that the project implements the ovsdb
> protocol
> >     from scratch.  The ovs project provides a Python library for this.
> Both
> >     Neutron and networking-ovn use it, at least.  From some discussion,
> I've
> >     gathered that the ovs Python library lacked one feature that was
> needed,
> >     but has since been added because we wanted the same thing in
> >     networking-ovn.
> >
> >
> > My take here is that we don't need to use the whole implementation of
> > networking-l2gw, but only the APIs and the DB management layer it
> exposes.
> > Networking-l2gw provides a VTEP network gateway solution that, if you
> > want, will eventually be part of Neutron's "reference" control plane.
> > OVN provides its implementation; I think it should be possible to
> > leverage networking-l2gw either by pushing an OVN driver there, or
> > implementing the same driver in openstack/networking-ovn.
>
> From a quick look, it seemed like networking-l2gw was doing 2 things.
>
>   1) Management of vtep switches themselves
>
>   2) Management of connectivity between Neutron networks and VTEP
>      gateways
>
> I figured the implementation of #1 would be the same whether you were
> using ML2+OVS, OVN, (or whatever else).  This part is not addressed in
> OVN.  You point OVN at VTEP gateways, but it's expected you manage the
> gateway provisioning some other way.
>
> It's #2 that has a very different implementation.  For OVN, it's just
> creating a row in OVN's northbound database.
>
> Or did I misinterpret what networking-l2gw is doing?
>

No, you did not misinterpret what the objectives of the project were (which
I restate here):

* Provide an API for OpenStack admins to extend neutron logical networks
into unmanaged pre-existing VLANs. Bear in mind that things like address
collision prevention are left in the hands of the operator. Other aspects,
like L2/L3 interoperability, should instead be taken care of, at least from
an implementation point of view.

* Provide a pluggable framework for multiple drivers of the API.

* Provide a PoC implementation on top of the ovsdb vtep schema. This can
be implemented both in hardware (ToR switches) and software (software L2
gateways).



>
> >     The networking-l2gw route will require some pretty significant work.
> >     It's still the closest existing effort, so I think we should explore
> it
> >     until it's absolutely clear that it *can't* work for what we need.
>

We may have fallen short of some/all expectations, but I would like to
believe that it is nothing that can't be fixed by iterating, especially
if active project participation increases.

I don't think there's a procedural mandate to make OVN abide by the
proposed l2gw API. As you said, it is not yet a clear, well-accepted API,
but that's only because we live in a brand-new world, where people should
be allowed to experiment and reconcile later as community forces play out.

That said, should the conclusion that "it (the API) *can't* work for what
OVN needs" be reached, I would like to understand/document why, for the
sake of all of us involved, so that we can learn from our mistakes.

>
> >
> > I would say that it is definitely not trivial but probably a bit less
> > than "significant". abhraut from my team has done something quite
> > similar for openstack/vmware-nsx [1]
>
> but specific to nsx.  :(
>
> Does it look like networking-l2gw could be a common API for what's
> needed for NSX?
>
> >
> >
> >     > OR
> >     >
> >     > Should OVN pursue it?s own Neutron extension (including vtep
> gateway
> >     > support).
> >
> >     I don't think this option provides a lot of value over the short term
> >     binding:profile solution.  Both are OVN specific.  I think I'd rather
> >     just stick to binding:profile as the OVN specific stopgap because
> it's a
> >     *lot* less work.
> >
> >
> > I totally agree. The solution based on the binding profile is indeed a
> > decent one in my opinion.
> > If OVN cannot converge on the extension proposed by networking-l2gw then
> > I'd keep using the binding profile for specifying gateway ports.
>
> Great, thanks for the feedback!
>
> > [1] https://review.openstack.org/#/c/210623/
>
> --
> Russell Bryant
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/25dbae37/attachment.html>

From ozamiatin at mirantis.com  Thu Sep 24 17:27:49 2015
From: ozamiatin at mirantis.com (ozamiatin)
Date: Thu, 24 Sep 2015 20:27:49 +0300
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <56043295.1000003@mirantis.com>

+1 from me

On 9/24/15 20:12, Doug Hellmann wrote:
> Oslo team,
>
> I am nominating Brant Knudson for Oslo core.
>
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
>
> Please indicate your opinion by responding with +1/-1 as usual.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From sharis at Brocade.com  Thu Sep 24 17:29:24 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Thu, 24 Sep 2015 17:29:24 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time-consuming) when the VM is restarted.

The other option is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward: it involves modifying the .vbox file and seems prone to user errors. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions.

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you raised; however, I am still working on some of the subtler issues. Once I have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB, but the OVA compresses the image and disk to 3GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up  the VM. I agree a snapshot will be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases; I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer it be the same for other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/385516b9/attachment-0001.html>

From ayip at vmware.com  Thu Sep 24 17:36:59 2015
From: ayip at vmware.com (Alex Yip)
Date: Thu, 24 Sep 2015 17:36:59 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>,
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <1443116221875.72882@vmware.com>

I have been using images, rather than snapshots.


It doesn't take that long to start up.  First, I boot the VM, which takes a minute or so.  Then I run rejoin-stack.sh, which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and OpenStack state that were running before.


- Alex



________________________________
From: Shiv Haris <sharis at Brocade.com>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time-consuming) when the VM is restarted.

The other option is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward: it involves modifying the .vbox file and seems prone to user errors. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions.

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you raised; however, I am still working on some of the subtler issues. Once I have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB, but the OVA compresses the image and disk to 3GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up  the VM. I agree a snapshot will be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova<https://urldefense.proofpoint.com/v2/url?u=http-3A__paloaltan.net_Congress_Congress-5FUsecases-5FSEPT-5F17-5F2015.ova&d=BQMGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=djA1lFdIf0--GIJ_8gr44Q&m=3IP4igrLri-BaK8VbjbEq2l_AGknCI7-t3UbP5VwlU8&s=wVyys8I915mHTzrOp8f0KLqProw6ygNfaMSP0T-yqCg&e=>

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases; I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer it be the same for other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/29fc7560/attachment.html>

From mriedem at linux.vnet.ibm.com  Thu Sep 24 17:45:52 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Thu, 24 Sep 2015 12:45:52 -0500
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560429C4.8060601@internap.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <560429C4.8060601@internap.com>
Message-ID: <560436D0.1080304@linux.vnet.ibm.com>



On 9/24/2015 11:50 AM, Mathieu Gagné wrote:
> On 2015-09-24 3:04 AM, Duncan Thomas wrote:
>>
>> I proposed a session at the Tokyo summit for a discussion of Cinder AZs,
>> since there was clear confusion about what they are intended for and how
>> they should be configured. Since then I've reached out to and gotten
>> good feedback from, a number of operators.
>
> Thanks for your proposition. I will make sure to attend this session.
>
>
>> There are two distinct
>> configurations for AZ behaviour in cinder, and both sort-of worked until
>> very recently.
>>
>> 1) No AZs in cinder
>> This is the config where a single 'blob' of storage (most of the
>> operators who responded so far are using Ceph, though that isn't
>> required). The storage takes care of availability concerns, and any AZ
>> info from nova should just be ignored.
>
> Unless I'm very mistaken, I think it's the main "feature" missing from
> OpenStack itself. The concept of AZ isn't global and anyone can still
> make it so Nova AZ != Cinder AZ.
>
> In my opinion, AZ should be a global concept where they are available
> and the same for all services so Nova AZ == Cinder AZ. This could result
> in a behavior similar to "regions within regions".
>
> We should survey and ask how AZ are actually used by operators and
> users. Some might create an AZ for each server racks, others for each
> power segments in their datacenter or even business units so they can
> segregate to specific physical servers. Some AZ use cases might just be
> a "perverted" way of bypassing shortcomings in OpenStack itself. We
> should find out those use cases and see if we should still support them
> or offer them an existing or new alternatives.
>
> (I don't run Ceph yet, only SolidFire but I guess the same could apply)
>
> For people running Ceph (or other big clustered block storage), they
> will have one big Cinder backend. For resources or business reasons,
> they can't afford to create as many clusters (and Cinder AZ) as there
> are AZ in Nova. So they end up with one big Cinder AZ (lets call it
> az-1) in Cinder. Nova won't be able to create volumes in Cinder az-2 if
> an instance is created in Nova az-2.
>
> May I suggest the following solutions:
>
> 1) Add ability to disable this whole AZ concept in Cinder so it doesn't
> fail to create volumes when Nova asks for a specific AZ. This could
> result in the same behavior as cinder.cross_az_attach config.

That's essentially what this does:

https://review.openstack.org/#/c/217857/

It defaults to False though so you have to be aware and set it if you're 
hitting this problem.

The nova block_device code that tries to create the volume and passes 
the nova AZ should probably have been taking the cinder.cross_az_attach 
config option into account, because blindly passing the AZ was the 
reason cinder added that option.  There is now a change up for review 
to consider cinder.cross_az_attach in block_device:

https://review.openstack.org/#/c/225119/

But that's still making the assumption that we should be passing the AZ 
on the volume create request and will still fail if the AZ isn't in 
cinder (and allow_availability_zone_fallback=False in cinder.conf).

In talking with Duncan this morning he's going to propose a spec for an 
attempt to clean some of this up and decouple nova from handling this 
logic.  Basically a new Cinder API where you give it an AZ and it tells 
you if that's OK.  We could then use this on the nova side before we 
ever get to the compute node and fail.
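A rough sketch of that pre-flight check in Python (purely illustrative: the Cinder validation API is only a proposed spec at this point, and the function and option names below are assumptions, not actual Nova/Cinder code — only cinder.cross_az_attach and allow_availability_zone_fallback mirror real options discussed in this thread):

```python
# Hypothetical pre-flight AZ validation on the Nova side, done before
# reaching the compute node. Option semantics follow this thread:
# - cross_az_attach: volumes may attach to instances in other AZs
# - allow_az_fallback: cinder falls back to its default AZ if the
#   requested one is unknown (allow_availability_zone_fallback)

def volume_az_for_instance(instance_az, cinder_azs,
                           cross_az_attach=False,
                           allow_az_fallback=False):
    """Return the AZ to pass on the volume create request.

    Returns None when the AZ should be omitted from the request,
    and raises early when the create is guaranteed to fail.
    """
    if cross_az_attach:
        # Operator allows cross-AZ attach: don't force Cinder into
        # the Nova AZ at all.
        return None
    if instance_az in cinder_azs:
        return instance_az
    if allow_az_fallback:
        # Cinder would silently fall back to its default AZ anyway.
        return None
    # Fail up front instead of on the compute node.
    raise ValueError(
        "AZ %r is not a valid Cinder AZ (known: %s)"
        % (instance_az, sorted(cinder_azs)))
```

With cross_az_attach=False and no fallback, a request for an AZ Cinder doesn't know about fails immediately rather than after scheduling.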

>
> 2) Add ability for a volume backend to be in multiple AZ. Of course,
> this would defeat the whole AZ concept. This could however be something
> our operators/users might accept.

I'd nix this on the point about it defeating the purpose of AZs.

>
>
>> 2) Cinder AZs map to Nova AZs
>> In this case, some combination of storage / networking / etc couples
> >> storage to nova AZs. It may be that an AZ is used as a unit of
> >> scaling, or it could be a real storage failure domain. Either way, there
>> are a number of operators who have this configuration and want to keep
>> it. Storage can certainly have a failure domain, and limiting the
> >> scalability problem of storage to a single compute AZ can have definite
>> advantages in failure scenarios. These people do not want cross-az attach.
>>
>> My hope at the summit session was to agree these two configurations,
>> discuss any scenarios not covered by these two configuration, and nail
>> down the changes we need to get these to work properly. There's
>> definitely been interest and activity in the operator community in
>> making nova and cinder AZs interact, and every desired interaction I've
>> gotten details about so far matches one of the above models.
>
>

-- 

Thanks,

Matt Riedemann



From dborodaenko at mirantis.com  Thu Sep 24 17:46:11 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Thu, 24 Sep 2015 10:46:11 -0700
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CAHAWLf05fOy_NJKAwgmRKZSGu1ELeVYvn_V=ttqqjNPnb=d8Sw@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <CAHAWLf1OtHP5BdKsf+N-Fmv=34vLs36SP9UcQ=FurpEihr-hhg@mail.gmail.com>
 <CANw6fcHkopMgfuVHUXnp-zxJUCmqNoHn=m02h5=jZ7A_8yyinA@mail.gmail.com>
 <20150919010735.GB16012@localhost>
 <CANw6fcH6uePBTiZuKYWEoAJ_1sJFY_CbrjYNoqpKOyHCG6C70Q@mail.gmail.com>
 <CAKYN3rMgm8iKmTCb6BknV=UsM6W-zBeW1Ca9JzZ+a=Y80taODQ@mail.gmail.com>
 <CAHAWLf11d3Krn3Y9_EpSeR_07OsHfp6TEHcVmtLh+7vpD00ShA@mail.gmail.com>
 <20150924014919.GA6291@localhost>
 <CAHAWLf05fOy_NJKAwgmRKZSGu1ELeVYvn_V=ttqqjNPnb=d8Sw@mail.gmail.com>
Message-ID: <20150924174611.GA6332@localhost>

I've updated the policy document to explicitly spell out committers to
which repositories vote for PTL and for CLs:

https://review.openstack.org/#/c/225376/3..4/policy/team-structure.rst

This policy document is going to become the primary source of truth on
our governance process; I encourage all Fuel contributors, especially
core reviewers, to read it carefully, provide comments, and vote. So far
only Mike and Alexey have done that.
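Since the question of whom to count keeps coming up in this thread, here is a rough Python equivalent of the Election Officiating Guidelines one-liner (`git log --pretty=%aE --since '1 year ago' | sort -u`), merged across several team repositories. Collecting the per-repo logs is left to git; this sketch (repo contents below are placeholders, not real Fuel committers) just merges and de-duplicates the author emails:

```python
def electorate(per_repo_author_emails):
    """Return the sorted, de-duplicated electorate.

    per_repo_author_emails: iterable of lists, one list of committer
    emails per repository, e.g. the raw output of
    `git log --pretty=%aE --since '1 year ago'` in each repo.
    """
    voters = set()
    for emails in per_repo_author_emails:
        # Normalize: trim whitespace, lowercase, drop empty lines.
        voters.update(e.strip().lower() for e in emails if e.strip())
    return sorted(voters)

# Example with two repos and one overlapping committer
repos = [
    ["alice@example.com", "bob@example.com"],   # e.g. fuel-library
    ["bob@example.com", "carol@example.com"],   # e.g. fuel-web
]
print(electorate(repos))
# -> ['alice@example.com', 'bob@example.com', 'carol@example.com']
```

Which repositories go into the union is exactly the open question Vladimir raises below.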

-- 
Dmitry Borodaenko

On Thu, Sep 24, 2015 at 01:17:58PM +0300, Vladimir Kuklin wrote:
> Dmitry
> 
> Thank you for the clarification, but my questions still remain unanswered,
> unfortunately. It seems I did not phrase them correctly.
> 
> 1) For each of the positions, which set of git repositories should I run
> this command against? E.g. which stackforge/fuel-* projects contributors
> are electing PTL or CL?
> 2) Who is voting for component leads? Mike's email says these are core
> reviewers. Our previous IRC meeting mentioned all the contributors to
> particular components. Documentation link you sent is mentioning all
> contributors to Fuel projects. Whom should I trust? What is the final
> version? Is it fine that a documentation contributor is eligible to nominate
> himself and vote for Library Component Lead?
> 
> Until there is a clear and sealed answer to these questions we do not have
> a list of people who can vote and who can nominate. Let's get it clear at
> least before PTL elections start.
> 
> On Thu, Sep 24, 2015 at 4:49 AM, Dmitry Borodaenko <dborodaenko at mirantis.com
> > wrote:
> 
> > Vladimir,
> >
> > Sergey's initial email from this thread has a link to the Fuel elections
> > wiki page that describes the exact procedure to determine the electorate
> > and the candidates [0]:
> >
> >     The electorate for a given PTL and Component Leads election are the
> >     Foundation individual members that are also committers for one of
> >     the Fuel team's repositories over the last year timeframe (September
> >     18, 2014 06:00 UTC to September 18, 2015 05:59 UTC).
> >
> >     ...
> >
> >     Any member of an election electorate can propose their candidacy for
> >     the same election.
> >
> > [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015#Electorate
> >
> > If you follow more links from that page, you will find the Governance
> > page [1] and from there the Election Officiating Guidelines [2] that
> > provide a specific shell one-liner to generate that list:
> >
> >     git log --pretty=%aE --since '1 year ago' | sort -u
> >
> > [1] https://wiki.openstack.org/wiki/Governance
> > [2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
> >
> > As I have specified in the proposed Team Structure policy document [3],
> > this is the same process that is used by other OpenStack projects.
> >
> > [3] https://review.openstack.org/225376
> >
> > Having a different release schedule is not a sufficient reason for Fuel
> > to reinvent the wheel, for example OpenStack Infrastructure project
> > doesn't even have a release schedule for many of its deliverables, and
> > still follows the same elections schedule as the rest of OpenStack:
> >
> > [4] http://governance.openstack.org/reference/projects/infrastructure.html
> >
> > Lets keep things simple.
> >
> > --
> > Dmitry Borodaenko
> >
> >
> > On Wed, Sep 23, 2015 at 01:27:07PM +0300, Vladimir Kuklin wrote:
> > > Dmitry, Mike
> > >
> > > Thank you for the list of usable links.
> > >
> > > But still - we do not have a clearly defined procedure on determining who
> > is
> > > eligible to nominate and vote for PTL and Component Leads. Remember, that
> > > Fuel still has different release cycle and Kilo+Liberty contributors list
> > > is not exactly the same for "365days" contributors list.
> > >
> > > Can we finally come up with the list of people eligible to nominate and
> > > vote?
> > >
> > > On Sun, Sep 20, 2015 at 2:37 AM, Mike Scherbakov <
> > mscherbakov at mirantis.com>
> > > wrote:
> > >
> > > > Let's move on.
> > > > I started work on MAINTAINERS files, proposed two patches:
> > > > https://review.openstack.org/#/c/225457/1
> > > > https://review.openstack.org/#/c/225458/1
> > > >
> > > > These can be used as templates for other repos / folders.
> > > >
> > > > Thanks,
> > > >
> > > > On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas <davanum at gmail.com>
> > > > wrote:
> > > >
> > > >> +1 Dmitry
> > > >>
> > > >> -- Dims
> > > >>
> > > >> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
> > > >> dborodaenko at mirantis.com> wrote:
> > > >>
> > > >>> Dims,
> > > >>>
> > > >>> Thanks for the reminder!
> > > >>>
> > > >>> I've summarized the uncontroversial parts of that thread in a policy
> > > >>> proposal as per you suggestion [0], please review and comment. I've
> > > >>> renamed SMEs to maintainers since Mike has agreed with that part,
> > and I
> > > >>> omitted code review SLAs from the policy since that's the part that
> > has
> > > >>> generated the most discussion.
> > > >>>
> > > >>> [0] https://review.openstack.org/225376
> > > >>>
> > > >>> I don't think we should postpone the election: the PTL election
> > follows
> > > >>> the same rules as OpenStack so we don't need a Fuel-specific policy
> > for
> > > >>> that, and the component leads election doesn't start until October 9,
> > > >>> which gives us 3 weeks to confirm consensus on that aspect of the
> > > >>> policy.
> > > >>>
> > > >>> --
> > > >>> Dmitry Borodaenko
> > > >>>
> > > >>>
> > > >>> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
> > > >>> > Sergey,
> > > >>> >
> > > >>> > Please see [1]. Did we codify some of these roles and
> > responsibilities
> > > >>> as a
> > > >>> > community in a spec? There was also a request to use terminology
> > like
> > > >>> say
> > > >>> > MAINTAINERS in that email as well.
> > > >>> >
> > > >>> > Are we pulling the trigger a bit early for an actual election?
> > > >>> >
> > > >>> > Thanks,
> > > >>> > Dims
> > > >>> >
> > > >>> > [1] http://markmail.org/message/2ls5obgac6tvcfss
> > > >>> >
> > > >>> > On Fri, Sep 18, 2015 at 6:56 AM, Vladimir Kuklin <
> > vkuklin at mirantis.com
> > > >>> >
> > > >>> > wrote:
> > > >>> >
> > > >>> > > Sergey, Fuelers
> > > >>> > >
> > > >>> > > This is awesome news!
> > > >>> > >
> > > >>> > > By the way, I have a question on who is eligible to vote and to
> > > >>> nominate
> > > >>> > > him/her-self for both PTL and Component Leads. Could you
> > elaborate
> > > >>> on that?
> > > >>> > >
> > > >>> > > And there is no such entity as Component Lead in OpenStack - so
> > we
> > > >>> are
> > > >>> > > actually creating one. What are the new rights and
> > responsibilities
> > > >>> of CL?
> > > >>> > >
> > > >>> > > On Fri, Sep 18, 2015 at 5:39 AM, Sergey Lukjanov <
> > > >>> slukjanov at mirantis.com>
> > > >>> > > wrote:
> > > >>> > >
> > > >>> > >> Hi folks,
> > > >>> > >>
> > > >>> > >> I'd like to announce that we're running the PTL and Component
> > Leads
> > > >>> > >> elections. Detailed information available on wiki. [0]
> > > >>> > >>
> > > >>> > >> Project Team Lead: Manages day-to-day operations, drives the
> > project
> > > >>> > >> team goals, resolves technical disputes within the project
> > team. [1]
> > > >>> > >>
> > > >>> > >> Component Lead: Defines architecture of a module or component in
> > > >>> Fuel,
> > > >>> > >> reviews design specs, merges majority of commits and resolves
> > > >>> conflicts
> > > >>> > >> between Maintainers or contributors in the area of
> > responsibility.
> > > >>> [2]
> > > >>> > >>
> > > >>> > >> Fuel has two large sub-teams, with roughly comparable codebases,
> > > >>> that
> > > >>> > >> need dedicated component leads: fuel-library and fuel-python.
> > [2]
> > > >>> > >>
> > > >>> > >> Nominees propose their candidacy by sending an email to the
> > > >>> > >> openstack-dev at lists.openstack.org mailing-list, with the
> > subject:
> > > >>> > >> "[fuel] PTL candidacy" or "[fuel] <component> lead candidacy"
> > > >>> > >> (for example, "[fuel] fuel-library lead candidacy").
> > > >>> > >>
> > > >>> > >> Time line:
> > > >>> > >>
> > > >>> > >> PTL elections
> > > >>> > >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
> > > >>> position
> > > >>> > >> * September 29 - October 8: PTL elections
> > > >>> > >>
> > > >>> > >> Component leads elections (fuel-library and fuel-python)
> > > >>> > >> * October 9 - October 15: Open candidacy for Component leads
> > > >>> positions
> > > >>> > >> * October 16 - October 22: Component leads elections
> > > >>> > >>
> > > >>> > >> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015
> > > >>> > >> [1] https://wiki.openstack.org/wiki/Governance
> > > >>> > >> [2]
> > > >>> > >>
> > > >>>
> > http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
> > > >>> > >> [3] https://lwn.net/Articles/648610/
> > > >>> > >>
> > > >>> > >> --
> > > >>> > >> Sincerely yours,
> > > >>> > >> Sergey Lukjanov
> > > >>> > >> Sahara Technical Lead
> > > >>> > >> (OpenStack Data Processing)
> > > >>> > >> Principal Software Engineer
> > > >>> > >> Mirantis Inc.
> > > >>> > >>
> > > >>> > >>
> > > >>>
> > __________________________________________________________________________
> > > >>> > >> OpenStack Development Mailing List (not for usage questions)
> > > >>> > >> Unsubscribe:
> > > >>> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > >>> > >>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >>> > >>
> > > >>> > >>
> > > >>> > >
> > > >>> > >
> > > >>> > > --
> > > >>> > > Yours Faithfully,
> > > >>> > > Vladimir Kuklin,
> > > >>> > > Fuel Library Tech Lead,
> > > >>> > > Mirantis, Inc.
> > > >>> > > +7 (495) 640-49-04
> > > >>> > > +7 (926) 702-39-68
> > > >>> > > Skype kuklinvv
> > > >>> > > 35bk3, Vorontsovskaya Str.
> > > >>> > > Moscow, Russia,
> > > >>> > > www.mirantis.com <http://www.mirantis.ru/>
> > > >>> > > www.mirantis.ru
> > > >>> > > vkuklin at mirantis.com
> > > >>> > >
> > > >>> > >
> > > >>>
> > > >>> > >
> > > >>> > >
> > > >>> >
> > > >>> >
> > > >>> > --
> > > >>> > Davanum Srinivas :: https://twitter.com/dims
> > > >>>
> > > >>> >
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > > --
> > > > Mike Scherbakov
> > > > #mihgen
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> >
> > >
> >
> >
> >
> 
> 
> 




From blak111 at gmail.com  Thu Sep 24 17:52:45 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 24 Sep 2015 10:52:45 -0700
Subject: [openstack-dev] [neutron] Stumped...need help with
 neutronclient job failure
In-Reply-To: <CA+ikoRN0-GNCw+RDT87vUZAnDq0_LbKa72JrS-EmcuiHueMsvw@mail.gmail.com>
References: <CA+ikoRN0-GNCw+RDT87vUZAnDq0_LbKa72JrS-EmcuiHueMsvw@mail.gmail.com>
Message-ID: <CAO_F6JNzB2qdmhE5ybY=3M3qtwckO4aE0R-DyqXiL=3C1rKnvA@mail.gmail.com>

Can you look to see what process tempest_lib is trying to execute?

On Wed, Sep 23, 2015 at 4:02 AM, Paul Michali <pc at michali.net> wrote:

> Hi,
>
> I created a pair of experimental jobs for python-neutronclient that will
> run functional tests on core and advanced services, respectively. In the
> python-neutronclient repo, I have a commit [1] that splits the tests into
> two directories for core/adv-svcs, enables the VPN devstack plugin for the
> advanced services tests, and removes the skip decorator for the VPN tests.
>
> When these two jobs run, the core job passes (as expected). The advanced
> services job shows all four advanced services tests (testing REST LIST
> requests for IKE policy, IPSec policy, IPSec site-to-site connection, and
> VPN service resources) failing, with this T/B:
>
> ft1.1: neutronclient.tests.functional.adv-svcs.test_readonly_neutron_vpn.SimpleReadOnlyNeutronVpnClientTest.test_neutron_vpn_*ipsecpolicy_list*_StringException: Empty attachments:
>   pythonlogging:''
>   stderr
>   stdout
>
> Traceback (most recent call last):
>   File "neutronclient/tests/functional/adv-svcs/test_readonly_neutron_vpn.py", line 37, in test_neutron_vpn_ipsecpolicy_list
>     ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
>   File "neutronclient/tests/functional/base.py", line 78, in neutron
>     **kwargs)
>   File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py", line 292, in neutron
>     'neutron', action, flags, params, fail_ok, merge_stderr)
>   File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py", line 361, in cmd_with_auth
>     self.cli_dir)
>   File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py", line 61, in execute
>     proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
>     errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
>     raise child_exception
> OSError: [Errno 2] No such file or directory
>
>
> When I look at the other logs on this run [2], I see these things:
> - The VPN agent is running (so the DevStack plugin started up VPN)
> - screen-q-svc.log shows only two of the four REST GET requests
> - Initially there were no testr results, but I modified the post-test hook
> script similar to what Neutron does (so it shows results now)
> - No other errors seen, including nothing on the StringException
>
> When I run this locally, all four tests pass, and I see four REST requests
> in the screen-q-svc.log.
>
> I tried a hack to enable NEUTRONCLIENT_DEBUG environment variable, but no
> additional information was shown.
>
> Does anyone have any thoughts on what may be going wrong here?
> Any ideas on how to troubleshoot this issue?
>
> Thanks in advance!
>
> Paul Michali (pc_m)
>
> Refs
> [1] https://review.openstack.org/#/c/214587/
> [2]
> http://logs.openstack.org/87/214587/8/experimental/gate-neutronclient-test-dsvm-functional-adv-svcs/5dfa152/
>
>
>
>


-- 
Kevin Benton
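The `OSError: [Errno 2] No such file or directory` in the traceback above is what `subprocess.Popen` raises when the executable itself cannot be found: no child process ever starts, which is consistent with the empty stdout/stderr attachments. A minimal, self-contained reproduction (the binary name here is made up):

```python
import subprocess

def run_cli(cmd):
    # Popen raises OSError(ENOENT) when the command does not exist on
    # PATH; since no child runs, stdout/stderr stay empty, just like in
    # the failing job.
    try:
        subprocess.Popen(cmd)
        return None
    except OSError as exc:
        return exc.errno

errno_seen = run_cli(["no-such-neutron-binary", "vpn-ipsecpolicy-list"])
print(errno_seen)  # 2 (errno.ENOENT) on POSIX
```

So the likely question is not what the CLI printed, but which command string tempest_lib assembled and where it looked for it.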

From sbauza at redhat.com  Thu Sep 24 17:55:29 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 24 Sep 2015 19:55:29 +0200
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560421EB.2040808@internap.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <56041C60.2040709@hpe.com>
 <560421EB.2040808@internap.com>
Message-ID: <56043911.4080206@redhat.com>



On 24/09/2015 18:16, Mathieu Gagné wrote:
> On 2015-09-24 11:53 AM, Walter A. Boring IV wrote:
>> The good thing about the Nova and Cinder clients/APIs is that
>> anyone can write a quick python script to do the orchestration
>> themselves, if we want to deprecate this.  I'm all for deprecating this.
> I don't like this kind of reasoning which can justify close to anything.
> It's easy to make those suggestions when you know Python. Please
> consider non-technical/non-developers users when suggesting deprecating
> features or proposing alternative solutions.
>
> I could also say (in bad faith, I know): why have Heat when you can
> write your own Python script. And yet, I don't think we would appreciate
> anyone making such a controversial statement.
>
> Our users don't know Python, use 3rd party tools (which don't often
> perform/support orchestration) or the Horizon dashboard. They don't want
> to have to learn Heat or Python so they can orchestrate volume creation
> in place of Nova for a single instance. You don't write CloudFormation
> templates on AWS just to boot an instance on volume. That's not the UX I
> want to offer to my users.
>

I'd tend to answer that if it's a user problem, then I would prefer to
see the orchestration done by a python-novaclient CLI wrapper module,
like we have for host-evacuate (for example), and deprecate the REST and
novaclient APIs. Users could still get the orchestration done by the
same CLI, but the API would no longer support it.

-Sylvain



From rbryant at redhat.com  Thu Sep 24 18:06:26 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Thu, 24 Sep 2015 14:06:26 -0400
Subject: [openstack-dev] [neutron][networking-ovn][vtep] Proposal:
 support for vtep-gateway in ovn
In-Reply-To: <CAK+RQeYqyvznDJQibPp9g0dsgeORUCXsUJQoeM6cV8Tz_S+AMg@mail.gmail.com>
References: <201509240517.t8O5HC4p019939@d01av04.pok.ibm.com>
 <5603E709.2020602@redhat.com>
 <CAP0B2WOYFDk17dX73vJKHYz5HbijDTNYGC=4t2_zq6XFy6L_3A@mail.gmail.com>
 <560420FF.5070903@redhat.com>
 <CAK+RQeYqyvznDJQibPp9g0dsgeORUCXsUJQoeM6cV8Tz_S+AMg@mail.gmail.com>
Message-ID: <56043BA2.9060008@redhat.com>

On 09/24/2015 01:25 PM, Armando M. wrote:
> 
> 
> 
> On 24 September 2015 at 09:12, Russell Bryant <rbryant at redhat.com
> <mailto:rbryant at redhat.com>> wrote:
> 
>     On 09/24/2015 10:18 AM, Salvatore Orlando wrote:
>     >     One particular issue is that the project implements the ovsdb protocol
>     >     from scratch.  The ovs project provides a Python library for this.  Both
>     >     Neutron and networking-ovn use it, at least.  From some discussion, I've
>     >     gathered that the ovs Python library lacked one feature that was needed,
>     >     but has since been added because we wanted the same thing in
>     >     networking-ovn.
>     >
>     >
>     > My take here is that we don't need to use the whole implementation of
>     > networking-l2gw, but only the APIs and the DB management layer it exposes.
>     > Networking-l2gw provides a VTEP network gateway solution that, if you
>     > want, will eventually be part of Neutron's "reference" control plane.
>     > OVN provides its implementation; I think it should be possible to
>     > leverage networking-l2gw either by pushing an OVN driver there, or
>     > implementing the same driver in openstack/networking-ovn.
> 
>     From a quick look, it seemed like networking-l2gw was doing 2 things.
> 
>       1) Management of vtep switches themselves
> 
>       2) Management of connectivity between Neutron networks and VTEP
>          gateways
> 
>     I figured the implementation of #1 would be the same whether you were
>     using ML2+OVS, OVN, (or whatever else).  This part is not addressed in
>     OVN.  You point OVN at VTEP gateways, but it's expected you manage the
>     gateway provisioning some other way.
> 
>     It's #2 that has a very different implementation.  For OVN, it's just
>     creating a row in OVN's northbound database.
> 
>     or did I mis-interpret what networking-l2gw is doing?
> 
> 
> No, you did not misinterpret what the objectives of the project were
> (which I restate here):
> 
> * Provide an API to OpenStack admins to extend neutron logical networks
> into unmanaged pre-existing VLANs. Bear in mind that things like address
> collision prevention are left in the hands of the operator. Other aspects
> like L2/L3 interoperability instead should be taken care of, at least
> from an implementation point of view.
> 
> * Provide a pluggable framework for multiple drivers of the API.
> 
> * Provide a PoC implementation on top of the ovsdb vtep schema. This
> can be implemented both in hardware (ToR switches) and software
> (software L2 gateways). 

Thanks for clarifying the project's goals!

>     >     The networking-l2gw route will require some pretty significant work.
>     >     It's still the closest existing effort, so I think we should explore it
>     >     until it's absolutely clear that it *can't* work for what we need.
> 
> 
> We may have fallen short of some/all expectations, but I would like to
> believe that it is nothing that can't be fixed by iterating,
> especially if active project participation rises.
> 
> I don't think there's a procedural mandate to make OVN abide by the l2gw
> proposed API. As you said, it is not a clear, well-accepted API, but
> that's only because we live in a brand new world, where people should be
> allowed to experiment and reconcile later as community forces play out.
> 
> That said, should the conclusion that "it (the API) *can't* work for
> what OVN needs" be reached, I would like to understand/document why for
> the sake of all us involved so that lessons will yield from our mistakes.

My gut says we should be able to work together and make it work.  I
expect we'll talk in more detail in the next cycle.  :-)

-- 
Russell Bryant


From andrew at lascii.com  Thu Sep 24 18:10:25 2015
From: andrew at lascii.com (Andrew Laski)
Date: Thu, 24 Sep 2015 14:10:25 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560421EB.2040808@internap.com>
References: <5602E8D5.9080407@linux.vnet.ibm.com>
 <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <56041C60.2040709@hpe.com>
 <560421EB.2040808@internap.com>
Message-ID: <20150924181025.GE8745@crypt>

On 09/24/15 at 12:16pm, Mathieu Gagné wrote:
>On 2015-09-24 11:53 AM, Walter A. Boring IV wrote:
>> The good thing about the Nova and Cinder clients/APIs is that
>> anyone can write a quick python script to do the orchestration
>> themselves, if we want to deprecate this.  I'm all for deprecating this.
>
>I don't like this kind of reasoning which can justify close to anything.
>It's easy to make those suggestions when you know Python. Please
>consider non-technical/non-developers users when suggesting deprecating
>features or proposing alternative solutions.
>
>I could also say (in bad faith, I know): why have Heat when you can
>write your own Python script. And yet, I don't think we would appreciate
>anyone making such a controversial statement.
>
>Our users don't know Python, use 3rd party tools (which don't often
>perform/support orchestration) or the Horizon dashboard. They don't want
>to have to learn Heat or Python so they can orchestrate volume creation
>in place of Nova for a single instance. You don't write CloudFormation
>templates on AWS just to boot an instance on volume. That's not the UX I
>want to offer to my users.

The issues that I've seen with having this happen in Nova are that there 
are many different ways for this process to fail and the user is 
provided no control or visibility.

As an example we have some images that should convert to volumes quickly 
so failure would be defined as taking longer than x amount of time, but 
for another set of images that are expected to take longer failure would 
be 3x amount of time.  Nova shouldn't be the place to decide how long 
volume creation should take, and I wouldn't expect to ask users to pass 
this in during an API request.

When volume creation does take a decent amount of time there is no 
indication of progress in the Nova API.  When monitoring it via the 
Cinder API you can get a rough approximation of progress.  I don't 
expect Nova to expose volume creation progress as part of the feedback 
during an instance boot request.

At the moment the volume creation request happens from the computes 
themselves.  This means that a failure presents itself as a build 
failure leading to a reschedule and ultimately the user is given a 
NoValidHost.  This is unhelpful, and as an operator, tracking down the 
root cause is time-consuming.

When there is a failure to build an instance while Cinder is creating a 
volume it's possible to end up with the volume left around while the 
instance is deleted.  This is not at all made visible to users in the 
Nova API unless they query the list of volumes and see one they don't 
expect, though it's often immediately clear in the DELETE request sent 
to Cinder.

In short, it ends up being much nicer for users to control the process 
themselves.  Alternatively it would be nice if there was an 
orchestration system that could handle it for them.  But Nova is not 
designed to do that very well.
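The per-image timeout point is easy to see in a client-side sketch: when the user orchestrates, the timeout is just a parameter they choose, rather than something Nova has to guess. Everything below is illustrative, not novaclient/cinderclient API:

```python
import time

def wait_for_volume(get_status, timeout, poll=0.5):
    # Client-side orchestration sketch: poll a status callable until the
    # volume is usable. The caller picks `timeout` per image/workload,
    # which Nova cannot reasonably do on the user's behalf.
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status == "available":
            return status
        if status == "error":
            raise RuntimeError("volume entered error state")
        if time.monotonic() >= deadline:
            raise RuntimeError("volume not available within %ss" % timeout)
        time.sleep(poll)
```

A fast-converting image might call this with `timeout=60` and a large one with `timeout=600`; the failure mode is also explicit instead of surfacing as a reschedule and NoValidHost.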


>
>-- 
>Mathieu
>


From mtreinish at kortar.org  Thu Sep 24 18:11:20 2015
From: mtreinish at kortar.org (Matthew Treinish)
Date: Thu, 24 Sep 2015 14:11:20 -0400
Subject: [openstack-dev] [neutron] Stumped...need help with
 neutronclient job failure
In-Reply-To: <CAO_F6JNzB2qdmhE5ybY=3M3qtwckO4aE0R-DyqXiL=3C1rKnvA@mail.gmail.com>
References: <CA+ikoRN0-GNCw+RDT87vUZAnDq0_LbKa72JrS-EmcuiHueMsvw@mail.gmail.com>
 <CAO_F6JNzB2qdmhE5ybY=3M3qtwckO4aE0R-DyqXiL=3C1rKnvA@mail.gmail.com>
Message-ID: <20150924181120.GA8982@sazabi.kortar.org>

On Thu, Sep 24, 2015 at 10:52:45AM -0700, Kevin Benton wrote:
> Can you look to see what process tempest_lib is trying to execute?
> 
> On Wed, Sep 23, 2015 at 4:02 AM, Paul Michali <pc at michali.net> wrote:
> 
> > Hi,
> >
> > I created a pair of experimental jobs for python-neutronclient that will
> > run functional tests on core and advanced services, respectively. In the
> > python-neutronclient repo, I have a commit [1] that splits the tests into
> > two directories for core/adv-svcs, enables the VPN devstack plugin for the
> > advanced services tests, and removes the skip decorator for the VPN tests.
> >
> > When these two jobs run, the core job passes (as expected). The advanced
> > services job shows all four advanced services tests (testing REST LIST
> > requests for IKE policy, IPSec policy, IPSec site-to-site connection, and
> > VPN service resources) failing, with this T/B:
> >
> > ft1.1: neutronclient.tests.functional.adv-svcs.test_readonly_neutron_vpn.SimpleReadOnlyNeutronVpnClientTest.test_neutron_vpn_*ipsecpolicy_list*_StringException: Empty attachments:
> >   pythonlogging:''
> >   stderr
> >   stdout
> >
> > Traceback (most recent call last):
> >   File "neutronclient/tests/functional/adv-svcs/test_readonly_neutron_vpn.py", line 37, in test_neutron_vpn_ipsecpolicy_list
> >     ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
> >   File "neutronclient/tests/functional/base.py", line 78, in neutron
> >     **kwargs)
> >   File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py", line 292, in neutron
> >     'neutron', action, flags, params, fail_ok, merge_stderr)
> >   File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py", line 361, in cmd_with_auth
> >     self.cli_dir)
> >   File "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py", line 61, in execute
> >     proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
> >   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
> >     errread, errwrite)
> >   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
> >     raise child_exception
> > OSError: [Errno 2] No such file or directory

So, taking a blind guess without actually looking at anything besides this email,
my thinking is that you aren't installing neutronclient into /usr/bin in that job.
Either it's being installed in the tox venv only or going into /usr/local/bin or
something like that. There is a parameter to give tempest-lib the bin dir where
the cli commands live. You need to make sure that's set to where ever you're
installing the CLI commands.
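The lookup behaviour being described can be sketched as follows. This is a hypothetical helper of my own, not tempest-lib code; it only illustrates the idea of tempest-lib's bin-dir parameter (prefer an explicitly configured CLI directory, fall back to PATH):

```python
import os
import shutil

def find_cli(name, cli_dir=None):
    # Hypothetical lookup: check an explicitly configured bin directory
    # first; if the CLI was installed elsewhere (e.g. a tox venv or
    # /usr/local/bin), an unset/wrong cli_dir is what turns into ENOENT.
    if cli_dir:
        candidate = os.path.join(cli_dir, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return shutil.which(name)  # None if the CLI is not on PATH either
```

In the failing job, verifying where `neutron` actually lands (venv bin dir vs. /usr/bin) and pointing the test configuration at that directory would be the first thing to check.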

> >
> >
> > When I look at the other logs on this run [2], I see these things:
> > - The VPN agent is running (so the DevStack plugin started up VPN)
> > - screen-q-svc.log shows only two of the four REST GET requests
> > - Initially there was no testr results, but I modified post test hook
> > script similar to what Neutron does (so it shows results now)
> > - No other errors seen, including nothing on the StringException
> >
> > When I run this locally, all four tests pass, and I see four REST requests
> > in the screen-q-svc.log.
> >
> > I tried a hack to enable NEUTRONCLIENT_DEBUG environment variable, but no
> > additional information was shown.
> >
> > Does anyone have any thoughts on what may be going wrong here?
> > Any ideas on how to troubleshoot this issue?
> >
> > Thanks in advance!
> >
> > Paul Michali (pc_m)
> >
> > Refs
> > [1] https://review.openstack.org/#/c/214587/
> > [2]
> > http://logs.openstack.org/87/214587/8/experimental/gate-neutronclient-test-dsvm-functional-adv-svcs/5dfa152/
> >

From sbauza at redhat.com  Thu Sep 24 18:16:02 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 24 Sep 2015 20:16:02 +0200
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560436D0.1080304@linux.vnet.ibm.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <560429C4.8060601@internap.com> <560436D0.1080304@linux.vnet.ibm.com>
Message-ID: <56043DE2.7020305@redhat.com>



On 24/09/2015 19:45, Matt Riedemann wrote:
>
>
> On 9/24/2015 11:50 AM, Mathieu Gagné wrote:
>> On 2015-09-24 3:04 AM, Duncan Thomas wrote:
>>>
>>> I proposed a session at the Tokyo summit for a discussion of Cinder 
>>> AZs,
>>> since there was clear confusion about what they are intended for and 
>>> how
>>> they should be configured. Since then I've reached out to and gotten
>>> good feedback from, a number of operators.
>>
>> Thanks for your proposition. I will make sure to attend this session.
>>
>>
>>> There are two distinct
>>> configurations for AZ behaviour in cinder, and both sort-of worked 
>>> until
>>> very recently.
>>>
>>> 1) No AZs in cinder
>>> This is the config where a single 'blob' of storage (most of the
>>> operators who responded so far are using Ceph, though that isn't
>>> required). The storage takes care of availability concerns, and any AZ
>>> info from nova should just be ignored.
>>
>> Unless I'm very mistaken, I think it's the main "feature" missing from
>> OpenStack itself. The concept of AZ isn't global and anyone can still
>> make it so Nova AZ != Cinder AZ.
>>
>> In my opinion, AZ should be a global concept where they are available
>> and the same for all services so Nova AZ == Cinder AZ. This could result
>> in a behavior similar to "regions within regions".
>>
>> We should survey and ask how AZs are actually used by operators and
>> users. Some might create an AZ for each server rack, others for each
>> power segments in their datacenter or even business units so they can
>> segregate to specific physical servers. Some AZ use cases might just be
>> a "perverted" way of bypassing shortcomings in OpenStack itself. We
>> should find out those use cases and see if we should still support them
>> or offer them an existing or new alternatives.
>>
>> (I don't run Ceph yet, only SolidFire but I guess the same could apply)
>>
>> For people running Ceph (or other big clustered block storage), they
>> will have one big Cinder backend. For resources or business reasons,
>> they can't afford to create as many clusters (and Cinder AZ) as there
>> are AZ in Nova. So they end up with one big Cinder AZ (lets call it
>> az-1) in Cinder. Nova won't be able to create volumes in Cinder az-2 if
>> an instance is created in Nova az-2.
>>
>> May I suggest the following solutions:
>>
>> 1) Add ability to disable this whole AZ concept in Cinder so it doesn't
>> fail to create volumes when Nova asks for a specific AZ. This could
>> result in the same behavior as cinder.cross_az_attach config.
>
> That's essentially what this does:
>
> https://review.openstack.org/#/c/217857/
>
> It defaults to False though so you have to be aware and set it if 
> you're hitting this problem.
>
> The nova block_device code that tries to create the volume and passes 
> the nova AZ should have probably been taking into account the 
> cinder.cross_az_attach config option, because just blindly passing it 
> was the reason why cinder added that option.  There is now a change up 
> for review to consider cinder.cross_az_attach in block_device:
>
> https://review.openstack.org/#/c/225119/
>
> But that's still making the assumption that we should be passing the 
> AZ on the volume create request and will still fail if the AZ isn't in 
> cinder (and allow_availability_zone_fallback=False in cinder.conf).
>
> In talking with Duncan this morning he's going to propose a spec for 
> an attempt to clean some of this up and decouple nova from handling 
> this logic.  Basically a new Cinder API where you give it an AZ and it 
> tells you if that's OK.  We could then use this on the nova side 
> before we ever get to the compute node and fail.

My opinion matches yours: we should decouple Nova AZs from Cinder AZs 
and just have a lazy relationship between them, with a way to call 
Cinder to find out the AZ before calling the scheduler.
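The "ask Cinder first" idea could be sketched as a pre-flight check done before the volume-create call ever reaches a compute node. The helper name and defaults below are illustrative only, loosely mirroring cinder's `allow_availability_zone_fallback` option:

```python
def choose_volume_az(requested_az, cinder_azs, allow_fallback=True,
                     default_az="nova"):
    # Pre-flight AZ validation sketch: resolve the AZ mismatch up front,
    # instead of letting it surface later as a build failure, a
    # reschedule, and ultimately a NoValidHost.
    if requested_az is None:
        return default_az
    if requested_az in cinder_azs:
        return requested_az
    if allow_fallback:
        return default_az
    raise ValueError("availability zone %r not offered by Cinder"
                     % requested_az)
```

With something like this, a Nova az-2 instance against a single big Cinder az-1 backend either falls back cleanly or fails fast with a meaningful error.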


>
>>
>> 2) Add ability for a volume backend to be in multiple AZ. Of course,
>> this would defeat the whole AZ concept. This could however be something
>> our operators/users might accept.
>
> I'd nix this on the point about it defeating the purpose of AZs.

Well, if we rename Cinder AZs to something else, then I'm honestly not 
really opinionated, since it's already always confusing, because Nova AZs 
are groups of hosts, not anything else.

If we keep the naming as AZs, then I'm not OK since it creates more 
confusion.

-Sylvain


>
>>
>>
>>> 2) Cinder AZs map to Nova AZs
>>> In this case, some combination of storage / networking / etc couples
>>> storage to nova AZs. It may be that an AZ is used as a unit of
>>> scaling, or it could be a real storage failure domain. Either way, there
>>> are a number of operators who have this configuration and want to keep
>>> it. Storage can certainly have a failure domain, and limiting the
>>> scalability problem of storage to a single compute AZ can have definite
>>> advantages in failure scenarios. These people do not want cross-az 
>>> attach.
>>>
>>> My hope at the summit session was to agree these two configurations,
>>> discuss any scenarios not covered by these two configuration, and nail
>>> down the changes we need to get these to work properly. There's
>>> definitely been interest and activity in the operator community in
>>> making nova and cinder AZs interact, and every desired interaction I've
>>> gotten details about so far matches one of the above models.
>>
>>
>



From chris.friesen at windriver.com  Thu Sep 24 18:18:03 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Thu, 24 Sep 2015 12:18:03 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <56042AE2.6000707@windriver.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com>
Message-ID: <56043E5B.7020709@windriver.com>

On 09/24/2015 10:54 AM, Chris Friesen wrote:

> I took another look at the code and realized that the file *should* get rebuilt
> on restart after a power outage--if the file already exists it will print a
> warning message in the logs but it should still overwrite the contents of the
> file with the desired contents.  However, that didn't happen in my case.
>
> That made me confused about how I ever ended up with an empty persistence file.
>   I went back to my logs and found this:
>
> File "./usr/lib64/python2.7/site-packages/cinder/volume/manager.py", line 334,
> in init_host
> File "/usr/lib64/python2.7/site-packages/osprofiler/profiler.py", line 105, in
> wrapper
> File "./usr/lib64/python2.7/site-packages/cinder/volume/drivers/lvm.py", line
> 603, in ensure_export
> File "./usr/lib64/python2.7/site-packages/cinder/volume/targets/iscsi.py", line
> 296, in ensure_export
> File "./usr/lib64/python2.7/site-packages/cinder/volume/targets/tgt.py", line
> 185, in create_iscsi_target
> TypeError: not enough arguments for format string
>
>
> So it seems like we might have a bug in the handling of an empty file.

And I think I know how we got the empty file in the first place, and it wasn't 
the original file creation but rather the file re-creation.

I have logs from shortly before the above logs showing cinder-volume receiving a 
SIGTERM while it was processing the volume in question:


2015-09-21 19:23:59.123 12429 WARNING cinder.volume.targets.tgt 
[req-7d092503-198a-4f59-97e9-d4d520d38379 - - - - -] Persistence file already 
exists for volume, found file at: 
/opt/cgcs/cinder/data/volumes/volume-76c5f285-a15e-474e-b59e-fd609a624090
2015-09-21 19:24:01.252 12429 WARNING cinder.volume.targets.tgt 
[req-7d092503-198a-4f59-97e9-d4d520d38379 - - - - -] Persistence file already 
exists for volume, found file at: 
/opt/cgcs/cinder/data/volumes/volume-993c94b2-e256-4baf-ab55-805a8e28f547
2015-09-21 19:24:01.951 8201 INFO cinder.openstack.common.service 
[req-904f88a8-8e6f-425e-8df7-5cbb9baae0c5 - - - - -] Caught SIGTERM, stopping 
children


I think what happened is that we took the SIGTERM after the open() call in 
create_iscsi_target(), but before writing anything to the file.

         f = open(volume_path, 'w+')
         f.write(volume_conf)
         f.close()

The 'w+' causes the file to be immediately truncated on opening, leading to an 
empty file.
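The truncation is easy to reproduce in isolation; here's a minimal sketch (the path and contents below are made up for illustration, this is not the cinder code):

```python
import os

path = "/tmp/volume-demo.conf"  # hypothetical path, stands in for the persistence file

# A persistence file with valid contents already exists.
with open(path, "w") as f:
    f.write("<target iqn.2010-10.org.openstack:volume-demo>\n</target>\n")

# Opening with 'w+' truncates the file the moment open() returns,
# before anything has been written back. If the process takes SIGTERM
# right here, an empty file is left on disk.
f = open(path, "w+")
print(os.path.getsize(path))  # 0 -- the old contents are already gone
f.close()
```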

To work around this, I think we need to do the classic "write to a temporary 
file and then rename it to the desired filename" trick.  The atomicity of the 
rename ensures that either the old contents or the new contents are present.
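A rough sketch of that trick (including the fsync this thread's subject asks about, so the new contents are durable before the rename) might look like the following; the function and file names are illustrative, not the actual cinder implementation:

```python
import os
import tempfile

def write_atomically(path, contents):
    """Write contents so a reader only ever sees the old file or the
    complete new file, never a truncated one. Sketch only."""
    dirname = os.path.dirname(path) or "."
    # The temp file must live on the same filesystem for rename() to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".volume-tmp-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(contents)
            f.flush()
            os.fsync(f.fileno())  # push the data to disk before renaming
        os.rename(tmp_path, path)  # atomic on POSIX; old or new, never empty
    except Exception:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

If the process is killed at any point, the target file keeps its previous contents; at worst the temporary file is orphaned.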

Chris


From aschultz at mirantis.com  Thu Sep 24 18:19:26 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Thu, 24 Sep 2015 13:19:26 -0500
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <56042ADA.2060508@redhat.com>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
 <56042ADA.2060508@redhat.com>
Message-ID: <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>

On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi <emilien at redhat.com> wrote:
>
>
> On 09/24/2015 10:14 AM, Alex Schultz wrote:
>> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi <emilien at redhat.com> wrote:
>>> Background
>>> ==========
>>>
>>> Current rspec tests are tested with modules mentioned in .fixtures.yaml
>>> file of each module.
>>>
>>> * the file is not consistent across all modules
>>> * it hardcodes module names & versions
>>> * this way does not allow use of the "Depends-On" feature, which would
>>> allow cross-module patches to be tested
>>>
>>> Proposal
>>> ========
>>>
>>> * Like we do in beaker & integration jobs, use zuul-cloner to clone
>>> modules in our CI jobs.
>>> * Use r10k to prepare fixtures modules.
>>> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>>>
>>> In that way:
>>> * we will have modules name + versions testing consistency across all
>>> modules
>>> * the same Puppetfile would be used by unit/beaker/integration testing.
>>> * a patch that passes tests on your laptop would pass tests in upstream CI
>>> * if you don't have zuul-cloner on your laptop, don't worry it will use
>>> git clone. Though you won't have Depends-On feature working on your
>>> laptop (technically not possible).
>>> * Though your patch will support Depends-On in OpenStack Infra for unit
>>> tests. If you submit a patch in puppet-openstacklib that drops something
>>> wrong, you can send a patch in puppet-nova that will test it, and unit
>>> tests will fail.
>>>
>>> Drawbacks
>>> =========
>>> * cloning from .fixtures.yaml takes ~ 10 seconds
>>> * using r10k + zuul-clone takes ~50 seconds (more modules to clone).
>>>
>>> I think 40 extra seconds is acceptable given the benefit.
>>>
>>
>> As someone who consumes these modules downstream and has our own CI
>> setup to run the rspec items, this ties it too closely to the
>> openstack infrastructure. If we replace the .fixtures.yml with
>> zuul-cloner, it assumes I always want the openstack version of the
>> modules. This is not necessarily true. I like being able to replace
>> items within fixtures.yml when doing dev work. For example If i want
>> to test upgrading another module not related to openstack, like
>> inifile, how does that work with the proposed solution?  This is also
>> moving away from general puppet module conventions for testing. My
>> preference would be that this be a different task and we have both
>> .fixtures.yml (for general use/development) and the zuul method of
>> cloning (for CI).  You have to also think about this from a consumer
>> standpoint and this is adding an external dependency on the OpenStack
>> infrastructure for anyone trying to run rspec or trying to consume the
>> published versions from the forge.  Would I be able to run these tests
>> in an offline mode with this change? With the .fixures.yml it's a
>> minor edit to switch to local versions. Is the same true for the
>> zuul-cloner version?
>
> What you did before:
> * Edit .fixtures.yaml and put the version you like.
>
> What you would do with the current proposal:
> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
> version you like.
>

So I have to edit a file in another module to test changes in
puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
local testing what does that workflow look like?

> What you're suggesting has a huge downside:
> People will still use fixtures by default and not test what is actually
> tested by our CI.
> A few people will know about the specific Rake task, so only a few people
> will test exactly what upstream does. That will cause frustration for most
> of the people who will see tests failing in our CI and not on their laptop.
> I'm not sure we want that.

You're right that the specific rake task may not be ideal. But that
was one option; another option could be to use fixtures first and then
replace them with the zuul-cloner-provided versions, but give me the
ability to turn off the zuul-cloner part. I'm just saying that as it is
today, this change adds more complexity and hard ties into the OpenStack
infrastructure with non-trivial workarounds. I would love to solve the
Depends-On issue, but I don't think that should include a deviation
from generally accepted testing practices for puppet modules.

>
> I think most of the people that run tests on their laptops want to
> see them passing in upstream CI.
> The few people that want to tweak versions & modules will have to run
> Rake, edit the Puppetfile and run Rake again. It's not a big deal and
> I'm sure those few people can deal with that.
>

So for me the zuul-cloner task seems more of a CI specific job that
solves the Depends-On issues we currently have. Much like the beaker
and acceptance tests that's not something I run locally. I usually run
the local rspec tests first before shipping off to CI to see how that
plays out but I would manage the .fixtures.yml if necessary to test
cross module dependencies.  I don't expect to replicate an entire CI
environment setup on my laptop for testing.  The rspec tests for me,
represent a quick way to test fixes before shipping off to CI for more
testing.

Going back to the background item from the original email, the
.fixtures.yml shouldn't be identical for all modules. It should only
be the modules required to test the specific module. I doubt all of
the puppet OpenStack modules require each other, right? So that's not
a problem, that's an expectation. Additionally, we should be managing
these anyway so when we publish the modules to the forge, it has
proper metadata indicating the dependencies.

This change seems targeted towards solving OpenStack CI environmental
setup issues, and not really improving individual module development
from a regular puppet standpoint.

>>>
>>> Next steps
>>> ==========
>>>
>>> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
>>> * Patch openstack/puppet-modulesync-config to be consistent across all
>>> our modules.
>>>
>>> Bonus
>>> =====
>>> we might need (asap) a canary job for puppet-openstack-integration
>>> repository, that would run tests on a puppet-* module (since we're using
>>> install_modules.sh & Puppetfile files in puppet-* modules).
>>> Nothing has been done yet for this work.
>>>
>>>
>>> Thoughts?
>>> --
>>> Emilien Macchi
>>>
>>>
>>
>> I think we need this functionality, I just don't think it's a
>> replacement for the .fixures.yml.
>>
>> Thanks,
>> -Alex
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Emilien Macchi
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From mgagne at internap.com  Thu Sep 24 18:28:39 2015
From: mgagne at internap.com (Mathieu Gagné)
Date: Thu, 24 Sep 2015 14:28:39 -0400
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <560436D0.1080304@linux.vnet.ibm.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <560429C4.8060601@internap.com> <560436D0.1080304@linux.vnet.ibm.com>
Message-ID: <560440D7.7060405@internap.com>

Hi Matt,

On 2015-09-24 1:45 PM, Matt Riedemann wrote:
> 
> 
> On 9/24/2015 11:50 AM, Mathieu Gagné wrote:
>>
>> May I suggest the following solutions:
>>
>> 1) Add ability to disable this whole AZ concept in Cinder so it doesn't
>> fail to create volumes when Nova asks for a specific AZ. This could
>> result in the same behavior as cinder.cross_az_attach config.
> 
> That's essentially what this does:
> 
> https://review.openstack.org/#/c/217857/
> 
> It defaults to False though so you have to be aware and set it if you're
> hitting this problem.
> 
> The nova block_device code that tries to create the volume and passes
> the nova AZ should have probably been taking into account the
> cinder.cross_az_attach config option, because just blindly passing it
> was the reason why cinder added that option.  There is now a change up
> for review to consider cinder.cross_az_attach in block_device:
> 
> https://review.openstack.org/#/c/225119/
> 
> But that's still making the assumption that we should be passing the AZ
> on the volume create request and will still fail if the AZ isn't in
> cinder (and allow_availability_zone_fallback=False in cinder.conf).
> 
> In talking with Duncan this morning he's going to propose a spec for an
> attempt to clean some of this up and decouple nova from handling this
> logic.  Basically a new Cinder API where you give it an AZ and it tells
> you if that's OK.  We could then use this on the nova side before we
> ever get to the compute node and fail.

IMO, the confusion comes from what I consider a wrong usage of AZ. To
quote Sylvain Bauza from a recent review [1][2]:

"because Nova AZs and Cinder AZs are very different failure domains"

This is not the concept of AZ I came to know from cloud providers,
where an AZ is global to the region, not per-service.

Google Cloud Platform:
- Persistent disks are per-zone resources. [3]
- Resources that are specific to a zone or a region can only be used by
other resources in the same zone or region. For example, disks and
instances are both zonal resources. To attach a disk to an instance,
both resources must be in the same zone. [4]

Amazon Web Services:
- Instances and disks are per-zone resources. [5]

So now we are stuck with AZ not being consistent across services and
confusing people.


[1] https://review.openstack.org/#/c/225119/2
[2] https://review.openstack.org/#/c/225119/2/nova/virt/block_device.py
[3] https://cloud.google.com/compute/docs/disks/persistent-disks
[4] https://cloud.google.com/compute/docs/zones
[5] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html

-- 
Mathieu


From julien at danjou.info  Thu Sep 24 18:32:37 2015
From: julien at danjou.info (Julien Danjou)
Date: Thu, 24 Sep 2015 20:32:37 +0200
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local> (Doug Hellmann's message of
 "Thu, 24 Sep 2015 13:12:48 -0400")
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <m0612zd5x6.fsf@danjou.info>

On Thu, Sep 24 2015, Doug Hellmann wrote:

> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
>
> Please indicate your opinion by responding with +1/-1 as usual.

Good to go! +1.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/19b3e147/attachment.pgp>

From flavio at redhat.com  Thu Sep 24 18:43:40 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Thu, 24 Sep 2015 20:43:40 +0200
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <20150924184340.GO26372@redhat.com>

On 24/09/15 13:12 -0400, Doug Hellmann wrote:
>Oslo team,
>
>I am nominating Brant Knudson for Oslo core.
>
>As liaison from the Keystone team Brant has participated in meetings,
>summit sessions, and other discussions at a level higher than some
>of our own core team members.  He is already core on oslo.policy
>and oslo.cache, and given his track record I am confident that he would
>make a good addition to the team.
>
>Please indicate your opinion by responding with +1/-1 as usual.

+1

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/e43ed2ad/attachment.pgp>

From stevemar at ca.ibm.com  Thu Sep 24 18:57:20 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Thu, 24 Sep 2015 14:57:20 -0400
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <OF3311FC94.7E592B21-ON00257ECA.0067F7F4-85257ECA.0068208F@notes.na.collabserv.com>


Though I'm not Oslo Core, big +1 from me; Brant is a great benefit to any
project.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Doug Hellmann <doug at doughellmann.com>
To:	openstack-dev <openstack-dev at lists.openstack.org>
Date:	2015/09/24 01:13 PM
Subject:	[openstack-dev] [oslo] nominating Brant Knudson for Oslo core



Oslo team,

I am nominating Brant Knudson for Oslo core.

As liaison from the Keystone team Brant has participated in meetings,
summit sessions, and other discussions at a level higher than some
of our own core team members.  He is already core on oslo.policy
and oslo.cache, and given his track record I am confident that he would
make a good addition to the team.

Please indicate your opinion by responding with +1/-1 as usual.

Doug

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/ce9b0bcf/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/ce9b0bcf/attachment.gif>

From emilien at redhat.com  Thu Sep 24 18:58:23 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 24 Sep 2015 14:58:23 -0400
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
 <56042ADA.2060508@redhat.com>
 <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>
Message-ID: <560447CF.4060104@redhat.com>



On 09/24/2015 02:19 PM, Alex Schultz wrote:
> On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi <emilien at redhat.com> wrote:
>>
>>
>> On 09/24/2015 10:14 AM, Alex Schultz wrote:
>>> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi <emilien at redhat.com> wrote:
>>>> Background
>>>> ==========
>>>>
>>>> Current rspec tests are tested with modules mentioned in .fixtures.yaml
>>>> file of each module.
>>>>
>>>> * the file is not consistent across all modules
>>>> * it hardcodes module names & versions
>>>> * this way does not allow use of the "Depends-On" feature, which would
>>>> allow cross-module patches to be tested
>>>>
>>>> Proposal
>>>> ========
>>>>
>>>> * Like we do in beaker & integration jobs, use zuul-cloner to clone
>>>> modules in our CI jobs.
>>>> * Use r10k to prepare fixtures modules.
>>>> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>>>>
>>>> In that way:
>>>> * we will have modules name + versions testing consistency across all
>>>> modules
>>>> * the same Puppetfile would be used by unit/beaker/integration testing.
>>>> * a patch that passes tests on your laptop would pass tests in upstream CI
>>>> * if you don't have zuul-cloner on your laptop, don't worry it will use
>>>> git clone. Though you won't have Depends-On feature working on your
>>>> laptop (technically not possible).
>>>> * Though your patch will support Depends-On in OpenStack Infra for unit
>>>> tests. If you submit a patch in puppet-openstacklib that drops something
>>>> wrong, you can send a patch in puppet-nova that will test it, and unit
>>>> tests will fail.
>>>>
>>>> Drawbacks
>>>> =========
>>>> * cloning from .fixtures.yaml takes ~ 10 seconds
>>>> * using r10k + zuul-clone takes ~50 seconds (more modules to clone).
>>>>
>>>> I think 40 extra seconds is acceptable given the benefit.
>>>>
>>>
>>> As someone who consumes these modules downstream and has our own CI
>>> setup to run the rspec items, this ties it too closely to the
>>> openstack infrastructure. If we replace the .fixtures.yml with
>>> zuul-cloner, it assumes I always want the openstack version of the
>>> modules. This is not necessarily true. I like being able to replace
>>> items within fixtures.yml when doing dev work. For example If i want
>>> to test upgrading another module not related to openstack, like
>>> inifile, how does that work with the proposed solution?  This is also
>>> moving away from general puppet module conventions for testing. My
>>> preference would be that this be a different task and we have both
>>> .fixtures.yml (for general use/development) and the zuul method of
>>> cloning (for CI).  You have to also think about this from a consumer
>>> standpoint and this is adding an external dependency on the OpenStack
>>> infrastructure for anyone trying to run rspec or trying to consume the
>>> published versions from the forge.  Would I be able to run these tests
>>> in an offline mode with this change? With the .fixures.yml it's a
>>> minor edit to switch to local versions. Is the same true for the
>>> zuul-cloner version?
>>
>> What you did before:
>> * Edit .fixtures.yaml and put the version you like.
>>
>> What you would do with the current proposal:
>> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
>> version you like.
>>
> 
> So I have to edit a file in another module to test changes in
> puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
> local testing what does that workflow look like?

If you need to test your code with cross-project dependencies, having
the current .fixtures.yaml or the proposal won't change anything: either
way, you'll still have to edit the YAML file that defines the module
names/versions.
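For what it's worth, under the proposal that edit would be a one-line change in the centralized Puppetfile consumed by r10k; the module and ref below are made-up examples, not the real file's contents:

```ruby
# Puppetfile fragment (openstack/puppet-openstack-integration) -- illustrative
mod 'nova',
  :git => 'https://git.openstack.org/openstack/puppet-nova',
  :ref => 'master'  # point this at a branch, tag, or SHA to test another version
```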

> 
>> What you're suggesting has a huge downside:
>> People will still use fixtures by default and not test what is actually
>> tested by our CI.
>> A few people will know about the specific Rake task, so only a few people
>> will test exactly what upstream does. That will cause frustration for most
>> of the people who will see tests failing in our CI and not on their laptop.
>> I'm not sure we want that.
> 
> You're right that the specific rake task may not be ideal. But that
> was one option; another option could be to use fixtures first and then
> replace them with the zuul-cloner-provided versions, but give me the
> ability to turn off the zuul-cloner part. I'm just saying that as it is
> today, this change adds more complexity and hard ties into the OpenStack
> infrastructure with non-trivial workarounds. I would love to solve the
> Depends-On issue, but I don't think that should include a deviation
> from generally accepted testing practices for puppet modules.

I agree it's not best practice in Puppet but I don't see that as a huge
blocker. Our Puppet modules are approved by Puppet Labs and respect most
best practices AFAIK. Is that fixtures thing a big deal?
I would like to hear from *cough*Hunner/Cody*cough* at Puppet Labs about that.
Another proposal is welcome though, please go ahead.

>>
>> I think most of the people that run tests on their laptops want to
>> see them passing in upstream CI.
>> The few people that want to tweak versions & modules will have to run
>> Rake, edit the Puppetfile and run Rake again. It's not a big deal and
>> I'm sure those few people can deal with that.
>>
> 
> So for me the zuul-cloner task seems more of a CI specific job that
> solves the Depends-On issues we currently have. Much like the beaker
> and acceptance tests that's not something I run locally.

Hum. We implemented beaker tests in our modules so you can test the
module on your infra (laptop/cloud/whatever).
Here, we're just talking about unit testing, but it's still testing
after all.

Beaker code is already using this proposal to clone the module.
The proposal is re-using the same code to be consistent, and keep one
single centralized Puppetfile.

> I usually run the local rspec tests first before shipping off to CI to see how that
> plays out but I would manage the .fixtures.yml if necessary to test
> cross module dependencies.  I don't expect to replicate an entire CI
> environment setup on my laptop for testing.

This proposal does not "replicate an entire CI". It just clones all
modules at the same versions, hence the 40-second difference.
The rspec tests will still work for you, and you won't see any difference.

> The rspec tests for me, represent a quick way to test fixes before shipping off to CI for more
> testing.

This is wrong. What you test on your laptop is not what we test in
upstream CI: not the same modules, not the same dependencies.
OpenStack Puppet modules are working with (upstream) dependencies that
help to build OpenStack Clouds.

OpenStack is already running the same structure with Global Requirements
[1] (Python dependencies), where each project works with it.

[1] http://git.openstack.org/cgit/openstack/requirements

With this proposal, our modules will follow the same concept, where they
would be tested (unit + functional) against the same dependencies.

> Going back to the background item from the original email, the
> .fixtures.yml shouldn't be identical for all modules. It should only
> be the modules required to test the specific module. I doubt all of
> the puppet OpenStack modules require each other, right? So that's not
> a problem, that's an expectation. Additionally, we should be managing
> these anyway so when we publish the modules to the forge, it has
> proper metadata indicating the dependencies.

All modules have quite often openstacklib/mysql/rabbitmq/qpid/keystone
at least.

> This change seems targeted towards solving OpenStack CI environmental
> setup issues, and not really improving individual module development
> from a regular puppet standpoint.

Nope, if you read the proposal again, it's solving:
"I would like to run tests on my laptop that will pass OpenStack CI with
the right modules and the right versions".

If you want to run your own specific modules & versions, I guess you'll
have to do as you do with fixtures: run rake (stop it after it clones the
Puppetfile), edit the Puppetfile, then run rake again, instead of just
editing fixtures.

>>>>
>>>> Next steps
>>>> ==========
>>>>
>>>> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
>>>> * Patch openstack/puppet-modulesync-config to be consistent across all
>>>> our modules.
>>>>
>>>> Bonus
>>>> =====
>>>> we might need (asap) a canary job for puppet-openstack-integration
>>>> repository, that would run tests on a puppet-* module (since we're using
>>>> install_modules.sh & Puppetfile files in puppet-* modules).
>>>> Nothing has been done yet for this work.
>>>>
>>>>
>>>> Thoughts?
>>>> --
>>>> Emilien Macchi
>>>>
>>>>
>>>
>>> I think we need this functionality, I just don't think it's a
>>> replacement for the .fixures.yml.
>>>
>>> Thanks,
>>> -Alex
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> --
>> Emilien Macchi
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/c75793e7/attachment.pgp>

From harlowja at outlook.com  Thu Sep 24 18:59:26 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Thu, 24 Sep 2015 11:59:26 -0700
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <56043295.1000003@mirantis.com>
References: <1443114453-sup-7374@lrrr.local> <56043295.1000003@mirantis.com>
Message-ID: <BLU436-SMTP169B4686F60D2F4F77629C4D8430@phx.gbl>

+1 from me, welcome aboard.

Please tar the deck and clean up the rigging, thanks :-P

ozamiatin wrote:
> +1 from me
>
> On 9/24/15 20:12, Doug Hellmann wrote:
>> Oslo team,
>>
>> I am nominating Brant Knudson for Oslo core.
>>
>> As liaison from the Keystone team Brant has participated in meetings,
>> summit sessions, and other discussions at a level higher than some
>> of our own core team members. He is already core on oslo.policy
>> and oslo.cache, and given his track record I am confident that he would
>> make a good addition to the team.
>>
>> Please indicate your opinion by responding with +1/-1 as usual.
>>
>> Doug
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From robertc at robertcollins.net  Thu Sep 24 19:32:12 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 25 Sep 2015 07:32:12 +1200
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <CAJ3HoZ2QzZeyhtc-4quO+P0m-EVyxF0Szqaj0AoL7tKWWH63gg@mail.gmail.com>

+1

On 25 September 2015 at 05:12, Doug Hellmann <doug at doughellmann.com> wrote:
> Oslo team,
>
> I am nominating Brant Knudson for Oslo core.
>
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
>
> Please indicate your opinion by responding with +1/-1 as usual.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From guimalufb at gmail.com  Thu Sep 24 19:37:33 2015
From: guimalufb at gmail.com (Gui Maluf)
Date: Thu, 24 Sep 2015 16:37:33 -0300
Subject: [openstack-dev] [puppet][swift] Applying security
 recommendations within puppet-swift
In-Reply-To: <CABzFt8MVCwht_7JNDe5MVMO7ju4OXyWiV-WAkBBPhFFRB66VLQ@mail.gmail.com>
References: <CABzFt8Pxd9kAf=TNv8t1FyT5vSk4iewbLJqeR8Qp6uV4P-k43A@mail.gmail.com>
 <CABzFt8MVCwht_7JNDe5MVMO7ju4OXyWiV-WAkBBPhFFRB66VLQ@mail.gmail.com>
Message-ID: <CAJArKkcf4jAStZNnisimE-V1xVhZQqE5WOeQyERf0XjtzwG7mg@mail.gmail.com>

I think we should follow bug 1458915 principles and remove any POSIX
user/group control, so all modules are consistent with each other.
These hardening actions should be reported to the specific package
maintainers.

On Wed, Sep 23, 2015 at 6:10 PM, Alex Schultz <aschultz at mirantis.com> wrote:

> On Wed, Sep 23, 2015 at 2:32 PM, Alex Schultz <aschultz at mirantis.com>
> wrote:
> > Hey all,
> >
> > So as part of the Puppet mid-cycle, we did bug triage.  One of the
> > bugs that was looked into was bug 1289631[0].  This bug is about
> > applying the recommendations from the security guide[1] within the
> > puppet-swift module.  So I'm sending a note out to get feedback on
> > whether this is a good idea or not.  Should we be applying these types
> > of security items within the puppet modules by default? Should we make
> > this optional?  Thoughts?
> >
> >
> > Thanks,
> > -Alex
> >
> >
> > [0] https://bugs.launchpad.net/puppet-swift/+bug/1289631
> > [1]
> http://docs.openstack.org/security-guide/object-storage.html#securing-services-general
>
> Also for the puppet side of this conversation, the change for the
> security items[0] also seems to conflict with bug 1458915[1] which is
> about removing the posix users/groups/file modes.  So which direction
> should we go?
>
> [0] https://review.openstack.org/#/c/219883/
> [1] https://bugs.launchpad.net/puppet-swift/+bug/1458915
>
>



-- 
*guilherme* \n
\t *maluf*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/39b2c72f/attachment.html>

From aschultz at mirantis.com  Thu Sep 24 19:39:49 2015
From: aschultz at mirantis.com (Alex Schultz)
Date: Thu, 24 Sep 2015 14:39:49 -0500
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <560447CF.4060104@redhat.com>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
 <56042ADA.2060508@redhat.com>
 <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>
 <560447CF.4060104@redhat.com>
Message-ID: <CABzFt8P=oc7bPKy7uUm8NgjDe01uRF=7mWCvHy_cmykTzjEHwQ@mail.gmail.com>

On Thu, Sep 24, 2015 at 1:58 PM, Emilien Macchi <emilien at redhat.com> wrote:
>
>
> On 09/24/2015 02:19 PM, Alex Schultz wrote:
>> On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi <emilien at redhat.com> wrote:
>>>
>>>
>>> On 09/24/2015 10:14 AM, Alex Schultz wrote:
>>>> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi <emilien at redhat.com> wrote:
>>>>> Background
>>>>> ==========
>>>>>
>>>>> Current rspec tests run against the modules mentioned in the
>>>>> .fixtures.yaml file of each module.
>>>>>
>>>>> * the file is not consistent across all modules
>>>>> * it hardcodes module names & versions
>>>>> * this way does not allow use of the "Depends-On" feature, which would
>>>>> allow testing cross-module patches
>>>>>
>>>>> Proposal
>>>>> ========
>>>>>
>>>>> * Like we do in beaker & integration jobs, use zuul-cloner to clone
>>>>> modules in our CI jobs.
>>>>> * Use r10k to prepare fixtures modules.
>>>>> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>>>>>
>>>>> In that way:
>>>>> * we will have module name + version consistency in testing across
>>>>> all modules
>>>>> * the same Puppetfile would be used by unit/beaker/integration testing.
>>>>> * a patch that passes tests on your laptop would pass tests in upstream CI
>>>>> * if you don't have zuul-cloner on your laptop, don't worry, it will use
>>>>> git clone, though you won't have the Depends-On feature working on your
>>>>> laptop (technically not possible).
>>>>> * your patch will still support Depends-On in OpenStack Infra for unit
>>>>> tests. If you submit a patch in puppet-openstacklib that drops something
>>>>> wrong, you can send a patch in puppet-nova that will test it, and unit
>>>>> tests will fail.
>>>>>
>>>>> Drawbacks
>>>>> =========
>>>>> * cloning from .fixtures.yaml takes ~10 seconds
>>>>> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>>>>>
>>>>> I think 40 seconds is an acceptable cost given the benefit.
>>>>>
>>>>
>>>> As someone who consumes these modules downstream and has our own CI
>>>> setup to run the rspec items, this ties it too closely to the
>>>> openstack infrastructure. If we replace the .fixtures.yml with
>>>> zuul-cloner, it assumes I always want the openstack version of the
>>>> modules. This is not necessarily true. I like being able to replace
>>>> items within fixtures.yml when doing dev work. For example, if I want
>>>> to test upgrading another module not related to openstack, like
>>>> inifile, how does that work with the proposed solution?  This is also
>>>> moving away from general puppet module conventions for testing. My
>>>> preference would be that this be a different task and we have both
>>>> .fixtures.yml (for general use/development) and the zuul method of
>>>> cloning (for CI).  You have to also think about this from a consumer
>>>> standpoint and this is adding an external dependency on the OpenStack
>>>> infrastructure for anyone trying to run rspec or trying to consume the
>>>> published versions from the forge.  Would I be able to run these tests
>>>> in an offline mode with this change? With the .fixtures.yml it's a
>>>> minor edit to switch to local versions. Is the same true for the
>>>> zuul-cloner version?
>>>
>>> What you did before:
>>> * Edit .fixtures.yaml and put the version you like.
>>>
>>> What you would do with the current proposal:
>>> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
>>> version you like.
>>>
>>
>> So I have to edit a file in another module to test changes in
>> puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
>> local testing what does that workflow look like?
>
> If you need to test your code with cross-project dependencies, having
> the current .fixtures.yaml or the proposal won't change anything in
> that regard; you'll still have to trick the YAML file that defines the
> module names/versions.
>
>>
>>> What you're suggesting has a huge downside:
>>> People will still use fixtures by default and not test what is actually
>>> tested by our CI.
>>> A few people will know about the specific Rake task, so only a few will
>>> test exactly what upstream does. That will cause frustration for most
>>> people, who will see tests failing in our CI and not on their laptop.
>>> I'm not sure we want that.
>>
>> You're right that the specific rake task may not be ideal. But that
>> was one option; another option could be to use fixtures first, then
>> replace them with the zuul-cloner provided versions, but give me the
>> ability to turn off the zuul-cloner part. I'm just saying that as it
>> stands today, this change adds more complexity and hard ties into the
>> OpenStack infrastructure with non-trivial workarounds. I would love to
>> solve the Depends-On issue, but I don't think that should include a
>> deviation from generally accepted testing practices for puppet modules.
>
> I agree it's not best practice in Puppet, but I don't see that as a huge
> blocker. Our Puppet modules are approved by Puppetlabs and respect most
> best practices AFAIK. Is that fixtures thing a big deal?
> I would like to hear from *cough*Hunner/Cody*cough* Puppetlabs about that.
> Another proposal is welcome though, please go ahead.
>

IMHO, it's more of a new-developer thing. As a person who works on
other puppet modules, having a completely different dependency method
for the OpenStack modules raises the barrier to entry for development.
It could be addressed with better documentation...
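For readers unfamiliar with the convention under discussion, here is a minimal sketch of the `.fixtures.yml` shape that puppetlabs_spec_helper reads; the repository names and URLs are illustrative assumptions, not the actual fixtures of any OpenStack module:

```shell
# Sketch only: write a conventional .fixtures.yml into a scratch directory.
# Repository URLs and the "nova" symlink name are illustrative assumptions.
workdir=$(mktemp -d)
cat > "$workdir/.fixtures.yml" <<'EOF'
fixtures:
  repositories:
    openstacklib: "https://git.openstack.org/openstack/puppet-openstacklib"
    stdlib: "https://github.com/puppetlabs/puppetlabs-stdlib.git"
  symlinks:
    nova: "#{source_dir}"
EOF
# Count the repository entries we just declared.
grep -c 'https' "$workdir/.fixtures.yml"
```

This is the file a downstream developer edits today to point a fixture at a local checkout or a pinned version, which is the workflow Alex is defending.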

>>>
>>> I think most people who run tests on their laptops want to
>>> see them passing in upstream CI.
>>> The few people who want to trick versions & modules will have to run
>>> Rake, trick the Puppetfile, and run Rake again. It's not a big deal,
>>> and I'm sure those few people can deal with that.
>>>
>>
>> So for me the zuul-cloner task seems more of a CI specific job that
>> solves the Depends-On issues we currently have. Much like the beaker
>> and acceptance tests that's not something I run locally.
>
> Hum. We implemented beaker tests in our modules so you can test the
> module on your infra (laptop/cloud/whatever).
> Here, we're just talking about unit testing, but it's still testing
> after all.
>
> The Beaker code already uses this approach to clone the modules.
> The proposal re-uses the same code to be consistent and keep one
> single centralized Puppetfile.
>
>> I usually run the local rspec tests first before shipping off to CI to see how that
>> plays out, but I would manage the .fixtures.yml if necessary to test
>> cross-module dependencies.  I don't expect to replicate an entire CI
>> environment setup on my laptop for testing.
>
> This proposal does not "replicate an entire CI". It just clones all
> modules at the same version, hence the 40-second difference.
> The rspec tests will still work for you, and you won't see any difference.
>
>> The rspec tests for me, represent a quick way to test fixes before shipping off to CI for more
>> testing.
>
> This is wrong. What you test on your laptop is not what we test in
> upstream CI: not the same modules, not the same dependencies.
> OpenStack Puppet modules are working with (upstream) dependencies that
> help to build OpenStack Clouds.
>
> OpenStack is already running the same structure with Global Requirements
> [1] (Python dependencies), where each project works with it.
>
> [1] http://git.openstack.org/cgit/openstack/requirements
>
> With this proposal, our modules will follow the same concept, where they
> would be tested (unit + functional) against the same dependencies.
>

Sorry, by this I meant additional version testing, rather than running
puppet 3.{4,5,6,7,8} and 4.x locally. Yes, we need to test with the same
versions of the modules as in CI, but that is where the .fixtures.yml
comes in.
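The local multi-version matrix mentioned above can be scripted; here is a hedged sketch that only prints the commands rather than running them, and assumes the modules' Gemfiles honor `PUPPET_GEM_VERSION` (a common puppet-module Gemfile convention, check your module's Gemfile before relying on it):

```shell
# Print the command matrix for testing a module against several puppet
# versions locally. PUPPET_GEM_VERSION as the selection knob is an
# assumption based on common puppet module Gemfile conventions.
for v in 3.4 3.5 3.6 3.7 3.8 4.0; do
  echo "PUPPET_GEM_VERSION='~> $v.0' bundle update puppet && bundle exec rake spec"
done
```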

>> Going back to the background item from the original email, the
>> .fixtures.yml shouldn't be identical for all modules. It should only
>> be the modules required to test the specific module. I doubt all of
>> the puppet OpenStack modules require each other, right? So that's not
>> a problem, that's an expectation. Additionally, we should be managing
>> these anyway so that when we publish the modules to the forge, they
>> have proper metadata indicating the dependencies.
>
> Quite often, all modules depend at least on
> openstacklib/mysql/rabbitmq/qpid/keystone.
>
>> This change seems targeted towards solving OpenStack CI environmental
>> setup issues, and not really improving individual module development
>> from a regular puppet standpoint.
>
> Nope, if you read the proposal again, it's solving:
> "I would like to run tests on my laptop that will pass OpenStack CI with
> the right modules and the right versions".
>
> If you want to run your specific modules & versions, I guess you'll have
> to do what you do with fixtures today: run rake (stop it after it clones
> the Puppetfile), edit the Puppetfile, and run rake again, instead of
> just editing fixtures.
>

That is not very user-friendly, and that's what I was referring to with
the non-trivial workarounds.  This would make for a very OpenStack
puppet specific development workflow.

I also have concerns around the downstream CI implications of this.
Being able to run tests without internet connectivity is important to
some people, so I want to make sure that can continue without having to
break the process mid-cycle to inject a workaround. It would be better
if we could have a workaround upfront. For example, make the Puppetfile
location an environment variable, and if it is not defined, pull down
the puppet-openstack-integration one? I wish there were a better
dependency resolution method than just pulling everything down from the
internet. I just know that doesn't work everywhere.
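The environment-variable workaround suggested above could look something like this minimal sketch; the `PUPPETFILE` variable name and the default URL are assumptions for illustration, not an existing convention in these modules:

```shell
# Honor a user-set PUPPETFILE location (e.g. a local, offline copy),
# falling back to the centralized puppet-openstack-integration one.
# Variable name and default URL are illustrative assumptions.
DEFAULT_PUPPETFILE="https://git.openstack.org/cgit/openstack/puppet-openstack-integration/plain/Puppetfile"
PUPPETFILE="${PUPPETFILE:-$DEFAULT_PUPPETFILE}"
echo "Using Puppetfile: $PUPPETFILE"
```

A developer on an airplane could then `export PUPPETFILE=file:///home/me/Puppetfile` and keep working, while CI would use the default.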

>>>>>
>>>>> Next steps
>>>>> ==========
>>>>>
>>>>> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
>>>>> * Patch openstack/puppet-modulesync-config to be consistent across all
>>>>> our modules.
>>>>>
>>>>> Bonus
>>>>> =====
>>>>> we might need (asap) a canary job for the puppet-openstack-integration
>>>>> repository that would run tests on a puppet-* module (since we're using
>>>>> install_modules.sh & Puppetfile files in puppet-* modules).
>>>>> Nothing has been done yet for this work.
>>>>>
>>>>>
>>>>> Thoughts?
>>>>> --
>>>>> Emilien Macchi
>>>>>
>>>>>
>>>>
>>>> I think we need this functionality, I just don't think it's a
>>>> replacement for the .fixtures.yml.
>>>>
>>>> Thanks,
>>>> -Alex
>>>>
>>>>
>>>
>>> --
>>> Emilien Macchi
>>>
>>>
>>>
>>
>>
>
> --
> Emilien Macchi
>
>
>


From cboylan at sapwetik.org  Thu Sep 24 20:43:07 2015
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 24 Sep 2015 13:43:07 -0700
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <CABzFt8P=oc7bPKy7uUm8NgjDe01uRF=7mWCvHy_cmykTzjEHwQ@mail.gmail.com>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
 <56042ADA.2060508@redhat.com>
 <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>
 <560447CF.4060104@redhat.com>
 <CABzFt8P=oc7bPKy7uUm8NgjDe01uRF=7mWCvHy_cmykTzjEHwQ@mail.gmail.com>
Message-ID: <1443127387.1429921.392871457.2A430821@webmail.messagingengine.com>

On Thu, Sep 24, 2015, at 12:39 PM, Alex Schultz wrote:
> [...]
> That is not very user-friendly, and that's what I was referring to with
> the non-trivial workarounds.  This would make for a very OpenStack
> puppet specific development workflow.
>
> I also have concerns around the downstream CI implications of this.
> Being able to run tests without internet connectivity is important to
> some people, so I want to make sure that can continue without having to
> break the process mid-cycle to inject a workaround. It would be better
> if we could have a workaround upfront. For example, make the Puppetfile
> location an environment variable, and if it is not defined, pull down
> the puppet-openstack-integration one? I wish there were a better
> dependency resolution method than just pulling everything down from the
> internet. I just know that doesn't work everywhere.
> 
I will admit to not properly following this entire thread, but I want to
jump in and point out that zuul-cloner clones from arbitrary locations.
You can point it to your local disk (in fact we do this for cached
repos), or any other git server that can be fetched from without
interactive authentication needs. This means you can do downstream
testing just fine without an internet connection, just point zuul-cloner
at wherever your git repos are.

The idea behind zuul-cloner is that it will do all of the correct things
with zuul refs and fall back to the appropriate branches when the refs
are not set. This means that it should just work by falling back to your
dev branches if you tell it to. If it doesn't work outside of a zuul
context, that would be a bug and we should fix it.
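Concretely, the offline flow described here might look like the sketch below. The mirror path and project list are hypothetical, and the `-m <clonemap> <git_base_url> <project>...` form follows zuul-cloner's documented usage; verify the flags against your installed version:

```shell
# Build a zuul-cloner invocation against a local on-disk mirror instead of
# git.openstack.org. Mirror path and projects are illustrative assumptions.
GIT_BASE="file:///var/lib/git-mirror"
CMD="zuul-cloner -m clonemap.yaml $GIT_BASE openstack/puppet-nova openstack/puppet-openstacklib"
echo "$CMD"
# Only execute when the tool is actually present on this machine.
if command -v zuul-cloner >/dev/null 2>&1; then
  $CMD || echo "zuul-cloner exited nonzero (expected outside a CI setup)"
fi
```

Outside a zuul run there are no zuul refs to apply, so the cloner falls back to checking out the appropriate branch from whatever base URL you hand it.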

Clark


From fungi at yuggoth.org  Thu Sep 24 21:16:28 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 24 Sep 2015 21:16:28 +0000
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <CABzFt8P=oc7bPKy7uUm8NgjDe01uRF=7mWCvHy_cmykTzjEHwQ@mail.gmail.com>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
 <56042ADA.2060508@redhat.com>
 <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>
 <560447CF.4060104@redhat.com>
 <CABzFt8P=oc7bPKy7uUm8NgjDe01uRF=7mWCvHy_cmykTzjEHwQ@mail.gmail.com>
Message-ID: <20150924211628.GB25159@yuggoth.org>

On 2015-09-24 14:39:49 -0500 (-0500), Alex Schultz wrote:
[...]
> Being able to run tests without internet connectivity is important to
> some people, so I want to make sure that can continue without having to
> break the process mid-cycle to inject a workaround. It would be better
> if we could have a workaround upfront. For example, make the Puppetfile
> location an environment variable, and if it is not defined, pull down
> the puppet-openstack-integration one? I wish there were a better
> dependency resolution method than just pulling everything down from the
> internet. I just know that doesn't work everywhere.

To build on Clark's response, THIS is basically why we use tools
like zuul-cloner. In our CI we're attempting to minimize or even
eliminate network use during tests, and so zuul-cloner leverages
local caches and is sufficiently flexible to obtain the repositories
in question from anywhere you want (which could also just be to
always use your locally cached/modified copy and never hit the
network at all). Pass it whatever Git repository URLs you want,
including file:///some/thing.git if that's your preference.
-- 
Jeremy Stanley


From doc at aedo.net  Thu Sep 24 21:16:45 2015
From: doc at aedo.net (Christopher Aedo)
Date: Thu, 24 Sep 2015 14:16:45 -0700
Subject: [openstack-dev] [murano] [app-catalog] versions for murano
 assets in the catalog
In-Reply-To: <CAOnDsYNYi7zgmxCe57mf9cC7ma7m9pt5Rqx9aMGzjDn3eoGPUg@mail.gmail.com>
References: <CA+odVQHdkocESWDvNhwZbQaMAyBPCJciXCTeDrTcAsYGN7Y4nA@mail.gmail.com>
 <CAOnDsYNYi7zgmxCe57mf9cC7ma7m9pt5Rqx9aMGzjDn3eoGPUg@mail.gmail.com>
Message-ID: <CA+odVQG47k6N7--zC-ge+c3S507GnVSAarEeQs2ZULNyqABD8w@mail.gmail.com>

On Tue, Sep 22, 2015 at 12:06 PM, Serg Melikyan <smelikyan at mirantis.com> wrote:
> Hi Chris,
>
> concern regarding asset versioning in the Community App Catalog indeed
> affects Murano, because we are constantly improving our language and
> adding new features; e.g. we added the ability to select an existing
> Neutron network for a particular application in Liberty, and if a user
> wants to use this feature, his application will be incompatible with
> Kilo. I think this is also valid for Heat, because the HOT language is
> also improving with each release.
>
> Thank you for proposing a workaround. I think this is a good way to
> solve the immediate blocker while the Community App Catalog team works
> on handling versions elegantly on their side. Kirill proposed
> two changes in Murano to follow this approach that I've already +2'ed:
>
> * https://review.openstack.org/225251 - openstack/murano-dashboard
> * https://review.openstack.org/225249 - openstack/python-muranoclient
>
> Looks like the corresponding commit to the Community App Catalog is
> already merged [0], and our next step is to prepare new versions of the
> applications from openstack/murano-apps and then figure out how to
> publish them properly.

Yep, thanks, this looks like a step in the right direction to give us
some wiggle room to handle different versions of assets in the App
Catalog for the next few months.

Down the road we want to make sure that the App Catalog is not closely
tied to any other projects, or how those projects handle versions.  We
will clearly communicate our intentions around versions of assets (and
how to specify which version is desired when retrieving an asset) here
on the mailing list, during the weekly meetings, and during the weekly
cross-project meeting as well.

> P.S. I've also talked with Alexander and Kirill regarding better ways
> to handle versioning for assets in the Community App Catalog, and they
> shared that they are starting work on a PoC using the Glance Artifact
> Repository; they can probably share more details regarding this work
> here. We would be happy to work on this together, given that in Liberty
> we implemented experimental support for package versioning inside
> Murano (e.g. having two versions of the same app working side by side)
> [1]
>
> References:
> [0] https://review.openstack.org/224869
> [1] http://murano-specs.readthedocs.org/en/latest/specs/liberty/murano-versioning.html

Thanks, looking forward to the PoC.  We have discussed whether or not
using the Glance Artifact Repository makes sense for the App Catalog,
and so far the consensus has been that it is not a great fit for what
we need.  Though it brings a lot of great stuff to the table, all we
really need is a place to drop large (and small) binaries.  Swift as a
storage component is the obvious choice for that - the metadata around
the asset itself (when it was added, by whom, rating, version, etc.)
will have to live in a DB anyway.  Given that, it seems Glance is not
an obviously great fit, but like I said, I'm looking forward to
hearing/seeing more on this front :)

-Christopher


From melwittt at gmail.com  Thu Sep 24 21:17:29 2015
From: melwittt at gmail.com (melanie witt)
Date: Thu, 24 Sep 2015 14:17:29 -0700
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
Message-ID: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>

Hi All,

I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.

From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?

I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?

Thanks,
-melanie (irc: melwitt)






From sorrison at gmail.com  Thu Sep 24 21:22:43 2015
From: sorrison at gmail.com (Sam Morrison)
Date: Fri, 25 Sep 2015 07:22:43 +1000
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <5603B228.3070502@redhat.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com> <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <5603B228.3070502@redhat.com>
Message-ID: <7ED8380F-5F13-430E-AE74-F9C74C37FEE1@gmail.com>


> On 24 Sep 2015, at 6:19 pm, Sylvain Bauza <sbauza at redhat.com> wrote:
> 
> Ahem, Nova AZs are not failure domains - I mean in the current implementation, in the sense that many people understand a failure domain, i.e. a physical unit of machines (a bay, a room, a floor, a datacenter).
> All the AZs in Nova share the same control plane, with the same message queue and database, which means that one failure can propagate to the other AZs.
> 
> To be honest, there is one very specific use case where AZs *are* failure domains: when cells exactly match AZs (i.e. one AZ grouping all the hosts behind one cell). That's the very specific use case that Sam is mentioning in his email, and I certainly understand we need to keep that.
> 
> What AZs in Nova are is pretty well explained in a quite old blog post: http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
Yes, an AZ may not be considered a failure domain in terms of control infrastructure; I think all operators understand this. If you want control-infrastructure failure domains, use regions.

However, from a resource level (e.g. a running instance or a running volume) I would consider them some kind of failure domain. It's a way of telling a user that if you have resources running in 2 AZs, you have a more available service.

Every cloud will have a different definition of what an AZ is - a rack, a collection of racks, a DC, etc. OpenStack doesn't need to decide what that is.

Sam


From emilien at redhat.com  Thu Sep 24 21:42:33 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 24 Sep 2015 17:42:33 -0400
Subject: [openstack-dev] [puppet] use zuul-cloner when running rspec
In-Reply-To: <20150924211628.GB25159@yuggoth.org>
References: <56032009.5020103@redhat.com>
 <CABzFt8PH9khELUr=9KqLcrAO78t6Wy+E7e+gKnLnVEbeVWka1w@mail.gmail.com>
 <56042ADA.2060508@redhat.com>
 <CABzFt8NErQh-G4RX0riFg5CbSpzPzbdAk9ZWBBmyActD_Wkvbw@mail.gmail.com>
 <560447CF.4060104@redhat.com>
 <CABzFt8P=oc7bPKy7uUm8NgjDe01uRF=7mWCvHy_cmykTzjEHwQ@mail.gmail.com>
 <20150924211628.GB25159@yuggoth.org>
Message-ID: <56046E49.3000907@redhat.com>



On 09/24/2015 05:16 PM, Jeremy Stanley wrote:
> On 2015-09-24 14:39:49 -0500 (-0500), Alex Schultz wrote:
> [...]
>> Being able to run tests without internet connectivity is important to
>> some people, so I want to make sure that can continue without having to
>> break the process mid-cycle to try and inject a workaround. It would
>> be better if we could have a workaround upfront. For example, make the
>> Puppetfile location an environment variable and, if it's not defined, pull
>> down the puppet-openstack-integration one?  I wish there were a better
>> dependency resolution method than just pulling everything down from
>> the internet.  I just know that doesn't work everywhere.
> 
> To build on Clark's response, THIS is basically why we use tools
> like zuul-cloner. In our CI we're attempting to minimize or even
> eliminate network use during tests, and so zuul-cloner leverages
> local caches and is sufficiently flexible to obtain the repositories
> in question from anywhere you want (which could also just be to
> always use your locally cached/modified copy and never hit the
> network at all). Pass it whatever Git repository URLs you want,
> including file:///some/thing.git if that's your preference.
> 

So we had a discussion on the #puppet-openstack channel, and Alex's main
concern was that people should still be able to edit a file
(it was .fixtures.yaml; it will be a Puppetfile now) to run tests
against custom dependencies (modules and versions can be whatever you like).

It has been addressed in the last patchset:
https://review.openstack.org/#/c/226830/21..22/Rakefile,cm

So in your case, you'll have to:

1/ Build your Puppetfile that contains your custom deps (instead of
editing the .fixtures.yaml)
2/ Export PUPPETFILE=/path/Puppetfile
3/ Run `rake rspec` like before.

This solution should make everyone happy:

* Default usage (on my laptop) will test the same things as Puppet
OpenStack CI
* It allows using Depends-On in OpenStack CI
* It allows customizing the dependencies for downstream CI by creating a
Puppetfile and exporting its path.

Deal?
-- 
Emilien Macchi


From sabari.bits at gmail.com  Thu Sep 24 21:55:30 2015
From: sabari.bits at gmail.com (Sabari Murugesan)
Date: Thu, 24 Sep 2015 14:55:30 -0700
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
Message-ID: <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>

Hi Melanie

In general, images created by the glance v1 API should be accessible using
v2 and vice versa. We fixed some bugs [1] [2] [3] where metadata associated
with an image was causing incompatibility. These fixes were back-ported to
stable/kilo.

Thanks
Sabari

[1] - https://bugs.launchpad.net/glance/+bug/1447215
[2] - https://bugs.launchpad.net/bugs/1419823
[3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193


On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:

> Hi All,
>
> I have been looking and haven't yet located documentation about how to
> upgrade from glance v1 to glance v2.
>
> From what I understand, images and snapshots created with v1 can't be
> listed/accessed through the v2 api. Are there instructions about how to
> migrate images and snapshots from v1 to v2? Are there other
> incompatibilities between v1 and v2?
>
> I'm asking because I have read that glance v1 isn't defcore compliant and
> so we need all projects to move to v2, but the incompatibility from v1 to
> v2 is preventing that in nova. Is there anything else preventing v2
> adoption? Could we move to glance v2 if there's a migration path from v1 to
> v2 that operators can run through before upgrading to a version that uses
> v2 as the default?
>
> Thanks,
> -melanie (irc: melwitt)
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From feilong at catalyst.net.nz  Thu Sep 24 21:59:10 2015
From: feilong at catalyst.net.nz (Fei Long Wang)
Date: Fri, 25 Sep 2015 09:59:10 +1200
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
Message-ID: <5604722E.1090803@catalyst.net.nz>

Thanks for raising this topic. I'm not sure why you think the
images/snapshots created by v1 can't be accessed by v2, but I would say
that's not true; see http://paste.openstack.org/show/473956/

Personally, I don't think there is any big blocker for this. What we
need to do is work out a plan at the summit (the Glance team has planned
a design session for this). And meanwhile I'm going to split
https://review.openstack.org/#/c/144875/ to make it easier to review.


On 25/09/15 09:17, melanie witt wrote:
> Hi All,
>
> I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>
>  From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>
> I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>
> Thanks,
> -melanie (irc: melwitt)
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------


From jpenick at gmail.com  Thu Sep 24 22:13:03 2015
From: jpenick at gmail.com (James Penick)
Date: Thu, 24 Sep 2015 15:13:03 -0700
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to handle
 AZ bug 1496235?)
Message-ID: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>

>
>
> At risk of getting too offtopic I think there's an alternate solution to
> doing this in Nova or on the client side.  I think we're missing some sort
> of OpenStack API and service that can handle this.  Nova is a low level
> infrastructure API and service, it is not designed to handle these
> orchestrations.  I haven't checked in on Heat in a while but perhaps this
> is a role that it could fill.
>
> I think that too many people consider Nova to be *the* OpenStack API when
> considering instances/volumes/networking/images and that's not something I
> would like to see continue.  Or at the very least I would like to see a
> split between the orchestration/proxy pieces and the "manage my
> VM/container/baremetal" bits


(new thread)
 You've hit on one of my biggest issues right now: As far as many deployers
and consumers are concerned (and definitely what I tell my users within
Yahoo): The value of an OpenStack value-stream (compute, network, storage)
is to provide a single consistent API for abstracting and managing those
infrastructure resources.

 Take networking: I can manage firewalls, switches, IP selection, SDN, etc.
through Neutron. But for compute, if I want a VM I go through Nova, for
baremetal I can -mostly- go through Nova, and for containers I would talk
to Magnum or use something like the nova-docker driver.

 This means that, by default, Nova -is- the closest thing to a top-level
abstraction layer for compute. But if that is explicitly against Nova's
charter, and Nova isn't going to be the top-level abstraction for all
things compute, then something else needs to fill that space. When that
happens, all the things common to compute provisioning should come out of
Nova and move into that new API: availability zones, quota, etc.

-James

From apevec at gmail.com  Thu Sep 24 22:17:28 2015
From: apevec at gmail.com (Alan Pevec)
Date: Fri, 25 Sep 2015 00:17:28 +0200
Subject: [openstack-dev] [all][stable][release] 2015.1.2
In-Reply-To: <20150924073107.GF24386@sofja.berg.ol>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
 <20150924073107.GF24386@sofja.berg.ol>
Message-ID: <CAGi==UXRm7mARJecBT69qqQMfOycdx_crVf-OCD_x+O9z2J2nw@mail.gmail.com>

> For Horizon, it would make sense to move this a week back. We discovered
> a few issues in Liberty, which are present in current kilo, too. I'd
> love to cherry-pick a few of them to kilo.

What are the LP bug numbers? Are you saying that the fixes are still work
in progress on master and not ready for backporting yet?

> Unfortunately, it takes a while until Kilo (or, in general, stable)
> reviews get done.

One suggestion for speeding up reviews of such a patch series:
put them all under the same Gerrit topic for an easy query URL, then ping
on #openstack-stable.
Stable backport reviews are supposed to be done primarily by
per-project stable-maint teams but stable-maint-core could also help.

Cheers,
Alan


From sbalukoff at bluebox.net  Thu Sep 24 22:17:27 2015
From: sbalukoff at bluebox.net (Stephen Balukoff)
Date: Thu, 24 Sep 2015 15:17:27 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7C9755@EX10MBOX06.pnnl.gov>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>
 <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9755@EX10MBOX06.pnnl.gov>
Message-ID: <CAAGw+ZqUe92i+PVkDP6YK8vA2YoPE0dqOOAd4H2N7=4Emd5oHg@mail.gmail.com>

Sergey--

When is the Heat IRC meeting? Would it be helpful to have an LBaaS person
there to help explain things?

Also yes, Kevin is right: LBaaS v1 and LBaaS v2 are very incompatible (both
the API and the underlying object model). They are different enough that
when we looked at making LBaaS v2 backward compatible
with v1, we eventually gave up after a couple of months of trying to figure
out how to make it work, and decided people would have to live with the
fact that v1 would eventually be deprecated and go away entirely, but in
the meantime we'd maintain effectively two different major code paths in
the same source tree. Nobody claims it's pretty, eh.

I also agree with Doug's suggestion that a namespace change seems like the
right way to approach this.

Stephen

On Wed, Sep 23, 2015 at 11:39 AM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:

> One of the weird things about the lbaas v1 vs v2 situation, which is
> different from just about every other v1->v2 change I've seen, is that v1
> and v2 LBs are totally separate things. Unlike, say, cinder, where a
> volume list would show up in both APIs, so upgrading is smooth.
>
> Thanks,
> Kevin
> ------------------------------
> *From:* Sergey Kraynev [skraynev at mirantis.com]
> *Sent:* Wednesday, September 23, 2015 11:09 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>
> Guys, I'm happy that you already discussed it here :)
> However, I'd like to raise the same question at our Heat IRC meeting.
> Probably we should define some common concepts, because I think
> lbaas is not the only example of a service with
> several APIs.
> I will post an update in this thread later (after the meeting).
>
> Regards,
> Sergey.
>
> On 23 September 2015 at 14:37, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>
>> Seperate ns would work great.
>>
>> Thanks,
>> Kevin
>>
>> ------------------------------
>> *From:* Banashankar KV
>> *Sent:* Tuesday, September 22, 2015 9:14:35 PM
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> LbaasV2
>>
>> What do you think about separating both of them by name, as Doug
>> mentioned? In future, if we want to get rid of v1 we can just remove
>> that namespace. Everything will be clean.
>>
>> Thanks
>> Banashankar
>>
>>
>> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>>
>>> As I understand it, loadbalancer in v2 is more like pool was in v1. Can
>>> we make it such that if you are using the loadbalancer resource and have
>>> the mandatory v2 properties that it tries to use v2 api, otherwise its a v1
>>> resource? PoolMember should be ok being the same. It just needs to call v1
>>> or v2 depending on if the lb its pointing at is v1 or v2. Is monitor's api
>>> different between them? Can it be like pool member?
>>>
>>> Thanks,
>>> Kevin
>>>
>>> ------------------------------
>>> *From:* Brandon Logan
>>> *Sent:* Tuesday, September 22, 2015 5:39:03 PM
>>>
>>> *To:* openstack-dev at lists.openstack.org
>>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> LbaasV2
>>>
>>> So for the API v1s api is of the structure:
>>>
>>> <neutron-endpoint>/lb/(vip|pool|member|health_monitor)
>>>
>>> V2s is:
>>> <neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)
>>>
>>> member is a child of pool, so it would go down one level.
>>>
>>> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
>>> is enough of a difference.
>>>
>>> Thanks,
>>> Brandon
>>> On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
>>> > Thats the problem. :/
>>> >
>>> > I can't think of a way to have them coexist without: breaking old
>>> > templates, including v2 in the name, or having a flag on the resource
>>> > saying the version is v2. And as an app developer I'd rather not have
>>> > my existing templates break.
>>> >
>>> > I haven't compared the api's at all, but is there a required field of
>>> > v2 that is different enough from v1 that by its simple existence in
>>> > the resource you can tell a v2 from a v1 object? Would something like
>>> > that work? PoolMember wouldn't have to change, the same resource could
>>> > probably work for whatever lb it was pointing at I'm guessing.
>>> >
>>> > Thanks,
>>> > Kevin
>>> >
>>> >
>>> >
>>> > ______________________________________________________________________
>>> > From: Banashankar KV [banveerad at gmail.com]
>>> > Sent: Tuesday, September 22, 2015 4:40 PM
>>> > To: OpenStack Development Mailing List (not for usage questions)
>>> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> > LbaasV2
>>> >
>>> >
>>> >
>>> > Ok, sounds good. So now the question is how should we name the new V2
>>> > resources ?
>>> >
>>> >
>>> >
>>> > Thanks
>>> > Banashankar
>>> >
>>> >
>>> >
>>> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
>>> > wrote:
>>> >         Yes, hence the need to support the v2 resources as seperate
>>> >         things. Then I can rewrite the templates to include the new
>>> >         resources rather then the old resources as appropriate. IE, it
>>> >         will be a porting effort to rewrite them. Then do a heat
>>> >         update on the stack to migrate it from lbv1 to lbv2. Since
>>> >         they are different resources, it should create the new and
>>> >         delete the old.
>>> >
>>> >         Thanks,
>>> >         Kevin
>>> >
>>> >
>>> >         ______________________________________________________________
>>> >         From: Banashankar KV [banveerad at gmail.com]
>>> >         Sent: Tuesday, September 22, 2015 4:16 PM
>>> >
>>> >         To: OpenStack Development Mailing List (not for usage
>>> >         questions)
>>> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>>> >         for LbaasV2
>>> >
>>> >
>>> >
>>> >
>>> >         But I think, V2 has introduced some new components and whole
>>> >         association of the resources with each other is changed, we
>>> >         should be still able to do what Kevin has mentioned ?
>>> >
>>> >         Thanks
>>> >         Banashankar
>>> >
>>> >
>>> >
>>> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>>> >         <Kevin.Fox at pnnl.gov> wrote:
>>> >                 There needs to be a way to have both v1 and v2
>>> >                 supported in one engine....
>>> >
>>> >                 Say I have templates that use v1 already in existence
>>> >                 (I do), and I want to be able to heat stack update on
>>> >                 them one at a time to v2. This will replace the v1 lb
>>> >                 with v2, migrating the floating ip from the v1 lb to
>>> >                 the v2 one. This gives a smoothish upgrade path.
>>> >
>>> >                 Thanks,
>>> >                 Kevin
>>> >                 ________________________________________
>>> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>>> >                 Sent: Tuesday, September 22, 2015 3:22 PM
>>> >                 To: openstack-dev at lists.openstack.org
>>> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>>> >                 support for LbaasV2
>>> >
>>> >                 Well I'd hate to have the V2 postfix on it because V1
>>> >                 will be deprecated
>>> >                 and removed, which means the V2 being there would be
>>> >                 lame.  Is there any
>>> >                 kind of precedent set for how to handle this?
>>> >
>>> >                 Thanks,
>>> >                 Brandon
>>> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>>> >                 wrote:
>>> >                 > So are we thinking of making it as ?
>>> >                 > OS::Neutron::LoadBalancerV2
>>> >                 >
>>> >                 > OS::Neutron::ListenerV2
>>> >                 >
>>> >                 > OS::Neutron::PoolV2
>>> >                 >
>>> >                 > OS::Neutron::PoolMemberV2
>>> >                 >
>>> >                 > OS::Neutron::HealthMonitorV2
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > and add all those into the loadbalancer.py of heat
>>> >                 engine ?
>>> >                 >
>>> >                 > Thanks
>>> >                 > Banashankar
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>>> >                 > <skraynev at mirantis.com> wrote:
>>> >                 >         Brandon.
>>> >                 >
>>> >                 >
>>> >                 >         As I understand we v1 and v2 have
>>> >                 differences also in list of
>>> >                 >         objects and also in relationships between
>>> >                 them.
>>> >                 >         So I don't think that it will be easy to
>>> >                 upgrade old resources
>>> >                 >         (unfortunately).
>>> >                 >         I'd agree with second Kevin's suggestion
>>> >                 about implementation
>>> >                 >         new resources in this case.
>>> >                 >
>>> >                 >
>>> >                 >         I see, that a lot of guys, who wants to help
>>> >                 with it :) And I
>>> >                 >         suppose, that me and Rabi Mishra may try to
>>> >                 help with it,
>>> >                 >         because we were involved in the implementation
>>> >                 of v1 resources
>>> >                 >         in Heat.
>>> >                 >         Follow the list of v1 lbaas resources in
>>> >                 Heat:
>>> >                 >
>>> >                 >
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>>> >                 >
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>>> >                 >
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >         Also, I suppose, that it may be discussed
>>> >                 during summit
>>> >                 >         talks :)
>>> >                 >         Will add to etherpad with potential
>>> >                 sessions.
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >         Regards,
>>> >                 >         Sergey.
>>> >                 >
>>> >                 >         On 22 September 2015 at 22:27, Brandon Logan
>>> >                 >         <brandon.logan at rackspace.com> wrote:
>>> >                 >                 There is some overlap, but there was
>>> >                 some incompatible
>>> >                 >                 differences when
>>> >                 >                 we started designing v2.  I'm sure
>>> >                 the same issues
>>> >                 >                 will arise this time
>>> >                 >                 around so new resources sounds like
>>> >                 the path to go.
>>> >                 >                 However, I do not
>>> >                 >                 know much about Heat and the
>>> >                 resources so I'm speaking
>>> >                 >                 on a very
>>> >                 >                 uneducated level here.
>>> >                 >
>>> >                 >                 Thanks,
>>> >                 >                 Brandon
>>> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>>> >                 Fox, Kevin M wrote:
>>> >                 >                 > We're using the v1 resources...
>>> >                 >                 >
>>> >                 >                 > If the v2 ones are compatible and
>>> >                 can seamlessly
>>> >                 >                 upgrade, great
>>> >                 >                 >
>>> >                 >                 > Otherwise, make new ones please.
>>> >                 >                 >
>>> >                 >                 > Thanks,
>>> >                 >                 > Kevin
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >
>>> >
>>> ______________________________________________________________________
>>> >                 >                 > From: Banashankar KV
>>> >                 [banveerad at gmail.com]
>>> >                 >                 > Sent: Tuesday, September 22, 2015
>>> >                 10:07 AM
>>> >                 >                 > To: OpenStack Development Mailing
>>> >                 List (not for
>>> >                 >                 usage questions)
>>> >                 >                 > Subject: Re: [openstack-dev]
>>> >                 [neutron][lbaas] - Heat
>>> >                 >                 support for
>>> >                 >                 > LbaasV2
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 > Hi Brandon,
>>> >                 >                 > Work in progress, but need some
>>> >                 input on the way we
>>> > want them, like replace the existing lbaasv1 or do we still need to
>>> > support them?
>>> >
>>> > Thanks
>>> > Banashankar
>>> >
>>> > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>>> > <brandon.logan at rackspace.com> wrote:
>>> >         Hi Banashankar,
>>> >         I think it'd be great if you got this going. One of those
>>> >         things we want to have and people ask for, but it has always
>>> >         gotten a lower priority due to the critical things needed.
>>> >
>>> >         Thanks,
>>> >         Brandon
>>> >         On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
>>> >         > Hi All,
>>> >         > I was thinking of starting the work on heat to support
>>> >         > LBaasV2. Are there any concerns about that?
>>> >         >
>>> >         > I don't know if it is the right time to bring this up :D .
>>> >         >
>>> >         > Thanks,
>>> >         > Banashankar (bana_k)
>>> >
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbalukoff at blueboxcloud.com
206-607-0660 x807
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/0e72dbf4/attachment-0001.html>

From chris.friesen at windriver.com  Thu Sep 24 22:21:23 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Thu, 24 Sep 2015 16:21:23 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <56043E5B.7020709@windriver.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
Message-ID: <56047763.7090302@windriver.com>

On 09/24/2015 12:18 PM, Chris Friesen wrote:

>
> I think what happened is that we took the SIGTERM after the open() call in
> create_iscsi_target(), but before writing anything to the file.
>
>          f = open(volume_path, 'w+')
>          f.write(volume_conf)
>          f.close()
>
> The 'w+' causes the file to be immediately truncated on opening, leading to an
> empty file.
>
> To work around this, I think we need to do the classic "write to a temporary
> file and then rename it to the desired filename" trick.  The atomicity of the
> rename ensures that either the old contents or the new contents are present.
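For illustration, the temp-file-and-rename trick described above can be sketched like this (a minimal, hypothetical helper under the assumptions in this thread, not the actual Cinder code):

```python
import os
import tempfile

def atomic_write(path, data):
    """Atomically replace `path` with `data`: write a temp file in the
    same directory, fsync it, then rename over the target.  A reader (or
    a process killed by SIGTERM mid-write) sees either the old contents
    or the new contents, never a truncated file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the data hits disk first
        os.rename(tmp_path, path)  # atomic on POSIX filesystems
    except Exception:
        os.unlink(tmp_path)
        raise
```

Note the temp file must live in the same directory (same filesystem) as the target, since rename across filesystems is not atomic.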

I'm pretty sure that the upstream code is still susceptible to zeroing out the
file in the above scenario.  However, it doesn't raise an exception--that's due
to a local change on our part that attempted to fix the issue below.

The stable/kilo code *does* have a problem in that when it regenerates the file,
it's missing the CHAP authentication line (beginning with "incominguser").

Chris


From dougwig at parksidesoftware.com  Thu Sep 24 22:30:59 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Thu, 24 Sep 2015 16:30:59 -0600
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>
 <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
Message-ID: <EF3B902C-A4BD-4011-9D24-6F6AE2806FAA@parksidesoftware.com>

Hi Sergey,

I agree with the previous comments here. While supporting several APIs at once with one set of objects is a noble goal, in this case the object relationships are *completely* different. Unless you want to get into the business of redefining your own higher-level API abstractions, that general strategy will be awkward and difficult.

Some API changes lend themselves well to object reuse abstractions. Some don't. LBaaS v2 is definitely the latter, IMO.

What was the result of your meeting discussion?  (*goes to grub around in eavesdrop logs after typing this.*)

Thanks,
doug



> On Sep 23, 2015, at 12:09 PM, Sergey Kraynev <skraynev at mirantis.com> wrote:
> 
> Guys, I'm happy that you already discussed it here :)
> However, I'd like to raise the same question at our Heat IRC meeting.
> Probably we should define some common concepts, because I think LBaaS is
> not the only example of a service with several APIs.
> I will post an update in this thread later (after the meeting).
> 
> Regards,
> Sergey.
> 
> On 23 September 2015 at 14:37, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> A separate namespace would work great.
> 
> Thanks,
> Kevin
>  
> From: Banashankar KV
> Sent: Tuesday, September 22, 2015 9:14:35 PM
> 
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
> 
> What do you think about separating the two of them by name, as Doug mentioned? In the future, if we want to get rid of v1, we can just remove that namespace. Everything will be clean.
> 
> Thanks 
> Banashankar
> 
> 
> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> As I understand it, loadbalancer in v2 is more like pool was in v1. Can we make it so that if you are using the loadbalancer resource and it has the mandatory v2 properties, it tries to use the v2 API; otherwise it's a v1 resource? PoolMember should be OK being the same; it just needs to call v1 or v2 depending on whether the lb it's pointing at is v1 or v2. Is the monitor's API different between them? Can it be like pool member?
> 
> Thanks,
> Kevin
>  
> From: Brandon Logan
> Sent: Tuesday, September 22, 2015 5:39:03 PM
> 
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
> 
> So the v1 API is of the structure:
> 
> <neutron-endpoint>/lb/(vip|pool|member|health_monitor)
> 
> V2's is:
> <neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)
> 
> member is a child of pool, so it would go down one level.
> 
> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
> is enough of a difference.
> 
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
> > That's the problem. :/
> > 
> > I can't think of a way to have them coexist without: breaking old
> > templates, including v2 in the name, or having a flag on the resource
> > saying the version is v2. And as an app developer I'd rather not have
> > my existing templates break.
> > 
> > I haven't compared the APIs at all, but is there a required field of
> > v2 that is different enough from v1 that, by its simple existence in
> > the resource, you can tell a v2 from a v1 object? Would something like
> > that work? PoolMember wouldn't have to change; the same resource could
> > probably work for whatever lb it was pointing at, I'm guessing.
> > 
> > Thanks,
> > Kevin
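Kevin's detection idea could be sketched roughly like this (a hedged illustration: the property names below are purely hypothetical stand-ins, not actual Heat property names; the real mandatory v2 fields would come from the LBaaS v2 API):

```python
# Hypothetical names for properties that exist only on v2 resources.
V2_ONLY_PROPERTIES = frozenset(['listeners', 'loadbalancer_id'])

def detect_lbaas_version(properties):
    """Return 'v2' if any v2-only property is set on the resource,
    otherwise treat it as a v1 resource."""
    present = {k for k, v in properties.items() if v is not None}
    return 'v2' if present & V2_ONLY_PROPERTIES else 'v1'
```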
> > 
> > 
> > 
> > ______________________________________________________________________
> > From: Banashankar KV [banveerad at gmail.com]
> > Sent: Tuesday, September 22, 2015 4:40 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> > LbaasV2
> > 
> > 
> > 
> > Ok, sounds good. So now the question is: how should we name the new V2
> > resources?
> > 
> > 
> > 
> > Thanks  
> > Banashankar
> > 
> > 
> > 
> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
> > wrote:
> >         Yes, hence the need to support the v2 resources as separate
> >         things. Then I can rewrite the templates to include the new
> >         resources rather than the old resources as appropriate, i.e.,
> >         it will be a porting effort to rewrite them. Then do a heat
> >         update on the stack to migrate it from lbv1 to lbv2. Since
> >         they are different resources, it should create the new and
> >         delete the old.
> >         
> >         Thanks,
> >         Kevin
> >         
> >         
> >         ______________________________________________________________
> >         From: Banashankar KV [banveerad at gmail.com]
> >         Sent: Tuesday, September 22, 2015 4:16 PM 
> >         
> >         To: OpenStack Development Mailing List (not for usage
> >         questions)
> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
> >         for LbaasV2
> >         
> >         
> >         
> >         
> >         But even though V2 has introduced some new components, and the
> >         whole association of the resources with each other has changed,
> >         we should still be able to do what Kevin has mentioned, right?
> >         
> >         Thanks  
> >         Banashankar
> >         
> >         
> >         
> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
> >         <Kevin.Fox at pnnl.gov> wrote:
> >                 There needs to be a way to have both v1 and v2
> >                 supported in one engine....
> >                 
> >                 Say I have templates that use v1 already in existence
> >                 (I do), and I want to be able to heat stack update on
> >                 them one at a time to v2. This will replace the v1 lb
> >                 with v2, migrating the floating ip from the v1 lb to
> >                 the v2 one. This gives a smoothish upgrade path.
> >                 
> >                 Thanks,
> >                 Kevin
> >                 ________________________________________
> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
> >                 Sent: Tuesday, September 22, 2015 3:22 PM
> >                 To: openstack-dev at lists.openstack.org
> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
> >                 support for LbaasV2
> >                 
> >                 Well, I'd hate to have the V2 postfix on it, because
> >                 V1 will be deprecated and removed, which means the V2
> >                 being there would be lame.  Is there any kind of
> >                 precedent set for how to handle this?
> >                 
> >                 Thanks,
> >                 Brandon
> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
> >                 wrote:
> >                 > So are we thinking of naming them as follows?
> >                 > OS::Neutron::LoadBalancerV2
> >                 >
> >                 > OS::Neutron::ListenerV2
> >                 >
> >                 > OS::Neutron::PoolV2
> >                 >
> >                 > OS::Neutron::PoolMemberV2
> >                 >
> >                 > OS::Neutron::HealthMonitorV2
> >                 >
> >                 >
> >                 >
> >                 > and add all of those into the loadbalancer.py of the
> >                 > heat engine?
> >                 >
> >                 > Thanks
> >                 > Banashankar
> >                 >
> >                 >
> >                 >
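Registering the proposed names in a Heat plugin module could look roughly like this (a hedged sketch, assuming Heat's usual `resource_mapping` plugin hook; the classes are empty placeholders, and a real implementation would subclass Heat's Neutron resource base class):

```python
# Placeholder resource classes for the proposed LBaaS v2 resources.
class LoadBalancerV2(object):
    pass

class ListenerV2(object):
    pass

class PoolV2(object):
    pass

class PoolMemberV2(object):
    pass

class HealthMonitorV2(object):
    pass

def resource_mapping():
    """Map template resource type names to their implementation classes;
    Heat discovers plugin resources through this module-level hook."""
    return {
        'OS::Neutron::LoadBalancerV2': LoadBalancerV2,
        'OS::Neutron::ListenerV2': ListenerV2,
        'OS::Neutron::PoolV2': PoolV2,
        'OS::Neutron::PoolMemberV2': PoolMemberV2,
        'OS::Neutron::HealthMonitorV2': HealthMonitorV2,
    }
```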
> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
> >                 > <skraynev at mirantis.com> wrote:
> >                 >         Brandon,
> >                 >
> >                 >         As I understand it, v1 and v2 have
> >                 >         differences in the list of objects and also
> >                 >         in the relationships between them.
> >                 >         So I don't think it will be easy to upgrade
> >                 >         old resources (unfortunately).
> >                 >         I'd agree with Kevin's second suggestion
> >                 >         about implementing new resources in this case.
> >                 >
> >                 >         I see that a lot of guys want to help with
> >                 >         it :) And I suppose that Rabi Mishra and I may
> >                 >         try to help, because we were involved in the
> >                 >         implementation of the v1 resources in Heat.
> >                 >         Here is the list of v1 LBaaS resources in Heat:
> >                 >
> >                 >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
> >                 >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
> >                 >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
> >                 >         http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
> >                 >
> >                 >
> >                 >
> >                 >         Also, I suppose it may be discussed during
> >                 >         summit talks :)
> >                 >         I will add it to the etherpad of potential
> >                 >         sessions.
> >                 >
> >                 >
> >                 >
> >                 >         Regards,
> >                 >         Sergey.
> >                 >
> >                 >         On 22 September 2015 at 22:27, Brandon Logan
> >                 >         <brandon.logan at rackspace.com> wrote:
> >                 >                 There is some overlap, but there were
> >                 >                 some incompatible differences when we
> >                 >                 started designing v2.  I'm sure the
> >                 >                 same issues will arise this time
> >                 >                 around, so new resources sound like
> >                 >                 the path to go.  However, I do not
> >                 >                 know much about Heat and the
> >                 >                 resources, so I'm speaking on a very
> >                 >                 uneducated level here.
> >                 >
> >                 >                 Thanks,
> >                 >                 Brandon
> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
> >                 Fox, Kevin M wrote:
> >                 >                 > We're using the v1 resources...
> >                 >                 >
> >                 >                 > If the v2 ones are compatible and
> >                 can seamlessly
> >                 >                 upgrade, great
> >                 >                 >
> >                 >                 > Otherwise, make new ones please.
> >                 >                 >
> >                 >                 > Thanks,
> >                 >                 > Kevin
> >                 >                 >
> >                 >                 >
> >                 >
> >                  ______________________________________________________________________
> >                 >                 > From: Banashankar KV [banveerad at gmail.com]
> >                 >                 > Sent: Tuesday, September 22, 2015 10:07 AM
> >                 >                 > To: OpenStack Development Mailing
> >                 >                 > List (not for usage questions)
> >                 >                 > Subject: Re: [openstack-dev]
> >                 >                 > [neutron][lbaas] - Heat support for
> >                 >                 > LbaasV2
> >                 >                 >
> >                 >                 > Hi Brandon,
> >                 >                 > Work in progress, but I need some
> >                 >                 > input on the way we want them: do
> >                 >                 > they replace the existing lbaasv1,
> >                 >                 > or do we still need to support them?
> >                 >                 >
> >                 >                 > Thanks
> >                 >                 > Banashankar
> >                 >                 >
> >                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
> >                 >                 > Brandon Logan
> >                 >                 > <brandon.logan at rackspace.com> wrote:
> >                 >                 >         Hi Banashankar,
> >                 >                 >         I think it'd be great if you
> >                 >                 >         got this going. One of those
> >                 >                 >         things we want to have and
> >                 >                 >         people ask for, but it has
> >                 >                 >         always gotten a lower
> >                 >                 >         priority due to the critical
> >                 >                 >         things needed.
> >                 >                 >
> >                 >                 >         Thanks,
> >                 >                 >         Brandon
> >                 >                 >         On Mon, 2015-09-21 at 17:57
> >                 >                 >         -0700, Banashankar KV wrote:
> >                 >                 >         > Hi All,
> >                 >                 >         > I was thinking of starting
> >                 >                 >         > the work on heat to support
> >                 >                 >         > LBaasV2. Are there any
> >                 >                 >         > concerns about that?
> >                 >                 >         >
> >                 >                 >         > I don't know if it is the
> >                 >                 >         > right time to bring this
> >                 >                 >         > up :D .
> >                 >                 >         >
> >                 >                 >         > Thanks,
> >                 >                 >         > Banashankar (bana_k)
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/bb202f27/attachment.html>

From jpenick at gmail.com  Thu Sep 24 22:39:24 2015
From: jpenick at gmail.com (James Penick)
Date: Thu, 24 Sep 2015 15:39:24 -0700
Subject: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
In-Reply-To: <7ED8380F-5F13-430E-AE74-F9C74C37FEE1@gmail.com>
References: <CAGocpaFANOE47CPtmN0cGbLW22Q+ix7fio82AnVE2yzMmnbeKw@mail.gmail.com>
 <5602FA62.4000904@linux.vnet.ibm.com>
 <5602FEB2.6030804@linux.vnet.ibm.com>
 <CAPWkaSWUues-yeOkr7bVX75F=AL_5j6eJMA9+qQcZLQ8nD2_yg@mail.gmail.com>
 <560303B7.6080906@linux.vnet.ibm.com> <20150923201251.GA8745@crypt>
 <56030BD6.10500@internap.com> <20150923205020.GC8745@crypt>
 <56031CFE.2020906@internap.com>
 <0EC37EE9-59A0-48B3-97CB-7C66CB5969E6@gmail.com>
 <20150923235953.GD8745@crypt>
 <26A7F873-91A8-44A9-B130-BBEFE682E94B@gmail.com>
 <CAENqGMF71BZPP5EErCaur-gLiVk5H9nAFO=64Y_Z0m4LdJGsYg@mail.gmail.com>
 <CAOyZ2aEu3238K-ETutR0Acrsf+_C0XXTYTDFY9kiKD6kqPUo6g@mail.gmail.com>
 <5603B228.3070502@redhat.com>
 <7ED8380F-5F13-430E-AE74-F9C74C37FEE1@gmail.com>
Message-ID: <CAMomh-57RzyFjWje-HVW9kK1GO1VJzP2Rw2PxY8XTuDoUrEPzQ@mail.gmail.com>

On Thu, Sep 24, 2015 at 2:22 PM, Sam Morrison <sorrison at gmail.com> wrote:

>
> Yes an AZ may not be considered a failure domain in terms of control
> infrastructure, I think all operators understand this. If you want control
> infrastructure failure domains use regions.
>
> However from a resource level (eg. running instance/ running volume) I
> would consider them some kind of failure domain. It's a way of saying to a
> user if you have resources running in 2 AZs you have a more available
> service.
>
> Every cloud will have a different definition of what an AZ is, a
> rack/collection of racks/DC etc. OpenStack doesn't need to decide what that
> is.
>
> Sam
>

This seems to map more closely to how we use AZs.

Turning it around to the user perspective:
 My users want to be sure that when they boot compute resources, they can
do so in such a way that their application will be immune to a certain
amount of physical infrastructure failure.

Use cases I get from my users:
1. "I want to boot 10 instances, and be sure that if a single leg of power
goes down, I won't lose more than 2 instances"
2. "My instances move a lot of network traffic. I want to ensure that I
don't have more than 3 of my instances per rack, or else they'll saturate
the ToR"
3. "Compute room #1 has been overrun by crazed ferrets. I need to boot new
instances in compute room #2."
4. "I want to boot 10 instances, striped across at least two power domains,
under no less than 5 top of rack switches, with access to network security
zone X."
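
Taken at face value, use case 1 is just striping with a per-domain cap. A minimal sketch in Python (pure illustration, not actual Nova scheduler code; the function and domain names are made up):

```python
# Illustration only: round-robin striping of instances across failure
# domains with a per-domain cap, the behaviour use case 1 asks for.
def stripe(num_instances, domains, max_per_domain):
    """Spread instances across domains, never exceeding the cap.

    Raises ValueError when capacity is insufficient, the analogue of
    a scheduler returning "no valid host".
    """
    if num_instances > len(domains) * max_per_domain:
        raise ValueError("not enough capacity for the requested spread")
    placement = {d: 0 for d in domains}
    for i in range(num_instances):
        placement[domains[i % len(domains)]] += 1
    return placement

# "10 instances, no more than 2 lost if a single power leg goes down":
print(stripe(10, ["power-leg-%d" % i for i in range(5)], 2))
```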

For my users, abstractions for availability and scale of the control plane
should be hidden from their view. I've almost never been asked by my users
whether or not the control plane is resilient. They assume that my team, as
the deployers, have taken adequate steps to ensure that the control plane
is deployed in a resilient and highly available fashion.

I think it would be good for the operator community to come to an agreement
on what an AZ should be from the perspective of those who deploy both
public and private clouds and bring that back to the dev teams.

-James
:)=
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/b91c1fef/attachment.html>

From devdatta.kulkarni at RACKSPACE.COM  Thu Sep 24 22:44:08 2015
From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni)
Date: Thu, 24 Sep 2015 22:44:08 +0000
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
 os-image-url with glance client
Message-ID: <1443134649448.43790@RACKSPACE.COM>

Hi, Glance team,


In Solum, we use Glance to store the Docker images that we create for
applications, and we use the Glance client internally to upload them. Until
recently, 'glance image-create' with only a token had been working for us (in
devstack). Today I noticed that glance image-create with just a token is not
working anymore. It is also not working when os-auth-token and os-image-url
are passed in. According to the documentation
(http://docs.openstack.org/developer/python-glanceclient/), passing a token
and image URL should work. The client, which I have installed from master, is
asking for a username (and a password, if a username is specified).


Solum does not have access to the end-user's password, so we need the ability to interact with Glance without providing one, as had been working until recently.


I investigated the issue a bit and have filed a bug with my findings.

https://bugs.launchpad.net/python-glanceclient/+bug/1499540


Could someone help with resolving this issue?
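
For concreteness, the token-based invocation described in the docs looks roughly like the command assembled below; the endpoint, token, and image name are placeholders, not real values:

```python
# Sketch of the documented token-only glance invocation; all values here
# are placeholders, not real credentials or endpoints.
token = "gAAAAAB-example-token"
image_url = "http://glance.example.com:9292"

cmd = [
    "glance",
    "--os-auth-token", token,      # pre-obtained Keystone token
    "--os-image-url", image_url,   # talk to Glance directly
    "image-create",
    "--name", "solum-app-image",
    "--disk-format", "raw",
    "--container-format", "docker",
]
print(" ".join(cmd))
```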


Regards,

Devdatta
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/9a6d5f29/attachment.html>

From clint at fewbar.com  Thu Sep 24 22:53:28 2015
From: clint at fewbar.com (Clint Byrum)
Date: Thu, 24 Sep 2015 15:53:28 -0700
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
	informal working group suggestion
In-Reply-To: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
References: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
Message-ID: <1443135171-sup-1175@fewbar.com>

Excerpts from Dina Belova's message of 2015-09-22 05:57:19 -0700:
> Hey, OpenStackers!
> 
> I'm writing to propose organising a new informal team to work specifically
> on OpenStack performance issues. This will be a sub-team of the already
> existing Large Deployments Team, and I think it will be a good idea to
> gather people interested in OpenStack performance in one room, identify
> which issues are worrying contributors and what can be done, and share the
> results of performance research :)
> 
> So please volunteer to take part in this initiative. I hope many people
> will be interested, and that we'll be able to use the cross-project session slot
> <http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and hold a
> kick-off meeting.
> 
> I would like to apologise for writing to two mailing lists at the same
> time, but I want to make sure that all possibly interested people will
> notice the email.
> 

Dina, this is great. Count me in, and see you in Tokyo!


From grisha at alum.mit.edu  Thu Sep 24 23:01:06 2015
From: grisha at alum.mit.edu (Gregory Golberg)
Date: Thu, 24 Sep 2015 16:01:06 -0700
Subject: [openstack-dev] Nova request to Neutron - can instance ID be added?
Message-ID: <CAJxDUeWAnweDV6nH6RoCENRrPJFz=PgBiLC6W_dJW0ozSoBMBw@mail.gmail.com>

Hi All,

When launching a new VM, Nova sends a request to Neutron that contains
various items but not the instance ID. Would it be a problem to add that to
the request?

Gregory Golberg
about.me/gregorygolberg
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/7cd801f2/attachment.html>

From blak111 at gmail.com  Thu Sep 24 23:05:35 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Thu, 24 Sep 2015 16:05:35 -0700
Subject: [openstack-dev] Nova request to Neutron - can instance ID be
	added?
In-Reply-To: <CAJxDUeWAnweDV6nH6RoCENRrPJFz=PgBiLC6W_dJW0ozSoBMBw@mail.gmail.com>
References: <CAJxDUeWAnweDV6nH6RoCENRrPJFz=PgBiLC6W_dJW0ozSoBMBw@mail.gmail.com>
Message-ID: <CAO_F6JMAPKeieKBt+UfJcczRvvruHJxcciea_oanmcdDF7sAdw@mail.gmail.com>

The device_id field should be populated with the instance ID.
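
As a concrete illustration of that mapping (the port records below are made-up stand-ins for what a Neutron port listing returns, not real client code):

```python
# Hypothetical port records; on a real Neutron port bound to a Nova
# instance, device_id carries that instance's UUID.
ports = [
    {"id": "port-1", "device_id": "instance-aaa", "mac_address": "fa:16:3e:00:00:01"},
    {"id": "port-2", "device_id": "instance-bbb", "mac_address": "fa:16:3e:00:00:02"},
]

def instance_for_port(ports, port_id):
    """Return the instance ID (device_id) that owns the given port."""
    for port in ports:
        if port["id"] == port_id:
            return port["device_id"]
    return None

print(instance_for_port(ports, "port-2"))  # instance-bbb
```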

On Thu, Sep 24, 2015 at 4:01 PM, Gregory Golberg <grisha at alum.mit.edu>
wrote:

> Hi All,
>
> When launching a new VM, Nova sends a request to Neutron which contains
> various items but does not contain instance ID. Would it be a problem to
> add that to the request?
>
> Gregory Golberg
> about.me/gregorygolberg
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/dd54157a/attachment.html>

From akhivin at mirantis.com  Thu Sep 24 23:13:33 2015
From: akhivin at mirantis.com (Alexey Khivin)
Date: Fri, 25 Sep 2015 02:13:33 +0300
Subject: [openstack-dev] [murano] suggestion on commit message title format
 for the murano-apps repository
Message-ID: <CAM9f5rjmKSVfn0PySHVZC2FB81_gzgB5a0Vc1=BSL31JEU40WA@mail.gmail.com>

Hello everyone

Almost every commit message in the murano-apps repository contains the name
of the application it relates to.

I suggest specifying the application in the commit message title, using a
strict and uniform format.


For example, something like this:

[ApacheHTTPServer] Utilize Custom Network selector (https://review.openstack.org/201659)
[Docker/Kubernetes] Fix typo (https://review.openstack.org/208452)

instead of this:

Utilize Custom Network selector in Apache App
Fix typo in Kubernetes Cluster app (https://review.openstack.org/208452)


I think it would improve the readability of the message list.
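
A strict title format is also easy to lint. A minimal sketch of such a check (the exact pattern is of course open to discussion):

```python
import re

# A title must start with a bracketed app name,
# e.g. "[Docker/Kubernetes] Fix typo".
TITLE_RE = re.compile(r"^\[[A-Za-z0-9/._-]+\] \S.*")

def title_ok(title):
    return bool(TITLE_RE.match(title))

print(title_ok("[ApacheHTTPServer] Utilize Custom Network selector"))  # True
print(title_ok("Fix typo in Kubernetes Cluster app"))                  # False
```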

-- 
Regards,
Alexey Khivin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/334168ad/attachment.html>

From spandhe.openstack at gmail.com  Thu Sep 24 23:19:57 2015
From: spandhe.openstack at gmail.com (Shraddha Pandhe)
Date: Thu, 24 Sep 2015 16:19:57 -0700
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server
In-Reply-To: <CAAL3yRs4Ss1t_O_YBVxeiMznT9Yd6btc-Neqy_Y6ZhmptYs9Wg@mail.gmail.com>
References: <F8D0A41F2F840442963726CFDA7D173E251C27@CBSEX1.cloudbase.local>
 <CAAL3yRs4Ss1t_O_YBVxeiMznT9Yd6btc-Neqy_Y6ZhmptYs9Wg@mail.gmail.com>
Message-ID: <CAGwnvr7NVQKpod3Xd9=cDyKjewBZhhuhJpvH+ANjxG3YCGKfWQ@mail.gmail.com>

Hi Ionut,

I am working on a similar effort: Adding driver for neutron-dhcp-agent [1]
& [2]. Is it similar to what you are trying to do? My approach doesn't need
any extra database. There are two ways to achieve HA in my case:

1. Run multiple neutron-dhcp-agents and set agents_per_network > 1, so that
more than one DHCP server has the config needed to serve the DHCP request.
2. ISC dhcpd itself has an HA mode where you can set up failover peers, but
I haven't tried that yet.

I have this driver fully implemented and working here at Yahoo!. Working on
making it more generic and upstreaming it. Please let me know if this
effort is similar so that we can consider working together on a single
effort.
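
For reference, the per-node reservation an isc-dhcp-server backend manages boils down to a dhcpd.conf host block. A minimal sketch of rendering one (the function name and sample values are hypothetical):

```python
def host_entry(node_name, mac, ip):
    """Render an isc-dhcp-server host reservation for one node."""
    return (
        "host %s {\n"
        "  hardware ethernet %s;\n"
        "  fixed-address %s;\n"
        "}\n" % (node_name, mac, ip)
    )

print(host_entry("ironic-node-01", "52:54:00:12:34:56", "10.0.0.5"))
```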



[1] https://review.openstack.org/#/c/212836/
[2] https://bugs.launchpad.net/neutron/+bug/1464793

On Thu, Sep 24, 2015 at 9:40 AM, Dmitry Tantsur <divius.inside at gmail.com>
wrote:

> 2015-09-24 17:38 GMT+02:00 Ionut Balutoiu <
> ibalutoiu at cloudbasesolutions.com>:
>
>> Hello, guys!
>>
>> I'm starting a new implementation for a dhcp provider,
>> mainly to be used for Ironic standalone. I'm planning to
>> push it upstream. I'm using isc-dhcp-server service from
>> Linux. So, when an Ironic node is started, the ironic-conductor
>> writes in the config file the MAC-IP reservation for that node and
>> reloads the dhcp service. I'm using a SQL database as a backend to store
>> the dhcp reservations (I think it is cleaner, and it should allow us
>> to have more than one DHCP server). What do you think about my
>> implementation ?
>>
>
> What you describe slightly resembles how ironic-inspector works. It needs
> to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
> rules giving (or not giving) access to the dnsmasq instance. I wonder if we
> may find some common code between these 2, but I definitely don't want to
> reinvent Neutron :) I'll think about it after seeing your spec and/or code,
> I'm already looking forward to them!
>
>
>> Also, I'm not sure how can I scale this out to provide HA/failover.
>> Do you guys have any idea ?
>>
>> Regards,
>> Ionut Balutoiu
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Dmitry Tantsur
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/1c61485e/attachment.html>

From zhenzan.zhou at intel.com  Thu Sep 24 23:56:25 2015
From: zhenzan.zhou at intel.com (Zhou, Zhenzan)
Date: Thu, 24 Sep 2015 23:56:25 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <1443116221875.72882@vmware.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>,
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>
Message-ID: <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>

Rejoin-stack.sh works only if the VM's IP has not changed, so using a NAT network with a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM, which takes a minute or so.  Then I run rejoin-stack.sh, which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and OpenStack state that were running before.



- Alex





________________________________
From: Shiv Haris <sharis at Brocade.com<mailto:sharis at Brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward: it involves modifying the .vbox file and seems prone to user errors. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions ....

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all I apologize for not making it at the meeting yesterday, could not cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you posed however I am still working on some of the subtle issues raised. Once I have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB - but the OVA compresses the image and disk to 3 GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up  the VM. I agree a snapshot will be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova<https://urldefense.proofpoint.com/v2/url?u=http-3A__paloaltan.net_Congress_Congress-5FUsecases-5FSEPT-5F17-5F2015.ova&d=BQMGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=djA1lFdIf0--GIJ_8gr44Q&m=3IP4igrLri-BaK8VbjbEq2l_AGknCI7-t3UbP5VwlU8&s=wVyys8I915mHTzrOp8f0KLqProw6ygNfaMSP0T-yqCg&e=>

I usually run this on a MacBook Air - but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases - I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer it be the same for other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/816d7ec5/attachment.html>

From boris at pavlovic.me  Fri Sep 25 00:05:38 2015
From: boris at pavlovic.me (Boris Pavlovic)
Date: Thu, 24 Sep 2015 17:05:38 -0700
Subject: [openstack-dev] [openstack-operators][Rally] Rally plugins
 reference is available
Message-ID: <CAD85om30QcMUHi4vB8BjD0dUf_h=ZZNjKo-w72+ffKMC9GRTAg@mail.gmail.com>

Hi stackers,

As you may know, Rally test cases are created as a mix of plugins.

At this point in time we have more than 200 plugins for almost all
OpenStack projects.
Previously you had to analyze the plugins' code or use the "rally plugin
find/list" commands to find the plugins you need, which was a pain in the neck.

So finally we have auto generated plugin reference:
https://rally.readthedocs.org/en/latest/plugin/plugin_reference.html


Best regards,
Boris Pavlovic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/985c4339/attachment.html>

From ayip at vmware.com  Fri Sep 25 00:08:40 2015
From: ayip at vmware.com (Alex Yip)
Date: Fri, 25 Sep 2015 00:08:40 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>,
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>,
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
Message-ID: <1443139720841.25541@vmware.com>

I was able to make devstack run without a network connection by disabling tempest.  So, I think it uses the loopback IP address, and that does not change, so rejoin-stack.sh works without a network at all.


- Alex



________________________________
From: Zhou, Zhenzan <zhenzan.zhou at intel.com>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if the VM's IP has not changed, so using a NAT network with a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM, which takes a minute or so.  Then I run rejoin-stack.sh, which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and OpenStack state that were running before.



- Alex





________________________________
From: Shiv Haris <sharis at Brocade.com<mailto:sharis at Brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward: it involves modifying the .vbox file and seems prone to user errors. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions ...

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all I apologize for not making it at the meeting yesterday, could not cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you posed however I am still working on some of the subtle issues raised. Once I have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB - but the OVA compresses the image and disk to 3 GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up  the VM. I agree a snapshot will be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova<https://urldefense.proofpoint.com/v2/url?u=http-3A__paloaltan.net_Congress_Congress-5FUsecases-5FSEPT-5F17-5F2015.ova&d=BQMGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=djA1lFdIf0--GIJ_8gr44Q&m=3IP4igrLri-BaK8VbjbEq2l_AGknCI7-t3UbP5VwlU8&s=wVyys8I915mHTzrOp8f0KLqProw6ygNfaMSP0T-yqCg&e=>

I usually run this on a MacBook Air - but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases - I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer it be the same for other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/52adf968/attachment.html>

From jim at jimrollenhagen.com  Fri Sep 25 00:40:49 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Thu, 24 Sep 2015 17:40:49 -0700
Subject: [openstack-dev] [ironic] Ironic is open for Mitaka development
Message-ID: <20150925004049.GD14957@jimrollenhagen.com>

Hey all,

I've proposed a patch to release Ironic 4.2.0, and we will be cutting
the stable/liberty branch from the same SHA:
https://review.openstack.org/#/c/227582/

This means Ironic is now open for Mitaka development; commit away!

I'll be cleaning up Launchpad and prioritizing features for our next
release next week sometime, and I hope to make a 4.3 release shortly
after the summit.

Thanks everyone for your continued hard work. :)

// jim


From mvoelker at vmware.com  Fri Sep 25 01:20:04 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Fri, 25 Sep 2015 01:20:04 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
Message-ID: <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>

> 
> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
> 
> Hi Melanie
> 
> In general, images created by glance v1 API should be accessible using v2 and
> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
> causing incompatibility. These fixes were back-ported to stable/kilo.
> 
> Thanks
> Sabari
> 
> [1] - https://bugs.launchpad.net/glance/+bug/1447215
> [2] - https://bugs.launchpad.net/bugs/1419823 
> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
> 
> 
> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
> Hi All,
> 
> I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
> 
> From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
> 
> I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?

Just to clarify the DefCore situation a bit here: 

The DefCore Committee is considering adding some Glance v2 capabilities [1] as "advisory" (i.e. not required now but possibly in the future unless folks provide feedback as to why they shouldn't be) in its next Guideline, which is due to go to the Board of Directors in January and will cover Juno, Kilo, and Liberty [2].  The Nova image APIs are already required [3][4].  As discussion began about which Glance capabilities to include and whether or not to keep the Nova image APIs required, it was pointed out that the many ways images can currently be created in OpenStack are problematic from an interoperability point of view, since some clouds use one and some use others.  To be included in a DefCore Guideline, capabilities are scored against twelve Criteria [5] and need to achieve a certain total.  Having a bunch of different ways to deal with images actually hurts the chances of any one of them meeting the bar, because it makes it less likely that any of them will satisfy several criteria.  For example:

One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one cloud uses none of those, relying instead on the import task API.

Another criterion is "atomic" [8], which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as Glance v1's and v2's image-create APIs, the latter lose points.

Another criterion is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.

There is also a criterion for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears only one other than the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption, since Nova still uses the v1 API to talk to Glance and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release [13].

So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week).  It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose v1 at least internally to the cloud.  But... really?  That's still an ugly position to be in, because at the end of the day it involves a lot more moving parts than are really necessary, and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.

So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]  

[1] https://review.openstack.org/#/c/213353/
[2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
[3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
[4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
[5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
[6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
[7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
[8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
[9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
[10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
[11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
[12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
[13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015

At Your Service,

Mark T. Voelker

> 
> Thanks,
> -melanie (irc: melwitt)
> 
> 
> 
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From stevemar at ca.ibm.com  Fri Sep 25 02:01:30 2015
From: stevemar at ca.ibm.com (Steve Martinelli)
Date: Thu, 24 Sep 2015 22:01:30 -0400
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
 os-image-url with glance client
In-Reply-To: <1443134649448.43790@RACKSPACE.COM>
References: <1443134649448.43790@RACKSPACE.COM>
Message-ID: <OF69663DD6.8D614D95-ON00257ECB.000A7F24-85257ECB.000B1FB4@notes.na.collabserv.com>


I can't speak to the glance client changes, but this seems like an awkward
design.

If you don't know the end user's name and password, then how are you
getting the token? Is it the admin token? Why not create a service account
and use keystonemiddleware?
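
For what it's worth, the service-account pattern Steve suggests is mostly configuration; a minimal sketch of a keystonemiddleware auth_token section from around this release (host names, the account name, and the password are placeholders, and exact option names may differ by middleware version):

```ini
[keystone_authtoken]
auth_uri = http://keystone-host:5000
identity_uri = http://keystone-host:35357
admin_user = solum            ; hypothetical service account
admin_password = SERVICE_PASSWORD
admin_tenant_name = service
```

The service then validates incoming user tokens itself rather than ever needing the end user's password.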

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:	Devdatta Kulkarni <devdatta.kulkarni at RACKSPACE.COM>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	2015/09/24 06:44 PM
Subject:	[openstack-dev] [Glance][Solum] Using os-auth-token and
            os-image-url with glance client



Hi, Glance team,

In Solum, we use Glance to store Docker images that we create for
applications. We use the Glance client internally to upload these images. Until
recently, 'glance image-create' with only a token had been
working for us (in devstack). Today, I started noticing that glance
image-create with just a token is not working anymore. It is also not working
when os-auth-token and os-image-url are passed in. According to
documentation (http://docs.openstack.org/developer/python-glanceclient/),
passing token and image-url should work. The client, which I have installed
from master, is asking username (and password, if username is specified).
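
For reference, the token-plus-endpoint invocation the docs describe looks roughly like the following. This is a sketch, not Solum's actual call: the endpoint, token, and image parameters are placeholders.

```console
$ export TOKEN=<pre-obtained-keystone-token>
$ glance --os-auth-token $TOKEN \
         --os-image-url http://glance-host:9292 \
         image-create --name my-app-image \
         --disk-format raw --container-format bare \
         --file image.tar
```

With a client built from master, this form reportedly prompts for a username instead of proceeding.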

Solum does not have access to the end user's password, so we need the ability
to interact with Glance without providing password, as it has been working
till recently.

I investigated the issue a bit and have filed a bug with my findings.
https://bugs.launchpad.net/python-glanceclient/+bug/1499540

Can someone help with resolving this issue?

Regards,
Devdatta
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/5d44069c/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/5d44069c/attachment.gif>

From xiwhuang at ebay.com  Fri Sep 25 02:32:02 2015
From: xiwhuang at ebay.com (Huang, Oscar)
Date: Fri, 25 Sep 2015 02:32:02 +0000
Subject: [openstack-dev]  [oslo] LnP tests for oslo.messaging
Message-ID: <147889938968B74596B6EE7D57BEB44A01D4E5A1@PHX-EXRDA-S62.corp.ebay.com>

Hi, 
	We would like to set up an LnP test environment for oslo.messaging and RabbitMQ so that we can continuously track the stability and performance impact of oslo/kombu/amqp library changes and MQ upgrades.
	I wonder whether there are existing packs of test cases that can be used as the workload.
    Basically we want to emulate the running state of a large Nova cell (>1000 computes), but focus only on the messaging subsystem.
	Thanks.


Best wishes, 

Oscar (???)



From pieter.c.kruithof-jr at hpe.com  Fri Sep 25 03:17:22 2015
From: pieter.c.kruithof-jr at hpe.com (Kruithof, Piet)
Date: Fri, 25 Sep 2015 03:17:22 +0000
Subject: [openstack-dev] [all] [neutron] [ux] [nova] Nova Network/Neutron
	Migration Survey
Message-ID: <D22A18E0.18294%pieter.c.kruithof-jr@hp.com>

The OpenStack UX project team would like to invite you to participate in a survey to help drive product direction for the Neutron project.  The survey should take no more than fifteen minutes to complete.

We are specifically looking for folks in the following kinds of roles: Architect, Cloud Operator, System Administrator, Network Architect / Engineer / Administrator, Developer, or IT Manager / Director.  In addition, they should be familiar with Nova Network, Neutron, or both.

You will also be entered in a raffle for one of two $100 US Amazon gift cards at the end of the survey.   As always, the results from the survey will be shared with the OpenStack community.

Please click on the following link to begin the survey.

https://www.surveymonkey.com/r/osnetworking

Thanks,

Piet Kruithof
Sr UX Architect, HP Helion Cloud
PTL, OpenStack UX project

"For every complex problem, there is a solution that is simple, neat and wrong."

H. L. Mencken



From nik.komawar at gmail.com  Fri Sep 25 04:55:47 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 25 Sep 2015 00:55:47 -0400
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
 os-image-url with glance client
In-Reply-To: <OF69663DD6.8D614D95-ON00257ECB.000A7F24-85257ECB.000B1FB4@notes.na.collabserv.com>
References: <1443134649448.43790@RACKSPACE.COM>
 <OF69663DD6.8D614D95-ON00257ECB.000A7F24-85257ECB.000B1FB4@notes.na.collabserv.com>
Message-ID: <5604D3D3.7060707@gmail.com>

Devdatta, thanks for creating the bug. We should get this fixed; it was
introduced in the upgrade to 1.x.x.

Steve, my guess is that the context is being used to delegate the token
to glanceclient and then to Glance. We have this awkwardness with other
OpenStack services, and it causes problems for some operations.

I am not sure which approach Solum should adopt, as there seem to be
tradeoffs with each. In this case, however, we should try to fix this
for the next release of the client.

On 9/24/15 10:01 PM, Steve Martinelli wrote:
>
> I can't speak to the glance client changes, but this seems like an
> awkward design.
>
> If you don't know the end user's name and password, then how are you
> getting the token? Is it the admin token? Why not create a service
> account and use keystonemiddleware?
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
>
> From: Devdatta Kulkarni <devdatta.kulkarni at RACKSPACE.COM>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: 2015/09/24 06:44 PM
> Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
> os-image-url with glance client
>
> ------------------------------------------------------------------------
>
>
>
> Hi, Glance team,
>
> In Solum, we use Glance to store Docker images that we create for
> applications. We use Glance client internally to upload these images.
> Till recently, 'glance image-create' with only token has been
> working for us (in devstack). Today, I started noticing that glance
> image-create with just token is not working anymore. It is also not
> working when os-auth-token and os-image-url are passed in. According
> to documentation
> (http://docs.openstack.org/developer/python-glanceclient/), passing
> token and image-url should work. The client, which I have installed
> from master, is asking username (and password, if username is specified).
>
> Solum does not have access to end-user's password. So we need the
> ability to interact with Glance without providing password, as it has
> been working till recently.
>
> I investigated the issue a bit and have filed a bug with my findings.
> https://bugs.launchpad.net/python-glanceclient/+bug/1499540
>
> Can someone help with resolving this issue?
>
> Regards,
> Devdatta__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 

Thanks,
Nikhil

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/64918610/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/64918610/attachment.gif>

From openstack at lanabrindley.com  Fri Sep 25 05:00:03 2015
From: openstack at lanabrindley.com (Lana Brindley)
Date: Fri, 25 Sep 2015 15:00:03 +1000
Subject: [openstack-dev] What's Up, Doc? 25 September 2015
Message-ID: <5604D4D3.1070309@lanabrindley.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi everyone,

First of all, thank you to everyone who has suggested sessions for docs
at Tokyo. I'll keep the etherpad open for another week or so, just to
give everyone a chance, and will start working on a draft schedule next
week. I have spent most of this week on bug triage, which is repetitive
work, but it's gratifying to see the numbers drop a little!

== Progress towards Liberty ==

19 days to go

534 bugs closed so far for this release.

The main areas of focus we had during Liberty were the RST migration,
paying down some of our technical debt, and improving our general
communication issues. The RST conversion is well and truly complete,
with just a couple of books waiting in the wings to be published. We
have made some progress on improving our communication generally, with
liaisons now more visible and involved, and some improved processes.

== Mitaka Summit Prep ==

The schedule app for your phone is now available. This is a great tool
once you're on the ground at Summit:
https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=987216292&mt=8
(Apple) and
https://play.google.com/store/apps/details?id=org.sched.openstacksummitmay2015vancouver
(Android)

Docs have been allocated two fishbowls, four workrooms, and a half day
meetup, so now is the time to start gathering ideas for what you would
like to see us cover. Add your ideas to the etherpad here:
https://etherpad.openstack.org/p/Tokyo-DocsSessions I'll create a draft
session based on this feedback next week.

== Bug Triage Process ==

The core team have been noticing a lot of people trying to close bugs
that haven't been triaged recently. It is very important that you have
someone else confirm your bug before you go ahead and create a patch.
Waiting just a little while for triage can save you (and our core team!)
a lot of work, and makes sure we don't let wayward changes past the
gate. If you're unsure about whether you need to wait or not, or if
you're in a hurry to work on a bug, you can always ping one of the core
team on IRC or on our mailing list. The core team members are listed
here, and we're always happy to help you out with a bug or a patch:
https://launchpad.net/~openstack-doc-core

This is even more important than usual as we get closer to the Liberty
release, so remember: If it's not triaged, don't fix it!

== Spotlight Bugs ==

During Liberty we spotlighted a total of 24 bugs, exactly half of which
have now been closed. This is a great effort to clear out some old dusty
bugs and pay down some technical debt. Thanks go out to Tom Fifield,
Olga Gusarenko, Yusuke Ide, Brian Moss, Deena Cooper, and Alexandra
Settle for taking them on! With Liberty and the Summit bearing down on
us, though, let's focus back on what needs to be done for release. At
Summit, we'll discuss how to expand and improve on this effort in future
releases.

== Doc team meeting ==

The US meeting was held this week. The minutes are here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-09-23

The next meetings will be the final meetings before the Liberty release:
APAC: Wednesday 30 September, 00:30:00 UTC
US: Wednesday 7 October, 14:00:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

- --

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openstack at lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBAgAGBQJWBNTTAAoJELppzVb4+KUyW6EH/2kYOYHa2DHfHJxN2XC1QcpD
L3wwDWxu9B2G6M/hNzRJ/KELSVikfuK9+ChOA+kD2kXLs6dV1oLXXCbV2KIdHKWP
+EYPG0G/CsMkoq8KIfzYwW6c2k+RzSXZAnI1ZhcRJiwhlIC5JlSZ118Y64GeLxmB
jISEqsi7WdK76caMmEy+3+QrpBY/WNOVE5Dyu0TiwGTPtU1EuZEAXHkdFjczKEKM
azlyyFIF4V28+BjrZpUybarEL2hZerHt2XiNd5dCRoQgE7b8ubJ0cidErPazI9AV
mZNZO3VtLTfSC2ytNBwIBaN3ezJzC5mvba6ofDdej3jIvZf3o6IQihbMaYWyLDI=
=uZBu
-----END PGP SIGNATURE-----


From nik.komawar at gmail.com  Fri Sep 25 05:01:10 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Fri, 25 Sep 2015 01:01:10 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
Message-ID: <5604D516.3060201@gmail.com>

Hi Melanie,

In short, you can work on images using Glance's v1 and v2 at the same
time and your images should be created, listed etc.

When you operate Glance with Nova (and likely use Nova's images API), you
currently need Glance v1 deployed. There is an upgrade plan (v1 -> v2) we
are working on, planned for Mitaka. So, for now, you need Glance v1 to
work with Nova.
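
As a quick way to confirm that an image is visible through both API versions, one can hit both endpoints directly; a rough sketch (host, port, and token are placeholders):

```console
$ curl -s -H "X-Auth-Token: $TOKEN" http://glance-host:9292/v1/images
$ curl -s -H "X-Auth-Token: $TOKEN" http://glance-host:9292/v2/images
```

The same image ID should appear in both listings if the v1/v2 metadata fixes mentioned earlier in the thread are in place.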

On 9/24/15 5:17 PM, melanie witt wrote:
> Hi All,
>
> I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>
> From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>
> I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>
> Thanks,
> -melanie (irc: melwitt)
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil



From viktor.tikkanen at nokia.com  Fri Sep 25 05:29:14 2015
From: viktor.tikkanen at nokia.com (Tikkanen, Viktor (Nokia - FI/Espoo))
Date: Fri, 25 Sep 2015 05:29:14 +0000
Subject: [openstack-dev] [OPNFV] [Functest] Tempest & Rally
In-Reply-To: <17168_1443102983_56040107_17168_166_1_56040104.5080602@orange.com>
References: <17168_1443102983_56040107_17168_166_1_56040104.5080602@orange.com>
Message-ID: <3E4E0D3116F17747BB7E466168535A7427229868@DEMUMBX002.nsn-intra.net>

Hi Morgan

and thank you for the overview.

I'm now waiting for the POD#2 VPN profile (will be ready soon). We will try then to figure out what OpenStack/tempest/rally configuration changes are needed in order to get rid of those test failures.

I suppose that most of the problems (like "Multiple possible networks found" etc.) are relatively easy to solve.

BTW, since Tempest is currently developed in "branchless" mode (without release-specific stable versions), do we have some common understanding/requirements of how "dynamically" Functest should use its code?

For example, config_functest.py seems to contain routines for cloning/installing Rally (and indirectly Tempest) code. Does that mean the code will be cloned/installed when the test set is executed for the first time? (I'm just wondering whether it is necessary to somehow "freeze" the code used for each OPNFV release, to make sure it remains compatible and that test results are comparable between different OPNFV setups.)

-Viktor

> -----Original Message-----
> From: EXT morgan.richomme at orange.com [mailto:morgan.richomme at orange.com]
> Sent: Thursday, September 24, 2015 4:56 PM
> To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> Cc: Jose Lausuch
> Subject: [OPNFV] [Functest] Tempest & Rally
> 
> Hi,
> 
> I was wondering whether you could have a look at Rally/Tempest tests we
> automatically launch in Functest.
> We have still some errors and I assume most of them are due to
> misconfiguration and/or quota ...
> With Jose, we planned to have a look after SR0 but we do not have much
> time and we are not fully skilled (even if we progressed a little bit:))
> 
> If you could have a look and give your feedback, it would be very
> helpful, we could discuss it during an IRC weekly meeting
> In Arno we did not use the SLA criteria, that is also something we could
> do for the B Release
> 
> for instance if you look at
> https://build.opnfv.org/ci/view/functest/job/functest-foreman-master/19/consoleText
> 
> you will see rally and Tempest log
> 
> Rally scenario are a compilation of default Rally scenario played one
> after the other and can be found in
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> 
> the Rally artifacts are also pushed into the artifact server
> http://artifacts.opnfv.org/
> e.g.
> http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-07/results/rally/opnfv-authenticate.html
> look for 09-23 to get Rally json/html files and tempest.conf
> 
> thanks
> 
> Morgan
> 
> 
> __________________________________________________________________________
> _______________________________________________
> 
> Ce message et ses pieces jointes peuvent contenir des informations
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
> recu ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou
> falsifie. Merci.
> 
> This message and its attachments may contain confidential or privileged
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and
> delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been
> modified, changed or falsified.
> Thank you.


From aj at suse.com  Fri Sep 25 05:42:00 2015
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 25 Sep 2015 07:42:00 +0200
Subject: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any
 point in using _() inpython-novaclient?
In-Reply-To: <55FEAF6F.80805@suse.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
 <55EF0334.3030606@linux.vnet.ibm.com> <55FC23C0.3040105@suse.com>
 <CAOyZ2aGT5O_K1nT6OKWd-TGkLoX0_H0XAz0KvPqErzSYexEDJg@mail.gmail.com>
 <55FEAF6F.80805@suse.com>
Message-ID: <5604DEA8.5090905@suse.com>

On 09/20/2015 03:06 PM, Andreas Jaeger wrote:
> On 09/20/2015 02:16 PM, Duncan Thomas wrote:
>> Certainly for cinder, and I suspect many other project, the openstack
>> client is a wrapper for python-cinderclient libraries, so if you want
>> translated exceptions then you need to translate python-cinderclient
>> too, unless I'm missing something?
>
> Ah - let's investigate some more here.
>
> Looking at python-cinderclient, I see translations only for the help
> strings of the client like in cinderclient/shell.py. Are there strings
> in the library of cinder that will be displayed to the user as well?

We discussed this on the i18n team list and will enable those repos if the 
teams send patches. The i18n team will prioritize which resources get 
translated.

Andreas
-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
        HRB 21284 (AG Nürnberg)
     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



From rahulsharmaait at gmail.com  Fri Sep 25 05:44:42 2015
From: rahulsharmaait at gmail.com (Rahul Sharma)
Date: Fri, 25 Sep 2015 01:44:42 -0400
Subject: [openstack-dev] [puppet] need help understanding specific puppet
	logs in openstack installation
Message-ID: <CAAw0FvT+j3tPewdr_xJuZK6jqRG46C67W-R_aOU9u+9Usbc4sg@mail.gmail.com>

Hi All,

I am trying the RDO installation of Openstack using puppet. It works fine
when I puppetize my controller node for the first time after fresh
installation of OS. There are no errors thrown at terminal during the
installation. However, if I try to rerun the puppet agent, it configures
something additional this time which is not configured during the first
run. Mentioned below are the logs it prints to the terminal:

Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed
'unconfined_u' to 'system_u'
Notice:
/Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]/enable:
enable changed 'false' to 'true'
Notice:
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[physext1:br-ex]/Vs_bridge[br-ex]/external_ids:
external_ids changed '' to 'bridge-id=br-ex'
Notice: /Stage[main]/Quickstack::Neutron::Controller/Exec[neutron-db-manage
upgrade]/returns: executed successfully
Notice: Finished catalog run in 36.61 seconds

From these logs, I am unable to figure out which of the parameters were not
configured when it was run for the first time so that I can try fixing them
in the script itself. I want to avoid the dependency of running puppet
multiple times. Looking at the *.conf files on the controller node doesn't
help much.

Can someone please help me out in understanding what the above mentioned
logs point to? Any pointers would be really helpful.
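
One generic way to narrow this down (not specific to RDO) is to save the output of consecutive agent runs and extract the resource paths that reported changes on the second run; a small stdlib-only sketch:

```python
import re

# Puppet prints "Notice: /Stage[main]/.../Resource[name]/property: ..." for
# every resource it actually modified; anything matching on a *second* run
# was not fully converged by the first run.
NOTICE = re.compile(r"^Notice: (/.*)/\w+: ", re.MULTILINE)

def changed_resources(log_text):
    """Return the set of resource paths that reported a property change."""
    return set(NOTICE.findall(log_text))

# Sample lines from the second run quoted above (re-joined onto one line each):
second_run = (
    "Notice: /File[/etc/sysconfig/iptables]/seluser: seluser changed "
    "'unconfined_u' to 'system_u'\n"
    "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/"
    "Service[ovs-cleanup-service]/enable: enable changed 'false' to 'true'\n"
    "Notice: Finished catalog run in 36.61 seconds\n"
)

for resource in sorted(changed_resources(second_run)):
    print(resource)
```

Resources that show up on a second run are the ones worth tracing back to their manifests (here, the SELinux context on the iptables file and the ovs-cleanup service's enable flag).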

Thanks.

*Rahul Sharma*
*MS in Computer Science, 2016*
College of Computer and Information Science, Northeastern University
Mobile:  801-706-7860
Email: rahulsharmaait at gmail.com
Linkedin: www.linkedin.com/in/rahulsharmaait
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/e4c79e7b/attachment.html>

From bpavlovic at mirantis.com  Fri Sep 25 05:55:44 2015
From: bpavlovic at mirantis.com (Boris Pavlovic)
Date: Thu, 24 Sep 2015 22:55:44 -0700
Subject: [openstack-dev] [OPNFV] [Functest] Tempest & Rally
In-Reply-To: <3E4E0D3116F17747BB7E466168535A7427229868@DEMUMBX002.nsn-intra.net>
References: <17168_1443102983_56040107_17168_166_1_56040104.5080602@orange.com>
 <3E4E0D3116F17747BB7E466168535A7427229868@DEMUMBX002.nsn-intra.net>
Message-ID: <CAD85om2cHYuJ7Q_ubD1UPZ6eFAMUdgY6Te=7KDu2LqP7Kn-ggA@mail.gmail.com>

Morgan,


You should add at least:

sla:
  failure_rate:
    max: 0

Otherwise rally will pass 100% no matter what is happening.
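
In a task file, the sla block attaches to each scenario entry; a hedged sketch using one of Rally's stock scenarios (the scenario name and runner values are illustrative, not taken from the Functest suites):

```yaml
---
Authenticate.keystone:
  -
    runner:
      type: "constant"
      times: 100
      concurrency: 10
    sla:
      failure_rate:
        max: 0
```

With failure_rate.max set to 0, any failed iteration fails the whole scenario instead of silently passing.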


Best regards,
Boris Pavlovic

On Thu, Sep 24, 2015 at 10:29 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
viktor.tikkanen at nokia.com> wrote:

> Hi Morgan
>
> and thank you for the overview.
>
> I'm now waiting for the POD#2 VPN profile (will be ready soon). We will
> try then to figure out what OpenStack/tempest/rally configuration changes
> are needed in order to get rid of those test failures.
>
> I suppose that most of the problems (like "Multiple possible networks
> found" etc.) are relatively easy to solve.
>
> BTW, since tempest is being currently developed in "branchless" mode
> (without release specific stable versions), do we have some common
> understanding/requirements how "dynamically" Functest should use its code?
>
> For example, config_functest.py seems to contain routines for
> cloning/installing rally (and indirectly tempest) code, does it mean that
> the code will be cloned/installed at the time when the test set is executed
> for the first time? (I'm just wondering if it is necessary or not to
> "freeze" somehow used code for each OPNFV release to make sure that it will
> remain compatible and that test results will be comparable between
> different OPNFV setups).
>
> -Viktor
>
> > -----Original Message-----
> > From: EXT morgan.richomme at orange.com [mailto:morgan.richomme at orange.com]
> > Sent: Thursday, September 24, 2015 4:56 PM
> > To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> > Cc: Jose Lausuch
> > Subject: [OPNFV] [Functest] Tempest & Rally
> >
> > Hi,
> >
> > I was wondering whether you could have a look at Rally/Tempest tests we
> > automatically launch in Functest.
> > We have still some errors and I assume most of them are due to
> > misconfiguration and/or quota ...
> > With Jose, we planned to have a look after SR0 but we do not have much
> > time and we are not fully skilled (even if we progressed a little bit:))
> >
> > If you could have a look and give your feedback, it would be very
> > helpful, we could discuss it during an IRC weekly meeting
> > In Arno we did not use the SLA criteria, that is also something we could
> > do for the B Release
> >
> > for instance if you look at
> > https://build.opnfv.org/ci/view/functest/job/functest-foreman-
> > master/19/consoleText
> >
> > you will see rally and Tempest log
> >
> > Rally scenario are a compilation of default Rally scenario played one
> > after the other and can be found in
> >
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> >
> > the Rally artifacts are also pushed into the artifact server
> > http://artifacts.opnfv.org/
> > e.g.
> > http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-
> > 07/results/rally/opnfv-authenticate.html
> > look for 09-23 to get Rally json/html files and tempest.conf
> >
> > thanks
> >
> > Morgan
> >
> >
> >
> __________________________________________________________________________
> > _______________________________________________
> >
> > Ce message et ses pieces jointes peuvent contenir des informations
> > confidentielles ou privilegiees et ne doivent donc
> > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
> > recu ce message par erreur, veuillez le signaler
> > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
> > electroniques etant susceptibles d'alteration,
> > Orange decline toute responsabilite si ce message a ete altere, deforme
> ou
> > falsifie. Merci.
> >
> > This message and its attachments may contain confidential or privileged
> > information that may be protected by law;
> > they should not be distributed, used or copied without authorisation.
> > If you have received this email in error, please notify the sender and
> > delete this message and its attachments.
> > As emails may be altered, Orange is not liable for messages that have
> been
> > modified, changed or falsified.
> > Thank you.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150924/64dcf896/attachment-0001.html>

From tony at bakeyournoodle.com  Fri Sep 25 06:33:45 2015
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Fri, 25 Sep 2015 16:33:45 +1000
Subject: [openstack-dev] [all][elections] Candidate proposals for TC
 (Technical Committee) positions are now open
Message-ID: <20150925063344.GB11465@thor.bakeyournoodle.com>

Candidate proposals for the Technical Committee positions (6 positions) are now
open and will remain open until October 1, 05:59 UTC.

All candidacies must be submitted as a text file to the openstack/election
repository as explained on the wiki[0].

Candidates for the Technical Committee positions: any Foundation individual
member can propose their candidacy for an available, directly-elected TC seat
(except the seven TC members who were elected for a one-year seat in April[1]).

The election will be held from October 2nd through 23:59 UTC on October 09,
2015. The electorate comprises the Foundation individual members who are also
committers to one of the official programs' projects[2] over the Kilo-Liberty
timeframe (September 18, 2014 06:00 UTC to September 18, 2015 05:59 UTC), as
well as the extra ATCs acknowledged by the TC[3].

Please see the wikipage[0] for additional details about this election. Please
find below the timeline:

Nominations open          @ now
Nominations close         @ 2015-10-01 05:59:00 UTC
Election open             @ 2015-10-02 ~16:00:00 UTC
Election close            @ 2015-10-09 23:59:59 UTC


If you have any questions, please voice them either on the mailing
list or to the election officials[4].

Thank you, and we look forward to reading your candidate proposals,

[0] https://wiki.openstack.org/wiki/TC_Elections_September/October_2015
[1] https://wiki.openstack.org/wiki/TC_Elections_April_2015#Results
[2] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections
    Note the tag for this repo, sept-2015-elections.
[3] http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs?id=sept-2015-elections
[4] Tony's email: tony at bakeyournoodle dot com
    Tristan's email: tdecacqu at redhat dot com

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/cc2af45f/attachment.pgp>

From nmakhotkin at mirantis.com  Fri Sep 25 06:37:55 2015
From: nmakhotkin at mirantis.com (Nikolay Makhotkin)
Date: Fri, 25 Sep 2015 09:37:55 +0300
Subject: [openstack-dev] [mistral] Define better terms for WAITING and
 DELAYED states
In-Reply-To: <5CD93D0F-9874-4640-BE2C-156AC9B80670@mirantis.com>
References: <156DBF58-7BBB-49D4-A229-DD1B96896532@mirantis.com>
 <55FC3229.5020407@alcatel-lucent.com>
 <5CD93D0F-9874-4640-BE2C-156AC9B80670@mirantis.com>
Message-ID: <CACarOJbKfN3j4Ne1URwakCao6JWyZ6p243e5fBbOWxUtdN5=ZA@mail.gmail.com>

Thank you, Robert, for the detailed explanation!



>    - RUNNING_DELAYED - a substate of RUNNING and it has exactly this
>    meaning: it's generally running but delayed till some later time.
>    - WAITING - it is not a substate of RUNNING and hence it means a task
>    has not started yet
>
>
Yes, I agree, we need to introduce a RUNNING_DELAYED state. It reflects that
the task is already running but delayed for a certain amount of time.

So, we can proceed with this right in the Liberty cycle.
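A minimal sketch of the distinction being agreed on here (the state names
follow the discussion above; the surrounding code is illustrative, not
Mistral's actual implementation):

```python
# Hypothetical sketch: RUNNING_DELAYED is a substate of RUNNING,
# while WAITING means the task has not started yet.

class State:
    IDLE = 'IDLE'
    WAITING = 'WAITING'                # task has not started yet
    RUNNING = 'RUNNING'
    RUNNING_DELAYED = 'DELAYED'        # running, but postponed until later
    SUCCESS = 'SUCCESS'
    ERROR = 'ERROR'

# RUNNING and its substates
RUNNING_STATES = {State.RUNNING, State.RUNNING_DELAYED}

def is_running(state):
    """True for RUNNING and any of its substates."""
    return state in RUNNING_STATES
```

Under this model a delayed task still counts as running, which is exactly
the semantic difference from WAITING that Robert described.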

-- 
Best Regards,
Nikolay
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/9f5552a8/attachment.html>

From mrunge at redhat.com  Fri Sep 25 07:02:02 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Fri, 25 Sep 2015 09:02:02 +0200
Subject: [openstack-dev] [all][stable][release] 2015.1.2
In-Reply-To: <CAGi==UXRm7mARJecBT69qqQMfOycdx_crVf-OCD_x+O9z2J2nw@mail.gmail.com>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
 <20150924073107.GF24386@sofja.berg.ol>
 <CAGi==UXRm7mARJecBT69qqQMfOycdx_crVf-OCD_x+O9z2J2nw@mail.gmail.com>
Message-ID: <5604F16A.6010807@redhat.com>

On 25/09/15 00:17, Alan Pevec wrote:
>> For Horizon, it would make sense to move this a week back. We discovered
>> a few issues in Liberty, which are present in current kilo, too. I'd
>> love to cherry-pick a few of them to kilo.
> 
> What are LP bug#s ? Are you saying that fixes are still work in
> progress on master and not ready for backporting yet?
> 
Current backports:
https://review.openstack.org/#/q/status:open+project:openstack/horizon+branch:stable/kilo,n,z

They are tagged with the LP bugs currently under review. As you can see,
they are making (slow) progress.

Matthias


From skinjo at redhat.com  Fri Sep 25 07:04:14 2015
From: skinjo at redhat.com (Shinobu Kinjo)
Date: Fri, 25 Sep 2015 03:04:14 -0400 (EDT)
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
Message-ID: <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>

So here are some questions from my side.
Just questions.


 1.What is the biggest advantage compared to others such as RBD?
  We should be able to implement what you are going to do in an
  existing module, shouldn't we?

 2.What are you going to focus on with a new implementation?
  It seems to be to use NFS in front of that implementation
  more transparently.

 3.What are you thinking of regarding integration with OpenStack
  using a new implementation?
  Since it's going to be a new kind of implementation, there
  should be a different architecture.

 4.Is this implementation intended for OneStack integration 
  mainly?

Since the velocity of OpenStack feature expansion is much higher than
it used to be, it's much more important to think about performance.

Is a new implementation also going to improve Ceph integration
with OpenStack system?
 
Thank you so much for your explanation in advance.

Shinobu 

----- Original Message -----
From: "John Spray" <jspray at redhat.com>
To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
Sent: Thursday, September 24, 2015 10:49:17 PM
Subject: [openstack-dev] [Manila] CephFS native driver

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(CephFS does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory; the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has an
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.
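To make the flow described above concrete, here is a rough, hypothetical
sketch of the create/delete path: shares are directories in a cluster-wide
filesystem, deletion is a move into a trash directory, and the export
location carries the mon addresses, share path, and auth token. The class
and method names are assumptions for illustration, not the actual driver
code:

```python
# Illustrative sketch only -- not the actual Manila/CephFS driver.

class CephFSNativeDriverSketch:
    def __init__(self, mon_addresses):
        self.mon_addresses = mon_addresses   # e.g. ['10.0.0.1:6789']
        self.shares = {}                     # share_id -> directory path
        self.trash = []                      # paths awaiting background cleanup

    def create_share(self, share_id):
        # A share is just a directory; clients are issued a credential
        # restricted to that subpath.
        path = '/volumes/%s' % share_id
        token = 'auth-token-for-%s' % share_id   # hypothetical per-share token
        self.shares[share_id] = path
        # The export location is mon IPs + path; the token gates access,
        # since Ceph does not do IP-based access control.
        return {'mons': self.mon_addresses, 'path': path, 'token': token}

    def delete_share(self, share_id):
        # Deletion is a cheap move into 'trash'; a background task would
        # later pay the "rm -rf" cost of actually removing the tree.
        self.trash.append(self.shares.pop(share_id))
```

The interesting design point is that create/delete stay O(1) metadata
operations, and the expensive recursive removal is deferred.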

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From thierry at openstack.org  Fri Sep 25 07:57:05 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 25 Sep 2015 09:57:05 +0200
Subject: [openstack-dev] [Neutron] [Ceilometer] Liberty RC1 available
Message-ID: <5604FE51.8050109@openstack.org>

Hello everyone,

Ceilometer and Neutron just produced their first release candidate for
the end of the Liberty cycle. The RC1 tarballs, as well as a list of
last-minute features and fixed bugs since liberty-1 are available at:

https://launchpad.net/ceilometer/liberty/liberty-rc1
https://launchpad.net/neutron/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/ceilometer/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/ceilometer/+filebug
or
https://bugs.launchpad.net/neutron/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Ceilometer and Neutron are now
officially open for Mitaka development, so feature freeze restrictions
no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)


From amaretskiy at mirantis.com  Fri Sep 25 08:06:49 2015
From: amaretskiy at mirantis.com (Aleksandr Maretskiy)
Date: Fri, 25 Sep 2015 11:06:49 +0300
Subject: [openstack-dev] [openstack-operators][Rally] Rally plugins
 reference is available
In-Reply-To: <CAD85om30QcMUHi4vB8BjD0dUf_h=ZZNjKo-w72+ffKMC9GRTAg@mail.gmail.com>
References: <CAD85om30QcMUHi4vB8BjD0dUf_h=ZZNjKo-w72+ffKMC9GRTAg@mail.gmail.com>
Message-ID: <CA+NpHya5QDBrX0ECa=0qviDZBx1Bc0aotZ-VaArGtOzOcHgoVw@mail.gmail.com>

Cool!

On Fri, Sep 25, 2015 at 3:05 AM, Boris Pavlovic <boris at pavlovic.me> wrote:

> Hi stackers,
>
> As you know, Rally test cases are created as a mix of plugins.
>
> At this point of time we have more than 200 plugins for almost all
> OpenStack projects.
> Previously you had to analyze plugin code or use the "rally plugin
> find/list" commands to find the plugins you need, which was a pain in
> the neck.
>
> So finally we have auto generated plugin reference:
> https://rally.readthedocs.org/en/latest/plugin/plugin_reference.html
>
>
> Best regards,
> Boris Pavlovic
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/2bc2fd54/attachment.html>

From thomas.morin at orange.com  Fri Sep 25 08:32:12 2015
From: thomas.morin at orange.com (thomas.morin at orange.com)
Date: Fri, 25 Sep 2015 10:32:12 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension access
	agent methods ?
Message-ID: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>

Hi everyone,

(TL;DR: we would like an L2 agent extension to be able to call methods 
on the agent class, e.g. OVSAgent)

In the networking-bgpvpn project, we need the reference driver to 
interact with the ML2 openvswitch agent with new RPCs to allow 
exchanging information with the BGP VPN implementation running on the 
compute nodes. We also need the OVS agent to setup specific things on 
the OVS bridges for MPLS traffic.

To extend the agent behavior, we currently create a new agent by
mimicking the main() in ovs_neutron_agent.py, but instead of
instantiating OVSAgent, we instantiate a class that overloads the
OVSAgent class with the additional behavior we need [1].

This is really not the ideal way of extending the agent, and we would 
prefer using the L2 agent extension framework [2].

Using the L2 agent extension framework would work, but only partially:
it would easily allow us to register our RPC consumers, but it would
not let us access some data structures/methods of the agent that we
need to use: setup_entry_for_arp_reply and local_vlan_map, and access
to the OVSBridge objects to manipulate OVS ports.

I've filed an RFE bug to track this issue [5].

We would like something like one of the following:
1) augment the L2 agent extension interface (AgentCoreResourceExtension) 
to give access to the agent object (and thus let the extension call 
methods of the agent) by giving the agent as a parameter of the 
initialize method [4]
2) augment the L2 agent extension interface (AgentCoreResourceExtension) 
to give access to the agent object (and thus let the extension call 
methods of the agent) by giving the agent as a parameter of a new 
setAgent method
3) augment the L2 agent extension interface (AgentCoreResourceExtension) 
to give access only to specific/chosen methods on the agent object, for 
instance by giving a dict as a parameter of the initialize method [4], 
whose keys would be method names, and values would be pointer to these 
methods on the agent object
4) define a new interface with methods to access things inside the 
agent, this interface would be implemented by an object instantiated by 
the agent, and that the agent would pass to the extension manager, thus 
allowing the extension manager to pass the object to an extension 
through the initialize method of AgentCoreResourceExtension [4]
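A minimal sketch of what option (4) could look like: the agent builds a
narrow API object and hands it to the extension manager, so extensions
never touch the agent directly. All names here are illustrative, not
actual Neutron code:

```python
# Hypothetical sketch of option (4): a stable facade over selected
# agent internals, passed to extensions via initialize().

class OVSAgentExtensionAPI:
    """Narrow, well-defined API exposed to L2 agent extensions."""

    def __init__(self, agent):
        self._agent = agent

    def request_int_br(self):
        # Expose only what extensions may use; internal agent
        # refactoring stays hidden behind this method.
        return self._agent.int_br


class ExampleExtension:
    """Sketch of an AgentCoreResourceExtension-style extension."""

    def initialize(self, connection, driver_type, agent_api):
        # The extension receives the facade, not the agent itself.
        self.agent_api = agent_api
        self.int_br = agent_api.request_int_br()
```

The benefit over options (1)-(3) is that the contract between agent and
extensions is the facade's method list, which can be kept stable while
the agent's internals move around.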

Any feedback on these ideas...?
Of course any other idea is welcome...

For the sake of triggering reaction, the question could be rephrased as: 
if we submit a change doing (1) above, would it have a reasonable chance 
of merging?

-Thomas

[1] 
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
[2] https://review.openstack.org/#/c/195439/
[3] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
[4] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
[5] https://bugs.launchpad.net/neutron/+bug/1499637



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/b3a011e9/attachment.html>

From thierry at openstack.org  Fri Sep 25 08:39:06 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 25 Sep 2015 10:39:06 +0200
Subject: [openstack-dev] [ironic] Ironic is open for Mitaka development
In-Reply-To: <20150925004049.GD14957@jimrollenhagen.com>
References: <20150925004049.GD14957@jimrollenhagen.com>
Message-ID: <5605082A.2090907@openstack.org>

Jim Rollenhagen wrote:
> I've proposed a patch to release Ironic 4.2.0, and we will be cutting
> the stable/liberty branch from the same SHA:
> https://review.openstack.org/#/c/227582/

It's now released at:
https://launchpad.net/ironic/liberty/4.2.0

and the Liberty release branch was cut at:
http://git.openstack.org/cgit/openstack/ironic/log/?h=stable/liberty

> This means Ironic is now open for Mitaka development; commit away!
> [...]

Yay!

-- 
Thierry Carrez (ttx)


From jspray at redhat.com  Fri Sep 25 08:51:36 2015
From: jspray at redhat.com (John Spray)
Date: Fri, 25 Sep 2015 09:51:36 +0100
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
Message-ID: <CALe9h7fhTL2zUZZM_SGCPVnBg0EZdF88ikfSqRLyOrUh5vHukA@mail.gmail.com>

On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo <skinjo at redhat.com> wrote:
> So here are questions from my side.
> Just question.
>
>
>  1.What is the biggest advantage compared to others such as RBD?
>   We should be able to implement what you are going to do in an
>   existing module, shouldn't we?

I guess you mean compared to using a local filesystem on top of RBD,
and exporting it over NFS?  The main distinction here is that for
native CephFS clients, they get a shared filesystem where all the
clients can talk to all the Ceph OSDs directly, and avoid the
potential bottleneck of an NFS->local fs->RBD server.

Workloads requiring a local filesystem would probably continue to map
a cinder block device and use that.  The Manila driver is intended for
use cases that require a shared filesystem.

>  2.What are you going to focus on with a new implementation?
>   It seems to be to use NFS in front of that implementation
>   more transparently.

The goal here is to make cephfs accessible to people by making it easy
to provision it for their applications, just like Manila in general.
The motivation for putting an NFS layer in front of CephFS is to make
it easier for people to adopt, because they won't need to install any
ceph-specific code in their guests.  It will also be easier to
support, because any ceph client bugfixes would not need to be
installed within guests (if we assume existing nfs clients are bug
free :-))

>  3.What are you thinking of regarding integration with OpenStack
>   using a new implementation?
>   Since it's going to be a new kind of implementation, there
>   should be a different architecture.

Not sure I understand this question?

>  4.Is this implementation intended for OneStack integration
>   mainly?

Nope (I had not heard of onestack before).

> Since the velocity of OpenStack feature expansion is much higher than
> it used to be, it's much more important to think about performance.

> Is a new implementation also going to improve Ceph integration
> with OpenStack system?

This piece of work is specifically about Manila; general improvements
in Ceph integration would be a different topic.

Thanks,
John

>
> Thank you so much for your explanation in advance.
>
> Shinobu
>
> ----- Original Message -----
> From: "John Spray" <jspray at redhat.com>
> To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
> Sent: Thursday, September 24, 2015 10:49:17 PM
> Subject: [openstack-dev] [Manila] CephFS native driver
>
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.
>
> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!
>
> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.
>
> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> enforce the separation between shares on the OSD side.
>
> However, for many people the ultimate access control solution will be
> to use a NFS gateway in front of their CephFS filesystem: it is
> expected that an NFS-enabled cephfs driver will follow this native
> driver in the not-too-distant future.
>
> This will be my first openstack contribution, so please bear with me
> while I come up to speed with the submission process.  I'll also be in
> Tokyo for the summit next month, so I hope to meet other interested
> parties there.
>
> All the best,
> John
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From blak111 at gmail.com  Fri Sep 25 08:57:28 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Fri, 25 Sep 2015 01:57:28 -0700
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
Message-ID: <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>

I think the 4th of the options you proposed would be the best. We don't
want to give extensions direct access to the agent object, or else we will
run the risk of breaking extensions all the time during any kind of
reorganization or refactoring. Having a well-defined API in between will
give us flexibility to move things around.

On Fri, Sep 25, 2015 at 1:32 AM, <thomas.morin at orange.com> wrote:

> Hi everyone,
>
> (TL;DR: we would like an L2 agent extension to be able to call methods on
> the agent class, e.g. OVSAgent)
>
> In the networking-bgpvpn project, we need the reference driver to interact
> with the ML2 openvswitch agent with new RPCs to allow exchanging
> information with the BGP VPN implementation running on the compute nodes.
> We also need the OVS agent to setup specific things on the OVS bridges for
> MPLS traffic.
>
> To extend the agent behavior, we currently create a new agent by mimicking
> the main() in ovs_neutron_agent.py, but instead of instantiating OVSAgent,
> we instantiate a class that overloads the OVSAgent class with the
> additional behavior we need [1].
>
> This is really not the ideal way of extending the agent, and we would
> prefer using the L2 agent extension framework [2].
>
> Using the L2 agent extension framework would work, but only partially: it
> would easily allow us to register our RPC consumers, but it would not let
> us access some data structures/methods of the agent that we need to use:
> setup_entry_for_arp_reply and local_vlan_map, and access to the OVSBridge
> objects to manipulate OVS ports.
>
> I've filed an RFE bug to track this issue [5].
>
> We would like something like one of the following:
> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
> to give access to the agent object (and thus let the extension call methods
> of the agent) by giving the agent as a parameter of the initialize method
> [4]
> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
> to give access to the agent object (and thus let the extension call methods
> of the agent) by giving the agent as a parameter of a new setAgent method
> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
> to give access only to specific/chosen methods on the agent object, for
> instance by giving a dict as a parameter of the initialize method [4],
> whose keys would be method names, and values would be pointer to these
> methods on the agent object
> 4) define a new interface with methods to access things inside the agent,
> this interface would be implemented by an object instantiated by the agent,
> and that the agent would pass to the extension manager, thus allowing the
> extension manager to pass the object to an extension through the
> initialize method of AgentCoreResourceExtension [4]
>
> Any feedback on these ideas...?
> Of course any other idea is welcome...
>
> For the sake of triggering reaction, the question could be rephrased as:
> if we submit a change doing (1) above, would it have a reasonable chance of
> merging ?
>
> -Thomas
>
> [1]
> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
> [2] https://review.openstack.org/#/c/195439/
> [3]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
> [4]
> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>
> _________________________________________________________________________________________________________________________
>
> Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.
>
> This message and its attachments may contain confidential or privileged information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
> Thank you.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/33772e5c/attachment.html>

From blak111 at gmail.com  Fri Sep 25 08:59:08 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Fri, 25 Sep 2015 01:59:08 -0700
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
Message-ID: <CAO_F6JMshR58KAKLeiS21FV4m7EBWtDX6kLdcirAeBfkPNEJug@mail.gmail.com>

Sorry, that should have said, "We don't want to give extensions direct
access to the agent object..."

On Fri, Sep 25, 2015 at 1:57 AM, Kevin Benton <blak111 at gmail.com> wrote:

> I think the 4th of the options you proposed would be the best. We don't
> want to give agents direct access to the agent object or else we will run
> the risk of breaking extensions all of the time during any kind of
> reorganization or refactoring. Having a well defined API in between will
> give us flexibility to move things around.
>
> On Fri, Sep 25, 2015 at 1:32 AM, <thomas.morin at orange.com> wrote:
>
>> Hi everyone,
>>
>> (TL;DR: we would like an L2 agent extension to be able to call methods on
>> the agent class, e.g. OVSAgent)
>>
>> In the networking-bgpvpn project, we need the reference driver to
>> interact with the ML2 openvswitch agent with new RPCs to allow exchanging
>> information with the BGP VPN implementation running on the compute nodes.
>> We also need the OVS agent to setup specific things on the OVS bridges for
>> MPLS traffic.
>>
>> To extend the agent behavior, we currently create a new agent by
>> mimicking the main() in ovs_neutron_agent.py, but instead of
>> instantiating OVSAgent, we instantiate a class that overloads the
>> OVSAgent class with the additional behavior we need [1].
>>
>> This is really not the ideal way of extending the agent, and we would
>> prefer using the L2 agent extension framework [2].
>>
>> Using the L2 agent extension framework would work, but only partially: it
>> would easily allow us to register our RPC consumers, but would not let us
>> access some data structures/methods of the agent that we need to use:
>> setup_entry_for_arp_reply and local_vlan_map, access to the OVSBridge
>> objects to manipulate OVS ports.
>>
>> I've filed an RFE bug to track this issue [5].
>>
>> We would like something like one of the following:
>> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of the initialize method
>> [4]
>> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of a new setAgent method
>> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access only to specific/chosen methods on the agent object, for
>> instance by giving a dict as a parameter of the initialize method [4],
>> whose keys would be method names, and values would be pointers to these
>> methods on the agent object
>> 4) define a new interface with methods to access things inside the agent,
>> this interface would be implemented by an object instantiated by the agent,
>> and that the agent would pass to the extension manager, thus allowing the
>> extension manager to pass the object to an extension through the
>> initialize method of AgentCoreResourceExtension [4]
>>
>> Any feedback on these ideas...?
>> Of course any other idea is welcome...
>>
>> For the sake of triggering reaction, the question could be rephrased as:
>> if we submit a change doing (1) above, would it have a reasonable chance of
>> merging ?
>>
>> -Thomas
>>
>> [1]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
>> [2] https://review.openstack.org/#/c/195439/
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/4cc8ea3b/attachment.html>

From flavio at redhat.com  Fri Sep 25 09:02:25 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Fri, 25 Sep 2015 11:02:25 +0200
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
 os-image-url with glance client
In-Reply-To: <1443134649448.43790@RACKSPACE.COM>
References: <1443134649448.43790@RACKSPACE.COM>
Message-ID: <20150925090225.GP26372@redhat.com>

On 24/09/15 22:44 +0000, Devdatta Kulkarni wrote:
>Hi, Glance team,
>
>
>In Solum, we use Glance to store the Docker images that we create for applications.
>We use the Glance client internally to upload these images. Until recently, 'glance
>image-create' with only a token had been working for us (in devstack). Today, I
>started noticing that glance image-create with just a token is not working anymore.
>It is also not working when os-auth-token and os-image-url are passed in. According
>to the documentation (http://docs.openstack.org/developer/python-glanceclient/),
>passing a token and image-url should work. The client, which I have installed from
>master, asks for a username (and a password, if a username is specified).
>
>
>Solum does not have access to the end-user's password, so we need the ability to
>interact with Glance without providing a password, as had been working until
>recently.
>
>
>I investigated the issue a bit and have filed a bug with my findings.
>
>https://bugs.launchpad.net/python-glanceclient/+bug/1499540
>
>
>Can someone help with resolving this issue?
>

This should fix your issue and we'll backport it to Liberty.

https://review.openstack.org/#/c/227723/

Thanks for reporting,
Flavio


-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/98649eb5/attachment.pgp>

From mangelajo at redhat.com  Fri Sep 25 09:12:45 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Fri, 25 Sep 2015 11:12:45 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
Message-ID: <5605100D.4070506@redhat.com>

I hadn't finished reading the thread, and was thinking about exactly the same thing.

IMHO the 4th option is the best: it lets us provide an interface whose
stability is controlled, where we can deprecate things in a controlled
manner, and where we know what we support and what we don't.

Do you have a rough idea of what operations you may need to do?

Please bear in mind that the extension interface will be available from
different agent types (OVS, SR-IOV, [eventually LB]), so the interface
you're talking about could also serve as a translation driver for the
agents (where translation is possible). I totally understand that most
extensions are bound to a specific agent, and we must be able to identify
exactly which agent we're serving.
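To make option 4 concrete, here is a minimal sketch of such an interface
object; all class and method names below are hypothetical illustrations,
not actual Neutron code:

```python
# Hypothetical sketch of option 4: the agent builds a small, explicit API
# object and hands it to each extension via the extension manager, instead
# of exposing the whole agent object.

class AgentExtensionAPI(object):
    """The only surface an extension may touch; kept deliberately small
    so the agent internals can be refactored without breaking extensions."""

    def __init__(self, agent):
        self._agent = agent  # never handed out directly

    def request_int_br(self):
        # Expose specific resources (e.g. an OVS bridge), not the agent.
        return self._agent.int_br


class SampleExtension(object):
    """An extension stores the API object and calls only its methods."""

    def initialize(self, connection, driver_type, agent_api):
        self.agent_api = agent_api


# The agent side would then do roughly:
class FakeAgent(object):
    int_br = "br-int"  # stand-in for the real OVSBridge object

api = AgentExtensionAPI(FakeAgent())
ext = SampleExtension()
ext.initialize(connection=None, driver_type="ovs", agent_api=api)
assert ext.agent_api.request_int_br() == "br-int"
```

Because extensions only ever see AgentExtensionAPI, renaming or moving
things inside the agent stays invisible to them, which is exactly the
stability argument made above.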


Best regards,
Miguel Ángel Ajo

Kevin Benton wrote:
> I think the 4th of the options you proposed would be the best. We don't
> want to give agents direct access to the agent object or else we will run
> the risk of breaking extensions all of the time during any kind of
> reorganization or refactoring. Having a well defined API in between will
> give us flexibility to move things around.
>
> On Fri, Sep 25, 2015 at 1:32 AM,<thomas.morin at orange.com>  wrote:
>
>> Hi everyone,
>>
>> (TL;DR: we would like an L2 agent extension to be able to call methods on
>> the agent class, e.g. OVSAgent)
>>
>> In the networking-bgpvpn project, we need the reference driver to interact
>> with the ML2 openvswitch agent with new RPCs to allow exchanging
>> information with the BGP VPN implementation running on the compute nodes.
>> We also need the OVS agent to setup specific things on the OVS bridges for
>> MPLS traffic.
>>
>> To extend the agent behavior, we currently create a new agent by mimicking
>> the main() in ovs_neutron_agent.py, but instead of instantiating
>> OVSAgent, we instantiate a class that overloads the OVSAgent class with
>> the additional behavior we need [1].
>>
>> This is really not the ideal way of extending the agent, and we would
>> prefer using the L2 agent extension framework [2].
>>
>> Using the L2 agent extension framework would work, but only partially: it
>> would easily allow us to register our RPC consumers, but would not let us
>> access some data structures/methods of the agent that we need to use:
>> setup_entry_for_arp_reply and local_vlan_map, access to the OVSBridge
>> objects to manipulate OVS ports.
>>
>> I've filed an RFE bug to track this issue [5].
>>
>> We would like something like one of the following:
>> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of the initialize method
>> [4]
>> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of a new setAgent method
>> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access only to specific/chosen methods on the agent object, for
>> instance by giving a dict as a parameter of the initialize method [4],
>> whose keys would be method names, and values would be pointers to these
>> methods on the agent object
>> 4) define a new interface with methods to access things inside the agent,
>> this interface would be implemented by an object instantiated by the agent,
>> and that the agent would pass to the extension manager, thus allowing the
>> extension manager to pass the object to an extension through the
>> initialize method of AgentCoreResourceExtension [4]
>>
>> Any feedback on these ideas...?
>> Of course any other idea is welcome...
>>
>> For the sake of triggering reaction, the question could be rephrased as:
>> if we submit a change doing (1) above, would it have a reasonable chance of
>> merging ?
>>
>> -Thomas
>>
>> [1]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
>> [2] https://review.openstack.org/#/c/195439/
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From skinjo at redhat.com  Fri Sep 25 09:16:14 2015
From: skinjo at redhat.com (Shinobu Kinjo)
Date: Fri, 25 Sep 2015 05:16:14 -0400 (EDT)
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <CALe9h7fhTL2zUZZM_SGCPVnBg0EZdF88ikfSqRLyOrUh5vHukA@mail.gmail.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
 <CALe9h7fhTL2zUZZM_SGCPVnBg0EZdF88ikfSqRLyOrUh5vHukA@mail.gmail.com>
Message-ID: <1268826320.21692761.1443172574805.JavaMail.zimbra@redhat.com>

Thank you for your reply.

> The main distinction here is that for
> native CephFS clients, they get a shared filesystem where all the
> clients can talk to all the Ceph OSDs directly, and avoid the
> potential bottleneck of an NFS->local fs->RBD server.

As you know, each path from clients to RADOS is:

 1) CephFS
  [Apps] -> [VFS] -> [Kernel Driver] -> [Ceph-Kernel Client]
   -> [MON], [MDS], [OSD]

 2) RBD
  [Apps] -> [VFS] -> [librbd] -> [librados] -> [MON], [OSD]

Considering the above, there could be more of a bottleneck in 1) than
in 2), I think.

What do you think?


> It will also be easier to support, because any ceph client 
> bugfixes would not need to be installed within guests 

Yeah, you're right.
There have been many unexpected errors coming from assert -;

Clients should not be affected by those kinds of bugs, as
much as possible.

That's just ideally.


>  3.What are you thinking of integration with OpenStack using
>   a new implementation?
>   Since it's going to be a new kind of thing, there should be a
>   different architecture.

Sorry, that was just too ambiguous. Frankly, my question was: how are
you going to implement such a new feature?

Make sense?


>  4.Is this implementation intended for OneStack integration
>   mainly?

Yes, that's just my typo -;

 OneStack -> OpenStack


> This piece of work is specifically about Manila; general improvements
> in Ceph integration would be a different topic.

That's interesting to me.

Shinobu

----- Original Message -----
From: "John Spray" <jspray at redhat.com>
To: "Shinobu Kinjo" <skinjo at redhat.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Friday, September 25, 2015 5:51:36 PM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo <skinjo at redhat.com> wrote:
> So here are questions from my side.
> Just question.
>
>
>  1.What is the biggest advantage compared to others such as RBD?
>   We should be able to implement what you are going to do in an
>   existing module, shouldn't we?

I guess you mean compared to using a local filesystem on top of RBD,
and exporting it over NFS?  The main distinction here is that for
native CephFS clients, they get a shared filesystem where all the
clients can talk to all the Ceph OSDs directly, and avoid the
potential bottleneck of an NFS->local fs->RBD server.

Workloads requiring a local filesystem would probably continue to map
a cinder block device and use that.  The Manila driver is intended for
use cases that require a shared filesystem.

>  2.What are you going to focus on with a new implementation?
>   It seems to be to use NFS in front of that implementation
>   with more transparently.

The goal here is to make cephfs accessible to people by making it easy
to provision it for their applications, just like Manila in general.
The motivation for putting an NFS layer in front of CephFS is to make
it easier for people to adopt, because they won't need to install any
ceph-specific code in their guests.  It will also be easier to
support, because any ceph client bugfixes would not need to be
installed within guests (if we assume existing nfs clients are bug
free :-))

>  3.What are you thinking of integration with OpenStack using
>   a new implementation?
>   Since it's going to be a new kind of thing, there should be a
>   different architecture.

Not sure I understand this question?

>  4.Is this implementation intended for OneStack integration
>   mainly?

Nope (I had not heard of onestack before).

> Since velocity of OpenStack feature expansion is much more than
> it used to be, it's much more important to think of performance.

> Is a new implementation also going to improve Ceph integration
> with OpenStack system?

This piece of work is specifically about Manila; general improvements
in Ceph integration would be a different topic.

Thanks,
John

>
> Thank you so much for your explanation in advance.
>
> Shinobu
>
> ----- Original Message -----
> From: "John Spray" <jspray at redhat.com>
> To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
> Sent: Thursday, September 24, 2015 10:49:17 PM
> Subject: [openstack-dev] [Manila] CephFS native driver
>
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.
>
> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
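As a rough sketch, assembling such an export location might look like the
following; the string format and values here are illustrative assumptions,
not the driver's actual output:

```python
# Illustrative only: combine the Ceph monitor addresses and the share's
# path inside the CephFS filesystem into one export location string.
# The authentication token would be returned separately so the client
# can authenticate when mounting.

def build_export_location(mon_addrs, share_path):
    return "{}:{}".format(",".join(mon_addrs), share_path)

location = build_export_location(
    ["192.168.0.1:6789", "192.168.0.2:6789"], "/volumes/share-1234")
print(location)  # 192.168.0.1:6789,192.168.0.2:6789:/volumes/share-1234
```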
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!
>
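The "move into a trash directory" deletion described above can be sketched
in a few lines of plain Python; the paths and naming scheme are
illustrative only, not the driver's actual layout:

```python
import os
import tempfile
import uuid

# Illustrative only: "deleting" a share is a cheap rename into a trash
# directory, so it appears gone immediately; a background task would
# later pay the real "rm -rf" cost of removing the directory tree.
root = tempfile.mkdtemp()
share = os.path.join(root, "share-abc")
trash = os.path.join(root, ".trash")
os.makedirs(share)
os.makedirs(trash)

os.rename(share, os.path.join(trash, uuid.uuid4().hex))  # near-instant

assert not os.path.exists(share)    # gone from the client's point of view
assert len(os.listdir(trash)) == 1  # but the data still awaits cleanup
```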
> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.
>
> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> enforce the separation between shares on the OSD side.
>
> However, for many people the ultimate access control solution will be
> to use a NFS gateway in front of their CephFS filesystem: it is
> expected that an NFS-enabled cephfs driver will follow this native
> driver in the not-too-distant future.
>
> This will be my first openstack contribution, so please bear with me
> while I come up to speed with the submission process.  I'll also be in
> Tokyo for the summit next month, so I hope to meet other interested
> parties there.
>
> All the best,
> John
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From liyong.qiao at intel.com  Fri Sep 25 09:19:25 2015
From: liyong.qiao at intel.com (Qiao,Liyong)
Date: Fri, 25 Sep 2015 17:19:25 +0800
Subject: [openstack-dev] [magnum] debugging functional testing on gate
Message-ID: <5605119D.3020403@intel.com>

hi folks,
I am working on adding functional test cases for "creating" bays on the gate.
For now there is k8s bay creation/deletion testing; I want to add more
swarm/mesos type bay testing, but I have tried several times and the gate
failed to create a swarm bay.

In my experience, the swarm master/node requires access to the network
outside the swarm cluster. I wonder if the gate can support such cases?
Can we do some debugging on the gate?

I have a series of patches:
https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/swarm-functional-testing,n,z

-- 
BR, Eli(Li Yong)Qiao

-------------- next part --------------
A non-text attachment was scrubbed...
Name: liyong_qiao.vcf
Type: text/x-vcard
Size: 123 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/12045783/attachment.vcf>

From ihrachys at redhat.com  Fri Sep 25 09:35:30 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 25 Sep 2015 11:35:30 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
	access agent methods ?
In-Reply-To: <5605100D.4070506@redhat.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
Message-ID: <E110C2F2-1D54-4757-AB66-23D4234E56B8@redhat.com>

Yes, looks like option 4 is the best. We need an abstraction layer between extensions and agents, to make sure the API makes sense for all AMQP-based agents.

The common agent framework that I think Sean's side is looking at [1] could partially define that agent interface for us.

[1]: https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:lb_common_agent_experiment,n,z

Ihar

> On 25 Sep 2015, at 11:12, Miguel Angel Ajo <mangelajo at redhat.com> wrote:
> 
> I didn't finish reading it, and was thinking about the same thing exactly.
> 
> IMHO option 4th is the best. So we will be able to provide an interface where stability
> is controlled, where we can deprecate things in a controlled manner, and we know what we
> support and what we don't.
> 
> Do you have a rough idea of what operations you may need to do?
> 
> Please bear in mind, the extension interface will be available from different agent types
> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about could also serve as
> a translation driver for the agents (where the translation is possible), I totally understand
> that most extensions are specific agent bound, and we must be able to identify
> the agent we're serving back exactly.
> 
> 
> Best regards,
> Miguel Ángel Ajo
> 
> Kevin Benton wrote:
>> I think the 4th of the options you proposed would be the best. We don't
>> want to give agents direct access to the agent object or else we will run
>> the risk of breaking extensions all of the time during any kind of
>> reorganization or refactoring. Having a well defined API in between will
>> give us flexibility to move things around.
>> 
>> On Fri, Sep 25, 2015 at 1:32 AM,<thomas.morin at orange.com>  wrote:
>> 
>>> Hi everyone,
>>> 
>>> (TL;DR: we would like an L2 agent extension to be able to call methods on
>>> the agent class, e.g. OVSAgent)
>>> 
>>> In the networking-bgpvpn project, we need the reference driver to interact
>>> with the ML2 openvswitch agent with new RPCs to allow exchanging
>>> information with the BGP VPN implementation running on the compute nodes.
>>> We also need the OVS agent to setup specific things on the OVS bridges for
>>> MPLS traffic.
>>> 
>>> To extend the agent behavior, we currently create a new agent by mimicking
>>> the main() in ovs_neutron_agent.py, but instead of instantiating
>>> OVSAgent, we instantiate a class that overloads the OVSAgent class with
>>> the additional behavior we need [1].
>>> 
>>> This is really not the ideal way of extending the agent, and we would
>>> prefer using the L2 agent extension framework [2].
>>> 
>>> Using the L2 agent extension framework would work, but only partially: it
>>> would easily allow us to register our RPC consumers, but would not let us
>>> access some data structures/methods of the agent that we need to use:
>>> setup_entry_for_arp_reply and local_vlan_map, access to the OVSBridge
>>> objects to manipulate OVS ports.
>>> 
>>> I've filed an RFE bug to track this issue [5].
>>> 
>>> We would like something like one of the following:
>>> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
>>> to give access to the agent object (and thus let the extension call methods
>>> of the agent) by giving the agent as a parameter of the initialize method
>>> [4]
>>> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
>>> to give access to the agent object (and thus let the extension call methods
>>> of the agent) by giving the agent as a parameter of a new setAgent method
>>> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
>>> to give access only to specific/chosen methods on the agent object, for
>>> instance by giving a dict as a parameter of the initialize method [4],
>>> whose keys would be method names, and values would be pointers to these
>>> methods on the agent object
>>> 4) define a new interface with methods to access things inside the agent,
>>> this interface would be implemented by an object instantiated by the agent,
>>> and that the agent would pass to the extension manager, thus allowing the
>>> extension manager to pass the object to an extension through the
>>> initialize method of AgentCoreResourceExtension [4]
>>> 
>>> Any feedback on these ideas...?
>>> Of course any other idea is welcome...
>>> 
>>> For the sake of triggering reaction, the question could be rephrased as:
>>> if we submit a change doing (1) above, would it have a reasonable chance of
>>> merging ?
>>> 
>>> -Thomas
>>> 
>>> [1]
>>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
>>> [2] https://review.openstack.org/#/c/195439/
>>> [3]
>>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
>>> [4]
>>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
>>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>> 
>>> 
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>> 
>> 
> 


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/84be95e8/attachment.pgp>

From jspray at redhat.com  Fri Sep 25 09:54:09 2015
From: jspray at redhat.com (John Spray)
Date: Fri, 25 Sep 2015 10:54:09 +0100
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <1268826320.21692761.1443172574805.JavaMail.zimbra@redhat.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
 <CALe9h7fhTL2zUZZM_SGCPVnBg0EZdF88ikfSqRLyOrUh5vHukA@mail.gmail.com>
 <1268826320.21692761.1443172574805.JavaMail.zimbra@redhat.com>
Message-ID: <CALe9h7dP-HKxkkHK5eEspBOQGoWe_YGDPx8wWhZHShP_cej-Mw@mail.gmail.com>

On Fri, Sep 25, 2015 at 10:16 AM, Shinobu Kinjo <skinjo at redhat.com> wrote:
> Thank you for your reply.
>
>> The main distinction here is that for
>> native CephFS clients, they get a shared filesystem where all the
>> clients can talk to all the Ceph OSDs directly, and avoid the
>> potential bottleneck of an NFS->local fs->RBD server.
>
> As you know, each path from clients to RADOS is:
>
>  1) CephFS
>   [Apps] -> [VFS] -> [Kernel Driver] -> [Ceph-Kernel Client]
>    -> [MON], [MDS], [OSD]
>
>  2) RBD
>   [Apps] -> [VFS] -> [librbd] -> [librados] -> [MON], [OSD]
>
> Considering above, there could be more bottleneck in 1) than 2),
> I think.
>
> What do you think?

The bottleneck I'm talking about is when you share the filesystem
between many guests.  In the RBD image case, you would have a single
NFS server, through which all the data and metadata would have to
flow: that becomes a limiting factor.  In the CephFS case, the clients
can talk to the MDS and OSD daemons individually, without having to
flow through one NFS server.

The preference depends on the use case: the benefits of a shared
filesystem like CephFS don't become apparent until you have lots of
guests using the same shared filesystem.  I'd expect people to keep
using Cinder+RBD for cases where a filesystem is just exposed to one
guest at a time.

>>  3.What are you thinking of integration with OpenStack using
>>   a new implementation?
>>   Since it's going to be a new kind of implementation, there
>>   should be a different architecture.
>
> Sorry, it's just too ambiguous. Frankly how are you going to
> implement such a new feature, was my question.
>
> Make sense?

Right now this is just about building Manila drivers to enable use of
Ceph, rather than re-architecting anything.  A user would create a
conventional Ceph cluster and a conventional OpenStack cluster, this
is just about enabling the use of the two together via Manila (i.e. to
do for CephFS/Manila what is already done for RBD/Cinder).

I expect there will be more discussion later about exactly what the
NFS layer will look like, though we can start with the simple case of
creating a guest VM that acts as a gateway.

>>  4.Is this implementation intended for OneStack integration
>>   mainly?
>
> Yes, that's just my typo -;
>
>  OneStack -> OpenStack

Naturally the Manila part is just for openstack.  However, some of the
utility parts (e.g. the "VolumeClient" class) might get re-used in
other systems that require a similar concept (like containers, other
clouds).

John

>
>
>> This piece of work is specifically about Manila; general improvements
>> in Ceph integration would be a different topic.
>
> That's interesting to me.
>
> Shinobu
>
> ----- Original Message -----
> From: "John Spray" <jspray at redhat.com>
> To: "Shinobu Kinjo" <skinjo at redhat.com>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Friday, September 25, 2015 5:51:36 PM
> Subject: Re: [openstack-dev] [Manila] CephFS native driver
>
> On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo <skinjo at redhat.com> wrote:
>> So here are questions from my side.
>> Just question.
>>
>>
>>  1.What is the biggest advantage compared to others such as RBD?
>>   We should be able to implement what you are going to do in
>>   existing module, shouldn't we?
>
> I guess you mean compared to using a local filesystem on top of RBD,
> and exporting it over NFS?  The main distinction here is that for
> native CephFS clients, they get a shared filesystem where all the
> clients can talk to all the Ceph OSDs directly, and avoid the
> potential bottleneck of an NFS->local fs->RBD server.
>
> Workloads requiring a local filesystem would probably continue to map
> a cinder block device and use that.  The Manila driver is intended for
> use cases that require a shared filesystem.
>
>>  2.What are you going to focus on with a new implementation?
>>   It seems to be using NFS in front of that implementation,
>>   more transparently.
>
> The goal here is to make cephfs accessible to people by making it easy
> to provision it for their applications, just like Manila in general.
> The motivation for putting an NFS layer in front of CephFS is to make
> it easier for people to adopt, because they won't need to install any
> ceph-specific code in their guests.  It will also be easier to
> support, because any ceph client bugfixes would not need to be
> installed within guests (if we assume existing nfs clients are bug
> free :-))
>
>>  3.What are you thinking of integration with OpenStack using
>>   a new implementation?
>>   Since it's going to be a new kind of implementation, there
>>   should be a different architecture.
>
> Not sure I understand this question?
>
>>  4.Is this implementation intended for OneStack integration
>>   mainly?
>
> Nope (I had not heard of onestack before).
>
>> Since velocity of OpenStack feature expansion is much more than
>> it used to be, it's much more important to think of performance.
>
>> Is a new implementation also going to improve Ceph integration
>> with OpenStack system?
>
> This piece of work is specifically about Manila; general improvements
> in Ceph integration would be a different topic.
>
> Thanks,
> John
>
>>
>> Thank you so much for your explanation in advance.
>>
>> Shinobu
>>
>> ----- Original Message -----
>> From: "John Spray" <jspray at redhat.com>
>> To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
>> Sent: Thursday, September 24, 2015 10:49:17 PM
>> Subject: [openstack-dev] [Manila] CephFS native driver
>>
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>>
>> It requires a special branch of ceph which is here:
>> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>>
>> This isn't done yet (hence this email rather than a gerrit review),
>> but I wanted to give everyone a heads up that this work is going on,
>> and a brief status update.
>>
>> This is the 'native' driver in the sense that clients use the CephFS
>> client to access the share, rather than re-exporting it over NFS.  The
>> idea is that this driver will be useful for anyone who has such
>> clients, as well as acting as the basis for a later NFS-enabled
>> driver.
>>
>> The export location returned by the driver gives the client the Ceph
>> mon IP addresses, the share path, and an authentication token.  This
>> authentication token is what permits the clients access (Ceph does not
>> do access control based on IP addresses).
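[Editorial note: for illustration, consuming such an export location might look like the sketch below. The "mon-addresses:/path" layout and the mount option names are assumptions made for the example, not the driver's actual format.]

```python
# Hypothetical sketch: turn an export location of the assumed form
# "mon1:6789,mon2:6789:/volumes/share-1" plus an auth token into a
# kernel-cephfs mount command.  Field layout and option names are
# illustrative only.
def build_mount_command(export_location, user, token, mountpoint):
    # Split on the last ':' -- everything before it is the list of
    # monitor addresses, everything after it is the share path.
    mon_addrs, _, share_path = export_location.rpartition(":")
    return [
        "mount", "-t", "ceph",
        "%s:%s" % (mon_addrs, share_path),
        mountpoint,
        "-o", "name=%s,secret=%s" % (user, token),
    ]

cmd = build_mount_command("mon1:6789,mon2:6789:/volumes/share-1",
                          "share-user", "AQD0token", "/mnt/share")
```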
>>
>> It's just capable of the minimal functionality of creating and
>> deleting shares so far, but I will shortly be looking into hooking up
>> snapshots/consistency groups, albeit for read-only snapshots only
>> (cephfs does not have writeable snapshots).  Currently deletion is
>> just a move into a 'trash' directory, the idea is to add something
>> later that cleans this up in the background: the downside to the
>> "shares are just directories" approach is that clearing them up has a
>> "rm -rf" cost!
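[Editorial note: the "shares are just directories, deletion is a rename into trash" idea can be sketched in a few lines; the names here are illustrative, not the actual driver code.]

```python
import os
import shutil
import tempfile
import uuid

class DirectoryShares(object):
    """Toy model: each share is a directory under a common root."""

    def __init__(self, root):
        self.root = root
        self.trash = os.path.join(root, ".trash")
        os.makedirs(self.trash)  # also creates root

    def create_share(self, name):
        path = os.path.join(self.root, name)
        os.makedirs(path)
        return path

    def delete_share(self, name):
        # Cheap rename now; the recursive "rm -rf"-cost cleanup is
        # deferred to a background purge.
        dst = os.path.join(self.trash, uuid.uuid4().hex)
        os.rename(os.path.join(self.root, name), dst)

    def purge_trash(self):
        for entry in os.listdir(self.trash):
            shutil.rmtree(os.path.join(self.trash, entry))

shares = DirectoryShares(os.path.join(tempfile.mkdtemp(), "shares"))
path = shares.create_share("share-1")
shares.delete_share("share-1")  # instant, regardless of share size
shares.purge_trash()            # the slow part, done out of band
```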
>>
>> A note on the implementation: cephfs recently got the ability (not yet
>> in master) to restrict client metadata access based on path, so this
>> driver is simply creating shares by creating directories within a
>> cluster-wide filesystem, and issuing credentials to clients that
>> restrict them to their own directory.  They then mount that subpath,
>> so that from the client's point of view it's like having their own
>> filesystem.  We also have a quota mechanism that I'll hook in later to
>> enforce the share size.
>>
>> Currently the security here requires clients (i.e. the ceph-fuse code
>> on client hosts, not the userspace applications) to be trusted, as
>> quotas are enforced on the client side.  The OSD access control
>> operates on a per-pool basis, and creating a separate pool for each
>> share is inefficient.  In the future it is expected that CephFS will
>> be extended to support file layouts that use RADOS namespaces, which
>> are cheap, such that we can issue a new namespace to each share and
>> enforce the separation between shares on the OSD side.
>>
>> However, for many people the ultimate access control solution will be
>> to use a NFS gateway in front of their CephFS filesystem: it is
>> expected that an NFS-enabled cephfs driver will follow this native
>> driver in the not-too-distant future.
>>
>> This will be my first openstack contribution, so please bear with me
>> while I come up to speed with the submission process.  I'll also be in
>> Tokyo for the summit next month, so I hope to meet other interested
>> parties there.
>>
>> All the best,
>> John
>>


From sbauza at redhat.com  Fri Sep 25 10:07:02 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Fri, 25 Sep 2015 12:07:02 +0200
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
Message-ID: <56051CC6.1020606@redhat.com>



Le 25/09/2015 00:13, James Penick a écrit :
>
>
>     At risk of getting too offtopic I think there's an alternate
>     solution to doing this in Nova or on the client side.  I think
>     we're missing some sort of OpenStack API and service that can
>     handle this.  Nova is a low level infrastructure API and service,
>     it is not designed to handle these orchestrations.  I haven't
>     checked in on Heat in a while but perhaps this is a role that it
>     could fill.
>
>     I think that too many people consider Nova to be *the* OpenStack
>     API when considering instances/volumes/networking/images and
>     that's not something I would like to see continue.  Or at the very
>     least I would like to see a split between the orchestration/proxy
>     pieces and the "manage my VM/container/baremetal" bits
>
>
> (new thread)
>  You've hit on one of my biggest issues right now: As far as many 
> deployers and consumers are concerned (and definitely what I tell my 
> users within Yahoo): The value of an OpenStack value-stream (compute, 
> network, storage) is to provide a single consistent API for 
> abstracting and managing those infrastructure resources.
>
>  Take networking: I can manage Firewalls, switches, IP selection, SDN, 
> etc through Neutron. But for compute, If I want VM I go through Nova, 
> for Baremetal I can -mostly- go through Nova, and for containers I 
> would talk to Magnum or use something like the nova docker driver.
>
>  This means that, by default, Nova -is- the closest thing to a top 
> level abstraction layer for compute. But if that is explicitly against 
> Nova's charter, and Nova isn't going to be the top level abstraction 
> for all things Compute, then something else needs to fill that space. 
> When that happens, all things common to compute provisioning should 
> come out of Nova and move into that new API. Availability zones, 
> Quota, etc.
>

There is an old wishlist item that I would like to see realized: a nova 
boot with some affinity to volumes and networks. That means that Neutron 
and Cinder could provide some resources to the nova scheduler (or nova 
could call those projects) so that we could use either filters (for hard 
limits) or weighers (for soft limits) in order to say "eh, Nova, please 
create me an instance with that flavor and that image, close to this 
volume or this network".
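[Editorial note: for illustration only, the filter (hard limit) vs. weigher (soft limit) idea could look like this toy weigher. This is not Nova's actual weigher API; the names and shapes are assumptions.]

```python
# Toy "volume affinity" weigher: a soft preference for the host that
# already holds the volume's backend, rather than a hard filter that
# would reject every other host outright.
class VolumeAffinityWeigher(object):
    def __init__(self, volume_host_map):
        # Assumed input: volume id -> host of its storage backend.
        self.volume_host_map = volume_host_map

    def weigh(self, hosts, volume_id):
        """Return candidate hosts best-first; a co-located host wins."""
        preferred = self.volume_host_map.get(volume_id)
        return sorted(hosts, key=lambda h: 0 if h == preferred else 1)

weigher = VolumeAffinityWeigher({"vol-1": "node-2"})
ordered = weigher.weigh(["node-1", "node-2", "node-3"], "vol-1")
```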

That said, we still have lots of work to do in Nova to help those 
projects giving resources and we agreed to first work on the scheduler 
interfaces (for providing resources and for getting a destination) 
before working on cross-project resources. That's still an on-going work 
but we hope to land the new interfaces by Mitaka.

Not sure if we could have some discussion at Tokyo between Cinder, 
Neutron and Nova about how to provide resources to the nova scheduler 
given we haven't yet finished the interface reworking, but we could at 
least get feedback from the Neutron and Cinder teams about what kind of 
resources they'd like to provide that a user could ask for.

Thoughts on that?
-Sylvain

> -James
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/7a572429/attachment.html>

From kzaitsev at mirantis.com  Fri Sep 25 10:16:55 2015
From: kzaitsev at mirantis.com (Kirill Zaitsev)
Date: Fri, 25 Sep 2015 13:16:55 +0300
Subject: [openstack-dev] [murano] suggestion on commit message title
 format for the murano-apps repository
In-Reply-To: <CAM9f5rjmKSVfn0PySHVZC2FB81_gzgB5a0Vc1=BSL31JEU40WA@mail.gmail.com>
References: <CAM9f5rjmKSVfn0PySHVZC2FB81_gzgB5a0Vc1=BSL31JEU40WA@mail.gmail.com>
Message-ID: <etPan.56051f17.7df2783e.34fe@TefMBPr.local>

Looks reasonable to me! Could you maybe document that in HACKING.rst in the repo? We could vote on the commit itself.

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 25 Sep 2015 at 02:14:09, Alexey Khivin (akhivin at mirantis.com) wrote:

Hello everyone

Almost every commit message in the murano-apps repository contains the name of the application it relates to.

I suggest specifying the application within the commit message title using a strict and uniform format.


For example, something like this:

[ApacheHTTPServer] Utilize Custom Network selector
[Docker/Kubernetes] Fix typo

instead of this:

Utilize Custom Network selector in Apache App
Fix typo in Kubernetes Cluster app


I think it would be useful for the readability of the message list.
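[Editorial note: if the format were agreed on, it could even be checked mechanically, e.g. in a commit-msg hook. The regex below is an illustrative assumption, not a proposed policy.]

```python
import re

# Accept titles like "[ApacheHTTPServer] Utilize Custom Network selector"
# or "[Docker/Kubernetes] Fix typo": a non-empty bracketed app name,
# a space, then a non-empty summary.
TITLE_RE = re.compile(r"^\[[A-Za-z0-9/._+-]+\] \S")

def title_ok(title):
    """Return True if the commit title follows the proposed format."""
    return bool(TITLE_RE.match(title))
```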

--
Regards,
Alexey Khivin

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/e9961bfc/attachment.html>

From sandeep.raman at gmail.com  Fri Sep 25 10:52:46 2015
From: sandeep.raman at gmail.com (Sandeep Raman)
Date: Fri, 25 Sep 2015 16:22:46 +0530
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
 informal working group suggestion
In-Reply-To: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
References: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
Message-ID: <CAOwk0=0BhkFqjNgzx7zWrS_tk9xj8Vh8uz7qgbO42wt_dpYoXw@mail.gmail.com>

On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova <dbelova at mirantis.com> wrote:

> Hey, OpenStackers!
>
> I'm writing to propose to organise new informal team to work specifically
> on the OpenStack performance issues. This will be a sub team in already
> existing Large Deployments Team, and I suppose it will be a good idea to
> gather people interested in OpenStack performance in one room and identify
> what issues are worrying contributors, what can be done and share results
> of performance researches :)
>

Dina, I'm focused on performance and scale testing [no coding
background]. How can I contribute, and what is expected from this
informal team?

>
> So please volunteer to take part in this initiative. I hope it will be
> many people interested and we'll be able to use cross-projects session
> slot <http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and
> hold a kick-off meeting.
>

I'm not coming to Tokyo. How could I still be part of any discussions? I
also feel it would be good to have an IRC channel for perf-scale
discussion. Let me know your thoughts.


> I would like to apologise I'm writing to two mailing lists at the same
> time, but I want to make sure that all possibly interested people will
> notice the email.
>
> Thanks and see you in Tokyo :)
>
> Cheers,
> Dina
>
> --
>
> Best regards,
>
> Dina Belova
>
> Senior Software Engineer
>
> Mirantis Inc.
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/bbb8e3a4/attachment.html>

From davanum at gmail.com  Fri Sep 25 10:59:31 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Fri, 25 Sep 2015 06:59:31 -0400
Subject: [openstack-dev] [Openstack-i18n] [nova][i18n] Is there any
 point in using _() inpython-novaclient?
In-Reply-To: <5604DEA8.5090905@suse.com>
References: <55E9D9AD.1000402@linux.vnet.ibm.com>
 <201509060518.t865IeSf019572@d01av05.pok.ibm.com>
 <55EF0334.3030606@linux.vnet.ibm.com> <55FC23C0.3040105@suse.com>
 <CAOyZ2aGT5O_K1nT6OKWd-TGkLoX0_H0XAz0KvPqErzSYexEDJg@mail.gmail.com>
 <55FEAF6F.80805@suse.com> <5604DEA8.5090905@suse.com>
Message-ID: <CANw6fcGYG8fcwukD-HwF+LAAsj-GqxpO_wAQhrgfGPcNGVMYPA@mail.gmail.com>

That sounds like the right solution, Andreas. Thanks!

-- Dims

On Fri, Sep 25, 2015 at 1:42 AM, Andreas Jaeger <aj at suse.com> wrote:

> On 09/20/2015 03:06 PM, Andreas Jaeger wrote:
>
>> On 09/20/2015 02:16 PM, Duncan Thomas wrote:
>>
>>> Certainly for cinder, and I suspect many other project, the openstack
>>> client is a wrapper for python-cinderclient libraries, so if you want
>>> translated exceptions then you need to translate python-cinderclient
>>> too, unless I'm missing something?
>>>
>>
>> Ah - let's investigate some more here.
>>
>> Looking at python-cinderclient, I see translations only for the help
>> strings of the client like in cinderclient/shell.py. Are there strings
>> in the library of cinder that will be displayed to the user as well?
>>
>
> We discussed on the i18n team list and will enable those repos if the
> teams send patches. The i18n team will prioritize which resources gets
> translated,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>        HRB 21284 (AG Nürnberg)
>     GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> _______________________________________________
> Openstack-i18n mailing list
> Openstack-i18n at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/06d2594f/attachment.html>

From sgolovatiuk at mirantis.com  Fri Sep 25 11:09:36 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Fri, 25 Sep 2015 13:09:36 +0200
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <55FC0DCF.7060307@redhat.com>
 <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>
Message-ID: <CA+HkNVsjYj3UXtR2eJGVSvDECuFaaHRDBN-mp+YXe2nh4Y3TWQ@mail.gmail.com>

Hi,

Morgan gave a perfect example of why operators want to use uWSGI. Imagine
a future where all OpenStack services run as mod_wsgi processes under
Apache: that is putting all eggs in one basket. If you need to reconfigure
one service on a controller, it may affect another service. For instance,
operators sometimes need to increase the number of WSGI threads/processes
or add a new virtual host to Apache; that requires a graceful or cold
restart of Apache, which affects the other services. In another case, an
internal problem in mod_wsgi may crash Apache, taking all services down
with it.

The uWSGI/gunicorn model is safer, since Apache then acts as a reverse
proxy only. This model gives operators flexibility: they can use Apache or
nginx as a proxy or load balancer, and stopping or crashing one service
won't cause downtime for the others. Managing OpenStack becomes simpler
and friendlier.
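[Editorial note: to make the point concrete, the service side of any of these deployments is just a plain WSGI callable, so the mod_wsgi vs. uWSGI vs. gunicorn choice is purely operational. A minimal sketch, not any particular OpenStack service:]

```python
# The same callable can be hosted by mod_wsgi, by uWSGI
# ("uwsgi --http :8080 --wsgi-file app.py") or by gunicorn
# ("gunicorn app:application") without code changes, which is what
# makes the "reverse proxy in front, app servers behind" split work.
def application(environ, start_response):
    body = b"OK\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Minimal in-process exercise of the WSGI contract:
calls = []
result = application({}, lambda status, headers: calls.append((status, headers)))
```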



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Sep 18, 2015 at 3:44 PM, Morgan Fainberg <morgan.fainberg at gmail.com>
wrote:

> There is and has been desire to support uWSGI and other alternatives to
> mod_wsgi. There are a variety of operational reasons to consider uWSGI
> and/or gunicorn behind apache most notably to facilitate easier management
> of the processes independently of the webserver itself. With mod_wsgi the
> processes are directly tied to the apache server where as with uWSGI and
> gunicorn you can manage the various services independently and/or with
> differing VENVs more easily.
>
> There are potential other concerns that must be weighed when considering
> which method of deployment to use. I hope we have clear documentation
> within the next cycle (and possible choices for the gate) for utilizing
> uWSGI and/or gunicorn.
>
> --Morgan
>
> Sent via mobile
>
> On Sep 18, 2015, at 06:12, Adam Young <ayoung at redhat.com> wrote:
>
> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>
> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>
> In the fuel project, we recently ran into a couple of issues with Apache2 +
> mod_wsgi as we switched Keystone to run. Please see [1] and [2].
>
> Looking deep into Apache2 issues specifically around "apache2ctl graceful"
> and module loading/unloading and the hooks used by mod_wsgi [3]. I started
> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> something else better that people are already using.
>
> One data point that keeps coming up is, all the CI jobs use Apache2 +
> mod_wsgi so it must be the best solution....Is it? If not, what is?
>
> Disclaimer: it's been a while since I've cared about performance with a
> web server in front of a Python app.
>
> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> on again. In general, I seem to remember it being thought of as a bit
> old and crusty, but mostly working.
>
>
> I am not aware of that.  It has been the workhorse of the Python/wsgi
> world for a while, and we use it heavily.
>
> At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> and saw a significant performance increase. This was a Django app. uwsgi
> is fairly straightforward to operate and comes loaded with a myriad of
> options[1] to help folks make the most of it. I've played with Ironic
> behind uwsgi and it seemed to work fine, though I haven't done any sort
> of load testing. I'd encourage folks to give it a shot. :)
>
>
> Again, switching web servers is as likely to introduce as to solve
> problems.  If there are performance issues:
>
> 1.  Identify what causes them
> 2.  Change configuration settings to deal with them
> 3.  Fix upstream bugs in the underlying system.
>
>
> Keystone is not about performance.  Keystone is about security.  The cloud
> is designed to scale horizontally first.  Before advocating switching to a
> difference web server, make sure it supports the technologies required.
>
>
> 1. TLS at the latest level
> 2. Kerberos/GSSAPI/SPNEGO
> 3. X509 Client cert validation
> 4. SAML
>
> OpenID connect would be a good one to add to the list;  Its been requested
> for a while.
>
> If Keystone is having performance issues, it is most likely at the
> database layer, not the web server.
>
>
>
> "Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these attempts
> at efficiency actually have a strong negative impact when debugging and
> maintenance are considered. We *should* forget about small efficiencies,
> say about 97% of the time: *premature optimization is the root of all
> evil.* Yet we should not pass up our opportunities in that critical
> 3%."   --Donald Knuth
>
>
>
> Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>
> gunicorn[2] is another good option that may be worth investigating; I
> personally don't have any experience with it, but I seem to remember
> hearing it has good eventlet support.
>
> // jim
>
> [0] https://uwsgi-docs.readthedocs.org/en/latest/
> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> [2] http://gunicorn.org/
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/495225a4/attachment.html>

From rvasilets at mirantis.com  Fri Sep 25 11:10:14 2015
From: rvasilets at mirantis.com (Roman Vasilets)
Date: Fri, 25 Sep 2015 14:10:14 +0300
Subject: [openstack-dev]  [Rally][Meeting][Agenda]
Message-ID: <CABmajVVMH468_TcmQ9A8VcujekRV6PNBbbz+AJ9xZcWpw-fnQQ@mail.gmail.com>

Hi, this is a friendly reminder that if you want to discuss some topics at
Rally meetings, please add your topic to our meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic discussion, and add some information about
the topic (links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/bf42d577/attachment.html>

From sgolovatiuk at mirantis.com  Fri Sep 25 11:24:31 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Fri, 25 Sep 2015 13:24:31 +0200
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAKb2=12gEBNYhY=uGsyWfv4UCc+c3kmEPF4coUDKqObzb0Eqxg@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com>
 <55FC0DCF.7060307@redhat.com>
 <CAHAWLf3ZhXQD=pcm9rwML9ZOTA7pMWvNXm4+zbfdA=KmS25g8w@mail.gmail.com>
 <CAKb2=12gEBNYhY=uGsyWfv4UCc+c3kmEPF4coUDKqObzb0Eqxg@mail.gmail.com>
Message-ID: <CA+HkNVtjbmY8CFGBFtj9N26vu7tDTTc8Z0T=tMFck9xFGUChXA@mail.gmail.com>

Alexandr,

OAuth, Shibboleth and OpenID support are very Keystone-specific features.
Many other OpenStack projects don't need these modules at all, but they
may require a faster HTTP server (lighttpd/nginx).

For all projects we could use the "HTTP server -> uwsgi" model and leave
Apache for Keystone as "HTTP server -> apache -> uwsgi/mod_wsgi". However,
I would like to think about the whole OpenStack ecosystem in general; that
way we minimize the number of programs operators need to know.




--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Sep 18, 2015 at 4:28 PM, Alexander Makarov <amakarov at mirantis.com>
wrote:

> Please consider that we use some apache mods - does
> nginx/uwsgi/gunicorn have oauth, shibboleth & openid support?
>
> On Fri, Sep 18, 2015 at 4:54 PM, Vladimir Kuklin <vkuklin at mirantis.com>
> wrote:
> > Folks
> >
> > I think we do not need to switch to nginx-only or consider any kind of
> war
> > between nginx and apache adherents. Everyone should be able to use
> > web-server he or she needs without being pinned to the unwanted one. It
> is
> > like Postgres vs MySQL war. Why not support both?
> >
> > May be someone does not need something that apache supports and nginx not
> > and needs nginx features which apache does not support. Let's let our
> users
> > decide what they want.
> >
> > And the first step should be simple here - support for uwsgi. It will
> allow
> > for usage of any web-server that can work with uwsgi. It will allow also
> us
> > to check for the support of all apache-like bindings like SPNEGO or
> whatever
> > and provide our users with enough info on making decisions. I did not
> > personally test nginx modules for SAML and SPNEGO, but I am pretty
> confident
> > about TLS/SSL parts of nginx.
> >
> > Moreover, nginx will allow you to do things you cannot do with apache,
> e.g.
> > do smart load balancing, which may be crucial for high-loaded
> installations.
> >
> >
> > On Fri, Sep 18, 2015 at 4:12 PM, Adam Young <ayoung at redhat.com> wrote:
> >>
> >> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
> >>
> >> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> >>
> >> In the fuel project, we recently ran into a couple of issues with
> Apache2
> >> +
> >> mod_wsgi as we switched Keystone to run. Please see [1] and [2].
> >>
> >> Looking deep into Apache2 issues specifically around "apache2ctl
> graceful"
> >> and module loading/unloading and the hooks used by mod_wsgi [3]. I
> started
> >> wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> >> something else better that people are already using.
> >>
> >> One data point that keeps coming up is, all the CI jobs use Apache2 +
> >> mod_wsgi so it must be the best solution....Is it? If not, what is?
> >>
> >> Disclaimer: it's been a while since I've cared about performance with a
> >> web server in front of a Python app.
> >>
> >> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> >> on again. In general, I seem to remember it being thought of as a bit
> >> old and crusty, but mostly working.
> >>
> >>
> >> I am not aware of that.  It has been the workhorse of the Python/wsgi
> >> world for a while, and we use it heavily.
> >>
> >> At a previous job, we switched from Apache2 + mod_wsgi to nginx +
> uwsgi[0]
> >> and saw a significant performance increase. This was a Django app. uwsgi
> >> is fairly straightforward to operate and comes loaded with a myriad of
> >> options[1] to help folks make the most of it. I've played with Ironic
> >> behind uwsgi and it seemed to work fine, though I haven't done any sort
> >> of load testing. I'd encourage folks to give it a shot. :)
> >>
> >>
> >> Again, switching web servers is as likely to introduce as to solve
> >> problems.  If there are performance issues:
> >>
> >> 1.  Identify what causes them
> >> 2.  Change configuration settings to deal with them
> >> 3.  Fix upstream bugs in the underlying system.
> >>
> >>
> >> Keystone is not about performance.  Keystone is about security.  The
> cloud
> >> is designed to scale horizontally first.  Before advocating switching
> to a
> >> different web server, make sure it supports the technologies required.
> >>
> >>
> >> 1. TLS at the latest level
> >> 2. Kerberos/GSSAPI/SPNEGO
> >> 3. X509 Client cert validation
> >> 4. SAML
> >>
> >> OpenID connect would be a good one to add to the list;  Its been
> requested
> >> for a while.
> >>
> >> If Keystone is having performance issues, it is most likely at the
> >> database layer, not the web server.
> >>
> >>
> >>
> >> "Programmers waste enormous amounts of time thinking about, or worrying
> >> about, the speed of noncritical parts of their programs, and these
> attempts
> >> at efficiency actually have a strong negative impact when debugging and
> >> maintenance are considered. We should forget about small efficiencies,
> say
> >> about 97% of the time: premature optimization is the root of all evil.
> Yet
> >> we should not pass up our opportunities in that critical 3%."   --Donald
> >> Knuth
> >>
> >>
> >>
> >> Of course, uwsgi can also be run behind Apache2, if you'd prefer.
> >>
> >> gunicorn[2] is another good option that may be worth investigating; I
> >> personally don't have any experience with it, but I seem to remember
> >> hearing it has good eventlet support.
> >>
> >> // jim
> >>
> >> [0] https://uwsgi-docs.readthedocs.org/en/latest/
> >> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
> >> [2] http://gunicorn.org/
> >>
> >>
> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >>
> >
> >
> >
> > --
> > Yours Faithfully,
> > Vladimir Kuklin,
> > Fuel Library Tech Lead,
> > Mirantis, Inc.
> > +7 (495) 640-49-04
> > +7 (926) 702-39-68
> > Skype kuklinvv
> > 35bk3, Vorontsovskaya Str.
> > Moscow, Russia,
> > www.mirantis.com
> > www.mirantis.ru
> > vkuklin at mirantis.com
> >
> >
> >
>
>
>
> --
> Kind Regards,
> Alexander Makarov,
> Senior Software Developer,
>
> Mirantis, Inc.
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (926) 204-50-60
>
> Skype: MAKAPOB.AJIEKCAHDP
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/442c0001/attachment.html>

From bpavlovic at mirantis.com  Fri Sep 25 12:10:50 2015
From: bpavlovic at mirantis.com (Boris Pavlovic)
Date: Fri, 25 Sep 2015 05:10:50 -0700
Subject: [openstack-dev] [OPNFV] [Functest] Tempest & Rally
In-Reply-To: <A8CA1ACC1C845244BD549100217EE1DA2245C916@ESESSMB207.ericsson.se>
References: <17168_1443102983_56040107_17168_166_1_56040104.5080602@orange.com>
 <3E4E0D3116F17747BB7E466168535A7427229868@DEMUMBX002.nsn-intra.net>
 <CAD85om2cHYuJ7Q_ubD1UPZ6eFAMUdgY6Te=7KDu2LqP7Kn-ggA@mail.gmail.com>
 <A8CA1ACC1C845244BD549100217EE1DA2245C916@ESESSMB207.ericsson.se>
Message-ID: <CAD85om3D7MMaoWvoq8=eiyWorg_Ys=RQg5wkPOjs7ta+zvwoXw@mail.gmail.com>

Jose,


Rally community provides official docker images here:
https://hub.docker.com/r/rallyforge/rally/
So I would suggest using them.


Best regards,
Boris Pavlovic



On Fri, Sep 25, 2015 at 5:07 AM, Jose Lausuch <jose.lausuch at ericsson.com>
wrote:

> Hi,
>
>
>
> Thanks for the hint Boris.
>
>
>
> Regarding what we do at functest with Rally, yes, we clone the latest from
> the Rally repo. We thought about that before and the possible errors it can
> introduce, compatibility issues and so on.
>
>
>
> As I am working on a Docker image where the whole Functest environment will
> be pre-installed, we might get rid of such potential problems. But, that
> image will need constant updates if there are major patches/bugfixes in the
> rally repo.
>
>
>
> What is your opinion on this? What do you think makes more sense?
>
>
>
> /Jose
>
>
>
>
>
>
>
> *From:* boris at pavlovic.ru [mailto:boris at pavlovic.ru] *On Behalf Of *Boris
> Pavlovic
> *Sent:* Friday, September 25, 2015 7:56 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* EXT morgan.richomme at orange.com; Kosonen, Juha (Nokia - FI/Espoo);
> Jose Lausuch
> *Subject:* Re: [openstack-dev] [OPNFV] [Functest] Tempest & Rally
>
>
>
> Morgan,
>
>
>
>
>
> You should add at least:
>
>
>
> sla:
>   failure_rate:
>     max: 0
>
>
>
> Otherwise rally will pass 100% no matter what is happening.
>
>
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Sep 24, 2015 at 10:29 PM, Tikkanen, Viktor (Nokia - FI/Espoo) <
> viktor.tikkanen at nokia.com> wrote:
>
> Hi Morgan
>
> and thank you for the overview.
>
> I'm now waiting for the POD#2 VPN profile (will be ready soon). We will
> try then to figure out what OpenStack/tempest/rally configuration changes
> are needed in order to get rid of those test failures.
>
> I suppose that most of the problems (like "Multiple possible networks
> found" etc.) are relatively easy to solve.
>
> BTW, since tempest is being currently developed in "branchless" mode
> (without release specific stable versions), do we have some common
> understanding/requirements on how "dynamically" Functest should use its code?
>
> For example, config_functest.py seems to contain routines for
> cloning/installing rally (and indirectly tempest) code, does it mean that
> the code will be cloned/installed at the time when the test set is executed
> for the first time? (I'm just wondering if it is necessary or not to
> "freeze" somehow used code for each OPNFV release to make sure that it will
> remain compatible and that test results will be comparable between
> different OPNFV setups).
>
> -Viktor
>
> > -----Original Message-----
> > From: EXT morgan.richomme at orange.com [mailto:morgan.richomme at orange.com]
> > Sent: Thursday, September 24, 2015 4:56 PM
> > To: Kosonen, Juha (Nokia - FI/Espoo); Tikkanen, Viktor (Nokia - FI/Espoo)
> > Cc: Jose Lausuch
> > Subject: [OPNFV] [Functest] Tempest & Rally
> >
> > Hi,
> >
> > I was wondering whether you could have a look at Rally/Tempest tests we
> > automatically launch in Functest.
> > We still have some errors, and I assume most of them are due to
> > misconfiguration and/or quota ...
> > With Jose, we planned to have a look after SR0 but we do not have much
> > time and we are not fully skilled (even if we progressed a little bit:))
> >
> > If you could have a look and give your feedback, it would be very
> > helpful, we could discuss it during an IRC weekly meeting
> > In Arno we did not use the SLA criteria, that is also something we could
> > do for the B Release
> >
> > for instance if you look at
> > https://build.opnfv.org/ci/view/functest/job/functest-foreman-
> > master/19/consoleText
> >
> > you will see rally and Tempest log
> >
> > The Rally scenarios are a compilation of default Rally scenarios played one
> > after the other and can be found in
> >
> https://git.opnfv.org/cgit/functest/tree/testcases/VIM/OpenStack/CI/suites
> >
> > the Rally artifacts are also pushed into the artifact server
> > http://artifacts.opnfv.org/
> > e.g.
> > http://artifacts.opnfv.org/functest/lf_pod2/2015-09-23_17-36-
> > 07/results/rally/opnfv-authenticate.html
> > look for 09-23 to get Rally json/html files and tempest.conf
> >
> > thanks
> >
> > Morgan
> >
> >
> >
> __________________________________________________________________________
> > _______________________________________________
> >
>
>
>
>

From aheczko at mirantis.com  Fri Sep 25 12:16:19 2015
From: aheczko at mirantis.com (Adam Heczko)
Date: Fri, 25 Sep 2015 14:16:19 +0200
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
Message-ID: <CAJciqMw-tin_gF_1JqC+X-0kaph2m3Y+jDq9LFC08R_wrzitgQ@mail.gmail.com>

Are we discussing mod_wsgi and Keystone, or OpenStack in general?
If it's the Keystone-specific use case, then Apache probably provides the
broadest choice of tested external authenticators.
I'm not against uwsgi at all, but to be honest the expectation that nginx
could substitute for Apache in terms of authentication providers is simply
unrealistic.
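Since the thread keeps circling back to "support uwsgi first", here is a
minimal sketch of what that deployment model looks like; the module path,
socket location and worker counts below are illustrative placeholders, not
taken from any project's packaging:

```ini
; uwsgi.ini -- hypothetical example of fronting a WSGI service with uwsgi
[uwsgi]
module = myservice.wsgi:application   ; placeholder WSGI entry point
socket = /var/run/uwsgi/myservice.sock
master = true
processes = 4
threads = 2
```

Any web server that speaks the uwsgi protocol can then sit in front of it:
with nginx that is a `uwsgi_pass unix:/var/run/uwsgi/myservice.sock;`
location block, and with Apache a mod_proxy_uwsgi ProxyPass.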

A.


On Fri, Sep 25, 2015 at 1:24 PM, Sergii Golovatiuk <sgolovatiuk at mirantis.com
> wrote:

> Alexandr,
>
> oauth, shibboleth & openid support are very keystone-specific features.
> Many other openstack projects don't need these modules at all, but they may
> require a faster HTTP server (lighttpd/nginx).
>
> For all projects we may use the "HTTP server -> uwsgi" model and leave apache
> for keystone as "HTTP server -> apache -> uwsgi/mod_wsgi". However, I
> would like to think about the whole OpenStack ecosystem in general. In that
> case we minimize the number of programs an operator needs to know.
>
>
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Fri, Sep 18, 2015 at 4:28 PM, Alexander Makarov <amakarov at mirantis.com>
> wrote:
>
>> Please consider that we use some apache mods - does
>> nginx/uwsgi/gunicorn have oauth, shibboleth & openid support?
>>
>> On Fri, Sep 18, 2015 at 4:54 PM, Vladimir Kuklin <vkuklin at mirantis.com>
>> wrote:
>> > Folks
>> >
>> > I think we do not need to switch to nginx-only or consider any kind of
>> war
>> > between nginx and apache adherents. Everyone should be able to use
>> > web-server he or she needs without being pinned to the unwanted one. It
>> is
>> > like Postgres vs MySQL war. Why not support both?
>> >
>> > Maybe someone does not need something that apache supports and nginx does
>> not,
>> > and needs nginx features which apache does not support. Let's let our
>> users
>> > decide what they want.
>> >
>> > And the first step should be simple here - support for uwsgi. It will
>> allow
>> > for usage of any web-server that can work with uwsgi. It will allow
>> also us
>> > to check for the support of all apache-like bindings like SPNEGO or
>> whatever
>> > and provide our users with enough info on making decisions. I did not
>> > personally test nginx modules for SAML and SPNEGO, but I am pretty
>> confident
>> > about TLS/SSL parts of nginx.
>> >
>> > Moreover, nginx will allow you to do things you cannot do with apache,
>> e.g.
>> > do smart load balancing, which may be crucial for high-loaded
>> installations.
>> >
>> >
>> > On Fri, Sep 18, 2015 at 4:12 PM, Adam Young <ayoung at redhat.com> wrote:
>> >>
>> >> On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>> >>
>> >> On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>> >>
>> >> In the fuel project, we recently ran into a couple of issues with
>> Apache2
>> >> +
>> >> mod_wsgi as we switched Keystone to run under it. Please see [1] and [2].
>> >>
>> >> Looking deep into Apache2 issues specifically around "apache2ctl
>> graceful"
>> >> and module loading/unloading and the hooks used by mod_wsgi [3]. I
>> started
>> >> wondering if Apache2 + mod_wsgi is the "right" solution and if there
>> was
>> >> something else better that people are already using.
>> >>
>> >> One data point that keeps coming up is, all the CI jobs use Apache2 +
>> >> mod_wsgi so it must be the best solution....Is it? If not, what is?
>> >>
>> >> Disclaimer: it's been a while since I've cared about performance with a
>> >> web server in front of a Python app.
>> >>
>> >> IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
>> >> on again. In general, I seem to remember it being thought of as a bit
>> >> old and crusty, but mostly working.
>> >>
>> >>
>> >> I am not aware of that.  It has been the workhorse of the Python/wsgi
>> >> world for a while, and we use it heavily.
>> >>
>> >> At a previous job, we switched from Apache2 + mod_wsgi to nginx +
>> uwsgi[0]
>> >> and saw a significant performance increase. This was a Django app.
>> uwsgi
>> >> is fairly straightforward to operate and comes loaded with a myriad of
>> >> options[1] to help folks make the most of it. I've played with Ironic
>> >> behind uwsgi and it seemed to work fine, though I haven't done any sort
>> >> of load testing. I'd encourage folks to give it a shot. :)
>> >>
>> >>
>> >> Again, switching web servers is as likely to introduce as to solve
>> >> problems.  If there are performance issues:
>> >>
>> >> 1.  Identify what causes them
>> >> 2.  Change configuration settings to deal with them
>> >> 3.  Fix upstream bugs in the underlying system.
>> >>
>> >>
>> >> Keystone is not about performance.  Keystone is about security.  The
>> cloud
>> >> is designed to scale horizontally first.  Before advocating switching
>> to a
>> >> different web server, make sure it supports the technologies required.
>> >>
>> >>
>> >> 1. TLS at the latest level
>> >> 2. Kerberos/GSSAPI/SPNEGO
>> >> 3. X509 Client cert validation
>> >> 4. SAML
>> >>
>> >> OpenID connect would be a good one to add to the list;  Its been
>> requested
>> >> for a while.
>> >>
>> >> If Keystone is having performance issues, it is most likely at the
>> >> database layer, not the web server.
>> >>
>> >>
>> >>
>> >> "Programmers waste enormous amounts of time thinking about, or worrying
>> >> about, the speed of noncritical parts of their programs, and these
>> attempts
>> >> at efficiency actually have a strong negative impact when debugging and
>> >> maintenance are considered. We should forget about small efficiencies,
>> say
>> >> about 97% of the time: premature optimization is the root of all evil.
>> Yet
>> >> we should not pass up our opportunities in that critical 3%."
>>  --Donald
>> >> Knuth
>> >>
>> >>
>> >>
>> >> Of course, uwsgi can also be run behind Apache2, if you'd prefer.
>> >>
>> >> gunicorn[2] is another good option that may be worth investigating; I
>> >> personally don't have any experience with it, but I seem to remember
>> >> hearing it has good eventlet support.
>> >>
>> >> // jim
>> >>
>> >> [0] https://uwsgi-docs.readthedocs.org/en/latest/
>> >> [1] https://uwsgi-docs.readthedocs.org/en/latest/Options.html
>> >> [2] http://gunicorn.org/
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> > --
>> > Yours Faithfully,
>> > Vladimir Kuklin,
>> > Fuel Library Tech Lead,
>> > Mirantis, Inc.
>> > +7 (495) 640-49-04
>> > +7 (926) 702-39-68
>> > Skype kuklinvv
>> > 35bk3, Vorontsovskaya Str.
>> > Moscow, Russia,
>> > www.mirantis.com
>> > www.mirantis.ru
>> > vkuklin at mirantis.com
>> >
>> >
>> >
>>
>>
>>
>> --
>> Kind Regards,
>> Alexander Makarov,
>> Senior Software Developer,
>>
>> Mirantis, Inc.
>> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>>
>> Tel.: +7 (495) 640-49-04
>> Tel.: +7 (926) 204-50-60
>>
>> Skype: MAKAPOB.AJIEKCAHDP
>>
>>
>
>
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.

From rakhmerov at mirantis.com  Fri Sep 25 12:28:25 2015
From: rakhmerov at mirantis.com (Renat Akhmerov)
Date: Fri, 25 Sep 2015 18:28:25 +0600
Subject: [openstack-dev] [mistral] Mistral Liberty RC1 available
Message-ID: <42C79AD6-3094-4BD8-8FE0-DB65C8A991F1@mirantis.com>

Hi,

Mistral Liberty RC1 release is available. The exact version name is 1.0.0.0rc1.

Look at the release page in order to see a list of bugs fixed during RC1 cycle: https://launchpad.net/mistral/liberty/liberty-rc1 <https://launchpad.net/mistral/liberty/liberty-rc1>

Thanks!

Renat Akhmerov
@ Mirantis Inc.




From dukhlov at mirantis.com  Fri Sep 25 12:33:41 2015
From: dukhlov at mirantis.com (Dmitriy Ukhlov)
Date: Fri, 25 Sep 2015 15:33:41 +0300
Subject: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver
	implementation
Message-ID: <CADpk5BZpWRJP3m=W8xnHX_3o8-3Z5KXzeku4hCCwGWOv4jx2cA@mail.gmail.com>

Hello stackers,

I'm working on a new oslo.messaging RabbitMQ driver implementation which uses
the pika client library instead of kombu. It is related to
https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
In this letter I want to share the current results and hopefully get some
first feedback from you.
The code is available here:
https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py

Current status of this code:
- the pika driver passes functional tests
- the pika driver passes tempest smoke tests
- the pika driver passes almost all tempest full tests (the 5 remaining
failures do not appear to be related to oslo.messaging)
I also created a small devstack patch to support pika driver testing on the
gate (https://review.openstack.org/#/c/226348/)
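For anyone who wants to try the driver once it lands: oslo.messaging selects
drivers by transport URL scheme, so the service-side change should be as
small as the fragment below. The `pika://` scheme here is an assumption
based on the blueprint's naming and the usual oslo.messaging pattern, and
the credentials/host are placeholders:

```ini
# e.g. in nova.conf or any service using oslo.messaging
[DEFAULT]
transport_url = pika://guest:guest@localhost:5672/
# versus the default kombu-based driver:
# transport_url = rabbit://guest:guest@localhost:5672/
```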

Next steps:
- communicate with Manish (blueprint owner)
- write spec to this blueprint
- send a review with this patch when spec and devstack patch get merged.

Thank you.


-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.

From zhjwpku at gmail.com  Fri Sep 25 12:34:11 2015
From: zhjwpku at gmail.com (John Hunter)
Date: Fri, 25 Sep 2015 20:34:11 +0800
Subject: [openstack-dev] VDI questions
Message-ID: <CAEG8a3KdteosCfhgnd=PrcD202TXG-y590on0M_ri4YzAvPm5Q@mail.gmail.com>

Hi guys,
I am new to OpenStack, so I want to ask: is there a project that
works on VDI (Virtual Desktop Infrastructure) based on OpenStack?
I want to dig into it; please help.

Sincerely,
Zhao

-- 
Best regards
Junwang Zhao
Department of Computer Science &Technology
Peking University
Beijing, 100871, PRC

From dprince at redhat.com  Fri Sep 25 12:34:58 2015
From: dprince at redhat.com (Dan Prince)
Date: Fri, 25 Sep 2015 08:34:58 -0400
Subject: [openstack-dev] [TripleO] tripleo.org theme
Message-ID: <1443184498.2443.10.camel@redhat.com>

It has come to my attention that we aren't making great use of our
tripleo.org domain. One thing that would be useful would be to have the
new tripleo-docs content displayed there. It would also be nice to have
quick links to some of our useful resources, perhaps Derek's CI report
[1], a custom Reviewday page for TripleO reviews (something like this
[2]), and perhaps other links too. I'm thinking these go in the header,
and not just on some random TripleO docs page. Or perhaps both places.

I was thinking that instead of the normal OpenStack theme however we
could go a bit off the beaten path and do our own TripleO theme.
Basically a custom tripleosphinx project that we ninja in as a
replacement for oslosphinx.
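If we go the tripleosphinx route, the per-project change would be tiny. A
sketch of what the docs conf.py swap could look like, where "tripleosphinx"
and the theme name are placeholders for the not-yet-existing project:

```python
# docs conf.py fragment -- hypothetical tripleosphinx swap-in
extensions = [
    'tripleosphinx',  # placeholder: would replace 'oslosphinx'
]
# a theme shipped as a Sphinx extension usually also sets:
html_theme = 'tripleo'  # placeholder theme name
```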

Could get our own mascot... or do something silly with words. I'm
reaching out to graphics artists who could help with this sort of
thing... but before that decision is made I wanted to ask about
thoughts on the matter here first.

Speak up... it would be nice to have this wrapped up before Tokyo.

[1] http://goodsquishy.com/downloads/tripleo-jobs.html
[2] http://status.openstack.org/reviews/

Dan


From skinjo at redhat.com  Fri Sep 25 12:35:13 2015
From: skinjo at redhat.com (Shinobu Kinjo)
Date: Fri, 25 Sep 2015 08:35:13 -0400 (EDT)
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <CALe9h7dP-HKxkkHK5eEspBOQGoWe_YGDPx8wWhZHShP_cej-Mw@mail.gmail.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
 <CALe9h7fhTL2zUZZM_SGCPVnBg0EZdF88ikfSqRLyOrUh5vHukA@mail.gmail.com>
 <1268826320.21692761.1443172574805.JavaMail.zimbra@redhat.com>
 <CALe9h7dP-HKxkkHK5eEspBOQGoWe_YGDPx8wWhZHShP_cej-Mw@mail.gmail.com>
Message-ID: <1046873751.21781575.1443184513049.JavaMail.zimbra@redhat.com>

Thanks!
Keep me in the loop.

Shinobu

----- Original Message -----
From: "John Spray" <jspray at redhat.com>
To: "Shinobu Kinjo" <skinjo at redhat.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Friday, September 25, 2015 6:54:09 PM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

On Fri, Sep 25, 2015 at 10:16 AM, Shinobu Kinjo <skinjo at redhat.com> wrote:
> Thank you for your reply.
>
>> The main distinction here is that for
>> native CephFS clients, they get a shared filesystem where all the
>> clients can talk to all the Ceph OSDs directly, and avoid the
>> potential bottleneck of an NFS->local fs->RBD server.
>
> As you know each pass from clients to rados is:
>
>  1) CephFS
>   [Apps] -> [VFS] -> [Kernel Driver] -> [Ceph-Kernel Client]
>    -> [MON], [MDS], [OSD]
>
>  2) RBD
>   [Apps] -> [VFS] -> [librbd] -> [librados] -> [MON], [OSD]
>
> Considering above, there could be more bottleneck in 1) than 2),
> I think.
>
> What do you think?

The bottleneck I'm talking about is when you share the filesystem
between many guests.  In the RBD image case, you would have a single
NFS server, through which all the data and metadata would have to
flow: that becomes a limiting factor.  In the CephFS case, the clients
can talk to the MDS and OSD daemons individually, without having to
flow through one NFS server.

The preference depends on the use case: the benefits of a shared
filesystem like CephFS don't become apparent until you have lots of
guests using the same shared filesystem.  I'd expect people to keep
using Cinder+RBD for cases where a filesystem is just exposed to one
guest at a time.

>>  3.What are you thinking of integration with OpenStack using
>>   a new implementation?
> >>   Since it's going to be a new kind of thing, there should be a
> >>   different architecture.
>
> Sorry, it's just too ambiguous. Frankly how are you going to
> implement such a new feature, was my question.
>
> Make sense?

Right now this is just about building Manila drivers to enable use of
Ceph, rather than re-architecting anything.  A user would create a
conventional Ceph cluster and a conventional OpenStack cluster, this
is just about enabling the use of the two together via Manila (i.e. to
do for CephFS/Manila what is already done for RBD/Cinder).

I expect there will be more discussion later about exactly what the
NFS layer will look like, though we can start with the simple case of
creating a guest VM that acts as a gateway.
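The driver shape described in this thread (shares are directories inside one
cluster-wide filesystem; the export location hands the client the monitor
addresses, the share path and a path-restricted auth token; deletion is a
move into a trash directory) can be sketched roughly as below. This is an
illustration only: the class and method names are invented, and the real
driver inherits from Manila's ShareDriver base class.

```python
# Illustrative sketch only -- not the actual manila CephFS driver code.

class CephFSNativeDriverSketch:
    """Shares are directories inside one cluster-wide CephFS filesystem."""

    def __init__(self, mon_addrs):
        # Ceph monitor addresses that get handed back to clients.
        self.mon_addrs = mon_addrs

    def create_share(self, name):
        path = '/volumes/%s' % name
        # In the real driver a per-share cephx token restricts the client
        # to this path (Ceph does not do IP-based access control).
        token = '<cephx-key-for-%s>' % name
        return {'mon_addrs': self.mon_addrs, 'path': path, 'token': token}

    def delete_share(self, name):
        # Deletion is currently just a move into a 'trash' directory;
        # background cleanup (the "rm -rf" cost) is future work.
        return '/trash/%s' % name
```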

>>  4.Is this implementation intended for OneStack integration
>>   mainly?
>
> Yes, that's just my typo -;
>
>  OneStack -> OpenStack

Naturally the Manila part is just for openstack.  However, some of the
utility parts (e.g. the "VolumeClient" class) might get re-used in
other systems that require a similar concept (like containers, other
clouds).

John

>
>
>> This piece of work is specifically about Manila; general improvements
>> in Ceph integration would be a different topic.
>
> That's interesting to me.
>
> Shinobu
>
> ----- Original Message -----
> From: "John Spray" <jspray at redhat.com>
> To: "Shinobu Kinjo" <skinjo at redhat.com>
> Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Friday, September 25, 2015 5:51:36 PM
> Subject: Re: [openstack-dev] [Manila] CephFS native driver
>
> On Fri, Sep 25, 2015 at 8:04 AM, Shinobu Kinjo <skinjo at redhat.com> wrote:
>> So here are questions from my side.
>> Just question.
>>
>>
>>  1.What is the biggest advantage comparing others such as RDB?
>>   We should be able to implement what you are going to do in
>>   existing module, shouldn't we?
>
> I guess you mean compared to using a local filesystem on top of RBD,
> and exporting it over NFS?  The main distinction here is that for
> native CephFS clients, they get a shared filesystem where all the
> clients can talk to all the Ceph OSDs directly, and avoid the
> potential bottleneck of an NFS->local fs->RBD server.
>
> Workloads requiring a local filesystem would probably continue to map
> a cinder block device and use that.  The Manila driver is intended for
> use cases that require a shared filesystem.
>
>>  2.What are you going to focus on with a new implementation?
>>   It seems to be to use NFS in front of that implementation
>>   with more transparently.
>
> The goal here is to make cephfs accessible to people by making it easy
> to provision it for their applications, just like Manila in general.
> The motivation for putting an NFS layer in front of CephFS is to make
> it easier for people to adopt, because they won't need to install any
> ceph-specific code in their guests.  It will also be easier to
> support, because any ceph client bugfixes would not need to be
> installed within guests (if we assume existing nfs clients are bug
> free :-))
>
>>  3.What are you thinking of integration with OpenStack using
>>   a new implementation?
>>   Since it's going to be a new kind of thing, there should be a
>>   different architecture.
>
> Not sure I understand this question?
>
>>  4.Is this implementation intended for OneStack integration
>>   mainly?
>
> Nope (I had not heard of onestack before).
>
>> Since velocity of OpenStack feature expansion is much more than
>> it used to be, it's much more important to think of performance.
>
>> Is a new implementation also going to improve Ceph integration
>> with OpenStack system?
>
> This piece of work is specifically about Manila; general improvements
> in Ceph integration would be a different topic.
>
> Thanks,
> John
>
>>
>> Thank you so much for your explanation in advance.
>>
>> Shinobu
>>
>> ----- Original Message -----
>> From: "John Spray" <jspray at redhat.com>
>> To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
>> Sent: Thursday, September 24, 2015 10:49:17 PM
>> Subject: [openstack-dev] [Manila] CephFS native driver
>>
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>>
>> It requires a special branch of ceph which is here:
>> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>>
>> This isn't done yet (hence this email rather than a gerrit review),
>> but I wanted to give everyone a heads up that this work is going on,
>> and a brief status update.
>>
>> This is the 'native' driver in the sense that clients use the CephFS
>> client to access the share, rather than re-exporting it over NFS.  The
>> idea is that this driver will be useful for anyone who has such
>> clients, as well as acting as the basis for a later NFS-enabled
>> driver.
>>
>> The export location returned by the driver gives the client the Ceph
>> mon IP addresses, the share path, and an authentication token.  This
>> authentication token is what permits the clients access (Ceph does not
>> do access control based on IP addresses).
>>
>> It's just capable of the minimal functionality of creating and
>> deleting shares so far, but I will shortly be looking into hooking up
>> snapshots/consistency groups, albeit for read-only snapshots only
>> (cephfs does not have writable snapshots).  Currently deletion is
>> just a move into a 'trash' directory; the idea is to add something
>> later that cleans this up in the background: the downside to the
>> "shares are just directories" approach is that clearing them up has a
>> "rm -rf" cost!
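The "shares are just directories" lifecycle described above can be sketched roughly as follows (the trash-directory layout and helper names are illustrative assumptions, not the actual driver code):

```python
import os
import shutil
import uuid


def create_share(root, share_name):
    """Create a share by making a directory inside the cluster filesystem."""
    path = os.path.join(root, share_name)
    os.makedirs(path)
    return path


def delete_share(root, share_name, trash=".trash"):
    """Deletion is just a rename into a trash directory.

    A rename is cheap and atomic; the expensive recursive removal is
    deferred to a background cleanup pass.
    """
    trash_dir = os.path.join(root, trash)
    os.makedirs(trash_dir, exist_ok=True)
    os.rename(os.path.join(root, share_name),
              os.path.join(trash_dir, "%s-%s" % (share_name, uuid.uuid4())))


def purge_trash(root, trash=".trash"):
    """Background cleanup: this is where the 'rm -rf' cost is paid."""
    trash_dir = os.path.join(root, trash)
    for entry in os.listdir(trash_dir):
        shutil.rmtree(os.path.join(trash_dir, entry))
```

The point of the split is that the client-visible delete returns quickly, while the O(files) removal happens out of band.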
>>
>> A note on the implementation: cephfs recently got the ability (not yet
>> in master) to restrict client metadata access based on path, so this
>> driver is simply creating shares by creating directories within a
>> cluster-wide filesystem, and issuing credentials to clients that
>> restrict them to their own directory.  They then mount that subpath,
>> so that from the client's point of view it's like having their own
>> filesystem.  We also have a quota mechanism that I'll hook in later to
>> enforce the share size.
>>
>> Currently the security here requires clients (i.e. the ceph-fuse code
>> on client hosts, not the userspace applications) to be trusted, as
>> quotas are enforced on the client side.  The OSD access control
>> operates on a per-pool basis, and creating a separate pool for each
>> share is inefficient.  In the future it is expected that CephFS will
>> be extended to support file layouts that use RADOS namespaces, which
>> are cheap, such that we can issue a new namespace to each share and
>> enforce the separation between shares on the OSD side.
>>
>> However, for many people the ultimate access control solution will be
>> to use a NFS gateway in front of their CephFS filesystem: it is
>> expected that an NFS-enabled cephfs driver will follow this native
>> driver in the not-too-distant future.
>>
>> This will be my first openstack contribution, so please bear with me
>> while I come up to speed with the submission process.  I'll also be in
>> Tokyo for the summit next month, so I hope to meet other interested
>> parties there.
>>
>> All the best,
>> John
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From thomas.morin at orange.com  Fri Sep 25 12:37:14 2015
From: thomas.morin at orange.com (thomas.morin at orange.com)
Date: Fri, 25 Sep 2015 14:37:14 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <5605100D.4070506@redhat.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
Message-ID: <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>

Kevin, Miguel,

I agree that (4) is what makes most sense.
(more below)

Miguel Angel Ajo :
> Do you have a rough idea of what operations you may need to do?

Right now, what the bagpipe driver for networking-bgpvpn needs to interact 
with is:
- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP entries)
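For the sake of concreteness, a facade in the spirit of option (4) discussed in this thread, exposing only the items listed above, might look something like this (all names here are illustrative, not an agreed-upon API):

```python
class OVSAgentExtensionAPI(object):
    """Facade handed to L2 agent extensions so they never touch the
    agent object directly (illustrative sketch only).

    The agent instantiates this and passes it to the extension manager,
    which in turn passes it to each extension's initialize().
    """

    def __init__(self, int_br, tun_br, local_vlan_map, arp_responder):
        self._int_br = int_br
        self._tun_br = tun_br
        self._local_vlan_map = local_vlan_map
        self._arp_responder = arp_responder

    def request_int_br(self):
        # Read-only access to the integration bridge.
        return self._int_br

    def request_tun_br(self):
        # Extensions may add patch ports / flows on the tunnel bridge.
        return self._tun_br

    def get_local_vlan(self, network_id):
        # Read-only lookup into the agent's local_vlan_map.
        return self._local_vlan_map.get(network_id)

    def add_arp_entry(self, ip, mac):
        # Delegates to the agent's setup_entry_for_arp_reply.
        self._arp_responder(ip, mac)
```

Because extensions only see this object, the agent internals can be refactored freely as long as the facade's contract is preserved.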

> Please bear in mind, the extension interface will be available from 
> different agent types
> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about 
> could also serve as
> a translation driver for the agents (where the translation is 
> possible), I totally understand
> that most extensions are specific agent bound, and we must be able to 
> identify
> the agent we're serving back exactly.

Yes, I do have this in mind, but what we've identified for now seems to 
be OVS specific.

-Thomas



> Kevin Benton wrote:
>> I think the 4th of the options you proposed would be the best. We don't
>> want to give agents direct access to the agent object or else we will 
>> run
>> the risk of breaking extensions all of the time during any kind of
>> reorganization or refactoring. Having a well defined API in between will
>> give us flexibility to move things around.
>>
>> On Fri, Sep 25, 2015 at 1:32 AM,<thomas.morin at orange.com> wrote:
>>
>>> Hi everyone,
>>>
>>> (TL;DR: we would like an L2 agent extension to be able to call 
>>> methods on
>>> the agent class, e.g. OVSAgent)
>>>
>>> In the networking-bgpvpn project, we need the reference driver to 
>>> interact
>>> with the ML2 openvswitch agent with new RPCs to allow exchanging
>>> information with the BGP VPN implementation running on the compute 
>>> nodes.
>>> We also need the OVS agent to setup specific things on the OVS 
>>> bridges for
>>> MPLS traffic.
>>>
>>> To extend the agent behavior, we currently create a new agent by
>>> mimicking the main() in ovs_neutron_agent.py, but instead of
>>> instantiating OVSAgent, we instantiate a class that overloads the
>>> OVSAgent class with the additional behavior we need [1].
>>>
>>> This is really not the ideal way of extending the agent, and we would
>>> prefer using the L2 agent extension framework [2].
>>>
>>> Using the L2 agent extension framework would work, but only
>>> partially: it would easily allow us to register our RPC consumers,
>>> but not give us access to some data structures/methods of the agent
>>> that we need to use: setup_entry_for_arp_reply and local_vlan_map,
>>> and access to the OVSBridge objects to manipulate OVS ports.
>>>
>>> I've filed an RFE bug to track this issue [5].
>>>
>>> We would like something like one of the following:
>>> 1) augment the L2 agent extension interface 
>>> (AgentCoreResourceExtension)
>>> to give access to the agent object (and thus let the extension call 
>>> methods
>>> of the agent) by giving the agent as a parameter of the initialize 
>>> method
>>> [4]
>>> 2) augment the L2 agent extension interface 
>>> (AgentCoreResourceExtension)
>>> to give access to the agent object (and thus let the extension call 
>>> methods
>>> of the agent) by giving the agent as a parameter of a new setAgent 
>>> method
>>> 3) augment the L2 agent extension interface 
>>> (AgentCoreResourceExtension)
>>> to give access only to specific/chosen methods on the agent object, for
>>> instance by giving a dict as a parameter of the initialize method [4],
>>> whose keys would be method names, and values would be pointer to these
>>> methods on the agent object
>>> 4) define a new interface with methods to access things inside the 
>>> agent,
>>> this interface would be implemented by an object instantiated by the 
>>> agent,
>>> and that the agent would pass to the extension manager, thus 
>>> allowing the
>>> extension manager to pass the object to an extension through the
>>> initialize method of AgentCoreResourceExtension [4]
>>>
>>> Any feedback on these ideas...?
>>> Of course any other idea is welcome...
>>>
>>> For the sake of triggering reaction, the question could be rephrased 
>>> as:
>>> if we submit a change doing (1) above, would it have a reasonable 
>>> chance of
>>> merging?
>>>
>>> -Thomas
>>>
>>> [1]
>>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py 
>>>
>>> [2] https://review.openstack.org/#/c/195439/
>>> [3]
>>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30 
>>>
>>> [4]
>>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28 
>>>
>>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>>
>>> _________________________________________________________________________________________________________________________ 
>>>
>>>
>>> Ce message et ses pieces jointes peuvent contenir des informations 
>>> confidentielles ou privilegiees et ne doivent donc
>>> pas etre diffuses, exploites ou copies sans autorisation. Si vous 
>>> avez recu ce message par erreur, veuillez le signaler
>>> a l'expediteur et le detruire ainsi que les pieces jointes. Les 
>>> messages electroniques etant susceptibles d'alteration,
>>> Orange decline toute responsabilite si ce message a ete altere, 
>>> deforme ou falsifie. Merci.
>>>
>>> This message and its attachments may contain confidential or 
>>> privileged information that may be protected by law;
>>> they should not be distributed, used or copied without authorisation.
>>> If you have received this email in error, please notify the sender 
>>> and delete this message and its attachments.
>>> As emails may be altered, Orange is not liable for messages that 
>>> have been modified, changed or falsified.
>>> Thank you.





From dstanek at dstanek.com  Fri Sep 25 12:54:41 2015
From: dstanek at dstanek.com (David Stanek)
Date: Fri, 25 Sep 2015 08:54:41 -0400
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAJciqMw-tin_gF_1JqC+X-0kaph2m3Y+jDq9LFC08R_wrzitgQ@mail.gmail.com>
References: <CAJciqMw-tin_gF_1JqC+X-0kaph2m3Y+jDq9LFC08R_wrzitgQ@mail.gmail.com>
Message-ID: <CAO69Ndm6N36QqDqPUYXfQTiwUM3ZSwEi_yL+8Y0unWtTBCBUfA@mail.gmail.com>

On Fri, Sep 25, 2015 at 8:25 AM Adam Heczko <aheczko at mirantis.com> wrote:

> Are we discussing mod_wsgi and Keystone or OpenStack as a general?
> If Keystone specific use case, then probably Apache provides broadest
> choice of tested external authenticators.
> I'm not against uwsgi at all, but to be honest expectation that nginx
> could substitute Apache in terms of authentication providers is simply
> unrealistic.
>

uwsgi isn't a replacement for Apache. It's a replacement for mod_wsgi. It
just so happens that it also lets you use other web servers if that's what
your use case dictates.

As a Keystone developer I don't want to tell deployers that they have to
use Apache. It should be their choice. Since Apache is the most common web
server in our community I think we should continue to provide example
configurations and guidance for it.
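To illustrate the deployment choice being discussed, a minimal uwsgi configuration serving Keystone without Apache might look like the following (the wsgi-file path, port, and worker counts are assumptions that vary by distribution and release):

```ini
[uwsgi]
; Serve Keystone's public WSGI application directly, no Apache involved
wsgi-file = /usr/local/bin/keystone-wsgi-public
http-socket = 0.0.0.0:5000
processes = 4
threads = 2
master = true
; Behind nginx/haproxy you would use a uwsgi or unix socket instead:
; socket = /var/run/uwsgi/keystone-public.sock
```

The same application entry point could equally sit behind Apache/mod_wsgi, which is the point: the choice of front end belongs to the deployer.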


-- David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/1473fefd/attachment.html>

From zhjwpku at gmail.com  Fri Sep 25 13:10:30 2015
From: zhjwpku at gmail.com (John Hunter)
Date: Fri, 25 Sep 2015 21:10:30 +0800
Subject: [openstack-dev] VDI questions
In-Reply-To: <CAEG8a3KdteosCfhgnd=PrcD202TXG-y590on0M_ri4YzAvPm5Q@mail.gmail.com>
References: <CAEG8a3KdteosCfhgnd=PrcD202TXG-y590on0M_ri4YzAvPm5Q@mail.gmail.com>
Message-ID: <CAEG8a3JUgaXZui3kyon1XTST7XGkqXFaBWFVyNGS7i7w=E3zfw@mail.gmail.com>

As far as I know, there are some vendors that provide VDI: VMware View,
Citrix XenServer, Amazon WorkSpaces. Is there a corresponding project
in OpenStack?

Please help!

On Fri, Sep 25, 2015 at 8:34 PM, John Hunter <zhjwpku at gmail.com> wrote:

> Hi guys,
> I am new to OpenStack, so I want to ask: is there a project that
> works on VDI (Virtual Desktop Infrastructure) based on OpenStack?
> I want to dig into it, please help.
>
> Sincerely,
> Zhao
>
> --
> Best regards
> Junwang Zhao
> Department of Computer Science &Technology
> Peking University
> Beijing, 100871, PRC
>



-- 
Best regards
Junwang Zhao
Department of Computer Science &Technology
Peking University
Beijing, 100871, PRC
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/08e49d0a/attachment.html>

From sgolovatiuk at mirantis.com  Fri Sep 25 13:16:20 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Fri, 25 Sep 2015 15:16:20 +0200
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAO69Ndm6N36QqDqPUYXfQTiwUM3ZSwEi_yL+8Y0unWtTBCBUfA@mail.gmail.com>
References: <CAJciqMw-tin_gF_1JqC+X-0kaph2m3Y+jDq9LFC08R_wrzitgQ@mail.gmail.com>
 <CAO69Ndm6N36QqDqPUYXfQTiwUM3ZSwEi_yL+8Y0unWtTBCBUfA@mail.gmail.com>
Message-ID: <CA+HkNVu6iCcKvHwoXLqDD+nPQdbSaYnydbsn7AHTnJmEBYAfPg@mail.gmail.com>

David,

I am thinking about how all OpenStack components may coexist as mod_wsgi
processes under Apache. In Fuel we ran into a problem with deployments
using graceful restart [1], so this thread was raised to find good WSGI
alternatives.

[1] https://github.com/GrahamDumpleton/mod_wsgi/pull/95







--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/0b3203ec/attachment.html>

From aheczko at mirantis.com  Fri Sep 25 13:25:19 2015
From: aheczko at mirantis.com (Adam Heczko)
Date: Fri, 25 Sep 2015 15:25:19 +0200
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CAO69Ndm6N36QqDqPUYXfQTiwUM3ZSwEi_yL+8Y0unWtTBCBUfA@mail.gmail.com>
References: <CAJciqMw-tin_gF_1JqC+X-0kaph2m3Y+jDq9LFC08R_wrzitgQ@mail.gmail.com>
 <CAO69Ndm6N36QqDqPUYXfQTiwUM3ZSwEi_yL+8Y0unWtTBCBUfA@mail.gmail.com>
Message-ID: <CAJciqMzoFeKAhTFzhEc4AkAsoBHadehRibisFKFye3mVzEDOmg@mail.gmail.com>

OK, sorry I mixed up nginx and uwsgi :)

A.

On Fri, Sep 25, 2015 at 2:54 PM, David Stanek <dstanek at dstanek.com> wrote:

>
> On Fri, Sep 25, 2015 at 8:25 AM Adam Heczko <aheczko at mirantis.com> wrote:
>
>> Are we discussing mod_wsgi and Keystone or OpenStack as a general?
>> If Keystone specific use case, then probably Apache provides broadest
>> choice of tested external authenticators.
>> I'm not against uwsgi at all, but to be honest expectation that nginx
>> could substitute Apache in terms of authentication providers is simply
>> unrealistic.
>>
>
> uwsgi isn't a replacement for Apache. It's a replacement for mod_wsgi. It
> just so happens that it also lets you use other web servers if that's what
> your use case dictates.
>
> As a Keystone developer I don't want to tell deployers that they have to
> use Apache. It should be their choice. Since Apache is the most common web
> server in our community I think we should continue to provide example
> configurations and guidance for it.
>
>
> -- David
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/36dc01b2/attachment.html>

From ihrachys at redhat.com  Fri Sep 25 13:32:54 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 25 Sep 2015 15:32:54 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
	access agent methods ?
In-Reply-To: <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
 <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>
Message-ID: <D67601DC-EA26-4315-A8D0-96D0F1593799@redhat.com>


> On 25 Sep 2015, at 14:37, thomas.morin at orange.com wrote:
> 
> Kevin, Miguel,
> 
> I agree that (4) is what makes most sense.
> (more below)
> 
> Miguel Angel Ajo :
>> Do you have a rough idea of what operations you may need to do?
> 
> Right now, what the bagpipe driver for networking-bgpvpn needs to interact with is:
> - int_br OVSBridge (read-only)
> - tun_br OVSBridge (add patch port, add flows)
> - patch_int_ofport port number (read-only)
> - local_vlan_map dict (read-only)
> - setup_entry_for_arp_reply method (called to add static ARP entries)
> 

Sounds very tightly coupled to OVS agent.

>> Please bear in mind, the extension interface will be available from different agent types
>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about could also serve as
>> a translation driver for the agents (where the translation is possible), I totally understand
>> that most extensions are specific agent bound, and we must be able to identify
>> the agent we're serving back exactly.
> 
> Yes, I do have this in mind, but what we've identified for now seems to be OVS specific.

Indeed it does. Maybe you can try to define the needed pieces as high-level actions, not the internal objects you need access to. Like "connect endpoint X to Y", "determine segmentation id for a network", etc.
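A rough sketch of what such a high-level, agent-agnostic interface could look like (the method names here are made up for illustration; nothing like this has been agreed):

```python
import abc


class L2AgentAPI(abc.ABC):
    """Agent-agnostic actions an L2 extension could request, instead of
    reaching into agent internals (illustrative sketch only)."""

    @abc.abstractmethod
    def connect_endpoints(self, endpoint_a, endpoint_b):
        """Connect endpoint X to Y (e.g. an OVS patch-port pair)."""

    @abc.abstractmethod
    def get_segmentation_id(self, network_id):
        """Determine the segmentation id for a network."""


class FakeOVSAgentAPI(L2AgentAPI):
    """Trivial in-memory stand-in showing how an OVS agent might back
    the interface with its local_vlan_map."""

    def __init__(self, local_vlan_map):
        self._local_vlan_map = local_vlan_map
        self.patches = []

    def connect_endpoints(self, endpoint_a, endpoint_b):
        # A real OVS implementation would add patch ports and flows here.
        self.patches.append((endpoint_a, endpoint_b))

    def get_segmentation_id(self, network_id):
        return self._local_vlan_map[network_id]
```

An SR-IOV or Linux bridge agent could implement the same abstract methods with its own mechanics, which is what makes the high-level formulation portable across agent types.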

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/90ccb054/attachment.pgp>

From ihrachys at redhat.com  Fri Sep 25 13:39:07 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 25 Sep 2015 15:39:07 +0200
Subject: [openstack-dev] [all][stable][release] 2015.1.2
In-Reply-To: <5604F16A.6010807@redhat.com>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
 <20150924073107.GF24386@sofja.berg.ol>
 <CAGi==UXRm7mARJecBT69qqQMfOycdx_crVf-OCD_x+O9z2J2nw@mail.gmail.com>
 <5604F16A.6010807@redhat.com>
Message-ID: <50053AD0-B264-4450-A772-E15B21A24506@redhat.com>

> On 25 Sep 2015, at 09:02, Matthias Runge <mrunge at redhat.com> wrote:
> 
> On 25/09/15 00:17, Alan Pevec wrote:
>>> For Horizon, it would make sense to move this a week back. We discovered
>>> a few issues in Liberty, which are present in current kilo, too. I'd
>>> love to cherry-pick a few of them to kilo.
>> 
>> What are LP bug#s ? Are you saying that fixes are still work in
>> progress on master and not ready for backporting yet?
>> 
> Current backports:
> https://review.openstack.org/#/q/status:open+project:openstack/horizon+branch:stable/kilo,n,z
> 
> They are tagged with lp bugs currently under review. As you see, they
> are in (slow) progress.
> 
> Matthias

I see you have only three people in the horizon-stable-maint team. Have you considered expanding the team with more folks? In neutron, we have five people in the stable-maint group.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/185366de/attachment.pgp>

From doug at doughellmann.com  Fri Sep 25 13:52:35 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 25 Sep 2015 09:52:35 -0400
Subject: [openstack-dev] [election][tc] TC Candidacy
Message-ID: <1443189079-sup-9836@lrrr.local>

I am announcing my candidacy for a position on the OpenStack Technical
Committee.

I am currently employed by HP to work 100% upstream on OpenStack. I
started contributing to OpenStack in 2012, not long after joining
Dreamhost. In the time since, I have worked on a wide variety of
projects within the community. I am one of the founding members of the
Ceilometer and unified command line projects. I am also part of the
team working on the Python 3 transition, and have contributed to
several of the infrastructure projects. I formally joined the Release
Management team at the start of Liberty, and have been working on
updating processes and tools to make releases easier for all project
teams. I will be PTL of the Release Management team for the Mitaka
cycle.  I served as PTL for the Oslo project for three terms, and I
have served on the Technical Committee for the last two years. In
addition to my technical contributions, I helped to found and still
help to organize the OpenStack meetup group in Atlanta, Georgia.

I characterize most of my recent work as enabling others in the
community. As Oslo PTL I helped the team complete its transition from
copy-and-paste sharing to true shared libraries by creating new
processes and tools. Ceilometer was one of the earliest projects to
need to interact with a significant number of the other projects at a
code level, and my experience on that team led me to establish the
Oslo liaison program when we recognized a similar need as adoption of
Oslo libraries expanded. That pattern has been reused by many of the
other cross-project teams to establish formal lines of communication
to address our community's growth. The self-service release review
tools the release management and infrastructure teams are building now
are another effort to remove process bottlenecks by enabling project
teams to manage their own releases.

During my past terms on the TC, I worked to find compromise positions
and clarity, incorporating the views of other committee members while
tempering them with my own. Several times I wrote the first draft of
policy changes on contentious topics, moving abstract arguments to
discussions of concrete terms. This was especially true for the shift
from the incubation to "big tent" governance models late in 2014. I
prepared several alternate versions of the policy changes before an
approach was found that appealed to all committee members.  Iteration
for the win.

All of these experiences have given me a unique cross-project
perspective into OpenStack, and reinforced for me the importance of
communication between project teams to smooth out the integration
points and remove friction.  Improving communication and cross-project
efforts is a key role for the Technical Committee. For example, during
Mitaka I will be working to support the interoperability goals of the
community by ensuring the DefCore committee have and understand all of
the input from project contributors to set the right technical
direction, and that the contributors in turn understand the issues
raised from within the DefCore committee so we can add or modify
features to improve interoperability between OpenStack deployments.

The OpenStack community is the most exciting and welcoming group I
have interacted with in more than 20 years of contributing to open
source projects. I'm looking forward to continuing to being a part of
the community and serving the project.

Thank you,
Doug

Official Candidacy: https://review.openstack.org/#/c/227857/
Review history: https://review.openstack.org/#/q/reviewer:2472,n,z
Commit history: https://review.openstack.org/#/q/owner:2472,n,z
Stackalytics: http://stackalytics.com/?user_id=doug-hellmann
Foundation: http://www.openstack.org/community/members/profile/359
OpenHUB: https://www.openhub.net/accounts/doughellmann
Freenode: dhellmann
Website: https://doughellmann.com


From doug at doughellmann.com  Fri Sep 25 13:59:07 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 25 Sep 2015 09:59:07 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
Message-ID: <1443189345-sup-9818@lrrr.local>

Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +0000:
> > 
> > On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
> > 
> > Hi Melanie
> > 
> > In general, images created by glance v1 API should be accessible using v2 and
> > vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
> > causing incompatibility. These fixes were back-ported to stable/kilo.
> > 
> > Thanks
> > Sabari
> > 
> > [1] - https://bugs.launchpad.net/glance/+bug/1447215
> > [2] - https://bugs.launchpad.net/bugs/1419823 
> > [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
> > 
> > 
> > On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
> > Hi All,
> > 
> > I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
> > 
> > From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
> > 
> > I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
> 
> Just to clarify the DefCore situation a bit here: 
> 
> The DefCore Committee is considering adding some Glance v2
capabilities [1] as "advisory" (e.g. not required now but might be
in the future unless folks provide feedback as to why it shouldn't
be) in its next Guideline, which is due to go to the Board of Directors
in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
APIs are already required [3][4].  As discussion began about which
Glance capabilities to include and whether or not to keep the Nova
image APIs as required, it was pointed out that the many ways images
can currently be created in OpenStack are problematic from an
interoperability point of view, in that some clouds use one and some use
others.  To be included in a DefCore Guideline, capabilities are scored
against twelve Criteria [5], and need to achieve a certain total to be
included.  Having a bunch of different ways to deal with images
actually hurts the chances of any one of them meeting the bar, because
it makes it less likely that they'll achieve several criteria.  For
example:
> 
> One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one cloud uses none of those but instead uses the import task API.
> 
> Another criterion is "atomic" [8], which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as glance v1's and v2's image-create APIs, the latter lose points.

This seems backwards. The Nova API doesn't "do the same thing" as
the Glance API, it is a *proxy* for the Glance API. We should not
be requiring proxy APIs for interop. DefCore should only be using
tests that talk directly to the service that owns the feature being
tested.

Doug

> 
> Another criterion is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.
> 
> There is also a criterion for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears that only one besides the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption, since Nova still uses the v1 API to talk to Glance and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release. [13]
> 
> So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week).  It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose it at least internally to the cloud.  But... really?  That's still sort of an ugly position to be in, because at the end of the day that's still a lot more moving parts than are really necessary, and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.
> 
> So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]  
> 
> [1] https://review.openstack.org/#/c/213353/
> [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
> [3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
> [5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
> [6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
> [7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
> [8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
> [9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
> [10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
> [11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
> [12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
> [13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015
> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > Thanks,
> > -melanie (irc: melwitt)
> > 
> > 
> > 
> > 
> > 
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From doug at doughellmann.com  Fri Sep 25 14:01:07 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 25 Sep 2015 10:01:07 -0400
Subject: [openstack-dev] [oslo] LnP tests for oslo.messaging
In-Reply-To: <147889938968B74596B6EE7D57BEB44A01D4E5A1@PHX-EXRDA-S62.corp.ebay.com>
References: <147889938968B74596B6EE7D57BEB44A01D4E5A1@PHX-EXRDA-S62.corp.ebay.com>
Message-ID: <1443189637-sup-5937@lrrr.local>

Excerpts from Huang, Oscar's message of 2015-09-25 02:32:02 +0000:
> Hi, 
>     We would like to set up an LnP test environment for oslo.messaging and RabbitMQ so that we can continuously track the stability and performance impact of oslo/kombu/amqp library changes and MQ upgrades. 
>     I wonder whether there are some existing packs of test cases that can be used as the workload. 
>     Basically we want to emulate the running state of a Nova cell with a large number of computes (>1000), but focus only on the messaging subsystem. 
>     Thanks.
> 
> 
> Best wishes, 
> 
> Oscar (???)
> 

I'm not familiar with the acronym "LnP", could you describe what that
means?

Doug
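As an aside, the shape of the workload Oscar describes can be sketched without any messaging library at all: many "compute agents" fanning reports into one bus while a consumer measures throughput. This is a hypothetical, stdlib-only stand-in (queue.Queue in place of oslo.messaging + RabbitMQ), just to illustrate the harness shape, not a real LnP rig:

```python
# Hypothetical sketch of an LnP-style workload: N compute "agents"
# report state over a shared bus; the consumer measures throughput.
# A real rig would use oslo.messaging over RabbitMQ, not queue.Queue.
import queue
import threading
import time

def run_workload(n_agents=100, reports_per_agent=10):
    bus = queue.Queue()

    def agent(agent_id):
        # Each agent emits a fixed number of status reports.
        for seq in range(reports_per_agent):
            bus.put({"agent": agent_id, "seq": seq})

    threads = [threading.Thread(target=agent, args=(i,))
               for i in range(n_agents)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    # Consume everything the agents produce, counting messages.
    expected = n_agents * reports_per_agent
    received = 0
    while received < expected:
        bus.get()
        received += 1
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return received, elapsed

msgs, secs = run_workload()
```

Scaling n_agents toward the >1000 computes Oscar mentions (and swapping the queue for a real transport) turns this into the kind of continuous stability/performance probe he is after.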


From andrew at lascii.com  Fri Sep 25 14:12:55 2015
From: andrew at lascii.com (Andrew Laski)
Date: Fri, 25 Sep 2015 10:12:55 -0400
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
Message-ID: <20150925141255.GG8745@crypt>

On 09/24/15 at 03:13pm, James Penick wrote:
>>
>>
>> At risk of getting too offtopic I think there's an alternate solution to
>> doing this in Nova or on the client side.  I think we're missing some sort
>> of OpenStack API and service that can handle this.  Nova is a low level
>> infrastructure API and service, it is not designed to handle these
>> orchestrations.  I haven't checked in on Heat in a while but perhaps this
>> is a role that it could fill.
>>
>> I think that too many people consider Nova to be *the* OpenStack API when
>> considering instances/volumes/networking/images and that's not something I
>> would like to see continue.  Or at the very least I would like to see a
>> split between the orchestration/proxy pieces and the "manage my
>> VM/container/baremetal" bits
>
>
>(new thread)
> You've hit on one of my biggest issues right now: As far as many deployers
>and consumers are concerned (and definitely what I tell my users within
>Yahoo): The value of an OpenStack value-stream (compute, network, storage)
>is to provide a single consistent API for abstracting and managing those
>infrastructure resources.
>
> Take networking: I can manage Firewalls, switches, IP selection, SDN, etc
>through Neutron. But for compute, if I want a VM I go through Nova, for
>Baremetal I can -mostly- go through Nova, and for containers I would talk
>to Magnum or use something like the nova docker driver.
>
> This means that, by default, Nova -is- the closest thing to a top level
>abstraction layer for compute. But if that is explicitly against Nova's
>charter, and Nova isn't going to be the top level abstraction for all
>things Compute, then something else needs to fill that space. When that
>happens, all things common to compute provisioning should come out of Nova
>and move into that new API. Availability zones, Quota, etc.

I do think Nova is the top level abstraction layer for compute.  My 
issue is when Nova is asked to manage other resources.  There's no API 
call to tell Cinder "create a volume and attach it to this instance, and 
create that instance if it doesn't exist."  And I'm not sure why the 
reverse isn't true.

I want Nova to be the absolute best API for managing compute resources.  
It's when someone is managing compute and volumes and networks together 
that I don't feel that Nova is the best place for that.  Most 
importantly, right now it seems that not everyone is on the same page on 
this, and I think it would be beneficial to come together and figure out 
what sorts of workloads the Nova API is intended to support.

>
>-James

>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From openstack at nemebean.com  Fri Sep 25 14:35:20 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 25 Sep 2015 09:35:20 -0500
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <56055BA8.5070305@nemebean.com>

Not sure if my vote still counts, but +1. :-)

On 09/24/2015 12:12 PM, Doug Hellmann wrote:
> Oslo team,
> 
> I am nominating Brant Knudson for Oslo core.
> 
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
> 
> Please indicate your opinion by responding with +1/-1 as usual.
> 
> Doug
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From andrew at lascii.com  Fri Sep 25 14:42:16 2015
From: andrew at lascii.com (Andrew Laski)
Date: Fri, 25 Sep 2015 10:42:16 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443189345-sup-9818@lrrr.local>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
Message-ID: <20150925144216.GH8745@crypt>

On 09/25/15 at 09:59am, Doug Hellmann wrote:
>Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +0000:
>> >
>> > On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
>> >
>> > Hi Melanie
>> >
>> > In general, images created by glance v1 API should be accessible using v2 and
>> > vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
>> > causing incompatibility. These fixes were back-ported to stable/kilo.
>> >
>> > Thanks
>> > Sabari
>> >
>> > [1] - https://bugs.launchpad.net/glance/+bug/1447215
>> > [2] - https://bugs.launchpad.net/bugs/1419823
>> > [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>> >
>> >
>> > On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
>> > Hi All,
>> >
>> > I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>> >
>> > From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>> >
>> > I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>>
>> Just to clarify the DefCore situation a bit here:
>>
>> The DefCore Committee is considering adding some Glance v2
>capabilities [1] as "advisory" (e.g. not required now but might be
>in the future unless folks provide feedback as to why it shouldn't
>be) in its next Guideline, which is due to go to the Board of Directors
>in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
>APIs are already required [3][4].  As discussion began about which
>Glance capabilities to include and whether or not to keep the Nova
>image APIs as required, it was pointed out that the many ways images
>can currently be created in OpenStack are problematic from an
>interoperability point of view, in that some clouds use one and some use
>others.  To be included in a DefCore Guideline, capabilities are scored
>against twelve Criteria [5], and need to achieve a certain total to be
>included.  Having a bunch of different ways to deal with images
>actually hurts the chances of any one of them meeting the bar because
>it makes it less likely that they'll achieve several criteria.  For
>example:
>>
>> One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one cloud uses none of those but instead uses the import task API.
>>
>> Another criterion is "atomic" [8], which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as glance v1 and v2's image create APIs, the latter lose points.
>
>This seems backwards. The Nova API doesn't "do the same thing" as
>the Glance API, it is a *proxy* for the Glance API. We should not
>be requiring proxy APIs for interop. DefCore should only be using
>tests that talk directly to the service that owns the feature being
>tested.

I completely agree with this.  I will admit to having some confusion as 
to why Glance capabilities have been tested through Nova and I know 
others have raised this same thought within the process.

>
>Doug
>
>>
>> Another criterion is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.
>>
>> There are also criteria for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears that only one other than the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption, since Nova still uses the v1 API to talk to Glance and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release. [13]
>>
>> So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week).  It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose it at least internally to the cloud.  But... really?  That's still sort of an ugly position to be in, because at the end of the day that's still a lot more moving parts than are really necessary, and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.
>>
>> So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]
>>
>> [1] https://review.openstack.org/#/c/213353/
>> [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
>> [3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
>> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
>> [5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
>> [6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
>> [7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
>> [8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
>> [9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
>> [10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
>> [11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
>> [12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
>> [13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015
>>
>> At Your Service,
>>
>> Mark T. Voelker
>>
>> >
>> > Thanks,
>> > -melanie (irc: melwitt)
>> >
>> >
>> >
>> >
>> >
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ihrachys at redhat.com  Fri Sep 25 14:44:59 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Fri, 25 Sep 2015 16:44:59 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
Message-ID: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>

Hi all,

releases are approaching, so it's the right time to start some bike shedding on the mailing list.

Recently I got pointed out several times [1][2] that I violate our commit message requirement [3] for message lines, which says: "Subsequent lines should be wrapped at 72 characters."

I agree that very long commit message lines can be bad, e.g. if they are 200+ chars. But <= 79 chars?  I don't think so, especially since we have a 79 char limit for the code.

We had a check for line lengths in openstack-dev/hacking before, but it was killed [4] as per an openstack-dev@ discussion [5].

I believe commit message lines of <= 80 chars are absolutely fine and should not get the -1 treatment. I propose to raise the limit in the wiki guideline accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
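(The removed hacking check [4] amounted to something like the following stdlib-only sketch — hypothetical code for illustration, not the actual hacking implementation:)

```python
# Hypothetical sketch of a commit-message body-line length check,
# similar in spirit to the one removed from openstack-dev/hacking;
# not the actual implementation.
def long_commit_lines(message, limit=72):
    """Return (line_number, length) for body lines exceeding the limit."""
    violations = []
    for num, line in enumerate(message.splitlines(), start=1):
        if num == 1:
            continue  # the summary line has its own (50-char) convention
        if len(line) > limit:
            violations.append((num, len(line)))
    return violations
```

Whether a reviewer should -1 over what such a check reports is exactly the question at hand.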

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/11047f17/attachment.pgp>

From dtantsur at redhat.com  Fri Sep 25 14:52:14 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 25 Sep 2015 16:52:14 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <56055F9E.1080306@redhat.com>

On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
> Hi all,
>
> releases are approaching, so it's the right time to start some bike shedding on the mailing list.
>
> Recently I got pointed out several times [1][2] that I violate our commit message requirement [3] for the message lines that says: "Subsequent lines should be wrapped at 72 characters."
>
> I agree that very long commit message lines can be bad, f.e. if they are 200+ chars. But <= 79 chars?.. Don't think so. Especially since we have 79 chars limit for the code.
>
> We had a check for the line lengths in openstack-dev/hacking before but it was killed [4] as per openstack-dev@ discussion [5].
>
> I believe commit message lines of <=80 chars are absolutely fine and should not get -1 treatment. I propose to raise the limit for the guideline on wiki accordingly.

+1, I never understood it actually. I know some folks even question 80 
chars for the code, so having 72 chars for commit messages looks a bit 
weird to me.

>
> Comments?
>
> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> [3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> [4]: https://review.openstack.org/#/c/142585/
> [5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
>
> Ihar
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From vikschw at gmail.com  Fri Sep 25 14:55:41 2015
From: vikschw at gmail.com (Vikram Choudhary)
Date: Fri, 25 Sep 2015 20:25:41 +0530
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <56055F9E.1080306@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <56055F9E.1080306@redhat.com>
Message-ID: <CAFeBh8v=ChmtwzE2x1Z_jBi=8116fQ0yDHfg9CctGZT9iNSCuA@mail.gmail.com>

+1 for <=80 chars. It will be uniform with the existing coding style.

On Fri, Sep 25, 2015 at 8:22 PM, Dmitry Tantsur <dtantsur at redhat.com> wrote:

> On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
>
>> Hi all,
>>
>> releases are approaching, so it's the right time to start some bike
>> shedding on the mailing list.
>>
>> Recently I got pointed out several times [1][2] that I violate our commit
>> message requirement [3] for the message lines that says: "Subsequent lines
>> should be wrapped at 72 characters."
>>
>> I agree that very long commit message lines can be bad, f.e. if they are
>> 200+ chars. But <= 79 chars?.. Don't think so. Especially since we have 79
>> chars limit for the code.
>>
>> We had a check for the line lengths in openstack-dev/hacking before but
>> it was killed [4] as per openstack-dev@ discussion [5].
>>
>> I believe commit message lines of <=80 chars are absolutely fine and
>> should not get -1 treatment. I propose to raise the limit for the guideline
>> on wiki accordingly.
>>
>
> +1, I never understood it actually. I know some folks even question 80
> chars for the code, so having 72 chars for commit messages looks a bit
> weird to me.
>
>
>> Comments?
>>
>> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
>> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
>> [3]:
>> https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
>> [4]: https://review.openstack.org/#/c/142585/
>> [5]:
>> http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
>>
>> Ihar
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/f1e00063/attachment.html>

From julien at danjou.info  Fri Sep 25 15:00:23 2015
From: julien at danjou.info (Julien Danjou)
Date: Fri, 25 Sep 2015 17:00:23 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com> (Ihar
 Hrachyshka's message of "Fri, 25 Sep 2015 16:44:59 +0200")
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <m0twqibl2w.fsf@danjou.info>

On Fri, Sep 25 2015, Ihar Hrachyshka wrote:

> I agree that very long commit message lines can be bad, f.e. if they
> are 200+ chars. But <= 79 chars?.. Don't think so. Especially since we
> have 79 chars limit for the code.

Agreed. <= 80 chars should be a rule of thumb, applied in a smart
way. As you say, 200+ is not OK, but we're human: we can judge and be
smart.

If we wanted to enforce that, we would just have to write a bot setting
-1 automatically. I'm getting tired of seeing people doing bots' jobs in
Gerrit.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/be406c1a/attachment.pgp>

From doug at doughellmann.com  Fri Sep 25 15:05:09 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 25 Sep 2015 11:05:09 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CAFeBh8v=ChmtwzE2x1Z_jBi=8116fQ0yDHfg9CctGZT9iNSCuA@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <56055F9E.1080306@redhat.com>
 <CAFeBh8v=ChmtwzE2x1Z_jBi=8116fQ0yDHfg9CctGZT9iNSCuA@mail.gmail.com>
Message-ID: <1443193450-sup-1179@lrrr.local>

git tools such as git log and git show indent the commit message in
their output, so you don't actually have the full 79/80 character width
to work with. That's where the 72 comes from.
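A quick stdlib sketch of that point: git log prefixes each body line with four spaces, so a 72-character line lands at column 76 and still fits an 80-column terminal, while lines wrapped at 79 would spill past it (illustrative code, assuming the standard four-space indent):

```python
# Emulate how `git log` displays a commit body: each line is shown with
# a four-space indent, so 72-char lines still fit an 80-column terminal.
import textwrap

body = ("This explains in some detail why the change is needed and what "
        "it does, wrapped the way git commit message guidelines suggest.")
wrapped = textwrap.fill(body, width=72)
displayed = textwrap.indent(wrapped, "    ")
# 72 + 4 = 76 <= 80, so nothing wraps in a standard terminal.
max_col = max(len(line) for line in displayed.splitlines())
```

With a 79-character body line the same arithmetic gives 83 columns, which is what makes 79-wrapped messages ugly in git log.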

Doug

Excerpts from Vikram Choudhary's message of 2015-09-25 20:25:41 +0530:
> +1 for <=80 chars. It will be uniform with the existing coding style.
> 
> On Fri, Sep 25, 2015 at 8:22 PM, Dmitry Tantsur <dtantsur at redhat.com> wrote:
> 
> > On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
> >
> >> Hi all,
> >>
> >> releases are approaching, so it's the right time to start some bike
> >> shedding on the mailing list.
> >>
> >> Recently I got pointed out several times [1][2] that I violate our commit
> >> message requirement [3] for the message lines that says: "Subsequent lines
> >> should be wrapped at 72 characters."
> >>
> >> I agree that very long commit message lines can be bad, f.e. if they are
> >> 200+ chars. But <= 79 chars?.. Don't think so. Especially since we have 79
> >> chars limit for the code.
> >>
> >> We had a check for the line lengths in openstack-dev/hacking before but
> >> it was killed [4] as per openstack-dev@ discussion [5].
> >>
> >> I believe commit message lines of <=80 chars are absolutely fine and
> >> should not get -1 treatment. I propose to raise the limit for the guideline
> >> on wiki accordingly.
> >>
> >
> > +1, I never understood it actually. I know some folks even question 80
> > chars for the code, so having 72 chars for commit messages looks a bit
> > weird to me.
> >
> >
> >> Comments?
> >>
> >> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> >> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> >> [3]:
> >> https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> >> [4]: https://review.openstack.org/#/c/142585/
> >> [5]:
> >> http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> >>
> >> Ihar
> >>
> >>
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >


From mvoelker at vmware.com  Fri Sep 25 15:09:29 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Fri, 25 Sep 2015 15:09:29 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <20150925144216.GH8745@crypt>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local> <20150925144216.GH8745@crypt>
Message-ID: <F1384511-A789-4FF3-988A-E1409E647F98@vmware.com>

On Sep 25, 2015, at 10:42 AM, Andrew Laski <andrew at lascii.com> wrote:
> 
> On 09/25/15 at 09:59am, Doug Hellmann wrote:
>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +0000:
>>> >
>>> > On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
>>> >
>>> > Hi Melanie
>>> >
>>> > In general, images created by glance v1 API should be accessible using v2 and
>>> > vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
>>> > causing incompatibility. These fixes were back-ported to stable/kilo.
>>> >
>>> > Thanks
>>> > Sabari
>>> >
>>> > [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>> > [2] - https://bugs.launchpad.net/bugs/1419823
>>> > [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>>> >
>>> >
>>> > On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
>>> > Hi All,
>>> >
>>> > I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>>> >
>>> > From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>>> >
>>> > I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>>> 
>>> Just to clarify the DefCore situation a bit here:
>>> 
>>> The DefCore Committee is considering adding some Glance v2
>> capabilities [1] as "advisory" (e.g. not required now but might be
>> in the future unless folks provide feedback as to why it shouldn't
>> be) in its next Guideline, which is due to go to the Board of Directors
>> in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
>> APIs are already required [3][4].  As discussion began about which
>> Glance capabilities to include and whether or not to keep the Nova
>> image APIs as required, it was pointed out that the many ways images
>> can currently be created in OpenStack are problematic from an
>> interoperability point of view, in that some clouds use one and some use
>> others.  To be included in a DefCore Guideline, capabilities are scored
>> against twelve Criteria [5], and need to achieve a certain total to be
>> included.  Having a bunch of different ways to deal with images
>> actually hurts the chances of any one of them meeting the bar because
>> it makes it less likely that they'll achieve several criteria.  For
>> example:
>>> 
>>> One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one cloud uses none of those but instead uses the import task API.
>>> 
>>> Another criterion is "atomic" [8], which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as glance v1 and v2's image create APIs, the latter lose points.
>> 
>> This seems backwards. The Nova API doesn't "do the same thing" as
>> the Glance API, it is a *proxy* for the Glance API. We should not
>> be requiring proxy APIs for interop. DefCore should only be using
>> tests that talk directly to the service that owns the feature being
>> tested.
> 
> I completely agree with this.  I will admit to having some confusion as to why Glance capabilities have been tested through Nova and I know others have raised this same thought within the process.

Because it turns out that's how most of the world is dealing with images.

Generally speaking, the nova image API and glance v2 APIs have roughly equal adoption among public and private cloud products, but among the client SDKs people are using to interact with OpenStack, the nova image APIs have much better adoption (see notes in the previous message for details).  So we gave the world lots of different ways to do the same thing, and the world has strongly adopted two of them (with reasonable evidence that the Nova image API is actually the most-adopted of the lot).  If you're looking for the most interoperable way to create an image across lots of different OpenStack clouds today, it's actually through Nova.

At Your Service,

Mark T. Voelker

> 
>> 
>> Doug
>> 
>>> 
>>> Another criterion is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.
>>> 
>>> There are also criteria for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears that only one other than the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption, since Nova still uses the v1 API to talk to Glance and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release. [13]
>>> 
>>> So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week).  It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose it at least internally to the cloud.  But... really?  That's still sort of an ugly position to be in, because at the end of the day that's still a lot more moving parts than are really necessary, and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.
>>> 
>>> So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]
>>> 
>>> [1] https://review.openstack.org/#/c/213353/
>>> [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
>>> [3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
>>> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
>>> [5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
>>> [6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
>>> [7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
>>> [8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
>>> [9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
>>> [10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
>>> [11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
>>> [12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
>>> [13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015
>>> 
>>> At Your Service,
>>> 
>>> Mark T. Voelker
>>> 
>>> >
>>> > Thanks,
>>> > -melanie (irc: melwitt)
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> > __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From michal.dulko at intel.com  Fri Sep 25 15:13:31 2015
From: michal.dulko at intel.com (Dulko, Michal)
Date: Fri, 25 Sep 2015 15:13:31 +0000
Subject: [openstack-dev] [nova] Servicegroup refactoring for the Control
 Plane - Mitaka
In-Reply-To: <CAPJ8RRXQaPHRS7d8SytK52838agAq7RHCfTnjPfed7W0089ykA@mail.gmail.com>
References: <CAPJ8RRXQaPHRS7d8SytK52838agAq7RHCfTnjPfed7W0089ykA@mail.gmail.com>
Message-ID: <1443194010.10461.10.camel@mdulko-MOBL2.imu.intel.com>

On Wed, 2015-09-23 at 11:11 -0700, Vilobh Meshram wrote:

> Accepted in Liberty [1] [2] :
> [1] Services information be stored in respective backend configured
> by CONF.servicegroup_driver and all the interfaces which plan to
> access service information go through servicegroup layer.
> [2] Add tooz specific drivers e.g. replace existing nova servicegroup
> zookeeper driver with a new zookeeper driver backed by Tooz zookeeper
> driver.
> 
> 
> Proposal for Mitaka [3][4] :
> [3] Services information be stored in nova.services (nova database)
> and liveliness information be managed by CONF.servicegroup_driver
> (DB/Zookeeper/Memcache)
> [4] Stick to what is accepted for #2. Just that the scope will be
> decided based on whether we go with #1 (as accepted for Liberty) or #3
> (what is proposed for Mitaka)
> 
I like the Mitaka (#3) proposal more. We still keep all the data in the
persistent database, and the SG driver reports only whether a host is
alive. This would make transitions between SG drivers easier for
administrators, and after all that is why you want to use ZooKeeper: to
learn about failures early and avoid scheduling new VMs to a
non-responding host.
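The #3 split described above can be sketched in a few lines of Python: persistent service records stay in the database while only liveness goes through a pluggable driver. All names here (ServiceGroupDriver, HeartbeatDriver, services_db) are illustrative, not Nova's actual classes.

```python
# Hedged sketch of the Mitaka (#3) proposal: service metadata lives in
# the persistent database; the configurable servicegroup driver answers
# only "is this host alive?". Names are hypothetical, not Nova's.
import time


class ServiceGroupDriver:
    """Liveness only; no service metadata."""

    def is_up(self, host):
        raise NotImplementedError


class HeartbeatDriver(ServiceGroupDriver):
    """In-memory stand-in for a DB/ZooKeeper/Memcache-backed driver."""

    def __init__(self, timeout=10):
        self.timeout = timeout
        self._last_seen = {}

    def heartbeat(self, host):
        self._last_seen[host] = time.time()

    def is_up(self, host):
        last = self._last_seen.get(host)
        return last is not None and time.time() - last < self.timeout


# Records stay in the database regardless of driver choice, so swapping
# SG drivers never requires migrating service metadata.
services_db = {"compute-1": {"binary": "nova-compute", "topic": "compute"}}

driver = HeartbeatDriver(timeout=10)
driver.heartbeat("compute-1")
print(driver.is_up("compute-1"))  # True: heartbeat seen within timeout
print(driver.is_up("compute-2"))  # False: never heartbeated
```

The point of the split is visible in the last two lines: a dead host fails `is_up()` quickly, while its row in `services_db` survives a driver change untouched.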


From e0ne at e0ne.info  Fri Sep 25 15:33:46 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Fri, 25 Sep 2015 18:33:46 +0300
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <1443193450-sup-1179@lrrr.local>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <56055F9E.1080306@redhat.com>
 <CAFeBh8v=ChmtwzE2x1Z_jBi=8116fQ0yDHfg9CctGZT9iNSCuA@mail.gmail.com>
 <1443193450-sup-1179@lrrr.local>
Message-ID: <CAGocpaH+7fPPkXAeLyfb8J1t02RDHYYG9eMmLvPbGA4mu6yMGw@mail.gmail.com>

+1 for <=80 chars. 72 characters are sometimes not enough.

Regards,
Ivan Kolodyazhny

On Fri, Sep 25, 2015 at 6:05 PM, Doug Hellmann <doug at doughellmann.com>
wrote:

> git tools such as git log and git show indent the commit message in
> their output, so you don't actually have the full 79/80 character width
> to work with. That's where the 72 comes from.
>
> Doug
>
> Excerpts from Vikram Choudhary's message of 2015-09-25 20:25:41 +0530:
> > +1 for <=80 chars. It will be uniform with the existing coding style.
> >
> > On Fri, Sep 25, 2015 at 8:22 PM, Dmitry Tantsur <dtantsur at redhat.com>
> wrote:
> >
> > > On 09/25/2015 04:44 PM, Ihar Hrachyshka wrote:
> > >
> > >> Hi all,
> > >>
> > >> releases are approaching, so it's the right time to start some bike
> > >> shedding on the mailing list.
> > >>
> > >> Recently I got pointed out several times [1][2] that I violate our
> > >> commit message requirement [3] for the message lines that says:
> > >> "Subsequent lines should be wrapped at 72 characters."
> > >>
> > >> I agree that very long commit message lines can be bad, f.e. if they
> > >> are 200+ chars. But <= 79 chars?.. Don't think so. Especially since
> > >> we have 79 chars limit for the code.
> > >>
> > >> We had a check for the line lengths in openstack-dev/hacking before
> > >> but it was killed [4] as per openstack-dev@ discussion [5].
> > >>
> > >> I believe commit message lines of <=80 chars are absolutely fine and
> > >> should not get -1 treatment. I propose to raise the limit for the
> > >> guideline on wiki accordingly.
> > >>
> > >
> > > +1, I never understood it actually. I know some folks even question 80
> > > chars for the code, so having 72 chars for commit messages looks a bit
> > > weird to me.
> > >
> > >
> > >> Comments?
> > >>
> > >> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> > >> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> > >> [3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> > >> [4]: https://review.openstack.org/#/c/142585/
> > >> [5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> > >>
> > >> Ihar
> > >>
> > >>
> > >>
> > >>
> > >> __________________________________________________________________________
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > >>
> > >
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/17ddd985/attachment.html>

From rybrown at redhat.com  Fri Sep 25 15:00:37 2015
From: rybrown at redhat.com (Ryan Brown)
Date: Fri, 25 Sep 2015 11:00:37 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <56056195.3030900@redhat.com>

On 09/25/2015 10:44 AM, Ihar Hrachyshka wrote:
> Hi all,
>
> releases are approaching, so it's the right time to start some bike
> shedding on the mailing list.
>
> Recently I got pointed out several times [1][2] that I violate our
> commit message requirement [3] for the message lines that says:
> "Subsequent lines should be wrapped at 72 characters."
>
> I agree that very long commit message lines can be bad, f.e. if they
> are 200+ chars. But <= 79 chars?.. Don't think so. Especially since
> we have 79 chars limit for the code.

The default "git log" display shows the commit message already indented,
and the tab may display as 8 spaces, I suppose. I believe the 72 limit is
derived from 80 - 8 (terminal width minus tab width).

I don't know how many folks use 80-char terminals (I use side-by-side 
110-column terms). Having some limit to prevent 200+ is reasonable, but 
I think it's pedantic to -1 a patch due to a 78-char commit message line.
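The 80 - 8 = 72 arithmetic above can be sketched as a small checker. Everything here is illustrative (the function name, and the 50-character subject limit taken from common git convention); it is not the actual openstack-dev/hacking check that was removed.

```python
# Hypothetical sketch: flag commit-message lines over the limits
# discussed above (72 for the body, since "git log" indents the message
# by 8 spaces on an 80-column terminal; 50 for the subject by common
# git convention). Names and limits are illustrative only.

def check_commit_message(message, subject_limit=50, body_limit=72):
    """Return a list of (line_number, length) for over-long lines."""
    violations = []
    for i, line in enumerate(message.splitlines(), start=1):
        limit = subject_limit if i == 1 else body_limit
        if len(line) > limit:
            violations.append((i, len(line)))
    return violations


msg = "Fix the widget\n\n" + "x" * 75 + "\nshort line"
print(check_commit_message(msg))  # [(3, 75)]: the 75-char body line
```

Whether a reviewer should -1 over such a report is, of course, exactly what this thread is debating.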

> We had a check for the line lengths in openstack-dev/hacking before
> but it was killed [4] as per openstack-dev@ discussion [5].
>
> I believe commit message lines of <=80 chars are absolutely fine and
> should not get -1 treatment. I propose to raise the limit for the
> guideline on wiki accordingly.
>
> Comments?
>
> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> [3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> [4]: https://review.openstack.org/#/c/142585/
> [5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
>
>  Ihar
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.


From yguenane at redhat.com  Fri Sep 25 15:40:40 2015
From: yguenane at redhat.com (Yanis Guenane)
Date: Fri, 25 Sep 2015 17:40:40 +0200
Subject: [openstack-dev] [puppet] service default value functions
In-Reply-To: <5601EFA8.8040204@puppetlabs.com>
References: <CABzFt8OG+ho+TRhiubvjVWxYeZ8_-eEGQs7cWjTR4ZhHQmwK1Q@mail.gmail.com>
 <5601EFA8.8040204@puppetlabs.com>
Message-ID: <56056AF8.3020800@redhat.com>



On 09/23/2015 02:17 AM, Cody Herriges wrote:
> Alex Schultz wrote:
>> Hey puppet folks,
>>
>> Based on the meeting yesterday[0], I had proposed creating a parser
>> function called is_service_default[1] to validate if a variable matched
>> our agreed upon value of '<SERVICE DEFAULT>'.  This got me thinking
>> about how can we maybe not use the arbitrary string throughout the
>> puppet that can not easily be validated.  So I tested creating another
>> puppet function named service_default[2] to replace the use of '<SERVICE
>> DEFAULT>' throughout all the puppet modules.  My tests seemed to
>> indicate that you can use a parser function as parameter default for
>> classes. 
>>
>> I wanted to send a note to gather comments around the second function. 
>> When we originally discussed what to use to designate for a service's
>> default configuration, I really didn't like using an arbitrary string
>> since it's hard to parse and validate. I think leveraging a function
>> might be better since it is something that can be validated via tests
>> and a syntax checker.  Thoughts?
>>
>>
>> Thanks,
>> -Alex
>>
>> [0] http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html
>> [1] https://review.openstack.org/#/c/223672
>> [2] https://review.openstack.org/#/c/224187
>>
> I've been mulling this over the last several days and I just can't
> accept an entire ruby function which would be run for every parameter
> with the desired static value of "<SERVICE DEFAULT>" when the class is
> declared and parsed.  I am not generally against using functions as a
> parameter default just not a fan in this case because running ruby just
> to return a static string seems inappropriate and not optimal.
>
> In this specific case I think the params pattern and inheritance can
> obtain us the same goals.  I also find this a valid use of inheritance
> cross module namespaces but...only because all our modules must depend
> on puppet-openstacklib.
>
> http://paste.openstack.org/show/473655

Hello,

I do like the params pattern. This is something we could probably apply
for other purposes later: centralizing common parameter values for all
our modules in a single place, yet overridable per module (if necessary)
in its own params.pp file.
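For readers outside the Puppet world, the trade-off being discussed (a magic string like '<SERVICE DEFAULT>' versus something a checker can validate) has a familiar analogue in other languages: a shared sentinel plus a predicate. This Python sketch is purely illustrative; the names are hypothetical and it is not how puppet-openstacklib actually implements is_service_default.

```python
# Illustrative analogue of the '<SERVICE DEFAULT>' discussion above:
# a single shared sentinel object plus a checker function is easier to
# validate than a magic string scattered across modules. All names here
# are hypothetical, not puppet-openstacklib's.
SERVICE_DEFAULT = object()  # one shared sentinel, impossible to mistype


def is_service_default(value):
    return value is SERVICE_DEFAULT


def configure(option, value=SERVICE_DEFAULT):
    """Render a config line, leaving service defaults untouched."""
    if is_service_default(value):
        return "%s = <service default>" % option
    return "%s = %s" % (option, value)


print(configure("workers"))     # workers = <service default>
print(configure("workers", 4))  # workers = 4
```

The identity check (`is`) is the point: a typo like '<SERVICE DEFALT>' would silently fail string comparison, whereas a misspelled sentinel name fails loudly at parse time.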

--
Yanis Guenane

>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From fungi at yuggoth.org  Fri Sep 25 15:42:15 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 25 Sep 2015 15:42:15 +0000
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <20150925154215.GE4731@yuggoth.org>

On 2015-09-25 16:44:59 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
> I believe commit message lines of <=80 chars are absolutely fine
> and should not get -1 treatment. I propose to raise the limit for
> the guideline on wiki accordingly.
[...]

As one of those traditionalists who still does all his work in 80x24
terminals (well, 80x25 with a status line), I keep my commit message
titles at or under 50 characters and message contents to no more
than 68 characters to allow for cleaner indentation/quoting just
like for my MUA editor settings. After all, some (non-OpenStack)
projects take patch submissions by E-mail and so it's easier to just
follow conservative E-mail line length conventions when possible.

That said, while I appreciate when people keep their commit message
lines wrapped short enough that they render sanely on my terminal, I
make a point of not leaving negative reviews about commit message
formatting unless it's really egregious (and usually not even then).
We have plenty of real bugs to deal with, and it's not worth my time
to berate people for inconsistent commit message layout as long as
the requisite information is present--it's easier to just lead by
example and hope that others follow most of the time.

As for the underlying topic, people leaving -1 reviews about silly,
unimportant details: reviewers need to get used to the fact that
sometimes there will be a -1 on a perfectly good proposed change, so
it's fine to ignore negative votes from people who are wasting their
time on pointless trivia. Please don't set your review filters to
skip changes with a CR -1 on them; review and judge for yourself.
-- 
Jeremy Stanley


From jim at jimrollenhagen.com  Fri Sep 25 15:42:39 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 25 Sep 2015 08:42:39 -0700
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <20150925154239.GE14957@jimrollenhagen.com>

On Fri, Sep 25, 2015 at 04:44:59PM +0200, Ihar Hrachyshka wrote:
> Hi all,
> 
> releases are approaching, so it's the right time to start some bike shedding on the mailing list.
> 
> Recently I got pointed out several times [1][2] that I violate our commit message requirement [3] for the message lines that says: "Subsequent lines should be wrapped at 72 characters."
> 
> I agree that very long commit message lines can be bad, f.e. if they are 200+ chars. But <= 79 chars?.. Don't think so. Especially since we have 79 chars limit for the code.
> 
> We had a check for the line lengths in openstack-dev/hacking before but it was killed [4] as per openstack-dev@ discussion [5].
> 
> I believe commit message lines of <=80 chars are absolutely fine and should not get -1 treatment. I propose to raise the limit for the guideline on wiki accordingly.
> 
> Comments?

It makes me really sad that we actually even spend time discussing
things like this. As a core reviewer, I would just totally ignore this
-1. I also ignore -1s for things like minor typos in a comment, etc.

Let's focus on building good software instead. :)

// jim



From berrange at redhat.com  Fri Sep 25 15:47:56 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Fri, 25 Sep 2015 16:47:56 +0100
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <1443193450-sup-1179@lrrr.local>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <56055F9E.1080306@redhat.com>
 <CAFeBh8v=ChmtwzE2x1Z_jBi=8116fQ0yDHfg9CctGZT9iNSCuA@mail.gmail.com>
 <1443193450-sup-1179@lrrr.local>
Message-ID: <20150925154756.GG13579@redhat.com>

On Fri, Sep 25, 2015 at 11:05:09AM -0400, Doug Hellmann wrote:
> git tools such as git log and git show indent the commit message in
> their output, so you don't actually have the full 79/80 character width
> to work with. That's where the 72 comes from.

It is also commonly done so that when you copy commits into
email, the commit message doesn't get further line breaks inserted.
This isn't a big deal with openstack, as we don't use an email workflow
for patch review.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From harlowja at outlook.com  Fri Sep 25 15:54:21 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Fri, 25 Sep 2015 08:54:21 -0700
Subject: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver
 implementation
In-Reply-To: <CADpk5BZpWRJP3m=W8xnHX_3o8-3Z5KXzeku4hCCwGWOv4jx2cA@mail.gmail.com>
References: <CADpk5BZpWRJP3m=W8xnHX_3o8-3Z5KXzeku4hCCwGWOv4jx2cA@mail.gmail.com>
Message-ID: <BLU437-SMTP728EE7E183D36AE65604D3D8420@phx.gbl>

Dmitriy Ukhlov wrote:
> Hello stackers,
>
> I'm working on a new oslo.messaging RabbitMQ driver implementation which
> uses the pika client library instead of kombu. It is related to
> https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
> In this letter I want to share current results and probably get first
> feedback from you.
> Now the code is available here:
> https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py
>

This will end up on review.openstack.org, right, so that it can be 
properly reviewed (it will likely take a while since it looks to be 
~1000+ lines of code)?

Also a suggestion: before that merges, can docs be added? There seem to 
be very few docstrings about what/why/how. For sustainability purposes 
that would be appreciated, I think.

> Current status of this code:
> - pika driver passes functional tests
> - pika driver passes tempest smoke tests
> - pika driver passes almost all tempest full tests (except 5) but it
> seems that reason is not related to oslo.messaging
> Also I created small devstack patch to support pika driver testing on
> gate (https://review.openstack.org/#/c/226348/)
>
> Next steps:
> - communicate with Manish (blueprint owner)
> - write spec to this blueprint
> - send a review with this patch when spec and devstack patch get merged.
>
> Thank you.
>
>
> --
> Best regards,
> Dmitriy Ukhlov
> Mirantis Inc.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From Kevin.Fox at pnnl.gov  Fri Sep 25 15:58:57 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 25 Sep 2015 15:58:57 +0000
Subject: [openstack-dev] [murano] [app-catalog] versions for murano
 assets in the catalog
In-Reply-To: <CA+odVQG47k6N7--zC-ge+c3S507GnVSAarEeQs2ZULNyqABD8w@mail.gmail.com>
References: <CA+odVQHdkocESWDvNhwZbQaMAyBPCJciXCTeDrTcAsYGN7Y4nA@mail.gmail.com>
 <CAOnDsYNYi7zgmxCe57mf9cC7ma7m9pt5Rqx9aMGzjDn3eoGPUg@mail.gmail.com>,
 <CA+odVQG47k6N7--zC-ge+c3S507GnVSAarEeQs2ZULNyqABD8w@mail.gmail.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7CAA7C@EX10MBOX06.pnnl.gov>

To clarify a bit, we're unsure whether running Glance without Keystone as a distribution mechanism for apps.openstack.org assets is a good idea/fit.

Having assets stored in the local cloud all in one place (in Glance) seems like a very good idea to me.

We need closer communications between Murano, Glance, and App-Catalog going forward, since each project has valuable things to contribute to the overall big picture.

Thanks,
Kevin
________________________________________
From: Christopher Aedo [doc at aedo.net]
Sent: Thursday, September 24, 2015 2:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano] [app-catalog] versions for murano assets in the catalog

On Tue, Sep 22, 2015 at 12:06 PM, Serg Melikyan <smelikyan at mirantis.com> wrote:
> Hi Chris,
>
> concern regarding assets versioning in Community App Catalog indeed
> affects Murano because we are constantly improving our language and
> adding new features, e.g. we added ability to select existing Neutron
> network for particular application in Liberty and if user wants to use
> this feature - his application will be incompatible with Kilo. I think
> this is also valid for Heat because their HOT language is also improving
> with each release.
>
> Thank you for proposing workaround, I think this is a good way to
> solve immediate blocker while Community App Catalog team is working on
> resolving handling versions elegantly from their side. Kirill proposed
> two changes in Murano to follow this approach that I've already +2'ed:
>
> * https://review.openstack.org/225251 - openstack/murano-dashboard
> * https://review.openstack.org/225249 - openstack/python-muranoclient
>
> Looks like corresponding commit to Community App Catalog is already
> merged [0] and our next step is to prepare new version of applications
> from openstack/murano-apps and then figure out how to publish them
> properly.

Yep, thanks, this looks like a step in the right direction to give us
some wiggle room to handle different versions of assets in the App
Catalog for the next few months.

Down the road we want to make sure that the App Catalog is not closely
tied to any other projects, or how those projects handle versions.  We
will clearly communicate our intentions around versions of assets (and
how to specify which version is desired when retrieving an asset) here
on the mailing list, during the weekly meetings, and during the weekly
cross-project meeting as well.

> P.S. I've also talked with Alexander and Kirill regarding better ways
> to handle versioning for assets in Community App Catalog and they
> shared that they are starting work on a PoC using the Glance Artifact
> Repository, probably they can share more details regarding this work
> here. We would be happy to work on this together given that in Liberty
> we implemented experimental support for package versioning inside the
> Murano (e.g. having two version of the same app working side-by-side)
> [1]
>
> References:
> [0] https://review.openstack.org/224869
> [1] http://murano-specs.readthedocs.org/en/latest/specs/liberty/murano-versioning.html

Thanks, looking forward to the PoC.  We have discussed whether or not
using Glance Artifact Repository makes sense for the App Catalog and
so far the consensus has been that it is not a great fit for what we
need.  Though it brings a lot of great stuff to the table, all we
really need is a place to drop large (and small) binaries.  Swift as a
storage component is the obvious choice for that - the metadata around
the asset itself (when it was added, by whom, rating, version, etc.)
will have to live in a DB anyway.  Given that, seems like Glance is
not an obvious great fit, but like I said I'm looking forward to
hearing/seeing more on this front :)

-Christopher

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From harlowja at outlook.com  Fri Sep 25 15:58:59 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Fri, 25 Sep 2015 08:58:59 -0700
Subject: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver
 implementation
In-Reply-To: <CADpk5BZpWRJP3m=W8xnHX_3o8-3Z5KXzeku4hCCwGWOv4jx2cA@mail.gmail.com>
References: <CADpk5BZpWRJP3m=W8xnHX_3o8-3Z5KXzeku4hCCwGWOv4jx2cA@mail.gmail.com>
Message-ID: <BLU436-SMTP75C16DF1120A00053D00E4D8420@phx.gbl>

Also a side question, that someone might know,

Whatever happened to the folks from rabbitmq (incorporated? pivotal?) 
who were going to get involved in oslo.messaging, did that ever happen; 
if anyone knows?

They might be a good bunch of people to review such a pika driver (since 
I think they as a corporation created pika?).

Dmitriy Ukhlov wrote:
> Hello stackers,
>
> I'm working on a new oslo.messaging RabbitMQ driver implementation which
> uses the pika client library instead of kombu. It is related to
> https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
> In this letter I want to share current results and probably get first
> feedback from you.
> Now the code is available here:
> https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py
>
> Current status of this code:
> - pika driver passes functional tests
> - pika driver passes tempest smoke tests
> - pika driver passes almost all tempest full tests (except 5) but it
> seems that reason is not related to oslo.messaging
> Also I created small devstack patch to support pika driver testing on
> gate (https://review.openstack.org/#/c/226348/)
>
> Next steps:
> - communicate with Manish (blueprint owner)
> - write spec to this blueprint
> - send a review with this patch when spec and devstack patch get merged.
>
> Thank you.
>
>
> --
> Best regards,
> Dmitriy Ukhlov
> Mirantis Inc.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From chris at openstack.org  Fri Sep 25 16:00:25 2015
From: chris at openstack.org (Chris Hoge)
Date: Fri, 25 Sep 2015 09:00:25 -0700
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443189345-sup-9818@lrrr.local>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
Message-ID: <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>


> On Sep 25, 2015, at 6:59 AM, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +0000:
>>> 
>>> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
>>> 
>>> Hi Melanie
>>> 
>>> In general, images created by glance v1 API should be accessible using v2 and
>>> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
>>> causing incompatibility. These fixes were back-ported to stable/kilo.
>>> 
>>> Thanks
>>> Sabari
>>> 
>>> [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>> [2] - https://bugs.launchpad.net/bugs/1419823 
>>> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
>>> 
>>> 
>>> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
>>> Hi All,
>>> 
>>> I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>>> 
>>> From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>>> 
>>> I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>> 
>> Just to clarify the DefCore situation a bit here: 
>> 
>> The DefCore Committee is considering adding some Glance v2
> capabilities [1] as "advisory" (e.g. not required now but might be
> in the future unless folks provide feedback as to why it shouldn't
> be) in its next Guideline, which is due to go to the Board of Directors
> in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
> APIs are already required [3][4].  As discussion began about which
> Glance capabilities to include and whether or not to keep the Nova
> image APIs as required, it was pointed out that the many ways images
> can currently be created in OpenStack is problematic from an
> interoperability point of view in that some clouds use one and some use
> others.  To be included in a DefCore Guideline, capabilities are scored
> against twelve Criteria [5], and need to achieve a certain total to be
> included.  Having a bunch of different ways to deal with images
> actually hurts the chances of any one of them meeting the bar because
> it makes it less likely that they'll achieve several criteria.  For
> example:
>> 
>> One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one uses none of those but instead uses the import task API.
>> 
>> Another criteria is "atomic" [8] which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as glance v1 and v2's image create APIs, the latter lose points.
> 
> This seems backwards. The Nova API doesn't "do the same thing" as
> the Glance API, it is a *proxy* for the Glance API. We should not
> be requiring proxy APIs for interop. DefCore should only be using
> tests that talk directly to the service that owns the feature being
> tested.

I agree in general; at the time the standard was approved, the
only API we had available to us (because only Nova code was
being considered for inclusion) was the proxy.

We're looking at v2 as the required API going forward, but
as has been mentioned before, the Nova proxy requires that
v1 be present as a non-public API. Not the best situation in
the world, and I'm personally looking forward to Glance,
Cinder, and Neutron becoming explicitly required APIs in
DefCore.


> Doug
> 
>> 
>> Another criteria is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.
>> 
>> There are also criteria for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears only one other than the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption since Nova still uses the v1 API to talk to Glance, and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release. [13]
>> 
>> So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week).  It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose it at least internally to the cloud.  But...really?  That's still sort of an ugly position to be in, because at the end of the day that's still a lot more moving parts than are really necessary and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.  
>> 
>> So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]  
>> 
>> [1] https://review.openstack.org/#/c/213353/
>> [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
>> [3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
>> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
>> [5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
>> [6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
>> [7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
>> [8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
>> [9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
>> [10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
>> [11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
>> [12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
>> [13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015
>> 
>> At Your Service,
>> 
>> Mark T. Voelker
>> 
>>> 
>>> Thanks,
>>> -melanie (irc: melwitt)
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From emilien at redhat.com  Fri Sep 25 16:02:47 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 25 Sep 2015 12:02:47 -0400
Subject: [openstack-dev] [puppet] should puppet-neutron manage third party
	software?
Message-ID: <56057027.3090808@redhat.com>

In our last meeting [1], we were discussing whether or not to manage
external packaging repositories for Neutron plugin dependencies.

Current situation:
puppet-neutron installs Neutron plugin packages (like
neutron-plugin-*) and configures Neutron plugins (configuration files
like /etc/neutron/plugins/*.ini).
Some plugins (Cisco) do more: they install third party packages
(not part of OpenStack) from external repos.

The question is: should we continue that way and accept that kind of
patch [2]?

I vote for no: managing external packages & external repositories should
be up to an external module.
Example: my SDN tool is called "sdnmagic":
1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
configure the .ini file(s) to make it work in Neutron
2/ create puppet-sdnmagic that will take care of everything else:
install sdnmagic, manage packaging (and specific dependencies),
repositories, etc.
I'm -1 on having puppet-neutron handle it. We are not managing SDN solutions:
we are enabling puppet-neutron to work with them.

I would like to find a consensus here, that will be consistent across
*all plugins* without exception.


Thanks for your feedback,

[1] http://goo.gl/zehmN2
[2] https://review.openstack.org/#/c/209997/
-- 
Emilien Macchi


From derekh at redhat.com  Fri Sep 25 16:04:19 2015
From: derekh at redhat.com (Derek Higgins)
Date: Fri, 25 Sep 2015 17:04:19 +0100
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <1443184498.2443.10.camel@redhat.com>
References: <1443184498.2443.10.camel@redhat.com>
Message-ID: <56057083.2060604@redhat.com>



On 25/09/15 13:34, Dan Prince wrote:
> It has come to my attention that we aren't making great use of our
> tripleo.org domain. One thing that would be useful would be to have the
> new tripleo-docs content displayed there. It would also be nice to have
> quick links to some of our useful resources, perhaps Derek's CI report
> [1], a custom Reviewday page for TripleO reviews (something like this
> [2]), and perhaps other links too. I'm thinking these go in the header,
> and not just on some random TripleO docs page. Or perhaps both places.

We could even host some of these things on tripleo.org (not just link to 
them)

>
> I was thinking that instead of the normal OpenStack theme however we
> could go a bit off the beaten path and do our own TripleO theme.
> Basically a custom tripleosphinx project that we ninja in as a
> replacement for oslosphinx.
>
> Could get our own mascot... or do something silly with words. I'm
> reaching out to graphics artists who could help with this sort of
> thing... but before that decision is made I wanted to ask about
> thoughts on the matter here first.

+1 from me, the more content, articles etc... we can get up there the
better, as long as we keep at it and it doesn't go stale.

>
> Speak up... it would be nice to have this wrapped up before Tokyo.
>
> [1] http://goodsquishy.com/downloads/tripleo-jobs.html
> [2] http://status.openstack.org/reviews/
>
> Dan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From Kevin.Fox at pnnl.gov  Fri Sep 25 16:05:38 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 25 Sep 2015 16:05:38 +0000
Subject: [openstack-dev] CephFS native driver
In-Reply-To: <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>,
 <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7CAAC6@EX10MBOX06.pnnl.gov>

I think having a native cephfs driver without nfs in the cloud is a very compelling feature. nfs is nearly impossible to make both HA and scalable without adding really expensive dedicated hardware. Ceph on the other hand scales very nicely and it's very fault tolerant out of the box.

Thanks,
Kevin
________________________________________
From: Shinobu Kinjo [skinjo at redhat.com]
Sent: Friday, September 25, 2015 12:04 AM
To: OpenStack Development Mailing List (not for usage questions); John Spray
Cc: Ceph Development; openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Manila] CephFS native driver

So here are questions from my side.
Just question.


 1.What is the biggest advantage compared to others such as RBD?
  We should be able to implement what you are going to do in
  an existing module, shouldn't we?

 2.What are you going to focus on with a new implementation?
  It seems to be using NFS in front of that implementation
  more transparently.

 3.What are you thinking about integration with OpenStack using
  a new implementation?
  Since it's going to be a new kind of driver, there should be a
  different architecture.

 4.Is this implementation intended mainly for OpenStack
  integration?

Since velocity of OpenStack feature expansion is much more than
it used to be, it's much more important to think of performance.

Is a new implementation also going to improve Ceph integration
with OpenStack system?

Thank you so much for your explanation in advance.

Shinobu

----- Original Message -----
From: "John Spray" <jspray at redhat.com>
To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
Sent: Thursday, September 24, 2015 10:49:17 PM
Subject: [openstack-dev] [Manila] CephFS native driver

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory; the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.
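
To make the shape of this concrete, here is a hypothetical Python sketch
of assembling an export location and a path-restricted per-share
credential. The names and formats are illustrative only, not the actual
driver code:

```python
# Hypothetical sketch only: illustrates the export location / credential
# shape described above, not the real Manila CephFS driver code.

def build_export_location(mon_addrs, share_path):
    # CephFS-style export location: "mon1:6789,mon2:6789:/path"
    return "{}:{}".format(",".join(mon_addrs), share_path)

def build_client_caps(share_id):
    # A per-share credential restricted to the share's directory;
    # real code would call into Ceph's auth machinery instead.
    return {
        "entity": "client.manila-{}".format(share_id),
        "mds_caps": "allow rw path=/shares/{}".format(share_id),
    }

print(build_export_location(["192.0.2.10:6789", "192.0.2.11:6789"],
                            "/shares/share-42"))
# 192.0.2.10:6789,192.0.2.11:6789:/shares/share-42
```

A client holding that token mounts the subpath and sees only its own
share, which is the "it's like having their own filesystem" effect
described above.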

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From Kevin.Fox at pnnl.gov  Fri Sep 25 16:11:59 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 25 Sep 2015 16:11:59 +0000
Subject: [openstack-dev] [all] -1 due to line length violation in
	commit	messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7CAB18@EX10MBOX06.pnnl.gov>

Yeah, and worse, since I never can remember the exact number (72 I guess), I always just round down to 70-1 to be safe.

It's silly.

Thanks,
Kevin
________________________________________
From: Ihar Hrachyshka [ihrachys at redhat.com]
Sent: Friday, September 25, 2015 7:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: bapalm at us.ibm.com
Subject: [openstack-dev] [all] -1 due to line length violation in commit        messages

Hi all,

releases are approaching, so it's the right time to start some bike shedding on the mailing list.

Recently I got pointed out several times [1][2] that I violate our commit message requirement [3] for the message lines that says: "Subsequent lines should be wrapped at 72 characters."

I agree that very long commit message lines can be bad, e.g. if they are 200+ chars. But <= 79 chars?.. Don't think so. Especially since we have a 79 chars limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not get -1 treatment. I propose to raise the limit for the guideline on wiki accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar


From edgar.magana at workday.com  Fri Sep 25 16:14:31 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Fri, 25 Sep 2015 16:14:31 +0000
Subject: [openstack-dev] [puppet] should puppet-neutron manage third
 party software?
In-Reply-To: <56057027.3090808@redhat.com>
References: <56057027.3090808@redhat.com>
Message-ID: <DDF6F73E-716D-439A-87B8-158008AD31AA@workday.com>

Hi There,

I just added my comment on the review. I do agree with Emilien. There should be specific repos for plugins and drivers.

BTW. I love the sdnmagic name  ;-)

Edgar




On 9/25/15, 9:02 AM, "Emilien Macchi" <emilien at redhat.com> wrote:

>In our last meeting [1], we were discussing whether or not to manage
>external packaging repositories for Neutron plugin dependencies.
>
>Current situation:
>puppet-neutron installs Neutron plugin packages (like
>neutron-plugin-*) and configures Neutron plugins (configuration files
>like /etc/neutron/plugins/*.ini).
>Some plugins (Cisco) do more: they install third party packages
>(not part of OpenStack) from external repos.
>
>The question is: should we continue that way and accept that kind of
>patch [2]?
>
>I vote for no: managing external packages & external repositories should
>be up to an external module.
>Example: my SDN tool is called "sdnmagic":
>1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>configure the .ini file(s) to make it work in Neutron
>2/ create puppet-sdnmagic that will take care of everything else:
>install sdnmagic, manage packaging (and specific dependencies),
>repositories, etc.
>I'm -1 on having puppet-neutron handle it. We are not managing SDN solutions:
>we are enabling puppet-neutron to work with them.
>
>I would like to find a consensus here, that will be consistent across
>*all plugins* without exception.
>
>
>Thanks for your feedback,
>
>[1] http://goo.gl/zehmN2
>[2] https://review.openstack.org/#/c/209997/
>-- 
>Emilien Macchi
>

From Kevin.Fox at pnnl.gov  Fri Sep 25 16:15:15 2015
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 25 Sep 2015 16:15:15 +0000
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <20150925154239.GE14957@jimrollenhagen.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>,
 <20150925154239.GE14957@jimrollenhagen.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>

Another option... why are we wasting time on something that a computer can handle? Why not just let the line length be infinite in the commit message and have gerrit wrap it to <insert random number here> length lines on merge?
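
For what it's worth, the mechanical part is trivial. A rough Python
sketch of what such a merge-time rewrap could look like (purely
illustrative; this is not a real gerrit hook, and the function name is
invented):

```python
import textwrap

def rewrap_commit_message(message, width=72):
    # Leave the summary line and blank lines alone; wrap everything else.
    # Illustrative sketch of a merge-time hook, not actual gerrit code.
    lines = message.splitlines()
    wrapped = lines[:1]
    for line in lines[1:]:
        if line.strip():
            wrapped.extend(textwrap.wrap(line, width=width))
        else:
            wrapped.append(line)
    return "\n".join(wrapped)

msg = "Fix the frobnicator\n\n" + "word " * 40
print(all(len(l) <= 72 for l in rewrap_commit_message(msg).splitlines()))
```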

Thanks,
Kevin
________________________________________
From: Jim Rollenhagen [jim at jimrollenhagen.com]
Sent: Friday, September 25, 2015 8:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] -1 due to line length violation in commit messages

On Fri, Sep 25, 2015 at 04:44:59PM +0200, Ihar Hrachyshka wrote:
> Hi all,
>
> releases are approaching, so it's the right time to start some bike shedding on the mailing list.
>
> Recently I got pointed out several times [1][2] that I violate our commit message requirement [3] for the message lines that says: "Subsequent lines should be wrapped at 72 characters."
>
> I agree that very long commit message lines can be bad, e.g. if they are 200+ chars. But <= 79 chars?.. Don't think so. Especially since we have a 79 chars limit for the code.
>
> We had a check for the line lengths in openstack-dev/hacking before but it was killed [4] as per openstack-dev@ discussion [5].
>
> I believe commit message lines of <=80 chars are absolutely fine and should not get -1 treatment. I propose to raise the limit for the guideline on wiki accordingly.
>
> Comments?

It makes me really sad that we actually even spend time discussing
things like this. As a core reviewer, I would just totally ignore this
-1. I also ignore -1s for things like minor typos in a comment, etc.

Let's focus on building good software instead. :)

// jim


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From mvoelker at vmware.com  Fri Sep 25 16:16:26 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Fri, 25 Sep 2015 16:16:26 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
Message-ID: <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>

On Sep 25, 2015, at 12:00 PM, Chris Hoge <chris at openstack.org> wrote:
> 
>> 
>> On Sep 25, 2015, at 6:59 AM, Doug Hellmann <doug at doughellmann.com> wrote:
>> 
>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +0000:
>>>> 
>>>> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
>>>> 
>>>> Hi Melanie
>>>> 
>>>> In general, images created by glance v1 API should be accessible using v2 and
>>>> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
>>>> causing incompatibility. These fixes were back-ported to stable/kilo.
>>>> 
>>>> Thanks
>>>> Sabari
>>>> 
>>>> [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>>> [2] - https://bugs.launchpad.net/bugs/1419823 
>>>> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
>>>> 
>>>> 
>>>> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
>>>> Hi All,
>>>> 
>>>> I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>>>> 
>>>> From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>>>> 
>>>> I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>>> 
>>> Just to clarify the DefCore situation a bit here: 
>>> 
>>> The DefCore Committee is considering adding some Glance v2
>> capabilities [1] as "advisory" (e.g. not required now but might be
>> in the future unless folks provide feedback as to why it shouldn't
>> be) in its next Guideline, which is due to go to the Board of Directors
>> in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
>> APIs are already required [3][4].  As discussion began about which
>> Glance capabilities to include and whether or not to keep the Nova
>> image APIs as required, it was pointed out that the many ways images
>> can currently be created in OpenStack are problematic from an
>> interoperability point of view, in that some clouds use one and some use
>> others.  To be included in a DefCore Guideline, capabilities are scored
>> against twelve Criteria [5], and need to achieve a certain total to be
>> included.  Having a bunch of different ways to deal with images
>> actually hurts the chances of any one of them meeting the bar because
>> it makes it less likely that they'll achieve several criteria.  For
>> example:
>>> 
>>> One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one cloud uses none of those but instead uses the import task API.
>>> 
>>> Another criterion is "atomic" [8], which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as glance v1 and v2's image create APIs, the latter lose points.
>> 
>> This seems backwards. The Nova API doesn't "do the same thing" as
>> the Glance API, it is a *proxy* for the Glance API. We should not
>> be requiring proxy APIs for interop. DefCore should only be using
>> tests that talk directly to the service that owns the feature being
>> tested.
> 
> I agree in general, at the time the standard was approved the
> only api we had available to us (because only nova code was
> being considered for inclusion) was the proxy.
> 
> We?re looking at v2 as the required api going forward, but
> as has been mentioned before, the nova proxy requires that
> v1 be present as a non-public api. Not the best situation in
> the world, and I?m personally looking forward to Glance,
> Cinder, and Neutron becoming explicitly required APIs in
> DefCore.
> 

Also worth pointing out here: when we talk about "doing the same thing" from a DefCore perspective, we're essentially talking about what's exposed to the end user, not how that's implemented in OpenStack's source code.  So from an end user's perspective:

If I call nova image-create, I get an image in my cloud.  If I call the Glance v2 API to create an image, I also get an image in my cloud.  I neither see nor care that Nova is actually talking to Glance in the background, because if I'm writing code that uses the OpenStack APIs, I need to pick which one of those two APIs to make my code call upon to put an image in my cloud.  Or, in the worst case, I have to write a bunch of if/else branches into my code because some clouds I want to use only allow one way and some allow only the other.

So from that end-user perspective, the Nova image-create API indeed does "do the same thing" as the Glance API.
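
The if/else situation described above might look something like this in
client code (a hypothetical sketch; the `cloud` dict and the API names
are invented purely for illustration):

```python
# Hypothetical sketch of the interop pain described above; the `cloud`
# dict and its "apis" entries are invented for illustration only.

def create_image(cloud, name):
    if "glance-v2" in cloud["apis"]:
        return ("glance-v2", name)        # talk to Glance v2 directly
    if "nova-image-create" in cloud["apis"]:
        return ("nova-proxy", name)       # fall back to the Nova proxy
    raise RuntimeError("no supported image API on this cloud")

print(create_image({"apis": ["glance-v2"]}, "cirros"))
# ('glance-v2', 'cirros')
```

Every extra branch here is a branch each client author has to write,
test, and maintain, which is the interoperability cost in a nutshell.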

At Your Service,

Mark T. Voelker


> 
>> Doug
>> 
>>> 
>>> Another criterion is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.
>>> 
>>> There are also criteria for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears that only one other than the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption, since Nova still uses the v1 API to talk to Glance and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release. [13]
>>> 
>>> So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week).  It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose it at least internally to the cloud.  But... really?  That's still sort of an ugly position to be in, because at the end of the day that's still a lot more moving parts than are really necessary and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.
>>> 
>>> So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]  
>>> 
>>> [1] https://review.openstack.org/#/c/213353/
>>> [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
>>> [3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
>>> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
>>> [5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
>>> [6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
>>> [7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
>>> [8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
>>> [9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
>>> [10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
>>> [11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
>>> [12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
>>> [13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015
>>> 
>>> At Your Service,
>>> 
>>> Mark T. Voelker
>>> 
>>>> 
>>>> Thanks,
>>>> -melanie (irc: melwitt)
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> 
>>>> 
>>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From jim at jimrollenhagen.com  Fri Sep 25 16:23:20 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 25 Sep 2015 09:23:20 -0700
Subject: [openstack-dev] [release][ironic] Ironic 4.2.0 release
Message-ID: <20150925162320.GF14957@jimrollenhagen.com>

Hi all,

I'm proud to announce the release of Ironic 4.2.0! This follows quickly on
our 4.1.0 release with 21 bug fixes, and marks the completion of four
blueprints.

It is also the basis for our stable/Liberty branch, and will be included in
the coordinated OpenStack Liberty release.

Major changes are listed below and at http://docs.openstack.org/developer/ironic/releasenotes/
and full release details are available on Launchpad: https://launchpad.net/ironic/liberty/4.2.0

* Deprecated the bash ramdisk

  The older bash ramdisk built by diskimage-builder is now deprecated and
  support will be removed at the beginning of the "N" development cycle. Users
  should migrate to a ramdisk running ironic-python-agent, which now also
  supports the pxe_* drivers that the bash ramdisk was responsible for.
  For more info on building an ironic-python-agent ramdisk, see:
  http://docs.openstack.org/developer/ironic/deploy/install-guide.html#building-or-downloading-a-deploy-ramdisk-image

* Raised API version to 1.14

  * 1.12 allows setting RAID properties for a node; however support for
    putting this configuration on a node is not yet implemented for in-tree
    drivers; this will be added in a future release.
  * 1.13 adds a new 'abort' verb to the provision state API. This may be used
    to abort cleaning for nodes in the CLEANWAIT state.
  * 1.14 makes the following endpoints discoverable in the API:
    * /v1/nodes/<UUID or logical name>/states
    * /v1/drivers/<driver name>/properties
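
As a rough illustration, a client pinning the new microversion to query
one of these endpoints might build its request like this (the URL,
driver name, and token are placeholders, not real values; a real
deployment authenticates via Keystone):

```python
from urllib.request import Request

def build_properties_request(base_url, driver, token):
    # Pin the Ironic API microversion via the standard header; the URL,
    # driver name, and token here are placeholders, not real values.
    req = Request("{}/v1/drivers/{}/properties".format(base_url, driver))
    req.add_header("X-OpenStack-Ironic-API-Version", "1.14")
    req.add_header("X-Auth-Token", token)
    return req  # pass to urllib.request.urlopen() to execute

req = build_properties_request("http://ironic.example:6385",
                               "agent_ipmitool", "placeholder-token")
print(req.full_url)
```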

* Implemented a new Boot interface for drivers

  This change enhances the driver interface for driver authors, and should not
  affect users of Ironic, by splitting control of booting a server from the
  DeployInterface. The BootInterface is responsible for booting an image on a
  server, while the DeployInterface is responsible for deploying a tenant image
  to a server.

  This has been implemented in most in-tree drivers, and is a
  backwards-compatible change for out-of-tree drivers. The following in-tree
  drivers will be updated in a forthcoming release:

  * agent_ilo
  * agent_irmc
  * iscsi_ilo
  * iscsi_irmc

* Implemented a new RAID interface for drivers

  This change enhances the driver interface for driver authors. Drivers may
  begin implementing this interface to support RAID configuration for nodes.
  This is not yet implemented for any in-tree drivers.

* Image size is now checked before deployment with agent drivers

  The agent must download the tenant image in full before writing it to disk.
  As such, the server being deployed must have enough RAM for running the
  agent and storing the image. This is now checked before Ironic tells the
  agent to deploy an image. An optional config [agent]memory_consumed_by_agent
  is provided. When Ironic does this check, this config option may be set to
  factor in the amount of RAM to reserve for running the agent.

* Added Cisco IMC driver

  This driver supports managing Cisco UCS C-series servers through the
  CIMC API, rather than IPMI. Documentation is available at:
  http://docs.openstack.org/developer/ironic/drivers/cimc.html

* iLO virtual media drivers can work without Swift

  iLO virtual media drivers (iscsi_ilo and agent_ilo) can work standalone
  without Swift, by configuring an HTTP(S) server for hosting the
  deploy/boot images. A web server needs to be running on every conductor
  node and needs to be configured in ironic.conf.

  iLO driver documentation is available at:
  http://docs.openstack.org/developer/ironic/drivers/ilo.html
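
  A sketch of what the standalone (Swift-less) setup might look like in
  ironic.conf; the option names below are drawn from the iLO driver
  documentation, so verify them against the link above:

  ```ini
  [deploy]
  # The HTTP(S) server on this conductor that hosts deploy/boot images.
  http_url = http://<conductor-ip>:8080
  http_root = /httpboot

  [ilo]
  # Use the web server above instead of Swift for image hosting.
  use_web_server_for_images = True
  ```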

// jim + deva


From ayoung at redhat.com  Fri Sep 25 16:23:33 2015
From: ayoung at redhat.com (Adam Young)
Date: Fri, 25 Sep 2015 12:23:33 -0400
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <CA+HkNVsjYj3UXtR2eJGVSvDECuFaaHRDBN-mp+YXe2nh4Y3TWQ@mail.gmail.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com> <55FC0DCF.7060307@redhat.com>
 <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>
 <CA+HkNVsjYj3UXtR2eJGVSvDECuFaaHRDBN-mp+YXe2nh4Y3TWQ@mail.gmail.com>
Message-ID: <56057505.6080802@redhat.com>

On 09/25/2015 07:09 AM, Sergii Golovatiuk wrote:
> Hi,
>
> Morgan gave the perfect case for why operators want to use uWSGI. Let's 
> imagine a future when all openstack services work as mod_wsgi 
> processes under apache. It's like putting all eggs in one basket. If 
> you need to reconfigure one service on a controller it may affect 
> another service. For instance, sometimes operators need to increase 
> the number of Threads/Processes for wsgi or add a new virtual host to 
> apache. That requires a graceful or cold restart of apache, which 
> affects other services. Another case: internal problems in mod_wsgi 
> may lead to an apache crash affecting all services.
>
> The uWSGI/gunicorn model is safer, as in this case apache is a reverse 
> proxy only. This model gives flexibility to operators. They may use 
> apache/nginx as a proxy or load balancer. A stop or crash of one service 
> won't lead to downtime of other services, and the management of 
> OpenStack becomes easier and friendlier.

There are some fallacies here:

1. OpenStack services should all be on the same machine.
2. OpenStack web services should run on ports other than 443.

I think both of these are ideas whose time has come and gone.

If you have a single machine, run them out of separate containers. That 
allows different services to work with different versions of the 
libraries. It lets you mix a newer Keystone with older everything else.

If everything is on port 443, you need a single web server at the front 
end to multiplex it;  uWSGI or any other one does not obviate that.


There are no good ports left in /etc/services;  stop trying to reserve 
new ones for the web.  If you need to run on a web service, you need to 
be able to get through firewalls.  You need to run on standard ports. 
Run on 443.

Keystone again is a great example of this: it has two ports: 5000 and 35357.

port 5000 in /etc/services is

commplex-main   5000/tcp

and  port 35357 is smack dab in the middle of the ephemeral range.


Again, so long as the web server supports the cryptographically secure 
mechanisms, I don't care which one you choose.  But the idea of us going 
to Keystone and getting a bearer token as the basis for security is 
immature; we should be doing the following on every call:

1. TLS
2. Cryptographic authentication.


They can be together or split up.

So, let's get everything running inside Apache, and, at the same time, 
push our other favorite web servers to support the necessary pieces to 
make OpenStack and the Web secure.


>
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Fri, Sep 18, 2015 at 3:44 PM, Morgan Fainberg 
> <morgan.fainberg at gmail.com <mailto:morgan.fainberg at gmail.com>> wrote:
>
>     There is and has been desire to support uWSGI and other
>     alternatives to mod_wsgi. There are a variety of operational
>     reasons to consider uWSGI and/or gunicorn behind apache most
>     notably to facilitate easier management of the processes
>     independently of the webserver itself. With mod_wsgi the processes
>     are directly tied to the apache server, whereas with uWSGI and
>     gunicorn you can manage the various services independently and/or
>     with differing VENVs more easily.
>
>     There are potential other concerns that must be weighed when
>     considering which method of deployment to use. I hope we have
>     clear documentation within the next cycle (and possible choices
>     for the gate) for utilizing uWSGI and/or gunicorn.
>
>     --Morgan
>
>     Sent via mobile
>
>     On Sep 18, 2015, at 06:12, Adam Young <ayoung at redhat.com
>     <mailto:ayoung at redhat.com>> wrote:
>
>>     On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
>>>     On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
>>>>     In the fuel project, we recently ran into a couple of issues with Apache2 +
>>>>     mod_wsgi as we switched Keystone to run under it. Please see [1] and [2].
>>>>
>>>>     Looking deep into Apache2 issues specifically around "apache2ctl graceful"
>>>>     and module loading/unloading and the hooks used by mod_wsgi [3]. I started
>>>>     wondering if Apache2 + mod_wsgi is the "right" solution and if there was
>>>>     something else better that people are already using.
>>>>
>>>>     One data point that keeps coming up is, all the CI jobs use Apache2 +
>>>>     mod_wsgi so it must be the best solution....Is it? If not, what is?
>>>     Disclaimer: it's been a while since I've cared about performance with a
>>>     web server in front of a Python app.
>>>
>>>     IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
>>>     on again. In general, I seem to remember it being thought of as a bit
>>>     old and crusty, but mostly working.
>>
>>     I am not aware of that.  It has been the workhorse of the
>>     Python/wsgi world for a while, and we use it heavily.
>>
>>>     At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
>>>     and saw a significant performance increase. This was a Django app. uwsgi
>>>     is fairly straightforward to operate and comes loaded with a myriad of
>>>     options[1] to help folks make the most of it. I've played with Ironic
>>>     behind uwsgi and it seemed to work fine, though I haven't done any sort
>>>     of load testing. I'd encourage folks to give it a shot. :)
>>
>>     Again, switching web servers is as likely to introduce as to
>>     solve problems.  If there are performance issues:
>>
>>     1.  Identify what causes them
>>     2.  Change configuration settings to deal with them
>>     3.  Fix upstream bugs in the underlying system.
>>
>>
>>     Keystone is not about performance.  Keystone is about security. 
>>     The cloud is designed to scale horizontally first.  Before
>>     advocating switching to a difference web server, make sure it
>>     supports the technologies required.
>>
>>
>>     1. TLS at the latest level
>>     2. Kerberos/GSSAPI/SPNEGO
>>     3. X509 Client cert validation
>>     4. SAML
>>
>>     OpenID connect would be a good one to add to the list;  Its been
>>     requested for a while.
>>
>>     If Keystone is having performance issues, it is most likely at
>>     the database layer, not the web server.
>>
>>
>>
>>     "Programmers waste enormous amounts of time thinking about, or
>>     worrying about, the speed of noncritical parts of their programs,
>>     and these attempts at efficiency actually have a strong negative
>>     impact when debugging and maintenance are considered. We /should/
>>     forget about small efficiencies, say about 97% of the time:
>>     *premature optimization is the root of all evil.* Yet we should
>>     not pass up our opportunities in that critical 3%."   --Donald Knuth
>>
>>
>>
>>>     Of course, uwsgi can also be ran behind Apache2, if you'd prefer.
>>>
>>>     gunicorn[2] is another good option that may be worth investigating; I
>>>     personally don't have any experience with it, but I seem to remember
>>>     hearing it has good eventlet support.
>>>
>>>     // jim
>>>
>>>     [0]https://uwsgi-docs.readthedocs.org/en/latest/
>>>     [1]https://uwsgi-docs.readthedocs.org/en/latest/Options.html
>>>     [2]http://gunicorn.org/
>>>
>>>     __________________________________________________________________________
>>>     OpenStack Development Mailing List (not for usage questions)
>>>     Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>     <mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
>


From anteaya at anteaya.info  Fri Sep 25 16:32:49 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Fri, 25 Sep 2015 12:32:49 -0400
Subject: [openstack-dev] [puppet] should puppet-neutron manage third
 party software?
In-Reply-To: <DDF6F73E-716D-439A-87B8-158008AD31AA@workday.com>
References: <56057027.3090808@redhat.com>
 <DDF6F73E-716D-439A-87B8-158008AD31AA@workday.com>
Message-ID: <56057731.7060109@anteaya.info>

On 09/25/2015 12:14 PM, Edgar Magana wrote:
> Hi There,
> 
> I just added my comment on the review. I do agree with Emilien. There should be specific repos for plugins and drivers.
> 
> BTW. I love the sdnmagic name  ;-)
> 
> Edgar
> 
> 
> 
> 
> On 9/25/15, 9:02 AM, "Emilien Macchi" <emilien at redhat.com> wrote:
> 
>> In our last meeting [1], we discussed whether or not to manage external
>> packaging repositories for Neutron plugin dependencies.
>>
>> Current situation:
>> puppet-neutron installs packages (like neutron-plugin-*) and
>> configures Neutron plugins (configuration files like
>> /etc/neutron/plugins/*.ini).
>> Some plugins (Cisco) do more: they install third-party packages
>> (not part of OpenStack) from external repos.
>>
>> The question is: should we continue that way and accept that kind of
>> patch [2]?
>>
>> I vote for no: managing external packages & external repositories should
>> be up to an external module.
>> Example: my SDN tool is called "sdnmagic":
>> 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>> configure the .ini file(s) to make it work in Neutron
>> 2/ create puppet-sdnmagic that will take care of everything else:
>> install sdnmagic, manage packaging (and specific dependencies),
>> repositories, etc.
>> I am -1 on puppet-neutron handling it. We are not managing SDN solutions:
>> we are enabling puppet-neutron to work with them.
>>
>> I would like to find a consensus here, that will be consistent across
>> *all plugins* without exception.
>>
>>
>> Thanks for your feedback,
>>
>> [1] http://goo.gl/zehmN2
>> [2] https://review.openstack.org/#/c/209997/
>> -- 
>> Emilien Macchi
>>
> 

I think the data point provided by the Cinder situation needs to be
considered in this decision: https://bugs.launchpad.net/manila/+bug/1499334

The bug report outlines the issue, but the tl;dr is that one Cinder
driver changed their licensing on a library required to run in tree code.

Thanks,
Anita.


From morgan.fainberg at gmail.com  Fri Sep 25 16:47:47 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 25 Sep 2015 09:47:47 -0700
Subject: [openstack-dev] Apache2 vs uWSGI vs ...
In-Reply-To: <56057505.6080802@redhat.com>
References: <CANw6fcEb=G0uVNuJ-oCvyNrb7Tafp2SS0KEnM7KFsvVVOEdvsQ@mail.gmail.com>
 <20150918020452.GQ21846@jimrollenhagen.com> <55FC0DCF.7060307@redhat.com>
 <262AA17A-E015-4191-BECF-4E044874D527@gmail.com>
 <CA+HkNVsjYj3UXtR2eJGVSvDECuFaaHRDBN-mp+YXe2nh4Y3TWQ@mail.gmail.com>
 <56057505.6080802@redhat.com>
Message-ID: <10C0BDCC-BE6E-4BE3-909D-83D6B2B44217@gmail.com>

There is no reason why the wsgi app container matters. This is simply a "we should document use of uwsgi and/or gunicorn as an alternative to mod_wsgi". If one solution is better for the gate, it will be used there, and each deployment will make its own determination of what to use. Adam's point remains regardless of which wsgi solution is used. 

> On Sep 25, 2015, at 09:23, Adam Young <ayoung at redhat.com> wrote:
> 
>> On 09/25/2015 07:09 AM, Sergii Golovatiuk wrote:
>> Hi,
>> 
>> Morgan gave the perfect case for why operators want to use uWSGI. Let's imagine a future when all openstack services work as mod_wsgi processes under apache. It's like putting all eggs in one basket. If you need to reconfigure one service on a controller it may affect another service. For instance, sometimes operators need to increase the number of Threads/Processes for wsgi or add a new virtual host to apache. That requires a graceful or cold restart of apache, which affects other services. Another case: internal problems in mod_wsgi may lead to an apache crash affecting all services. 
>> 
>> The uWSGI/gunicorn model is safer, as in this case apache is a reverse proxy only. This model gives flexibility to operators. They may use apache/nginx as a proxy or load balancer. A stop or crash of one service won't lead to downtime of other services, and the management of OpenStack becomes easier and friendlier.
> 
> There are some fallacies here:
> 
> 1. OpenStack services should all be on the same machine.
> 2. OpenStack web services should run on ports other than 443.
> 
> I think both of these are ideas whose time has come and gone.
> 
> If you have a single machine, run them out of separate containers.  That allows different services to work with different versions of the libraries. It lets you mix a newer Keystone with older everything else.
> 

Often the APIs are deployed on a common set of nodes. 

> If everything is on port 443, you need a single web server at the front end to multiplex it;  uWSGI or any other one does not obviate that.
> 

++

> 
> There are no good ports left in /etc/services;  stop trying to reserve new ones for the web.  If you need to run on a web service, you need to be able to get through firewalls.  You need to run on standard ports. Run on 443.
> 
> Keystone again is a great example of this: it has two ports: 5000 and 35357.
> 
> port 5000 in /etc/services is
> 
> commplex-main   5000/tcp
> 
> and  port 35357 is smack dab in the middle of the ephemeral range.
> 

This is a disconnect between Linux and IANA: IANA says 35357 is not ephemeral, but Linux's default ephemeral port range includes it. 

> 
> Again, so long as the web server supports the cryptographically secure mechanisms, I don't care which one you choose.  But the idea of us going to Keystone and getting a bearer token as the basis for security is immature; we should be doing the following on every call:
> 
> 1. TLS
> 2. Cryptographic authentication.
> 
> 
> They can be together or split up.
> 
> So, let's get everything running inside Apache, and, at the same time, push our other favorite web servers to support the necessary pieces to make OpenStack and the Web secure.
> 

++. We should do this and also document wsgi alternatives, which has no impact on this goal. Let's try to keep the different initiatives focused and not conflate the reasons for them. 

From chris.friesen at windriver.com  Fri Sep 25 16:48:02 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Fri, 25 Sep 2015 10:48:02 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <56047763.7090302@windriver.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
 <56047763.7090302@windriver.com>
Message-ID: <56057AC2.4030709@windriver.com>

On 09/24/2015 04:21 PM, Chris Friesen wrote:
> On 09/24/2015 12:18 PM, Chris Friesen wrote:
>
>>
>> I think what happened is that we took the SIGTERM after the open() call in
>> create_iscsi_target(), but before writing anything to the file.
>>
>>          f = open(volume_path, 'w+')
>>          f.write(volume_conf)
>>          f.close()
>>
>> The 'w+' causes the file to be immediately truncated on opening, leading to an
>> empty file.
>>
>> To work around this, I think we need to do the classic "write to a temporary
>> file and then rename it to the desired filename" trick.  The atomicity of the
>> rename ensures that either the old contents or the new contents are present.
>
> I'm pretty sure that upstream code is still susceptible to zeroing out the file
> in the above scenario.  However, it doesn't take an exception--that's due to a
> local change on our part that attempted to fix the below issue.
>
> The stable/kilo code *does* have a problem in that when it regenerates the file
> it's missing the CHAP authentication line (beginning with "incominguser").

I've proposed a change at https://review.openstack.org/#/c/227943/

If anyone has suggestions on how to do this more robustly or more cleanly, 
please let me know.
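
For reference, the write-to-temp-then-rename pattern discussed in this
thread can be sketched as below. This is a generic sketch, not the
proposed Cinder patch; it also fsyncs the file, per the question in the
subject line:

```python
import os
import tempfile


def atomic_write(path, data):
    """Write data to path so that readers observe either the old
    contents or the new contents, never a truncated file."""
    # Create the temp file in the destination directory so the final
    # rename stays on one filesystem and is therefore atomic.
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the data to stable storage
        os.rename(tmp_path, path)  # atomic on POSIX filesystems
    except Exception:
        os.unlink(tmp_path)
        raise
```

A SIGTERM between the open() and the write() then leaves the old config
file untouched rather than zeroed out.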

Chris


From rbryant at redhat.com  Fri Sep 25 16:59:00 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Fri, 25 Sep 2015 12:59:00 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
Message-ID: <56057D54.4040204@redhat.com>

On 09/25/2015 12:15 PM, Fox, Kevin M wrote:
> Another option... why are we wasting time on something that a
> computer can handle? Why not just let the line length be infinite in
> the commit message and have gerrit wrap it to <insert random number
> here> length lines on merge?

I don't think gerrit should mess with the commit message at all.  Commit
message formatting is often very intentional.

-- 
Russell Bryant


From jsbryant at electronicjungle.net  Fri Sep 25 16:59:09 2015
From: jsbryant at electronicjungle.net (Jay S. Bryant)
Date: Fri, 25 Sep 2015 11:59:09 -0500
Subject: [openstack-dev] [Cinder] Late Liberty patches should now be
	unblocked ...
Message-ID: <56057D5D.6040701@electronicjungle.net>

All,

Now that Mitaka is open I have done my best to go through and remove all 
the -2's that I had given to block Liberty patches that needed to wait 
for Mitaka.

If you have a patch that I missed please ping me on IRC.

Happy Mitaka merging!

Thanks,
Jay



From emilien at redhat.com  Fri Sep 25 17:01:59 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 25 Sep 2015 13:01:59 -0400
Subject: [openstack-dev] [Openstack-operators] [puppet] feedback request
 about puppet-keystone
In-Reply-To: <56002E35.2010807@redhat.com>
References: <56002E35.2010807@redhat.com>
Message-ID: <56057E07.6070201@redhat.com>



On 09/21/2015 12:20 PM, Emilien Macchi wrote:
> Hi,
> 
> Puppet OpenStack group would like to know your feedback about using
> puppet-keystone module.
> 
> Please take two minutes and fill out the form [1], which contains a few
> questions. The answers will help us to define our roadmap for the next
> cycle and make Keystone deployment stronger for our users.
> 
> The results of the form should be visible online; otherwise, I'll make
> sure the results are 100% public and transparent.
> 
> Thank you for your time,
> 
> [1] http://goo.gl/forms/eiGWFkkXLZ
> 

So after 5 days, here is a bit of feedback (13 people did the poll [1]):

1/ Providers
Except for one, most people are managing a small number of Keystone
users/tenants.
I would like to know if that's because the current implementation (using
openstackclient) is too slow or just because they don't need to do that
(they use bash, sdk, ansible, etc).

2/ Features you want

* "Configuration of federation via shibboleth":
WIP on https://review.openstack.org/#/c/216821/

* "Configuration of federation via mod_mellon":
Will come after shibboleth I guess.

* "Allow to configure websso"":
See
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/enabling-federation.html

* "Management of fernet keys":
nothing *yet* in our roadmap AFAIK; adding it to our backlog [2]

* "Support for hybrid domain configurations (e.g. using both LDAP and
built in database backend)":
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html

* "Full v3 API support (depends on other modules beyond just
puppet-keystone)":
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html

* "the ability to upgrade modules independently of one another, like we
do in production - currently the puppet dependencies dictate the order
in which we do upgrades more than the OpenStack dependencies":

During the last Summit, we decided [3] as a community that our modules
branches will only support the OpenStack release of the branch.
ie: stable/kilo supports OpenStack 2015.1 (Kilo). Maybe you can deploy
Juno or Liberty with it, but our community does not support that.
To give a little background, we already discussed this [4] on the ML.
Our interface is 100% (or should be) backward compatible for at least
one full cycle, so you should not have issues when using a new version of
the module with the same parameters. That said, you do need to keep your
modules synchronized, especially because we have libraries and
common providers (in puppet-keystone).
AFAIK, OpenStack also works like this with openstack/requirements.
I'm not sure you can run Glance Kilo with Oslo Juno (maybe I'm wrong).
What you're asking would be technically hard because we would have to
support old versions of our providers & libraries, with a lot of
backward compatible & legacy code in place, while we already do a good
job in the parameters (interface).
If you have any serious proposal, we would be happy to discuss design
and find a solution.

3/ What we could improve in Puppet Keystone (and in general, regarding
the answers)

* "(...) but it would be nice to be able to deploy master and the most
recent version immediately rather than wait. Happy to get involved with
that as our maturity improves and we actually start to use the current
version earlier. Contribution is hard when you folk are ahead of the
game, any fixes and additions we have are ancient already":

I would like to understand the issues here:
do you have problems contributing?
is your issue "a feature is in master and not in stable/*"? If that's
the case, it means we can do a better job with our backport policy,
something we have already discussed together and that I hope our group
is aware of.

* "We were using keystone_user_role until we had huge compilation times
due to the matrix (tenant x role x user) that is not scalable. With
every single user and tenant on the environment, the catalog compilation
increased. An improvement on that area will be useful."

I understand the frustration and we are working on it [5].

* "Currently does not handle deployment of hybrid domain configurations."

Ditto:
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html


I liked running a poll like this; if you don't mind, I'll take time to
prepare a bigger poll so we can gather even more feedback, because
it's really useful. Thanks for that.


Discussion is open on this thread about features/concerns mentioned in
the poll.


[1]
https://docs.google.com/forms/d/1Z6IGeJRNmX7xx0Ggmr5Pmpzq7BudphDkZE-3t4Q5G1k/viewanalytics
[2] https://trello.com/c/HjiWUng3/65-puppet-keystone-manage-fernet-keys
[3]
http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/master-policy.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069147.html
[5] https://bugs.launchpad.net/puppet-keystone/+bug/1493450
-- 
Emilien Macchi


From andrew at lascii.com  Fri Sep 25 17:12:57 2015
From: andrew at lascii.com (Andrew Laski)
Date: Fri, 25 Sep 2015 13:12:57 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <F1384511-A789-4FF3-988A-E1409E647F98@vmware.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local> <20150925144216.GH8745@crypt>
 <F1384511-A789-4FF3-988A-E1409E647F98@vmware.com>
Message-ID: <20150925171257.GI8745@crypt>

On 09/25/15 at 03:09pm, Mark Voelker wrote:
>On Sep 25, 2015, at 10:42 AM, Andrew Laski <andrew at lascii.com> wrote:
>>
>> On 09/25/15 at 09:59am, Doug Hellmann wrote:
>>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +0000:
>>>> >
>>>> > On Sep 24, 2015, at 5:55 PM, Sabari Murugesan <sabari.bits at gmail.com> wrote:
>>>> >
>>>> > Hi Melanie
>>>> >
>>>> > In general, images created by glance v1 API should be accessible using v2 and
>>>> > vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an image was
>>>> > causing incompatibility. These fixes were back-ported to stable/kilo.
>>>> >
>>>> > Thanks
>>>> > Sabari
>>>> >
>>>> > [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>>> > [2] - https://bugs.launchpad.net/bugs/1419823
>>>> > [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>>>> >
>>>> >
>>>> > On Thu, Sep 24, 2015 at 2:17 PM, melanie witt <melwittt at gmail.com> wrote:
>>>> > Hi All,
>>>> >
>>>> > I have been looking and haven't yet located documentation about how to upgrade from glance v1 to glance v2.
>>>> >
>>>> > From what I understand, images and snapshots created with v1 can't be listed/accessed through the v2 api. Are there instructions about how to migrate images and snapshots from v1 to v2? Are there other incompatibilities between v1 and v2?
>>>> >
>>>> > I'm asking because I have read that glance v1 isn't defcore compliant and so we need all projects to move to v2, but the incompatibility from v1 to v2 is preventing that in nova. Is there anything else preventing v2 adoption? Could we move to glance v2 if there's a migration path from v1 to v2 that operators can run through before upgrading to a version that uses v2 as the default?
>>>>
>>>> Just to clarify the DefCore situation a bit here:
>>>>
>>>> The DefCore Committee is considering adding some Glance v2
>>> capabilities [1] as "advisory" (e.g. not required now but might be
>>> in the future unless folks provide feedback as to why it shouldn't
>>> be) in its next Guideline, which is due to go to the Board of Directors
>>> in January and will cover Juno, Kilo, and Liberty [2].  The Nova image
>>> APIs are already required [3][4].  As discussion began about which
>>> Glance capabilities to include and whether or not to keep the Nova
>>> image APIs as required, it was pointed out that the many ways images
>>> can currently be created in OpenStack are problematic from an
>>> interoperability point of view in that some clouds use one and some use
>>> others.  To be included in a DefCore Guideline, capabilities are scored
>>> against twelve Criteria [5], and need to achieve a certain total to be
>>> included.  Having a bunch of different ways to deal with images
>>> actually hurts the chances of any one of them meeting the bar because
>>> it makes it less likely that they'll achieve several criteria.  For
>>> example:
>>>>
>>>> One of the criteria is "widely deployed" [6].  In the case of images, the Nova image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 isn't, and at least one uses none of those but instead uses the import task API.
>>>>
>>>> Another criterion is "atomic" [8], which basically means the capability is unique and can't be built out of other required capabilities.  Since the Nova image-create API is already required and effectively does the same thing as glance v1 and v2's image create APIs, the latter lose points.
>>>
>>> This seems backwards. The Nova API doesn't "do the same thing" as
>>> the Glance API, it is a *proxy* for the Glance API. We should not
>>> be requiring proxy APIs for interop. DefCore should only be using
>>> tests that talk directly to the service that owns the feature being
>>> tested.
>>
>> I completely agree with this.  I will admit to having some confusion as to why Glance capabilities have been tested through Nova and I know others have raised this same thought within the process.
>
>Because it turns out that's how most of the world is dealing with images.
>
>Generally speaking, the nova image API and glance v2 APIs have roughly equal adoption among public and private cloud products, but among the client SDKs people are using to interact with OpenStack the nova image APIs have much better adoption (see notes in previous message for details).  So we gave the world lots of different ways to do the same thing and the world has strongly adopted two of them (with reasonable evidence that the Nova image API is actually the most-adopted of the lot).  If you're looking for the most interoperable way to create an image across lots of different OpenStack clouds today, it's actually through Nova.

I understand that reasoning, but still am unsure on a few things.

The direction seems to be moving towards having a requirement that the 
same functionality is offered in two places, Nova API and Glance V2 API.  
That seems like it would fragment adoption rather than unify it.

Also after digging in on image-create I feel that there may be a mixup.  
The image-create in Glance and image-create in Nova are two different 
things.  In Glance you create an image and send the disk image data in 
the request, in Nova an image-create takes a snapshot of the instance 
provided in the request.  But it seems like DefCore is treating them as 
equivalent unless I'm misunderstanding.

>
>At Your Service,
>
>Mark T. Voelker
>
>>
>>>
>>> Doug
>>>
>>>>
>>>> Another criterion is "future direction" [9].  Glance v1 gets no points here since v2 is the current API, has been for a while, and there's even been some work on v3 already.
>>>>
>>>> There is also a criterion for "used by clients" [11].  Unfortunately both Glance v1 and v2 fall down pretty hard here: of all the client libraries users reported in the last user survey, it appears only one other than the OpenStack clients supports Glance v2 and one supports Glance v1, while the rest all rely on the Nova APIs.  Even within OpenStack we don't necessarily have good adoption, since Nova still uses the v1 API to talk to Glance and OpenStackClient didn't support image creation with v2 until this week's 1.7.0 release. [13]
>>>>
>>>> So, it's a bit problematic that v1 is still being used even within the project (though it did get slightly better this week). It's highly unlikely at this point that it makes any sense for DefCore to require OpenStack Powered products to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to be exposed to end users, that doesn't necessarily mean Nova couldn't continue to use v1: OpenStack Powered products wouldn't be required to expose v1 to end users, but if the nova image-create API remains required then they'd have to expose it at least internally to the cloud.  But... really?  That's still sort of an ugly position to be in, because at the end of the day that's still a lot more moving parts than are really necessary and that's not particularly good for operators, end users, developers who want interoperable ways of doing things, or pretty much anybody else.
>>>>
>>>> So basically: yes, it would be *lovely* if we could all get behind fewer ways of dealing with images. [10]
>>>>
>>>> [1] https://review.openstack.org/#/c/213353/
>>>> [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
>>>> [3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23
>>>> [4] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05.json#n20
>>>> [5] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst
>>>> [6] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n40
>>>> [7] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074540.html
>>>> [8] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n87
>>>> [9] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n60
>>>> [10] Meh, entirely too many footnotes here so why not put one out of order for fun: https://www.youtube.com/watch?v=oHg5SJYRHA0
>>>> [11] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst#n48
>>>> [12] See comments in https://review.openstack.org/#/c/213353/7/working_materials/scoring.txt
>>>> [13] http://docs.openstack.org/developer/python-openstackclient/releases.html#sep-2015
>>>>
>>>> At Your Service,
>>>>
>>>> Mark T. Voelker
>>>>
>>>> >
>>>> > Thanks,
>>>> > -melanie (irc: melwitt)
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > __________________________________________________________________________
>>>> > OpenStack Development Mailing List (not for usage questions)
>>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >
>>>> >
>>>> > __________________________________________________________________________
>>>> > OpenStack Development Mailing List (not for usage questions)
>>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From brian.rosmaita at RACKSPACE.COM  Fri Sep 25 17:24:48 2015
From: brian.rosmaita at RACKSPACE.COM (Brian Rosmaita)
Date: Fri, 25 Sep 2015 17:24:48 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
Message-ID: <D22AF859.22B68%brian.rosmaita@rackspace.com>

I'd like to clarify something.

On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
[big snip]
>Also worth pointing out here: when we talk about "doing the same thing"
>from a DefCore perspective, we're essentially talking about what's
>exposed to the end user, not how that's implemented in OpenStack's source
>code.  So from an end user's perspective:
>
>If I call nova image-create, I get an image in my cloud.  If I call the
>Glance v2 API to create an image, I also get an image in my cloud.  I
>neither see nor care that Nova is actually talking to Glance in the
>background, because if I'm writing code that uses the OpenStack APIs, I
>need to pick which one of those two APIs to make my code call upon to
>put an image in my cloud.  Or, in the worst case, I have to write a bunch
>of if/else loops into my code because some clouds I want to use only
>allow one way and some allow only the other.

The above is a bit inaccurate.

The nova image-create command does give you an image in your cloud.  The
image you get, however, is a snapshot of an instance that has been
previously created in Nova.  If you don't have an instance, you cannot
create an image via that command.  There is no provision in the Compute
(Nova) API to allow you to create an image out of bits that you supply.

The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
register them as an image which you can then use to boot instances from by
using the Compute API.  But note that if all you have available is the
Images API, you cannot create an image of one of your instances.

>So from that end-user perspective, the Nova image-create API indeed does
>"do the same thing" as the Glance API.

They don't "do the same thing".  Even if you have full access to the
Images v1 or v2 API, you will still have to use the Compute (Nova) API to
create an image of an instance, which is by far the largest use-case for
image creation.  You can't do it through Glance, because Glance doesn't
know anything about instances.  Nova has to know about Glance, because it
needs to fetch images for instance creation, and store images for
on-demand images of instances.
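To make the distinction concrete, here is a toy model of the two "image
create" operations. The class and method names below are invented for
illustration and do not match the real python-novaclient or
python-glanceclient APIs:

```python
# Toy model of the two "image create" paths discussed above.
# Names and signatures here are invented for illustration only.

class ImageService:
    """Glance-like: you supply the image bits directly."""
    def __init__(self):
        self.images = {}
        self._next_id = 0

    def create_image(self, name, data):
        self._next_id += 1
        self.images[self._next_id] = (name, data)
        return self._next_id


class ComputeService:
    """Nova-like: image-create snapshots an *existing* instance."""
    def __init__(self, image_service):
        self.image_service = image_service
        self.servers = {}  # server_id -> disk contents

    def snapshot(self, server_id, image_name):
        if server_id not in self.servers:
            raise LookupError("no such instance: %s" % server_id)
        disk = self.servers[server_id]
        return self.image_service.create_image(image_name, disk)


glance = ImageService()
nova = ComputeService(glance)

# Glance path: works with nothing but the bits in hand.
image_id = glance.create_image("cirros", b"\x00" * 16)

# Nova path: fails unless an instance already exists to snapshot.
try:
    nova.snapshot("i-123", "my-snap")
except LookupError as e:
    print(e)  # no such instance: i-123
```

The point of the sketch: the Glance path needs only the image data, while
the Nova path needs a pre-existing instance, so neither can substitute for
the other.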


>At Your Service,
>
>Mark T. Voelker

Glad to be of service, too,
brian



From openstack at nemebean.com  Fri Sep 25 17:31:50 2015
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 25 Sep 2015 12:31:50 -0500
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <1443184498.2443.10.camel@redhat.com>
References: <1443184498.2443.10.camel@redhat.com>
Message-ID: <56058506.3030807@nemebean.com>

On 09/25/2015 07:34 AM, Dan Prince wrote:
> It has come to my attention that we aren't making great use of our
> tripleo.org domain. One thing that would be useful would be to have the
> new tripleo-docs content displayed there. It would also be nice to have
> quick links to some of our useful resources, perhaps Derek's CI report
> [1], a custom Reviewday page for TripleO reviews (something like this
> [2]), and perhaps other links too. I'm thinking these go in the header,
> and not just on some random TripleO docs page. Or perhaps both places.

Note that there's a TripleO Inbox Dashboard linked toward the bottom of
https://wiki.openstack.org/wiki/TripleO#Review_team (It should probably
be higher up than that, since it's incredibly useful).  This is actually
what I use for tracking TripleO reviews, and would be a simple thing to
start with for this.

+1 to everything else.

> 
> I was thinking that instead of the normal OpenStack theme however we
> could go a bit off the beaten path and do our own TripleO theme.
> Basically a custom tripleosphinx project that we ninja in as a
> replacement for oslosphinx.
> 
> Could get our own mascot... or do something silly with words. I'm
> reaching out to graphics artists who could help with this sort of
> thing... but before that decision is made I wanted to ask about
> thoughts on the matter here first.

I like the mascot/logo idea.  Not sure why we would want to deviate from
the standard OpenStack docs theme though.  What is your motivation for
suggesting that?

Also, if we get a mascot I want t-shirts. ;-)

> 
> Speak up... it would be nice to have this wrapped up before Tokyo.
> 
> [1] http://goodsquishy.com/downloads/tripleo-jobs.html
> [2] http://status.openstack.org/reviews/
> 
> Dan
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From mvoelker at vmware.com  Fri Sep 25 17:42:24 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Fri, 25 Sep 2015 17:42:24 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <D22AF859.22B68%brian.rosmaita@rackspace.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
Message-ID: <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>

On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosmaita at RACKSPACE.COM> wrote:
> 
> I'd like to clarify something.
> 
> On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
> [big snip]
>> Also worth pointing out here: when we talk about "doing the same thing"
>> from a DefCore perspective, we're essentially talking about what's
>> exposed to the end user, not how that's implemented in OpenStack's source
>> code.  So from an end user's perspective:
>> 
>> If I call nova image-create, I get an image in my cloud.  If I call the
>> Glance v2 API to create an image, I also get an image in my cloud.  I
>> neither see nor care that Nova is actually talking to Glance in the
>> background, because if I'm writing code that uses the OpenStack APIs, I
>> need to pick which one of those two APIs to make my code call upon to
>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>> of if/else loops into my code because some clouds I want to use only
>> allow one way and some allow only the other.
> 
> The above is a bit inaccurate.
> 
> The nova image-create command does give you an image in your cloud.  The
> image you get, however, is a snapshot of an instance that has been
> previously created in Nova.  If you don't have an instance, you cannot
> create an image via that command.  There is no provision in the Compute
> (Nova) API to allow you to create an image out of bits that you supply.
> 
> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> register them as an image which you can then use to boot instances from by
> using the Compute API.  But note that if all you have available is the
> Images API, you cannot create an image of one of your instances.
> 
>> So from that end-user perspective, the Nova image-create API indeed does
>> "do the same thing" as the Glance API.
> 
> They don't "do the same thing".  Even if you have full access to the
> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> create an image of an instance, which is by far the largest use-case for
> image creation.  You can't do it through Glance, because Glance doesn't
> know anything about instances.  Nova has to know about Glance, because it
> needs to fetch images for instance creation, and store images for
> on-demand images of instances.

Yup, that's fair: this was a bad example to pick (need moar coffee I guess).  Let's use image-list instead. =)

At Your Service,

Mark T. Voelker


> 
> 
>> At Your Service,
>> 
>> Mark T. Voelker
> 
> Glad to be of service, too,
> brian
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From gessau at cisco.com  Fri Sep 25 17:47:56 2015
From: gessau at cisco.com (Henry Gessau)
Date: Fri, 25 Sep 2015 13:47:56 -0400
Subject: [openstack-dev] [neutron][horizon] Nice new Network Topology panel
	in Horizon
Message-ID: <560588CC.1040501@cisco.com>

It has been about three years in the making but now it is finally here.
A screenshot doesn't do it justice, so here is a short video overview:
https://youtu.be/PxFd-lJV0e4

Isn't that neat? I am sure you can see that it is a great improvement,
especially for larger topologies.

This new view will be part of the Liberty release of Horizon. I encourage you to
take a look at it with your own network topologies, play around with it, and
provide feedback. Please stop by the #openstack-horizon IRC channel if there are
issues you would like addressed.

Thanks to the folks who made this happen.


From doug at doughellmann.com  Fri Sep 25 17:56:39 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 25 Sep 2015 13:56:39 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
Message-ID: <1443203624-sup-2555@lrrr.local>

Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosmaita at RACKSPACE.COM> wrote:
> > 
> > I'd like to clarify something.
> > 
> > On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
> > [big snip]
> >> Also worth pointing out here: when we talk about "doing the same thing"
> >> from a DefCore perspective, we're essentially talking about what's
> >> exposed to the end user, not how that's implemented in OpenStack's source
> >> code.  So from an end user's perspective:
> >> 
> >> If I call nova image-create, I get an image in my cloud.  If I call the
> >> Glance v2 API to create an image, I also get an image in my cloud.  I
> >> neither see nor care that Nova is actually talking to Glance in the
> >> background, because if I'm writing code that uses the OpenStack APIs, I
> >> need to pick which one of those two APIs to make my code call upon to
> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >> of if/else loops into my code because some clouds I want to use only
> >> allow one way and some allow only the other.
> > 
> > The above is a bit inaccurate.
> > 
> > The nova image-create command does give you an image in your cloud.  The
> > image you get, however, is a snapshot of an instance that has been
> > previously created in Nova.  If you don't have an instance, you cannot
> > create an image via that command.  There is no provision in the Compute
> > (Nova) API to allow you to create an image out of bits that you supply.
> > 
> > The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> > register them as an image which you can then use to boot instances from by
> > using the Compute API.  But note that if all you have available is the
> > Images API, you cannot create an image of one of your instances.
> > 
> >> So from that end-user perspective, the Nova image-create API indeed does
> >> "do the same thing" as the Glance API.
> > 
> > They don't "do the same thing".  Even if you have full access to the
> > Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> > create an image of an instance, which is by far the largest use-case for
> > image creation.  You can't do it through Glance, because Glance doesn't
> > know anything about instances.  Nova has to know about Glance, because it
> > needs to fetch images for instance creation, and store images for
> > on-demand images of instances.
> 
> Yup, that's fair: this was a bad example to pick (need moar coffee I guess).  Let's use image-list instead. =)

From a "technical direction" perspective, I still think it's a bad
situation for us to be relying on any proxy APIs like this. Yes,
they are widely deployed, but we want to be using glance for image
features, neutron for networking, etc. Having the nova proxy is
fine, but while we have DefCore using tests to enforce the presence
of the proxy we can't deprecate those APIs.

What do we need to do to make that change happen over the next cycle
or so?

Doug

> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > 
> >> At Your Service,
> >> 
> >> Mark T. Voelker
> > 
> > Glad to be of service, too,
> > brian
> > 
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From mitsuhiro.tanino at hds.com  Fri Sep 25 18:30:19 2015
From: mitsuhiro.tanino at hds.com (Mitsuhiro Tanino)
Date: Fri, 25 Sep 2015 18:30:19 +0000
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <56057AC2.4030709@windriver.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
 <56047763.7090302@windriver.com> <56057AC2.4030709@windriver.com>
Message-ID: <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>

On 09/22/2015 06:43 PM, Robert Collins wrote:
> On 23 September 2015 at 09:52, Chris Friesen 
> <chris.friesen at windriver.com> wrote:
>> Hi,
>>
>> I recently had an issue with one file out of a dozen or so in 
>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm 
>> running stable/kilo if it makes a difference.
>>
>> Looking at the code in 
>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we 
>> should do a fsync() before the close().  The way it stands now, it 
>> seems like it might be possible to write the file, start making use 
>> of it, and then take a power outage before it actually gets written 
>> to persistent storage.  When we come back up we could have an 
>> instance expecting to make use of it, but no target information in the on-disk copy of the file.

I think that even if there is no target information in the configuration file directory, c-vol
starts successfully, the iSCSI targets are created automatically, and the volumes are exported, right?

There is a problem in this case: the iSCSI target is created without authentication, because
we can't get the previous authentication from the configuration file.

I'm curious what kind of problem you ran into.
  
> If it's being kept in sync with DB records, and won't self-heal from 
> this situation, then yes. e.g. if the overall workflow is something 
> like

In my understanding, the provider_auth column in the database has the user name and password for the iSCSI target. 
Therefore, if we get the authentication from the DB, I think we can self-heal from this situation
correctly after the c-vol service is restarted.

The lio target obtains authentication from provider_auth in the database, but tgtd, iet, and cxt obtain
authentication from a file to recreate the iSCSI target when c-vol is restarted.
If the file is missing, these volumes are exported without authentication and the configuration
file is recreated, as I mentioned above.

tgtd: Get target chap auth from file
iet:  Get target chap auth from file
cxt:  Get target chap auth from file
lio:  Get target chap auth from Database(in provider_auth)
scst: Get target chap auth by using original command

If we get authentication from the DB for tgtd, iet, and cxt, the same as lio, we can recreate the iSCSI target
with proper authentication when c-vol is restarted.
I think this is a solution for this situation.

Any thought?
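As a sketch, recovering the CHAP credentials from the DB could look like
the following. The "CHAP <user> <password>" layout of provider_auth is an
assumption here, modeled on how the lio driver uses that field:

```python
# Sketch of recreating target auth from the DB instead of the on-disk
# file, as proposed above.  The "CHAP <user> <password>" layout of
# provider_auth is an assumption, modeled on the lio driver.

def chap_auth_from_provider_auth(provider_auth):
    """Return (username, password), or None if no CHAP auth is recorded."""
    if not provider_auth:
        return None
    auth_method, username, password = provider_auth.split(' ', 2)
    if auth_method.upper() != 'CHAP':
        return None
    return username, password

print(chap_auth_from_provider_auth("CHAP admin s3cret"))  # ('admin', 's3cret')
```

With something like this, the target drivers that currently re-read the
config file on restart could fall back to the DB when the file is missing
or empty.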

Thanks,
Mitsuhiro Tanino

> -----Original Message-----
> From: Chris Friesen [mailto:chris.friesen at windriver.com]
> Sent: Friday, September 25, 2015 12:48 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
> config file?
> 
> On 09/24/2015 04:21 PM, Chris Friesen wrote:
> > On 09/24/2015 12:18 PM, Chris Friesen wrote:
> >
> >>
> >> I think what happened is that we took the SIGTERM after the open()
> >> call in create_iscsi_target(), but before writing anything to the file.
> >>
> >>          f = open(volume_path, 'w+')
> >>          f.write(volume_conf)
> >>          f.close()
> >>
> >> The 'w+' causes the file to be immediately truncated on opening,
> >> leading to an empty file.
> >>
> >> To work around this, I think we need to do the classic "write to a
> >> temporary file and then rename it to the desired filename" trick.
> >> The atomicity of the rename ensures that either the old contents or the new
> contents are present.
> >
> > I'm pretty sure that upstream code is still susceptible to zeroing out
> > the file in the above scenario.  However, it doesn't take an
> > exception--that's due to a local change on our part that attempted to fix the
> below issue.
> >
> > The stable/kilo code *does* have a problem in that when it regenerates
> > the file it's missing the CHAP authentication line (beginning with
> "incominguser").
> 
> I've proposed a change at https://review.openstack.org/#/c/227943/
> 
> If anyone has suggestions on how to do this more robustly or more cleanly,
> please let me know.
> 
> Chris
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From eharney at redhat.com  Fri Sep 25 18:56:14 2015
From: eharney at redhat.com (Eric Harney)
Date: Fri, 25 Sep 2015 14:56:14 -0400
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
 <56047763.7090302@windriver.com> <56057AC2.4030709@windriver.com>
 <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>
Message-ID: <560598CE.6040901@redhat.com>

On 09/25/2015 02:30 PM, Mitsuhiro Tanino wrote:
> On 09/22/2015 06:43 PM, Robert Collins wrote:
>> On 23 September 2015 at 09:52, Chris Friesen 
>> <chris.friesen at windriver.com> wrote:
>>> Hi,
>>>
>>> I recently had an issue with one file out of a dozen or so in 
>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm 
>>> running stable/kilo if it makes a difference.
>>>
>>> Looking at the code in 
>>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we 
>>> should do a fsync() before the close().  The way it stands now, it 
>>> seems like it might be possible to write the file, start making use 
>>> of it, and then take a power outage before it actually gets written 
>>> to persistent storage.  When we come back up we could have an 
>>> instance expecting to make use of it, but no target information in the on-disk copy of the file.
> 
> I think that even if there is no target information in the configuration file directory, c-vol
> starts successfully, the iSCSI targets are created automatically, and the volumes are exported, right?
> 
> There is a problem in this case: the iSCSI target is created without authentication, because
> we can't get the previous authentication from the configuration file.
> 
> I'm curious what kind of problem you ran into.
>   
>> If it's being kept in sync with DB records, and won't self-heal from 
>> this situation, then yes. e.g. if the overall workflow is something 
>> like
> 
> In my understanding, the provider_auth column in the database has the user name and password for the iSCSI target. 
> Therefore, if we get the authentication from the DB, I think we can self-heal from this situation
> correctly after the c-vol service is restarted.
> 

Is this not already done as-needed by ensure_export()?

> The lio target obtains authentication from provider_auth in the database, but tgtd, iet, and cxt obtain
> authentication from a file to recreate the iSCSI target when c-vol is restarted.
> If the file is missing, these volumes are exported without authentication and the configuration
> file is recreated, as I mentioned above.
> 
> tgtd: Get target chap auth from file
> iet:  Get target chap auth from file
> cxt:  Get target chap auth from file
> lio:  Get target chap auth from Database(in provider_auth)
> scst: Get target chap auth by using original command
> 
> If we get authentication from the DB for tgtd, iet, and cxt, the same as lio, we can recreate the iSCSI target
> with proper authentication when c-vol is restarted.
> I think this is a solution for this situation.
> 

This may be possible, but fixing the target config file to be written
more safely to work as currently intended is still a win.
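For reference, the "write to a temp file, fsync, then rename" pattern
discussed in this thread looks roughly like this. This is an illustrative
sketch, not the actual cinder volume.targets.tgt code:

```python
# Sketch of an atomic file replacement: a crash or power loss leaves
# either the old contents or the new contents, never a truncated file.
# Illustrative only -- not the actual cinder target driver code.
import os
import tempfile

def atomic_write(path, data):
    """Replace *path* with *data* atomically."""
    dirname = os.path.dirname(path)
    # Create the temp file in the same directory so the rename cannot
    # cross filesystems (cross-device renames are not atomic).
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the data to disk before rename
        os.rename(tmp_path, path)  # atomic on POSIX filesystems
    except Exception:
        os.unlink(tmp_path)
        raise
```

A fully paranoid version would also fsync the containing directory after
the rename so the directory entry itself survives a power loss.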

> Any thought?
> 
> Thanks,
> Mitsuhiro Tanino
> 
>> -----Original Message-----
>> From: Chris Friesen [mailto:chris.friesen at windriver.com]
>> Sent: Friday, September 25, 2015 12:48 PM
>> To: openstack-dev at lists.openstack.org
>> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
>> config file?
>>
>> On 09/24/2015 04:21 PM, Chris Friesen wrote:
>>> On 09/24/2015 12:18 PM, Chris Friesen wrote:
>>>
>>>>
>>>> I think what happened is that we took the SIGTERM after the open()
>>>> call in create_iscsi_target(), but before writing anything to the file.
>>>>
>>>>          f = open(volume_path, 'w+')
>>>>          f.write(volume_conf)
>>>>          f.close()
>>>>
>>>> The 'w+' causes the file to be immediately truncated on opening,
>>>> leading to an empty file.
>>>>
>>>> To work around this, I think we need to do the classic "write to a
>>>> temporary file and then rename it to the desired filename" trick.
>>>> The atomicity of the rename ensures that either the old contents or the new
>> contents are present.
>>>
>>> I'm pretty sure that upstream code is still susceptible to zeroing out
>>> the file in the above scenario.  However, it doesn't take an
>>> exception--that's due to a local change on our part that attempted to fix the
>> below issue.
>>>
>>> The stable/kilo code *does* have a problem in that when it regenerates
>>> the file it's missing the CHAP authentication line (beginning with
>> "incominguser").
>>
>> I've proposed a change at https://review.openstack.org/#/c/227943/
>>
>> If anyone has suggestions on how to do this more robustly or more cleanly,
>> please let me know.
>>
>> Chris
>>


From e0ne at e0ne.info  Fri Sep 25 18:58:50 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Fri, 25 Sep 2015 21:58:50 +0300
Subject: [openstack-dev] [Cinder] Late Liberty patches should now be
 unblocked ...
In-Reply-To: <56057D5D.6040701@electronicjungle.net>
References: <56057D5D.6040701@electronicjungle.net>
Message-ID: <CAGocpaFr6qMc=4SLcOGDqZpBwYSJc78DbcnSKtfOp8+4_yvVzA@mail.gmail.com>

Thanks, Jay.

I've removed my -2's last night. If I missed something, please ping me via
e-mail or IRC (e0ne).

Regards,
Ivan Kolodyazhny


On Fri, Sep 25, 2015 at 7:59 PM, Jay S. Bryant <
jsbryant at electronicjungle.net> wrote:

> All,
>
> Now that Mitaka is open I have done my best to go through and remove all
> the -2's that I had given to block Liberty patches that needed to wait for
> Mitaka.
>
> If you have a patch that I missed please ping me on IRC.
>
> Happy Mitaka merging!
>
> Thanks,
> Jay
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/9e949c72/attachment.html>

From fungi at yuggoth.org  Fri Sep 25 19:02:48 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 25 Sep 2015 19:02:48 +0000
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
Message-ID: <20150925190247.GF4731@yuggoth.org>

On 2015-09-25 16:15:15 +0000 (+0000), Fox, Kevin M wrote:
> Another option... why are we wasting time on something that a
> computer can handle? Why not just let the line length be infinite
> in the commit message and have gerrit wrap it to <insert random
> number here> length lines on merge?

The commit message content (including whitespace/formatting) is part
of the data fed into the hash algorithm to generate the commit
identifier. If Gerrit changed the commit message at upload, that
would alter the Git SHA compared to your local copy of the same
commit. This quickly goes down a Git madness rabbit hole (not the
least of which is that it would completely break signed commits).
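The point is easy to demonstrate: a commit ID is SHA-1 over the commit
object, message included, so rewrapping the message changes the ID. A
minimal sketch (toy tree/author values, not Gerrit code):

```python
# A git commit ID is SHA-1("commit <size>\0" + body), and the body
# includes the commit message verbatim, whitespace and all.  Rewrap
# the message and the ID changes.  Toy tree/author values below.
import hashlib

def commit_sha(message):
    body = (
        "tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"
        "author A U Thor <author@example.com> 1443200000 +0000\n"
        "committer A U Thor <author@example.com> 1443200000 +0000\n"
        "\n" + message
    )
    obj = "commit %d\0%s" % (len(body), body)
    return hashlib.sha1(obj.encode("utf-8")).hexdigest()

original = "Fix the frobnicator because the previous implementation was wrong\n"
# Same characters, but one space replaced by a line break:
rewrapped = "Fix the frobnicator because the previous\nimplementation was wrong\n"

print(commit_sha(original) == commit_sha(rewrapped))  # False
```

So a server-side rewrap would produce a different commit than the one the
author uploaded (and signed), which is exactly the rabbit hole described
above.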
-- 
Jeremy Stanley


From chris.friesen at windriver.com  Fri Sep 25 19:03:30 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Fri, 25 Sep 2015 13:03:30 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
 <56047763.7090302@windriver.com> <56057AC2.4030709@windriver.com>
 <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>
Message-ID: <56059A82.7020607@windriver.com>

On 09/25/2015 12:30 PM, Mitsuhiro Tanino wrote:
> On 09/22/2015 06:43 PM, Robert Collins wrote:
>> On 23 September 2015 at 09:52, Chris Friesen
>> <chris.friesen at windriver.com> wrote:
>>> Hi,
>>>
>>> I recently had an issue with one file out of a dozen or so in
>>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.  I'm
>>> running stable/kilo if it makes a difference.
>>>
>>> Looking at the code in
>>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we
>>> should do a fsync() before the close().  The way it stands now, it
>>> seems like it might be possible to write the file, start making use
>>> of it, and then take a power outage before it actually gets written
>>> to persistent storage.  When we come back up we could have an
>>> instance expecting to make use of it, but no target information in the on-disk copy of the file.
>
> I think even if there is no target information in configuration file dir, c-vol started successfully
> and iSCSI targets were created automatically and volumes were exported, right?
>
> There is a problem in this case: the iSCSI target was created without authentication because
> we can't get the previous authentication from the configuration file.
>
> I'm curious what kind of problem you met?

We had an issue in a private patch that was ported to Kilo without realizing 
that the data type of chap_auth had changed.

> In my understanding, the provider_auth in database has user name and password for iSCSI target.
> Therefore if we get authentication from DB, I think we can self-heal from this situation
> correctly after c-vol service is restarted.
>
> The lio target obtains authentication from provider_auth in database, but tgtd, iet, cxt obtain
> authentication from file to recreate iSCSI target when c-vol is restarted.
> If the file is missing, these volumes are exported without authentication and configuration
> file is recreated as I mentioned above.
>
> tgtd: Get target chap auth from file
> iet:  Get target chap auth from file
> cxt:  Get target chap auth from file
> lio:  Get target chap auth from Database(in provider_auth)
> scst: Get target chap auth by using original command
>
> If we get authentication from DB for tgtd, iet and cxt same as lio, we can recreate iSCSI target
> with proper authentication when c-vol is restarted.
> I think this is a solution for this situation.

If we fixed the chap auth info then we could live with a zero-size file. 
However, with the current code, if we take a kernel panic or power outage it's 
theoretically possible to end up with a corrupt file of nonzero size (due to 
metadata hitting the persistent storage before the data).  I'm not confident 
that the current code would deal properly with that.

That said, if we always regenerate every file from the DB on cinder-volume 
startup (regardless of whether or not it existed, and without reading in the 
existing file), then we'd be okay without the robustness improvements.
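
The robustness improvement discussed here -- write to a temporary file, fsync, then atomically rename over the target -- can be sketched roughly as follows. This is illustrative only (function names, file contents, and permissions handling are made up, not Cinder's actual code):

```python
import os
import tempfile

def write_file_atomically(path, data):
    """Write data so that a crash leaves either the old contents or
    the new contents on disk -- never a truncated or empty file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the data itself to stable storage
        os.rename(tmp, path)      # atomic replacement on POSIX filesystems
        # fsync the directory so the rename (the metadata) is durable too
        dfd = os.open(dirname, os.O_DIRECTORY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise

# Demo: the second write either fully replaces the first or (after a
# crash mid-write) leaves the first intact; no zero-length window.
d = tempfile.mkdtemp()
target = os.path.join(d, "volume-1")
write_file_atomically(target, "incominguser olduser oldpass\n")
write_file_atomically(target, "incominguser newuser newpass\n")
print(open(target).read())
```

Note that mkstemp creates the temporary file with mode 0600, so a real implementation would also need to carry over whatever permissions the target service expects.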

Chris



From mitsuhiro.tanino at hds.com  Fri Sep 25 19:13:09 2015
From: mitsuhiro.tanino at hds.com (Mitsuhiro Tanino)
Date: Fri, 25 Sep 2015 19:13:09 +0000
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <560598CE.6040901@redhat.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
 <56047763.7090302@windriver.com> <56057AC2.4030709@windriver.com>
 <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>
 <560598CE.6040901@redhat.com>
Message-ID: <04867A083C09694985E4171FD38905ED427A6005@USINDEM103.corp.hds.com>

> -----Original Message-----
> From: Eric Harney [mailto:eharney at redhat.com]
> Sent: Friday, September 25, 2015 2:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
> config file?
> 
> On 09/25/2015 02:30 PM, Mitsuhiro Tanino wrote:
> > On 09/22/2015 06:43 PM, Robert Collins wrote:
> >> On 23 September 2015 at 09:52, Chris Friesen
> >> <chris.friesen at windriver.com> wrote:
> >>> Hi,
> >>>
> >>> I recently had an issue with one file out of a dozen or so in
> >>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.
> >>> I'm running stable/kilo if it makes a difference.
> >>>
> >>> Looking at the code in
> >>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we
> >>> should do a fsync() before the close().  The way it stands now, it
> >>> seems like it might be possible to write the file, start making use
> >>> of it, and then take a power outage before it actually gets written
> >>> to persistent storage.  When we come back up we could have an
> >>> instance expecting to make use of it, but no target information in the on-
> disk copy of the file.
> >
> > I think even if there is no target information in configuration file
> > dir, c-vol started successfully and iSCSI targets were created automatically
> and volumes were exported, right?
> >
> > There is a problem in this case: the iSCSI target was created
> > without authentication because we can't get the previous authentication from the
> configuration file.
> >
> > I'm curious what kind of problem you met?
> >
> >> If its being kept in sync with DB records, and won't self-heal from
> >> this situation, then yes. e.g. if the overall workflow is something
> >> like
> >
> > In my understanding, the provider_auth in database has user name and password
> for iSCSI target.
> > Therefore if we get authentication from DB, I think we can self-heal
> > from this situation correctly after c-vol service is restarted.
> >
> 
> Is this not already done as-needed by ensure_export()?

Yes, this logic is in ensure_export(), but only the lio target uses the DB;
the other targets use the file.
 
> > The lio target obtains authentication from provider_auth in database,
> > but tgtd, iet, cxt obtain authentication from file to recreate iSCSI target
> when c-vol is restarted.
> > If the file is missing, these volumes are exported without
> > authentication and configuration file is recreated as I mentioned above.
> >
> > tgtd: Get target chap auth from file
> > iet:  Get target chap auth from file
> > cxt:  Get target chap auth from file
> > lio:  Get target chap auth from Database(in provider_auth)
> > scst: Get target chap auth by using original command
> >
> > If we get authentication from DB for tgtd, iet and cxt same as lio, we
> > can recreate iSCSI target with proper authentication when c-vol is restarted.
> > I think this is a solution for this situation.
> >
> 
> This may be possible, but fixing the target config file to be written more
> safely to work as currently intended is still a win.

I think it is better to fix both of them:
(1) Add logic to write the configuration file using fsync
(2) Read authentication from the database during ensure_export(), the same as the lio target.
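
A rough sketch of proposal (2), reading the CHAP credentials back from the volume's provider_auth column rather than the on-disk file. The single-string "CHAP <username> <password>" layout is an assumption here, modeled on the lio behavior described above, and not verified against every target driver:

```python
def chap_auth_from_db(provider_auth):
    """Recover CHAP credentials from a volume's provider_auth column
    instead of the on-disk target file.  The 'CHAP <user> <password>'
    string layout is an assumption modeled on the lio target's use of
    the same field."""
    if not provider_auth:
        return None  # no auth configured for this volume
    auth_method, user, password = provider_auth.split(" ", 2)
    if auth_method != "CHAP":
        return None
    return user, password

# With something like this, ensure_export() could rebuild the target
# with proper authentication even if the config file is missing or
# zero-length after a crash.
print(chap_auth_from_db("CHAP myuser mysecret"))
```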

Thanks,
Mitsuhiro Tanino

> > Any thought?
> >
> > Thanks,
> > Mitsuhiro Tanino
> >
> >> -----Original Message-----
> >> From: Chris Friesen [mailto:chris.friesen at windriver.com]
> >> Sent: Friday, September 25, 2015 12:48 PM
> >> To: openstack-dev at lists.openstack.org
> >> Subject: Re: [openstack-dev] [cinder] should we use fsync when
> >> writing iscsi config file?
> >>
> >> On 09/24/2015 04:21 PM, Chris Friesen wrote:
> >>> On 09/24/2015 12:18 PM, Chris Friesen wrote:
> >>>
> >>>>
> >>>> I think what happened is that we took the SIGTERM after the open()
> >>>> call in create_iscsi_target(), but before writing anything to the file.
> >>>>
> >>>>          f = open(volume_path, 'w+')
> >>>>          f.write(volume_conf)
> >>>>          f.close()
> >>>>
> >>>> The 'w+' causes the file to be immediately truncated on opening,
> >>>> leading to an empty file.
> >>>>
> >>>> To work around this, I think we need to do the classic "write to a
> >>>> temporary file and then rename it to the desired filename" trick.
> >>>> The atomicity of the rename ensures that either the old contents or
> >>>> the new
> >> contents are present.
> >>>
> >>> I'm pretty sure that upstream code is still susceptible to zeroing
> >>> out the file in the above scenario.  However, it doesn't take an
> >>> exception--that's due to a local change on our part that attempted
> >>> to fix the
> >> below issue.
> >>>
> >>> The stable/kilo code *does* have a problem in that when it
> >>> regenerates the file it's missing the CHAP authentication line
> >>> (beginning with
> >> "incominguser").
> >>
> >> I've proposed a change at
> >> https://review.openstack.org/#/c/227943/
> >>
> >> If anyone has suggestions on how to do this more robustly or more
> >> cleanly, please let me know.
> >>
> >> Chris
> >>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From john.griffith8 at gmail.com  Fri Sep 25 19:15:24 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 25 Sep 2015 13:15:24 -0600
Subject: [openstack-dev] [OpenStack-Dev] Recent proposal to remove NetApp
	drivers
Message-ID: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>

Hey Everyone,

So I've been kinda busy on IRC today with conversations about some patches
I proposed yesterday [1], [2].

I thought maybe I should try and explain my actions a bit, because it seems
that some think I just arbitrarily flew off the handle and proposed
something drastic, or that I had some malicious intent here.

This all started when this bug [3] was submitted against Cinder and
Manila.  So I took a look at the review that merged this ([4]), and sure
enough it should not have merged due to the licensing issue.

At that point I discussed the issue publicly on IRC in the openstack-dev
channel and asked for some guidance/input from others [5].  Note as the log
continues there were violation discoveries on top of the fact that we don't
usually do proprietary libs in OpenStack. I reached out to a few NetApp
folks on the Cinder channel in IRC but was unable to get any real response
other than "I can't really talk about that", so I attempted to revert the
library patch myself.  This however proved to be difficult due to the high
volume of changes that have merged since the original patch landed.

I took it upon myself to attempt to fix the merge conflicts, however
this proved to be a rather large task, and frankly I am not familiar enough
with the NetApp code to be making such a large change and "hoping" that I
got it correct.  I again stated this via IRC to a number of people.  After
spending well over an hour working on merge conflicts in the NetApp code,
and having the only response from NetApp developers be "I can't say
anything about that", I then decided that the alternative was to propose
removal of the NetApp drivers altogether, which I proposed here [7].

It seems that there are folks that have taken quite a bit of offense to
this, and are more than mildly upset with me.  To them I apologize if this
upset you.  I will say however that given the same situation and timing, I
would do the same thing again.  I'm a bit offended that there are
accusations that I'm intentionally doing something against NetApp (or any
vendor here).  I won't even dignify the comments by responding; I'll just
let my contributions and involvement in the community speak for themselves.

The good news is that as of this morning a NetApp developer has in fact
worked on the revert patch and fixed the merge conflicts (which I've now
spent a fair amount of time this afternoon reviewing), and as soon as that
merges I will propose a backport to stable/liberty.

Thanks,
John

[1]: https://review.openstack.org/#/c/227427/
[2]: https://review.openstack.org/#/c/227524/
[3]: https://bugs.launchpad.net/cinder/+bug/1499334
[4]: https://review.openstack.org/#/c/215700/
[5]:
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T16:56:50
[6]:
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T19:28:33
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/314891fa/attachment.html>

From anteaya at anteaya.info  Fri Sep 25 19:24:52 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Fri, 25 Sep 2015 15:24:52 -0400
Subject: [openstack-dev] [OpenStack-Dev] Recent proposal to remove
 NetApp drivers
In-Reply-To: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>
References: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>
Message-ID: <56059F84.1020304@anteaya.info>

On 09/25/2015 03:15 PM, John Griffith wrote:
> Hey Everyone,
> 
> So I've been kinda busy on IRC today with conversations about some patches
> I proposed yesterday [1], [2].
> 
> I thought maybe I should try and explain my actions a bit, because it seems
> that some think I just arbitrarily flew off the handle and proposed
> something drastic, or that I had some malicious intent here.
> 
> This all started when this bug [3] was submitted against Cinder and
> Manila.  So I took a look at the review that merged this ([4]), and sure
> enough it should not have merged due to the licensing issue.
> 
> At that point I discussed the issue publicly on IRC in the openstack-dev
> channel and asked for some guidance/input from others [5].  Note as the log
> continues there were violation discoveries on top of the fact that we don't
> usually do proprietary libs in OpenStack. I reached out to a few NetApp
> folks on the Cinder channel in IRC but was unable to get any real response
> other than "I can't really talk about that", so I attempted to revert the
> library patch myself.  This however proved to be difficult due to the high
> volume of changes that have merged since the original patch landed.
> 
> I took it upon myself to attempt to fix the merge conflicts myself, however
> this proved to be a rather large task, and frankly I am not familiar enough
> with the NetApp code to be making such a large change and "hoping" that I
> got it correct.  I again stated this via IRC to a number of people.  After
> spending well over an hour working on merge conflicts in the NetApp code,
> and having the only response from NetApp developers be "I can't say
> anything about that", I then decided that the alternative was to propose
> removal of the NetApp drivers altogether which I proposed here [7].
> 
> It seems that there are folks that have taken quite a bit of offense to
> this, and are more than mildly upset with me.  To them I apologize if this
> upset you.  I will say however that given the same situation and timing, I
> would do the same thing again.  I'm a bit offended that there are
> accusations that I'm intentionally doing something against NetApp (or any
> Vendor here).  I won't even dignify the comments by responding, I'll just
> let my contributions and involvement in the community speak for itself.
> 
> The good news is that as of this morning a NetApp developer has in fact
> worked on the revert patch and fixed the merge conflicts (which I've now
> spent a fair amount of time this afternoon reviewing), and as soon as that
> merges I will propose a backport to stable/liberty.
> 
> Thanks,
> John
> 
> [1]: https://review.openstack.org/#/c/227427/
> [2]: https://review.openstack.org/#/c/227524/
> [3]: https://bugs.launchpad.net/cinder/+bug/1499334
> [4]: https://review.openstack.org/#/c/215700/
> [5]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T16:56:50
> [6]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T19:28:33
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thank you, John, for a thorough description of your actions, with links
for reference.

Open source software only remains open if we all work hard to understand
what that means and act accordingly.

I value your actions in this regard: bringing something questionable to
light, asking the advice of others, speaking publicly about it, and stepping
forward so the greater community can see the facts for themselves.

Thank you, John, I support this kind of behaviour.

I am additionally grateful for those working hard to rectify the matter.

Thank you,
Anita.


From mitsuhiro.tanino at hds.com  Fri Sep 25 19:26:32 2015
From: mitsuhiro.tanino at hds.com (Mitsuhiro Tanino)
Date: Fri, 25 Sep 2015 19:26:32 +0000
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <56059A82.7020607@windriver.com>
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <CAPWkaSU5nyep8smk4t5UxL7Y1q1aaPHLvftRo=2qGuvwr6Z4-g@mail.gmail.com>
 <CAPWkaSW+BKys26PG++sgwFLPbq+-WXsURdDNVWjFSpQQGnZquw@mail.gmail.com>
 <56042AE2.6000707@windriver.com> <56043E5B.7020709@windriver.com>
 <56047763.7090302@windriver.com> <56057AC2.4030709@windriver.com>
 <04867A083C09694985E4171FD38905ED427A5F6E@USINDEM103.corp.hds.com>
 <56059A82.7020607@windriver.com>
Message-ID: <04867A083C09694985E4171FD38905ED427A602C@USINDEM103.corp.hds.com>

> -----Original Message-----
> From: Chris Friesen [mailto:chris.friesen at windriver.com]
> Sent: Friday, September 25, 2015 3:04 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [cinder] should we use fsync when writing iscsi
> config file?
> 
> On 09/25/2015 12:30 PM, Mitsuhiro Tanino wrote:
> > On 09/22/2015 06:43 PM, Robert Collins wrote:
> >> On 23 September 2015 at 09:52, Chris Friesen
> >> <chris.friesen at windriver.com> wrote:
> >>> Hi,
> >>>
> >>> I recently had an issue with one file out of a dozen or so in
> >>> "/opt/cgcs/cinder/data/volumes/" being present but of size zero.
> >>> I'm running stable/kilo if it makes a difference.
> >>>
> >>> Looking at the code in
> >>> volume.targets.tgt.TgtAdm.create_iscsi_target(), I'm wondering if we
> >>> should do a fsync() before the close().  The way it stands now, it
> >>> seems like it might be possible to write the file, start making use
> >>> of it, and then take a power outage before it actually gets written
> >>> to persistent storage.  When we come back up we could have an
> >>> instance expecting to make use of it, but no target information in the on-
> disk copy of the file.
> >
> > I think even if there is no target information in configuration file
> > dir, c-vol started successfully and iSCSI targets were created automatically
> and volumes were exported, right?
> >
> > There is a problem in this case: the iSCSI target was created
> > without authentication because we can't get the previous authentication from the
> configuration file.
> >
> > I'm curious what kind of problem you met?
> 
> We had an issue in a private patch that was ported to Kilo without realizing
> that the data type of chap_auth had changed.

I understand. Thank you for your explanation.
 
> > In my understanding, the provider_auth in database has user name and password
> for iSCSI target.
> > Therefore if we get authentication from DB, I think we can self-heal
> > from this situation correctly after c-vol service is restarted.
> >
> > The lio target obtains authentication from provider_auth in database,
> > but tgtd, iet, cxt obtain authentication from file to recreate iSCSI target
> when c-vol is restarted.
> > If the file is missing, these volumes are exported without
> > authentication and configuration file is recreated as I mentioned above.
> >
> > tgtd: Get target chap auth from file
> > iet:  Get target chap auth from file
> > cxt:  Get target chap auth from file
> > lio:  Get target chap auth from Database(in provider_auth)
> > scst: Get target chap auth by using original command
> >
> > If we get authentication from DB for tgtd, iet and cxt same as lio, we
> > can recreate iSCSI target with proper authentication when c-vol is restarted.
> > I think this is a solution for this situation.
> 
> If we fixed the chap auth info then we could live with a zero-size file.
> However, with the current code if we take a kernel panic or power outage it's
> theoretically possible to end up with a corrupt file of nonzero size (due to
> metadata hitting the persistent storage before the data).  I'm not confident
> that the current code would deal properly with that.
> 
> That said, if we always regenerate every file from the DB on cinder-volume
> startup (regardless of whether or not it existed, and without reading in the
> existing file), then we'd be okay without the robustness improvements.

This file is referred to when the SCSI target service is restarted.
Therefore, adding robustness to this file is also a good approach, IMO.

Thanks,
Mitsuhiro Tanino

> Chris
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From comnea.dani at gmail.com  Fri Sep 25 19:27:51 2015
From: comnea.dani at gmail.com (Daniel Comnea)
Date: Fri, 25 Sep 2015 20:27:51 +0100
Subject: [openstack-dev] [neutron][horizon] Nice new Network Topology
 panel in Horizon
In-Reply-To: <560588CC.1040501@cisco.com>
References: <560588CC.1040501@cisco.com>
Message-ID: <CAOBAnZNkRq_bTUxRrNw4aMmrD1coaU3fSjCPtFggtC=zvDcqvg@mail.gmail.com>

Great job Henry !

On Fri, Sep 25, 2015 at 6:47 PM, Henry Gessau <gessau at cisco.com> wrote:

> It has been about three years in the making but now it is finally here.
> A screenshot doesn't do it justice, so here is a short video overview:
> https://youtu.be/PxFd-lJV0e4
>
> Isn't that neat? I am sure you can see that it is a great improvement,
> especially for larger topologies.
>
> This new view will be part of the Liberty release of Horizon. I encourage
> you to take a look at it with your own network topologies, play around
> with it, and provide feedback. Please stop by the #openstack-horizon IRC
> channel if there are issues you would like addressed.
>
> Thanks to the folks who made this happen.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/6b9150c0/attachment.html>

From Pratik.Mallya at rackspace.com  Fri Sep 25 19:32:35 2015
From: Pratik.Mallya at rackspace.com (Pratik Mallya)
Date: Fri, 25 Sep 2015 19:32:35 +0000
Subject: [openstack-dev] [Heat] Assumptions regarding extensions to
	OpenStack api's
Message-ID: <BF51D25F-2D7E-4C5D-B0BF-B20707F0FAE4@rackspace.com>

Hello Heat Team,

I was wondering whether OpenStack Heat assumes that the Nova extension APIs will always exist in a cloud. My impression was that since these features are extensions, they may or may not be implemented by the cloud provider, and hence Heat must not rely on them being present.

My question is prompted by this code change: [0] where it is assumed that the os-interfaces extension [1] is implemented.

If we cannot rely on that assumption, then that code would need to be changed with a 404 guard since that endpoint may not exist and the nova client may thus raise a 404.
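
Such a guard could look roughly like the following sketch. The NotFound class and the server objects below are stand-ins for novaclient's 404 exception and Heat's actual client plumbing, not the real Heat code:

```python
class NotFound(Exception):
    """Stand-in for novaclient.exceptions.NotFound (an HTTP 404)."""

def interfaces_or_empty(server):
    """Return the server's interface attachments, treating a 404 from
    the os-interface endpoint as 'extension not deployed' rather than
    letting the error fail the whole Heat resource."""
    try:
        return server.interface_list()
    except NotFound:
        return []

# Fake servers standing in for clouds with and without the extension:
class ServerWithoutExtension(object):
    def interface_list(self):
        raise NotFound("os-interface extension not deployed")

class ServerWithExtension(object):
    def interface_list(self):
        return ["port-1"]

print(interfaces_or_empty(ServerWithoutExtension()))  # []
print(interfaces_or_empty(ServerWithExtension()))     # ['port-1']
```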

Thanks,
Pratik Mallya
Software Developer
Rackspace, Inc.

[0]: https://github.com/openstack/heat/commit/54c26453a0a8e8cb574858c7e1d362d0abea3822#diff-b3857cb91556a2a83f40842658589e4fR163
[1]: http://developer.openstack.org/api-ref-compute-v2-ext.html#os-interface
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/3579964e/attachment.html>

From john.griffith8 at gmail.com  Fri Sep 25 19:33:55 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Fri, 25 Sep 2015 13:33:55 -0600
Subject: [openstack-dev] [OpenStack-Dev] Recent proposal to remove
	NetApp drivers
In-Reply-To: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>
References: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>
Message-ID: <CAPWkaSUtS+Dp4Qcmm6e_ONCKZocSY29XkGruqrUW+E=UyeJ5wg@mail.gmail.com>

On Fri, Sep 25, 2015 at 1:15 PM, John Griffith <john.griffith8 at gmail.com>
wrote:

> Hey Everyone,
>
> So I've been kinda busy on IRC today with conversations about some patches
> I proposed yesterday [1], [2].
>
> I thought maybe I should try and explain my actions a bit, because it
> seems that some think I just arbitrarily flew off the handle and proposed
> something drastic, or that I had some malicious intent here.
>
> This all started when this bug [3] was submitted against Cinder and
> Manila.  So I took a look at the review that merged this ([4]), and sure
> enough it should not have merged due to the licensing issue.
>
> At that point I discussed the issue publicly on IRC in the openstack-dev
> channel and asked for some guidance/input from others [5].  Note as the log
> continues there were violation discoveries on top of the fact that we don't
> usually do proprietary libs in OpenStack. I reached out to a few NetApp
> folks on the Cinder channel in IRC but was unable to get any real response
> other than "I can't really talk about that", so I attempted to revert the
> library patch myself.  This however proved to be difficult due to the high
> volume of changes that have merged since the original patch landed.
>
> I took it upon myself to attempt to fix the merge conflicts myself,
> however this proved to be a rather large task, and frankly I am not
> familiar enough with the NetApp code to be making such a large change and
> "hoping" that I got it correct.  I again stated this via IRC to a number of
> people.  After spending well over an hour working on merge conflicts in the
> NetApp code, and having the only response from NetApp developers be "I
> can't say anything about that", I then decided that the alternative was to
> propose removal of the NetApp drivers altogether which I proposed here [7].
>
Oops... that's link [2]  (s/[7]/[2]/)


>
> It seems that there are folks that have taken quite a bit of offense to
> this, and are more than mildly upset with me.  To them I apologize if this
> upset you.  I will say however that given the same situation and timing, I
> would do the same thing again.  I'm a bit offended that there are
> accusations that I'm intentionally doing something against NetApp (or any
> Vendor here).  I won't even dignify the comments by responding, I'll just
> let my contributions and involvement in the community speak for itself.
>
> The good news is that as of this morning a NetApp developer has in fact
> worked on the revert patch and fixed the merge conflicts (which I've now
> spent a fair amount of time this afternoon reviewing), and as soon as that
> merges I will propose a backport to stable/liberty.
>
> Thanks,
> John
>
> [1]: https://review.openstack.org/#/c/227427/
> [2]: https://review.openstack.org/#/c/227524/
> [3]: https://bugs.launchpad.net/cinder/+bug/1499334
> [4]: https://review.openstack.org/#/c/215700/
> [5]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T16:56:50
> [6]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-09-24.log.html#t2015-09-24T19:28:33
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/d2268849/attachment.html>

From fungi at yuggoth.org  Fri Sep 25 19:34:15 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 25 Sep 2015 19:34:15 +0000
Subject: [openstack-dev] [OpenStack-Dev] Recent proposal to remove
 NetApp drivers
In-Reply-To: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>
References: <CAPWkaSU=SHx8bK5R2PAtmj626vRr2Tj22+P+O+GqPDzG5b8guw@mail.gmail.com>
Message-ID: <20150925193415.GG4731@yuggoth.org>

On 2015-09-25 13:15:24 -0600 (-0600), John Griffith wrote:
[...]
> It seems that there are folks that have taken quite a bit of offense to
> this, and are more than mildly upset with me.  To them I apologize if this
> upset you.  I will say however that given the same situation and timing, I
> would do the same thing again.  I'm a bit offended that there are
> accusations that I'm intentionally doing something against NetApp (or any
> Vendor here).  I won't even dignify the comments by responding, I'll just
> let my contributions and involvement in the community speak for itself.
[...]

For what it's worth, I was personally shocked to see developers
connected to our community copy Apache-licensed software contributed
by others into a proprietary derivative and redistribute it without
attribution in clear violation of the Apache license. I understand
that free software licenses are a bit of an enigma for traditional
enterprises, but I hold our community to a higher standard than
that. Contributing to free software means, among other things, that
you actually ought to understand the licenses under which those
contributions are made.

I was however pleased to see today that a new upload of the
offending library, while still not distributed under a free license,
now at least seems to me (in my non-lawyer opinion) to be abiding by
the terms of the Apache license for the parts of OpenStack it
includes. Thank you for taking our software's licenses seriously!
-- 
Jeremy Stanley


From rochelle.grober at huawei.com  Fri Sep 25 19:44:57 2015
From: rochelle.grober at huawei.com (Rochelle Grober)
Date: Fri, 25 Sep 2015 19:44:57 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443203624-sup-2555@lrrr.local>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
Message-ID: <DA7681A6D234954992BD2FB907F9666208EEDD4E@SJCEML701-CHM.china.huawei.com>



Doug Hellmann wrote:
Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosmaita at RACKSPACE.COM> wrote:
> > 
> > I'd like to clarify something.
> > 
> > On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
> > [big snip]
> >> Also worth pointing out here: when we talk about "doing the same thing"
> >> from a DefCore perspective, we're essentially talking about what's
> >> exposed to the end user, not how that's implemented in OpenStack's source
> >> code.  So from an end user's perspective:
> >> 
> >> If I call nova image-create, I get an image in my cloud.  If I call the
> >> Glance v2 API to create an image, I also get an image in my cloud.  I
> >> neither see nor care that Nova is actually talking to Glance in the
> >> background, because if I'm writing code that uses the OpenStack APIs, I
> >> need to pick which one of those two APIs to make my code call upon to
> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >> of if/else loops into my code because some clouds I want to use only
> >> allow one way and some allow only the other.
> > 
> > The above is a bit inaccurate.
> > 
> > The nova image-create command does give you an image in your cloud.  The
> > image you get, however, is a snapshot of an instance that has been
> > previously created in Nova.  If you don't have an instance, you cannot
> > create an image via that command.  There is no provision in the Compute
> > (Nova) API to allow you to create an image out of bits that you supply.
> > 
> > The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> > register them as an image which you can then use to boot instances from by
> > using the Compute API.  But note that if all you have available is the
> > Images API, you cannot create an image of one of your instances.
> > 
> >> So from that end-user perspective, the Nova image-create API indeed does
> >> "do the same thing" as the Glance API.
> > 
> > They don't "do the same thing".  Even if you have full access to the
> > Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> > create an image of an instance, which is by far the largest use-case for
> > image creation.  You can't do it through Glance, because Glance doesn't
> > know anything about instances.  Nova has to know about Glance, because it
> > needs to fetch images for instance creation, and store images for
> > on-demand images of instances.
> 
> Yup, that's fair: this was a bad example to pick (need moar coffee I guess).  Let's use image-list instead. =)

From a "technical direction" perspective, I still think it's a bad
situation for us to be relying on any proxy APIs like this. Yes,
they are widely deployed, but we want to be using glance for image
features, neutron for networking, etc. Having the nova proxy is
fine, but while we have DefCore using tests to enforce the presence
of the proxy we can't deprecate those APIs.

What do we need to do to make that change happen over the next cycle
or so?

[Rocky]
This is likely the first case DefCore will have of deprecating a requirement ;-)  The committee wasn't thrilled with the original requirement, but really, can you have OpenStack without some way of creating an instance?  And Glance V1 had no user-facing APIs, so the committee was kind of stuck.

But, going forward, what needs to happen in Dev is for Glance V2 to become *the way* to create images, and for Glance V1 to be deprecated *and removed*.  Then we've got two more cycles before we can require V2 only.  Yes, DefCore is a trailing requirement.  We have to give our user community time to migrate to versions of OpenStack that don't have the "old" capability.

But now comes the tricky part.... How do you allow both V1 and V2 capabilities and still be interoperable?  This will definitely be the first test for DefCore on migration from obsolete capabilities to current capabilities.  We could use some help figuring out how to make that work.

--Rocky

Doug

> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > 
> >> At Your Service,
> >> 
> >> Mark T. Voelker
> > 
> > Glad to be of service, too,
> > brian
> > 
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From gessau at cisco.com  Fri Sep 25 20:04:14 2015
From: gessau at cisco.com (Henry Gessau)
Date: Fri, 25 Sep 2015 16:04:14 -0400
Subject: [openstack-dev] [neutron][horizon] Nice new Network Topology
 panel in Horizon
In-Reply-To: <CAOBAnZNkRq_bTUxRrNw4aMmrD1coaU3fSjCPtFggtC=zvDcqvg@mail.gmail.com>
References: <560588CC.1040501@cisco.com>
 <CAOBAnZNkRq_bTUxRrNw4aMmrD1coaU3fSjCPtFggtC=zvDcqvg@mail.gmail.com>
Message-ID: <5605A8BE.4030507@cisco.com>

On Fri, Sep 25, 2015, Daniel Comnea <comnea.dani at gmail.com> wrote:
> Great job Henry !

I had nothing to do with it! (See below.)

> On Fri, Sep 25, 2015 at 6:47 PM, Henry Gessau <gessau at cisco.com
> <mailto:gessau at cisco.com>> wrote:
> 
>     It has been about three years in the making but now it is finally here.
>     A screenshot doesn't do it justice, so here is a short video overview:
>     https://youtu.be/PxFd-lJV0e4
> 
>     Isn't that neat? I am sure you can see that it is a great improvement,
>     especially for larger topologies.
> 
>     This new view will be part of the Liberty release of Horizon. I encourage you to
>     take a look at it with your own network topologies, play around with it, and
>     provide feedback. Please stop by the #openstack-horizon IRC channel if there are
>     issues you would like addressed.
> 
>     Thanks to the folks who made this happen.

I forgot to include the list of folks:

Curvature was started by Sam Betts, John Davidge, Jack Fletcher and Bradley
Jones as an intern project under Debo Dutta. It was first implemented for
"quantum" on the Grizzly release of OpenStack [1]. Sam, John and Brad are now
regular upstream contributors to OpenStack. In the Horizon project Rob Cresswell
has been instrumental in getting the panel view integrated.

[1] https://youtu.be/pmpRhcwyJIo



From mordred at inaugust.com  Fri Sep 25 20:08:19 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 25 Sep 2015 15:08:19 -0500
Subject: [openstack-dev] [Heat] Assumptions regarding extensions to
 OpenStack api's
In-Reply-To: <BF51D25F-2D7E-4C5D-B0BF-B20707F0FAE4@rackspace.com>
References: <BF51D25F-2D7E-4C5D-B0BF-B20707F0FAE4@rackspace.com>
Message-ID: <5605A9B3.8060808@inaugust.com>

On 09/25/2015 02:32 PM, Pratik Mallya wrote:
> Hello Heat Team,
>
> I was wondering if OpenStack Heat assumes that the Nova extensions api
> would always exist in a cloud? My impression was that since these
> features are extensions, they may or may not be implemented by the cloud
> provider and hence Heat must not rely on it being present.
>
> My question is prompted by this code change: [0] where it is assumed
> that the os-interfaces extension [1] is implemented.
>
> If we cannot rely on that assumption, then that code would need to be
> changed with a 404 guard since that endpoint may not exist and the nova
> client may thus raise a 404.

Correct. Extensions are not everywhere and so you must either query the 
extensions API to find out what extensions the cloud has, or you must 
404 guard.

Of course, you can't ONLY 404 guard, because the cloud may also throw 
unauthorized - so querying the nova extension API is the more correct 
way to deal with it.
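The "query the extensions API first, and 404-guard anyway" pattern Monty describes can be sketched roughly as follows. Note this is an illustrative sketch, not the real novaclient API: `FakeClient`, `list_extensions`, and `interface_list` here are stand-in names chosen for the example.

```python
# Sketch of the "query extensions, else 404-guard" pattern.
# FakeClient is a stand-in for a Nova client; the method names are
# hypothetical, not the real novaclient interface.

class NotFound(Exception):
    """Stands in for a 404 returned by the cloud."""

class FakeClient:
    def __init__(self, extensions):
        self._extensions = set(extensions)

    def list_extensions(self):
        # Real clouds expose this through the extensions API.
        return self._extensions

    def interface_list(self, server_id):
        if "os-interface" not in self._extensions:
            raise NotFound("os-interface extension not deployed")
        return ["iface-1"]

def safe_interface_list(client, server_id):
    # Preferred: ask the cloud what it supports before calling.
    if "os-interface" not in client.list_extensions():
        return []
    try:
        return client.interface_list(server_id)
    except NotFound:
        # Belt-and-braces 404 guard, since clouds can still surprise you.
        return []

print(safe_interface_list(FakeClient({"os-interface"}), "abc"))  # ['iface-1']
print(safe_interface_list(FakeClient(set()), "abc"))             # []
```

As the thread notes, the guard alone is not enough in practice, since some clouds answer with an authorization error rather than a 404; querying the extension list first sidesteps that ambiguity.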

> Thanks,
> Pratik Mallya
> Software Developer
> Rackspace, Inc.
>
> [0]:
> https://github.com/openstack/heat/commit/54c26453a0a8e8cb574858c7e1d362d0abea3822#diff-b3857cb91556a2a83f40842658589e4fR163
> [1]: http://developer.openstack.org/api-ref-compute-v2-ext.html#os-interface
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From dukhlov at mirantis.com  Fri Sep 25 20:21:05 2015
From: dukhlov at mirantis.com (Dmitriy Ukhlov)
Date: Fri, 25 Sep 2015 23:21:05 +0300
Subject: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver
	implementation
In-Reply-To: <BLU436-SMTP75C16DF1120A00053D00E4D8420@phx.gbl>
References: <CADpk5BZpWRJP3m=W8xnHX_3o8-3Z5KXzeku4hCCwGWOv4jx2cA@mail.gmail.com>
 <BLU436-SMTP75C16DF1120A00053D00E4D8420@phx.gbl>
Message-ID: <CADpk5BZgXOmk+Uou8x93YUJPP6VfUHh36GmCzpCoqGrvzAmXjQ@mail.gmail.com>

Hello Joshua, thank you for your feedback.

This will end up on review.openstack.org right so that it can be properly
> reviewed (it will likely take a while since it looks to be ~1000+ lines of
> code)?


Yes, sure, I will send this patch to review.openstack.org, but first of all
I need to get the devstack patch merged (
https://review.openstack.org/#/c/226348/).
Then I will add gate jobs that test the new driver using devstack, and
then send the pika driver patch to review.

Also suggestion, before that merges, can docs be added, seems like very
> little docstrings about what/why/how. For sustainability purposes that
> would be appreciated I think.


Ok. Will add.

On Fri, Sep 25, 2015 at 6:58 PM, Joshua Harlow <harlowja at outlook.com> wrote:

> Also a side question, that someone might know,
>
> Whatever happened to the folks from rabbitmq (incorporated? pivotal?) who
> were going to get involved in oslo.messaging, did that ever happen; if
> anyone knows?
>
> They might be a good bunch of people to review such a pika driver (since I
> think they as a corporation created pika?).
>
> Dmitriy Ukhlov wrote:
>
>> Hello stackers,
>>
>> I'm working on a new oslo.messaging RabbitMQ driver implementation which
>> uses the pika client library instead of kombu. It is related to
>> https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
>> In this letter I want to share current results and probably get first
>> feedback from you.
>> Now the code is available here:
>>
>> https://github.com/dukhlov/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_pika.py
>>
>> Current status of this code:
>> - pika driver passes functional tests
>> - pika driver passes tempest smoke tests
>> - pika driver passes almost all tempest full tests (except 5) but it
>> seems that reason is not related to oslo.messaging
>> Also I created small devstack patch to support pika driver testing on
>> gate (https://review.openstack.org/#/c/226348/)
>>
>> Next steps:
>> - communicate with Manish (blueprint owner)
>> - write spec to this blueprint
>> - send a review with this patch when spec and devstack patch get merged.
>>
>> Thank you.
>>
>>
>> --
>> Best regards,
>> Dmitriy Ukhlov
>> Mirantis Inc.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/b105bead/attachment.html>

From mvoelker at vmware.com  Fri Sep 25 20:43:23 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Fri, 25 Sep 2015 20:43:23 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443203624-sup-2555@lrrr.local>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
Message-ID: <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>

On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosmaita at RACKSPACE.COM> wrote:
>>> 
>>> I'd like to clarify something.
>>> 
>>> On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
>>> [big snip]
>>>> Also worth pointing out here: when we talk about "doing the same thing"
>>>> from a DefCore perspective, we're essentially talking about what's
>>>> exposed to the end user, not how that's implemented in OpenStack's source
>>>> code.  So from an end user's perspective:
>>>> 
>>>> If I call nova image-create, I get an image in my cloud.  If I call the
>>>> Glance v2 API to create an image, I also get an image in my cloud.  I
>>>> neither see nor care that Nova is actually talking to Glance in the
>>>> background, because if I'm writing code that uses the OpenStack APIs, I
>>>> need to pick which one of those two APIs to make my code call upon to
>>>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>>>> of if/else loops into my code because some clouds I want to use only
>>>> allow one way and some allow only the other.
>>> 
>>> The above is a bit inaccurate.
>>> 
>>> The nova image-create command does give you an image in your cloud.  The
>>> image you get, however, is a snapshot of an instance that has been
>>> previously created in Nova.  If you don't have an instance, you cannot
>>> create an image via that command.  There is no provision in the Compute
>>> (Nova) API to allow you to create an image out of bits that you supply.
>>> 
>>> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
>>> register them as an image which you can then use to boot instances from by
>>> using the Compute API.  But note that if all you have available is the
>>> Images API, you cannot create an image of one of your instances.
>>> 
>>>> So from that end-user perspective, the Nova image-create API indeed does
>>>> "do the same thing" as the Glance API.
>>> 
>>> They don't "do the same thing".  Even if you have full access to the
>>> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
>>> create an image of an instance, which is by far the largest use-case for
>>> image creation.  You can't do it through Glance, because Glance doesn't
>>> know anything about instances.  Nova has to know about Glance, because it
>>> needs to fetch images for instance creation, and store images for
>>> on-demand images of instances.
>> 
>> Yup, that's fair: this was a bad example to pick (need moar coffee I guess).  Let's use image-list instead. =)
> 
> From a "technical direction" perspective, I still think it's a bad

Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance than at future technical direction.  More on that over here. [1]



> situation for us to be relying on any proxy APIs like this. Yes,
> they are widely deployed, but we want to be using glance for image
> features, neutron for networking, etc. Having the nova proxy is
> fine, but while we have DefCore using tests to enforce the presence
> of the proxy we can't deprecate those APIs.


Actually that's not true: DefCore can totally deprecate things too, and can do so in response to the technical community deprecating things.  See my comments in this review [2].  Maybe I need to write another post about that...

/me envisions the title being "Who's on First?"


> 
> What do we need to do to make that change happen over the next cycle
> or so?

There are several things that can be done:

First, if you don't like the Criteria or the weights that the various Criteria today have, we can suggest changes to them.  The Board of Directors will ultimately have to approve that change, but we can certainly ask (I think there's plenty of evidence that our Directors listen to the community's concerns).  There's actually already some early discussion about that now, though most of the energy is going into other things at the moment (because deadlines).  See post above for links.

Second, we certainly could consider changes to the Capabilities that are currently required.  That happens every six months according to a Board-approved schedule. [3]  The window is just about to close for the next Guideline, but that might be ok given that a lot of stuff is likely to be advisory in the next Guideline anyway, and advisory cycles are explicitly meant to generate feedback like this.  Making changes to Guidelines is basically submitting a patch. [4]

Third, as a technical community we can make the capabilities we want score better.  So for example: we could make nova image use glance v2, or we could deprecate those APIs (per above, you do not have to wait on DefCore for that to happen), or we could send patches to the client SDKs that OpenStack users are relying on to make those capabilities supported.

[1] http://markvoelker.github.io/blog/defcore-misconceptions-1/
[2] https://review.openstack.org/#/c/207467/4/reference/tags/assert_follows-standard-deprecation.rst
[3] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015A.rst#n10
[4] http://git.openstack.org/cgit/openstack/defcore/tree/HACKING.rst

At Your Service,

Mark T. Voelker


> 
> Doug
> 
>> 
>> At Your Service,
>> 
>> Mark T. Voelker
>> 
>>> 
>>> 
>>>> At Your Service,
>>>> 
>>>> Mark T. Voelker
>>> 
>>> Glad to be of service, too,
>>> brian
>>> 
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From andrew at lascii.com  Fri Sep 25 21:29:24 2015
From: andrew at lascii.com (Andrew Laski)
Date: Fri, 25 Sep 2015 17:29:24 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <DA7681A6D234954992BD2FB907F9666208EEDD4E@SJCEML701-CHM.china.huawei.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <DA7681A6D234954992BD2FB907F9666208EEDD4E@SJCEML701-CHM.china.huawei.com>
Message-ID: <20150925212924.GJ8745@crypt>

On 09/25/15 at 07:44pm, Rochelle Grober wrote:
>
>
>Doug Hellmann wrote:
>Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosmaita at RACKSPACE.COM> wrote:
>> >
>> > I'd like to clarify something.
>> >
>> > On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
>> > [big snip]
>> >> Also worth pointing out here: when we talk about "doing the same thing"
>> >> from a DefCore perspective, we're essentially talking about what's
>> >> exposed to the end user, not how that's implemented in OpenStack's source
>> >> code.  So from an end user's perspective:
>> >>
>> >> If I call nova image-create, I get an image in my cloud.  If I call the
>> >> Glance v2 API to create an image, I also get an image in my cloud.  I
>> >> neither see nor care that Nova is actually talking to Glance in the
>> >> background, because if I'm writing code that uses the OpenStack APIs, I
>> >> need to pick which one of those two APIs to make my code call upon to
>> >> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>> >> of if/else loops into my code because some clouds I want to use only
>> >> allow one way and some allow only the other.
>> >
>> > The above is a bit inaccurate.
>> >
>> > The nova image-create command does give you an image in your cloud.  The
>> > image you get, however, is a snapshot of an instance that has been
>> > previously created in Nova.  If you don't have an instance, you cannot
>> > create an image via that command.  There is no provision in the Compute
>> > (Nova) API to allow you to create an image out of bits that you supply.
>> >
>> > The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
>> > register them as an image which you can then use to boot instances from by
>> > using the Compute API.  But note that if all you have available is the
>> > Images API, you cannot create an image of one of your instances.
>> >
>> >> So from that end-user perspective, the Nova image-create API indeed does
>> >> "do the same thing" as the Glance API.
>> >
>> > They don't "do the same thing".  Even if you have full access to the
>> > Images v1 or v2 API, you will still have to use the Compute (Nova) API to
>> > create an image of an instance, which is by far the largest use-case for
>> > image creation.  You can't do it through Glance, because Glance doesn't
>> > know anything about instances.  Nova has to know about Glance, because it
>> > needs to fetch images for instance creation, and store images for
>> > on-demand images of instances.
>>
>> Yup, that's fair: this was a bad example to pick (need moar coffee I guess).  Let's use image-list instead. =)
>
>From a "technical direction" perspective, I still think it's a bad
>situation for us to be relying on any proxy APIs like this. Yes,
>they are widely deployed, but we want to be using glance for image
>features, neutron for networking, etc. Having the nova proxy is
>fine, but while we have DefCore using tests to enforce the presence
>of the proxy we can't deprecate those APIs.
>
>What do we need to do to make that change happen over the next cycle
>or so?
>
>[Rocky]
>This is likely the first case DefCore will have of deprecating a requirement ;-)  The committee wasn't thrilled with the original requirement, but really, can you have OpenStack without some way of creating an instance?  And Glance V1 had no user-facing APIs, so the committee was kind of stuck.
>
>But, going forward, what needs to happen in Dev is for Glance V2 to become *the way* to create images, and for Glance V1 to be deprecated *and removed*.  Then we've got two more cycles before we can require V2 only.  Yes, DefCore is a trailing requirement.  We have to give our user community time to migrate to versions of OpenStack that don't have the "old" capability.

I still feel that there's a misunderstanding here.  The Nova API is a 
proxy for listing images and getting details on a particular image but 
otherwise does not expose the capabilities of Glance that the Glance API 
does.  Nova does not allow users to create images in Glance in the 
manner that seems to be under discussion here.  You can boot an instance 
from a preexisting image, modify it, and then have Nova upload a 
snapshot of that image to Glance.  You can not take a user provided 
image and get it into Glance via Nova.  And if there are no images in 
Glance you can not bootstrap one in via Nova.
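The asymmetry Andrew describes can be summarized as a small capability table. This is purely an illustration paraphrasing the thread's point; it is not official DefCore or API-reference data.

```python
# Which image operations each API exposes, per the discussion above.
# Illustrative only; the entries paraphrase this thread.
CAPABILITIES = {
    "list images":                 {"nova": True,  "glance": True},
    "show image details":          {"nova": True,  "glance": True},
    "upload user-supplied image":  {"nova": False, "glance": True},
    "snapshot a running instance": {"nova": True,  "glance": False},
}

# Neither API covers everything: Nova cannot bootstrap a cloud that has
# no images, and Glance knows nothing about instances.
nova_only = [op for op, apis in CAPABILITIES.items()
             if apis["nova"] and not apis["glance"]]
glance_only = [op for op, apis in CAPABILITIES.items()
               if apis["glance"] and not apis["nova"]]
print(nova_only)    # ['snapshot a running instance']
print(glance_only)  # ['upload user-supplied image']
```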


>
>But now comes the tricky part.... How do you allow both V1 and V2 capabilities and still be interoperable?  This will definitely be the first test for DefCore on migration from obsolete capabilities to current capabilities.  We could use some help figuring out how to make that work.
>
>--Rocky
>
>Doug
>
>>
>> At Your Service,
>>
>> Mark T. Voelker
>>
>> >
>> >
>> >> At Your Service,
>> >>
>> >> Mark T. Voelker
>> >
>> > Glad to be of service, too,
>> > brian
>> >
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sharis at Brocade.com  Fri Sep 25 22:35:21 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Fri, 25 Sep 2015 22:35:21 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <1443139720841.25541@vmware.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>,
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>,
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
 <1443139720841.25541@vmware.com>
Message-ID: <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>

Thanks Alex, Zhou,

I get errors from Congress when I do a re-join. These errors seem to be due to the order in which the services come up. Hence I still depend on running stack.sh after the VM is up and running. Please try out the new VM - also advise if you need to add any of your use cases. Also, re-join starts "screen" - do we expect the end user to know how to use "screen"?

I do understand that running "stack.sh" takes time - but it does not do anything that appears to be magic, which is something we want to avoid in order to get the user excited.

I have uploaded a new version of the VM please experiment with this and let me know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:ayip at vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling tempest.  So, I think it uses the loopback IP address, and that does not change, so rejoin-stack.sh works without a network at all.



- Alex





________________________________
From: Zhou, Zhenzan <zhenzan.zhou at intel.com<mailto:zhenzan.zhou at intel.com>>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if the IP was not changed. So using a NAT network and a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM which takes a minute or so.  Then I run rejoin-stack.sh which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores vms and openstack state that was running before.



- Alex





________________________________
From: Shiv Haris <sharis at Brocade.com<mailto:sharis at Brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time-consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward. It involves modifying the .vbox file and seems to be prone to user errors. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions ....

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you posed however I am still working on some of the subtle issues raised. Once I have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB - but the OVA compresses the image and disk to 3 GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up the VM. I agree a snapshot will be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your use cases - I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first use case; I would prefer the same structure for the other use cases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150925/b5dc5b81/attachment.html>

From chris at openstack.org  Fri Sep 25 22:40:51 2015
From: chris at openstack.org (Chris Hoge)
Date: Fri, 25 Sep 2015 15:40:51 -0700
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci testing
	requirements for OpenStack Compatible mark
Message-ID: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>

In November, the OpenStack Foundation will start requiring vendors requesting
new "OpenStack Compatible" storage driver licenses to start passing the Cinder
third-party integration tests. The new program was approved by the Board at
the July meeting in Austin and follows the improvement of the testing standards
and technical requirements for the "OpenStack Powered" program. This is all
part of the effort of the Foundation to use the OpenStack brand to guarantee a
base-level of interoperability and consistency for OpenStack users and to
protect the work of our community of developers by applying a trademark backed
by their technical efforts.

The Cinder driver testing is the first step of a larger effort to apply
community determined standards to the Foundation marketing programs. We're
starting with Cinder because it has a successful testing program in place, and
we have plans to extend the program to network drivers and OpenStack
applications. We're going to require CI testing for new "OpenStack Compatible"
storage licenses starting on November 1, and plan to roll out network and
application testing in 2016.

One of our goals is to work with project leaders and developers to help us
define and implement these test programs. The standards for third-party
drivers and applications should be determined by the developers and users
in our community, who are experts in how to maintain the quality of the
ecosystem.

We welcome any feedback on this program, and are also happy to answer any
questions you might have.

Thanks!

Chris Hoge
Interop Engineer
OpenStack Foundation

From devdatta.kulkarni at RACKSPACE.COM  Fri Sep 25 22:49:17 2015
From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni)
Date: Fri, 25 Sep 2015 22:49:17 +0000
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
 os-image-url with glance client
In-Reply-To: <OF69663DD6.8D614D95-ON00257ECB.000A7F24-85257ECB.000B1FB4@notes.na.collabserv.com>
References: <1443134649448.43790@RACKSPACE.COM>,
 <OF69663DD6.8D614D95-ON00257ECB.000A7F24-85257ECB.000B1FB4@notes.na.collabserv.com>
Message-ID: <1443221357776.34083@RACKSPACE.COM>

Steve,


Similar to other OpenStack services, the Solum client uses the provided/configured username and password of a user to get a token, and sends it to the Solum API service in an HTTP header. On the API side, we use keystonemiddleware to validate the token. Upon successful authentication, we store the information we get back from keystone (project-id, username, and token) and use it to instantiate other services' Python clients (glance, swift, neutron, heat) and interact with them.
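The flow above can be sketched roughly as follows. Everything here is an illustrative stand-in (not real keystonemiddleware or OpenStack client classes); it just shows how a token validated on the way in is reused to build clients for other services instead of a username/password pair:

```python
# Illustrative stand-ins only: not real OpenStack classes. Shows how a
# token validated by keystonemiddleware can be reused to build clients
# for other services instead of a username/password pair.

class Context:
    """Per-request auth info extracted from keystone-validated headers."""
    def __init__(self, project_id, user_name, auth_token):
        self.project_id = project_id
        self.user_name = user_name
        self.auth_token = auth_token


class ServiceClient:
    """Stand-in for a service's python client built from endpoint + token."""
    def __init__(self, endpoint, token):
        self.endpoint = endpoint
        self.token = token


def clients_for(context, catalog):
    """Build one client per service, reusing the caller's token."""
    return {name: ServiceClient(url, context.auth_token)
            for name, url in catalog.items()}


ctx = Context("proj-123", "demo", "tok-abc")
clients = clients_for(ctx, {"glance": "http://glance:9292",
                            "heat": "http://heat:8004"})
```

The point of the pattern is that the API service never needs the end user's password: the validated token is the only credential that travels onward.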


Let us know if there is a better approach for enabling inter-service interactions.


Thanks,

Devdatta


________________________________
From: Steve Martinelli <stevemar at ca.ibm.com>
Sent: Thursday, September 24, 2015 9:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url with glance client


I can't speak to the glance client changes, but this seems like an awkward design.

If you don't know the end user's name and password, then how are you getting the token? Is it the admin token? Why not create a service account and use keystonemiddleware?

Thanks,

Steve Martinelli
OpenStack Keystone Core


From: Devdatta Kulkarni <devdatta.kulkarni at RACKSPACE.COM>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: 2015/09/24 06:44 PM
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url with glance client

________________________________



Hi, Glance team,

In Solum, we use Glance to store Docker images that we create for applications. We use the Glance client internally to upload these images. Until recently, 'glance image-create' with only a token had been
working for us (in devstack). Today, I started noticing that glance image-create with just a token is no longer working. It is also not working when os-auth-token and os-image-url are passed in. According to the documentation (http://docs.openstack.org/developer/python-glanceclient/), passing a token and image-url should work. The client, which I have installed from master, is asking for a username (and a password, if a username is specified).

Solum does not have access to the end user's password, so we need the ability to interact with Glance without providing a password, as had been working until recently.

I investigated the issue a bit and have filed a bug with my findings.
https://bugs.launchpad.net/python-glanceclient/+bug/1499540

Can someone help with resolving this issue?

Regards,
Devdatta




From devdatta.kulkarni at RACKSPACE.COM  Fri Sep 25 22:53:11 2015
From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni)
Date: Fri, 25 Sep 2015 22:53:11 +0000
Subject: [openstack-dev] [Glance][Solum] Using os-auth-token and
 os-image-url with glance client
In-Reply-To: <20150925090225.GP26372@redhat.com>
References: <1443134649448.43790@RACKSPACE.COM>,
 <20150925090225.GP26372@redhat.com>
Message-ID: <1443221591186.8749@RACKSPACE.COM>

Nikhil, Flavio,

Thank you for giving immediate attention to this issue.

Regards,
Devdatta

________________________________________
From: Flavio Percoco <flavio at redhat.com>
Sent: Friday, September 25, 2015 4:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url with glance client

On 24/09/15 22:44 +0000, Devdatta Kulkarni wrote:
>Hi, Glance team,
>
>
>In Solum, we use Glance to store Docker images that we create for applications.
>We use Glance client internally to upload these images. Till recently, 'glance
>image-create' with only token has been
>
>working for us (in devstack). Today, I started noticing that glance
>image-create with just token is not working anymore. It is also not working
>when os-auth-token and os-image-url are passed in. According to documentation (
>http://docs.openstack.org/developer/python-glanceclient/), passing token and
>image-url should work. The client, which I have installed from master, is
>asking username (and password, if username is specified).
>
>
>Solum does not have access to end-user's password. So we need the ability to
>interact with Glance without providing password, as it has been working till
>recently.
>
>
>I investigated the issue a bit and have filed a bug with my findings.
>
>https://bugs.launchpad.net/python-glanceclient/+bug/1499540
>
>
>Can someone help with resolving this issue.
>

This should fix your issue and we'll backport it to Liberty.

https://review.openstack.org/#/c/227723/

Thanks for reporting,
Flavio


--
@flaper87
Flavio Percoco


From duncan.thomas at gmail.com  Fri Sep 25 22:54:44 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Sat, 26 Sep 2015 01:54:44 +0300
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <20150925141255.GG8745@crypt>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt>
Message-ID: <CAOyZ2aHKgwPvveqxU5hcz7GAJBXebQmZShmtJyONsi8=DVYXsA@mail.gmail.com>

I think there's a place for yet another service breakout from nova - some
sort of light-weight platform orchestration piece, nothing as complicated
or complete as heat, nothing that touches the inside of a VM, just
something that can talk to cinder, nova and neutron (plus I guess ironic
and whatever the container thing is called) and work through long-running
/ cross-project tasks. I'd probably expect it to provide a task-style
interface, e.g. a boot-from-new-volume call returns a request-id that can
then be polled for detailed status.
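A task-style interface like the one described could look roughly like this. All names and states here are hypothetical, purely to illustrate the return-a-request-id-then-poll pattern:

```python
import uuid

# Hypothetical sketch of the task-style interface described above: a
# long-running, cross-project call returns a request-id immediately,
# and the caller polls it for status. Names and states are illustrative.

class TaskService:
    def __init__(self):
        self._tasks = {}

    def boot_from_new_volume(self, image, size_gb):
        # Kick off the workflow and hand back a handle instead of blocking.
        request_id = str(uuid.uuid4())
        self._tasks[request_id] = iter(
            ["creating volume", "attaching volume", "booting instance",
             "complete"])
        return request_id

    def status(self, request_id):
        # Each poll advances this fake workflow one step; a real service
        # would report the actual state of the underlying resources.
        return next(self._tasks[request_id], "complete")


svc = TaskService()
rid = svc.boot_from_new_volume("cirros", size_gb=10)
states = []
while True:
    state = svc.status(rid)
    states.append(state)
    if state == "complete":
        break
```

A proxy layer in nova could then translate the existing synchronous-looking API calls into this kind of task submission, which is what would keep tenants unaffected.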

The existing nova API for this (and any other nova APIs where this makes
sense) can then become a proxy for the new service, so that tenants are not
affected. The nova apis can then be deprecated in slow time.

Anybody else think this could be useful?

On 25 September 2015 at 17:12, Andrew Laski <andrew at lascii.com> wrote:

> On 09/24/15 at 03:13pm, James Penick wrote:
>
>>
>>>
>>> At risk of getting too offtopic I think there's an alternate solution to
>>> doing this in Nova or on the client side.  I think we're missing some
>>> sort
>>> of OpenStack API and service that can handle this.  Nova is a low level
>>> infrastructure API and service, it is not designed to handle these
>>> orchestrations.  I haven't checked in on Heat in a while but perhaps this
>>> is a role that it could fill.
>>>
>>> I think that too many people consider Nova to be *the* OpenStack API when
>>> considering instances/volumes/networking/images and that's not something
>>> I
>>> would like to see continue.  Or at the very least I would like to see a
>>> split between the orchestration/proxy pieces and the "manage my
>>> VM/container/baremetal" bits
>>>
>>
>>
>> (new thread)
>> You've hit on one of my biggest issues right now: As far as many deployers
>> and consumers are concerned (and definitely what I tell my users within
>> Yahoo): The value of an OpenStack value-stream (compute, network, storage)
>> is to provide a single consistent API for abstracting and managing those
>> infrastructure resources.
>>
>> Take networking: I can manage Firewalls, switches, IP selection, SDN, etc
>> through Neutron. But for compute, If I want VM I go through Nova, for
>> Baremetal I can -mostly- go through Nova, and for containers I would talk
>> to Magnum or use something like the nova docker driver.
>>
>> This means that, by default, Nova -is- the closest thing to a top level
>> abstraction layer for compute. But if that is explicitly against Nova's
>> charter, and Nova isn't going to be the top level abstraction for all
>> things Compute, then something else needs to fill that space. When that
>> happens, all things common to compute provisioning should come out of Nova
>> and move into that new API. Availability zones, Quota, etc.
>>
>
> I do think Nova is the top level abstraction layer for compute.  My issue
> is when Nova is asked to manage other resources.  There's no API call to
> tell Cinder "create a volume and attach it to this instance, and create
> that instance if it doesn't exist."  And I'm not sure why the reverse isn't
> true.
>
> I want Nova to be the absolute best API for managing compute resources.
> It's when someone is managing compute and volumes and networks together
> that I don't feel that Nova is the best place for that.  Most importantly
> right now it seems that not everyone is on the same page on this and I
> think it would be beneficial to come together and figure out what sort of
> workloads the Nova API is intending to provide.
>
>
>
>> -James
>>
>
> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

From sumanth.sathyanarayana at gmail.com  Fri Sep 25 23:23:31 2015
From: sumanth.sathyanarayana at gmail.com (Sumanth Sathyanarayana)
Date: Fri, 25 Sep 2015 16:23:31 -0700
Subject: [openstack-dev] Murano code flow for custom development and
 combining murano with horizon in devstack
Message-ID: <CAGCi2YSoCbi+a+3+K2UtunyGpEVer-fgpu3VmQCW+wL-hvgDeg@mail.gmail.com>

Hello,

Could anyone let me know whether the changes in the murano dashboard and
horizon's openstack_dashboard can both be combined and shown under one
tab, i.e. so that under the Murano tab in the left side panel all the
changes made in both horizon and murano appear?

If anyone could point me to a link explaining custom development of murano
and its code flow, that would be very helpful...

Thanks & Best Regards
Sumanth

From chris at openstack.org  Fri Sep 25 23:37:55 2015
From: chris at openstack.org (Chris Hoge)
Date: Fri, 25 Sep 2015 16:37:55 -0700
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <20150925171257.GI8745@crypt>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local> <20150925144216.GH8745@crypt>
 <F1384511-A789-4FF3-988A-E1409E647F98@vmware.com>
 <20150925171257.GI8745@crypt>
Message-ID: <2E1ACB8D-7D12-4F10-89A0-BBBF8BD3E448@openstack.org>


> On Sep 25, 2015, at 10:12 AM, Andrew Laski <andrew at lascii.com> wrote:
> 
> I understand that reasoning, but still am unsure on a few things.
> 
> The direction seems to be moving towards having a requirement that the same functionality is offered in two places, Nova API and Glance V2 API. That seems like it would fragment adoption rather than unify it.

My hope would be that proxies would be deprecated as new capabilities
moved in. Some of this will be driven by application developers too,
though. We're looking at an interoperability standard, which has a
natural tension between backwards compatibility and new features.

> 
> Also after digging in on image-create I feel that there may be a mixup.  The image-create in Glance and image-create in Nova are two different things. In Glance you create an image and send the disk image data in the request, in Nova an image-create takes a snapshot of the instance provided in the request.  But it seems like DefCore is treating them as equivalent unless I'm misunderstanding.
> 

From german.eichberger at hpe.com  Fri Sep 25 23:58:23 2015
From: german.eichberger at hpe.com (Eichberger, German)
Date: Fri, 25 Sep 2015 23:58:23 +0000
Subject: [openstack-dev] [lbaas] [octavia] Proposing new meeting time
 Wednesday 16:00 UTC
Message-ID: <D22B2C34.185FD%german.eichberger@hpe.com>

All,

In our last meeting [1] we discussed moving the meeting earlier to
accommodate participants from the EMEA region. I am therefore proposing to
move the meeting to 16:00 UTC on Wednesday. Please respond to this e-mail
if you have alternate suggestions. I will send out another e-mail
announcing the new time and the date we will start with that.

Thanks,
German

[1] 
http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.
00.log.html



From rmoats at us.ibm.com  Sat Sep 26 00:03:38 2015
From: rmoats at us.ibm.com (Ryan Moats)
Date: Fri, 25 Sep 2015 19:03:38 -0500
Subject: [openstack-dev] [neutron] congrats to armax!
Message-ID: <201509260003.t8Q03kaA029977@d03av03.boulder.ibm.com>


First, congratulations to armax on being elected PTL for Mitaka.  Looking
forward to Neutron improving over the next six months.

Second, thanks to everybody who voted in the election. Hopefully we had
something close to 100% turnout, because voting is an important
responsibility of the electorate.

Ryan

From tdecacqu at redhat.com  Sat Sep 26 00:08:09 2015
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Sat, 26 Sep 2015 00:08:09 +0000
Subject: [openstack-dev] [all][elections] PTL Election Conclusion and Results
Message-ID: <5605E1E9.5080406@redhat.com>

Thank you to the electorate, to all those who voted, and to all
candidates who put their name forward for PTL in this election. A
healthy, open process breeds trust in our decision-making capability;
thank you to all those who make this process possible.

Now for the results of the PTL election process, please join me in
extending congratulations to the following PTLs:

* Barbican
** Douglas Mendizabal
* Ceilometer
** Gordon Chung
* ChefOpenstack
** Jan Klare
* Cinder
** Sean Mcginnis
* Community App Catalog
** Christopher Aedo
* Congress
** Tim Hinrichs
* Cue
** Vipul Sabhaya
* Designate
** Graham Hayes
* Documentation
** Lana Brindley
* Glance
** Flavio Percoco
* Heat
** Sergey Kraynev
* Horizon
** David Lyle
* I18n
** Ying Chun Guo
* Infrastructure
** Jeremy Stanley
* Ironic
** Jim Rollenhagen
* Keystone
** Steve Martinelli
* Kolla
** Steven Dake
* Magnum PTL will be elected in another round.
* Manila
** Ben Swartzlander
* Mistral
** Renat Akhmerov
* Murano
** Serg Melikyan
* Neutron
** Armando Migliaccio
* Nova
** John Garbutt
* OpenStack UX
** Piet Kruithof
* OpenStackAnsible
** Jesse Pretorius
* OpenStackClient
** Dean Troyer
* Oslo
** Davanum Srinivas
* Packaging-deb
** Thomas Goirand
* PuppetOpenStack
** Emilien Macchi
* Quality Assurance
** Matthew Treinish
* Rally
** Boris Pavlovic
* RefStack
** Catherine Diep
* Release cycle management
** Doug Hellmann
* RpmPackaging
** Dirk Mueller
* Sahara
** Sergey Lukjanov
* Searchlight
** Travis Tripp
* Security
** Robert Clark
* Solum
** Devdatta Kulkarni
* Swift
** John Dickinson
* TripleO
** Dan Prince
* Trove
** Craig Vyvial
* Zaqar
** Fei Long Wang


Cinder results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_bbc6b6675115d3cd

Glance results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_03e00971a7e1fad8

Ironic results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff53995355fda506

Keystone results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7f18159b9ba89ad1

Mistral results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c448863622ee81e0

Neutron results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_844e671ae72d37dd

Oslo results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff0ab5e43b8f44e4


Thank you to all involved in the PTL election process,
Tristan


From mordred at inaugust.com  Sat Sep 26 00:16:51 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Fri, 25 Sep 2015 19:16:51 -0500
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <2E1ACB8D-7D12-4F10-89A0-BBBF8BD3E448@openstack.org>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local> <20150925144216.GH8745@crypt>
 <F1384511-A789-4FF3-988A-E1409E647F98@vmware.com>
 <20150925171257.GI8745@crypt>
 <2E1ACB8D-7D12-4F10-89A0-BBBF8BD3E448@openstack.org>
Message-ID: <5605E3F3.908@inaugust.com>

On 09/25/2015 06:37 PM, Chris Hoge wrote:
>
>> On Sep 25, 2015, at 10:12 AM, Andrew Laski <andrew at lascii.com
>> <mailto:andrew at lascii.com>> wrote:
>>
>> I understand that reasoning, but still am unsure on a few things.
>>
>> The direction seems to be moving towards having a requirement that the
>> same functionality is offered in two places, Nova API and Glance V2
>> API. That seems like it would fragment adoption rather than unify it.
>
> My hope would be that proxies would be deprecated as new capabilities
> moved in. Some of this will be driven by application developers too,
> though. We're looking at an interoperability standard, which has a
> natural tension between backwards compatibility and new features.

Yeah. The proxies are also less efficient, because they have to bounce 
through two places.

>>
>> Also after digging in on image-create I feel that there may be a
>> mixup.  The image-create in Glance and image-create in Nova are two
>> different things. In Glance you create an image and send the disk
>> image data in the request, in Nova an image-create takes a snapshot of
>> the instance provided in the request.  But it seems like DefCore is
>> treating them as equivalent unless I'm misunderstanding.
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From edgar.magana at workday.com  Sat Sep 26 00:27:34 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Sat, 26 Sep 2015 00:27:34 +0000
Subject: [openstack-dev] [all][elections] PTL Election Conclusion and
 Results
In-Reply-To: <5605E1E9.5080406@redhat.com>
References: <5605E1E9.5080406@redhat.com>
Message-ID: <09500E8A-E7EA-4B78-9397-04BDB78DF181@workday.com>

Congratulations to our PTLs!

Let's make Mitaka a great release.

As a Neutron core, I want to congratulate Rosella, Ryan and Armando for volunteering. It is a huge responsibility and we are all going to help.

Armando,

Next round of beers on you! 

Cheers,

Edgar




Sent from my iPhone
> On Sep 25, 2015, at 5:10 PM, Tristan Cacqueray <tdecacqu at redhat.com> wrote:
> 
> Thank you to the electorate, to all those who voted and to all
> candidates who put their name forward for PTL for this election. A
> healthy, open process breeds trust in our decision making capability
> thank you to all those who make this process possible.
> 
> Now for the results of the PTL election process, please join me in
> extending congratulations to the following PTLs:
> 
> * Barbican
> ** Douglas Mendizabal
> * Ceilometer
> ** Gordon Chung
> * ChefOpenstack
> ** Jan Klare
> * Cinder
> ** Sean Mcginnis
> * Community App Catalog
> ** Christopher Aedo
> * Congress
> ** Tim Hinrichs
> * Cue
> ** Vipul Sabhaya
> * Designate
> ** Graham Hayes
> * Documentation
> ** Lana Brindley
> * Glance
> ** Flavio Percoco
> * Heat
> ** Sergey Kraynev
> * Horizon
> ** David Lyle
> * I18n
> ** Ying Chun Guo
> * Infrastructure
> ** Jeremy Stanley
> * Ironic
> ** Jim Rollenhagen
> * Keystone
> ** Steve Martinelli
> * Kolla
> ** Steven Dake
> * Magnum PTL will be elected in another round.
> * Manila
> ** Ben Swartzlander
> * Mistral
> ** Renat Akhmerov
> * Murano
> ** Serg Melikyan
> * Neutron
> ** Armando Migliaccio
> * Nova
> ** John Garbutt
> * OpenStack UX
> ** Piet Kruithof
> * OpenStackAnsible
> ** Jesse Pretorius
> * OpenStackClient
> ** Dean Troyer
> * Oslo
> ** Davanum Srinivas
> * Packaging-deb
> ** Thomas Goirand
> * PuppetOpenStack
> ** Emilien Macchi
> * Quality Assurance
> ** Matthew Treinish
> * Rally
> ** Boris Pavlovic
> * RefStack
> ** Catherine Diep
> * Release cycle management
> ** Doug Hellmann
> * RpmPackaging
> ** Dirk Mueller
> * Sahara
> ** Sergey Lukjanov
> * Searchlight
> ** Travis Tripp
> * Security
> ** Robert Clark
> * Solum
> ** Devdatta Kulkarni
> * Swift
> ** John Dickinson
> * TripleO
> ** Dan Prince
> * Trove
> ** Craig Vyvial
> * Zaqar
> ** Fei Long Wang
> 
> 
> Cinder results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_bbc6b6675115d3cd
> 
> Glance results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_03e00971a7e1fad8
> 
> Ironic results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff53995355fda506
> 
> Keystone results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7f18159b9ba89ad1
> 
> Mistral results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c448863622ee81e0
> 
> Neutron results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_844e671ae72d37dd
> 
> Oslo results:
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff0ab5e43b8f44e4
> 
> 
> Thank you to all involved in the PTL election process,
> Tristan
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ben at swartzlander.org  Sat Sep 26 00:27:58 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Fri, 25 Sep 2015 20:27:58 -0400
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
Message-ID: <5605E68E.7060404@swartzlander.org>

On 09/24/2015 09:49 AM, John Spray wrote:
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph

Awesome! This is something that's been talked about for quite some time,
and I'm pleased to see progress on making it a reality.

> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.

This makes sense, but have you given thought to the optimal way to
provide NFS semantics for those who prefer that? Obviously you can pair
the existing Manila Generic driver with Cinder running on Ceph, but I
wonder how that would compare to some kind of Ganesha bridge that
translates between NFS and CephFS. Is that something you've looked into?

> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!

All snapshots are read-only... The question is whether you can take a 
snapshot and clone it into something that's writable. We're looking at 
allowing for different kinds of snapshot semantics in Manila for Mitaka. 
Even if there's no create-share-from-snapshot functionality a readable 
snapshot is still useful and something we'd like to enable.

The deletion issue sounds like a common one, although if you don't have 
the thing that cleans them up in the background yet I hope someone is 
working on that.

> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.

So quotas aren't enforced yet? That seems like a serious issue for any 
operator except those that want to support "infinite" size shares. I 
hope that gets fixed soon as well.

> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> enforce the separation between shares on the OSD side.

I think it will be important to document all of these limitations. I 
wouldn't let them stop you from getting the driver done, but if I was a 
deployer I'd want to know about these details.

> However, for many people the ultimate access control solution will be
> to use a NFS gateway in front of their CephFS filesystem: it is
> expected that an NFS-enabled cephfs driver will follow this native
> driver in the not-too-distant future.

Okay this answers part of my above question, but how do you expect the 
NFS gateway to work? Ganesha has been used successfully in the past.

> This will be my first openstack contribution, so please bear with me
> while I come up to speed with the submission process.  I'll also be in
> Tokyo for the summit next month, so I hope to meet other interested
> parties there.

Welcome, and I look forward to meeting you in Tokyo!

-Ben


> All the best,
> John
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From skinjo at redhat.com  Sat Sep 26 01:09:49 2015
From: skinjo at redhat.com (Shinobu Kinjo)
Date: Fri, 25 Sep 2015 21:09:49 -0400 (EDT)
Subject: [openstack-dev] CephFS native driver
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01B7CAAC6@EX10MBOX06.pnnl.gov>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <1886719958.21640286.1443164654402.JavaMail.zimbra@redhat.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAAC6@EX10MBOX06.pnnl.gov>
Message-ID: <202887051.22178513.1443229789582.JavaMail.zimbra@redhat.com>

> nfs is nearly impossible to make both HA and Scalable without adding really expensive dedicated hardware.

I don't think we need particularly expensive hardware for this purpose.

What I'm thinking now is:

[Controller] [Compute1] ... [ComputeN]
[ RADOS                              ]

The controller becomes a native Ceph client using RBD, CephFS, or whatever Ceph provides.

[Controller] [Compute1] ... [ComputeN]
[ Driver   ]
[ RADOS                              ]

The controller provides share space to VMs through NFS.

[Controller] [Compute1] ... [ComputeN]
    |        [ VM1    ]
[ NFS      ]-[ Share  ]
[ Driver   ]
    |
[ RADOS                              ]

Pacemaker or pacemaker_remote (and STONITH) provide HA across RADOS, the controller, and the compute nodes.

Here, what we really need to think about is which is better for realizing this concept: CephFS or RBD.

If we use CephFS, the Ceph client (controller) always accesses the MONs, MDSs, and OSDs to get the latest map and reach the data; in this scenario, the cost of rebalancing could be high when a failover happens.

In any case, we need to think about which architecture is more reasonable under various disaster scenarios.

Shinobu

----- Original Message -----
From: "Kevin M Fox" <Kevin.Fox at pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>, "John Spray" <jspray at redhat.com>
Cc: "Ceph Development" <ceph-devel at vger.kernel.org>
Sent: Saturday, September 26, 2015 1:05:38 AM
Subject: Re: [openstack-dev] CephFS native driver

I think having a native cephfs driver without nfs in the cloud is a very compelling feature. nfs is nearly impossible to make both HA and scalable without adding really expensive dedicated hardware. Ceph, on the other hand, scales very nicely and is very fault tolerant out of the box.

Thanks,
Kevin
________________________________________
From: Shinobu Kinjo [skinjo at redhat.com]
Sent: Friday, September 25, 2015 12:04 AM
To: OpenStack Development Mailing List (not for usage questions); John Spray
Cc: Ceph Development; openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Manila] CephFS native driver

So here are some questions from my side.
Just questions.


 1. What is the biggest advantage compared with others such as RBD?
    We should be able to implement what you are going to do in an
    existing module, shouldn't we?

 2. What are you going to focus on with a new implementation?
    It seems to be using NFS in front of that implementation
    more transparently.

 3. What are you thinking about for integration with OpenStack using
    a new implementation?
    Since it's going to be a new kind of driver, there should be a
    different architecture.

 4. Is this implementation intended mainly for OpenStack
    integration?

Since the velocity of OpenStack feature expansion is much higher than
it used to be, it's much more important to think about performance.

Is a new implementation also going to improve Ceph integration
with the OpenStack system?

Thank you so much for your explanation in advance.

Shinobu

----- Original Message -----
From: "John Spray" <jspray at redhat.com>
To: openstack-dev at lists.openstack.org, "Ceph Development" <ceph-devel at vger.kernel.org>
Sent: Thursday, September 24, 2015 10:49:17 PM
Subject: [openstack-dev] [Manila] CephFS native driver

Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!
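The move-to-trash idea above could be sketched roughly like this (the function names and trash layout here are illustrative, not the actual driver code):

```python
# Sketch of "deletion = rename into trash, clean up later": the API call
# pays only an O(1) rename; the expensive recursive delete happens in a
# background task.
import os
import shutil
import uuid

def soft_delete_share(share_path, trash_dir):
    """Move a share directory into trash with a cheap atomic rename."""
    os.makedirs(trash_dir, exist_ok=True)
    target = os.path.join(trash_dir, uuid.uuid4().hex)
    os.rename(share_path, target)  # same filesystem, so this is atomic
    return target

def purge_trash(trash_dir):
    """Background task: pay the "rm -rf" cost outside the API call."""
    for entry in os.listdir(trash_dir):
        shutil.rmtree(os.path.join(trash_dir, entry), ignore_errors=True)
```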

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.
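The flow described above might look something like the following on the command line (hypothetical names and cap syntax; the path-restricted MDS caps are exactly the not-yet-merged feature mentioned here, so treat this as a sketch, not a recipe):

```shell
# Create the share: it's just a directory in the cluster-wide filesystem
mkdir /mnt/cephfs/volumes/share-0001

# Issue a key whose MDS capability is limited to the share's subtree
ceph auth get-or-create client.share-0001 \
    mon 'allow r' \
    mds 'allow rw path=/volumes/share-0001' \
    osd 'allow rw pool=cephfs_data'

# The client mounts only its own subpath, so from its point of view
# it has a private filesystem
ceph-fuse -n client.share-0001 -r /volumes/share-0001 /mnt/share
```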

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From dougwig at parksidesoftware.com  Sat Sep 26 01:30:29 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Fri, 25 Sep 2015 19:30:29 -0600
Subject: [openstack-dev] [lbaas] [octavia] Proposing new meeting time
	Wednesday 16:00 UTC
In-Reply-To: <D22B2C34.185FD%german.eichberger@hpe.com>
References: <D22B2C34.185FD%german.eichberger@hpe.com>
Message-ID: <298EAAEC-6FD5-428A-B9F0-9E19A3BF08A9@parksidesoftware.com>

Works for me. 

Doug


> On Sep 25, 2015, at 5:58 PM, Eichberger, German <german.eichberger at hpe.com> wrote:
> 
> All,
> 
> In our last meeting [1] we discussed moving the meeting earlier to
> accommodate participants from the EMEA region. I am therefore proposing to
> move the meeting to 16:00 UTC on Wednesday. Please respond to this e-mail
> if you have alternate suggestions. I will send out another e-mail
> announcing the new time and the date we will start with that.
> 
> Thanks,
> German
> 
> [1] 
> http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.
> 00.log.html
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From vikschw at gmail.com  Sat Sep 26 01:35:18 2015
From: vikschw at gmail.com (Vikram Choudhary)
Date: Sat, 26 Sep 2015 07:05:18 +0530
Subject: [openstack-dev] [all][elections] PTL Election Conclusion and
	Results
In-Reply-To: <09500E8A-E7EA-4B78-9397-04BDB78DF181@workday.com>
References: <5605E1E9.5080406@redhat.com>
 <09500E8A-E7EA-4B78-9397-04BDB78DF181@workday.com>
Message-ID: <CAFeBh8u_n5oCthhFqBF8ovNjj1GAGSK7oEhRaG8J2FWDcQaWEA@mail.gmail.com>

Congrats Armando!
On Sep 26, 2015 5:57 AM, "Edgar Magana" <edgar.magana at workday.com> wrote:

> Congratulations to our PTLs!
>
> Let's make Mitaka a great release.
>
> As Neutron core I want to congratulate to Rosella, Ryan and Armando for
> volunteering. It is a huge responsibility and we are all going to help.
>
> Armando,
>
> Next round of beers on you!
>
> Cheers,
>
> Edgar
>
>
>
>
> Sent from my iPhone
> > On Sep 25, 2015, at 5:10 PM, Tristan Cacqueray <tdecacqu at redhat.com>
> wrote:
> >
> > Thank you to the electorate, to all those who voted and to all
> > candidates who put their name forward for PTL for this election. A
> > healthy, open process breeds trust in our decision making capability
> > thank you to all those who make this process possible.
> >
> > Now for the results of the PTL election process, please join me in
> > extending congratulations to the following PTLs:
> >
> > * Barbican
> > ** Douglas Mendizabal
> > * Ceilometer
> > ** Gordon Chung
> > * ChefOpenstack
> > ** Jan Klare
> > * Cinder
> > ** Sean Mcginnis
> > * Community App Catalog
> > ** Christopher Aedo
> > * Congress
> > ** Tim Hinrichs
> > * Cue
> > ** Vipul Sabhaya
> > * Designate
> > ** Graham Hayes
> > * Documentation
> > ** Lana Brindley
> > * Glance
> > ** Flavio Percoco
> > * Heat
> > ** Sergey Kraynev
> > * Horizon
> > ** David Lyle
> > * I18n
> > ** Ying Chun Guo
> > * Infrastructure
> > ** Jeremy Stanley
> > * Ironic
> > ** Jim Rollenhagen
> > * Keystone
> > ** Steve Martinelli
> > * Kolla
> > ** Steven Dake
> > * Magnum PTL will be elected in another round.
> > * Manila
> > ** Ben Swartzlander
> > * Mistral
> > ** Renat Akhmerov
> > * Murano
> > ** Serg Melikyan
> > * Neutron
> > ** Armando Migliaccio
> > * Nova
> > ** John Garbutt
> > * OpenStack UX
> > ** Piet Kruithof
> > * OpenStackAnsible
> > ** Jesse Pretorius
> > * OpenStackClient
> > ** Dean Troyer
> > * Oslo
> > ** Davanum Srinivas
> > * Packaging-deb
> > ** Thomas Goirand
> > * PuppetOpenStack
> > ** Emilien Macchi
> > * Quality Assurance
> > ** Matthew Treinish
> > * Rally
> > ** Boris Pavlovic
> > * RefStack
> > ** Catherine Diep
> > * Release cycle management
> > ** Doug Hellmann
> > * RpmPackaging
> > ** Dirk Mueller
> > * Sahara
> > ** Sergey Lukjanov
> > * Searchlight
> > ** Travis Tripp
> > * Security
> > ** Robert Clark
> > * Solum
> > ** Devdatta Kulkarni
> > * Swift
> > ** John Dickinson
> > * TripleO
> > ** Dan Prince
> > * Trove
> > ** Craig Vyvial
> > * Zaqar
> > ** Fei Long Wang
> >
> >
> > Cinder results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_bbc6b6675115d3cd
> >
> > Glance results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_03e00971a7e1fad8
> >
> > Ironic results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff53995355fda506
> >
> > Keystone results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_7f18159b9ba89ad1
> >
> > Mistral results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c448863622ee81e0
> >
> > Neutron results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_844e671ae72d37dd
> >
> > Oslo results:
> > http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ff0ab5e43b8f44e4
> >
> >
> > Thank you to all involved in the PTL election process,
> > Tristan
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/76274d13/attachment.html>

From harlowja at outlook.com  Sat Sep 26 04:04:32 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Fri, 25 Sep 2015 21:04:32 -0700
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <CAOyZ2aHKgwPvveqxU5hcz7GAJBXebQmZShmtJyONsi8=DVYXsA@mail.gmail.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt>
 <CAOyZ2aHKgwPvveqxU5hcz7GAJBXebQmZShmtJyONsi8=DVYXsA@mail.gmail.com>
Message-ID: <BLU436-SMTP258B7E08A4B003762D19DB4D8410@phx.gbl>

+1 from me, although I thought heat was supposed to be this thing?

Maybe there should be a 'warm' project or something ;)

Or we can call it 'bbs' for 'building block service' (obviously not 
bulletin board system); ask said service to build a set of blocks into 
well defined structures and let it figure out how to make that happen...

This most definitely requires cross-project agreement though, so I'd 
hope we can reach that somehow (before creating a halfway-done new 
orchestration thing that is halfway integrated with a bunch of other 
APIs that do one quarter of the work in ten different ways).

Duncan Thomas wrote:
> I think there's a place for yet another service breakout from nova -
> some sort of light-weight platform orchestration piece, nothing as
> complicated or complete as heat, nothing that touches the inside of a
> VM, just something that can talk to cinder, nova and neutron (plus I
> guess ironic and whatever the container thing is called) and work
> through long running / cross-project tasks. I'd probably expect it to
> provide a task style interface, e.g. a boot-from-new-volume call returns
> a request-id that can then be polled for detailed status.
>
> The existing nova API for this (and any other nova APIs where this makes
> sense) can then become a proxy for the new service, so that tenants are
> not affected. The nova apis can then be deprecated in slow time.
>
> Anybody else think this could be useful?
>
> On 25 September 2015 at 17:12, Andrew Laski <andrew at lascii.com
> <mailto:andrew at lascii.com>> wrote:
>
>     On 09/24/15 at 03:13pm, James Penick wrote:
>
>
>
>             At risk of getting too offtopic I think there's an alternate
>             solution to
>             doing this in Nova or on the client side.  I think we're
>             missing some sort
>             of OpenStack API and service that can handle this.  Nova is
>             a low level
>             infrastructure API and service, it is not designed to handle
>             these
>             orchestrations.  I haven't checked in on Heat in a while but
>             perhaps this
>             is a role that it could fill.
>
>             I think that too many people consider Nova to be *the*
>             OpenStack API when
>             considering instances/volumes/networking/images and that's
>             not something I
>             would like to see continue.  Or at the very least I would
>             like to see a
>             split between the orchestration/proxy pieces and the "manage my
>             VM/container/baremetal" bits
>
>
>
>         (new thread)
>         You've hit on one of my biggest issues right now: As far as many
>         deployers
>         and consumers are concerned (and definitely what I tell my users
>         within
>         Yahoo): The value of an OpenStack value-stream (compute,
>         network, storage)
>         is to provide a single consistent API for abstracting and
>         managing those
>         infrastructure resources.
>
>         Take networking: I can manage Firewalls, switches, IP selection,
>         SDN, etc
>         through Neutron. But for compute, If I want VM I go through
>         Nova, for
>         Baremetal I can -mostly- go through Nova, and for containers I
>         would talk
>         to Magnum or use something like the nova docker driver.
>
>         This means that, by default, Nova -is- the closest thing to a
>         top level
>         abstraction layer for compute. But if that is explicitly against
>         Nova's
>         charter, and Nova isn't going to be the top level abstraction
>         for all
>         things Compute, then something else needs to fill that space.
>         When that
>         happens, all things common to compute provisioning should come
>         out of Nova
>         and move into that new API. Availability zones, Quota, etc.
>
>
>     I do think Nova is the top level abstraction layer for compute.  My
>     issue is when Nova is asked to manage other resources.  There's no
>     API call to tell Cinder "create a volume and attach it to this
>     instance, and create that instance if it doesn't exist."  And I'm
>     not sure why the reverse isn't true.
>
>     I want Nova to be the absolute best API for managing compute
>     resources.  It's when someone is managing compute and volumes and
>     networks together that I don't feel that Nova is the best place for
>     that.  Most importantly right now it seems that not everyone is on
>     the same page on this and I think it would be beneficial to come
>     together and figure out what sort of workloads the Nova API is
>     intending to provide.
>
>
>
>         -James
>
>
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> --
> Duncan Thomas
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From julien at danjou.info  Sat Sep 26 08:48:39 2015
From: julien at danjou.info (Julien Danjou)
Date: Sat, 26 Sep 2015 10:48:39 +0200
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
	config file?
In-Reply-To: <5601EEF5.9090301@windriver.com> (Chris Friesen's message of
 "Tue, 22 Sep 2015 18:14:45 -0600")
References: <5601CD90.2050102@windriver.com>
 <CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>
 <BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>
 <5601EEF5.9090301@windriver.com>
Message-ID: <m0oagpbm6w.fsf@danjou.info>

On Tue, Sep 22 2015, Chris Friesen wrote:

> On 09/22/2015 05:48 PM, Joshua Harlow wrote:
>> A present:
>>
>>  >>> import contextlib
>>  >>> import os
>>  >>>
>>  >>> @contextlib.contextmanager
>> ... def synced_file(path, mode='wb'):
>> ...   with open(path, mode) as fh:
>> ...      yield fh
>> ...      os.fdatasync(fh.fileno())
>> ...
>>  >>> with synced_file("/tmp/b.txt") as fh:
>> ...    fh.write("b")
>
> Isn't that missing an "fh.flush()" somewhere before the fdatasync()?

Unless proven otherwise, close() does a flush().
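For reference, a self-contained version of the quoted snippet with the explicit flush Chris suggests might look like this (flush pushes Python's userspace buffer to the kernel; fdatasync only flushes kernel buffers to stable storage):

```python
# Context manager that syncs file data to disk before closing.
import contextlib
import os

@contextlib.contextmanager
def synced_file(path, mode='wb'):
    with open(path, mode) as fh:
        yield fh
        fh.flush()                 # userspace buffer -> kernel
        os.fdatasync(fh.fileno())  # kernel buffer -> stable storage

with synced_file("/tmp/b.txt") as fh:
    fh.write(b"b")
```

Note that fdatasync() runs before close() here, so relying on close() to flush would be too late for the sync to cover buffered data.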

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/d152929f/attachment.pgp>

From thierry at openstack.org  Sat Sep 26 09:17:22 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Sat, 26 Sep 2015 11:17:22 +0200
Subject: [openstack-dev] [Glance] [Horizon] [Sahara] [Barbican] Liberty RC1
	available
Message-ID: <560662A2.1020801@openstack.org>

Hello everyone,

Last for this week, Glance, Horizon, Sahara, and Barbican just produced
their first release candidate for the end of the Liberty cycle. The RC1
tarballs, as well as a list of last-minute features and fixed bugs since
liberty-1 are available at:

https://launchpad.net/glance/liberty/liberty-rc1
https://launchpad.net/horizon/liberty/liberty-rc1
https://launchpad.net/sahara/liberty/liberty-rc1
https://launchpad.net/barbican/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/glance/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/barbican/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/glance/+filebug
or
https://bugs.launchpad.net/horizon/+filebug
or
https://bugs.launchpad.net/sahara/+filebug
or
https://bugs.launchpad.net/barbican/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Glance, Horizon, Sahara and Barbican
are now officially open for Mitaka development, so feature freeze
restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)


From jspray at redhat.com  Sat Sep 26 11:02:34 2015
From: jspray at redhat.com (John Spray)
Date: Sat, 26 Sep 2015 12:02:34 +0100
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <5605E68E.7060404@swartzlander.org>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <5605E68E.7060404@swartzlander.org>
Message-ID: <CALe9h7dF1_onycacgt15QVuyP0t1DYb0PmFO9mM6TWJaFEwwiQ@mail.gmail.com>

On Sat, Sep 26, 2015 at 1:27 AM, Ben Swartzlander <ben at swartzlander.org> wrote:
> On 09/24/2015 09:49 AM, John Spray wrote:
>>
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
>
> Awesome! This is something that's been talked about for quite some time and
> I'm pleased to see progress on making it a reality.
>
>> It requires a special branch of ceph which is here:
>> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>>
>> This isn't done yet (hence this email rather than a gerrit review),
>> but I wanted to give everyone a heads up that this work is going on,
>> and a brief status update.
>>
>> This is the 'native' driver in the sense that clients use the CephFS
>> client to access the share, rather than re-exporting it over NFS.  The
>> idea is that this driver will be useful for anyone who has such
>> clients, as well as acting as the basis for a later NFS-enabled
>> driver.
>
>
> This makes sense, but have you given thought to the optimal way to provide
> NFS semantics for those who prefer that? Obviously you can pair the existing
> Manila Generic driver with Cinder running on ceph, but I wonder how that
> would compare to some kind of ganesha bridge that translates between NFS and
> cephfs. Is that something you've looked into?

The Ceph FSAL in ganesha already exists, some work is going on at the
moment to get it more regularly built and tested.  There's some
separate design work to be done to decide exactly how that part of
things is going to work, including discussing with all the right
people, but I didn't want to let that hold up getting the initial
native driver out there.

>> The export location returned by the driver gives the client the Ceph
>> mon IP addresses, the share path, and an authentication token.  This
>> authentication token is what permits the clients access (Ceph does not
>> do access control based on IP addresses).
>>
>> It's just capable of the minimal functionality of creating and
>> deleting shares so far, but I will shortly be looking into hooking up
>> snapshots/consistency groups, albeit for read-only snapshots only
>> (cephfs does not have writable snapshots).  Currently deletion is
>> just a move into a 'trash' directory, the idea is to add something
>> later that cleans this up in the background: the downside to the
>> "shares are just directories" approach is that clearing them up has a
>> "rm -rf" cost!
>
>
> All snapshots are read-only... The question is whether you can take a
> snapshot and clone it into something that's writable. We're looking at
> allowing for different kinds of snapshot semantics in Manila for Mitaka.
> Even if there's no create-share-from-snapshot functionality a readable
> snapshot is still useful and something we'd like to enable.

Enabling creation of snapshots is pretty trivial; the slightly more
interesting part will be accessing them.  CephFS doesn't provide a
rollback mechanism, so

> The deletion issue sounds like a common one, although if you don't have the
> thing that cleans them up in the background yet I hope someone is working on
> that.

Yeah, that would be me -- the most important sentence in my original
email was probably "this isn't done yet" :-)

>> A note on the implementation: cephfs recently got the ability (not yet
>> in master) to restrict client metadata access based on path, so this
>> driver is simply creating shares by creating directories within a
>> cluster-wide filesystem, and issuing credentials to clients that
>> restrict them to their own directory.  They then mount that subpath,
>> so that from the client's point of view it's like having their own
>> filesystem.  We also have a quota mechanism that I'll hook in later to
>> enforce the share size.
>
>
> So quotas aren't enforced yet? That seems like a serious issue for any
> operator except those that want to support "infinite" size shares. I hope
> that gets fixed soon as well.

Same again, just not done yet.  Well, actually since I wrote the
original email I added quota support to my branch, so never mind!

>> Currently the security here requires clients (i.e. the ceph-fuse code
>> on client hosts, not the userspace applications) to be trusted, as
>> quotas are enforced on the client side.  The OSD access control
>> operates on a per-pool basis, and creating a separate pool for each
>> share is inefficient.  In the future it is expected that CephFS will
>> be extended to support file layouts that use RADOS namespaces, which
>> are cheap, such that we can issue a new namespace to each share and
>> enforce the separation between shares on the OSD side.
>
>
> I think it will be important to document all of these limitations. I
> wouldn't let them stop you from getting the driver done, but if I was a
> deployer I'd want to know about these details.

Yes, definitely.  I'm also adding an optional flag when creating
volumes to give them their own RADOS pool for data, which would make
the level of isolation much stronger, at the cost of using more
resources per volume.  Creating separate pools has a substantial
overhead, but in sites with a relatively small number of shared
filesystems it could be desirable.  We may also want to look into
making this a layered thing with a pool per tenant, and then
less-isolated shares within that pool.  (pool in this paragraph means
the ceph concept, not the manila concept).

At some stage I would like to add the ability to have physically
separate filesystems within ceph (i.e. filesystems don't share the
same MDSs), which would add a second optional level of isolation for
metadata as well as data.

Overall though, there's going to be sort of a race here between the
native ceph multitenancy capability, and the use of NFS to provide
similar levels of isolation.

>> However, for many people the ultimate access control solution will be
>> to use a NFS gateway in front of their CephFS filesystem: it is
>> expected that an NFS-enabled cephfs driver will follow this native
>> driver in the not-too-distant future.
>
>
> Okay this answers part of my above question, but how to you expect the NFS
> gateway to work? Ganesha has been used successfully in the past.

Ganesha is the preferred server right now.  There is probably going to
need to be some level of experimentation needed to confirm that it's
working and performing sufficiently well compared with knfs on top of
the cephfs kernel client.  Personally though, I have a strong
preference for userspace solutions where they work well enough.

The broader question is exactly where in the system the NFS gateways
run, and how they get configured -- that's the very next conversation
to have after the guts of this driver are done.  We are interested in
approaches that bring the CephFS protocol as close to the guests as
possible before bridging it to NFS, possibly even running ganesha
instances locally on the hypervisors, but I don't think we're ready to
draw a clear picture of that just yet, and I suspect we will end up
wanting to enable multiple methods, including the lowest common
denominator "run a VM with a ceph client and ganesha" case.

>> This will be my first openstack contribution, so please bear with me
>> while I come up to speed with the submission process.  I'll also be in
>> Tokyo for the summit next month, so I hope to meet other interested
>> parties there.
>
>
> Welcome and I look forward you meeting you in Tokyo!

Likewise!

John

>
> -Ben
>
>
>
>> All the best,
>> John
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From jspray at redhat.com  Sat Sep 26 11:08:05 2015
From: jspray at redhat.com (John Spray)
Date: Sat, 26 Sep 2015 12:08:05 +0100
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <CALe9h7dF1_onycacgt15QVuyP0t1DYb0PmFO9mM6TWJaFEwwiQ@mail.gmail.com>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <5605E68E.7060404@swartzlander.org>
 <CALe9h7dF1_onycacgt15QVuyP0t1DYb0PmFO9mM6TWJaFEwwiQ@mail.gmail.com>
Message-ID: <CALe9h7c163D98qZtSTchz9_vOiuktOAi+uYnEJOkkTS9y3jwSA@mail.gmail.com>

On Sat, Sep 26, 2015 at 12:02 PM, John Spray <jspray at redhat.com> wrote:
> On Sat, Sep 26, 2015 at 1:27 AM, Ben Swartzlander <ben at swartzlander.org> wrote:
>> All snapshots are read-only... The question is whether you can take a
>> snapshot and clone it into something that's writable. We're looking at
>> allowing for different kinds of snapshot semantics in Manila for Mitaka.
>> Even if there's no create-share-from-snapshot functionality a readable
>> snapshot is still useful and something we'd like to enable.
>
> Enabling creation of snapshots is pretty trivial, the slightly more
> interesting part will be accessing them.  CephFS doesn't provide a
> rollback mechanism, so

Oops, missed a bit.

Looking again at the level of support for snapshots in Manila's
current API, it seems like we may not be in such bad shape anyway.
Yes, the cloning case is what I'm thinking about when I talk about
writable snapshots: currently, clone-from-snapshot is probably going to
look like a "cp -r", unfortunately.  However, if someone could ask for
a read-only clone, then we would be able to give them direct access to
the snapshot itself.  I haven't fully looked into the snapshot handling
in Manila, so let me know if any of this doesn't make sense.
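
To make the "cp -r" point concrete, a naive clone-from-snapshot is essentially just a recursive copy, O(data) in time and space rather than a cheap COW clone (the function name and signature below are illustrative, not the actual Manila driver interface):

```python
import shutil


def create_share_from_snapshot(snapshot_path, new_share_path):
    """Naive clone: materialize a writable share by recursively copying
    the (read-only) snapshot contents -- the "cp -r" approach.

    Unlike a true copy-on-write clone, this costs time and capacity
    proportional to the amount of data in the snapshot.
    """
    shutil.copytree(snapshot_path, new_share_path, symlinks=True)
    return new_share_path
```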

John


From twinkle1chawla at gmail.com  Sat Sep 26 13:21:37 2015
From: twinkle1chawla at gmail.com (Twinkle Chawla)
Date: Sat, 26 Sep 2015 18:51:37 +0530
Subject: [openstack-dev] [Openstack] [Glance] [Horizon] [Sahara]
	[Barbican] Liberty RC1 available
In-Reply-To: <560662A2.1020801@openstack.org>
References: <560662A2.1020801@openstack.org>
Message-ID: <CAGUS0pBZFUgBnmzD5Zvy7f8Wx32-1Twt4yCp43Z9N7rJvOFC6w@mail.gmail.com>

Hello everyone,
I am new to Outreachy - OpenStack and I want to contribute to it. I am
having trouble selecting a project; please help me out so that I can find
a way to move on.

Regards,

On Sat, Sep 26, 2015 at 2:47 PM, Thierry Carrez <thierry at openstack.org>
wrote:

> Hello everyone,
>
> Last for this week, Glance, Horizon, Sahara, and Barbican just produced
> their first release candidate for the end of the Liberty cycle. The RC1
> tarballs, as well as a list of last-minute features and fixed bugs since
> liberty-1 are available at:
>
> https://launchpad.net/glance/liberty/liberty-rc1
> https://launchpad.net/horizon/liberty/liberty-rc1
> https://launchpad.net/sahara/liberty/liberty-rc1
> https://launchpad.net/barbican/liberty/liberty-rc1
>
> Unless release-critical issues are found that warrant a release
> candidate respin, these RC1s will be formally released as final versions
> on October 15. You are therefore strongly encouraged to test and
> validate these tarballs!
>
> Alternatively, you can directly test the stable/liberty release branch at:
>
> http://git.openstack.org/cgit/openstack/glance/log/?h=stable/liberty
> http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/liberty
> http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/liberty
> http://git.openstack.org/cgit/openstack/barbican/log/?h=stable/liberty
>
> If you find an issue that could be considered release-critical, please
> file it at:
>
> https://bugs.launchpad.net/glance/+filebug
> or
> https://bugs.launchpad.net/horizon/+filebug
> or
> https://bugs.launchpad.net/sahara/+filebug
> or
> https://bugs.launchpad.net/barbican/+filebug
>
> and tag it *liberty-rc-potential* to bring it to the release crew's
> attention.
>
> Note that the "master" branches of Glance, Horizon, Sahara and Barbican
> are now officially open for Mitaka development, so feature freeze
> restrictions no longer apply there.
>
> Regards,
>
> --
> Thierry Carrez (ttx)
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>



-- 
*TWINKLE CHAWLA*
________________________________________________________________
B. Tech. (Computer Science Engineering)
Arya College of Engineering & Information Technology, Jaipur

________________________________________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/71fd5319/attachment.html>

From annegentle at justwriteclick.com  Sat Sep 26 14:19:58 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Sat, 26 Sep 2015 09:19:58 -0500
Subject: [openstack-dev] [glance] [infra] [neutron] [swift] [trove] [zaqar]
 [murano] [docs] Outreachy: selecting a project to contribute to
Message-ID: <CAD0KtVEkNKUExOf+PYAw+Cq03iw9iV9b9REhZz2m+mqOMER4Bw@mail.gmail.com>

Hi Twinkle,

Welcome! Thanks for your interest. If you have time, stop by #openstack-opw
on Freenode IRC to have a real-time chat with OpenStack Outreachy admins
and mentors and other interns.

Also, typically you would write an email to the mailing list with a new
subject line so that we can reply to you in the thread about your question.
I'm doing that with my reply to show an example of using topics or keywords
with square brackets in the subject line. Those are some of the projects
with mentors listed in https://wiki.openstack.org/wiki/Outreachy.

Hope that helps in your quest to learn more about each project.

Thanks,
Anne



On Sat, Sep 26, 2015 at 8:21 AM, Twinkle Chawla <twinkle1chawla at gmail.com>
wrote:

> Hello everyone,
> I am new to Outreachy - Openstack and I want to contribute in it. I am
> getting problem in selecting Project, Please help me out so that I can find
> a way to move on.
>
> Regards,
>
> On Sat, Sep 26, 2015 at 2:47 PM, Thierry Carrez <thierry at openstack.org>
> wrote:
>
>> Hello everyone,
>>
>> Last for this week, Glance, Horizon, Sahara, and Barbican just produced
>> their first release candidate for the end of the Liberty cycle. The RC1
>> tarballs, as well as a list of last-minute features and fixed bugs since
>> liberty-1 are available at:
>>
>> https://launchpad.net/glance/liberty/liberty-rc1
>> https://launchpad.net/horizon/liberty/liberty-rc1
>> https://launchpad.net/sahara/liberty/liberty-rc1
>> https://launchpad.net/barbican/liberty/liberty-rc1
>>
>> Unless release-critical issues are found that warrant a release
>> candidate respin, these RC1s will be formally released as final versions
>> on October 15. You are therefore strongly encouraged to test and
>> validate these tarballs !
>>
>> Alternatively, you can directly test the stable/liberty release branch at:
>>
>> http://git.openstack.org/cgit/openstack/glance/log/?h=stable/liberty
>> http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/liberty
>> http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/liberty
>> http://git.openstack.org/cgit/openstack/barbican/log/?h=stable/liberty
>>
>> If you find an issue that could be considered release-critical, please
>> file it at:
>>
>> https://bugs.launchpad.net/glance/+filebug
>> or
>> https://bugs.launchpad.net/horizon/+filebug
>> or
>> https://bugs.launchpad.net/sahara/+filebug
>> or
>> https://bugs.launchpad.net/barbican/+filebug
>>
>> and tag it *liberty-rc-potential* to bring it to the release crew's
>> attention.
>>
>> Note that the "master" branches of Glance, Horizon, Sahara and Barbican
>> are now officially open for Mitaka development, so feature freeze
>> restrictions no longer apply there.
>>
>> Regards,
>>
>> --
>> Thierry Carrez (ttx)
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
>
>
> --
> *TWINKLE CHAWLA*
> ________________________________________________________________
> B. Tech. (Computer Science Engineering)
> Arya College of Engineering & Information Technology, Jaipur
>
> ________________________________________________________________
>
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/5a2e7166/attachment.html>

From duncan.thomas at gmail.com  Sat Sep 26 15:30:09 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Sat, 26 Sep 2015 18:30:09 +0300
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <BLU436-SMTP258B7E08A4B003762D19DB4D8410@phx.gbl>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt>
 <CAOyZ2aHKgwPvveqxU5hcz7GAJBXebQmZShmtJyONsi8=DVYXsA@mail.gmail.com>
 <BLU436-SMTP258B7E08A4B003762D19DB4D8410@phx.gbl>
Message-ID: <CAOyZ2aHKG4LAyyK0z6EiDByEEWTtWZdLZU3DghVchGEOciJbrw@mail.gmail.com>

On 26 September 2015 at 07:04, Joshua Harlow <harlowja at outlook.com> wrote:

> Maybe there should be a 'warm' project or something ;)
>
> Or we can call it 'bbs' for 'building block service' (obviously not
> bulletin board system); ask said service to build a set of blocks into well
> defined structures and let it figure out how to make that happen...
>
>
I was looking for a pun on herding cats for the name, but I've not yet come
up with one.

More seriously though, I don't see this as directly being Heat; Heat is a
project with much greater scope. What I'm thinking of is, like
cinder/neutron, just breaking out a small piece of the Nova API, cleaning
it up and adding any obviously missing features. It is a project that could
easily be ruined by scope creep.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/bfc0d85b/attachment.html>

From yolandeamate at gmail.com  Sat Sep 26 20:20:57 2015
From: yolandeamate at gmail.com (Amate Yolande)
Date: Sat, 26 Sep 2015 21:20:57 +0100
Subject: [openstack-dev] [murano] Outreachy: Interest in contributing to
	open-source
Message-ID: <CAFAMDXaHJ522on7m6cSObGuoqzxwMCmkc-dRcY4RekjO6WfXrA@mail.gmail.com>

Hello

My name is Amate Yolande from Buea, Cameroon. I am new to open source
and I am interested in participating in Outreachy. I would like to
work on the "Murano - Implementation of tagging heat stacks, created
by murano" project and would like some guidance on how to
familiarize myself with the project. So far I have been able to
install and test OpenStack from DevStack on a spare computer using a
local network at home.

Thanks
Yolande


From ijw.ubuntu at cack.org.uk  Sun Sep 27 04:19:50 2015
From: ijw.ubuntu at cack.org.uk (Ian Wells)
Date: Sat, 26 Sep 2015 21:19:50 -0700
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <20150925190247.GF4731@yuggoth.org>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
Message-ID: <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>

Can I ask a different question - could we reject a few simple-to-check
things on the push, like bad commit messages?  For things that take 2
seconds to fix and do make people's lives better, it's not that they're
rejected, it's that the whole rejection cycle via gerrit review (push/wait
for tests to run/check website/swear/find change/fix/push again) is out of
proportion to the effort taken to fix it.

It seems here that there's benefit to wrapping commit messages at 72
characters - not that everyone sees that benefit, but it is present - but
it doesn't outweigh the current cost.
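
For what it's worth, the simple-to-check part could even run before the push, as a client-side commit-msg hook. A rough sketch (the 50/72 limits are the usual convention, adjust to taste; a real hook would probably also exempt long URLs and Change-Id lines):

```python
#!/usr/bin/env python
# Sketch of a git commit-msg hook that rejects over-long lines before
# the gerrit review cycle ever starts. Install as .git/hooks/commit-msg.
import sys


def check_message(text, subject_limit=50, body_limit=72):
    """Return a list of complaints about commit message formatting."""
    problems = []
    lines = text.splitlines()
    if lines and len(lines[0]) > subject_limit:
        problems.append("subject line longer than %d chars" % subject_limit)
    # Body lines are numbered from 2; comment lines ("#") are ignored,
    # matching how git strips them from the final message.
    for n, line in enumerate(lines[1:], start=2):
        if len(line) > body_limit and not line.startswith("#"):
            problems.append("line %d longer than %d chars" % (n, body_limit))
    return problems


if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        errors = check_message(f.read())
    for e in errors:
        sys.stderr.write(e + "\n")
    sys.exit(1 if errors else 0)
```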
-- 
Ian.


On 25 September 2015 at 12:02, Jeremy Stanley <fungi at yuggoth.org> wrote:

> On 2015-09-25 16:15:15 +0000 (+0000), Fox, Kevin M wrote:
> > Another option... why are we wasting time on something that a
> > computer can handle? Why not just let the line length be infinite
> > in the commit message and have gerrit wrap it to <insert random
> > number here> length lines on merge?
>
> The commit message content (including whitespace/formatting) is part
> of the data fed into the hash algorithm to generate the commit
> identifier. If Gerrit changed the commit message at upload, that
> would alter the Git SHA compared to your local copy of the same
> commit. This quickly goes down a Git madness rabbit hole (not the
> least of which is that it would completely break signed commits).
> --
> Jeremy Stanley
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/e4bc4109/attachment.html>

From irenab.dev at gmail.com  Sun Sep 27 04:59:15 2015
From: irenab.dev at gmail.com (Irena Berezovsky)
Date: Sun, 27 Sep 2015 07:59:15 +0300
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
Message-ID: <CALqgCCodFKCp_ar5M6+9b9Ngiqf1qc6Rk-SKrF2GBYzc9f2CMw@mail.gmail.com>

I would like to second Kevin. This can be done in a similar way to how
the ML2 plugin passes plugin_context to the ML2 extension drivers:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L910
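
To illustrate option (4) from Thomas's list below: the agent builds a small facade exposing only blessed operations, and the extension manager hands that facade to each extension's initialize(). The class and method names here are made up for illustration, not actual Neutron code:

```python
class AgentExtensionAPI(object):
    """Facade built by the agent and passed through the extension
    manager to each extension; extensions program against this stable
    surface instead of the agent object itself."""

    def __init__(self, agent):
        self._agent = agent

    def request_int_br(self):
        # Expose only the pieces extensions are allowed to touch; the
        # agent's internals can be refactored freely behind this API.
        return self._agent.int_br


class BagpipeExtension(object):
    """Hypothetical L2 agent extension consuming the facade."""

    def initialize(self, agent_api):
        self.agent_api = agent_api

    def plug_mpls_port(self):
        # Manipulate OVS bridges via the facade, never the agent directly.
        return self.agent_api.request_int_br()
```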

BR,
Irena

On Fri, Sep 25, 2015 at 11:57 AM, Kevin Benton <blak111 at gmail.com> wrote:

> I think the 4th of the options you proposed would be the best. We don't
> want to give agents direct access to the agent object or else we will run
> the risk of breaking extensions all of the time during any kind of
> reorganization or refactoring. Having a well defined API in between will
> give us flexibility to move things around.
>
> On Fri, Sep 25, 2015 at 1:32 AM, <thomas.morin at orange.com> wrote:
>
>> Hi everyone,
>>
>> (TL;DR: we would like an L2 agent extension to be able to call methods on
>> the agent class, e.g. OVSAgent)
>>
>> In the networking-bgpvpn project, we need the reference driver to
>> interact with the ML2 openvswitch agent with new RPCs to allow exchanging
>> information with the BGP VPN implementation running on the compute nodes.
>> We also need the OVS agent to setup specific things on the OVS bridges for
>> MPLS traffic.
>>
>> To extend the agent behavior, we currently create a new agent by
>> mimicking the main() in ovs_neutron_agent.py, but instead of
>> instantiating OVSAgent, we instantiate a class that overloads the
>> OVSAgent class with the additional behavior we need [1].
>>
>> This is really not the ideal way of extending the agent, and we would
>> prefer using the L2 agent extension framework [2].
>>
>> Using the L2 agent extension framework would work, but only partially: it
>> would easily allow us to register our RPC consumers, but it would not let
>> us access some data structures/methods of the agent that we need to use:
>> setup_entry_for_arp_reply and local_vlan_map, and access to the OVSBridge
>> objects to manipulate OVS ports.
>>
>> I've filed an RFE bug to track this issue [5].
>>
>> We would like something like one of the following:
>> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of the initialize method
>> [4]
>> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of a new setAgent method
>> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access only to specific/chosen methods on the agent object, for
>> instance by giving a dict as a parameter of the initialize method [4],
>> whose keys would be method names, and values would be pointer to these
>> methods on the agent object
>> 4) define a new interface with methods to access things inside the agent,
>> this interface would be implemented by an object instantiated by the agent,
>> and that the agent would pass to the extension manager, thus allowing the
>> extension manager to pass the object to an extension through the
>> initialize method of AgentCoreResourceExtension [4]
>>
>> Any feedback on these ideas...?
>> Of course any other idea is welcome...
>>
>> For the sake of triggering reaction, the question could be rephrased as:
>> if we submit a change doing (1) above, would it have a reasonable chance of
>> merging?
>>
>> -Thomas
>>
>> [1]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
>> [2] https://review.openstack.org/#/c/195439/
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>
>> _________________________________________________________________________________________________________________________
>>
>> Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
>> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
>> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
>> Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.
>>
>> This message and its attachments may contain confidential or privileged information that may be protected by law;
>> they should not be distributed, used or copied without authorisation.
>> If you have received this email in error, please notify the sender and delete this message and its attachments.
>> As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
>> Thank you.
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150927/0d04c9bf/attachment.html>

From tony.a.wang at alcatel-lucent.com  Sun Sep 27 06:26:20 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Sun, 27 Sep 2015 06:26:20 +0000
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <56041F07.9080705@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Russell,

Thanks for your valuable information.
I understood that Geneve is a kind of tunnel format for network virtualization encapsulation, just like VxLAN.
But I'm still confused by the connection between Geneve and VTEP.
I assume VTEP stands for "VxLAN Tunnel Endpoint", which would be used for VxLAN only.

Does it become some "common tunnel endpoint" in OVN that can also be used as a tunnel endpoint for Geneve?

Thanks,
Tony
-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Friday, September 25, 2015 12:04 AM
To: WANG, Ming Hao (Tony T); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

On 09/24/2015 10:37 AM, WANG, Ming Hao (Tony T) wrote:
> Russell,
> 
> Thanks for your detailed explanation and kind help!
> I now understand how a container in a VM can acquire network interfaces in different neutron networks.
> For the connections between compute nodes, I think I need to study the Geneve protocol and VTEP first.
> For any further questions, I may need to continue consulting you. :-)

OVN uses Geneve in conceptually the same way that the Neutron reference implementation (ML2+OVS) uses VxLAN: to create overlay networks among the compute nodes for tenant traffic.

VTEP gateways or provider networks come into play when you want to connect these overlay networks to physical, or "underlay" networks.

Hope that helps,

--
Russell Bryant


From morgan.fainberg at gmail.com  Sun Sep 27 06:36:09 2015
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Sat, 26 Sep 2015 23:36:09 -0700
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
Message-ID: <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>

As a core (and former PTL) I just ignore commit message -1s unless there is something majorly wrong (no bug ID where one is needed, etc.). 

I appreciate well-formatted commits, but can we let this one go? This discussion is so far into meta-bike-shedding (bike-shedding about bike-shedding about commit messages)... If a commit message is *that* bad, a -1 (or just fixing it?) might be worth it. However, if a commit isn't missing key info (bug ID? BP? etc.) and isn't one long, incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review. 

It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review the code instead of going crazy over commit messages. 

Sent via mobile

> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> 
> Can I ask a different question - could we reject a few simple-to-check things on the push, like bad commit messages?  For things that take 2 seconds to fix and do make people's lives better, it's not that they're rejected, it's that the whole rejection cycle via gerrit review (push/wait for tests to run/check website/swear/find change/fix/push again) is out of proportion to the effort taken to fix it.
> 
> It seems here that there's benefit to 72 line messages - not that everyone sees that benefit, but it is present - but it doesn't outweigh the current cost.
> -- 
> Ian.
> 
> 
>> On 25 September 2015 at 12:02, Jeremy Stanley <fungi at yuggoth.org> wrote:
>> On 2015-09-25 16:15:15 +0000 (+0000), Fox, Kevin M wrote:
>> > Another option... why are we wasting time on something that a
>> > computer can handle? Why not just let the line length be infinite
>> > in the commit message and have gerrit wrap it to <insert random
>> > number here> length lines on merge?
>> 
>> The commit message content (including whitespace/formatting) is part
>> of the data fed into the hash algorithm to generate the commit
>> identifier. If Gerrit changed the commit message at upload, that
>> would alter the Git SHA compared to your local copy of the same
>> commit. This quickly goes down a Git madness rabbit hole (not the
>> least of which is that it would completely break signed commits).
>> --
>> Jeremy Stanley
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150926/fb25d6be/attachment.html>

From tony.a.wang at alcatel-lucent.com  Sun Sep 27 07:50:37 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Sun, 27 Sep 2015 07:50:37 +0000
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com> 
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78E43E@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Russell,

Another question is about "localnet". It is a very useful feature. :-)

Is it possible to assign which VLAN tag will be used for a specific provider network?
In your example in https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905, "physnet1" is used as the physical network, and br-eth1 as the provider network OpenFlow switch.
If we can assign the VLAN tag of the provider network, is the VLAN tag translation done by "br-int" or by "br-eth1"?


Thanks,
Tony

-----Original Message-----
From: WANG, Ming Hao (Tony T) 
Sent: Sunday, September 27, 2015 2:26 PM
To: 'Russell Bryant'; OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

Russell,

Thanks for your valuable information.
I understood Geneve is some kind of tunnel format for network virtualization encapsulation, just like VxLAN.
But I'm still confused by the connection between Geneve and VTEP.
I suppose VTEP should be on behalf of "VxLAN Tunnel Endpoint", which should be used for VxLAN only.

Does it become some "common tunnel endpoint" in OVN, and can be also used as a tunnel endpoint for Geneve?

Thanks,
Tony
-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Friday, September 25, 2015 12:04 AM
To: WANG, Ming Hao (Tony T); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

On 09/24/2015 10:37 AM, WANG, Ming Hao (Tony T) wrote:
> Russell,
> 
> Thanks for your detail explanation and kind help!
> I have understand how container in VM can acquire network interfaces in different neutron networks now.
> For the connections between compute nodes, I think I need to study Geneve protocol and VTEP first.
> Any further question, I may need to continue consulting you. :-)

OVN uses Geneve in conceptually the same way as to how the Neutron reference implementation (ML2+OVS) uses VxLAN to create overlay networks among the compute nodes for tenant overlay networks.

VTEP gateways or provider networks come into play when you want to connect these overlay networks to physical, or "underlay" networks.

Hope that helps,

--
Russell Bryant


From blak111 at gmail.com  Sun Sep 27 10:50:01 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Sun, 27 Sep 2015 12:50:01 +0200
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <F1F484A52BD63243B5497BFC9DE26E5A1A78E43E@SG70YWXCHMBA05.zap.alcatel-lucent.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E43E@SG70YWXCHMBA05.zap.alcatel-lucent.com>
Message-ID: <CAO_F6JPMR=kwdFXrOPs=u111-siR5LMLg7z_tPzf00OznN1rww@mail.gmail.com>

Assuming it implements the normal provider networks API, you just specify
the segmentation_id when you create the network.

neutron net-create NET_NAME --provider:network_type vlan
--provider:physical_network physnet1 --provider:segmentation_id VLAN_TAG

On Sun, Sep 27, 2015 at 9:50 AM, WANG, Ming Hao (Tony T) <
tony.a.wang at alcatel-lucent.com> wrote:

> Russell,
>
> Another question is about "localnet". It is a very useful feature. :-)
>
> Is it possible to assign which VLAN tag will be used for a specific
> provider network?
> In your example in
> https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905
> : "physnet1" is used as physical network, and br-eth1 is used as the
> provider network OpenFlow switch.
> If we can assign the VLAN tag of the provider network, is the VLAN tag
> translation done by "br-int" or "br-eth1"?
>
>
> Thanks,
> Tony
>
> -----Original Message-----
> From: WANG, Ming Hao (Tony T)
> Sent: Sunday, September 27, 2015 2:26 PM
> To: 'Russell Bryant'; OpenStack Development Mailing List (not for usage
> questions)
> Subject: RE: [openstack-dev] [neutron + ovn] Does neutron ovn plugin
> support to setup multiple neutron networks for one container?
>
> Russell,
>
> Thanks for your valuable information.
> I understand that Geneve is a tunnel format for network
> virtualization encapsulation, just like VxLAN.
> But I'm still confused about the connection between Geneve and VTEP.
> I assume VTEP stands for "VxLAN Tunnel Endpoint", which should be
> used for VxLAN only.
>
> Does it become a "common tunnel endpoint" in OVN, one that can also be
> used as a tunnel endpoint for Geneve?
>
> Thanks,
> Tony
> -----Original Message-----
> From: Russell Bryant [mailto:rbryant at redhat.com]
> Sent: Friday, September 25, 2015 12:04 AM
> To: WANG, Ming Hao (Tony T); OpenStack Development Mailing List (not for
> usage questions)
> Subject: Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin
> support to setup multiple neutron networks for one container?
>
> On 09/24/2015 10:37 AM, WANG, Ming Hao (Tony T) wrote:
> > Russell,
> >
> > Thanks for your detailed explanation and kind help!
> > I now understand how a container in a VM can acquire network interfaces
> in different neutron networks.
> > For the connections between compute nodes, I think I need to study the
> Geneve protocol and VTEP first.
> > If I have any further questions, I may need to consult you again. :-)
>
> OVN uses Geneve in conceptually the same way that the Neutron
> reference implementation (ML2+OVS) uses VxLAN: to create overlay networks
> among the compute nodes for tenant networks.
>
> VTEP gateways or provider networks come into play when you want to connect
> these overlay networks to physical, or "underlay" networks.
>
> Hope that helps,
>
> --
> Russell Bryant
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150927/0063f196/attachment.html>

From doug at doughellmann.com  Sun Sep 27 12:43:16 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Sun, 27 Sep 2015 08:43:16 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
Message-ID: <1443356431-sup-7293@lrrr.local>

Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> > 
> > Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> >> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosmaita at RACKSPACE.COM> wrote:
> >>> 
> >>> I'd like to clarify something.
> >>> 
> >>> On 9/25/15, 12:16 PM, "Mark Voelker" <mvoelker at vmware.com> wrote:
> >>> [big snip]
> >>>> Also worth pointing out here: when we talk about "doing the same thing"
> >>>> from a DefCore perspective, we're essentially talking about what's
> >>>> exposed to the end user, not how that's implemented in OpenStack's source
> >>>> code.  So from an end user's perspective:
> >>>> 
> >>>> If I call nova image-create, I get an image in my cloud.  If I call the
> >>>> Glance v2 API to create an image, I also get an image in my cloud.  I
> >>>> neither see nor care that Nova is actually talking to Glance in the
> >>>> background, because if I'm writing code that uses the OpenStack APIs, I
> >>>> need to pick which one of those two APIs to make my code call upon to
> >>>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
> >>>> of if/else branches into my code because some clouds I want to use only
> >>>> allow one way and some allow only the other.
> >>> 
> >>> The above is a bit inaccurate.
> >>> 
> >>> The nova image-create command does give you an image in your cloud.  The
> >>> image you get, however, is a snapshot of an instance that has been
> >>> previously created in Nova.  If you don't have an instance, you cannot
> >>> create an image via that command.  There is no provision in the Compute
> >>> (Nova) API to allow you to create an image out of bits that you supply.
> >>> 
> >>> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> >>> register them as an image which you can then use to boot instances from by
> >>> using the Compute API.  But note that if all you have available is the
> >>> Images API, you cannot create an image of one of your instances.
> >>> 
> >>>> So from that end-user perspective, the Nova image-create API indeed does
> >>>> "do the same thing" as the Glance API.
> >>> 
> >>> They don't "do the same thing".  Even if you have full access to the
> >>> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> >>> create an image of an instance, which is by far the largest use-case for
> >>> image creation.  You can't do it through Glance, because Glance doesn't
> >>> know anything about instances.  Nova has to know about Glance, because it
> >>> needs to fetch images for instance creation, and store images for
> >>> on-demand images of instances.
> >> 
> >> Yup, that's fair: this was a bad example to pick (need moar coffee I guess).  Let's use image-list instead. =)
> > 
> > From a "technical direction" perspective, I still think it's a bad
> 
> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance than at future technical direction.  More on that over here. [1]

And yet future technical direction does factor in, and I'm trying
to add a new heuristic to that aspect of consideration of tests:
Do not add tests that use proxy APIs.

If there is some compelling reason to add a capability for which
the only tests use a proxy, that's important feedback for the
contributor community and tells us we need to improve our test
coverage. If the reason to use the proxy is that no one is deploying
the proxied API publicly, that is also useful feedback, but I suspect
we will, in most cases (glance is the exception), say "Yeah, that's
not how we mean for you to run the services long-term, so don't
include that capability."

> 
> > situation for us to be relying on any proxy APIs like this. Yes,
> > they are widely deployed, but we want to be using glance for image
> > features, neutron for networking, etc. Having the nova proxy is
> > fine, but while we have DefCore using tests to enforce the presence
> > of the proxy we can't deprecate those APIs.
> 
> 
> Actually that's not true: DefCore can totally deprecate things too, and can do so in response to the technical community deprecating things.  See my comments in this review [2].  Maybe I need to write another post about that...

Sorry, I wasn't clear. The Nova team would, I expect, view the use of
those APIs in DefCore as a reason to avoid deprecating them in the code
even if they wanted to consider them as legacy features that should be
removed. Maybe that's not true, and the Nova team would be happy to
deprecate the APIs, but I did think that part of the feedback cycle we
were establishing here was to have an indication from the outside of the
contributor base about what APIs are considered important enough to keep
alive for a long period of time.

> 
> /me envisions the title being "Who's on First?"

Indeed.

> 
> > 
> > What do we need to do to make that change happen over the next cycle
> > or so?
> 
> There are several things that can be done:
> 
> First, if you don't like the Criteria or the weights that the various Criteria today have, we can suggest changes to them.  The Board of Directors will ultimately have to approve that change, but we can certainly ask (I think there's plenty of evidence that our Directors listen to the community's concerns).  There's actually already some early discussion about that now, though most of the energy is going into other things at the moment (because deadlines).  See post above for links.
> 
> Second, we certainly could consider changes to the Capabilities that are currently required.  That happens every six months according to a Board-approved schedule. [3]  The window is just about to close for the next Guideline, but that might be OK given that a lot of stuff is likely to be advisory in the next Guideline anyway, and advisory cycles are explicitly meant to generate feedback like this.  Making changes to Guidelines is basically submitting a patch. [4]
> 
> Third, as a technical community we can make the capabilities we want score better.  So for example: we could make nova image use glance v2, or we could deprecate those APIs (per above, you do not have to wait on DefCore for that to happen), or we could send patches to the client SDKs that OpenStack users are relying on to make those capabilities supported.

I'll just note the irony of asking "How do I provide requirements
feedback" and getting "Submit a patch" as the answer. ;-)

I'll try to find some time to figure out how the capabilities
specifications work, and learn about what each of the tests means
and how it works, and then how to submit the patch under the review
process you use. It's likely to take me a while, so if someone on
the "inside" who understands those things better wants to collaborate
more closely while I come up to speed, I'm happy to do that off-list.

Doug

> 
> [1] http://markvoelker.github.io/blog/defcore-misconceptions-1/
> [2] https://review.openstack.org/#/c/207467/4/reference/tags/assert_follows-standard-deprecation.rst
> [3] http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015A.rst#n10
> [4] http://git.openstack.org/cgit/openstack/defcore/tree/HACKING.rst
> 
> At Your Service,
> 
> Mark T. Voelker
> 
> > 
> > Doug
> > 
> >> 
> >> At Your Service,
> >> 
> >> Mark T. Voelker
> >> 
> >>> 
> >>> 
> >>>> At Your Service,
> >>>> 
> >>>> Mark T. Voelker
> >>> 
> >>> Glad to be of service, too,
> >>> brian
> >>> 
> >>> 
> >> 
> > 
> 


From armamig at gmail.com  Sun Sep 27 16:39:37 2015
From: armamig at gmail.com (Armando M.)
Date: Sun, 27 Sep 2015 09:39:37 -0700
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci
 testing requirements for OpenStack Compatible mark
In-Reply-To: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
References: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
Message-ID: <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>

On 25 September 2015 at 15:40, Chris Hoge <chris at openstack.org> wrote:

> In November, the OpenStack Foundation will start requiring vendors
> requesting
> new "OpenStack Compatible" storage driver licenses to start passing the
> Cinder
> third-party integration tests.

> The new program was approved by the Board at
> the July meeting in Austin and follows the improvement of the testing
> standards
> and technical requirements for the "OpenStack Powered" program. This is all
> part of the effort of the Foundation to use the OpenStack brand to
> guarantee a
> base-level of interoperability and consistency for OpenStack users and to
> protect the work of our community of developers by applying a trademark
> backed
> by their technical efforts.
>
> The Cinder driver testing is the first step of a larger effort to apply
> community determined standards to the Foundation marketing programs. We're
> starting with Cinder because it has a successful testing program in place,
> and
> we have plans to extend the program to network drivers and OpenStack
> applications. We're going to require CI testing for new "OpenStack Compatible"
> storage licenses starting on November 1, and plan to roll out network and
> application testing in 2016.
>
> One of our goals is to work with project leaders and developers to help us
> define and implement these test programs. The standards for third-party
> drivers and applications should be determined by the developers and users
> in our community, who are experts in how to maintain the quality of the
> ecosystem.
>
> We welcome any feedback on this program, and are also happy to answer any
> questions you might have.
>

Thanks for spearheading this effort.

Do you have more information/pointers about the program, and how Cinder
in particular is paving the way for other projects to follow?

Thanks,
Armando


> Thanks!
>
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
>

From bertrand.lallau at gmail.com  Sun Sep 27 19:20:16 2015
From: bertrand.lallau at gmail.com (Bertrand LALLAU)
Date: Sun, 27 Sep 2015 21:20:16 +0200
Subject: [openstack-dev] [lbaas] [octavia] Proposing new meeting time
 Wednesday 16:00 UTC
In-Reply-To: <298EAAEC-6FD5-428A-B9F0-9E19A3BF08A9@parksidesoftware.com>
References: <D22B2C34.185FD%german.eichberger@hpe.com>
 <298EAAEC-6FD5-428A-B9F0-9E19A3BF08A9@parksidesoftware.com>
Message-ID: <CAGrjEp+t8-cVcf=BfR-riLJna1pDgY8SnZPpkV0uJzv97O45Gw@mail.gmail.com>

Hi all,
Living in France I prefer the previous meeting time (20:00 UTC)
regards,

On Sat, Sep 26, 2015 at 3:30 AM, Doug Wiegley <dougwig at parksidesoftware.com>
wrote:

> Works for me.
>
> Doug
>
>
> > On Sep 25, 2015, at 5:58 PM, Eichberger, German <
> german.eichberger at hpe.com> wrote:
> >
> > All,
> >
> > In our last meeting [1] we discussed moving the meeting earlier to
> > accommodate participants from the EMEA region. I am therefore proposing
> to
> > move the meeting to 16:00 UTC on Wednesday. Please respond to this e-mail
> > if you have alternate suggestions. I will send out another e-mail
> > announcing the new time and the date we will start with that.
> >
> > Thanks,
> > German
> >
> > [1]
> >
> > http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.00.log.html
> >
> >
> >
>
>

From rbryant at redhat.com  Sun Sep 27 20:14:45 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Sun, 27 Sep 2015 16:14:45 -0400
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
Message-ID: <56084E35.20101@redhat.com>

On 09/27/2015 02:26 AM, WANG, Ming Hao (Tony T) wrote:
> Russell,
> 
> Thanks for your valuable information.
> I understand that Geneve is a tunnel format for network virtualization encapsulation, just like VxLAN.
> But I'm still confused about the connection between Geneve and VTEP.
> I assume VTEP stands for "VxLAN Tunnel Endpoint", which should be used for VxLAN only.
> 
> Does it become a "common tunnel endpoint" in OVN, one that can also be used as a tunnel endpoint for Geneve?

When using VTEP gateways, both the Geneve and VxLAN protocols are being
used.  Packets between hypervisors are sent using Geneve.  Packets
between a hypervisor and the gateway are sent using VxLAN.

-- 
Russell Bryant


From rbryant at redhat.com  Sun Sep 27 20:18:11 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Sun, 27 Sep 2015 16:18:11 -0400
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <CAO_F6JPMR=kwdFXrOPs=u111-siR5LMLg7z_tPzf00OznN1rww@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E43E@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <CAO_F6JPMR=kwdFXrOPs=u111-siR5LMLg7z_tPzf00OznN1rww@mail.gmail.com>
Message-ID: <56084F03.3050903@redhat.com>

On 09/27/2015 06:50 AM, Kevin Benton wrote:
> Assuming it implements the normal provider networks API, you just
> specify the segmentation_id when you create the network. 
> 
> neutron net-create NET_NAME --provider:network_type vlan
> --provider:physical_network physnet1 --provider:segmentation_id VLAN_TAG

Yes, the OVN plugin will implement the normal provider networks API.
It's a WIP.

My first goal was to just implement support for "--provider:network_type
flat" end to end.  I have the OVN side merged and now I'm working on the
Neutron plugin piece.  Once that's done, I'll go back and add VLAN
support, which shouldn't be very difficult at that point.  I'm aiming to
have all of that done by the Tokyo summit (among other things).

> On Sun, Sep 27, 2015 at 9:50 AM, WANG, Ming Hao (Tony T)
> <tony.a.wang at alcatel-lucent.com <mailto:tony.a.wang at alcatel-lucent.com>>
> wrote:
> 
>     Russell,
> 
>     Another question is about "localnet". It is a very useful feature. :-)
> 
>     Is it possible to assign which VLAN tag will be used for a specific
>     provider network?
>     In your example in
>     https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905
>     : "physnet1" is used as physical network, and br-eth1 is used as the
>     provider network OpenFlow switch.
>     If we can assign the VLAN tag of the provider network, is the VLAN
>     tag translation done by "br-int" or "br-eth1"?


-- 
Russell Bryant


From matt at mattfischer.com  Sun Sep 27 22:36:37 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Sun, 27 Sep 2015 16:36:37 -0600
Subject: [openstack-dev] [Openstack-operators] [puppet] feedback request
	about puppet-keystone
In-Reply-To: <56057E07.6070201@redhat.com>
References: <56002E35.2010807@redhat.com>
	<56057E07.6070201@redhat.com>
Message-ID: <CAHr1CO9qCyLetUmx+3WKMUuB+y7ARLJUysvtNtAix2JVrdcK9A@mail.gmail.com>

On Fri, Sep 25, 2015 at 11:01 AM, Emilien Macchi <emilien at redhat.com> wrote:
>
>
> So after 5 days, here is a bit of feedback (13 people did the poll [1]):
>
> 1/ Providers
> Except for 1, most of people are managing a few number of Keystone
> users/tenants.
> I would like to know if it's because the current implementation (using
> openstackclient) is too slow or just because they don't need to do that
> (they use bash, sdk, ansible, etc).
>

I'm generally thinking the opposite of you: I'd actually love to know the
use case for anyone managing more than a few users with Puppet. We have
service users and a few accounts for things like backups, monitoring etc.
Beyond that, the accounts are for actual users and they have to follow an
intake and project creation process that also handles things like networks.
We found this workflow much easier to script with python and it can also be
done without a deploy. This is all handled by a manager after ensuring that
OpenStack is the right solution for them, finding project requirements,
etc. So I think this is what many folks are doing, their user creation
workflow just doesn't mesh with puppet and their puppet deployment process.
 (This also discounts password management, something I don't want to be
doing for users with puppet)



>
> 2/ Features you want
>
> * "Configuration of federation via shibboleth":
> WIP on https://review.openstack.org/#/c/216821/
>
> * "Configuration of federation via mod_mellon":
> Will come after shibboleth I guess.
>
> * "Allow to configure websso"":
> See
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/enabling-federation.html
>
> * "Management of fernet keys":
> nothing *yet* in our roadmap AFAIK; adding it to our backlog [2]
>

I looked into this when we deployed but could not come up with a great
solution that didn't involve declaring a master node on which keys were
generated. Would be happy to re-investigate or work with someone on this.


>
> * "Support for hybrid domain configurations (e.g. using both LDAP and
> built in database backend)":
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html
>
> * "Full v3 API support (depends on other modules beyond just
> puppet-keystone)":
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/api-v3-support.html
>
> * "the ability to upgrade modules independently of one another, like we
> do in production - currently the puppet dependencies dictate the order
> in which we do upgrades more than the OpenStack dependencies":
>
> During the last Summit, we decided [3] as a community that our modules
> branches will only support the OpenStack release of the branch.
> ie: stable/kilo supports OpenStack 2015.1 (Kilo). Maybe you can deploy
> Juno or Liberty with it, but our community does not support it.
> To give a little background, we already discussed about it [4] on the ML.
> Our interface is 100% (or should be) backward compatible for at least
> one full cycle, so you should not have issue when using a new version of
> the module with the same parameters. Though (and indeed), you need to
> keep your modules synchronized, especially because we have libraries and
> common providers (in puppet-keystone).
> AFAIK, OpenStack also works like this with openstack/requirements.
> I'm not sure you can run Glance Kilo with Oslo Juno (maybe I'm wrong).
> What you're asking would be technically hard because we would have to
> support old versions of our providers & libraries, with a lot of
> backward compatible & legacy code in place, while we already do a good
> job in the parameters (interface).
> If you have any serious proposal, we would be happy to discuss design
> and find a solution.
>
> 3/ What we could improve in Puppet Keystone (and in general, regarding
> the answers)
>
> * "(...) but it would be nice to be able to deploy master and the most
> recent version immediately rather than wait. Happy to get involved with
> that as our maturity improves and we actually start to use the current
> version earlier. Contribution is hard when you folk are ahead of the
> game, any fixes and additions we have are ancient already":
>
> I would like to understand the issues here:
> do you have problems to contribute?
> is your issue "a feature is in master and not in stable/*" ? If that's
> the case, that means we can do a better job in backport policy.
> Something we have already discussed with each other, and I hope our group
> is aware of that.
>
> * "We were using keystone_user_role until we had huge compilation times
> due to the matrix (tenant x role x user) that is not scalable. With
> every single user and tenant on the environment, the catalog compilation
> increased. An improvement on that area will be useful."
>
> I understand the frustration and we are working on it [5].
>
> * "Currently does not handle deployment of hybrid domain configurations."
>
> Ditto:
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/liberty/support-for-keystone-domain-configuration.html
>
>
> I liked running a poll like this, if you don't mind I'll take time to
> prepare a bigger poll so we can gather more and more feedback, because
> it's really useful. Thanks for that.
>
>
> Discussion is open on this thread about features/concerns mentioned in
> the poll.
>
>
> [1]
>
> https://docs.google.com/forms/d/1Z6IGeJRNmX7xx0Ggmr5Pmpzq7BudphDkZE-3t4Q5G1k/viewanalytics
> [2] https://trello.com/c/HjiWUng3/65-puppet-keystone-manage-fernet-keys
> [3]
>
> http://specs.openstack.org/openstack/puppet-openstack-specs/specs/kilo/master-policy.html
> [4]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069147.html
> [5] https://bugs.launchpad.net/puppet-keystone/+bug/1493450
> --
> Emilien Macchi
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

From emilien.macchi at gmail.com  Mon Sep 28 00:03:04 2015
From: emilien.macchi at gmail.com (Emilien Macchi)
Date: Sun, 27 Sep 2015 20:03:04 -0400
Subject: [openstack-dev] [puppet] Fwd: Action required:
 stackforge/puppet-openstack project move
In-Reply-To: <E1ZfazO-0004NC-FO@lists.openstack.org>
References: <E1ZfazO-0004NC-FO@lists.openstack.org>
Message-ID: <560883B8.6070704@gmail.com>

should we delete it?

FYI: the module was deprecated in the Juno release.

I vote for yes.


-------- Forwarded Message --------
Subject: Action required: stackforge/puppet-openstack project move
Date: Fri, 25 Sep 2015 21:57:10 +0000
From: OpenStack Infrastructure Team <openstack-infra at lists.openstack.org>
To: clayton.oneill at twcable.com, colleen at gazlene.net, bodepd at gmail.com,
dprince at redhat.com, emilien at redhat.com, francois.charlier at redhat.com,
mgagne at iweb.com, matt at mattfischer.com, woppin at gmail.com,
sbadia at redhat.com, xingchao at unitedstack.com, yguenane at redhat.com

You appear to be associated with the stackforge/puppet-openstack project.

The stackforge/ git repository namespace is being retired[1], and all
projects within need to move to the openstack/ namespace or, in the
case of inactive projects, identified as such and made read-only.

For more background information, see this mailing list post and TC
resolution:

  http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html

http://governance.openstack.org/resolutions/20150615-stackforge-retirement.html

To ensure we have correctly identified all of the projects, we have
created a wiki page listing the projects that should be moved and the
projects that should be retired.  You may find it here:

  https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement

Please add the stackforge/puppet-openstack project to the appropriate
list on this page ("Active Projects to Move" or "Inactive Projects to
Retire") as soon as possible.

Projects that have not self-categorized by Friday October 2 will be
assumed to be inactive and placed on the list of "Inactive Projects to
Retire".

Thank you for attending to this promptly,

The OpenStack Infrastructure Team




From matt at mattfischer.com  Mon Sep 28 00:37:59 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Sun, 27 Sep 2015 18:37:59 -0600
Subject: [openstack-dev] [puppet] Fwd: Action required:
 stackforge/puppet-openstack project move
In-Reply-To: <560883B8.6070704@gmail.com>
References: <E1ZfazO-0004NC-FO@lists.openstack.org>
 <560883B8.6070704@gmail.com>
Message-ID: <CAHr1CO89VVR-apnOugesXZjkLU-CCKTGE95hDHh+D+1c5p7ZUQ@mail.gmail.com>

I'm not sure what value it has anymore, but why not just make it read-only?
On Sep 27, 2015 6:09 PM, "Emilien Macchi" <emilien.macchi at gmail.com> wrote:

> should we delete it?
>
> FYI: the module is deprecated in Juno release.
>
> I vote for yes.
>
>
> -------- Forwarded Message --------
> Subject: Action required: stackforge/puppet-openstack project move
> Date: Fri, 25 Sep 2015 21:57:10 +0000
> From: OpenStack Infrastructure Team <openstack-infra at lists.openstack.org>
> To: clayton.oneill at twcable.com, colleen at gazlene.net, bodepd at gmail.com,
> dprince at redhat.com, emilien at redhat.com, francois.charlier at redhat.com,
> mgagne at iweb.com, matt at mattfischer.com, woppin at gmail.com,
> sbadia at redhat.com, xingchao at unitedstack.com, yguenane at redhat.com
>
> You appear to be associated with the stackforge/puppet-openstack project.
>
> The stackforge/ git repository namespace is being retired[1], and all
> projects within need to move to the openstack/ namespace or, in the
> case of inactive projects, identified as such and made read-only.
>
> For more background information, see this mailing list post and TC
> resolution:
>
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/072140.html
>
>
> http://governance.openstack.org/resolutions/20150615-stackforge-retirement.html
>
> To ensure we have correctly identified all of the projects, we have
> created a wiki page listing the projects that should be moved and the
> projects that should be retired.  You may find it here:
>
>   https://wiki.openstack.org/wiki/Stackforge_Namespace_Retirement
>
> Please add the stackforge/puppet-openstack project to the appropriate
> list on this page ("Active Projects to Move" or "Inactive Projects to
> Retire") as soon as possible.
>
> Projects that have not self-categorized by Friday October 2 will be
> assumed to be inactive and placed on the list of "Inactive Projects to
> Retire".
>
> Thank you for attending to this promptly,
>
> The OpenStack Infrastructure Team
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150927/1e8d9bf4/attachment.html>

From fungi at yuggoth.org  Mon Sep 28 00:44:05 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 28 Sep 2015 00:44:05 +0000
Subject: [openstack-dev] [puppet] Fwd: Action required:
 stackforge/puppet-openstack project move
In-Reply-To: <CAHr1CO89VVR-apnOugesXZjkLU-CCKTGE95hDHh+D+1c5p7ZUQ@mail.gmail.com>
References: <E1ZfazO-0004NC-FO@lists.openstack.org>
 <560883B8.6070704@gmail.com>
 <CAHr1CO89VVR-apnOugesXZjkLU-CCKTGE95hDHh+D+1c5p7ZUQ@mail.gmail.com>
Message-ID: <20150928004405.GS4731@yuggoth.org>

On 2015-09-27 18:37:59 -0600 (-0600), Matt Fischer wrote:
> On Sep 27, 2015 6:09 PM, "Emilien Macchi" <emilien.macchi at gmail.com> wrote:
> > 
> > should we delete it?
> 
> I'm not sure what value it has anymore but why not just readonly?

We aren't deleting any repos, just making them read-only and
committing a change to replace all the files with a README
indicating the repo is retired (and indicating to look at the HEAD^1
commit for its prior state).
-- 
Jeremy Stanley


From davanum at gmail.com  Mon Sep 28 02:02:14 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Sun, 27 Sep 2015 22:02:14 -0400
Subject: [openstack-dev] [oslo][doc] Oslo Doc Sprint final Tally (was Re:
 [oslo][doc] Oslo doc sprint 9/24-9/25)
Message-ID: <CANw6fcFjTuzkEyaV1QwQkqXThyDRv6dvYQX7BLS1E4B7B+SV1A@mail.gmail.com>

Hello,

We had a very good doc sprint. Thanks to everyone who contributed.

Final tally: we merged 88 commits across 20+ repos and gained one core
committer! :)

A huge welcome to Brant Knudson. The full list of reviews is at the bottom
of the etherpad [1].

Thanks,
Dims

[1] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint

On Wed, Sep 23, 2015 at 7:18 PM, Davanum Srinivas <davanum at gmail.com> wrote:

> Reminder, we are doing the Doc Sprint tomorrow. Please help out with what
> ever item or items you can.
>
> Thanks,
> Dims
>
> On Wed, Sep 16, 2015 at 5:40 PM, James Carey <bellerophon at flyinghorsie.com
> > wrote:
>
>> In order to improve the Oslo libraries documentation, the Oslo team is
>> having a documentation sprint from 9/24 to 9/25.
>>
>> We'll kick things off at 14:00 UTC on 9/24 in the
>> #openstack-oslo-docsprint IRC channel and we'll use an etherpad [0].
>>
>> All help is appreciated.   If you can help or have suggestions for
>> areas of focus, please update the etherpad.
>>
>> [0] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150927/22e47b19/attachment.html>

From tony.a.wang at alcatel-lucent.com  Mon Sep 28 03:01:25 2015
From: tony.a.wang at alcatel-lucent.com (WANG, Ming Hao (Tony T))
Date: Mon, 28 Sep 2015 03:01:25 +0000
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <56084F03.3050903@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E43E@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <CAO_F6JPMR=kwdFXrOPs=u111-siR5LMLg7z_tPzf00OznN1rww@mail.gmail.com>
 <56084F03.3050903@redhat.com>
Message-ID: <F1F484A52BD63243B5497BFC9DE26E5A1A78E574@SG70YWXCHMBA05.zap.alcatel-lucent.com>

Russell and Kevin,

Thanks for your detailed information!
I got it.

Thanks again,
Tony

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com] 
Sent: Monday, September 28, 2015 4:18 AM
To: Kevin Benton; OpenStack Development Mailing List (not for usage questions)
Cc: WANG, Ming Hao (Tony T)
Subject: Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

On 09/27/2015 06:50 AM, Kevin Benton wrote:
> Assuming it implements the normal provider networks API, you just 
> specify the segmentation_id when you create the network.
> 
> neutron net-create NET_NAME --provider:network_type vlan 
> --provider:physical_network physnet1 --provider:segmentation_id 
> VLAN_TAG

Yes, the OVN plugin will implement the normal provider networks API.
It's a WIP.

My first goal was to just implement support for "--provider:network_type flat" end to end.  I have the OVN side merged and now I'm working on the Neutron plugin piece.  Once that's done, I'll go back and add VLAN support, which shouldn't be very difficult at that point.  I'm aiming to have all of that done by the Tokyo summit (among other things).

> On Sun, Sep 27, 2015 at 9:50 AM, WANG, Ming Hao (Tony T) 
> <tony.a.wang at alcatel-lucent.com 
> <mailto:tony.a.wang at alcatel-lucent.com>>
> wrote:
> 
>     Russell,
> 
>     Another question is about "localnet". It is a very useful feature. 
> :-)
> 
>     Is it possible to assign which VLAN tag will be used for a specific
>     provider network?
>     In your example in
>     https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905
>     : "physnet1" is used as physical network, and br-eth1 is used as the
>     provider network OpenFlow switch.
>     If we can assign the VLAN tag of the provider network, is the VLAN
>     tag translation done by "br-int" or "br-eth1"?


--
Russell Bryant

From chris.friesen at windriver.com  Mon Sep 28 04:19:52 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Sun, 27 Sep 2015 22:19:52 -0600
Subject: [openstack-dev] [cinder] should we use fsync when writing iscsi
 config file?
In-Reply-To: <m0oagpbm6w.fsf@danjou.info>
References: <5601CD90.2050102@windriver.com>	<CAJ3HoZ3KP4B2gVqoD2VL+6DiQ6Js4eDbOpdjhpt8iKaZKYqWtw@mail.gmail.com>	<BLU436-SMTP9084DAF04DE3D527AD01BAD8450@phx.gbl>	<5601EEF5.9090301@windriver.com>
 <m0oagpbm6w.fsf@danjou.info>
Message-ID: <5608BFE8.40707@windriver.com>

On 09/26/2015 02:48 AM, Julien Danjou wrote:
> On Tue, Sep 22 2015, Chris Friesen wrote:
>
>> On 09/22/2015 05:48 PM, Joshua Harlow wrote:
>>> A present:
>>>
>>>   >>> import contextlib
>>>   >>> import os
>>>   >>>
>>>   >>> @contextlib.contextmanager
>>> ... def synced_file(path, mode='wb'):
>>> ...   with open(path, mode) as fh:
>>> ...      yield fh
>>> ...      os.fdatasync(fh.fileno())
>>> ...
>>>   >>> with synced_file("/tmp/b.txt") as fh:
>>> ...    fh.write("b")
>>
>> Isn't that missing an "fh.flush()" somewhere before the fdatasync()?
>
> Unless proven otherwise, close() does a flush().

There's no close() before the fdatasync() in the above code.  (And it wouldn't 
make sense anyway because you need the open fd to do the fdatasync().)

Chris
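For reference, a version of the snippet from the thread with the missing
flush() added before fdatasync() might look like this (a sketch, not code
from any OpenStack repo; text mode is used so it runs on Python 3 as well):

```python
import contextlib
import os

@contextlib.contextmanager
def synced_file(path, mode='wb'):
    # Open the file, hand it to the caller, then flush Python's
    # userspace buffer and fdatasync() while the fd is still open.
    with open(path, mode) as fh:
        yield fh
        fh.flush()                 # flush the Python-level buffer to the OS
        os.fdatasync(fh.fileno())  # ask the kernel to write file data to disk

with synced_file("/tmp/b.txt", mode="w") as fh:
    fh.write("b")
```

Note the ordering: flush() pushes Python's buffered writes into the kernel,
and only then can fdatasync() guarantee the data has reached the disk.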



From donald.d.dugger at intel.com  Mon Sep 28 04:43:33 2015
From: donald.d.dugger at intel.com (Dugger, Donald D)
Date: Mon, 28 Sep 2015 04:43:33 +0000
Subject: [openstack-dev] [nova-scheduler] Scheduler sub-group meeting -
	Agenda 9/21
Message-ID: <6AF484C0160C61439DE06F17668F3BCB540052AD@ORSMSX114.amr.corp.intel.com>

Meeting on #openstack-meeting-alt at 1400 UTC (8:00AM MDT)



1) Mitaka planning

2) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/f1d8148d/attachment.html>

From brandon.logan at RACKSPACE.COM  Mon Sep 28 05:12:31 2015
From: brandon.logan at RACKSPACE.COM (Brandon Logan)
Date: Mon, 28 Sep 2015 05:12:31 +0000
Subject: [openstack-dev] [lbaas] [octavia] Proposing new meeting time
 Wednesday 16:00 UTC
In-Reply-To: <D22B2C34.185FD%german.eichberger@hpe.com>
References: <D22B2C34.185FD%german.eichberger@hpe.com>
Message-ID: <1443417151.4568.6.camel@localhost>

Are there a lot of people requesting this meeting change?

Thanks,
Brandon

On Fri, 2015-09-25 at 23:58 +0000, Eichberger, German wrote:
> All,
> 
> In our last meeting [1] we discussed moving the meeting earlier to
> accommodate participants from the EMEA region. I am therefore proposing to
> move the meeting to 16:00 UTC on Wednesday. Please respond to this e-mail
> if you have alternate suggestions. I will send out another e-mail
> announcing the new time and the date we will start with that.
> 
> Thanks,
> German
> 
> [1] 
> http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.
> 00.log.html
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From masoom.alam at wanclouds.net  Mon Sep 28 05:43:05 2015
From: masoom.alam at wanclouds.net (masoom alam)
Date: Sun, 27 Sep 2015 22:43:05 -0700
Subject: [openstack-dev] KILO: neutron port-update --allowed-address-pairs
 action=clear throws an exception
Message-ID: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>

Can anybody highlight why the following command is throwing an exception:

*Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
--allowed-address-pairs action=clear

*Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
[req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
recent call last):
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
method(request=request, **args)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
allow_bulk=self._allow_bulk)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
prepare_request_body
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
attr_vals['validate'][rule])
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
_validate_allowed_address_pairs
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
len(address_pairs) > cfg.CONF.max_allowed_address_pair:
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object of
type 'NoneType' has no len()
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource



There is a similar bug filed on Launchpad for Havana:
https://bugs.launchpad.net/juniperopenstack/+bug/1351979 . However, there
is no fix, and the workaround (using curl) mentioned on the bug is also not
working for Kilo; it was working for Havana and Icehouse. Any pointers?

Thanks
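For illustration only (this is not the actual Neutron code, and the names
below are stand-ins for cfg.CONF.max_allowed_address_pair and the extension
helper), the failing check in the traceback could tolerate a cleared
attribute by normalizing None to an empty list before calling len():

```python
# Hypothetical sketch of the validation that raises the TypeError above.
MAX_ALLOWED_ADDRESS_PAIR = 10  # stand-in for the real config option

def validate_allowed_address_pairs(address_pairs):
    """Return an error message string, or None if the value is acceptable."""
    if address_pairs is None:
        # "action=clear" sends no pairs at all; treat that as an empty
        # list instead of calling len() on None.
        address_pairs = []
    if not isinstance(address_pairs, list):
        return "Allowed address pairs must be a list."
    if len(address_pairs) > MAX_ALLOWED_ADDRESS_PAIR:
        return "Too many allowed address pairs."
    return None

# With the guard in place, a cleared attribute validates cleanly:
assert validate_allowed_address_pairs(None) is None
```

This only sketches the shape of a fix; the real behavior in master (see the
reply below) rejects non-list values rather than crashing on None.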
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150927/846e9b3b/attachment.html>

From ayshihanzhang at 126.com  Mon Sep 28 06:29:24 2015
From: ayshihanzhang at 126.com (shihanzhang)
Date: Mon, 28 Sep 2015 14:29:24 +0800 (CST)
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
Message-ID: <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>

Which branch do you use? This problem does not exist in the master branch.





At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net> wrote:

Can anybody highlight why the following command is throwing an exception:


Command# neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3 --allowed-address-pairs action=clear


Error:  2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     allow_bulk=self._allow_bulk)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in prepare_request_body
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     attr_vals['validate'][rule])
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in _validate_allowed_address_pairs
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object of type 'NoneType' has no len()
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource






There is a similar bug filed at Lauchpad for Havana https://bugs.launchpad.net/juniperopenstack/+bug/1351979 .However there is no fix and the work around  - using curl, mentioned on the bug is also not working for KILO...it was working for havana and Icehouse....any pointers...?


Thanks



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/42b1d8be/attachment.html>

From masoom.alam at wanclouds.net  Mon Sep 28 06:36:44 2015
From: masoom.alam at wanclouds.net (masoom alam)
Date: Sun, 27 Sep 2015 23:36:44 -0700
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
Message-ID: <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>

stable KILO

Are you saying I should check out the latest code? Also, can you please
confirm that you have tested this at your end and saw no problem?


Thanks

On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com> wrote:

> which branch do you use?  there is not this problem in master branch.
>
>
>
>
>
> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net> wrote:
>
> Can anybody highlight why the following command is throwing an exception:
>
> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
> --allowed-address-pairs action=clear
>
> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
> recent call last):
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
> method(request=request, **args)
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
> allow_bulk=self._allow_bulk)
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
> prepare_request_body
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
> attr_vals['validate'][rule])
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
> _validate_allowed_address_pairs
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object of
> type 'NoneType' has no len()
> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>
>
>
> There is a similar bug filed at Lauchpad for Havana
> https://bugs.launchpad.net/juniperopenstack/+bug/1351979 .However there
> is no fix and the work around  - using curl, mentioned on the bug is also
> not working for KILO...it was working for havana and Icehouse....any
> pointers...?
>
> Thanks
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150927/fda672aa/attachment.html>

From ayshihanzhang at 126.com  Mon Sep 28 06:51:55 2015
From: ayshihanzhang at 126.com (shihanzhang)
Date: Mon, 28 Sep 2015 14:51:55 +0800 (CST)
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
Message-ID: <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>

I don't see any exception using the command below:


root at szxbz:/opt/stack/neutron# neutron port-update 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
Allowed address pairs must be a list.




At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net> wrote:

stable KILO


shall I checkout the latest code are you saying this...Also can you please confirm if you have tested this thing at your end....and there was no problem...




Thanks


On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com> wrote:

which branch do you use?  there is not this problem in master branch.






At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net> wrote:

Can anybody highlight why the following command is throwing an exception:


Command# neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3 --allowed-address-pairs action=clear


Error:  2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     allow_bulk=self._allow_bulk)
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in prepare_request_body
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     attr_vals['validate'][rule])
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in _validate_allowed_address_pairs
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object of type 'NoneType' has no len()
2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource






There is a similar bug filed at Lauchpad for Havana https://bugs.launchpad.net/juniperopenstack/+bug/1351979 .However there is no fix and the work around  - using curl, mentioned on the bug is also not working for KILO...it was working for havana and Icehouse....any pointers...?


Thanks










__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/1baba8e5/attachment.html>

From samuel.bartel.pro at gmail.com  Mon Sep 28 08:13:02 2015
From: samuel.bartel.pro at gmail.com (Samuel Bartel)
Date: Mon, 28 Sep 2015 10:13:02 +0200
Subject: [openstack-dev] [Fuel][Plugins] add health check for plugins
In-Reply-To: <CA+vYeFqOeRQgEzuoSx7mgEhNcVqfNYBweUS=3we8-tTcMD0-Mw@mail.gmail.com>
References: <CAGq0MBgitgb5ep1vCYr45hrX7u1--gBEP3_LCcz0xoCCQw9asQ@mail.gmail.com>
 <CAOq3GZVoB_7rOU0zr=vaFB3bC4W1eNoyJV-B6Rm2xZzGnTHtRg@mail.gmail.com>
 <2d9b21315bb63501f34fbbdeb36fc0cb@mail.gmail.com>
 <CA+vYeFqOeRQgEzuoSx7mgEhNcVqfNYBweUS=3we8-tTcMD0-Mw@mail.gmail.com>
Message-ID: <CAGq0MBjgMPOC3-8e6cyAFpjfeUqKcAsuq6MRR3+=auLkk2MAaA@mail.gmail.com>

Hi,

Totally agree with you, Andrey.
Other use cases could be:
- for the Ironic plugin, add a test to validate that Ironic is properly
deployed
- for the LMA plugin, check that metrics and logs are properly collected
and that the ELK, Nagios, or Grafana dashboards are accessible
- for the Cinder NetApp multi-backend plugin, check that the different
backend types can be created
and so on.

So it would be very interesting to have extensibility for OSTF tests.


Samuel

2015-09-08 0:05 GMT+02:00 Andrey Danin <adanin at mirantis.com>:

> Hi.
>
> Sorry for bringing this thread back from the grave but it looks quite
> interesting to me.
>
> Sheena, could you please explain how pre-deployment sanity checks should
> look like? I don't get what it is.
>
> From the Health Check point of view plugins may be divided to two groups:
>
> 1) A plugin that doesn't change an already covered functionality thus
> doesn't require extra tests implemented. Such plugins may be Contrail and
> almost all SDN plugins, Glance or Cinder backend plugins, and others which
> don't bring any changes in OSt API or any extra OSt components.
>
> 2) A plugin that adds new elements into OSt or changes API or a standard
> behavior. Such plugins may be Contrail (because it actually adds Contrail
> Controller which may be covered by Health Check too), Cisco ASR plugin
> (because it always creates HA routers), some Swift plugins (we don't have
> Swift/S3 API covered by Health Check now at all), SR-IOV plugins (because
> they require special network preparation and extra drivers to be presented
> in an image), when a combination of different ML2 plugins or hypervisors
> deployed (because you need to test all network underlayers or HVs).
>
> So, all that means we need to make OSTF extendible by Fuel plugin's tests
> eventually.
>
>
> On Mon, Aug 10, 2015 at 5:17 PM, Sheena Gregson <sgregson at mirantis.com>
> wrote:
>
>> I like that idea a lot -- I also think there would be value in adding
>> pre-deployment sanity checks that could be called from the Health Check
>> screen prior to deployment.  Thoughts?
>>
>>
>>
>> *From:* Simon Pasquier [mailto:spasquier at mirantis.com]
>> *Sent:* Monday, August 10, 2015 9:00 AM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev at lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [Fuel][Plugins] add health check for
>> plugins
>>
>>
>>
>> Hello Samuel,
>>
>> This looks like an interesting idea. Do you have any concrete example to
>> illustrate your point (with one of your plugins maybe)?
>>
>> BR,
>>
>> Simon
>>
>>
>>
>> On Mon, Aug 10, 2015 at 12:04 PM, Samuel Bartel <
>> samuel.bartel.pro at gmail.com> wrote:
>>
>> Hi all,
>>
>>
>>
>> Actually, with Fuel plugins there are tests for the plugins used by the
>> CI/CD, but after a deployment it is not possible for the user to easily
>> test whether a plugin is correctly deployed or not.
>>
>> I am wondering if it could be interesting to improve the Fuel plugin
>> framework in order to be able to define tests for each plugin, which
>> would be added to the Health Check. The user would then be able to test
>> the plugin when testing the deployment.
>>
>>
>>
>> What do you think about that?
>>
>>
>>
>>
>>
>> Kind regards
>>
>>
>>
>> Samuel
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/e3916dc6/attachment.html>

From sbauza at redhat.com  Mon Sep 28 08:22:26 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 28 Sep 2015 10:22:26 +0200
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <20150925141255.GG8745@crypt>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt>
Message-ID: <5608F8C2.1060409@redhat.com>



On 25/09/2015 16:12, Andrew Laski wrote:
> On 09/24/15 at 03:13pm, James Penick wrote:
>>>
>>>
>>> At risk of getting too offtopic I think there's an alternate 
>>> solution to
>>> doing this in Nova or on the client side.  I think we're missing 
>>> some sort
>>> of OpenStack API and service that can handle this.  Nova is a low level
>>> infrastructure API and service, it is not designed to handle these
>>> orchestrations.  I haven't checked in on Heat in a while but perhaps 
>>> this
>>> is a role that it could fill.
>>>
>>> I think that too many people consider Nova to be *the* OpenStack API 
>>> when
>>> considering instances/volumes/networking/images and that's not 
>>> something I
>>> would like to see continue.  Or at the very least I would like to see a
>>> split between the orchestration/proxy pieces and the "manage my
>>> VM/container/baremetal" bits
>>
>>
>> (new thread)
>> You've hit on one of my biggest issues right now: As far as many 
>> deployers
>> and consumers are concerned (and definitely what I tell my users within
>> Yahoo): The value of an OpenStack value-stream (compute, network, 
>> storage)
>> is to provide a single consistent API for abstracting and managing those
>> infrastructure resources.
>>
>> Take networking: I can manage Firewalls, switches, IP selection, SDN, 
>> etc
>> through Neutron. But for compute, If I want VM I go through Nova, for
>> Baremetal I can -mostly- go through Nova, and for containers I would 
>> talk
>> to Magnum or use something like the nova docker driver.
>>
>> This means that, by default, Nova -is- the closest thing to a top level
>> abstraction layer for compute. But if that is explicitly against Nova's
>> charter, and Nova isn't going to be the top level abstraction for all
>> things Compute, then something else needs to fill that space. When that
>> happens, all things common to compute provisioning should come out of 
>> Nova
>> and move into that new API. Availability zones, Quota, etc.
>
> I do think Nova is the top level abstraction layer for compute. My 
> issue is when Nova is asked to manage other resources.  There's no API 
> call to tell Cinder "create a volume and attach it to this instance, 
> and create that instance if it doesn't exist."  And I'm not sure why 
> the reverse isn't true.
>
> I want Nova to be the absolute best API for managing compute 
> resources.  It's when someone is managing compute and volumes and 
> networks together that I don't feel that Nova is the best place for 
> that.  Most importantly right now it seems that not everyone is on the 
> same page on this and I think it would be beneficial to come together 
> and figure out what sort of workloads the Nova API is intending to 
> provide.

I totally agree with you on those points:
  - the Nova API should only support CRUD operations for compute VMs
and should no longer manage volumes or networks IMHO, because
managing them creates more problems than it resolves
  - given the above, the Nova API could possibly accept resources from
networks or volumes, but only for placement decisions related to instances.

Though I can also understand that operators sometimes just want a single
tool for creating this kind of relationship between a volume and an
instance (rather than providing a YAML file), IMHO that doesn't
need a top-level API, just a Python client able to do some very simple
orchestration between services, something like openstack-client.

I don't really see much value in a proxy API calling Nova or
Neutron. IMHO, that should still be done by clients, not services.

-Sylvain

>
>>
>> -James
>
>> __________________________________________________________________________ 
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________ 
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From vstinner at redhat.com  Mon Sep 28 08:35:50 2015
From: vstinner at redhat.com (Victor Stinner)
Date: Mon, 28 Sep 2015 10:35:50 +0200
Subject: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core
In-Reply-To: <1443114453-sup-7374@lrrr.local>
References: <1443114453-sup-7374@lrrr.local>
Message-ID: <5608FBE6.9060109@redhat.com>

+1 for Brant

Victor

Le 24/09/2015 19:12, Doug Hellmann a écrit :
> Oslo team,
>
> I am nominating Brant Knudson for Oslo core.
>
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
>
> Please indicate your opinion by responding with +1/-1 as usual.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From efedorova at mirantis.com  Mon Sep 28 08:39:24 2015
From: efedorova at mirantis.com (Ekaterina Chernova)
Date: Mon, 28 Sep 2015 11:39:24 +0300
Subject: [openstack-dev] [murano] Outreachy: Interest in contributing to
	open-source
In-Reply-To: <CAFAMDXaHJ522on7m6cSObGuoqzxwMCmkc-dRcY4RekjO6WfXrA@mail.gmail.com>
References: <CAFAMDXaHJ522on7m6cSObGuoqzxwMCmkc-dRcY4RekjO6WfXrA@mail.gmail.com>
Message-ID: <CAOFFu8Zi9C4S_fWq=PQ2SRe-Wr=yszy85TS6m+Cz52OHN8EXNg@mail.gmail.com>

Hi Yolande,

welcome to OpenStack and open source!

We gladly introduce you with Murano project!

Topic 'Implementation of tagging heat stacks, created by murano' is already
taken,
but we can offer you another one after talking to you about what you are
interesting in.

You can go to #murano channel on IRC node and reach me (katyafervent) or
someone else.
Also, you can contact me directly by mail.


Regards,
Kate.


On Sat, Sep 26, 2015 at 11:20 PM, Amate Yolande <yolandeamate at gmail.com>
wrote:

> Hello
>
> My name is Amate Yolande from Buea Cameroon. I am new to open source
> and I am interested in participating in the Outreachy. I would like to
> work on the "Murano - Implementation of tagging heat stacks, created
> by murano" project and would like to get some directives on how to
> familiarize myself with the project. So far I have been able to
> install and test OpenStack from dev-stack on a spare computer using a
> local network at home.
>
> Thanks
> Yolande
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/e9b30f14/attachment.html>

From nmakhotkin at mirantis.com  Mon Sep 28 08:46:11 2015
From: nmakhotkin at mirantis.com (Nikolay Makhotkin)
Date: Mon, 28 Sep 2015 11:46:11 +0300
Subject: [openstack-dev] [mistral] Cancelling team meeting today (09/28/2015)
Message-ID: <CACarOJYg8YbuzcVDYHJeDkoqQ4s_MeqDwQXyvTHt-M8OcV-LfA@mail.gmail.com>

Mistral Team,

We're cancelling today's team meeting because a number of key members won't
be able to attend.

The next one is scheduled for 5 Oct.


Best Regards,
Nikolay Makhotkin
@Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/97741473/attachment.html>

From amotoki at gmail.com  Mon Sep 28 08:57:06 2015
From: amotoki at gmail.com (Akihiro Motoki)
Date: Mon, 28 Sep 2015 17:57:06 +0900
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
 <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
Message-ID: <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>

As already mentioned, we need to pass [] (an empty list) rather than None
as allowed_address_pairs.

At the moment it is not supported in Neutron CLI.
This review https://review.openstack.org/#/c/218551/ is trying to fix this
problem.

Akihiro


2015-09-28 15:51 GMT+09:00 shihanzhang <ayshihanzhang at 126.com>:

> I don't see any exception using bellow command
>
> root at szxbz:/opt/stack/neutron# neutron port-update
> 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
> Allowed address pairs must be a list.
>
>
>
> At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net> wrote:
>
> stable KILO
>
> shall I checkout the latest code are you saying this...Also can you please
> confirm if you have tested this thing at your end....and there was no
> problem...
>
>
> Thanks
>
> On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com>
> wrote:
>
>> which branch do you use?  there is not this problem in master branch.
>>
>>
>>
>>
>>
>> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net> wrote:
>>
>> Can anybody highlight why the following command is throwing an exception:
>>
>> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
>> --allowed-address-pairs action=clear
>>
>> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
>> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
>> recent call last):
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
>> method(request=request, **args)
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>> allow_bulk=self._allow_bulk)
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
>> prepare_request_body
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>> attr_vals['validate'][rule])
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
>> _validate_allowed_address_pairs
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
>> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object
>> of type 'NoneType' has no len()
>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>
>>
>>
>> There is a similar bug filed at Lauchpad for Havana
>> https://bugs.launchpad.net/juniperopenstack/+bug/1351979 .However there
>> is no fix and the work around  - using curl, mentioned on the bug is also
>> not working for KILO...it was working for havana and Icehouse....any
>> pointers...?
>>
>> Thanks
>>
>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/4205807b/attachment.html>

From julien at danjou.info  Mon Sep 28 09:01:08 2015
From: julien at danjou.info (Julien Danjou)
Date: Mon, 28 Sep 2015 11:01:08 +0200
Subject: [openstack-dev] [all] Consistent support for SSL termination
	proxies across all API services
In-Reply-To: <m0a8scerdk.fsf@danjou.info> (Julien Danjou's message of "Wed, 23
 Sep 2015 23:51:35 +0200")
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
 <5602ECBF.2020900@dague.net> <m0a8scerdk.fsf@danjou.info>
Message-ID: <m01tdic3zf.fsf@danjou.info>

On Wed, Sep 23 2015, Julien Danjou wrote:


[?]

> I'm willing to clear that out and come with specs and patches if that
> can help. :)

Following-up on myself, I went ahead and I wrote a more complete version
of the current proxy middleware we have, which also supports RFC 7239:

  https://review.openstack.org/#/c/227868/

With that in place, having a proxy (SSL or not) correctly configured in
front of any WSGI application should be completely transparent for the
application, with no need of additional configuration.

If that suits everyone, I'll then propose deprecation of the
oslo_middleware.ssl middleware in favor of this one.
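For reference, the core of such a middleware is small. The sketch below is a hand-rolled illustration (not the actual review or the oslo.middleware code): it rewrites the WSGI url scheme from an RFC 7239 "Forwarded" header, falling back to the legacy X-Forwarded-Proto.

```python
# Hand-rolled illustration of SSL-termination-proxy handling in WSGI.
# Class and helper names are invented for the example.
import re

def _forwarded_proto(header):
    # e.g. Forwarded: for=192.0.2.60;proto=https;host=example.com
    match = re.search(r'proto=(\w+)', header)
    return match.group(1) if match else None

class ProxyHeadersMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        proto = None
        forwarded = environ.get('HTTP_FORWARDED')
        if forwarded:
            proto = _forwarded_proto(forwarded)
        if proto is None:
            proto = environ.get('HTTP_X_FORWARDED_PROTO')
        if proto in ('http', 'https'):
            # the app now builds URLs with the scheme the client used
            environ['wsgi.url_scheme'] = proto
        return self.app(environ, start_response)
```

With something like this in the pipeline, the application never needs proxy-specific configuration, which is the transparency argued for above.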

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/1ebf8838/attachment.pgp>

From sgregson at mirantis.com  Mon Sep 28 09:06:42 2015
From: sgregson at mirantis.com (Sheena Gregson)
Date: Mon, 28 Sep 2015 12:06:42 +0300
Subject: [openstack-dev] [Fuel][Plugins] add health check for plugins
In-Reply-To: <CAGq0MBjgMPOC3-8e6cyAFpjfeUqKcAsuq6MRR3+=auLkk2MAaA@mail.gmail.com>
References: <CAGq0MBgitgb5ep1vCYr45hrX7u1--gBEP3_LCcz0xoCCQw9asQ@mail.gmail.com>
 <CAOq3GZVoB_7rOU0zr=vaFB3bC4W1eNoyJV-B6Rm2xZzGnTHtRg@mail.gmail.com>
 <2d9b21315bb63501f34fbbdeb36fc0cb@mail.gmail.com>
 <CA+vYeFqOeRQgEzuoSx7mgEhNcVqfNYBweUS=3we8-tTcMD0-Mw@mail.gmail.com>
 <CAGq0MBjgMPOC3-8e6cyAFpjfeUqKcAsuq6MRR3+=auLkk2MAaA@mail.gmail.com>
Message-ID: <7854d3e66a593325404281449e53f955@mail.gmail.com>

I just realized I missed this thread when Andrey responded to it;
apologies!



I was thinking of things like confirming the VMware username and password
are accurate, confirming that the SSL certificate chain is valid, etc. Some
of these are not plugin-based, but I think there is value in enabling both
core components and plugins to specify tests that can be run prior to
deployment and that will help ensure the deployment will not fail.



Does that make sense?  In this case, it is not confirming the deployment
was successful (post-deployment), it is checking known parameters for
validity prior to attempting to deploy (pre-deployment).
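To make the idea concrete, a purely hypothetical sketch of a pre-deployment check runner; the check names, signatures, and (ok, message) return shape are assumptions for the example, not Fuel's actual plugin API:

```python
# Hypothetical pre-deployment check runner: each check is a callable
# returning (ok, message), and deployment proceeds only when all pass.
def run_predeploy_checks(checks):
    """Run every check; return (all_ok, list of failure messages)."""
    failures = []
    for check in checks:
        ok, message = check()
        if not ok:
            failures.append(message)
    return (not failures, failures)

def check_vmware_credentials():
    # placeholder: would authenticate against vCenter with the
    # configured username/password
    return True, 'VMware credentials OK'

def check_ssl_chain():
    # placeholder: would validate the configured certificate chain
    return False, 'SSL certificate chain incomplete'

ok, failures = run_predeploy_checks([check_vmware_credentials,
                                     check_ssl_chain])
```

Here the runner reports why a deployment would have failed before any node is touched, which is the pre-deployment (as opposed to post-deployment) flavor of Health Check being described.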



*From:* Samuel Bartel [mailto:samuel.bartel.pro at gmail.com]
*Sent:* Monday, September 28, 2015 11:13 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev at lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel][Plugins] add health check for plugins



Hi,

Totally agree with you, Andrey.

Other use cases could be:

- for the Ironic plugin, add a test to validate that Ironic is properly
deployed

- for the LMA plugin, check that metrics and logs are properly collected
and that the ELK, Nagios, or Grafana dashboards are accessible

- for the Cinder NetApp multi-backend plugin, check that the different
types of backend can be created

and so on.

So it would be very interesting to have extensibility for OSTF tests.





Samuel



2015-09-08 0:05 GMT+02:00 Andrey Danin <adanin at mirantis.com>:

Hi.

Sorry for bringing this thread back from the grave but it looks quite
interesting to me.

Sheena, could you please explain what pre-deployment sanity checks should
look like? I don't quite get what they are.

From the Health Check point of view, plugins may be divided into two groups:


1) A plugin that doesn't change already-covered functionality and thus
doesn't require extra tests. Such plugins may be Contrail and almost all
SDN plugins, Glance or Cinder backend plugins, and others which don't
bring any changes to the OSt API or add any extra OSt components.


2) A plugin that adds new elements to OSt or changes the API or a standard
behavior. Such plugins may be Contrail (because it actually adds the
Contrail Controller, which may be covered by Health Check too), the Cisco
ASR plugin (because it always creates HA routers), some Swift plugins (we
don't have the Swift/S3 API covered by Health Check at all now), SR-IOV
plugins (because they require special network preparation and extra
drivers to be present in an image), or cases where a combination of
different ML2 plugins or hypervisors is deployed (because you need to test
all network underlayers or HVs).

So, all that means we need to make OSTF extensible by Fuel plugins' tests
eventually.



On Mon, Aug 10, 2015 at 5:17 PM, Sheena Gregson <sgregson at mirantis.com>
wrote:

I like that idea a lot. I also think there would be value in adding
pre-deployment sanity checks that could be called from the Health Check
screen prior to deployment.  Thoughts?



*From:* Simon Pasquier [mailto:spasquier at mirantis.com]
*Sent:* Monday, August 10, 2015 9:00 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev at lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel][Plugins] add health check for plugins



Hello Samuel,

This looks like an interesting idea. Do you have any concrete example to
illustrate your point (with one of your plugins maybe)?

BR,

Simon



On Mon, Aug 10, 2015 at 12:04 PM, Samuel Bartel <samuel.bartel.pro at gmail.com>
wrote:

Hi all,



Actually, with Fuel plugins there are tests for the plugins used by the
CI/CD, but after a deployment it is not possible for the user to easily
test whether a plugin is correctly deployed or not.

I am wondering if it could be interesting to improve the Fuel plugin
framework in order to be able to define tests for each plugin which would
be added to the Health Check. The user would then be able to test the
plugin when testing the deployment.



What do you think about that?





Kind regards



Samuel


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Andrey Danin
adanin at mirantis.com
skype: gcon.monolake


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/869d748e/attachment.html>

From duncan.thomas at gmail.com  Mon Sep 28 09:23:06 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Mon, 28 Sep 2015 12:23:06 +0300
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <5608F8C2.1060409@redhat.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
Message-ID: <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>

The trouble with putting more intelligence in the clients is that there are
more clients than just the one we provide, and the more smarts we require
in the clients, the more divergence of functionality we're likely to see.
Also, bugs and slowly percolating bug fixes.
On 28 Sep 2015 11:27, "Sylvain Bauza" <sbauza at redhat.com> wrote:

>
>
> Le 25/09/2015 16:12, Andrew Laski a écrit :
>
>> On 09/24/15 at 03:13pm, James Penick wrote:
>>
>>>
>>>>
>>>> At risk of getting too offtopic I think there's an alternate solution to
>>>> doing this in Nova or on the client side.  I think we're missing some
>>>> sort
>>>> of OpenStack API and service that can handle this.  Nova is a low level
>>>> infrastructure API and service, it is not designed to handle these
>>>> orchestrations.  I haven't checked in on Heat in a while but perhaps
>>>> this
>>>> is a role that it could fill.
>>>>
>>>> I think that too many people consider Nova to be *the* OpenStack API
>>>> when
>>>> considering instances/volumes/networking/images and that's not
>>>> something I
>>>> would like to see continue.  Or at the very least I would like to see a
>>>> split between the orchestration/proxy pieces and the "manage my
>>>> VM/container/baremetal" bits
>>>>
>>>
>>>
>>> (new thread)
>>> You've hit on one of my biggest issues right now: As far as many
>>> deployers
>>> and consumers are concerned (and definitely what I tell my users within
>>> Yahoo): The value of an OpenStack value-stream (compute, network,
>>> storage)
>>> is to provide a single consistent API for abstracting and managing those
>>> infrastructure resources.
>>>
>>> Take networking: I can manage Firewalls, switches, IP selection, SDN, etc
>>> through Neutron. But for compute, If I want VM I go through Nova, for
>>> Baremetal I can -mostly- go through Nova, and for containers I would talk
>>> to Magnum or use something like the nova docker driver.
>>>
>>> This means that, by default, Nova -is- the closest thing to a top level
>>> abstraction layer for compute. But if that is explicitly against Nova's
>>> charter, and Nova isn't going to be the top level abstraction for all
>>> things Compute, then something else needs to fill that space. When that
>>> happens, all things common to compute provisioning should come out of
>>> Nova
>>> and move into that new API. Availability zones, Quota, etc.
>>>
>>
>> I do think Nova is the top level abstraction layer for compute. My issue
>> is when Nova is asked to manage other resources.  There's no API call to
>> tell Cinder "create a volume and attach it to this instance, and create
>> that instance if it doesn't exist."  And I'm not sure why the reverse isn't
>> true.
>>
>> I want Nova to be the absolute best API for managing compute resources.
>> It's when someone is managing compute and volumes and networks together
>> that I don't feel that Nova is the best place for that.  Most importantly
>> right now it seems that not everyone is on the same page on this and I
>> think it would be beneficial to come together and figure out what sort of
>> workloads the Nova API is intending to provide.
>>
>
> I totally agree with you on those points :
>  - nova API should be only supporting CRUD operations for compute VMs and
> should no longer manage neither volumes nor networks IMHO, because it
> creates more problems than it resolves
>  - given the above, nova API could possibly accept resources from networks
> or volumes but only for placement decisions related to instances.
>
> Tho, I can also understand that operators sometimes just want a single
> tool for creating this kind of relationship between a volume and an
> instance (and not provide a YAML file), but IMHO, it doesn't perhaps need a
> top-level API, just a python client able to do some very simple
> orchestration between services, something like openstack-client.
>
> I don't really see a uber-value for getting a proxy API calling Nova or
> Neutron. IMHO, that should still be done by clients, not services.
>
> -Sylvain
>
>
>>
>>> -James
>>>
>>
>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/58226ae5/attachment.html>

From vstinner at redhat.com  Mon Sep 28 09:23:55 2015
From: vstinner at redhat.com (Victor Stinner)
Date: Mon, 28 Sep 2015 11:23:55 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <m0twqibl2w.fsf@danjou.info>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <m0twqibl2w.fsf@danjou.info>
Message-ID: <5609072B.8010707@redhat.com>

Le 25/09/2015 17:00, Julien Danjou a écrit :
> If we wanted to enforce that, we would just have to write a bot setting
> -1 automatically. I'm getting tired of seeing people doing bots' jobs in
> Gerrit.

It may also help to enhance "git review -s" to install a local 
post-commit hook to warn if the commit message doesn't respect OpenStack 
coding style.
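As a rough illustration, such a hook could be a short Python script; the 72-character limit and the warn-only behavior are assumptions for the sketch, not an agreed OpenStack mechanism:

```python
#!/usr/bin/env python
# Illustrative commit-msg hook of the kind "git review -s" could
# install: warn when a message line exceeds 72 characters.
import sys

MAX_LINE = 72

def check_commit_message(text):
    """Return [(line_number, length)] for lines over the limit."""
    problems = []
    for number, line in enumerate(text.splitlines(), 1):
        if line.startswith('#'):
            continue  # comment lines are stripped by git anyway
        if len(line) > MAX_LINE:
            problems.append((number, len(line)))
    return problems

if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as message_file:
        issues = check_commit_message(message_file.read())
    for number, length in issues:
        sys.stderr.write('warning: line %d is %d chars (limit %d)\n'
                         % (number, length, MAX_LINE))
    # sys.exit(1) here instead would make the hook blocking, not a warning
```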

(Like the evil "final point", haha. I don't recall if it was mandatory 
or illegal.)

Victor


From sbauza at redhat.com  Mon Sep 28 09:35:03 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 28 Sep 2015 11:35:03 +0200
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
Message-ID: <560909C7.9080907@redhat.com>



Le 28/09/2015 11:23, Duncan Thomas a écrit :
>
> The trouble with putting more intelligence in the clients is that 
> there are more clients than just the one we provide, and the more 
> smarts we require in the clients, the more divergence of functionality 
> we're likely to see. Also, bugs and slowly percolating bug fixes.
>

That's why I consider the layer of orchestration in the client to be 
identical to what we have in Nova, not more than that. If we require 
more than just a volume creation when asking to boot from a volume with 
source=image, then I agree with you, it has nothing to do in the client, 
but rather in Heat.

The same goes for networks. What is done in Nova for managing CRUD 
operations can be done in the Python clients, but that's the limit.

About the maintenance burden, I also consider that patching clients is 
far easier than patching an API, unless I missed something.

-Sylvain


> On 28 Sep 2015 11:27, "Sylvain Bauza" <sbauza at redhat.com 
> <mailto:sbauza at redhat.com>> wrote:
>
>
>
>     Le 25/09/2015 16:12, Andrew Laski a écrit :
>
>         On 09/24/15 at 03:13pm, James Penick wrote:
>
>
>
>                 At risk of getting too offtopic I think there's an
>                 alternate solution to
>                 doing this in Nova or on the client side.  I think
>                 we're missing some sort
>                 of OpenStack API and service that can handle this. 
>                 Nova is a low level
>                 infrastructure API and service, it is not designed to
>                 handle these
>                 orchestrations.  I haven't checked in on Heat in a
>                 while but perhaps this
>                 is a role that it could fill.
>
>                 I think that too many people consider Nova to be *the*
>                 OpenStack API when
>                 considering instances/volumes/networking/images and
>                 that's not something I
>                 would like to see continue.  Or at the very least I
>                 would like to see a
>                 split between the orchestration/proxy pieces and the
>                 "manage my
>                 VM/container/baremetal" bits
>
>
>
>             (new thread)
>             You've hit on one of my biggest issues right now: As far
>             as many deployers
>             and consumers are concerned (and definitely what I tell my
>             users within
>             Yahoo): The value of an OpenStack value-stream (compute,
>             network, storage)
>             is to provide a single consistent API for abstracting and
>             managing those
>             infrastructure resources.
>
>             Take networking: I can manage Firewalls, switches, IP
>             selection, SDN, etc
>             through Neutron. But for compute, If I want VM I go
>             through Nova, for
>             Baremetal I can -mostly- go through Nova, and for
>             containers I would talk
>             to Magnum or use something like the nova docker driver.
>
>             This means that, by default, Nova -is- the closest thing
>             to a top level
>             abstraction layer for compute. But if that is explicitly
>             against Nova's
>             charter, and Nova isn't going to be the top level
>             abstraction for all
>             things Compute, then something else needs to fill that
>             space. When that
>             happens, all things common to compute provisioning should
>             come out of Nova
>             and move into that new API. Availability zones, Quota, etc.
>
>
>         I do think Nova is the top level abstraction layer for
>         compute. My issue is when Nova is asked to manage other
>         resources.  There's no API call to tell Cinder "create a
>         volume and attach it to this instance, and create that
>         instance if it doesn't exist."  And I'm not sure why the
>         reverse isn't true.
>
>         I want Nova to be the absolute best API for managing compute
>         resources.  It's when someone is managing compute and volumes
>         and networks together that I don't feel that Nova is the best
>         place for that.  Most importantly right now it seems that not
>         everyone is on the same page on this and I think it would be
>         beneficial to come together and figure out what sort of
>         workloads the Nova API is intending to provide.
>
>
>     I totally agree with you on those points :
>      - nova API should be only supporting CRUD operations for compute
>     VMs and should no longer manage neither volumes nor networks IMHO,
>     because it creates more problems than it resolves
>      - given the above, nova API could possibly accept resources from
>     networks or volumes but only for placement decisions related to
>     instances.
>
>     Tho, I can also understand that operators sometimes just want a
>     single tool for creating this kind of relationship between a
>     volume and an instance (and not provide a YAML file), but IMHO, it
>     doesn't perhaps need a top-level API, just a python client able to
>     do some very simple orchestration between services, something like
>     openstack-client.
>
>     I don't really see a uber-value for getting a proxy API calling
>     Nova or Neutron. IMHO, that should still be done by clients, not
>     services.
>
>     -Sylvain
>
>
>
>             -James
>
>
>             __________________________________________________________________________
>
>             OpenStack Development Mailing List (not for usage questions)
>             Unsubscribe:
>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>             <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>         __________________________________________________________________________
>
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/60e81509/attachment.html>

From corpqa at gmail.com  Mon Sep 28 09:37:58 2015
From: corpqa at gmail.com (OpenStack Mailing List Archive)
Date: Mon, 28 Sep 2015 02:37:58 -0700
Subject: [openstack-dev] [Sahara] Data locality when reading from Swift in
	MapReduce
Message-ID: <e56f1c7431f5fd2de4698670cb26e380@openstack.nimeyo.com>

An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/ffb1ec6d/attachment.html>

From geguileo at redhat.com  Mon Sep 28 09:47:54 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 28 Sep 2015 11:47:54 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
Message-ID: <20150928094754.GP3713@localhost>

On 26/09, Morgan Fainberg wrote:
> As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc). 
> 
> I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike shedding about bike shedding commit messages) ... If a commit message is *that* bad a -1 (or just fixing it?) Might be worth it. However, if a commit isn't missing key info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review. 
> 
> It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages. 
> 
> Sent via mobile
> 

I have to disagree: as reviewers we have to make sure that guidelines
are followed. If we have an explicit guideline that states that
the length limit is 72 chars, I will -1 any patch that doesn't follow
the guideline, just as I would with i18n guideline violations.

Typos are a completely different matter and should not be grouped
together with guideline infringements.

I agree that it is a waste of time and resources when you have to -1 a
patch for this, but there are multiple solutions: you can make sure your
editor auto-wraps at the right length (I have mine configured
this way), create a git-enforce policy with a client-side hook, or do
as Ihar is trying to do and push for a guideline change.

I don't mind changing the guideline to any other length, but as long as
it is 72 chars I will keep enforcing it, as it is not the place of
reviewers to decide which guidelines are worthy of being enforced and
which are not.

Cheers,
Gorka.
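A client-side hook of the kind mentioned above can be very small. The
following is a hypothetical sketch, not an official tool: the 72-char limit
matches the guideline under discussion, and the script would be saved as
.git/hooks/commit-msg and marked executable.

```python
#!/usr/bin/env python3
# Sketch of a commit-msg hook enforcing the 72-char guideline.
# Hypothetical example: save as .git/hooks/commit-msg and mark executable.
import sys

MAX_LEN = 72

def over_long_lines(message, limit=MAX_LEN):
    """Return (line_number, line) pairs that exceed the limit.

    Lines starting with '#' are git's own comment lines and are ignored.
    """
    return [(n, line)
            for n, line in enumerate(message.splitlines(), 1)
            if not line.startswith('#') and len(line) > limit]

if __name__ == '__main__' and len(sys.argv) > 1:
    # git passes the path of the commit message file as the first argument
    with open(sys.argv[1]) as msg_file:
        bad = over_long_lines(msg_file.read())
    for n, line in bad:
        sys.stderr.write('commit-msg: line %d is %d chars (max %d)\n'
                         % (n, len(line), MAX_LEN))
    sys.exit(1 if bad else 0)
```

A non-zero exit aborts the commit locally, so the fix happens before Gerrit
ever sees the patch.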



> > On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> > 
> > Can I ask a different question - could we reject a few simple-to-check things on the push, like bad commit messages?  For things that take 2 seconds to fix and do make people's lives better, it's not that they're rejected, it's that the whole rejection cycle via gerrit review (push/wait for tests to run/check website/swear/find change/fix/push again) is out of proportion to the effort taken to fix it.
> > 
> > It seems here that there's benefit to 72-char lines - not that everyone sees that benefit, but it is present - but it doesn't outweigh the current cost.
> > -- 
> > Ian.
> > 
> > 
> >> On 25 September 2015 at 12:02, Jeremy Stanley <fungi at yuggoth.org> wrote:
> >> On 2015-09-25 16:15:15 +0000 (+0000), Fox, Kevin M wrote:
> >> > Another option... why are we wasting time on something that a
> >> > computer can handle? Why not just let the line length be infinite
> >> > in the commit message and have gerrit wrap it to <insert random
> >> > number here> length lines on merge?
> >> 
> >> The commit message content (including whitespace/formatting) is part
> >> of the data fed into the hash algorithm to generate the commit
> >> identifier. If Gerrit changed the commit message at upload, that
> >> would alter the Git SHA compared to your local copy of the same
> >> commit. This quickly goes down a Git madness rabbit hole (not the
> >> least of which is that it would completely break signed commits).
> >> --
> >> Jeremy Stanley
> >> 
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
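Jeremy Stanley's point in the quoted thread, that the message text is part
of the hashed commit object, can be illustrated in a few lines. This is a
simplified sketch of how git derives a commit id (real commit objects carry
full author/committer identities and timestamps); the identity string is an
illustrative assumption.

```python
import hashlib

def commit_sha(tree, parent, author, committer, message):
    """Compute a commit id the way git does: SHA-1 over a 'commit' object
    whose body includes the full commit message (simplified sketch)."""
    body = ("tree %s\nparent %s\nauthor %s\ncommitter %s\n\n%s"
            % (tree, parent, author, committer, message)).encode()
    return hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest()

TREE, PARENT = "a" * 40, "b" * 40
WHO = "Dev <dev@example.com> 1443430000 +0000"  # illustrative identity

# The same change with only the message rewrapped hashes differently, so a
# server rewriting messages would change the uploader's local SHA (and
# invalidate any signature over the commit).
original = commit_sha(TREE, PARENT, WHO, WHO,
                      "One very long unwrapped line of commit message text")
rewrapped = commit_sha(TREE, PARENT, WHO, WHO,
                       "One very long unwrapped line\nof commit message text")
assert original != rewrapped
```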



From mangelajo at redhat.com  Mon Sep 28 09:53:51 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Mon, 28 Sep 2015 11:53:51 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <20150928094754.GP3713@localhost>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost>
Message-ID: <56090E2F.2050508@redhat.com>

IMO those checks should be automated, as we automate pep8 checks on our
python code.

I agree that if we have a rule for this, we should follow it, but it's a
waste of time reviewing and enforcing this manually.

Best regards.
Miguel Ángel.

Gorka Eguileor wrote:
> On 26/09, Morgan Fainberg wrote:
>> As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc).
>>
>> I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike shedding about bike shedding commit messages) ... If a commit message is *that* bad a -1 (or just fixing it?) Might be worth it. However, if a commit isn't missing key info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review.
>>
>> It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages.
>>
>> Sent via mobile
>>
>
> I have to disagree, as reviewers we have to make sure that guidelines
> are followed, if we have an explicit guideline that states that
> the limit length is 72 chars, I will -1 any patch that doesn't follow
> the guideline, just as I would do with i18n guideline violations.
>
> Typos are a completely different matter and they should not be grouped
> together with guideline infringements.
>
> I agree that it is a waste of time and resources when you have to -1 a
> patch for this, but there are multiple solutions, you can make sure your
> editor does auto wrapping at the right length (I have mine configured
> this way), or create a git-enforce policy with a client-side hook, or do
> like Ihar is trying to do and push for a guideline change.
>
> I don't mind changing the guideline to any other length, but as long as
> it is 72 chars I will keep enforcing it, as it is not the place of
> reviewers to decide which guidelines are worthy of being enforced and
> which ones are not.
>
> Cheers,
> Gorka.
>
>
>
>>> On Sep 26, 2015, at 21:19, Ian Wells<ijw.ubuntu at cack.org.uk>  wrote:
>>>
>>> Can I ask a different question - could we reject a few simple-to-check things on the push, like bad commit messages?  For things that take 2 seconds to fix and do make people's lives better, it's not that they're rejected, it's that the whole rejection cycle via gerrit review (push/wait for tests to run/check website/swear/find change/fix/push again) is out of proportion to the effort taken to fix it.
>>>
>>> It seems here that there's benefit to 72 line messages - not that everyone sees that benefit, but it is present - but it doesn't outweigh the current cost.
>>> -- 
>>> Ian.
>>>
>>>
>>>> On 25 September 2015 at 12:02, Jeremy Stanley<fungi at yuggoth.org>  wrote:
>>>> On 2015-09-25 16:15:15 +0000 (+0000), Fox, Kevin M wrote:
>>>>> Another option... why are we wasting time on something that a
>>>>> computer can handle? Why not just let the line length be infinite
>>>>> in the commit message and have gerrit wrap it to<insert random
>>>>> number here>  length lines on merge?
>>>> The commit message content (including whitespace/formatting) is part
>>>> of the data fed into the hash algorithm to generate the commit
>>>> identifier. If Gerrit changed the commit message at upload, that
>>>> would alter the Git SHA compared to your local copy of the same
>>>> commit. This quickly goes down a Git madness rabbit hole (not the
>>>> least of which is that it would completely break signed commits).
>>>> --
>>>> Jeremy Stanley
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From rcresswe at cisco.com  Mon Sep 28 09:57:45 2015
From: rcresswe at cisco.com (Rob Cresswell (rcresswe))
Date: Mon, 28 Sep 2015 09:57:45 +0000
Subject: [openstack-dev]  [Horizon] Horizon Productivity Suggestion
Message-ID: <D22ECDB2.F132%rcresswe@cisco.com>

Hi folks,

I'm wondering if we could try marking out a small 2-3 minute slot at the
start of each weekly meeting to highlight Critical/ High bugs that have
code up for review, as well as important blueprints that have code up for
review. These would be blueprints for features that were identified as
high priority at the summit.

The thought here is that we were very efficient in L-RC1 at moving code
along, which is nice for productivity, but not really great for stability;
it would be good to do this kind of targeted work earlier in the cycle.
I've noticed other projects doing this in their meetings, and it seems
quite effective.

Rob



From mrunge at redhat.com  Mon Sep 28 10:23:48 2015
From: mrunge at redhat.com (Matthias Runge)
Date: Mon, 28 Sep 2015 12:23:48 +0200
Subject: [openstack-dev] [all][stable][release][horizon] 2015.1.2
In-Reply-To: <50053AD0-B264-4450-A772-E15B21A24506@redhat.com>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
 <20150924073107.GF24386@sofja.berg.ol>
 <CAGi==UXRm7mARJecBT69qqQMfOycdx_crVf-OCD_x+O9z2J2nw@mail.gmail.com>
 <5604F16A.6010807@redhat.com>
 <50053AD0-B264-4450-A772-E15B21A24506@redhat.com>
Message-ID: <56091534.9010006@redhat.com>

On 25/09/15 15:39, Ihar Hrachyshka wrote:

> I see you have three people in the horizon-stable-maint team only. 
> Have you considered expanding the team with more folks? In
> neutron, we have five people in the stable-maint group.

Good suggestion!

In horizon, we have only a very few cores in charge of supporting an
installation or a distribution.

So: if any of you consider yourselves a good candidate for being a
stable reviewer, please speak up!

For Horizon, I will ping Horizon cores asking them to join
horizon-stable-maint team.

Matthias



From anlin.kong at gmail.com  Mon Sep 28 10:28:40 2015
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Mon, 28 Sep 2015 18:28:40 +0800
Subject: [openstack-dev] [mistral] Cancelling team meeting today
	(09/28/2015)
In-Reply-To: <CACarOJYg8YbuzcVDYHJeDkoqQ4s_MeqDwQXyvTHt-M8OcV-LfA@mail.gmail.com>
References: <CACarOJYg8YbuzcVDYHJeDkoqQ4s_MeqDwQXyvTHt-M8OcV-LfA@mail.gmail.com>
Message-ID: <CALjNAZ3D3AX2D8Vi5DH8Zws2AMmuF1ozZ2-kH+nMF8Dq3ZFjCg@mail.gmail.com>

Hi, guys,

We will have a 7-day holiday in China, which is National Day, from
Oct. 1 to Oct. 7. So, I'll miss the next team meeting :(

On Mon, Sep 28, 2015 at 4:46 PM, Nikolay Makhotkin
<nmakhotkin at mirantis.com> wrote:
> Mistral Team,
>
> We're cancelling today's team meeting because a number of key members won't
> be able to attend.
>
> The next one is scheduled for 5 Oct.
>
>
> Best Regards,
> Nikolay Makhotkin
> @Mirantis Inc.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards!
-----------------------------------
Lingxian Kong


From duncan.thomas at gmail.com  Mon Sep 28 10:35:54 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Mon, 28 Sep 2015 13:35:54 +0300
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <560909C7.9080907@redhat.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
Message-ID: <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>

On 28 September 2015 at 12:35, Sylvain Bauza <sbauza at redhat.com> wrote:

> About the maintenance burden, I also consider that patching clients is far
> easier than patching an API, unless I missed something.
>
>
I think I very much disagree there - patching a central installation is
much, much easier than getting N customers to patch M different libraries,
even assuming the fix is available for any significant subset of the M
libraries, plus making sure that new customers use the correct libraries,
plus helping any customers who have some sort of roll-your-own library do
the new right thing...

I think there's a definite place for a simple API to do infrastructure
level orchestration without needing the complexities of heat - these APIs
are in nova because they're useful - there's clear operator desire for them
and a couple of operators have been quite vocal about their desire for them
not to be removed. Great, let's keep them, but form a team of people
interested in getting them right (get rid of fixed timeouts, etc), add any
missing pieces (like floating IPs for new VMs) and generally focus on
getting this piece of the puzzle right. Breaking another small piece off
nova and polishing it has been a generally successful pattern.

I remember Monty Taylor (copied) having a rant about the lack of the
perfect 'give me a VM with all its stuff sorted' API. Care to comment,
Monty?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/c7951d81/attachment.html>

From kzaitsev at mirantis.com  Mon Sep 28 10:56:28 2015
From: kzaitsev at mirantis.com (Kirill Zaitsev)
Date: Mon, 28 Sep 2015 13:56:28 +0300
Subject: [openstack-dev] Murano code flow for custom development and
 combining murano with horizon in devstack
In-Reply-To: <CAGCi2YSoCbi+a+3+K2UtunyGpEVer-fgpu3VmQCW+wL-hvgDeg@mail.gmail.com>
References: <CAGCi2YSoCbi+a+3+K2UtunyGpEVer-fgpu3VmQCW+wL-hvgDeg@mail.gmail.com>
Message-ID: <etPan.56091cdc.35adbca5.34fe@TefMBPr.local>

Hi, murano-dashboard works the same way any other horizon dashboard does.

I'm not quite sure what you meant by "combined and showed under one tab"; could you please elaborate?
If you're asking about debugging: you can install murano-dashboard locally and configure it to use a remote cloud (i.e. devstack) as described here: http://murano.readthedocs.org/en/latest/install/manual.html#install-murano-dashboard. If not, then I didn't quite understand what you asked in the first place =)

Feel free to come and ask around in #murano; you might get help there faster than on the ML =)

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 26 Sep 2015 at 02:24:10, Sumanth Sathyanarayana (sumanth.sathyanarayana at gmail.com) wrote:

Hello,

Could anyone let me know whether the changes in the murano dashboard and Horizon's openstack_dashboard can both be combined and shown under one tab,
i.e., say, under the Murano tab in the left side panel, all the changes done in both horizon and murano appear.

If anyone could point me to a link explaining custom development of murano and the code flow, that would be very helpful...

Thanks & Best Regards
Sumanth

__________________________________________________________________________  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/957e10fe/attachment.html>

From sean at dague.net  Mon Sep 28 11:00:10 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 07:00:10 -0400
Subject: [openstack-dev] [all] Consistent support for SSL termination
 proxies across all API services
In-Reply-To: <m01tdic3zf.fsf@danjou.info>
References: <55FB5D1E.2080706@internap.com> <55FC584E.80805@nemebean.com>
 <560134B2.300@dague.net> <56017DE7.7030209@internap.com>
 <560193E2.7090603@dague.net> <5601A911.2030504@internap.com>
 <5601BFA9.7000902@dague.net> <5601C87C.2010609@internap.com>
 <56028A4B.9010203@dague.net> <m0fv25fjvf.fsf@danjou.info>
 <5602ECBF.2020900@dague.net> <m0a8scerdk.fsf@danjou.info>
 <m01tdic3zf.fsf@danjou.info>
Message-ID: <56091DBA.9090101@dague.net>

On 09/28/2015 05:01 AM, Julien Danjou wrote:
> On Wed, Sep 23 2015, Julien Danjou wrote:
> 
> 
> […]
> 
>> I'm willing to clear that out and come with specs and patches if that
>> can help. :)
> 
> Following-up on myself, I went ahead and wrote a more complete version
> of the current proxy middleware we have, which also supports RFC 7239:
> 
>   https://review.openstack.org/#/c/227868/
> 
> With that in place, having a proxy (SSL or not) correctly configured in
> front of any WSGI application should be completely transparent for the
> application, with no need of additional configuration.
> 
> If that suits everyone, I'll then propose deprecation of the
> oslo_middleware.ssl middleware in favor of this one.

Great, thanks, Julien, that looks like a good ball to move forward here
in Mitaka. My +1 added to the patch.

	-Sean

-- 
Sean Dague
http://dague.net
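The kind of transparency Julien describes can be sketched as a tiny WSGI
middleware that trusts the RFC 7239 Forwarded header, falling back to the
older X-Forwarded-Proto. This is an illustrative, assumption-laden sketch,
not the middleware in the review linked above.

```python
def forwarded_scheme_middleware(app):
    """Sketch: rewrite wsgi.url_scheme from the RFC 7239 ``Forwarded``
    header (e.g. ``Forwarded: for=192.0.2.60;proto=https;by=...``), so
    URLs the application builds use the scheme the client actually saw.

    Simplified: handles a single Forwarded element and assumes the proxy
    is trusted; a real implementation must validate both.
    """
    def middleware(environ, start_response):
        forwarded = environ.get('HTTP_FORWARDED', '')
        for param in forwarded.split(';'):
            key, _, value = param.strip().partition('=')
            if key.lower() == 'proto' and value in ('http', 'https'):
                environ['wsgi.url_scheme'] = value
        if not forwarded:
            # Fall back to the de-facto header that predates RFC 7239
            proto = environ.get('HTTP_X_FORWARDED_PROTO')
            if proto in ('http', 'https'):
                environ['wsgi.url_scheme'] = proto
        return app(environ, start_response)
    return middleware
```

With this in front of an application, no per-service SSL-termination
configuration is needed: the app simply sees the external scheme.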

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/a961e606/attachment.pgp>

From sean at dague.net  Mon Sep 28 11:03:13 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 07:03:13 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <5605E3F3.908@inaugust.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local> <20150925144216.GH8745@crypt>
 <F1384511-A789-4FF3-988A-E1409E647F98@vmware.com>
 <20150925171257.GI8745@crypt>
 <2E1ACB8D-7D12-4F10-89A0-BBBF8BD3E448@openstack.org>
 <5605E3F3.908@inaugust.com>
Message-ID: <56091E71.7080308@dague.net>

On 09/25/2015 08:16 PM, Monty Taylor wrote:
> On 09/25/2015 06:37 PM, Chris Hoge wrote:
>>
>>> On Sep 25, 2015, at 10:12 AM, Andrew Laski <andrew at lascii.com
>>> <mailto:andrew at lascii.com>> wrote:
>>>
>>> I understand that reasoning, but still am unsure on a few things.
>>>
>>> The direction seems to be moving towards having a requirement that the
>>> same functionality is offered in two places, Nova API and Glance V2
>>> API. That seems like it would fragment adoption rather than unify it.
>>
>> My hope would be that proxies would be deprecated as new capabilities
>> moved in. Some of this will be driven by application developers too,
>> though. We're looking at an interoperability standard, which has a
>> natural tension between backwards compatibility and new features.
> 
> Yeah. The proxies are also less efficient, because they have to bounce
> through two places.

The social theory on the proxies is also that they are fully frozen. So
assuming there would be new good features that people want / need from
projects, they'll naturally migrate to newer direct APIs. More carrot
than stick.

Just pulling APIs that work for people is a good way to make people mad
at you. But giving a gentle nudge because we're never adding new
goodness here is fine.

	-Sean

-- 
Sean Dague
http://dague.net


From aj at suse.com  Mon Sep 28 11:09:57 2015
From: aj at suse.com (Andreas Jaeger)
Date: Mon, 28 Sep 2015 13:09:57 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <20150928094754.GP3713@localhost>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost>
Message-ID: <56092005.1000905@suse.com>

On 2015-09-28 11:47, Gorka Eguileor wrote:
> On 26/09, Morgan Fainberg wrote:
>> As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc).
>>
>> I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike shedding about bike shedding commit messages) ... If a commit message is *that* bad a -1 (or just fixing it?) Might be worth it. However, if a commit isn't missing key info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review.
>>
>> It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages.
>>
>> Sent via mobile
>>
>
> I have to disagree, as reviewers we have to make sure that guidelines
> are followed, if we have an explicit guideline that states that
> the limit length is 72 chars, I will -1 any patch that doesn't follow
> the guideline, just as I would do with i18n guideline violations.
 > [...]

You could also tell the committer about the length so that they learn
for next time. Giving a -1 just for a few lines that are 80 chars
long is over the top, IMHO.

Andreas
-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
        HRB 21284 (AG Nürnberg)
     GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126



From sean at dague.net  Mon Sep 28 11:10:02 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 07:10:02 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443356431-sup-7293@lrrr.local>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local>
Message-ID: <5609200A.2000607@dague.net>

On 09/27/2015 08:43 AM, Doug Hellmann wrote:
> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>>>
>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
<snip>
>>
>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
> 
> And yet future technical direction does factor in, and I'm trying
> to add a new heuristic to that aspect of consideration of tests:
> Do not add tests that use proxy APIs.
> 
> If there is some compelling reason to add a capability for which
> the only tests use a proxy, that's important feedback for the
> contributor community and tells us we need to improve our test
> coverage. If the reason to use the proxy is that no one is deploying
> the proxied API publicly, that is also useful feedback, but I suspect
> we will, in most cases (glance is the exception), say "Yeah, that's
> not how we mean for you to run the services long-term, so don't
> include that capability."

I think we might also just realize that some of the tests are using the
proxy because... that's how they were originally written. And they could
be rewritten to use native APIs. Realistically I think use of the image
proxy in Tempest is mostly because the nova API (with proxy) was easier
to write to... 3 years ago.

Changing these tests is definitely on the table when they do something
odd.

I do agree that "testing proxies" should not be part of Defcore, and I
like Doug's idea of making that a new heuristic in test selection.

>>> situation for us to be relying on any proxy APIs like this. Yes,
>>> they are widely deployed, but we want to be using glance for image
>>> features, neutron for networking, etc. Having the nova proxy is
>>> fine, but while we have DefCore using tests to enforce the presence
>>> of the proxy we can't deprecate those APIs.
>>
>>
>> Actually that's not true: DefCore can totally deprecate things too, and can do so in response to the technical community deprecating things.  See my comments in this review [2].  Maybe I need to write another post about that...
> 
> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
> those APIs in DefCore as a reason to avoid deprecating them in the code
> even if they wanted to consider them as legacy features that should be
> removed. Maybe that's not true, and the Nova team would be happy to
> deprecate the APIs, but I did think that part of the feedback cycle we
> were establishing here was to have an indication from the outside of the
> contributor base about what APIs are considered important enough to keep
> alive for a long period of time.

I'd also agree with this. Defcore is a wider contract that we're trying
to get even more people to write to because that cross section should be
widely deployed. So deprecating something in Defcore is something I
think most teams, Nova included, would be very reluctant to do. It's
just asking for breaking your users.

	-Sean

-- 
Sean Dague
http://dague.net


From masoom.alam at wanclouds.net  Mon Sep 28 11:31:18 2015
From: masoom.alam at wanclouds.net (masoom alam)
Date: Mon, 28 Sep 2015 04:31:18 -0700
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
 <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
 <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>
Message-ID: <CABk5PjKd=6hSxKL6+68HkYoupMvCYNMb7Y+kb-1UPGre2E8hVw@mail.gmail.com>

Please help, it's not working:

root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
+-----------------------+---------------------------------------------------------------------------------+
| Field                 | Value                                                                           |
+-----------------------+---------------------------------------------------------------------------------+
| admin_state_up        | True                                                                            |
| allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:69:e9:ef"}                |
| binding:host_id       | openstack-latest-kilo-28-09-2015-masoom                                         |
| binding:profile       | {}                                                                              |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                  |
| binding:vif_type      | ovs                                                                             |
| binding:vnic_type     | normal                                                                          |
| device_id             | d44b9025-f12b-4f85-8b7b-57cc1138acdd                                            |
| device_owner          | compute:nova                                                                    |
| extra_dhcp_opts       |                                                                                 |
| fixed_ips             | {"subnet_id": "bbb6726a-937f-4e0d-8ac2-f82f84272b1f", "ip_address": "10.0.0.3"} |
| id                    | 2d1bfe12-7db6-4665-9c98-6b9b8a043af9                                            |
| mac_address           | fa:16:3e:69:e9:ef                                                               |
| name                  |                                                                                 |
| network_id            | ae1b7e34-9f6c-4c8f-bf08-99a1e390034c                                            |
| security_groups       | 8adda6d7-1b3e-4047-a130-a57609a0bd68                                            |
| status                | ACTIVE                                                                          |
| tenant_id             | 09945e673b7a4ab183afb166735b4fa7                                                |
+-----------------------+---------------------------------------------------------------------------------+

root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9 --allowed-address-pairs [] action=clear
AllowedAddressPair must contain ip_address

root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9 --allowed-address-pairs [10.0.0.201] action=clear
The number of allowed address pair exceeds the maximum 10.

root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9 --allowed-address-pairs action=clear
Request Failed: internal server error while processing your request.
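As Akihiro's reply quoted below explains, the attribute has to be sent as []
(an empty list), which the Kilo CLI cannot yet express; the internal server
error above is the NoneType traceback from the extension. Until the client
fix lands, the port can be updated through the REST API directly. A
hypothetical sketch follows; the endpoint URL and token handling are
illustrative assumptions, not part of any reported setup.

```python
import json

NEUTRON_URL = "http://controller:9696"  # hypothetical endpoint
PORT_ID = "2d1bfe12-7db6-4665-9c98-6b9b8a043af9"

def clear_pairs_request(port_id, neutron_url=NEUTRON_URL):
    """Build the PUT request that clears allowed-address-pairs.

    The key point: allowed_address_pairs must be [] (an empty list),
    not None; None is what len() chokes on in the quoted traceback.
    """
    url = "%s/v2.0/ports/%s" % (neutron_url, port_id)
    body = json.dumps({"port": {"allowed_address_pairs": []}})
    return url, body

url, body = clear_pairs_request(PORT_ID)
# Equivalent curl (token obtained from keystone beforehand):
#   curl -X PUT -H "X-Auth-Token: $TOKEN" \
#        -H "Content-Type: application/json" \
#        -d '{"port": {"allowed_address_pairs": []}}' \
#        http://controller:9696/v2.0/ports/<port-id>
```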




On Mon, Sep 28, 2015 at 1:57 AM, Akihiro Motoki <amotoki at gmail.com> wrote:

> As already mentioned, we need to pass [] (an empty list) rather than None
> as allowed_address_pairs.
>
> At the moment it is not supported in Neutron CLI.
> This review https://review.openstack.org/#/c/218551/ is trying to fix
> this problem.
>
> Akihiro
>
>
> 2015-09-28 15:51 GMT+09:00 shihanzhang <ayshihanzhang at 126.com>:
>
>> I don't see any exception using bellow command
>>
>> root at szxbz:/opt/stack/neutron# neutron port-update
>> 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
>> Allowed address pairs must be a list.
>>
>>
>>
>> At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net> wrote:
>>
>> stable KILO
>>
>> shall I checkout the latest code are you saying this...Also can you
>> please confirm if you have tested this thing at your end....and there was
>> no problem...
>>
>>
>> Thanks
>>
>> On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com>
>> wrote:
>>
>>> which branch do you use?  there is not this problem in master branch.
>>>
>>>
>>>
>>>
>>>
>>> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net> wrote:
>>>
>>> Can anybody highlight why the following command is throwing an exception:
>>>
>>> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
>>> --allowed-address-pairs action=clear
>>>
>>> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
>>> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
>>> recent call last):
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
>>> method(request=request, **args)
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>> allow_bulk=self._allow_bulk)
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
>>> prepare_request_body
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>> attr_vals['validate'][rule])
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
>>> _validate_allowed_address_pairs
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
>>> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object
>>> of type 'NoneType' has no len()
>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>
>>>
>>>
>>> There is a similar bug filed at Launchpad for Havana:
>>> https://bugs.launchpad.net/juniperopenstack/+bug/1351979. However, there
>>> is no fix, and the workaround mentioned on the bug (using curl) is also
>>> not working for Kilo; it was working for Havana and Icehouse. Any
>>> pointers?
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/06ea99fd/attachment.html>

From john at johngarbutt.com  Mon Sep 28 11:32:53 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 28 Sep 2015 12:32:53 +0100
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <5609200A.2000607@dague.net>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
Message-ID: <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>

On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>>>>
>>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> <snip>
>>>
>>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
>>
>> And yet future technical direction does factor in, and I'm trying
>> to add a new heuristic to that aspect of consideration of tests:
>> Do not add tests that use proxy APIs.
>>
>> If there is some compelling reason to add a capability for which
>> the only tests use a proxy, that's important feedback for the
>> contributor community and tells us we need to improve our test
>> coverage. If the reason to use the proxy is that no one is deploying
>> the proxied API publicly, that is also useful feedback, but I suspect
>> we will, in most cases (glance is the exception), say "Yeah, that's
>> not how we mean for you to run the services long-term, so don't
>> include that capability."
>
> I think we might also just realize that some of the tests are using the
> proxy because... that's how they were originally written.

From my memory, that's how we got here.

The Nova tests needed to use an image API (e.g. listing images to
verify a Nova snapshot, or similar).

The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
it being the only widely deployed option.

> And they could be rewritten to use native APIs.

+1
Once Glance v2 is available.

Adding Glance v2 as advisory seems a good step to help drive more adoption.

> I do agree that "testing proxies" should not be part of Defcore, and I
> like Doug's idea of making that a new heuristic in test selection.

+1
That's a good thing to add.
But I don't think we had another option in this case.

>> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
>> those APIs in DefCore as a reason to avoid deprecating them in the code
>> even if they wanted to consider them as legacy features that should be
>> removed. Maybe that's not true, and the Nova team would be happy to
>> deprecate the APIs, but I did think that part of the feedback cycle we
>> were establishing here was to have an indication from the outside of the
>> contributor base about what APIs are considered important enough to keep
>> alive for a long period of time.
> I'd also agree with this. Defcore is a wider contract that we're trying
> to get even more people to write to because that cross section should be
> widely deployed. So deprecating something in Defcore is something I
> think most teams, Nova included, would be very reluctant to do. It's
> just asking for breaking your users.

I can't see us removing the proxy APIs in Nova any time soon,
regardless of DefCore, as it would break too many people.

But personally, I like dropping them from Defcore, to signal that the
best practice is to use the Glance v2 API directly, rather than the
Nova proxy.

Maybe they are just marked deprecated, but still required, although
that sounds a bit crazy.

Thanks,
johnthetubaguy

>>>> situation for us to be relying on any proxy APIs like this. Yes,
>>>> they are widely deployed, but we want to be using glance for image
>>>> features, neutron for networking, etc. Having the nova proxy is
>>>> fine, but while we have DefCore using tests to enforce the presence
>>>> of the proxy we can't deprecate those APIs.
>>>
>>>
>>> Actually that's not true: DefCore can totally deprecate things too, and can do so in response to the technical community deprecating things.  See my comments in this review [2].  Maybe I need to write another post about that...
>>
>
>
>         -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From john at johngarbutt.com  Mon Sep 28 11:51:39 2015
From: john at johngarbutt.com (John Garbutt)
Date: Mon, 28 Sep 2015 12:51:39 +0100
Subject: [openstack-dev] [nova] Liberty and Nova Specs,
	Blueprints and Design Summit for Mitaka
Message-ID: <CABib2_rNnUV-uL0RuGSEZHnTG9OEmBX1H6vVq+WjM5jDuAxadQ@mail.gmail.com>

Hi,

A quick update on where we are at.

Liberty
---------

It's time to give RC1 a good test.

We need to keep our eyes out for release blockers, using this tag:
https://bugs.launchpad.net/nova/+bugs?field.tag=liberty-rc-potential

We will need a new tag on or after 8th October, to include translation updates.

Mitaka
---------

If you had something approved in Liberty, it needs to be re-approved for Mitaka.
(We spoke at length about the pros and cons of alternatives at the
midcycle; see the etherpad.)

Let's try to get most specs *merged* before the summit,
and most spec-less blueprints approved too.
Anything we can't agree on can then get time at the summit.

To help set the priority of spec reviews, we are trying to categorize
the specs and blueprints in this etherpad:
https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking

Note: please also put any specless blueprints into the above etherpad,
so they can get reviewed by nova-drivers, so we can vote on them at
the next Nova meeting.

Design Summit
----------------------

For things that can't be agreed in a spec, and/or needs in person
discussion, please do submit a summit session here:
http://goo.gl/forms/D2Qk8XGhZ6

If that does not work for you, please see:
https://etherpad.openstack.org/p/mitaka-nova-summit-suggestions

The deadline for proposals will likely be Tuesday 6th October, 23.59
UTC, so we can have a draft ready by Thursday 8th October, and get
everything set in stone before Thursday 16th October.


I suspect I forgot something important... As usual, do catch me via
email or IRC if there are any questions.

Thanks,
johnthetubaguy


From geguileo at redhat.com  Mon Sep 28 12:09:05 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 28 Sep 2015 14:09:05 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <56092005.1000905@suse.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56092005.1000905@suse.com>
Message-ID: <20150928120905.GS3713@localhost>

On 28/09, Andreas Jaeger wrote:
> On 2015-09-28 11:47, Gorka Eguileor wrote:
> >On 26/09, Morgan Fainberg wrote:
> >>As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc).
> >>
> >>I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike shedding about bike shedding commit messages) ... If a commit message is *that* bad a -1 (or just fixing it?) Might be worth it. However, if a commit isn't missing key info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review.
> >>
> >>It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages.
> >>
> >>Sent via mobile
> >>
> >
> >I have to disagree, as reviewers we have to make sure that guidelines
> >are followed, if we have an explicit guideline that states that
> >the limit length is 72 chars, I will -1 any patch that doesn't follow
> >the guideline, just as I would do with i18n guideline violations.
> > [...]
> 
> You could also tell the committer about the length so that s/he learns for
> the next time. Giving a -1 just for a few lines that are 80 chars long is
> over the top IMHO.
> 
> Andreas
> -- 

I tell the committer about this guideline, just as it was told to me on
my first commits; and I agree that it sucks to give or receive a -1 for
this, but let me put it this way, how many times will you be
getting/giving a -1 to the same person for this?

If it's a first-time committer you'll probably say it once; they'll
learn it, fix it, and then we have all our commits conforming to our
guidelines, not such a big deal (although I agree with Miguel Angel that
this should be automated) and if it's not a first time committer he
should have known better and he deserves the -1 for not paying attention
and/or not having his dev env properly setup.
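As a rough illustration of what that automation could look like, a commit-msg style check along these lines would flag the violation before it ever reaches review (a sketch, not an existing tool; the 72-character body limit is the guideline under discussion):

```python
# Sketch of a commit-msg hook check: flag commit message body lines that
# exceed the 72-character guideline. The subject (line 1) is exempt here,
# since it conventionally has its own, shorter limit.
MAX_BODY = 72

def long_lines(commit_message):
    """Return (line_number, line) pairs for body lines over MAX_BODY chars."""
    offenders = []
    for i, line in enumerate(commit_message.splitlines(), start=1):
        if i > 1 and len(line) > MAX_BODY:
            offenders.append((i, line))
    return offenders

# Example: a message whose third line is 80 characters wide gets flagged.
msg = "Fix the thing\n\n" + "x" * 80 + "\nshort line"
assert [n for n, _ in long_lines(msg)] == [3]
```

Wired into a git commit-msg hook, this would reject (or warn about) the commit locally instead of costing a review round-trip.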


>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>    GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>        HRB 21284 (AG Nürnberg)
>     GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From emilien at redhat.com  Mon Sep 28 12:31:51 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 28 Sep 2015 08:31:51 -0400
Subject: [openstack-dev] [puppet] weekly meeting #53
Message-ID: <56093337.3060905@redhat.com>

Hello!

Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
in #openstack-meeting-4:

https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150929

Feel free to add any additional items you'd like to discuss.
If our schedule allows it, we'll do bug triage during the meeting.

Regards,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/650bf2d6/attachment.pgp>

From thierry at openstack.org  Mon Sep 28 12:33:09 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 28 Sep 2015 14:33:09 +0200
Subject: [openstack-dev] [ptls] Asserting that your projects follow at least
 the base deprecation policy
Message-ID: <56093385.1010108@openstack.org>

Hi everyone,

Last week at the Technical Committee meeting, following the discussion
on this mailing-list a few weeks ago, we finally passed a definition[1]
for a standard base deprecation policy for OpenStack projects.

[1]
http://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

This is expressed as a tag that project teams can choose to add to
specific deliverables that they produce, to assert that they will follow
(at the very least) this base deprecation policy when it comes to
removing features or configuration options in that project. It is meant
to convey that minimal assurance to downstream consumers of that project.

It's worth noting that it's completely fine for a given deliverable to
*not* assert that. Complying with this policy has a cost, as it forces a
time-constrained process and requires maintaining features in the code
during the deprecation period, diverting resources from development.
Young or fast-moving projects may not be able or willing to
slow down and to pay that cost at this stage of their development. This
tag requires a certain maturity and stability which some projects have
just not reached yet.

If you want to assert that a service your team delivers will, starting
with their Liberty release, follow that base deprecation policy, you can
propose a change adding that tag to the corresponding deliverable in the
reference/projects.yaml file in the openstack/governance repository.
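For illustration, such a change would look roughly like the following fragment of reference/projects.yaml (the team name, deliverable name, layout, and exact tag spelling here are approximate; check the repository for the authoritative format):

```yaml
# Hypothetical sketch, not a verbatim excerpt of projects.yaml:
Nova:
  deliverables:
    nova:
      tags:
        - assert:follows-standard-deprecation
```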

Thanks in advance!

-- 
Thierry Carrez (ttx)


From sbauza at redhat.com  Mon Sep 28 12:58:07 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 28 Sep 2015 14:58:07 +0200
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
Message-ID: <5609395F.8040503@redhat.com>



Le 28/09/2015 12:35, Duncan Thomas a écrit :
>
>
> On 28 September 2015 at 12:35, Sylvain Bauza <sbauza at redhat.com 
> <mailto:sbauza at redhat.com>> wrote:
>
>     About the maintenance burden, I also consider that patching
>     clients is far more easier than patching an API unless I missed
>     something.
>
>
> I think I very much disagree there - patching a central installation 
> is much, much easier than getting N customers to patch M different 
> libraries, even assuming the fix is available for any significant 
> subset of the M libraries, plus making sure that new customers use the 
> correct libraries, plus helping any customers who have some sort of 
> roll-your-own library do the new right thing...
>

Well, having N versions of clients against a single API version is
something we have managed since the beginning. I don't really see why it
suddenly becomes so difficult to manage.


> I think there's a definite place for a simple API to do infrastructure 
> level orchestration without needing the complexities of heat - these 
> APIs are in nova because they're useful - there's clear operator 
> desire for them and a couple of operators have been quite vocal about 
> their desire for them not to be removed. Great, let's keep them, but 
> form a team of people interested in getting them right (get rid of 
> fixed timeouts, etc), add any missing pieces (like floating IPs for 
> new VMs) and generally focus on getting this piece of the puzzle 
> right. Breaking another small piece off nova and polishing it has been 
> a generally successful pattern.

I don't want to overthink what could be the right scope of that future 
API but given the Heat mission statement [1] and its service name 
'orchestration', I don't see why this API endpoint should land in the 
Nova codebase rather than being provided by the Heat API. Sure, it
would perhaps require another endpoint behind the same service, but
isn't that better than having another endpoint in Nova?

[1] 
https://github.com/openstack/governance/blob/master/reference/projects.yaml#L482-L484


>
> I remember Monty Taylor (copied) having a rant about the lack of the 
> perfect 'give me a VM with all its stuff sorted' API. Care to comment, 
> Monty?

It sounds like you misunderstood me. I'm not against implementing this
excellent use case; I just think the best place is not Nova, and it
should be done elsewhere.

>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/6471090d/attachment.html>

From doug at doughellmann.com  Mon Sep 28 13:03:35 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 28 Sep 2015 09:03:35 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
Message-ID: <1443444996-sup-6545@lrrr.local>

Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
> On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
> > On 09/27/2015 08:43 AM, Doug Hellmann wrote:
> >> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
> >>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> >>>>
> >>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> > <snip>
> >>>
> >>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
> >>
> >> And yet future technical direction does factor in, and I'm trying
> >> to add a new heuristic to that aspect of consideration of tests:
> >> Do not add tests that use proxy APIs.
> >>
> >> If there is some compelling reason to add a capability for which
> >> the only tests use a proxy, that's important feedback for the
> >> contributor community and tells us we need to improve our test
> >> coverage. If the reason to use the proxy is that no one is deploying
> >> the proxied API publicly, that is also useful feedback, but I suspect
> >> we will, in most cases (glance is the exception), say "Yeah, that's
> >> not how we mean for you to run the services long-term, so don't
> >> include that capability."
> >
> > I think we might also just realize that some of the tests are using the
> > proxy because... that's how they were originally written.
> 
> From my memory, that's how we got here.
> 
> The Nova tests needed to use an image API. (i.e. list images used to
> check the snapshot Nova, or similar)
> 
> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
> it being the only widely deployed option.

Right, and I want to make sure it's clear that I am differentiating
between "these tests are bad" and "these tests are bad *for DefCore*".
We should definitely continue to test the proxy API, since it's a
feature we have and that our users rely on.

> 
> > And they could be rewritten to use native APIs.
> 
> +1
> Once Glance v2 is available.
> 
> Adding Glance v2 as advisory seems a good step to help drive more adoption.

I think we probably don't want to rewrite the existing tests, since
that effectively changes the contract out from under existing folks
complying with DefCore.  If we need new, parallel, tests that do
not use the proxy to make more suitable tests for DefCore to use,
we should create those.

> 
> > I do agree that "testing proxies" should not be part of Defcore, and I
> > like Doug's idea of making that a new heuristic in test selection.
> 
> +1
> That's a good thing to add.
> But I don't think we had another option in this case.

We did have the option of leaving the feature out and highlighting the
discrepancy to the contributors so tests could be added. That
communication didn't really happen, as far as I can tell.

> >> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
> >> those APIs in DefCore as a reason to avoid deprecating them in the code
> >> even if they wanted to consider them as legacy features that should be
> >> removed. Maybe that's not true, and the Nova team would be happy to
> >> deprecate the APIs, but I did think that part of the feedback cycle we
> >> were establishing here was to have an indication from the outside of the
> >> contributor base about what APIs are considered important enough to keep
> >> alive for a long period of time.
> > I'd also agree with this. Defcore is a wider contract that we're trying
> > to get even more people to write to because that cross section should be
> > widely deployed. So deprecating something in Defcore is something I
> > think most teams, Nova included, would be very reluctant to do. It's
> > just asking for breaking your users.
> 
> I can't see us removing the proxy APIs in Nova any time soon,
> regardless of DefCore, as it would break too many people.
> 
> But personally, I like dropping them from Defcore, to signal that the
> best practice is to use the Glance v2 API directly, rather than the
> Nova proxy.
> 
> Maybe they are just marked deprecated, but still required, although
> that sounds a bit crazy.

Marking them as deprecated, then removing them from DefCore, would let
the Nova team make a technical decision about what to do with them
(maybe they get spun out into a separate service, maybe they're so
popular you just keep them, whatever).

Doug


From devdatta.kulkarni at RACKSPACE.COM  Mon Sep 28 13:12:41 2015
From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni)
Date: Mon, 28 Sep 2015 13:12:41 +0000
Subject: [openstack-dev] [Solum] Proposal to change weekly IRC meeting time
Message-ID: <1443445961222.53304@RACKSPACE.COM>

Hi team,

In last week's IRC meeting [1], we discussed changing our weekly IRC
meeting time from the current 2100 UTC to 1700 UTC to,
(a) avoid conflict with the cross-project meeting, and (b) accommodate participants from India/Asia.

I have submitted a WIP review [2] to make this change. Please provide your votes (+1/-1) on the review.
I will remove the WIP once a majority of our contributors have voted on it.
Note that we have to change the meeting channel as our current channel (openstack-meeting-alt) is not available at 1700 UTC on Tuesdays.

I am thinking that if a majority of our team members agree, we can move to this
new time starting with the October 6 meeting.

Thanks,
Devdatta

[1] http://eavesdrop.openstack.org/meetings/solum_team_meeting/2015/solum_team_meeting.2015-09-22-21.00.log.html

[2] https://review.openstack.org/#/c/228441/

From victoria at vmartinezdelacruz.com  Mon Sep 28 13:38:49 2015
From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=)
Date: Mon, 28 Sep 2015 10:38:49 -0300
Subject: [openstack-dev] [murano] Outreachy: Interest in contributing to
	open-source
In-Reply-To: <CAOFFu8Zi9C4S_fWq=PQ2SRe-Wr=yszy85TS6m+Cz52OHN8EXNg@mail.gmail.com>
References: <CAFAMDXaHJ522on7m6cSObGuoqzxwMCmkc-dRcY4RekjO6WfXrA@mail.gmail.com>
 <CAOFFu8Zi9C4S_fWq=PQ2SRe-Wr=yszy85TS6m+Cz52OHN8EXNg@mail.gmail.com>
Message-ID: <CAJ_e2gDbKrR6WKZoEh9a_saKZ+zj10UavBGbqBhG_JWQjK=Arg@mail.gmail.com>

Hi Yolande,

Welcome! Glad to hear you already have a project/mentor in mind. Ekaterina
will help you with your first contribution and with picking a task for your
internship application.
Please let me know if I can help with setting up your contributor
accounts/development environment, or with the application process in
general.

Join other Outreachy folks in #openstack-opw, thanks!

Best,

Victoria

2015-09-28 5:39 GMT-03:00 Ekaterina Chernova <efedorova at mirantis.com>:

> Hi Yolande,
>
> welcome to OpenStack and open source!
>
> We are glad to introduce you to the Murano project!
>
> Topic 'Implementation of tagging heat stacks, created by murano' is
> already taken,
> but we can offer you another one after talking to you about what you are
> interested in.
>
> You can join the #murano channel on IRC and reach me (katyafervent) or
> someone else.
> Also, you can contact me directly by mail.
>
>
> Regards,
> Kate.
>
>
> On Sat, Sep 26, 2015 at 11:20 PM, Amate Yolande <yolandeamate at gmail.com>
> wrote:
>
>> Hello
>>
>> My name is Amate Yolande from Buea Cameroon. I am new to open source
>> and I am interested in participating in the Outreachy. I would like to
>> work on the "Murano - Implementation of tagging heat stacks, created
>> by murano" project and would like to get some directives on how to
>> familiarize myself with the project. So far I have been able to
>> install and test OpenStack from dev-stack on a spare computer using a
>> local network at home.
>>
>> Thanks
>> Yolande
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/087a58b7/attachment.html>

From victoria at vmartinezdelacruz.com  Mon Sep 28 13:40:11 2015
From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=)
Date: Mon, 28 Sep 2015 10:40:11 -0300
Subject: [openstack-dev] [glance] [infra] [neutron] [swift] [trove]
 [zaqar] [murano] [docs] Outreachy: selecting a project to contribute to
In-Reply-To: <CAD0KtVEkNKUExOf+PYAw+Cq03iw9iV9b9REhZz2m+mqOMER4Bw@mail.gmail.com>
References: <CAD0KtVEkNKUExOf+PYAw+Cq03iw9iV9b9REhZz2m+mqOMER4Bw@mail.gmail.com>
Message-ID: <CAJ_e2gCvC_tECpyKZSGLQnXmEd1xJajM_J8hhZbkxhHQPg6Jdg@mail.gmail.com>

Hi Twinkle,

Welcome! Please join us in the #openstack-opw channel on irc.freenode.org. We
can chat about different projects and help you get started with
OpenStack.

Best,

Victoria

2015-09-26 11:19 GMT-03:00 Anne Gentle <annegentle at justwriteclick.com>:

> Hi Twinkle,
>
> Welcome! Thanks for your interest. If you have time, stop by
> #openstack-opw on Freenode IRC to have a real-time chat with OpenStack
> Outreachy admins and mentors and other interns.
>
> Also, typically you would write an email to the mailing list with a new
> subject line so that we can reply to you in the thread about your question.
> I'm doing that with my reply to show an example of using topics or keywords
> with square brackets in the subject line. Those are some of the projects
> with mentors listed in https://wiki.openstack.org/wiki/Outreachy.
>
> Hope that helps in your quest to learn more about each project.
>
> Thanks,
> Anne
>
>
>
> On Sat, Sep 26, 2015 at 8:21 AM, Twinkle Chawla <twinkle1chawla at gmail.com>
> wrote:
>
>> Hello everyone,
>> I am new to Outreachy and OpenStack, and I want to contribute. I am
>> having trouble selecting a project; please help me find
>> a way to get started.
>>
>> Regards,
>>
>> On Sat, Sep 26, 2015 at 2:47 PM, Thierry Carrez <thierry at openstack.org>
>> wrote:
>>
>>> Hello everyone,
>>>
>>> Last for this week, Glance, Horizon, Sahara, and Barbican just produced
>>> their first release candidate for the end of the Liberty cycle. The RC1
>>> tarballs, as well as a list of last-minute features and fixed bugs since
>>> liberty-1 are available at:
>>>
>>> https://launchpad.net/glance/liberty/liberty-rc1
>>> https://launchpad.net/horizon/liberty/liberty-rc1
>>> https://launchpad.net/sahara/liberty/liberty-rc1
>>> https://launchpad.net/barbican/liberty/liberty-rc1
>>>
>>> Unless release-critical issues are found that warrant a release
>>> candidate respin, these RC1s will be formally released as final versions
>>> on October 15. You are therefore strongly encouraged to test and
>>> validate these tarballs!
>>>
>>> Alternatively, you can directly test the stable/liberty release branch
>>> at:
>>>
>>> http://git.openstack.org/cgit/openstack/glance/log/?h=stable/liberty
>>> http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/liberty
>>> http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/liberty
>>> http://git.openstack.org/cgit/openstack/barbican/log/?h=stable/liberty
>>>
>>> If you find an issue that could be considered release-critical, please
>>> file it at:
>>>
>>> https://bugs.launchpad.net/glance/+filebug
>>> or
>>> https://bugs.launchpad.net/horizon/+filebug
>>> or
>>> https://bugs.launchpad.net/sahara/+filebug
>>> or
>>> https://bugs.launchpad.net/barbican/+filebug
>>>
>>> and tag it *liberty-rc-potential* to bring it to the release crew's
>>> attention.
>>>
>>> Note that the "master" branches of Glance, Horizon, Sahara and Barbican
>>> are now officially open for Mitaka development, so feature freeze
>>> restrictions no longer apply there.
>>>
>>> Regards,
>>>
>>> --
>>> Thierry Carrez (ttx)
>>>
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>>
>> --
>> *TWINKLE CHAWLA*
>> ________________________________________________________________
>> B. Tech. (Computer Science Engineering)
>> Arya College of Engineering & Information Technology, Jaipur
>>
>> ________________________________________________________________
>>
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/e567c02d/attachment.html>

From james.slagle at gmail.com  Mon Sep 28 13:43:37 2015
From: james.slagle at gmail.com (James Slagle)
Date: Mon, 28 Sep 2015 09:43:37 -0400
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <1443184498.2443.10.camel@redhat.com>
References: <1443184498.2443.10.camel@redhat.com>
Message-ID: <CAHV77z9cb_HC59Z0rYPpw48TMGRW0baP2iJ5xMBSJiv5Sd6+8A@mail.gmail.com>

On Fri, Sep 25, 2015 at 8:34 AM, Dan Prince <dprince at redhat.com> wrote:
> It has come to my attention that we aren't making great use of our
> tripleo.org domain. One thing that would be useful would be to have the
> new tripleo-docs content displayed there. It would also be nice to have
> quick links to some of our useful resources, perhaps Derek's CI report
> [1], a custom Reviewday page for TripleO reviews (something like this
> [2]), and perhaps other links too. I'm thinking these go in the header,
> and not just on some random TripleO docs page. Or perhaps both places.
>
> I was thinking that instead of the normal OpenStack theme however we
> could go a bit off the beaten path and do our own TripleO theme.
> Basically a custom tripleosphinx project that we ninja in as a
> replacement for oslosphinx.

Would the content of tripleo-docs be exactly the same as what is
published at http://docs.openstack.org/developer/tripleo-docs/ ?

I think it probably should be, and be updated on every merged commit.
If that's not the case, I think it should be abundantly clear why
someone would use one set of docs over the other.

I'm not sure about why we'd want a different theme. Is it just so that
it's styled the same as the rest of tripleo.org?

>
> Could get our own mascot... or do something silly with words. I'm
> reaching out to graphics artists who could help with this sort of
> thing... but before that decision is made I wanted to ask about
> thoughts on the matter here first.
>
> Speak up... it would be nice to have this wrapped up before Tokyo.
>
> [1] http://goodsquishy.com/downloads/tripleo-jobs.html
> [2] http://status.openstack.org/reviews/

+1 to everything else. I had put what to do with tripleo.org on the
tokyo etherpad. Someone (not sure who) also suggested:

* collaborative blogging
* serve upstream generated overcloud images?

I like both of those ideas as well. For the blogging, maybe we could
parse the rss feed from planet.openstack.org and pick out the TripleO
stuff somehow.



-- 
-- James Slagle
--


From monty at inaugust.com  Mon Sep 28 13:50:10 2015
From: monty at inaugust.com (Monty Taylor)
Date: Mon, 28 Sep 2015 08:50:10 -0500
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <5609395F.8040503@redhat.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com>
Message-ID: <56094592.1020405@inaugust.com>

On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
>
>
> Le 28/09/2015 12:35, Duncan Thomas a ?crit :
>>
>>
>> On 28 September 2015 at 12:35, Sylvain Bauza <sbauza at redhat.com
>> <mailto:sbauza at redhat.com>> wrote:
>>
>>     About the maintenance burden, I also consider that patching
>>     clients is far easier than patching an API unless I missed
>>     something.
>>
>>
>> I think I very much disagree there - patching a central installation
>> is much, much easier than getting N customers to patch M different
>> libraries, even assuming the fix is available for any significant
>> subset of the M libraries, plus making sure that new customers use the
>> correct libraries, plus helping any customers who have some sort of
>> roll-your-own library do the new right thing...
>>
>
> Well, having N versions of clients against one single API version is
> just something we have managed since the beginning. I don't really see why it
> suddenly becomes so difficult to manage.
>
>
>> I think there's a definite place for a simple API to do infrastructure
>> level orchestration without needing the complexities of heat - these
>> APIs are in nova because they're useful - there's clear operator
>> desire for them and a couple of operators have been quite vocal about
>> their desire for them not to be removed. Great, let's keep them, but
>> form a team of people interested in getting them right (get rid of
>> fixed timeouts, etc), add any missing pieces (like floating IPs for
>> new VMs) and generally focus on getting this piece of the puzzle
>> right. Breaking another small piece off nova and polishing it has been
>> a generally successful pattern.
>
> I don't want to overthink what could be the right scope of that future
> API but given the Heat mission statement [1] and its service name
> 'orchestration', I don't see why this API endpoint should land in the
> Nova codebase and couldn't be rather provided by the Heat API. Oh sure,
> it would perhaps require another endpoint behind the same service, but
> isn't that better than having another endpoint in Nova ?
>
> [1]
> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L482-L484
>
>
>>
>> I remember Monty Taylor (copied) having a rant about the lack of the
>> perfect 'give me a VM with all its stuff sorted' API. Care to comment,
>> Monty?
>
> Sounds like you misunderstood me. I'm not against implementing this excellent
> use case, I just think the best place is not in Nova and it should be done
> elsewhere.
>

Specifically, I want "nova boot" to get me a VM with an IP address. I 
don't want it to do fancy orchestration - I want it to not need fancy 
orchestration, because needing fancy orchestration to get a VM  on a 
network is not a feature.

I also VERY MUCH do not want to need Heat to get a VM. I want to use 
Heat to do something complex. Getting a VM is not complex. It should not 
be complex. When it's complex to the level of needing Heat, we've 
failed somewhere else.

Also, people should stop deploying clouds that require people to use 
floating IPs to get basic internet access. It's a misuse of the construct.

Public Network "ext-net" -> shared / directly attachable
Per-tenant Network "private" -> private network, not shared, not routable

If the user chooses, a router can be added with gateway set to ext-net.

This way:

nova boot --network=ext-net  -> vm dhcp'd on the public network
nova boot --network=private  -> vm dhcp'd on the private network
nova floating-ip-attach      -> vm gets a floating ip from the ext-net
network attached to it

All of the use cases are handled, basic things are easy (booting a vm on 
the network works in one step), and the 5% of cases where a floating 
IP is actually needed (a long-lived service on a single vm that wants to 
keep the IP, and not just a DNS name, across VM migrations and isn't using 
a load-balancer) can use one.

This is, btw, the most common public cloud deployment model.

Let's stop making things harder than they need to be and serve our users.
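
For what it's worth, that model can be sketched with Kilo-era CLI commands.
This is an illustrative sketch, not a command transcript from any deployment:
the network names, the flat physical network "public", the subnet range, and
the flavor/image names are all assumptions.

```shell
# Operator side: one shared, directly-attachable external network
neutron net-create ext-net --shared --router:external \
    --provider:network_type flat --provider:physical_network public
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet

# Tenant side: boot straight onto it -- no router or floating IP needed
EXT_NET_ID=$(neutron net-show -f value -F id ext-net)
nova boot --flavor m1.small --image cirros --nic net-id=$EXT_NET_ID vm1
```

A per-tenant "private" network plus optional router with gateway ext-net is
layered on the same way, for users who want isolation instead.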


From masoom.alam at wanclouds.net  Mon Sep 28 13:54:53 2015
From: masoom.alam at wanclouds.net (masoom alam)
Date: Mon, 28 Sep 2015 06:54:53 -0700
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CABk5PjKd=6hSxKL6+68HkYoupMvCYNMb7Y+kb-1UPGre2E8hVw@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
 <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
 <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>
 <CABk5PjKd=6hSxKL6+68HkYoupMvCYNMb7Y+kb-1UPGre2E8hVw@mail.gmail.com>
Message-ID: <CABk5PjKE+Q9DCEyfoDA_OsE3cghoaLQoyW3o3i=j9+z6m2Pp0A@mail.gmail.com>

Even this is not working:

root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
 --allowed-address-pairs type=list [] action=clear
AllowedAddressPair must contain ip_address


root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
 --allowed-address-pairs type=list {} action=clear
AllowedAddressPair must contain ip_address
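
As noted elsewhere in the thread, the API itself accepts an explicit empty
list; it is only the CLI that cannot express it yet. A minimal sketch of the
request body that clears the pairs (the PUT target mentioned in the comment is
an assumption based on the standard Neutron URL layout):

```python
import json

port_id = "e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64"  # port ID from this thread

# Clearing the pairs means sending an explicit empty list, not None.
body = {"port": {"allowed_address_pairs": []}}

# This payload would be PUT to /v2.0/ports/<port_id> (e.g. via curl with an
# auth token, or python-neutronclient's update_port), bypassing the CLI.
payload = json.dumps(body)
print(payload)
```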




On Mon, Sep 28, 2015 at 4:31 AM, masoom alam <masoom.alam at wanclouds.net>
wrote:

> Please help, it's not working:
>
> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron
> port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>
> +-----------------------+----------------------------------------------------------------------------------+
> | Field                 | Value                                                                            |
> +-----------------------+----------------------------------------------------------------------------------+
> | admin_state_up        | True                                                                             |
> | allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:69:e9:ef"}                 |
> | binding:host_id       | openstack-latest-kilo-28-09-2015-masoom                                          |
> | binding:profile       | {}                                                                               |
> | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                   |
> | binding:vif_type      | ovs                                                                              |
> | binding:vnic_type     | normal                                                                           |
> | device_id             | d44b9025-f12b-4f85-8b7b-57cc1138acdd                                             |
> | device_owner          | compute:nova                                                                     |
> | extra_dhcp_opts       |                                                                                  |
> | fixed_ips             | {"subnet_id": "bbb6726a-937f-4e0d-8ac2-f82f84272b1f", "ip_address": "10.0.0.3"}  |
> | id                    | 2d1bfe12-7db6-4665-9c98-6b9b8a043af9                                             |
> | mac_address           | fa:16:3e:69:e9:ef                                                                |
> | name                  |                                                                                  |
> | network_id            | ae1b7e34-9f6c-4c8f-bf08-99a1e390034c                                             |
> | security_groups       | 8adda6d7-1b3e-4047-a130-a57609a0bd68                                             |
> | status                | ACTIVE                                                                           |
> | tenant_id             | 09945e673b7a4ab183afb166735b4fa7                                                 |
> +-----------------------+----------------------------------------------------------------------------------+
>
> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron
> port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9  --allowed-address-pairs
> [] action=clear
> AllowedAddressPair must contain ip_address
>
>
> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron
> port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9  --allowed-address-pairs
> [10.0.0.201] action=clear
> The number of allowed address pair exceeds the maximum 10.
>
> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack# neutron
> port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9  --allowed-address-pairs
>  action=clear
> Request Failed: internal server error while processing your request.
>
>
>
>
> On Mon, Sep 28, 2015 at 1:57 AM, Akihiro Motoki <amotoki at gmail.com> wrote:
>
>> As already mentioned, we need to pass [] (an empty list) rather than None
>> as allowed_address_pairs.
>>
>> At the moment it is not supported in Neutron CLI.
>> This review https://review.openstack.org/#/c/218551/ is trying to fix
>> this problem.
>>
>> Akihiro
>>
>>
>> 2015-09-28 15:51 GMT+09:00 shihanzhang <ayshihanzhang at 126.com>:
>>
>>> I don't see any exception using the command below:
>>>
>>> root at szxbz:/opt/stack/neutron# neutron port-update
>>> 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
>>> Allowed address pairs must be a list.
>>>
>>>
>>>
>>> At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net> wrote:
>>>
>>> stable KILO
>>>
>>> Are you saying I should check out the latest code? Also, can you
>>> please confirm whether you have tested this at your end without any
>>> problem?
>>>
>>>
>>> Thanks
>>>
>>> On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com>
>>> wrote:
>>>
>>>> which branch do you use? This problem does not exist in the master branch.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net>
>>>> wrote:
>>>>
>>>> Can anybody highlight why the following command is throwing an
>>>> exception:
>>>>
>>>> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
>>>> --allowed-address-pairs action=clear
>>>>
>>>> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
>>>> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
>>>> recent call last):
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
>>>> method(request=request, **args)
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>> allow_bulk=self._allow_bulk)
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
>>>> prepare_request_body
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>> attr_vals['validate'][rule])
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
>>>> _validate_allowed_address_pairs
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
>>>> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError: object
>>>> of type 'NoneType' has no len()
>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>
>>>>
>>>>
>>>> There is a similar bug filed at Lauchpad for Havana
>>>> https://bugs.launchpad.net/juniperopenstack/+bug/1351979 .However
>>>> there is no fix and the work around  - using curl, mentioned on the bug is
>>>> also not working for KILO...it was working for havana and Icehouse....any
>>>> pointers...?
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
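
The TypeError in the traceback quoted above is easy to reproduce outside
Neutron: the CLI sends allowed_address_pairs=None and the Kilo-era validator
calls len() on it before any type check. The sketch below mimics, rather than
copies, the real validator in allowedaddresspairs.py; the function names and
the second message string follow the errors seen in this thread:

```python
def validate_kilo(address_pairs, max_pairs=10):
    # Kilo-era shape of the check: len() runs before any type check, so
    # address_pairs=None (what the CLI sends) raises TypeError.
    if len(address_pairs) > max_pairs:
        return "The number of allowed address pair exceeds the maximum %d." % max_pairs
    return None

def validate_fixed(address_pairs, max_pairs=10):
    # Guarding the type first turns None into a clean validation error,
    # matching the "Allowed address pairs must be a list." message that
    # master reports instead of a 500.
    if not isinstance(address_pairs, list):
        return "Allowed address pairs must be a list."
    if len(address_pairs) > max_pairs:
        return "The number of allowed address pair exceeds the maximum %d." % max_pairs
    return None

try:
    validate_kilo(None)
except TypeError as exc:
    print(exc)  # object of type 'NoneType' has no len()

print(validate_fixed(None))  # Allowed address pairs must be a list.
```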
>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/62441973/attachment.html>

From thierry at openstack.org  Mon Sep 28 14:08:29 2015
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 28 Sep 2015 16:08:29 +0200
Subject: [openstack-dev] [all] Proposed Mitaka release schedule
Message-ID: <560949DD.4060503@openstack.org>

Hi everyone,

You can find the proposed release schedule for Mitaka here:

https://wiki.openstack.org/wiki/Mitaka_Release_Schedule

That places the end release on April 7, 2016. It's also worth noting
that in an effort to maximize development time, this schedule reduces
the time between Feature Freeze and final release by one week (5 weeks
instead of 6 weeks). That means we'll collectively have to be a lot
stricter on Feature freeze exceptions this time around. Be prepared for
that.

Feel free to ping the Release management team members on
#openstack-relmgr-office if you have any question.

-- 
Thierry Carrez (ttx)


From andrew at lascii.com  Mon Sep 28 14:11:31 2015
From: andrew at lascii.com (Andrew Laski)
Date: Mon, 28 Sep 2015 10:11:31 -0400
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <56094592.1020405@inaugust.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com> <56094592.1020405@inaugust.com>
Message-ID: <20150928141131.GK8745@crypt>

On 09/28/15 at 08:50am, Monty Taylor wrote:
>On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
>>
>>
>>Le 28/09/2015 12:35, Duncan Thomas a ?crit :
>>>
>>>
>>>On 28 September 2015 at 12:35, Sylvain Bauza <sbauza at redhat.com
>>><mailto:sbauza at redhat.com>> wrote:
>>>
>>>    About the maintenance burden, I also consider that patching
>>>    clients is far more easier than patching an API unless I missed
>>>    something.
>>>
>>>
>>>I think I very much disagree there - patching a central installation
>>>is much, much easier than getting N customers to patch M different
>>>libraries, even assuming the fix is available for any significant
>>>subset of the M libraries, plus making sure that new customers use the
>>>correct libraries, plus helping any customers who have some sort of
>>>roll-your-own library do the new right thing...
>>>
>>
>>Well, having N versions of clients against one single API version is
>>just something we manage since the beginning. I don't really see why it
>>suddently becomes so difficult to manage it.
>>
>>
>>>I think there's a definite place for a simple API to do infrastructure
>>>level orchestration without needing the complexities of heat - these
>>>APIs are in nova because they're useful - there's clear operator
>>>desire for them and a couple of operators have been quite vocal about
>>>their desire for them not to be removed. Great, let's keep them, but
>>>form a team of people interested in getting them right (get rid of
>>>fixed timeouts, etc), add any missing pieces (like floating IPs for
>>>new VMs) and generally focus on getting this piece of the puzzle
>>>right. Breaking another small piece off nova and polishing it has been
>>>a generally successful pattern.
>>
>>I don't want to overthink what could be the right scope of that future
>>API but given the Heat mission statement [1] and its service name
>>'orchestration', I don't see why this API endpoint should land in the
>>Nova codebase and couldn't be rather provided by the Heat API. Oh sure,
>>it would perhaps require another endpoint behind the same service, but
>>isn't that better than having another endpoint in Nova ?
>>
>>[1]
>>https://github.com/openstack/governance/blob/master/reference/projects.yaml#L482-L484
>>
>>
>>>
>>>I remember Monty Taylor (copied) having a rant about the lack of the
>>>perfect 'give me a VM with all its stuff sorted' API. Care to comment,
>>>Monty?
>>
>>Sounds you misunderstood me. I'm not against implementing this excellent
>>usecase, I just think the best place is not in Nova and should be done
>>elsewhere.
>>
>
>Specifically, I want "nova boot" to get me a VM with an IP address. I 
>don't want it to do fancy orchestration - I want it to not need fancy 
>orchestration, because needing fancy orchestration to get a VM  on a 
>network is not a feature.

In the networking case there is a minimum of orchestration because the 
time required to allocate a port is small.  What has been requiring 
orchestration is the creation of volumes, because Cinder may need to 
download an image, or be on a backend that supports fast cloning and 
rely on a cache hit.  So the question under discussion is: when booting 
an instance relies on another service performing a long-running 
operation, where is a good place to handle that?

My thinking for a while has been that we could use another API that 
could manage those things, and be the central place you're looking for: 
pass a simple "nova boot" with whatever options are required, so you 
don't have to manage the complexities of calls to 
Neutron/Cinder/Nova (the current API).  What's become clear to me from this 
thread is that people don't seem to oppose that idea; however, they don't 
want their users/clients to need to switch which API they're currently 
using (Nova).

The right way to proceed with this idea seems to be by evolving the 
Nova API and potentially creating a split down the road.  And by split I 
mean more an architectural split within Nova, not necessarily a split 
API.  What I imagine is that we follow the model of git, with a plumbing 
API and a porcelain API, each focused on doing the right things.
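
The boot-from-volume case is the kind of client-side orchestration at issue.
A minimal sketch of what clients effectively do today, with a stand-in object
in place of the real python-cinderclient/python-novaclient calls (which need
a live cloud); the class and function names here are invented for
illustration:

```python
import time

class FakeCinder:
    """Stand-in for Cinder: volume creation is a long-running operation
    because the image may still be downloading into the volume."""
    def __init__(self):
        self._polls = iter(["downloading", "downloading", "available"])
    def create_volume(self, size_gb, image):
        return "vol-1"
    def status(self, vol_id):
        return next(self._polls)

def boot_from_volume(cinder, nova_boot, size_gb, image):
    # Client-side orchestration: create the volume, poll until usable,
    # then boot the instance from it.
    vol = cinder.create_volume(size_gb, image)
    while cinder.status(vol) != "available":
        time.sleep(0)  # real clients sleep/back off between polls
    return nova_boot(block_device=vol)

server = boot_from_volume(FakeCinder(),
                          lambda block_device: "vm-on-" + block_device,
                          10, "cirros")
print(server)  # vm-on-vol-1
```

A porcelain "boot" endpoint would hide exactly this create-then-poll loop
behind one call, while the plumbing calls stay available for tweaking.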


>
>I also VERY MUCH do not want to need Heat to get a VM. I want to use 
>Heat to do something complex. Getting a VM is not complex. It should 
>not be complex. What it's complex and to the level of needing Heat, 
>we've failed somewhere else.
>
>Also, people should stop deploying clouds that require people to use 
>floating IPs to get basic internet access. It's a misuse of the 
>construct.
>
>Public Network "ext-net" -> shared / directly attachable
>Per-tenant Network "private" -> private network, not shared, not routable
>
>If the user chooses, a router can be added with gateway set to ext-net.
>
>This way:
>
>nova boot --network=ext-net  -> vm dhcp'd on the public network
>nova boot --network=private  -> vm dhcp'd on the private network
>nova floating-ip-attach      -> vm gets a floating ip attached to 
>their vm from the ext-net network
>
>All of the use cases are handled, basic things are easy (boot a vm on 
>the network works in one step) and for the 5% of cases where a 
>floating IP is actually needed (a long-lived service on a single vm 
>that wants to keep the IP and not just a DNS name across VM 
>migrations and isn't using a load-balancer) can use that.
>
>This is, btw, the most common public cloud deployment model.
>
>Let's stop making things harder than they need to be and serve our users.
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From sean at dague.net  Mon Sep 28 14:19:06 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 10:19:06 -0400
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <20150928141131.GK8745@crypt>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com> <56094592.1020405@inaugust.com>
 <20150928141131.GK8745@crypt>
Message-ID: <56094C5A.3010306@dague.net>

On 09/28/2015 10:11 AM, Andrew Laski wrote:
> On 09/28/15 at 08:50am, Monty Taylor wrote:
>> On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
<snip>
>>
>> Specifically, I want "nova boot" to get me a VM with an IP address. I
>> don't want it to do fancy orchestration - I want it to not need fancy
>> orchestration, because needing fancy orchestration to get a VM  on a
>> network is not a feature.
> 
> In the networking case there is a minimum of orchestration because the
> time required to allocate a port is small.  What has been requiring
> orchestration is the creation of volumes because of the requirement of
> Cinder to download an image, or be on a backend that support fast
> cloning and rely on a cache hit.  So the question under discussion is
> when booting an instance relies on another service performing a long
> running operation where is a good place to handle that.
> 
> My thinking for a while has been that we could use another API that
> could manage those things.  And be the central place you're looking for
> to pass a simple "nova boot" with whatever options are required so you
> don't have to manage the complexities of calls to
> Neutron/Cinder/Nova(current API).  What's become clear to me from this
> thread is that people don't seem to oppose that idea, however they don't
> want their users/clients to need to switch what API they're currently
> using(Nova).
> 
> The right way to proceed with this idea seems to be to by evolving the
> Nova API and potentially creating a split down the road.  And by split I
> more mean architectural within Nova, and not necessarily a split API. 
> What I imagine is that we follow the model of git and have a plumbing
> and porcelain API and each can focus on doing the right things.

Right, and I think that's a fine approach. Nova's job is "give me a
working VM". Working includes networking and persistent storage. The API
semantics for "give me a working VM" should exist in Nova.

It is also fine if there are lower-level calls that tweak parts of that,
but nova boot shouldn't have to be a multi-step API process for the
user. Building one working VM you can do something with is really the
entire point of Nova.

	-Sean

-- 
Sean Dague
http://dague.net


From soulxu at gmail.com  Mon Sep 28 14:25:07 2015
From: soulxu at gmail.com (Alex Xu)
Date: Mon, 28 Sep 2015 22:25:07 +0800
Subject: [openstack-dev] [nova] Nova API sub-team meeting
Message-ID: <CAH7mGauPih82vQMbTD=5M1upiiK+ocROnPkDkJgmxbcquX1t4Q@mail.gmail.com>

Hi,

We have our weekly Nova API meeting this week. The meeting is being held
Tuesday at 12:00 UTC.

In other timezones the meeting is at:

EST 08:00 (Tue)
Japan 21:00 (Tue)
China 20:00 (Tue)
United Kingdom 13:00 (Tue)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/f9325638/attachment.html>

From mspreitz at us.ibm.com  Mon Sep 28 14:08:26 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Mon, 28 Sep 2015 10:08:26 -0400
Subject: [openstack-dev] [devstack] Is there a way to configure devstack for
 one flat external network using Kilo, Neutron?
Message-ID: <201509281431.t8SEVY21016426@d03av05.boulder.ibm.com>

Is there a way to configure devstack to install Neutron such that there is
just one network, an external network on which Nova can create Compute
Instances, using projects of Kilo vintage?

Thanks,
Mike
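
Not an authoritative answer, but devstack's provider-network support is the
usual route to a single flat external network. A local.conf sketch along
these lines, where the variable names follow devstack's Neutron
provider-network documentation and the interface, bridge, and physical
network names are assumptions to adapt to the host:

```ini
[[local|localrc]]
# Single flat provider network as the only (external) network
Q_USE_PROVIDER_NETWORKING=True
Q_L3_ENABLED=False
PROVIDER_NETWORK_TYPE=flat
PHYSICAL_NETWORK=default
PUBLIC_INTERFACE=eth1
OVS_PHYSICAL_BRIDGE=br-ex
```

With L3 disabled, instances DHCP directly on that network rather than going
through a router and floating IPs.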



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/48c438f4/attachment.html>

From e0ne at e0ne.info  Mon Sep 28 14:41:46 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Mon, 28 Sep 2015 17:41:46 +0300
Subject: [openstack-dev] [all] Proposed Mitaka release schedule
In-Reply-To: <560949DD.4060503@openstack.org>
References: <560949DD.4060503@openstack.org>
Message-ID: <CAGocpaFPMM932M1jBjjb-9FS3kHLjAKY+hhQHuapZgu8dpe9vQ@mail.gmail.com>

Hi Thierry,

Thank you for sharing this information with us so early. One
comment/question from me about FinalClientLibraryRelease:

Could we make the client release at least one week after the M-3 milestone?
That would give us a better chance to land features in the client if
they were merged shortly before M-3 and feature freeze.

Regards,
Ivan Kolodyazhny

On Mon, Sep 28, 2015 at 5:08 PM, Thierry Carrez <thierry at openstack.org>
wrote:

> Hi everyone,
>
> You can find the proposed release schedule for Mitaka here:
>
> https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
>
> That places the end release on April 7, 2016. It's also worth noting
> that in an effort to maximize development time, this schedule reduces
> the time between Feature Freeze and final release by one week (5 weeks
> instead of 6 weeks). That means we'll collectively have to be a lot
> stricter on Feature freeze exceptions this time around. Be prepared for
> that.
>
> Feel free to ping the Release management team members on
> #openstack-relmgr-office if you have any question.
>
> --
> Thierry Carrez (ttx)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/1b8c2873/attachment.html>

From mspreitz at us.ibm.com  Mon Sep 28 14:28:52 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Mon, 28 Sep 2015 10:28:52 -0400
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <56094592.1020405@inaugust.com>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com> <56094592.1020405@inaugust.com>
Message-ID: <OFC69F6190.20FA1FDB-ON85257ECE.004F5E33-85257ECE.004F8C32@notes.na.collabserv.com>

> From: Monty Taylor <monty at inaugust.com>
> To: Sylvain Bauza <sbauza at redhat.com>, "OpenStack Development 
> Mailing List (not for usage questions)" 
> <openstack-dev at lists.openstack.org>
> Date: 09/28/2015 09:54 AM
> Subject: Re: [openstack-dev] Compute API (Was Re: [nova][cinder] how
> to handle AZ bug 1496235?)
>
> ...
> Specifically, I want "nova boot" to get me a VM with an IP address. I 
> don't want it to do fancy orchestration - I want it to not need fancy 
> orchestration, because needing fancy orchestration to get a VM  on a 
> network is not a feature.
> 
> I also VERY MUCH do not want to need Heat to get a VM. I want to use 
> Heat to do something complex. Getting a VM is not complex. It should not 
> be complex. When it's complex to the level of needing Heat, we've 
> failed somewhere else.
> 
> Also, people should stop deploying clouds that require people to use 
> floating IPs to get basic internet access. It's a misuse of the 
> construct.
> 
> Public Network "ext-net" -> shared / directly attachable
> Per-tenant Network "private" -> private network, not shared, not 
> routable
> 
> If the user chooses, a router can be added with gateway set to ext-net.
> 
> This way:
> 
> nova boot --network=ext-net  -> vm dhcp'd on the public network
> nova boot --network=private  -> vm dhcp'd on the private network
> nova floating-ip-attach      -> vm gets a floating ip attached to their 
> vm from the ext-net network
> 
> All of the use cases are handled, basic things are easy (boot a vm on 
> the network works in one step) and for the 5% of cases where a floating 
> IP is actually needed (a long-lived service on a single vm that wants to 
> keep the IP and not just a DNS name across VM migrations and isn't using 
> a load-balancer) can use that.
> 
> This is, btw, the most common public cloud deployment model.
> 
> Let's stop making things harder than they need to be and serve our 
> users.
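(For concreteness, the model above maps to roughly these Kilo-era CLI calls;
network names and address ranges here are illustrative, not a tested recipe:)

```shell
# Operator: one shared, directly attachable public network.
neutron net-create ext-net --shared --router:external \
    --provider:network_type flat --provider:physical_network public
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet

# Tenant: a private network, with an optional router gatewayed to ext-net.
neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private-subnet
neutron router-create router1
neutron router-gateway-set router1 ext-net
neutron router-interface-add router1 private-subnet

# Boot directly on either network (net-id looked up by name).
nova boot --image cirros --flavor m1.tiny \
    --nic net-id=$(neutron net-show -f value -c id ext-net) vm1
```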

As an operator, +1

Mike



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/a0d6f8f2/attachment.html>

From annegentle at justwriteclick.com  Mon Sep 28 14:55:38 2015
From: annegentle at justwriteclick.com (Anne Gentle)
Date: Mon, 28 Sep 2015 09:55:38 -0500
Subject: [openstack-dev] [election] [tc] Candidacy for Technical Committee
Message-ID: <CAD0KtVHZ=hWOxixv5HdKMw-PtW5dCDbj=zWV4S8QuF336MF9YA@mail.gmail.com>

Hi all,

I'm writing to let you know I would like to run for Technical Committee
again this term.

Here is my candidacy statement for your review.

I have worked at Rackspace on the OpenStack collection of projects for five
years now, working primarily on documentation. Rackspace is one of the
founding organizations and currently runs and supports both public and
private clouds. I work on the developer experience team at Rackspace, where
we advocate for the cloud developer through support, tools, outreach in the
community, and documentation.

In that time I've learned a lot about the people and projects that make up
OpenStack, and I've served the project in capacities that are often lacking
in open source -- documentation, communication, API standards, and
on-boarding newcomers, especially under-represented groups such as women in
technology.

The growth of OpenStack as a whole has been phenomenal. That growth and
attention has afforded us some experimentation with community, collaborative
documentation. In 2012 we assembled a team of operators to write the
OpenStack Operations Guide, which is now available as an O'Reilly book. We
have copied that model successfully twice now, with the Security Guide and
the Architecture and Design Guide, and project teams are now completing
focused documentation efforts. In 2014 the docs team implemented a new
RST-based web design that makes the docs.openstack.org site more usable and
also easier to contribute to. In the Kilo release, the highest number of
docs contributors worked on the REST API documentation. In the past six
months I've been writing blog entries with fellow TC members in order to
offer a line of communication beyond the mailing lists. This year my focus
has been on API documentation and cross-project work, all the while
supporting Docs PTL Lana Brindley in continuing the many documentation
efforts.

Why should I earn your vote? Vote for me if you care especially about
cross-project collaboration and communication. In the next year I want to
find innovative solutions to hard problems as OpenStack matures and
continues to strive for interoperability.

My expertise and experience in end-user support, including useful and
consistent REST APIs, lends itself well to working on the Technical
Committee. I have an extensive network of relationships with both public
and private cloud experts at Rackspace and beyond.

I am interested in the long-term viability of OpenStack as a whole. I
intend to continue working hard for the community. I believe the Technical
Committee is the best place for me to work with others to keep OpenStack on
the right track to offering an open source cloud for the world.

Get to know me and my work:
OpenStack profile: https://www.openstack.org/community/members/profile/87
OpenStack blog: https://www.openstack.org/blog/author/annegentle/
Stackalytics: http://stackalytics.com/?user_id=annegentle
Reviews: https://review.openstack.org/#/q/reviewer:%22Anne+Gentle%22,n,z
Commits: https://review.openstack.org/#/q/owner:%22Anne+Gentle%22,n,z
IRC: annegentle
Blog: http://justwriteclick.com/
Candidate Statement: https://review.openstack.org/228482

Thanks for your consideration,
Anne

-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/3496c358/attachment.html>

From sbauza at redhat.com  Mon Sep 28 15:01:12 2015
From: sbauza at redhat.com (Sylvain Bauza)
Date: Mon, 28 Sep 2015 17:01:12 +0200
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <56094C5A.3010306@dague.net>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com> <56094592.1020405@inaugust.com>
 <20150928141131.GK8745@crypt> <56094C5A.3010306@dague.net>
Message-ID: <56095638.2000003@redhat.com>



On 28/09/2015 16:19, Sean Dague wrote:
> On 09/28/2015 10:11 AM, Andrew Laski wrote:
>> On 09/28/15 at 08:50am, Monty Taylor wrote:
>>> On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
> <snip>
>>> Specifically, I want "nova boot" to get me a VM with an IP address. I
>>> don't want it to do fancy orchestration - I want it to not need fancy
>>> orchestration, because needing fancy orchestration to get a VM  on a
>>> network is not a feature.
>> In the networking case there is a minimum of orchestration because the
>> time required to allocate a port is small.  What has been requiring
>> orchestration is the creation of volumes because of the requirement of
>> Cinder to download an image, or be on a backend that supports fast
>> cloning and rely on a cache hit.  So the question under discussion is
>> when booting an instance relies on another service performing a long
>> running operation where is a good place to handle that.
>>
>> My thinking for a while has been that we could use another API that
>> could manage those things.  And be the central place you're looking for
>> to pass a simple "nova boot" with whatever options are required so you
>> don't have to manage the complexities of calls to
>> Neutron/Cinder/Nova(current API).  What's become clear to me from this
>> thread is that people don't seem to oppose that idea, however they don't
>> want their users/clients to need to switch what API they're currently
>> using(Nova).
>>
>> The right way to proceed with this idea seems to be by evolving the
>> Nova API and potentially creating a split down the road.  And by split I
>> mean more an architectural split within Nova, and not necessarily a
>> split API.
>> What I imagine is that we follow the model of git and have a plumbing
>> and porcelain API and each can focus on doing the right things.
> Right, and I think that's a fine approach. Nova's job is "give me a
> working VM". Working includes networking, persistent storage. The API
> semantics for "give me a working VM" should exist in Nova.
>
> It is also fine if there are lower level calls that tweak parts of that,
> but nova boot shouldn't have to be a multi step API process for the
> user. Building one working VM you can do something with is really the
> entire point of Nova.

I'm all for a request with some network and volume semantics in it, 
which would imply that the Nova scheduler would be able to do some 
instance placement based on cross-project resources.

What is a grey area to me is the fact that "give me a volume-backed 
instance from this image" requires some volume creation to get it done. 
So, if we consider that Nova is the best API for it (and I can 
understand the motivation for it), then we need some clear architectural 
segmentation between Nova and the other projects within Nova, like Andrew 
said (sorry if you feel I'm paraphrasing). For example, the move to 
os-brick is one of the efforts we should make, but it's just one step, 
since we should decouple all the tasks involved in instance creation 
to isolate them in a better way.

-Sylvain

> 	-Sean
>



From waldemar.znoinski at intel.com  Mon Sep 28 15:02:08 2015
From: waldemar.znoinski at intel.com (Znoinski, Waldemar)
Date: Mon, 28 Sep 2015 15:02:08 +0000
Subject: [openstack-dev]  [nova][infra] Intel NFV CI, testing hugepages,
 numa topology, cpu pinning
Message-ID: <BBC443A36D29714C9068F3D85E2D81D17C7D3B@IRSMSX101.ger.corp.intel.com>

Hi cores et al,

As discussed briefly with Jay Pipes in Vancouver, we set out to provide CI and tests for features such as hugepages, CPU pinning and NUMA topology. We have since been working on the NFV CI here at Intel, and the work on the CI and tests is now complete.

A few details about the CI and tests.
* CI WIKI [1]
* the last issues with the tests [2] were resolved two weeks ago, and the CI has been stable since then
* currently the CI is NOT commenting back to Gerrit
* changes and artifacts samples below[3]
* Intel NFV CI is based on the infrastructure used by Intel Networking CI [4] - see comments left by this CI for comment form and content


[1] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
[2] https://github.com/stackforge/intel-nfv-ci-tests
[3]
https://review.openstack.org/#/c/115484/ , http://intel-openstack-ci-logs.ovh/compute-ci/refs/changes/84/115484/28/
https://review.openstack.org/#/c/228168/ , http://intel-openstack-ci-logs.ovh/compute-ci/refs/changes/68/228168/2/
https://review.openstack.org/#/c/227138/ , http://intel-openstack-ci-logs.ovh/compute-ci/refs/changes/38/227138/3

[4] https://review.openstack.org/#/q/reviewer:%22Intel+Networking+CI+%253Copenstack-networking-ci%2540intel.com%253E%22,n,z

Unless there are doubts and/or blockers, we'd like to enable our CI to comment back with results and links on Nova changes in OpenStack Gerrit.
All feedback is welcome. I'll be attending the Nova meeting this Thursday at 1400 UTC for any follow-ups as well.


Thanks
Waldek

--------------------------------------------------------------
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/22c7a827/attachment.html>

From rybrown at redhat.com  Mon Sep 28 15:11:45 2015
From: rybrown at redhat.com (Ryan Brown)
Date: Mon, 28 Sep 2015 11:11:45 -0400
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <BLU436-SMTP258B7E08A4B003762D19DB4D8410@phx.gbl>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt>
 <CAOyZ2aHKgwPvveqxU5hcz7GAJBXebQmZShmtJyONsi8=DVYXsA@mail.gmail.com>
 <BLU436-SMTP258B7E08A4B003762D19DB4D8410@phx.gbl>
Message-ID: <560958B1.9050700@redhat.com>

On 09/26/2015 12:04 AM, Joshua Harlow wrote:
> +1 from me, although I thought heat was supposed to be this thing?
>
> Maybe there should be a 'warm' project or something ;)
>
> Or we can call it 'bbs' for 'building block service' (obviously not
> bulletin board system); ask said service to build a set of blocks into
> well defined structures and let it figure out how to make that happen...
>
> This most definitely requires cross-project agreement though, so
> I'd hope we can reach that somehow (before creating a halfway-done new
> orchestration thing that is halfway integrated with a bunch of other
> apis that do one quarter of the work in ten different ways).

Indeed, I don't think I understand what need Heat is failing to fulfill 
here. A user can easily have a template that contains a single server 
and a volume.

Heat's job is to be an API that lets you define a result[1] and then 
calls the APIs of whatever projects provide those things.

1: in this case, the result is "a working server with network and storage"
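A rough HOT sketch of exactly that (the image, flavor and network names are
placeholders to adapt to your cloud):

```yaml
heat_template_version: 2014-10-16

resources:
  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10

  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
      networks:
        - network: private

  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: server }
      volume_id: { get_resource: data_volume }
```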

> Duncan Thomas wrote:
>> I think there's a place for yet another service breakout from nova -
>> some sort of light-weight platform orchestration piece, nothing as
>> complicated or complete as heat, nothing that touches the inside of a
>> VM, just something that can talk to cinder, nova and neutron (plus I
>> guess ironic and whatever the container thing is called) and work
>> through long running / cross-project tasks. I'd probably expect it to
>> provide a task style interface, e.g. a boot-from-new-volume call returns
>> a request-id that can then be polled for detailed status.
>>
>> The existing nova API for this (and any other nova APIs where this makes
>> sense) can then become a proxy for the new service, so that tenants are
>> not affected. The nova apis can then be deprecated in slow time.
>>
>> Anybody else think this could be useful?

-- 
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.


From gord at live.ca  Mon Sep 28 15:22:13 2015
From: gord at live.ca (gord chung)
Date: Mon, 28 Sep 2015 11:22:13 -0400
Subject: [openstack-dev] [ceilometer] OpenStack Telemetry user survey
Message-ID: <BLU436-SMTP18560C1DE4D5F50C2F788B1DE4F0@phx.gbl>

Hello,

The OpenStack Telemetry (aka Ceilometer) team would like to collect 
feedback and information from its user base in order to drive future 
improvements to the project.  To do so, we have developed a survey; it 
should take about 15 minutes to complete.
The questions are fairly technical, so please ensure that you ask someone 
within your organization who is hands-on with Ceilometer.

https://goo.gl/rKNhM1

On behalf of the Ceilometer community, we thank you for the time you 
will spend in helping us understand your needs.

-- 
Gordon Chung
Ceilometer PTL


From sumanth.sathyanarayana at gmail.com  Mon Sep 28 15:23:41 2015
From: sumanth.sathyanarayana at gmail.com (Sumanth Sathyanarayana)
Date: Mon, 28 Sep 2015 08:23:41 -0700
Subject: [openstack-dev] [murano] Murano code flow for custom
 development and combining murano with horizon in devstack
Message-ID: <CAGCi2YQ=3D49rbQxsy35yrw0roP3xBKhoZckPZb9oF6Z-w8wXw@mail.gmail.com>

Thanks Kirill.

Would adding "[murano]" to the subject line suffice, or please let me know
if there is a separate mailing list for murano?

Let me try to clarify what I am asking over here.
So we have all these panels - 'Compute', 'Network', etc on Horizon
dashboard.
If I add another panel like 'Example1' in Horizon by changing the code of
openstack-dashboard inside horizon, it gets added. Similarly if I change
the code of murano-dashboard and add another panel 'Example2' it gets added
as a separate panel. Now, assuming I have some changes in Horizon's
openstack-dashboard (i.e. example 1) and some changes in
murano-dashboard (i.e. example 2), is there a way of combining them into one
panel? I.e., say along with the Compute, Network, etc. panels, I have
something like a Murano panel under which both the changes of Horizon's
openstack-dashboard (example 1 subpanel) and murano-dashboard (example 2
subpanel) can be incorporated.

Would a simple copy-paste of say all the changes in Horizon's
openstack-dashboard to murano-dashboard work, or is there a better way of
handling it?  Are the murano-dashboard and openstack-dashboard code flows
similar, with all the different files like 'tables.py', 'form.py', 'views.py', etc?
Does murano have a tutorial page something similar to this -
http://docs.openstack.org/developer/horizon/topics/tutorial.html

Thanks & Best Regards
Sumanth

On Mon, Sep 28, 2015 at 3:56 AM, Kirill Zaitsev <kzaitsev at mirantis.com>
wrote:

> Hi, murano-dashboard works the same way any other horizon dashboard does.
>
> I'm not quite sure what you meant by "combined and showed under one tab",
> could you please elaborate?
> If you're asking about debugging: you can install murano-dashboard
> locally and configure it to use a remote cloud (i.e. devstack) as described
> here
> http://murano.readthedocs.org/en/latest/install/manual.html#install-murano-dashboard .
> If not, then I didn't quite understand what you asked in the first place =)
>
> Feel free to come and ask around in #murano; you might get help there
> faster than on the ML =)
>
> --
> Kirill Zaitsev
> Murano team
> Software Engineer
> Mirantis, Inc
>
> On 26 Sep 2015 at 02:24:10, Sumanth Sathyanarayana (
> sumanth.sathyanarayana at gmail.com) wrote:
>
> Hello,
>
> Could anyone let me know if the changes in murano-dashboard and horizon's
> openstack-dashboard can both be combined and shown under one tab,
> i.e. say under Murano tab on the side left panel all the changes done in
> horizon and murano both appear.
>
> If anyone could point me to a link explaining custom development of murano
> and the code flow would be very helpful...
>
> Thanks & Best Regards
> Sumanth
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/1f44e387/attachment.html>

From douglas.mendizabal at rackspace.com  Mon Sep 28 15:25:32 2015
From: douglas.mendizabal at rackspace.com (Douglas Mendizábal)
Date: Mon, 28 Sep 2015 10:25:32 -0500
Subject: [openstack-dev] [Barbican][Security] Automatic Certificate
 Management Environment
In-Reply-To: <D229D17E.2D4E7%robert.clark@hpe.com>
References: <D229D17E.2D4E7%robert.clark@hpe.com>
Message-ID: <56095BEC.9050208@rackspace.com>


Hi Rob,

I agree that the ACME standard is very interesting, but I'm also
pretty new to it.  And by pretty new I mean I just started reading it
after I saw your email. :)

There has been interest in adding support for Let's Encrypt as a
Barbican back end since it was first announced, but I don't think
anyone has looked into it recently.

Being able to deploy Barbican and issue free certificates via Let's
Encrypt is a very compelling feature, but it's definitely going to
take some effort to figure out the best way to do it.  The first issue
that I can see is sorting out auth.  The Public CA plugins we're
developing all work under the assumption that Barbican itself is a
client to the CA and has a set of credentials that can be used to
interact with it.

The Let's Encrypt model is different in that Barbican itself probably
does not want to have a set of credentials to talk to Let's
Encrypt, or we would end up owning all of our clients' certs.  So
we'll have to figure out a way for clients to be able to provide their
Let's Encrypt credentials to Barbican.

Another challenge is going to be sorting out the automated domain
verification.  We could possibly just proxy the challenges to the
client and then wait for the client to let us know which verification
challenges have been completed.

As far as supporting ACME on the front end, I'm not sure it would be
possible.  There is a lot of overlap between ACME and the current
Barbican CMS API.  Adding support for ACME would basically mean
supporting two competing APIs in a single product, which is sure to
cause confusion and a ton of overhead in development.

Additionally, there are features in ACME that I think would prove
almost impossible to support with existing public CAs.  Namely the
automated challenge feature.  Currently Barbican CMS defers the Domain
Validation to some out-of-band process between the CA and the client.
 So you could, for example, order a Symantec Cert using the Barbican
API.  Barbican would begin the Cert workflow in an automated fashion,
but at some point someone from Symantec is going to email the domain
owner and they're going to have to respond to the challenges the good
old fashioned way.

I think that a prerequisite for ACME support on the front end is going
to be Public CA support of the ACME standard.  At that point we could
probably phase out the Barbican CMS API, and just support ACME on the
front end.

- - Douglas Mendizábal

On 9/24/15 10:12 AM, Clark, Robert Graham wrote:
> Hi All,
> 
> So I did a bit of tyre kicking with Letsencrypt today, one of the
> things I thought was interesting was the adherence to the
> burgeoning Automatic Certificate Management Environment (ACME)
> standard.
> 
> https://letsencrypt.github.io/acme-spec/
> 
> It's one of the more readable crypto-related standards drafts out
> there; reading it has me wondering how this might be useful for
> Anchor, or indeed for Barbican, where things get quite interesting,
> both at the front end (enabling ACME clients to engage with
> Barbican) and at the back end (enabling Barbican to talk to any
> number of ACME-enabled CA endpoints).
> 
> I was wondering if there's been any discussion/review here; I'm new
> to ACME, but I'm not sure if I'm late to the party?
> 
> Cheers -Rob
> 
> 


From dms at redhat.com  Mon Sep 28 15:31:30 2015
From: dms at redhat.com (David Moreau Simard)
Date: Mon, 28 Sep 2015 11:31:30 -0400
Subject: [openstack-dev] [puppet] Moving puppet-ceph to the Openstack big
	tent
Message-ID: <CAH7C+Pr2rA65O8gm3B1A64fFPctb_-q=jE=a6AHKJcfHNY1OEw@mail.gmail.com>

Hi,

puppet-ceph currently lives in stackforge [1] which is being retired
[2]. puppet-ceph is also mirrored on the Ceph Github organization [3].
This version of the puppet-ceph module was created from scratch and
not as a fork of the (then) upstream puppet-ceph by Enovance [4].
Today, the version by Enovance is no longer officially maintained
since Red Hat has adopted the new release.

Being an OpenStack project under Stackforge or OpenStack brings a lot
of benefits, but it's not black and white; there are cons too.

It provides us with the tools, the processes and the frameworks to
review and test each contribution to ensure we ship a module that is
stable and is held to the highest standards.
But it also means that:
- We cede some level of ownership to the OpenStack Foundation, its
technical committee and the Puppet OpenStack PTL.
- puppet-ceph contributors will also be required to sign the
Contributor License Agreement and jump through the Gerrit hoops [5],
which can make contributing to the project harder.

We have put tremendous efforts into creating a quality module and as
such it was the first puppet module in the stackforge organization to
implement not only unit tests but also integration tests with third
party CI.
Integration testing for the other puppet modules is just now starting to
take shape, using the OpenStack CI infrastructure.

In the context of OpenStack, RDO already ships with a means to install
Ceph with this very module, and Fuel will be adopting it soon as well.
This means the module will benefit from real world experience and
improvements by the Openstack community and packagers.
This will help further reinforce that not only Ceph is the best
unified storage solution for Openstack but that we have means to
deploy it in the real world easily.

We all know that Ceph is also deployed outside of this context and
this is why the core reviewers make sure that contributions remain
generic and usable outside of this use case.

Today, the core members of the project discussed whether or not we
should move puppet-ceph to the Openstack big tent and we had a
consensus approving the move.
We would also like to hear the thoughts of the community on this topic.

Please let us know what you think.

Thanks,

[1]: https://github.com/stackforge/puppet-ceph
[2]: https://review.openstack.org/#/c/192016/
[3]: https://github.com/ceph/puppet-ceph
[4]: https://github.com/redhat-cip/puppet-ceph
[5]: https://wiki.openstack.org/wiki/How_To_Contribute

David Moreau Simard


From ed at leafe.com  Mon Sep 28 15:47:43 2015
From: ed at leafe.com (Ed Leafe)
Date: Mon, 28 Sep 2015 10:47:43 -0500
Subject: [openstack-dev] [election][tc] TC Candidacy
Message-ID: <5609611F.2040504@leafe.com>


Greetings Stackers! I'm announcing my candidacy for the Technical
Committee Elections.

Those of you who have been involved in OpenStack most likely know me,
as I was part of the original team from Rackspace that created
OpenStack in 2010. For the past year or so I have been employed at IBM
to work on 100% upstream OpenStack development. Part of that time has
been spent getting a good overview of both the project and the
community behind it. I've also placed some of my focus on the groups
of people working *with* OpenStack, and not just those developing it.
And to that end I've been working with the current TC as much as
possible, especially in the areas of API standardization and
consistency. I'm always lurking on the regular TC meetings (and
sometimes throwing in my two cents), and reviewing as much of the
material as I can. I believe that I can have a much greater impact as
part of the TC instead of just being that occasional voice from the
back of the room.

There are some good reasons not to vote for me: the other TC
candidates. I wish I could run on a "throw out the bums" platform, but
that's simply not possible.  I know all the current members, and they
have all done an excellent job this past year, and would all represent
the community well if re-elected. I can only promise that if elected,
I will work hard to have at least as much positive impact on the
community as the TC member I might replace. That will be a difficult
task indeed, but I believe that I have the long-term experience to
help guide OpenStack as it continues to grow and thrive in the future.

- -- 

- -- Ed Leafe


From kzaitsev at mirantis.com  Mon Sep 28 15:47:41 2015
From: kzaitsev at mirantis.com (Kirill Zaitsev)
Date: Mon, 28 Sep 2015 18:47:41 +0300
Subject: [openstack-dev] [murano] Murano code flow for custom
 development and combining murano with horizon in devstack
In-Reply-To: <CAGCi2YQ=3D49rbQxsy35yrw0roP3xBKhoZckPZb9oF6Z-w8wXw@mail.gmail.com>
References: <CAGCi2YQ=3D49rbQxsy35yrw0roP3xBKhoZckPZb9oF6Z-w8wXw@mail.gmail.com>
Message-ID: <etPan.5609611d.472918b.133b7@TefMBPr.local>

Hi

1) adding [murano] would definitely suffice

2) It seems that you want to combine some of the murano panels under your own dashboard, right? (I do not really see why you would want to do that, but still.) I believe that it is possible. You can look at the muranodashboard/dashboard.py file. It defines the Panels and the Dashboard murano has. Technically you can import those and just follow the instructions on http://docs.openstack.org/developer/horizon/topics/tutorial.html you mentioned.

3) Murano-dashboard does not have such a page, because murano-dashboard is itself a dashboard built for horizon and follows the structure defined in the article you mentioned.
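To illustrate point 2, the wiring involved looks roughly like this (the class
names and slugs are made up for the example; the real structure is in
muranodashboard/dashboard.py and the horizon tutorial, and this sketch needs
a horizon/django environment to actually run):

```python
# Sketch only: hypothetical names, following horizon's plugin structure.
from django.utils.translation import ugettext_lazy as _
import horizon


class Example1Panel(horizon.Panel):
    # Panel ported from your openstack-dashboard changes
    name = _("Example1")
    slug = "example1"


class CombinedDashboard(horizon.Dashboard):
    # One dashboard grouping your panels next to the murano ones
    name = _("Murano")
    slug = "murano_combined"
    panels = ("example1",)  # add the slugs of imported murano panels too
    default_panel = "example1"


horizon.register(CombinedDashboard)
CombinedDashboard.register(Example1Panel)
```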

I might have gotten something wrong, so I once again invite you to join #murano on freenode IRC. Would probably be much faster to chat there.

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 28 Sep 2015 at 18:23:41, Sumanth Sathyanarayana (sumanth.sathyanarayana at gmail.com) wrote:

Thanks Kirill.

Would adding "[murano]" to the subject line suffice? Or please let me know if there is a separate mailing list for murano.

Let me try to clarify what I am asking over here.
So we have all these panels - 'Compute', 'Network', etc on Horizon dashboard.
If I add another panel like 'Example1' in Horizon by changing the code of openstack-dashboard inside horizon, it gets added. Similarly, if I change the code of murano-dashboard and add another panel 'Example2', it gets added as a separate panel. Now, assuming I have some changes in Horizon's openstack-dashboard (i.e. Example1) and some changes in murano-dashboard (i.e. Example2), is there a way of combining them into one panel? That is, alongside the Compute, Network, etc. panels, I would have something like a Murano panel under which both the openstack-dashboard change (Example1 as a subpanel) and the murano-dashboard change (Example2 as a subpanel) are incorporated.

Would a simple copy-paste of, say, all the changes in Horizon's openstack-dashboard to murano-dashboard work, or is there a better way of handling it? Are the murano-dashboard and openstack-dashboard code flows similar, with all the different files like 'tables.py', 'forms.py', 'views.py', etc.? Does murano have a tutorial page similar to this: http://docs.openstack.org/developer/horizon/topics/tutorial.html

Thanks & Best Regards
Sumanth

On Mon, Sep 28, 2015 at 3:56 AM, Kirill Zaitsev <kzaitsev at mirantis.com> wrote:
Hi, murano-dashboard works the same way any other horizon dashboard does.

I'm not quite sure what you meant by "combined and showed under one tab"; could you please elaborate?
If you're asking about debugging, you can install murano-dashboard locally and configure it to use a remote cloud (i.e. devstack) as described here: http://murano.readthedocs.org/en/latest/install/manual.html#install-murano-dashboard . If not, then I didn't quite understand what you asked in the first place =)

Feel free to come and ask around in #murano; you might get help there faster than on the ML =)

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 26 Sep 2015 at 02:24:10, Sumanth Sathyanarayana (sumanth.sathyanarayana at gmail.com) wrote:

Hello,

Could anyone let me know if the changes in murano-dashboard and horizon's openstack-dashboard can both be combined and shown under one tab?
That is, under a Murano tab on the left side panel, both the changes done in horizon and those done in murano would appear.

If anyone could point me to a link explaining custom development of murano and the code flow, that would be very helpful.

Thanks & Best Regards
Sumanth

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/054304c0/attachment.html>

From fungi at yuggoth.org  Mon Sep 28 16:04:07 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 28 Sep 2015 16:04:07 +0000
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <CAHV77z9cb_HC59Z0rYPpw48TMGRW0baP2iJ5xMBSJiv5Sd6+8A@mail.gmail.com>
References: <1443184498.2443.10.camel@redhat.com>
 <CAHV77z9cb_HC59Z0rYPpw48TMGRW0baP2iJ5xMBSJiv5Sd6+8A@mail.gmail.com>
Message-ID: <20150928160407.GT4731@yuggoth.org>

On 2015-09-28 09:43:37 -0400 (-0400), James Slagle wrote:
> Would the content of tripleo-docs be exactly the same as what is
> published at http://docs.openstack.org/developer/tripleo-docs/ ?
> 
> I think it probably should be, and be updated on every merged commit.
> If that's not the case, I think it should be abundantly clear why
> someone would use one set of docs over the other.
[...]

If you're looking at Project Infrastructure/Upstream CI integration
with that site, I encourage you to just move remaining content to
docs.openstack.org or possibly implement rewrites to something like
tripleo.openstack.org. We've been doing the same for other "vanity"
domains (see recent moves of devstack.org, refstack.org,
trystack.org) so that we can get everything under a common base
domain unless there is an actual technical requirement to have a
separate domain (e.g. the cross-domain browser security concerns
which drove us to register openstackid.org for our OpenID
authentication).
-- 
Jeremy Stanley


From lyz at princessleia.com  Mon Sep 28 16:09:43 2015
From: lyz at princessleia.com (Elizabeth K. Joseph)
Date: Mon, 28 Sep 2015 09:09:43 -0700
Subject: [openstack-dev] [Infra] Meeting Tuesday September 29th at 19:00 UTC
Message-ID: <CABesOu1tzcPJ5bF75e+c7mE-sMsdoOV2M-+YSF8Pc7smkLHBTQ@mail.gmail.com>

Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday September 29th[0], at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last couple of meetings are available.

September 15th:

Minutes: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-15-19.02.log.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-15-19.02.txt
Log: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-15-19.02.log.html

September 22nd:

Minutes: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-22-19.02.log.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-22-19.02.txt
Log: http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-22-19.02.log.html

[0] The 29th is also my birthday, I accept presents in the form of
cute cat pictures, but let's save those until after the meeting ;)

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2


From amotoki at gmail.com  Mon Sep 28 16:09:50 2015
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 29 Sep 2015 01:09:50 +0900
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CABk5PjKE+Q9DCEyfoDA_OsE3cghoaLQoyW3o3i=j9+z6m2Pp0A@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
 <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
 <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>
 <CABk5PjKd=6hSxKL6+68HkYoupMvCYNMb7Y+kb-1UPGre2E8hVw@mail.gmail.com>
 <CABk5PjKE+Q9DCEyfoDA_OsE3cghoaLQoyW3o3i=j9+z6m2Pp0A@mail.gmail.com>
Message-ID: <CALhU9tnSWOL=yon1NcdnU-tdJVfnN4_-vmcyN1AzQcVk9yNkSQ@mail.gmail.com>

Are you reading our reply comments?
At the moment, there is no way to set allowed-address-pairs to an empty
list by using neutron CLI.
When action=clear is passed, type=xxx, list=true and specified values are
ignored and None is sent to the server.
Thus you cannot set allowed-address-pairs to [] with neutron port-update
CLI command.
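
The REST API itself does accept an empty list, so a direct request works as a workaround until the CLI fix lands. A minimal sketch of the request body (the endpoint host, port UUID, and token below are placeholders for illustration, not values from a real deployment):

```python
import json

# Workaround sketch: PUT an empty allowed_address_pairs list straight to
# the Neutron API, bypassing the CLI limitation. All concrete values
# here are placeholders.
port_id = "2d1bfe12-7db6-4665-9c98-6b9b8a043af9"
url = "http://controller:9696/v2.0/ports/%s" % port_id
body = json.dumps({"port": {"allowed_address_pairs": []}})

# Equivalent curl (not executed here; $TOKEN is a valid auth token):
#   curl -X PUT -H "X-Auth-Token: $TOKEN" \
#        -H "Content-Type: application/json" \
#        -d '{"port": {"allowed_address_pairs": []}}' \
#        http://controller:9696/v2.0/ports/<port-id>
print(url)
print(body)
```

The key point is that the JSON payload carries `[]`, not `null`; the traceback further down in this thread is exactly what happens when `None` reaches the server-side length check.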


2015-09-28 22:54 GMT+09:00 masoom alam <masoom.alam at wanclouds.net>:

> This is even not working:
>
> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>  --allowed-address-pairs type=list [] action=clear
> AllowedAddressPair must contain ip_address
>
>
> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>  --allowed-address-pairs type=list {} action=clear
> AllowedAddressPair must contain ip_address
>
>
>
>
> On Mon, Sep 28, 2015 at 4:31 AM, masoom alam <masoom.alam at wanclouds.net>
> wrote:
>
>> Please help, it's not working:
>>
>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>> neutron port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>
>> +-----------------------+---------------------------------------------------------------------------------+
>> | Field                 | Value
>>                                 |
>>
>> +-----------------------+---------------------------------------------------------------------------------+
>> | admin_state_up        | True
>>                                  |
>> | allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address":
>> "fa:16:3e:69:e9:ef"}                |
>> | binding:host_id       | openstack-latest-kilo-28-09-2015-masoom
>>                                 |
>> | binding:profile       | {}
>>                                  |
>> | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}
>>                                  |
>> | binding:vif_type      | ovs
>>                                 |
>> | binding:vnic_type     | normal
>>                                  |
>> | device_id             | d44b9025-f12b-4f85-8b7b-57cc1138acdd
>>                                  |
>> | device_owner          | compute:nova
>>                                  |
>> | extra_dhcp_opts       |
>>                                 |
>> | fixed_ips             | {"subnet_id":
>> "bbb6726a-937f-4e0d-8ac2-f82f84272b1f", "ip_address": "10.0.0.3"} |
>> | id                    | 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>                                  |
>> | mac_address           | fa:16:3e:69:e9:ef
>>                                 |
>> | name                  |
>>                                 |
>> | network_id            | ae1b7e34-9f6c-4c8f-bf08-99a1e390034c
>>                                  |
>> | security_groups       | 8adda6d7-1b3e-4047-a130-a57609a0bd68
>>                                  |
>> | status                | ACTIVE
>>                                  |
>> | tenant_id             | 09945e673b7a4ab183afb166735b4fa7
>>                                  |
>>
>> +-----------------------+---------------------------------------------------------------------------------+
>>
>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>  --allowed-address-pairs [] action=clear
>> AllowedAddressPair must contain ip_address
>>
>>
>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>  --allowed-address-pairs [10.0.0.201] action=clear
>> The number of allowed address pair exceeds the maximum 10.
>>
>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>  --allowed-address-pairs  action=clear
>> Request Failed: internal server error while processing your request.
>>
>>
>>
>>
>> On Mon, Sep 28, 2015 at 1:57 AM, Akihiro Motoki <amotoki at gmail.com>
>> wrote:
>>
>>> As already mentioned, we need to pass [] (an empty list) rather than
>>> None as allowed_address_pairs.
>>>
>>> At the moment it is not supported in Neutron CLI.
>>> This review https://review.openstack.org/#/c/218551/ is trying to fix
>>> this problem.
>>>
>>> Akihiro
>>>
>>>
>>> 2015-09-28 15:51 GMT+09:00 shihanzhang <ayshihanzhang at 126.com>:
>>>
>>>> I don't see any exception using the command below:
>>>>
>>>> root at szxbz:/opt/stack/neutron# neutron port-update
>>>> 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
>>>> Allowed address pairs must be a list.
>>>>
>>>>
>>>>
>>>> At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net>
>>>> wrote:
>>>>
>>>> stable KILO
>>>>
>>>> Are you saying I should check out the latest code? Also, can you
>>>> please confirm whether you have tested this at your end and there
>>>> was no problem?
>>>>
>>>>
>>>> Thanks
>>>>
>>>> On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com>
>>>> wrote:
>>>>
>>>>> Which branch do you use? There is no such problem in the master branch.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net>
>>>>> wrote:
>>>>>
>>>>> Can anybody highlight why the following command is throwing an
>>>>> exception:
>>>>>
>>>>> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
>>>>> --allowed-address-pairs action=clear
>>>>>
>>>>> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
>>>>> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
>>>>> recent call last):
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
>>>>> method(request=request, **args)
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>> allow_bulk=self._allow_bulk)
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
>>>>> prepare_request_body
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>> attr_vals['validate'][rule])
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
>>>>> _validate_allowed_address_pairs
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
>>>>> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError:
>>>>> object of type 'NoneType' has no len()
>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>>
>>>>>
>>>>>
>>>>> There is a similar bug filed on Launchpad for Havana:
>>>>> https://bugs.launchpad.net/juniperopenstack/+bug/1351979 . However,
>>>>> there is no fix, and the workaround (using curl) mentioned on the bug
>>>>> is also not working for Kilo; it was working for Havana and Icehouse.
>>>>> Any pointers?
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/788f87be/attachment.html>

From andrew at lascii.com  Mon Sep 28 16:12:58 2015
From: andrew at lascii.com (Andrew Laski)
Date: Mon, 28 Sep 2015 12:12:58 -0400
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <56094C5A.3010306@dague.net>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com> <56094592.1020405@inaugust.com>
 <20150928141131.GK8745@crypt> <56094C5A.3010306@dague.net>
Message-ID: <20150928161258.GL8745@crypt>

On 09/28/15 at 10:19am, Sean Dague wrote:
>On 09/28/2015 10:11 AM, Andrew Laski wrote:
>> On 09/28/15 at 08:50am, Monty Taylor wrote:
>>> On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
><snip>
>>>
>>> Specifically, I want "nova boot" to get me a VM with an IP address. I
>>> don't want it to do fancy orchestration - I want it to not need fancy
>>> orchestration, because needing fancy orchestration to get a VM  on a
>>> network is not a feature.
>>
>> In the networking case there is a minimum of orchestration because the
>> time required to allocate a port is small.  What has been requiring
>> orchestration is the creation of volumes because of the requirement of
>> Cinder to download an image, or be on a backend that support fast
>> cloning and rely on a cache hit.  So the question under discussion is
>> when booting an instance relies on another service performing a long
>> running operation where is a good place to handle that.
>>
>> My thinking for a while has been that we could use another API that
>> could manage those things.  And be the central place you're looking for
>> to pass a simple "nova boot" with whatever options are required so you
>> don't have to manage the complexities of calls to
>> Neutron/Cinder/Nova(current API).  What's become clear to me from this
>> thread is that people don't seem to oppose that idea, however they don't
>> want their users/clients to need to switch what API they're currently
>> using(Nova).
>>
>> The right way to proceed with this idea seems to be to by evolving the
>> Nova API and potentially creating a split down the road.  And by split I
>> more mean architectural within Nova, and not necessarily a split API.
>> What I imagine is that we follow the model of git and have a plumbing
>> and porcelain API and each can focus on doing the right things.
>
>Right, and I think that's a fine approach. Nova's job is "give me a
>working VM". Working includes networking, persistent storage. The API
>semantics for "give me a working VM" should exist in Nova.
>
>It is also fine if there are lower level calls that tweak parts of that,
>but nova boot shouldn't have to be a multi step API process for the
>user. Building one working VM you can do something with is really the
>entire point of Nova.

What I'm struggling with is where do we draw the line in this model?  
For instance we don't allow a user to boot an instance from a disk image 
on their local machine via the Nova API, that is a multi step process.  
And which parameters do we expose that can influence network and volume 
creation, if not all of them?  It would be helpful to establish 
guidelines on what is a good candidate for inclusion in Nova.

I see a clear line between something that handles the creation of all 
ancillary resources needed to boot a VM and then the creation of the VM 
itself.  I don't understand why the creation of the other resources 
should live within Nova but as long as we can get to a good split 
between responsibilities that's a secondary concern.

>
>	-Sean
>
>-- 
>Sean Dague
>http://dague.net
>


From tnapierala at mirantis.com  Mon Sep 28 16:31:56 2015
From: tnapierala at mirantis.com (Tomasz Napierala)
Date: Mon, 28 Sep 2015 18:31:56 +0200
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
Message-ID: <9A3CA8E8-1B57-4529-B9C6-FB2679DAD636@mirantis.com>

> On 18 Sep 2015, at 04:39, Sergey Lukjanov <slukjanov at mirantis.com> wrote:
> 
> 
> Time line:
> 
> PTL elections
> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
> * September 29 - October 8: PTL elections

Just a reminder that we have a deadline for candidates today.

Regards,
-- 
Tomasz 'Zen' Napierala
Product Engineering - Poland



From zbitter at redhat.com  Mon Sep 28 16:40:25 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Mon, 28 Sep 2015 12:40:25 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <20150928094754.GP3713@localhost>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost>
Message-ID: <56096D79.6090005@redhat.com>

On 28/09/15 05:47, Gorka Eguileor wrote:
> On 26/09, Morgan Fainberg wrote:
>> As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc).
>>
>> I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike-shedding about bike-shedding commit messages)... If a commit message is *that* bad, a -1 (or just fixing it?) might be worth it. However, if a commit isn't missing key info (bug id? bp? etc.) and isn't one long, incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review.

+1

>> It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages.

+1

>> Sent via mobile
>>
>
> I have to disagree, as reviewers we have to make sure that guidelines
> are followed, if we have an explicit guideline that states that
> the limit length is 72 chars, I will -1 any patch that doesn't follow
> the guideline, just as I would do with i18n guideline violations.

Apparently you're unaware of the definition of the word 'guideline'. 
It's a guide. If it were a hard-and-fast rule then we would have a bot 
enforcing it already.

Is there anything quite so frightening as a large group of people 
blindly enforcing rules with total indifference to any sense of 
overarching purpose?

A reminder that the reason for this guideline is to ensure that none of 
the broad variety of tools that are available in the Git ecosystem 
effectively become unusable with the OpenStack repos due to wildly 
inconsistent formatting. And of course, even that goal has to be 
balanced against our other goals, such as building a healthy community 
and occasionally shipping some software.

There are plenty of ways to achieve that goal other than blanket 
drive-by -1's for trivial inconsistencies in the formatting of 
individual commit messages. A polite comment and a link to the 
guidelines is a great way to educate new contributors. For core 
reviewers especially, a comment like that and a +1 review will *almost 
always* get you the change you want in double-quick time. (Any 
contributor who knows they are 30s work away from a +2 is going to be 
highly motivated.)

> Typos are a completely different matter and they should not be grouped
> together with guideline infringements.

"Violations"? "Infringements"? It's line wrapping, not a felony case.

> I agree that it is a waste of time and resources when you have to -1 a
> patch for this, but there multiple solutions, you can make sure your
> editor does auto wrapping at the right length (I have mine configured
> this way), or create a git-enforce policy with a client-side hook, or do
> like Ihar is trying to do and push for a guideline change.
>
> I don't mind changing the guideline to any other length, but as long as
> it is 72 chars I will keep enforcing it, as it is not the place of
> reviewers to decide which guidelines are worthy of being enforced and
> which ones are not.

Of course it is.

If we're not here to use our brains, why are we here? Serious question. 
Feel free to use any definition of 'here'.

> Cheers,
> Gorka.
>
>
>
>>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
>>>
>>> Can I ask a different question - could we reject a few simple-to-check things on the push, like bad commit messages?  For things that take 2 seconds to fix and do make people's lives better, it's not that they're rejected, it's that the whole rejection cycle via gerrit review (push/wait for tests to run/check website/swear/find change/fix/push again) is out of proportion to the effort taken to fix it.

I would welcome a confirmation step - but *not* an outright rejection - 
that runs *locally* in git-review before the change is pushed. Right 
now, gerrit gives you a warning after the review is pushed, at which 
point it is too late.

>>> It seems here that there's benefit to 72 line messages - not that everyone sees that benefit, but it is present - but it doesn't outweigh the current cost.

Yes, 72 columns is the correct guideline IMHO. It's used virtually 
throughout the Git ecosystem now. Back in the early days of Git it 
wasn't at all clear - should you have no line breaks at all and let each 
tool do its own soft line wrapping? If not, where should you wrap? Now 
there's a clear consensus that you hard wrap at 72. Vi wraps git commit 
messages at 72 by default.

The output of "git log" indents commit messages by four spaces, so 
anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also 
noticed that Launchpad (or at least the bot that posts commit messages 
to Launchpad when patches merge) does a hard wrap at 72 characters.

A much better idea than modifying the guideline would be to put 
documentation on the wiki about how to set up your editor so that this 
is never an issue. You shouldn't even have to even think about the line 
length for at least 99% of commits.
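
For instance, a local commit-msg hook can give that kind of early, non-blocking warning before a change ever reaches gerrit. This is only a sketch, not an official OpenStack hook; save it as .git/hooks/commit-msg and make it executable:

```python
#!/usr/bin/env python
# Sketch of a local commit-msg hook that warns (without rejecting) when
# commit message lines exceed the 72-column guideline discussed in this
# thread. Adjust to taste.
import sys


def long_lines(message, limit=72):
    """Return (lineno, length) pairs for lines over the limit.

    Comment lines are ignored, since git strips them anyway.
    """
    return [(n, len(line))
            for n, line in enumerate(message.splitlines(), 1)
            if len(line) > limit and not line.startswith('#')]


if __name__ == '__main__' and len(sys.argv) > 1:
    # git passes the path of the commit message file as the first argument
    with open(sys.argv[1]) as f:
        offenders = long_lines(f.read())
    for lineno, length in offenders:
        sys.stderr.write(
            "warning: line %d is %d chars (guideline is 72)\n"
            % (lineno, length))
```

Because it only prints warnings and always exits successfully, it nudges rather than blocks, which matches the "guideline, not hard rule" point above.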

cheers,
Zane.


From graham.hayes at hpe.com  Mon Sep 28 16:51:06 2015
From: graham.hayes at hpe.com (Hayes, Graham)
Date: Mon, 28 Sep 2015 16:51:06 +0000
Subject: [openstack-dev] [Designate] Topics for Design Summit
Message-ID: <325F898546FBBF4487D24D6D606A277E18B23BE6@G9W0750.americas.hpqcorp.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi All,

Just a reminder that this etherpad is still nearly empty :).

https://etherpad.openstack.org/p/designate-design-summit-tokyo

Can we try and get a few more topics so we can talk about it on
Wednesday?

Thanks,

Graham

- -- 
Graham Hayes
Software Engineer
DNS as a Service
Advanced Network Services
HP Helion Cloud - Platform Services

GPG Key: 7D28E972


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJWCW/3AAoJEPRBUqpJBgIieFgH/A21CbLyzOCpZhDK3XfsA3S+
xsVdH7GNosItWIaarvgHC2GcckR+OSHHE2XLzeVGaZ4ugdkbp6C99CaczOF1VszZ
pNDcKWAUcnQt+qmwt8AbSbpNw6L0EYRujLjR4ekmfclQlLZ/uqtd59VCN/dArcnH
21j7852lP1PjVN+YwJ74dN/wdqTWCZ6zl+jgdG2DoF+y9ZWOQlNItp2khd8hJBrg
cx+LQESNZsCSwdJrueC25qMJ0Qfm29/jnNoET8r8ly5RMui2Zdv3Jz/QHt527nsD
2InrBdEi/+RXllYufpuFXIJAMMXQ+HtAZrcyOpSSfEiJJ6IHJ8y6yGUFTY2QuwE=
=qTQ6
-----END PGP SIGNATURE-----


From doc at aedo.net  Mon Sep 28 16:55:01 2015
From: doc at aedo.net (Christopher Aedo)
Date: Mon, 28 Sep 2015 09:55:01 -0700
Subject: [openstack-dev] [devstack] Is there a way to configure devstack
 for one flat external network using Kilo, Neutron?
In-Reply-To: <201509281431.t8SEVY21016426@d03av05.boulder.ibm.com>
References: <201509281431.t8SEVY21016426@d03av05.boulder.ibm.com>
Message-ID: <CA+odVQHPcJAV2PvuMToHix8=M+d0t+df6e4t=e_AJcv4rRF-NA@mail.gmail.com>

On Mon, Sep 28, 2015 at 7:08 AM, Mike Spreitzer <mspreitz at us.ibm.com> wrote:
> Is there a way to configure devstack to install Neutron such that there is
> just one network and that is an external network and Nova can create Compute
> Instances on it, using projects of Kilo vintage?

Mike, this might help, I have a script to make a local.conf that works
on a single-nic VM with Neutron:
https://github.com/aedocw/devstack-helper

Would just have to add the bits to specify using the Kilo release.  I
would definitely appreciate feedback on this though, especially if the
resulting config isn't quite right.

-Christopher


From efedorova at mirantis.com  Mon Sep 28 16:55:41 2015
From: efedorova at mirantis.com (Ekaterina Chernova)
Date: Mon, 28 Sep 2015 19:55:41 +0300
Subject: [openstack-dev] [puppet][murano] Launchpad project created
Message-ID: <CAOFFu8Z-Ez2-cNFvm+5i9C0ZLKwN4Rr99UnfiaRsUOvZDgZhHQ@mail.gmail.com>

Hi all,

As you know, murano has started creating puppet manifests.
They are located at the corresponding repository
<https://github.com/openstack/puppet-murano> [1].

This is just a notification that a separate project
<https://launchpad.net/puppet-murano> [2] to track puppet-related
activities has been created, and the puppet-openstack
<https://launchpad.net/~puppet-openstack> [3] group has been selected
as the maintainer.

Currently there is just one bug there, but please don't forget to
triage new bugs in this project.

[1] - https://github.com/openstack/puppet-murano
[2] - https://launchpad.net/puppet-murano
[3] - https://launchpad.net/~puppet-openstack

Regards,
Kate.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/de3a4125/attachment.html>

From doug at doughellmann.com  Mon Sep 28 17:01:46 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 28 Sep 2015 13:01:46 -0400
Subject: [openstack-dev] [all] Proposed Mitaka release schedule
In-Reply-To: <CAGocpaFPMM932M1jBjjb-9FS3kHLjAKY+hhQHuapZgu8dpe9vQ@mail.gmail.com>
References: <560949DD.4060503@openstack.org>
 <CAGocpaFPMM932M1jBjjb-9FS3kHLjAKY+hhQHuapZgu8dpe9vQ@mail.gmail.com>
Message-ID: <1443459548-sup-9875@lrrr.local>

Excerpts from Ivan Kolodyazhny's message of 2015-09-28 17:41:46 +0300:
> Hi Thierry,
> 
> Thank you for sharing this information with us so early. One
> comment/question from me about FinalClientLibraryRelease:
> 
> Could we make the client release at least one week after the M-3 milestone?
> It would give us a better chance of landing features in the client if
> they were merged late, just before M-3 and feature freeze.

The timeline in this schedule is more or less the same as what we've
done the past two cycles.  Allowing client feature work to carry
on longer in the schedule does not leave time for dealing with
issues introduced by buggy client releases.

I recommend prioritizing features that are going to require work
in the clients so those patches can land early enough to follow the
schedule, especially if those are features that other projects are
going to be depending on.

Doug

> 
> Regards,
> Ivan Kolodyazhny
> 
> On Mon, Sep 28, 2015 at 5:08 PM, Thierry Carrez <thierry at openstack.org>
> wrote:
> 
> > Hi everyone,
> >
> > You can find the proposed release schedule for Mitaka here:
> >
> > https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
> >
> > That places the end release on April 7, 2016. It's also worth noting
> > that in an effort to maximize development time, this schedule reduces
> > the time between Feature Freeze and final release by one week (5 weeks
> > instead of 6 weeks). That means we'll collectively have to be a lot
> > stricter on Feature freeze exceptions this time around. Be prepared for
> > that.
> >
> > Feel free to ping the Release management team members on
> > #openstack-relmgr-office if you have any question.
> >
> > --
> > Thierry Carrez (ttx)
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >


From sean at dague.net  Mon Sep 28 17:09:24 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 13:09:24 -0400
Subject: [openstack-dev] [election] [tc] TC candidacy
Message-ID: <56097444.9040209@dague.net>

I'd like to submit my candidacy for the TC Election.

I've been involved with OpenStack since the beginning of the Folsom
release. We may have worked together on parts of Nova, Tempest,
DevStack, Grenade, or generally in debugging failures in the upstream
OpenStack gate, or through things like the gerrit-dash-creator, which
makes it easy for teams to build custom review dashboards.

I'm a strong believer in the Big Tent approach to OpenStack. I was one
of the early instigators of that conversation [1], which as with all
things got better with more smart people involved in it. I'm also a
strong believer that the longevity of the OpenStack project is about
having a small, solid on-ramp that gets OpenStack into as many places
as possible, with a clear path to expand it over time. This was one of
the reasons I believed the compute-starter-kit was an important tag for
the TC to bless [2].

Over the past year I've spent time on decomposing and modularizing
OpenStack components. We no longer test all the projects and all the
libraries in one giant gate queue. We now have a plugin framework for
DevStack and Grenade that makes it easier for people to expand on top
of it.

I've also been focusing on the API side of OpenStack, both in the API
working group and on the Nova project. I helped define the
microversion approach that is used in Nova and has been adopted by
other projects in OpenStack [3].
projects to evolve their API while not breaking existing applications.
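As a sketch of how that opt-in works from a client's point of view: a client pins the microversion it understands via a request header (the header name below is the one Nova uses for microversions; the version value and token here are illustrative placeholders, not taken from this thread):

```python
# Sketch: how a client opts in to a Nova API microversion.
# The version value and token below are placeholders for illustration.

def microversion_headers(version: str, token: str) -> dict:
    """Build request headers that pin a specific Nova microversion.

    Omitting the microversion header gets the base (minimum) API
    behavior, which is how existing applications keep working unchanged.
    """
    return {
        "X-OpenStack-Nova-API-Version": version,
        "X-Auth-Token": token,
    }

headers = microversion_headers("2.12", "PLACEHOLDER-TOKEN")
print(headers["X-OpenStack-Nova-API-Version"])  # -> 2.12
```

The point of the design is that the server keeps old behaviors available; only clients that explicitly send a newer version see the new semantics.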

Over the next couple of cycles my focus is going to be interop, with
an eye on the Service Catalog and the API of projects. I hope to do
that work with a broad range of contributors to make the end user
experience of OpenStack much better.

In my role at Hewlett Packard I've got the freedom to work on many of
these larger issues in the way that works best for the community.

If this is the kind of voice that you would like on the TC, please
feel free to vote for me. It would be my honor and pleasure to
continue to represent you, and the mission to expand OpenStack's
reach, on the TC.

-Sean

--
Sean Dague
irc: sdague

1. https://dague.net/2014/08/26/openstack-as-layers/
2. http://governance.openstack.org/reference/tags/compute_starter_kit.html
3. https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/

Review history: https://review.openstack.org/#/q/reviewer:2750,n,z
Commit history: https://review.openstack.org/#/q/owner:2750,n,z
Stackalytics: http://stackalytics.com/?user_id=sdague
OpenHub: https://www.openhub.net/accounts/sdague

-- 
Sean Dague
http://dague.net



From krotscheck at gmail.com  Mon Sep 28 17:25:17 2015
From: krotscheck at gmail.com (Michael Krotscheck)
Date: Mon, 28 Sep 2015 17:25:17 +0000
Subject: [openstack-dev] [ironic] [javascript] ironic-webclient seeks more
	reviewers and contributors
Message-ID: <CABM65auWvfa=bx4tuLUOohdacEgGf7hzVJ+xpj_KSW0w4GQLEA@mail.gmail.com>

Hey everyone!

The ironic webclient is finally moving forward again, however we're at that
odd stage where most of the ironic team is Awesome Python and Hardware
types, and we're lacking reviewers able to keep an eye on our Angular
Javascript code. In the interest of not creating an echo chamber over here,
we'd like to invite anyone and everyone interested in looking at our JS
code to add their own +2, and maybe even contribute a bit!

https://review.openstack.org/#/q/status:open+project:openstack/ironic-webclient,n,z
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/a0c436dc/attachment-0001.html>

From richard at raseley.com  Mon Sep 28 17:25:59 2015
From: richard at raseley.com (Richard Raseley)
Date: Mon, 28 Sep 2015 10:25:59 -0700
Subject: [openstack-dev] [puppet] Moving puppet-ceph to the Openstack
 big tent
In-Reply-To: <CAH7C+Pr2rA65O8gm3B1A64fFPctb_-q=jE=a6AHKJcfHNY1OEw@mail.gmail.com>
References: <CAH7C+Pr2rA65O8gm3B1A64fFPctb_-q=jE=a6AHKJcfHNY1OEw@mail.gmail.com>
Message-ID: <56097827.4000703@raseley.com>

On 09/28/2015 08:31 AM, David Moreau Simard wrote:
> puppet-ceph currently lives in stackforge [1] which is being retired
> [2]. puppet-ceph is also mirrored on the Ceph Github organization [3].
> This version of the puppet-ceph module was created from scratch and
> not as a fork of the (then) upstream puppet-ceph by Enovance [4].
> Today, the version by Enovance is no longer officially maintained
> since Red Hat has adopted the new release.
>
> Being an Openstack project under Stackforge or Openstack brings a lot
> of benefits but it's not black and white, there are cons too.
>
> It provides us with the tools, the processes and the frameworks to
> review and test each contribution to ensure we ship a module that is
> stable and is held to the highest standards.
> But it also means that:
> - We cede some level of ownership to the OpenStack Foundation,
> its technical committee, and the Puppet OpenStack PTL.
> - puppet-ceph contributors will also be required to sign the
> Contributors License Agreement and jump through the Gerrit hoops [5]
> which can make contributing to the project harder.
>
> We have put tremendous efforts into creating a quality module and as
> such it was the first puppet module in the stackforge organization to
> implement not only unit tests but also integration tests with third
> party CI.
> Integration testing for other puppet modules is just now starting to
> take shape using the OpenStack CI infrastructure.
>
> In the context of OpenStack, RDO already ships with a means to install
> Ceph with this very module, and Fuel will be adopting it soon as well.
> This means the module will benefit from real world experience and
> improvements by the Openstack community and packagers.
> This will help further reinforce that not only is Ceph the best
> unified storage solution for OpenStack, but that we have the means to
> deploy it easily in the real world.
>
> We all know that Ceph is also deployed outside of this context and
> this is why the core reviewers make sure that contributions remain
> generic and usable outside of this use case.
>
> Today, the core members of the project discussed whether or not we
> should move puppet-ceph to the Openstack big tent and we had a
> consensus approving the move.
> We would also like to hear the thoughts of the community on this topic.
>
> Please let us know what you think.

There was some discussion a while back around whether or not to bring
modules that provide support for OpenStack-related tools into the
project, even though those tools are not part of OpenStack themselves. The
specific example at that time was the puppet-midonet module.

Unfortunately the consensus was to not allow these modules in. I think
now, as I did then, that there is a lot of value in bringing some of
these things into the project, as so many of our implementations depend
on them. I also understand the other perspective, but think any concerns
could be addressed by building some formal criteria about what third
party tools are 'blessed'.

I look forward to seeing feedback from the rest of the community on this.

Regards,

Richard

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/bc64ba05/attachment.pgp>

From ben at swartzlander.org  Mon Sep 28 17:29:48 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Mon, 28 Sep 2015 13:29:48 -0400
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1 Deadline
	for Drivers
Message-ID: <5609790C.60002@swartzlander.org>

I've always thought it was a bit strange to require new drivers to merge 
by milestone 1. I think I understand the motivations of the policy. The 
main motivation was to free up reviewers to review "other things" and 
this policy guarantees that for 75% of the release reviewers don't have 
to review new drivers. The other motivation was to prevent vendors from 
turning up at the last minute with crappy drivers that needed a ton of 
work, by encouraging them to get started earlier, or forcing them to 
wait until the next cycle.

I believe that the deadline actually does more harm than good.

First of all, to those that don't want to spend time on driver reviews, 
there are other solutions to that problem. Some people do want to review 
the drivers, and those who don't can simply ignore them and spend time 
on what they care about. I've heard people who spend time on driver 
reviews say that the milestone-1 deadline doesn't mean they spend less 
time reviewing drivers overall; it just all gets crammed into the
beginning of each release. It should be obvious that setting a deadline
doesn't actually change the total amount of reviewer effort; it just
concentrates it.

The argument about crappy code is also a lot weaker now that there are 
CI requirements which force vendors to spend much more time up front and 
clear a much higher quality bar before the driver is even considered for 
merging. Drivers that aren't ready for merge can always be deferred to a 
later release, but it seems weird to defer drivers that are high quality 
just because they're submitted during milestones 2 or 3.

All of the above is just my opinion though, and you shouldn't care
about my opinions, as I don't do much coding or reviewing in Cinder.
There is a real reason I'm writing this email...

In Manila we added some major new features during Liberty. All of the 
new features merged in the last week of L-3. It was a nightmare of merge 
conflicts and angry core reviewers, and many contributors worked through 
a holiday weekend to bring the release together. While asking myself how 
we can avoid such a situation in the future, it became clear to me that 
bigger features need to merge earlier -- the earlier the better.

When I look at the release timeline, and ask myself when is the best 
time to merge new major features, and when is the best time to merge new 
drivers, it seems obvious that *features* need to happen early and 
drivers should come *later*. New major features require FAR more review 
time than new drivers, and they require testing, and even after they 
merge they cause merge conflicts that everyone else has to deal with. 
Better that that work happens in milestones 1 and 2 than right before 
feature freeze. New drivers can come in right before feature freeze as 
far as I'm concerned. Drivers don't cause merge conflicts, and drivers 
don't need huge amounts of testing (presumably the CI system ensures some
level of quality).

It also occurs to me that new features which require driver 
implementation (hello replication!) *really* should go in during the 
first milestone so that drivers have time to implement the feature 
during the same release.

So I'm asking the Cinder core team to reconsider the milestone-1 
deadline for drivers, and to change it to a deadline for new major 
features (in milestone-1 or milestone-2), and to allow drivers to merge 
whenever*. This is the same pitch I'll be making to the Manila core 
team. I've been considering this idea for a few weeks now but I wanted 
to wait until after PTL elections to suggest it here.

-Ben Swartzlander


* I don't actually care if/when there is a driver deadline, what I care 
about is that reviewers are free during M-1 to work on reviewing/testing 
of features. The easiest way to achieve that seems to be moving the 
driver deadline.



From eli at mirantis.com  Mon Sep 28 17:47:56 2015
From: eli at mirantis.com (Evgeniy L)
Date: Mon, 28 Sep 2015 20:47:56 +0300
Subject: [openstack-dev] [Fuel] Code review process in Fuel and related
	issues
In-Reply-To: <CAKYN3rN_+ZSOvWYerchULuH1Regu_r3nRZ77+E1XAXgEWrwGbQ@mail.gmail.com>
References: <CAKYN3rNAw4vqbrvUONaemxOx=mACM3Aq_JAjpBeXmhjXq-zi5A@mail.gmail.com>
 <CABfuu9qPOe2RVhBG7aq+coVRQ0898pkv+DXGQBs9nGU93b+krA@mail.gmail.com>
 <30E12849-7AAB-45F7-BA7B-A4D952053419@mirantis.com>
 <CACo6NWA_=2JnJfcFwbTbt1M33P7Gqpg_xemKDV5x7miu94TAHQ@mail.gmail.com>
 <9847EFCC-7772-4BB8-AD0E-4CA6BC65B535@mirantis.com>
 <CACo6NWDdxzWxDkU078tuuHupyArux09bPya72hC24WwnkNiCFg@mail.gmail.com>
 <55E6E82D.6030100@gmail.com>
 <CACo6NWCjp-DTCY2nrKyDij1TPeSuTCr9PhTLQ25Vf_Y5cJ=sZQ@mail.gmail.com>
 <55E7221C.2070008@gmail.com>
 <CAKYN3rN_+ZSOvWYerchULuH1Regu_r3nRZ77+E1XAXgEWrwGbQ@mail.gmail.com>
Message-ID: <CABfuu9prAfuBGCO_3WzL_Tv-cJZ0JLyN8ak-rX33ehWKRRYRnw@mail.gmail.com>

Hi Mike, thanks, now it's clear.

On Thu, Sep 3, 2015 at 9:19 AM, Mike Scherbakov <mscherbakov at mirantis.com>
wrote:

> Thank you all for the feedback.
>
>
> Dims -
>
> > 1) I'd advise to codify a proposal in fuel-specs under a 'policy'
> directory
>
> I think it's great idea and I'll do it.
>
>
> > 2) We don't have SME terminology, but we do have Maintainers both in
> oslo-incubator
>
> I like "Maintainers" more than SMEs, thank you for the suggestion. I'd switch
> SME -> Maintainer everywhere.
>
>
> > 3) Is there a plan to split existing repos to more repos? Then each repo
> can have a core team (one core team for one repo), PTL takes care of all
> repos and MAINTAINERS take care of directories within a repo. That will
> line up well with what we are doing elsewhere in the community (essentially
> "Component Lead" is a core team which may not be a single person).
>
> That's the plan, with one difference though. According to you, there is a
> core team per repo without a lead identified. In my proposal, I'd like to
> ensure that we always choose a component lead through a fair voting
> process.
>
>
> > We do not have a concept of SLA anywhere that i know of, so it will have
> to be some kind of social consensus and not a real carrot/stick.
>
> > As for me the idea of SLA contradicts to qualitative reviews.
>
> I'm not thinking about carrot or stick here. I'd like to ensure that core
> reviewers and component leads are targeted to complete reviews within a
> certain time. If that doesn't happen, the patchset needs to be discussed
> at the IRC meeting to see whether we can delegate some testing, etc. to
> someone. If
> there are many patchsets out of SLA, then we'd consider other changes
> (decide to drop something from the release so to free up resources or
> something else).
>
> We had a problem in the past where we would not pay attention to a patch
> someone proposed until it was escalated. I'm suggesting a solution
> for that problem. The SLA is a contract between contributor and reviewer, so
> both would have the same expectations of how long it will take to review the
> patch. Without expectations aligned, contributors can get upset easily.
> They may expect that their code will be reviewed and merged within hours,
> while in fact it takes days. I'm not even talking about patches which can be
> forgotten and hang in the queue for months...
>
>
> > If we succeed in reducing the load on core reviewers, it will mean that
> core reviewers will do less code reviews. This could lead to core reviewer
> demotion.
>
> I expect that there will be a drop in the number of code reviews done by
> the core reviewer team. This is actually the point - do fewer reviews, but
> do them more thoroughly. Don't work on patches which have easy mistakes, as
> those should be caught by maintainers before the patches reach the core
> reviewer's plate. I don't think, though, that it will lead to core reviewer
> "demotion". You will still be doing many reviews - just fewer than before -
> and others who did a few will do more, potentially joining a core
> team later.
>
>
> > It would be nice if Jenkins could add reviewers after CI +1, or we can
> use gerrit dashboard for SMEs to not waste their time on review that has
> not yet passed CI and does not have +1 from other reviewers.
>
> This is good suggestion. I agree.
>
>
> > AFAIK Boris Pavlovic introduced some scripts
>
> > in Rally which do basic preliminary check of review message, checking
>
> > that it's formally correct.
>
> Thanks Igor, I believe this can be applied as well.
>
>
> > Another thing is I got a bit confused by the difference between Core
> Reviewer and Component Lead,
>
> > aren't those the same persons? Shouldn't every Core Reviewer know the
> architecture, best practises
>
> > and participate in design architecture sessions?
>
> The component lead is elected by the core reviewer team. So
> it's just another core reviewer / architect, but with the right to have the
> final word. Also, for large parts like fuel-library / nailgun, I'd expect
> this person to be free from any feature-work. For smaller things, like
> network verifier, I don't think that we'd need to have dedicated component
> owner who will be free from any feature work.
>
>
> Igor K., Evgeny L, did I address your questions regarding SLA and
> component lead vs core reviewer?
>
> Thank you,
>
> On Wed, Sep 2, 2015 at 9:28 AM Jay Pipes <jaypipes at gmail.com> wrote:
>
>> On 09/02/2015 08:45 AM, Igor Kalnitsky wrote:
>> >> I think there's plenty of examples of people in OpenStack projects
>> >> that both submit code (and lead features) that also do code review
>> >> on a daily basis.
>> >
>> > * Do these features huge?
>>
>> Yes.
>>
>> > * Is their code contribution huge or just small patches?
>>
>> Both.
>>
>> > * Did they get to master before FF?
>>
>> Yes.
>>
>> > * How many intersecting features OpenStack projects have under
>> > development? (since often merge conflicts requires a lot of re-review)
>>
>> I recognize that Fuel, like devstack, has lots of cross-project
>> dependencies. That just makes things harder to handle for Fuel, but it's
>> not a reason to have core reviewers not working on code or non-core
>> reviewers not doing reviews.
>>
>> > * How often OpenStack people are busy on other activities, such as
>> > helping fellas, troubleshooting customers, participate design meetings
>> > and so on?
>>
>> Quite often. I'm personally on IRC participating in design discussions,
>> code reviews, and helping people every day. Not troubleshooting
>> customers, though...
>>
>> > If so, do you sure they are humans then? :) I can only speak for
>> > myself, and that's what I want to say: during 7.0 dev cycle I burned
>> > in hell and I don't want to continue that way.
>>
>> I think you mean you "burned out" :) But, yes, I hear you. I understand
>> the pressure that you are under, and I sympathize with you. I just feel
>> that the situation is not an either/or situation, and encouraging some
>> folks to only do reviews and not participate in coding/feature
>> development is a dangerous thing.
>>
>> Best,
>> -jay
>>
>> > Thanks,
>> > Igor
>> >
>> > On Wed, Sep 2, 2015 at 3:14 PM, Jay Pipes <jaypipes at gmail.com> wrote:
>> >> On 09/02/2015 03:00 AM, Igor Kalnitsky wrote:
>> >>>
>> >>> It won't work that way. You either busy on writing code / leading
>> >>> feature or doing review. It couldn't be combined effectively. Any
>> >>> context switch between activities requires an extra time to focus on.
>> >>
>> >>
>> >> I don't agree with the above, Igor. I think there's plenty of examples
>> of
>> >> people in OpenStack projects that both submit code (and lead features)
>> that
>> >> also do code review on a daily basis.
>> >>
>> >> Best,
>> >> -jay
>> >>
>> >>
>> >>
>>
> --
> Mike Scherbakov
> #mihgen
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/3fa40a63/attachment.html>

From dprince at redhat.com  Mon Sep 28 17:49:08 2015
From: dprince at redhat.com (Dan Prince)
Date: Mon, 28 Sep 2015 13:49:08 -0400
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <56058506.3030807@nemebean.com>
References: <1443184498.2443.10.camel@redhat.com>
 <56058506.3030807@nemebean.com>
Message-ID: <1443462548.13118.4.camel@redhat.com>

On Fri, 2015-09-25 at 12:31 -0500, Ben Nemec wrote:
> On 09/25/2015 07:34 AM, Dan Prince wrote:
> > It has come to my attention that we aren't making great use of our
> > tripleo.org domain. One thing that would be useful would be to have
> > the
> > new tripleo-docs content displayed there. It would also be nice to
> > have
> > quick links to some of our useful resources, perhaps Derek's CI
> > report
> > [1], a custom Reviewday page for TripleO reviews (something like
> > this
> > [2]), and perhaps other links too. I'm thinking these go in the
> > header,
> > and not just on some random TripleO docs page. Or perhaps both
> > places.
> 
> Note that there's a TripleO Inbox Dashboard linked toward the bottom
> of
> https://wiki.openstack.org/wiki/TripleO#Review_team (It should
> probably
> be higher up than that, since it's incredibly useful).  This is
> actually
> what I use for tracking TripleO reviews, and would be a simple thing
> to
> start with for this.
> 
> +1 to everything else.
> 
> > 
> > I was thinking that instead of the normal OpenStack theme however
> > we
> > could go a bit off the beaten path and do our own TripleO theme.
> > Basically a custom tripleosphinx project that we ninja in as a
> > replacement for oslosphinx.
> > 
> > Could get our own mascot... or do something silly with words. I'm
> > reaching out to graphics artists who could help with this sort of
> > thing... but before that decision is made I wanted to ask about
> > thoughts on the matter here first.
> 
> I like the mascot/logo idea. 

Exactly. Mascots are cool. Let's get one.

>  Not sure why we would want to deviate from
> the standard OpenStack docs theme though.  What is your motivation
> for
> suggesting that?

This isn't about moving away from OpenStack. The only reason I wanted a
custom theme was to:

1) Display the mascot

2) Provide a custom header for our TripleO specific things. For
starters I've got a custom ReviewDay report with all TripleO specific
reviews along with the TripleO CI status report we've found so useful.

Looks something like this ATM:

https://twitter.com/dovetaildan/status/648270765972832256/photo/1

In short, think of the tripleo.org theme as a lens into the upstream
TripleO world. It is only meant to guide you back to all of the
upstream OpenStack goodness that already exists.
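For what it's worth, swapping in a custom theme package would be a small change in each project's Sphinx conf.py. This is only a sketch: the tripleosphinx package doesn't exist yet, and the theme name 'tripleo' below is a guess; only the mechanism (a Sphinx extension providing a theme, the way oslosphinx is wired in) is real.

```python
# docs/source/conf.py sketch -- replacing oslosphinx with a
# hypothetical 'tripleosphinx' package. Package and theme names
# below are assumptions until such a package is published.

extensions = [
    'tripleosphinx',  # instead of 'oslosphinx'
]

html_theme = 'tripleo'  # theme shipped by the package, name TBD
```

That keeps everything else about the docs build identical, so switching back to the standard OpenStack theme would be a two-line revert.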

Dan


> 
> Also, if we get a mascot I want t-shirts. ;-)
> 
> > 
> > Speak up... it would be nice to have this wrapped up before Tokyo.
> > 
> > [1] http://goodsquishy.com/downloads/tripleo-jobs.html
> > [2] http://status.openstack.org/reviews/
> > 
> > Dan
> > 
> > 
> 
> 


From tim at styra.com  Mon Sep 28 17:54:40 2015
From: tim at styra.com (Tim Hinrichs)
Date: Mon, 28 Sep 2015 17:54:40 +0000
Subject: [openstack-dev] [Congress] hands on lab
In-Reply-To: <89e38d2f55c04a15bfcf964050203dfc@EX13-MBX-012.vmware.com>
References: <89e38d2f55c04a15bfcf964050203dfc@EX13-MBX-012.vmware.com>
Message-ID: <CAJjxPADmsmJmVA7B_ehT=_REjSy_p-i1=vE6O0HNZBhm0VAtUw@mail.gmail.com>

Hi Alex,

I went through the HOL.  Some feedback...

1.  When starting up, it paused for a while showing the following
messages.  Maybe that caused the issues I mention below.
Waiting for network configuration...
Waiting up to 60 more seconds for network configuration...

2.  Maybe spell out the first paragraph of "Starting the Devstack VM".
Maybe something like....

To run the devstack VM, first install VirtualBox.  Then download the image
from XXX and start the VM.  (From VirtualBox's File menu, Import Appliance
and then choose the file you downloaded.)

Now you'll want to ssh to the VM from your laptop so it's easier to copy
and paste.  (The VM is set up to use bridged networking with DHCP.) To find
the IP address, login to the VM through the console.

username: congress
password: congress

Run "ifconfig" to get the IP address of the VM for eth0:

$ ifconfig eth0

Now open a terminal from your laptop and ssh to that IP address using the
same username and password from above.

<instructions for getting devstack running>

Now that devstack is running, you can point your laptop's browser to the
VM's IP address you found earlier to use Horizon.
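The address-discovery step above can also be scripted. Here is a small sketch that pulls the eth0 address out of ifconfig-style output; the sample output below is fabricated for illustration (on the VM you would capture the real `ifconfig eth0` output instead):

```python
import re

# Fabricated sample of older-style `ifconfig eth0` output, for
# illustration only; the MAC and IP here are made up.
SAMPLE = """eth0      Link encap:Ethernet  HWaddr 08:00:27:aa:bb:cc
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0"""

def eth0_address(ifconfig_output: str) -> str:
    """Extract the IPv4 address from `ifconfig eth0` output."""
    match = re.search(r"inet addr:([\d.]+)", ifconfig_output)
    if not match:
        raise ValueError("no IPv4 address found")
    return match.group(1)

ip = eth0_address(SAMPLE)
print(ip)  # -> 192.168.56.101
# Then from the laptop: ssh congress@<ip>  (password: congress),
# and browse to http://<ip>/ for Horizon.
```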

 3. Connection problems.
Horizon.  The browser brought up the login screen but gave me the message
"Unable to establish connection to keystone endpoint." after I provided
admin/password.

Congress.  From the congress log I see connection errors happening for
keystone and neutron.

Looking at the Keystone screen log I don't see much of anything.  Here's
the API log in its entirety.  Seems it hasn't gotten any requests today
(Sept 28).

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "POST /v2.0/tokens HTTP/1.1"
200 4287 "-" "python-keystoneclient" 52799(us)

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "GET /v3/auth/tokens HTTP/1.1"
200 7655 "-" "python-keystoneclient" 65278(us)

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "POST /v2.0/tokens HTTP/1.1"
200 4287 "-" "python-keystoneclient" 63428(us)

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "GET /v3/auth/tokens HTTP/1.1"
200 7655 "-" "python-keystoneclient" 135994(us)

10.1.184.61 - - [17/Sep/2015:12:05:09 -0700] "POST /v2.0/tokens HTTP/1.1"
200 4287 "-" "python-keystoneclient" 56581(us)

10.1.184.61 - - [17/Sep/2015:12:05:09 -0700] "GET /v3/auth/tokens HTTP/1.1"
200 7655 "-" "python-keystoneclient" 55412(us)

10.1.184.61 - - [17/Sep/2015:12:05:12 -0700] "GET /v2.0/users HTTP/1.1" 200
1679 "-" "python-keystoneclient" 13630(us)

10.1.184.61 - - [17/Sep/2015:12:05:12 -0700] "GET /v2.0/OS-KSADM/roles
HTTP/1.1" 200 397 "-" "python-keystoneclient" 10940(us)

10.1.184.61 - - [17/Sep/2015:12:05:12 -0700] "GET /v2.0/tenants HTTP/1.1"
200 752 "-" "python-keystoneclient" 12387(us)

The VM has eth0 bridged.  I can ping google.com from inside the VM; I can
ssh to the VM from my laptop.

Any ideas what's going on?  (I'm trying to unstack/clean/stack to see if
that works, but it'll take a while.)

Tim




On Thu, Sep 17, 2015 at 6:05 PM Alex Yip <ayip at vmware.com> wrote:

> Hi all,
> I have created a VirtualBox VM that matches the Vancouver handson-lab here:
>
>
> https://drive.google.com/file/d/0B94E7u1TIA8oTEdOQlFERkFwMUE/view?usp=sharing
>
> There's also an updated instruction document here:
>
>
> https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
>
> If you have some time, please try it out to see if it all works as
> expected.
> thanks, Alex
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/2e9a23e2/attachment.html>

From dprince at redhat.com  Mon Sep 28 17:54:51 2015
From: dprince at redhat.com (Dan Prince)
Date: Mon, 28 Sep 2015 13:54:51 -0400
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <CAHV77z9cb_HC59Z0rYPpw48TMGRW0baP2iJ5xMBSJiv5Sd6+8A@mail.gmail.com>
References: <1443184498.2443.10.camel@redhat.com>
 <CAHV77z9cb_HC59Z0rYPpw48TMGRW0baP2iJ5xMBSJiv5Sd6+8A@mail.gmail.com>
Message-ID: <1443462891.13118.10.camel@redhat.com>

On Mon, 2015-09-28 at 09:43 -0400, James Slagle wrote:
> On Fri, Sep 25, 2015 at 8:34 AM, Dan Prince <dprince at redhat.com>
> wrote:
> > It has come to my attention that we aren't making great use of our
> > tripleo.org domain. One thing that would be useful would be to have
> > the
> > new tripleo-docs content displayed there. It would also be nice to
> > have
> > quick links to some of our useful resources, perhaps Derek's CI
> > report
> > [1], a custom Reviewday page for TripleO reviews (something like
> > this
> > [2]), and perhaps other links too. I'm thinking these go in the
> > header,
> > and not just on some random TripleO docs page. Or perhaps both
> > places.
> > 
> > I was thinking that instead of the normal OpenStack theme however
> > we
> > could go a bit off the beaten path and do our own TripleO theme.
> > Basically a custom tripleosphinx project that we ninja in as a
> > replacement for oslosphinx.
> 
> Would the content of tripleo-docs be exactly the same as what is
> published at http://docs.openstack.org/developer/tripleo-docs/ ?

Yes. Exactly the same content with just a custom header to link to some
TripleO specific reports that our upstream community finds useful.

> 
> I think it probably should be, and be updated on every merged commit.
> If that's not the case, I think it should be abundantly clear why
> someone would use one set of docs over the other.

The sample site I have currently refreshes every 7 or so minutes.
Basically you'd get new tripleo-docs content, new reviewday reviews,
and an updated CI report on each refresh.

> 
> I'm not sure about why we'd want a different theme. Is it just so
> that
> it's styled the same as the rest of tripleo.org?

Because mascots are cool? Because we can? We don't have to have a
custom theme, but I'd just as soon have one. I mean, Oslo has a moose,
Ironic has a bear; perhaps TripleO needs an owl? Or something :)

This isn't about stepping away from upstream. Just a simple way to
organize upstream resources specific to the TripleO project in a new
fashion. A different way to slice into the project...

Dan

> 
> > 
> > Could get our own mascot... or do something silly with words. I'm
> > reaching out to graphics artists who could help with this sort of
> > thing... but before that decision is made I wanted to ask about
> > thoughts on the matter here first.
> > 
> > Speak up... it would be nice to have this wrapped up before Tokyo.
> > 
> > [1] http://goodsquishy.com/downloads/tripleo-jobs.html
> > [2] http://status.openstack.org/reviews/
> 
> +1 to everything else. I had put what to do with tripleo.org on the
> tokyo etherpad. Someone (not sure who) also suggested:
> 
> * collaborative blogging

Yes, I added this section. I think perhaps that is something we
could add later if we choose.

> * serve upstream generated overcloud images?

Sure. This would require a bit more infrastructure but we could use the
domain for this sort of thing too.

> 
> I like both of those ideas as well. For the blogging, maybe we could
> parse the rss feed from planet.openstack.org and pick out the TripleO
> stuff somehow.
> 
> 
> 

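The feed-filtering idea above is straightforward to sketch with only the standard library. A rough cut, assuming the planet feed is plain RSS 2.0 (the tag conventions and keyword match are assumptions, not how planet.openstack.org necessarily structures its feed):

```python
import xml.etree.ElementTree as ET

def tripleo_items(rss_xml, keyword="tripleo"):
    """Return (title, link) pairs whose title or categories mention the keyword."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        cats = [c.text or "" for c in item.findall("category")]
        # Case-insensitive match over the title plus any category tags.
        if keyword in " ".join([title] + cats).lower():
            hits.append((title, link))
    return hits
```

In practice the feed body would be fetched from the planet URL first and the surviving items republished on the themed site.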

From tim at styra.com  Mon Sep 28 18:00:07 2015
From: tim at styra.com (Tim Hinrichs)
Date: Mon, 28 Sep 2015 18:00:07 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
 <1443139720841.25541@vmware.com>
 <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <CAJjxPACAfCY5ihZtNFt6SvDih7TZf8BcR8d5BR60_XgDbT5bvQ@mail.gmail.com>

When I tried to import the image, it gave me an error.


Could not create the imported medium
'/Users/tim/VirtualBox
VMs/Congress_Usecases/Congress_Usecases_SEPT_25_2015-disk1.vmdk'
.

VMDK: Compressed image is corrupted
'/Congress_Usecases_SEPT_25_2015-disk1.vmdk' (VERR_ZIP_CORRUPTED).


Tim
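One way to rule out a truncated or corrupted download before blaming the image itself is to compare checksums. A sketch, assuming the publisher posts a SHA-256 alongside the .ova (none is posted today, so the final comparison is hypothetical and commented out):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a large file (e.g. a multi-GB .ova) and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# EXPECTED is a hypothetical checksum the image publisher would post:
# EXPECTED = "..."
# assert sha256_of("Congress_Usecases_SEPT_25_2015.ova") == EXPECTED
```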



On Fri, Sep 25, 2015 at 3:38 PM Shiv Haris <sharis at brocade.com> wrote:

> Thanks Alex, Zhou,
>
>
>
> I get errors from congress when I do a re-join. These errors seem to be due
> to the order in which the services are coming up. Hence I still depend on
> running stack.sh after the VM is up and running. Please try out the new VM
> -- also advise if you need to add any of your use cases. Also, re-join starts
> 'screen' -- do we expect the end user to know how to use 'screen'?
>
>
>
> I do understand that running 'stack.sh' takes time to run -- but it does
> not do things that appear to be any kind of magic which we want to avoid in
> order to get the user excited.
>
>
>
> I have uploaded a new version of the VM please experiment with this and
> let me know:
>
>
>
> http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova
>
>
>
> (root: vagrant password: vagrant)
>
>
>
> -Shiv
>
>
>
>
>
>
>
> *From:* Alex Yip [mailto:ayip at vmware.com]
> *Sent:* Thursday, September 24, 2015 5:09 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> I was able to make devstack run without a network connection by disabling
> tempest.  So, I think it uses the loopback IP address, and that does not
> change, so rejoin-stack.sh works without a network at all.
>
>
>
> - Alex
>
>
>
>
> ------------------------------
>
> *From:* Zhou, Zhenzan <zhenzan.zhou at intel.com>
> *Sent:* Thursday, September 24, 2015 4:56 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> rejoin-stack.sh works only if the VM's IP has not changed, so using a NAT
> network and a fixed IP inside the VM can help.
>
>
>
> BR
>
> Zhou Zhenzan
>
>
>
> *From:* Alex Yip [mailto:ayip at vmware.com <ayip at vmware.com>]
> *Sent:* Friday, September 25, 2015 01:37
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> I have been using images, rather than snapshots.
>
>
>
> It doesn't take that long to start up.  First, I boot the VM which takes a
> minute or so.  Then I run rejoin-stack.sh which takes just another minute
> or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and
> OpenStack state that were running before.
>
>
>
> - Alex
>
>
>
>
> ------------------------------
>
> *From:* Shiv Haris <sharis at Brocade.com>
> *Sent:* Thursday, September 24, 2015 10:29 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Hi Congress folks,
>
>
>
> I am looking for ideas. We want OpenStack to be running when the user
> instantiates the Usecase-VM. However, creating an OVA file is possible only
> when the VM is halted, which means OpenStack is not running and the user
> will have to run devstack again (which is time-consuming) when the VM is
> restarted.
>
>
>
> The other option is to take a snapshot. It appears that taking a snapshot of
> the VM and using it in another setup is not very straightforward. It involves
> modifying the .vbox file and seems prone to user error. I am
> leaning towards halting the machine and generating an OVA file.
>
>
>
> I am looking for suggestions...
>
>
>
> Thanks,
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Shiv Haris [mailto:sharis at Brocade.com <sharis at Brocade.com>]
> *Sent:* Thursday, September 24, 2015 9:53 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> First of all, I apologize for not making it to the meeting yesterday; I
> could not cut short another overlapping meeting.
>
>
>
> Also, Tim, thanks for the feedback. I have addressed some of the issues you
> raised; however, I am still working on some of the subtler ones. Once I have
> addressed them all, I will post another VM by the end of the week.
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Tim Hinrichs [mailto:tim at styra.com <tim at styra.com>]
> *Sent:* Friday, September 18, 2015 5:14 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> It's great to have this available!  I think it'll help people understand
> what's going on MUCH more quickly.
>
>
>
> Some thoughts.
>
> - The image is 3GB, which took me 30 minutes to download.  Are all VMs
> this big?  I think we should finish this as a VM but then look into doing
> it with containers to make it EVEN easier for people to get started.
>
>
>
> [shivharis] Yes, unfortunately that is the case. The disk size I set is
> 20GB -- but the OVA compresses the image and disk to 3GB. I will be looking
> at other options.
>
>
>
>
>
> - It gave me an error about a missing shared directory when I started up.
>
> [shivharis] will fix this
>
>
>
> - I expected devstack to be running when I launched the VM.  devstack
> startup time is substantial, and if there's a problem, it's good to assume
> the user won't know how to fix it.  Is it possible to have devstack up and
> running when we start the VM?  That said, it started up fine for me.
>
> [shivharis] OVA files can be created only when the VM is halted, so
> devstack will be down when you bring up the VM. I agree a snapshot would be
> a better choice.
>
>
>
> - It'd be good to have a README to explain how to use the use-case
> structure. It wasn't obvious to me.
>
> [shivharis] added.
>
>
>
> - The top-level dir of the Congress_Usecases folder has a
> Congress_Usecases folder within it.  I assume the inner one shouldn't be
> there?
>
> [shivharis] my automation issues, fixed.
>
>
>
> - When I ran the 10_install_policy.sh, it gave me a bunch of authorization
> problems.
>
> [shivharis] fixed
>
>
>
> But otherwise I think the setup looks reasonable.  Will there be an undo
> script so that we can run the use cases one after another without worrying
> about interactions?
>
> [shivharis] tricky, will find some way out.
>
>
>
> Tim
>
>
>
> [shivharis] Thanks
>
>
>
> On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:
>
> Hi Congress folks,
>
>
>
> BTW the login/password for the VM is vagrant/vagrant
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Shiv Haris [mailto:sharis at Brocade.com]
> *Sent:* Thursday, September 17, 2015 5:03 PM
> *To:* openstack-dev at lists.openstack.org
> *Subject:* [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Hi All,
>
>
>
> I have put my VM (virtualbox) at:
>
>
>
> http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova
>
>
>
> I usually run this on a MacBook Air -- but it should work on other
> platforms as well. I chose VirtualBox since it is free.
>
>
>
> Please send me your usecases -- I can incorporate them in the VM and send
> you an updated image. Please take a look at the structure I have in place
> for the first usecase; I would prefer it be the same for other usecases.
> (However, I am still open to suggestions for changes.)
>
>
>
> Thanks,
>
>
>
> -Shiv
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
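The export-versus-snapshot tradeoff discussed in this thread maps to two VBoxManage invocations. A sketch that only builds the command lines, to be handed to subprocess; the VM and snapshot names are placeholders:

```python
def export_ova_cmd(vm_name, out_path):
    """VBoxManage invocation to export a powered-off VM as an OVA appliance."""
    return ["VBoxManage", "export", vm_name, "-o", out_path]

def take_snapshot_cmd(vm_name, snap_name):
    """VBoxManage invocation to snapshot a VM; --live keeps devstack running."""
    return ["VBoxManage", "snapshot", vm_name, "take", snap_name, "--live"]

# e.g. subprocess.run(export_ova_cmd("Congress_Usecases", "usecases.ova"), check=True)
```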
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/2747bee7/attachment.html>

From duncan.thomas at gmail.com  Mon Sep 28 18:11:18 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Mon, 28 Sep 2015 21:11:18 +0300
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <5609790C.60002@swartzlander.org>
References: <5609790C.60002@swartzlander.org>
Message-ID: <CAOyZ2aH=JboCjmUoGdYG1e99vfcOQsrt=rbwae3Fk-S1Nv2SEw@mail.gmail.com>

I can definitely see your logic, but we've a history in cinder of vendors
trying to cram drivers in at the last minute, which we very much wanted to
stop dead. I might suggest that the second milestone, rather than the first,
might be a better one to dedicate to driver reviews...

An interesting thing is that, from the point of view of cinder core, we
don't need more drivers. We've drivers coming out of our ears. Sure it is
incrementally better to get more drivers, but we've got far more to gain
from harnessing the developers of companies who want to merge new drivers
for core work (or ci improvements, or translation or docs improvements or
whatever) than we do from yet another driver. Increasing the bar for entry
slightly does us no major harm that I can tell - there's a clear benefit
from having your driver in tree, so if there's a small 'tax' to get added
then I suspect we can benefit from it substantially.

Definitely worth reviewing deadlines and other bureaucracy occasionally and
working out if it is still serving us well.

Cheers
On 28 Sep 2015 20:32, "Ben Swartzlander" <ben at swartzlander.org> wrote:

> I've always thought it was a bit strange to require new drivers to merge
> by milestone 1. I think I understand the motivations of the policy. The
> main motivation was to free up reviewers to review "other things" and this
> policy guarantees that for 75% of the release reviewers don't have to
> review new drivers. The other motivation was to prevent vendors from
> turning up at the last minute with crappy drivers that needed a ton of
> work, by encouraging them to get started earlier, or forcing them to wait
> until the next cycle.
>
> I believe that the deadline actually does more harm than good.
>
> First of all, to those that don't want to spend time on driver reviews,
> there are other solutions to that problem. Some people do want to review
> the drivers, and those who don't can simply ignore them and spend time on
> what they care about. I've heard people who spend time on driver reviews
> say that the milestone-1 deadline doesn't mean they spend less time
> reviewing drivers overall, it just all gets crammed into the beginning of
> each release. It should be obvious that setting a deadline doesn't actually
> affect the amount of reviewer effort, it just concentrates that effort.
>
> The argument about crappy code is also a lot weaker now that there are CI
> requirements which force vendors to spend much more time up front and clear
> a much higher quality bar before the driver is even considered for merging.
> Drivers that aren't ready for merge can always be deferred to a later
> release, but it seems weird to defer drivers that are high quality just
> because they're submitted during milestones 2 or 3.
>
> All of the above is just my opinion though, and you shouldn't care about
> my opinions, as I don't do much coding and reviewing in Cinder. There is a
> real reason I'm writing this email...
>
> In Manila we added some major new features during Liberty. All of the new
> features merged in the last week of L-3. It was a nightmare of merge
> conflicts and angry core reviewers, and many contributors worked through a
> holiday weekend to bring the release together. While asking myself how we
> can avoid such a situation in the future, it became clear to me that bigger
> features need to merge earlier -- the earlier the better.
>
> When I look at the release timeline, and ask myself when is the best time
> to merge new major features, and when is the best time to merge new
> drivers, it seems obvious that *features* need to happen early and drivers
> should come *later*. New major features require FAR more review time than
> new drivers, and they require testing, and even after they merge they cause
> merge conflicts that everyone else has to deal with. Better that that work
> happens in milestones 1 and 2 than right before feature freeze. New drivers
> can come in right before feature freeze as far as I'm concerned. Drivers
> don't cause merge conflicts, and drivers don't need huge amounts of testing
> (presumably the CI system ensures some level of quality).
>
> It also occurs to me that new features which require driver implementation
> (hello replication!) *really* should go in during the first milestone so
> that drivers have time to implement the feature during the same release.
>
> So I'm asking the Cinder core team to reconsider the milestone-1 deadline
> for drivers, and to change it to a deadline for new major features (in
> milestone-1 or milestone-2), and to allow drivers to merge whenever*. This
> is the same pitch I'll be making to the Manila core team. I've been
> considering this idea for a few weeks now but I wanted to wait until after
> PTL elections to suggest it here.
>
> -Ben Swartzlander
>
>
> * I don't actually care if/when there is a driver deadline, what I care
> about is that reviewers are free during M-1 to work on reviewing/testing of
> features. The easiest way to achieve that seems to be moving the driver
> deadline.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/cdbda197/attachment.html>

From john.griffith8 at gmail.com  Mon Sep 28 18:13:04 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 28 Sep 2015 12:13:04 -0600
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <5609790C.60002@swartzlander.org>
References: <5609790C.60002@swartzlander.org>
Message-ID: <CAPWkaSVO84Mg9Grz3KkTECjEmWziQK7RE8Mr8nEOhQntGLmnpg@mail.gmail.com>

On Mon, Sep 28, 2015 at 11:29 AM, Ben Swartzlander <ben at swartzlander.org>
wrote:

> I've always thought it was a bit strange to require new drivers to merge
> by milestone 1. I think I understand the motivations of the policy. The
> main motivation was to free up reviewers to review "other things" and this
> policy guarantees that for 75% of the release reviewers don't have to
> review new drivers. The other motivation was to prevent vendors from
> turning up at the last minute with crappy drivers that needed a ton of
> work, by encouraging them to get started earlier, or forcing them to wait
> until the next cycle.
>

Yep, these were some of the ideas behind it, but the first milestone did
for sure create some consequences.


>
> I believe that the deadline actually does more harm than good.
>

In retrospect I'd agree with you on this.  We ended up spending our major
focus for the first milestone on nothing but drivers, which I think, looking
back, wasn't so good.  But to be fair, we try things, see how they work,
revisit and move on.  Which is the plan last I checked (there's a proposal
to talk about some of this at the summit in Tokyo).


>
> First of all, to those that don't want to spend time on driver reviews,
> there are other solutions to that problem. Some people do want to review
> the drivers, and those who don't can simply ignore them and spend time on
> what they care about. I've heard people who spend time on driver reviews
> say that the milestone-1 deadline doesn't mean they spend less time
> reviewing drivers overall, it just all gets crammed into the beginning of
> each release. It should be obvious that setting a deadline doesn't actually
> affect the amount of reviewer effort, it just concentrates that effort.
>

True statement, although there was an effort to have core reviewers 'sign
up' and take ownership/responsibility specifically to review various
drivers.


>
> The argument about crappy code is also a lot weaker now that there are CI
> requirements which force vendors to spend much more time up front and clear
> a much higher quality bar before the driver is even considered for merging.
> Drivers that aren't ready for merge can always be deferred to a later
> release, but it seems weird to defer drivers that are high quality just
> because they're submitted during milestones 2 or 3.
>
> All of the above is just my opinion though, and you shouldn't care about
> my opinions, as I don't do much coding and reviewing in Cinder. There is a
> real reason I'm writing this email...
>
> In Manila we added some major new features during Liberty. All of the new
> features merged in the last week of L-3. It was a nightmare of merge
> conflicts and angry core reviewers, and many contributors worked through a
> holiday weekend to bring the release together. While asking myself how we
> can avoid such a situation in the future, it became clear to me that bigger
> features need to merge earlier -- the earlier the better.
>

So this is most certainly an issue that we've been having in Cinder as
well.  It's a bad problem to have, in my opinion, and I personally took
a pretty hard stance against some new features and reworking of core code
because they weren't ready until the last week or so of the milestone and
frankly were HUGE changes.


>
> When I look at the release timeline, and ask myself when is the best time
> to merge new major features, and when is the best time to merge new
> drivers, it seems obvious that *features* need to happen early and drivers
> should come *later*. New major features require FAR more review time than
> new drivers, and they require testing, and even after they merge they cause
> merge conflicts that everyone else has to deal with. Better that that work
> happens in milestones 1 and 2 than right before feature freeze. New drivers
> can come in right before feature freeze as far as I'm concerned. Drivers
> don't cause merge conflicts, and drivers don't need huge amounts of testing
> (presumably the CI system ensures some level of quality).
>

For the most part I think I completely agree with everything you've said
above.


>
> It also occurs to me that new features which require driver implementation
> (hello replication!) *really* should go in during the first milestone so
> that drivers have time to implement the feature during the same release.
>

Yep, I most certainly should have pushed this into the core code MUCH
earlier.  But to be honest, if you look at the life-cycle of the spec and
the patch that implemented it, it was proposed very early, BUT there was
very little useful input until after the mid-cycle'ish meetup in FTC. Was
it a matter of review focus, bike-shedding, or me being lazy... maybe a
little of all three.


>
> So I'm asking the Cinder core team to reconsider the milestone-1 deadline
> for drivers, and to change it to a deadline for new major features (in
> milestone-1 or milestone-2), and to allow drivers to merge whenever*. This
> is the same pitch I'll be making to the Manila core team. I've been
> considering this idea for a few weeks now but I wanted to wait until after
> PTL elections to suggest it here.
>

The good news here, I think, is that we do have a number of proposals to
revisit the various deadlines and how we set them.  In my opinion, putting a
bit more process in place was good for the project, but I do think that
maybe we swung a little too far in one direction.  The reality, though, IMHO
is still that you live and learn and improve as you go.  I think everyone on
the Cinder team is pretty good at living up to that philosophy.


>
> -Ben Swartzlander
>
>
> * I don't actually care if/when there is a driver deadline, what I care
> about is that reviewers are free during M-1 to work on reviewing/testing of
> features. The easiest way to achieve that seems to be moving the driver
> deadline.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/42ec47f7/attachment.html>

From dprince at redhat.com  Mon Sep 28 18:13:47 2015
From: dprince at redhat.com (Dan Prince)
Date: Mon, 28 Sep 2015 14:13:47 -0400
Subject: [openstack-dev] [TripleO] tripleo.org theme
In-Reply-To: <20150928160407.GT4731@yuggoth.org>
References: <1443184498.2443.10.camel@redhat.com>
 <CAHV77z9cb_HC59Z0rYPpw48TMGRW0baP2iJ5xMBSJiv5Sd6+8A@mail.gmail.com>
 <20150928160407.GT4731@yuggoth.org>
Message-ID: <1443464027.13118.26.camel@redhat.com>

On Mon, 2015-09-28 at 16:04 +0000, Jeremy Stanley wrote:
> On 2015-09-28 09:43:37 -0400 (-0400), James Slagle wrote:
> > Would the content of tripleo-docs be exactly the same as what is
> > published at http://docs.openstack.org/developer/tripleo-docs/ ?
> > 
> > I think it probably should be, and be updated on every merged
> > commit.
> > If that's not the case, I think it should be abundantly clear why
> > someone might use one set of docs over the other.
> [...]
> 
> If you're looking at Project Infrastructure/Upstream CI integration
> with that site, I encourage you to just move remaining content to
> docs.openstack.org or possibly implement rewrites to something like
> tripleo.openstack.org.

docs.openstack.org is great too. We already use that for many things
and we aren't moving away from it. The only reason I wanted to include
tripleo-docs in this new themed site was simply that it highlights the
content a bit more. Front and central content w/ our mascot would be
quite nice... Just a different way to cut into TripleO...

tripleo.openstack.org would be cool too (perhaps in addition to
tripleo.org even). Having both point to the same thing would work quite
nicely, I think.

As for hosting tripleo.org I'm initially planning on hosting it outside
of Infra. I had previously asked about infra hosting the DNS domain
here and it sounded like the domain might not be the best fit:

http://lists.openstack.org/pipermail/openstack-infra/2015-May/002776.html

Moving it to be infra managed would be fine too. I've got no issues
with that should we agree that is the best place for it.

>  We've been doing the same for other "vanity"
> domains (see recent moves of devstack.org, refstack.org,
> trystack.org) so that we can get everything under a common base
> domain unless there is an actual technical requirement to have a
> separate domain (e.g. the cross-domain browser security concerns
> which drove us to register openstackid.org for our OpenID
> authentication).




From john.griffith8 at gmail.com  Mon Sep 28 18:24:08 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 28 Sep 2015 12:24:08 -0600
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <CAOyZ2aH=JboCjmUoGdYG1e99vfcOQsrt=rbwae3Fk-S1Nv2SEw@mail.gmail.com>
References: <5609790C.60002@swartzlander.org>
 <CAOyZ2aH=JboCjmUoGdYG1e99vfcOQsrt=rbwae3Fk-S1Nv2SEw@mail.gmail.com>
Message-ID: <CAPWkaSWSRXYAXCphZUNkPZQTaPx2-vLwE4gJmYXym20mF+oBPw@mail.gmail.com>

On Mon, Sep 28, 2015 at 12:11 PM, Duncan Thomas <duncan.thomas at gmail.com>
wrote:

> I can definitely see your logic, but we've a history in cinder of vendors
> trying to cram drivers in at the last minute which we very much wanted to
> stop dead. I might suggest that the second milestone, rather than the first
> might be a better one to dedicate to driver reviews...
>
I guess we're not waiting until the summit :)  So honestly I think the
whole driver freeze thing should just be a part of the regular
feature-freeze rules and requirements.  If the code gets submitted late;
well the submitter runs the risk of it not getting reviewed and not making
it.  That's life, and no amount of PM's on IRC or email pleading for a
review really sway me in those cases.

We've given this weird expectation to folks that "if you submit X by date
Y, we guarantee it'll merge" which is not only inaccurate, but completely
unfair.  It needs to be clear that there is a 6 month (actually more like 4
or 5) cycle to get your code in.  The longer you wait, the less likely
everything will get reviewed and merged.  Sorry, but that's how it goes; I
personally stopped treating drivers as 'special' submissions a while ago
and have no intention of going back to pretending they're something
"special".  They're just another feature add IMHO.


> An interesting thing is that, from the point of view of cinder core, we
> don't need more drivers. We've drivers coming out of our ears.
>

True that!!!  What are we at, like, 80 or something now?

Anybody interested in this topic, come chat with me at the summit
(shameless plug for Griffith's agendas on removing drivers from
cinder/volume/drivers..., or at least coming up with a different method of
implementing them)  :)

It's a hard problem, no easy answer.  I also argue that "good" drivers do
make Cinder better; but we shouldn't make calls on what's good and bad
other than code review.

> Sure it is incrementally better to get more drivers, but we've got far more
> to gain from harnessing the developers of companies who want to merge new
> drivers for core work (or ci improvements, or translation or docs
> improvements or whatever) than we do from yet another driver. Increasing
> the bar for entry slightly does us no major harm that I can tell - there's
> a clear benefit from having your driver in tree, so if there's a small
> 'tax' to get added then I suspect we can benefit from it substantially.
>
> Definitely worth reviewing deadlines and other bureaucracy occasionally
> and working out if it is still serving us well.
>
>
>
Cheers
> On 28 Sep 2015 20:32, "Ben Swartzlander" <ben at swartzlander.org> wrote:
>
>> I've always thought it was a bit strange to require new drivers to merge
>> by milestone 1. I think I understand the motivations of the policy. The
>> main motivation was to free up reviewers to review "other things" and this
>> policy guarantees that for 75% of the release reviewers don't have to
>> review new drivers. The other motivation was to prevent vendors from
>> turning up at the last minute with crappy drivers that needed a ton of
>> work, by encouraging them to get started earlier, or forcing them to wait
>> until the next cycle.
>>
>> I believe that the deadline actually does more harm than good.
>>
>> First of all, to those that don't want to spend time on driver reviews,
>> there are other solutions to that problem. Some people do want to review
>> the drivers, and those who don't can simply ignore them and spend time on
>> what they care about. I've heard people who spend time on driver reviews
>> say that the milestone-1 deadline doesn't mean they spend less time
>> reviewing drivers overall, it just all gets crammed into the beginning of
>> each release. It should be obvious that setting a deadline doesn't actually
>> affect the amount of reviewer effort, it just concentrates that effort.
>>
>> The argument about crappy code is also a lot weaker now that there are CI
>> requirements which force vendors to spend much more time up front and clear
>> a much higher quality bar before the driver is even considered for merging.
>> Drivers that aren't ready for merge can always be deferred to a later
>> release, but it seems weird to defer drivers that are high quality just
>> because they're submitted during milestones 2 or 3.
>>
>> All of the above is just my opinion though, and you shouldn't care about
>> my opinions, as I don't do much coding and reviewing in Cinder. There is a
>> real reason I'm writing this email...
>>
>> In Manila we added some major new features during Liberty. All of the new
>> features merged in the last week of L-3. It was a nightmare of merge
>> conflicts and angry core reviewers, and many contributors worked through a
>> holiday weekend to bring the release together. While asking myself how we
>> can avoid such a situation in the future, it became clear to me that bigger
>> features need to merge earlier -- the earlier the better.
>>
>> When I look at the release timeline, and ask myself when is the best time
>> to merge new major features, and when is the best time to merge new
>> drivers, it seems obvious that *features* need to happen early and drivers
>> should come *later*. New major features require FAR more review time than
>> new drivers, and they require testing, and even after they merge they cause
>> merge conflicts that everyone else has to deal with. Better that that work
>> happens in milestones 1 and 2 than right before feature freeze. New drivers
>> can come in right before feature freeze as far as I'm concerned. Drivers
>> don't cause merge conflicts, and drivers don't need huge amounts of testing
>> (presumably the CI system ensures some level of quality).
>>
>> It also occurs to me that new features which require driver
>> implementation (hello replication!) *really* should go in during the first
>> milestone so that drivers have time to implement the feature during the
>> same release.
>>
>> So I'm asking the Cinder core team to reconsider the milestone-1 deadline
>> for drivers, and to change it to a deadline for new major features (in
>> milestone-1 or milestone-2), and to allow drivers to merge whenever*. This
>> is the same pitch I'll be making to the Manila core team. I've been
>> considering this idea for a few weeks now but I wanted to wait until after
>> PTL elections to suggest it here.
>>
>> -Ben Swartzlander
>>
>>
>> * I don't actually care if/when there is a driver deadline, what I care
>> about is that reviewers are free during M-1 to work on reviewing/testing of
>> features. The easiest way to achieve that seems to be moving the driver
>> deadline.
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/6f9865a1/attachment.html>

From vkuklin at mirantis.com  Mon Sep 28 18:24:27 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Mon, 28 Sep 2015 21:24:27 +0300
Subject: [openstack-dev] [Fuel] Fuel PTL Candidacy
Message-ID: <CAHAWLf1E302DPb_oun0jvoRBEJVwmPAvkNMCFqP7fSs+yPGtUA@mail.gmail.com>

Fuelers

I am glad that Fuel is becoming more open as a project and is heading
towards the OpenStack Big Tent, which should allow our project to scale and
become one of the default OpenStack installers not only for a fraction of
end users but for a significant number of OpenStack developers as well.
Although Fuel is a fairly good toolchain that allows you to set up a
cluster in a production-ready way, there is still room for improvement, as
we need to work on many things to scale to thousands of deployed nodes,
allow plugin developers to alter the deployment workflow and input data the
way they want, and improve many other things which are still not the best
in the industry.

And as someone who has been working on this project since its very
beginning, leading the (at first very tiny) so-called Fuel Library team
and working on all the deployment components, I am very familiar with all
the existing issues we need to address in the first place.

And so I want to nominate myself for the PTL position of the Fuel project,
to be able to lead the project towards perfection.

I would like to outline the major points I am going to work on during this
half-year cadence:

* General Standards of Decision Making (Meritocracy and Cold Numbers)

This assumes that each design decision on architecture must be made
according to sufficient data and expert analysis performed by subject
matter experts, and by cold and heartless machines that run tests and
collect metrics of existing and proposed solutions to figure out the best
one. Each decision on a new library or tool must be accompanied by a
clear report showing that the change actually makes a difference. I will
start working on the unified toolchain and methodology for making such
decisions immediately.

* HA Polishing (Finish It!)

This has always been one of the strongest parts of Fuel, and we are
using our own unique and innovative ways of deploying an HA architecture.
Nevertheless, we still have many things to work on to make it perfect:

1) Fencing and power management facilities
2) Event-based cluster management and disaster handling
3) Node health monitoring
4) Many-many others

* Reference Architecture Changes (Simplicity and Focus on What Matters The
Most)

It seems pretty obvious that our reference architecture requires some
simplification and polishing. We have several key-value storages, we are
using several HTTP servers/proxies, and so on.
This makes us support lots of stuff instead of concentrating on a 'golden'
set of tools and utilities and making them work perfectly.
I want to spend significant time figuring out possible solutions for
redefining our architecture based on the aforementioned principles.

* Quality and Deployment Velocity (CI and QA improvements)

Although we are among the few projects that run a very extensive set of
tests, including for each commit into Fuel Library, we can still work on
many things to perfect our QA and CI.
These are:

1) integration tests for all the components, e.g. run a deployment for each
deployment-related component like nailgun, nailgun-agent, fuel-agent,
fuel-astute and others
2) significantly increased test coverage, e.g. cover each deployment task
with a noop test, and increase unit test coverage for all the Fuel-only
modules
3) work on ways to automatically generate test plans for each particular
feature and bugfix, including regression, functional, scale, stress and
other types of tests
4) work on introducing more granular tests for Fuel deployment components -
we have an awesome framework for faster feedback, written by the Fuel QA
team, but we have not yet integrated it into our CI

* Flexibility and Scalability (Plugin Development and User Experience)

We have a lot of work to do on the orchestration capabilities of our
deployment engine. We still need to think about how to apply fixes easily
to already deployed environments, and how to parallelize deployment across
different nodes, as it takes more and more time as we add more roles to
the cluster.

We need to think how to make regular cluster operations less invasive.

We need to work on the ability to document and standardize all the
interactions between the various components of Fuel, such as Nailgun and
Fuel-Library. There are many roadblocks in Nailgun which we need to get
rid of, making everything as data-driven as possible and allowing plugin
developers and regular users to alter deployment the way they want.

We also need to work on the scalability of our provisioning, such as moving
towards iPXE + peer-to-peer base image distribution.

* Documentation (Make It Clear)

It seems we have lots of good tools like our devops toolkit, noop tests and
other things. But this info is not available to external contributors and
plugin developers; even Fuel engineers do not know everything about how
these tools work.  I am going to set up several webinars and write a dozen
articles on how to develop and test Fuel.

* Community (Unleash The Monster of Synergy)

And last but not least - community collaboration. We are doing a great
job gluing together existing deployment libraries and testing their
integration. Community projects like puppet-openstack are also doing a
great job. Instead of duplicating their work, I would like to merge with
upstream code as much as possible, thus freeing our resources for the
things we do best, and sharing the things we do best with the community by
testing the results of their work against our reference architecture and
installations.

Additionally, it is worth mentioning that I see significant value in
making Fuel able to install current OpenStack trunk code, thus allowing us
to set up Fuel CI for each piece of OpenStack and allowing OpenStack
developers to test their code against a multi-host, HA-enabled reference
architecture in an easy and automated way.

* Conclusion

I am sure that going this way will make our project better, ramp up our
project's quality and growth significantly, and allow us to finally join
the Big Tent and become the default, most effective and easy-to-use
installer and manager of OpenStack clouds.

I will be happy to lead the Fuel project during this cadence on this
fairly consistent and broad platform. It seems a little ambitious and not
so simple to implement - but I expect help from the Component Leads for
Fuel-Library and Fuel-Python, as well as from you.

Remember, as Red Queen said: "you must run at least twice as fast as that!"

Thank you all for your time and consideration!

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/540d92a5/attachment.html>

From armamig at gmail.com  Mon Sep 28 18:38:18 2015
From: armamig at gmail.com (Armando M.)
Date: Mon, 28 Sep 2015 11:38:18 -0700
Subject: [openstack-dev] [neutron] congrats to armax!
In-Reply-To: <201509260003.t8Q03kaA029977@d03av03.boulder.ibm.com>
References: <201509260003.t8Q03kaA029977@d03av03.boulder.ibm.com>
Message-ID: <CAK+RQeaLBOA_iRPWbrjBBZWgJYU+SQMCx9aACfLGq7hFrhYCAQ@mail.gmail.com>

On 25 September 2015 at 17:03, Ryan Moats <rmoats at us.ibm.com> wrote:

> First, congratulations to armax on being elected PTL for Mitaka. Looking
> forward to Neutron improving over the next six months.
>
> Second thanks to everybody that voted in the election. Hopefully we had
> something close to 100% turnout, because that is an important
> responsibility of the population.
>

My humble thanks. I'll do my best to serve this community well, and not
disappoint the trust given.

Cheers,
Armando

>
>
> Ryan
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/c3a12f6f/attachment.html>

From walter.boring at hpe.com  Mon Sep 28 18:42:10 2015
From: walter.boring at hpe.com (Walter A. Boring IV)
Date: Mon, 28 Sep 2015 11:42:10 -0700
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <5609790C.60002@swartzlander.org>
References: <5609790C.60002@swartzlander.org>
Message-ID: <56098A02.1040409@hpe.com>

On 09/28/2015 10:29 AM, Ben Swartzlander wrote:
> I've always thought it was a bit strange to require new drivers to 
> merge by milestone 1. I think I understand the motivations of the 
> policy. The main motivation was to free up reviewers to review "other 
> things" and this policy guarantees that for 75% of the release 
> reviewers don't have to review new drivers. The other motivation was 
> to prevent vendors from turning up at the last minute with crappy 
> drivers that needed a ton of work, by encouraging them to get started 
> earlier, or forcing them to wait until the next cycle.
>
> I believe that the deadline actually does more harm than good.
But harm to whom?   It certainly puts pressure on driver developers 
to make sure they get involved in the Cinder community and become aware of 
when the deadlines are.
I believe it simply shifts the time at which drivers get into the tree. My 
$0.02 is that if a new driver developer misses the 
milestone, then they have the rest of the release to work on getting CI 
up and running and ready to go for the next release.   I'm not sure I 
see the harm to the Cinder community or the project.   It's a deadline 
that a driver developer has to be aware of and compensate for.  We've 
had how many drivers land in the last 2 releases under this 
requirement?  I believe it's somewhere around 20+ drivers.

>
> First of all, to those that don't want to spend time on driver 
> reviews, there are other solutions to that problem. Some people do 
> want to review the drivers, and those who don't can simply ignore them 
> and spend time on what they care about. I've heard people who spend 
> time on driver reviews say that the milestone-1 deadline doesn't mean 
> they spend less time reviewing drivers overall, it just all gets 
> crammed into the beginning of each release. It should be obvious that 
> setting a deadline doesn't actually affect the amount of reviewer 
> effort, it just concentrates that effort.
>
> The argument about crappy code is also a lot weaker now that there are 
> CI requirements which force vendors to spend much more time up front 
> and clear a much higher quality bar before the driver is even 
> considered for merging. Drivers that aren't ready for merge can always 
> be deferred to a later release, but it seems weird to defer drivers 
> that are high quality just because they're submitted during milestones 
> 2 or 3.
I disagree here.  CI doesn't prevent you from having a crappy driver.  
Your driver just needs to pass CI tests.  CI ensures that your driver 
works, but doesn't ensure that it
really meets the core reviewers' standards for code.  Do we care?  I 
think we do.  Think of drivers talking directly to the db, or FC drivers 
missing the FCZM decorators for auto zoning, etc.

>
> All the above is just my opinion though, and you shouldn't care 
> about my opinions, as I don't do much coding and reviewing in Cinder. 
> There is a real reason I'm writing this email...
>
> In Manila we added some major new features during Liberty. All of the 
> new features merged in the last week of L-3. It was a nightmare of 
> merge conflicts and angry core reviewers, and many contributors worked 
> through a holiday weekend to bring the release together. While asking 
> myself how we can avoid such a situation in the future, it became 
> clear to me that bigger features need to merge earlier -- the earlier 
> the better.
>
> When I look at the release timeline, and ask myself when is the best 
> time to merge new major features, and when is the best time to merge 
> new drivers, it seems obvious that *features* need to happen early and 
> drivers should come *later*. New major features require FAR more 
> review time than new drivers, and they require testing, and even after 
> they merge they cause merge conflicts that everyone else has to deal 
> with. Better that that work happens in milestones 1 and 2 than right 
> before feature freeze. New drivers can come in right before feature 
> freeze as far as I'm concerned. Drivers don't cause merge conflicts, 
> and drivers don't need huge amounts of testing (presumably the CI 
> system ensures some level of quality).
>
> It also occurs to me that new features which require driver 
> implementation (hello replication!) *really* should go in during the 
> first milestone so that drivers have time to implement the feature 
> during the same release.
>
> So I'm asking the Cinder core team to reconsider the milestone-1 
> deadline for drivers, and to change it to a deadline for new major 
> features (in milestone-1 or milestone-2), and to allow drivers to 
> merge whenever*. This is the same pitch I'll be making to the Manila 
> core team. I've been considering this idea for a few weeks now but I 
> wanted to wait until after PTL elections to suggest it here.
>
> -Ben Swartzlander
>
>
> * I don't actually care if/when there is a driver deadline, what I 
> care about is that reviewers are free during M-1 to work on 
> reviewing/testing of features. The easiest way to achieve that seems 
> to be moving the driver deadline.
I'm not opposed to M-2 for new drivers, but I think it simply shifts the 
headache of 3000+ line driver reviews to M-2 instead of M-1: reviewers 
will spend their time reviewing core features and not look at drivers 
until the M-2 deadline.  Is that better or worse?  I think part of this 
is why we've raised the discussion of pulling drivers out of the Cinder tree.
I'm not clear how that helps the community or the quality of drivers, though.

Walt




From rlooyahoo at gmail.com  Mon Sep 28 18:54:02 2015
From: rlooyahoo at gmail.com (Ruby Loo)
Date: Mon, 28 Sep 2015 14:54:02 -0400
Subject: [openstack-dev] [ironic] weekly subteam status report
Message-ID: <CA+5K_1EmiNvhDh2=TSDVA0+=8j1YieRnzaMEe2i8b=mZehYweA@mail.gmail.com>

Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
============
(diff with Sep 21)
- Open: 135 (-1). 9 new (+2), 36 in progress (-5), 0 critical, 7 high (-5)
and 9 incomplete
- Nova bugs with Ironic tag: 24 (+1). 0 new, 0 critical, 1 high

dtantsur continues to ping people and unassign abandoned bugs


Neutron/Ironic work (jroll)
====================
no major updates, code still being worked on


ironic-lib adoption (dtantsur)
======================
dsvm gate is working (though non-voting) on ironic-lib, time to switch!
- <rameshg87> dtantsur: yeah, meanwhile I will check and decide on the
refactoring of ironic to use ironic-lib
- <rameshg87> dtantsur: I think faizan told he has patch almost ready.
- rameshg87 will have update by 9/29.


Nova Liaisons (jlvillal & mrda)
=======================
- No updates


Testing/Quality (jlvillal/lekha)
======================
- Still waiting for feature freeze to be lifted from global requirements to
add mimic requirement. Pinged dhellman on IRC on 28-Sep-2015
- jlvillal will work on initial test directory re-organization on ironic
server for functional testing
- krtaylor to look at unit test coverage
- krtaylor to document real BM CI rollout for PowerKVM
- grenade upgrade testing status


Inspector (dtantsur)
===============
ironic-inspector 2.2.0 released
-
http://lists.openstack.org/pipermail/openstack-announce/2015-September/000659.html
- 3 blueprints finished, 14 bugs fixed
- stable/liberty was created from this release


Bifrost (TheJulia)
=============
Gate fixed, release 0.0.1 cut last week.


webclient (krotscheck / betherly)
=========================
- discussions with Horizon last week confirmed that we should be able to work
with both Horizon and the downstream team successfully
- discussions with the Horizon downstream (HP) team have concluded that we
will meet halfway: a UX that does not 100% resemble the existing panels
but is not so different as to confuse users
- design underway currently for submission to UX team (Piet)
- panel creation underway
- Reviews have stalled out:
https://review.openstack.org/#/q/status:open+project:openstack/ironic-webclient,n,z


Drivers
======

DRAC (ifarkas/lucas)
----------------------------
no update - working on python-dracclient to reach feature parity with DRAC
driver


iRMC (naohirot)
----------------------
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z
- Status: Reactive (solicited for core team's review)
    - New boot driver interface for iRMC drivers (bp/new-boot-interface)
    - Enhance Power Interface for Soft Reboot and NMI
(bp/enhance-power-interface-for-soft-reboot-and-nmi)
    - iRMC out of band inspection (bp/ironic-node-properties-discovery)

........

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/b6692204/attachment.html>

From ben at swartzlander.org  Mon Sep 28 19:11:04 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Mon, 28 Sep 2015 15:11:04 -0400
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <56098A02.1040409@hpe.com>
References: <5609790C.60002@swartzlander.org> <56098A02.1040409@hpe.com>
Message-ID: <560990C8.40806@swartzlander.org>

On 09/28/2015 02:42 PM, Walter A. Boring IV wrote:
> On 09/28/2015 10:29 AM, Ben Swartzlander wrote:
>> I've always thought it was a bit strange to require new drivers to 
>> merge by milestone 1. I think I understand the motivations of the 
>> policy. The main motivation was to free up reviewers to review "other 
>> things" and this policy guarantees that for 75% of the release 
>> reviewers don't have to review new drivers. The other motivation was 
>> to prevent vendors from turning up at the last minute with crappy 
>> drivers that needed a ton of work, by encouraging them to get started 
>> earlier, or forcing them to wait until the next cycle.
>>
>> I believe that the deadline actually does more harm than good.
> But harm to whom?   It certainly puts the pressure on driver 
> developers to make sure they get involved in the Cinder community and 
> become aware of when the deadlines are.
> I believe it simply shifts the time in which drivers get into tree. My 
> $0.02 of opinion is, that if a new driver developer misses the 
> milestone, then they have the rest of the release to work on getting 
> CI up and running and ready to go for the next release.   I'm not sure 
> I see the harm to the Cinder community or the project.   It's a 
> deadline that a driver developer has to be aware of and compensate 
> for.  We've had how many drivers land in the last 2 releases using 
> this requirement?  I believe it's somewhere around 20+ drivers?
>

Walt, I think you missed the point of my suggestion. I don't actually 
care when drivers go in. What I care about is that features land early 
so they can get appropriate review time and testing time, and so that 
merge conflicts can be dealt with. It seems to me that many people 
review nothing but drivers during milestone-1 so I suggest moving the 
deadline so those reviews can happen later. The 3rd milestone is when 
there should be no big architectural changes going on and is IMO the 
safest time to merge drivers. So the harm of the milestone-1 deadline is 
that it sucks all the oxygen out of the room, delaying features to 
milestone-2.

-Ben


>>
>> First of all, to those that don't want to spend time on driver 
>> reviews, there are other solutions to that problem. Some people do 
>> want to review the drivers, and those who don't can simply ignore 
>> them and spend time on what they care about. I've heard people who 
>> spend time on driver reviews say that the milestone-1 deadline 
>> doesn't mean they spend less time reviewing drivers overall, it just 
>> all gets crammed into the beginning of each release. It should be 
>> obvious that setting a deadline doesn't actually affect the amount of 
>> reviewer effort, it just concentrates that effort.
>>
>> The argument about crappy code is also a lot weaker now that there 
>> are CI requirements which force vendors to spend much more time up 
>> front and clear a much higher quality bar before the driver is even 
>> considered for merging. Drivers that aren't ready for merge can 
>> always be deferred to a later release, but it seems weird to defer 
>> drivers that are high quality just because they're submitted during 
>> milestones 2 or 3.
> I disagree here.  CI doesn't prevent you from having a crappy driver.  
> Your driver just needs to pass CI tests.  CI ensures that your driver 
> works, but doesn't ensure that it
> really meets the core reviewers' standards for code.  Do we care?  I 
> think we do.  Having drivers talk directly to the db, or FC drivers 
> missing the FCZM decorators for auto zoning, etc.
>
>>
>> All the above is just my opinion though, and you shouldn't care 
>> about my opinions, as I don't do much coding and reviewing in Cinder. 
>> There is a real reason I'm writing this email...
>>
>> In Manila we added some major new features during Liberty. All of the 
>> new features merged in the last week of L-3. It was a nightmare of 
>> merge conflicts and angry core reviewers, and many contributors 
>> worked through a holiday weekend to bring the release together. While 
>> asking myself how we can avoid such a situation in the future, it 
>> became clear to me that bigger features need to merge earlier -- the 
>> earlier the better.
>>
>> When I look at the release timeline, and ask myself when is the best 
>> time to merge new major features, and when is the best time to merge 
>> new drivers, it seems obvious that *features* need to happen early 
>> and drivers should come *later*. New major features require FAR more 
>> review time than new drivers, and they require testing, and even 
>> after they merge they cause merge conflicts that everyone else has to 
>> deal with. Better that that work happens in milestones 1 and 2 than 
>> right before feature freeze. New drivers can come in right before 
>> feature freeze as far as I'm concerned. Drivers don't cause merge 
>> conflicts, and drivers don't need huge amounts of testing (presumably 
>> the CI system ensures some level of quality).
>>
>> It also occurs to me that new features which require driver 
>> implementation (hello replication!) *really* should go in during the 
>> first milestone so that drivers have time to implement the feature 
>> during the same release.
>>
>> So I'm asking the Cinder core team to reconsider the milestone-1 
>> deadline for drivers, and to change it to a deadline for new major 
>> features (in milestone-1 or milestone-2), and to allow drivers to 
>> merge whenever*. This is the same pitch I'll be making to the Manila 
>> core team. I've been considering this idea for a few weeks now but I 
>> wanted to wait until after PTL elections to suggest it here.
>>
>> -Ben Swartzlander
>>
>>
>> * I don't actually care if/when there is a driver deadline, what I 
>> care about is that reviewers are free during M-1 to work on 
>> reviewing/testing of features. The easiest way to achieve that seems 
>> to be moving the driver deadline.
> I'm not opposed to M-2 for new drivers, but I think it simply shifts 
> the headache of 3000+ line driver reviews to M-2 instead of M-1, 
> reviewers will spend their time reviewing core features and not look 
> at drivers until the M-2 deadline.  Is that better or worse?  I think 
> part of this is why we've raised the discussion of pulling drivers out 
> of Cinder tree.
> I'm not clear how that helps the community and quality of drivers though.
>
> Walt
>
>
>



From slawek at kaplonski.pl  Mon Sep 28 19:45:32 2015
From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdlayBLYXDFgm/FhHNraQ==?=)
Date: Mon, 28 Sep 2015 21:45:32 +0200
Subject: [openstack-dev] Openvswitch agent unit tests
Message-ID: <20150928194532.GC17980@dell>

Hello,

I'm a new developer who wants to start contributing to neutron. I have some
small experience with neutron already, but I haven't pushed anything
upstream yet. So I searched for a bug on launchpad and found one which I
took: https://bugs.launchpad.net/neutron/+bug/1285893, and I started
checking how I can write new tests (I think it is quite an easy job for a
beginner, but maybe I'm wrong).
Now I have some questions for you:
1. From test-coverage I can see that, for example, there is missing
coverage in lines 349-350 in the method _restore_local_vlan_map(self) - should
I create a new test and call that method to check that the proper exception
is raised? Or is that not necessary at all, and such "one line" gaps in
coverage don't really need to be checked? Or should it be done in some
different way?

2. What about tests for methods like "_local_vlan_for_flat", which is not
checked at all? Should a new test be created for such a method, or should
it be covered by some different test?

Thanks in advance for any advice and tips on how to write such unit tests
properly :)
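For what it's worth, the usual shape of the test in question 1 looks
roughly like the sketch below. This is a self-contained illustration, not
the real neutron code: OVSAgentStub, ProvisioningError and get_ports() are
hypothetical stand-ins for the OVS agent and its bridge, used only to show
the mock-the-bridge + assertRaises pattern for covering an error path.

```python
# Minimal, self-contained sketch of the unit-test pattern being asked about.
# OVSAgentStub, ProvisioningError and get_ports() are hypothetical stand-ins
# for the neutron OVS agent; only the testing pattern is the point here.
import unittest
from unittest import mock


class ProvisioningError(Exception):
    pass


class OVSAgentStub:
    """Hypothetical stand-in for the agent under test."""

    def __init__(self, bridge):
        self.int_br = bridge
        self.local_vlan_map = {}

    def _restore_local_vlan_map(self):
        # Mirrors the shape of such a method: iterate ports on the
        # integration bridge and raise if a VLAN tag cannot be parsed.
        for port in self.int_br.get_ports():
            tag = port.get('tag')
            if not isinstance(tag, int):
                raise ProvisioningError("bad VLAN tag: %r" % (tag,))
            self.local_vlan_map[port['name']] = tag


class TestRestoreLocalVlanMap(unittest.TestCase):
    def test_raises_on_bad_tag(self):
        # Mock the bridge so the error path is exercised without OVS.
        bridge = mock.Mock()
        bridge.get_ports.return_value = [{'name': 'tap0', 'tag': 'oops'}]
        agent = OVSAgentStub(bridge)
        # The pattern question 1 asks about: assert the uncovered
        # error path actually raises.
        self.assertRaises(ProvisioningError, agent._restore_local_vlan_map)

    def test_populates_map_on_good_tag(self):
        bridge = mock.Mock()
        bridge.get_ports.return_value = [{'name': 'tap0', 'tag': 5}]
        agent = OVSAgentStub(bridge)
        agent._restore_local_vlan_map()
        self.assertEqual({'tap0': 5}, agent.local_vlan_map)


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(TestRestoreLocalVlanMap))
```

In neutron itself you would subclass the project's base test class instead
of unittest.TestCase, but the mock-and-assertRaises shape is the same.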

--
Best regards / Pozdrawiam
Sławek Kapłoński
slawek at kaplonski.pl

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: Digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/59fc38ca/attachment.pgp>

From mvoelker at vmware.com  Mon Sep 28 19:55:18 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Mon, 28 Sep 2015 19:55:18 +0000
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443444996-sup-6545@lrrr.local>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
 <1443444996-sup-6545@lrrr.local>
Message-ID: <399F9428-1FE2-4DDF-B679-080BDD583101@vmware.com>

On Sep 28, 2015, at 9:03 AM, Doug Hellmann <doug at doughellmann.com> wrote:
> 
> Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
>> On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
>>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>>>> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>>>>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>>>>>> 
>>>>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>>> <snip>
>>>>> 
>>>>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
>>>> 
>>>> And yet future technical direction does factor in, and I'm trying
>>>> to add a new heuristic to that aspect of consideration of tests:
>>>> Do not add tests that use proxy APIs.
>>>> 
>>>> If there is some compelling reason to add a capability for which
>>>> the only tests use a proxy, that's important feedback for the
>>>> contributor community and tells us we need to improve our test
>>>> coverage. If the reason to use the proxy is that no one is deploying
>>>> the proxied API publicly, that is also useful feedback, but I suspect
>>>> we will, in most cases (glance is the exception), say "Yeah, that's
>>>> not how we mean for you to run the services long-term, so don't
>>>> include that capability."
>>> 
>>> I think we might also just realize that some of the tests are using the
>>> proxy because... that's how they were originally written.
>> 
>> From my memory, that's how we got here.
>> 
>> The Nova tests needed to use an image API (e.g. listing images to check
>> a snapshot Nova created, or similar).
>> 
>> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>> it being the only widely deployed option.
> 
> Right, and I want to make sure it's clear that I am differentiating
> between "these tests are bad" and "these tests are bad *for DefCore*".
> We should definitely continue to test the proxy API, since it's a
> feature we have and that our users rely on.
> 
>> 
>>> And they could be rewritten to use native APIs.
>> 
>> +1
>> Once Glance v2 is available.
>> 
>> Adding Glance v2 as advisory seems a good step to help drive more adoption.
> 
> I think we probably don't want to rewrite the existing tests, since
> that effectively changes the contract out from under existing folks
> complying with DefCore.  If we need new, parallel, tests that do
> not use the proxy to make more suitable tests for DefCore to use,
> we should create those.
> 
>> 
>>> I do agree that "testing proxies" should not be part of Defcore, and I
>>> like Doug's idea of making that a new heuristic in test selection.
>> 
>> +1
>> That's a good thing to add.
>> But I don't think we had another option in this case.
> 
> We did have the option of leaving the feature out and highlighting the
> discrepancy to the contributors so tests could be added. That
> communication didn't really happen, as far as I can tell.
> 
>>>> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
>>>> those APIs in DefCore as a reason to avoid deprecating them in the code
>>>> even if they wanted to consider them as legacy features that should be
>>>> removed. Maybe that's not true, and the Nova team would be happy to
>>>> deprecate the APIs, but I did think that part of the feedback cycle we
>>>> were establishing here was to have an indication from the outside of the
>>>> contributor base about what APIs are considered important enough to keep
>>>> alive for a long period of time.
>>> I'd also agree with this. Defcore is a wider contract that we're trying
>>> to get even more people to write to because that cross section should be
>>> widely deployed. So deprecating something in Defcore is something I
>>> think most teams, Nova included, would be very reluctant to do. It's
>>> just asking for breaking your users.
>> 
>> I can't see us removing the proxy APIs in Nova any time soon,
>> regardless of DefCore, as it would break too many people.
>> 
>> But personally, I like dropping them from Defcore, to signal that the
>> best practice is to use the Glance v2 API directly, rather than the
>> Nova proxy.
>> 
>> Maybe they are just marked deprecated, but still required, although
>> that sounds a bit crazy.
> 
> Marking them as deprecated, then removing them from DefCore, would let
> the Nova team make a technical decision about what to do with them
> (maybe they get spun out into a separate service, maybe they're so
> popular you just keep them, whatever).

So, here's that Who's On First thing again.  Just to clarify: Nova does not need Capabilities to be removed from Guidelines in order to make technical decisions about what to do with a feature (though removing a Capability from future Guidelines may make Nova a lot more comfortable with their decision if they *do* decide to deprecate something, which I think is what Doug was pointing out here).

The DefCore Committee cannot tell projects what they can and cannot do with their code [1].  All DefCore can do is tell vendors what capabilities they have to expose to end users (if and only if those vendors want their products to be OpenStack Powered(TM) [2]).  It also tells end users what things they can rely on being present (if and only if they choose an OpenStack Powered(TM) product that adheres to a particular Guideline).  It is a Wonderful Thing if stuff doesn't get dropped from Guidelines very often, because nobody wants users to have to worry about not being able to rely on things they previously relied on.  It's therefore also a Wonderful Thing if projects like Nova and the DefCore Committee are talking to each other with an eye on making the end-user experience as consistent and stable as possible, and that when things do change, those transitions are handled as smoothly as possible.

But at the end of the day, if Nova wants to deprecate something, spin it out, or keep it, Nova doesn't need DefCore to do anything first in order to make that decision.  DefCore would love a heads-up so the next Guideline (which comes out several months after the OpenStack release in which the changes are made) can take the decision into account.  In fact, in the case of deprecation, as of last week projects are more or less required to give DefCore a heads-up if they want the assert:follows-standard-deprecation [3] tag.  A heads-up is even nice if Nova decides they want to keep supporting something, since that will help the "future direction" criteria be scored properly.

Ultimately, what Nova does with Nova's code is still Nova's decision to make.  I think that's a pretty good thing.

And FWIW I think it's a pretty good thing we're all now openly discussing it, too (after all this whole DefCore thing is still pretty new to most folks) so thanks to all of you for that. =)

At Your Service,

Mark T. Voelker


[1] Do folks know that DefCore is a Board of Directors activity and not a TC activity?  If not, see slides 13-16: http://www.slideshare.net/markvoelker/defcore-the-interoperability-standard-for-openstack-53040869
[2] http://www.openstack.org/interop
[3] http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst?id=ad2b0fd939a4613a68bc154a20c771c002568234#n65

> 
> Doug
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From jpenick at gmail.com  Mon Sep 28 19:59:32 2015
From: jpenick at gmail.com (James Penick)
Date: Mon, 28 Sep 2015 12:59:32 -0700
Subject: [openstack-dev] Compute API (Was Re: [nova][cinder] how to
 handle AZ bug 1496235?)
In-Reply-To: <20150928161258.GL8745@crypt>
References: <CAMomh-7vS=Xd_7HbcL5jQsdhMh1H9re62h2vg68j8xsJc9=QmA@mail.gmail.com>
 <20150925141255.GG8745@crypt> <5608F8C2.1060409@redhat.com>
 <CAOyZ2aFw+omfP-ptLz6Ch0PExV6L+=uANU7XBNbNYqm1JFQjYA@mail.gmail.com>
 <560909C7.9080907@redhat.com>
 <CAOyZ2aHo-mU99ktue_W0qUt1Du6uY--ANDceQ4xufKf5LBB64w@mail.gmail.com>
 <5609395F.8040503@redhat.com> <56094592.1020405@inaugust.com>
 <20150928141131.GK8745@crypt> <56094C5A.3010306@dague.net>
 <20150928161258.GL8745@crypt>
Message-ID: <CAMomh-7RGDe98Xi-wTbyDJqGM9DpdOvU3kKkC3Noi4-jFUs+Nw@mail.gmail.com>

>I see a clear line between something that handles the creation of all
ancillary resources needed to boot a VM and then the creation of the VM
itself.

I agree. To me the line is the difference between creating a top level
resource, and adding a host to that resource.

For example, I do expect a top level compute API to:
-Request an IP from Neutron
-Associate an instance with a volume
-Add an instance to a network security group in Neutron
-Add a real server to a VIP in Neutron

But I don't expect Nova to:
-Create tenant/provider networks in Neutron
-Create a volume (boot from volume is a weird case)
-Create a neutron security group
-Create a load balancer
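To make that split concrete, here is a minimal sketch of a compute API that attaches pre-existing top-level resources but never creates them. All class and method names here are illustrative, not real OpenStack client APIs:

```python
# Hypothetical sketch of the boundary described above: the compute API
# attaches existing top-level resources, but creating those resources
# (networks, volumes, security groups, load balancers) is the caller's
# job.  Names are made up for illustration, not real OpenStack APIs.

class FakeNeutron:
    """Stand-in for a Neutron client; only allocates ports."""
    def create_port(self, network_id):
        return {"network": network_id, "ip": "10.0.0.5"}

class ComputeAPI:
    def __init__(self, neutron):
        self.neutron = neutron

    def boot(self, image, flavor, network_id, volume_id=None):
        # In scope: request an IP/port on an *existing* network.
        port = self.neutron.create_port(network_id)
        server = {"image": image, "flavor": flavor, "ports": [port]}
        # In scope: associate an *existing* volume.
        if volume_id is not None:
            server["volumes"] = [volume_id]
        # Out of scope: creating the network or the volume themselves.
        return server

server = ComputeAPI(FakeNeutron()).boot("cirros", "m1.tiny", "net-1",
                                        volume_id="vol-1")
print(server["volumes"])
```

The point of the sketch is only the division of labor: `boot` consumes identifiers of resources that already exist rather than fabricating them.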

Also, if Nova is the API for all things compute, then there are some things
it will need to support that are not specific to VMs. For example, with
Ironic my users expect to use the Nova API/CLI to boot a baremetal compute
resource, and specify raid configuration as well as drive layout. My
understanding is there has been pushback on adding that to Nova, since it
doesn't make sense to have RAID config when building a VM. But, if Nova is
the compute abstraction layer, then we'll need a graceful way to express
this.

-James



On Mon, Sep 28, 2015 at 9:12 AM, Andrew Laski <andrew at lascii.com> wrote:

> On 09/28/15 at 10:19am, Sean Dague wrote:
>
>> On 09/28/2015 10:11 AM, Andrew Laski wrote:
>>
>>> On 09/28/15 at 08:50am, Monty Taylor wrote:
>>>
>>>> On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
>>>>
>>> <snip>
>>
>>>
>>>> Specifically, I want "nova boot" to get me a VM with an IP address. I
>>>> don't want it to do fancy orchestration - I want it to not need fancy
>>>> orchestration, because needing fancy orchestration to get a VM  on a
>>>> network is not a feature.
>>>>
>>>
>>> In the networking case there is a minimum of orchestration because the
>>> time required to allocate a port is small.  What has been requiring
>>> orchestration is the creation of volumes, because Cinder must
>>> download an image, or be on a backend that supports fast cloning
>>> and relies on a cache hit.  So the question under discussion is:
>>> when booting an instance relies on another service performing a
>>> long-running operation, where is a good place to handle that?
>>>
>>> My thinking for a while has been that we could use another API that
>>> could manage those things.  And be the central place you're looking for
>>> to pass a simple "nova boot" with whatever options are required so you
>>> don't have to manage the complexities of calls to
>>> Neutron/Cinder/Nova(current API).  What's become clear to me from this
>>> thread is that people don't seem to oppose that idea, however they don't
>>> want their users/clients to need to switch what API they're currently
>>> using (Nova).
>>>
>>> The right way to proceed with this idea seems to be to by evolving the
>>> Nova API and potentially creating a split down the road.  And by split I
>>> more mean architectural within Nova, and not necessarily a split API.
>>> What I imagine is that we follow the model of git and have a plumbing
>>> and porcelain API and each can focus on doing the right things.
>>>
>>
>> Right, and I think that's a fine approach. Nova's job is "give me a
>> working VM". Working includes networking, persistent storage. The API
>> semantics for "give me a working VM" should exist in Nova.
>>
>> It is also fine if there are lower level calls that tweak parts of that,
>> but nova boot shouldn't have to be a multi step API process for the
>> user. Building one working VM you can do something with is really the
>> entire point of Nova.
>>
>
> What I'm struggling with is where do we draw the line in this model?  For
> instance we don't allow a user to boot an instance from a disk image on
> their local machine via the Nova API, that is a multi step process.  And
> which parameters do we expose that can influence network and volume
> creation, if not all of them?  It would be helpful to establish guidelines
> on what is a good candidate for inclusion in Nova.
>
> I see a clear line between something that handles the creation of all
> ancillary resources needed to boot a VM and then the creation of the VM
> itself.  I don't understand why the creation of the other resources should
> live within Nova but as long as we can get to a good split between
> responsibilities that's a secondary concern.
>
>
>
>>         -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/febaf781/attachment.html>

From sean at dague.net  Mon Sep 28 20:02:10 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 16:02:10 -0400
Subject: [openstack-dev] [all] devstack default worker changes
Message-ID: <56099CC2.3020001@dague.net>

We used to default to using the mysqldb driver in devstack, which is a C
binding that is not eventlet aware. This means that any time a db query
is executed the entire greenlet is blocked. This behavior meant that in
devstack (and in oslo.concurrency) default number of workers for things
like API servers and conductors was = num of cores on the machine so
that everything wouldn't be deadlocked all the time when touching the
database.

The downside is we fixed concurrency with memory, which means you have a
lot of idle python eating all your memory all the time. Devstack in 4GB
isn't really an option any more with more than a small number of
services turned on.

We changed the default mysql driver to the pure python one back early in
Liberty. It shook out some bugs, but things got good quickly on that.
Just recently we decided to see if we could drop the worker count as
well in upstream devstack and gate jobs.
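In config terms, that earlier driver switch amounts to a different SQLAlchemy connection URL in each service's config file (the hostnames and credentials below are illustrative):

```ini
# Old default: MySQL-Python (C binding).  A query blocks the whole
# process -- i.e. every greenlet in it -- until it returns.
[database]
connection = mysql://root:secret@127.0.0.1/nova?charset=utf8

# New default: PyMySQL, pure Python, so eventlet's monkey-patching
# applies and a greenlet waiting on a query yields to the others.
[database]
connection = mysql+pymysql://root:secret@127.0.0.1/nova?charset=utf8
```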

The new math will give you numproc / 4 workers for these services (min
of 2). This seems to be running fine. More interestingly it ends up
keeping an extra 1GB+ of memory in page cache by the end of the gate
runs, which seems to be making things a bit more stable, and under less
load. ( https://review.openstack.org/#/c/226831/ )
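The new default described above can be sketched as follows (a hypothetical Python rendering of the arithmetic; the real devstack logic lives in shell and may differ in detail):

```python
import os

def default_workers(nproc=None):
    # numproc / 4 with a floor of 2, per the change described above.
    # (Illustrative only; devstack computes this in shell.)
    if nproc is None:
        nproc = os.cpu_count() or 1
    return max(nproc // 4, 2)

for cores in (2, 4, 8, 16, 32):
    print(cores, "cores ->", default_workers(cores), "workers")
```

So an 8-core gate node now runs 2 workers per service instead of 8, which is where the reclaimed memory comes from.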

We've not seen any down side of this change yet, and it should make it
easier to get devstack into a smaller footprint VM. However, I wanted to
make sure people knew it was a change in case they see some new edge
failure condition that could be related to it.

So, it should mostly all be flowers and unicorns. But, keep an eye out
in case a troll shows up under a bridge somewhere.

	-Sean

-- 
Sean Dague
http://dague.net



From sean.mcginnis at gmx.com  Mon Sep 28 20:11:07 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 28 Sep 2015 15:11:07 -0500
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <CAPWkaSVO84Mg9Grz3KkTECjEmWziQK7RE8Mr8nEOhQntGLmnpg@mail.gmail.com>
References: <5609790C.60002@swartzlander.org>
 <CAPWkaSVO84Mg9Grz3KkTECjEmWziQK7RE8Mr8nEOhQntGLmnpg@mail.gmail.com>
Message-ID: <20150928201104.GA24594@gmx.com>

On Mon, Sep 28, 2015 at 12:13:04PM -0600, John Griffith wrote:
> On Mon, Sep 28, 2015 at 11:29 AM, Ben Swartzlander <ben at swartzlander.org>
> wrote:
> 
> > I've always thought it was a bit strange to require new drivers to merge
> > by milestone 1. I think I understand the motivations of the policy. The
> > main motivation was to free up reviewers to review "other things" and this
> > policy guarantees that for 75% of the release reviewers don't have to
> > review new drivers. The other motivation was to prevent vendors from
> > turning up at the last minute with crappy drivers that needed a ton of
> > work, by encouraging them to get started earlier, or forcing them to wait
> > until the next cycle.
> >
> 
> "Yep, these were some of the ideas behind it but the first milestone did
> for sure create some consequences."
> 
> 
> >
> > I believe that the deadline actually does more harm than good.
> >
> 
> "In retrospect I'd agree with you on this.  We ended up spending our major
> focus for the first milestone on nothing but drivers which I think looking
> back wasn't so good.  But to be fair, we try things, see how they work,
> revisit and move on.  Which is the plan last I checked (there's a proposal
> to talk about some of this at the summit in Tokyo)."
> 

We will have more discussion on this for sure, but I figure I should
chime in with some of my thoughts.

I definitely do want to reconsider our deadlines. There are going to
be challenges no matter what point in the cycle we set for things like
driver submissions, but as John said, we need to try things and see how
it works. I saw a lot of logic in moving new drivers to the first
milestone, but I don't think it worked out as well as we had hoped it
would.

The biggest problem I see is it made the drivers a major focus for the
first part of the cycle. It seemed to me that that distracted a lot of
focus from core functionality. It got that part out of the way (mostly)
but it sort of disrupted the momentum from things discussed at the
Summit.

> 
> >
> > First of all, to those that don't want to spend time on driver reviews,
> > there are other solutions to that problem. Some people do want to review
> > the drivers, and those who don't can simply ignore them and spend time on
> > what they care about. I've heard people who spend time on driver reviews
> > say that the milestone-1 deadline doesn't mean they spend less time
> > reviewing drivers overall, it just all gets crammed into the beginning of
> > each release. It should be obvious that setting a deadline doesn't actually
> > affect the amount of reviewer effort, it just concentrates that effort.
> >
> 

I do think this is a very valid point.

I certainly don't have all the answers. I'm looking forward to the
discussions around this to get ideas of how to make things better. But I
do think we need to try something else to find a better way. Maybe we'll
end up back where we are, but I do think it warrants more discussion.

Sean


From doug at doughellmann.com  Mon Sep 28 20:16:59 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 28 Sep 2015 16:16:59 -0400
Subject: [openstack-dev] [release][requirements] changes to the
	requirements-core team
Message-ID: <1443471075-sup-7601@lrrr.local>

It has been a while since we've reviewed the requirements-core team
members. Since it's the end of the cycle, I did a quick scan of the
stats this week and I am proposing some changes based on participation.

I spoke with Ghe Rivero and Julien Danjou, both of whom have done good
work in the past but who have changed their focus more recently. Both
agreed that it no longer made sense to have them on the core team, so I
will remove them today. As with other core teams, renewed interest from
past cores is usually handled by fast-tracking them back onto the team.
Thank you, Ghe and Julien, for your contributions!

I also spoke with Davanum Srinivas (dims) about joining the team. He has
been active this cycle with requirements reviews and related release
work, and is interested in being more involved. The usual practice is to
wait a few days for feedback before adding someone new, so I will wait a
few days before adding dims.

Doug


From dborodaenko at mirantis.com  Mon Sep 28 20:20:29 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Mon, 28 Sep 2015 13:20:29 -0700
Subject: [openstack-dev] [fuel] PTL Candidacy
Message-ID: <20150928202029.GA2298@localhost>

I'd like to announce my candidacy as Fuel PTL for the next cycle.

It is our very first election, so we are taking our time to make sure we
get everything right. We have extended the nomination period to
September 28 [0] to give Fuel contributors more time to learn about the
OpenStack governance process [1] and discuss how it is going to apply to
Fuel [2].

[0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015#Timeline
[1] https://wiki.openstack.org/wiki/Governance
[2] http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-09-24-16.02.log.html#l-91

Fuel is a large project with well defined component boundaries. Some of
these components are as large and as active as whole other OpenStack
projects [3]. To improve our ability to cope with code review, design
review, and dispute resolution, we're introducing a role of a Component
Lead for the two largest components, and a team structure policy [4]
that will help new contributors find the right people for each stage of
our development process: from discussing ideas to reaching consensus
about design to getting the implementation reviewed and merged.

[3] https://lwn.net/Articles/648610/
[4] https://review.openstack.org/225376

Electing a PTL is an important step towards more open governance,
development, and community collaboration. Still, it is only one step,
and while we have made significant progress this year, there is still a
lot of work to be done before we can meet all the requirements of
becoming an official OpenStack project [5].

[5] https://review.openstack.org/199232

Specifically, we need to eliminate code duplication with the Puppet
OpenStack project, and bring all our git repositories into compliance
with the Project Testing Interface, before bringing our proposal to join
the Big Tent back to the attention of the Technical Committee.

I think that in the next six months we should focus on the following:

- Continue improving our collaboration with Puppet OpenStack project and
  complete the migration from local forked copies to reusing upstream
  Puppet modules with librarian-puppet-simple.

- Collaborate with other OpenStack projects, most importantly RPM and
  DEB Packaging, OpenStack Infrastructure, and Documentation.

- Implement PTI [6] for all Fuel components and get commits to all our
  repositories gated with unit tests (as well as functional and
  integration tests where possible) on OpenStack Infrastructure.

  [6] http://governance.openstack.org/reference/project-testing-interface.html

- Continue and expand modularization efforts on all levels for better
  reuse of Fuel components both internally and in other projects. Follow
  fuel-agent [7] as the first example of how to extract parts of
  fuel-web into self-sufficient sub-projects. Expand applicability of
  plugins [8], deployment tasks [9], and network templates [10] to make
  it easier to adapt Fuel for different use cases.

  [7] https://docs.mirantis.com/openstack/fuel/fuel-6.1/reference-architecture.html#fuel-agent
  [8] https://wiki.openstack.org/wiki/Fuel/Plugins
  [9] https://docs.mirantis.com/openstack/fuel/fuel-6.1/reference-architecture.html#task-based-deployment
  [10] https://blueprints.launchpad.net/fuel/+spec/templates-for-networking

- Expand Fuel developers documentation [11] and wiki [12], populate more
  of our component README files with information for contributors.
  Unsurprisingly, README.md in fuel-docs is a good example of that [13].

  [11] https://docs.fuel-infra.org/fuel-dev/
  [12] https://wiki.openstack.org/wiki/Fuel/How_to_contribute
  [13] https://github.com/stackforge/fuel-docs/blob/master/README.md

- Make our decision making process more transparent. We are already
  using the fuel-specs repository [14] to discuss design specifications for
  Fuel blueprints; we should use openstack-dev and IRC more actively to
  include more people from the OpenStack community in our technical
  discussions.

  [14] http://specs.fuel-infra.org/fuel-specs-master/

I have long advocated for more collaboration with the free software
community [15], and I strongly believe that paying close attention to
feedback from the community, encouraging new contributors, and building
a healthy and diverse community around Fuel is the best way to make Fuel
the awesomest OpenStack deployment tool for everyone.

[15] https://lists.launchpad.net/fuel-dev/msg00727.html

Thank you for your consideration,

-- 
Dmitry Borodaenko


From doug at doughellmann.com  Mon Sep 28 20:29:19 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 28 Sep 2015 16:29:19 -0400
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <399F9428-1FE2-4DDF-B679-080BDD583101@vmware.com>
References: <9B767DB4-10D7-49CD-8BDE-E82FA4B8616E@gmail.com>
 <CAJJL21NvHw5WUcH=kE12FF9p6CgLSb6mcUk+GJQVkY9=uFg-wQ@mail.gmail.com>
 <8BCBE2D0-3645-4272-892C-3DAA62F0733B@vmware.com>
 <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
 <1443444996-sup-6545@lrrr.local>
 <399F9428-1FE2-4DDF-B679-080BDD583101@vmware.com>
Message-ID: <1443471990-sup-8574@lrrr.local>

Excerpts from Mark Voelker's message of 2015-09-28 19:55:18 +0000:
> On Sep 28, 2015, at 9:03 AM, Doug Hellmann <doug at doughellmann.com> wrote:
> > 
> > Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
> >> On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
> >>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
> >>>> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
> >>>>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> >>>>>> 
> >>>>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
> >>> <snip>
> >>>>> 
> >>>>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
> >>>> 
> >>>> And yet future technical direction does factor in, and I'm trying
> >>>> to add a new heuristic to that aspect of consideration of tests:
> >>>> Do not add tests that use proxy APIs.
> >>>> 
> >>>> If there is some compelling reason to add a capability for which
> >>>> the only tests use a proxy, that's important feedback for the
> >>>> contributor community and tells us we need to improve our test
> >>>> coverage. If the reason to use the proxy is that no one is deploying
> >>>> the proxied API publicly, that is also useful feedback, but I suspect
> >>>> we will, in most cases (glance is the exception), say "Yeah, that's
> >>>> not how we mean for you to run the services long-term, so don't
> >>>> include that capability."
> >>> 
> >>> I think we might also just realize that some of the tests are using the
> >>> proxy because... that's how they were originally written.
> >> 
> >> From my memory, that's how we got here.
> >> 
> >> The Nova tests needed to use an image API. (i.e. list images used to
> >> check the snapshot Nova, or similar)
> >> 
> >> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
> >> it being the only widely deployed option.
> > 
> > Right, and I want to make sure it's clear that I am differentiating
> > between "these tests are bad" and "these tests are bad *for DefCore*".
> > We should definitely continue to test the proxy API, since it's a
> > feature we have and that our users rely on.
> > 
> >> 
> >>> And they could be rewritten to use native APIs.
> >> 
> >> +1
> >> Once Glance v2 is available.
> >> 
> >> Adding Glance v2 as advisory seems a good step to help drive more adoption.
> > 
> > I think we probably don't want to rewrite the existing tests, since
> > that effectively changes the contract out from under existing folks
> > complying with DefCore.  If we need new, parallel, tests that do
> > not use the proxy to make more suitable tests for DefCore to use,
> > we should create those.
> > 
> >> 
> >>> I do agree that "testing proxies" should not be part of Defcore, and I
> >>> like Doug's idea of making that a new heuristic in test selection.
> >> 
> >> +1
> >> That's a good thing to add.
> >> But I don't think we had another option in this case.
> > 
> > We did have the option of leaving the feature out and highlighting the
> > discrepancy to the contributors so tests could be added. That
> > communication didn't really happen, as far as I can tell.
> > 
> >>>> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
> >>>> those APIs in DefCore as a reason to avoid deprecating them in the code
> >>>> even if they wanted to consider them as legacy features that should be
> >>>> removed. Maybe that's not true, and the Nova team would be happy to
> >>>> deprecate the APIs, but I did think that part of the feedback cycle we
> >>>> were establishing here was to have an indication from the outside of the
> >>>> contributor base about what APIs are considered important enough to keep
> >>>> alive for a long period of time.
> >>> I'd also agree with this. Defcore is a wider contract that we're trying
> >>> to get even more people to write to because that cross section should be
> >>> widely deployed. So deprecating something in Defcore is something I
> >>> think most teams, Nova included, would be very reluctant to do. It's
> >>> just asking for breaking your users.
> >> 
> >> I can't see us removing the proxy APIs in Nova any time soon,
> >> regardless of DefCore, as it would break too many people.
> >> 
> >> But personally, I like dropping them from Defcore, to signal that the
> >> best practice is to use the Glance v2 API directly, rather than the
> >> Nova proxy.
> >> 
> >> Maybe they are just marked deprecated, but still required, although
> >> that sounds a bit crazy.
> > 
> > Marking them as deprecated, then removing them from DefCore, would let
> > the Nova team make a technical decision about what to do with them
> > (maybe they get spun out into a separate service, maybe they're so
> > popular you just keep them, whatever).
> 
> So, here's that Who's On First thing again.  Just to clarify: Nova does not need Capabilities to be removed from Guidelines in order to make technical decisions about what to do with a feature (though removing a Capability from future Guidelines may make Nova a lot more comfortable with their decision if they *do* decide to deprecate something, which I think is what Doug was pointing out here).
> 
> The DefCore Committee cannot tell projects what they can and cannot do with their code [1].  All DefCore can do is tell vendors what capabilities they have to expose to end users (if and only if those vendors want their products to be OpenStack Powered(TM) [2]).  It also tells end users what things they can rely on being present (if and only if they choose an OpenStack Powered(TM) product that adheres to a particular Guideline).  It is a Wonderful Thing if stuff doesn't get dropped from Guidelines very often, because nobody wants users to have to worry about not being able to rely on things they previously relied on.  It's therefore also a Wonderful Thing if projects like Nova and the DefCore Committee are talking to each other with an eye on making the end-user experience as consistent and stable as possible, and that when things do change, those transitions are handled as smoothly as possible.
> 
> But at the end of the day, if Nova wants to deprecate something, spin it out, or keep it, Nova doesn't need DefCore to do anything first in order to make that decision.  DefCore would love a heads-up so the next Guideline (which comes out several months after the OpenStack release in which the changes are made) can take the decision into account.  In fact, in the case of deprecation, as of last week projects are more or less required to give DefCore a heads-up if they want the assert:follows-standard-deprecation [3] tag.  A heads-up is even nice if Nova decides they want to keep supporting something, since that will help the "future direction" criteria be scored properly.
> 
> Ultimately, what Nova does with Nova's code is still Nova's decision to make.  I think that's a pretty good thing.

Indeed! I guess I overestimated the expectations for DefCore. I thought
introducing the capabilities tests implied a broader commitment to keep
the feature than it sounds like is actually the case. I'm glad we are
more flexible than I thought. :-)

> 
> And FWIW I think it's a pretty good thing we're all now openly discussing it, too (after all this whole DefCore thing is still pretty new to most folks) so thanks to all of you for that. =)

Yes, it was pretty difficult to follow some of the earlier DefCore
discussions while the process and guidelines were being worked out.
Thanks for clarifying!

Doug

> 
> At Your Service,
> 
> Mark T. Voelker
> 
> 
> [1] Do folks know that DefCore is a Board of Directors activity and not a TC activity?  If not, see slides 13-16: http://www.slideshare.net/markvoelker/defcore-the-interoperability-standard-for-openstack-53040869
> [2] http://www.openstack.org/interop
> [3] http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst?id=ad2b0fd939a4613a68bc154a20c771c002568234#n65
> 
> > 
> > Doug
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


From jaypipes at gmail.com  Mon Sep 28 20:43:53 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 28 Sep 2015 16:43:53 -0400
Subject: [openstack-dev] [all] devstack default worker changes
In-Reply-To: <56099CC2.3020001@dague.net>
References: <56099CC2.3020001@dague.net>
Message-ID: <5609A689.7000708@gmail.com>

This is great news indeed, Sean. :)

On 09/28/2015 04:02 PM, Sean Dague wrote:
> We used to default to using the mysqldb driver in devstack, which is a C
> binding that is not eventlet aware. This means that any time a db query
> is executed the entire greenlet is blocked. This behavior meant that in
> devstack (and in oslo.concurrency) default number of workers for things
> like API servers and conductors was = num of cores on the machine so
> that everything wouldn't be deadlocked all the time when touching the
> database.
>
> The downside is we fixed concurrency with memory, which means you have a
> lot of idle python eating all your memory all the time. Devstack in 4GB
> isn't really an option any more with more than a small number of
> services turned on.
>
> We changed the default mysql driver to the pure python one back early in
> Liberty. It shook out some bugs, but things got good quickly on that.
> Just recently we decided to see if we could drop the worker count as
> well in upstream devstack and gate jobs.
>
> The new math will give you numproc / 4 workers for these services (min
> of 2). This seems to be running fine. More interestingly it ends up
> keeping an extra 1GB+ of memory in page cache by the end of the gate
> runs, which seems to be making things a bit more stable, and under less
> load. ( https://review.openstack.org/#/c/226831/ )
>
> We've not seen any down side of this change yet, and it should make it
> easier to get devstack into a smaller footprint VM. However, I wanted to
> make sure people knew it was a change in case they see some new edge
> failure condition that could be related to it.
>
> So, it should mostly all be flowers and unicorns. But, keep an eye out
> in case a troll shows up under a bridge somewhere.
>
> 	-Sean
>
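
The new worker math Sean describes is simple enough to sketch. The following is an illustrative reconstruction, not the actual devstack code; the function name and structure are hypothetical.

```python
import multiprocessing


def default_api_workers(num_cpus=None):
    """Hypothetical sketch of the new devstack default: one worker per
    four cores, with a floor of two workers per service."""
    if num_cpus is None:
        num_cpus = multiprocessing.cpu_count()
    return max(2, num_cpus // 4)


# Under the old num-of-cores default, an 8-core gate VM ran 8 workers
# per service; under the new math it runs only 2.
print(default_api_workers(8))   # 2
print(default_api_workers(32))  # 8
```

With several API servers and conductors per devstack run, dropping from one worker per core to this formula is where the ~1GB of reclaimed memory comes from.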


From fgiannet at cisco.com  Mon Sep 28 20:48:51 2015
From: fgiannet at cisco.com (Fabio Giannetti (fgiannet))
Date: Mon, 28 Sep 2015 20:48:51 +0000
Subject: [openstack-dev] [Congress] Congress and Monasca Joint Session at
 Tokyo Design Summit
Message-ID: <D22EF45B.9922%fgiannet@cisco.com>

Tim and Congress folks,
  I am writing on behalf of the Monasca community and I would like to explore the possibility of holding a joint session during the Tokyo Design Summit.
We would like to explore:

  1.  How to integrate Monasca with Congress so that Monasca can provide metrics, logs and event data for policy evaluation/enforcement
  2.  How to leverage Monasca alarming to automatically notify about statuses that may imply policy breach
  3.  How to automatically (if possible) convert policies (or subparts) into Monasca alarms.

Please point me to a submission page if I have to create a formal proposal for the topic, and/or let me know other ways we can interact at the Summit.
Thanks in advance,
Fabio


Fabio Giannetti
Cloud Innovation Architect
Cisco Services
fgiannet at cisco.com<mailto:fgiannet at cisco.com>
Phone: +1 408 527 1134
Mobile: +1 408 854 0020


Cisco Systems, Inc.
285 W. Tasman Drive
San Jose
California
95134
United States
Cisco.com<http://www.cisco.com/>








-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/7ac0a1aa/attachment.html>

From amuller at redhat.com  Mon Sep 28 20:52:01 2015
From: amuller at redhat.com (Assaf Muller)
Date: Mon, 28 Sep 2015 16:52:01 -0400
Subject: [openstack-dev] Openvswitch agent unit tests
In-Reply-To: <20150928194532.GC17980@dell>
References: <20150928194532.GC17980@dell>
Message-ID: <CABARBAamDj5q+LU+vtn8reA4928uOHK-+H_MtgCcaftTYZY=Eg@mail.gmail.com>

Generally speaking, unit-testing agent methods that interact heavily
with the system provides very little, and arguably negative, value to
the project. Mocking internal methods and asserting that they were
called is a clear anti-pattern to my mind. In Neutron-land we prefer to
test agent code with functional tests. Since 'functional tests' is a
very overloaded term, what I mean by that is specifically running the
actual unmocked code on the system and asserting the expected behavior.

Check out:
neutron/tests/functional/agent/test_ovs_lib
neutron/tests/functional/agent/test_l2_ovs_agent
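
To make the distinction concrete, here is a toy sketch of the two styles. The FakeBridge class is invented purely for illustration; the real functional tests in the paths above run the unmocked agent code against an actual OVS bridge on the system.

```python
import unittest
from unittest import mock


class FakeBridge:
    """Stand-in for an agent-managed bridge (invented for illustration)."""

    def __init__(self):
        self.ports = {}

    def add_port(self, name, tag):
        self._set_tag(name, tag)

    def _set_tag(self, name, tag):
        self.ports[name] = tag


class AntiPatternTest(unittest.TestCase):
    """Mocks an internal method and asserts it was called: the test is
    coupled to the implementation and proves nothing about behavior."""

    def test_add_port_calls_set_tag(self):
        bridge = FakeBridge()
        with mock.patch.object(bridge, "_set_tag") as set_tag:
            bridge.add_port("tap0", 42)
            set_tag.assert_called_once_with("tap0", 42)


class BehaviorTest(unittest.TestCase):
    """Runs the unmocked code and asserts the observable outcome."""

    def test_add_port_tags_port(self):
        bridge = FakeBridge()
        bridge.add_port("tap0", 42)
        self.assertEqual(42, bridge.ports["tap0"])
```

The point is only the contrast in what each test asserts: the first breaks on any refactor of `_set_tag`, the second only when the behavior itself regresses.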

On Mon, Sep 28, 2015 at 3:45 PM, Sławek Kapłoński <slawek at kaplonski.pl>
wrote:

> Hello,
>
> I'm a new developer who wants to start contributing to neutron. I have
> some small experience with neutron already, but I haven't pushed
> anything upstream yet. So I searched launchpad for a bug and took this
> one: https://bugs.launchpad.net/neutron/+bug/1285893, and I started
> looking at how to write new tests (I think it is a fairly easy job to
> start with, but maybe I'm wrong).
> Now I have some questions for you:
> 1. From test-coverage I can see that, for example, coverage is missing
> for lines 349-350 in the method _restore_local_vlan_map(self) - should
> I create a new test that calls that method and checks that the proper
> exception is raised? Or is such "one line" missing coverage not really
> worth checking? Or should it be done in some different way?
>
> 2. What about tests for methods like "_local_vlan_for_flat", which is
> not tested at all? Should a new test be created for such a method, or
> should it be covered by some different test?
>
> Thanks in advance for any advice and tips on how to write such unit
> tests properly :)
>
> --
> Best regards / Pozdrawiam
> Sławek Kapłoński
> slawek at kaplonski.pl
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/285ed850/attachment.html>

From mestery at mestery.com  Mon Sep 28 20:55:50 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Mon, 28 Sep 2015 15:55:50 -0500
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci
 testing requirements for OpenStack Compatible mark
In-Reply-To: <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>
References: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
 <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>
Message-ID: <CAL3VkVzWvOv79eOh1p+Cqi8=JfydDYi5gwnDoq5vHJhGtc3Ojg@mail.gmail.com>

The Neutron team also discussed this in Vancouver, you can see the etherpad
here [1]. We talked about the idea of creating a validation suite, and it
sounds like that's something we should again discuss in Tokyo for the
Mitaka cycle. I think a validation suite would be a great step forward for
Neutron third-party CI systems to use to validate they work with a release.

[1] https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty

On Sun, Sep 27, 2015 at 11:39 AM, Armando M. <armamig at gmail.com> wrote:

>
>
> On 25 September 2015 at 15:40, Chris Hoge <chris at openstack.org> wrote:
>
>> In November, the OpenStack Foundation will start requiring vendors
>> requesting
>> new "OpenStack Compatible" storage driver licenses to start passing the
>> Cinder
>> third-party integration tests.
>
> The new program was approved by the Board at
>> the July meeting in Austin and follows the improvement of the testing
>> standards
>> and technical requirements for the "OpenStack Powered" program. This is
>> all
>> part of the effort of the Foundation to use the OpenStack brand to
>> guarantee a
>> base-level of interoperability and consistency for OpenStack users and to
>> protect the work of our community of developers by applying a trademark
>> backed
>> by their technical efforts.
>>
>> The Cinder driver testing is the first step of a larger effort to apply
>> community determined standards to the Foundation marketing programs. We're
>> starting with Cinder because it has a successful testing program in
>> place, and
>> we have plans to extend the program to network drivers and OpenStack
>> applications. We're going to require CI testing for new "OpenStack
>> Compatible"
>> storage licenses starting on November 1, and plan to roll out network and
>> application testing in 2016.
>>
>> One of our goals is to work with project leaders and developers to help us
>> define and implement these test programs. The standards for third-party
>> drivers and applications should be determined by the developers and users
>> in our community, who are experts in how to maintain the quality of the
>> ecosystem.
>>
>> We welcome any feedback on this program, and are also happy to answer any
>> questions you might have.
>>
>
> Thanks for spearheading this effort.
>
> Do you have more information/pointers about the program, and how Cinder in
> particular is
> paving the way for other projects to follow?
>
> Thanks,
> Armando
>
>
>> Thanks!
>>
>> Chris Hoge
>> Interop Engineer
>> OpenStack Foundation
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/410597fc/attachment-0001.html>

From john.l.villalovos at intel.com  Mon Sep 28 21:00:07 2015
From: john.l.villalovos at intel.com (Villalovos, John L)
Date: Mon, 28 Sep 2015 21:00:07 +0000
Subject: [openstack-dev] [Ironic] Preparing for functional testing in
 Ironic. Will likely break any in-flight patches.
Message-ID: <8E24C936330C954289EBF379E7F96CB64AAECA95@ORSMSX105.amr.corp.intel.com>

Just to give a heads up to people. I have proposed the following patch:
https://review.openstack.org/228612

This moves the tests in ironic/tests/ to ironic/tests/unit/

Likely this patch will break most in-flight patches, so rebasing will be required when this patch goes in.

If you have any questions, please let me know.

Thanks,
John


From amuller at redhat.com  Mon Sep 28 21:00:30 2015
From: amuller at redhat.com (Assaf Muller)
Date: Mon, 28 Sep 2015 17:00:30 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <56096D79.6090005@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
Message-ID: <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>

On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com> wrote:

> On 28/09/15 05:47, Gorka Eguileor wrote:
>
>> On 26/09, Morgan Fainberg wrote:
>>
>>> As a core (and former PTL) I just ignored commit message -1s unless
>>> there is something majorly wrong (no bug id where one is needed, etc).
>>>
>>> I appreciate well formatted commits, but can we let this one go? This
>>> discussion is so far into the meta-bike-shedding (bike shedding about bike
>>> shedding commit messages) ... If a commit message is *that* bad a -1 (or
>>> just fixing it?) Might be worth it. However, if a commit isn't missing key
>>> info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence
>>> moving from topic to topic, there isn't a good reason to block the review.
>>>
>>
> +1
>
> It is not worth having a bot -1 bad commits or even having gerrit muck
>>> with them. Let's do the job of the reviewer and actually review code
>>> instead of going crazy with commit messages.
>>>
>>
> +1
>
> Sent via mobile
>>>
>>>
>> I have to disagree, as reviewers we have to make sure that guidelines
>> are followed, if we have an explicit guideline that states that
>> the limit length is 72 chars, I will -1 any patch that doesn't follow
>> the guideline, just as I would do with i18n guideline violations.
>>
>
> Apparently you're unaware of the definition of the word 'guideline'. It's
> a guide. If it were a hard-and-fast rule then we would have a bot enforcing
> it already.
>
> Is there anything quite so frightening as a large group of people blindly
> enforcing rules with total indifference to any sense of overarching purpose?
>
> A reminder that the reason for this guideline is to ensure that none of
> the broad variety of tools that are available in the Git ecosystem
> effectively become unusable with the OpenStack repos due to wildly
> inconsistent formatting. And of course, even that goal has to be balanced
> against our other goals, such as building a healthy community and
> occasionally shipping some software.
>
> There are plenty of ways to achieve that goal other than blanket drive-by
> -1's for trivial inconsistencies in the formatting of individual commit
> messages.


The actual issue is that we as a community (speaking of the Neutron
community at least) are stat-crazed. We have a fair number of
contributors that -1 for trivial issues to retain their precious stats
with alarming zeal. That is the real issue. All of these commit message
issues, translation mishaps, comment typos etc. are excuses for people
to boost their stats without contributing their time or energy to the
project. I am beyond bitter about this issue at this point.

I'll say what I've always said about this issue: The review process is
about collaboration. I imagine that the author is sitting next to me, and
we're going
through the patch together for the purpose of improving it. Review comments
should be motivated by a thirst to improve the proposed code in a real way,
not by your want or need to improve your stats on stackalytics. The latter
is an enormous waste of your time.


> A polite comment and a link to the guidelines is a great way to educate
> new contributors. For core reviewers especially, a comment like that and a
> +1 review will *almost always* get you the change you want in double-quick
> time. (Any contributor who knows they are 30s work away from a +2 is going
> to be highly motivated.)
>
> Typos are a completely different matter and they should not be grouped
>> together with guideline infringements.
>>
>
> "Violations"? "Infringements"? It's line wrapping, not a felony case.
>
> I agree that it is a waste of time and resources when you have to -1 a
> patch for this, but there are multiple solutions: you can make sure your
>> editor does auto wrapping at the right length (I have mine configured
>> this way), or create a git-enforce policy with a client-side hook, or do
>> like Ihar is trying to do and push for a guideline change.
>>
>> I don't mind changing the guideline to any other length, but as long as
>> it is 72 chars I will keep enforcing it, as it is not the place of
>> reviewers to decide which guidelines are worthy of being enforced and
>> which ones are not.
>>
>
> Of course it is.
>
> If we're not here to use our brains, why are we here? Serious question.
> Feel free to use any definition of 'here'.
>
> Cheers,
>> Gorka.
>>
>>
>>
>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
>>>>
>>>> Can I ask a different question - could we reject a few simple-to-check
>>>> things on the push, like bad commit messages?  For things that take 2
>>>> seconds to fix and do make people's lives better, it's not that they're
>>>> rejected, it's that the whole rejection cycle via gerrit review (push/wait
>>>> for tests to run/check website/swear/find change/fix/push again) is out of
>>>> proportion to the effort taken to fix it.
>>>>
>>>
> I would welcome a confirmation step - but *not* an outright rejection -
> that runs *locally* in git-review before the change is pushed. Right now,
> gerrit gives you a warning after the review is pushed, at which point it is
> too late.
>
> It seems here that there's benefit to 72 line messages - not that everyone
>>>> sees that benefit, but it is present - but it doesn't outweigh the current
>>>> cost.
>>>>
>>>
> Yes, 72 columns is the correct guideline IMHO. It's used virtually
> throughout the Git ecosystem now. Back in the early days of Git it wasn't
> at all clear - should you have no line breaks at all and let each tool do
> its own soft line wrapping? If not, where should you wrap? Now there's a
> clear consensus that you hard wrap at 72. Vi wraps git commit messages at
> 72 by default.
>
> The output of "git log" indents commit messages by four spaces, so
> anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also
> noticed that Launchpad (or at least the bot that posts commit messages to
> Launchpad when patches merge) does a hard wrap at 72 characters.
>
> A much better idea than modifying the guideline would be to put
> documentation on the wiki about how to set up your editor so that this is
> never an issue. You shouldn't even have to even think about the line length
> for at least 99% of commits.
>
> cheers,
> Zane.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/1ee7bd36/attachment.html>

From robertc at robertcollins.net  Mon Sep 28 21:03:37 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Tue, 29 Sep 2015 10:03:37 +1300
Subject: [openstack-dev] [release][requirements] changes to the
 requirements-core team
In-Reply-To: <1443471075-sup-7601@lrrr.local>
References: <1443471075-sup-7601@lrrr.local>
Message-ID: <CAJ3HoZ00oFg0YKV7BWXXR4MVfQa+_5NNqdx2CoXT2+hx-PDLxw@mail.gmail.com>

All sounds good to me.

-Rob

On 29 September 2015 at 09:16, Doug Hellmann <doug at doughellmann.com> wrote:
> It has been a while since we've reviewed the requirements-core team
> members. Since it's the end of the cycle, I did a quick scan of the
> stats this week and I am proposing some changes based on participation.
>
> I spoke with Ghe Rivero and Julien Danjou, both of whom have done good
> work in the past but who have changed their focus more recently. Both
> agreed that it no longer made sense to have them on the core team, so I
> will remove them today. As with other core teams, renewed interest from
> past cores is usually handled by fast-tracking them back onto the team.
> Thank you, Ghe and Julien, for your contributions!
>
> I also spoke with Davanum Srinivas (dims) about joining the team. He has
> been active this cycle with requirements reviews and related release
> work, and is interested in being more involved. The usual practice is to
> wait a few days for feedback before adding someone new, so I will wait a
> few days before adding dims.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


From jim at jimrollenhagen.com  Mon Sep 28 21:09:33 2015
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Mon, 28 Sep 2015 14:09:33 -0700
Subject: [openstack-dev] [Ironic] Preparing for functional testing in
 Ironic. Will likely break any in-flight patches.
In-Reply-To: <8E24C936330C954289EBF379E7F96CB64AAECA95@ORSMSX105.amr.corp.intel.com>
References: <8E24C936330C954289EBF379E7F96CB64AAECA95@ORSMSX105.amr.corp.intel.com>
Message-ID: <20150928210933.GJ14957@jimrollenhagen.com>

On Mon, Sep 28, 2015 at 09:00:07PM +0000, Villalovos, John L wrote:
> Just to give a heads up to people. I have proposed the following patch:
> https://review.openstack.org/228612
> 
> This moves the tests in ironic/tests/ to ironic/tests/unit/
> 
> Likely this patch will break most in-flight patches, so rebasing will be required when this patch goes in.

More concretely, the team wants to land this patch ASAP. It's the
beginning of the cycle, so presumably there aren't any patches we're
trying to get in quickly. We want to get it done before any more patches
sneak in new files/imports that break this.

I hope to land this today; if we don't, I plan to leave it with my +2
tonight and Dmitry can land it first thing in the morning. ;)

// jim


From sean at dague.net  Mon Sep 28 21:21:45 2015
From: sean at dague.net (Sean Dague)
Date: Mon, 28 Sep 2015 17:21:45 -0400
Subject: [openstack-dev] [release][requirements] changes to the
 requirements-core team
In-Reply-To: <1443471075-sup-7601@lrrr.local>
References: <1443471075-sup-7601@lrrr.local>
Message-ID: <5609AF69.8070501@dague.net>

On 09/28/2015 04:16 PM, Doug Hellmann wrote:
> It has been a while since we've reviewed the requirements-core team
> members. Since it's the end of the cycle, I did a quick scan of the
> stats this week and I am proposing some changes based on participation.
> 
> I spoke with Ghe Rivero and Julien Danjou, both of whom have done good
> work in the past but who have changed their focus more recently. Both
> agreed that it no longer made sense to have them on the core team, so I
> will remove them today. As with other core teams, renewed interest from
> past cores is usually handled by fast-tracking them back onto the team.
> Thank you, Ghe and Julien, for your contributions!
> 
> I also spoke with Davanum Srinivas (dims) about joining the team. He has
> been active this cycle with requirements reviews and related release
> work, and is interested in being more involved. The usual practice is to
> wait a few days for feedback before adding someone new, so I will wait a
> few days before adding dims.

All seems reasonable, +1.

	-Sean

-- 
Sean Dague
http://dague.net


From clint at fewbar.com  Mon Sep 28 21:24:43 2015
From: clint at fewbar.com (Clint Byrum)
Date: Mon, 28 Sep 2015 14:24:43 -0700
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
Message-ID: <1443475269-sup-7070@fewbar.com>

Excerpts from Morgan Fainberg's message of 2015-09-26 23:36:09 -0700:
> As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc). 
> 
> I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike shedding about bike shedding commit messages) ... If a commit message is *that* bad a -1 (or just fixing it?) Might be worth it. However, if a commit isn't missing key info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review. 
> 
> It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages. 
> 

Agreed with all of your sentiments.

Please anyone -1'ing for this, read this:

https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure


"The first line should be limited to 50 characters and should not end
with a period (commit messages over 72 characters will be rejected by
the gate)."

"Subsequent lines should be wrapped at 72 characters."

Notice, the word "should" is used, not "must".

So _DO NOT_ -1 for this. A "should" is a guideline, and a note like
"hey, could you wrap at 72 chars if you push again? We like to keep them
formatted that way, see [link] for more info. Thanks! #notAMinusOne"
is all that is needed.

Please can we not spend another minute on this? Thanks!
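
A couple of posters in this thread suggest a client-side check that fires before the patch is ever pushed. As a rough sketch (not an official OpenStack tool), a warn-only commit-msg hook might look like this, using the 50/72 limits from the wiki page linked above:

```python
#!/usr/bin/env python
"""Warn-only commit-msg hook sketch: save as .git/hooks/commit-msg and
make it executable. Illustrative only; the limits mirror the wiki
guideline (50-char subject, 72-char body). It never rejects the commit."""
import sys


def check_message(lines, subject_limit=50, body_limit=72):
    """Return a list of human-readable warnings for over-long lines."""
    warnings = []
    if lines and len(lines[0]) > subject_limit:
        warnings.append("subject line exceeds %d characters" % subject_limit)
    for num, line in enumerate(lines[1:], start=2):
        # Lines starting with '#' are git comments, not message text.
        if len(line) > body_limit and not line.startswith("#"):
            warnings.append("line %d exceeds %d characters" % (num, body_limit))
    return warnings


if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as msg_file:
        for warning in check_message(msg_file.read().splitlines()):
            print("warning: %s" % warning)
    sys.exit(0)  # warn, never reject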


From mestery at mestery.com  Mon Sep 28 21:27:19 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Mon, 28 Sep 2015 16:27:19 -0500
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
Message-ID: <CAL3VkVz1UrPvTAq_rKNx8W4EDVxOQbHZ73y1XjGoimYs07iJ8A@mail.gmail.com>

On Mon, Sep 28, 2015 at 4:00 PM, Assaf Muller <amuller at redhat.com> wrote:

>
>
> On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com> wrote:
>
>> On 28/09/15 05:47, Gorka Eguileor wrote:
>>
>>> On 26/09, Morgan Fainberg wrote:
>>>
>>>> As a core (and former PTL) I just ignored commit message -1s unless
>>>> there is something majorly wrong (no bug id where one is needed, etc).
>>>>
>>>> I appreciate well formatted commits, but can we let this one go? This
>>>> discussion is so far into the meta-bike-shedding (bike shedding about bike
>>>> shedding commit messages) ... If a commit message is *that* bad a -1 (or
>>>> just fixing it?) Might be worth it. However, if a commit isn't missing key
>>>> info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence
>>>> moving from topic to topic, there isn't a good reason to block the review.
>>>>
>>>
>> +1
>>
>> It is not worth having a bot -1 bad commits or even having gerrit muck
>>>> with them. Let's do the job of the reviewer and actually review code
>>>> instead of going crazy with commit messages.
>>>>
>>>
>> +1
>>
>> Sent via mobile
>>>>
>>>>
>>> I have to disagree, as reviewers we have to make sure that guidelines
>>> are followed, if we have an explicit guideline that states that
>>> the limit length is 72 chars, I will -1 any patch that doesn't follow
>>> the guideline, just as I would do with i18n guideline violations.
>>>
>>
>> Apparently you're unaware of the definition of the word 'guideline'. It's
>> a guide. If it were a hard-and-fast rule then we would have a bot enforcing
>> it already.
>>
>> Is there anything quite so frightening as a large group of people blindly
>> enforcing rules with total indifference to any sense of overarching purpose?
>>
>> A reminder that the reason for this guideline is to ensure that none of
>> the broad variety of tools that are available in the Git ecosystem
>> effectively become unusable with the OpenStack repos due to wildly
>> inconsistent formatting. And of course, even that goal has to be balanced
>> against our other goals, such as building a healthy community and
>> occasionally shipping some software.
>>
>> There are plenty of ways to achieve that goal other than blanket drive-by
>> -1's for trivial inconsistencies in the formatting of individual commit
>> messages.
>
>
> The actual issue is that we as a community (speaking of the Neutron
> community at least) are stat-crazed. We have a fair number of
> contributors that -1 for trivial issues to retain their precious stats
> with alarming zeal. That is the real issue. All of these commit message
> issues, translation mishaps, comment typos etc. are excuses for people
> to boost their stats without contributing their time or energy to the
> project. I am beyond bitter about this issue at this point.
>
>
I should note that as the previous PTL, I viewed stats as garbage for
the most part. Keep in mind I nominated two new core reviewers whose
stats were lower but who are incredibly important members of our
community [1]. I did this because they are the type of people who should
be core reviewers, and we had a long conversation about this. So, I
agree with you, this stats thing is awful. And Stackalytics hasn't
helped it, but made it much worse.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071869.html


> I'll say what I've always said about this issue: The review process is
> about collaboration. I imagine that the author is sitting next to me, and
> we're going
> through the patch together for the purpose of improving it. Review
> comments should be motivated by a thirst to improve the proposed code in a
> real way,
> not by your want or need to improve your stats on stackalytics. The latter
> is an enormous waste of your time.
>
>
>> A polite comment and a link to the guidelines is a great way to educate
>> new contributors. For core reviewers especially, a comment like that and a
>> +1 review will *almost always* get you the change you want in double-quick
>> time. (Any contributor who knows they are 30s work away from a +2 is going
>> to be highly motivated.)
>>
>> Typos are a completely different matter and they should not be grouped
>>> together with guideline infringements.
>>>
>>
>> "Violations"? "Infringements"? It's line wrapping, not a felony case.
>>
>> I agree that it is a waste of time and resources when you have to -1 a
>>> patch for this, but there are multiple solutions: you can make sure your
>>> editor does auto wrapping at the right length (I have mine configured
>>> this way), or create a git-enforce policy with a client-side hook, or do
>>> like Ihar is trying to do and push for a guideline change.
>>>
>>> I don't mind changing the guideline to any other length, but as long as
>>> it is 72 chars I will keep enforcing it, as it is not the place of
>>> reviewers to decide which guidelines are worthy of being enforced and
>>> which ones are not.
>>>
>>
>> Of course it is.
>>
>> If we're not here to use our brains, why are we here? Serious question.
>> Feel free to use any definition of 'here'.
>>
>> Cheers,
>>> Gorka.
>>>
>>>
>>>
>>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
>>>>>
>>>>> Can I ask a different question - could we reject a few simple-to-check
>>>>> things on the push, like bad commit messages?  For things that take 2
>>>>> seconds to fix and do make people's lives better, it's not that they're
>>>>> rejected, it's that the whole rejection cycle via gerrit review (push/wait
>>>>> for tests to run/check website/swear/find change/fix/push again) is out of
>>>>> proportion to the effort taken to fix it.
>>>>>
>>>>
>> I would welcome a confirmation step - but *not* an outright rejection -
>> that runs *locally* in git-review before the change is pushed. Right now,
>> gerrit gives you a warning after the review is pushed, at which point it is
>> too late.
>>
>> It seems here that there's benefit to 72 line messages - not that
>>>>> everyone sees that benefit, but it is present - but it doesn't outweigh the
>>>>> current cost.
>>>>>
>>>>
>> Yes, 72 columns is the correct guideline IMHO. It's used virtually
>> throughout the Git ecosystem now. Back in the early days of Git it wasn't
>> at all clear - should you have no line breaks at all and let each tool do
>> its own soft line wrapping? If not, where should you wrap? Now there's a
>> clear consensus that you hard wrap at 72. Vi wraps git commit messages at
>> 72 by default.
>>
>> The output of "git log" indents commit messages by four spaces, so
>> anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also
>> noticed that Launchpad (or at least the bot that posts commit messages to
>> Launchpad when patches merge) does a hard wrap at 72 characters.
>>
>> A much better idea than modifying the guideline would be to put
>> documentation on the wiki about how to set up your editor so that this is
>> never an issue. You shouldn't even have to think about the line length
>> for at least 99% of commits.
>>
>> cheers,
>> Zane.
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/6a080083/attachment.html>

From blak111 at gmail.com  Mon Sep 28 21:29:14 2015
From: blak111 at gmail.com (Kevin Benton)
Date: Mon, 28 Sep 2015 23:29:14 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
Message-ID: <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>

I think a blanket statement about what people's motivations are is not
fair. We've seen in this thread that some people want to enforce the limit
of 72 chars and it's not about padding their stats.

The issue here is that we have a guideline with a very specific number. If
we don't care to enforce it, why do we even bother? "Please do this, unless
you don't feel like it", is going to be hard for many people to review in a
way that pleases everyone.

On Mon, Sep 28, 2015 at 11:00 PM, Assaf Muller <amuller at redhat.com> wrote:

>
>
> On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com> wrote:
>
>> On 28/09/15 05:47, Gorka Eguileor wrote:
>>
>>> On 26/09, Morgan Fainberg wrote:
>>>
>>>> As a core (and former PTL) I just ignored commit message -1s unless
>>>> there is something majorly wrong (no bug id where one is needed, etc).
>>>>
>>>> I appreciate well formatted commits, but can we let this one go? This
>>>> discussion is so far into the meta-bike-shedding (bike shedding about bike
>>>> shedding commit messages) ... If a commit message is *that* bad a -1 (or
>>>> just fixing it?) Might be worth it. However, if a commit isn't missing key
>>>> info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence
>>>> moving from topic to topic, there isn't a good reason to block the review.
>>>>
>>>
>> +1
>>
>> It is not worth having a bot -1 bad commits or even having gerrit muck
>>>> with them. Let's do the job of the reviewer and actually review code
>>>> instead of going crazy with commit messages.
>>>>
>>>
>> +1
>>
>> Sent via mobile
>>>>
>>>>
>>> I have to disagree, as reviewers we have to make sure that guidelines
>>> are followed, if we have an explicit guideline that states that
>>> the limit length is 72 chars, I will -1 any patch that doesn't follow
>>> the guideline, just as I would do with i18n guideline violations.
>>>
>>
>> Apparently you're unaware of the definition of the word 'guideline'. It's
>> a guide. If it were a hard-and-fast rule then we would have a bot enforcing
>> it already.
>>
>> Is there anything quite so frightening as a large group of people blindly
>> enforcing rules with total indifference to any sense of overarching purpose?
>>
>> A reminder that the reason for this guideline is to ensure that none of
>> the broad variety of tools that are available in the Git ecosystem
>> effectively become unusable with the OpenStack repos due to wildly
>> inconsistent formatting. And of course, even that goal has to be balanced
>> against our other goals, such as building a healthy community and
>> occasionally shipping some software.
>>
>> There are plenty of ways to achieve that goal other than blanket drive-by
>> -1's for trivial inconsistencies in the formatting of individual commit
>> messages.
>
>
> The actual issue is that we as a community (Speaking of the Neutron
> community at least) are stat-crazed. We have a fair number of contributors
> that -1 for trivial issues to retain their precious stats with alarming
> zeal. That is the real issue. All of these commit message issues,
> translation mishaps,
> comment typos etc are excuses for people to boost their stats without
> contributing their time or energy in to the project. I am beyond bitter
> about this
> issue at this point.
>
> I'll say what I've always said about this issue: The review process is
> about collaboration. I imagine that the author is sitting next to me, and
> we're going
> through the patch together for the purpose of improving it. Review
> comments should be motivated by a thirst to improve the proposed code in a
> real way,
> not by your want or need to improve your stats on stackalytics. The latter
> is an enormous waste of your time.
>
>
>> A polite comment and a link to the guidelines is a great way to educate
>> new contributors. For core reviewers especially, a comment like that and a
>> +1 review will *almost always* get you the change you want in double-quick
>> time. (Any contributor who knows they are 30s work away from a +2 is going
>> to be highly motivated.)
>>
>> Typos are a completely different matter and they should not be grouped
>>> together with guideline infringements.
>>>
>>
>> "Violations"? "Infringements"? It's line wrapping, not a felony case.
>>
>> I agree that it is a waste of time and resources when you have to -1 a
>>> patch for this, but there are multiple solutions: you can make sure your
>>> editor does auto wrapping at the right length (I have mine configured
>>> this way), or create a git-enforce policy with a client-side hook, or do
>>> like Ihar is trying to do and push for a guideline change.
>>>
>>> I don't mind changing the guideline to any other length, but as long as
>>> it is 72 chars I will keep enforcing it, as it is not the place of
>>> reviewers to decide which guidelines are worthy of being enforced and
>>> which ones are not.
>>>
>>
>> Of course it is.
>>
>> If we're not here to use our brains, why are we here? Serious question.
>> Feel free to use any definition of 'here'.
>>
>> Cheers,
>>> Gorka.
>>>
>>>
>>>
>>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
>>>>>
>>>>> Can I ask a different question - could we reject a few simple-to-check
>>>>> things on the push, like bad commit messages?  For things that take 2
>>>>> seconds to fix and do make people's lives better, it's not that they're
>>>>> rejected, it's that the whole rejection cycle via gerrit review (push/wait
>>>>> for tests to run/check website/swear/find change/fix/push again) is out of
>>>>> proportion to the effort taken to fix it.
>>>>>
>>>>
>> I would welcome a confirmation step - but *not* an outright rejection -
>> that runs *locally* in git-review before the change is pushed. Right now,
>> gerrit gives you a warning after the review is pushed, at which point it is
>> too late.
>>
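[Editor's note: the local, warn-only check described above could be sketched as a
client-side commit-msg hook along these lines. This is a hypothetical sketch, not
an actual git-review feature; the helper name is made up.]

```python
#!/usr/bin/env python
# Hypothetical commit-msg hook (save as .git/hooks/commit-msg and make it
# executable). It warns, but never rejects, when message lines exceed 72
# characters -- a confirmation step rather than an outright rejection.
import sys


def long_lines(message, limit=72):
    """Return (line_number, length) for each line over the limit.

    Lines starting with '#' are stripped by git before the message is
    recorded, so they are skipped here too.
    """
    return [(n, len(line))
            for n, line in enumerate(message.splitlines(), start=1)
            if len(line) > limit and not line.startswith('#')]


if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        for n, length in long_lines(f.read()):
            print('warning: line %d is %d characters (guideline: 72)'
                  % (n, length))
    sys.exit(0)  # warn only; the commit always proceeds
```

The hook exits 0 unconditionally, so it educates without blocking the push.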
>> It seems here that there's benefit to 72 line messages - not that
>>>>> everyone sees that benefit, but it is present - but it doesn't outweigh the
>>>>> current cost.
>>>>>
>>>>
>> Yes, 72 columns is the correct guideline IMHO. It's used virtually
>> throughout the Git ecosystem now. Back in the early days of Git it wasn't
>> at all clear - should you have no line breaks at all and let each tool do
>> its own soft line wrapping? If not, where should you wrap? Now there's a
>> clear consensus that you hard wrap at 72. Vi wraps git commit messages at
>> 72 by default.
>>
>> The output of "git log" indents commit messages by four spaces, so
>> anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also
>> noticed that Launchpad (or at least the bot that posts commit messages to
>> Launchpad when patches merge) does a hard wrap at 72 characters.
>>
>> A much better idea than modifying the guideline would be to put
>> documentation on the wiki about how to set up your editor so that this is
>> never an issue. You shouldn't even have to think about the line length
>> for at least 99% of commits.
>>
>> cheers,
>> Zane.
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>


-- 
Kevin Benton
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/5a08a607/attachment.html>

From german.eichberger at hpe.com  Mon Sep 28 21:41:34 2015
From: german.eichberger at hpe.com (Eichberger, German)
Date: Mon, 28 Sep 2015 21:41:34 +0000
Subject: [openstack-dev] [lbaas] [octavia] Proposing new meeting time
 Wednesday 16:00 UTC
In-Reply-To: <1443417151.4568.6.camel@localhost>
References: <D22B2C34.185FD%german.eichberger@hpe.com>
 <1443417151.4568.6.camel@localhost>
Message-ID: <D22F0186.1874F%german.eichberger@hpe.com>

Brandon,

We had some requests in the past and I just wanted to float the idea on
the ML since we are starting a new cycle...

Thanks,
German

On 9/27/15, 10:12 PM, "Brandon Logan" <brandon.logan at RACKSPACE.COM> wrote:

>Is there a lot of people requesting this meeting change?
>
>Thanks,
>Brandon
>
>On Fri, 2015-09-25 at 23:58 +0000, Eichberger, German wrote:
>> All,
>> 
>> In our last meeting [1] we discussed moving the meeting earlier to
>> accommodate participants from the EMEA region. I am therefore proposing
>>to
>> move the meeting to 16:00 UTC on Wednesday. Please respond to this
>>e-mail
>> if you have alternate suggestions. I will send out another e-mail
>> announcing the new time and the date we will start with that.
>> 
>> Thanks,
>> German
>> 
>> [1]
>> http://eavesdrop.openstack.org/meetings/octavia/2015/octavia.2015-09-23-20.00.log.html
>> 
>> 
>> 
>>_________________________________________________________________________
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From mriedem at linux.vnet.ibm.com  Mon Sep 28 21:49:55 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Mon, 28 Sep 2015 16:49:55 -0500
Subject: [openstack-dev] [all] devstack default worker changes
In-Reply-To: <56099CC2.3020001@dague.net>
References: <56099CC2.3020001@dague.net>
Message-ID: <5609B603.50401@linux.vnet.ibm.com>



On 9/28/2015 3:02 PM, Sean Dague wrote:
> We used to default to using the mysqldb driver in devstack, which is a C
> binding that is not eventlet aware. This means that any time a db query
> is executed the entire greenlet is blocked. This behavior meant that in
> devstack (and in oslo.concurrency) default number of workers for things
> like API servers and conductors was = num of cores on the machine so
> that everything wouldn't be deadlocked all the time when touching the
> database.
>
> The downside is we fixed concurrency with memory, which means you have a
> lot of idle python eating all your memory all the time. Devstack in 4GB
> isn't really an option any more with more than a small number of
> services turned on.
>
> We changed the default mysql driver to the pure python one back early in
> Liberty. It shook out some bugs, but things got good quickly on that.
> Just recently we decided to see if we could drop the worker count as
> well in upstream devstack and gate jobs.
>
> The new math will give you numproc / 4 workers for these services (min
> of 2). This seems to be running fine. More interestingly it ends up
> keeping an extra 1GB+ of memory in page cache by the end of the gate
> runs, which seems to be making things a bit more stable, and under less
> load. ( https://review.openstack.org/#/c/226831/ )
>
> We've not seen any down side of this change yet, and it should make it
> easier to get devstack into a smaller footprint VM. However, I wanted to
> make sure people knew it was a change in case they see some new edge
> failure condition that could be related to it.
>
> So, it should mostly all be flowers and unicorns. But, keep an eye out
> in case a troll shows up under a bridge somewhere.
>
> 	-Sean
>

The large ops job appears to be throwing up on itself a bit because of 
this change.  Tracking with bug 
https://bugs.launchpad.net/nova/+bug/1500615.
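
For reference, the "numproc / 4 workers (min of 2)" math Sean describes
above works out to something like this (a sketch only; the helper name is
made up, not devstack's actual code):

```python
import multiprocessing


def api_worker_count(num_cpus=None):
    """Sketch of the new devstack default quoted above:
    num of cores / 4 workers, with a floor of 2.
    (Hypothetical helper; not devstack's real implementation.)"""
    if num_cpus is None:
        num_cpus = multiprocessing.cpu_count()
    return max(2, num_cpus // 4)


# An 8-vCPU gate node now gets 2 workers instead of the old 8.
print(api_worker_count(8))
```
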

-- 

Thanks,

Matt Riedemann



From tim at styra.com  Mon Sep 28 21:51:34 2015
From: tim at styra.com (Tim Hinrichs)
Date: Mon, 28 Sep 2015 21:51:34 +0000
Subject: [openstack-dev] [Congress] Congress and Monasca Joint Session
 at Tokyo Design Summit
In-Reply-To: <D22EF45B.9922%fgiannet@cisco.com>
References: <D22EF45B.9922%fgiannet@cisco.com>
Message-ID: <CAJjxPABV4snGh3Tr8nnHxY3S=yy1J7n+HRLw9rdXM2PFtBWTQQ@mail.gmail.com>

Hi Fabio: Thanks for reaching out.  We should definitely talk at the
summit.  I don't know if we can devote 1 of the 3 allocated Congress
sessions to Monasca, but we'll talk it over during IRC on Wed and let you
know.  Or do you have a session we could use for the discussion?  In any
case, I'm confident we can make good progress toward integrating Congress
and Monasca in Tokyo.  Monasca sounds interesting--I'm looking forward to
learning more!

Congress team: if we could all quickly browse the Monasca wiki before Wed's
IRC, that would be great:
https://wiki.openstack.org/wiki/Monasca

Tim



On Mon, Sep 28, 2015 at 1:50 PM Fabio Giannetti (fgiannet) <
fgiannet at cisco.com> wrote:

> Tim and Congress folks,
>   I am writing on behalf of the Monasca community and I would like to
> explore the possibility of holding a joint session during the Tokyo Design
> Summit.
> We would like to explore:
>
>    1. how to integrate Monasca with Congress so then Monasca can provide
>    metrics, logs and event data for policy evaluation/enforcement
>    2. How to leverage Monasca alarming to automatically notify about
>    statuses that may imply policy breach
>    3. How to automatically (if possible) convert policies (or subparts)
>    into Monasca alarms.
>
> Please point me to a submission page if I have to create a formal proposal
> for the topic and/or let me know other forms we can interact at the Summit.
> Thanks in advance,
> Fabio
>
> *Fabio Giannetti*
> Cloud Innovation Architect
> Cisco Services
> fgiannet at cisco.com
> Phone: *+1 408 527 1134*
> Mobile: *+1 408 854 0020*
>
> *Cisco Systems, Inc.*
> 285 W. Tasman Drive
> San Jose
> California
> 95134
> United States
> Cisco.com <http://www.cisco.com/>
>
>  Think before you print.
>
> This email may contain confidential and privileged material for the sole
> use of the intended recipient. Any review, use, distribution or disclosure
> by others is strictly prohibited. If you are not the intended recipient (or
> authorized to receive for the recipient), please contact the sender by
> reply email and delete all copies of this message.
>
> Please click here
> <http://www.cisco.com/web/about/doing_business/legal/cri/index.html> for
> Company Registration Information.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/c4ee529e/attachment.html>

From sgolovatiuk at mirantis.com  Mon Sep 28 21:51:49 2015
From: sgolovatiuk at mirantis.com (Sergii Golovatiuk)
Date: Mon, 28 Sep 2015 23:51:49 +0200
Subject: [openstack-dev] [fuel][ptl] PTL candidacy
Message-ID: <CA+HkNVteWAoYY_CGaA1MtdK2-9CkxVRP+H66mq0bb6q-wsAv0A@mail.gmail.com>

Hi friends,


I'd like to raise my hand and submit my candidacy for the Fuel PTL
position for the next cycle.

Before I go forward with my candidacy, let me recall some of the areas
we've been focused on as a team for the last six months:

- Synchronisation with upstream modules. As a result we have a nice
mechanism to synchronise upstream manifests.

- Plugin system improvements. We extended our plugin system to allow
components to be detached. These changes create a flexible mechanism for
plugin developers.

- HA improvements. The Fuel team polished the OCF scripts, and QA
engineers automated a lot of scenarios for our high availability
architecture.

- Breaking Fuel into components. fuel-qa, fuel-agent and other components
were moved to their own repositories, which increased the velocity of
component development.

- Fuel repositories. The Fuel team implemented online repositories for
the distribution systems it supports, which allowed us to speed up update
delivery.

- Granular deployment. Instead of a single 'puppet apply', Fuel now runs
many small applies. This speeds up the development process, as a
developer no longer needs to wait for a whole deployment and can apply
only the required task. This flexibility gives a lot of room for plugin
developers.


I believe these are some of the most important topics our team should
focus on:

With my deployer/operator hat on:


- Continue breaking the Fuel monolith into components. This will allow us
to increase the velocity of product development. A good candidate is
fuel-web; some other projects can be moved to their own repositories as
well.

- Switching to the fuel2 CLI. We should finally deprecate the first
version of the CLI; fuel2, which is based on cliff, should become the
main tool for operators.

- Implement integration testing. Fuel suffers from a lack of integration
testing, which means that the state of processes is not verified after
deployment. I am also going to introduce code coverage metrics that will
be used to analyse whether coverage is getting better or not.

- CI improvements. Fuel suffers from slow gates: ISO compilation, master
node deployment and OpenStack deployment require more and more time.
Reducing CI time will speed up the development process. I believe we
should start with metrics, so we know how much time it takes to go from
code to a deployed OpenStack, and then iteratively improve the
bottlenecks. In the end, developers will spend less time waiting on CI.

- Improve collaboration with the Puppet OpenStack community. This started
as a simple synchronisation of community manifests. Over a couple of
cycles we contributed a lot of bug fixes, and in the end the Fuel team
implemented a mechanism that allows us to consume openstack-puppet
modules without any modifications. However, we are still migrating; it
should be done within the next 6 months.

- Documentation improvements. I believe that improving the documentation
will lower the barrier for new contributors. I am going to add more
samples and details to our development documentation; a good example is
puppetlabs-stdlib [1].

[1] https://github.com/puppetlabs/puppetlabs-stdlib

On the other hand, Fuelers have a lot of OpenStack knowledge that should
be contributed back upstream; our HA experience in particular.

- Lifecycle improvements. I am going to push a paradigm shift: Fuel is
not only a deployment tool, it is also very useful for lifecycle
management.

With my leader hat on:

- Implementing a lieutenant-based model. Fuel core developers are
overloaded with reviews; switching to a lieutenant model should free up
time that can be spent on R&D. [2]

[2]
https://www.mail-archive.com/openstack-dev at lists.openstack.org/msg62229.html

- Last but not least, I'll do my best to get Fuel under the Big Tent.

I have many things in my backlog, but I believe the above are the most
important. Resolving these issues will increase the velocity of
development and drive a cultural shift toward more external
contributions.

Sincerely yours,

Sergii Golovatiuk
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/8ae3f446/attachment.html>

From clint at fewbar.com  Mon Sep 28 21:51:50 2015
From: clint at fewbar.com (Clint Byrum)
Date: Mon, 28 Sep 2015 14:51:50 -0700
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
 <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>
Message-ID: <1443477049-sup-9690@fewbar.com>

Excerpts from Kevin Benton's message of 2015-09-28 14:29:14 -0700:
> I think a blanket statement about what people's motivations are is not
> fair. We've seen in this thread that some people want to enforce the limit
> of 72 chars and it's not about padding their stats.
> 
> The issue here is that we have a guideline with a very specific number. If
> we don't care to enforce it, why do we even bother? "Please do this, unless
> you don't feel like it", is going to be hard for many people to review in a
> way that pleases everyone.
> 

Please do read said guidelines. "Must" would be used if it were to be
"enforced". It "should" be formatted that way.

> On Mon, Sep 28, 2015 at 11:00 PM, Assaf Muller <amuller at redhat.com> wrote:
> 
> >
> >
> > On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com> wrote:
> >
> >> On 28/09/15 05:47, Gorka Eguileor wrote:
> >>
> >>> On 26/09, Morgan Fainberg wrote:
> >>>
> >>>> As a core (and former PTL) I just ignored commit message -1s unless
> >>>> there is something majorly wrong (no bug id where one is needed, etc).
> >>>>
> >>>> I appreciate well formatted commits, but can we let this one go? This
> >>>> discussion is so far into the meta-bike-shedding (bike shedding about bike
> >>>> shedding commit messages) ... If a commit message is *that* bad a -1 (or
> >>>> just fixing it?) Might be worth it. However, if a commit isn't missing key
> >>>> info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence
> >>>> moving from topic to topic, there isn't a good reason to block the review.
> >>>>
> >>>
> >> +1
> >>
> >> It is not worth having a bot -1 bad commits or even having gerrit muck
> >>>> with them. Let's do the job of the reviewer and actually review code
> >>>> instead of going crazy with commit messages.
> >>>>
> >>>
> >> +1
> >>
> >> Sent via mobile
> >>>>
> >>>>
> >>> I have to disagree, as reviewers we have to make sure that guidelines
> >>> are followed, if we have an explicit guideline that states that
> >>> the limit length is 72 chars, I will -1 any patch that doesn't follow
> >>> the guideline, just as I would do with i18n guideline violations.
> >>>
> >>
> >> Apparently you're unaware of the definition of the word 'guideline'. It's
> >> a guide. If it were a hard-and-fast rule then we would have a bot enforcing
> >> it already.
> >>
> >> Is there anything quite so frightening as a large group of people blindly
> >> enforcing rules with total indifference to any sense of overarching purpose?
> >>
> >> A reminder that the reason for this guideline is to ensure that none of
> >> the broad variety of tools that are available in the Git ecosystem
> >> effectively become unusable with the OpenStack repos due to wildly
> >> inconsistent formatting. And of course, even that goal has to be balanced
> >> against our other goals, such as building a healthy community and
> >> occasionally shipping some software.
> >>
> >> There are plenty of ways to achieve that goal other than blanket drive-by
> >> -1's for trivial inconsistencies in the formatting of individual commit
> >> messages.
> >
> >
> > The actual issue is that we as a community (Speaking of the Neutron
> > community at least) are stat-crazed. We have a fair number of contributors
> > that -1 for trivial issues to retain their precious stats with alarming
> > zeal. That is the real issue. All of these commit message issues,
> > translation mishaps,
> > comment typos etc are excuses for people to boost their stats without
> > contributing their time or energy in to the project. I am beyond bitter
> > about this
> > issue at this point.
> >
> > I'll say what I've always said about this issue: The review process is
> > about collaboration. I imagine that the author is sitting next to me, and
> > we're going
> > through the patch together for the purpose of improving it. Review
> > comments should be motivated by a thirst to improve the proposed code in a
> > real way,
> > not by your want or need to improve your stats on stackalytics. The latter
> > is an enormous waste of your time.
> >
> >
> >> A polite comment and a link to the guidelines is a great way to educate
> >> new contributors. For core reviewers especially, a comment like that and a
> >> +1 review will *almost always* get you the change you want in double-quick
> >> time. (Any contributor who knows they are 30s work away from a +2 is going
> >> to be highly motivated.)
> >>
> >> Typos are a completely different matter and they should not be grouped
> >>> together with guideline infringements.
> >>>
> >>
> >> "Violations"? "Infringements"? It's line wrapping, not a felony case.
> >>
> >> I agree that it is a waste of time and resources when you have to -1 a
> >>> patch for this, but there are multiple solutions: you can make sure your
> >>> editor does auto wrapping at the right length (I have mine configured
> >>> this way), or create a git-enforce policy with a client-side hook, or do
> >>> like Ihar is trying to do and push for a guideline change.
> >>>
> >>> I don't mind changing the guideline to any other length, but as long as
> >>> it is 72 chars I will keep enforcing it, as it is not the place of
> >>> reviewers to decide which guidelines are worthy of being enforced and
> >>> which ones are not.
> >>>
> >>
> >> Of course it is.
> >>
> >> If we're not here to use our brains, why are we here? Serious question.
> >> Feel free to use any definition of 'here'.
> >>
> >> Cheers,
> >>> Gorka.
> >>>
> >>>
> >>>
> >>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> >>>>>
> >>>>> Can I ask a different question - could we reject a few simple-to-check
> >>>>> things on the push, like bad commit messages?  For things that take 2
> >>>>> seconds to fix and do make people's lives better, it's not that they're
> >>>>> rejected, it's that the whole rejection cycle via gerrit review (push/wait
> >>>>> for tests to run/check website/swear/find change/fix/push again) is out of
> >>>>> proportion to the effort taken to fix it.
> >>>>>
> >>>>
> >> I would welcome a confirmation step - but *not* an outright rejection -
> >> that runs *locally* in git-review before the change is pushed. Right now,
> >> gerrit gives you a warning after the review is pushed, at which point it is
> >> too late.
> >>
> >> It seems here that there's benefit to 72 line messages - not that
> >>>>> everyone sees that benefit, but it is present - but it doesn't outweigh the
> >>>>> current cost.
> >>>>>
> >>>>
> >> Yes, 72 columns is the correct guideline IMHO. It's used virtually
> >> throughout the Git ecosystem now. Back in the early days of Git it wasn't
> >> at all clear - should you have no line breaks at all and let each tool do
> >> its own soft line wrapping? If not, where should you wrap? Now there's a
> >> clear consensus that you hard wrap at 72. Vi wraps git commit messages at
> >> 72 by default.
> >>
> >> The output of "git log" indents commit messages by four spaces, so
> >> anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also
> >> noticed that Launchpad (or at least the bot that posts commit messages to
> >> Launchpad when patches merge) does a hard wrap at 72 characters.
> >>
> >> A much better idea than modifying the guideline would be to put
> >> documentation on the wiki about how to set up your editor so that this is
> >> never an issue. You shouldn't even have to think about the line length
> >> for at least 99% of commits.
> >>
> >> cheers,
> >> Zane.
> >>
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> >
> 


From amuller at redhat.com  Mon Sep 28 21:54:31 2015
From: amuller at redhat.com (Assaf Muller)
Date: Mon, 28 Sep 2015 17:54:31 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
 <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>
Message-ID: <CABARBAb-v=9NzhBixiEbHL1AGVbCvy1LfRDG=yX=XBjbtsMGeQ@mail.gmail.com>

On Mon, Sep 28, 2015 at 5:29 PM, Kevin Benton <blak111 at gmail.com> wrote:

> I think a blanket statement about what people's motivations are is not
> fair. We've seen in this thread that some people want to enforce the limit
> of 72 chars and it's not about padding their stats.
>

I took this golden opportunity to hijack the thread and launch into a general
rant; it's not specific to the 72-character git commit title discussion.


>
> The issue here is that we have a guideline with a very specific number. If
> we don't care to enforce it, why do we even bother? "Please do this, unless
> you don't feel like it", is going to be hard for many people to review in a
> way that pleases everyone.
>
> On Mon, Sep 28, 2015 at 11:00 PM, Assaf Muller <amuller at redhat.com> wrote:
>
>>
>>
>> On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com> wrote:
>>
>>> On 28/09/15 05:47, Gorka Eguileor wrote:
>>>
>>>> On 26/09, Morgan Fainberg wrote:
>>>>
>>>>> As a core (and former PTL) I just ignored commit message -1s unless
>>>>> there is something majorly wrong (no bug id where one is needed, etc).
>>>>>
>>>>> I appreciate well formatted commits, but can we let this one go? This
>>>>> discussion is so far into the meta-bike-shedding (bike shedding about bike
>>>>> shedding commit messages) ... If a commit message is *that* bad a -1 (or
>>>>> just fixing it?) Might be worth it. However, if a commit isn't missing key
>>>>> info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence
>>>>> moving from topic to topic, there isn't a good reason to block the review.
>>>>>
>>>>
>>> +1
>>>
>>> It is not worth having a bot -1 bad commits or even having gerrit muck
>>>>> with them. Let's do the job of the reviewer and actually review code
>>>>> instead of going crazy with commit messages.
>>>>>
>>>>
>>> +1
>>>
>>> Sent via mobile
>>>>>
>>>>>
>>>> I have to disagree: as reviewers we have to make sure that guidelines
>>>> are followed. If we have an explicit guideline that states that
>>>> the length limit is 72 chars, I will -1 any patch that doesn't follow
>>>> the guideline, just as I would do with i18n guideline violations.
>>>>
>>>
>>> Apparently you're unaware of the definition of the word 'guideline'.
>>> It's a guide. If it were a hard-and-fast rule then we would have a bot
>>> enforcing it already.
>>>
>>> Is there anything quite so frightening as a large group of people
>>> blindly enforcing rules with total indifference to any sense of overarching
>>> purpose?
>>>
>>> A reminder that the reason for this guideline is to ensure that none of
>>> the broad variety of tools that are available in the Git ecosystem
>>> effectively become unusable with the OpenStack repos due to wildly
>>> inconsistent formatting. And of course, even that goal has to be balanced
>>> against our other goals, such as building a healthy community and
>>> occasionally shipping some software.
>>>
>>> There are plenty of ways to achieve that goal other than blanket
>>> drive-by -1's for trivial inconsistencies in the formatting of individual
>>> commit messages.
>>
>>
>> The actual issue is that we as a community (Speaking of the Neutron
>> community at least) are stat-crazed. We have a fair number of contributors
>> that -1 for trivial issues to retain their precious stats with alarming
>> zeal. That is the real issue. All of these commit message issues,
>> translation mishaps,
>> comment typos etc are excuses for people to boost their stats without
>> contributing their time or energy into the project. I am beyond bitter
>> about this
>> issue at this point.
>>
>> I'll say what I've always said about this issue: The review process is
>> about collaboration. I imagine that the author is sitting next to me, and
>> we're going
>> through the patch together for the purpose of improving it. Review
>> comments should be motivated by a thirst to improve the proposed code in a
>> real way,
>> not by your want or need to improve your stats on stackalytics. The
>> latter is an enormous waste of your time.
>>
>>
>>> A polite comment and a link to the guidelines is a great way to educate
>>> new contributors. For core reviewers especially, a comment like that and a
>>> +1 review will *almost always* get you the change you want in double-quick
>>> time. (Any contributor who knows they are 30s work away from a +2 is going
>>> to be highly motivated.)
>>>
>>> Typos are a completely different matter and they should not be grouped
>>>> together with guideline infringements.
>>>>
>>>
>>> "Violations"? "Infringements"? It's line wrapping, not a felony case.
>>>
>>> I agree that it is a waste of time and resources when you have to -1 a
>>>> patch for this, but there are multiple solutions: you can make sure your
>>>> editor does auto wrapping at the right length (I have mine configured
>>>> this way), or create a git-enforce policy with a client-side hook, or do
>>>> like Ihar is trying to do and push for a guideline change.
>>>>
>>>> I don't mind changing the guideline to any other length, but as long as
>>>> it is 72 chars I will keep enforcing it, as it is not the place of
>>>> reviewers to decide which guidelines are worthy of being enforced and
>>>> which ones are not.
>>>>
>>>
>>> Of course it is.
>>>
>>> If we're not here to use our brains, why are we here? Serious question.
>>> Feel free to use any definition of 'here'.
>>>
>>> Cheers,
>>>> Gorka.
>>>>
>>>>
>>>>
>>>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
>>>>>>
>>>>>> Can I ask a different question - could we reject a few
>>>>>> simple-to-check things on the push, like bad commit messages?  For things
>>>>>> that take 2 seconds to fix and do make people's lives better, it's not that
>>>>>> they're rejected, it's that the whole rejection cycle via gerrit review
>>>>>> (push/wait for tests to run/check website/swear/find change/fix/push again)
>>>>>> is out of proportion to the effort taken to fix it.
>>>>>>
>>>>>
>>> I would welcome a confirmation step - but *not* an outright rejection -
>>> that runs *locally* in git-review before the change is pushed. Right now,
>>> gerrit gives you a warning after the review is pushed, at which point it is
>>> too late.
>>>
>>> It seems here that there's benefit to 72 line messages - not that
>>>>>> everyone sees that benefit, but it is present - but it doesn't outweigh the
>>>>>> current cost.
>>>>>>
>>>>>
>>> Yes, 72 columns is the correct guideline IMHO. It's used virtually
>>> throughout the Git ecosystem now. Back in the early days of Git it wasn't
>>> at all clear - should you have no line breaks at all and let each tool do
>>> its own soft line wrapping? If not, where should you wrap? Now there's a
>>> clear consensus that you hard wrap at 72. Vi wraps git commit messages at
>>> 72 by default.
>>>
>>> The output of "git log" indents commit messages by four spaces, so
>>> anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also
>>> noticed that Launchpad (or at least the bot that posts commit messages to
>>> Launchpad when patches merge) does a hard wrap at 72 characters.
>>>
>>> A much better idea than modifying the guideline would be to put
>>> documentation on the wiki about how to set up your editor so that this is
>>> never an issue. You shouldn't even have to even think about the line length
>>> for at least 99% of commits.
>>>
>>> cheers,
>>> Zane.
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/e2776b27/attachment.html>

From dougwig at parksidesoftware.com  Mon Sep 28 22:00:53 2015
From: dougwig at parksidesoftware.com (Doug Wiegley)
Date: Mon, 28 Sep 2015 16:00:53 -0600
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
Message-ID: <F7D5FFC1-0695-4580-BEF3-435D941278A1@parksidesoftware.com>


> On Sep 28, 2015, at 3:00 PM, Assaf Muller <amuller at redhat.com> wrote:
> 
> 
> 
> On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com <mailto:zbitter at redhat.com>> wrote:
> On 28/09/15 05:47, Gorka Eguileor wrote:
> On 26/09, Morgan Fainberg wrote:
> As a core (and former PTL) I just ignored commit message -1s unless there is something majorly wrong (no bug id where one is needed, etc).
> 
> I appreciate well formatted commits, but can we let this one go? This discussion is so far into the meta-bike-shedding (bike shedding about bike shedding commit messages) ... If a commit message is *that* bad a -1 (or just fixing it?) Might be worth it. However, if a commit isn't missing key info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence moving from topic to topic, there isn't a good reason to block the review.
> 
> +1
> 
> It is not worth having a bot -1 bad commits or even having gerrit muck with them. Let's do the job of the reviewer and actually review code instead of going crazy with commit messages.
> 
> +1
> 
> Sent via mobile
> 
> 
> I have to disagree: as reviewers we have to make sure that guidelines
> are followed. If we have an explicit guideline that states that
> the length limit is 72 chars, I will -1 any patch that doesn't follow
> the guideline, just as I would do with i18n guideline violations.
> 
> Apparently you're unaware of the definition of the word 'guideline'. It's a guide. If it were a hard-and-fast rule then we would have a bot enforcing it already.
> 
> Is there anything quite so frightening as a large group of people blindly enforcing rules with total indifference to any sense of overarching purpose?
> 
> A reminder that the reason for this guideline is to ensure that none of the broad variety of tools that are available in the Git ecosystem effectively become unusable with the OpenStack repos due to wildly inconsistent formatting. And of course, even that goal has to be balanced against our other goals, such as building a healthy community and occasionally shipping some software.
> 
> There are plenty of ways to achieve that goal other than blanket drive-by -1's for trivial inconsistencies in the formatting of individual commit messages.
> 
> The actual issue is that we as a community (Speaking of the Neutron community at least) are stat-crazed. We have a fair number of contributors
> that -1 for trivial issues to retain their precious stats with alarming zeal. That is the real issue. All of these commit message issues, translation mishaps,
> comment typos etc are excuses for people to boost their stats without contributing their time or energy into the project. I am beyond bitter about this
> issue at this point.
> 
> I'll say what I've always said about this issue: The review process is about collaboration. I imagine that the author is sitting next to me, and we're going
> through the patch together for the purpose of improving it. Review comments should be motivated by a thirst to improve the proposed code in a real way,
> not by your want or need to improve your stats on stackalytics. The latter is an enormous waste of your time.

This is kind of a thread-jack, but to respond to your concern, I think the infra team has a nice writeup on how to review that addresses your concern: http://docs.openstack.org/infra/system-config/project.html#review-criteria <http://docs.openstack.org/infra/system-config/project.html#review-criteria>

I've certainly seen plenty of neutron reviewers whom I'd prefer to see following the above link (myself included, on occasion).

Thanks,
doug
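On the enforcement side, a client-side policy like the one Gorka mentions can be sketched as a git commit-msg hook. This is a hypothetical example, not an official OpenStack tool; adjust the limit and the warn-vs-reject behavior to taste:

```shell
#!/bin/sh
# Hypothetical .git/hooks/commit-msg hook (a sketch, not an official
# OpenStack hook): warn when any commit-message line exceeds 72
# characters. Git passes the path of the message file as $1; the
# fallback to /dev/null keeps the script safe to run standalone.
msg_file="${1:-/dev/null}"
long=$(awk 'length > 72 { n++ } END { print n + 0 }' "$msg_file")
if [ "$long" -gt 0 ]; then
    echo "warning: $long commit message line(s) exceed 72 characters" >&2
fi
```

Save it as `.git/hooks/commit-msg` and `chmod +x` it; changing the `echo` to also `exit 1` turns the warning into an outright rejection, which is exactly the confirmation-vs-rejection trade-off discussed in this thread.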



>  
> A polite comment and a link to the guidelines is a great way to educate new contributors. For core reviewers especially, a comment like that and a +1 review will *almost always* get you the change you want in double-quick time. (Any contributor who knows they are 30s work away from a +2 is going to be highly motivated.)
> 
> Typos are a completely different matter and they should not be grouped
> together with guideline infringements.
> 
> "Violations"? "Infringements"? It's line wrapping, not a felony case.
> 
> I agree that it is a waste of time and resources when you have to -1 a
> patch for this, but there are multiple solutions: you can make sure your
> editor does auto wrapping at the right length (I have mine configured
> this way), or create a git-enforce policy with a client-side hook, or do
> like Ihar is trying to do and push for a guideline change.
> 
> I don't mind changing the guideline to any other length, but as long as
> it is 72 chars I will keep enforcing it, as it is not the place of
> reviewers to decide which guidelines are worthy of being enforced and
> which ones are not.
> 
> Of course it is.
> 
> If we're not here to use our brains, why are we here? Serious question. Feel free to use any definition of 'here'.
> 
> Cheers,
> Gorka.
> 
> 
> 
> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk <mailto:ijw.ubuntu at cack.org.uk>> wrote:
> 
> Can I ask a different question - could we reject a few simple-to-check things on the push, like bad commit messages?  For things that take 2 seconds to fix and do make people's lives better, it's not that they're rejected, it's that the whole rejection cycle via gerrit review (push/wait for tests to run/check website/swear/find change/fix/push again) is out of proportion to the effort taken to fix it.
> 
> I would welcome a confirmation step - but *not* an outright rejection - that runs *locally* in git-review before the change is pushed. Right now, gerrit gives you a warning after the review is pushed, at which point it is too late.
> 
> It seems here that there's benefit to 72 line messages - not that everyone sees that benefit, but it is present - but it doesn't outweigh the current cost.
> 
> Yes, 72 columns is the correct guideline IMHO. It's used virtually throughout the Git ecosystem now. Back in the early days of Git it wasn't at all clear - should you have no line breaks at all and let each tool do its own soft line wrapping? If not, where should you wrap? Now there's a clear consensus that you hard wrap at 72. Vi wraps git commit messages at 72 by default.
> 
> The output of "git log" indents commit messages by four spaces, so anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also noticed that Launchpad (or at least the bot that posts commit messages to Launchpad when patches merge) does a hard wrap at 72 characters.
> 
> A much better idea than modifying the guideline would be to put documentation on the wiki about how to set up your editor so that this is never an issue. You shouldn't even have to even think about the line length for at least 99% of commits.
> 
> cheers,
> Zane.
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/96d81dfe/attachment.html>
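Zane's "set up your editor" suggestion, quoted in the messages above, reduces to a one-liner for vim users (a sketch; emacs and other editors have equivalent settings, and recent vim ftplugins may already set this by default):

```shell
# Print the vimrc line that makes vim hard-wrap git commit messages at
# 72 columns; append it to your ~/.vimrc to make the setting explicit.
cat <<'EOF'
autocmd FileType gitcommit setlocal textwidth=72
EOF
```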

From e0ne at e0ne.info  Mon Sep 28 22:25:04 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Tue, 29 Sep 2015 01:25:04 +0300
Subject: [openstack-dev]  [cinder] [all] The future of Cinder API v1
Message-ID: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>

Hi all,

As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was
introduced in Grizzly and v1 API is deprecated since Juno.

After [1] is merged, Cinder API v1 is disabled in gates by default. We've
got a bug filed [2] to remove the Cinder v1 API entirely.


According to the Deprecation Policy [3], it looks like we are OK to remove
it. But I would like to ask Cinder API users whether any of them still use
API v1. Should we remove it entirely in the Mitaka release, or just disable
it by default in cinder.conf?

AFAIR, only Rally doesn't support API v2 now and I'm going to implement it
asap.

[1] https://review.openstack.org/194726
[2] https://bugs.launchpad.net/cinder/+bug/1467589
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html

Regards,
Ivan Kolodyazhny
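For the "disable by default" option, the toggle might look like the fragment below. This is a sketch; the `enable_v1_api`/`enable_v2_api` option names are an assumption based on the Cinder configuration options of this era, so verify them against your release:

```ini
# Hypothetical cinder.conf fragment: keep the v1 code in the tree but
# turn it off by default, leaving only v2 active.
[DEFAULT]
enable_v1_api = false
enable_v2_api = true
```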
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/3adab28d/attachment.html>

From matt at mattfischer.com  Mon Sep 28 22:34:33 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Mon, 28 Sep 2015 16:34:33 -0600
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
Message-ID: <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>

Yes, people are probably still using it. Last time I tried to use V2 it
didn't work because the clients were broken, and then it went back on the
bottom of my to do list. Is this mess fixed?

http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html

On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info> wrote:

> Hi all,
>
> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was
> introduced in Grizzly and v1 API is deprecated since Juno.
>
> After [1] is merged, Cinder API v1 is disabled in gates by default. We've
> got a bug filed [2] to remove the Cinder v1 API entirely.
>
>
> According to the Deprecation Policy [3], it looks like we are OK to remove
> it. But I would like to ask Cinder API users whether any of them still use
> API v1. Should we remove it entirely in the Mitaka release, or just
> disable it by default in cinder.conf?
>
> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it
> asap.
>
> [1] https://review.openstack.org/194726
> [2] https://bugs.launchpad.net/cinder/+bug/1467589
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>
> Regards,
> Ivan Kolodyazhny
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/53759de8/attachment.html>

From fungi at yuggoth.org  Mon Sep 28 22:53:36 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 28 Sep 2015 22:53:36 +0000
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <CAL3VkVz1UrPvTAq_rKNx8W4EDVxOQbHZ73y1XjGoimYs07iJ8A@mail.gmail.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
 <CAL3VkVz1UrPvTAq_rKNx8W4EDVxOQbHZ73y1XjGoimYs07iJ8A@mail.gmail.com>
Message-ID: <20150928225336.GW4731@yuggoth.org>

On 2015-09-28 16:27:19 -0500 (-0500), Kyle Mestery wrote:
[...]
> I should note that as the previous PTL, for the most part I viewed
> stats as garbage. Keep in mind I nominated two new core reviewers
> whose stats were low but who are incredibly important members of
> our community [1]. I did this because they are the type of people
> to be core reviewers, and we had a long conversation on this. So,
> I agree with you, this stats thing is awful. And Stackalytics
> hasn't helped it, but made it much worse.
[...]

"Any observed statistical regularity will tend to
collapse once pressure is placed upon it for control
purposes." - Charles Goodhart, 1975

https://en.wikipedia.org/wiki/Goodhart%27s_law

-- 
Jeremy Stanley


From ramy.asselin at hpe.com  Mon Sep 28 23:04:36 2015
From: ramy.asselin at hpe.com (Asselin, Ramy)
Date: Mon, 28 Sep 2015 23:04:36 +0000
Subject: [openstack-dev] [third-party] Reminder of 3rd party ci working
 group meeting Tuesday at 1700 UTC in #openstack-meeting
Message-ID: <4BFD2A2A3BAE4A46AA43C6A2DB44D16965B0E349@G4W3223.americas.hpqcorp.net>

Hi,

This is a reminder for 3rd party ci operators that there is a working group meeting Tuesday at 1700 UTC in #openstack-meeting

The agenda is available here. Please feel free to add additional topics: https://wiki.openstack.org/wiki/Meetings/ThirdParty#Agenda_for_next_Working_Group_meeting

Thanks,
Ramy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/f4ddfe30/attachment.html>

From sorrison at gmail.com  Mon Sep 28 23:17:51 2015
From: sorrison at gmail.com (Sam Morrison)
Date: Tue, 29 Sep 2015 09:17:51 +1000
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
	of Cinder API v1
In-Reply-To: <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
Message-ID: <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>

Yeah we're still using v1 as the clients that are packaged with most distros don't support v2 easily.

E.g. with Ubuntu Trusty they have version 1.1.1; I just updated our "volume" endpoint to point to v2 (we have a volumev2 endpoint too) and the client breaks.

$ cinder list
ERROR: OpenStack Block Storage API version is set to 1 but you are accessing a 2 endpoint. Change its value through --os-volume-api-version or env[OS_VOLUME_API_VERSION].

Sam
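The error message above documents its own workaround: the client's API version can be overridden via the environment variable or flag it names. Whether this actually helps depends on the installed python-cinderclient implementing v2 (the Trusty 1.1.1 client described above may not):

```shell
# Override python-cinderclient's API version, using the variable and
# flag names taken verbatim from the error text above.
export OS_VOLUME_API_VERSION=2
# then re-run:
#   cinder list
# or equivalently, per-invocation:
#   cinder --os-volume-api-version 2 list
```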


> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com> wrote:
> 
> Yes, people are probably still using it. Last time I tried to use V2 it didn't work because the clients were broken, and then it went back on the bottom of my to do list. Is this mess fixed?
> 
> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html <http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html>
> 
> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info <mailto:e0ne at e0ne.info>> wrote:
> Hi all,
> 
> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was introduced in Grizzly and v1 API is deprecated since Juno.
> 
> After [1] is merged, Cinder API v1 is disabled in gates by default. We've got a bug filed [2] to remove the Cinder v1 API entirely.
> 
> 
> According to the Deprecation Policy [3], it looks like we are OK to remove it. But I would like to ask Cinder API users whether any of them still use API v1.
> Should we remove it entirely in the Mitaka release, or just disable it by default in cinder.conf?
> 
> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it asap.
> 
> [1] https://review.openstack.org/194726 <https://review.openstack.org/194726> 
> [2] https://bugs.launchpad.net/cinder/+bug/1467589 <https://bugs.launchpad.net/cinder/+bug/1467589>
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html <http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html>
> Regards,
> Ivan Kolodyazhny
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org <mailto:OpenStack-operators at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators>
> 
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org <mailto:OpenStack-operators at lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/dbb6897c/attachment.html>

From mvoelker at vmware.com  Tue Sep 29 00:19:46 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Tue, 29 Sep 2015 00:19:46 +0000
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The
	future	of Cinder API v1
In-Reply-To: <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
Message-ID: <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>

FWIW, the most popular client libraries in the last user survey[1] other than OpenStack's own clients were: libcloud (48 respondents), jClouds (36 respondents), Fog (34 respondents), php-opencloud (21 respondents), DeltaCloud (which has been retired by Apache and hasn't seen a commit in two years, but 17 respondents are still using it), pkgcloud (15 respondents), and OpenStack.NET (14 respondents).  Of those:

* libcloud appears to support the nova-volume API but not the cinder API: https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251

* jClouds appears to support only the v1 API: https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds

* Fog also appears to only support the v1 API: https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99

* php-opencloud appears to only support the v1 API: https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html

* DeltaCloud I honestly haven't looked at since it's thoroughly dead, but I can't imagine it supports v2.

* pkgcloud has beta-level support for Cinder but I think it's v1 (may be mistaken): https://github.com/pkgcloud/pkgcloud/#block-storage----beta and https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage

* OpenStack.NET does appear to support v2: http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm

Now, it's anyone's guess whether users of those client libraries actually try to use them for volume operations (anecdotally I know a few clouds I help support are using client libraries that only support v1), and some users might well be using more than one library or mixing in code they wrote themselves.  But most of the above that support cinder do seem to rely on v1.  Some management tools also appear to still rely on the v1 API (such as RightScale: http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html ).  From that perspective it might be useful to keep it around a while longer and disable it by default.  Personally I'd probably lean that way, especially given that folks here on the ops list are still reporting problems too.

That said, v1 has been deprecated since Juno, and the Juno release notes said it was going to be removed [2], so there's a case to be made that there's been plenty of fair warning too, I suppose.

[1] http://superuser.openstack.org/articles/openstack-application-developers-share-insights
[2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7

At Your Service,

Mark T. Voelker



> On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com> wrote:
> 
> Yeah we're still using v1 as the clients that are packaged with most distros don't support v2 easily.
> 
> E.g. with Ubuntu Trusty they have version 1.1.1; I just updated our "volume" endpoint to point to v2 (we have a volumev2 endpoint too) and the client breaks.
> 
> $ cinder list
> ERROR: OpenStack Block Storage API version is set to 1 but you are accessing a 2 endpoint. Change its value through --os-volume-api-version or env[OS_VOLUME_API_VERSION].
> 
> Sam
> 
> 
>> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com> wrote:
>> 
>> Yes, people are probably still using it. Last time I tried to use V2 it didn't work because the clients were broken, and then it went back on the bottom of my to do list. Is this mess fixed?
>> 
>> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>> 
>> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info> wrote:
>> Hi all,
>> 
>> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was introduced in Grizzly and v1 API is deprecated since Juno.
>> 
>> After [1] is merged, Cinder API v1 is disabled in gates by default. We've got a bug filed [2] to remove the Cinder v1 API entirely.
>> 
>> 
>> According to the Deprecation Policy [3], it looks like we are OK to remove it. But I would like to ask Cinder API users whether any of them still use API v1.
>> Should we remove it entirely in the Mitaka release, or just disable it by default in cinder.conf?
>> 
>> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it asap.
>> 
>> [1] https://review.openstack.org/194726 
>> [2] https://bugs.launchpad.net/cinder/+bug/1467589
>> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>> 
>> Regards,
>> Ivan Kolodyazhny
>> 
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From pieter.c.kruithof-jr at hpe.com  Tue Sep 29 01:01:09 2015
From: pieter.c.kruithof-jr at hpe.com (Kruithof, Piet)
Date: Tue, 29 Sep 2015 01:01:09 +0000
Subject: [openstack-dev] [Openstack-operators] [neutron] [nova] Nova
 Network/Neutron Migration Survey - need response from folks currently using
 Nova Networks in their deployments
Message-ID: <D22F3EF1.18564%pieter.c.kruithof-jr@hp.com>

There has been a significant response to the Nova Network/Neutron migration survey.  However, the responses are leaning heavily on the side of deployments currently using Neutron.  As a result, we would like to have more representation from folks currently using Nova Networks.

If you are currently using Nova Networks, please respond to the survey!

You will also be entered in a raffle for one of two $100 US Amazon gift cards at the end of the survey.   As always, the results from the survey will be shared with the OpenStack community.

Please click on the following link to begin the survey.

https://www.surveymonkey.com/r/osnetworking


Piet Kruithof
PTL, OpenStack UX project

"For every complex problem, there is a solution that is simple, neat and wrong."

H. L. Mencken


From iwienand at redhat.com  Tue Sep 29 01:09:38 2015
From: iwienand at redhat.com (Ian Wienand)
Date: Tue, 29 Sep 2015 11:09:38 +1000
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
In-Reply-To: <CAEVmkayNpEtY8mKapyPRDg6wfApR6SfBORPx71yMDhefbv92eQ@mail.gmail.com>
References: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
 <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
 <E1FB4937BE24734DAD0D1D4E4E506D7890D1794D@MAIL703.KDS.KEANE.COM>
 <CAEVmkayNpEtY8mKapyPRDg6wfApR6SfBORPx71yMDhefbv92eQ@mail.gmail.com>
Message-ID: <5609E4D2.6040507@redhat.com>

On 09/24/2015 08:18 PM, Andrey Kurilin wrote:
> I agree that the wrong order of arguments misleads while debugging errors, BUT
> how can we prevent regression?

Spell it out and use keyword args?

   assertEqual(expected="foo", observed=...)

is pretty hard to mess up

-i


From ken1ohmichi at gmail.com  Tue Sep 29 01:23:03 2015
From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi)
Date: Tue, 29 Sep 2015 10:23:03 +0900
Subject: [openstack-dev] [murano] Fix order of arguments in assertEqual
In-Reply-To: <5609E4D2.6040507@redhat.com>
References: <CAN4ORgYdV8KL+m2KySkqNNHFURz9k70YABSE8fOT6Z8oubLkNA@mail.gmail.com>
 <0A2625CC-25BB-4CCE-ABD4-927076A50D30@mirantis.com>
 <E1FB4937BE24734DAD0D1D4E4E506D7890D1794D@MAIL703.KDS.KEANE.COM>
 <CAEVmkayNpEtY8mKapyPRDg6wfApR6SfBORPx71yMDhefbv92eQ@mail.gmail.com>
 <5609E4D2.6040507@redhat.com>
Message-ID: <CAA393vju0GaKB=t=5Bj0enMJ-N+Khjqvho14qmRjXbgKS6953w@mail.gmail.com>

Hi

2015-09-29 10:09 GMT+09:00 Ian Wienand <iwienand at redhat.com>:
> On 09/24/2015 08:18 PM, Andrey Kurilin wrote:
>>
>> I agree that the wrong order of arguments misleads while debugging errors, BUT
>> how can we prevent regression?
>
>
> Spell it out and use keyword args?
>
>   assertEqual(expected="foo", observed=...)
>
> is pretty hard to mess up

There are a lot of patches of this kind on Gerrit for Nova as well.
How about having a pep8 rule to block this issue?
I don't think it is smart for reviewers to -1 a patch during review just
because it goes against the above order.

https://review.openstack.org/#/c/227650/ is trying to add the pep8 rule to Nova.
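For illustration, a minimal hacking-style check along these lines might look like the sketch below. The H999 code, function name, and regex are hypothetical; a real check would be registered as a flake8 extension in the project's hacking module, and a production version would need to handle multi-argument and message-string cases more carefully.

```python
import re

# Flags assertEqual(<observed>, <literal>) where the literal comes second,
# i.e. the reversed argument order this thread is discussing.
_ASSERT_EQUAL_WITH_TRAILING_LITERAL = re.compile(
    r"assertEqual\(.*?,\s*(\"[^\"]*\"|'[^']*'|\d+|None|True|False)\s*\)")


def check_assert_equal_argument_order(logical_line):
    """H999 - assertEqual(expected, observed): expected value comes first.

    Hypothetical hacking check; real checks are wired up as flake8
    extensions and receive each logical line of the file under review.
    """
    if _ASSERT_EQUAL_WITH_TRAILING_LITERAL.search(logical_line):
        yield (0, "H999: use assertEqual(expected, observed) - pass the "
                  "expected (literal) value as the first argument")
```

Run against `self.assertEqual(result, "foo")` it yields a warning, while `self.assertEqual("foo", result)` passes clean.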

Thanks
Ken Ohmichi


From dborodaenko at mirantis.com  Tue Sep 29 01:58:03 2015
From: dborodaenko at mirantis.com (Dmitry Borodaenko)
Date: Mon, 28 Sep 2015 18:58:03 -0700
Subject: [openstack-dev] [fuel] Nominate Svetlana Karslioglu for fuel-docs
	core
Message-ID: <20150929015803.GA6281@localhost>

I'd like to nominate Svetlana Karslioglu as a core reviewer for the
fuel-docs-core team. During the last few months, Svetlana restructured
the Fuel QuickStart Guide, fixed a few documentation bugs for Fuel 7.0,
and improved the quality of the Fuel documentation through reviews.

I believe it's time to grant her core reviewer rights in the fuel-docs
repository.

Svetlana's contribution to fuel-docs:
http://stackalytics.com/?user_id=skarslioglu&release=all&project_type=all&module=fuel-docs

Core reviewer approval process definition:
https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

-- 
Dmitry Borodaenko


From travis.tripp at hpe.com  Tue Sep 29 02:22:59 2015
From: travis.tripp at hpe.com (Tripp, Travis S)
Date: Tue, 29 Sep 2015 02:22:59 +0000
Subject: [openstack-dev] [Horizon] Horizon Productivity Suggestion
In-Reply-To: <D22ECDB2.F132%rcresswe@cisco.com>
References: <D22ECDB2.F132%rcresswe@cisco.com>
Message-ID: <F68A1980-DFE6-4CEE-80AB-5D4CA8B0A69C@hpe.com>

Things always move more quickly at the end of a cycle because people feel release pressure, but I do think this is a good idea. 2-3 minutes isn't very realistic, though; it would need to be planned for longer.





On 9/28/15, 3:57 AM, "Rob Cresswell (rcresswe)" <rcresswe at cisco.com> wrote:

>Hi folks,
>
>I'm wondering if we could try marking out a small 2-3 minute slot at the
>start of each weekly meeting to highlight Critical/ High bugs that have
>code up for review, as well as important blueprints that have code up for
>review. These would be blueprints for features that were identified as
>high priority at the summit.
>
>The thought here is that we were very efficient in L-RC1 at moving code
>along, which is nice for productivity, but not really great for stability;
>it would be good to do this kind of targeted work earlier in the cycle.
>I've noticed other projects doing this in their meetings, and it seems
>quite effective.
>
>Rob
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From armamig at gmail.com  Tue Sep 29 03:06:14 2015
From: armamig at gmail.com (Armando M.)
Date: Mon, 28 Sep 2015 20:06:14 -0700
Subject: [openstack-dev] [Openstack-operators] [neutron] [nova] Nova
 Network/Neutron Migration Survey - need response from folks currently using
 Nova Networks in their deployments
In-Reply-To: <D22F3EF1.18564%pieter.c.kruithof-jr@hp.com>
References: <D22F3EF1.18564%pieter.c.kruithof-jr@hp.com>
Message-ID: <CAK+RQeZqcGWCsGeEVXhCtjK0oe0OAad0iRsHUBpuO-EJWG0WNQ@mail.gmail.com>

On 28 September 2015 at 18:01, Kruithof, Piet <pieter.c.kruithof-jr at hpe.com>
wrote:

> There has been a significant response to the Nova Network/Neutron
> migration survey.  However, the responses are leaning heavily on the side
> of deployments currently using Neutron.  As a result, we would like to have
> more representation from folks currently using Nova Networks.
>

Well, that in itself is telling...


>
> If you are currently using Nova Networks, please respond to the survey!
>
> You will also be entered in a raffle for one of two $100 US Amazon gift
> cards at the end of the survey.   As always, the results from the survey
> will be shared with the OpenStack community.
>
> Please click on the following link to begin the survey.
>
> https://www.surveymonkey.com/r/osnetworking
>
>
> Piet Kruithof
> PTL, OpenStack UX project
>
> "For every complex problem, there is a solution that is simple, neat and
> wrong."
>
> H. L. Mencken
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/7afcb3d1/attachment.html>

From os.lcheng at gmail.com  Tue Sep 29 03:11:58 2015
From: os.lcheng at gmail.com (Lin Hua Cheng)
Date: Mon, 28 Sep 2015 20:11:58 -0700
Subject: [openstack-dev] [Horizon] Horizon Productivity Suggestion
In-Reply-To: <F68A1980-DFE6-4CEE-80AB-5D4CA8B0A69C@hpe.com>
References: <D22ECDB2.F132%rcresswe@cisco.com>
 <F68A1980-DFE6-4CEE-80AB-5D4CA8B0A69C@hpe.com>
Message-ID: <CABtBEBVrvfPSh=gtvj1hF3+Gj6qhF=3E3_+tmeOux8M2__gnew@mail.gmail.com>

I agree with Travis that 2-3 minutes is not enough; that may not even be
enough to talk about one bug. :)

We could save some time if we had someone monitoring the bugs/features and
publishing the high-priority items in a report - something similar to what
Keystone does [1].  Reviewers could then look this up whenever they need to
prioritize their reviews.

We could rotate this responsibility among cores every month - or even a
non-core if someone wants to volunteer.

-Lin

[1]
https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting#Keystone_Weekly_Bug_Reports




On Mon, Sep 28, 2015 at 7:22 PM, Tripp, Travis S <travis.tripp at hpe.com>
wrote:

> Things always move more quickly at the end of a cycle because people feel
> release pressure, but I do think this is a good idea. 2-3 minutes isn't
> very realistic. It would need to be planned for longer.
>
>
>
>
>
> On 9/28/15, 3:57 AM, "Rob Cresswell (rcresswe)" <rcresswe at cisco.com>
> wrote:
>
> >Hi folks,
> >
> >I'm wondering if we could try marking out a small 2-3 minute slot at the
> >start of each weekly meeting to highlight Critical/ High bugs that have
> >code up for review, as well as important blueprints that have code up for
> >review. These would be blueprints for features that were identified as
> >high priority at the summit.
> >
> >The thought here is that we were very efficient in L-RC1 at moving code
> >along, which is nice for productivity, but not really great for stability;
> >it would be good to do this kind of targeted work earlier in the cycle.
> >I've noticed other projects doing this in their meetings, and it seems
> >quite effective.
> >
> >Rob
> >
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/4cce3040/attachment.html>

From wanghua.humble at gmail.com  Tue Sep 29 03:31:52 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Tue, 29 Sep 2015 11:31:52 +0800
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
Message-ID: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but
exposes only containers in the swarm coe. As far as I know, swarm is only a
scheduler of containers, which is like nova in openstack. Docker compose is
an orchestration program, which is like heat in openstack. k8s is the
combination of a scheduler and orchestration. So I think it is better to
expose the compose apis to users, since they are at the same level as k8s.


Regards
Wanghua
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/58898145/attachment.html>

From john.griffith8 at gmail.com  Tue Sep 29 03:43:35 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Mon, 28 Sep 2015 21:43:35 -0600
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
Message-ID: <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>

On Mon, Sep 28, 2015 at 6:19 PM, Mark Voelker <mvoelker at vmware.com> wrote:

> FWIW, the most popular client libraries in the last user survey[1] other
> than OpenStack's own clients were: libcloud (48 respondents), jClouds (36
> respondents), Fog (34 respondents), php-opencloud (21 respondents),
> DeltaCloud (which has been retired by Apache and hasn't seen a commit in
> two years, but 17 respondents are still using it), pkgcloud (15
> respondents), and OpenStack.NET (14 respondents).  Of those:
>
> * libcloud appears to support the nova-volume API but not the cinder API:
> https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251
>
> * jClouds appears to support only the v1 API:
> https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds
>
> * Fog also appears to only support the v1 API:
> https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99
>
> * php-opencloud appears to only support the v1 API:
> https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html
>
> * DeltaCloud I honestly haven't looked at since it's thoroughly dead, but
> I can't imagine it supports v2.
> 
> * pkgcloud has beta-level support for Cinder but I think it's v1 (may be
> mistaken): https://github.com/pkgcloud/pkgcloud/#block-storage----beta
> and
> https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage
>
> * OpenStack.NET does appear to support v2:
> http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm
>
> Now, it's anyone's guess as to whether or not users of those client
> libraries actually try to use them for volume operations or not
> (anecdotally I know a few clouds I help support are using client libraries
> that only support v1), and some users might well be using more than one
> library or mixing in code they wrote themselves.  But most of the above
> that support cinder do seem to rely on v1.  Some management tools also
> appear to still rely on the v1 API (such as RightScale:
> http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html
> ).  From that perspective it might be useful to keep it around a while
> longer and disable it by default.  Personally I'd probably lean that way,
> especially given that folks here on the ops list are still reporting
> problems too.
>
> That said, v1 has been deprecated since Juno, and the Juno release notes
> said it was going to be removed [2], so there?s a case to be made that
> there?s been plenty of fair warning too I suppose.
>
> [1]
> http://superuser.openstack.org/articles/openstack-application-developers-share-insights
> [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
>
> At Your Service,
>
> Mark T. Voelker
>
>
>
> > On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com> wrote:
> >
> > Yeah we're still using v1 as the clients that are packaged with most
> distros don't support v2 easily.
> >
> > Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our
> "volume" endpoint to point to v2 (we have a volumev2 endpoint too) and the
> client breaks.
> >
> > $ cinder list
> > ERROR: OpenStack Block Storage API version is set to 1 but you are
> accessing a 2 endpoint. Change its value through --os-volume-api-version or
> env[OS_VOLUME_API_VERSION].
> >
> > Sam
> >
> >
> >> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com> wrote:
> >>
> >> Yes, people are probably still using it. Last time I tried to use V2 it
> didn't work because the clients were broken, and then it went back on the
> bottom of my to do list. Is this mess fixed?
> >>
> >>
> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
> >>
> >> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info>
> wrote:
> >> Hi all,
> >>
> >> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API
> was introduced in Grizzly and v1 API is deprecated since Juno.
> >>
> >> After [1] is merged, Cinder API v1 is disabled in gates by default.
> We've got a filed bug [2] to remove Cinder v1 API at all.
> >>
> >>
> >> According to the Deprecation Policy [3] it looks like we are OK to
> remove it. But I would like to ask Cinder API users whether any still use API v1.
> >> Should we remove it entirely in the Mitaka release or just disable it by
> default in cinder.conf?
> >>
> >> AFAIR, only Rally doesn't support API v2 now and I'm going to implement
> it asap.
> >>
> >> [1] https://review.openstack.org/194726
> >> [2] https://bugs.launchpad.net/cinder/+bug/1467589
> >> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
> >>
> >> Regards,
> >> Ivan Kolodyazhny
> >>
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> OpenStack-operators at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >>
> >> _______________________________________________
> >> OpenStack-operators mailing list
> >> OpenStack-operators at lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> >
> __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

My opinion is that even though V1 has technically been deprecated for
multiple cycles, V2 was never really viable until the Liberty release.
Between issues with V2 and other components, and then the version-discovery
issues that broke some things, I think we should reset the deprecation
clock, so to speak.

It was only in the last milestone of Liberty that folks finally got
everything updated and talking V2.  Not to mention that the patch to switch
the default in devstack just landed (where everything uses it, including Nova).

To summarize: absolutely NO to removing V1 in Mitaka, and I think resetting
the deprecation clock is the most reasonable course of action here.

Thanks,
John
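The version/endpoint mismatch Sam hit earlier in the thread comes down to which service-catalog endpoint type the client resolves for a given OS_VOLUME_API_VERSION. A rough, hypothetical sketch of that mapping (the real client logic is more involved) might look like:

```python
import os

# v1 clients look up the 'volume' endpoint type; v2 clients use 'volumev2'.
# Pointing the 'volume' endpoint at a v2 URL while the client defaults to
# version 1 is what produces the "API version is set to 1 but you are
# accessing a 2 endpoint" error Sam quoted.
SERVICE_TYPES = {'1': 'volume', '2': 'volumev2'}


def volume_service_type(default='1'):
    """Return the catalog endpoint type matching the configured API version."""
    version = os.environ.get('OS_VOLUME_API_VERSION', default)
    try:
        return SERVICE_TYPES[version]
    except KeyError:
        raise SystemExit('Unsupported Block Storage API version: %s' % version)
```

Operators working around the Trusty client can export OS_VOLUME_API_VERSION=2, as the error message suggests, provided the installed client is new enough to actually speak v2.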
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/f865e1e7/attachment.html>

From sbhou at cn.ibm.com  Tue Sep 29 03:51:28 2015
From: sbhou at cn.ibm.com (Sheng Bo Hou)
Date: Tue, 29 Sep 2015 11:51:28 +0800
Subject: [openstack-dev] [Cinder] [Manila] Will NFS stay with Cinder as a
	reference implementation?
Message-ID: <OF527463AA.29EEC511-ON48257ECF.0014DC53-48257ECF.00153220@cn.ibm.com>

Hi folks,

I have a question about the file services in OpenStack.

As you know, there is a generic NFS driver in Cinder from which the other
file system drivers inherit, while the project Manila is determined to
provide the file system service.

Will NFS stay with Cinder as the reference implementation for the coming
release or releases? Are all the file system drivers going to move to
Manila?
What is the relation between Manila as FSaaS and NFS in Cinder?
Any ideas?

Thank you.

Best wishes,
Vincent Hou (???)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM at IBMCN    E-mail: sbhou at cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
??:???????????8???????28??????3? ???100193
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/8933eea2/attachment.html>

From gilles at redhat.com  Tue Sep 29 04:18:24 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Tue, 29 Sep 2015 14:18:24 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <87vbbc2eiu.fsf@s390.unix4.net>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <55F76F5C.2020106@redhat.com>
 <87vbbc2eiu.fsf@s390.unix4.net>
Message-ID: <560A1110.5000209@redhat.com>



On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
> Gilles Dubreuil <gilles at redhat.com> writes:
> 
>> On 15/09/15 06:53, Rich Megginson wrote:
>>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>>> Hi,
>>>>
>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>
>>>>> A. The 'composite namevar' approach:
>>>>>
>>>>>     keystone_tenant {'projectX::domainY': ... }
>>>>>   B. The 'meaningless name' approach:
>>>>>
>>>>>    keystone_tenant {'myproject': name='projectX', domain=>'domainY',
>>>>> ...}
>>>>>
>>>>> Notes:
>>>>>   - Actually using both combined should work too with the domain
>>>>> supposedly overriding the name part of the domain.
>>>>>   - Please look at [1] this for some background between the two
>>>>> approaches:
>>>>>
>>>>> The question
>>>>> -------------
>>>>> Decide between the two approaches, the one we would like to retain for
>>>>> puppet-keystone.
>>>>>
>>>>> Why it matters?
>>>>> ---------------
>>>>> 1. Domain names are mandatory in every user, group or project. Besides
>>>>> the backward compatibility period mentioned earlier, where no domain
>>>>> means using the default one.
>>>>> 2. Long term impact
>>>>> 3. Both approaches are not completely equivalent, with different
>>>>> consequences for future usage.
>>>> I can't see why they couldn't be equivalent, but I may be missing
>>>> something here.
>>>
>>> I think we could support both.  I don't see it as an either/or situation.
>>>
>>>>
>>>>> 4. Being consistent
>>>>> 5. Therefore the community to decide
>>>>>
>>>>> Pros/Cons
>>>>> ----------
>>>>> A.
>>>> I think it's the B: meaningless approach here.
>>>>
>>>>>    Pros
>>>>>      - Easier names
>>>> That's subjective; creating unique and meaningful names doesn't look easy
>>>> to me.
>>>
>>> The point is that this allows choice - maybe the user already has some
>>> naming scheme, or wants to use a more "natural" meaningful name - rather
>>> than being forced into a possibly "awkward" naming scheme with "::"
>>>
>>>   keystone_user { 'heat domain admin user':
>>>     name => 'admin',
>>>     domain => 'HeatDomain',
>>>     ...
>>>   }
>>>
>>>   keystone_user_role {'heat domain admin user@::HeatDomain':
>>>     roles => ['admin']
>>>     ...
>>>   }
>>>
>>>>
>>>>>    Cons
>>>>>      - Titles have no meaning!
>>>
>>> They have meaning to the user, not necessarily to Puppet.
>>>
>>>>>      - Cases where 2 or more resources could exists
>>>
>>> This seems to be the hardest part - I still cannot figure out how to use
>>> "compound" names with Puppet.
>>>
>>>>>      - More difficult to debug
>>>
>>> More difficult than it is already? :P
>>>
>>>>>      - Titles mismatch when listing the resources (self.instances)
>>>>>
>>>>> B.
>>>>>    Pros
>>>>>      - Unique titles guaranteed
>>>>>      - No ambiguity between resource found and their title
>>>>>    Cons
>>>>>      - More complicated titles
>>>>> My vote
>>>>> --------
>>>>> I would love to have the approach A for easier name.
>>>>> But I've seen the challenge of maintaining the providers behind the
>>>>> curtains and the confusion it creates with name/titles and when not sure
>>>>> about the domain we're dealing with.
>>>>> Also I believe that supporting self.instances consistently with
>>>>> meaningful name is saner.
>>>>> Therefore I vote B
>>>> +1 for B.
>>>>
>>>> My view is that this should be the advertised way, but the other method
>>>> (meaningless) should be there if the user need it.
>>>>
>>>> So as far as I'm concerned the two idioms should co-exist.  This would
>>>> mimic what is possible with all puppet resources.  For instance you can:
>>>>
>>>>    file { '/tmp/foo.bar': ensure => present }
>>>>
>>>> and you can
>>>>
>>>>    file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>>>>
>>>> The two refer to the same resource.
>>>
>>> Right.
>>>
>>
>> I disagree, using the name for the title is not creating a composite
>> name. The latter requires adding at least another parameter to be part
>> of the title.
>>
>> Also in the case of the file resource, a path/filename is a unique name,
>> which is not the case of an Openstack user which might exist in several
>> domains.
>>
>> I actually added the meaningful name case in:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>>
>> But that doesn't work very well because without adding the domain to the
>> name, the following fails:
>>
>> keystone_tenant {'project_1': domain => 'domain_A', ...}
>> keystone_tenant {'project_1': domain => 'domain_B', ...}
>>
>> And adding the domain makes it a de-facto 'composite name'.
> 
> I agree that my example is not similar to what the keystone provider has
> to do.  What I wanted to point out is that user in puppet should be used
> to have this kind of *interface*, one where you put something
> meaningful in the title and one where you put something meaningless.
> The fact that the meaningful one is a compound one shouldn't matter to
> the user.
> 

There is a big blocker to making use of the domain name as a parameter.
The issue is a limitation of autorequire.

autorequire doesn't support any parameter other than the resource type,
and it expects the resource title (or a list of titles) [1].

So if, for instance, keystone_user requires the tenant project1 from
domain1, then the resource name must be 'project1::domain1', because
otherwise there is no way to specify 'domain1':

autorequire(:keystone_tenant) do
  self[:tenant]
end

Alternatively, as Sofer suggested (in a discussion we had), we could
poke the catalog to retrieve the corresponding resource(s).
Unfortunately, unless there is a way around it, that doesn't work, because
no matter what, autorequire wants a title.


So it seems that for the domain-scoped resources, we have to stick the
name and domain together: '<name>::<domain>'.

[1]
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type.rb#L2003
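As a language-neutral illustration (Python here, though Puppet providers are written in Ruby), the composite-title handling being discussed amounts to splitting on the rightmost delimiter and falling back to a default domain when none is present. The fallback name 'Default' is an assumption of this sketch, standing in for whatever the backward-compatibility default domain is:

```python
def split_title(title, delimiter='::'):
    """Split a composite title like 'project1::domain1' into (name, domain).

    Splits on the rightmost delimiter, so the name part may itself
    contain '::'.  The 'Default' fallback domain is an assumption of
    this sketch.
    """
    name, sep, domain = title.rpartition(delimiter)
    if not sep:  # no delimiter found: bare name, default domain
        return title, 'Default'
    return name, domain
```

With Sofer's delimiter-property idea, `split_title('foo::blah/bar::is::cool', delimiter='/')` yields `('foo::blah', 'bar::is::cool')`, i.e. the project foo::blah in the domain bar::is::cool.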

>>>>
>>>> But, If that's indeed not possible to have them both,
>>
>> There are cases where having both won't be possible like the trusts, but
>> why not for the resources supporting it.
>>
>> That said, I think we need to make a choice, at least to get started, to
>> have something working, consistently, besides exceptions. Other options
>> to be added later.
> 
> So we should go with the meaningful one first for consistency, I think.
> 
>>>> then I would keep only the meaningful name.
>>>>
>>>>
>>>> As a side note, someone raised an issue about the delimiter being
>>>> hardcoded to "::".  This could be a property of the resource.  This
>>>> would enable the user to use weird name with "::" in it and assign a "/"
>>>> (for instance) to the delimiter property:
>>>>
>>>>    Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>>>
>>>> bar::is::cool is the name of the domain and foo::blah is the project.
>>>
>>> That's a good idea.  Please file a bug for that.
>>>
>>>>
>>>>> Finally
>>>>> ------
>>>>> Thanks for reading that far!
>>>>> To choose, please provide feedback with more pros/cons, examples and
>>>>> your vote.
>>>>>
>>>>> Thanks,
>>>>> Gilles
>>>>>
>>>>>
>>>>> PS:
>>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>>



From mspreitz at us.ibm.com  Tue Sep 29 04:49:45 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Tue, 29 Sep 2015 00:49:45 -0400
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
Message-ID: <201509290449.t8T4nr0J013172@d03av03.boulder.ibm.com>

> From: ?? <wanghua.humble at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev at lists.openstack.org>
> Date: 09/28/2015 11:34 PM
> Subject: [openstack-dev] [magnum]swarm + compose = k8s?
> 
> Hi folks,
> 
> Magnum now exposes service, pod, etc to users in kubernetes coe, but
> exposes container in swarm coe. As I know, swarm is only a scheduler
> of container, which is like nova in openstack. Docker compose is a 
> orchestration program which is like heat in openstack. k8s is the 
> combination of scheduler and orchestration. So I think it is better 
> to expose the apis in compose to users which are at the same level as 
k8s.
> 

Why should the users be deprived of direct access to the Swarm API when it 
is there?

Note also that Compose addresses more general, and differently focused, 
orchestration than Kubernetes; the latter only offers homogeneous scaling 
groups --- which a docker-compose.yaml file cannot even describe.

Regards,
Mike



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/d460b00c/attachment.html>

From clint at fewbar.com  Tue Sep 29 04:53:36 2015
From: clint at fewbar.com (Clint Byrum)
Date: Mon, 28 Sep 2015 21:53:36 -0700
Subject: [openstack-dev] [election] [tc] TC candidacy -- Let's do this
Message-ID: <1443502344-sup-3541@fewbar.com>

Greetings Stackers,

Some of you I know, some of you I'm meeting for the first time.

Through the last 3 years, since I got involved with OpenStack, I've
seen it grow and mature, and I want to make sure it continues to as we
all raise the bar on what we expect from the not so little cloud engine
that could.

I've been involved with some controversial projects, and been pleased
to watch our community handle most of the controversy by simply choosing
to make our developers' actions uncontroversial with the big tent. I aim
to continue this trend of being inclusive and supportive of the efforts
of everyone who wants to throw their hat in as an OpenStack project.

I intend to put users first, and operators second, with our beloved
developers third. I consider myself "all of the above", and I think that
should bring useful perspective to the TC. This isn't an area I think
OpenStack has failed in, but I believe it requires constant vigilance.

I do have a specific agenda for all of OpenStack, and which I will use
my TC seat to advance whenever possible:

- Streamline everything. There are parts of OpenStack that scale to
the moon, and parts that don't. I think everything that carries the
OpenStack moniker should put performance and stability above all else.

- Measure efficiency. We don't necessarily measure how efficient OpenStack
is, or how much each change is affecting its efficiency. We have blog
posts, and anecdotes, but I want to have actual graphs that belong to
the community and show us if we're living up to our goals.

- Resiliency. Once we're streamlining things and measuring our progress,
we should be able to separate performance problems from reliability
problems, and make OpenStack more resilient in general. This serves
users and operators alike, but it may be hard for developers. That's ok,
we like hard stuff, right?

- No sacred cows. I feel like sometimes the tech we have chosen is
either begrudgingly kept because there's nothing good enough to change,
or hallowed as "the way this works." I would like to challenge some of
those cows and see if we can't do a little bit of work toward streamlining
and resilience by challenging conventional wisdom.

I'm going to work on these things from outside the TC anyway. However,
with the added influence that a seat on the TC provides, I can certainly
work on these things even more.

Thank you for your time!

Clint Byrum
IBM


From adrian.otto at rackspace.com  Tue Sep 29 05:03:42 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Tue, 29 Sep 2015 05:03:42 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
Message-ID: <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise on features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we had started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian
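
To illustrate Adrian's point that docker-compose needs nothing beyond the Docker API: a stock compose file of this era works unchanged against a Swarm manager simply by pointing the client at it. A minimal sketch (the manager hostname and port are assumptions, not anything from this thread):

```yaml
# Hypothetical myapp.yml -- the v1 compose file format current in 2015.
# Because compose speaks only the Docker remote API, running it against
# Swarm is just a matter of targeting the manager, e.g.:
#   export DOCKER_HOST=tcp://swarm-manager.example.com:2375   # assumed address
#   docker-compose -f myapp.yml up -d
web:
  image: nginx
  ports:
    - "80:80"
db:
  image: postgres
```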

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the Kubernetes COE, but exposes container in the Swarm COE. As far as I know, Swarm is only a container scheduler, which is like Nova in OpenStack. Docker Compose is an orchestration program, which is like Heat in OpenStack. K8s is the combination of a scheduler and orchestration. So I think it is better to expose the Compose APIs to users, which are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/b7389aa0/attachment.html>

From EGuz at walmartlabs.com  Tue Sep 29 05:17:40 2015
From: EGuz at walmartlabs.com (Egor Guz)
Date: Tue, 29 Sep 2015 05:17:40 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
Message-ID: <D22F6B8E.1DA5F%eguz@walmartlabs.com>

Also I believe docker compose is just a command-line tool which doesn't have any API or scheduling features.
But during the last Docker Conf hackathon PayPal folks implemented a docker compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise on features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we had started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the Kubernetes COE, but exposes container in the Swarm COE. As far as I know, Swarm is only a container scheduler, which is like Nova in OpenStack. Docker Compose is an orchestration program, which is like Heat in OpenStack. K8s is the combination of a scheduler and orchestration. So I think it is better to expose the Compose APIs to users, which are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ton at us.ibm.com  Tue Sep 29 05:30:13 2015
From: ton at us.ibm.com (Ton Ngo)
Date: Mon, 28 Sep 2015 22:30:13 -0700
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <D22F6B8E.1DA5F%eguz@walmartlabs.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
Message-ID: <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>

Would it make sense to ask the opposite of Wanghua's question:  should
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding
heat resources can just interface with k8s instead of Magnum.
Ton Ngo,



From:	Egor Guz <EGuz at walmartlabs.com>
To:	"openstack-dev at lists.openstack.org"
            <openstack-dev at lists.openstack.org>
Date:	09/28/2015 10:20 PM
Subject:	Re: [openstack-dev] [magnum]swarm + compose = k8s?



Also I believe docker compose is just a command-line tool which doesn't have
any API or scheduling features.
But during the last Docker Conf hackathon PayPal folks implemented a docker
compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com<
mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to
operate. We are intentionally avoiding re-inventing the wheel. Our goal is
not to replace docker swarm (or other existing systems), but to complement
it/them. We want to offer users of Docker the richness of native APIs and
supporting tools. This way they will not need to compromise features or
wait longer for us to implement each new feature as it is added. Keep in
mind that our pod, service, and replication controller resources pre-date
this philosophy. If we had started out with the current approach, those would
not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<
mailto:wanghua.humble at gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the Kubernetes COE, but
exposes container in the Swarm COE. As far as I know, Swarm is only a
container scheduler, which is like Nova in OpenStack. Docker Compose is an
orchestration program, which is like Heat in OpenStack. K8s is the
combination of a scheduler and orchestration. So I think it is better to
expose the Compose APIs to users, which are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<
mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/3b02805c/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/3b02805c/attachment.gif>

From armamig at gmail.com  Tue Sep 29 06:06:19 2015
From: armamig at gmail.com (Armando M.)
Date: Mon, 28 Sep 2015 23:06:19 -0700
Subject: [openstack-dev] [Neutron] Defect management
Message-ID: <CAK+RQeavPidL+_JNPT6gsbKUXsWvSOaH8aFbSh--EFbHGXVe0A@mail.gmail.com>

Hi folks,

One of the areas I would like to look into during the Mitaka cycle is
'stability' [1]. The team has done a great job improving test coverage and,
at the same time, increasing the reliability of the product.

However, regressions are always around the corner, and there is a huge
backlog of outstanding bugs (800+ new/confirmed/triaged/in-progress
actively reported) that pressures the team. Letting these slip through the
cracks or leaving them lingering is not cool.

To this end, I would like to propose a number of changes in the way the
team manages defects, and I will be going through the process of proposing
these changes via code review by editing [2] (as done in [3]).

Feedback most welcome.

Many thanks,
Armando


[1]
http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/Neutron/Armando_Migliaccio.txt#n25
[2] http://docs.openstack.org/developer/neutron/policies/index.html
[3] https://review.openstack.org/#/c/228733/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150928/ace3110b/attachment.html>

From duncan.thomas at gmail.com  Tue Sep 29 06:23:37 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 29 Sep 2015 09:23:37 +0300
Subject: [openstack-dev] [Cinder] [Manila] Will NFS stay with Cinder as
 a reference implementation?
In-Reply-To: <OF527463AA.29EEC511-ON48257ECF.0014DC53-48257ECF.00153220@cn.ibm.com>
References: <OF527463AA.29EEC511-ON48257ECF.0014DC53-48257ECF.00153220@cn.ibm.com>
Message-ID: <CAOyZ2aFKZaq6zVGc_VZMub4VJmriowvZUroBXwOVQGca0sEPKg@mail.gmail.com>

Cinder provides a block storage abstraction to a vm. Manila provides a
filesystem abstraction. The two are very different, and complementary. I
see no reason why the nfs related cinder drivers should be removed based on
the existence or maturity of manila - manila is not going to suddenly start
providing block storage to a vm.
On 29 Sep 2015 06:56, "Sheng Bo Hou" <sbhou at cn.ibm.com> wrote:

> Hi folks,
>
> I have a question about the file services in OpenStack.
>
> As you know there is a generic NFS driver in Cinder and other file system
> drivers inherit it, while the project Manila is determined to provide the
> file system service.
>
> Will NFS stay with Cinder as the reference implementation for the coming
> release or releases? Are all the file system drivers going to move to
> Manila?
> What is relation between Manila as FSaaS and NFS in Cinder?
> Any ideas?
>
> Thank you.
>
> Best wishes,
> Vincent Hou (???)
>
> Staff Software Engineer, Open Standards and Open Source Team, Emerging
> Technology Institute, IBM China Software Development Lab
>
> Tel: 86-10-82450778 Fax: 86-10-82453660
> Notes ID: Sheng Bo Hou/China/IBM at IBMCN    E-mail: sbhou at cn.ibm.com
> Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
> West Road, Haidian District, Beijing, P.R.C.100193
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/11656c35/attachment.html>

From Abhishek.Kekane at nttdata.com  Tue Sep 29 06:26:06 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Tue, 29 Sep 2015 06:26:06 +0000
Subject: [openstack-dev] [cinder] snapshot and cloning for NFS backend
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A40E@MAIL703.KDS.KEANE.COM>

Hi Devs,

The cinder-specs [1] for snapshot and cloning for the NFS backend, submitted by Eric, were approved in Kilo, but due to a Nova issue [2] they were not implemented in Kilo or Liberty.
I am discussing this Nova bug with the Nova team to find possible solutions, and Nikola has given some pointers about fixing it in the Launchpad bug.

This feature is very useful for the NFS backend. If the work is to continue, is there a need to resubmit the spec for approval in Mitaka?

Please let me know your opinion on the same.

[1] https://review.openstack.org/#/c/133074/
[2] https://bugs.launchpad.net/nova/+bug/1416132


Thanks & Regards,

Abhishek Kekane

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/59590da6/attachment.html>

From samuel.bartel.pro at gmail.com  Tue Sep 29 07:02:13 2015
From: samuel.bartel.pro at gmail.com (Samuel Bartel)
Date: Tue, 29 Sep 2015 09:02:13 +0200
Subject: [openstack-dev] [Fuel][Plugins] add health check for plugins
In-Reply-To: <7854d3e66a593325404281449e53f955@mail.gmail.com>
References: <CAGq0MBgitgb5ep1vCYr45hrX7u1--gBEP3_LCcz0xoCCQw9asQ@mail.gmail.com>
 <CAOq3GZVoB_7rOU0zr=vaFB3bC4W1eNoyJV-B6Rm2xZzGnTHtRg@mail.gmail.com>
 <2d9b21315bb63501f34fbbdeb36fc0cb@mail.gmail.com>
 <CA+vYeFqOeRQgEzuoSx7mgEhNcVqfNYBweUS=3we8-tTcMD0-Mw@mail.gmail.com>
 <CAGq0MBjgMPOC3-8e6cyAFpjfeUqKcAsuq6MRR3+=auLkk2MAaA@mail.gmail.com>
 <7854d3e66a593325404281449e53f955@mail.gmail.com>
Message-ID: <CAGq0MBhFQXVC4RXPL29wLc8WiniAN7K693UKPV=8vVy-bfc71A@mail.gmail.com>

It makes sense.
We have two interesting use cases here:
- extending sanity tests to validate plugins (see the example in my previous
email)
- extending OSTF tests to add pre-deployment checks. Verify-network tests
are not enough to ensure that a deployment will be successful; there are
lots of other external parameters which can impact the deployment. VMware
credentials, as mentioned by Sheena, is a good example, but there are also
checks for DNS, NTP, and NetApp credentials, whether the NetApp or NFS
server IP is reachable, whether an external Elasticsearch or InfluxDB
server is reachable (in case of using external servers for LMA), and so on.
It would be very interesting to be able to add those tests for core
components and plugins. It would help us assure users that if these tests
pass, no external elements will interfere with their deployment. It could
be a "verify settings" test like the "verify network" one.
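
The kind of pre-deployment "verify settings" check described above can be sketched as a plain TCP reachability probe. This is an illustrative sketch only, not real Fuel/OSTF code; the hostnames and the check names are assumptions:

```python
import socket

def endpoint_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# Hypothetical settings to verify before deployment (names/hosts are examples).
CHECKS = {
    "nfs": ("nfs.example.com", 2049),
    "elasticsearch": ("elastic.example.com", 9200),
    # Note: real NTP is UDP/123; a TCP probe is shown here only for brevity.
    "ntp": ("ntp.example.com", 123),
}

def verify_settings(checks):
    """Run each reachability check; return {name: ok} for reporting."""
    return {name: endpoint_reachable(host, port)
            for name, (host, port) in checks.items()}
```

A failing entry in the returned dict would then be surfaced to the user before deployment starts, much like the existing verify-network result.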



2015-09-28 11:06 GMT+02:00 Sheena Gregson <sgregson at mirantis.com>:

> I just realized I missed this thread when Andrey responded to it --
> apologies!
>
>
>
> I was thinking of things like -- confirming the VMware username and password
> are accurate, confirming that the SSL certificate chain is valid, etc. -- some
> of these are not plugin-based, but I think there is value in enabling both
> core components and plugins to specify tests that can be run prior to
> deployment that will help ensure the deployment will not fail.
>
>
>
> Does that make sense?  In this case, it is not confirming the deployment
> was successful (post-deployment), it is checking known parameters for
> validity prior to attempting to deploy (pre-deployment).
>
>
>
> *From:* Samuel Bartel [mailto:samuel.bartel.pro at gmail.com]
> *Sent:* Monday, September 28, 2015 11:13 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev at lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel][Plugins] add health check for
> plugins
>
>
>
> Hi,
>
> Totally agree with you Andrey,
>
> other use cases could be :
>
> -for the Ironic plugin, add tests to validate that Ironic is properly deployed
>
> -for the LMA plugin, check that metrics and logs are properly collected, and
> that the ELK, Nagios, or Grafana dashboards are accessible
>
> -for Cinder NetApp multi-backend, check that different types of backends can
> be created
>
> and so on
>
> So it would be very interesting to have extensibility for OSTF tests
>
>
>
>
>
> Samuel
>
>
>
> 2015-09-08 0:05 GMT+02:00 Andrey Danin <adanin at mirantis.com>:
>
> Hi.
>
> Sorry for bringing this thread back from the grave, but it looks quite
> interesting to me.
>
> Sheena, could you please explain how pre-deployment sanity checks should
> look like? I don't get what it is.
>
> From the Health Check point of view plugins may be divided to two groups:
>
>
> 1) A plugin that doesn't change already covered functionality and thus
> doesn't require extra tests to be implemented. Such plugins may be Contrail and
> almost all SDN plugins, Glance or Cinder backend plugins, and others which
> don't bring any changes in OSt API or any extra OSt components.
>
>
> 2) A plugin that adds new elements into OSt or changes API or a standard
> behavior. Such plugins may be Contrail (because it actually adds Contrail
> Controller which may be covered by Health Check too), Cisco ASR plugin
> (because it always creates HA routers), some Swift plugins (we don't have
> Swift/S3 API covered by Health Check now at all), SR-IOV plugins (because
> they require special network preparation and extra drivers to be presented
> in an image), when a combination of different ML2 plugins or hypervisors
> deployed (because you need to test all network underlayers or HVs).
>
> So, all that means we need to make OSTF extendible by Fuel plugin's tests
> eventually.
>
>
>
> On Mon, Aug 10, 2015 at 5:17 PM, Sheena Gregson <sgregson at mirantis.com>
> wrote:
>
> I like that idea a lot -- I also think there would be value in adding
> pre-deployment sanity checks that could be called from the Health Check
> screen prior to deployment.  Thoughts?
>
>
>
> *From:* Simon Pasquier [mailto:spasquier at mirantis.com]
> *Sent:* Monday, August 10, 2015 9:00 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev at lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel][Plugins] add health check for
> plugins
>
>
>
> Hello Samuel,
>
> This looks like an interesting idea. Do you have any concrete example to
> illustrate your point (with one of your plugins maybe)?
>
> BR,
>
> Simon
>
>
>
> On Mon, Aug 10, 2015 at 12:04 PM, Samuel Bartel <
> samuel.bartel.pro at gmail.com> wrote:
>
> Hi all,
>
>
>
> actually with Fuel plugins there are tests for the plugins used by the
> CI/CD, but after a deployment it is not possible for the user to easily
> test whether a plugin is correctly deployed or not.
>
> I am wondering if it could be interesting to improve the Fuel plugin
> framework in order to be able to define tests for each plugin which would
> be added to the Health Check. The user would then be able to test the
> plugins when testing the deployment.
>
>
>
> What do you think about that?
>
>
>
>
>
> Kind regards
>
>
>
> Samuel
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
>
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/10cfe76b/attachment.html>

From flavio at redhat.com  Tue Sep 29 08:01:15 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 29 Sep 2015 10:01:15 +0200
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
References: <1443189345-sup-9818@lrrr.local>
 <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
Message-ID: <20150929080115.GA7310@redhat.com>

On 28/09/15 12:32 +0100, John Garbutt wrote:
>On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>>> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>>>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>>>>>
>>>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>> <snip>
>>>>
>>>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
>>>
>>> And yet future technical direction does factor in, and I'm trying
>>> to add a new heuristic to that aspect of consideration of tests:
>>> Do not add tests that use proxy APIs.
>>>
>>> If there is some compelling reason to add a capability for which
>>> the only tests use a proxy, that's important feedback for the
>>> contributor community and tells us we need to improve our test
>>> coverage. If the reason to use the proxy is that no one is deploying
>>> the proxied API publicly, that is also useful feedback, but I suspect
>>> we will, in most cases (glance is the exception), say "Yeah, that's
>>> not how we mean for you to run the services long-term, so don't
>>> include that capability."
>>
>> I think we might also just realize that some of the tests are using the
>> proxy because... that's how they were originally written.
>
>From my memory, that's how we got here.
>
>The Nova tests needed to use an image API. (i.e. list images used to
>check the snapshot Nova, or similar)
>
>The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>it being the only widely deployed option.
>
>> And they could be rewritten to use native APIs.
>
>+1
>Once Glance v2 is available.
>
>Adding Glance v2 as advisory seems a good step to help drive more adoption.

What do you mean here? Glance v2 tests?

I just want to make sure you're talking about the tests and not the
API, since the latter is available and deployed in several clouds. [0]

[0] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074500.html

Flavio
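
For context on what "using the native API" means in this thread: the Glance v2 path John and Doug advocate, instead of the Nova image proxy, amounts to a plain REST call. A minimal sketch of the request shape (the endpoint and token values are placeholders; `/v2/images` and the `X-Auth-Token` header are the standard Glance v2 / Keystone conventions):

```python
def glance_v2_list_images_request(endpoint, token, limit=25):
    """Build the request for a native Glance v2 image listing.

    `endpoint` and `token` are placeholders for a real Glance endpoint and
    Keystone token; only the method, path, and auth header are shown here.
    """
    url = "%s/v2/images?limit=%d" % (endpoint.rstrip("/"), limit)
    headers = {"X-Auth-Token": token}
    return "GET", url, headers

# Example (hypothetical endpoint and token):
method, url, headers = glance_v2_list_images_request(
    "http://glance.example.com:9292", "gAAAA-demo-token")
```

The point of the thread is that DefCore tests could exercise this call directly rather than the equivalent Nova `/images` proxy route.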




-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/24541c17/attachment.pgp>

From flavio at redhat.com  Tue Sep 29 08:03:39 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 29 Sep 2015 10:03:39 +0200
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443444996-sup-6545@lrrr.local>
References: <7560F387-9D33-4DA8-9802-314D01B5D3E2@openstack.org>
 <E24CFF25-009E-4E0E-A6CB-1F63B55C4DEE@vmware.com>
 <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
 <1443444996-sup-6545@lrrr.local>
Message-ID: <20150929080339.GB7310@redhat.com>

On 28/09/15 09:03 -0400, Doug Hellmann wrote:
>Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
>> On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
>> > On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>> >> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>> >>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>> >>>>
>> >>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>> > <snip>
>> >>>
>> >>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
>> >>
>> >> And yet future technical direction does factor in, and I'm trying
>> >> to add a new heuristic to that aspect of consideration of tests:
>> >> Do not add tests that use proxy APIs.
>> >>
>> >> If there is some compelling reason to add a capability for which
>> >> the only tests use a proxy, that's important feedback for the
>> >> contributor community and tells us we need to improve our test
>> >> coverage. If the reason to use the proxy is that no one is deploying
>> >> the proxied API publicly, that is also useful feedback, but I suspect
>> >> we will, in most cases (glance is the exception), say "Yeah, that's
>> >> not how we mean for you to run the services long-term, so don't
>> >> include that capability."
>> >
>> > I think we might also just realize that some of the tests are using the
>> > proxy because... that's how they were originally written.
>>
>> From my memory, that's how we got here.
>>
>> The Nova tests needed to use an image API. (i.e. list images used to
>> check the snapshot Nova, or similar)
>>
>> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>> it being the only widely deployed option.
>
>Right, and I want to make sure it's clear that I am differentiating
>between "these tests are bad" and "these tests are bad *for DefCore*".
>We should definitely continue to test the proxy API, since it's a
>feature we have and that our users rely on.

++

>
>>
>> > And they could be rewritten to use native APIs.
>>
>> +1
>> Once Glance v2 is available.
>>
>> Adding Glance v2 as advisory seems a good step to help drive more adoption.
>
>I think we probably don't want to rewrite the existing tests, since
>that effectively changes the contract out from under existing folks
>complying with DefCore.  If we need new, parallel, tests that do
>not use the proxy to make more suitable tests for DefCore to use,
>we should create those.

I believe this is the road we'll take. It'll not only be safer but
it'll also respect the current contract.

>>
>> > I do agree that "testing proxies" should not be part of Defcore, and I
>> > like Doug's idea of making that a new heuristic in test selection.
>>
>> +1
>> Thats a good thing to add.
>> But I don't think we had another option in this case.
>
>We did have the option of leaving the feature out and highlighting the
>discrepancy to the contributors so tests could be added. That
>communication didn't really happen, as far as I can tell.

++

>
>> >> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
>> >> those APIs in DefCore as a reason to avoid deprecating them in the code
>> >> even if they wanted to consider them as legacy features that should be
>> >> removed. Maybe that's not true, and the Nova team would be happy to
>> >> deprecate the APIs, but I did think that part of the feedback cycle we
>> >> were establishing here was to have an indication from the outside of the
>> >> contributor base about what APIs are considered important enough to keep
>> >> alive for a long period of time.
>> > I'd also agree with this. Defcore is a wider contract that we're trying
>> > to get even more people to write to because that cross section should be
>> > widely deployed. So deprecating something in Defcore is something I
>> > think most teams, Nova included, would be very reluctant to do. It's
>> > just asking for breaking your users.
>>
>> I can't see us removing the proxy APIs in Nova any time soon,
>> regardless of DefCore, as it would break too many people.
>>
>> But personally, I like dropping them from Defcore, to signal that the
>> best practice is to use the Glance v2 API directly, rather than the
>> Nova proxy.
>>
>> Maybe the are just marked deprecated, but still required, although
>> that sounds a bit crazy.
>
>Marking them as deprecated, then removing them from DefCore, would let
>the Nova team make a technical decision about what to do with them
>(maybe they get spun out into a separate service, maybe they're so
>popular you just keep them, whatever).

++


Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/b4c90fa6/attachment.pgp>

From flavio at redhat.com  Tue Sep 29 08:07:50 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 29 Sep 2015 10:07:50 +0200
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <1443471990-sup-8574@lrrr.local>
References: <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
 <1443444996-sup-6545@lrrr.local>
 <399F9428-1FE2-4DDF-B679-080BDD583101@vmware.com>
 <1443471990-sup-8574@lrrr.local>
Message-ID: <20150929080750.GC7310@redhat.com>

On 28/09/15 16:29 -0400, Doug Hellmann wrote:
>Excerpts from Mark Voelker's message of 2015-09-28 19:55:18 +0000:
>> On Sep 28, 2015, at 9:03 AM, Doug Hellmann <doug at doughellmann.com> wrote:
>> >
>> > Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
>> >> On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
>> >>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>> >>>> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>> >>>>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>> >>>>>>
>> >>>>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>> >>> <snip>
>> >>>>>
>> >>>>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
>> >>>>
>> >>>> And yet future technical direction does factor in, and I'm trying
>> >>>> to add a new heuristic to that aspect of consideration of tests:
>> >>>> Do not add tests that use proxy APIs.
>> >>>>
>> >>>> If there is some compelling reason to add a capability for which
>> >>>> the only tests use a proxy, that's important feedback for the
>> >>>> contributor community and tells us we need to improve our test
>> >>>> coverage. If the reason to use the proxy is that no one is deploying
>> >>>> the proxied API publicly, that is also useful feedback, but I suspect
>> >>>> we will, in most cases (glance is the exception), say "Yeah, that's
>> >>>> not how we mean for you to run the services long-term, so don't
>> >>>> include that capability."
>> >>>
>> >>> I think we might also just realize that some of the tests are using the
>> >>> proxy because... that's how they were originally written.
>> >>
>> >> From my memory, that's how we got here.
>> >>
>> >> The Nova tests needed to use an image API. (i.e. list images used to
>> >> check the snapshot Nova, or similar)
>> >>
>> >> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>> >> it being the only widely deployed option.
>> >
>> > Right, and I want to make sure it's clear that I am differentiating
>> > between "these tests are bad" and "these tests are bad *for DefCore*".
>> > We should definitely continue to test the proxy API, since it's a
>> > feature we have and that our users rely on.
>> >
>> >>
>> >>> And they could be rewritten to use native APIs.
>> >>
>> >> +1
>> >> Once Glance v2 is available.
>> >>
>> >> Adding Glance v2 as advisory seems a good step to help drive more adoption.
>> >
>> > I think we probably don't want to rewrite the existing tests, since
>> > that effectively changes the contract out from under existing folks
>> > complying with DefCore.  If we need new, parallel, tests that do
>> > not use the proxy to make more suitable tests for DefCore to use,
>> > we should create those.
>> >
>> >>
>> >>> I do agree that "testing proxies" should not be part of Defcore, and I
>> >>> like Doug's idea of making that a new heuristic in test selection.
>> >>
>> >> +1
>> >> That's a good thing to add.
>> >> But I don't think we had another option in this case.
>> >
>> > We did have the option of leaving the feature out and highlighting the
>> > discrepancy to the contributors so tests could be added. That
>> > communication didn't really happen, as far as I can tell.
>> >
>> >>>> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
>> >>>> those APIs in DefCore as a reason to avoid deprecating them in the code
>> >>>> even if they wanted to consider them as legacy features that should be
>> >>>> removed. Maybe that's not true, and the Nova team would be happy to
>> >>>> deprecate the APIs, but I did think that part of the feedback cycle we
>> >>>> were establishing here was to have an indication from the outside of the
>> >>>> contributor base about what APIs are considered important enough to keep
>> >>>> alive for a long period of time.
>> >>> I'd also agree with this. Defcore is a wider contract that we're trying
>> >>> to get even more people to write to because that cross section should be
>> >>> widely deployed. So deprecating something in Defcore is something I
>> >>> think most teams, Nova included, would be very reluctant to do. It's
>> >>> just asking for breaking your users.
>> >>
>> >> I can't see us removing the proxy APIs in Nova any time soon,
>> >> regardless of DefCore, as it would break too many people.
>> >>
>> >> But personally, I like dropping them from Defcore, to signal that the
>> >> best practice is to use the Glance v2 API directly, rather than the
>> >> Nova proxy.
>> >>
>> >> Maybe they are just marked deprecated, but still required, although
>> >> that sounds a bit crazy.
>> >
>> > Marking them as deprecated, then removing them from DefCore, would let
>> > the Nova team make a technical decision about what to do with them
>> > (maybe they get spun out into a separate service, maybe they're so
>> > popular you just keep them, whatever).
>>
>> So, here's that Who's On First thing again.  Just to clarify: Nova does not need Capabilities to be removed from Guidelines in order to make technical decisions about what to do with a feature (though removing a Capability from future Guidelines may make Nova a lot more comfortable with their decision if they *do* decide to deprecate something, which I think is what Doug was pointing out here).
>>
>> The DefCore Committee cannot tell projects what they can and cannot do with their code [1].  All DefCore can do is tell vendors what capabilities they have to expose to end users (if and only if those vendors want their products to be OpenStack Powered(TM) [2]).  It also tells end users what things they can rely on being present (if and only if they choose an OpenStack Powered(TM) product that adheres to a particular Guideline).  It is a Wonderful Thing if stuff doesn't get dropped from Guidelines very often, because nobody wants users to have to worry about not being able to rely on things they previously relied on.  It's therefore also a Wonderful Thing if projects like Nova and the DefCore Committee are talking to each other with an eye on making end-user experience as consistent and stable as possible, and that when things do change, those transitions are handled as smoothly as possible.
>>
>> But at the end of the day, if Nova wants to deprecate something, spin it out, or keep it, Nova doesn't need DefCore to do anything first in order to make that decision.  DefCore would love a heads-up so the next Guideline (which comes out several months after the OpenStack release in which the changes are made) can take the decision into account.  In fact, in the case of deprecation, as of last week projects are more or less required to give DefCore a heads-up if they want the assert:follows-standard-deprecation [3] tag.  A heads-up is even nice if Nova decides they want to keep supporting something, since that will help the "future direction" criteria be scored properly.
>>
>> Ultimately, what Nova does with Nova's code is still Nova's decision to make.  I think that's a pretty good thing.
>
>Indeed! I guess I overestimated the expectations for DefCore. I thought
>introducing the capabilities tests implied a broader commitment to keep
>the feature than it sounds like is actually the case. I'm glad we are
>more flexible than I thought. :-)

ditto! Glad to hear this is the case.

>> And FWIW I think it's a pretty good thing we're all now openly discussing it, too (after all this whole DefCore thing is still pretty new to most folks) so thanks to all of you for that. =)
>
>Yes, it was pretty difficult to follow some of the earlier DefCore
>discussions while the process and guidelines were being worked out.
>Thanks for clarifying!
>
>

Thank you both for driving this thread. Sorry for having joined so
late!

Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/afa9af9a/attachment.pgp>

From zengyz1983 at live.cn  Tue Sep 29 09:07:59 2015
From: zengyz1983 at live.cn (ZengYingzhe)
Date: Tue, 29 Sep 2015 17:07:59 +0800
Subject: [openstack-dev] [heat]Refactor wsgi implementation of heat
In-Reply-To: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
Message-ID: <BAY179-W756AB08F5451471855D32BD94E0@phx.gbl>

Hi all,
Some basic WSGI functions have been implemented by oslo.service[1], so heat can reuse them rather than implementing them individually, such as the Router class in wsgi.py.
Some other openstack projects have already started to refactor the related logic[2]. Perhaps heat should also start this work?
[1] https://github.com/openstack/oslo.service/blob/master/oslo_service/wsgi.py
[2] https://bugs.launchpad.net/cinder/+bug/1499658

Regards,
Yingzhe Zeng
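For illustration, the kind of minimal request-routing logic that each project's private wsgi.py tends to re-implement, and that a shared library can centralize, looks roughly like this. This is a stdlib-only sketch, not heat's or oslo.service's actual code; the class and handler names are hypothetical.

```python
# Illustrative sketch of a minimal WSGI router, the sort of helper
# duplicated across projects before oslo.service consolidated it.

class Router(object):
    """Dispatch WSGI requests to handlers by (method, path) lookup."""

    def __init__(self):
        self._routes = {}

    def add_route(self, method, path, handler):
        # Handlers are plain WSGI callables: (environ, start_response).
        self._routes[(method, path)] = handler

    def __call__(self, environ, start_response):
        key = (environ["REQUEST_METHOD"], environ["PATH_INFO"])
        handler = self._routes.get(key)
        if handler is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        return handler(environ, start_response)


def list_stacks(environ, start_response):
    # Hypothetical handler standing in for a real API controller.
    start_response("200 OK", [("Content-Type", "application/json")])
    return [b'{"stacks": []}']


router = Router()
router.add_route("GET", "/v1/stacks", list_stacks)
```

The point of moving this into oslo.service is that every project gets the same tested dispatch behavior instead of its own copy.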


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/86f1880d/attachment.html>

From tom.cammann at hpe.com  Tue Sep 29 09:22:30 2015
From: tom.cammann at hpe.com (Tom Cammann)
Date: Tue, 29 Sep 2015 10:22:30 +0100
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
Message-ID: <560A5856.1050303@hpe.com>

This has been my thinking over the last couple of months: completely
deprecate the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be
very difficult and probably a wasted effort trying to consolidate their
separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
>
> Would it make sense to ask the opposite of Wanghua's question: should 
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the 
> corresponding heat resources can just interface with k8s instead of 
> Magnum.
> Ton Ngo,
>
>
> From: Egor Guz <EGuz at walmartlabs.com>
> To: "openstack-dev at lists.openstack.org" 
> <openstack-dev at lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> ------------------------------------------------------------------------
>
>
>
> Also I believe docker compose is just a command line tool which doesn't
> have any API or scheduling features.
> But during the last DockerCon hackathon, PayPal folks implemented a docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor),
> which can give you a pod-like experience.
>
> --
> Egor
>
> From: Adrian Otto 
> <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Wanghua,
>
> I do follow your logic, but docker-compose only needs the docker API 
> to operate. We are intentionally avoiding re-inventing the wheel. Our 
> goal is not to replace docker swarm (or other existing systems), but 
> to complement it/them. We want to offer users of Docker the richness 
> of native APIs and supporting tools. This way they will not need to 
> compromise features or wait longer for us to implement each new 
> feature as it is added. Keep in mind that our pod, service, and 
> replication controller resources pre-date this philosophy. If we 
> started out with the current approach, those would not exist in Magnum.
>
> Thanks,
>
> Adrian
>
> On Sep 28, 2015, at 8:32 PM, Wanghua 
> <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>> wrote:
>
> Hi folks,
>
> Magnum now exposes service, pod, etc. to users in the kubernetes COE, but 
> exposes container in the swarm COE. As I know, swarm is only a scheduler 
> of containers, which is like nova in openstack. Docker compose is an 
> orchestration program, which is like heat in openstack. k8s is the 
> combination of a scheduler and orchestration. So I think it is better to 
> expose the compose APIs to users, since they are at the same level as k8s.
>
>
> Regards
> Wanghua
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/0ac51a5f/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/0ac51a5f/attachment.gif>

From rcresswe at cisco.com  Tue Sep 29 09:32:23 2015
From: rcresswe at cisco.com (Rob Cresswell (rcresswe))
Date: Tue, 29 Sep 2015 09:32:23 +0000
Subject: [openstack-dev] [Horizon] Horizon Productivity Suggestion
In-Reply-To: <CABtBEBVrvfPSh=gtvj1hF3+Gj6qhF=3E3_+tmeOux8M2__gnew@mail.gmail.com>
References: <D22ECDB2.F132%rcresswe@cisco.com>
 <F68A1980-DFE6-4CEE-80AB-5D4CA8B0A69C@hpe.com>
 <CABtBEBVrvfPSh=gtvj1hF3+Gj6qhF=3E3_+tmeOux8M2__gnew@mail.gmail.com>
Message-ID: <D23017EC.F20D%rcresswe@cisco.com>

I wasn't really envisioning a big discussion on the bugs; more like a brief notice period to let reviewers know about high-priority items. We could definitely spend longer on it if that is preferred. Timing aside, the overall idea sounds good though?

Lin: That's a good idea. A wiki page would probably suffice.

Rob

From: Lin Hua Cheng <os.lcheng at gmail.com<mailto:os.lcheng at gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, 29 September 2015 04:11
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [Horizon] Horizon Productivity Suggestion

I agree with Travis that 2-3 minutes is not enough; that may not even be enough to talk about one bug. :)

We could save some time if we have someone monitoring the bugs/features and publishing the high-priority items in a report - something similar to what Keystone does [1].  Reviewers can look this up whenever they need to prioritize their reviews.

We can rotate this responsibility among cores every month - or even a non-core if someone wants to volunteer.

-Lin

[1] https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting#Keystone_Weekly_Bug_Reports




On Mon, Sep 28, 2015 at 7:22 PM, Tripp, Travis S <travis.tripp at hpe.com<mailto:travis.tripp at hpe.com>> wrote:
Things always move more quickly at the end of a cycle because people feel release pressure, but I do think this is a good idea. 2-3 minutes isn't very realistic. It would need to be planned for longer.





On 9/28/15, 3:57 AM, "Rob Cresswell (rcresswe)" <rcresswe at cisco.com<mailto:rcresswe at cisco.com>> wrote:

>Hi folks,
>
>I'm wondering if we could try marking out a small 2-3 minute slot at the
>start of each weekly meeting to highlight Critical/ High bugs that have
>code up for review, as well as important blueprints that have code up for
>review. These would be blueprints for features that were identified as
>high priority at the summit.
>
>The thought here is that we were very efficient in L-RC1 at moving code
>along, which is nice for productivity, but not really great for stability;
>it would be good to do this kind of targeted work earlier in the cycle.
>I've noticed other projects doing this in their meetings, and it seems
>quite effective.
>
>Rob
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/aa465084/attachment.html>

From skraynev at mirantis.com  Tue Sep 29 09:47:23 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Tue, 29 Sep 2015 12:47:23 +0300
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <EF3B902C-A4BD-4011-9D24-6F6AE2806FAA@parksidesoftware.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>
 <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
 <EF3B902C-A4BD-4011-9D24-6F6AE2806FAA@parksidesoftware.com>
Message-ID: <CAAbQNR=y3paqW=zKU9r18xrGLcRs0BeBe=449WALcD9_TVzHJA@mail.gmail.com>

Guys, my apologies for the delay. Now I can give answers.

Stephen, the Heat meeting is scheduled on Wednesday. (
https://wiki.openstack.org/wiki/Meetings/HeatAgenda)

The result was really short and clear: use the naming suggested in this
mail thread (OS::LBaaS::*) and add new resources.
I personally think that this work requires a separate BP + spec, so some
corner cases about similar resources may be discussed in review of that
specification.

Thank you guys for raising this idea. We definitely should provide
new "fresh" resources for users :)

Regards,
Sergey.

On 25 September 2015 at 01:30, Doug Wiegley <dougwig at parksidesoftware.com>
wrote:

> Hi Sergey,
>
> I agree with the previous comments here. While supporting several APIs at
> once, with one set of objects, is a noble goal, in this case, the object
> relationships are *completely* different. Unless you want to get into the
> business of redefining your own higher-level API abstractions in all cases,
> that general strategy for all things will be awkward and difficult.
>
> Some API changes lend themselves well to object reuse abstractions. Some
> don't. LBaaS v2 is definitely the latter, IMO.
>
> What was the result of your meeting discussion?  (*goes to grub around in
> eavesdrop logs after typing this.*)
>
> Thanks,
> doug
>
>
>
> On Sep 23, 2015, at 12:09 PM, Sergey Kraynev <skraynev at mirantis.com>
> wrote:
>
> Guys, I am happy that you already discussed it here :)
> However, I'd like to raise the same question at our Heat IRC meeting.
> Probably we should define some common concepts, because I think that
> lbaas is not the only example of a service with
> several APIs.
> I will post update in this thread later (after meeting).
>
> Regards,
> Sergey.
>
> On 23 September 2015 at 14:37, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>
>> Separate ns would work great.
>>
>> Thanks,
>> Kevin
>>
>> ------------------------------
>> *From:* Banashankar KV
>> *Sent:* Tuesday, September 22, 2015 9:14:35 PM
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> LbaasV2
>>
>> What do you think about separating both of them with the naming Doug
>> mentioned? In the future, if we want to get rid of v1, we can just remove
>> that namespace. Everything will be clean.
>>
>> Thanks
>> Banashankar
>>
>>
>> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>>
>>> As I understand it, loadbalancer in v2 is more like pool was in v1. Can
>>> we make it such that if you are using the loadbalancer resource and have
>>> the mandatory v2 properties, it tries to use the v2 API; otherwise it's a v1
>>> resource? PoolMember should be OK being the same. It just needs to call v1
>>> or v2 depending on whether the lb it's pointing at is v1 or v2. Is monitor's API
>>> different between them? Can it be like pool member?
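The version-detection heuristic described above could look roughly like this. This is a hypothetical sketch: the property names below are illustrative placeholders, not Heat's actual resource schema.

```python
# Sketch of inferring the LBaaS API version from which properties a
# template supplied. Property names here are assumed for illustration.

# Properties that (in this sketch) only exist in the v2 resource.
V2_ONLY_PROPERTIES = {"vip_subnet", "listeners"}


def detect_lb_version(properties):
    """Return 2 if any v2-only property is present, else fall back to 1.

    `properties` is the dict of properties the template author set on
    the loadbalancer resource.
    """
    if V2_ONLY_PROPERTIES & set(properties):
        return 2
    return 1
```

The fragility Kevin notes remains: this only works if some v2-mandatory property is guaranteed never to appear in a valid v1 definition.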
>>>
>>> Thanks,
>>> Kevin
>>>
>>> ------------------------------
>>> *From:* Brandon Logan
>>> *Sent:* Tuesday, September 22, 2015 5:39:03 PM
>>>
>>> *To:* openstack-dev at lists.openstack.org
>>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> LbaasV2
>>>
>>> So the v1 API is of the structure:
>>>
>>> <neutron-endpoint>/lb/(vip|pool|member|health_monitor)
>>>
>>> V2s is:
>>> <neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)
>>>
>>> member is a child of pool, so it would go down one level.
>>>
>>> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
>>> is enough of a difference.
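The two request-path layouts described above can be sketched as a small helper. The resource names follow the description in this thread; the helper function itself, and the exact v2 member path, are assumptions for illustration, not the actual neutron client code.

```python
# Hypothetical path builder for the v1 vs. v2 LBaaS URL layouts
# described above (paths are relative to the neutron endpoint).

V1_RESOURCES = {"vip", "pool", "member", "health_monitor"}
V2_RESOURCES = {"loadbalancer", "listener", "pool", "healthmonitor"}


def lbaas_path(version, resource, pool_id=None):
    """Build the collection path for an LBaaS resource."""
    if version == 1:
        if resource not in V1_RESOURCES:
            raise ValueError("unknown v1 resource: %s" % resource)
        return "/lb/%s" % resource
    if resource == "member":
        # In v2, member is a child of pool, so it sits one level down.
        return "/lbaas/pools/%s/members" % pool_id
    if resource not in V2_RESOURCES:
        raise ValueError("unknown v2 resource: %s" % resource)
    return "/lbaas/%s" % resource
```

As the thread notes, the top-level prefix (`/lb` vs. `/lbaas`) is the only surface difference; the object relationships behind it differ much more.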
>>>
>>> Thanks,
>>> Brandon
>>> On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
>>> > Thats the problem. :/
>>> >
>>> > I can't think of a way to have them coexist without: breaking old
>>> > templates, including v2 in the name, or having a flag on the resource
>>> > saying the version is v2. And as an app developer I'd rather not have
>>> > my existing templates break.
>>> >
>>> > I haven't compared the api's at all, but is there a required field of
>>> > v2 that is different enough from v1 that by its simple existence in
>>> > the resource you can tell a v2 from a v1 object? Would something like
>>> > that work? PoolMember wouldn't have to change, the same resource could
>>> > probably work for whatever lb it was pointing at I'm guessing.
>>> >
>>> > Thanks,
>>> > Kevin
>>> >
>>> >
>>> >
>>> > ______________________________________________________________________
>>> > From: Banashankar KV [banveerad at gmail.com]
>>> > Sent: Tuesday, September 22, 2015 4:40 PM
>>> > To: OpenStack Development Mailing List (not for usage questions)
>>> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> > LbaasV2
>>> >
>>> >
>>> >
>>> > Ok, sounds good. So now the question is how should we name the new V2
>>> > resources ?
>>> >
>>> >
>>> >
>>> > Thanks
>>> > Banashankar
>>> >
>>> >
>>> >
>>> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
>>> > wrote:
>>> >         Yes, hence the need to support the v2 resources as separate
>>> >         things. Then I can rewrite the templates to include the new
>>> >         resources rather than the old resources as appropriate. IE, it
>>> >         will be a porting effort to rewrite them. Then do a heat
>>> >         update on the stack to migrate it from lbv1 to lbv2. Since
>>> >         they are different resources, it should create the new and
>>> >         delete the old.
>>> >
>>> >         Thanks,
>>> >         Kevin
>>> >
>>> >
>>> >         ______________________________________________________________
>>> >         From: Banashankar KV [banveerad at gmail.com]
>>> >         Sent: Tuesday, September 22, 2015 4:16 PM
>>> >
>>> >         To: OpenStack Development Mailing List (not for usage
>>> >         questions)
>>> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>>> >         for LbaasV2
>>> >
>>> >
>>> >
>>> >
>>> >         But since V2 has introduced some new components and the whole
>>> >         association of the resources with each other has changed,
>>> >         should we still be able to do what Kevin has mentioned?
>>> >
>>> >         Thanks
>>> >         Banashankar
>>> >
>>> >
>>> >
>>> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>>> >         <Kevin.Fox at pnnl.gov> wrote:
>>> >                 There needs to be a way to have both v1 and v2
>>> >                 supported in one engine....
>>> >
>>> >                 Say I have templates that use v1 already in existence
>>> >                 (I do), and I want to be able to heat stack update on
>>> >                 them one at a time to v2. This will replace the v1 lb
>>> >                 with v2, migrating the floating ip from the v1 lb to
>>> >                 the v2 one. This gives a smoothish upgrade path.
>>> >
>>> >                 Thanks,
>>> >                 Kevin
>>> >                 ________________________________________
>>> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>>> >                 Sent: Tuesday, September 22, 2015 3:22 PM
>>> >                 To: openstack-dev at lists.openstack.org
>>> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>>> >                 support for LbaasV2
>>> >
>>> >                 Well I'd hate to have the V2 postfix on it because V1
>>> >                 will be deprecated
>>> >                 and removed, which means the V2 being there would be
>>> >                 lame.  Is there any
>>> >                 kind of precedent set for how to handle this?
>>> >
>>> >                 Thanks,
>>> >                 Brandon
>>> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>>> >                 wrote:
>>> >                 > So are we thinking of making it as ?
>>> >                 > OS::Neutron::LoadBalancerV2
>>> >                 >
>>> >                 > OS::Neutron::ListenerV2
>>> >                 >
>>> >                 > OS::Neutron::PoolV2
>>> >                 >
>>> >                 > OS::Neutron::PoolMemberV2
>>> >                 >
>>> >                 > OS::Neutron::HealthMonitorV2
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > and add all those into the loadbalancer.py of heat
>>> >                 engine ?
>>> >                 >
>>> >                 > Thanks
>>> >                 > Banashankar
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>>> >                 > <skraynev at mirantis.com> wrote:
>>> >                 >         Brandon.
>>> >                 >
>>> >                 >
>>> >                 >         As I understand, v1 and v2 have
>>> >                 >         differences in the list of
>>> >                 >         objects and also in the relationships between
>>> >                 >         them.
>>> >                 >         So I don't think that it will be easy to
>>> >                 upgrade old resources
>>> >                 >         (unfortunately).
>>> >                 >         I'd agree with second Kevin's suggestion
>>> >                 about implementation
>>> >                 >         new resources in this case.
>>> >                 >
>>> >                 >
>>> >                 >         I see that a lot of guys want to help
>>> >                 >         with it :) And I
>>> >                 >         suppose that Rabi Mishra and I may try to
>>> >                 >         help with it,
>>> >                 >         because we were involved in the implementation
>>> >                 >         of the v1 resources
>>> >                 >         in Heat.
>>> >                 >         Following is the list of v1 lbaas resources in
>>> >                 >         Heat:
>>> >                 >
>>> >                 >
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>>> >                 >
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>>> >                 >
>>> >                 >
>>> >
>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >         Also, I suppose, that it may be discussed
>>> >                 during summit
>>> >                 >         talks :)
>>> >                 >         Will add to etherpad with potential
>>> >                 sessions.
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >         Regards,
>>> >                 >         Sergey.
>>> >                 >
>>> >                 >         On 22 September 2015 at 22:27, Brandon Logan
>>> >                 >         <brandon.logan at rackspace.com> wrote:
>>> >                 >                 There is some overlap, but there were
>>> >                 some incompatible
>>> >                 >                 differences when
>>> >                 >                 we started designing v2.  I'm sure
>>> >                 the same issues
>>> >                 >                 will arise this time
>>> >                 >                 around so new resources sounds like
>>> >                 the path to go.
>>> >                 >                 However, I do not
>>> >                 >                 know much about Heat and the
>>> >                 resources so I'm speaking
>>> >                 >                 on a very
>>> >                 >                 uneducated level here.
>>> >                 >
>>> >                 >                 Thanks,
>>> >                 >                 Brandon
>>> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>>> >                 Fox, Kevin M wrote:
>>> >                 >                 > We're using the v1 resources...
>>> >                 >                 >
>>> >                 >                 > If the v2 ones are compatible and
>>> >                 can seamlessly
>>> >                 >                 upgrade, great
>>> >                 >                 >
>>> >                 >                 > Otherwise, make new ones please.
>>> >                 >                 >
>>> >                 >                 > Thanks,
>>> >                 >                 > Kevin
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >
>>> >
>>> ______________________________________________________________________
>>> >                 >                 > From: Banashankar KV
>>> >                 [banveerad at gmail.com]
>>> >                 >                 > Sent: Tuesday, September 22, 2015
>>> >                 10:07 AM
>>> >                 >                 > To: OpenStack Development Mailing
>>> >                 List (not for
>>> >                 >                 usage questions)
>>> >                 >                 > Subject: Re: [openstack-dev]
>>> >                 [neutron][lbaas] - Heat
>>> >                 >                 support for
>>> >                 >                 > LbaasV2
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 > Hi Brandon,
>>> >                 >                 > Work in progress, but need some
>>> >                 input on the way we
>>> >                 >                 want them, like
>>> >                 >                 > replace the existing lbaasv1 or we
>>> >                 still need to
>>> >                 >                 support them ?
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 > Thanks
>>> >                 >                 > Banashankar
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 > On Tue, Sep 22, 2015 at 9:18 AM,
>>> >                 Brandon Logan
>>> >                 >                 > <brandon.logan at rackspace.com>
>>> >                 wrote:
>>> >                 >                 >         Hi Banashankar,
>>> >                 >                 >         I think it'd be great if
>>> >                 you got this going.
>>> >                 >                 One of those
>>> >                 >                 >         things we
>>> >                 >                 >         want to have and people
>>> >                 ask for but has
>>> >                 >                 always gotten a lower
>>> >                 >                 >         priority
>>> >                 >                 >         due to the critical things
>>> >                 needed.
>>> >                 >                 >
>>> >                 >                 >         Thanks,
>>> >                 >                 >         Brandon
>>> >                 >                 >         On Mon, 2015-09-21 at
>>> >                 17:57 -0700,
>>> >                 >                 Banashankar KV wrote:
>>> >                 >                 >         > Hi All,
>>> >                 >                 >         > I was thinking of
>>> >                 starting the work on
>>> >                 >                 heat to support
>>> >                 >                 >         LBaasV2,  Is
>>> >                 >                 >         > there any concerns about
>>> >                 that?
>>> >                 >                 >         >
>>> >                 >                 >         >
>>> >                 >                 >         > I don't know if it is
>>> >                 the right time to
>>> >                 >                 bring this up :D .
>>> >                 >                 >         >
>>> >                 >                 >         > Thanks,
>>> >                 >                 >         > Banashankar (bana_k)
>>> >                 >                 >         >
>>> >                 >                 >         >
>>> >                 >                 >
>>> >                 >                 >         >
>>> >                 >                 >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 >                 >         > OpenStack Development
>>> >                 Mailing List (not
>>> >                 >                 for usage questions)
>>> >                 >                 >         > Unsubscribe:
>>> >                 >                 >
>>> >                 >
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >                 >         >
>>> >                 >                 >
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 >                 >         OpenStack Development
>>> >                 Mailing List (not for
>>> >                 >                 usage questions)
>>> >                 >                 >         Unsubscribe:
>>> >                 >                 >
>>> >                 >
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >                 >
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >                 >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 >                 > OpenStack Development Mailing List
>>> >                 (not for usage
>>> >                 >                 questions)
>>> >                 >                 > Unsubscribe:
>>> >                 >
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >                 >
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >                 >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 >                 OpenStack Development Mailing List
>>> >                 (not for usage
>>> >                 >                 questions)
>>> >                 >                 Unsubscribe:
>>> >                 >
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 >         OpenStack Development Mailing List (not for
>>> >                 usage questions)
>>> >                 >         Unsubscribe:
>>> >                 >
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >                 >
>>> >                 >
>>> >                 >
>>> >                 >
>>> >
>>> __________________________________________________________________________
>>> >                 > OpenStack Development Mailing List (not for usage
>>> >                 questions)
>>> >                 > Unsubscribe:
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >                 >
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> __________________________________________________________________________
>>> >                 OpenStack Development Mailing List (not for usage
>>> >                 questions)
>>> >                 Unsubscribe:
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> __________________________________________________________________________
>>> >                 OpenStack Development Mailing List (not for usage
>>> >                 questions)
>>> >                 Unsubscribe:
>>> >
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> >
>>> >
>>> >
>>> __________________________________________________________________________
>>> >         OpenStack Development Mailing List (not for usage questions)
>>> >         Unsubscribe:
>>> >         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> >
>>> >
>>> __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/aba1f8b3/attachment.html>

From skraynev at mirantis.com  Tue Sep 29 09:56:43 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Tue, 29 Sep 2015 12:56:43 +0300
Subject: [openstack-dev] [heat] Traditional question about Heat IRC meeting
	time.
Message-ID: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>

Hi Heaters!

Previously we had a standing "tradition" of changing the meeting time to
involve more people from different time zones.
However, the last release cycle showed that the two meetings at 07:00 and
20:00 UTC are comfortable for most of our contributors. Both times are
acceptable for me and I plan to attend both meetings, so I suggest leaving
them unchanged.

What do you think?

Regards,
Sergey.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/c154dcc3/attachment.html>

From pshchelokovskyy at mirantis.com  Tue Sep 29 10:16:50 2015
From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy)
Date: Tue, 29 Sep 2015 10:16:50 +0000
Subject: [openstack-dev] [heat] Traditional question about Heat IRC
 meeting time.
In-Reply-To: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>
References: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>
Message-ID: <CACfB1usZiopxuqNS7oGEQU9RwHmcoBBZ=B6tZ+Za7Y5PguLbLA@mail.gmail.com>

+1, works for me too

On Tue, Sep 29, 2015 at 12:57 PM Sergey Kraynev <skraynev at mirantis.com>
wrote:

> Hi Heaters!
>
> Previously we had a standing "tradition" of changing the meeting time to
> involve more people from different time zones.
> However, the last release cycle showed that the two meetings at 07:00
> and 20:00 UTC are comfortable for most of our contributors. Both times
> are acceptable for me and I plan to attend both meetings, so I suggest
> leaving them unchanged.
>
> What do you think?
>
> Regards,
> Sergey.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/c4fd138d/attachment.html>

From wanghua.humble at gmail.com  Tue Sep 29 10:27:31 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Tue, 29 Sep 2015 18:27:31 +0800
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <560A5856.1050303@hpe.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
 <560A5856.1050303@hpe.com>
Message-ID: <CAH5-jC8fTherpnJjGOhtBjsXKyJNjFiwL6YjHsO_Ui2eeqP08Q@mail.gmail.com>

I agree with Tom in seeing Magnum as COEDaaS. k8s, Swarm, and Mesos are so
different in their architectures that Magnum cannot provide a unified API to
users. So I think we should focus on deployment.

Regards,
Wanghua

On Tue, Sep 29, 2015 at 5:22 PM, Tom Cammann <tom.cammann at hpe.com> wrote:

> This has been my thinking in the last couple of months to completely
> deprecate the COE specific APIs such as pod/service/rc and container.
>
> As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very
> difficult, and probably a wasted effort, to try to consolidate their
> separate APIs under a single Magnum API.
>
> I'm starting to see Magnum as COEDaaS - Container Orchestration Engine
> Deployment as a Service.
>
>
> On 29/09/15 06:30, Ton Ngo wrote:
>
> Would it make sense to ask the opposite of Wanghua's question: should
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the corresponding
> heat resources can just interface with k8s instead of Magnum.
> Ton Ngo,
>
>
> From: Egor Guz <EGuz at walmartlabs.com>
> To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> ------------------------------
>
>
>
> Also, I believe docker compose is just a command line tool which doesn't
> have any API or scheduling features.
> But during the last Docker Conf hackathon, PayPal folks implemented a docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor),
> which can give you a pod-like experience.
>
> --
> Egor
>
> From: Adrian Otto <adrian.otto at rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Wanghua,
>
> I do follow your logic, but docker-compose only needs the docker API to
> operate. We are intentionally avoiding re-inventing the wheel. Our goal is
> not to replace docker swarm (or other existing systems), but to complement
> it/them. We want to offer users of Docker the richness of native APIs and
> supporting tools. This way they will not need to compromise features or
> wait longer for us to implement each new feature as it is added. Keep in
> mind that our pod, service, and replication controller resources pre-date
> this philosophy. If we started out with the current approach, those would
> not exist in Magnum.
>
> Thanks,
>
> Adrian
>
> On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com> wrote:
>
> Hi folks,
>
> Magnum now exposes service, pod, etc. to users in the Kubernetes COE, but
> exposes container in the Swarm COE. As far as I know, Swarm is only a
> scheduler of containers, which is like Nova in OpenStack. Docker Compose is
> an orchestration program, which is like Heat in OpenStack. k8s is the
> combination of scheduler and orchestration. So I think it is better to
> expose the APIs in Compose to users, which are at the same level as k8s.
>
>
> Regards
> Wanghua
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/3a4961b8/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 105 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/3a4961b8/attachment.gif>

From sean at dague.net  Tue Sep 29 10:28:34 2015
From: sean at dague.net (Sean Dague)
Date: Tue, 29 Sep 2015 06:28:34 -0400
Subject: [openstack-dev] [all] Proposed Mitaka release schedule
In-Reply-To: <560949DD.4060503@openstack.org>
References: <560949DD.4060503@openstack.org>
Message-ID: <560A67D2.5070105@dague.net>

On 09/28/2015 10:08 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> You can find the proposed release schedule for Mitaka here:
> 
> https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
> 
> That places the end release on April 7, 2016. It's also worth noting
> that in an effort to maximize development time, this schedule reduces
> the time between Feature Freeze and final release by one week (5 weeks
> instead of 6 weeks). That means we'll collectively have to be a lot
> stricter on Feature freeze exceptions this time around. Be prepared for
> that.
> 
> Feel free to ping the Release management team members on
> #openstack-relmgr-office if you have any question.

Seems reasonable. Though I miss your sleepy guy icon for the week of
Christmas. :) I do see that milestone 2 is 7 weeks long to account for it.

	-Sean

-- 
Sean Dague
http://dague.net


From shardy at redhat.com  Tue Sep 29 10:31:27 2015
From: shardy at redhat.com (Steven Hardy)
Date: Tue, 29 Sep 2015 11:31:27 +0100
Subject: [openstack-dev] [heat] Traditional question about Heat IRC
 meeting time.
In-Reply-To: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>
References: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>
Message-ID: <20150929103126.GC14707@t430slt.redhat.com>

On Tue, Sep 29, 2015 at 12:56:43PM +0300, Sergey Kraynev wrote:
>    Hi Heaters!
>    Previously we had constant "tradition" to change meeting time for
>    involving more people from different time zones.
>    However last release cycle show, that two different meetings with 07:00
>    and 20:00 UTC are comfortable for most of our contributors. Both time
>    values areA acceptable for me and I plan to visit both meetings. So I
>    suggested to leave it without any changes.
>    What do you think about it ?

+1, both times are OK for me.

Steve


From wanghua.humble at gmail.com  Tue Sep 29 10:36:48 2015
From: wanghua.humble at gmail.com (=?UTF-8?B?546L5Y2O?=)
Date: Tue, 29 Sep 2015 18:36:48 +0800
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <CAH5-jC8fTherpnJjGOhtBjsXKyJNjFiwL6YjHsO_Ui2eeqP08Q@mail.gmail.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
 <560A5856.1050303@hpe.com>
 <CAH5-jC8fTherpnJjGOhtBjsXKyJNjFiwL6YjHsO_Ui2eeqP08Q@mail.gmail.com>
Message-ID: <CAH5-jC8o4Mf2C6bFsCMpPkRjiWiyNL9RCT0AkX4xfkvhiOWkhw@mail.gmail.com>

@Egor, docker compose is just a command line tool now, but I think it will
change its architecture to client/server in the future; otherwise it
cannot do some complicated jobs.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/a45291ae/attachment.html>

From vkuklin at mirantis.com  Tue Sep 29 10:46:36 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Tue, 29 Sep 2015 13:46:36 +0300
Subject: [openstack-dev] [Fuel][PTL] PTL Candidates Q&A Session
Message-ID: <CAHAWLf2nmFnTo6yOpRCy-J0vqxLGKpzi9qhLoJtRgea92jhNeg@mail.gmail.com>

Folks

I think it is awesome we have three candidates for the PTL position in Fuel.
I read all candidates' emails (including my own, several times :-) ) and I
found it hard to really differentiate the candidates' platforms, as they are
almost identical from a high-level point of view. But we all know that the
devil is in the details, and these details will actually affect the
project's future.

Thus I thought about a Q&A session in the #fuel-dev channel on IRC. I think
it will be mutually beneficial for everyone to make our platforms a little
bit clearer.

Let's do it before, or right at the start of, the actual voting so that our
contributors can make better decisions based on this session.

I suggest the following format:

1) 3 questions from electorate members - let's put them onto an etherpad
2) 2 questions from a candidate to his opponents (1 question per opponent)
3) external moderator - I suppose, @xarses as our weekly meeting moderator
could help us
4) time and date - Wednesday or Thursday, at a time comfortable for both
time zones, e.g. after 4 PM UTC or right after the Fuel weekly meeting.

What do you think, folks?

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/e00ca983/attachment.html>

From geguileo at redhat.com  Tue Sep 29 11:20:29 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 29 Sep 2015 13:20:29 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <1443477049-sup-9690@fewbar.com>
References: <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
 <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>
 <1443477049-sup-9690@fewbar.com>
Message-ID: <20150929112029.GX3713@localhost>

On 28/09, Clint Byrum wrote:
> Excerpts from Kevin Benton's message of 2015-09-28 14:29:14 -0700:
> > I think a blanket statement about what people's motivations are is not
> > fair. We've seen in this thread that some people want to enforce the limit
> > of 72 chars and it's not about padding their stats.
> > 
> > The issue here is that we have a guideline with a very specific number. If
> > we don't care to enforce it, why do we even bother? "Please do this, unless
> > you don't feel like it", is going to be hard for many people to review in a
> > way that pleases everyone.
> > 
> 
> Please do read said guidelines. "Must" would be used if it were to be
> "enforced". It "should" be formatted that way.

Since we are not all native speakers, expecting everyone to notice that
distinction - which is completely right - may be a little optimistic,
especially considering that parts of those guidelines may even have been
written by non-native speakers.

Let's say I interpret all "should" instances in that guideline as rules
that don't need to be strictly enforced. Then the Change-Id "should not be
changed when rebasing" - this one would certainly be fun to watch if we
didn't follow it - the blueprint line "should give the name of a Launchpad
blueprint" - I don't know any core who would not -1 a patch on noticing a
missing BP reference - and machine-targeted metadata "should all be grouped
together at the end of the commit message" - this one everyone follows
instinctively, so no problem.

And if we look at the i18n guidelines, almost everything uses "should",
but in reviews these are treated as a strict *must* because of the
implications.

Anyway, it's a matter of opinion, and AFAIK in Cinder we don't even have
a real problem with downvoting for commit message length; I don't see
more than one every couple of months or so.

Cheers,
Gorka.
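
P.S. As a concrete illustration of the client-side option: a minimal
`commit-msg` hook that merely warns about overlong lines could look like the
sketch below. This is hypothetical example code, not an official OpenStack
tool; the skipped metadata prefixes and file layout are assumptions.

```python
#!/usr/bin/env python
"""Hypothetical Git commit-msg hook: warn about lines over 72 characters.

Illustrative sketch only; the limit and skipped prefixes follow the
guideline discussed in this thread, everything else is an assumption.
"""
import sys

MAX_LEN = 72
# Comment lines are stripped by git; machine-targeted metadata lines
# (Change-Id, Depends-On, ...) are allowed to run long.
SKIP_PREFIXES = ('#', 'Change-Id:', 'Depends-On:')


def overlong_lines(message):
    """Return (line_number, line) pairs longer than MAX_LEN characters."""
    bad = []
    for num, line in enumerate(message.splitlines(), start=1):
        if line.startswith(SKIP_PREFIXES):
            continue
        if len(line) > MAX_LEN:
            bad.append((num, line))
    return bad


if __name__ == '__main__' and len(sys.argv) > 1:
    # Git passes the path of the commit message file as the first argument.
    with open(sys.argv[1]) as f:
        for num, line in overlong_lines(f.read()):
            sys.stderr.write('warning: line %d is %d chars (max %d)\n'
                             % (num, len(line), MAX_LEN))
    # Exit 0 either way: warn only, since the guideline is a "should",
    # not a "must".
    sys.exit(0)
```

Saved as `.git/hooks/commit-msg` and made executable, it prints warnings but
never rejects the commit, matching the "should, not must" reading above.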

> 
> > On Mon, Sep 28, 2015 at 11:00 PM, Assaf Muller <amuller at redhat.com> wrote:
> > 
> > >
> > >
> > > On Mon, Sep 28, 2015 at 12:40 PM, Zane Bitter <zbitter at redhat.com> wrote:
> > >
> > >> On 28/09/15 05:47, Gorka Eguileor wrote:
> > >>
> > >>> On 26/09, Morgan Fainberg wrote:
> > >>>
> > >>>> As a core (and former PTL) I just ignored commit message -1s unless
> > >>>> there is something majorly wrong (no bug id where one is needed, etc).
> > >>>>
> > >>>> I appreciate well formatted commits, but can we let this one go? This
> > >>>> discussion is so far into the meta-bike-shedding (bike shedding about bike
> > >>>> shedding commit messages) ... If a commit message is *that* bad a -1 (or
> > >>>> just fixing it?) Might be worth it. However, if a commit isn't missing key
> > >>>> info (bug id? Bp? Etc) and isn't one long incredibly unbroken sentence
> > >>>> moving from topic to topic, there isn't a good reason to block the review.
> > >>>>
> > >>>
> > >> +1
> > >>
> > >> It is not worth having a bot -1 bad commits or even having gerrit muck
> > >>>> with them. Let's do the job of the reviewer and actually review code
> > >>>> instead of going crazy with commit messages.
> > >>>>
> > >>>
> > >> +1
> > >>
> > >> Sent via mobile
> > >>>>
> > >>>>
> > >>> I have to disagree, as reviewers we have to make sure that guidelines
> > >>> are followed, if we have an explicit guideline that states that
> > >>> the limit length is 72 chars, I will -1 any patch that doesn't follow
> > >>> the guideline, just as I would do with i18n guideline violations.
> > >>>
> > >>
> > >> Apparently you're unaware of the definition of the word 'guideline'. It's
> > >> a guide. If it were a hard-and-fast rule then we would have a bot enforcing
> > >> it already.
> > >>
> > >> Is there anything quite so frightening as a large group of people blindly
> > >> enforcing rules with total indifference to any sense of overarching purpose?
> > >>
> > >> A reminder that the reason for this guideline is to ensure that none of
> > >> the broad variety of tools that are available in the Git ecosystem
> > >> effectively become unusable with the OpenStack repos due to wildly
> > >> inconsistent formatting. And of course, even that goal has to be balanced
> > >> against our other goals, such as building a healthy community and
> > >> occasionally shipping some software.
> > >>
> > >> There are plenty of ways to achieve that goal other than blanket drive-by
> > >> -1's for trivial inconsistencies in the formatting of individual commit
> > >> messages.
> > >
> > >
> > > The actual issue is that we as a community (Speaking of the Neutron
> > > community at least) are stat-crazed. We have a fair number of contributors
> > > that -1 for trivial issues to retain their precious stats with alarming
> > > zeal. That is the real issue. All of these commit message issues,
> > > translation mishaps,
> > > comment typos, etc. are excuses for people to boost their stats without
> > > contributing their time or energy into the project. I am beyond bitter
> > > about this
> > > issue at this point.
> > >
> > > I'll say what I've always said about this issue: The review process is
> > > about collaboration. I imagine that the author is sitting next to me, and
> > > we're going
> > > through the patch together for the purpose of improving it. Review
> > > comments should be motivated by a thirst to improve the proposed code in a
> > > real way,
> > > not by your want or need to improve your stats on stackalytics. The latter
> > > is an enormous waste of your time.
> > >
> > >
> > >> A polite comment and a link to the guidelines is a great way to educate
> > >> new contributors. For core reviewers especially, a comment like that and a
> > >> +1 review will *almost always* get you the change you want in double-quick
> > >> time. (Any contributor who knows they are 30s work away from a +2 is going
> > >> to be highly motivated.)
> > >>
> > >> Typos are a completely different matter and they should not be grouped
> > >>> together with guideline infringements.
> > >>>
> > >>
> > >> "Violations"? "Infringements"? It's line wrapping, not a felony case.
> > >>
> > >> I agree that it is a waste of time and resources when you have to -1 a
> > >>> patch for this, but there are multiple solutions: you can make sure your
> > >>> editor does auto wrapping at the right length (I have mine configured
> > >>> this way), or create a git-enforce policy with a client-side hook, or do
> > >>> like Ihar is trying to do and push for a guideline change.
> > >>>
> > >>> I don't mind changing the guideline to any other length, but as long as
> > >>> it is 72 chars I will keep enforcing it, as it is not the place of
> > >>> reviewers to decide which guidelines are worthy of being enforced and
> > >>> which ones are not.
> > >>>
> > >>
> > >> Of course it is.
> > >>
> > >> If we're not here to use our brains, why are we here? Serious question.
> > >> Feel free to use any definition of 'here'.
> > >>
> > >> Cheers,
> > >>> Gorka.
> > >>>
> > >>>
> > >>>
> > >>> On Sep 26, 2015, at 21:19, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> > >>>>>
> > >>>>> Can I ask a different question - could we reject a few simple-to-check
> > >>>>> things on the push, like bad commit messages?  For things that take 2
> > >>>>> seconds to fix and do make people's lives better, it's not that they're
> > >>>>> rejected, it's that the whole rejection cycle via gerrit review (push/wait
> > >>>>> for tests to run/check website/swear/find change/fix/push again) is out of
> > >>>>> proportion to the effort taken to fix it.
> > >>>>>
> > >>>>
> > >> I would welcome a confirmation step - but *not* an outright rejection -
> > >> that runs *locally* in git-review before the change is pushed. Right now,
> > >> gerrit gives you a warning after the review is pushed, at which point it is
> > >> too late.
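[Editor's note: a local confirmation step like the one described above could be as small as an advisory commit-msg hook. A minimal Python sketch follows — warn-only, never rejecting; the 50-char subject limit is a common Git convention, not something this thread mandates, and the hook wiring shown in the comment is the standard `.git/hooks/commit-msg` mechanism.]

```python
import sys

SUBJECT_LIMIT = 50  # common convention; the thread only debates the body limit
BODY_LIMIT = 72     # the guideline under discussion

def check_commit_message(text):
    """Return (lineno, warning) pairs for over-long commit message lines."""
    warnings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.startswith("#"):  # git strips comment lines before committing
            continue
        limit = SUBJECT_LIMIT if lineno == 1 else BODY_LIMIT
        if len(line) > limit:
            warnings.append(
                (lineno, "line is %d chars (limit %d)" % (len(line), limit)))
    return warnings

# Saved as .git/hooks/commit-msg this warns on stderr but always exits 0,
# i.e. a confirmation rather than an outright rejection.
if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        for lineno, msg in check_commit_message(f.read()):
            sys.stderr.write("commit-msg:%d: %s\n" % (lineno, msg))
    sys.exit(0)
```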
> > >>
> > >> It seems here that there's benefit to 72 line messages - not that
> > >>>>> everyone sees that benefit, but it is present - but it doesn't outweigh the
> > >>>>> current cost.
> > >>>>>
> > >>>>
> > >> Yes, 72 columns is the correct guideline IMHO. It's used virtually
> > >> throughout the Git ecosystem now. Back in the early days of Git it wasn't
> > >> at all clear - should you have no line breaks at all and let each tool do
> > >> its own soft line wrapping? If not, where should you wrap? Now there's a
> > >> clear consensus that you hard wrap at 72. Vi wraps git commit messages at
> > >> 72 by default.
> > >>
> > >> The output of "git log" indents commit messages by four spaces, so
> > >> anything longer than 76 gets ugly, hard-to-read line-wrapping. I've also
> > >> noticed that Launchpad (or at least the bot that posts commit messages to
> > >> Launchpad when patches merge) does a hard wrap at 72 characters.
> > >>
> > >> A much better idea than modifying the guideline would be to put
> > >> documentation on the wiki about how to set up your editor so that this is
> > >> never an issue. You shouldn't even have to even think about the line length
> > >> for at least 99% of commits.
> > >>
> > >> cheers,
> > >> Zane.
> > >>
> > >>
> > >> __________________________________________________________________________
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >>
> > >
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From geguileo at redhat.com  Tue Sep 29 11:37:52 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 29 Sep 2015 13:37:52 +0200
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
Message-ID: <20150929113752.GY3713@localhost>

On 28/09, John Griffith wrote:
> On Mon, Sep 28, 2015 at 6:19 PM, Mark Voelker <mvoelker at vmware.com> wrote:
> 
> > FWIW, the most popular client libraries in the last user survey[1] other
> > than OpenStack's own clients were: libcloud (48 respondents), jClouds (36
> > respondents), Fog (34 respondents), php-opencloud (21 respondents),
> > DeltaCloud (which has been retired by Apache and hasn't seen a commit in
> > two years, but 17 respondents are still using it), pkgcloud (15
> > respondents), and OpenStack.NET (14 respondents).  Of those:
> >
> > * libcloud appears to support the nova-volume API but not the cinder API:
> > https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251
> >
> > * jClouds appears to support only the v1 API:
> > https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds
> >
> > * Fog also appears to only support the v1 API:
> > https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99
> >
> > * php-opencloud appears to only support the v1 API:
> > https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html
> >
> > * DeltaCloud I honestly haven't looked at since it's thoroughly dead, but
> > I can't imagine it supports v2.
> >
> > * pkgcloud has beta-level support for Cinder but I think it's v1 (may be
> > mistaken): https://github.com/pkgcloud/pkgcloud/#block-storage----beta
> > and
> > https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage
> >
> > * OpenStack.NET does appear to support v2:
> > http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm
> >
> > Now, it's anyone's guess as to whether or not users of those client
> > libraries actually try to use them for volume operations or not
> > (anecdotally I know a few clouds I help support are using client libraries
> > that only support v1), and some users might well be using more than one
> > library or mixing in code they wrote themselves.  But most of the above
> > that support cinder do seem to rely on v1.  Some management tools also
> > appear to still rely on the v1 API (such as RightScale:
> > http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html
> > ).  From that perspective it might be useful to keep it around a while
> > longer and disable it by default.  Personally I'd probably lean that way,
> > especially given that folks here on the ops list are still reporting
> > problems too.
> >
> > That said, v1 has been deprecated since Juno, and the Juno release notes
> > said it was going to be removed [2], so there's a case to be made that
> > there's been plenty of fair warning too I suppose.
> >
> > [1]
> > http://superuser.openstack.org/articles/openstack-application-developers-share-insights
> > [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
> >
> > At Your Service,
> >
> > Mark T. Voelker
> >
> >
> >
> > > On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com> wrote:
> > >
> > > Yeah we're still using v1 as the clients that are packaged with most
> > distros don't support v2 easily.
> > >
> > > Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our
> > "volume" endpoint to point to v2 (we have a volumev2 endpoint too) and the
> > client breaks.
> > >
> > > $ cinder list
> > > ERROR: OpenStack Block Storage API version is set to 1 but you are
> > accessing a 2 endpoint. Change its value through --os-volume-api-version or
> > env[OS_VOLUME_API_VERSION].
> > >
> > > Sam
> > >
> > >
> > >> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com> wrote:
> > >>
> > >> Yes, people are probably still using it. Last time I tried to use V2 it
> > didn't work because the clients were broken, and then it went back on the
> > bottom of my to do list. Is this mess fixed?
> > >>
> > >>
> > http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
> > >>
> > >> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info>
> > wrote:
> > >> Hi all,
> > >>
> > >> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API
> > was introduced in Grizzly and the v1 API has been deprecated since Juno.
> > >>
> > >> After [1] is merged, Cinder API v1 is disabled in gates by default.
> > We've got a filed bug [2] to remove Cinder v1 API at all.
> > >>
> > >>
> > >> According to the Deprecation Policy [3] it looks like we are OK to remove it.
> > >> But I would like to ask Cinder API users if any still use API v1.
> > >> Should we remove it entirely in the Mitaka release or just disable it by
> > default in the cinder.conf?
> > >>
> > >> AFAIR, only Rally doesn't support API v2 now and I'm going to implement
> > it asap.
> > >>
> > >> [1] https://review.openstack.org/194726
> > >> [2] https://bugs.launchpad.net/cinder/+bug/1467589
> > >> [3]
> > http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
> > >>
> > >> Regards,
> > >> Ivan Kolodyazhny
> > >>
> > >> _______________________________________________
> > >> OpenStack-operators mailing list
> > >> OpenStack-operators at lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >>
> > >>
> > >> _______________________________________________
> > >> OpenStack-operators mailing list
> > >> OpenStack-operators at lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >
> > >
> > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> My opinion is that even though V1 has technically been deprecated for
> multiple cycles, V2 was never really viable until the Liberty release.
> Between issues with V2 and other components, and then the version discovery
> issues that broke some things; I think we should reset the deprecation
> clock so to speak.
> 
> It was only in the last milestone of Liberty that folks finally got
> everything updated and talking V2.  Not to mention the patch to switch the
> default in devstack just landed (where everything uses it including Nova).
> 
> To summarize, absolutely NO to removing V1 in Mitaka, and I think resetting
> the deprecation clock is the most reasonable course of action here.
> 
> Thanks,
> John

I agree with John, regardless of the fact that the deprecation period
has expired I think it would be safer to keep it a little longer.

One possibility is to leave it as it is for Mitaka and for N disable it
by default in cinder.conf like Ivan suggests.
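[Editor's note: Sam's error earlier in the thread shows clients keying off OS_VOLUME_API_VERSION. The version-selection logic a client ends up needing looks roughly like this sketch — the function name is illustrative, not actual cinderclient code; `endpoint_types` stands in for the service-catalog entry types mentioned in the thread ("volume", "volumev2").]

```python
import os

def pick_volume_api_version(endpoint_types, default="2"):
    """Pick a Cinder API version the way the thread describes clients doing:
    honor OS_VOLUME_API_VERSION if set, otherwise prefer a 'volumev2'
    service catalog entry and fall back to the legacy 'volume' (v1) one.
    """
    env = os.environ.get("OS_VOLUME_API_VERSION")
    if env:
        return env
    if "volumev2" in endpoint_types:
        return "2"
    if "volume" in endpoint_types:
        return "1"
    return default
```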

Cheers,
Gorka.


From kuvaja at hpe.com  Tue Sep 29 12:01:11 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Tue, 29 Sep 2015 12:01:11 +0000
Subject: [openstack-dev] Cross-Project Meeting Tue 29th of Sep, 21:00 UTC
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4DDC7A@G9W0745.americas.hpqcorp.net>

Dear PTLs, cross-project liaisons and anyone else interested,



We'll have a cross-project meeting today at 21:00 UTC, with the

following agenda:



* Review of past action items

* Team announcements (horizontal, vertical, diagonal)

* Cross-Project Specs to discuss:

** Service Catalog Standardization [0]

** Backwards compatibility for clients and libraries [1]

* Open discussion



[0] https://review.openstack.org/181393

[1] https://review.openstack.org/226157



If you're from a horizontal team (Release management, QA, Infra, Docs,

Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and

have something to communicate to the other teams, feel free to abuse the

relevant sections of that meeting and make sure it gets #info-ed by the

meetbot in the meeting summary.



See you there!



For more details on this meeting, please see:

https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting



--

Erno (jokke_) Kuvaja

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/f980c773/attachment.html>

From skraynev at mirantis.com  Tue Sep 29 12:06:07 2015
From: skraynev at mirantis.com (Sergey Kraynev)
Date: Tue, 29 Sep 2015 15:06:07 +0300
Subject: [openstack-dev] [Heat] Assumptions regarding extensions to
 OpenStack api's
In-Reply-To: <5605A9B3.8060808@inaugust.com>
References: <BF51D25F-2D7E-4C5D-B0BF-B20707F0FAE4@rackspace.com>
 <5605A9B3.8060808@inaugust.com>
Message-ID: <CAAbQNRkLEH5GcM-LrUa8+xGXXHi94pmZWrKdx=AeVy=GW6QXpQ@mail.gmail.com>

Guys, thank you for pointing out this gap.

It was a wrong assumption about the nova extension that we missed before.
I was personally sure that this extension is always installed.
Currently we have a couple of resources, OS::Nova::Server and
AWS::EC2::Instance, which use this extension under the wrong assumption
that the extension is present in nova.

Pratik, I have seen the bug you created and agree that it's a bug.
We will fix this during Mitaka, and I suppose it should also be
backported to previous releases (it may need additional work, but it
will be done, I suppose).



Regards,
Sergey.

On 25 September 2015 at 23:08, Monty Taylor <mordred at inaugust.com> wrote:

> On 09/25/2015 02:32 PM, Pratik Mallya wrote:
>
>> Hello Heat Team,
>>
>> I was wondering if OpenStack Heat assumes that the Nova extensions api
>> would always exist in a cloud? My impression was that since these
>> features are extensions, they may or may not be implemented by the cloud
>> provider and hence Heat must not rely on it being present.
>>
>> My question is prompted by this code change: [0] where it is assumed
>> that the os-interfaces extension [1] is implemented.
>>
>> If we cannot rely on that assumption, then that code would need to be
>> changed with a 404 guard since that endpoint may not exist and the nova
>> client may thus raise a 404.
>>
>
> Correct. Extensions are not everywhere and so you must either query the
> extensions API to find out what extensions the cloud has, or you must 404
> guard.
>
> Of course, you can't ONLY 404 guard, because the cloud may also throw
> unauthorized - so querying the nova extension API is the more correct way
> to deal with it.
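[Editor's note: a sketch of the approach Monty describes — query the extensions API first and treat 404/unauthorized as absence. The `/extensions` path and response shape follow the Nova v2 API; `http_get` stands in for an authenticated session callable, and the alias used in the usage example below is illustrative.]

```python
def extension_aliases(extensions_doc):
    """Collect the extension aliases from a GET /extensions response body."""
    return {ext["alias"] for ext in extensions_doc.get("extensions", [])}

def has_extension(http_get, compute_url, alias):
    """Ask Nova whether an API extension is available, treating 404 (and
    403, since some clouds return unauthorized instead) as 'not available'
    rather than letting the error propagate.
    """
    resp = http_get(compute_url + "/extensions")
    if resp.status_code in (404, 403):
        return False
    return alias in extension_aliases(resp.json())
```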
>
> Thanks,
>> Pratik Mallya
>> Software Developer
>> Rackspace, Inc.
>>
>> [0]:
>>
>> https://github.com/openstack/heat/commit/54c26453a0a8e8cb574858c7e1d362d0abea3822#diff-b3857cb91556a2a83f40842658589e4fR163
>> [1]:
>> http://developer.openstack.org/api-ref-compute-v2-ext.html#os-interface
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/7e6742f1/attachment.html>

From sean.mcginnis at gmx.com  Tue Sep 29 12:07:36 2015
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Tue, 29 Sep 2015 07:07:36 -0500
Subject: [openstack-dev] [cinder] snapshot and cloning for NFS backend
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A40E@MAIL703.KDS.KEANE.COM>
References: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A40E@MAIL703.KDS.KEANE.COM>
Message-ID: <20150929120736.GA24920@gmx.com>

On Tue, Sep 29, 2015 at 06:26:06AM +0000, Kekane, Abhishek wrote:
> Hi Devs,
> 
> The cinder-specs [1] for snapshot and cloning for the NFS backend submitted by Eric were approved in Kilo, but due to a nova issue [2] they were not implemented in Kilo and Liberty.
> I am discussing this nova bug with the nova team to find possible solutions, and Nikola has given some pointers about fixing it in the launchpad bug.
> 
> This feature is very useful for the NFS backend, and if the work should be continued, is there a need to resubmit this spec for approval in Mitaka?

Thanks for looking at this Abhishek. I would like to see this work
continued and completed in Mitaka if at all possible.

Would you mind submitting a patch to add the spec to Mitaka? I will make
sure we get that through and targeted for this release.

Thanks!

Sean

> 
> Please let me know your opinion on the same.
> 
> [1] https://review.openstack.org/#/c/133074/
> [2] https://bugs.launchpad.net/nova/+bug/1416132
> 
> 
> Thanks & Regards,
> 
> Abhishek Kekane


From Matjaz.Pancur at fri.uni-lj.si  Tue Sep 29 12:08:38 2015
From: Matjaz.Pancur at fri.uni-lj.si (=?utf-8?B?UGFuxI11ciwgTWF0amHFvg==?=)
Date: Tue, 29 Sep 2015 12:08:38 +0000
Subject: [openstack-dev] Screenshots in reviews on Gerrit
References: <8A4C82EE-1EFC-4CED-8F47-48AE9F2478C5@fri.uni-lj.si>
Message-ID: <6300B064-A3C3-44D8-8254-E71B851FA86D@fri.uni-lj.si>

Repost from openstack-docs ML.
-Matjaz


Hi all,

I've been wondering if there is an already established or agreed-on best practice for including screenshots in the review comment thread (like we have paste.openstack.org for text snippets and logs)? Sometimes I would like to include an annotated screenshot of the issue (e.g. a slide that renders wrong only on a certain OS/browser combination).

People are usually pragmatic and just use their existing Dropbox/iCloud/GDrive/etc. cloud service and publish a link to it in the review, but this is not a permanent solution, as such links are usually ephemeral/short-lived. This causes some transparency/archival issues, so I would like a more permanent solution (like we have for logs/snippets on paste.openstack.org).

Any ideas on how should we deal with this?

Matjaz

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/bd58b157/attachment.html>

From afedorova at mirantis.com  Tue Sep 29 12:09:46 2015
From: afedorova at mirantis.com (Aleksandra Fedorova)
Date: Tue, 29 Sep 2015 15:09:46 +0300
Subject: [openstack-dev] [Fuel] Branch stable/7.0 created in fuel-docs
	repository
Message-ID: <CAMG8Tv-goYzsTipaifQNjkS9tKADRoxpoRkW-65kqeZ8s8vaGA@mail.gmail.com>

Hi, all,

please be informed, we've created stable/7.0 branch in fuel-docs repository.

From now on, to update documentation for the 7.0 release you need to merge a
patch to the master branch first and then cherry-pick it to the stable branch.

-- 
Aleksandra Fedorova
Fuel CI Engineer
bookwar


From flavio at redhat.com  Tue Sep 29 12:08:00 2015
From: flavio at redhat.com (Flavio Percoco)
Date: Tue, 29 Sep 2015 14:08:00 +0200
Subject: [openstack-dev] [all] Proposed Mitaka release schedule
In-Reply-To: <560949DD.4060503@openstack.org>
References: <560949DD.4060503@openstack.org>
Message-ID: <20150929120800.GD7310@redhat.com>

On 28/09/15 16:08 +0200, Thierry Carrez wrote:
>Hi everyone,
>
>You can find the proposed release schedule for Mitaka here:
>
>https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
>
>That places the end release on April 7, 2016. It's also worth noting
>that in an effort to maximize development time, this schedule reduces
>the time between Feature Freeze and final release by one week (5 weeks
>instead of 6 weeks). That means we'll collectively have to be a lot
>stricter on Feature freeze exceptions this time around. Be prepared for
>that.
>
>Feel free to ping the Release management team members on
>#openstack-relmgr-office if you have any question.

Sounds reasonable to me!

Thanks for sending this out,
Flavio

-- 
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/e0df5cb7/attachment.pgp>

From geguileo at redhat.com  Tue Sep 29 12:10:15 2015
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 29 Sep 2015 14:10:15 +0200
Subject: [openstack-dev] [Cinder] [Manila] Will NFS stay with Cinder as
 a reference implementation?
In-Reply-To: <CAOyZ2aFKZaq6zVGc_VZMub4VJmriowvZUroBXwOVQGca0sEPKg@mail.gmail.com>
References: <OF527463AA.29EEC511-ON48257ECF.0014DC53-48257ECF.00153220@cn.ibm.com>
 <CAOyZ2aFKZaq6zVGc_VZMub4VJmriowvZUroBXwOVQGca0sEPKg@mail.gmail.com>
Message-ID: <20150929121015.GB3713@localhost>

On 29/09, Duncan Thomas wrote:
> Cinder provides a block storage abstraction to a vm. Manila provides a
> filesystem abstraction. The two are very different, and complementary. I
> see no reason why the nfs related cinder drivers should be removed based on
> the existence or maturity of manila - manila is not going to suddenly start
> providing block storage to a vm.
> On 29 Sep 2015 06:56, "Sheng Bo Hou" <sbhou at cn.ibm.com> wrote:
> 
> > Hi folks,
> >
> > I have a question about the file services in OpenStack.
> >
> > As you know there is a generic NFS driver in Cinder and other file system
> > drivers inherit it, while the project Manila is determined to provide the
> > file system service.
> >
> > Will NFS stay with Cinder as the reference implementation for the coming
> > release or releases? Are all the file system drivers going to move to
> > Manila?
> > What is relation between Manila as FSaaS and NFS in Cinder?
> > Any ideas?
> >
> > Thank you.
> >
> > Best wishes,
> > Vincent Hou (???)
> >
> > Staff Software Engineer, Open Standards and Open Source Team, Emerging
> > Technology Institute, IBM China Software Development Lab
> >
> > Tel: 86-10-82450778 Fax: 86-10-82453660
> > Notes ID: Sheng Bo Hou/China/IBM at IBMCN    E-mail: sbhou at cn.ibm.com
> > Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
> > West Road, Haidian District, Beijing, P.R.C.100193
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

I agree with Duncan,

There is a clear distinction between the projects' objectives:
- Cinder provides the canonical storage provisioning control plane in
  OpenStack for block storage as well as delivering a persistence model
  for instance storage.
- Manila is a File Share Service that, in a similar manner, provides
  coordinated access to shared or distributed file systems.

So I wouldn't move our NFS drivers out.

As for the relation between the two: while they may both use the same
storage type, they expose it in completely different ways.

Cheers,
Gorka.


From Abhishek.Kekane at nttdata.com  Tue Sep 29 12:17:22 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Tue, 29 Sep 2015 12:17:22 +0000
Subject: [openstack-dev] [cinder] snapshot and cloning for NFS backend
In-Reply-To: <20150929120736.GA24920@gmx.com>
References: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A40E@MAIL703.KDS.KEANE.COM>
 <20150929120736.GA24920@gmx.com>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A656@MAIL703.KDS.KEANE.COM>

Hi Sean,

Sure I will submit a patch to add this spec in Mitaka.

Thank you,

Abhishek Kekane

-----Original Message-----
From: Sean McGinnis [mailto:sean.mcginnis at gmx.com] 
Sent: 29 September 2015 17:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend

On Tue, Sep 29, 2015 at 06:26:06AM +0000, Kekane, Abhishek wrote:
> Hi Devs,
> 
> The cinder-specs [1] for snapshot and cloning for the NFS backend submitted by Eric were approved in Kilo, but due to a nova issue [2] they were not implemented in Kilo and Liberty.
> I am discussing this nova bug with the nova team to find possible solutions, and Nikola has given some pointers about fixing it in the launchpad bug.
> 
> This feature is very useful for the NFS backend, and if the work should be continued, is there a need to resubmit this spec for approval in Mitaka?

Thanks for looking at this Abhishek. I would like to see this work continued and completed in Mitaka if at all possible.

Would you mind submitting a patch to add the spec to Mitaka? I will make sure we get that through and targeted for this release.

Thanks!

Sean

> 
> Please let me know your opinion on the same.
> 
> [1] https://review.openstack.org/#/c/133074/
> [2] https://bugs.launchpad.net/nova/+bug/1416132
> 
> 
> Thanks & Regards,
> 
> Abhishek Kekane

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


From sgordon at redhat.com  Tue Sep 29 12:47:19 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Tue, 29 Sep 2015 08:47:19 -0400 (EDT)
Subject: [openstack-dev] [nfv][telcowg] Telco Working Group meeting schedule
In-Reply-To: <816915853.59493929.1443530498525.JavaMail.zimbra@redhat.com>
Message-ID: <142156967.59496233.1443530839928.JavaMail.zimbra@redhat.com>

Hi all,

As discussed in last week's meeting [1] we have been seeing increasingly limited engagement in the 1900 UTC meeting slot. For this reason starting from next week's meeting (October 6th) it is proposed that we consolidate on the 1400 UTC slot which is generally better attended and stop alternating the time each week.

Unrelated to the above, I am traveling this Wednesday and will not be able to facilitate the meeting on September 30th @ 1900 UTC. Is anyone else able to help out by facilitating the meeting at this time? I can help out with agenda etc.

Thanks in advance,

Steve


[1] http://eavesdrop.openstack.org/meetings/telcowg/2015/telcowg.2015-09-23-14.00.html


From e0ne at e0ne.info  Tue Sep 29 12:48:37 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Tue, 29 Sep 2015 15:48:37 +0300
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <20150929113752.GY3713@localhost>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <20150929113752.GY3713@localhost>
Message-ID: <CAGocpaGowju87LysZY44rFTtm3j0PLGhPM1xQYrHueffeVowVQ@mail.gmail.com>

First of all, I would like to say thank you for the feedback!

TBH, I didn't propose to remove the v1 API at all in Mitaka. I was against
removing the v1 API and suggested disabling it instead.

IMO, if we decide to leave it as is in Mitaka and disable it in the N
release, nothing will change: everybody will keep using the v1 API until N.
Disabling v1 early in Mitaka will give everybody more time to fix their
clients. Anyway, I left a very easy way to re-enable v1.
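[Editor's note: for reference, the toggle Ivan mentions is a cinder.conf flag along these lines — option names as in Liberty-era cinder; verify against your release's sample config.]

```ini
[DEFAULT]
# Keep v1 code deployed but disabled by default; operators who still
# need it can flip this back on without a code change.
enable_v1_api = false
enable_v2_api = true
```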

Regards,
Ivan Kolodyazhny

On Tue, Sep 29, 2015 at 2:37 PM, Gorka Eguileor <geguileo at redhat.com> wrote:

> On 28/09, John Griffith wrote:
> > On Mon, Sep 28, 2015 at 6:19 PM, Mark Voelker <mvoelker at vmware.com>
> wrote:
> >
> > > FWIW, the most popular client libraries in the last user survey[1]
> other
> > > than OpenStack's own clients were: libcloud (48 respondents), jClouds
> (36
> > > respondents), Fog (34 respondents), php-opencloud (21 respondents),
> > > DeltaCloud (which has been retired by Apache and hasn't seen a commit
> in
> > > two years, but 17 respondents are still using it), pkgcloud (15
> > > respondents), and OpenStack.NET (14 respondents).  Of those:
> > >
> > > * libcloud appears to support the nova-volume API but not the cinder
> API:
> > >
> https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251
> > >
> > > * jClouds appears to support only the v1 API:
> > >
> https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds
> > >
> > > * Fog also appears to only support the v1 API:
> > > https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99
> > >
> > > * php-opencloud appears to only support the v1 API:
> > >
> https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html
> > >
> > > * DeltaCloud I honestly haven't looked at since it's thoroughly dead,
> but
> > > I can't imagine it supports v2.
> > >
> > > * pkgcloud has beta-level support for Cinder but I think it's v1 (may
> be
> > > mistaken): https://github.com/pkgcloud/pkgcloud/#block-storage----beta
> > > and
> > >
> https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage
> > >
> > > * OpenStack.NET does appear to support v2:
> > >
> http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm
> > >
> > > Now, it's anyone's guess as to whether or not users of those client
> > > libraries actually try to use them for volume operations or not
> > > (anecdotally I know a few clouds I help support are using client
> libraries
> > > that only support v1), and some users might well be using more than one
> > > library or mixing in code they wrote themselves.  But most of the above
> > > that support cinder do seem to rely on v1.  Some management tools also
> > > appear to still rely on the v1 API (such as RightScale:
> > >
> http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html
> > > ).  From that perspective it might be useful to keep it around a while
> > > longer and disable it by default.  Personally I'd probably lean that
> way,
> > > especially given that folks here on the ops list are still reporting
> > > problems too.
> > >
> > > That said, v1 has been deprecated since Juno, and the Juno release
> notes
> > > said it was going to be removed [2], so there's a case to be made that
> > > there's been plenty of fair warning too I suppose.
> > >
> > > [1]
> > >
> http://superuser.openstack.org/articles/openstack-application-developers-share-insights
> > > [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
> > >
> > > At Your Service,
> > >
> > > Mark T. Voelker
> > >
> > >
> > >
> > > > On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com>
> wrote:
> > > >
> > > > Yeah we're still using v1 as the clients that are packaged with most
> > > distros don't support v2 easily.
> > > >
> > > > Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our
> > > 'volume' endpoint to point to v2 (we have a volumev2 endpoint too) and
> the
> > > client breaks.
> > > >
> > > > $ cinder list
> > > > ERROR: OpenStack Block Storage API version is set to 1 but you are
> > > accessing a 2 endpoint. Change its value through
> --os-volume-api-version or
> > > env[OS_VOLUME_API_VERSION].
> > > >
> > > > Sam
> > > >
> > > >
> > > >> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com>
> wrote:
> > > >>
> > > >> Yes, people are probably still using it. Last time I tried to use
> V2 it
> > > didn't work because the clients were broken, and then it went back on
> the
> > > bottom of my to do list. Is this mess fixed?
> > > >>
> > > >>
> > >
> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
> > > >>
> > > >> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info>
> > > wrote:
> > > >> Hi all,
> > > >>
> > > >> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2
> API
> > > was introduced in Grizzly and v1 API is deprecated since Juno.
> > > >>
> > > >> After [1] is merged, Cinder API v1 is disabled in gates by default.
> > > We've got a filed bug [2] to remove Cinder v1 API at all.
> > > >>
> > > >>
> > > >> According to Deprecation Policy [3] looks like we are OK to remove
> it.
> > > But I would like to ask Cinder API users if any still use API v1.
> > > >> Should we remove it at all Mitaka release or just disable by
> default in
> > > the cinder.conf?
> > > >>
> > > >> AFAIR, only Rally doesn't support API v2 now and I'm going to
> implement
> > > it asap.
> > > >>
> > > >> [1] https://review.openstack.org/194726
> > > >> [2] https://bugs.launchpad.net/cinder/+bug/1467589
> > > >> [3]
> > >
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
> > > >>
> > > >> Regards,
> > > >> Ivan Kolodyazhny
> > > >>
> > > >> _______________________________________________
> > > >> OpenStack-operators mailing list
> > > >> OpenStack-operators at lists.openstack.org
> > > >>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > > >>
> > > >>
> > > >> _______________________________________________
> > > >> OpenStack-operators mailing list
> > > >> OpenStack-operators at lists.openstack.org
> > > >>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > > >
> > > >
> > >
> __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > My opinion is that even though V1 has technically been deprecated for
> > multiple cycles, V2 was never really viable until the Liberty release.
> > Between issues with V2 and other components, and then the version
> discovery
> > issues that broke some things; I think we should reset the deprecation
> > clock so to speak.
> >
> > It was only in the last milestone of Liberty that folks finally got
> > everything updated and talking V2.  Not to mention the patch to switch
> the
> > default in devstack just landed (where everything uses it including
> Nova).
> >
> > To summarize, absolutely NO to removing V1 in Mitaka, and I think
> resetting
> > the deprecation clock is the most reasonable course of action here.
> >
> > Thanks,
> > John
>
> I agree with John, regardless of the fact that the deprecation period
> has expired I think it would be safer to keep it a little longer.
>
> One possibility is to leave it as it is for Mitaka and for N disable it
> by default in cinder.conf like Ivan suggests.
>
> Cheers,
> Gorka.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/ba835461/attachment.html>

From Abhishek.Kekane at nttdata.com  Tue Sep 29 13:05:21 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Tue, 29 Sep 2015 13:05:21 +0000
Subject: [openstack-dev] [cinder] snapshot and cloning for NFS backend
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A656@MAIL703.KDS.KEANE.COM>
References: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A40E@MAIL703.KDS.KEANE.COM>
 <20150929120736.GA24920@gmx.com>
 <E1FB4937BE24734DAD0D1D4E4E506D7890D1A656@MAIL703.KDS.KEANE.COM>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A6D9@MAIL703.KDS.KEANE.COM>

Hi Sean,

The author of the spec is Eric Harney. If he is OK with it, I will submit the patch to move the spec to Mitaka.

Thank you,

Abhishek Kekane

-----Original Message-----
From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com] 
Sent: 29 September 2015 17:47
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend

Hi Sean,

Sure I will submit a patch to add this spec in Mitaka.

Thank you,

Abhishek Kekane

-----Original Message-----
From: Sean McGinnis [mailto:sean.mcginnis at gmx.com]
Sent: 29 September 2015 17:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend

On Tue, Sep 29, 2015 at 06:26:06AM +0000, Kekane, Abhishek wrote:
> Hi Devs,
> 
> The cinder-specs [1] for snapshot and cloning NFS backend submitted by Eric was approved in Kilo but due to nova issue [2] it is not implemented in Kilo and Liberty.
> I am discussing about this nova bug with nova team for finding possible solutions and Nikola has given some pointers about fixing the same in launchpad bug.
> 
> This feature is very useful for NFS backend and if the work should be continued then is there a need to resubmit this specs for approval in Mitaka?

Thanks for looking at this Abhishek. I would like to see this work continued and completed in Mitaka if at all possible.

Would you mind submitting a patch to add the spec to Mitaka? I will make sure we get that through and targeted for this release.

Thanks!

Sean

> 
> Please let me know your opinion on the same.
> 
> [1] https://review.openstack.org/#/c/133074/
> [2] https://bugs.launchpad.net/nova/+bug/1416132
> 
> 
> Thanks & Regards,
> 
> Abhishek Kekane

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


From mestery at mestery.com  Tue Sep 29 13:21:23 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Tue, 29 Sep 2015 08:21:23 -0500
Subject: [openstack-dev] [election] [TC] TC Candidacy
Message-ID: <CAL3VkVzejpx4r=Qd6MX=Oc4duVQ1oyutUQ6jt0Qo6g+M6R63wg@mail.gmail.com>

I'd like to announce my candidacy for the TC election.

A bit about me and my involvement with OpenStack: I've been
involved with OpenStack for a long time now in various capacities.
Most recently, I was the PTL for Neutron for the Juno, Kilo and
Liberty cycles. I've been a core reviewer on Neutron since July of
2013. I helped to write the Project Team Guide [1] by participating
in the sprint during its creation. I founded and continue to lead
the Minnesota OpenStack Meetup [2]. I've spoken at many OpenStack
conferences and other Open Source conferences.

Given the "Big Tent" governance model we now find ourselves under,
my experience in leading Neutron as it adopted the "Neutron Stadium"
is very relevant to the current state of OpenStack. While leading
Neutron, the "Neutron Stadium" allowed many new project teams to
develop and grow. I hope to be able to work with new OpenStack
projects and guide them as they grow into OpenStack projects. I'd
also like to ensure the uniqueness each project brings to the
table isn't lost when it becomes an OpenStack project. While I
firmly believe each project must adopt the Four Opens [3] in order
to become an OpenStack project, it's important to not lose the
original spark which was lit to start these projects as they move
into the Big Tent.

If I'm elected, I will continue to dedicate time to work upstream
and help new and old projects. I'll work hard to assist anyone who
requests my help. And I'll continue to strive to do what I can
to ensure OpenStack's future success.

OpenStack has been an amazing community to be involved in. I am
fortunate to have been a part of this community for a long time now,
and should I be elected, I look forward to the chance to help guide
it while serving on the TC for the next two cycles!

Thank you!

--
Kyle Mestery
IRC: mestery

[1] http://docs.openstack.org/project-team-guide/
[2] http://www.meetup.com/Minnesota-OpenStack-Meetup/
[3]
http://docs.openstack.org/project-team-guide/introduction.html#the-four-opens
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/a0d10513/attachment.html>

From john.griffith8 at gmail.com  Tue Sep 29 13:37:53 2015
From: john.griffith8 at gmail.com (John Griffith)
Date: Tue, 29 Sep 2015 07:37:53 -0600
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAGocpaGowju87LysZY44rFTtm3j0PLGhPM1xQYrHueffeVowVQ@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <20150929113752.GY3713@localhost>
 <CAGocpaGowju87LysZY44rFTtm3j0PLGhPM1xQYrHueffeVowVQ@mail.gmail.com>
Message-ID: <CAPWkaSW880hTWO5mDmTxutC=3yse1M5L1dvBjrMV9CuSV8mLBw@mail.gmail.com>

On Tue, Sep 29, 2015 at 6:48 AM, Ivan Kolodyazhny <e0ne at e0ne.info> wrote:

> First of all, I would like to say thank you for the feedback!
>
> TBH, I didn't propose to remove the v1 API in Mitaka at all. I was against
> removing the v1 API; I only proposed disabling it.
>

Sorry Ivan, I did not mean to imply that you were proposing removal.  You
did ask the question so I answered :)


> IMO, if we decide to leave it as is in Mitaka and disable it in the N
> release, nothing will change: everybody will keep using the v1 API until
> N. Disabling v1 early in Mitaka will give everybody more time to fix
> their clients. In any case, I left a very easy way to re-enable v1.
>
> Regards,
> Ivan Kolodyazhny
>
> On Tue, Sep 29, 2015 at 2:37 PM, Gorka Eguileor <geguileo at redhat.com>
> wrote:
>
>> On 28/09, John Griffith wrote:
>> > On Mon, Sep 28, 2015 at 6:19 PM, Mark Voelker <mvoelker at vmware.com>
>> wrote:
>> >
>> > > FWIW, the most popular client libraries in the last user survey[1]
>> other
>> > > than OpenStack's own clients were: libcloud (48 respondents), jClouds
>> (36
>> > > respondents), Fog (34 respondents), php-opencloud (21 respondents),
>> > > DeltaCloud (which has been retired by Apache and hasn't seen a commit
>> in
>> > > two years, but 17 respondents are still using it), pkgcloud (15
>> > > respondents), and OpenStack.NET (14 respondents).  Of those:
>> > >
>> > > * libcloud appears to support the nova-volume API but not the cinder
>> API:
>> > >
>> https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251
>> > >
>> > > * jClouds appears to support only the v1 API:
>> > >
>> https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds
>> > >
>> > > * Fog also appears to only support the v1 API:
>> > >
>> https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99
>> > >
>> > > * php-opencloud appears to only support the v1 API:
>> > >
>> https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html
>> > >
>> > > * DeltaCloud I honestly haven't looked at since it's thoroughly dead,
>> but
>> > > I can't imagine it supports v2.
>> > >
>> > > * pkgcloud has beta-level support for Cinder but I think it's v1 (may
>> be
>> > > mistaken):
>> https://github.com/pkgcloud/pkgcloud/#block-storage----beta
>> > > and
>> > >
>> https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage
>> > >
>> > > * OpenStack.NET does appear to support v2:
>> > >
>> http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm
>> > >
>> > > Now, it's anyone's guess as to whether or not users of those client
>> > > libraries actually try to use them for volume operations or not
>> > > (anecdotally I know a few clouds I help support are using client
>> libraries
>> > > that only support v1), and some users might well be using more than
>> one
>> > > library or mixing in code they wrote themselves.  But most of the
>> above
>> > > that support cinder do seem to rely on v1.  Some management tools also
>> > > appear to still rely on the v1 API (such as RightScale:
>> > >
>> http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html
>> > > ).  From that perspective it might be useful to keep it around a while
>> > > longer and disable it by default.  Personally I'd probably lean that
>> way,
>> > > especially given that folks here on the ops list are still reporting
>> > > problems too.
>> > >
>> > > That said, v1 has been deprecated since Juno, and the Juno release
>> notes
>> > > said it was going to be removed [2], so there's a case to be made that
>> > > there's been plenty of fair warning too I suppose.
>> > >
>> > > [1]
>> > >
>> http://superuser.openstack.org/articles/openstack-application-developers-share-insights
>> > > [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
>> > >
>> > > At Your Service,
>> > >
>> > > Mark T. Voelker
>> > >
>> > >
>> > >
>> > > > On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com>
>> wrote:
>> > > >
>> > > > Yeah we're still using v1 as the clients that are packaged with most
>> > > distros don't support v2 easily.
>> > > >
>> > > > Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our
>> > > 'volume' endpoint to point to v2 (we have a volumev2 endpoint too)
>> and the
>> > > client breaks.
>> > > >
>> > > > $ cinder list
>> > > > ERROR: OpenStack Block Storage API version is set to 1 but you are
>> > > accessing a 2 endpoint. Change its value through
>> --os-volume-api-version or
>> > > env[OS_VOLUME_API_VERSION].
>> > > >
>> > > > Sam
>> > > >
>> > > >
>> > > >> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com>
>> wrote:
>> > > >>
>> > > >> Yes, people are probably still using it. Last time I tried to use
>> V2 it
>> > > didn't work because the clients were broken, and then it went back on
>> the
>> > > bottom of my to do list. Is this mess fixed?
>> > > >>
>> > > >>
>> > >
>> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>> > > >>
>> > > >> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info>
>> > > wrote:
>> > > >> Hi all,
>> > > >>
>> > > >> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2
>> API
>> > > was introduced in Grizzly and v1 API is deprecated since Juno.
>> > > >>
>> > > >> After [1] is merged, Cinder API v1 is disabled in gates by default.
>> > > We've got a filed bug [2] to remove Cinder v1 API at all.
>> > > >>
>> > > >>
>> > > >> According to Deprecation Policy [3] looks like we are OK to remove
>> it.
>> > > But I would like to ask Cinder API users if any still use API v1.
>> > > >> Should we remove it at all Mitaka release or just disable by
>> default in
>> > > the cinder.conf?
>> > > >>
>> > > >> AFAIR, only Rally doesn't support API v2 now and I'm going to
>> implement
>> > > it asap.
>> > > >>
>> > > >> [1] https://review.openstack.org/194726
>> > > >> [2] https://bugs.launchpad.net/cinder/+bug/1467589
>> > > >> [3]
>> > >
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>> > > >>
>> > > >> Regards,
>> > > >> Ivan Kolodyazhny
>> > > >>
>> > > >> _______________________________________________
>> > > >> OpenStack-operators mailing list
>> > > >> OpenStack-operators at lists.openstack.org
>> > > >>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> > > >>
>> > > >>
>> > > >> _______________________________________________
>> > > >> OpenStack-operators mailing list
>> > > >> OpenStack-operators at lists.openstack.org
>> > > >>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> > > >
>> > > >
>> > >
>> __________________________________________________________________________
>> > > > OpenStack Development Mailing List (not for usage questions)
>> > > > Unsubscribe:
>> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > >
>> __________________________________________________________________________
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> > My opinion is that even though V1 has technically been deprecated for
>> > multiple cycles, V2 was never really viable until the Liberty release.
>> > Between issues with V2 and other components, and then the version
>> discovery
>> > issues that broke some things; I think we should reset the deprecation
>> > clock so to speak.
>> >
>> > It was only in the last milestone of Liberty that folks finally got
>> > everything updated and talking V2.  Not to mention the patch to switch
>> the
>> > default in devstack just landed (where everything uses it including
>> Nova).
>> >
>> > To summarize, absolutely NO to removing V1 in Mitaka, and I think
>> resetting
>> > the deprecation clock is the most reasonable course of action here.
>> >
>> > Thanks,
>> > John
>>
>> I agree with John, regardless of the fact that the deprecation period
>> has expired I think it would be safer to keep it a little longer.
>>
>> One possibility is to leave it as it is for Mitaka and for N disable it
>> by default in cinder.conf like Ivan suggests.
>>
>> Cheers,
>> Gorka.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/1ecc1af1/attachment.html>

From mhorban at mirantis.com  Tue Sep 29 14:30:20 2015
From: mhorban at mirantis.com (mhorban)
Date: Tue, 29 Sep 2015 17:30:20 +0300
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
Message-ID: <560AA07C.5090305@mirantis.com>

 > Excerpts from Josh's message:

 >> So a few 'event' like constructs/libraries that I know about:
 >>
 >> 
http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.notifier.Notifier 

 >>
 >>
 >> I'd be happy to extract that and move to somewhere else if needed, it
 >> provides basic event/pub/sub kind of activities for taskflow 
(in-memory,
 >> not over rpc...)

I've investigated several event libraries and chose taskflow because, first
of all, it fits all our requirements and it is already used in OpenStack.


 > Excerpts from Doug's message

 >> We probably want the ability to have multiple callbacks. There are
 >> already a lot of libraries available on PyPI for handling "events" like
 >> this, so maybe we can pick one of those that is well maintained and
 >> integrate it with oslo.service?

I've created a raw review in oslo.service:
https://review.openstack.org/#/c/228892/
I've used the taskflow library (as Josh proposed).
By default I added one handler that reloads the global configuration.
What do you think about such an implementation?
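To make the shape of the proposal concrete, here is a minimal, self-contained
sketch of the pub/sub pattern taskflow's Notifier provides. The class below is
a toy stand-in, not the real taskflow.types.notifier.Notifier (which has a
richer interface); the event name and handlers are invented for illustration.

```python
class Notifier:
    """Toy stand-in for a pub/sub notifier: keeps a list of callbacks
    per event type and invokes each one when the event is published."""

    def __init__(self):
        self._listeners = {}

    def register(self, event_type, callback):
        # Multiple callbacks per event are allowed, as Doug suggested.
        self._listeners.setdefault(event_type, []).append(callback)

    def notify(self, event_type, details):
        # Dispatch to every registered handler in registration order.
        for callback in list(self._listeners.get(event_type, [])):
            callback(event_type, details)


# A default handler that reloads the global configuration, plus any
# extra handlers a service registers for the same reload event.
notifier = Notifier()
calls = []
notifier.register('reload', lambda evt, details: calls.append('reload global config'))
notifier.register('reload', lambda evt, details: calls.append('reopen log files'))
notifier.notify('reload', {'signal': 'SIGHUP'})
# calls == ['reload global config', 'reopen log files']
```

In the real review the registration would live in oslo.service and the
notify() call would fire from the SIGHUP handler.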

Marian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/8f73f9bd/attachment.html>

From douglas.mendizabal at rackspace.com  Tue Sep 29 14:38:29 2015
From: douglas.mendizabal at rackspace.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=)
Date: Tue, 29 Sep 2015 09:38:29 -0500
Subject: [openstack-dev] Optional Dependencies
Message-ID: <560AA265.6080004@rackspace.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Hi openstack-dev,

I was wondering what the correct way of handling optional dependencies
is. I'm specifically talking about libraries that are only required
for a specific driver, for example, so they're not technically a hard
requirement and the project is expected to function without them when
using a driver that does not require the lib.

I read through the README in openstack/requirements [1] but I didn't
see anything about it.


Thanks,
Douglas Mendizábal

[1]
https://git.openstack.org/cgit/openstack/requirements/tree/README.rst
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJWCqJlAAoJEB7Z2EQgmLX72D8P/RROP9qT7DRY1jDnbK0Aj/TZ
lYujurHS70nXCj/Pw6uqsq41TttmwMdAx85yXwoLv/XBASaFYZ6eT6i0scfHBKAu
z5f0IomaJMQDGJ27By/amcE5eMiST5sEW/OCwHyZxdM8zgo3mzX1jIslmFEyPJ0z
wSah5DoZZh3J0RfQuBg8MOQgJVZo74KiNRou1uKE82cbVXJzVKjlfn+r7yO9TUtx
9hB/77a8sDFBWI4nXluTP+Dfpy6NSW1kqwwUoDtsZACtrhTDNCDWxUUIjfyBlIKT
LdY+oVrhqWSUI/WwCop4+Aim64obaAq5yWPR6fjTlcQ3+iCYbBzzgP/9VOm/+0Nr
AGzVbIW7ah2yEDhM0yTymaay8+G1mc+jxhvwAtTxJVIJLcJXdC3XK6b00OFkO2Kt
0dkjx/i8/riP56sb62P2a3heS3gOFqzqzwlh9SD8Omvhot3NkOr2e1QR7Cvjh1le
W5U/61vGKxmtv+iIaFXd86CRO46+4UiD1V+T0lKz083J9XuC49nkhyfuMP3ev6lc
/qD6uOnbJfyVWKRdf2PkTEe9C8YsXlxEWZ72GFC+u1jvL5K/NATUkLLWmGuv/JH+
tPyAOPISKHh44mhJqM/K37NvJO/TloOhz0a2fW2FV8kOX1V5wVAZiQBSEWtCAI8u
29up4yIgvi13ZkrRb94n
=fj9z
-----END PGP SIGNATURE-----


From blk at acm.org  Tue Sep 29 14:45:53 2015
From: blk at acm.org (Brant Knudson)
Date: Tue, 29 Sep 2015 09:45:53 -0500
Subject: [openstack-dev] Optional Dependencies
In-Reply-To: <560AA265.6080004@rackspace.com>
References: <560AA265.6080004@rackspace.com>
Message-ID: <CAHjeE=SesyYcOi2HJAcpPGa_MLcnsQtCN-1rRbdh=XY1wKvMWA@mail.gmail.com>

On Tue, Sep 29, 2015 at 9:38 AM, Douglas Mendizábal <
douglas.mendizabal at rackspace.com> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> Hi openstack-dev,
>
> I was wondering what the correct way of handling optional dependencies
> is? I'm specifically talking about libraries that are only required
> for a specific driver for example, so they're not technically a hard
> requirement and the project is expected to function without them when
> using a driver that does not require the lib.
>
>
We've got some in keystone, so for example the packages for ldap won't get
installed unless it's requested. One of the reasons we chose this one
specifically is that the python ldap package requires a C library.

We set up the different extra package groups in setup.cfg [extras]:

[1]
http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg?h=stable/liberty#n24

See pbr docs: http://docs.openstack.org/developer/pbr/#extra-requirements

Here's the tox.ini line so that they're all installed for unit tests:
http://git.openstack.org/cgit/openstack/keystone/tree/tox.ini?h=stable/liberty#n10
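As a hedged sketch of what such an [extras] section looks like (group and
package names below are invented for illustration, not keystone's actual
entries):

```ini
# setup.cfg -- illustrative only; consult the pbr docs linked above.
[extras]
ldap =
    python-ldap>=2.4
memcache =
    python-memcached>=1.56
```

With pbr, `pip install .[ldap]` then installs the project plus that optional
group, and a tox deps entry such as `.[ldap,memcache]` pulls everything in
for the unit test environment.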



> I read through the README in openstack/requirements [1] but I didn't
> see anything about it.
>
>
> Thanks,
> Douglas Mendizábal
>
> [1]
> https://git.openstack.org/cgit/openstack/requirements/tree/README.rst
> -----BEGIN PGP SIGNATURE-----
> Comment: GPGTools - https://gpgtools.org
>
> iQIcBAEBCgAGBQJWCqJlAAoJEB7Z2EQgmLX72D8P/RROP9qT7DRY1jDnbK0Aj/TZ
> lYujurHS70nXCj/Pw6uqsq41TttmwMdAx85yXwoLv/XBASaFYZ6eT6i0scfHBKAu
> z5f0IomaJMQDGJ27By/amcE5eMiST5sEW/OCwHyZxdM8zgo3mzX1jIslmFEyPJ0z
> wSah5DoZZh3J0RfQuBg8MOQgJVZo74KiNRou1uKE82cbVXJzVKjlfn+r7yO9TUtx
> 9hB/77a8sDFBWI4nXluTP+Dfpy6NSW1kqwwUoDtsZACtrhTDNCDWxUUIjfyBlIKT
> LdY+oVrhqWSUI/WwCop4+Aim64obaAq5yWPR6fjTlcQ3+iCYbBzzgP/9VOm/+0Nr
> AGzVbIW7ah2yEDhM0yTymaay8+G1mc+jxhvwAtTxJVIJLcJXdC3XK6b00OFkO2Kt
> 0dkjx/i8/riP56sb62P2a3heS3gOFqzqzwlh9SD8Omvhot3NkOr2e1QR7Cvjh1le
> W5U/61vGKxmtv+iIaFXd86CRO46+4UiD1V+T0lKz083J9XuC49nkhyfuMP3ev6lc
> /qD6uOnbJfyVWKRdf2PkTEe9C8YsXlxEWZ72GFC+u1jvL5K/NATUkLLWmGuv/JH+
> tPyAOPISKHh44mhJqM/K37NvJO/TloOhz0a2fW2FV8kOX1V5wVAZiQBSEWtCAI8u
> 29up4yIgvi13ZkrRb94n
> =fj9z
> -----END PGP SIGNATURE-----
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/e0552312/attachment.html>

From sombrafam at gmail.com  Tue Sep 29 14:50:37 2015
From: sombrafam at gmail.com (Erlon Cruz)
Date: Tue, 29 Sep 2015 11:50:37 -0300
Subject: [openstack-dev] [Cinder] [Manila] Will NFS stay with Cinder as
 a reference implementation?
In-Reply-To: <20150929121015.GB3713@localhost>
References: <OF527463AA.29EEC511-ON48257ECF.0014DC53-48257ECF.00153220@cn.ibm.com>
 <CAOyZ2aFKZaq6zVGc_VZMub4VJmriowvZUroBXwOVQGca0sEPKg@mail.gmail.com>
 <20150929121015.GB3713@localhost>
Message-ID: <CAF+CadubuSg0dcHREL1kcFVx5SpG9qwALm2JkVqVUwRHJaRvZw@mail.gmail.com>

Hi Vincent,

Just to complement what Mike and Gorka said, Cinder's NFS drivers do not
provision FS services. They 'consume' an FS export and provision block
storage.
To be more detailed, the Cinder NFS backend mounts a remote share from an
NFS server and creates/stores the Cinder volumes as files inside this
export on the volume node. In a scenario where you have both Manila and
Cinder, with Manila using the generic driver and Cinder using the NFS
backend, Manila will create a volume in Cinder before creating a share,
and Cinder will store that volume in the NFS share as a file.
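That layout can be sketched in a few lines of Python. Note the mount base
path and the hash-of-export directory naming below are assumptions made for
illustration of "volumes as files inside the export", not the driver's exact
configuration:

```python
import hashlib
import os


def volume_backing_file(mount_base, nfs_export, volume_name):
    # Sketch: each NFS export is mounted under a directory named by a
    # hash of the export string, and a volume is simply a file inside
    # that mount point.
    export_dir = hashlib.md5(nfs_export.encode()).hexdigest()
    mount_point = os.path.join(mount_base, export_dir)
    return os.path.join(mount_point, volume_name)


path = volume_backing_file('/var/lib/cinder/mnt',
                           'filer:/exports/cinder',
                           'volume-3f2a')
# path looks like /var/lib/cinder/mnt/<hash-of-export>/volume-3f2a
```

So from the NFS server's point of view there is one directory of files per
export, while Cinder presents each file to Nova as a block device.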

Erlon



On Tue, Sep 29, 2015 at 9:10 AM, Gorka Eguileor <geguileo at redhat.com> wrote:

> On 29/09, Duncan Thomas wrote:
> > Cinder provides a block storage abstraction to a vm. Manila provides a
> > filesystem abstraction. The two are very different, and complementary. I
> > see no reason why the nfs related cinder drivers should be removed based
> on
> > the existence or maturity of manila - manila is not going to suddenly
> start
> > providing block storage to a vm.
> > On 29 Sep 2015 06:56, "Sheng Bo Hou" <sbhou at cn.ibm.com> wrote:
> >
> > > Hi folks,
> > >
> > > I have a question about the file services in OpenStack.
> > >
> > > As you know there is a generic NFS driver in Cinder and other file
> system
> > > drivers inherit it, while the project Manila is determined to provide
> the
> > > file system service.
> > >
> > > Will NFS stay with Cinder as the reference implementation for the
> coming
> > > release or releases? Are all the file system drivers going to move to
> > > Manila?
> > > What is relation between Manila as FSaaS and NFS in Cinder?
> > > Any ideas?
> > >
> > > Thank you.
> > >
> > > Best wishes,
> > > Vincent Hou (???)
> > >
> > > Staff Software Engineer, Open Standards and Open Source Team, Emerging
> > > Technology Institute, IBM China Software Development Lab
> > >
> > > Tel: 86-10-82450778 Fax: 86-10-82453660
> > > Notes ID: Sheng Bo Hou/China/IBM at IBMCN    E-mail: sbhou at cn.ibm.com
> > > Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
> > > West Road, Haidian District, Beijing, P.R.C.100193
> > > ??:???????????8???????28??????3? ???100193
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
>
> I agree with Duncan,
>
> There is a clear distinction between the projects' objectives:
> - Cinder provides the canonical storage provisioning control plane in
>   OpenStack for block storage as well as delivering a persistence model
>   for instance storage.
> - Manila is a File Share Service that, in a similar manner, provides
>   coordinated access to shared or distributed file systems.
>
> So I wouldn't move out our NFS drivers.
>
> The relation between them is that while they both use the same storage
> type, they expose it completely differently.
>
> Cheers,
> Gorka.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
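Gorka's block-versus-file distinction above can be sketched as a toy model: a block volume has at most one consumer at a time, while a share coordinates access for many clients. The classes and method names below are invented for illustration; they are not Cinder or Manila APIs:

```python
# Toy model of the two abstractions; all names here are invented.

class BlockVolume:
    """Cinder-style: a raw block device, attached to at most one VM."""
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.attached_to = None

    def attach(self, vm):
        if self.attached_to is not None:
            raise RuntimeError("block volume is already attached")
        self.attached_to = vm  # the VM formats and mounts it itself


class FileShare:
    """Manila-style: a shared filesystem, mountable by many clients."""
    def __init__(self, protocol="NFS"):
        self.protocol = protocol
        self.clients = set()

    def allow_access(self, client):
        self.clients.add(client)  # many concurrent clients are fine


vol = BlockVolume(size_gb=10)
vol.attach("vm-1")

share = FileShare()
share.allow_access("vm-1")
share.allow_access("vm-2")
print(len(share.clients))  # -> 2
```

The point is simply that the two services expose the same storage in incompatible ways: one hands a raw device to a single instance, the other coordinates shared access, which is why they remain complementary rather than redundant.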
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/c306a584/attachment.html>

From sombrafam at gmail.com  Tue Sep 29 15:04:43 2015
From: sombrafam at gmail.com (Erlon Cruz)
Date: Tue, 29 Sep 2015 12:04:43 -0300
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci
 testing requirements for OpenStack Compatible mark
In-Reply-To: <CAL3VkVzWvOv79eOh1p+Cqi8=JfydDYi5gwnDoq5vHJhGtc3Ojg@mail.gmail.com>
References: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
 <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>
 <CAL3VkVzWvOv79eOh1p+Cqi8=JfydDYi5gwnDoq5vHJhGtc3Ojg@mail.gmail.com>
Message-ID: <CAF+Cads=NaqrvSdCnx79=J8=C+5U5TLcaGzVHqjbk1Xk_eZj6A@mail.gmail.com>

Hi Chris,

There are some questions that came to my mind.

Cinder has near zero tolerance for backends that do not have a CI running.
So, can one assume that all drivers in Cinder will have the "OpenStack
Compatible" seal?

When you say that the drivers have to 'pass' the integration tests, what
tests do you consider? All tests in tempest? All patches? Do you have any
criteria to determine if a backend is passing or not?

About this "OpenStack Compatible" flag, how does it work? Will you hold a
list of the Compatible vendors? Is there anything a vendor needs to do in
order to use it?

Thanks,
Erlon

On Mon, Sep 28, 2015 at 5:55 PM, Kyle Mestery <mestery at mestery.com> wrote:

> The Neutron team also discussed this in Vancouver, you can see the
> etherpad here [1]. We talked about the idea of creating a validation suite,
> and it sounds like that's something we should again discuss in Tokyo for
> the Mitaka cycle. I think a validation suite would be a great step forward
> for Neutron third-party CI systems to use to validate they work with a
> release.
>
> [1] https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty
>
> On Sun, Sep 27, 2015 at 11:39 AM, Armando M. <armamig at gmail.com> wrote:
>
>>
>>
>> On 25 September 2015 at 15:40, Chris Hoge <chris at openstack.org> wrote:
>>
>>> In November, the OpenStack Foundation will start requiring vendors
>>> requesting
>>> new "OpenStack Compatible" storage driver licenses to start passing the
>>> Cinder
>>> third-party integration tests.
>>
>> The new program was approved by the Board at
>>> the July meeting in Austin and follows the improvement of the testing
>>> standards
>>> and technical requirements for the "OpenStack Powered" program. This is
>>> all
>>> part of the effort of the Foundation to use the OpenStack brand to
>>> guarantee a
>>> base-level of interoperability and consistency for OpenStack users and to
>>> protect the work of our community of developers by applying a trademark
>>> backed
>>> by their technical efforts.
>>>
>>> The Cinder driver testing is the first step of a larger effort to apply
>>> community determined standards to the Foundation marketing programs.
>>> We're
>>> starting with Cinder because it has a successful testing program in
>>> place, and
>>> we have plans to extend the program to network drivers and OpenStack
>>> applications. We're going to require CI testing for new "OpenStack
>>> Compatible"
>>> storage licenses starting on November 1, and plan to roll out network and
>>> application testing in 2016.
>>>
>>> One of our goals is to work with project leaders and developers to help
>>> us
>>> define and implement these test programs. The standards for third-party
>>> drivers and applications should be determined by the developers and users
>>> in our community, who are experts in how to maintain the quality of the
>>> ecosystem.
>>>
>>> We welcome any feedback on this program, and are also happy to answer any
>>> questions you might have.
>>>
>>
>> Thanks for spearheading this effort.
>>
>> Do you have more information/pointers about the program, and how Cinder
>> in particular is
>> paving the way for other projects to follow?
>>
>> Thanks,
>> Armando
>>
>>
>>> Thanks!
>>>
>>> Chris Hoge
>>> Interop Engineer
>>> OpenStack Foundation
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/f0c15145/attachment.html>

From chris at openstack.org  Tue Sep 29 15:08:38 2015
From: chris at openstack.org (Chris Hoge)
Date: Tue, 29 Sep 2015 08:08:38 -0700
Subject: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?
In-Reply-To: <20150929080750.GC7310@redhat.com>
References: <D22AF859.22B68%brian.rosmaita@rackspace.com>
 <78E78E6C-AAFE-44ED-9C31-17E1D7F5B8DC@vmware.com>
 <1443203624-sup-2555@lrrr.local>
 <0B3904AF-BB54-4E04-BAE0-CDB75080E698@vmware.com>
 <1443356431-sup-7293@lrrr.local> <5609200A.2000607@dague.net>
 <CABib2_rm7BG6uuKZ8pDePbCVgdS6QGMU6j4xtF+m7DujWsm9rw@mail.gmail.com>
 <1443444996-sup-6545@lrrr.local>
 <399F9428-1FE2-4DDF-B679-080BDD583101@vmware.com>
 <1443471990-sup-8574@lrrr.local> <20150929080750.GC7310@redhat.com>
Message-ID: <B6E17A6B-BC5A-4E9F-928A-596A8CD653B2@openstack.org>


> On Sep 29, 2015, at 1:07 AM, Flavio Percoco <flavio at redhat.com> wrote:
> 
> On 28/09/15 16:29 -0400, Doug Hellmann wrote:
>> Excerpts from Mark Voelker's message of 2015-09-28 19:55:18 +0000:
>>> On Sep 28, 2015, at 9:03 AM, Doug Hellmann <doug at doughellmann.com> wrote:
>>> >
>>> > Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
>>> >> On 28 September 2015 at 12:10, Sean Dague <sean at dague.net> wrote:
>>> >>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>>> >>>> Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +0000:
>>> >>>>> On Sep 25, 2015, at 1:56 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>>> >>>>>>
>>> >>>>>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>>> >>> <snip>
>>> >>>>>
>>> >>>>> Ah.  Thanks for bringing that up, because I think this may be an area where there's some misconception about what DefCore is set up to do today.  In its present form, the Board of Directors has structured DefCore to look much more at trailing indicators of market acceptance rather than future technical direction.  More on that over here. [1]
>>> >>>>
>>> >>>> And yet future technical direction does factor in, and I'm trying
>>> >>>> to add a new heuristic to that aspect of consideration of tests:
>>> >>>> Do not add tests that use proxy APIs.
>>> >>>>
>>> >>>> If there is some compelling reason to add a capability for which
>>> >>>> the only tests use a proxy, that's important feedback for the
>>> >>>> contributor community and tells us we need to improve our test
>>> >>>> coverage. If the reason to use the proxy is that no one is deploying
>>> >>>> the proxied API publicly, that is also useful feedback, but I suspect
>>> >>>> we will, in most cases (glance is the exception), say "Yeah, that's
>>> >>>> not how we mean for you to run the services long-term, so don't
>>> >>>> include that capability."
>>> >>>
>>> >>> I think we might also just realize that some of the tests are using the
>>> >>> proxy because... that's how they were originally written.
>>> >>
>>> >> From my memory, thats how we got here.
>>> >>
>>> >> The Nova tests needed to use an image API. (i.e. list images used to
>>> >> check the snapshot Nova, or similar)
>>> >>
>>> >> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>>> >> it being the only widely deployed option.
>>> >
>>> > Right, and I want to make sure it's clear that I am differentiating
>>> > between "these tests are bad" and "these tests are bad *for DefCore*".
>>> > We should definitely continue to test the proxy API, since it's a
>>> > feature we have and that our users rely on.
>>> >
>>> >>
>>> >>> And they could be rewritten to use native APIs.
>>> >>
>>> >> +1
>>> >> Once Glance v2 is available.
>>> >>
>>> >> Adding Glance v2 as advisory seems a good step to help drive more adoption.
>>> >
>>> > I think we probably don't want to rewrite the existing tests, since
>>> > that effectively changes the contract out from under existing folks
>>> > complying with DefCore.  If we need new, parallel, tests that do
>>> > not use the proxy to make more suitable tests for DefCore to use,
>>> > we should create those.
>>> >
>>> >>
>>> >>> I do agree that "testing proxies" should not be part of Defcore, and I
>>> >>> like Doug's idea of making that a new heuristic in test selection.
>>> >>
>>> >> +1
>>> >> Thats a good thing to add.
>>> >> But I don't think we had another option in this case.
>>> >
>>> > We did have the option of leaving the feature out and highlighting the
>>> > discrepancy to the contributors so tests could be added. That
>>> > communication didn't really happen, as far as I can tell.
>>> >
>>> >>>> Sorry, I wasn't clear. The Nova team would, I expect, view the use of
>>> >>>> those APIs in DefCore as a reason to avoid deprecating them in the code
>>> >>>> even if they wanted to consider them as legacy features that should be
>>> >>>> removed. Maybe that's not true, and the Nova team would be happy to
>>> >>>> deprecate the APIs, but I did think that part of the feedback cycle we
>>> >>>> were establishing here was to have an indication from the outside of the
>>> >>>> contributor base about what APIs are considered important enough to keep
>>> >>>> alive for a long period of time.
>>> >>> I'd also agree with this. Defcore is a wider contract that we're trying
>>> >>> to get even more people to write to because that cross section should be
>>> >>> widely deployed. So deprecating something in Defcore is something I
>>> >>> think most teams, Nova included, would be very reluctant to do. It's
>>> >>> just asking for breaking your users.
>>> >>
>>> >> I can't see us removing the proxy APIs in Nova any time soon,
>>> >> regardless of DefCore, as it would break too many people.
>>> >>
>>> >> But personally, I like dropping them from Defcore, to signal that the
>>> >> best practice is to use the Glance v2 API directly, rather than the
>>> >> Nova proxy.
>>> >>
>>> >> Maybe they are just marked deprecated, but still required, although
>>> >> that sounds a bit crazy.
>>> >
>>> > Marking them as deprecated, then removing them from DefCore, would let
>>> > the Nova team make a technical decision about what to do with them
>>> > (maybe they get spun out into a separate service, maybe they're so
>>> > popular you just keep them, whatever).
>>> 
>>> So, here's that Who's On First thing again.  Just to clarify: Nova does not need Capabilities to be removed from Guidelines in order to make technical decisions about what to do with a feature (though removing a Capability from future Guidelines may make Nova a lot more comfortable with their decision if they *do* decide to deprecate something, which I think is what Doug was pointing out here).
>>> 
>>> The DefCore Committee cannot tell projects what they can and cannot do with their code [1].  All DefCore can do is tell vendors what capabilities they have to expose to end users (if and only if those vendors want their products to be OpenStack Powered(TM) [2]).  It also tells end users what things they can rely on being present (if and only if they choose an OpenStack Powered(TM) product that adheres to a particular Guideline).  It is a Wonderful Thing if stuff doesn't get dropped from Guidelines very often, because nobody wants users to have to worry about not being able to rely on things they previously relied on.  It's therefore also a Wonderful Thing if projects like Nova and the DefCore Committee are talking to each other with an eye on making end-user experience as consistent and stable as possible, and that when things do change, those transitions are handled as smoothly as possible.
>>> 
>>> But at the end of the day, if Nova wants to deprecate something, spin it out, or keep it, Nova doesn't need DefCore to do anything first in order to make that decision.  DefCore would love a heads-up so the next Guideline (which comes out several months after the OpenStack release in which the changes were made) can take the decision into account.  In fact, in the case of deprecation, as of last week projects are more or less required to give DefCore a heads-up if they want the assert:follows-standard-deprecation [3] tag.  A heads-up is even nice if Nova decides they want to keep supporting something, since that will help the "future direction" criteria be scored properly.
>>> 
>>> Ultimately, what Nova does with Nova's code is still Nova's decision to make.  I think that's a pretty good thing.
>> 
>> Indeed! I guess I overestimated the expectations for DefCore. I thought
>> introducing the capabilities tests implied a broader commitment to keep
>> the feature than it sounds like is actually the case. I'm glad we are
>> more flexible than I thought. :-)
> 
> ditto! Glad to hear this is the case.

Having two active guidelines at any given time makes it easy
to update without necessarily breaking the world for everyone.

We're moving a little bit faster now because we are trying to get
this right for the community, and course correcting as we see
issues arise.

> 
>>> And FWIW I think it's a pretty good thing we're all now openly discussing it, too (after all this whole DefCore thing is still pretty new to most folks) so thanks to all of you for that. =)
>> 
>> Yes, it was pretty difficult to follow some of the earlier DefCore
>> discussions while the process and guidelines were being worked out.
>> Thanks for clarifying!
>> 
>> 
> 
> Thank you both for driving this thread. Sorry for having joined so
> late!
> 
> Flavio
> 
> -- 
> @flaper87
> Flavio Percoco
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org <mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/97d13a0e/attachment.html>

From chris at openstack.org  Tue Sep 29 15:28:30 2015
From: chris at openstack.org (Chris Hoge)
Date: Tue, 29 Sep 2015 08:28:30 -0700
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci
	testing requirements for OpenStack Compatible mark
In-Reply-To: <CAF+Cads=NaqrvSdCnx79=J8=C+5U5TLcaGzVHqjbk1Xk_eZj6A@mail.gmail.com>
References: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
 <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>
 <CAL3VkVzWvOv79eOh1p+Cqi8=JfydDYi5gwnDoq5vHJhGtc3Ojg@mail.gmail.com>
 <CAF+Cads=NaqrvSdCnx79=J8=C+5U5TLcaGzVHqjbk1Xk_eZj6A@mail.gmail.com>
Message-ID: <2C45DFFA-C868-43F5-AEFA-89926CB27D2D@openstack.org>

On Sep 29, 2015, at 8:04 AM, Erlon Cruz <sombrafam at gmail.com> wrote:
> 
> Hi Chris,
> 
> There are some questions that came to my mind.
> 
> Cinder has near zero tolerance for backends that do not have a CI running. So, can one assume that all drivers in Cinder will have the "OpenStack Compatible" seal?

One of the reasons we started with Cinder was because they have
an existing program that is well maintained. Any driver passing
CI becomes eligible for the "OpenStack Compatible" mark. It's not
automatic, and still needs a signed agreement with the Foundation.

> When you say that the drivers have to 'pass' the integration tests, what tests do you consider? All tests in tempest? All patches? Do you have any criteria to determine if a backend is passing or not?

We're letting the project drive what tests need to be passed. So,
taking a look at this dashboard[1] (it's one of many that monitor
our test systems) the drivers are running the dsvm-tempest-full
tests. One of the things that the tests exercise, and we're interested
in from the driver standpoint, are both the user-facing Cinder APIs
as well as the driver-facing APIs.

For Neutron, which we would like to help roll out in the coming year,
this would be a CI run that is defined by the Neutron development
team. We have no interest in dictating to the developers what should
be run. Instead, we want to adopt what the community considers
to be the best-practices and standards for drivers.

> About this "OpenStack Compatible" flag, how does it work? Will you hold a list of the Compatible vendors? Is there anything a vendor needs to do in order to use it?

"OpenStack Compatible" is one of the trademark programs that is
administered by the Foundation. A company that wants to apply the
OpenStack logo to their product needs to sign a licensing agreement,
which gives them the right to use the logo in their marketing materials.

We also create an entry in the OpenStack Marketplace for their
product, which has information about the company and the product, but
also information about tests that the product may have passed. The
best example I can give right now is with the "OpenStack Powered"
program, where we display which DefCore guideline a product has
successfully passed[2].

Chris

[1] http://ci-watch.tintri.com/project?project=cinder&time=24+hours
[2] For example: http://www.openstack.org/marketplace/public-clouds/unitedstack/uos-cloud

> Thanks,
> Erlon
> 
> On Mon, Sep 28, 2015 at 5:55 PM, Kyle Mestery <mestery at mestery.com <mailto:mestery at mestery.com>> wrote:
> The Neutron team also discussed this in Vancouver, you can see the etherpad here [1]. We talked about the idea of creating a validation suite, and it sounds like that's something we should again discuss in Tokyo for the Mitaka cycle. I think a validation suite would be a great step forward for Neutron third-party CI systems to use to validate they work with a release.
> 
> [1] https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty <https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty>
> 
> On Sun, Sep 27, 2015 at 11:39 AM, Armando M. <armamig at gmail.com <mailto:armamig at gmail.com>> wrote:
> 
> 
> On 25 September 2015 at 15:40, Chris Hoge <chris at openstack.org <mailto:chris at openstack.org>> wrote:
> In November, the OpenStack Foundation will start requiring vendors requesting
> new "OpenStack Compatible" storage driver licenses to start passing the Cinder
> third-party integration tests.
> The new program was approved by the Board at
> the July meeting in Austin and follows the improvement of the testing standards
> and technical requirements for the "OpenStack Powered" program. This is all
> part of the effort of the Foundation to use the OpenStack brand to guarantee a
> base-level of interoperability and consistency for OpenStack users and to
> protect the work of our community of developers by applying a trademark backed
> by their technical efforts.
> 
> The Cinder driver testing is the first step of a larger effort to apply
> community determined standards to the Foundation marketing programs. We're
> starting with Cinder because it has a successful testing program in place, and
> we have plans to extend the program to network drivers and OpenStack
> applications. We're going to require CI testing for new "OpenStack Compatible"
> storage licenses starting on November 1, and plan to roll out network and
> application testing in 2016.
> 
> One of our goals is to work with project leaders and developers to help us
> define and implement these test programs. The standards for third-party
> drivers and applications should be determined by the developers and users
> in our community, who are experts in how to maintain the quality of the
> ecosystem.
> 
> We welcome any feedback on this program, and are also happy to answer any
> questions you might have.
> 
> Thanks for spearheading this effort.
> 
> Do you have more information/pointers about the program, and how Cinder in particular is 
> paving the way for other projects to follow?
> 
> Thanks,
> Armando
> 
> 
> Thanks!
> 
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/15d396f4/attachment.html>

From danehans at cisco.com  Tue Sep 29 15:28:54 2015
From: danehans at cisco.com (Daneyon Hansen (danehans))
Date: Tue, 29 Sep 2015 15:28:54 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <560A5856.1050303@hpe.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
Message-ID: <D22FFC4A.689B3%danehans@cisco.com>


+1

From: Tom Cammann <tom.cammann at hpe.com<mailto:tom.cammann at hpe.com>>
Reply-To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months as well: completely deprecate the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very difficult and probably a wasted effort trying to consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:

Would it make sense to ask the opposite of Wanghua's question: should pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <EGuz at walmartlabs.com><mailto:EGuz at walmartlabs.com>
To: "openstack-dev at lists.openstack.org"<mailto:openstack-dev at lists.openstack.org> <openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

________________________________



Also I believe docker compose is just a command line tool which doesn't have any api or scheduling features.
But during the last Docker Conf hackathon PayPal folks implemented a docker compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, ?? <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes container in the swarm coe. As far as I know, swarm is only a container scheduler, which is like nova in openstack. Docker compose is an orchestration program, which is like heat in openstack. k8s is the combination of scheduler and orchestration. So I think it is better to expose the compose apis to users, which are at the same level as k8s.
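Wanghua's scheduler-versus-orchestration distinction can be made concrete by the kind of input each layer consumes: a compose file declares a multi-container application much as a Heat template declares a stack, while Swarm only decides where each container runs. A minimal compose file of that era (v1 format, no `version:` key) looks like this; the service names, images, and settings are arbitrary examples:

```yaml
# docker-compose.yml (v1-format sketch; services and images are examples)
web:
  image: nginx
  ports:
    - "80:80"
  links:
    - db        # orchestration: express a dependency between containers
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
```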


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/e8b29d9c/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ATT00001.gif
Type: image/gif
Size: 105 bytes
Desc: ATT00001.gif
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/e8b29d9c/attachment.gif>

From adrian.otto at rackspace.com  Tue Sep 29 15:44:33 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Tue, 29 Sep 2015 15:44:33 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <D22FFC4A.689B3%danehans@cisco.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
Message-ID: <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com>> wrote:


+1

From: Tom Cammann <tom.cammann at hpe.com<mailto:tom.cammann at hpe.com>>
Reply-To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months as well: completely deprecate the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very difficult and probably a wasted effort trying to consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <EGuz at walmartlabs.com><mailto:EGuz at walmartlabs.com>
To: "openstack-dev at lists.openstack.org"<mailto:openstack-dev at lists.openstack.org> <openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
________________________________



Also I believe docker-compose is just a command-line tool which doesn't have any API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise on features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we had started out with the current approach, those would not exist in Magnum.
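Adrian's point that compose needs only the Docker API can be sketched as follows (the manager address is a hypothetical placeholder, and the compose command is left commented out since it needs a live endpoint):

```shell
# docker-compose reads the Docker API endpoint from DOCKER_HOST, so pointing
# it at a Swarm manager is enough -- no Magnum-specific API is involved.
export DOCKER_HOST=tcp://swarm-manager.example.com:2375   # hypothetical address
echo "compose would talk to: ${DOCKER_HOST}"
# docker-compose up -d   # commented out: requires a reachable Swarm manager
```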

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the Kubernetes COE, but exposes container in the Swarm COE. As I understand it, Swarm is only a scheduler of containers, which is like Nova in OpenStack. Docker Compose is an orchestration program, which is like Heat in OpenStack. k8s is the combination of scheduler and orchestration. So I think it would be better to expose the APIs in Compose to users, which are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/ee9878a2/attachment.html>

From anteaya at anteaya.info  Tue Sep 29 15:45:18 2015
From: anteaya at anteaya.info (Anita Kuno)
Date: Tue, 29 Sep 2015 11:45:18 -0400
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci
 testing requirements for OpenStack Compatible mark
In-Reply-To: <2C45DFFA-C868-43F5-AEFA-89926CB27D2D@openstack.org>
References: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
 <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>
 <CAL3VkVzWvOv79eOh1p+Cqi8=JfydDYi5gwnDoq5vHJhGtc3Ojg@mail.gmail.com>
 <CAF+Cads=NaqrvSdCnx79=J8=C+5U5TLcaGzVHqjbk1Xk_eZj6A@mail.gmail.com>
 <2C45DFFA-C868-43F5-AEFA-89926CB27D2D@openstack.org>
Message-ID: <560AB20E.3020803@anteaya.info>

On 09/29/2015 11:28 AM, Chris Hoge wrote:
> On Sep 29, 2015, at 8:04 AM, Erlon Cruz <sombrafam at gmail.com> wrote:
>>
>> Hi Chris,
>>
>> There are some questions that came to my mind.
>>
>> Cinder has near-zero tolerance for backends that do not have a CI running. So, can one assume that all drivers in Cinder will have the "OpenStack Compatible" seal?
> 
> One of the reasons we started with Cinder was because they
> have an existing program that is well maintained. Any driver passing
> CI becomes eligible for the "OpenStack Compatible" mark. It's not
> automatic, and still needs a signed agreement with the Foundation.
> 
>> When you say that the driver has to 'pass' the integration tests, which tests do you consider? All tests in tempest? All patches? Do you have any criteria to determine whether a backend is passing or not?
> 
> We're letting the project drive what tests need to be passed. So,
> taking a look at this dashboard[1] (it's one of many that monitor
> our test systems)

Dashboards, this and any other, aggregate results (build succeeded,
build failed) that are reported back to Gerrit. They don't index the
logs to evaluate if expected test output is present in the logs. If
aggregated results suit your purpose, then fine, this tool is helpful,
but let's not ascribe responsibility to a tool that isn't performing
that action.

Tools usually monitor the stream of comments as they are broadcast from
Gerrit (stream-events). Taking these reported status comments and
aggregating them into a visual digestible form is all they are doing.
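As a sketch of what these tools actually consume (the `gerrit stream-events` SSH command is real, but the account name here is a placeholder and the call is commented out since it needs live credentials):

```shell
# Aggregators subscribe to Gerrit's event stream: one JSON object per line,
# filtered for comment/result events. They never index or parse the job logs.
# ssh -p 29418 ci-watcher@review.openstack.org gerrit stream-events
msg="aggregators count reported results; they do not inspect test logs"
echo "$msg"
```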


> the drivers are running the dsvm-tempest-full
> tests.

Drivers are reporting they are running the tests.

If we are going to be relying on dashboards and aggregating tools for
making decisions lets be sure we are mindful of exactly what information
is being conveyed and what assumptions are being made on top of that
information.

Thank you,
Anita.

> One of the things that the tests exercise, and we're interested
> in from the driver standpoint, are both the user-facing Cinder APIs
> as well as the driver-facing APIs.
> 
> For Neutron, which we would like to help roll out in the coming year,
> this would be a CI run that is defined by the Neutron development
> team. We have no interest in dictating to the developers what should
> be run. Instead, we want to adopt what the community considers
> to be the best-practices and standards for drivers.
> 
>> About this "OpenStack Compatible" flag, how does it work? Will you keep a list of the compatible vendors? Is there anything a vendor needs to do in order to use this?
> 
> "OpenStack Compatible" is one of the trademark programs that is
> administered by the Foundation. A company that wants to apply the
> OpenStack logo to their product needs to sign a licensing agreement,
> which gives them the right to use the logo in their marketing materials.
> 
> We also create an entry in the OpenStack Marketplace for their
> product, which has information about the company and the product, but
> also information about tests that the product may have passed. The
> best example I can give right now is with the "OpenStack Powered"
> program, where we display which Defcore guideline a product has
> successfully passed[2].
> 
> Chris
> 
> [1] http://ci-watch.tintri.com/project?project=cinder&time=24+hours
> [2] For example: http://www.openstack.org/marketplace/public-clouds/unitedstack/uos-cloud
> 
>> Thanks,
>> Erlon
>>
>> On Mon, Sep 28, 2015 at 5:55 PM, Kyle Mestery <mestery at mestery.com <mailto:mestery at mestery.com>> wrote:
>> The Neutron team also discussed this in Vancouver, you can see the etherpad here [1]. We talked about the idea of creating a validation suite, and it sounds like that's something we should again discuss in Tokyo for the Mitaka cycle. I think a validation suite would be a great step forward for Neutron third-party CI systems to use to validate they work with a release.
>>
>> [1] https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty <https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty>
>>
>> On Sun, Sep 27, 2015 at 11:39 AM, Armando M. <armamig at gmail.com <mailto:armamig at gmail.com>> wrote:
>>
>>
>> On 25 September 2015 at 15:40, Chris Hoge <chris at openstack.org <mailto:chris at openstack.org>> wrote:
>> In November, the OpenStack Foundation will start requiring vendors requesting
>> new "OpenStack Compatible" storage driver licenses to start passing the Cinder
>> third-party integration tests.
>> The new program was approved by the Board at
>> the July meeting in Austin and follows the improvement of the testing standards
>> and technical requirements for the "OpenStack Powered" program. This is all
>> part of the effort of the Foundation to use the OpenStack brand to guarantee a
>> base-level of interoperability and consistency for OpenStack users and to
>> protect the work of our community of developers by applying a trademark backed
>> by their technical efforts.
>>
>> The Cinder driver testing is the first step of a larger effort to apply
>> community determined standards to the Foundation marketing programs. We're
>> starting with Cinder because it has a successful testing program in place, and
>> we have plans to extend the program to network drivers and OpenStack
>> applications. We're going require CI testing for new "OpenStack Compatible"
>> storage licenses starting on November 1, and plan to roll out network and
>> application testing in 2016.
>>
>> One of our goals is to work with project leaders and developers to help us
>> define and implement these test programs. The standards for third-party
>> drivers and applications should be determined by the developers and users
>> in our community, who are experts in how to maintain the quality of the
>> ecosystem.
>>
>> We welcome and feedback on this program, and are also happy to answer any
>> questions you might have.
>>
>> Thanks for spearheading this effort.
>>
>> Do you have more information/pointers about the program, and how Cinder in particular is 
>> paving the way for other projects to follow?
>>
>> Thanks,
>> Armando
>>
>>
>> Thanks!
>>
>> Chris Hoge
>> Interop Engineer
>> OpenStack Foundation
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe <http://OpenStack-dev-request at lists.openstack.org/?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>>
>>
> 



From ihrachys at redhat.com  Tue Sep 29 16:05:37 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 29 Sep 2015 18:05:37 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
	messages
In-Reply-To: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
Message-ID: <72742C5E-1136-40C2-85D4-1AD6522C6ADA@redhat.com>

> On 25 Sep 2015, at 16:44, Ihar Hrachyshka <ihrachys at redhat.com> wrote:
> 
> Hi all,
> 
> releases are approaching, so it's the right time to start some bike shedding on the mailing list.
> 
> Recently it was pointed out to me several times [1][2] that I violate our commit message guideline [3] for message lines, which says: "Subsequent lines should be wrapped at 72 characters."
> 
> I agree that very long commit message lines can be bad, e.g. if they are 200+ chars. But <= 79 chars?.. I don't think so. Especially since we have a 79-char limit for the code.
> 
> We had a check for the line lengths in openstack-dev/hacking before but it was killed [4] as per openstack-dev@ discussion [5].
> 
> I believe commit message lines of <=80 chars are absolutely fine and should not get -1 treatment. I propose to raise the limit in the guideline on the wiki accordingly.
> 
> Comments?
> 
> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> [3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> [4]: https://review.openstack.org/#/c/142585/
> [5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> 
> Ihar

Thanks everyone for the replies.

Now I realize WHY we do it with 72 chars and not 80 chars (git log output). :) I updated the wiki page with instructions on configuring Vim to enforce the rule. I also removed the mention of gating on commit messages, because that gate check was removed recently.
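A quick sketch of the arithmetic: `git log` indents the message body by four spaces, so 72-character lines still fit an 80-column terminal, and `fmt` can do the wrapping:

```shell
# 72 + 4 (git log's body indent) = 76, which fits an 80-column terminal.
# Re-wrap an over-long commit message body at 72 columns with fmt:
long_line=$(printf 'word%.0s ' $(seq 1 30))   # one ~150-char line of text
wrapped=$(echo "$long_line" | fmt -w 72)
echo "$wrapped" | awk '{ print length($0) }'  # no printed length exceeds 72
```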

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/6d7c9444/attachment.pgp>

From duncan.thomas at gmail.com  Tue Sep 29 16:09:36 2015
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 29 Sep 2015 19:09:36 +0300
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAGocpaGowju87LysZY44rFTtm3j0PLGhPM1xQYrHueffeVowVQ@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <20150929113752.GY3713@localhost>
 <CAGocpaGowju87LysZY44rFTtm3j0PLGhPM1xQYrHueffeVowVQ@mail.gmail.com>
Message-ID: <CAOyZ2aHEd_FrF4CRh3azK1RtCVzS7R+M02pRf+YoM19+DrdFCw@mail.gmail.com>

I think disabling it by default in early M should help shake out any
remaining issues - we can decide whether we actually release that way later.

I'm against actually removing the V1 code, however.
On 29 Sep 2015 15:56, "Ivan Kolodyazhny" <e0ne at e0ne.info> wrote:

> First of all, I would like to say thank you for the feedback!
>
> TBH, I didn't propose to remove API v1 at all in Mitaka. I proposed
> disabling the v1 API rather than removing it.
>
> IMO, if we decide to leave it as-is in Mitaka and disable it in the N release
> - nothing will change. Everybody will use the v1 API until N. Disabling v1
> early in Mitaka will give everybody more time to fix their clients. Anyway,
> I left a very easy way to re-enable v1.
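The "easy way to re-enable v1" is presumably the API-enable flags in cinder.conf; a minimal sketch, written to a temp file here rather than a real /etc/cinder/cinder.conf, and assuming the `enable_v1_api`/`enable_v2_api` options:

```shell
# Sketch: disable the deprecated v1 API while keeping v2 -- operators can
# flip the flag back to true if their clients still need v1.
cat > /tmp/cinder-api-demo.conf <<'EOF'
[DEFAULT]
enable_v1_api = false
enable_v2_api = true
EOF
grep '^enable_v' /tmp/cinder-api-demo.conf
```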
>
> Regards,
> Ivan Kolodyazhny
>
> On Tue, Sep 29, 2015 at 2:37 PM, Gorka Eguileor <geguileo at redhat.com>
> wrote:
>
>> On 28/09, John Griffith wrote:
>> > On Mon, Sep 28, 2015 at 6:19 PM, Mark Voelker <mvoelker at vmware.com>
>> wrote:
>> >
>> > > FWIW, the most popular client libraries in the last user survey[1]
>> other
>> > > than OpenStack's own clients were: libcloud (48 respondents), jClouds
>> (36
>> > > respondents), Fog (34 respondents), php-opencloud (21 respondents),
>> > > DeltaCloud (which has been retired by Apache and hasn't seen a commit
>> in
>> > > two years, but 17 respondents are still using it), pkgcloud (15
>> > > respondents), and OpenStack.NET (14 respondents).  Of those:
>> > >
>> > > * libcloud appears to support the nova-volume API but not the cinder
>> API:
>> > >
>> https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251
>> > >
>> > > * jClouds appears to support only the v1 API:
>> > >
>> https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds
>> > >
>> > > * Fog also appears to only support the v1 API:
>> > >
>> https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99
>> > >
>> > > * php-opencloud appears to only support the v1 API:
>> > >
>> https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html
>> > >
>> > > * DeltaCloud I honestly haven't looked at since it's thoroughly dead,
>> but
>> > > I can't imagine it supports v2.
>> > >
>> > > * pkgcloud has beta-level support for Cinder but I think it's v1 (may
>> be
>> > > mistaken):
>> https://github.com/pkgcloud/pkgcloud/#block-storage----beta
>> > > and
>> > >
>> https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage
>> > >
>> > > * OpenStack.NET does appear to support v2:
>> > >
>> http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm
>> > >
>> > > Now, it's anyone's guess as to whether or not users of those client
>> > > libraries actually try to use them for volume operations or not
>> > > (anecdotally I know a few clouds I help support are using client
>> libraries
>> > > that only support v1), and some users might well be using more than
>> one
>> > > library or mixing in code they wrote themselves.  But most of the
>> above
>> > > that support cinder do seem to rely on v1.  Some management tools also
>> > > appear to still rely on the v1 API (such as RightScale:
>> > >
>> http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html
>> > > ).  From that perspective it might be useful to keep it around a while
>> > > longer and disable it by default.  Personally I'd probably lean that
>> way,
>> > > especially given that folks here on the ops list are still reporting
>> > > problems too.
>> > >
>> > > That said, v1 has been deprecated since Juno, and the Juno release
>> notes
>> > > said it was going to be removed [2], so there's a case to be made that
>> > > there's been plenty of fair warning too I suppose.
>> > >
>> > > [1]
>> > >
>> http://superuser.openstack.org/articles/openstack-application-developers-share-insights
>> > > [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
>> > >
>> > > At Your Service,
>> > >
>> > > Mark T. Voelker
>> > >
>> > >
>> > >
>> > > > On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com>
>> wrote:
>> > > >
>> > > > Yeah we're still using v1 as the clients that are packaged with most
>> > > distros don't support v2 easily.
>> > > >
>> > > > Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our
>> > > 'volume' endpoint to point to v2 (we have a volumev2 endpoint too)
>> and the
>> > > client breaks.
>> > > >
>> > > > $ cinder list
>> > > > ERROR: OpenStack Block Storage API version is set to 1 but you are
>> > > accessing a 2 endpoint. Change its value through
>> --os-volume-api-version or
>> > > env[OS_VOLUME_API_VERSION].
>> > > >
>> > > > Sam
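The error Sam quotes is the client-side version check; the mismatch can be cleared without touching the endpoint by telling cinderclient which API version to speak (the `cinder list` call is commented out since it needs a live cloud):

```shell
# Match the client's API version to the v2 endpoint via the environment
# variable named in the error message itself:
export OS_VOLUME_API_VERSION=2
echo "cinderclient will target volume API v${OS_VOLUME_API_VERSION}"
# cinder list   # commented out: requires credentials and a live endpoint
```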
>> > > >
>> > > >
>> > > >> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com>
>> wrote:
>> > > >>
>> > > >> Yes, people are probably still using it. Last time I tried to use
>> V2 it
>> > > didn't work because the clients were broken, and then it went back on
>> the
>> > > bottom of my to do list. Is this mess fixed?
>> > > >>
>> > > >>
>> > >
>> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>> > > >>
>> > > >> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info>
>> > > wrote:
>> > > >> Hi all,
>> > > >>
>> > > >> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2
>> API
>> > > was introduced in Grizzly and v1 API is deprecated since Juno.
>> > > >>
>> > > >> After [1] is merged, Cinder API v1 is disabled in gates by default.
>> > > We've got a filed bug [2] to remove Cinder v1 API at all.
>> > > >>
>> > > >>
>> > > >> According to the Deprecation Policy [3] it looks like we are OK to remove
>> it.
>> > > But I would like to ask Cinder API users if any still use API v1.
>> > > >> Should we remove it at all Mitaka release or just disable by
>> default in
>> > > the cinder.conf?
>> > > >>
>> > > >> AFAIR, only Rally doesn't support API v2 now and I'm going to
>> implement
>> > > it asap.
>> > > >>
>> > > >> [1] https://review.openstack.org/194726
>> > > >> [2] https://bugs.launchpad.net/cinder/+bug/1467589
>> > > >> [3]
>> > >
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>> > > >>
>> > > >> Regards,
>> > > >> Ivan Kolodyazhny
>> > > >>
>> > > >> _______________________________________________
>> > > >> OpenStack-operators mailing list
>> > > >> OpenStack-operators at lists.openstack.org
>> > > >>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> > > >>
>> > > >
>> > > >
>> > >
>> __________________________________________________________________________
>> > > > OpenStack Development Mailing List (not for usage questions)
>> > > > Unsubscribe:
>> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> > My opinion is that even though V1 has technically been deprecated for
>> > multiple cycles, V2 was never really viable until the Liberty release.
>> > Between issues with V2 and other components, and then the version
>> discovery
>> > issues that broke some things; I think we should reset the deprecation
>> > clock so to speak.
>> >
>> > It was only in the last milestone of Liberty that folks finally got
>> > everything updated and talking V2.  Not to mention the patch to switch
>> the
>> > default in devstack just landed (where everything uses it including
>> Nova).
>> >
>> > To summarize, absolutely NO to removing V1 in Mitaka, and I think
>> resetting
>> > the deprecation clock is the most reasonable course of action here.
>> >
>> > Thanks,
>> > John
>>
>> I agree with John, regardless of the fact that the deprecation period
>> has expired I think it would be safer to keep it a little longer.
>>
>> One possibility is to leave it as it is for Mitaka and for N disable it
>> by default in cinder.conf like Ivan suggests.
>>
>> Cheers,
>> Gorka.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/4740e1e1/attachment-0001.html>

From emilien at redhat.com  Tue Sep 29 16:13:06 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 29 Sep 2015 12:13:06 -0400
Subject: [openstack-dev] [puppet] weekly meeting #53
In-Reply-To: <56093337.3060905@redhat.com>
References: <56093337.3060905@redhat.com>
Message-ID: <560AB892.8020607@redhat.com>



On 09/28/2015 08:31 AM, Emilien Macchi wrote:
> Hello!
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150929
> 
> Feel free to add any additional items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.
> 

We did our weekly meeting, and you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-29-15.00.html

Thanks,
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/786e02c0/attachment.pgp>

From emilien at redhat.com  Tue Sep 29 16:19:26 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 29 Sep 2015 12:19:26 -0400
Subject: [openstack-dev] [puppet] Fwd: Action required:
 stackforge/puppet-openstack project move
In-Reply-To: <20150928004405.GS4731@yuggoth.org>
References: <E1ZfazO-0004NC-FO@lists.openstack.org>
 <560883B8.6070704@gmail.com>
 <CAHr1CO89VVR-apnOugesXZjkLU-CCKTGE95hDHh+D+1c5p7ZUQ@mail.gmail.com>
 <20150928004405.GS4731@yuggoth.org>
Message-ID: <560ABA0E.6010807@redhat.com>

Just to make it official: we decided today [1] to retire the
stackforge/puppet-openstack Puppet module.

The wiki is updated accordingly.

[1]
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-29-15.00.html

On 09/27/2015 08:44 PM, Jeremy Stanley wrote:
> On 2015-09-27 18:37:59 -0600 (-0600), Matt Fischer wrote:
>> On Sep 27, 2015 6:09 PM, "Emilien Macchi" <emilien.macchi at gmail.com> wrote:
>>>
>>> should we delete it?
>>
>> I'm not sure what value it has anymore, but why not just make it read-only?
> 
> We aren't deleting any repos, just making them read-only and
> committing a change to replace all the files with a README
> indicating the repo is retired (and indicating to look at the HEAD^1
> commit for its prior state).
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/613c2139/attachment.pgp>

From emilien at redhat.com  Tue Sep 29 16:23:59 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 29 Sep 2015 12:23:59 -0400
Subject: [openstack-dev] [puppet] should puppet-neutron manage third
 party software?
In-Reply-To: <56057731.7060109@anteaya.info>
References: <56057027.3090808@redhat.com>
 <DDF6F73E-716D-439A-87B8-158008AD31AA@workday.com>
 <56057731.7060109@anteaya.info>
Message-ID: <560ABB1F.6000701@redhat.com>

My suggestion:

* patch master to send a deprecation warning if third-party repositories
are managed by our current puppet-neutron module.
* stop managing third-party repositories from now on and do not accept any
patch containing this kind of code.
* in the next cycle, consider deleting the legacy code that used to
manage third-party software repos.

Thoughts?

On 09/25/2015 12:32 PM, Anita Kuno wrote:
> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>> Hi There,
>>
>> I just added my comment on the review. I do agree with Emilien. There should be specific repos for plugins and drivers.
>>
>> BTW. I love the sdnmagic name  ;-)
>>
>> Edgar
>>
>>
>>
>>
>> On 9/25/15, 9:02 AM, "Emilien Macchi" <emilien at redhat.com> wrote:
>>
>>> In our last meeting [1], we discussed whether or not to manage
>>> external packaging repositories for Neutron plugin dependencies.
>>>
>>> Current situation:
>>> puppet-neutron installs packages (like neutron-plugin-*) and
>>> configures Neutron plugins (configuration files like
>>> /etc/neutron/plugins/*.ini).
>>> Some plugins (Cisco) do more: they install third-party packages
>>> (not part of OpenStack) from external repos.
>>>
>>> The question is: should we continue that way and accept that kind of
>>> patch [2]?
>>>
>>> I vote for no: managing external packages & external repositories should
>>> be up to an external module.
>>> Example: my SDN tool is called "sdnmagic":
>>> 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>>> configure the .ini file(s) to make it work in Neutron
>>> 2/ create puppet-sdnmagic that will take care of everything else:
>>> install sdnmagic, manage packaging (and specific dependencies),
>>> repositories, etc.
>>> I'm -1 on puppet-neutron handling it. We are not managing SDN solutions:
>>> we are enabling puppet-neutron to work with them.
>>>
>>> I would like to find a consensus here, that will be consistent across
>>> *all plugins* without exception.
>>>
>>>
>>> Thanks for your feedback,
>>>
>>> [1] http://goo.gl/zehmN2
>>> [2] https://review.openstack.org/#/c/209997/
>>> -- 
>>> Emilien Macchi
>>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> I think the data point provided by the Cinder situation needs to be
> considered in this decision: https://bugs.launchpad.net/manila/+bug/1499334
> 
> The bug report outlines the issue, but the tl;dr is that one Cinder
> driver changed their licensing on a library required to run in tree code.
> 
> Thanks,
> Anita.
> 
> 

-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/13c09392/attachment.pgp>

From eharney at redhat.com  Tue Sep 29 16:26:19 2015
From: eharney at redhat.com (Eric Harney)
Date: Tue, 29 Sep 2015 12:26:19 -0400
Subject: [openstack-dev] [cinder] snapshot and cloning for NFS backend
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A6D9@MAIL703.KDS.KEANE.COM>
References: <E1FB4937BE24734DAD0D1D4E4E506D7890D1A40E@MAIL703.KDS.KEANE.COM>
 <20150929120736.GA24920@gmx.com>
 <E1FB4937BE24734DAD0D1D4E4E506D7890D1A656@MAIL703.KDS.KEANE.COM>
 <E1FB4937BE24734DAD0D1D4E4E506D7890D1A6D9@MAIL703.KDS.KEANE.COM>
Message-ID: <560ABBAB.3090302@redhat.com>

On 09/29/2015 09:05 AM, Kekane, Abhishek wrote:
> Hi Sean,
> 
> The author of the spec is Eric Harney; if he is OK with it then I will submit the patch for moving the spec to Mitaka.
> 
> Thank you,
> 
> Abhishek Kekane
> 

I saw that go by, thanks.

Note that while the work outlined there is sufficient, to get this
feature fully robust and polished we also need to get some attention on
this spec:

https://review.openstack.org/#/c/165393/

(I'll update it to move it to the mitaka directory as well I suppose.)

> -----Original Message-----
> From: Kekane, Abhishek [mailto:Abhishek.Kekane at nttdata.com] 
> Sent: 29 September 2015 17:47
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend
> 
> Hi Sean,
> 
> Sure I will submit a patch to add this spec in Mitaka.
> 
> Thank you,
> 
> Abhishek Kekane
> 
> -----Original Message-----
> From: Sean McGinnis [mailto:sean.mcginnis at gmx.com]
> Sent: 29 September 2015 17:38
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] snapshot and cloning for NFS backend
> 
> On Tue, Sep 29, 2015 at 06:26:06AM +0000, Kekane, Abhishek wrote:
>> Hi Devs,
>>
>> The cinder-specs [1] for snapshot and cloning for the NFS backend, submitted by Eric, were approved in Kilo, but due to a nova issue [2] they were not implemented in Kilo or Liberty.
>> I am discussing this nova bug with the nova team to find possible solutions, and Nikola has given some pointers about fixing it in the launchpad bug.
>>
>> This feature is very useful for the NFS backend; if the work should be continued, is there a need to resubmit this spec for approval in Mitaka?
> 
> Thanks for looking at this Abhishek. I would like to see this work continued and completed in Mitaka if at all possible.
> 
> Would you mind submitting a patch to add the spec to Mitaka? I will make sure we get that through and targeted for this release.
> 
> Thanks!
> 
> Sean
> 
>>
>> Please let me know your opinion on the same.
>>
>> [1] https://review.openstack.org/#/c/133074/
>> [2] https://bugs.launchpad.net/nova/+bug/1416132
>>
>>
>> Thanks & Regards,
>>
>> Abhishek Kekane
> 




From mgagne at internap.com  Tue Sep 29 16:31:42 2015
From: mgagne at internap.com (=?UTF-8?Q?Mathieu_Gagn=c3=a9?=)
Date: Tue, 29 Sep 2015 12:31:42 -0400
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
Message-ID: <560ABCEE.7010408@internap.com>

On 2015-09-28 11:43 PM, John Griffith wrote:
> 
> 
> On Mon, Sep 28, 2015 at 6:19 PM, Mark Voelker <mvoelker at vmware.com
> <mailto:mvoelker at vmware.com>> wrote:
> 
>     FWIW, the most popular client libraries in the last user survey[1]
>     other than OpenStack's own clients were: libcloud (48 respondents),
>     jClouds (36 respondents), Fog (34 respondents), php-opencloud (21
>     respondents), DeltaCloud (which has been retired by Apache and
>     hasn't seen a commit in two years, but 17 respondents are still
>     using it), pkgcloud (15 respondents), and OpenStack.NET (14
>     respondents).  Of those:
> 
>     * libcloud appears to support the nova-volume API but not the cinder
>     API:
>     https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251
> 
>     * jClouds appears to support only the v1 API:
>     https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds
> 
>     * Fog also appears to only support the v1 API:
>     https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99
> 
>     * php-opencloud appears to only support the v1 API:
>     https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html
> 
>     * DeltaCloud I honestly haven't looked at since it's thoroughly
>     dead, but I can't imagine it supports v2.
> 
>     * pkgcloud has beta-level support for Cinder but I think it's v1
>     (may be mistaken):
>     https://github.com/pkgcloud/pkgcloud/#block-storage----beta and
>     https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage
> 
>     * OpenStack.NET does appear to support v2:
>     http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm
> 
>     Now, it's anyone's guess whether users of those client
>     libraries actually try to use them for volume operations
>     (anecdotally I know a few clouds I help support are using client
>     libraries that only support v1), and some users might well be using
>     more than one library or mixing in code they wrote themselves.  But
>     most of the above that support cinder do seem to rely on v1.  Some
>     management tools also appear to still rely on the v1 API (such as
>     RightScale:
>     http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html
>     ).  From that perspective it might be useful to keep it around a
>     while longer and disable it by default.  Personally I'd probably
>     lean that way, especially given that folks here on the ops list are
>     still reporting problems too.
> 
>     That said, v1 has been deprecated since Juno, and the Juno release
>     notes said it was going to be removed [2], so there's a case to be
>     made that there's been plenty of fair warning too I suppose.
> 
>     [1]
>     http://superuser.openstack.org/articles/openstack-application-developers-share-insights
>     [2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
> 
>     At Your Service,
> 
>     Mark T. Voelker
> 
> 
> 
>     > On Sep 28, 2015, at 7:17 PM, Sam Morrison <sorrison at gmail.com
>     <mailto:sorrison at gmail.com>> wrote:
>     >
>     > Yeah we're still using v1 as the clients that are packaged with
>     most distros don't support v2 easily.
>     >
>     > Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our
>     'volume' endpoint to point to v2 (we have a volumev2 endpoint too)
>     and the client breaks.
>     >
>     > $ cinder list
>     > ERROR: OpenStack Block Storage API version is set to 1 but you are
>     accessing a 2 endpoint. Change its value through
>     --os-volume-api-version or env[OS_VOLUME_API_VERSION].
>     >
>     > Sam
>     >
>     >
>     >> On 29 Sep 2015, at 8:34 am, Matt Fischer <matt at mattfischer.com
>     <mailto:matt at mattfischer.com>> wrote:
>     >>
>     >> Yes, people are probably still using it. Last time I tried to use
>     V2 it didn't work because the clients were broken, and then it went
>     back on the bottom of my to do list. Is this mess fixed?
>     >>
>     >>
>     http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>     >>
>     >> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny <e0ne at e0ne.info
>     <mailto:e0ne at e0ne.info>> wrote:
>     >> Hi all,
>     >>
>     >> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2
>     API was introduced in Grizzly and the v1 API has been deprecated since Juno.
>     >>
>     >> After [1] is merged, Cinder API v1 is disabled in gates by
>     default. We've filed a bug [2] to remove the Cinder v1 API entirely.
>     >>
>     >>
>     >> According to the Deprecation Policy [3] it looks like we are OK to
>     remove it. But I would like to ask Cinder API users if any still use
>     API v1.
>     >> Should we remove it entirely in the Mitaka release or just disable it
>     by default in cinder.conf?
>     >>
>     >> AFAIR, only Rally doesn't support API v2 now and I'm going to
>     implement it asap.
>     >>
>     >> [1] https://review.openstack.org/194726
>     >> [2] https://bugs.launchpad.net/cinder/+bug/1467589
>     >> [3]
>     http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>     >>
>     >> Regards,
>     >> Ivan Kolodyazhny
>     >>
> 
> 
> My opinion is that even though V1 has technically been deprecated for
> multiple cycles, V2 was never really viable until the Liberty release.
> Between issues with V2 and other components, and then the version
> discovery issues that broke some things, I think we should reset the
> deprecation clock, so to speak.
> 
> It was only in the last milestone of Liberty that folks finally got
> everything updated and talking V2.  Not to mention the patch to switch
> the default in devstack just landed (where everything uses it including
> Nova).
> 
> To summarize, absolutely NO to removing V1 in Mitaka, and I think
> resetting the deprecation clock is the most reasonable course of action
> here.
> 

I agree with John Griffith. I don't have any empirical evidence to back
my "feelings" on this one, but it's true that we weren't able to enable
Cinder v2 until now.

Which makes me wonder: when can we actually deprecate an API version? I
*feel* we are quick to jump on deprecation when the replacement isn't
100% ready yet, sometimes for several versions.

-- 
Mathieu
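
As an aside, the "cinder list" failure Sam quoted earlier in this thread is a client-side guard against a major-version mismatch. Here is a minimal sketch of that kind of check (the helper name is hypothetical; the real logic lives in python-cinderclient):

```python
import re

def check_endpoint_version(requested, endpoint):
    """Refuse to talk to an endpoint whose major API version differs
    from the one requested via --os-volume-api-version or the
    OS_VOLUME_API_VERSION environment variable."""
    # A volume endpoint usually embeds its version in the URL,
    # e.g. http://cloud:8776/v2/<tenant_id>
    match = re.search(r'/v(\d+)(?:\.\d+)?(?:/|$)', endpoint)
    if match and match.group(1) != str(requested):
        raise RuntimeError(
            "OpenStack Block Storage API version is set to %s but you are "
            "accessing a %s endpoint. Change its value through "
            "--os-volume-api-version or env[OS_VOLUME_API_VERSION]."
            % (requested, match.group(1)))

# A v1 client pointed at a v2 endpoint fails fast instead of sending
# requests the server would misinterpret:
# check_endpoint_version(1, 'http://cloud:8776/v2/tenant')  -> RuntimeError
```

This is why repointing the 'volume' endpoint at v2 breaks old clients outright: the version is pinned on the client side rather than negotiated.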


From ihrachys at redhat.com  Tue Sep 29 16:32:12 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Tue, 29 Sep 2015 18:32:12 +0200
Subject: [openstack-dev] [all][stable][release][horizon] 2015.1.2
In-Reply-To: <56091534.9010006@redhat.com>
References: <CANZa-e+LZg0PZgPDrkhgifuZ_BQ6EhTua-420C5K2Z+A8cbPsg@mail.gmail.com>
 <20150924073107.GF24386@sofja.berg.ol>
 <CAGi==UXRm7mARJecBT69qqQMfOycdx_crVf-OCD_x+O9z2J2nw@mail.gmail.com>
 <5604F16A.6010807@redhat.com>
 <50053AD0-B264-4450-A772-E15B21A24506@redhat.com>
 <56091534.9010006@redhat.com>
Message-ID: <3B2EC3CA-39E1-4E63-92BA-8F0F2F5CE18E@redhat.com>

> On 28 Sep 2015, at 12:23, Matthias Runge <mrunge at redhat.com> wrote:
> 
> On 25/09/15 15:39, Ihar Hrachyshka wrote:
> 
>> I see you have three people in the horizon-stable-maint team only.
>> Have you considered expanding the team with more folks? In
>> neutron, we have five people in the stable-maint group.
> 
> Good suggestion!
> 
> In horizon, we just have very few cores who are in charge of supporting
> an installation or a distribution.
> 
> So: if any of you consider yourselves to be a good candidate for
> being a stable reviewer, please speak up!
> 
> For Horizon, I will ping Horizon cores asking them to join
> horizon-stable-maint team.

Note that there is no requirement that stable cores be master cores, or vice versa. For example, I became a stable-maint member on the neutron side first, then became a neutron core after a while.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/ef7fa1ca/attachment.pgp>

From matt at mattfischer.com  Tue Sep 29 16:36:52 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Tue, 29 Sep 2015 10:36:52 -0600
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <560ABCEE.7010408@internap.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <560ABCEE.7010408@internap.com>
Message-ID: <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>

>
>
>
> I agree with John Griffith. I don't have any empirical evidence to back
> my "feelings" on this one, but it's true that we weren't able to enable
> Cinder v2 until now.
>
> Which makes me wonder: when can we actually deprecate an API version? I
> *feel* we are quick to jump on deprecation when the replacement isn't
> 100% ready yet, sometimes for several versions.
>
> --
> Mathieu
>


I don't think it's too much to ask that versions can't be deprecated until
the new version is 100% working, passing all tests, and the clients (at
least python-xxxclients) can handle it without issues. Ideally I'd like to
also throw in the criteria that devstack, rally, tempest, and other
services are all using and exercising the new API.

I agree that things feel rushed.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/f3065a99/attachment.html>

From EGuz at walmartlabs.com  Tue Sep 29 17:11:14 2015
From: EGuz at walmartlabs.com (Egor Guz)
Date: Tue, 29 Sep 2015 17:11:14 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
Message-ID: <D230119F.1DB05%eguz@walmartlabs.com>

definitely ;), but here are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum should only focus on deployment (I feel we will become another Puppet/Chef/Ansible module if we do) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem (Neutron/Cinder/Barbican/etc.), even if we need to step into the Kub/Mesos/Swarm communities for that.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com>> wrote:


+1

From: Tom Cammann <tom.cammann at hpe.com<mailto:tom.cammann at hpe.com>>
Reply-To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months: to completely deprecate the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very difficult and probably a wasted effort to try to consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <EGuz at walmartlabs.com>
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
________________________________



Also I believe docker compose is just a command line tool which doesn't have any API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes container in the swarm coe. As far as I know, swarm is only a scheduler of containers, which is like nova in openstack. Docker compose is an orchestration program, which is like heat in openstack. k8s is the combination of scheduler and orchestration. So I think it is better to expose the APIs in compose to users, as they are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From carol.l.barrett at intel.com  Tue Sep 29 17:13:59 2015
From: carol.l.barrett at intel.com (Barrett, Carol L)
Date: Tue, 29 Sep 2015 17:13:59 +0000
Subject: [openstack-dev]  [election] TC] TC Candidacy
Message-ID: <2D352D0CD819F64F9715B1B89695400D5C91B59E@ORSMSX113.amr.corp.intel.com>

I am writing to announce my candidacy for a position on the TC.

I am not your typical candidate, which is one reason that I am asking for your vote. I have a technical background, with over a decade of software development experience. I have a marketing background, with over a decade of Technology, Product and Brand marketing experience. I have a business development background and have been an entrepreneur. I have been a product manager and a software planner. It's this combination of experiences that I have accumulated during my almost 35 years in the technology industry that I believe will enable me to provide unique value as a member of the TC.

I have been an active member in the community since 2013, working with Community teams to understand market-segment needs and/or barriers to their deployment of OpenStack and bringing that information to the Community in the form of specs, blueprints, prototypes and user stories (Win The Enterprise WG, Enterprise WG, Product WG). These teams have brought resources to develop these capabilities (one example is improving upgrade capabilities through versioned objects implementation) and have been able to close gaps resulting in expanding Enterprise deployments of OpenStack.

I believe that a diverse Community will enable us to develop more innovative software that will meet the needs of global markets. I work with both the Diversity Working Group and the Women of OpenStack to help strengthen the diversity of our Community.

As a TC member, I will work to support our success by utilizing my project/program management experience. Specifically, I will:
1)      Tap the collective market knowledge within our Community to identify innovative capabilities that will define the de facto cloud computing platform of the future.
2)      Work to bring projects together to define and implement cross-project capabilities and initiatives.
3)      Support the development of a multi-release roadmap definition and communication process.

As cross-project coordination needs expand and concepts like big tent stretch our capabilities, I believe that my skill-sets, experiences, and values align well with the emerging needs within our community.

I would appreciate the opportunity to further my contributions to our Community and thank you for your consideration.

Carol Barrett


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/b189c054/attachment.html>

From mvoelker at vmware.com  Tue Sep 29 17:32:45 2015
From: mvoelker at vmware.com (Mark Voelker)
Date: Tue, 29 Sep 2015 17:32:45 +0000
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <560ABCEE.7010408@internap.com>
 <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>
Message-ID: <491F2677-6DFD-4FF8-BCA9-1169FF1841B2@vmware.com>





> On Sep 29, 2015, at 12:36 PM, Matt Fischer <matt at mattfischer.com> wrote:
> 
> 
> 
> I agree with John Griffith. I don't have any empirical evidence to back
> my "feelings" on this one, but it's true that we weren't able to enable
> Cinder v2 until now.
> 
> Which makes me wonder: when can we actually deprecate an API version? I
> *feel* we are quick to jump on deprecation when the replacement isn't
> 100% ready yet, sometimes for several versions.
> 
> --
> Mathieu
> 
> 
> I don't think it's too much to ask that versions can't be deprecated until the new version is 100% working, passing all tests, and the clients (at least python-xxxclients) can handle it without issues. Ideally I'd like to also throw in the criteria that devstack, rally, tempest, and other services are all using and exercising the new API.
> 
> I agree that things feel rushed.


FWIW, the TC recently created an assert:follows-standard-deprecation tag.  Ivan linked to a thread in which Thierry asked for input on it, but FYI the final language as it was approved last week [1] is a bit different than originally proposed.  It now requires one release plus 3 linear months of deprecated-but-still-present-in-the-tree as a minimum, and recommends at least two full stable releases for significant features (an entire API version would undoubtedly fall into that bucket).  It also requires that a migration path will be documented.  However to Matt's point, it doesn't contain any language that says specific things like:

In the case of major API version deprecation:
* $oldversion and $newversion must both work with [cinder|nova|whatever]client and openstackclient during the deprecation period.
* It must be possible to run $oldversion and $newversion concurrently on the servers to ensure end users don't have to switch overnight. 
* Devstack uses $newversion by default.
* $newversion works in Tempest/Rally/whatever else.

What it *does* do is require that a thread be started here on openstack-operators [2] so that operators can provide feedback.  I would hope that feedback like "I can't get clients to use it so please don't remove it yet" would be taken into account by projects, which seems to be exactly what's happening in this case with Cinder v1.  =)

I'd hazard a guess that the TC would be interested in hearing about whether you think that plan is a reasonable one (and given that TC election season is upon us, candidates for the TC probably would too).

[1] https://review.openstack.org/#/c/207467/
[2] http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59

At Your Service,

Mark T. Voelker


>  
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From doug at doughellmann.com  Tue Sep 29 17:33:34 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 29 Sep 2015 13:33:34 -0400
Subject: [openstack-dev] Optional Dependencies
In-Reply-To: <CAHjeE=SesyYcOi2HJAcpPGa_MLcnsQtCN-1rRbdh=XY1wKvMWA@mail.gmail.com>
References: <560AA265.6080004@rackspace.com>
 <CAHjeE=SesyYcOi2HJAcpPGa_MLcnsQtCN-1rRbdh=XY1wKvMWA@mail.gmail.com>
Message-ID: <1443547957-sup-1348@lrrr.local>

Excerpts from Brant Knudson's message of 2015-09-29 09:45:53 -0500:
> On Tue, Sep 29, 2015 at 9:38 AM, Douglas Mendizábal <
> douglas.mendizabal at rackspace.com> wrote:
> 
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA512
> >
> > Hi openstack-dev,
> >
> > I was wondering what the correct way of handling optional dependencies
> > is? I'm specifically talking about libraries that are only required
> > for a specific driver for example, so they're not technically a hard
> > requirement and the project is expected to function without them when
> > using a driver that does not require the lib.
> >
> >
> We've got some in keystone, so for example the packages for ldap won't get
> installed unless it's requested. One of the reasons we chose this one
> specifically is that the python ldap package requires a C library.
> 
> We set up the different extra package groups in setup.cfg [extras]:
> 
> [1]
> http://git.openstack.org/cgit/openstack/keystone/tree/setup.cfg?h=stable/liberty#n24
> 
> See pbr docs: http://docs.openstack.org/developer/pbr/#extra-requirements
> 
> Here's the tox.ini line so that they're all installed for unit tests:
> http://git.openstack.org/cgit/openstack/keystone/tree/tox.ini?h=stable/liberty#n10
> 
> > I read through the README in openstack/requirements [1] but I didn't
> > see anything about it.

Just to complete the picture, requirements listed as "extras" still need
to be included in the global-requirements list so we can synchronize the
versions everyone uses.

Doug

> >
> >
> > Thanks,
> > Douglas Mendizábal
> >
> > [1]
> > https://git.openstack.org/cgit/openstack/requirements/tree/README.rst
> > -----BEGIN PGP SIGNATURE-----
> > Comment: GPGTools - https://gpgtools.org
> >
> > iQIcBAEBCgAGBQJWCqJlAAoJEB7Z2EQgmLX72D8P/RROP9qT7DRY1jDnbK0Aj/TZ
> > lYujurHS70nXCj/Pw6uqsq41TttmwMdAx85yXwoLv/XBASaFYZ6eT6i0scfHBKAu
> > z5f0IomaJMQDGJ27By/amcE5eMiST5sEW/OCwHyZxdM8zgo3mzX1jIslmFEyPJ0z
> > wSah5DoZZh3J0RfQuBg8MOQgJVZo74KiNRou1uKE82cbVXJzVKjlfn+r7yO9TUtx
> > 9hB/77a8sDFBWI4nXluTP+Dfpy6NSW1kqwwUoDtsZACtrhTDNCDWxUUIjfyBlIKT
> > LdY+oVrhqWSUI/WwCop4+Aim64obaAq5yWPR6fjTlcQ3+iCYbBzzgP/9VOm/+0Nr
> > AGzVbIW7ah2yEDhM0yTymaay8+G1mc+jxhvwAtTxJVIJLcJXdC3XK6b00OFkO2Kt
> > 0dkjx/i8/riP56sb62P2a3heS3gOFqzqzwlh9SD8Omvhot3NkOr2e1QR7Cvjh1le
> > W5U/61vGKxmtv+iIaFXd86CRO46+4UiD1V+T0lKz083J9XuC49nkhyfuMP3ev6lc
> > /qD6uOnbJfyVWKRdf2PkTEe9C8YsXlxEWZ72GFC+u1jvL5K/NATUkLLWmGuv/JH+
> > tPyAOPISKHh44mhJqM/K37NvJO/TloOhz0a2fW2FV8kOX1V5wVAZiQBSEWtCAI8u
> > 29up4yIgvi13ZkrRb94n
> > =fj9z
> > -----END PGP SIGNATURE-----
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
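
To make the [extras] mechanism above concrete: code that uses an optional dependency typically guards the import and points the user at the extra that provides it. A minimal Python sketch (the helper and error text are hypothetical; keystone's actual guard differs):

```python
import importlib

def optional_import(module_name):
    """Return the module if it is installed, otherwise None."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None

# python-ldap is only present when the operator asked for the extra,
# e.g.: pip install keystone[ldap]
ldap = optional_import('ldap')

def ldap_identity_backend():
    """Fail with an actionable message when the extra is missing."""
    if ldap is None:
        raise RuntimeError(
            "the LDAP identity driver requires python-ldap; "
            "install it with: pip install keystone[ldap]")
    return ldap
```

The [extras] groups in setup.cfg only control what pip installs; a runtime guard like this turns a bare ImportError into an actionable message.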


From rmeggins at redhat.com  Tue Sep 29 17:43:45 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Tue, 29 Sep 2015 11:43:45 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <560A1110.5000209@redhat.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <55F76F5C.2020106@redhat.com>
 <87vbbc2eiu.fsf@s390.unix4.net> <560A1110.5000209@redhat.com>
Message-ID: <560ACDD1.5040901@redhat.com>

On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>
> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil <gilles at redhat.com> writes:
>>
>>> On 15/09/15 06:53, Rich Megginson wrote:
>>>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>>>> Hi,
>>>>>
>>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>>
>>>>>> A. The 'composite namevar' approach:
>>>>>>
>>>>>>      keystone_tenant {'projectX::domainY': ... }
>>>>>>    B. The 'meaningless name' approach:
>>>>>>
>>>>>>     keystone_tenant {'myproject': name='projectX', domain=>'domainY',
>>>>>> ...}
>>>>>>
>>>>>> Notes:
>>>>>>    - Actually using both combined should work too with the domain
>>>>>> supposedly overriding the name part of the domain.
>>>>>>    - Please look at [1] this for some background between the two
>>>>>> approaches:
>>>>>>
>>>>>> The question
>>>>>> -------------
>>>>>> Decide between the two approaches, the one we would like to retain for
>>>>>> puppet-keystone.
>>>>>>
>>>>>> Why it matters?
>>>>>> ---------------
>>>>>> 1. Domain names are mandatory in every user, group or project. Besides
>>>>>> the backward compatibility period mentioned earlier, where no domain
>>>>>> means using the default one.
>>>>>> 2. Long term impact
>>>>>> 3. Both approaches are not completely equivalent, with different
>>>>>> consequences for future usage.
>>>>> I can't see why they couldn't be equivalent, but I may be missing
>>>>> something here.
>>>> I think we could support both.  I don't see it as an either/or situation.
>>>>
>>>>>> 4. Being consistent
>>>>>> 5. Therefore, it is for the community to decide
>>>>>>
>>>>>> Pros/Cons
>>>>>> ----------
>>>>>> A.
>>>>> I think this is the B (meaningless) approach here.
>>>>>
>>>>>>     Pros
>>>>>>       - Easier names
>>>>> That's subjective; creating unique and meaningful names doesn't look
>>>>> easy to me.
>>>> The point is that this allows choice - maybe the user already has some
>>>> naming scheme, or wants to use a more "natural" meaningful name - rather
>>>> than being forced into a possibly "awkward" naming scheme with "::"
>>>>
>>>>    keystone_user { 'heat domain admin user':
>>>>      name => 'admin',
>>>>      domain => 'HeatDomain',
>>>>      ...
>>>>    }
>>>>
>>>>    keystone_user_role {'heat domain admin user@::HeatDomain':
>>>>      roles => ['admin']
>>>>      ...
>>>>    }
>>>>
>>>>>>     Cons
>>>>>>       - Titles have no meaning!
>>>> They have meaning to the user, not necessarily to Puppet.
>>>>
>>>>>>       - Cases where 2 or more resources could exist
>>>> This seems to be the hardest part - I still cannot figure out how to use
>>>> "compound" names with Puppet.
>>>>
>>>>>>       - More difficult to debug
>>>> More difficult than it is already? :P
>>>>
>>>>>>       - Titles mismatch when listing the resources (self.instances)
>>>>>>
>>>>>> B.
>>>>>>     Pros
>>>>>>       - Unique titles guaranteed
>>>>>>       - No ambiguity between resource found and their title
>>>>>>     Cons
>>>>>>       - More complicated titles
>>>>>> My vote
>>>>>> --------
>>>>>> I would love to have approach A for easier names.
>>>>>> But I've seen the challenge of maintaining the providers behind the
>>>>>> curtains, and the confusion it creates with names/titles and when not sure
>>>>>> about the domain we're dealing with.
>>>>>> Also I believe that supporting self.instances consistently with
>>>>>> meaningful names is saner.
>>>>>> Therefore I vote B
>>>>> +1 for B.
>>>>>
>>>>> My view is that this should be the advertised way, but the other method
>>>>> (meaningless) should be there if the user needs it.
>>>>>
>>>>> So as far as I'm concerned the two idioms should co-exist.  This would
>>>>> mimic what is possible with all puppet resources.  For instance you can:
>>>>>
>>>>>     file { '/tmp/foo.bar': ensure => present }
>>>>>
>>>>> and you can
>>>>>
>>>>>     file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>>>>>
>>>>> The two refer to the same resource.
>>>> Right.
>>>>
>>> I disagree: using the name for the title is not creating a composite
>>> name. The latter requires adding at least one other parameter to be part
>>> of the title.
>>>
>>> Also, in the case of the file resource, a path/filename is a unique name,
>>> which is not the case for an OpenStack user, which might exist in several
>>> domains.
>>>
>>> I actually added the meaningful name case in:
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>>>
>>> But that doesn't work very well because without adding the domain to the
>>> name, the following fails:
>>>
>>> keystone_tenant {'project_1': domain => 'domain_A', ...}
>>> keystone_tenant {'project_1': domain => 'domain_B', ...}
>>>
>>> And adding the domain makes it a de-facto 'composite name'.
>> I agree that my example is not similar to what the keystone provider has
>> to do.  What I wanted to point out is that users in puppet are used to
>> having this kind of *interface*: one where you put something
>> meaningful in the title and one where you put something meaningless.
>> The fact that the meaningful one is a compound one shouldn't matter to
>> the user.
>>
> There is a big blocker to making use of the domain name as a parameter.
> The issue is the limitation of autorequire.
>
> Because autorequire doesn't support any parameter other than the
> resource type, and expects the resource title (or a list of titles) [1].
>
> So if, for instance, keystone_user requires the tenant project1 from
> domain1, then the resource title must be 'project1::domain1', because
> otherwise there is no way to specify 'domain1':
>
> autorequire(:keystone_tenant) do
>    self[:tenant]
> end

Not exactly.  See https://review.openstack.org/#/c/226919/

For example::

     keystone_tenant {'some random tenant':
       name   => 'project1',
       domain => 'domain1'
     }
     keystone_user {'some random user':
       name   => 'user1',
       domain => 'domain1'
     }

How does keystone_user_role need to be declared such that the 
autorequire for keystone_user and keystone_tenant work?

     keystone_user_role {'some random user@some random tenant': ...}

In this case, I'm assuming this will work

   autorequire(:keystone_user) do
     self[:name].rpartition('@').first
   end
   autorequire(:keystone_tenant) do
     self[:name].rpartition('@').last
   end

The keystone_user require will be on 'some random user' and the 
keystone_tenant require will be on 'some random tenant'.
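That title parsing can be sanity-checked in plain Ruby (the title string
here is just the example above):

```ruby
# rpartition('@') splits on the LAST '@', yielding the user title and the
# tenant title from a composite '<user title>@<tenant title>' string.
title = 'some random user@some random tenant'

user_title   = title.rpartition('@').first
tenant_title = title.rpartition('@').last

puts user_title    # some random user
puts tenant_title  # some random tenant
```

Because rpartition splits on the last occurrence, a user title that itself
contains '@' still parses correctly as long as the tenant title does not.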

So it should work, but _you have to be absolutely consistent in using 
the title everywhere_.  That is, once you have chosen to give something 
a title, you must use that title everywhere: in autorequires (as 
described above), in resource references (e.g. Keystone_user['some 
random user'] ~> Service['myservice']), and anywhere the resource will 
be referenced by its title.


>
> Alternatively, as Sofer suggested (in a discussion we had), we could
> poke the catalog to retrieve the corresponding resource(s).

That is another question I posed in 
https://review.openstack.org/#/c/226919/:

I guess we can look up the user resource and tenant resource from the 
catalog based on the title?  e.g.

     user = puppet.catalog.resource.find(:keystone_user, 'some random user')
     userid = user[:id]
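As a rough mock-up in plain Ruby, with a hash standing in for the catalog
and made-up ids (the real catalog API may differ):

```ruby
# Hypothetical stand-in for the catalog: resources keyed by [type, title],
# mirroring the lookup sketched above.
catalog = {
  [:keystone_user,   'some random user']   => { id: 'u-123', name: 'user1' },
  [:keystone_tenant, 'some random tenant'] => { id: 't-456', name: 'project1' },
}

user   = catalog[[:keystone_user, 'some random user']]
userid = user[:id]
```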

> Unfortunately, unless there is a way around it, that doesn't work because,
> no matter what, autorequire wants a title.

Which I think we can provide.

The other tricky parts will be self.instances and self.prefetch.

I think self.instances can continue to use the 'name::domain' naming 
convention, since it needs some way to create a unique title for all 
resources.
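As a rough illustration of why that convention yields unique titles
(sample data, plain Ruby):

```ruby
# Two tenants share a bare name but live in different domains; the
# 'name::domain' convention keeps the self.instances titles unique.
tenants = [
  { name: 'project1', domain: 'domain_A' },
  { name: 'project1', domain: 'domain_B' },
]

titles = tenants.map { |t| "#{t[:name]}::#{t[:domain]}" }
puts titles.inspect  # ["project1::domain_A", "project1::domain_B"]
```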

The real work will be in self.prefetch, which will need to compare all 
of the parameters/properties to see if a resource declared in a manifest 
matches exactly a resource found in Keystone. In this case, we may have 
to 'rename' the resource returned by self.instances to make it match the 
one from the manifest so that autorequires and resource references 
continue to work.
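A minimal sketch of that matching logic (hypothetical data and helper
name, not the actual provider code):

```ruby
# Prefetch-style matching: find the discovered resource whose properties
# match the declared one, then "rename" it to the declared title so
# autorequires and resource references keep working.
def match_declared(declared, discovered)
  discovered.find do |d|
    d[:name] == declared[:name] && d[:domain] == declared[:domain]
  end
end

declared   = { title: 'some random tenant', name: 'project1', domain: 'domain1' }
discovered = [
  { title: 'project1::domain1', name: 'project1', domain: 'domain1' },
  { title: 'project2::domain1', name: 'project2', domain: 'domain1' },
]

found = match_declared(declared, discovered)
found[:title] = declared[:title] if found  # rename to the manifest's title
```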

>
>
> So it seems for the scoped domain resources, we have to stick together
> the name and domain: '<name>::<domain>'.
>
> [1]
> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type.rb#L2003
>
>>>>> But, if that's indeed not possible to have them both,
>>> There are cases where having both won't be possible, like the trusts, but
>>> why not for the resources that support it.
>>>
>>> That said, I think we need to make a choice, at least to get started, to
>>> have something working, consistently, besides exceptions. Other options
>>> to be added later.
>> So we should go with the meaningful one first for consistency, I think.
>>
>>>>> then I would keep only the meaningful name.
>>>>>
>>>>>
>>>>> As a side note, someone raised an issue about the delimiter being
>>>>> hardcoded to "::".  This could be a property of the resource.  This
>>>>> would enable the user to use weird names with "::" in them and assign a "/"
>>>>> (for instance) to the delimiter property:
>>>>>
>>>>>     Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/", ... }
>>>>>
>>>>> bar::is::cool is the name of the domain and foo::blah is the project.
>>>> That's a good idea.  Please file a bug for that.
>>>>
>>>>>> Finally
>>>>>> ------
>>>>>> Thanks for reading that far!
>>>>>> To choose, please provide feedback with more pros/cons, examples and
>>>>>> your vote.
>>>>>>
>>>>>> Thanks,
>>>>>> Gilles
>>>>>>
>>>>>>
>>>>>> PS:
>>>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From mordred at inaugust.com  Tue Sep 29 17:54:29 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 29 Sep 2015 12:54:29 -0500
Subject: [openstack-dev] [election][TC] TC Candidacy
Message-ID: <560AD055.1070201@inaugust.com>

Hi!

I would like to continue serving on the TC, if you'll have me.

Although past performance is no guarantee of future profits, I think 
it's worth noting that I've been doing this for a while now. I was on 
the precursor to the TC, the Project Policy Board. Over that time I've 
been the instigator of or a key player in several major initiatives, such 
as Stackforge, the Project Testing Interface, and The Big Tent.

Thing is, that really doesn't matter, because while understanding the 
past is important if you want to avoid re-learning the same lessons, we 
need to be firmly focused on the future, and to be willing to make 
changes as needed to accommodate the reality we find ourselves in.

I think it's time for the TC to take a more active position on technical
design issues.

This past cycle, Sean and Anne wrote up a spec that came from the last 
summit around standardization of the keystone catalog data. Doug dove in 
to issues around Glance upload. Both are instances where clear technical 
leadership and design was needed, and in both instances we understand 
that it goes hand in hand with being clear to our deployers and end 
users about what it is that we expect via interaction with DefCore.

I want to see more things like that, and I'd like to be involved with 
moving the TC another step down the road from being a "policy board" to 
being a "technical committee".

On the social side, I'd like to work with people on figuring out how to 
expand our capacity for trust across the project. We set up all of our 
systems and culture initially to protect against bad-faith and 
antagonistic behavior - but we've been doing this long enough now that I 
think the assumption of bad and protective behavior is counter 
productive. We're never going to get the big issues fixed if we can't 
land hard patches.

Finally, I think we need to re-think our mission.

    The OpenStack Mission: to produce the ubiquitous Open Source Cloud
    Computing platform that will meet the needs of public and private
    clouds regardless of size, by being simple to implement and
    massively scalable.

That mission is about clouds, and I think it has completely forgotten a 
key ingredient - users. Focusing on meeting the needs of the clouds 
themselves has gotten us to an amazing place, but in order to take the 
next step we have to start putting the consumers of OpenStack Clouds 
front and center in our thinking.

Thank you for the trust you've placed in me so far, and I hope I've 
lived up to it well enough for you to keep me around.

Monty


From sharis at Brocade.com  Tue Sep 29 17:59:36 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Tue, 29 Sep 2015 17:59:36 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <CAJjxPACAfCY5ihZtNFt6SvDih7TZf8BcR8d5BR60_XgDbT5bvQ@mail.gmail.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
 <1443139720841.25541@vmware.com>
 <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPACAfCY5ihZtNFt6SvDih7TZf8BcR8d5BR60_XgDbT5bvQ@mail.gmail.com>
Message-ID: <3ec66c5f81df47a295010045c8b0bc3a@HQ1WP-EXMB12.corp.brocade.com>

I uploaded another copy of this (just to be sure) and verified it with a download and a comparison.

Here is the new link:

http://paloaltan.net/Congress/Congress_Usecases_Sept_25_2015.ova

(login/pass: vagrant/vagrant)
size=3291827200
cksum 562393333 3291827200





From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Monday, September 28, 2015 11:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

When I tried to import the image, it gave me an error.



Could not create the imported medium '/Users/tim/VirtualBox VMs/Congress_Usecases/Congress_Usecases_SEPT_25_2015-disk1.vmdk'.

VMDK: Compressed image is corrupted '/Congress_Usecases_SEPT_25_2015-disk1.vmdk' (VERR_ZIP_CORRUPTED).



Tim



On Fri, Sep 25, 2015 at 3:38 PM Shiv Haris <sharis at brocade.com> wrote:
Thanks Alex, Zhou,

I get errors from Congress when I do a re-join. These errors seem to be due to the order in which the services are coming up. Hence I still depend on running stack.sh after the VM is up and running. Please try out the new VM, and advise if you need to add any of your use cases. Also, re-join starts 'screen'; do we expect the end user to know how to use 'screen'?

I do understand that running 'stack.sh' takes time, but it does not do things that appear to be any kind of magic, which we want to avoid in order to get the user excited.

I have uploaded a new version of the VM please experiment with this and let me know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:ayip at vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling tempest.  So, I think it uses the loopback IP address, and that does not change, so rejoin-stack.sh works without a network at all.



- Alex





________________________________
From: Zhou, Zhenzan <zhenzan.zhou at intel.com>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

rejoin-stack.sh works only if the VM's IP has not changed, so using a NAT network and a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM, which takes a minute or so.  Then I run rejoin-stack.sh, which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and OpenStack state that were running before.



- Alex





________________________________
From: Shiv Haris <sharis at Brocade.com>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time consuming) when the VM is restarted.

The alternative is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward. It involves modifying the .vbox file and seems prone to user error. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions.

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you posed; however, I am still working on some of the subtler issues raised. Once I have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB, but the OVA compresses the image and disk to 3GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up the VM. I agree a snapshot will be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your usecases; I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first usecase; I would prefer it be the same for the other usecases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From philip.schwartz at RACKSPACE.COM  Tue Sep 29 18:07:35 2015
From: philip.schwartz at RACKSPACE.COM (Philip Schwartz)
Date: Tue, 29 Sep 2015 18:07:35 +0000
Subject: [openstack-dev] [election][TC] TC Candidacy
In-Reply-To: <560AD055.1070201@inaugust.com>
References: <560AD055.1070201@inaugust.com>
Message-ID: <E2441074-E6EF-4625-A358-F6345FD18AD0@rackspace.com>

+1

I also completely agree on the idea of re-thinking our mission. The key to OpenStack's future will completely be about the users and supporting them in their effort to use the software at scale.

Philip

> On Sep 29, 2015, at 1:54 PM, Monty Taylor <mordred at inaugust.com> wrote:
> 
> Hi!
> 
> I would like to continue serving on the TC, if you'll have me.
> 
> Although past performance is no guarantee of future profits, I think it's worth noting that I've been doing this for a while now. I was on the precursor to the TC, the Project Policy Board. Over that time I've been the instigator of or a key player in several major initiatives, such as Stackforge, the Project Testing Interface, and The Big Tent.
> 
> Thing is, that really doesn't matter, because while understanding the past is important if you want to avoid re-learning the same lessons, we need to be firmly focused on the future, and to be willing to make changes as needed to accommodate the reality we find ourselves in.
> 
> I think it's time for the TC to take a more active position on technical
> design issues.
> 
> This past cycle, Sean and Anne wrote up a spec that came from the last summit around standardization of the keystone catalog data. Doug dove in to issues around Glance upload. Both are instances where clear technical leadership and design was needed, and in both instances we understand that it goes hand in hand with being clear to our deployers and end users about what it is that we expect via interaction with DefCore.
> 
> I want to see more things like that, and I'd like to be involved with moving the TC another step down the road from being a "policy board" to being a "technical committee".
> 
> On the social side, I'd like to work with people on figuring out how to expand our capacity for trust across the project. We set up all of our systems and culture initially to protect against bad-faith and antagonistic behavior - but we've been doing this long enough now that I think the assumption of bad and protective behavior is counter productive. We're never going to get the big issues fixed if we can't land hard patches.
> 
> Finally, I think we need to re-think our mission.
> 
>   The OpenStack Mission: to produce the ubiquitous Open Source Cloud
>   Computing platform that will meet the needs of public and private
>   clouds regardless of size, by being simple to implement and
>   massively scalable.
> 
> That mission is about clouds, and I think it has completely forgotten a key ingredient - users. Focusing on meeting the needs of the clouds themselves has gotten us to an amazing place, but in order to take the next step we have to start putting the consumers of OpenStack Clouds front and center in our thinking.
> 
> Thank you for the trust you've placed in me so far, and I hope I've lived up to it well enough for you to keep me around.
> 
> Monty
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From amrith at tesora.com  Tue Sep 29 18:16:24 2015
From: amrith at tesora.com (Amrith Kumar)
Date: Tue, 29 Sep 2015 18:16:24 +0000
Subject: [openstack-dev] [election][TC] TC Candidacy
Message-ID: <CY1PR0701MB1193E183D7B66B7BA8C4C80EA04E0@CY1PR0701MB1193.namprd07.prod.outlook.com>

Greetings,

I'm writing to submit my candidacy for election to the OpenStack
Technical Committee.

By way of introduction, my name is Amrith Kumar, I am the Chief
Technology Officer[1] at Tesora[2], a company focused on
Database-as-a-Service and the OpenStack Trove project.

As OpenStack evolves, so too should the Technical Committee. I believe
that the Technical Committee has a very high representation from the
core (nova, cinder, swift, glance, neutron, ...) projects, and there
needs to be more representation of the other projects in OpenStack
that are not part of this core (trove, sahara, ...). I believe that
the future success of OpenStack depends heavily on the successful
integration of these non-core projects, and the ease with which people
can deploy the core and non-core components of OpenStack.

If elected, I would bring a different perspective to the TC, one that
would broaden the representation of the non-core projects on the
Technical Committee.

Here is a quick summary of what I do relative to OpenStack, and why I
believe that you should consider my candidacy for the TC:

   - active contributor and member of the core review team for Trove,
     and spend all of my time on upstream development activities
     related to OpenStack Trove. I am also a periodic contributor to
     Oslo and the Trove liaison to Oslo

   - authored the book on OpenStack Trove[3] along with Doug Shelley,
     a co-worker and contributor to the Trove project

   - OpenStack evangelism and outreach activities include

          - Frequent participant and speaker at a variety of technical
            events where I have spoken about OpenStack in general, and
            sometimes Trove in particular

          - Frequent participant in the Boston OpenStack Meetup,
            presenter at a number of OpenStack Meetups all over the
            country

          - Actively promoting an initiative called "OpenStack in the
            Classroom" [4] that I shared on the mailing list several
            months ago. I have had good luck promoting it to a number
            of colleges in the Boston area

    - chosen by the OpenStack Foundation to serve as part of a global
      committee of OpenStack experts developing a list of Domains and
      Tasks essential for OpenStack administration professionals.

I believe firmly that the long term success of OpenStack depends on
the ease with which users are able to deploy and bring a cloud into
production. The big tent proposal was a very important step in that
direction but much remains to be done.

It is essential that we make it easier for users to deploy and use
other projects that are not part of the "core", projects like Trove,
Sahara, Magnum, and so on.

A current example of the kind(s) of issues that I believe to exist
relates to the creation of protected 'service resources' that make it
possible for services to provision servers and storage in tenant
context while making it impossible for tenants to manipulate these
resources other than through the service that provisioned them
[5],[6].

This is an example of a specific initiative that I have been working
with members of the community to address.

This is a feature that will have to be provided by the core projects
and the beneficiaries are the non-core projects that request these
'service resources'.

I believe that it is imperative that the Technical Committee take a
leadership role in addressing these kinds of issues, as it did in the
big-tent approach.

The activities of the TC align closely with my own skills and past
experience, which include architecting and implementing complex
computer architectures with flexible and extensible APIs, and then
working with diverse teams to deliver the software that implements
them. I work very effectively as a facilitator of teams and in
helping groups of people work together.

I believe that I will be able to work with the rest of the team and
drive this kind of activity to successful completion. I am committed
to the success of OpenStack and would like to contribute further as a
member of the Technical Committee.

I look forward to your support in the upcoming election, and I thank
the outgoing members of the TC for their hard work and the solid
foundation that they have laid for future committees.

Here are some more links:

     Official Candidacy: https://review.openstack.org/#/c/229073/
     Review history: https://review.openstack.org/#/q/reviewer:9664,n,z
     Commit history: https://review.openstack.org/#/q/owner:9664,n,z
     Stackalytics: http://stackalytics.com/?user_id=amrith&release=all
     Foundation: http://www.openstack.org/community/members/profile/15733
     Freenode: amrith
     LinkedIn: http://www.linkedin.com/in/amrith

Thank you,

-amrith

[1] http://www.tesora.com/people/amrith-kumar/
[2] http://www.tesora.com/
[3] http://www.apress.com/9781484212226
[4] http://openstack.markmail.org/thread/ukckrielspbrjejp
[5] https://review.openstack.org/#/c/186357/
[6] https://review.openstack.org/#/c/203880/



From harlowja at outlook.com  Tue Sep 29 18:39:16 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Tue, 29 Sep 2015 11:39:16 -0700
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
 service
In-Reply-To: <560AA07C.5090305@mirantis.com>
References: <560AA07C.5090305@mirantis.com>
Message-ID: <BLU437-SMTP31EFC86626162C14502CE4D84E0@phx.gbl>

mhorban wrote:
>  > Excerpts from Josh's message:
>
>  >> So a few 'event' like constructs/libraries that I know about:
>  >>
>  >>
> http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.notifier.Notifier
>
>  >>
>  >>
>  >> I'd be happy to extract that and move to somewhere else if needed, it
>  >> provides basic event/pub/sub kind of activities for taskflow
> (in-memory,
>  >> not over rpc...)
>
> I've investigated several event libraries, and chose taskflow because,
> first of all, it fits all our requirements and it is already used in
> OpenStack.

Very cool, will check more of that review out,

Although if we are going to go forward with this it's probably a good 
idea to split that notification class/code out of taskflow and into its 
own tiny library, so that taskflow and oslo.service can use it (this is 
how https://github.com/openstack/automaton and 
https://github.com/openstack/futurist came into being). That avoids 
having to bring in all of taskflow when you are using just *one* of its 
types/classes (and aren't really using the rest of taskflow).

>
>
>  > Excerpts from Doug's message
>
>  >> We probably want the ability to have multiple callbacks. There are
>  >> already a lot of libraries available on PyPI for handling "events" like
>  >> this, so maybe we can pick one of those that is well maintained and
>  >> integrate it with oslo.service?
>
> I've created a raw review in oslo.service:
> https://review.openstack.org/#/c/228892/
> I've used the taskflow library (as Josh proposed).
> By default I added one handler that reloads global configuration.
> What do you think about such implementation?
>
> Marian
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From maishsk at maishsk.com  Tue Sep 29 19:09:38 2015
From: maishsk at maishsk.com (Maish Saidel-Keesing)
Date: Tue, 29 Sep 2015 22:09:38 +0300
Subject: [openstack-dev] [Election] [TC] TC Candidacy
Message-ID: <560AE1F2.2040300@maishsk.com>

Hello to you all.

I would like to propose myself as a candidate for the Technical 
Committee for
the upcoming term. The reasons for running in the last election [1]
are still relevant for this election of the TC.

Since the last election my involvement in OpenStack has increased with a
spotlight on the Operators aspect of the community:

     - Focusing on the ops-tags-team[2] helping to create tags with the 
intent of
        creating information relevant to Operators.
     - Helping to vet and review submissions to the OpenStack Planet[3] and
        contributing as a core in openstack-planet-core.
     - Participating in the Item Writing Committee of the first Foundation
        initiative for the inaugural OpenStack Certification Exam Program.

As an OpenStack community we have made some huge steps in the right 
direction,
and are bringing more and more of the Operator and User community into our
midst. Operators and Users should also be represented in the
Technical Committee.

It is my hope that the electorate accept that there is a huge benefit,
and also a clear need, to have representation from all aspects of 
OpenStack, not
only from the developer community. When this happens - the disconnect (and
sometimes tension) that we have between these different parts will cease 
to be
an issue and we as a community will continue to thrive and grow.
In order to finally bridge this gap, it is time to open the ranks, bring an
Operator into the TC and to become truly inclusive.

I humbly ask for your selection so that I may represent Operators in the
Technical Committee for the next term.

Thank you for your consideration.

Some more information about me:
OpenStack profile: https://www.openstack.org/community/members/profile/15265
Reviews: 
https://review.openstack.org/#/q/reviewer:%22Maish+Saidel-Keesing%22,n,z
IRC: maishsk
Blog: http://technodrone.blogspot.com

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062372.html
[2] 
https://review.openstack.org/#/q/status:open+project:stackforge/ops-tags-team,n,z
[3] https://wiki.openstack.org/wiki/AddingYourBlog

-- 
Best Regards,
Maish Saidel-Keesing


From doug at doughellmann.com  Tue Sep 29 19:25:57 2015
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 29 Sep 2015 15:25:57 -0400
Subject: [openstack-dev] [oslo][oslo.config] Reloading configuration of
	service
In-Reply-To: <BLU437-SMTP31EFC86626162C14502CE4D84E0@phx.gbl>
References: <560AA07C.5090305@mirantis.com>
 <BLU437-SMTP31EFC86626162C14502CE4D84E0@phx.gbl>
Message-ID: <1443554721-sup-7256@lrrr.local>

Excerpts from Joshua Harlow's message of 2015-09-29 11:39:16 -0700:
> mhorban wrote:
> >  > Excerpts from Josh's message:
> >
> >  >> So a few 'event' like constructs/libraries that I know about:
> >  >>
> >  >>
> > http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.notifier.Notifier
> >
> >  >>
> >  >>
> >  >> I'd be happy to extract that and move to somewhere else if needed, it
> >  >> provides basic event/pub/sub kind of activities for taskflow
> > (in-memory,
> >  >> not over rpc...)
> >
> > I've investigated several event libraries...And chose taskflow because
> > first of all it fits all our requirements and it is already used in
> > openstack.
> 
> Very cool, will check more of that review out,
> 
> Although if we are going to go forward with this it's probably a good 
> idea to split that notification class/code out of taskflow and into its 
> own tiny library, so that taskflow and oslo.service can use it (this is 
> how https://github.com/openstack/automaton and 
> https://github.com/openstack/futurist came into being). That avoids 
> having to bring in all of taskflow when you are using just *one* of its 
> types/classes (and aren't really using the rest of taskflow).

+1 to streamlining

> 
> >
> >
> >  > Excerpts from Doug's message
> >
> >  >> We probably want the ability to have multiple callbacks. There are
> >  >> already a lot of libraries available on PyPI for handling "events" like
> >  >> this, so maybe we can pick one of those that is well maintained and
> >  >> integrate it with oslo.service?
> >
> > I've created a raw review in oslo.service:
> > https://review.openstack.org/#/c/228892/ .
> > I've used the taskflow library (as Josh proposed).
> > By default I added one handler that reloads the global configuration.
> > What do you think about such an implementation?
> >
> > Marian
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

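For context, the reload-on-SIGHUP idea the review above describes can be pictured with a small sketch. The names here (Service, add_reload_handler) are illustrative only, not oslo.service's actual API:

```python
# Hedged sketch: a service keeps a list of reload callbacks and runs
# them when it receives SIGHUP. Illustrative, not oslo.service code.
import signal


class Service:
    def __init__(self):
        self._reload_handlers = []

    def add_reload_handler(self, handler):
        self._reload_handlers.append(handler)

    def _handle_sighup(self, signum, frame):
        for handler in self._reload_handlers:
            handler()

    def start(self):
        # Register the signal handler; a real service would also
        # set up its workers and event loop here.
        signal.signal(signal.SIGHUP, self._handle_sighup)


config = {'debug': False}

def reload_config():
    # Stand-in for re-reading configuration files from disk.
    config['debug'] = True

svc = Service()
svc.add_reload_handler(reload_config)
svc.start()
# Simulate an operator sending SIGHUP to the process:
signal.raise_signal(signal.SIGHUP)
```

Multiple callbacks fall out naturally: each interested component registers its own handler, and the default one (reload the global configuration) is just the first entry in the list.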

From vadivel.openstack at gmail.com  Tue Sep 29 19:36:55 2015
From: vadivel.openstack at gmail.com (Vadivel Poonathan)
Date: Tue, 29 Sep 2015 12:36:55 -0700
Subject: [openstack-dev] [Neutron] Release of a neutron sub-project
Message-ID: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>

Hi,

As per the Sub-Project Release process, I would like to tag and release
the following sub-project as part of the upcoming Liberty release.
The process says to talk to one of the members of the 'neutron-release'
group. I couldn't find a group mail-id for this group, hence I am sending
this email to the dev list.

I have removed the version from setup.cfg and got the patch merged, as
specified in the release process. Could someone from the neutron-release
group make this sub-project release?


ALE Omniswitch
Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
Launchpad: https://launchpad.net/networking-ale-omniswitch
Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch

Thanks,
Vad
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/1a29d14d/attachment.html>

From davanum at gmail.com  Tue Sep 29 20:31:31 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 29 Sep 2015 16:31:31 -0400
Subject: [openstack-dev] [Neutron] Release of a neutron sub-project
In-Reply-To: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>
References: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>
Message-ID: <CANw6fcG00ecoLHyWpKS+N_JpK-NqdpAU_oG2jr6_igx33OPXqw@mail.gmail.com>

Vad,

You can look up the group in gerrit:
https://review.openstack.org/#/admin/groups/150,members

-- Dims

On Tue, Sep 29, 2015 at 3:36 PM, Vadivel Poonathan <
vadivel.openstack at gmail.com> wrote:

> Hi,
>
> As per the Sub-Project Release process, I would like to tag and release
> the following sub-project as part of the upcoming Liberty release.
> The process says to talk to one of the members of the 'neutron-release'
> group. I couldn't find a group mail-id for this group, hence I am sending
> this email to the dev list.
>
> I have removed the version from setup.cfg and got the patch merged,
> as specified in the release process. Could someone from the neutron-release
> group make this sub-project release?
>
>
> ALE Omniswitch
> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
> Launchpad: https://launchpad.net/networking-ale-omniswitch
> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>
> Thanks,
> Vad
> --
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/7420b862/attachment.html>

From slawek at kaplonski.pl  Tue Sep 29 21:00:06 2015
From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdlayBLYXDFgm/FhHNraQ==?=)
Date: Tue, 29 Sep 2015 23:00:06 +0200
Subject: [openstack-dev] Openvswitch agent unit tests
In-Reply-To: <CABARBAamDj5q+LU+vtn8reA4928uOHK-+H_MtgCcaftTYZY=Eg@mail.gmail.com>
References: <20150928194532.GC17980@dell>
 <CABARBAamDj5q+LU+vtn8reA4928uOHK-+H_MtgCcaftTYZY=Eg@mail.gmail.com>
Message-ID: <20150929210005.GB2126@dell>

Hello,

Thanks for your tips. So I should focus more on writing new functional tests
for the OVS agent where they are missing, rather than on unit tests for
it?

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
slawek at kaplonski.pl

On Mon, 28 Sep 2015, Assaf Muller wrote:

> Generally speaking, testing agent methods that interact heavily with the
> system using unit tests provides very little,
> and arguably negative, value to the project. Mocking internal methods and
> asserting that they were called is a
> clear anti-pattern to my mind. In Neutron-land we prefer to test agent code
> with functional tests.
> Since 'functional tests' is a very over-loaded term, what I mean by that is
> specifically running the actual unmocked
> code on the system and asserting the expected behavior.
> 
> Check out:
> neutron/tests/functional/agent/test_ovs_lib
> neutron/tests/functional/agent/test_l2_ovs_agent
> 
> On Mon, Sep 28, 2015 at 3:45 PM, Sławek Kapłoński <slawek at kaplonski.pl>
> wrote:
> 
> > Hello,
> >
> > I'm a new developer who wants to start contributing to Neutron. I have
> > some small experience with Neutron already, but I haven't done anything
> > I could push upstream yet. So I searched for a bug on Launchpad
> > and found one which I took:
> > https://bugs.launchpad.net/neutron/+bug/1285893 and I started
> > checking how I can write new tests (I think it is quite an easy job
> > for a beginner, but maybe I'm wrong).
> > Now I have some questions for you:
> > 1. From the test coverage I can see that, for example, coverage is
> > missing for lines 349-350 in the method _restore_local_vlan_map(self) -
> > should I create a new test and call that method to check that the
> > proper exception is raised? Or is it not necessary at all, and such
> > "one line" coverage gaps do not really need to be checked? Or maybe it
> > should be done in some different way?
> >
> > 2. What about tests for methods like "_local_vlan_for_flat", which is
> > not checked at all? Should a new test be created for such a method, or
> > should it be covered by some different test?
> >
> > Thanks in advance for any advice and tips on how to write such unit
> > tests properly :)
> >
> > --
> > Best regards / Pozdrawiam
> > Sławek Kapłoński
> > slawek at kaplonski.pl
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: Digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/78d48c2a/attachment.pgp>

From xarses at gmail.com  Tue Sep 29 21:45:54 2015
From: xarses at gmail.com (Andrew Woodward)
Date: Tue, 29 Sep 2015 21:45:54 +0000
Subject: [openstack-dev] [puppet] Moving puppet-ceph to the Openstack
	big tent
In-Reply-To: <CAH7C+Pr2rA65O8gm3B1A64fFPctb_-q=jE=a6AHKJcfHNY1OEw@mail.gmail.com>
References: <CAH7C+Pr2rA65O8gm3B1A64fFPctb_-q=jE=a6AHKJcfHNY1OEw@mail.gmail.com>
Message-ID: <CACEfbZjcC7AR3SXNrepxM=KAX64WXCtGxeHhnOKQ-PmCZU73gg@mail.gmail.com>

[I'm cross posting this to the other Ceph threads to ensure that it's seen]

We discussed this on Monday on IRC and again in the puppet-openstack IRC
meeting. The current consensus is that we will move out of the deprecated
stackforge organization and into the openstack one. At this time we will
not be pursuing membership as a formal OpenStack project. This will allow
puppet-ceph to retain its tight relationship with the OpenStack community
and tools for the time being.

On Mon, Sep 28, 2015 at 8:32 AM David Moreau Simard <dms at redhat.com> wrote:

> Hi,
>
> puppet-ceph currently lives in stackforge [1] which is being retired
> [2]. puppet-ceph is also mirrored on the Ceph Github organization [3].
> This version of the puppet-ceph module was created from scratch and
> not as a fork of the (then) upstream puppet-ceph by Enovance [4].
> Today, the version by Enovance is no longer officially maintained
> since Red Hat has adopted the new release.
>
> Being an Openstack project under Stackforge or Openstack brings a lot
> of benefits but it's not black and white, there are cons too.
>
> It provides us with the tools, the processes and the frameworks to
> review and test each contribution to ensure we ship a module that is
> stable and is held to the highest standards.
> But it also means that:
> - We forego some level of ownership back to the Openstack foundation,
> its technical committee and the Puppet Openstack PTL.
> - puppet-ceph contributors will also be required to sign the
> Contributors License Agreement and jump through the Gerrit hoops [5]
> which can make contributing to the project harder.
>
> We have put tremendous efforts into creating a quality module and as
> such it was the first puppet module in the stackforge organization to
> implement not only unit tests but also integration tests with third
> party CI.
> Integration testing for other puppet modules is just now starting to
> take shape using the Openstack CI infrastructure.
>
> In the context of Openstack, RDO already ships with a means to install
> Ceph with this very module, and Fuel will be adopting it soon as well.
> This means the module will benefit from real world experience and
> improvements by the Openstack community and packagers.
> This will help further reinforce that not only Ceph is the best
> unified storage solution for Openstack but that we have means to
> deploy it in the real world easily.
>
> We all know that Ceph is also deployed outside of this context and
> this is why the core reviewers make sure that contributions remain
> generic and usable outside of this use case.
>
> Today, the core members of the project discussed whether or not we
> should move puppet-ceph to the Openstack big tent and we had a
> consensus approving the move.
> We would also like to hear the thoughts of the community on this topic.
>
> Please let us know what you think.
>
> Thanks,
>
> [1]: https://github.com/stackforge/puppet-ceph
> [2]: https://review.openstack.org/#/c/192016/
> [3]: https://github.com/ceph/puppet-ceph
> [4]: https://github.com/redhat-cip/puppet-ceph
> [5]: https://wiki.openstack.org/wiki/How_To_Contribute
>
> David Moreau Simard
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/7d6993b8/attachment.html>

From banveerad at gmail.com  Tue Sep 29 22:06:39 2015
From: banveerad at gmail.com (Banashankar KV)
Date: Tue, 29 Sep 2015 15:06:39 -0700
Subject: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
In-Reply-To: <CAAbQNR=y3paqW=zKU9r18xrGLcRs0BeBe=449WALcD9_TVzHJA@mail.gmail.com>
References: <CABkBM5EFGK2eYXXz=a1TnZKhrZvcxCFPshBtU5WuL7GYGUKTnQ@mail.gmail.com>
 <1442938697.30604.1.camel@localhost>
 <CABkBM5Fpu2rbf=6kb2m9F1GGspfHW5Hizve5HGaHaM-xWWU47w@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C8CAF@EX10MBOX06.pnnl.gov>
 <1442950062.30604.3.camel@localhost>
 <CAAbQNR=sKK=x0V8eAjcerJM+Gqwy-TSe7VLYt1iVdyMrb--moA@mail.gmail.com>
 <CABkBM5Gb=XShbUp=OGjzs15BxOKmvAdc5zoP_XW+vW2jObnxhQ@mail.gmail.com>
 <1442960561.30604.5.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C8FCC@EX10MBOX06.pnnl.gov>
 <CABkBM5HpCdkBNa4FkNh1G2-mbhYWi=AFU3qKO0Fk0-n=QmLp5g@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C9072@EX10MBOX06.pnnl.gov>
 <CABkBM5HMDABjqkXaEVWrXp+psRCieDztY12t5BPTuOcMKwkKtg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C909E@EX10MBOX06.pnnl.gov>
 <1442968743.30604.13.camel@localhost>
 <1A3C52DFCD06494D8528644858247BF01B7C9145@EX10MBOX06.pnnl.gov>
 <CABkBM5GvWpG57HkBHghvH+q7ZK8V8s_oHL2KAfHQdRiuOAcSOg@mail.gmail.com>
 <1A3C52DFCD06494D8528644858247BF01B7C93B5@EX10MBOX06.pnnl.gov>
 <CAAbQNRmXG82C+4VAuuZcY6NRG5eNwQB=aYLe3T00wWAHyC65tQ@mail.gmail.com>
 <EF3B902C-A4BD-4011-9D24-6F6AE2806FAA@parksidesoftware.com>
 <CAAbQNR=y3paqW=zKU9r18xrGLcRs0BeBe=449WALcD9_TVzHJA@mail.gmail.com>
Message-ID: <CABkBM5EGtKOQhGgZ_1YjcyNsnx9WwuHC5-4hjh1F00n60V0JEA@mail.gmail.com>

Hi All,
Have created a BP for the task. Please review it.

https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport


Thanks
Banashankar


On Tue, Sep 29, 2015 at 2:47 AM, Sergey Kraynev <skraynev at mirantis.com>
wrote:

> Guys, my apologies for the delay. Now I can give answers.
>
> Stephen, the Heat meeting is scheduled on Wednesday. (
> https://wiki.openstack.org/wiki/Meetings/HeatAgenda)
>
> The result was really short and clear: use the naming suggested in this
> mail thread, OS::LBaaS::*, and add new resources.
> I personally think that this work requires a separate BP + spec, so some
> corner cases about similar resources may be discussed during review of
> that specification.
>
> Thank you, guys, for raising this idea. We definitely should provide
> new "fresh" resources for users :)
>
> Regards,
> Sergey.
>
> On 25 September 2015 at 01:30, Doug Wiegley <dougwig at parksidesoftware.com>
> wrote:
>
>> Hi Sergey,
>>
>> I agree with the previous comments here. While supporting several APIs at
>> once, with one set of objects, is a noble goal, in this case, the object
>> relationships are *completely* different. Unless you want to get into the
>> business of redefining your own higher-level API abstractions in all cases,
>> that general strategy for all things will be awkward and difficult.
>>
>> Some API changes lend themselves well to object reuse abstractions. Some
>> don't. LBaaS v2 is definitely the latter, IMO.
>>
>> What was the result of your meeting discussion?  (*goes to grub around in
>> eavesdrop logs after typing this.*)
>>
>> Thanks,
>> doug
>>
>>
>>
>> On Sep 23, 2015, at 12:09 PM, Sergey Kraynev <skraynev at mirantis.com>
>> wrote:
>>
>> Guys, I'm happy that you already discussed it here :)
>> However, I'd like to raise the same question at our Heat IRC meeting.
>> Probably we should define some common concepts, because I think that
>> LBaaS is not the only example of a service with
>> several APIs.
>> I will post update in this thread later (after meeting).
>>
>> Regards,
>> Sergey.
>>
>> On 23 September 2015 at 14:37, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>>
>>> Separate ns would work great.
>>>
>>> Thanks,
>>> Kevin
>>>
>>> ------------------------------
>>> *From:* Banashankar KV
>>> *Sent:* Tuesday, September 22, 2015 9:14:35 PM
>>>
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> LbaasV2
>>>
>>> What you think about separating both of them with the name as Doug
>>> mentioned. In future if we want to get rid of the v1 we can just remove
>>> that namespace. Everything will be clean.
>>>
>>> Thanks
>>> Banashankar
>>>
>>>
>>> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
>>> wrote:
>>>
>>>> As I understand it, loadbalancer in v2 is more like pool was in v1. Can
>>>> we make it such that if you are using the loadbalancer resource and have
>>>> the mandatory v2 properties, it tries to use the v2 API, and otherwise
>>>> it's a v1 resource? PoolMember should be ok being the same. It just needs
>>>> to call v1 or v2 depending on whether the lb it's pointing at is v1 or
>>>> v2. Is monitor's api different between them? Can it be like pool member?
>>>>
>>>> Thanks,
>>>> Kevin
>>>>
>>>> ------------------------------
>>>> *From:* Brandon Logan
>>>> *Sent:* Tuesday, September 22, 2015 5:39:03 PM
>>>>
>>>> *To:* openstack-dev at lists.openstack.org
>>>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>>> LbaasV2
>>>>
>>>> So for the API v1s api is of the structure:
>>>>
>>>> <neutron-endpoint>/lb/(vip|pool|member|health_monitor)
>>>>
>>>> V2s is:
>>>> <neutron-endpoint>/lbaas/(loadbalancer|listener|pool|healthmonitor)
>>>>
>>>> member is a child of pool, so it would go down one level.
>>>>
>>>> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
>>>> is enough of a difference.
>>>>
>>>> Thanks,
>>>> Brandon
>>>> On Tue, 2015-09-22 at 23:48 +0000, Fox, Kevin M wrote:
>>>> > Thats the problem. :/
>>>> >
>>>> > I can't think of a way to have them coexist without: breaking old
>>>> > templates, including v2 in the name, or having a flag on the resource
>>>> > saying the version is v2. And as an app developer I'd rather not have
>>>> > my existing templates break.
>>>> >
>>>> > I haven't compared the api's at all, but is there a required field of
>>>> > v2 that is different enough from v1 that by its simple existence in
>>>> > the resource you can tell a v2 from a v1 object? Would something like
>>>> > that work? PoolMember wouldn't have to change, the same resource could
>>>> > probably work for whatever lb it was pointing at I'm guessing.
>>>> >
>>>> > Thanks,
>>>> > Kevin
>>>> >
>>>> >
>>>> >
>>>> > ______________________________________________________________________
>>>> > From: Banashankar KV [banveerad at gmail.com]
>>>> > Sent: Tuesday, September 22, 2015 4:40 PM
>>>> > To: OpenStack Development Mailing List (not for usage questions)
>>>> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>>> > LbaasV2
>>>> >
>>>> >
>>>> >
>>>> > Ok, sounds good. So now the question is how should we name the new V2
>>>> > resources ?
>>>> >
>>>> >
>>>> >
>>>> > Thanks
>>>> > Banashankar
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
>>>> > wrote:
>>>> >         Yes, hence the need to support the v2 resources as separate
>>>> >         things. Then I can rewrite the templates to include the new
>>>> >         resources rather than the old resources as appropriate. IE, it
>>>> >         will be a porting effort to rewrite them. Then do a heat
>>>> >         update on the stack to migrate it from lbv1 to lbv2. Since
>>>> >         they are different resources, it should create the new and
>>>> >         delete the old.
>>>> >
>>>> >         Thanks,
>>>> >         Kevin
>>>> >
>>>> >
>>>> >         ______________________________________________________________
>>>> >         From: Banashankar KV [banveerad at gmail.com]
>>>> >         Sent: Tuesday, September 22, 2015 4:16 PM
>>>> >
>>>> >         To: OpenStack Development Mailing List (not for usage
>>>> >         questions)
>>>> >         Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support
>>>> >         for LbaasV2
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >         But I think, even though V2 has introduced some new components
>>>> >         and the whole association of the resources with each other has
>>>> >         changed, we should still be able to do what Kevin has mentioned?
>>>> >
>>>> >         Thanks
>>>> >         Banashankar
>>>> >
>>>> >
>>>> >
>>>> >         On Tue, Sep 22, 2015 at 3:39 PM, Fox, Kevin M
>>>> >         <Kevin.Fox at pnnl.gov> wrote:
>>>> >                 There needs to be a way to have both v1 and v2
>>>> >                 supported in one engine....
>>>> >
>>>> >                 Say I have templates that use v1 already in existence
>>>> >                 (I do), and I want to be able to heat stack update on
>>>> >                 them one at a time to v2. This will replace the v1 lb
>>>> >                 with v2, migrating the floating ip from the v1 lb to
>>>> >                 the v2 one. This gives a smoothish upgrade path.
>>>> >
>>>> >                 Thanks,
>>>> >                 Kevin
>>>> >                 ________________________________________
>>>> >                 From: Brandon Logan [brandon.logan at RACKSPACE.COM]
>>>> >                 Sent: Tuesday, September 22, 2015 3:22 PM
>>>> >                 To: openstack-dev at lists.openstack.org
>>>> >                 Subject: Re: [openstack-dev] [neutron][lbaas] - Heat
>>>> >                 support for LbaasV2
>>>> >
>>>> >                 Well I'd hate to have the V2 postfix on it because V1
>>>> >                 will be deprecated
>>>> >                 and removed, which means the V2 being there would be
>>>> >                 lame.  Is there any
>>>> >                 kind of precedent set for how to handle this?
>>>> >
>>>> >                 Thanks,
>>>> >                 Brandon
>>>> >                 On Tue, 2015-09-22 at 14:49 -0700, Banashankar KV
>>>> >                 wrote:
>>>> >                 > So are we thinking of making it as ?
>>>> >                 > OS::Neutron::LoadBalancerV2
>>>> >                 >
>>>> >                 > OS::Neutron::ListenerV2
>>>> >                 >
>>>> >                 > OS::Neutron::PoolV2
>>>> >                 >
>>>> >                 > OS::Neutron::PoolMemberV2
>>>> >                 >
>>>> >                 > OS::Neutron::HealthMonitorV2
>>>> >                 >
>>>> >                 >
>>>> >                 >
>>>> >                 > and add all those into the loadbalancer.py of heat
>>>> >                 engine ?
>>>> >                 >
>>>> >                 > Thanks
>>>> >                 > Banashankar
>>>> >                 >
>>>> >                 >
>>>> >                 >
>>>> >                 > On Tue, Sep 22, 2015 at 12:52 PM, Sergey Kraynev
>>>> >                 > <skraynev at mirantis.com> wrote:
>>>> >                 >         Brandon.
>>>> >                 >
>>>> >                 >
>>>> >                 >         As I understand, v1 and v2 have
>>>> >                 differences also in the list of
>>>> >                 >         objects and also in the relationships between
>>>> >                 them.
>>>> >                 >         So I don't think that it will be easy to
>>>> >                 upgrade old resources
>>>> >                 >         (unfortunately).
>>>> >                 >         I'd agree with second Kevin's suggestion
>>>> >                 about implementation
>>>> >                 >         new resources in this case.
>>>> >                 >
>>>> >                 >
>>>> >                 >         I see that a lot of guys want to help
>>>> >                 with it :) And I
>>>> >                 >         suppose that Rabi Mishra and I may try to
>>>> >                 help with it,
>>>> >                 >         because we were involved in the implementation
>>>> >                 of the v1 resources
>>>> >                 >         in Heat.
>>>> >                 >         Follow the list of v1 lbaas resources in
>>>> >                 Heat:
>>>> >                 >
>>>> >                 >
>>>> >                 >
>>>> >
>>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LoadBalancer
>>>> >                 >
>>>> >
>>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::Pool
>>>> >                 >
>>>> >                 >
>>>> >
>>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::PoolMember
>>>> >                 >
>>>> >                 >
>>>> >
>>>> http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::HealthMonitor
>>>> >                 >
>>>> >                 >
>>>> >                 >
>>>> >                 >         Also, I suppose, that it may be discussed
>>>> >                 during summit
>>>> >                 >         talks :)
>>>> >                 >         Will add to etherpad with potential
>>>> >                 sessions.
>>>> >                 >
>>>> >                 >
>>>> >                 >
>>>> >                 >         Regards,
>>>> >                 >         Sergey.
>>>> >                 >
>>>> >                 >         On 22 September 2015 at 22:27, Brandon Logan
>>>> >                 >         <brandon.logan at rackspace.com> wrote:
>>>> >                 >                 There is some overlap, but there were
>>>> >                 some incompatible
>>>> >                 >                 differences when
>>>> >                 >                 we started designing v2.  I'm sure
>>>> >                 the same issues
>>>> >                 >                 will arise this time
>>>> >                 >                 around so new resources sounds like
>>>> >                 the path to go.
>>>> >                 >                 However, I do not
>>>> >                 >                 know much about Heat and the
>>>> >                 resources so I'm speaking
>>>> >                 >                 on a very
>>>> >                 >                 uneducated level here.
>>>> >                 >
>>>> >                 >                 Thanks,
>>>> >                 >                 Brandon
>>>> >                 >                 On Tue, 2015-09-22 at 18:38 +0000,
>>>> >                 Fox, Kevin M wrote:
>>>> >                 >                 > We're using the v1 resources...
>>>> > > > If the v2 ones are compatible and can seamlessly upgrade, great
>>>> > > >
>>>> > > > Otherwise, make new ones please.
>>>> > > >
>>>> > > > Thanks,
>>>> > > > Kevin
>>>> > > >
>>>> > > > ______________________________________________________________________
>>>> > > > From: Banashankar KV [banveerad at gmail.com]
>>>> > > > Sent: Tuesday, September 22, 2015 10:07 AM
>>>> > > > To: OpenStack Development Mailing List (not for usage questions)
>>>> > > > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>>>> > > >
>>>> > > > Hi Brandon,
>>>> > > > Work in progress, but need some input on the way we want them, like
>>>> > > > replace the existing lbaasv1 or we still need to support them ?
>>>> > > >
>>>> > > > Thanks
>>>> > > > Banashankar
>>>> > > >
>>>> > > > On Tue, Sep 22, 2015 at 9:18 AM, Brandon Logan
>>>> > > > <brandon.logan at rackspace.com> wrote:
>>>> > > >         Hi Banashankar,
>>>> > > >         I think it'd be great if you got this going. One of those
>>>> > > >         things we want to have and people ask for but has always
>>>> > > >         gotten a lower priority due to the critical things needed.
>>>> > > >
>>>> > > >         Thanks,
>>>> > > >         Brandon
>>>> > > >         On Mon, 2015-09-21 at 17:57 -0700, Banashankar KV wrote:
>>>> > > >         > Hi All,
>>>> > > >         > I was thinking of starting the work on heat to support
>>>> > > >         > LBaasV2. Is there any concerns about that?
>>>> > > >         >
>>>> > > >         > I don't know if it is the right time to bring this up :D .
>>>> > > >         >
>>>> > > >         > Thanks,
>>>> > > >         > Banashankar (bana_k)
>>>>
>>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/d0a803d1/attachment.html>

From loic at dachary.org  Tue Sep 29 22:11:28 2015
From: loic at dachary.org (Loic Dachary)
Date: Wed, 30 Sep 2015 00:11:28 +0200
Subject: [openstack-dev] [ceph-users] [puppet] Moving puppet-ceph to the
 Openstack big tent
In-Reply-To: <CACEfbZjcC7AR3SXNrepxM=KAX64WXCtGxeHhnOKQ-PmCZU73gg@mail.gmail.com>
References: <CAH7C+Pr2rA65O8gm3B1A64fFPctb_-q=jE=a6AHKJcfHNY1OEw@mail.gmail.com>
 <CACEfbZjcC7AR3SXNrepxM=KAX64WXCtGxeHhnOKQ-PmCZU73gg@mail.gmail.com>
Message-ID: <560B0C90.1000500@dachary.org>

Good move :-)

On 29/09/2015 23:45, Andrew Woodward wrote:
> [I'm cross posting this to the other Ceph threads to ensure that it's seen]
> 
> We've discussed this on Monday on IRC and again in the puppet-openstack IRC meeting. The current consensus is that we will move from the deprecated stackforge organization to the openstack one. At this time we will not be pursuing membership as a formal OpenStack project. This will allow puppet-ceph to retain its tight relationship with the OpenStack community and tools for the time being.
> 
> On Mon, Sep 28, 2015 at 8:32 AM David Moreau Simard <dms at redhat.com <mailto:dms at redhat.com>> wrote:
> 
>     Hi,
> 
>     puppet-ceph currently lives in stackforge [1] which is being retired
>     [2]. puppet-ceph is also mirrored on the Ceph Github organization [3].
>     This version of the puppet-ceph module was created from scratch and
>     not as a fork of the (then) upstream puppet-ceph by Enovance [4].
>     Today, the version by Enovance is no longer officially maintained
>     since Red Hat has adopted the new release.
> 
>     Being an Openstack project under Stackforge or Openstack brings a lot
>     of benefits, but it's not black and white; there are cons too.
> 
>     It provides us with the tools, the processes and the frameworks to
>     review and test each contribution to ensure we ship a module that is
>     stable and is held to the highest standards.
>     But it also means that:
>     - We forgo some level of ownership to the Openstack foundation,
>     its technical committee and the Puppet Openstack PTL.
>     - puppet-ceph contributors will also be required to sign the
>     Contributors License Agreement and jump through the Gerrit hoops [5],
>     which can make contributing to the project harder.
> 
>     We have put tremendous efforts into creating a quality module and as
>     such it was the first puppet module in the stackforge organization to
>     implement not only unit tests but also integration tests with third
>     party CI.
>     Integration testing for other puppet modules is just now starting to
>     take shape using the Openstack CI infrastructure.
> 
>     In the context of Openstack, RDO already ships with a means to install
>     Ceph with this very module, and Fuel will be adopting it soon as well.
>     This means the module will benefit from real world experience and
>     improvements by the Openstack community and packagers.
>     This will help further reinforce that not only is Ceph the best
>     unified storage solution for Openstack, but that we have the means to
>     deploy it easily in the real world.
> 
>     We all know that Ceph is also deployed outside of this context and
>     this is why the core reviewers make sure that contributions remain
>     generic and usable outside of this use case.
> 
>     Today, the core members of the project discussed whether or not we
>     should move puppet-ceph to the Openstack big tent and we had a
>     consensus approving the move.
>     We would also like to hear the thoughts of the community on this topic.
> 
>     Please let us know what you think.
> 
>     Thanks,
> 
>     [1]: https://github.com/stackforge/puppet-ceph
>     [2]: https://review.openstack.org/#/c/192016/
>     [3]: https://github.com/ceph/puppet-ceph
>     [4]: https://github.com/redhat-cip/puppet-ceph
>     [5]: https://wiki.openstack.org/wiki/How_To_Contribute
> 
>     David Moreau Simard
>     --
>     To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>     the body of a message to majordomo at vger.kernel.org <mailto:majordomo at vger.kernel.org>
>     More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> -- 
> 
> --
> 
> Andrew Woodward
> 
> Mirantis
> 
> Fuel Community Ambassador
> 
> Ceph Community
> 
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/98595628/attachment-0001.pgp>

From stdake at cisco.com  Tue Sep 29 22:20:53 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Tue, 29 Sep 2015 22:20:53 +0000
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
	core reviewer
Message-ID: <D2305CD3.13957%stdake@cisco.com>

Hi folks,

I am proposing Michal for core reviewer.  Consider my proposal as a +1 vote.  Michal has done a fantastic job with rsyslog, has done a nice job overall contributing to the project for the last cycle, and has really improved his review quality and participation over the last several months.

Our process requires 3 +1 votes, with no veto (-1) votes.  If you're uncertain, it is best to abstain :)  I will leave the voting open for 1 week until Tuesday October 6th, or until there is a unanimous decision or a veto.

Regards
-steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/7fbd2c1f/attachment.html>

From zaro0508 at gmail.com  Tue Sep 29 22:30:19 2015
From: zaro0508 at gmail.com (Zaro)
Date: Tue, 29 Sep 2015 15:30:19 -0700
Subject: [openstack-dev] Infra needs Gerrit developers
Message-ID: <CABf-f+oVrWB9A+ZOOpNMZdccJn=Gv95_RpLa8SXM9K6jp75scA@mail.gmail.com>

Hello All,

I believe you are all familiar with Gerrit.  Our community relies on it
quite heavily and it is one of the most important applications in our CI
infrastructure. I work on the OpenStack-infra team and I've been hacking on
Gerrit for a while. I'm the infra team's sole Gerrit developer. I also test
all our Gerrit upgrades prior to infra upgrading Gerrit.  There are many
Gerrit feature and bug fix requests coming from the OpenStack community;
however, due to limited resources it has been a challenge to meet those
requests.

I've been fielding some of those requests and trying to make Gerrit better
for OpenStack.  I was wondering whether there are any other folks in our
community who might also like to hack on a large scale java application
that's being used by many corporations and open source projects in the
world.  If so this is an opportunity for you to contribute.  I'm hoping to
get more OpenStackers involved with the Gerrit community so we can
collectively make OpenStack better.  If you would like to get involved let
the openstack-infra folks know[1] and we will try help get you going.

For instance, our last attempt at upgrading Gerrit failed due to a bug[2]
that makes repos unusable on a diff timeout.  This bug is still not fixed,
so a nice way to contribute is to help us fix things like this so we can
continue to use newer versions of Gerrit.

[1] in #openstack-infra or on openstack-infra at lists.openstack.org
[2] https://code.google.com/p/gerrit/issues/detail?id=3424


Thank You.
- Khai (AKA zaro)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/944d2aff/attachment.html>

From amuller at redhat.com  Tue Sep 29 22:54:16 2015
From: amuller at redhat.com (Assaf Muller)
Date: Tue, 29 Sep 2015 18:54:16 -0400
Subject: [openstack-dev] Openvswitch agent unit tests
In-Reply-To: <20150929210005.GB2126@dell>
References: <20150928194532.GC17980@dell>
 <CABARBAamDj5q+LU+vtn8reA4928uOHK-+H_MtgCcaftTYZY=Eg@mail.gmail.com>
 <20150929210005.GB2126@dell>
Message-ID: <CABARBAZd0+0RJLy9DOrHRi5E5bYxbuU62E95CMsEGMTOtcRm0A@mail.gmail.com>

On Tue, Sep 29, 2015 at 5:00 PM, Sławek Kapłoński <slawek at kaplonski.pl>
wrote:

> Hello,
>
> Thanks for your tips. So I should focus more on writing new functional
> tests for the ovs agent if some are missing, rather than doing unit
> tests for it?
>

If a method is conceivably testable with unit tests (without over-relying
on mock), that is preferable. Failing that, functional tests are the way
to go. The general idea is to test bottom up: lots of unit tests, fewer
functional tests, fewer API/integration/fullstack tests, and even fewer
Tempest scenario tests. In the case of the OVS agent (and other Neutron
agents that interact with the underlying hypervisor) it is difficult to
test the agent effectively with unit tests, which is why I encourage
developers to test via functional, mock-less tests, like the tests I
linked in my previous email.
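
To make the distinction concrete, here is a toy sketch in plain Python (the class and test names are invented for illustration; this is not actual Neutron code). The first test mocks an internal method and asserts it was called, the anti-pattern described above, while the second runs the real, unmocked code and asserts observable behavior.

```python
import unittest
from unittest import mock


class VlanManager:
    """Toy stand-in for agent code; not the real Neutron OVS agent."""

    def __init__(self):
        self.local_vlan_map = {}

    def provision(self, net_id, vlan):
        self._reserve(vlan)
        self.local_vlan_map[net_id] = vlan

    def _reserve(self, vlan):
        # Raise if the vlan is already mapped to another network.
        if vlan in self.local_vlan_map.values():
            raise ValueError("vlan %s already in use" % vlan)


class MockHeavyTest(unittest.TestCase):
    """Anti-pattern: asserts an internal method was called."""

    def test_provision_calls_reserve(self):
        mgr = VlanManager()
        with mock.patch.object(mgr, "_reserve") as reserve:
            mgr.provision("net-1", 100)
        # Brittle: breaks on any refactor, even when behavior is unchanged.
        reserve.assert_called_once_with(100)


class BehaviorTest(unittest.TestCase):
    """Preferred: run the unmocked code and assert observable state."""

    def test_provision_records_mapping(self):
        mgr = VlanManager()
        mgr.provision("net-1", 100)
        self.assertEqual({"net-1": 100}, mgr.local_vlan_map)
        with self.assertRaises(ValueError):
            mgr.provision("net-2", 100)  # vlan 100 is already reserved
```

Run with `python -m unittest <module>`. A real functional test would exercise the actual system underneath (OVS bridges, namespaces) rather than a toy class, but the assertion style is the same: check what the code did, not which internals it called.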


>
> --
> Best regards / Pozdrawiam
> Sławek Kapłoński
> slawek at kaplonski.pl
>
> On Mon, 28 Sep 2015, Assaf Muller wrote:
>
> > Generally speaking, testing agent methods that interact heavily with
> > the system using unit tests provides very little, and arguably
> > negative, value to the project. Mocking internal methods and asserting
> > that they were called is a clear anti-pattern to my mind. In
> > Neutron-land we prefer to test agent code with functional tests.
> > Since 'functional tests' is a very over-loaded term, what I mean by
> > that is specifically running the actual unmocked code on the system
> > and asserting the expected behavior.
> >
> > Check out:
> > neutron/tests/functional/agent/test_ovs_lib
> > neutron/tests/functional/agent/test_l2_ovs_agent
> >
> > On Mon, Sep 28, 2015 at 3:45 PM, Sławek Kapłoński <slawek at kaplonski.pl>
> > wrote:
> >
> > > Hello,
> > >
> > > I'm a new developer who wants to start contributing to neutron. I
> > > have some small experience with neutron already, but I haven't pushed
> > > anything upstream yet. So I searched for a bug on launchpad and found
> > > this one, which I took:
> > > https://bugs.launchpad.net/neutron/+bug/1285893 and I started
> > > checking how I can write new tests (I think it is quite an easy job
> > > for a beginner, but maybe I'm wrong).
> > > Now I have some questions for you:
> > > 1. From test-coverage I can see that there is missing coverage, for
> > > example lines 349-350 in the method _restore_local_vlan_map(self) -
> > > should I create a new test that calls that method to check if the
> > > proper exception is raised? Or is such "one line" missing coverage
> > > not really worth checking? Or should it be done in some different
> > > way?
> > >
> > > 2. What about tests for methods like "_local_vlan_for_flat", which is
> > > not checked at all? Should a new test be created for such a method,
> > > or should it be covered by some different test?
> > >
> > > Thanks in advance for any advice and tips on how to write such unit
> > > tests properly :)
> > >
> > > --
> > > Best regards / Pozdrawiam
> > > Sławek Kapłoński
> > > slawek at kaplonski.pl
> > >
> > >
> > >
> __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/9c0bcebc/attachment.html>

From stdake at cisco.com  Tue Sep 29 23:13:02 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Tue, 29 Sep 2015 23:13:02 +0000
Subject: [openstack-dev] [Election] [TC] TC Candidacy
Message-ID: <D230690D.139D4%stdake@cisco.com>

Hello Peers,


I would be pleased to serve on the Technical Committee with your vote.  My background is highly relevant to serving you on the Technical Committee, in that I have done the following during OpenStack development:


  1.  Led the development, incubation, and integration of Heat, the first project to enter what would become a precursor to the Big Tent.
  2.  Wrote most of the initial commits and contributed heavily to the first Big Tent project, Magnum.
  3.  Tackled the most complex problem OpenStack faces, deploying the Big Tent, by serving as PTL of Kolla, which I have led from the first commit through the Big Tent process.
  4.  Top 10 contributor by person-day effort for Liberty.  Top 1% by person-day effort for Kilo.  Top 1% by person-day effort for the OpenStack project overall.


My experience with Heat, Magnum, and Kolla relates directly to the Technical Committee's charter, which is to provide technical oversight for the OpenStack project.  I am highly interested in the growth of my peers and of the systems and people I involve myself with.  I believe it is one of the core responsibilities of the Technical Committee to provide effective growth management for the OpenStack ecosystem, in partnership with the OpenStack Board.


I can confirm that the original incubation track that Heat executed was highly painful and very challenging.  It was not very inclusive, not documented, and highly subjective.


The new Big Tent process is far improved.  It is almost completely objective and measurable, which in my opinion leads to high growth in individuals and systems during their evaluation.


Most of the work the Technical Committee has done over the last year has been to make OpenStack more objective and measurable by anyone.  I personally feel this change is fantastic and has unstuck the Technical Committee.  This work facilitates the growth of OpenStack in general and new projects in particular.


I want to bring my experience creating new things for OpenStack to bear to create new things for the Technical Committee and further accelerate growth.  I would happily accept your vote in the Technical Committee election.


Regards,

-steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/087a3615/attachment.html>

From klindgren at godaddy.com  Tue Sep 29 23:33:55 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Tue, 29 Sep 2015 23:33:55 +0000
Subject: [openstack-dev] [ops] Operator Local Patches
Message-ID: <9FFC2E05-79FC-4876-A2B9-FD1C2BE1269D@godaddy.com>

Hello All,

We have some pretty good contributions of local patches on the etherpad.  We are now going through them, trying to group patches that multiple people are carrying with patches that people may not be carrying but that solve a problem they are running into.  If you can take some time to either add your own local patches to the etherpad or add +1's next to the patches that are laid out, it would help us immensely.

The etherpad can be found at: https://etherpad.openstack.org/p/operator-local-patches

Thanks for your help!

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Tuesday, September 22, 2015 at 4:21 PM
To: openstack-operators
Subject: Re: Operator Local Patches

Hello all,

Friendly reminder: If you have local patches and haven't yet done so, please contribute to the etherpad at: https://etherpad.openstack.org/p/operator-local-patches

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Friday, September 18, 2015 at 4:35 PM
To: openstack-operators
Cc: Tom Fifield
Subject: Operator Local Patches

Hello Operators!

During the ops meetup in Palo Alto we were talking about sessions for Tokyo. A session that I proposed, which got a bunch of +1's, was about local patches that operators are carrying.  From my experience this is done either to implement business logic, fix assumptions in projects that do not apply to your implementation, implement business requirements that are not yet implemented in openstack, or fix scale-related bugs.  What I would like to do is get a working group together to do the following:

1.) Document local patches that operators have (even those that are in gerrit right now waiting to be committed upstream)
2.) Figure out commonality in those patches
3.) Either upstream the common fixes to the appropriate projects or figure out if a hook can be added to allow people to run their code at that specific point
4.) ????
5.) Profit

To start this off, I have documented every patch, along with a description of what it does and why we did it (where needed), that GoDaddy is running [1].  What I am asking is that the operator community please update the etherpad with the patches that you are running, so that we have a good starting point for discussions in Tokyo and beyond.

[1] - https://etherpad.openstack.org/p/operator-local-patches
___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/5242a229/attachment.html>

From pieter.c.kruithof-jr at hpe.com  Tue Sep 29 23:31:24 2015
From: pieter.c.kruithof-jr at hpe.com (Kruithof, Piet)
Date: Tue, 29 Sep 2015 23:31:24 +0000
Subject: [openstack-dev] [PTL] The OpenStack UX team would like to know
 which roles are relevant to your project
Message-ID: <D2307B6A.18687%pieter.c.kruithof-jr@hp.com>

Dear PTLs,

The OpenStack UX project and Product working group is conducting a persona working session in IBM's design center in Austin from October 5th - 7th. The goal of the workshop is to develop several personas that can be used by the community for creating user stories, planning, etc.  The reason you should care is that it allows the PTLs to have more focused conversations around who will be using a specific feature.

A description of personas can be found at http://www.usability.gov/how-to-and-tools/methods/personas.html

In preparation for the workshop, we are asking the project PTLs to identify several roles that the workshop attendees should spend time building into personas. In this context, a role simply includes a name and description while a persona includes richer data such as motivations, pain points, education as well as a description of their relationship to the other personas.

This should take about five minutes to complete.

https://www.surveymonkey.com/r/H6WS2HS

BTW, you are totally welcome to attend the working session if you happen to be in Austin.  Even if you just want to stop by at lunch for some BBQ. It would be cool to meet you in person.


Piet Kruithof
PTL, OpenStack UX project

"For every complex problem, there is a solution that is simple, neat and wrong."

H. L. Mencken



From me at not.mn  Tue Sep 29 23:43:10 2015
From: me at not.mn (John Dickinson)
Date: Tue, 29 Sep 2015 16:43:10 -0700
Subject: [openstack-dev] [PTL] The OpenStack UX team would like to know
 which roles are relevant to your project
In-Reply-To: <D2307B6A.18687%pieter.c.kruithof-jr@hp.com>
References: <D2307B6A.18687%pieter.c.kruithof-jr@hp.com>
Message-ID: <0A99D805-6F42-40BD-B79C-590D0CFB1B84@not.mn>

There's a ranking from 1 to 19. Which is the higher rank?

--John



On 29 Sep 2015, at 16:31, Kruithof, Piet wrote:

> Dear PTLs,
>
> The OpenStack UX project and Product working group is conducting a persona working session in IBM's design center in Austin from October 5th - 7th. The goal of the workshop is to develop several personas that can be used by the community for creating user stories, planning, etc.  The reason you should care is that it allows the PTLs to have more focused conversations around who will be using a specific feature.
>
> A description of personas can be found at http://www.usability.gov/how-to-and-tools/methods/personas.html
>
> In preparation for the workshop, we are asking the project PTLs to identify several roles that the workshop attendees should spend time building into personas. In this context, a role simply includes a name and description while a persona includes richer data such as motivations, pain points, education as well as a description of their relationship to the other personas.
>
> This should take about five minutes to complete.
>
> https://www.surveymonkey.com/r/H6WS2HS
>
> BTW - you are totally welcome to attend the working session if you happen to be in Austin, even if you just want to stop by at lunch for some BBQ. It would be cool to meet you in person.
>
>
> Piet Kruithof
> PTL, OpenStack UX project
>
> "For every complex problem, there is a solution that is simple, neat and wrong."
>
> H. L. Mencken
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/af4b898e/attachment.pgp>

From martin.andre at gmail.com  Tue Sep 29 23:47:27 2015
From: martin.andre at gmail.com (=?UTF-8?Q?Martin_Andr=C3=A9?=)
Date: Wed, 30 Sep 2015 08:47:27 +0900
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <D2305CD3.13957%stdake@cisco.com>
References: <D2305CD3.13957%stdake@cisco.com>
Message-ID: <CAHD=wRdeU9n+FShy3DPbUuHq166fuA+7JmScWGuewjOqL77NGg@mail.gmail.com>

On Wed, Sep 30, 2015 at 7:20 AM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

> Hi folks,
>
> I am proposing Michal for core reviewer.  Consider my proposal as a +1
> vote.  Michal has done a fantastic job with rsyslog, has done a nice job
> overall contributing to the project for the last cycle, and has really
> improved his review quality and participation over the last several months.
>
> Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
> uncertain, it is best to abstain :)  I will leave the voting open for 1
> week until Tuesday October 6th or until there is a unanimous decision or a
>  veto.
>

+1, without hesitation.

Martin


> Regards
> -steve
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/c5615c24/attachment.html>

From samuel at yaple.net  Wed Sep 30 00:24:01 2015
From: samuel at yaple.net (Sam Yaple)
Date: Tue, 29 Sep 2015 19:24:01 -0500
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <CAHD=wRdeU9n+FShy3DPbUuHq166fuA+7JmScWGuewjOqL77NGg@mail.gmail.com>
References: <D2305CD3.13957%stdake@cisco.com>
 <CAHD=wRdeU9n+FShy3DPbUuHq166fuA+7JmScWGuewjOqL77NGg@mail.gmail.com>
Message-ID: <CAJ3CzQU+NfuZvrBjJ9YLpVcTedTiXv23_eVj2WCSE_5C9UhkwA@mail.gmail.com>

+1 Michal will be a great addition to the Core team.
On Sep 29, 2015 6:48 PM, "Martin André" <martin.andre at gmail.com> wrote:

>
>
> On Wed, Sep 30, 2015 at 7:20 AM, Steven Dake (stdake) <stdake at cisco.com>
> wrote:
>
>> Hi folks,
>>
>> I am proposing Michal for core reviewer.  Consider my proposal as a +1
>> vote.  Michal has done a fantastic job with rsyslog, has done a nice job
>> overall contributing to the project for the last cycle, and has really
>> improved his review quality and participation over the last several months.
>>
>> Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
>> uncertain, it is best to abstain :)  I will leave the voting open for 1
>> week until Tuesday October 6th or until there is a unanimous decision or a
>>  veto.
>>
>
> +1, without hesitation.
>
> Martin
>
>
>> Regards
>> -steve
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/1be98bc1/attachment.html>

From mestery at mestery.com  Wed Sep 30 01:04:27 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Tue, 29 Sep 2015 20:04:27 -0500
Subject: [openstack-dev] [Neutron] Release of a neutron sub-project
In-Reply-To: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>
References: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>
Message-ID: <CAL3VkVws=+ynad4u5HEXiuOsrd92=b1b_vMVijeujdy2u9YB_Q@mail.gmail.com>

On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
vadivel.openstack at gmail.com> wrote:

> Hi,
>
> As per the Sub-Project Release process, I would like to tag and release
> the following sub-project as part of the upcoming Liberty release.
> The process says to talk to one of the members of the 'neutron-release'
> group. I couldn't find a group mail-id for this group, hence I am sending
> this email to the dev list.
>
> I have just removed the version from setup.cfg and got the patch merged,
> as specified in the release process. Can someone from the neutron-release
> group make this sub-project release?
>
>

Vad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there so
I can get your IRC nick in case I have questions.

Thanks!
Kyle


>
> ALE Omniswitch
> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
> Launchpad: https://launchpad.net/networking-ale-omniswitch
> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>
> Thanks,
> Vad
> --
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/f7a3c7b7/attachment.html>

From sean at coreitpro.com  Wed Sep 30 01:29:17 2015
From: sean at coreitpro.com (Sean Collins)
Date: Wed, 30 Sep 2015 01:29:17 +0000
Subject: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver
	support in Devstack
Message-ID: <000001501bde2d9b-f38e2772-475f-444e-beaa-b888146b50ab-000000@email.amazonses.com>

This review was recently abandoned. Can you provide insight as to why?

On September 17, 2015, at 2:30 PM, "Sean M. Collins" <sean at coreitpro.com> wrote:

You need to remove your Workflow-1.

-- 
Sean M. Collins

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ayoung at redhat.com  Wed Sep 30 01:34:05 2015
From: ayoung at redhat.com (Adam Young)
Date: Tue, 29 Sep 2015 21:34:05 -0400
Subject: [openstack-dev] [PTL] The OpenStack UX team would like to know
 which roles are relevant to your project
In-Reply-To: <D2307B6A.18687%pieter.c.kruithof-jr@hp.com>
References: <D2307B6A.18687%pieter.c.kruithof-jr@hp.com>
Message-ID: <560B3C0D.5010508@redhat.com>

On 09/29/2015 07:31 PM, Kruithof, Piet wrote:
> Dear PTLs,
>
> The OpenStack UX project and Product working group is conducting a persona working session in IBM's design center in Austin from October 5th - 7th. The goal of the workshop is to develop several personas that can be used by the community for creating user stories, planning, etc.  The reason you should care is that it allows the PTLs to have more focused conversations around who will be using a specific feature.
>
> A description of personas can be found at http://www.usability.gov/how-to-and-tools/methods/personas.html
>
> In preparation for the workshop, we are asking the project PTLs to identify several roles that the workshop attendees should spend time building into personas. In this context, a role simply includes a name and description while a persona includes richer data such as motivations, pain points, education as well as a description of their relationship to the other personas.
>
> This should take about five minutes to complete.
>
> https://www.surveymonkey.com/r/H6WS2HS

On the Keystone side, trying to get support for role inference that will 
make these easier:

https://review.openstack.org/#/c/125704/
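For readers unfamiliar with the feature behind that review: role inference (implied roles) means that assigning one role implicitly grants the roles it implies, transitively. A minimal sketch of that expansion, with illustrative role names and a hypothetical `expand_roles` helper rather than Keystone's actual code:

```python
def expand_roles(assigned, implications):
    """Return the effective role set given directly assigned roles and a
    mapping {role: [roles it implies]}, following implications transitively."""
    effective = set()
    stack = list(assigned)
    while stack:
        role = stack.pop()
        if role in effective:
            continue  # already expanded; guards against cycles
        effective.add(role)
        stack.extend(implications.get(role, []))
    return effective

# Example: "admin" implies "member", which in turn implies "reader".
implications = {"admin": ["member"], "member": ["reader"]}
print(sorted(expand_roles({"admin"}, implications)))  # ['admin', 'member', 'reader']
```

With such inference in place, a persona only needs to be granted its most powerful role; the less privileged ones follow automatically.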


>
> BTW - you are totally welcome to attend the working session if you happen to be in Austin, even if you just want to stop by at lunch for some BBQ. It would be cool to meet you in person.
>
>
> Piet Kruithof
> PTL, OpenStack UX project
>
> "For every complex problem, there is a solution that is simple, neat and wrong."
>
> H. L. Mencken
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From sbaker at redhat.com  Wed Sep 30 01:41:04 2015
From: sbaker at redhat.com (Steve Baker)
Date: Wed, 30 Sep 2015 14:41:04 +1300
Subject: [openstack-dev] [heat] Traditional question about Heat IRC
 meeting time.
In-Reply-To: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>
References: <CAAbQNRnZH9B2Cw1ZnUk4a-QKdSdxw2fuvGVFPUgp97OvqaDZ0Q@mail.gmail.com>
Message-ID: <560B3DB0.3060806@redhat.com>

+1

On 29/09/15 22:56, Sergey Kraynev wrote:
> Hi Heaters!
>
> Previously we had a constant "tradition" of changing the meeting time
> to involve more people from different time zones.
> However, the last release cycle showed that the two different meetings
> at 07:00 and 20:00 UTC are comfortable for most of our contributors.
> Both times are acceptable to me and I plan to attend both meetings.
> So I suggest leaving it without any changes.
>
> What do you think about it ?
>
> Regards,
> Sergey.
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/0196c8d5/attachment.html>

From zhengzhenyulixi at gmail.com  Wed Sep 30 02:56:13 2015
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Wed, 30 Sep 2015 10:56:13 +0800
Subject: [openstack-dev] [nova][python-novaclient] Functional test fail due
 to publicURL endpoint for volume service not found
Message-ID: <CAO0b__9JNmZ3_zf67_urAAO3J=iMXcfuVEURCC2BCfOXW_u-4Q@mail.gmail.com>

Hi, all

I submitted a patch for novaclient last night:
https://review.openstack.org/#/c/228769/ , and it turns out the functional
test has failed due to:  publicURL endpoint for volume service not found. I
also found out that another novaclient patch:
https://review.openstack.org/#/c/217131/ also fails due to this error, so
this must be a bug. Any idea on how to fix this?
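For context on the error itself: clients resolve service endpoints from the Keystone service catalog, and this failure indicates that the test environment's catalog had no volume service entry with a publicURL. A simplified, hypothetical sketch of that lookup logic (not the actual novaclient code):

```python
class EndpointNotFound(Exception):
    """Raised when the catalog has no matching endpoint."""


def get_public_url(catalog, service_type):
    """Find the publicURL for a service type in a catalog structured as
    a list of {"type": ..., "endpoints": [{"publicURL": ...}, ...]}."""
    for service in catalog:
        if service.get("type") == service_type:
            for endpoint in service.get("endpoints", []):
                if "publicURL" in endpoint:
                    return endpoint["publicURL"]
    raise EndpointNotFound(
        "publicURL endpoint for %s service not found" % service_type)


# A catalog registering only compute: looking up "volume" raises the
# same style of error the functional tests reported.
catalog = [{"type": "compute",
            "endpoints": [{"publicURL": "http://example.com:8774/v2"}]}]
print(get_public_url(catalog, "compute"))  # http://example.com:8774/v2
```

If the devstack job stopped registering a "volume" (v1) service in the catalog, every lookup like this would fail regardless of the patch under test, which would explain both patches failing the same way.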

Thanks,

BR,

Zheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/5ce967fb/attachment.html>

From jay.lau.513 at gmail.com  Wed Sep 30 02:57:02 2015
From: jay.lau.513 at gmail.com (Jay Lau)
Date: Wed, 30 Sep 2015 10:57:02 +0800
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <D230119F.1DB05%eguz@walmartlabs.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
 <560A5856.1050303@hpe.com> <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
Message-ID: <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>

+1 to Egor, I think that the final goal of Magnum is container as a service
but not COE deployment as a service. ;-)

Especially as we are also working on the Magnum UI, the Magnum UI should
export some interfaces to enable end users to create container applications,
not only COE deployments.

I hope that Magnum can be treated as another "Nova" which is focusing
on container service. I know it is difficult to unify all of the concepts
in the different COEs (k8s has pod, service, rc; swarm only has container;
nova only has VM and PM with different hypervisors), but this deserves some
deep diving and thinking to see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com> wrote:

> definitely ;), but there are some thoughts on Tom's email.
>
> I agree that we shouldn't reinvent APIs, but I don't think Magnum should
> only focus on deployment (I feel we will become another Puppet/Chef/Ansible
> module if we do it ):)
> I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into
> the OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
> step in to the Kub/Mesos/Swarm communities for that.
>
> --
> Egor
>
> From: Adrian Otto <adrian.otto at rackspace.com<mailto:
> adrian.otto at rackspace.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> Date: Tuesday, September 29, 2015 at 08:44
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> >>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> This is definitely a topic we should cover in Tokyo.
>
> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <danehans at cisco.com
> <mailto:danehans at cisco.com>> wrote:
>
>
> +1
>
> From: Tom Cammann <tom.cammann at hpe.com<mailto:tom.cammann at hpe.com>>
> Reply-To: "openstack-dev at lists.openstack.org<mailto:
> openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>
> Date: Tuesday, September 29, 2015 at 2:22 AM
> To: "openstack-dev at lists.openstack.org<mailto:
> openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> This has been my thinking in the last couple of months to completely
> deprecate the COE specific APIs such as pod/service/rc and container.
>
> As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very
> difficult and probably a wasted effort trying to consolidate their separate
> APIs under a single Magnum API.
>
> I'm starting to see Magnum as COEDaaS - Container Orchestration Engine
> Deployment as a Service.
>
> On 29/09/15 06:30, Ton Ngo wrote:
> Would it make sense to ask the opposite of Wanghua's question: should
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the corresponding
> heat resources can just interface with k8s instead of Magnum.
> Ton Ngo,
>
> <ATT00001.gif>Egor Guz ---09/28/2015 10:20:02 PM---Also I believe docker
> compose is just command line tool which doesn't have any api or scheduling
> feat
>
> From: Egor Guz <EGuz at walmartlabs.com><mailto:EGuz at walmartlabs.com>
> To: "openstack-dev at lists.openstack.org"<mailto:
> openstack-dev at lists.openstack.org> <openstack-dev at lists.openstack.org
> ><mailto:openstack-dev at lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> ________________________________
>
>
>
> Also I believe docker compose is just a command line tool which doesn't have
> any api or scheduling features.
> But during last Docker Conf hackathon PayPal folks implemented docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
> which can give you pod like experience.
>
> --
> Egor
>
> From: Adrian Otto <adrian.otto at rackspace.com<mailto:
> adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> ><mailto:openstack-dev at lists.openstack.org>>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org
> ><mailto:openstack-dev at lists.openstack.org>>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Wanghua,
>
> I do follow your logic, but docker-compose only needs the docker API to
> operate. We are intentionally avoiding re-inventing the wheel. Our goal is
> not to replace docker swarm (or other existing systems), but to complement
> it/them. We want to offer users of Docker the richness of native APIs and
> supporting tools. This way they will not need to compromise features or
> wait longer for us to implement each new feature as it is added. Keep in
> mind that our pod, service, and replication controller resources pre-date
> this philosophy. If we started out with the current approach, those would
> not exist in Magnum.
>
> Thanks,
>
> Adrian
>
> On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<mailto:
> wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com>> wrote:
>
> Hi folks,
>
> Magnum now exposes service, pod, etc. to users in the kubernetes COE, but
> exposes container in the swarm COE. As I know, swarm is only a scheduler of
> containers, which is like nova in openstack. Docker compose is an
> orchestration program which is like heat in openstack. k8s is the
> combination of scheduler and orchestration. So I think it is better to
> expose the APIs in compose to users, which are at the same level as k8s.
>
>
> Regards
> Wanghua
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:
> OpenStack-dev-request at lists.openstack.org><mailto:
> OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> <mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> <ATT00001.gif>__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:
> OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/0c8cedbb/attachment.html>

From mordred at inaugust.com  Wed Sep 30 04:00:17 2015
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 30 Sep 2015 00:00:17 -0400
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
Message-ID: <560B5E51.1010900@inaugust.com>

*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate 
the network and storage fabrics that exist in an OpenStack with 
kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not 
interesting ... an ansible playbook can do that in 5 minutes. OTOH - 
deploying some kube into a cloud in such a way that it shares a tenant 
network with some VMs that are there - that's good stuff and I think 
actually provides significant value.

On 09/29/2015 10:57 PM, Jay Lau wrote:
> +1 to Egor, I think that the final goal of Magnum is container as a
> service but not COE deployment as a service. ;-)
>
> Especially as we are also working on the Magnum UI, the Magnum UI should
> export some interfaces to enable end users to create container
> applications, not only COE deployments.
>
> I hope that Magnum can be treated as another "Nova" which is
> focusing on container service. I know it is difficult to unify all of
> the concepts in the different COEs (k8s has pod, service, rc; swarm only
> has container; nova only has VM and PM with different hypervisors), but
> this deserves some deep diving and thinking to see how we can move forward.
>
> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com
> <mailto:EGuz at walmartlabs.com>> wrote:
>
>     definitely ;), but there are some thoughts on Tom's email.
>
>     I agree that we shouldn't reinvent APIs, but I don't think Magnum
>     should only focus on deployment (I feel we will become another
>     Puppet/Chef/Ansible module if we do it ):)
>     I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm
>     into the OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we
>     need to step in to the Kub/Mesos/Swarm communities for that.
>
>     --
>     Egor
>
>     From: Adrian Otto <adrian.otto at rackspace.com
>     <mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com
>     <mailto:adrian.otto at rackspace.com>>>
>     Reply-To: "OpenStack Development Mailing List (not for usage
>     questions)" <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>>
>     Date: Tuesday, September 29, 2015 at 08:44
>     To: "OpenStack Development Mailing List (not for usage questions)"
>     <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>>
>     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>     This is definitely a topic we should cover in Tokyo.
>
>     On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
>     <danehans at cisco.com
>     <mailto:danehans at cisco.com><mailto:danehans at cisco.com
>     <mailto:danehans at cisco.com>>> wrote:
>
>
>     +1
>
>     From: Tom Cammann <tom.cammann at hpe.com
>     <mailto:tom.cammann at hpe.com><mailto:tom.cammann at hpe.com
>     <mailto:tom.cammann at hpe.com>>>
>     Reply-To: "openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>"
>     <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>>
>     Date: Tuesday, September 29, 2015 at 2:22 AM
>     To: "openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>"
>     <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>>
>     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>     This has been my thinking in the last couple of months to completely
>     deprecate the COE specific APIs such as pod/service/rc and container.
>
>     As we now support Mesos, Kubernetes and Docker Swarm, it's going to be
>     very difficult and probably a wasted effort trying to consolidate
>     their separate APIs under a single Magnum API.
>
>     I'm starting to see Magnum as COEDaaS - Container Orchestration
>     Engine Deployment as a Service.
>
>     On 29/09/15 06:30, Ton Ngo wrote:
>     Would it make sense to ask the opposite of Wanghua's question:
>     should pod/service/rc be deprecated if the user can easily get to
>     the k8s api?
>     Even if we want to orchestrate these in a Heat template, the
>     corresponding heat resources can just interface with k8s instead of
>     Magnum.
>     Ton Ngo,
>
>     <ATT00001.gif>Egor Guz ---09/28/2015 10:20:02 PM---Also I believe
>     docker compose is just command line tool which doesn't have any api
>     or scheduling feat
>
>     From: Egor Guz <EGuz at walmartlabs.com
>     <mailto:EGuz at walmartlabs.com>><mailto:EGuz at walmartlabs.com
>     <mailto:EGuz at walmartlabs.com>>
>     To: "openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>"<mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>
>     <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>
>     Date: 09/28/2015 10:20 PM
>     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>     ________________________________
>
>
>
>     Also I believe docker compose is just a command line tool which
>     doesn't have any api or scheduling features.
>     But during last Docker Conf hackathon PayPal folks implemented
>     docker compose executor for Mesos
>     (https://github.com/mohitsoni/compose-executor)
>     which can give you pod like experience.
>
>     --
>     Egor
>
>     From: Adrian Otto <adrian.otto at rackspace.com
>     <mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com
>     <mailto:adrian.otto at rackspace.com>><mailto:adrian.otto at rackspace.com
>     <mailto:adrian.otto at rackspace.com>>>
>     Reply-To: "OpenStack Development Mailing List (not for usage
>     questions)" <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>>
>     Date: Monday, September 28, 2015 at 22:03
>     To: "OpenStack Development Mailing List (not for usage questions)"
>     <openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>><mailto:openstack-dev at lists.openstack.org
>     <mailto:openstack-dev at lists.openstack.org>>>
>     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>     Wanghua,
>
>     I do follow your logic, but docker-compose only needs the docker API
>     to operate. We are intentionally avoiding re-inventing the wheel.
>     Our goal is not to replace docker swarm (or other existing systems),
>     but to complement it/them. We want to offer users of Docker the
>     richness of native APIs and supporting tools. This way they will not
>     need to compromise features or wait longer for us to implement each
>     new feature as it is added. Keep in mind that our pod, service, and
>     replication controller resources pre-date this philosophy. If we
>     started out with the current approach, those would not exist in Magnum.
>
>     Thanks,
>
>     Adrian
>
>     On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com
>     <mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com
>     <mailto:wanghua.humble at gmail.com>><mailto:wanghua.humble at gmail.com
>     <mailto:wanghua.humble at gmail.com>>> wrote:
>
>     Hi folks,
>
>     Magnum now exposes service, pod, etc. to users in the kubernetes COE,
>     but exposes container in the swarm COE. As I know, swarm is only a
>     scheduler of containers, which is like nova in openstack. Docker
>     compose is an orchestration program which is like heat in openstack.
>     k8s is the combination of scheduler and orchestration. So I think it
>     is better to expose the APIs in compose to users, which are at the
>     same level as k8s.
>
>
>     Regards
>     Wanghua
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe: OpenStack-dev-request at lists.openstack.org
>     <mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org
>     <mailto:OpenStack-dev-request at lists.openstack.org>><mailto:OpenStack-dev-request at lists.openstack.org
>     <mailto:OpenStack-dev-request at lists.openstack.org>>?subject:unsubscribe
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe><mailto:OpenStack-dev-request at lists.openstack.org
>     <mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



From r1chardj0n3s at gmail.com  Wed Sep 30 05:31:27 2015
From: r1chardj0n3s at gmail.com (Richard Jones)
Date: Wed, 30 Sep 2015 15:31:27 +1000
Subject: [openstack-dev] [Horizon] Horizon Productivity Suggestion
In-Reply-To: <D23017EC.F20D%rcresswe@cisco.com>
References: <D22ECDB2.F132%rcresswe@cisco.com>
 <F68A1980-DFE6-4CEE-80AB-5D4CA8B0A69C@hpe.com>
 <CABtBEBVrvfPSh=gtvj1hF3+Gj6qhF=3E3_+tmeOux8M2__gnew@mail.gmail.com>
 <D23017EC.F20D%rcresswe@cisco.com>
Message-ID: <CAHrZfZBkvHQhOTEgUhmystMwxWzcgCmQLwBZGVeUVKRvgQy4rw@mail.gmail.com>

The etherpad that we had running for the bulk of L was really handy;
something like that would be great to keep using to let folks know what is
in play.

On 29 September 2015 at 19:32, Rob Cresswell (rcresswe) <rcresswe at cisco.com>
wrote:

> I wasn't really envisioning a big discussion on the bugs; more like a
> brief notice period to let reviewers know high-priority items. Could
> definitely spend longer over it if that is preferred. Timing aside, the
> overall idea sounds good though?
>
> Lin: That's a good idea. A wiki page would probably suffice.
>
> Rob
>
> From: Lin Hua Cheng <os.lcheng at gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Date: Tuesday, 29 September 2015 04:11
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Horizon] Horizon Productivity Suggestion
>
> I agree with Travis that 2-3 minutes is not enough; that may not even be
> enough to talk about one bug. :)
>
> We could save some time if we have someone monitoring the bugs/features
> and publishing the high-priority items in a report - something similar to
> what Keystone does [1].  Reviewers can look this up every time they need
> to prioritize their reviews.
>
> We can rotate this responsibility among cores every month - even non-core
> if someone wants to volunteer.
>
> -Lin
>
> [1]
> https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting#Keystone_Weekly_Bug_Reports
>
>
>
>
> On Mon, Sep 28, 2015 at 7:22 PM, Tripp, Travis S <travis.tripp at hpe.com>
> wrote:
>
>> Things always move more quickly at the end of a cycle because people feel
>> release pressure, but I do think this is a good idea. 2 - 3 minutes isn't
>> very realistic. It would need to be planned for longer.
>>
>>
>>
>>
>>
>> On 9/28/15, 3:57 AM, "Rob Cresswell (rcresswe)" <rcresswe at cisco.com>
>> wrote:
>>
>> >Hi folks,
>> >
>> >I'm wondering if we could try marking out a small 2-3 minute slot at the
>> >start of each weekly meeting to highlight Critical/ High bugs that have
>> >code up for review, as well as important blueprints that have code up for
>> >review. These would be blueprints for features that were identified as
>> >high priority at the summit.
>> >
>> >The thought here is that we were very efficient in L-RC1 at moving code
>> >along, which is nice for productivity, but not really great for
>> stability;
>> >it would be good to do this kind of targeted work earlier in the cycle.
>> >I've noticed other projects doing this in their meetings, and it seems
>> >quite effective.
>> >
>> >Rob
>> >
>> >
>>
>> >__________________________________________________________________________
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/efb961cd/attachment.html>

From silvan at quobyte.com  Wed Sep 30 05:38:45 2015
From: silvan at quobyte.com (Silvan Kaiser)
Date: Wed, 30 Sep 2015 07:38:45 +0200
Subject: [openstack-dev] [cinder][neutron][all] New third-party-ci
 testing requirements for OpenStack Compatible mark
In-Reply-To: <2C45DFFA-C868-43F5-AEFA-89926CB27D2D@openstack.org>
References: <EBF1AF08-B54D-4391-9B22-964390523E0A@openstack.org>
 <CAK+RQeZ+zbP=JoBNpeLdhktQ=93UVKCgWuGOzUxvTSpAN=EKNg@mail.gmail.com>
 <CAL3VkVzWvOv79eOh1p+Cqi8=JfydDYi5gwnDoq5vHJhGtc3Ojg@mail.gmail.com>
 <CAF+Cads=NaqrvSdCnx79=J8=C+5U5TLcaGzVHqjbk1Xk_eZj6A@mail.gmail.com>
 <2C45DFFA-C868-43F5-AEFA-89926CB27D2D@openstack.org>
Message-ID: <CALsyUnPT786Dv8w+L4GBpoQWysWU2re66hq-LM4Epu5gittEdQ@mail.gmail.com>

Hello Chris!
FYI: regarding Cinder CIs, the tests to be run are specified at [1], afaik.

Silvan


[1]
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_tests_do_I_use.3F



2015-09-29 17:28 GMT+02:00 Chris Hoge <chris at openstack.org>:

> On Sep 29, 2015, at 8:04 AM, Erlon Cruz <sombrafam at gmail.com> wrote:
>
>
> Hi Chris,
>
> There are some questions that came to my mind.
>
> Cinder has near zero tolerance for backends that do not have a CI
> running. So, can one assume that all drivers in Cinder will have the
> "OpenStack Compatible" seal?
>
>
> One of the reasons we started with Cinder was because they
> have an existing program that is well maintained. Any driver passing
> CI becomes eligible for the "OpenStack Compatible" mark. It's not
> automatic, and still needs a signed agreement with the Foundation.
>
> When you say that the driver has to 'pass' the integration tests, what
> tests do you consider? All tests in tempest? All patches? Do you have any
> criteria to determine if a backend is passing or not?
>
>
> We're letting the project drive what tests need to be passed. So,
> taking a look at this dashboard[1] (it's one of many that monitor
> our test systems) the drivers are running the dsvm-tempest-full
> tests. One of the things that the tests exercise, and we're interested
> in from the driver standpoint, are both the user-facing Cinder APIs
> as well as the driver-facing APIs.
>
> For Neutron, which we would like to help roll out in the coming year,
> this would be a CI run that is defined by the Neutron development
> team. We have no interest in dictating to the developers what should
> be run. Instead, we want to adopt what the community considers
> to be the best-practices and standards for drivers.
>
> About this "OpenStack Compatible" flag, how does it work? Will you keep a
> list of the Compatible vendors? Is there anything a vendor needs to do in
> order to use it?
>
>
> "OpenStack Compatible" is one of the trademark programs that is
> administered by the Foundation. A company that wants to apply the
> OpenStack logo to their product needs to sign a licensing agreement,
> which gives them the right to use the logo in their marketing materials.
>
> We also create an entry in the OpenStack Marketplace for their
> product, which has information about the company and the product, but
> also information about tests that the product may have passed. The
> best example I can give right now is with the "OpenStack Powered"
> program, where we display which Defcore guideline a product has
> successfully passed[2].
>
> Chris
>
> [1] http://ci-watch.tintri.com/project?project=cinder&time=24+hours
> [2] For example:
> http://www.openstack.org/marketplace/public-clouds/unitedstack/uos-cloud
>
> Thanks,
> Erlon
>
> On Mon, Sep 28, 2015 at 5:55 PM, Kyle Mestery <mestery at mestery.com> wrote:
>
>> The Neutron team also discussed this in Vancouver, you can see the
>> etherpad here [1]. We talked about the idea of creating a validation suite,
>> and it sounds like that's something we should again discuss in Tokyo for
>> the Mitaka cycle. I think a validation suite would be a great step forward
>> for Neutron third-party CI systems to use to validate they work with a
>> release.
>>
>> [1] https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty
>>
>> On Sun, Sep 27, 2015 at 11:39 AM, Armando M. <armamig at gmail.com> wrote:
>>
>>>
>>>
>>> On 25 September 2015 at 15:40, Chris Hoge <chris at openstack.org> wrote:
>>>
>>>> In November, the OpenStack Foundation will start requiring vendors
>>>> requesting
>>>> new "OpenStack Compatible" storage driver licenses to start passing the
>>>> Cinder
>>>> third-party integration tests.
>>>
>>> The new program was approved by the Board at
>>>> the July meeting in Austin and follows the improvement of the testing
>>>> standards
>>>> and technical requirements for the "OpenStack Powered" program. This is
>>>> all
>>>> part of the effort of the Foundation to use the OpenStack brand to
>>>> guarantee a
>>>> base-level of interoperability and consistency for OpenStack users and
>>>> to
>>>> protect the work of our community of developers by applying a trademark
>>>> backed
>>>> by their technical efforts.
>>>>
>>>> The Cinder driver testing is the first step of a larger effort to apply
>>>> community determined standards to the Foundation marketing programs.
>>>> We're
>>>> starting with Cinder because it has a successful testing program in
>>>> place, and
>>>> we have plans to extend the program to network drivers and OpenStack
>>>> applications. We're going to require CI testing for new "OpenStack
>>>> Compatible"
>>>> storage licenses starting on November 1, and plan to roll out network
>>>> and
>>>> application testing in 2016.
>>>>
>>>> One of our goals is to work with project leaders and developers to help
>>>> us
>>>> define and implement these test programs. The standards for third-party
>>>> drivers and applications should be determined by the developers and
>>>> users
>>>> in our community, who are experts in how to maintain the quality of the
>>>> ecosystem.
>>>>
>>>> We welcome any feedback on this program, and are also happy to answer
>>>> any
>>>> questions you might have.
>>>>
>>>
>>> Thanks for spearheading this effort.
>>>
>>> Do you have more information/pointers about the program, and how Cinder
>>> in particular is
>>> paving the way for other projects to follow?
>>>
>>> Thanks,
>>> Armando
>>>
>>>
>>>> Thanks!
>>>>
>>>> Chris Hoge
>>>> Interop Engineer
>>>> OpenStack Foundation
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>


-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/cdf41c82/attachment.html>

From gilles at redhat.com  Wed Sep 30 06:26:25 2015
From: gilles at redhat.com (Gilles Dubreuil)
Date: Wed, 30 Sep 2015 16:26:25 +1000
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <560ACDD1.5040901@redhat.com>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <55F76F5C.2020106@redhat.com>
 <87vbbc2eiu.fsf@s390.unix4.net> <560A1110.5000209@redhat.com>
 <560ACDD1.5040901@redhat.com>
Message-ID: <560B8091.4060500@redhat.com>



On 30/09/15 03:43, Rich Megginson wrote:
> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>
>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>
>>>> On 15/09/15 06:53, Rich Megginson wrote:
>>>>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>>>
>>>>>>> A. The 'composite namevar' approach:
>>>>>>>
>>>>>>>      keystone_tenant {'projectX::domainY': ... }
>>>>>>>    B. The 'meaningless name' approach:
>>>>>>>
>>>>>>>     keystone_tenant {'myproject': name='projectX',
>>>>>>> domain=>'domainY',
>>>>>>> ...}
>>>>>>>
>>>>>>> Notes:
>>>>>>>    - Actually using both combined should work too with the domain
>>>>>>> supposedly overriding the name part of the domain.
>>>>>>>    - Please look at [1] this for some background between the two
>>>>>>> approaches:
>>>>>>>
>>>>>>> The question
>>>>>>> -------------
>>>>>>> Decide between the two approaches, the one we would like to
>>>>>>> retain for
>>>>>>> puppet-keystone.
>>>>>>>
>>>>>>> Why it matters?
>>>>>>> ---------------
>>>>>>> 1. Domain names are mandatory in every user, group or project.
>>>>>>> Besides
>>>>>>> the backward compatibility period mentioned earlier, where no domain
>>>>>>> means using the default one.
>>>>>>> 2. Long term impact
>>>>>>> 3. Both approaches are not completely equivalent, with different
>>>>>>> consequences for future usage.
>>>>>> I can't see why they couldn't be equivalent, but I may be missing
>>>>>> something here.
>>>>> I think we could support both.  I don't see it as an either/or
>>>>> situation.
>>>>>
>>>>>>> 4. Being consistent
>>>>>>> 5. Therefore the community to decide
>>>>>>>
>>>>>>> Pros/Cons
>>>>>>> ----------
>>>>>>> A.
>>>>>> I think it's the B: meaningless approach here.
>>>>>>
>>>>>>>     Pros
>>>>>>>       - Easier names
>>>>>> That's subjective; creating unique and meaningful names doesn't look
>>>>>> easy to me.
>>>>> The point is that this allows choice - maybe the user already has some
>>>>> naming scheme, or wants to use a more "natural" meaningful name -
>>>>> rather
>>>>> than being forced into a possibly "awkward" naming scheme with "::"
>>>>>
>>>>>    keystone_user { 'heat domain admin user':
>>>>>      name => 'admin',
>>>>>      domain => 'HeatDomain',
>>>>>      ...
>>>>>    }
>>>>>
>>>>>    keystone_user_role {'heat domain admin user@::HeatDomain':
>>>>>      roles => ['admin']
>>>>>      ...
>>>>>    }
>>>>>
>>>>>>>     Cons
>>>>>>>       - Titles have no meaning!
>>>>> They have meaning to the user, not necessarily to Puppet.
>>>>>
>>>>>>>       - Cases where 2 or more resources could exists
>>>>> This seems to be the hardest part - I still cannot figure out how
>>>>> to use
>>>>> "compound" names with Puppet.
>>>>>
>>>>>>>       - More difficult to debug
>>>>> More difficult than it is already? :P
>>>>>
>>>>>>>       - Titles mismatch when listing the resources (self.instances)
>>>>>>>
>>>>>>> B.
>>>>>>>     Pros
>>>>>>>       - Unique titles guaranteed
>>>>>>>       - No ambiguity between resource found and their title
>>>>>>>     Cons
>>>>>>>       - More complicated titles
>>>>>>> My vote
>>>>>>> --------
>>>>>>> I would love to have the approach A for easier name.
>>>>>>> But I've seen the challenge of maintaining the providers behind the
>>>>>>> curtains and the confusion it creates with name/titles and when
>>>>>>> not sure
>>>>>>> about the domain we're dealing with.
>>>>>>> Also I believe that supporting self.instances consistently with
>>>>>>> meaningful name is saner.
>>>>>>> Therefore I vote B
>>>>>> +1 for B.
>>>>>>
>>>>>> My view is that this should be the advertised way, but the other
>>>>>> method
>>>>>> (meaningless) should be there if the user need it.
>>>>>>
>>>>>> So as far as I'm concerned the two idioms should co-exist.  This
>>>>>> would
>>>>>> mimic what is possible with all puppet resources.  For instance
>>>>>> you can:
>>>>>>
>>>>>>     file { '/tmp/foo.bar': ensure => present }
>>>>>>
>>>>>> and you can
>>>>>>
>>>>>>     file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>>>>>> present }
>>>>>>
>>>>>> The two refer to the same resource.
>>>>> Right.
>>>>>
>>>> I disagree, using the name for the title is not creating a composite
>>>> name. The latter requires adding at least another parameter to be part
>>>> of the title.
>>>>
>>>> Also in the case of the file resource, a path/filename is a unique
>>>> name,
>>>> which is not the case of an Openstack user which might exist in several
>>>> domains.
>>>>
>>>> I actually added the meaningful name case in:
>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>>>>
>>>>
>>>> But that doesn't work very well because without adding the domain to
>>>> the
>>>> name, the following fails:
>>>>
>>>> keystone_tenant {'project_1': domain => 'domain_A', ...}
>>>> keystone_tenant {'project_1': domain => 'domain_B', ...}
>>>>
>>>> And adding the domain makes it a de-facto 'composite name'.
>>> I agree that my example is not similar to what the keystone provider has
>>> to do.  What I wanted to point out is that user in puppet should be used
>>> to have this kind of *interface*, one where your put something
>>> meaningful in the title and one where you put something meaningless.
>>> The fact that the meaningful one is a compound one shouldn't matter to
>>> the user.
>>>
>> There is a big blocker of making use of domain name as parameter.
>> The issue is the limitation of autorequire.
>>
>> Because autorequire doesn't support any parameter other than the
>> resource type, and expects the resource title (or a list of them) [1].
>>
>> So if, for instance, keystone_user requires the tenant project1 from
>> domain1, then the resource title must be 'project1::domain1', because
>> otherwise there is no way to specify 'domain1':
>>

Yeah, I kept forgetting this is only about resource relationship/order
within a given catalog.
It is therefore *not* about guaranteeing that referred resources exist,
for instance when they are created (or not) in a different puppet
run/catalog.

This might be obvious, but it's easy (at least for me) to forget when
thinking of the resource list in terms of OpenStack IDs, for example
inside self.instances!

>> autorequire(:keystone_tenant) do
>>    self[:tenant]
>> end
> 
> Not exactly.  See https://review.openstack.org/#/c/226919/
> 

That's nice and makes the implementation easier.
Thanks.

> For example::
> 
>     keystone_tenant {'some random tenant':
>       name   => 'project1',
>       domain => 'domain1'
>     }
>     keystone_user {'some random user':
>       name   => 'user1',
>       domain => 'domain1'
>     }
> 
> How does keystone_user_role need to be declared such that the
> autorequire for keystone_user and keystone_tenant work?
> 
>     keystone_user_role {'some random user@some random tenant': ...}
> 
> In this case, I'm assuming this will work
> 
>   autorequire(:keystone_user) do
>     self[:name].rpartition('@').first
>   end
>   autorequire(:keystone_tenant) do
>     self[:name].rpartition('@').last
>   end
> 
> The keystone_user require will be on 'some random user' and the
> keystone_tenant require will be on 'some random tenant'.
> 
> So it should work, but _you have to be absolutely consistent in using
> the title everywhere_.  That is, once you have chosen to give something
> a title, you must use that title everywhere: in autorequires (as
> described above), in resource references (e.g. Keystone_user['some
> random user'] ~> Service['myservice']), and anywhere the resource will
> be referenced by its title.
> 

Yes, the title must be the same everywhere it's used, but only within a
given catalog.

It doesn't matter how the dependent resources are named/titled, as long
as they provide the necessary resources.

For instance, given the following resources:

keystone_user {'first user': name => 'user1', domain => 'domain_A', ...}
keystone_user {'user1::domain_B': ...}
keystone_user {'user1': ...} # Default domain
keystone_project {'project1::domain_A': ...}
keystone_project {'project1': ...} # Default domain

And their respective titles:
'first user'
'user1::domain_B'
'user1'
'project1::domain_A'
'project1'

Then another resource to use them, let's say keystone_user_role.
Using those unique titles one should be able to do things like these:

keystone_user_role {'first user@project1::domain_A':
  roles => ['role1'],
}

keystone_user_role {'admin role for user1':
  user    => 'user1',
  project => 'project1',
  roles   => ['admin'],
}

That looks cool, but the drawback is that the names are different when
listing. That's expected, since we're allowing meaningless titles.

$ puppet resource keystone_user

keystone_user { 'user1::Default':
  ensure    => 'present',
  domain_id => 'default',
  email     => 'test@Default.com',
  enabled   => 'true',
  id        => 'fb56d86a21f54b09aa435b96fd321eee',
}
keystone_user { 'user1::domain_B':
  ensure    => 'present',
  domain_id => '79beff022efd4011b9a036155f450af8',
  email     => 'user1@domain_B.com',
  enabled   => 'true',
  id        => '2174faac46f949fca44e2edab3d53675',
}
keystone_user { 'user1::domain_A':
  ensure    => 'present',
  domain_id => '9387210938a0ef1b3c843feee8a00a34',
  email     => 'user1@domain_A.com',
  enabled   => 'true',
  id        => '1bfadcff825e4c188e8e4eb6ce9a2ff5',
}

Note: I changed the domain field to domain_id because it makes more
sense here

This is fine as long as, when running any catalog, the same resource with
a different name but the same parameters means the same resource.

If everyone agrees with such behavior, then we might be good to go.

The exceptions must be addressed on a per-case basis.
Indeed, there are cases in OpenStack where several objects with the
exact same parameters can co-exist, for instance with trusts (see the
commit message in [1] for examples). In the trust case, running the same
catalog over and over will keep adding the resource (not really
idempotent!). I've actually re-raised the issue with the Keystone
developers [2].

[1] https://review.openstack.org/200996
[2] https://bugs.launchpad.net/keystone/+bug/1475091
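As an illustration only (hypothetical plain-Ruby code, not the actual
puppet-keystone provider), here is a sketch of how a 'user@project' style
title could be split into the 'name::domain' titles that the autorequires
would return, with unscoped names falling back to an assumed default domain:

```ruby
# Hypothetical helper: split a composite keystone_user_role title into
# the titles its autorequires would yield. The default-domain fallback
# and data shapes are assumptions for illustration.
DEFAULT_DOMAIN = 'Default'

# 'name' or 'name::domain' -> [name, domain]
def split_domain(scoped)
  base, sep, domain = scoped.rpartition('::')
  # rpartition returns an empty separator when '::' is absent
  sep.empty? ? [scoped, DEFAULT_DOMAIN] : [base, domain]
end

# 'user[::domain]@project[::domain]' -> the two 'name::domain' titles
def split_title(title)
  user_part, _, project_part = title.rpartition('@')
  user, user_dom = split_domain(user_part)
  project, project_dom = split_domain(project_part)
  { user: "#{user}::#{user_dom}", project: "#{project}::#{project_dom}" }
end

puts split_title('user1::domain_A@project1::domain_B')
```

With something like this, both 'user1@project1' and
'user1::domain_A@project1::domain_B' resolve to unambiguous
'name::domain' titles.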

> 
>>
>> Alternatively, as Sofer suggested (in a discussion we had), we could
>> poke the catalog to retrieve the corresponding resource(s).
> 
> That is another question I posed in
> https://review.openstack.org/#/c/226919/:
> 
> I guess we can look up the user resource and tenant resource from the
> catalog based on the title?  e.g.
> 
>     user = puppet.catalog.resource.find(:keystone_user, 'some random user')
>     userid = user[:id]
> 
>> Unfortunately, unless there is a way around, that doesn't work because
>> no matter what autorequire wants a title.
> 
> Which I think we can provide.
> 
> The other tricky parts will be self.instances and self.prefetch.
> 
> I think self.instances can continue to use the 'name::domain' naming
> convention, since it needs some way to create a unique title for all
> resources.
> 
> The real work will be in self.prefetch, which will need to compare all
> of the parameters/properties to see if a resource declared in a manifest
> matches exactly a resource found in Keystone. In this case, we may have
> to 'rename' the resource returned by self.instances to make it match the
> one from the manifest so that autorequires and resource references
> continue to work.
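To make that renaming idea concrete, a hypothetical sketch (plain Ruby,
not the actual provider; the hash shapes and helper name are assumptions):
match a manifest resource to a discovered instance by its properties,
then relabel the instance with the manifest's title so references keep
working.

```ruby
# Hypothetical prefetch-style matching: find, among discovered
# instances, the one whose properties match a manifest resource,
# regardless of title, then adopt the manifest's title.
def match_instance(resource, instances)
  instances.find do |inst|
    inst[:name] == resource[:name] && inst[:domain] == resource[:domain]
  end
end

instances = [
  { title: 'user1::domain_A', name: 'user1', domain: 'domain_A' },
  { title: 'user1::domain_B', name: 'user1', domain: 'domain_B' },
]
manifest = { title: 'first user', name: 'user1', domain: 'domain_A' }

matched = match_instance(manifest, instances)
# "Rename" the discovered instance so Keystone_user['first user']
# style references resolve to it.
matched[:title] = manifest[:title] if matched
puts matched[:title]
```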
> 
>>
>>
>> So it seems for the scoped domain resources, we have to stick together
>> the name and domain: '<name>::<domain>'.
>>
>> [1]
>> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type.rb#L2003
>>
>>>>>> But, If that's indeed not possible to have them both,
>>>> There are cases where having both won't be possible like the trusts,
>>>> but
>>>> why not for the resources supporting it.
>>>>
>>>> That said, I think we need to make a choice, at least to get
>>>> started, to
>>>> have something working, consistently, besides exceptions. Other options
>>>> to be added later.
>>> So we should go with the meaningful one first for consistency, I think.
>>>
>>>>>> then I would keep only the meaningful name.
>>>>>>
>>>>>>
>>>>>> As a side note, someone raised an issue about the delimiter being
>>>>>> hardcoded to "::".  This could be a property of the resource.  This
>>>>>> would enable the user to use weird name with "::" in it and assign
>>>>>> a "/"
>>>>>> (for instance) to the delimiter property:
>>>>>>
>>>>>>     Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/",
>>>>>> ... }
>>>>>>
>>>>>> bar::is::cool is the name of the domain and foo::blah is the project.
>>>>> That's a good idea.  Please file a bug for that.
>>>>>
>>>>>>> Finally
>>>>>>> ------
>>>>>>> Thanks for reading that far!
>>>>>>> To choose, please provide feedback with more pros/cons, examples and
>>>>>>> your vote.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Gilles
>>>>>>>
>>>>>>>
>>>>>>> PS:
>>>>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>>>>
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From harlowja at outlook.com  Wed Sep 30 06:52:14 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Tue, 29 Sep 2015 23:52:14 -0700
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <560B5E51.1010900@inaugust.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
 <560A5856.1050303@hpe.com> <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <560B5E51.1010900@inaugust.com>
Message-ID: <BLU436-SMTP82CE45A309EF0AB6AEECAD84D0@phx.gbl>

+1

Pretty please don't make it a deployment project; because really some
other project that just specializes in deployment (ansible, chef,
puppet...) can do that better. I do get how public clouds can find a
deployment project useful (it allows customers to try out these new
~fancy~ COE things), but I also tend to think it's short-term thinking
to believe that such a project will last.

Now an integrated COE <-> openstack (keystone, cinder, neutron...)
project I think really does provide value and has some really neat
possibilities to provide a unique value add to openstack; a project that
can deploy some other software, meh, not so much IMHO. Of course an
integrated COE <-> openstack project will of course be much harder,
especially as the COE projects are not openstack 'native' but nothing
worth doing is easy. I hope that it was known that COE projects are a
new (and rapidly shifting) landscape and the going wasn't going to be
easy when magnum was created; don't lose hope! (I'm cheering for you
guys/gals).

My 2 cents,

Josh

On Wed, 30 Sep 2015 00:00:17 -0400
Monty Taylor <mordred at inaugust.com> wrote:

> *waving hands wildly at details* ...
> 
> I believe that the real win is if Magnum's control plane can integrate 
> the network and storage fabrics that exist in an OpenStack with 
> kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's
> not interesting ... an ansible playbook can do that in 5 minutes.
> OTOH - deploying some kube into a cloud in such a way that it shares
> a tenant network with some VMs that are there - that's good stuff and
> I think actually provides significant value.
> 
> On 09/29/2015 10:57 PM, Jay Lau wrote:
> > +1 to Egor, I think that the final goal of Magnum is container as a
> > service but not coe deployment as a service. ;-)
> >
> > Especially we are also working on Magnum UI; the Magnum UI should
> > export some interfaces to enable end users to create container
> > applications, but not only coe deployment.
> >
> > I hope that the Magnum can be treated as another "Nova" which is
> > focusing on container service. I know it is difficult to unify all
> > of the concepts in different coe (k8s has pod, service, rc, swarm
> > only has container, nova only has VM, PM with different
> > hypervisors), but this deserves some deep dive and thinking to see
> > how we can move forward.
> >
> > On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com
> > <mailto:EGuz at walmartlabs.com>> wrote:
> >
> >     definitely ;), but here are some thoughts on Tom's email.
> >
> >     I agree that we shouldn't reinvent apis, but I don't think
> > Magnum should only focus on deployment (I feel we will become
> > another Puppet/Chef/Ansible module if we do :)
> >     I believe our goal should be to seamlessly integrate
> > Kub/Mesos/Swarm into the OpenStack ecosystem
> > (Neutron/Cinder/Barbican/etc) even if we need to step in to
> > Kub/Mesos/Swarm communities for that.
> >
> >     --
> >     Egor
> >
> >     From: Adrian Otto <adrian.otto at rackspace.com>
> >     Reply-To: "OpenStack Development Mailing List (not for usage
> >     questions)" <openstack-dev at lists.openstack.org>
> >     Date: Tuesday, September 29, 2015 at 08:44
> >     To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev at lists.openstack.org>
> >     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >
> >     This is definitely a topic we should cover in Tokyo.
> >
> >     On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
> >     <danehans at cisco.com> wrote:
> >
> >
> >     +1
> >
> >     From: Tom Cammann <tom.cammann at hpe.com>
> >     Reply-To: "openstack-dev at lists.openstack.org"
> >     <openstack-dev at lists.openstack.org>
> >     Date: Tuesday, September 29, 2015 at 2:22 AM
> >     To: "openstack-dev at lists.openstack.org"
> >     <openstack-dev at lists.openstack.org>
> >     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >
> >     My thinking over the last couple of months has been to
> > completely deprecate the COE-specific APIs such as pod/service/rc
> > and container.
> >
> >     As we now support Mesos, Kubernetes and Docker Swarm, it's going
> > to be very difficult, and probably a wasted effort, to try to
> > consolidate their separate APIs under a single Magnum API.
> >
> >     I'm starting to see Magnum as COEDaaS - Container Orchestration
> >     Engine Deployment as a Service.
> >
> >     On 29/09/15 06:30, Ton Ngo wrote:
> >     Would it make sense to ask the opposite of Wanghua's question:
> >     should pod/service/rc be deprecated if the user can easily get
> > to the k8s api?
> >     Even if we want to orchestrate these in a Heat template, the
> >     corresponding heat resources can just interface with k8s
> > instead of Magnum.
> >     Ton Ngo,
> >
> >
> >     From: Egor Guz <EGuz at walmartlabs.com>
> >     To: "openstack-dev at lists.openstack.org"
> >     <openstack-dev at lists.openstack.org>
> >     Date: 09/28/2015 10:20 PM
> >     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >     ________________________________
> >
> >
> >
> >     Also I believe docker compose is just a command line tool which
> > doesn't have any api or scheduling features.
> >     But during last Docker Conf hackathon PayPal folks implemented
> >     docker compose executor for Mesos
> >     (https://github.com/mohitsoni/compose-executor)
> >     which can give you pod like experience.
> >
> >     --
> >     Egor
> >
> >     From: Adrian Otto <adrian.otto at rackspace.com>
> >     Reply-To: "OpenStack Development Mailing List (not for usage
> >     questions)" <openstack-dev at lists.openstack.org>
> >     Date: Monday, September 28, 2015 at 22:03
> >     To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev at lists.openstack.org>
> >     Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >
> >     Wanghua,
> >
> >     I do follow your logic, but docker-compose only needs the
> > docker API to operate. We are intentionally avoiding re-inventing
> > the wheel. Our goal is not to replace docker swarm (or other
> > existing systems), but to complement it/them. We want to offer
> > users of Docker the richness of native APIs and supporting tools.
> > This way they will not need to compromise features or wait longer
> > for us to implement each new feature as it is added. Keep in mind
> > that our pod, service, and replication controller resources
> > pre-date this philosophy. If we started out with the current
> > approach, those would not exist in Magnum.
> >
> >     Thanks,
> >
> >     Adrian
> >
> >     On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com> wrote:
> >
> >     Hi folks,
> >
> >     Magnum now exposes service, pod, etc. to users in the kubernetes
> > coe, but exposes container in the swarm coe. As far as I know, swarm is
> > only a scheduler of containers, which is like nova in openstack. Docker
> > compose is an orchestration program, which is like heat in openstack.
> > k8s is the combination of scheduler and orchestration. So I think
> > it is better to expose the apis in compose, which are at the same
> > level as k8s, to users.
> >
> >
> >     Regards
> >     Wanghua
> >     __________________________________________________________________________
> >     OpenStack Development Mailing List (not for usage questions)
> >     Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > --
> > Thanks,
> >
> > Jay Lau (Guangya Liu)
> >
> >
> >
> 
> 



From masoom.alam at wanclouds.net  Wed Sep 30 06:55:05 2015
From: masoom.alam at wanclouds.net (masoom alam)
Date: Tue, 29 Sep 2015 23:55:05 -0700
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CALhU9tnSWOL=yon1NcdnU-tdJVfnN4_-vmcyN1AzQcVk9yNkSQ@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
 <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
 <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>
 <CABk5PjKd=6hSxKL6+68HkYoupMvCYNMb7Y+kb-1UPGre2E8hVw@mail.gmail.com>
 <CABk5PjKE+Q9DCEyfoDA_OsE3cghoaLQoyW3o3i=j9+z6m2Pp0A@mail.gmail.com>
 <CALhU9tnSWOL=yon1NcdnU-tdJVfnN4_-vmcyN1AzQcVk9yNkSQ@mail.gmail.com>
Message-ID: <CABk5PjLhn4iMej0-4DpqMjCRf9MODw3bNfcgKC81jBVBLyP53A@mail.gmail.com>

After I applied the patch set 4 manually, I am still getting the following
exception:

DEBUG: urllib3.util.retry Converted retries value: 0 -> Retry(total=0,
connect=None, read=None, redirect=0)
DEBUG: keystoneclient.session RESP:
DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
{"message": "Request Failed: internal server error while processing your
request.", "type": "HTTPInternalServerError", "detail": ""}}
ERROR: neutronclient.shell Request Failed: internal server error while
processing your request.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
line 766, in run_subcommand
    return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
line 101, in run_command
    return cmd.run(known_args)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
line 535, in run
    obj_updater(_id, body)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
102, in with_params
    ret = self.function(instance, *args, **kwargs)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
549, in update_port
    return self.put(self.port_path % (port), body=body)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
302, in put
    headers=headers, params=params)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
270, in retry_request
    headers=headers, params=params)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
211, in do_request
    self._handle_fault_response(status_code, replybody)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
185, in _handle_fault_response
    exception_handler_v20(status_code, des_error_body)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
70, in exception_handler_v20
    status_code=status_code)
InternalServerError: Request Failed: internal server error while processing
your request.
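The 500 matches what Akihiro describes below: for action=clear the CLI sends None rather than an empty list, and the server cannot validate None. A minimal sketch of the two request bodies involved (the helper function is mine and purely illustrative; only the JSON shapes come from the thread):

```python
# Illustrative only: the difference between the PUT body that clears
# allowed-address-pairs and the one the Kilo CLI effectively sends.

def build_port_update(clear_with_empty_list):
    """Build a PUT /v2.0/ports/<port-id> body for a port update."""
    if clear_with_empty_list:
        # An explicit empty list passes validation and removes all pairs.
        return {"port": {"allowed_address_pairs": []}}
    # What action=clear sends in Kilo: None, which the server-side
    # validator cannot handle (len(None) -> TypeError -> HTTP 500).
    return {"port": {"allowed_address_pairs": None}}

print(build_port_update(True))
print(build_port_update(False))
```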


On Mon, Sep 28, 2015 at 9:09 AM, Akihiro Motoki <amotoki at gmail.com> wrote:

> Are you reading our reply comments?
> At the moment, there is no way to set allowed-address-pairs to an empty
> list by using neutron CLI.
> When action=clear is passed, type=xxx, list=true and specified values are
> ignored and None is sent to the server.
> Thus you cannot set allowed-address-pairs to [] with neutron port-update
> CLI command.
>
>
> 2015-09-28 22:54 GMT+09:00 masoom alam <masoom.alam at wanclouds.net>:
>
>> This is not working either:
>>
>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>  --allowed-address-pairs type=list [] action=clear
>> AllowedAddressPair must contain ip_address
>>
>>
>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>  --allowed-address-pairs type=list {} action=clear
>> AllowedAddressPair must contain ip_address
>>
>>
>>
>>
>> On Mon, Sep 28, 2015 at 4:31 AM, masoom alam <masoom.alam at wanclouds.net>
>> wrote:
>>
>>> Please help, it's not working:
>>>
>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>> neutron port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>
>>> +-----------------------+---------------------------------------------------------------------------------+
>>> | Field                 | Value                                                                           |
>>> +-----------------------+---------------------------------------------------------------------------------+
>>> | admin_state_up        | True                                                                            |
>>> | allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:69:e9:ef"}                |
>>> | binding:host_id       | openstack-latest-kilo-28-09-2015-masoom                                         |
>>> | binding:profile       | {}                                                                              |
>>> | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                  |
>>> | binding:vif_type      | ovs                                                                             |
>>> | binding:vnic_type     | normal                                                                          |
>>> | device_id             | d44b9025-f12b-4f85-8b7b-57cc1138acdd                                            |
>>> | device_owner          | compute:nova                                                                    |
>>> | extra_dhcp_opts       |                                                                                 |
>>> | fixed_ips             | {"subnet_id": "bbb6726a-937f-4e0d-8ac2-f82f84272b1f", "ip_address": "10.0.0.3"} |
>>> | id                    | 2d1bfe12-7db6-4665-9c98-6b9b8a043af9                                            |
>>> | mac_address           | fa:16:3e:69:e9:ef                                                               |
>>> | name                  |                                                                                 |
>>> | network_id            | ae1b7e34-9f6c-4c8f-bf08-99a1e390034c                                            |
>>> | security_groups       | 8adda6d7-1b3e-4047-a130-a57609a0bd68                                            |
>>> | status                | ACTIVE                                                                          |
>>> | tenant_id             | 09945e673b7a4ab183afb166735b4fa7                                                |
>>> +-----------------------+---------------------------------------------------------------------------------+
>>>
>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>  --allowed-address-pairs [] action=clear
>>> AllowedAddressPair must contain ip_address
>>>
>>>
>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>  --allowed-address-pairs [10.0.0.201] action=clear
>>> The number of allowed address pair exceeds the maximum 10.
>>>
>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>  --allowed-address-pairs  action=clear
>>> Request Failed: internal server error while processing your request.
>>>
>>>
>>>
>>>
>>> On Mon, Sep 28, 2015 at 1:57 AM, Akihiro Motoki <amotoki at gmail.com>
>>> wrote:
>>>
>>>> As already mentioned, we need to pass [] (an empty list) rather than
>>>> None as allowed_address_pairs.
>>>>
>>>> At the moment it is not supported in Neutron CLI.
>>>> This review https://review.openstack.org/#/c/218551/ is trying to fix
>>>> this problem.
>>>>
>>>> Akihiro
>>>>
>>>>
>>>> 2015-09-28 15:51 GMT+09:00 shihanzhang <ayshihanzhang at 126.com>:
>>>>
>>>>> I don't see any exception using the below command:
>>>>>
>>>>> root at szxbz:/opt/stack/neutron# neutron port-update
>>>>> 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
>>>>> Allowed address pairs must be a list.
>>>>>
>>>>>
>>>>>
>>>>> At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net>
>>>>> wrote:
>>>>>
>>>>> stable KILO
>>>>>
>>>>> Are you saying I should check out the latest code? Also, can you
>>>>> please confirm that you have tested this at your end and there was
>>>>> no problem?
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com>
>>>>> wrote:
>>>>>
>>>>>> Which branch do you use? There is no such problem in the master branch.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net>
>>>>>> wrote:
>>>>>>
>>>>>> Can anybody highlight why the following command is throwing an
>>>>>> exception:
>>>>>>
>>>>>> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
>>>>>> --allowed-address-pairs action=clear
>>>>>>
>>>>>> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
>>>>>> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback (most
>>>>>> recent call last):
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
>>>>>> method(request=request, **args)
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>>> allow_bulk=self._allow_bulk)
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
>>>>>> prepare_request_body
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>>> attr_vals['validate'][rule])
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
>>>>>> _validate_allowed_address_pairs
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
>>>>>> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError:
>>>>>> object of type 'NoneType' has no len()
>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>>>
>>>>>>
>>>>>>
>>>>>> There is a similar bug filed on Launchpad for Havana:
>>>>>> https://bugs.launchpad.net/juniperopenstack/+bug/1351979. However,
>>>>>> there is no fix, and the workaround mentioned on the bug (using curl)
>>>>>> is also not working for Kilo; it was working for Havana and
>>>>>> Icehouse. Any pointers?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
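For reference, the TypeError quoted earlier in the thread comes from a length check that assumes a list. Below is a sketch of that check with the missing None guard added (illustrative only, not the neutron source; the fix actually under review, https://review.openstack.org/#/c/218551/, works on the client side so that [] is sent instead of None):

```python
# Sketch of the Kilo-era server-side validator behaviour.
# MAX_PAIRS stands in for cfg.CONF.max_allowed_address_pair.
MAX_PAIRS = 10

def validate_allowed_address_pairs(address_pairs):
    """Return an error message, or None if the value is acceptable."""
    if not isinstance(address_pairs, list):
        # This guard is what the traceback shows is missing: without it,
        # None reaches len() and raises TypeError, i.e. the HTTP 500.
        return "Allowed address pairs must be a list."
    if len(address_pairs) > MAX_PAIRS:
        return "The number of allowed address pair exceeds the maximum %d." % MAX_PAIRS
    return None

print(validate_allowed_address_pairs(None))
print(validate_allowed_address_pairs([]))
```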
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/223af5ed/attachment.html>

From masoom.alam at wanclouds.net  Wed Sep 30 06:55:28 2015
From: masoom.alam at wanclouds.net (masoom alam)
Date: Tue, 29 Sep 2015 23:55:28 -0700
Subject: [openstack-dev] KILO: neutron port-update
 --allowed-address-pairs action=clear throws an exception
In-Reply-To: <CABk5PjLhn4iMej0-4DpqMjCRf9MODw3bNfcgKC81jBVBLyP53A@mail.gmail.com>
References: <CABk5PjJ=yepeXgaxUTJxLwXRh+8deGAGike3Avpb5Vk5getrKA@mail.gmail.com>
 <3a0e6e9d.8be3.15012a43a27.Coremail.ayshihanzhang@126.com>
 <CABk5PjLcLCk3WJYJW5PdQTmOt0SN0+CxBV4tOAFZ7NJOEq6CKg@mail.gmail.com>
 <37458b90.9403.15012b8da80.Coremail.ayshihanzhang@126.com>
 <CALhU9tk_e0TZfxc=kpjSpYMze-MBriW-zpR9n4njfSU9vX3FRA@mail.gmail.com>
 <CABk5PjKd=6hSxKL6+68HkYoupMvCYNMb7Y+kb-1UPGre2E8hVw@mail.gmail.com>
 <CABk5PjKE+Q9DCEyfoDA_OsE3cghoaLQoyW3o3i=j9+z6m2Pp0A@mail.gmail.com>
 <CALhU9tnSWOL=yon1NcdnU-tdJVfnN4_-vmcyN1AzQcVk9yNkSQ@mail.gmail.com>
 <CABk5PjLhn4iMej0-4DpqMjCRf9MODw3bNfcgKC81jBVBLyP53A@mail.gmail.com>
Message-ID: <CABk5Pj+u=8sdRgNc8UK6qooNk12DBMrYFA2TBKfDHGTgLtCEgw@mail.gmail.com>

This patch: https://review.openstack.org/#/c/218551/

On Tue, Sep 29, 2015 at 11:55 PM, masoom alam <masoom.alam at wanclouds.net>
wrote:

> After I applied the patch set 4 manually, I am still getting the following
> exception:
>
> DEBUG: urllib3.util.retry Converted retries value: 0 -> Retry(total=0,
> connect=None, read=None, redirect=0)
> DEBUG: keystoneclient.session RESP:
> DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
> {"message": "Request Failed: internal server error while processing your
> request.", "type": "HTTPInternalServerError", "detail": ""}}
> ERROR: neutronclient.shell Request Failed: internal server error while
> processing your request.
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
> line 766, in run_subcommand
>     return run_command(cmd, cmd_parser, sub_argv)
>   File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
> line 101, in run_command
>     return cmd.run(known_args)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
> line 535, in run
>     obj_updater(_id, body)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 102, in with_params
>     ret = self.function(instance, *args, **kwargs)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 549, in update_port
>     return self.put(self.port_path % (port), body=body)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 302, in put
>     headers=headers, params=params)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 270, in retry_request
>     headers=headers, params=params)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 211, in do_request
>     self._handle_fault_response(status_code, replybody)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 185, in _handle_fault_response
>     exception_handler_v20(status_code, des_error_body)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 70, in exception_handler_v20
>     status_code=status_code)
> InternalServerError: Request Failed: internal server error while
> processing your request.
>
>
> On Mon, Sep 28, 2015 at 9:09 AM, Akihiro Motoki <amotoki at gmail.com> wrote:
>
>> Are you reading our reply comments?
>> At the moment, there is no way to set allowed-address-pairs to an empty
>> list by using neutron CLI.
>> When action=clear is passed, type=xxx, list=true and specified values are
>> ignored and None is sent to the server.
>> Thus you cannot set allowed-address-pairs to [] with neutron port-update
>> CLI command.
>>
>>
>> 2015-09-28 22:54 GMT+09:00 masoom alam <masoom.alam at wanclouds.net>:
>>
>>> This is not working either:
>>>
>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>>  --allowed-address-pairs type=list [] action=clear
>>> AllowedAddressPair must contain ip_address
>>>
>>>
>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>>  --allowed-address-pairs type=list {} action=clear
>>> AllowedAddressPair must contain ip_address
>>>
>>>
>>>
>>>
>>> On Mon, Sep 28, 2015 at 4:31 AM, masoom alam <masoom.alam at wanclouds.net>
>>> wrote:
>>>
>>>> Please help, it's not working:
>>>>
>>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>>> neutron port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>>
>>>> +-----------------------+---------------------------------------------------------------------------------+
>>>> | Field                 | Value                                                                           |
>>>> +-----------------------+---------------------------------------------------------------------------------+
>>>> | admin_state_up        | True                                                                            |
>>>> | allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:69:e9:ef"}                |
>>>> | binding:host_id       | openstack-latest-kilo-28-09-2015-masoom                                         |
>>>> | binding:profile       | {}                                                                              |
>>>> | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                  |
>>>> | binding:vif_type      | ovs                                                                             |
>>>> | binding:vnic_type     | normal                                                                          |
>>>> | device_id             | d44b9025-f12b-4f85-8b7b-57cc1138acdd                                            |
>>>> | device_owner          | compute:nova                                                                    |
>>>> | extra_dhcp_opts       |                                                                                 |
>>>> | fixed_ips             | {"subnet_id": "bbb6726a-937f-4e0d-8ac2-f82f84272b1f", "ip_address": "10.0.0.3"} |
>>>> | id                    | 2d1bfe12-7db6-4665-9c98-6b9b8a043af9                                            |
>>>> | mac_address           | fa:16:3e:69:e9:ef                                                               |
>>>> | name                  |                                                                                 |
>>>> | network_id            | ae1b7e34-9f6c-4c8f-bf08-99a1e390034c                                            |
>>>> | security_groups       | 8adda6d7-1b3e-4047-a130-a57609a0bd68                                            |
>>>> | status                | ACTIVE                                                                          |
>>>> | tenant_id             | 09945e673b7a4ab183afb166735b4fa7                                                |
>>>> +-----------------------+---------------------------------------------------------------------------------+
>>>>
>>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>>  --allowed-address-pairs [] action=clear
>>>> AllowedAddressPair must contain ip_address
>>>>
>>>>
>>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>>  --allowed-address-pairs [10.0.0.201] action=clear
>>>> The number of allowed address pair exceeds the maximum 10.
>>>>
>>>> root at openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>>> neutron port-update 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>>  --allowed-address-pairs  action=clear
>>>> Request Failed: internal server error while processing your request.
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Sep 28, 2015 at 1:57 AM, Akihiro Motoki <amotoki at gmail.com>
>>>> wrote:
>>>>
>>>>> As already mentioned, we need to pass [] (an empty list) rather than
>>>>> None as allowed_address_pairs.
>>>>>
>>>>> At the moment it is not supported in Neutron CLI.
>>>>> This review https://review.openstack.org/#/c/218551/ is trying to fix
>>>>> this problem.
>>>>>
>>>>> Akihiro
>>>>>
>>>>>
>>>>> 2015-09-28 15:51 GMT+09:00 shihanzhang <ayshihanzhang at 126.com>:
>>>>>
>>>>>> I don't see any exception using the below command:
>>>>>>
>>>>>> root at szxbz:/opt/stack/neutron# neutron port-update
>>>>>> 3748649e-243d-4408-a5f1-8122f1fbf501 --allowed-address-pairs action=clear
>>>>>> Allowed address pairs must be a list.
>>>>>>
>>>>>>
>>>>>>
>>>>>> At 2015-09-28 14:36:44, "masoom alam" <masoom.alam at wanclouds.net>
>>>>>> wrote:
>>>>>>
>>>>>> stable KILO
>>>>>>
>>>>>> Are you saying I should check out the latest code? Also, can you
>>>>>> please confirm that you have tested this at your end and there was
>>>>>> no problem?
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On Sun, Sep 27, 2015 at 11:29 PM, shihanzhang <ayshihanzhang at 126.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Which branch do you use? There is no such problem in the master branch.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> At 2015-09-28 13:43:05, "masoom alam" <masoom.alam at wanclouds.net>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Can anybody highlight why the following command is throwing an
>>>>>>> exception:
>>>>>>>
>>>>>>> *Command#* neutron port-update db3113df-14a3-4d6d-a3c5-d0517a134fc3
>>>>>>> --allowed-address-pairs action=clear
>>>>>>>
>>>>>>> *Error: * 2015-09-27 21:44:32.144 ERROR neutron.api.v2.resource
>>>>>>> [req-b1cbe1f2-ba21-4337-a714-f337c54ee9fc admin None] update failed
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource Traceback
>>>>>>> (most recent call last):
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>>> "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     result =
>>>>>>> method(request=request, **args)
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 515, in update
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>>>> allow_bulk=self._allow_bulk)
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>>> "/opt/stack/neutron/neutron/api/v2/base.py", line 652, in
>>>>>>> prepare_request_body
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
>>>>>>> attr_vals['validate'][rule])
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource   File
>>>>>>> "/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 51, in
>>>>>>> _validate_allowed_address_pairs
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource     if
>>>>>>> len(address_pairs) > cfg.CONF.max_allowed_address_pair:
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource TypeError:
>>>>>>> object of type 'NoneType' has no len()
>>>>>>> 2015-09-27 21:44:32.144 TRACE neutron.api.v2.resource
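The failure above reduces to calling `len()` on `None`: the extension compares the pair list against `cfg.CONF.max_allowed_address_pair` before confirming the input is a list. A standalone sketch (helper names are illustrative, not the actual Kilo code) of the failing check and a defensive variant:

```python
def validate_pairs_kilo(address_pairs, max_pairs):
    """Mimics the Kilo-era check: raises TypeError when address_pairs is None."""
    if len(address_pairs) > max_pairs:  # len(None) -> TypeError
        raise ValueError("too many allowed address pairs")
    return address_pairs


def validate_pairs_fixed(address_pairs, max_pairs):
    """Defensive variant: reject non-list input with a clear error first."""
    if not isinstance(address_pairs, list):
        # Matches the CLI-side message: "Allowed address pairs must be a list."
        raise ValueError("Allowed address pairs must be a list.")
    if len(address_pairs) > max_pairs:
        raise ValueError("too many allowed address pairs")
    return address_pairs
```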
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> There is a similar bug filed on Launchpad for Havana:
>>>>>>> https://bugs.launchpad.net/juniperopenstack/+bug/1351979 . However,
>>>>>>> there is no fix, and the workaround mentioned on the bug (using curl) is
>>>>>>> not working for Kilo, although it was working for Havana and Icehouse. Any
>>>>>>> pointers?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> __________________________________________________________________________
>>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>>> Unsubscribe:
>>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150929/83be9f99/attachment.html>

From Abhishek.Kekane at nttdata.com  Wed Sep 30 07:04:45 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Wed, 30 Sep 2015 07:04:45 +0000
Subject: [openstack-dev] [nova] Shared storage space count for Nova
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D1AB97@MAIL703.KDS.KEANE.COM>

Hi Devs,

Nova shared storage has an issue [1] with counting free space, total space and disk_available_least, which affects the hypervisor stats and the scheduler.
I have created an etherpad [2] which contains a detailed problem description and a possible solution, with the challenges for this design.
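The double-counting at the heart of [1] can be shown with a toy model (data and helper names are illustrative, not Nova code): every host backed by the same shared store reports that store's free space as its own, so summing per-host stats inflates the real capacity.

```python
# Each host reports the free space of its backing store; host-a and host-b
# share one NFS store, so summing per-host reports double-counts it.
hosts = {
    "host-a": {"store": "nfs-pool-1", "free_gb": 500},
    "host-b": {"store": "nfs-pool-1", "free_gb": 500},  # same share as host-a
    "host-c": {"store": "local-1", "free_gb": 200},
}


def naive_free_gb(hosts):
    """What a per-host summation (the current behaviour) reports."""
    return sum(h["free_gb"] for h in hosts.values())


def pooled_free_gb(hosts):
    """Count each backing store once, as a resource-pool model would."""
    pools = {h["store"]: h["free_gb"] for h in hosts.values()}
    return sum(pools.values())
```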

Later I came to know there is an ML thread [3] initiated by Jay Pipes which proposes a solution of creating resource pools for disk, CPU, memory, NUMA nodes, etc.

IMO this is a good approach and should be addressed in the Mitaka release. I am eager to work on this and will provide any kind of help with implementation, reviews, etc.

Please give us your opinion about the same.

[1] https://bugs.launchpad.net/nova/+bug/1252321
[2] https://etherpad.openstack.org/p/shared-storage-space-count
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070564.html


Thank you,

Abhishek Kekane

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/e110b539/attachment.html>

From anant.patil at hpe.com  Wed Sep 30 07:10:52 2015
From: anant.patil at hpe.com (Anant Patil)
Date: Wed, 30 Sep 2015 12:40:52 +0530
Subject: [openstack-dev] [heat] Convergence: Detecting and handling worker
 failures
In-Reply-To: <560B8494.6050805@hpe.com>
References: <560B8494.6050805@hpe.com>
Message-ID: <560B8AFC.2030207@hpe.com>

Hi,

One of the remaining items in convergence is detecting and handling engine
(the engine worker) failures, and here are my thoughts.

Background: Since the work is distributed among the heat engines, heat
needs some means to detect a failure, pick up the tasks from the failed
engine, and re-distribute or re-run them.

One simple way is to poll the DB to detect liveness by checking the
table populated by heat-manage. Each engine records its presence
periodically by updating the current timestamp. All the engines will
have a periodic task for checking the DB for the liveness of other
engines. Each engine will check the timestamps updated by the other
engines, and if it finds one which is older than the periodicity of
timestamp updates, it detects a failure. When this happens, the
remaining engines, as and when they detect the failure, will try to
acquire the lock for in-progress resources that were handled by the
engine which died. They will then run the tasks to completion.
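A minimal sketch of this heartbeat scheme, with an in-memory SQLite table standing in for the real database (table and helper names are illustrative, not actual heat code):

```python
import sqlite3
import time

STALE_AFTER = 5.0  # seconds; in practice a multiple of the heartbeat period

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE engine (id TEXT PRIMARY KEY, updated_at REAL)")


def heartbeat(engine_id, now=None):
    """Each engine periodically records the current timestamp for itself."""
    now = time.time() if now is None else now
    conn.execute(
        "INSERT OR REPLACE INTO engine (id, updated_at) VALUES (?, ?)",
        (engine_id, now))


def dead_engines(now=None):
    """Engines whose last heartbeat is older than the staleness window."""
    now = time.time() if now is None else now
    rows = conn.execute(
        "SELECT id FROM engine WHERE updated_at < ?", (now - STALE_AFTER,))
    return [r[0] for r in rows]
```

On detecting a dead engine, a surviving engine would then try to take over the dead engine's in-progress resource locks before re-running the tasks.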

Another option is to use a coordination library like the community-owned
tooz (http://docs.openstack.org/developer/tooz/), which supports
distributed locking and leader election. We would use it to elect a leader
among the heat engines, which will be responsible for running periodic
tasks to check the state of each engine and to distribute the tasks to
other engines when one fails. The advantage, IMHO, will be simplified
heat code. Also, we can move the timeout task to the leader, which will
run timeouts for all the stacks and send a signal to abort the operation
when a timeout happens. The downside: an external resource like
ZooKeeper/memcached etc. is needed for leader election.

In the long run, IMO, using a library like tooz will be useful for heat.
A lot of the boilerplate needed for locking and running centralized tasks
(such as timeout) will not be needed in heat. Given that we are moving
towards distribution of tasks and horizontal scaling is preferred, it
will be advantageous to use such a library.

Please share your thoughts.

- Anant




From peng at hyper.sh  Wed Sep 30 07:19:56 2015
From: peng at hyper.sh (Peng Zhao)
Date: Wed, 30 Sep 2015 07:19:56 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <560B5E51.1010900@inaugust.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com>
 <560A5856.1050303@hpe.com> <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <560B5E51.1010900@inaugust.com>
Message-ID: <1443597596566-6c23b3f2-e926472c-ff8c568e@mixmax.com>

Echoing Monty:

> I believe that the real win is if Magnum's control plane can integrate the
> network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.

We are working on the Cinder (Ceph), Neutron, and Keystone integration in HyperStack
[1] and would love to contribute. Another TODO is multi-tenancy support in
k8s/swarm/mesos. A global scheduler/orchestrator for all tenants yields a higher
utilization rate than separate schedulers for each.
[1] https://launchpad.net/hyperstack
----------------------------------------------------- Hyper - Make VM run like Container


On Wed, Sep 30, 2015 at 12:00 PM, Monty Taylor < mordred at inaugust.com > wrote:
*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate the
network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.
Just deploying is VERY meh. I do not care - it's not interesting ... an ansible
playbook can do that in 5 minutes. OTOH - deploying some kube into a cloud in
such a way that it shares a tenant network with some VMs that are there - that's
good stuff and I think actually provides significant value.

On 09/29/2015 10:57 PM, Jay Lau wrote:
+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

In particular, we are also working on the Magnum UI; it should export
some interfaces that enable end users to create container applications,
not only COE deployments.

I hope that Magnum can be treated as another "Nova" which focuses on
container service. I know it is difficult to unify all of
the concepts in the different COEs (k8s has pod, service, and rc; swarm only has
container; nova only has VM/PM with different hypervisors), but this
deserves some deep diving and thinking to see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz < EGuz at walmartlabs.com
<mailto: EGuz at walmartlabs.com >> wrote:

definitely ;), but here are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum
should only focus on deployment (I feel we will become another
Puppet/Chef/Ansible module if we do) :)
I believe our goal should be to seamlessly integrate Kube/Mesos/Swarm into the
OpenStack ecosystem (Neutron/Cinder/Barbican/etc.) even if we need to
step into the Kube/Mesos/Swarm communities for that.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
<danehans at cisco.com> wrote:


+1

From: Tom Cammann <tom.cammann at hpe.com>
Reply-To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be
very difficult, and probably a wasted effort, to try to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,

Egor Guz wrote on 09/28/2015 10:20 PM:

From: Egor Guz <EGuz at walmartlabs.com>
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
__________________________________________________________________________



Also, I believe docker compose is just a command line tool which doesn't
have any API or scheduling features.
But during the last DockerCon hackathon, PayPal folks implemented a
docker compose executor for Mesos
(https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API
to operate. We are intentionally avoiding re-inventing the wheel.
Our goal is not to replace docker swarm (or other existing systems),
but to complement it/them. We want to offer users of Docker the
richness of native APIs and supporting tools. This way they will not
need to compromise features or wait longer for us to implement each
new feature as it is added. Keep in mind that our pod, service, and
replication controller resources pre-date this philosophy. If we
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the Kubernetes COE, but
exposes container in the Swarm COE. As I understand it, Swarm is only a
scheduler of containers, which is like Nova in OpenStack. Docker Compose is an
orchestration program which is like Heat in OpenStack. k8s is the
combination of a scheduler and orchestration. So I think it is better
to expose the APIs in Compose to users, which are at the same level
as k8s.


Regards
Wanghua











--
Thanks,

Jay Lau (Guangya Liu)





-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/f1960a14/attachment-0001.html>

From dougal at redhat.com  Wed Sep 30 08:08:41 2015
From: dougal at redhat.com (Dougal Matthews)
Date: Wed, 30 Sep 2015 09:08:41 +0100
Subject: [openstack-dev] [TripleO] Defining a public API for tripleo-common
Message-ID: <CAPMB-2SnkubiNstjy3Uotrb_Gq1mQaJ9LOQutkEzeTOLJxKy7Q@mail.gmail.com>

Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common, I have
to grep through the projects I know use it to make sure I don't break
anything.

Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.

Cheers,
Dougal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/909ff19f/attachment.html>

From dbelova at mirantis.com  Wed Sep 30 08:27:04 2015
From: dbelova at mirantis.com (Dina Belova)
Date: Wed, 30 Sep 2015 11:27:04 +0300
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
 informal working group suggestion
In-Reply-To: <CAOwk0=0BhkFqjNgzx7zWrS_tk9xj8Vh8uz7qgbO42wt_dpYoXw@mail.gmail.com>
References: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
 <CAOwk0=0BhkFqjNgzx7zWrS_tk9xj8Vh8uz7qgbO42wt_dpYoXw@mail.gmail.com>
Message-ID: <CACsCO2wSs030XA7RiqBGyhtSVRpAhBPSFfb_X-=tB1Gg9GcZoA@mail.gmail.com>

Sandeep,

sorry for the late response :) I'm hoping to define 'spheres of interest'
and the most painful moments using people's experience at the Tokyo summit, and
we'll find out what needs to be tested most and what can actually be done. You
can share your ideas of what needs to be tested and focused on in
https://etherpad.openstack.org/p/openstack-performance-issues etherpad,
this will be a pool of ideas I'm going to use in Tokyo.

I can either create an IRC channel for the discussions or we can use the
#openstack-operators channel, as LDT is using it for communication.
After the Tokyo summit I'm planning to set up a Doodle vote for a time people
will be comfortable with for periodic meetings :)

Cheers,
Dina

On Fri, Sep 25, 2015 at 1:52 PM, Sandeep Raman <sandeep.raman at gmail.com>
wrote:

> On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova <dbelova at mirantis.com> wrote:
>
>> Hey, OpenStackers!
>>
>> I'm writing to propose organising a new informal team to work specifically
>> on OpenStack performance issues. This will be a subteam of the already
>> existing Large Deployments Team, and I suppose it will be a good idea to
>> gather people interested in OpenStack performance in one room, identify
>> what issues are worrying contributors and what can be done, and share results
>> of performance research :)
>>
>
> Dina, I'm focused on performance and scale testing [no coding
> background]. How can I contribute and what is the expectation from this
> informal team?
>
>>
>> So please volunteer to take part in this initiative. I hope many people will
>> be interested and we'll be able to use a cross-projects session
>> slot <http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and
>> hold a kick-off meeting.
>>
>
> I'm not coming to Tokyo. How could I still be part of the discussions, if any?
> I also feel it is good to have an IRC channel for perf-scale discussion. Let
> me know your thoughts.
>
>
>> I would like to apologise I'm writing to two mailing lists at the same
>> time, but I want to make sure that all possibly interested people will
>> notice the email.
>>
>> Thanks and see you in Tokyo :)
>>
>> Cheers,
>> Dina
>>
>> --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Senior Software Engineer
>>
>> Mirantis Inc.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/b98be23a/attachment.html>

From paul.bourke at oracle.com  Wed Sep 30 08:37:20 2015
From: paul.bourke at oracle.com (Paul Bourke)
Date: Wed, 30 Sep 2015 09:37:20 +0100
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <CAJ3CzQU+NfuZvrBjJ9YLpVcTedTiXv23_eVj2WCSE_5C9UhkwA@mail.gmail.com>
References: <D2305CD3.13957%stdake@cisco.com>
 <CAHD=wRdeU9n+FShy3DPbUuHq166fuA+7JmScWGuewjOqL77NGg@mail.gmail.com>
 <CAJ3CzQU+NfuZvrBjJ9YLpVcTedTiXv23_eVj2WCSE_5C9UhkwA@mail.gmail.com>
Message-ID: <560B9F40.3070205@oracle.com>

+1 I actually thought he was core already :)

On 30/09/15 01:24, Sam Yaple wrote:
> +1 Michal will be a great addition to the Core team.
>
> On Sep 29, 2015 6:48 PM, "Martin André" <martin.andre at gmail.com
> <mailto:martin.andre at gmail.com>> wrote:
>
>
>
>     On Wed, Sep 30, 2015 at 7:20 AM, Steven Dake (stdake)
>     <stdake at cisco.com <mailto:stdake at cisco.com>> wrote:
>
>         Hi folks,
>
>         I am proposing Michal for core reviewer.  Consider my proposal
>         as a +1 vote.  Michal has done a fantastic job with rsyslog, has
>         done a nice job overall contributing to the project for the last
>         cycle, and has really improved his review quality and
>         participation over the last several months.
>
>         Our process requires 3 +1 votes, with no veto (-1) votes.  If
>         you're uncertain, it is best to abstain :)  I will leave the
>         voting open for 1 week until Tuesday October 6th, or until there
>         is a unanimous decision or a veto.
>
>
>     +1, without hesitation.
>
>     Martin
>
>         Regards
>         -steve
>
>         __________________________________________________________________________
>         OpenStack Development Mailing List (not for usage questions)
>         Unsubscribe:
>         OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>         <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe:
>     OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From jordan.pittier at scality.com  Wed Sep 30 08:47:31 2015
From: jordan.pittier at scality.com (Jordan Pittier)
Date: Wed, 30 Sep 2015 10:47:31 +0200
Subject: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver
 support in Devstack
In-Reply-To: <000001501bde2d9b-f38e2772-475f-444e-beaa-b888146b50ab-000000@email.amazonses.com>
References: <000001501bde2d9b-f38e2772-475f-444e-beaa-b888146b50ab-000000@email.amazonses.com>
Message-ID: <CAAKgrcm=QP42-ej7+H+m=7KpJBAEamjVuU1H76D-+PSkFN-xDQ@mail.gmail.com>

Hi Sean,
Because the recommended way is now to write devstack plugins.
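For context, enabling an out-of-tree devstack plugin is a single line in `local.conf` (the repository below is just an example):

```ini
[[local|localrc]]
# enable_plugin <name> <git-url> [branch]
enable_plugin sahara git://git.openstack.org/openstack/sahara master
```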

Jordan

On Wed, Sep 30, 2015 at 3:29 AM, Sean Collins <sean at coreitpro.com> wrote:

> This review was recently abandoned. Can you provide insight as to why?
>
> On September 17, 2015, at 2:30 PM, "Sean M. Collins" <sean at coreitpro.com>
> wrote:
>
> You need to remove your Workflow-1.
>
> --
> Sean M. Collins
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/9f56d0a7/attachment.html>

From jesse.pretorius at gmail.com  Wed Sep 30 08:51:03 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Wed, 30 Sep 2015 09:51:03 +0100
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis
	(stevelle) for core reviewer
Message-ID: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the
last cycle and always makes an effort to ensure that his responses are made
after thorough testing where possible. I have found his input to be
valuable.

-- 
Jesse Pretorius
IRC: odyssey4me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/4c05e22b/attachment.html>

From thingee at gmail.com  Wed Sep 30 08:54:57 2015
From: thingee at gmail.com (Mike Perez)
Date: Wed, 30 Sep 2015 01:54:57 -0700
Subject: [openstack-dev] [election][TC] TC Candidacy
Message-ID: <CAHcn5b0ZGoS8H487-m0TMMD2zJFBY2HcY=VvPpTf5ydWK7QbhQ@mail.gmail.com>

Hi all!

I'm announcing my candidacy for a position on the OpenStack Technical
Committee.

On October 1st I will be employed by the OpenStack Foundation as
a Cross-Project Developer Coordinator to help bring focus and support to
cross-project initiatives within the cross-project specs, DefCore, the Product
Working Group, etc.

I feel the items below have enabled others across this project to strive for
quality. If you would all have me as a member of the Technical Committee, you
can help me to enable more quality work in OpenStack.

* I have been working in OpenStack since 2010. I spent a good amount of my time
  working on OpenStack in my free time before being paid full time to work on
  it. It has been an important part of my life, and rewarding to see what we
  have all achieved together.

* I was PTL for the Cinder project in the Kilo and Liberty releases for two
  cross-project reasons:
  * Third party continuous integration (CI).
  * Stop talking about rolling upgrades, and actually make it happen for
    operators.

* I led the effort in bringing third party continuous integration to the
  Cinder project for more than 60 different drivers. [1]
  * I removed 25 different storage drivers from Cinder to bring quality to the
    project to ensure what was in the Kilo release would work for operators.
    I did what I believed was right, regardless of whether it would cost me
    re-election for PTL [2].
  * In my conversations with other projects, this has enabled others to
    follow the same effort. Continuing this trend of quality cross-project will
    be my next focus.

* During my first term as PTL for Cinder, the team, with much respect to Thang
  Pham, worked on an effort to end the rolling upgrade problem, not just for
  Cinder, but for *all* projects.
  * First step was making databases independent from services via Oslo
    versioned objects.
  * In Liberty we have a solution coming that helps with RPC versioned messages
    to allow upgrading services independently.

* I have attempted to help with diversity in our community.
  * Helped lead our community to raise $17,403 for the Ada Initiative [3],
    which was helping address gender-diversity with a focus in open source.
  * For the Vancouver summit, I helped bring in the ally skills workshops from
    the Ada Initiative, so that our community can continue to be a welcoming
    environment [4].

* Within the Cinder team, I have enabled everyone to provide good
  documentation for important items in our release notes in Kilo [5] and
  Liberty [6].
  * Other projects have reached out to me after Kilo feeling motivated for this
    same effort. I've explained in the August 2015 Operators midcycle sprint
    that I will make this a cross-project effort in order to provide better
    communication to our operators and users.

* I started an OpenStack Dev List summary in the OpenStack Weekly Newsletter
  (What you need to know from the developer's list), in order to enable others
  to keep up with the dev list on important cross-project information. [7][8]

* I created the Cinder v2 API, which brought request/response consistency
  with other OpenStack projects.
  * I documented Cinder v1 and Cinder v2 API's. Later on I created the Cinder
    API reference documentation content. The attempt here was to enable others
    to have somewhere to start, to continue quality documentation with
    continued developments.

Please help me to do more positive work in this project. It would be an
honor to be a member of your technical committee.


Thank you,
Mike Perez

Official Candidacy: https://review.openstack.org/#/c/229298/2
Review History: https://review.openstack.org/#/q/reviewer:170,n,z
Commit History: https://review.openstack.org/#/q/owner:170,n,z
Stackalytics: http://stackalytics.com/?user_id=thingee
Foundation: https://www.openstack.org/community/members/profile/4840
IRC Freenode: thingee
Website: http://thing.ee


[1] - http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:cinder-driver-removals,n,z
[3] - http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-May/064156.html
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Block_Storage_.28Cinder.29
[6] - https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Block_Storage_.28Cinder.29
[7] - http://www.openstack.org/blog/2015/09/openstack-community-weekly-newsletter-sept-12-18/
[8] - http://www.openstack.org/blog/2015/09/openstack-weekly-community-newsletter-sept-19-25/


From coolsvap at gmail.com  Wed Sep 30 09:00:27 2015
From: coolsvap at gmail.com (Swapnil Kulkarni)
Date: Wed, 30 Sep 2015 14:30:27 +0530
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <D2305CD3.13957%stdake@cisco.com>
References: <D2305CD3.13957%stdake@cisco.com>
Message-ID: <CAKO+H+KXa20dq9W9UqX+FcmYtJnZHyGBU3i4ESkQiwjSLSX+Tw@mail.gmail.com>

On Wed, Sep 30, 2015 at 3:50 AM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

> Hi folks,
>
> I am proposing Michal for core reviewer.  Consider my proposal as a +1
> vote.  Michal has done a fantastic job with rsyslog, has done a nice job
> overall contributing to the project for the last cycle, and has really
> improved his review quality and participation over the last several months.
>
> Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
> uncertain, it is best to abstain :)  I will leave the voting open for 1
> week until Tuesday October 6th or until there is a unanimous decision or a
>  veto.
>

+1 :)

>
> Regards
> -steve
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/70834a2b/attachment.html>

From e0ne at e0ne.info  Wed Sep 30 09:01:32 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 30 Sep 2015 12:01:32 +0300
Subject: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver
 support in Devstack
In-Reply-To: <CAAKgrcm=QP42-ej7+H+m=7KpJBAEamjVuU1H76D-+PSkFN-xDQ@mail.gmail.com>
References: <000001501bde2d9b-f38e2772-475f-444e-beaa-b888146b50ab-000000@email.amazonses.com>
 <CAAKgrcm=QP42-ej7+H+m=7KpJBAEamjVuU1H76D-+PSkFN-xDQ@mail.gmail.com>
Message-ID: <CAGocpaFvG9bKp-B51yHTXtxfbb40eaS8g2+5GZhTT1MdPcGaxw@mail.gmail.com>

Sean,

It was already implemented as devstack plugin

Regards,
Ivan Kolodyazhny

On Wed, Sep 30, 2015 at 11:47 AM, Jordan Pittier <jordan.pittier at scality.com
> wrote:

> Hi Sean,
> Because the recommended way now is to write devstack plugins.
>
> Jordan
>
> On Wed, Sep 30, 2015 at 3:29 AM, Sean Collins <sean at coreitpro.com> wrote:
>
>> This review was recently abandoned. Can you provide insight as to why?
>>
>> On September 17, 2015, at 2:30 PM, "Sean M. Collins" <sean at coreitpro.com>
>> wrote:
>>
>> You need to remove your Workflow-1.
>>
>> --
>> Sean M. Collins
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/a2f3f68c/attachment.html>

From wateringcan at gmail.com  Wed Sep 30 09:17:13 2015
From: wateringcan at gmail.com (Matt Thompson)
Date: Wed, 30 Sep 2015 10:17:13 +0100
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis
 (stevelle) for core reviewer
In-Reply-To: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
References: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
Message-ID: <CAJr8TocdNo4g_84GzDCQx7PhBwtLEkj-uWc=hpZg3ue6cbO1qw@mail.gmail.com>

Fully agree here -- +1 from me.

--Matt (mattt)

On Wed, Sep 30, 2015 at 9:51 AM, Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

> Hi everyone,
>
> I'd like to propose that Steve Lewis (stevelle) be added as a core
> reviewer.
>
> He has made an effort to consistently keep up with doing reviews in the
> last cycle and always makes an effort to ensure that his responses are made
> after thorough testing where possible. I have found his input to be
> valuable.
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/d1957ded/attachment.html>

From jesse.pretorius at gmail.com  Wed Sep 30 09:19:55 2015
From: jesse.pretorius at gmail.com (Jesse Pretorius)
Date: Wed, 30 Sep 2015 10:19:55 +0100
Subject: [openstack-dev] [stackalytics] Broken stats after project rename
Message-ID: <CAGSrQvzyvH=CKdRO4OnpcfFg+QrEtAZ3BCZVP5yJTsd--+2uZg@mail.gmail.com>

Hi everyone,

After the rename of os-ansible-deployment to openstack-ansible it appears
that all git-related stats (eg: commits) prior to the rename have been lost.

http://stackalytics.com/?metric=commits&module=openstack-ansible

Can anyone assist with rectifying this?

-- 
Jesse Pretorius
IRC: odyssey4me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/c6661de8/attachment.html>

From clint at fewbar.com  Wed Sep 30 09:29:55 2015
From: clint at fewbar.com (Clint Byrum)
Date: Wed, 30 Sep 2015 02:29:55 -0700
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
	worker failures
In-Reply-To: <560B8AFC.2030207@hpe.com>
References: <560B8494.6050805@hpe.com> <560B8AFC.2030207@hpe.com>
Message-ID: <1443605008-sup-2932@fewbar.com>

Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
> Hi,
> 
> One of remaining items in convergence is detecting and handling engine
> (the engine worker) failures, and here are my thoughts.
> 
> Background: Since the work is distributed among heat engines, by some
> means heat needs to detect the failure and pick up the tasks from failed
> engine and re-distribute or run the task again.
> 
> One simple way is to poll the DB to detect liveness by checking the
> table populated by heat-manage. Each engine records its presence
> periodically by updating the current timestamp. All the engines will
> have a periodic task checking the DB for the liveness of the other
> engines. Each engine will check for timestamp updated by other engines
> and if it finds one which is older than the periodicity of timestamp
> updates, then it detects a failure. When this happens, the remaining
> engines, as and when they detect the failures, will try to acquire the
> lock for in-progress resources that were handled by the engine which
> died. They will then run the tasks to completion.
> 
> Another option is to use a coordination library like the community owned
> tooz (http://docs.openstack.org/developer/tooz/) which supports
> distributed locking and leader election. We use it to elect a leader
> among heat engines and that will be responsible for running periodic
> tasks for checking state of each engine and distributing the tasks to
> other engines when one fails. The advantage, IMHO, will be simplified
> heat code. Also, we can move the timeout task to the leader which will
> run time out for all the stacks and sends signal for aborting operation
> when a timeout happens. The downside: an external resource like
> ZooKeeper/memcached etc. is needed for leader election.
> 

It's becoming increasingly clear that OpenStack services in general need
to look at distributed locking primitives. There's a whole spec for that
right now:

https://review.openstack.org/#/c/209661/

I suggest joining that conversation, and embracing a DLM as the way to
do this.

Also, the leader election should be per-stack, and the leader selection
should be heavily weighted based on a consistent hash algorithm so that
you get even distribution of stacks to workers. You can look at how
Ironic breaks up all of the nodes that way. They're using a similar lock
to the one Heat uses now, so the two projects can collaborate nicely on
a real solution.
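Clint's per-stack, consistent-hash leader selection can be sketched like this (a toy ring, not Ironic's or Heat's actual code; engine names and ring size are illustrative):

```python
import bisect
import hashlib


def _hash(key):
    # Stable mapping from a string to a point on the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing(object):
    """Minimal consistent-hash ring: each stack maps to one engine, and
    removing a dead engine only remaps the stacks that engine owned."""

    def __init__(self, engines, replicas=100):
        self._ring = {}
        for engine in engines:
            for i in range(replicas):  # virtual nodes even out the load
                self._ring[_hash('%s-%d' % (engine, i))] = engine
        self._points = sorted(self._ring)

    def engine_for(self, stack_id):
        idx = bisect.bisect(self._points, _hash(stack_id))
        return self._ring[self._points[idx % len(self._points)]]


ring = HashRing(['engine-1', 'engine-2', 'engine-3'])
owner = ring.engine_for('stack-42')                 # always the same engine
after_failure = HashRing(['engine-1', 'engine-2'])  # engine-3 died
```

The useful property is the last line: stacks that were not owned by the dead engine keep their owner, so only the failed engine's workload moves.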


From ishakhat at mirantis.com  Wed Sep 30 09:31:25 2015
From: ishakhat at mirantis.com (Ilya Shakhat)
Date: Wed, 30 Sep 2015 12:31:25 +0300
Subject: [openstack-dev] [stackalytics] Broken stats after project rename
In-Reply-To: <CAGSrQvzyvH=CKdRO4OnpcfFg+QrEtAZ3BCZVP5yJTsd--+2uZg@mail.gmail.com>
References: <CAGSrQvzyvH=CKdRO4OnpcfFg+QrEtAZ3BCZVP5yJTsd--+2uZg@mail.gmail.com>
Message-ID: <CAMzOD1+rkushN2KYUTd435MPc2JoRjDAwmmk+rgWB4bqJJVQ7w@mail.gmail.com>

Hi Jesse,

Thanks for letting us know. The Stackalytics team will fix the issue during
the day.

--Ilya

2015-09-30 12:19 GMT+03:00 Jesse Pretorius <jesse.pretorius at gmail.com>:

> Hi everyone,
>
> After the rename of os-ansible-deployment to openstack-ansible it appears
> that all git-related stats (eg: commits) prior to the rename have been lost.
>
> http://stackalytics.com/?metric=commits&module=openstack-ansible
>
> Can anyone assist with rectifying this?
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/bb1b57e7/attachment.html>

From ikalnitsky at mirantis.com  Wed Sep 30 09:35:43 2015
From: ikalnitsky at mirantis.com (Igor Kalnitsky)
Date: Wed, 30 Sep 2015 12:35:43 +0300
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <9A3CA8E8-1B57-4529-B9C6-FB2679DAD636@mirantis.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <9A3CA8E8-1B57-4529-B9C6-FB2679DAD636@mirantis.com>
Message-ID: <CACo6NWBDQfaZd8emVoOcSqmP+G-+LKkMdp28md8aZP7J0zezgA@mail.gmail.com>

> * September 29 - October 8: PTL elections

So, it's in progress. Where can I vote? I didn't receive any emails.

On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
<tnapierala at mirantis.com> wrote:
>> On 18 Sep 2015, at 04:39, Sergey Lukjanov <slukjanov at mirantis.com> wrote:
>>
>>
>> Time line:
>>
>> PTL elections
>> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
>> * September 29 - October 8: PTL elections
>
> Just a reminder that we have a deadline for candidates today.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Product Engineering - Poland
>
>
>
>
>
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From zengyz1983 at live.cn  Wed Sep 30 09:51:16 2015
From: zengyz1983 at live.cn (ZengYingzhe)
Date: Wed, 30 Sep 2015 17:51:16 +0800
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
 worker failures
In-Reply-To: <560B8AFC.2030207@hpe.com>
References: <560B8494.6050805@hpe.com>,<560B8AFC.2030207@hpe.com>
Message-ID: <BAY179-W124A6D47454BED69782580D94D0@phx.gbl>

Hi Anant,
For the second option, if the leader engine fails, how would a new leader
election process be triggered?

Best Regards,
Yingzhe Zeng
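For what it's worth, the usual answer is lease-based leadership: the leader must keep renewing a lease, and when it dies any engine that notices the expired lease can claim it. A minimal in-memory sketch (not tooz's actual API; all names illustrative — a real deployment keeps this state in ZooKeeper/etcd/memcached):

```python
class LeaderLease(object):
    """Toy lease-based leadership: the leader must keep renewing its
    lease; once the lease expires, any engine may claim it."""

    def __init__(self, ttl=10.0):
        self.ttl = ttl
        self.holder = None
        self.expires = 0.0

    def try_acquire(self, engine_id, now):
        # Renew if we already hold the lease, or claim it once expired.
        if self.holder == engine_id or now >= self.expires:
            self.holder = engine_id
            self.expires = now + self.ttl
        return self.holder == engine_id


lease = LeaderLease(ttl=10.0)
lease.try_acquire('engine-1', now=0.0)   # engine-1 becomes leader
lease.try_acquire('engine-2', now=5.0)   # lease still valid: refused
# engine-1 crashes and stops renewing; by t=15 the lease has expired:
lease.try_acquire('engine-2', now=15.0)  # engine-2 takes over
```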

> To: openstack-dev at lists.openstack.org
> From: anant.patil at hpe.com
> Date: Wed, 30 Sep 2015 12:40:52 +0530
> Subject: [openstack-dev] [heat] Convergence: Detecting and handling worker failures
> 
> Hi,
> 
> One of remaining items in convergence is detecting and handling engine
> (the engine worker) failures, and here are my thoughts.
> 
> Background: Since the work is distributed among heat engines, by some
> means heat needs to detect the failure and pick up the tasks from failed
> engine and re-distribute or run the task again.
> 
> One simple way is to poll the DB to detect liveness by checking the
> table populated by heat-manage. Each engine records its presence
> periodically by updating the current timestamp. All the engines will
> have a periodic task checking the DB for the liveness of the other
> engines. Each engine will check for timestamp updated by other engines
> and if it finds one which is older than the periodicity of timestamp
> updates, then it detects a failure. When this happens, the remaining
> engines, as and when they detect the failures, will try to acquire the
> lock for in-progress resources that were handled by the engine which
> died. They will then run the tasks to completion.
> 
> Another option is to use a coordination library like the community owned
> tooz (http://docs.openstack.org/developer/tooz/) which supports
> distributed locking and leader election. We use it to elect a leader
> among heat engines and that will be responsible for running periodic
> tasks for checking state of each engine and distributing the tasks to
> other engines when one fails. The advantage, IMHO, will be simplified
> heat code. Also, we can move the timeout task to the leader which will
> run time out for all the stacks and sends signal for aborting operation
> when a timeout happens. The downside: an external resource like
> ZooKeeper/memcached etc. is needed for leader election.
> 
> In the long run, IMO, using a library like tooz will be useful for heat.
> A lot of the boilerplate needed for locking and running centralized tasks
> (such as timeout) will not be needed in heat. Given that we are moving
> towards distribution of tasks and horizontal scaling is preferred, it
> will be advantageous to use them.
> 
> Please share your thoughts.
> 
> - Anant
> 
> 
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
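A rough sketch of the DB-polling option quoted above (sqlite and the single-table layout stand in for heat-manage's real schema; table name, column names, and the period are illustrative):

```python
import sqlite3

PERIOD = 5.0  # seconds between heartbeat updates

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE engine (id TEXT PRIMARY KEY, updated_at REAL)')


def heartbeat(engine_id, now):
    # Each engine periodically records its presence by updating its row.
    db.execute('INSERT OR REPLACE INTO engine VALUES (?, ?)',
               (engine_id, now))


def dead_engines(now, grace_periods=2):
    # Engines whose last heartbeat is older than a couple of periods are
    # presumed dead; survivors may take over their in-progress resources.
    cutoff = now - grace_periods * PERIOD
    rows = db.execute('SELECT id FROM engine WHERE updated_at < ?',
                      (cutoff,))
    return sorted(r[0] for r in rows)


heartbeat('engine-1', now=100.0)
heartbeat('engine-2', now=115.0)
# At t=120, engine-1 last beat 20s ago (> 2 periods): presumed dead.
```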
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/2685580c/attachment.html>

From akurilin at mirantis.com  Wed Sep 30 10:04:19 2015
From: akurilin at mirantis.com (Andrey Kurilin)
Date: Wed, 30 Sep 2015 13:04:19 +0300
Subject: [openstack-dev] [nova][python-novaclient] Functional test fail
 due to publicURL endpoint for volume service not found
In-Reply-To: <CAO0b__9JNmZ3_zf67_urAAO3J=iMXcfuVEURCC2BCfOXW_u-4Q@mail.gmail.com>
References: <CAO0b__9JNmZ3_zf67_urAAO3J=iMXcfuVEURCC2BCfOXW_u-4Q@mail.gmail.com>
Message-ID: <CAEVmkaywJYAZOhavdNbKcOLbfg68Bfw9NZpoy7Cf=VNAwyC9vA@mail.gmail.com>

Hi!
It looks like the cause of the issue is the disabling of Cinder API v1 in
the gates by default (
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075689.html),
which was merged yesterday (https://review.openstack.org/#/c/194726/17).

Since Cinder V1 is disabled, python-novaclient has several issues:
 - "nova volume-* does not work when using cinder v2 API" -
https://bugs.launchpad.net/python-novaclient/+bug/1392846
 - "nova volume-* managers override service_type to 'volume', which is
missed in gates" - https://bugs.launchpad.net/python-novaclient/+bug/1501258
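The second bug is easy to picture: the client asks the service catalog for service_type 'volume' (Cinder v1), which is no longer registered. A toy lookup (catalog layout simplified; names illustrative, not the real keystoneclient structures):

```python
# With Cinder v1 disabled, only 'volumev2' is in the catalog, while
# nova volume-* commands look up service_type 'volume'.
catalog = [
    {'type': 'volumev2',
     'endpoints': [{'publicURL': 'http://cinder.example:8776/v2'}]},
]


def url_for(catalog, service_type):
    # Return the public endpoint for a service type, mimicking the
    # "publicURL endpoint for ... service not found" failure mode.
    for service in catalog:
        if service['type'] == service_type:
            return service['endpoints'][0]['publicURL']
    raise LookupError('publicURL endpoint for %s service not found'
                      % service_type)
```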


On Wed, Sep 30, 2015 at 5:56 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com>
wrote:

> Hi, all
>
> I submitted a patch for novaclient last night:
> https://review.openstack.org/#/c/228769/ , and it turns out the
> functional test has failed due to:  publicURL endpoint for volume service
> not found. I also found out that another novaclient patch:
> https://review.openstack.org/#/c/217131/ also fails due to this error, so
> this must be a bug. Any idea on how to fix this?
>
> Thanks,
>
> BR,
>
> Zheng
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/4c8e91ee/attachment.html>

From andy.mccrae at gmail.com  Wed Sep 30 10:06:16 2015
From: andy.mccrae at gmail.com (Andy McCrae)
Date: Wed, 30 Sep 2015 11:06:16 +0100
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis
 (stevelle) for core reviewer
In-Reply-To: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
References: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
Message-ID: <CAM2OCdP5X84No-zj30GrRVR-nrPfDfPDzAkzwckw0AwqopH1+w@mail.gmail.com>

+1 from me.

On 30 September 2015 at 09:51, Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

> Hi everyone,
>
> I'd like to propose that Steve Lewis (stevelle) be added as a core
> reviewer.
>
> He has made an effort to consistently keep up with doing reviews in the
> last cycle and always makes an effort to ensure that his responses are made
> after thorough testing where possible. I have found his input to be
> valuable.
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/06aaa8a4/attachment.html>

From thomas.morin at orange.com  Wed Sep 30 10:08:31 2015
From: thomas.morin at orange.com (thomas.morin at orange.com)
Date: Wed, 30 Sep 2015 12:08:31 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <D67601DC-EA26-4315-A8D0-96D0F1593799@redhat.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
 <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>
 <D67601DC-EA26-4315-A8D0-96D0F1593799@redhat.com>
Message-ID: <18564_1443607712_560BB4A0_18564_861_1_560BB49F.6030903@orange.com>

Hi Ihar,

Ihar Hrachyshka :
>> Miguel Angel Ajo :
>>> Do you have a rough idea of what operations you may need to do?
>> Right now, what bagpipe driver for networking-bgpvpn needs to interact with is:
>> - int_br OVSBridge (read-only)
>> - tun_br OVSBridge (add patch port, add flows)
>> - patch_int_ofport port number (read-only)
>> - local_vlan_map dict (read-only)
>> - setup_entry_for_arp_reply method (called to add static ARP entries)
>>
> Sounds very tightly coupled to OVS agent.

>
>>> Please bear in mind, the extension interface will be available from different agent types
>>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about could also serve as
>>> a translation driver for the agents (where the translation is possible), I totally understand
>>> that most extensions are specific agent bound, and we must be able to identify
>>> the agent we're serving back exactly.
>> Yes, I do have this in mind, but what we've identified for now seems to be OVS specific.
> Indeed it does. Maybe you can try to define the needed pieces in high level actions, not internal objects you need to access. Like "connect endpoint X to Y", "determine segmentation id for a network", etc.

I've been thinking about this, but I tend to reach the conclusion that
the things we need to interact with are pretty hard to abstract out
into something generic across different agents. Everything we need to
do in our case relates to how the agents use bridges and represent
networks internally: linuxbridge has one bridge per network, while OVS
has a limited number of bridges playing different roles for all
networks, with internal segmentation.

To look at the two things you mention:
- "connect endpoint X to Y": what we need to do is redirect the traffic
destined for the gateway of a Neutron network to the thing that will
do the MPLS forwarding for the right BGP VPN context (called a VRF), in
our case br-mpls (that could be done with an OVS table too); that
action might be abstracted out to hide the details specific to OVS, but
I'm not sure how to name the destination in a way that would be
agnostic to these details, and this is not really relevant until
we have a context in which the linuxbridge agent would pass packets
to something doing MPLS forwarding (OVS is currently the only option we
support for MPLS forwarding, and it does not really make sense to mix
linuxbridge for Neutron L2/L3 and OVS for MPLS)
- "determine segmentation id for a network": this is something really 
OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, 
and does not rely on internal segmentation

Completely abstracting out the packet forwarding pipelines in the OVS and
linuxbridge agents would possibly allow defining an interface that agent
extensions could use without knowing anything specific to OVS or
linuxbridge, but I believe this is a very significant task to tackle.

Hopefully it will be acceptable to create an interface, even if it exposes
a set of methods specific to the linuxbridge agent and a set of methods
specific to the OVS agent. That would mean that an agent extension
that can work in both contexts (not our case yet) would check the agent
type before using the first set or the second set.

Does this approach make sense?

-Thomas
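As a sketch of that last idea (all class and method names here are hypothetical, not proposed Neutron code): an OVS-specific API object exposes just the pieces listed earlier in the thread, and an extension that supports several agent types branches on the API type it receives:

```python
class OVSAgentExtensionAPI(object):
    """Hypothetical OVS-specific facade handed to extensions: it exposes
    only the pieces named in this thread, not the whole agent object."""

    def __init__(self, agent):
        self._agent = agent

    # read-only accessors
    def int_br(self):
        return self._agent.int_br

    def tun_br(self):
        return self._agent.tun_br

    def local_vlan_map(self):
        return dict(self._agent.local_vlan_map)

    # an agent method the extension is allowed to call
    def add_arp_entry(self, *args, **kwargs):
        return self._agent.setup_entry_for_arp_reply(*args, **kwargs)


class BagpipeLikeExtension(object):
    """An extension supporting several agent types would check which
    concrete API class it receives at initialize."""

    def initialize(self, agent_api):
        if isinstance(agent_api, OVSAgentExtensionAPI):
            self.vlan_map = agent_api.local_vlan_map()


class _FakeOVSAgent(object):
    # Stand-in for the real OVS agent, just for this sketch.
    int_br = 'br-int'
    tun_br = 'br-tun'
    local_vlan_map = {'net-a': 1}

    def setup_entry_for_arp_reply(self, *args, **kwargs):
        return ('arp-entry',) + args


api = OVSAgentExtensionAPI(_FakeOVSAgent())
ext = BagpipeLikeExtension()
ext.initialize(api)
```

The facade keeps the agent free to refactor its internals, as long as the small API surface stays stable.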

_________________________________________________________________________________________________________________________

Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.

This message and its attachments may contain confidential or privileged information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
Thank you.



From thomas.morin at orange.com  Wed Sep 30 10:14:19 2015
From: thomas.morin at orange.com (thomas.morin at orange.com)
Date: Wed, 30 Sep 2015 12:14:19 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <CALqgCCodFKCp_ar5M6+9b9Ngiqf1qc6Rk-SKrF2GBYzc9f2CMw@mail.gmail.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <CALqgCCodFKCp_ar5M6+9b9Ngiqf1qc6Rk-SKrF2GBYzc9f2CMw@mail.gmail.com>
Message-ID: <18564_1443608060_560BB5FC_18564_1242_1_560BB5FB.4080104@orange.com>

Hi Irena,

Irena Berezovsky:
> I would like to second Kevin. This can be done in a similar way as
> ML2 Plugin passed plugin_context to ML2 Extension Drivers:
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L910

Yes, this would be similar and could indeed be named agent_context.

However, contrary to the ML2 plugin, which provides a context when calling
most driver methods, I don't think that here we would need a context to
be passed at each call of an AgentCoreResourceExtension; providing an
interface to hook into the agent at initialize seems enough to me.

Thanks,

-Thomas



On Fri, Sep 25, 2015 at 11:57 AM, Kevin Benton <blak111 at gmail.com 
<mailto:blak111 at gmail.com>> wrote:

    I think the 4th of the options you proposed would be the best. We
    don't want to give extensions direct access to the agent object or else
    we will run the risk of breaking extensions all of the time during
    any kind of reorganization or refactoring. Having a well defined API
    in between will give us flexibility to move things around.

    On Fri, Sep 25, 2015 at 1:32 AM, <thomas.morin at orange.com
    <mailto:thomas.morin at orange.com>> wrote:

        Hi everyone,

        (TL;DR: we would like an L2 agent extension to be able to call
        methods on the agent class, e.g. OVSAgent)

        In the networking-bgpvpn project, we need the reference driver
        to interact with the ML2 openvswitch agent with new RPCs to
        allow exchanging information with the BGP VPN implementation
        running on the compute nodes. We also need the OVS agent to
        setup specific things on the OVS bridges for MPLS traffic.

        To extend the agent behavior, we currently create a new agent by
        mimicking the main() in ovs_neutron_agent.py, but instead of
        instantiating OVSAgent, we instantiate a class that overloads
        the OVSAgent class with the additional behavior we need [1].

        This is really not the ideal way of extending the agent, and we
        would prefer using the L2 agent extension framework [2].

        Using the L2 agent extension framework would work, but only
        partially: it would easily allow us to register our RPC
        consumers, but would not let us access some
        datastructures/methods of the agent that we need to use:
        setup_entry_for_arp_reply and local_vlan_map, and access to the
        OVSBridge objects to manipulate OVS ports.

        I've filed an RFE bug to track this issue [5].

        We would like something like one of the following:
        1) augment the L2 agent extension interface
        (AgentCoreResourceExtension) to give access to the agent object
        (and thus let the extension call methods of the agent) by giving
        the agent as a parameter of the initialize method [4]
        2) augment the L2 agent extension interface
        (AgentCoreResourceExtension) to give access to the agent object
        (and thus let the extension call methods of the agent) by giving
        the agent as a parameter of a new setAgent method
        3) augment the L2 agent extension interface
        (AgentCoreResourceExtension) to give access only to
        specific/chosen methods on the agent object, for instance by
        giving a dict as a parameter of the initialize method [4], whose
        keys would be method names, and values would be pointer to these
        methods on the agent object
        4) define a new interface with methods to access things inside
        the agent, this interface would be implemented by an object
        instantiated by the agent, and that the agent would pass to the
        extension manager, thus allowing the extension manager to pass
        the object to an extension through the initialize method of
        AgentCoreResourceExtension [4]

        Any feedback on these ideas...?
        Of course any other idea is welcome...

        For the sake of triggering reaction, the question could be
        rephrased as: if we submit a change doing (1) above, would it
        have a reasonable chance of merging ?

        -Thomas

        [1]
        https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
        [2] https://review.openstack.org/#/c/195439/
        [3]
        https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
        [4]
        https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
        [5] https://bugs.launchpad.net/neutron/+bug/1499637



        __________________________________________________________________________
        OpenStack Development Mailing List (not for usage questions)
        Unsubscribe:
        OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
        <http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




    -- 
    Kevin Benton






-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/f66ab82b/attachment.html>

From jaypipes at gmail.com  Wed Sep 30 10:18:08 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 30 Sep 2015 06:18:08 -0400
Subject: [openstack-dev] [nova] Shared storage space count for Nova
In-Reply-To: <E1FB4937BE24734DAD0D1D4E4E506D7890D1AB97@MAIL703.KDS.KEANE.COM>
References: <E1FB4937BE24734DAD0D1D4E4E506D7890D1AB97@MAIL703.KDS.KEANE.COM>
Message-ID: <560BB6E0.4010706@gmail.com>

On 09/30/2015 03:04 AM, Kekane, Abhishek wrote:
> Hi Devs,
>
> Nova shared storage has an issue [1] with counting free space, total space
> and disk_available_least, which affects hypervisor stats and the scheduler.
>
> I have created an etherpad [2] which contains a detailed problem description
> and a possible solution, with possible challenges for this design.
>
> Later I came to know there is an ML thread [3] initiated by Jay Pipes which
> proposes creating resource pools for disk, CPU, memory, NUMA nodes, etc.
>
> IMO this is a good approach and should be addressed in the Mitaka release. I am
> eager to work on this and will provide any kind of help with
> implementation, review, etc.
>
> Please give us your opinion about the same.

Hi! I have actually created a work-in-progress blueprint for the above 
proposed solution here:

https://review.openstack.org/#/c/225546/

I will have it completed by end of week.

Best,
-jay

> [1] https://bugs.launchpad.net/nova/+bug/1252321
>
> [2] https://etherpad.openstack.org/p/shared-storage-space-count
>
> [3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070564.html
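For context, the double-counting problem described in [1] and [2] can be illustrated with a small sketch; the numbers and pool names below are hypothetical:

```python
# Three compute hosts backed by the same shared NFS export each report the
# backend's free space as their own, so naively summing per-host stats
# overcounts the real capacity threefold.

hosts = {
    'compute1': {'free_disk_gb': 500, 'pool': 'nfs-pool-a'},
    'compute2': {'free_disk_gb': 500, 'pool': 'nfs-pool-a'},
    'compute3': {'free_disk_gb': 500, 'pool': 'nfs-pool-a'},
}

# Naive aggregation, roughly what hypervisor stats show today:
naive_total = sum(h['free_disk_gb'] for h in hosts.values())

# Pool-aware aggregation, in the spirit of the resource-pools proposal:
# each shared pool is counted once, however many hosts consume from it.
pools = {}
for h in hosts.values():
    pools[h['pool']] = h['free_disk_gb']
pool_total = sum(pools.values())

print(naive_total, pool_total)  # 1500 vs. the real 500
```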


From ihrachys at redhat.com  Wed Sep 30 10:26:22 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 30 Sep 2015 12:26:22 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
	access agent methods ?
In-Reply-To: <18564_1443607712_560BB4A0_18564_861_1_560BB49F.6030903@orange.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
 <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>
 <D67601DC-EA26-4315-A8D0-96D0F1593799@redhat.com>
 <18564_1443607712_560BB4A0_18564_861_1_560BB49F.6030903@orange.com>
Message-ID: <B1F39F30-6D14-4C3F-AA11-AB8908D22A45@redhat.com>


> On 30 Sep 2015, at 12:08, thomas.morin at orange.com wrote:
> 
> Hi Ihar,
> 
> Ihar Hrachyshka :
>>> Miguel Angel Ajo :
>>>> Do you have a rough idea of what operations you may need to do?
>>> Right now, what bagpipe driver for networking-bgpvpn needs to interact with is:
>>> - int_br OVSBridge (read-only)
>>> - tun_br OVSBridge (add patch port, add flows)
>>> - patch_int_ofport port number (read-only)
>>> - local_vlan_map dict (read-only)
>>> - setup_entry_for_arp_reply method (called to add static ARP entries)
>>> 
>> Sounds very tightly coupled to OVS agent.
> 
>> 
>>>> Please bear in mind, the extension interface will be available from different agent types
>>>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about could also serve as
>>>> a translation driver for the agents (where the translation is possible), I totally understand
>>>> that most extensions are specific agent bound, and we must be able to identify
>>>> the agent we're serving back exactly.
>>> Yes, I do have this in mind, but what we've identified for now seems to be OVS specific.
>> Indeed it does. Maybe you can try to define the needed pieces in high-level actions, not internal objects you need to access. Like "connect endpoint X to Y", "determine segmentation id for a network", etc.
> 
> I've been thinking about this, but would tend to reach the conclusion that the things we need to interact with are pretty hard to abstract out into something that would be generic across different agents.  Everything we need to do in our case relates to how the agents use bridges and represent networks internally: linuxbridge has one bridge per Network, while OVS has a limited number of bridges playing different roles for all networks with internal segmentation.
> 
> To look at the two things you mention:
> - "connect endpoint X to Y": what we need to do is redirect the traffic destined for the gateway of a Neutron network to the thing that will do the MPLS forwarding for the right BGP VPN context (called VRF), in our case br-mpls (that could be done with an OVS table too); that action might be abstracted out to hide the details specific to OVS, but I'm not sure how to name the destination in a way that would be agnostic to these details, and this is not really relevant to do until we have a relevant context in which linuxbridge would pass packets to something doing MPLS forwarding (OVS is currently the only option we support for MPLS forwarding, and it does not really make sense to mix linuxbridge for Neutron L2/L3 and OVS for MPLS)
> - "determine segmentation id for a network": this is something really OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, and does not rely on internal segmentation
> 
> Completely abstracting out the packet forwarding pipelines in the OVS and linuxbridge agents would possibly allow defining an interface that an agent extension could use without knowing anything specific to OVS or linuxbridge, but I believe this is a very significant task to tackle.

If you look for a clean way to integrate with reference agents, then it's something that we should try to achieve. I agree it's not an easy thing.

Just an idea: can we have a resource for traffic forwarding, similar to security groups? I know folks are not ok with extending security groups API due to compatibility reasons, so maybe fwaas is the place to experiment with it.

> 
> Hopefully it will be acceptable to create an interface, even if it exposes a set of methods specific to the linuxbridge agent and a set of methods specific to the OVS agent.  That would mean that an agent extension that can work in both contexts (not our case yet) would check the agent type before using the first set or the second set.

The assumption of the whole idea of l2 agent extensions is that they are agent agnostic. In case of QoS, we implemented a common QoS extension that can be plugged into any agent [1], and a set of backend drivers (atm it's just sr-iov [2] and ovs [3]) that are selected based on the driver type argument passed into the extension manager [4][5]. Your extension could use a similar approach to select the backend.

[1]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions/qos.py#n169
[2]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py
[3]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py
[4]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#n395
[5]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py#n155
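As a rough sketch of the backend-selection approach described above (the real mechanics live in the linked code; the class names below are illustrative only):

```python
# Minimal sketch of an agent-agnostic extension with per-agent backend
# drivers, selected by the driver type the hosting agent passes in.
# Names are made up for illustration, not the real Neutron classes.

class OVSQosDriver(object):
    def create(self, port, policy):
        return 'ovs: applied %s to %s' % (policy, port)


class SriovQosDriver(object):
    def create(self, port, policy):
        return 'sriov: applied %s to %s' % (policy, port)


QOS_DRIVERS = {'ovs': OVSQosDriver, 'sriov': SriovQosDriver}


class QosAgentExtension(object):
    """Common extension logic; only the backend driver is agent-specific."""

    def initialize(self, driver_type):
        # Pick the backend matching the hosting agent.
        self.driver = QOS_DRIVERS[driver_type]()

    def handle_port(self, port, policy):
        return self.driver.create(port, policy)


# The OVS agent would initialize the extension with its own driver type:
ext = QosAgentExtension()
ext.initialize('ovs')
result = ext.handle_port('port-1', 'bw-limit-policy')
```

The extension itself stays agent agnostic; only the driver registry knows about agent specifics.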

> 
> Does this approach make sense ?
> 
> -Thomas
> 
> _________________________________________________________________________________________________________________________
> 
> Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.
> 
> This message and its attachments may contain confidential or privileged information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
> Thank you.

Note that you should really avoid putting that ^^ kind of signature into your emails intended for public mailing lists. If it's confidential, why do you send it to everyone? And sorry, folks will copy it without authorisation, for archiving and indexing reasons and whatnot.

Ihar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/9767e81e/attachment.pgp>

From Abhishek.Kekane at nttdata.com  Wed Sep 30 10:26:52 2015
From: Abhishek.Kekane at nttdata.com (Kekane, Abhishek)
Date: Wed, 30 Sep 2015 10:26:52 +0000
Subject: [openstack-dev] [nova] Shared storage space count for Nova
In-Reply-To: <560BB6E0.4010706@gmail.com>
References: <E1FB4937BE24734DAD0D1D4E4E506D7890D1AB97@MAIL703.KDS.KEANE.COM>
 <560BB6E0.4010706@gmail.com>
Message-ID: <E1FB4937BE24734DAD0D1D4E4E506D7890D1ADB3@MAIL703.KDS.KEANE.COM>

Hi Jay,

Thank you for the update.

Abhishek Kekane

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com] 
Sent: 30 September 2015 15:48
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [nova] Shared storage space count for Nova

On 09/30/2015 03:04 AM, Kekane, Abhishek wrote:
> Hi Devs,
>
> Nova shared storage has an issue [1] with counting free space, total space 
> and disk_available_least, which affects hypervisor stats and the scheduler.
>
> I have created an etherpad [2] which contains a detailed problem 
> description and a possible solution, with possible challenges for this design.
>
> Later I came to know there is an ML thread [3] initiated by Jay Pipes which 
> proposes creating resource pools for disk, CPU, memory, NUMA nodes, etc.
>
> IMO this is a good approach and should be addressed in the Mitaka release. I 
> am eager to work on this and will provide any kind of help with 
> implementation, review, etc.
>
> Please give us your opinion about the same.

Hi! I have actually created a work-in-progress blueprint for the above proposed solution here:

https://review.openstack.org/#/c/225546/

I will have it completed by end of week.

Best,
-jay

> [1] https://bugs.launchpad.net/nova/+bug/1252321
>
> [2] https://etherpad.openstack.org/p/shared-storage-space-count
>
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070564.ht
> ml


______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


From vkuklin at mirantis.com  Wed Sep 30 10:28:38 2015
From: vkuklin at mirantis.com (Vladimir Kuklin)
Date: Wed, 30 Sep 2015 05:28:38 -0500
Subject: [openstack-dev] [fuel] PTL & Component Leads elections
In-Reply-To: <CACo6NWBDQfaZd8emVoOcSqmP+G-+LKkMdp28md8aZP7J0zezgA@mail.gmail.com>
References: <CA+GZd7-7wNUoRe6Zsg3zaUJ_83R7dHLKeJbieRh6pFpcO8Z8ZA@mail.gmail.com>
 <9A3CA8E8-1B57-4529-B9C6-FB2679DAD636@mirantis.com>
 <CACo6NWBDQfaZd8emVoOcSqmP+G-+LKkMdp28md8aZP7J0zezgA@mail.gmail.com>
Message-ID: <CAHAWLf3A2R0h+HJTAinZUN=UTWknQnkR9UidV-8Xq01JTNCDCg@mail.gmail.com>

+1 to Igor. Do we have a voting system set up?

On Wed, Sep 30, 2015 at 4:35 AM, Igor Kalnitsky <ikalnitsky at mirantis.com>
wrote:

> > * September 29 - October 8: PTL elections
>
> So, it's in progress. Where can I vote? I didn't receive any emails.
>
> On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
> <tnapierala at mirantis.com> wrote:
> >> On 18 Sep 2015, at 04:39, Sergey Lukjanov <slukjanov at mirantis.com>
> wrote:
> >>
> >>
> >> Time line:
> >>
> >> PTL elections
> >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
> position
> >> * September 29 - October 8: PTL elections
> >
> > Just a reminder that we have a deadline for candidates today.
> >
> > Regards,
> > --
> > Tomasz 'Zen' Napierala
> > Product Engineering - Poland
> >
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuklin at mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/b4958f93/attachment.html>

From sean at dague.net  Wed Sep 30 10:32:48 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 30 Sep 2015 06:32:48 -0400
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <491F2677-6DFD-4FF8-BCA9-1169FF1841B2@vmware.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <560ABCEE.7010408@internap.com>
 <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>
 <491F2677-6DFD-4FF8-BCA9-1169FF1841B2@vmware.com>
Message-ID: <560BBA50.3060401@dague.net>

On 09/29/2015 01:32 PM, Mark Voelker wrote:
> 
> Mark T. Voelker
> 
> 
> 
>> On Sep 29, 2015, at 12:36 PM, Matt Fischer <matt at mattfischer.com> wrote:
>>
>>
>>
>> I agree with John Griffith. I don't have any empirical evidence to back
>> my "feelings" on that one, but it's true that we weren't able to enable
>> Cinder v2 until now.
>>
>> Which makes me wonder: When can we actually deprecate an API version? I
>> *feel* we are quick to jump on deprecation when the replacement hasn't
>> been 100% ready for several versions.
>>
>> --
>> Mathieu
>>
>>
>> I don't think it's too much to ask that versions can't be deprecated until the new version is 100% working, passing all tests, and the clients (at least python-xxxclients) can handle it without issues. Ideally I'd like to also throw in the criteria that devstack, rally, tempest, and other services are all using and exercising the new API.
>>
>> I agree that things feel rushed.
> 
> 
> FWIW, the TC recently created an assert:follows-standard-deprecation tag.  Ivan linked to a thread in which Thierry asked for input on it, but FYI the final language as it was approved last week [1] is a bit different than originally proposed.  It now requires one release plus 3 linear months of deprecated-but-still-present-in-the-tree as a minimum, and recommends at least two full stable releases for significant features (an entire API version would undoubtedly fall into that bucket).  It also requires that a migration path will be documented.  However, to Matt's point, it doesn't contain any language that says specific things like:
> 
> In the case of major API version deprecation:
> * $oldversion and $newversion must both work with [cinder|nova|whatever]client and openstackclient during the deprecation period.
> * It must be possible to run $oldversion and $newversion concurrently on the servers to ensure end users don't have to switch overnight. 
> * Devstack uses $newversion by default.
> * $newversion works in Tempest/Rally/whatever else.
> 
> What it *does* do is require that a thread be started here on openstack-operators [2] so that operators can provide feedback.  I would hope that feedback like "I can't get clients to use it, so please don't remove it yet" would be taken into account by projects, which seems to be exactly what's happening in this case with Cinder v1.  =)
> 
> I'd hazard a guess that the TC would be interested in hearing about whether you think that plan is a reasonable one (and given that TC election season is upon us, candidates for the TC probably would too).
> 
> [1] https://review.openstack.org/#/c/207467/
> [2] http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
> 
> At Your Service,
> 
> Mark T. Voelker

I would agree that the amount of breakage even in our own systems has been
substantial here, and I'm personally feeling we should probably revert
the devstack change that turns off v1. It looks like it wasn't just one
client that got caught in this, but most of them.

This transition feels like it has been too much stick and not enough
carrot. IIRC openstack client wouldn't work with cinder v2 until a
couple of months ago, which made me do some weird things in grenade when
building volumes. [1]

	-Sean

1.
https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L40-L41

-- 
Sean Dague
http://dague.net


From julien at danjou.info  Wed Sep 30 10:50:18 2015
From: julien at danjou.info (Julien Danjou)
Date: Wed, 30 Sep 2015 12:50:18 +0200
Subject: [openstack-dev] [election] [tc] Candidacy for Mitaka
Message-ID: <m01tdg89lh.fsf@danjou.info>

Hi fellow developers,

I hereby announce my candidacy for the OpenStack Technical Committee election.

I am currently employed by Red Hat and spend all my time working on upstream
OpenStack development, something I've been doing since 2011. In recent years,
I ran the Ceilometer project as PTL and already served on the TC a few cycles
ago. I have made many contributions to OpenStack as a whole, and I'm one of the
top contributors to the project [1] - hey, I've contributed to 72 OpenStack projects!

My plan here is to bring some of my views of the OpenStack world to the
technical committee, which actually does not seem to do much technical stuff
nowadays - much more bureaucracy. Maybe we should rename it?

I'm glad we now have a "big tent" approach in our community. I was one of the
first, and one of the only members of the TC, to say we should not push back
projects for bad reasons, and now we are accepting 10x more of them. The tag
system we have imagined and now use is nice, but as a new user of the tags, I
find them annoying and not always completely thought through. I'm in favor of
more agile and more user-oriented development, and I'd love to bring more of that.

I would also like to bring some of my experience with testing, usability and
documentation to the table. With part of the Telemetry team, I've been able to
build a project that has a good and sane community, works by default, has a
well-designed REST API and great up-to-date documentation, and is simple to
deploy and use. I wish the rest of OpenStack were a bit more like that.

[1] http://stackalytics.com/?metric=commits&user_id=jdanjou&release=all

Happy hacking!

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 800 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/425d1ecc/attachment.pgp>

From mangelajo at redhat.com  Wed Sep 30 10:53:02 2015
From: mangelajo at redhat.com (Miguel Angel Ajo)
Date: Wed, 30 Sep 2015 12:53:02 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
 access agent methods ?
In-Reply-To: <B1F39F30-6D14-4C3F-AA11-AB8908D22A45@redhat.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
 <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>
 <D67601DC-EA26-4315-A8D0-96D0F1593799@redhat.com>
 <18564_1443607712_560BB4A0_18564_861_1_560BB49F.6030903@orange.com>
 <B1F39F30-6D14-4C3F-AA11-AB8908D22A45@redhat.com>
Message-ID: <560BBF0E.2040904@redhat.com>



Ihar Hrachyshka wrote:
>> On 30 Sep 2015, at 12:08, thomas.morin at orange.com wrote:
>>
>> Hi Ihar,
>>
>> Ihar Hrachyshka :
>>>> Miguel Angel Ajo :
>>>>> Do you have a rough idea of what operations you may need to do?
>>>> Right now, what bagpipe driver for networking-bgpvpn needs to interact with is:
>>>> - int_br OVSBridge (read-only)
>>>> - tun_br OVSBridge (add patch port, add flows)
>>>> - patch_int_ofport port number (read-only)
>>>> - local_vlan_map dict (read-only)
>>>> - setup_entry_for_arp_reply method (called to add static ARP entries)
>>>>
>>> Sounds very tightly coupled to OVS agent.
>>>>> Please bear in mind, the extension interface will be available from different agent types
>>>>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about could also serve as
>>>>> a translation driver for the agents (where the translation is possible), I totally understand
>>>>> that most extensions are specific agent bound, and we must be able to identify
>>>>> the agent we're serving back exactly.
>>>> Yes, I do have this in mind, but what we've identified for now seems to be OVS specific.
>>> Indeed it does. Maybe you can try to define the needed pieces in high-level actions, not internal objects you need to access. Like "connect endpoint X to Y", "determine segmentation id for a network", etc.
>> I've been thinking about this, but would tend to reach the conclusion that the things we need to interact with are pretty hard to abstract out into something that would be generic across different agents.  Everything we need to do in our case relates to how the agents use bridges and represent networks internally: linuxbridge has one bridge per Network, while OVS has a limited number of bridges playing different roles for all networks with internal segmentation.
>>
>> To look at the two things you  mention:
>> - "connect endpoint X to Y" : what we need to do is redirect the traffic destinated to the gateway of a Neutron network, to the thing that will do the MPLS forwarding for the right BGP VPN context (called VRF), in our case br-mpls (that could be done with an OVS table too) ; that action might be abstracted out to hide the details specific to OVS, but I'm not sure on how to  name the destination in a way that would be agnostic to these details, and this is not really relevant to do until we have a relevant context in which the linuxbridge would pass packets to something doing MPLS forwarding (OVS is currently the only option we support for MPLS forwarding, and it does not really make sense to mix linuxbridge for Neutron L2/L3 and OVS for MPLS)
>> - "determine segmentation id for a network": this is something really OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, and does not rely on internal segmentation
>>
>> Completely abstracting out the packet forwarding pipelines in the OVS and linuxbridge agents would possibly allow defining an interface that an agent extension could use without knowing anything specific to OVS or linuxbridge, but I believe this is a very significant task to tackle.
>
> If you look for a clean way to integrate with reference agents, then it's something that we should try to achieve. I agree it's not an easy thing.
>
> Just an idea: can we have a resource for traffic forwarding, similar to security groups? I know folks are not ok with extending security groups API due to compatibility reasons, so maybe fwaas is the place to experiment with it.
>
>> Hopefully it will be acceptable to create an interface, even if it exposes a set of methods specific to the linuxbridge agent and a set of methods specific to the OVS agent.  That would mean that an agent extension that can work in both contexts (not our case yet) would check the agent type before using the first set or the second set.
>
> The assumption of the whole idea of l2 agent extensions is that they are agent agnostic. In case of QoS, we implemented a common QoS extension that can be plugged into any agent [1], and a set of backend drivers (atm it's just sr-iov [2] and ovs [3]) that are selected based on the driver type argument passed into the extension manager [4][5]. Your extension could use a similar approach to select the backend.
>
> [1]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions/qos.py#n169
> [2]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py
> [3]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py
> [4]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#n395
> [5]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py#n155

I disagree on the agent-agnostic thing. The QoS extension for SR-IOV is
totally not agnostic to OVS or LB; in the QoS case, it's just accidental
that OVS & LB share common bridges now, due to the OVS hybrid
implementation that leverages linux bridge and iptables.

I agree on having a well-defined interface specifying which API is
available for talking back to each agent, and it has to be common where
it's possible to be common.

It won't be easy, but it's the way forward if we want a world where such
commonalities and reusability of extensions exist by design rather than
by accident. Still, in my opinion it's not realistic to aim for that on
every shot: I believe we should try where we can, but we should be open
to agent-specific extensions. The idea of the extensions is that you can
extend specific agents without being forced to have the main loop
hijacked, or eventually having off-tree code plugged into our agents.

For that, we should add support for identifying the type of agent an
extension works with (compatibility, versioning, etc.).
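A hedged sketch of what such a compatibility declaration could look like (all names below are hypothetical, not existing Neutron code):

```python
# An extension declares which agent types it supports; the agent checks
# the declaration before loading it. Purely illustrative.

class AgentExtension(object):
    SUPPORTED_AGENT_TYPES = ()  # empty means "agnostic, works anywhere"

    def check_agent_type(self, agent_type):
        # Fail fast when loaded into an agent the extension cannot drive.
        if (self.SUPPORTED_AGENT_TYPES and
                agent_type not in self.SUPPORTED_AGENT_TYPES):
            raise RuntimeError('%s does not support agent type %r' % (
                type(self).__name__, agent_type))


class BagpipeBgpvpnExtension(AgentExtension):
    # OVS-specific today; a linuxbridge entry could be added later.
    SUPPORTED_AGENT_TYPES = ('ovs',)
```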



>> Does this approach make sense ?
>>
>> -Thomas
>>
>> _________________________________________________________________________________________________________________________
>>
>> Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
>> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
>> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
>> Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.
>>
>> This message and its attachments may contain confidential or privileged information that may be protected by law;
>> they should not be distributed, used or copied without authorisation.
>> If you have received this email in error, please notify the sender and delete this message and its attachments.
>> As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
>> Thank you.
>
> Note that you should really avoid putting that ^^ kind of signature into your emails intended for public mailing lists. If it's confidential, why do you send it to everyone? And sorry, folks will copy it without authorisation, for archiving and indexing reasons and whatnot.
>
> Ihar



From ihrachys at redhat.com  Wed Sep 30 11:09:08 2015
From: ihrachys at redhat.com (Ihar Hrachyshka)
Date: Wed, 30 Sep 2015 13:09:08 +0200
Subject: [openstack-dev] [neutron] How could an L2 agent extension
	access agent methods ?
In-Reply-To: <560BBF0E.2040904@redhat.com>
References: <20742_1443169933_5605068D_20742_3290_4_5605068C.7000705@orange.com>
 <CAO_F6JMoPqeSmURsk5UO8LVnBruQ2y0e_hDTLV2ctNAtm2_AGg@mail.gmail.com>
 <5605100D.4070506@redhat.com>
 <28922_1443184635_56053FFB_28922_4082_5_56053FFA.80000@orange.com>
 <D67601DC-EA26-4315-A8D0-96D0F1593799@redhat.com>
 <18564_1443607712_560BB4A0_18564_861_1_560BB49F.6030903@orange.com>
 <B1F39F30-6D14-4C3F-AA11-AB8908D22A45@redhat.com>
 <560BBF0E.2040904@redhat.com>
Message-ID: <C3762785-3A21-4468-B293-3C535DEAB233@redhat.com>


> On 30 Sep 2015, at 12:53, Miguel Angel Ajo <mangelajo at redhat.com> wrote:
> 
> 
> 
> Ihar Hrachyshka wrote:
>>> On 30 Sep 2015, at 12:08, thomas.morin at orange.com wrote:
>>> 
>>> Hi Ihar,
>>> 
>>> Ihar Hrachyshka :
>>>>> Miguel Angel Ajo :
>>>>>> Do you have a rough idea of what operations you may need to do?
>>>>> Right now, what bagpipe driver for networking-bgpvpn needs to interact with is:
>>>>> - int_br OVSBridge (read-only)
>>>>> - tun_br OVSBridge (add patch port, add flows)
>>>>> - patch_int_ofport port number (read-only)
>>>>> - local_vlan_map dict (read-only)
>>>>> - setup_entry_for_arp_reply method (called to add static ARP entries)
>>>>> 
>>>> Sounds very tightly coupled to OVS agent.
>>>>>> Please bear in mind, the extension interface will be available from different agent types
>>>>>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about could also serve as
>>>>>> a translation driver for the agents (where the translation is possible), I totally understand
>>>>>> that most extensions are specific agent bound, and we must be able to identify
>>>>>> the agent we're serving back exactly.
>>>>> Yes, I do have this in mind, but what we've identified for now seems to be OVS specific.
>>>> Indeed it does. Maybe you can try to define the needed pieces as high-level actions, not the internal objects you need access to. Like "connect endpoint X to Y", "determine segmentation id for a network", etc.
>>> I've been thinking about this, but would tend to reach the conclusion that the things we need to interact with are pretty hard to abstract out into something that would be generic across different agents.  Everything we need to do in our case relates to how the agents use bridges and represent networks internally: linuxbridge has one bridge per Network, while OVS has a limited number of bridges playing different roles for all networks with internal segmentation.
>>> 
>>> To look at the two things you mention:
>>> - "connect endpoint X to Y": what we need to do is redirect the traffic destined for the gateway of a Neutron network to the thing that will do the MPLS forwarding for the right BGP VPN context (called VRF), in our case br-mpls (that could be done with an OVS table too); that action might be abstracted out to hide the details specific to OVS, but I'm not sure how to name the destination in a way that would be agnostic to these details, and this is not really relevant to do until we have a context in which the linuxbridge would pass packets to something doing MPLS forwarding (OVS is currently the only option we support for MPLS forwarding, and it does not really make sense to mix linuxbridge for Neutron L2/L3 and OVS for MPLS)
>>> - "determine segmentation id for a network": this is something really OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, and does not rely on internal segmentation
>>> 
>>> Completely abstracting out the packet forwarding pipelines in the OVS and linuxbridge agents would possibly allow defining an interface that agent extensions could use without having to know anything specific to OVS or linuxbridge, but I believe this is a very significant task to tackle.
>> 
>> If you look for a clean way to integrate with reference agents, then it's something that we should try to achieve. I agree it's not an easy thing.
>> 
>> Just an idea: can we have a resource for traffic forwarding, similar to security groups? I know folks are not ok with extending security groups API due to compatibility reasons, so maybe fwaas is the place to experiment with it.
>> 
>>> Hopefully it will be acceptable to create an interface, even if it exposes a set of methods specific to the linuxbridge agent and a set of methods specific to the OVS agent.  That would mean that an agent extension that can work in both contexts (not our case yet) would check the agent type before using the first set or the second set.
>> 
>> The assumption of the whole idea of l2 agent extensions is that they are agent agnostic. In the case of QoS, we implemented a common QoS extension that can be plugged into any agent [1], and a set of backend drivers (atm it's just sr-iov [2] and ovs [3]) that are selected based on the driver type argument passed into the extension manager [4][5]. Your extension could use a similar approach to select the backend.
>> 
>> [1]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions/qos.py#n169
>> [2]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py
>> [3]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py
>> [4]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#n395
>> [5]: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py#n155
> 
> I disagree on the agent-agnostic thing. The QoS extension for SR-IOV is totally not agnostic for OVS or LB; in the QoS case, it's just
> accidental that OVS & LB share common bridges now, due to the OVS hybrid implementation that leverages linux bridge
> and iptables.

Wait. The QoS extension has nothing agent-backend specific. All it does is receive rpc updates for tracked resources and pass them into the qos drivers; the latter are the bits that implement backend-specific operations. So I am not sure why you say the extension itself is agent specific: any other amqp-based agent in the wild can adopt the extension as-is, only providing a new backend to load.
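[Editor's note: the split described above, a generic extension dispatching to backend drivers keyed by driver type, can be sketched roughly as below. Class and method names are illustrative only, not Neutron's actual extension API.]

```python
# Sketch of an agent-agnostic extension delegating to backend drivers.
# The extension only routes resource updates; backend specifics live in
# the drivers, selected by the driver type the agent passes in.
from abc import ABC, abstractmethod


class QosDriverBase(ABC):
    """Backend-specific operations live here, not in the extension."""

    @abstractmethod
    def update_policy(self, port, policy):
        ...


class OvsQosDriver(QosDriverBase):
    def update_policy(self, port, policy):
        return "ovs: set qos %s on %s" % (policy, port)


class SriovQosDriver(QosDriverBase):
    def update_policy(self, port, policy):
        return "sriov: set qos %s on %s" % (policy, port)


DRIVERS = {"ovs": OvsQosDriver, "sriov": SriovQosDriver}


class QosAgentExtension:
    """Agent-agnostic: receives updates and dispatches to the driver."""

    def initialize(self, driver_type):
        self.driver = DRIVERS[driver_type]()

    def handle_port_update(self, port, policy):
        return self.driver.update_policy(port, policy)
```

A new agent type would only add an entry to the driver registry; the extension itself stays untouched, which is the agent-agnostic claim being made above.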

> 
> I agree on having a well-defined interface, with an API available for talking back to each agent, and it has to be common where
> it's possible to be common.
> 
> It doesn't have to be easy, but it's the way forward if we want a world where those commonalities and reusable extensions
> exist by design and not just by accident. Still, it's not realistic in my opinion to aim for it on every shot. I believe we should try where we can,
> but we should be open to agent-specific extensions. The idea of the extensions is that you can extend specific agents without
> being forced to have the main loop hijacked, or eventually having out-of-tree code plugged into our agents.

Partially, yes. The culprit here is how much the extension API should know about an agent. We can probably make the extension API completely extendable by allowing agents to pass arbitrary kwargs into the extension manager, which would forward them into extensions. Note that this changes the current API for extensions and technically breaks it (not that I know of any external extensions that could be affected so far).

> 
> Then we should add support to identify the type of agent the extension works with (compatibility, versioning, etc.)

We already pass the type into the extension manager, and that's how we plug in the proper backend driver in QoS.

> 
> 
> 
>>> Does this approach make sense ?
>>> 
>>> -Thomas
>>> 
>>> _________________________________________________________________________________________________________________________
>>> 
>>> Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
>>> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
>>> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
>>> Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.
>>> 
>>> This message and its attachments may contain confidential or privileged information that may be protected by law;
>>> they should not be distributed, used or copied without authorisation.
>>> If you have received this email in error, please notify the sender and delete this message and its attachments.
>>> As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
>>> Thank you.
>> 
>> Note that you should really avoid putting that ^^ kind of signature into your emails intended for public mailing lists. If it's confidential, why do you send it to everyone? And sorry, folks will copy it without authorisation, for archiving and indexing reasons and whatnot.
>> 
>> Ihar

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/e7529987/attachment.pgp>

From pmurray at hpe.com  Wed Sep 30 11:25:12 2015
From: pmurray at hpe.com (Murray, Paul (HP Cloud))
Date: Wed, 30 Sep 2015 11:25:12 +0000
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <5F389C517756284F80239F55BCA5DDDD74CAC368@G4W3298.americas.hpqcorp.net>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <5F389C517756284F80239F55BCA5DDDD74CAC368@G4W3298.americas.hpqcorp.net>
Message-ID: <39E5672E03A1CB4B93936D1C4AA5E15D1DBF1827@G1W3782.americas.hpqcorp.net>


> Please respond to this post if you have an interest in this and what you would like to see done. 
> Include anything you are already getting on with so we get a clear picture. 

Thank you to those who replied to this thread. I have used the contents to start an etherpad page here:

https://etherpad.openstack.org/p/mitaka-live-migration 

I have taken the liberty of listing those that responded to the thread and the authors of mentioned patches as interested people.

From the responses and looking at the specs up for review it looks like there are about five areas that could be addressed in Mitaka and several others that could come later. The first five are:

- migrating instances with a mix of local disks and cinder volumes
- pause instance during migration
- cancel migration
- migrate suspended instances
- improve CI coverage

Not all of these are covered by specs yet and all the existing specs need reviews. Please look at the etherpad and see if there is anything you think is missing.

Paul
	



From e0ne at e0ne.info  Wed Sep 30 11:29:14 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 30 Sep 2015 14:29:14 +0300
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <560BBA50.3060401@dague.net>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <560ABCEE.7010408@internap.com>
 <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>
 <491F2677-6DFD-4FF8-BCA9-1169FF1841B2@vmware.com> <560BBA50.3060401@dague.net>
Message-ID: <CAGocpaF1uqUTBh-1UsrChC4An-=BJRQqe+Ot8ODUYw=5vRpoyA@mail.gmail.com>

Sean,

openstack client supports Cinder API v2 since Liberty. What is the right
way to fix grenade?

Regards,
Ivan Kolodyazhny,
Web Developer

On Wed, Sep 30, 2015 at 1:32 PM, Sean Dague <sean at dague.net> wrote:

> On 09/29/2015 01:32 PM, Mark Voelker wrote:
> >
> > Mark T. Voelker
> >
> >
> >
> >> On Sep 29, 2015, at 12:36 PM, Matt Fischer <matt at mattfischer.com>
> wrote:
> >>
> >>
> >>
> >> I agree with John Griffith. I don't have any empirical evidence to back
> >> my "feelings" on that one, but it's true that we weren't able to enable
> >> Cinder v2 until now.
> >>
> >> Which makes me wonder: When can we actually deprecate an API version? I
> >> *feel* we are fast to jump on the deprecation when the replacement isn't
> >> 100% ready yet for several versions.
> >>
> >> --
> >> Mathieu
> >>
> >>
> >> I don't think it's too much to ask that versions can't be deprecated
> until the new version is 100% working, passing all tests, and the clients
> (at least python-xxxclients) can handle it without issues. Ideally I'd like
> to also throw in the criteria that devstack, rally, tempest, and other
> services are all using and exercising the new API.
> >>
> >> I agree that things feel rushed.
> >
> >
> > FWIW, the TC recently created an assert:follows-standard-deprecation
> tag.  Ivan linked to a thread in which Thierry asked for input on it, but
> FYI the final language as it was approved last week [1] is a bit different
> than originally proposed.  It now requires one release plus 3 linear months
> of deprecated-but-still-present-in-the-tree as a minimum, and recommends at
> least two full stable releases for significant features (an entire API
> version would undoubtedly fall into that bucket).  It also requires that a
> migration path will be documented.  However to Matt's point, it doesn't
> contain any language that says specific things like:
> >
> > In the case of major API version deprecation:
> > * $oldversion and $newversion must both work with
> [cinder|nova|whatever]client and openstackclient during the deprecation
> period.
> > * It must be possible to run $oldversion and $newversion concurrently on
> the servers to ensure end users don't have to switch overnight.
> > * Devstack uses $newversion by default.
> > * $newversion works in Tempest/Rally/whatever else.
> >
> > What it *does* do is require that a thread be started here on
> openstack-operators [2] so that operators can provide feedback.  I would
> hope that feedback like "I can't get clients to use it so please don't
> remove it yet" would be taken into account by projects, which seems to be
> exactly what's happening in this case with Cinder v1.  =)
> >
> > I'd hazard a guess that the TC would be interested in hearing about
> whether you think that plan is a reasonable one (and given that TC election
> season is upon us, candidates for the TC probably would too).
> >
> > [1] https://review.openstack.org/#/c/207467/
> > [2]
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
> >
> > At Your Service,
> >
> > Mark T. Voelker
>
> I would agree that the amount of breaks even in our own system has been
> substantial here, and I'm personally feeling we should probably revert
> the devstack change that turns off v1. It looks like it wasn't just one
> client that got caught in this, but most of them.
>
> This feels like this transition has been too much stick, and not enough
> carrot. IIRC openstack client wouldn't work with cinder v2 until a
> couple of months ago, which made me do some weird things in grenade
> when building volumes. [1]
>
>         -Sean
>
> 1.
>
> https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L40-L41
>
> --
> Sean Dague
> http://dague.net
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/eb7ecc89/attachment.html>

From zigo at debian.org  Wed Sep 30 11:58:10 2015
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 30 Sep 2015 13:58:10 +0200
Subject: [openstack-dev] Announcing Liberty RC1 availability in Debian
Message-ID: <560BCE52.5080100@debian.org>

Hi everyone!

1/ Announcement
===============

I'm pleased to announce, in advance of the final Liberty release, that
Liberty RC1 has not only been fully uploaded to Debian Experimental, but
also that the Tempest CI (which I maintain; it is a package-only CI, with
no deployment tooling involved) shows that it is fully installable and
working. There are still some failures, but I am guessing these are due
not to problems in the packaging but to some Tempest setup problems,
which I intend to address.

If you want to try out Liberty RC1 in Debian, you can either try it
using Debian Sid + Experimental (recommended), or use the Jessie
backport repository built out of Mirantis Jenkins server. Repositories
are listed at this address:

http://liberty-jessie.pkgs.mirantis.com/

2/ Quick note about Liberty Debian repositories
===============================================

During Debconf 15, someone reported that the fact the Jessie backports
are on a Mirantis address is disturbing.

Note that, while the above really is a non-Debian (ie: non-official,
private) repository, it only contains unmodified source packages, simply
rebuilt for Debian Stable. Please don't be put off by the
"mirantis.com" domain name; I could just as well have set up a debian.net
address (which has been on my todo list for a long time). It still
contains only Debian packages: everything there is straight out of Debian
repositories, nothing added, modified or removed.

I believe that the Liberty release in Sid is currently working very well,
but I haven't tested it as much as the Jessie backport.

Starting with the Kilo release, I have been uploading packages to the
official Debian backports repositories. I will do so as well for the
Liberty release, after the final release is out, and after Liberty is
fully migrated to Debian Testing (the rule for stable-backports is that
packages *must* be available in Testing *first*, in order to provide an
upgrade path). So I do expect Liberty to be available from
jessie-backports maybe a few weeks *after* the final Liberty release.
Before that, use the unofficial Debian repositories.

3/ Horizon dependencies still in NEW queue
==========================================

It is also worth noting that Horizon hasn't been fully FTP-master
approved, and that some packages still remain in the NEW queue.
This isn't the first release with such an issue with Horizon. I hope
that 1/ the FTP masters will approve the remaining packages soon, and 2/ for
Mitaka, the Horizon team will take care to freeze external dependencies
(ie: new Javascript objects) earlier in the development cycle. I am
hereby proposing that the Horizon 3rd-party dependency freeze happen
no later than Mitaka b2, so that we don't experience this again for the
next release. Note that this problem affects both Debian and Ubuntu, as
Ubuntu syncs dependencies from Debian.

5/ New packages in this release
===============================

You may have noticed that the below packages are now part of Debian:
- Manila
- Aodh
- ironic-inspector
- Zaqar (this one is still in the FTP masters NEW queue...)

I have also packaged a few more, but there are still blockers:
- Congress (antlr version is too low in Debian)
- Mistral

6/ Roadmap for Liberty final release
====================================

Next on my roadmap for the final Liberty release is finishing the
upgrade of the remaining components to the latest versions tested in the
gate. This has been done for most OpenStack deliverables, but about a
dozen are still at the lowest version supported by our global-requirements.

There's also some remaining work:
- more Neutron drivers
- Gnocchi
- Address the remaining Tempest failures, and widen the scope of tests
(add Sahara, Heat, Swift and others to the tested projects using the
Debian package CI)

I of course welcome everyone to test Liberty RC1 before the final
release, and report bugs on the Debian bug tracker if needed.

Also note that the Debian packaging CI is fully free software, and part
of Debian as well (you can look into the openstack-meta-packages package
in git.debian.org, and in openstack-pkg-tools). Contributions in this
field are also welcome.

7/ Thanks to Canonical & every OpenStack upstream projects
==========================================================

I'd like to point out that, even though I did the majority of the work
myself for this release, there was much more collaboration with
Canonical on the dependency chain. Indeed, for this Liberty release,
Canonical decided to upload every dependency to Debian first, and then
only sync from it. So a big thanks to the Canonical server team for
doing this community work together with me. I just hope we can push this
even further, especially trying to have consistency in the Nova and Neutron
binary package names, as it is an issue for the Puppet folks.

Last, I would like to hereby thank everyone who helped me fix issues
in these packages. Thank you for being patient enough to explain,
and for your understanding when I wrongly thought an issue was upstream
when it really was in the packages. Thank you, IRC people, you
are all awesome!

8/ Note about Mirantis OpenStack 7.0 and 8.0
============================================

When reading these words, MOS and Fuel 7.0 should already be out. For
this release, lots of package sources have been taken directly from
Debian. It is on our roadmap to push this effort even further for MOS
8.0 (working over Trusty). I am pleased that this is happening, so that the
community version of OpenStack (ie: the Debian OpenStack) will have the
benefits of more QA. I also hope that the project of doing packaging on
upstream OpenStack Gerrit with gating will happen at least for a few
packages during the Mitaka cycle, and that Debian will become the common
community platform for OpenStack as I always wanted it to be.

Happy OpenStack Liberty hacking,

Thomas Goirand (zigo)


From sgordon at redhat.com  Wed Sep 30 12:02:28 2015
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 30 Sep 2015 08:02:28 -0400 (EDT)
Subject: [openstack-dev] [Openstack-operators] [nfv][telcowg] Telco
 Working Group meeting	schedule
In-Reply-To: <142156967.59496233.1443530839928.JavaMail.zimbra@redhat.com>
References: <142156967.59496233.1443530839928.JavaMail.zimbra@redhat.com>
Message-ID: <117685662.60352062.1443614548679.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Steve Gordon" <sgordon at redhat.com>
> To: openstack-operators at lists.openstack.org, "OpenStack Development Mailing List (not for usage questions)"
> 
> Hi all,
> 
> As discussed in last week's meeting [1] we have been seeing increasingly
> limited engagement in the 1900 UTC meeting slot. For this reason starting
> from next week's meeting (October 6th) it is proposed that we consolidate on
> the 1400 UTC slot which is generally better attended and stop alternating
> the time each week.
> 
> Unrelated to the above, I am traveling this Wednesday and will not be able to
> facilitate the meeting on September 30th @ 1900 UTC. Is anyone else able to
> help out by facilitating the meeting at this time? I can help out with
> agenda etc.
> 
> Thanks in advance,
> 
> Steve
> 
> 
> [1]
> http://eavesdrop.openstack.org/meetings/telcowg/2015/telcowg.2015-09-23-14.00.html

Hi all,

I was not able to find a backup facilitator, so today's meeting is cancelled; I have updated the schedule on the wiki.

Thanks,

Steve


From pawel.koniszewski at intel.com  Wed Sep 30 12:03:16 2015
From: pawel.koniszewski at intel.com (Koniszewski, Pawel)
Date: Wed, 30 Sep 2015 12:03:16 +0000
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <39E5672E03A1CB4B93936D1C4AA5E15D1DBF1827@G1W3782.americas.hpqcorp.net>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <5F389C517756284F80239F55BCA5DDDD74CAC368@G4W3298.americas.hpqcorp.net>
 <39E5672E03A1CB4B93936D1C4AA5E15D1DBF1827@G1W3782.americas.hpqcorp.net>
Message-ID: <191B00529A37FA4F9B1CAE61859E4D6E5AB4C636@IRSMSX101.ger.corp.intel.com>

> -----Original Message-----
> From: Murray, Paul (HP Cloud) [mailto:pmurray at hpe.com]
> Sent: Wednesday, September 30, 2015 1:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] live migration in Mitaka
>
>
> > Please respond to this post if you have an interest in this and what you
> would like to see done.
> > Include anything you are already getting on with so we get a clear 
> > picture.
>
> Thank you to those who replied to this thread. I have used the contents to
> start an etherpad page here:
>
> https://etherpad.openstack.org/p/mitaka-live-migration
>
> I have taken the liberty of listing those that responded to the thread and
> the authors of mentioned patches as interested people.
>
> From the responses and looking at the specs up for review it looks like
> there are about five areas that could be addressed in Mitaka and several
> others that could come later. The first five are:
>
> - migrating instances with a mix of local disks and cinder volumes

Preliminary patch is up for review [1]; we need to switch it to libvirt's v3
migrate API.

> - pause instance during migration
> - cancel migration
> - migrate suspended instances

I'm not sure I understand this correctly. When a user calls 'nova suspend' I
thought it actually "hibernates" the VM and saves its memory state to disk
[2][3]. In such a case there is nothing to "live" migrate - shouldn't
cold-migration/resize solve this problem?

> - improve CI coverage
>
> Not all of these are covered by specs yet and all the existing specs need
> reviews. Please look at the etherpad and see if there is anything you
> think is missing.

Paul, thanks for taking care of this. I've added missing spec to force live 
migration to finish [4].

Hope we manage to discuss all these items in Tokyo.

[1] https://review.openstack.org/#/c/227278/
[2] https://github.com/openstack/nova/blob/e31d1e11bd42bcfbd7b2c3d732d184a367b75d6f/nova/virt/libvirt/driver.py#L2311
[3] https://github.com/openstack/nova/blob/e31d1e11bd42bcfbd7b2c3d732d184a367b75d6f/nova/virt/libvirt/guest.py#L308
[4] https://review.openstack.org/#/c/229040/

Kind Regards,
Pawel Koniszewski
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6499 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/9e0127ec/attachment.bin>

From sean at dague.net  Wed Sep 30 12:10:43 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 30 Sep 2015 08:10:43 -0400
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <CAGocpaF1uqUTBh-1UsrChC4An-=BJRQqe+Ot8ODUYw=5vRpoyA@mail.gmail.com>
References: <CAGocpaHXaAP9XkvpWy1RAwWt_v+H-x8xyY6EJh1Q+xG31GbJjw@mail.gmail.com>
 <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <560ABCEE.7010408@internap.com>
 <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>
 <491F2677-6DFD-4FF8-BCA9-1169FF1841B2@vmware.com>
 <560BBA50.3060401@dague.net>
 <CAGocpaF1uqUTBh-1UsrChC4An-=BJRQqe+Ot8ODUYw=5vRpoyA@mail.gmail.com>
Message-ID: <560BD143.8030902@dague.net>

On 09/30/2015 07:29 AM, Ivan Kolodyazhny wrote:
> Sean,
> 
> openstack client supports Cinder API v2 since Liberty. What is the right
> way to fix grenade?

Here's the thing.

With this change: Rally doesn't work, novaclient doesn't work, grenade
doesn't work. Apparently nearly all the libraries in the real world
don't work.

I feel like that list of incompatibilities should have been collected
before this change. Managing a major API transition is a big deal, and
having a pretty good idea who you are going to break before you do it is
important. Just putting it out there and watching fallout isn't the
right approach.

	-Sean

-- 
Sean Dague
http://dague.net


From berrange at redhat.com  Wed Sep 30 12:23:56 2015
From: berrange at redhat.com (Daniel P. Berrange)
Date: Wed, 30 Sep 2015 13:23:56 +0100
Subject: [openstack-dev] [Openstack-operators] [cinder] [all] The future
 of Cinder API v1
In-Reply-To: <560BD143.8030902@dague.net>
References: <CAHr1CO9dhtr+7dNzV9S9Z7sKpXK_k4c4pZtQUd6jnGMX4t3JYA@mail.gmail.com>
 <CEC673AD-DE02-4B70-8691-68A57A037CCE@gmail.com>
 <DA353AC8-CAE0-448D-9072-63C1246542D5@vmware.com>
 <CAPWkaSUZBAjzk_0je2KEM4DFjyvxRYAaTGvfwbkAKosOKGLUnA@mail.gmail.com>
 <560ABCEE.7010408@internap.com>
 <CAHr1CO-HoRr+zoccf1YvQ5EnAtO_CPfNwWAoe0akynj2Ra7xxA@mail.gmail.com>
 <491F2677-6DFD-4FF8-BCA9-1169FF1841B2@vmware.com>
 <560BBA50.3060401@dague.net>
 <CAGocpaF1uqUTBh-1UsrChC4An-=BJRQqe+Ot8ODUYw=5vRpoyA@mail.gmail.com>
 <560BD143.8030902@dague.net>
Message-ID: <20150930122356.GA3004@redhat.com>

On Wed, Sep 30, 2015 at 08:10:43AM -0400, Sean Dague wrote:
> On 09/30/2015 07:29 AM, Ivan Kolodyazhny wrote:
> > Sean,
> > 
> > openstack client supports Cinder API v2 since Liberty. What is the right
> > way to fix grenade?
> 
> Here's the thing.
> 
> With this change: Rally doesn't work, novaclient doesn't work, grenade
> doesn't work. Apparently nearly all the libraries in the real world
> don't work.
> 
> I feel like that list of incompatibilities should have been collected
> before this change. Managing a major API transition is a big deal, and
> having a pretty good idea who you are going to break before you do it is
> important. Just putting it out there and watching fallout isn't the
> right approach.

I have to agree, breaking APIs is a very big deal for consumers of
those APIs. When you break API you are trading off less work for
maintainers, vs extra pain for users. IMHO intentionally creating
pain for users is something that should be avoided unless there is
no practical alternative. I'd go as far as to say we should never
break API at all, which would mean keeping v1 around forever,
albeit recommending people use v2. If we really do want to kill
v1 and inflict pain on consumers, then we need to ensure that pain
is as close to zero as possible. This means we should not kill v1
until we've verified that all known current client implementations of v1
have a v2 implementation available.
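[Editor's note: the verification asked for above amounts to clients preferring v2 but degrading gracefully to v1. A rough, hypothetical sketch of that fallback follows; the catalog layout is simplified and not the actual Keystone catalog format.]

```python
# Prefer the v2 volume endpoint, fall back to v1 when v2 is absent.
# Service-type names follow the common "volumev2"/"volume" convention,
# but the catalog structure here is a simplified stand-in.
def pick_volume_endpoint(catalog):
    """Return (version, url), preferring v2 over v1."""
    for service_type, version in (("volumev2", "v2"), ("volume", "v1")):
        for entry in catalog:
            if entry["type"] == service_type:
                return version, entry["url"]
    raise LookupError("no volume endpoint in catalog")


catalog_both = [
    {"type": "volumev2", "url": "http://cloud:8776/v2/tenant"},
    {"type": "volume", "url": "http://cloud:8776/v1/tenant"},
]
catalog_v1_only = [{"type": "volume", "url": "http://cloud:8776/v1/tenant"}]

print(pick_volume_endpoint(catalog_both))     # prefers v2
print(pick_volume_endpoint(catalog_v1_only))  # falls back to v1
```

A client written this way keeps working for the whole deprecation window, whichever API versions a given cloud still offers.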

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


From choudharyvikas16 at gmail.com  Wed Sep 30 12:27:48 2015
From: choudharyvikas16 at gmail.com (Vikas Choudhary)
Date: Wed, 30 Sep 2015 17:57:48 +0530
Subject: [openstack-dev] [Kuryr] tox -egenconfig not working
Message-ID: <CABJxuZpSy1KQ5WQ=5zH=hoNGeSW6YNLu3M2tw=SBsorp04px-Q@mail.gmail.com>

Hi,

I tried to generate a sample kuryr config using "tox -e genconfig", but it is
failing:

genconfig create: /home/vikas/kuryr/.tox/genconfig
genconfig installdeps: -r/home/vikas/kuryr/requirements.txt,
-r/home/vikas/kuryr/test-requirements.txt
ERROR: could not install deps [-r/home/vikas/kuryr/requirements.txt,
-r/home/vikas/kuryr/test-requirements.txt]
___________________________________________________________________ summary
___________________________________________________________________
ERROR:   genconfig: could not install deps
[-r/home/vikas/kuryr/requirements.txt,
-r/home/vikas/kuryr/test-requirements.txt]

________________________________________________________

But if I run "pip install -r requirements.txt", it gives no error.

How do I generate the sample config file? Please suggest.
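[Editor's note: a genconfig environment of this kind is usually just a tox stanza invoking oslo-config-generator once the dependencies install; the stanza and paths below are illustrative guesses, not kuryr's actual layout.]

```ini
[testenv:genconfig]
commands = oslo-config-generator --config-file etc/oslo-config-generator/kuryr.conf
```

When tox fails to install deps that plain `pip install -r requirements.txt` handles fine, the usual first step is to recreate the possibly stale virtualenv with `tox -r -e genconfig`, and to rerun with `-v` to see the underlying pip error.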


-Vikas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/2b8bed13/attachment.html>

From rybrown at redhat.com  Wed Sep 30 12:43:22 2015
From: rybrown at redhat.com (Ryan Brown)
Date: Wed, 30 Sep 2015 08:43:22 -0400
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
 worker failures
In-Reply-To: <560B8AFC.2030207@hpe.com>
References: <560B8494.6050805@hpe.com> <560B8AFC.2030207@hpe.com>
Message-ID: <560BD8EA.5050305@redhat.com>

On 09/30/2015 03:10 AM, Anant Patil wrote:
> Hi,
>
> One of remaining items in convergence is detecting and handling engine
> (the engine worker) failures, and here are my thoughts.
>
> Background: Since the work is distributed among heat engines, by some
> means heat needs to detect the failure and pick up the tasks from failed
> engine and re-distribute or run the task again.
>
> One of the simple ways is to poll the DB to detect liveness by
> checking the table populated by heat-manage. Each engine records its
> presence periodically by updating the current timestamp. All the engines
> will have a periodic task for checking the DB for liveness of other
> engines. Each engine will check the timestamps updated by other engines,
> and if it finds one which is older than the periodicity of timestamp
> updates, then it detects a failure. When this happens, the remaining
> engines, as and when they detect the failures, will try to acquire the
> lock for in-progress resources that were handled by the engine which
> died. They will then run the tasks to completion.

Implementing our own locking system, even a "simple" one, sounds like a 
recipe for major bugs to me. I agree with your assessment that tooz is a 
better long-run decision.
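Anant's DB-polling idea can be sketched roughly like this (a toy in-memory
simulation; the table layout, function names and heartbeat period are
illustrative, not Heat's actual service table schema):

```python
import time

HEARTBEAT_PERIOD = 5.0  # seconds between heartbeats (illustrative value)

def heartbeat(table, engine_id, now=None):
    """Each engine periodically records its presence by updating its
    timestamp in the shared table."""
    table[engine_id] = now if now is not None else time.time()

def find_dead_engines(table, now=None, grace=2.0):
    """An engine whose timestamp is older than the heartbeat period
    (times a grace factor) is considered failed; survivors would then
    try to take over its in-progress resources."""
    now = now if now is not None else time.time()
    return [eid for eid, ts in table.items()
            if now - ts > HEARTBEAT_PERIOD * grace]

table = {}
heartbeat(table, 'engine-1', now=100.0)
heartbeat(table, 'engine-2', now=109.0)
# At t=112, engine-1's heartbeat is 12s old (> 5s * 2), so it is "dead".
dead = find_dead_engines(table, now=112.0)
```

The grace factor is exactly the kind of tuning knob (and race window) that
makes a hand-rolled scheme bug-prone compared to a coordination library.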

> Another option is to use a coordination library like the community-owned
> tooz (http://docs.openstack.org/developer/tooz/), which supports
> distributed locking and leader election. We use it to elect a leader
> among heat engines, and that leader will be responsible for running
> periodic tasks for checking the state of each engine and distributing
> the tasks to other engines when one fails. The advantage, IMHO, will be
> simplified heat code. Also, we can move the timeout task to the leader,
> which will run timeouts for all the stacks and send a signal for
> aborting the operation when a timeout happens. The downside: an external
> resource like Zookeeper/memcached etc. is needed for leader election.

That's not necessarily true. For single-node installations (devstack, 
TripleO underclouds, etc) tooz offers file and IPC backends that don't 
need an extra service. Tooz's MySQL/PostgreSQL backends only provide 
distributed locking functionality, so we may need to depend on the 
memcached/redis/zookeeper backends for multi-node installs.

Even if tooz doesn't provide everything we need, I'm sure patches would 
be welcome.

> In the long run, IMO, using a library like tooz will be useful for heat.
> A lot of boiler plate needed for locking and running centralized tasks
> (such as timeout) will not be needed in heat. Given that we are moving
> towards distribution of tasks and horizontal scaling is preferred, it
> will be advantageous to use them.
>
> Please share your thoughts.
>
> - Anant
-- 
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.


From kuvaja at hpe.com  Wed Sep 30 12:46:46 2015
From: kuvaja at hpe.com (Kuvaja, Erno)
Date: Wed, 30 Sep 2015 12:46:46 +0000
Subject: [openstack-dev] [stable][glance] glance-stable-maint group refresher
Message-ID: <EA70533067B8F34F801E964ABCA4C4410F4DF006@G9W0745.americas.hpqcorp.net>

Hi all,

I'd like to propose the following changes to the glance-stable-maint team:

1)      Removing Zhi Yan Liu from the group; unfortunately he has moved on to other ventures and is not actively participating in our operations anymore.

2)      Adding Mike Fedosin to the group; Mike has been reviewing and backporting patches to glance stable branches and is working with the right mindset. I think he would be a great addition to share the workload around.

Best,
Erno (jokke_) Kuvaja

From mspreitz at us.ibm.com  Wed Sep 30 13:00:19 2015
From: mspreitz at us.ibm.com (Mike Spreitzer)
Date: Wed, 30 Sep 2015 09:00:19 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <20150929112029.GX3713@localhost>
References: <20150925154239.GE14957@jimrollenhagen.com>
 <1A3C52DFCD06494D8528644858247BF01B7CAB3F@EX10MBOX06.pnnl.gov>
 <20150925190247.GF4731@yuggoth.org>
 <CAPoubz5aNWKyAzLNR2J-6SjssuGB=aMbziz3RsWJsv53xOtm6w@mail.gmail.com>
 <6D6EDF68-BEDA-43D9-872F-B110C07895CA@gmail.com>
 <20150928094754.GP3713@localhost> <56096D79.6090005@redhat.com>
 <CABARBAa3OiFak-mHRvw-dpMsaVjj5ZjSMmfg3ny5Z4RWYB3kbg@mail.gmail.com>
 <CAO_F6JMAkFVYV8=zjx0hVeqDgq=dLzTH=yW4HPaw=gpkG3TC=A@mail.gmail.com>
 <1443477049-sup-9690@fewbar.com> <20150929112029.GX3713@localhost>
Message-ID: <201509301302.t8UD26ZW010736@d03av01.boulder.ibm.com>

> From: Gorka Eguileor <geguileo at redhat.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev at lists.openstack.org>
> Date: 09/29/2015 07:34 AM
> Subject: Re: [openstack-dev] [all] -1 due to line length violation 
> in commit messages
...
> Since we are not all native speakers expecting everyone to realize that
> difference - which is completely right - may be a little optimistic,
> moreover considering that parts of those guidelines may even be written
> by non natives.
> 
> Let's say I interpret all "should" instances in that guideline as rules
> that don't need to be strictly enforced, I see that the Change-Id
> "should not be changed when rebasing" - this one would certainly be fun
> to watch if we didn't follow it - the blueprint "should give the name of
> a Launchpad blueprint" - I don't know any core that would not -1 a patch
> if he notices the BP reference missing - and machine targeted metadata
> "should all be grouped together at the end of the commit message" - this
> one everyone follows instinctively, so no problem.
> 
> And if we look at the i18n guidelines, almost everything is using
> should, but on reviews these are treated as strict *must* because of the
> implications.
> 
> Anyway, it's a matter of opinion and afaik in Cinder we don't even have
> a real problem with downvoting for the commit message length, I don't
> see more than 1 every couple of months or so.

Other communities have solved this by explicit reference to a standard 
defining terms like "must" and "should".

Regards,
Mike



From agordeev at mirantis.com  Wed Sep 30 13:05:15 2015
From: agordeev at mirantis.com (Alexander Gordeev)
Date: Wed, 30 Sep 2015 16:05:15 +0300
Subject: [openstack-dev] [fuel][shotgun] do we still use subs?
Message-ID: <CAFneLEd8Qx8M1XswDtijW9f+=YoRUjh_gJDbZkrG9v5x2d4E9w@mail.gmail.com>

Hello fuelers,

My question is related to shotgun tool[1] which will be invoked in
order to generate the diagnostic snapshot.

It can substitute particular sensitive data such as
credentials/hostnames/IPs/etc. with meaningless values. This is done by the
Subs [2] object driver.

However, it seems that subs is not used anymore. Well, at least it was
turned off by default for fuel 5.1 [3] and newer. I wasn't able to find
any traces of its usage in the code in the fuel-web repo.

It seems that this piece of code for subs could be ditched. Even more, it
should be ditched, as it looks like a fifth wheel from the project
architecture point of view: shotgun is about getting the actual logs, not
about corrupting them unpredictably with sed scripts.

Proper log sanitization is another story entirely. I doubt it could be
fitted into shotgun while remaining effective and/or well designed at the
same time.

Perhaps I missed something and subs is still actively used. So, folks,
don't hesitate to respond if you know something that helps shed light on
subs.

Let's discuss anything related to subs, or even vote on its removal.
Maybe we need to wait for another 2 years to pass until we can finally
get rid of it.

Let me know your thoughts.

Thanks!


[1] https://github.com/stackforge/fuel-web/tree/master/shotgun
[2] https://github.com/stackforge/fuel-web/blob/master/shotgun/shotgun/driver.py#L165-L233
[3] https://github.com/stackforge/fuel-web/blob/stable/5.1/nailgun/nailgun/settings.yaml


From rybrown at redhat.com  Wed Sep 30 13:15:43 2015
From: rybrown at redhat.com (Ryan Brown)
Date: Wed, 30 Sep 2015 09:15:43 -0400
Subject: [openstack-dev] [TripleO] Defining a public API for
 tripleo-common
In-Reply-To: <CAPMB-2SnkubiNstjy3Uotrb_Gq1mQaJ9LOQutkEzeTOLJxKy7Q@mail.gmail.com>
References: <CAPMB-2SnkubiNstjy3Uotrb_Gq1mQaJ9LOQutkEzeTOLJxKy7Q@mail.gmail.com>
Message-ID: <560BE07F.5060902@redhat.com>

On 09/30/2015 04:08 AM, Dougal Matthews wrote:
> Hi,
>
> What is the standard practice for defining public APIs for OpenStack
> libraries? As I am working on refactoring and updating tripleo-common I have
> to grep through the projects I know that use it to make sure I don't break
> anything.

The API working group exists, but they focus on REST APIs so they don't 
have any guidelines on library APIs.

> Personally I would choose to have a policy of "If it is documented, it is
> public" because that is very clear and it still allows us to do internal
> refactoring.
>
> Otherwise we could use __all__ to define what is public in each file, or
> assume everything that doesn't start with an underscore is public.

I think assuming that anything without a leading underscore is public 
might be too broad. For example, that would make all of libutils 
ostensibly a "stable" interface. I don't think that's what we want, 
especially this early in the lifecycle.

In heatclient, we present "heatclient.client" and "heatclient.exc" 
modules as the main public API, and put versioned implementations in 
modules.

heatclient
|- client
|- exc
\- v1
   |- client
   |- resources
   |- events
   |- services

I think versioning the public API is the way to go, since it will make 
it easier to maintain backwards compatibility while new needs/uses evolve.

-- 
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.


From dtantsur at redhat.com  Wed Sep 30 13:25:28 2015
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 30 Sep 2015 15:25:28 +0200
Subject: [openstack-dev] [TripleO] Defining a public API for
 tripleo-common
In-Reply-To: <560BE07F.5060902@redhat.com>
References: <CAPMB-2SnkubiNstjy3Uotrb_Gq1mQaJ9LOQutkEzeTOLJxKy7Q@mail.gmail.com>
 <560BE07F.5060902@redhat.com>
Message-ID: <560BE2C8.3010609@redhat.com>

On 09/30/2015 03:15 PM, Ryan Brown wrote:
> On 09/30/2015 04:08 AM, Dougal Matthews wrote:
>> Hi,
>>
>> What is the standard practice for defining public APIs for OpenStack
>> libraries? As I am working on refactoring and updating tripleo-common
>> I have
>> to grep through the projects I know that use it to make sure I don't
>> break
>> anything.
>
> The API working group exists, but they focus on REST APIs so they don't
> have any guidelines on library APIs.
>
>> Personally I would choose to have a policy of "If it is documented, it is
>> public" because that is very clear and it still allows us to do internal
>> refactoring.
>>
>> Otherwise we could use __all__ to define what is public in each file, or
>> assume everything that doesn't start with an underscore is public.
>
> I think assuming that anything without a leading underscore is public
> might be too broad. For example, that would make all of libutils
> ostensibly a "stable" interface. I don't think that's what we want,
> especially this early in the lifecycle.
>
> In heatclient, we present "heatclient.client" and "heatclient.exc"
> modules as the main public API, and put versioned implementations in
> modules.

I'd recommend avoiding things like 'heatclient.client', as in a big 
application it would lead to imports like

  from heatclient import client as heatclient

:)

What I did for ironic-inspector-client was to make a couple of the most 
important things available directly on the ironic_inspector_client 
top-level module, with everything else under ironic_inspector_client.v1 
(modulo some legacy).
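The convention being discussed can be illustrated with a small,
self-contained sketch (the module and class names here are invented for
illustration; this is not tripleo-common or heatclient code):

```python
import types

# Hypothetical v1 module: __all__ names the public surface, while
# underscore-prefixed helpers stay internal and free to change.
v1_src = '''
__all__ = ["Client"]

class Client:
    """Public, versioned entry point."""
    def show(self, uuid):
        return {"uuid": uuid}

def _request(method, url):
    """Internal helper; not part of the public API."""
    return (method, url)
'''
v1 = types.ModuleType("examplelib.v1")
exec(v1_src, v1.__dict__)

# `from examplelib.v1 import *` only picks up names listed in __all__:
exported = {name: getattr(v1, name) for name in v1.__all__}

# Top-level re-export (the ironic-inspector-client pattern): the package
# root exposes the most important names; everything else lives under v1.
examplelib = types.ModuleType("examplelib")
examplelib.Client = v1.Client
examplelib.v1 = v1
```

The versioned-module layout then lets v2 change internals freely while the
top-level names stay stable.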

>
> heatclient
> |- client
> |- exc
> \- v1
>    |- client
>    |- resources
>    |- events
>    |- services
>
> I think versioning the public API is the way to go, since it will make
> it easier to maintain backwards compatibility while new needs/uses evolve.

++

>



From nik.komawar at gmail.com  Wed Sep 30 13:28:41 2015
From: nik.komawar at gmail.com (Nikhil Komawar)
Date: Wed, 30 Sep 2015 09:28:41 -0400
Subject: [openstack-dev] [stable][glance] glance-stable-maint group
 refresher
In-Reply-To: <EA70533067B8F34F801E964ABCA4C4410F4DF006@G9W0745.americas.hpqcorp.net>
References: <EA70533067B8F34F801E964ABCA4C4410F4DF006@G9W0745.americas.hpqcorp.net>
Message-ID: <560BE389.6040505@gmail.com>



On 9/30/15 8:46 AM, Kuvaja, Erno wrote:
>
> Hi all,
>
>  
>
> I'd like to propose the following changes to the glance-stable-maint team:
>
> 1)      Removing Zhi Yan Liu from the group; unfortunately he has
> moved on to other ventures and is not actively participating in our
> operations anymore.
>
+1 (always welcome back)
>
> 2)      Adding Mike Fedosin to the group; Mike has been reviewing and
> backporting patches to glance stable branches and is working with the
> right mindset. I think he would be a great addition to share the
> workload around.
>
+1 (definitely)
>
>  
>
> Best,
>
> Erno (jokke_) Kuvaja
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


From kkushaev at mirantis.com  Wed Sep 30 13:31:38 2015
From: kkushaev at mirantis.com (Kairat Kushaev)
Date: Wed, 30 Sep 2015 16:31:38 +0300
Subject: [openstack-dev]  [glance] Models and validation for v2
Message-ID: <CAAetzei6b0emZ9JEy+W0jYA_VujXeMK+WcaFKSL-=pmnKmDgKQ@mail.gmail.com>

Hi All,
In short, I am wondering why we validate responses from the server when
doing image-show, image-list, member-list, metadef-namespace-show and
other read-only requests.

AFAIK, we build warlock models when receiving responses from the server
(see [0]). Each model requires the schema to be fetched from the glance
server. It means that each time we do image-show, image-list,
image-create, member-list and others, we request the schema from the
server. AFAIU, we use models to dynamically validate that an object is in
accordance with the schema, but is that the case when glanceclient
receives responses from the server?

Could somebody please explain the reasoning behind this implementation?
Have I missed some use cases where validation is required for server
responses?

I also noticed that we have already faced some issues with this
implementation that led to "mocking" the validation ([1][2]).


[0]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L185
[1]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L47
[2]: https://bugs.launchpad.net/python-glanceclient/+bug/1501046
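For illustration, the pattern being questioned looks roughly like this (a
toy sketch, not actual glanceclient/warlock code; the paths and the
stand-in validation step are simplified):

```python
class FakeSession:
    """Counts requests so the extra schema round-trip is visible."""
    def __init__(self):
        self.requests = []

    def get(self, path):
        self.requests.append(path)
        if path.startswith("/v2/schemas/"):
            return {"name": "image",
                    "properties": {"id": {"type": "string"}}}
        return {"id": "abc123"}

def image_show(session, image_id):
    # The schema is fetched only so the response can be validated...
    schema = session.get("/v2/schemas/image")
    body = session.get("/v2/images/%s" % image_id)
    # ...warlock would validate body against schema here; a stand-in check:
    assert set(body) <= set(schema["properties"])
    return body

s = FakeSession()
image = image_show(s, "abc123")
```

So every read-only call pays an extra request just to validate data the
server itself produced, which is the cost the question is about.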

Best regards,
Kairat Kushaev

From anant.patil at hpe.com  Wed Sep 30 14:03:46 2015
From: anant.patil at hpe.com (Anant Patil)
Date: Wed, 30 Sep 2015 19:33:46 +0530
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
 worker failures
In-Reply-To: <560BD8EA.5050305@redhat.com>
References: <560B8494.6050805@hpe.com> <560B8AFC.2030207@hpe.com>
 <560BD8EA.5050305@redhat.com>
Message-ID: <560BEBC2.4060600@hpe.com>

On 30-Sep-15 18:13, Ryan Brown wrote:
> On 09/30/2015 03:10 AM, Anant Patil wrote:
>> Hi,
>>
>> One of remaining items in convergence is detecting and handling engine
>> (the engine worker) failures, and here are my thoughts.
>>
>> Background: Since the work is distributed among heat engines, by some
>> means heat needs to detect the failure and pick up the tasks from failed
>> engine and re-distribute or run the task again.
>>
>> One of the simple ways is to poll the DB to detect liveness by
>> checking the table populated by heat-manage. Each engine records its
>> presence periodically by updating the current timestamp. All the engines
>> will have a periodic task for checking the DB for liveness of other
>> engines. Each engine will check the timestamps updated by other engines,
>> and if it finds one which is older than the periodicity of timestamp
>> updates, then it detects a failure. When this happens, the remaining
>> engines, as and when they detect the failures, will try to acquire the
>> lock for in-progress resources that were handled by the engine which
>> died. They will then run the tasks to completion.
> 
> Implementing our own locking system, even a "simple" one, sounds like a 
> recipe for major bugs to me. I agree with your assessment that tooz is a 
> better long-run decision.
> 
>> Another option is to use a coordination library like the community-owned
>> tooz (http://docs.openstack.org/developer/tooz/), which supports
>> distributed locking and leader election. We use it to elect a leader
>> among heat engines, and that leader will be responsible for running
>> periodic tasks for checking the state of each engine and distributing
>> the tasks to other engines when one fails. The advantage, IMHO, will be
>> simplified heat code. Also, we can move the timeout task to the leader,
>> which will run timeouts for all the stacks and send a signal for
>> aborting the operation when a timeout happens. The downside: an external
>> resource like Zookeeper/memcached etc. is needed for leader election.
> 
> That's not necessarily true. For single-node installations (devstack, 
> TripleO underclouds, etc) tooz offers file and IPC backends that don't 
> need an extra service. Tooz's MySQL/PostgreSQL backends only provide 
> distributed locking functionality, so we may need to depend on the 
> memcached/redis/zookeeper backends for multi-node installs.
> 

Definitely, for single-node installations one can rely on IPC as the
backend. As a convention, a default provider of IPC for single-node
installations would be helpful for running heat in devstack or a
development environment. From a holistic perspective, I am referring to an
external resource, as most deployments are multi-node with active-active
HA.

> Even if tooz doesn't provide everything we need, I'm sure patches
> would be welcome.
>
I am sure when we dive in, we will find use cases for tooz as well.

>> In the long run, IMO, using a library like tooz will be useful for heat.
>> A lot of boiler plate needed for locking and running centralized tasks
>> (such as timeout) will not be needed in heat. Given that we are moving
>> towards distribution of tasks and horizontal scaling is preferred, it
>> will be advantageous to use them.
>>
>> Please share your thoughts.
>>
>> - Anant



From anant.patil at hpe.com  Wed Sep 30 14:13:48 2015
From: anant.patil at hpe.com (Anant Patil)
Date: Wed, 30 Sep 2015 19:43:48 +0530
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
 worker failures
In-Reply-To: <1443605008-sup-2932@fewbar.com>
References: <560B8494.6050805@hpe.com> <560B8AFC.2030207@hpe.com>
 <1443605008-sup-2932@fewbar.com>
Message-ID: <560BEE1C.8090905@hpe.com>

On 30-Sep-15 14:59, Clint Byrum wrote:
> Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
>> Hi,
>>
>> One of remaining items in convergence is detecting and handling engine
>> (the engine worker) failures, and here are my thoughts.
>>
>> Background: Since the work is distributed among heat engines, by some
>> means heat needs to detect the failure and pick up the tasks from failed
>> engine and re-distribute or run the task again.
>>
>> One of the simple ways is to poll the DB to detect liveness by
>> checking the table populated by heat-manage. Each engine records its
>> presence periodically by updating the current timestamp. All the engines
>> will have a periodic task for checking the DB for liveness of other
>> engines. Each engine will check the timestamps updated by other engines,
>> and if it finds one which is older than the periodicity of timestamp
>> updates, then it detects a failure. When this happens, the remaining
>> engines, as and when they detect the failures, will try to acquire the
>> lock for in-progress resources that were handled by the engine which
>> died. They will then run the tasks to completion.
>>
>> Another option is to use a coordination library like the community-owned
>> tooz (http://docs.openstack.org/developer/tooz/), which supports
>> distributed locking and leader election. We use it to elect a leader
>> among heat engines, and that leader will be responsible for running
>> periodic tasks for checking the state of each engine and distributing
>> the tasks to other engines when one fails. The advantage, IMHO, will be
>> simplified heat code. Also, we can move the timeout task to the leader,
>> which will run timeouts for all the stacks and send a signal for
>> aborting the operation when a timeout happens. The downside: an external
>> resource like Zookeeper/memcached etc. is needed for leader election.
>>
> 
> It's becoming increasingly clear that OpenStack services in general need
> to look at distributed locking primitives. There's a whole spec for that
> right now:
> 
> https://review.openstack.org/#/c/209661/
> 
> I suggest joining that conversation, and embracing a DLM as the way to
> do this.
> 

Thanks Clint for pointing to this.

> Also, the leader election should be per-stack, and the leader selection
> should be heavily weighted based on a consistent hash algorithm so that
> you get even distribution of stacks to workers. You can look at how
> Ironic breaks up all of the nodes that way. They're using a similar lock
> to the one Heat uses now, so the two projects can collaborate nicely on
> a real solution.
>

From each stack, all the resources are distributed among heat engines,
so the work is evenly distributed at the resource level. I need to
investigate more on this. Thoughts are welcome.
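The consistent-hash idea Clint mentions can be sketched in a few lines (a
minimal ring, far simpler than Ironic's real hash ring; names and the
replica count are illustrative):

```python
import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent hash ring. Each engine gets many points on the
    ring (replicas) so stacks spread evenly; when an engine dies, only
    the stacks it owned are remapped to the survivors."""
    def __init__(self, engines, replicas=64):
        self._ring = sorted(
            (_hash("%s-%d" % (e, i)), e)
            for e in engines for i in range(replicas))
        self._keys = [h for h, _ in self._ring]

    def get_engine(self, stack_id):
        # The first ring point clockwise from the stack's hash owns it.
        idx = bisect.bisect(self._keys, _hash(stack_id)) % len(self._ring)
        return self._ring[idx][1]

full = HashRing(["engine-1", "engine-2", "engine-3"])
after_failure = HashRing(["engine-1", "engine-2"])
```

The key property is that stacks mapped to surviving engines keep the same
owner when the ring shrinks, which is why weighting leader selection by a
consistent hash avoids mass re-distribution on a single failure.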
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



From michal.dulko at intel.com  Wed Sep 30 14:25:53 2015
From: michal.dulko at intel.com (Dulko, Michal)
Date: Wed, 30 Sep 2015 14:25:53 +0000
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
 worker failures
In-Reply-To: <1443605008-sup-2932@fewbar.com>
References: <560B8494.6050805@hpe.com> <560B8AFC.2030207@hpe.com>
 <1443605008-sup-2932@fewbar.com>
Message-ID: <1443623154.9696.10.camel@mdulko-MOBL2>

On Wed, 2015-09-30 at 02:29 -0700, Clint Byrum wrote:
> Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
> > Hi,
> > 
> > One of remaining items in convergence is detecting and handling engine
> > (the engine worker) failures, and here are my thoughts.
> > 
> > Background: Since the work is distributed among heat engines, by some
> > means heat needs to detect the failure and pick up the tasks from failed
> > engine and re-distribute or run the task again.
> > 
> > One of the simple ways is to poll the DB to detect liveness by
> > checking the table populated by heat-manage. Each engine records its
> > presence periodically by updating the current timestamp. All the engines
> > will have a periodic task for checking the DB for liveness of other
> > engines. Each engine will check the timestamps updated by other engines,
> > and if it finds one which is older than the periodicity of timestamp
> > updates, then it detects a failure. When this happens, the remaining
> > engines, as and when they detect the failures, will try to acquire the
> > lock for in-progress resources that were handled by the engine which
> > died. They will then run the tasks to completion.
> > 
> > Another option is to use a coordination library like the community-owned
> > tooz (http://docs.openstack.org/developer/tooz/), which supports
> > distributed locking and leader election. We use it to elect a leader
> > among heat engines, and that leader will be responsible for running
> > periodic tasks for checking the state of each engine and distributing
> > the tasks to other engines when one fails. The advantage, IMHO, will be
> > simplified heat code. Also, we can move the timeout task to the leader,
> > which will run timeouts for all the stacks and send a signal for
> > aborting the operation when a timeout happens. The downside: an external
> > resource like Zookeeper/memcached etc. is needed for leader election.
> > 
> 
> It's becoming increasingly clear that OpenStack services in general need
> to look at distributed locking primitives. There's a whole spec for that
> right now:
> 
> https://review.openstack.org/#/c/209661/
> 
> I suggest joining that conversation, and embracing a DLM as the way to
> do this.
> 
> Also, the leader election should be per-stack, and the leader selection
> should be heavily weighted based on a consistent hash algorithm so that
> you get even distribution of stacks to workers. You can look at how
> Ironic breaks up all of the nodes that way. They're using a similar lock
> to the one Heat uses now, so the two projects can collaborate nicely on
> a real solution.

It is worth mentioning that there is also an idea of using both tooz and
the hash ring approach [1].

There was an enormously big discussion on this list when Cinder faced a
similar problem [2]. It finally became a discussion on whether we need a
common solution for DLM in OpenStack [3]. In the end, Cinder is currently
trying to achieve A/A capabilities by using CAS DB operations. Detection
of failed services is still being discussed, but the most mature solution
to this problem was described in [4]. It is based on database checks.

Given that many projects are facing similar problems (well, it's not a
surprise that distributed systems face the general problems of
distributed systems), we should certainly discuss how to approach that
class of issues. That's why a cross-project Design Summit session on the
topic was proposed [5] (this one is by harlowja, but I know that Mike
Perez also wanted to propose such a session).

[1] https://review.openstack.org/#/c/195366/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/070683.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071262.html
[4] http://gorka.eguileor.com/simpler-road-to-cinder-active-active/
[5] http://odsreg.openstack.org/cfp/details/8

From hongbin.lu at huawei.com  Wed Sep 30 14:44:45 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Wed, 30 Sep 2015 14:44:45 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>

+1 from me as well.

I think what makes Magnum appealing is the promise to provide container-as-a-service. I see COE deployment as a helper to achieve that promise, not as the main goal.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau.513 at gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 to Egor. I think that the final goal of Magnum is container-as-a-service, not COE-deployment-as-a-service. ;-)

Especially since we are also working on the Magnum UI: the Magnum UI should export interfaces that enable end users to create container applications, not only COE deployments.
I hope that Magnum can be treated as another "Nova" which focuses on container service. I know it is difficult to unify all of the concepts in the different COEs (k8s has pod, service, rc; swarm only has container; nova only has VM and PM with different hypervisors), but this deserves some deep dive and thinking to see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com<mailto:EGuz at walmartlabs.com>> wrote:
definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum should focus only on deployment (I feel we will become another Puppet/Chef/Ansible module if we do) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem (Neutron/Cinder/Barbican/etc.), even if we need to step into the Kub/Mesos/Swarm communities for that.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <danehans at cisco.com> wrote:


+1

From: Tom Cammann <tom.cammann at hpe.com>
Reply-To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

My thinking over the last couple of months has been to completely deprecate the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very difficult, and probably a wasted effort, to try to consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I believe docker compose is just command line tool which doesn't have any api or scheduling feat

From: Egor Guz <EGuz at walmartlabs.com>
To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
________________________________



Also, I believe docker compose is just a command line tool which doesn't have any API or scheduling features.
But during the last DockerCon hackathon, PayPal folks implemented a docker compose executor for Mesos (https://github.com/mohitsoni/compose-executor),
which can give you a pod-like experience.

--
Egor

From: Adrian Otto <adrian.otto at rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise on features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we had started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes COE, but exposes container in the swarm COE. As I understand it, swarm is only a container scheduler, which is like nova in OpenStack. Docker compose is an orchestration program, which is like heat in OpenStack. k8s is the combination of scheduler and orchestration. So I think it is better to expose the compose APIs to users, which are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,
Jay Lau (Guangya Liu)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/be545e0c/attachment.html>

From zbitter at redhat.com  Wed Sep 30 14:52:42 2015
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 30 Sep 2015 10:52:42 -0400
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <72742C5E-1136-40C2-85D4-1AD6522C6ADA@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <72742C5E-1136-40C2-85D4-1AD6522C6ADA@redhat.com>
Message-ID: <560BF73A.50202@redhat.com>

On 29/09/15 12:05, Ihar Hrachyshka wrote:
>> On 25 Sep 2015, at 16:44, Ihar Hrachyshka <ihrachys at redhat.com> wrote:
>>
>> Hi all,
>>
>> releases are approaching, so it's the right time to start some bike shedding on the mailing list.
>>
>> Recently it was pointed out to me several times [1][2] that I violate our commit message requirement [3] for message lines, which says: "Subsequent lines should be wrapped at 72 characters."
>>
>> I agree that very long commit message lines can be bad, e.g. if they are 200+ chars. But <= 79 chars?.. I don't think so. Especially since we have a 79-char limit for the code.
>>
>> We had a check for the line lengths in openstack-dev/hacking before but it was killed [4] as per openstack-dev@ discussion [5].
>>
>> I believe commit message lines of <=80 chars are absolutely fine and should not get the -1 treatment. I propose raising the limit in the guideline on the wiki accordingly.
>>
>> Comments?
>>
>> [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
>> [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
>> [3]: https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
>> [4]: https://review.openstack.org/#/c/142585/
>> [5]: http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
>>
>> Ihar
>
> Thanks everyone for replies.
>
> Now I realize WHY we do it with 72 chars and not 80 chars (git log output). :) I updated the wiki page with how to configure Vim to enforce the rule. I also removed the mention of gating on commit messages, because those checks were removed recently.
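(A quick aside: the 72-character body wrap is trivial to check mechanically. Below is a minimal sketch of such a check in Python; `overlong_lines` is a hypothetical helper written for illustration, not the removed hacking rule [4].)

```python
def overlong_lines(commit_msg, limit=72):
    """Return (line_number, length) pairs for body lines longer than limit.

    Only the body is checked; the subject (line 1) has its own
    convention (roughly 50 chars) and is skipped here.
    """
    problems = []
    for num, line in enumerate(commit_msg.splitlines(), start=1):
        if num == 1:
            continue  # subject line, different convention
        if len(line) > limit:
            problems.append((num, len(line)))
    return problems


msg = "Fix the widget\n\n" + "x" * 80 + "\nA short body line.\n"
print(overlong_lines(msg))  # [(3, 80)]
```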

Thanks Ihar! FWIW, vim has had built-in support for setting that width 
since at least 7.2, and I suspect long before (for me it's in 
/usr/share/vim/vim74/ftplugin/gitcommit.vim). AFAIK the only thing you 
need in your .vimrc to take advantage is:

if has("autocmd")
   filetype plugin indent on
endif " has("autocmd")

This is included in the example vimrc file that ships with vim, so I 
think better advice for 99% of people would be to just install the 
example vimrc file if they don't already have a ~/.vimrc. (There are 
*lots* of other benefits too.) I've updated the wiki to reflect that, I 
hope you don't mind :)

It'd be great if anyone who didn't have it set up already could try this 
though, since it's been many, many years since it has not worked 
automagically for me ;)

cheers,
Zane.


From rlrossit at linux.vnet.ibm.com  Wed Sep 30 14:57:49 2015
From: rlrossit at linux.vnet.ibm.com (Ryan Rossiter)
Date: Wed, 30 Sep 2015 09:57:49 -0500
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <560B5E51.1010900@inaugust.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <560B5E51.1010900@inaugust.com>
Message-ID: <560BF86D.2010202@linux.vnet.ibm.com>


On 9/29/2015 11:00 PM, Monty Taylor wrote:
> *waving hands wildly at details* ...
>
> I believe that the real win is if Magnum's control plane can integrate 
> the network and storage fabrics that exist in an OpenStack with 
> kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not 
> interesting ... an ansible playbook can do that in 5 minutes. OTOH - 
> deploying some kube into a cloud in such a way that it shares a tenant 
> network with some VMs that are there - that's good stuff and I think 
> actually provides significant value.
+1 on sharing the tenant network with VMs.

When I look at Magnum being an OpenStack project, I see it winning by 
integrating itself with the other projects, and to make containers just 
work in your cloud. Here's the scenario I would want a cloud with Magnum 
to do (though it may be very pie-in-the-sky):

I want to take my container, replicate it across 3 container host VMs 
(each of which lives on a different compute host), stick a Neutron LB in 
front of it, and hook it up to the same network as my 5 other VMs.

This way, it handles my containers in a service, and integrates 
beautifully with my existing OpenStack cloud.
>
> On 09/29/2015 10:57 PM, Jay Lau wrote:
>> +1 to Egor. I think that the final goal of Magnum is container as a
>> service, not COE deployment as a service. ;-)
>>
>> Especially as we are also working on the Magnum UI; the Magnum UI should
>> export some interfaces to enable end users to create container
>> applications, not only COE deployments.
>>
>> I hope that Magnum can be treated as another "Nova" focusing on
>> container service. I know it is difficult to unify all of the concepts
>> in the different COEs (k8s has pod, service, rc; swarm only has
>> container; nova only has VM/PM with different hypervisors), but this
>> deserves some deep diving and thinking to see how we can move forward.
>>
>> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com
>> <mailto:EGuz at walmartlabs.com>> wrote:
>>
>>     definitely ;), but there are some thoughts on Tom's email.
>>
>>     I agree that we shouldn't reinvent APIs, but I don't think Magnum
>>     should only focus on deployment (I feel we will become another
>>     Puppet/Chef/Ansible module if we do :))
>>     I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm
>>     into the OpenStack ecosystem (Neutron/Cinder/Barbican/etc), even if
>>     we need to step into the Kub/Mesos/Swarm communities for that.
>>
>>     --
>>     Egor
>> -- 
>> Thanks,
>>
>> Jay Lau (Guangya Liu)
>>

-- 
Thanks,

Ryan Rossiter (rlrossit)



From harlowja at outlook.com  Wed Sep 30 15:16:34 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 30 Sep 2015 08:16:34 -0700
Subject: [openstack-dev] [heat] Convergence: Detecting and handling
 worker failures
In-Reply-To: <1443605008-sup-2932@fewbar.com>
References: <560B8494.6050805@hpe.com> <560B8AFC.2030207@hpe.com>
 <1443605008-sup-2932@fewbar.com>
Message-ID: <BLU436-SMTP564200B202F3D1A446D0CCD84D0@phx.gbl>

Clint Byrum wrote:
> Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
>> Hi,
>>
>> One of the remaining items in convergence is detecting and handling
>> engine (the engine worker) failures; here are my thoughts.
>>
>> Background: Since the work is distributed among heat engines, heat needs
>> some means to detect a failure, pick up the tasks from the failed
>> engine, and re-distribute them or run the tasks again.
>>
>> One simple way is to poll the DB to detect liveness by checking the
>> table populated by heat-manage. Each engine records its presence
>> periodically by updating the current timestamp. All the engines will
>> have a periodic task that checks the DB for the liveness of the other
>> engines. Each engine will check the timestamps updated by the other
>> engines, and if it finds one older than the timestamp-update period,
>> it detects a failure. When this happens, the remaining engines, as and
>> when they detect the failure, will try to acquire the lock for
>> in-progress resources that were handled by the engine which died. They
>> will then run the tasks to completion.
>>
>> Another option is to use a coordination library like the community-owned
>> tooz (http://docs.openstack.org/developer/tooz/), which supports
>> distributed locking and leader election. We use it to elect a leader
>> among heat engines, and that leader is responsible for running periodic
>> tasks that check the state of each engine and distribute the tasks to
>> other engines when one fails. The advantage, IMHO, is simplified
>> heat code. Also, we can move the timeout task to the leader, which will
>> run timeouts for all the stacks and send a signal to abort the operation
>> when a timeout happens. The downside: an external resource like
>> ZooKeeper/memcached etc. is needed for leader election.
>>
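(The DB-heartbeat scheme described above is easy to sketch. Here is a toy illustration in Python, where an in-memory dict stands in for the heat-manage table; all names here are invented for the example.)

```python
class EngineRegistry:
    """Toy stand-in for the DB table each heat engine heartbeats into."""

    HEARTBEAT_PERIOD = 5.0  # seconds between timestamp updates

    def __init__(self):
        self._last_seen = {}  # engine_id -> last heartbeat timestamp

    def heartbeat(self, engine_id, now):
        """Record this engine's presence (in real life: UPDATE ... SET ts)."""
        self._last_seen[engine_id] = now

    def dead_engines(self, now, grace=2 * HEARTBEAT_PERIOD):
        """Engines whose last heartbeat is older than the grace period."""
        return sorted(e for e, ts in self._last_seen.items()
                      if now - ts > grace)


registry = EngineRegistry()
registry.heartbeat("engine-1", now=100.0)
registry.heartbeat("engine-2", now=109.0)
# At t=112, engine-1 last checked in 12s ago (> 10s grace): presumed dead,
# and the surviving engines would try to take over its in-progress resources.
print(registry.dead_engines(now=112.0))  # ['engine-1']
```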
>
> It's becoming increasingly clear that OpenStack services in general need
> to look at distributed locking primitives. There's a whole spec for that
> right now:
>
> https://review.openstack.org/#/c/209661/

As the author of said spec (Chronicles of a DLM) I fully agree that we 
shouldn't be reinventing this (again, and again). Also as the author of 
that spec, I'd like to encourage others to get involved in adding their 
use-cases/stories to it. I have done some initial analysis of projects 
and documented some of the recreations of DLM-like things in it, and I'm 
very much open to including others' stories as well. In the end I hope we 
can pick a DLM (ideally a single one) that has a wide community, is 
structurally sound, is easily usable & operable, is open, and will help 
achieve and grow (what I think are) the larger long-term goals (and 
health) of many OpenStack projects.

Nicely formatted RST (for the latest uploaded spec) also viewable at:

http://docs-draft.openstack.org/61/209661/22/check/gate-openstack-specs-docs/ced42e7//doc/build/html/specs/chronicles-of-a-dlm.html#chronicles-of-a-distributed-lock-manager

>
> I suggest joining that conversation, and embracing a DLM as the way to
> do this.
>
> Also, the leader election should be per-stack, and the leader selection
> should be heavily weighted based on a consistent hash algorithm so that
> you get even distribution of stacks to workers. You can look at how
> Ironic breaks up all of the nodes that way. They're using a similar lock
> to the one Heat uses now, so the two projects can collaborate nicely on
> a real solution.
>
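Clint's per-stack leader selection via consistent hashing can be illustrated with a tiny hash ring (a sketch only; Ironic's real implementation lives in its hash-ring code, and every name below is invented for the example):

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring mapping stack IDs to engine workers."""

    def __init__(self, workers, replicas=32):
        # Each worker gets `replicas` points on the ring to even out load.
        self._ring = sorted(
            (self._hash("%s:%d" % (w, i)), w)
            for w in workers for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def worker_for(self, stack_id):
        """Owner = worker at the first ring point at/after the stack's hash."""
        idx = bisect.bisect(self._keys, self._hash(stack_id)) % len(self._ring)
        return self._ring[idx][1]


ring = HashRing(["engine-1", "engine-2", "engine-3"])
# The mapping is deterministic, and stacks spread across the engines;
# when an engine dies, only its stacks move to the ring's next points.
owners = {ring.worker_for("stack-%d" % i) for i in range(100)}
print(sorted(owners))
```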
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From akhivin at mirantis.com  Wed Sep 30 15:31:25 2015
From: akhivin at mirantis.com (Alexey Khivin)
Date: Wed, 30 Sep 2015 18:31:25 +0300
Subject: [openstack-dev] [murano] suggestion on commit message title
 format for the murano-apps repository
In-Reply-To: <etPan.56051f17.7df2783e.34fe@TefMBPr.local>
References: <CAM9f5rjmKSVfn0PySHVZC2FB81_gzgB5a0Vc1=BSL31JEU40WA@mail.gmail.com>
 <etPan.56051f17.7df2783e.34fe@TefMBPr.local>
Message-ID: <CAM9f5rgEASHf2HVQuT9AB36HuvpTJ+5j3iDgouC84uaq3ReZSg@mail.gmail.com>

Let's discuss in more detail how it should be.

I have prepared a draft; please take a look:
https://review.openstack.org/#/c/229477/
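To make the proposal concrete, here is a rough sketch of what such a title check could look like (illustrative only; the actual rule will be whatever the HACKING.rst change settles on):

```python
import re

# Bracketed app name(s), e.g. "[ApacheHTTPServer]" or "[Docker/Kubernetes]",
# followed by a space and a non-empty summary.
TITLE_RE = re.compile(r"^\[[A-Za-z0-9/._-]+\] \S")


def valid_title(title):
    """True if the commit title follows the proposed [App] summary format."""
    return bool(TITLE_RE.match(title))


print(valid_title("[ApacheHTTPServer] Utilize Custom Network selector"))  # True
print(valid_title("[Docker/Kubernetes] Fix typo"))                        # True
print(valid_title("Fix typo in Kubernetes Cluster app"))                  # False
```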




2015-09-25 13:16 GMT+03:00 Kirill Zaitsev <kzaitsev at mirantis.com>:

> Looks reasonable to me! Could you maybe document that on HACKING.rst in
> the repo? We could vote on the commit itself.
>
> --
> Kirill Zaitsev
> Murano team
> Software Engineer
> Mirantis, Inc
>
> On 25 Sep 2015 at 02:14:09, Alexey Khivin (akhivin at mirantis.com) wrote:
>
> Hello everyone
>
Almost every commit message in the murano-apps repository contains the
name of the application it relates to.

I suggest specifying the application within the commit message title
using a strict and uniform format.
>
>
> For example, something like this:
>
> [ApacheHTTPServer] Utilize Custom Network selector
> <https://review.openstack.org/201659>
> [Docker/Kubernetes] Fix typo <https://review.openstack.org/208452>
>
> instead of this:
>
> Utilize Custom Network selector in Apache App
> Fix typo in Kubernetes Cluster app <https://review.openstack.org/208452>
>
>
> I think it would improve the readability of the message list.
>
> --
> Regards,
> Alexey Khivin
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Alexey Khivin

Skype: khivin
+79169167297

+7 (495) 640-4904 (office)
+7 (495) 646-56-27 (fax)
Moscow, Russia, Vorontsovskaya St. 35B, bld.3
www.mirantis.ru, www.mirantis.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/12312f3e/attachment.html>

From devdatta.kulkarni at RACKSPACE.COM  Wed Sep 30 15:58:41 2015
From: devdatta.kulkarni at RACKSPACE.COM (Devdatta Kulkarni)
Date: Wed, 30 Sep 2015 15:58:41 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>,
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
Message-ID: <1443628722351.56254@RACKSPACE.COM>

+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its application container scheduling
requirements, deep integration of the COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can depend on Keystone tokens to deploy
and schedule containers on the Bay nodes instead of having to use COE-specific credentials.
That way, container resources will become first-class components that can be monitored
using Ceilometer, access-controlled using Keystone, and managed from within Horizon.

Regards,
Devdatta


From: Hongbin Lu <hongbin.lu at huawei.com>
Sent: Wednesday, September 30, 2015 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 from me as well.

I think what makes Magnum appealing is the promise to provide container-as-a-service. I see COE deployment as a helper to achieve that promise, instead of the main goal.

Best regards,
Hongbin


From: Jay Lau [mailto:jay.lau.513 at gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 to Egor. I think that the final goal of Magnum is container as a service, not COE deployment as a service. ;-)

Especially as we are also working on the Magnum UI; the Magnum UI should export some interfaces to enable end users to create container applications, not only COE deployments.

I hope that Magnum can be treated as another "Nova" focusing on container service. I know it is difficult to unify all of the concepts in the different COEs (k8s has pod, service, rc; swarm only has container; nova only has VM/PM with different hypervisors), but this deserves some deep diving and thinking to see how we can move forward.


On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com> wrote:
definitely ;), but the are some thoughts to Tom?s email.

I agree that we shouldn't reinvent apis, but I don?t think Magnum should only focus at deployment (I feel we will become another Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to Kub/Mesos/Swarm communities for that.

Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <danehans at cisco.com<mailto:danehans at cisco.com>> wrote:


+1

From: Tom Cammann <tom.cammann at hpe.com<mailto:tom.cammann at hpe.com>>
Reply-To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months: completely deprecate the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very difficult, and probably a wasted effort, to try to consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <EGuz at walmartlabs.com><mailto:EGuz at walmartlabs.com>
To: "openstack-dev at lists.openstack.org"<mailto:openstack-dev at lists.openstack.org> <openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
________________________________



Also I believe docker compose is just a command-line tool which doesn't have any API or scheduling features.
But during the last DockerCon hackathon, PayPal folks implemented a docker compose executor for Mesos (https://github.com/mohitsoni/compose-executor),
which can give you a pod-like experience.

Egor

From: Adrian Otto <adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to operate. We are intentionally avoiding re-inventing the wheel. Our goal is not to replace docker swarm (or other existing systems), but to complement it/them. We want to offer users of Docker the richness of native APIs and supporting tools. This way they will not need to compromise features or wait longer for us to implement each new feature as it is added. Keep in mind that our pod, service, and replication controller resources pre-date this philosophy. If we had started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com>>  wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes COE, but exposes container in the swarm COE. As I understand it, swarm is only a container scheduler, which is like nova in openstack. Docker compose is an orchestration program, which is like heat in openstack. k8s is the combination of scheduler and orchestration. So I think it is better to expose the compose APIs to users, which are at the same level as k8s.


Regards
Wanghua
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

<ATT00001.gif>__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)


Unsubscribe:  OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:  OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   


-- 





Thanks,
 Jay Lau (Guangya Liu)
             

From edgar.magana at workday.com  Wed Sep 30 16:01:44 2015
From: edgar.magana at workday.com (Edgar Magana)
Date: Wed, 30 Sep 2015 16:01:44 +0000
Subject: [openstack-dev]  [election] [TC] TC Candidacy
Message-ID: <DC3F5B28-A467-4E16-A1B6-C33D0FC95333@workday.com>

Hello Developers and Community,

I would like to submit my candidacy for the TC election.

I have been involved in OpenStack activities since April 2011, when I became
one of the founders of the networking project known at that time as Quantum.
I have been a core reviewer for Neutron since November 2011. I helped
to create the networking guide and contributed to multiple chapters. I have
spoken at many OpenStack summits, meet-ups and conferences. I have been very
active in the Operators meet-up, moderating multiple sessions. In a few words:
"I love to evangelize OpenStack".

Over the last four years I have gained experience in project management and
leadership from different team perspectives: technology vendors, a networking
start-up, and over a year as an OpenStack operator. This last role has been
very interesting compared with my previous ones because of my focus on a
production-ready cloud powered by OpenStack. Running a high-scale production
cloud is giving me a different perspective on how the platform should be
delivered as a product ready to use, and that is what I will bring to the TC
and to all project members and PTLs.

As a TC member my main focus will be to close any existing gap between the
development teams and their customers, who are the OpenStack users and
operators, among others. In my operator role I have validated documentation
and best practices on OpenStack deployment and operations, I have provided
all possible feedback, and I want to do more in this area. I believe we can
make OpenStack better if we open the TC to members who have deployed pure
OpenStack with no vendor-specific guidelines or influence from any specific
distribution.

I strongly believe we will make OpenStack more solid and integrated. I will
work as a cross-project liaison in order to reach this goal. I will continue
my work of evangelizing the newest OpenStack projects, guiding them toward
the best adoption process by the community, and I will also help them achieve
the best integration with other open-source technologies.

No matter the result of the election, I will continue my work on OpenStack
with passion and courage. This is the best project ever, and it can be even
better. Let's inject some fresh ideas into the TC, and let's keep making this
platform the de-facto cloud management system for all operators, whether for
public, private or hybrid clouds.

Thank you so much for reading and considering my humble aspiration to the TC!

--
Edgar Magana
IRC: emagana
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/bdb4ebdc/attachment.html>

From rhallise at redhat.com  Wed Sep 30 16:02:30 2015
From: rhallise at redhat.com (Ryan Hallisey)
Date: Wed, 30 Sep 2015 12:02:30 -0400 (EDT)
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <CAKO+H+KXa20dq9W9UqX+FcmYtJnZHyGBU3i4ESkQiwjSLSX+Tw@mail.gmail.com>
References: <D2305CD3.13957%stdake@cisco.com>
 <CAKO+H+KXa20dq9W9UqX+FcmYtJnZHyGBU3i4ESkQiwjSLSX+Tw@mail.gmail.com>
Message-ID: <662431838.29761026.1443628950728.JavaMail.zimbra@redhat.com>

Way to go Michal! +1

-Ryan

----- Original Message -----
From: "Swapnil Kulkarni" <coolsvap at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Wednesday, September 30, 2015 5:00:27 AM
Subject: Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer



On Wed, Sep 30, 2015 at 3:50 AM, Steven Dake (stdake) < stdake at cisco.com > wrote: 



Hi folks, 

I am proposing Michal for core reviewer. Consider my proposal as a +1 vote. Michal has done a fantastic job with rsyslog, has done a nice job overall contributing to the project for the last cycle, and has really improved his review quality and participation over the last several months. 

Our process requires 3 +1 votes, with no veto (-1) votes. If you're uncertain, it is best to abstain :) I will leave the voting open for 1 week, until Tuesday October 6th, or until there is a unanimous decision or a veto. 

+1 :) 




Regards 
-steve 

__________________________________________________________________________ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From chris.friesen at windriver.com  Wed Sep 30 16:03:30 2015
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 30 Sep 2015 10:03:30 -0600
Subject: [openstack-dev] [nova] live migration in Mitaka
In-Reply-To: <191B00529A37FA4F9B1CAE61859E4D6E5AB4C636@IRSMSX101.ger.corp.intel.com>
References: <39E5672E03A1CB4B93936D1C4AA5E15D1DBE41CA@G1W3782.americas.hpqcorp.net>
 <5F389C517756284F80239F55BCA5DDDD74CAC368@G4W3298.americas.hpqcorp.net>
 <39E5672E03A1CB4B93936D1C4AA5E15D1DBF1827@G1W3782.americas.hpqcorp.net>
 <191B00529A37FA4F9B1CAE61859E4D6E5AB4C636@IRSMSX101.ger.corp.intel.com>
Message-ID: <560C07D2.1040801@windriver.com>

On 09/30/2015 06:03 AM, Koniszewski, Pawel wrote:
>> From: Murray, Paul (HP Cloud) [mailto:pmurray at hpe.com]

>> - migrate suspended instances
>
> I'm not sure I understand this correctly. When user calls 'nova suspend' I
> thought that it actually "hibernates" VM and saves memory state to disk
> [2][3]. In such case there is nothing to "live" migrate - shouldn't
> cold-migration/resize solve this problem?

A "suspend" currently uses a libvirt API (dom.managedSave()) that results in the 
use of a libvirt-managed hibernation file. (So nova doesn't know the filename.) 
  I've only looked at it briefly, but it seems like it should be possible to 
switch to virDomainSave(), which would let nova specify the file to save, and 
therefore allow cold migration of the suspended instance.
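The distinction can be sketched with a toy stand-in (this is not nova's actual code; FakeDomain and suspend_for_cold_migration are hypothetical names illustrating the managedSave()-vs-save() difference):

```python
class FakeDomain:
    """Stand-in for a libvirt domain, recording which save API was used."""

    def __init__(self):
        self.saved_path = None
        self.managed = False

    def managedSave(self, flags=0):
        # managedSave(): libvirt picks (and hides) the state file location,
        # so nova cannot copy it to another host.
        self.managed = True

    def save(self, path):
        # save(path): the caller chooses the path, so the state file can
        # travel with the instance's disks during a cold migration.
        self.saved_path = path


def suspend_for_cold_migration(dom, state_path):
    # Hypothetical helper: use the virDomainSave-style call so the state
    # file sits at a known path under nova's control.
    dom.save(state_path)
    return state_path


dom = FakeDomain()
path = suspend_for_cold_migration(
    dom, "/var/lib/nova/instances/uuid/state.save")
```

With a real libvirt connection the same shape would apply, with dom being a virDomain object.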

Chris


From jaypipes at gmail.com  Wed Sep 30 16:04:11 2015
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 30 Sep 2015 12:04:11 -0400
Subject: [openstack-dev] [glance] Models and validation for v2
In-Reply-To: <CAAetzei6b0emZ9JEy+W0jYA_VujXeMK+WcaFKSL-=pmnKmDgKQ@mail.gmail.com>
References: <CAAetzei6b0emZ9JEy+W0jYA_VujXeMK+WcaFKSL-=pmnKmDgKQ@mail.gmail.com>
Message-ID: <560C07FB.9090505@gmail.com>

On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
> Hi All,
> In short terms, I am wondering why we are validating responses from
> server when we are doing
> image-show, image-list, member-list, metadef-namespace-show and other
> read-only requests.
>
> AFAIK, we are building warlock models when receiving responses from
> server (see [0]). Each model requires schema to be fetched from glance
> server. It means that each time we are doing image-show, image-list,
> image-create, member-list and others we are requesting schema from the
> server. AFAIU, we are using models to dynamically validate that an object
> is in accordance with the schema, but is that the case for responses
> received from the server?
>
> Could somebody please explain to me the reasoning behind this implementation?
> Have I missed some usage cases where validation is required for server
> responses?
>
> I also noticed that we already faced some issues with such
> implementation that leads to "mocking" validation([1][2]).

The validation should not be done for responses, only ever requests (and 
it's unclear that there is value in doing this on the client side at 
all, IMHO).

-jay
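The round-trip cost Kairat describes can be sketched with a toy model (fetch_schema and build_validated_model are illustrative names only; the real client fetches JSON schemas over HTTP before building each warlock model):

```python
def fetch_schema(server):
    # Stands in for the HTTP GET the client issues to the schemas endpoint
    # before it can build a validated model.
    server["schema_requests"] += 1
    return {"required": ["id", "name"]}


def build_validated_model(server, response):
    # Validate a server *response* against the fetched schema -- the step
    # the thread questions for read-only calls like image-show.
    schema = fetch_schema(server)
    missing = [k for k in schema["required"] if k not in response]
    if missing:
        raise ValueError("response missing fields: %s" % missing)
    return dict(response)


server = {"schema_requests": 0}
for _ in range(3):  # three read-only image-show calls
    build_validated_model(server, {"id": "42", "name": "cirros"})

print(server["schema_requests"])  # -> 3
```

Each read issues an extra schema request; skipping response validation would drop that traffic entirely, which is the trade-off under discussion.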


From thingee at gmail.com  Wed Sep 30 16:11:57 2015
From: thingee at gmail.com (Mike Perez)
Date: Wed, 30 Sep 2015 09:11:57 -0700
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <5609790C.60002@swartzlander.org>
References: <5609790C.60002@swartzlander.org>
Message-ID: <20150930161157.GA31056@gmail.com>

On 13:29 Sep 28, Ben Swartzlander wrote:
> I've always thought it was a bit strange to require new drivers to
> merge by milestone 1. I think I understand the motivations of the
> policy. The main motivation was to free up reviewers to review "other
> things" and this policy guarantees that for 75% of the release
> reviewers don't have to review new drivers. The other motivation was
> to prevent vendors from turning up at the last minute with crappy
> drivers that needed a ton of work, by encouraging them to get started
> earlier, or forcing them to wait until the next cycle.
> 
> I believe that the deadline actually does more harm than good.
> 
> First of all, to those that don't want to spend time on driver
> reviews, there are other solutions to that problem. Some people do
> want to review the drivers, and those who don't can simply ignore
> them and spend time on what they care about. I've heard people who
> spend time on driver reviews say that the milestone-1 deadline
> doesn't mean they spend less time reviewing drivers overall, it just
> all gets crammed into the beginning of each release. It should be
> obvious that setting a deadline doesn't actually affect the amount of
> reviewer effort, it just concentrates that effort.

Some bad assumptions here:

* Nobody said they didn't want to review drivers.

* "Crammed" is completely an incorrect word here. An example with last release,
  we only had 3/17 drivers trying to get in during the last week of the
  milestone [1]. I don't think you're very active in Cinder to really judge how
  well the team has worked together to get these drivers in a timely way with
  vendors.

> The argument about crappy code is also a lot weaker now that there
> are CI requirements which force vendors to spend much more time up
> front and clear a much higher quality bar before the driver is even
> considered for merging. Drivers that aren't ready for merge can
> always be deferred to a later release, but it seems weird to defer
> drivers that are high quality just because they're submitted during
> milestones 2 or 3.

"Crappy code" ... I don't know where that's coming from. If anything, CI has
helped get the drivers in faster to get rid of what you call "cramming".

> All of the above is just my opinion though, and you shouldn't care
> about my opinions, as I don't do much coding and reviewing in Cinder.
> There is a real reason I'm writing this email...
>
> In Manila we added some major new features during Liberty. All of the
> new features merged in the last week of L-3. It was a nightmare of
> merge conflicts and angry core reviewers, and many contributors
> worked through a holiday weekend to bring the release together. While
> asking myself how we can avoid such a situation in the future, it
> became clear to me that bigger features need to merge earlier -- the
> earlier the better.
> 
> When I look at the release timeline, and ask myself when is the best
> time to merge new major features, and when is the best time to merge
> new drivers, it seems obvious that *features* need to happen early
> and drivers should come *later*. New major features require FAR more
> review time than new drivers, and they require testing, and even
> after they merge they cause merge conflicts that everyone else has to
> deal with. Better that that works happens in milestones 1 and 2 than
> right before feature freeze. New drivers can come in right before
> feature freeze as far as I'm concerned. Drivers don't cause merge
> conflicts, and drivers don't need huge amounts of testing (presumably
> the CI system ensures some level of quality).
> 
> It also occurs to me that new features which require driver
> implementation (hello replication!) *really* should go in during the
> first milestone so that drivers have time to implement the feature
> during the same release.

I disagree. You're under the assumption that a feature being worked on in
Liberty is intended to be ready for Liberty.

No.

I've expressed this numerous times at the Cinder midcycle sprint you attended
that I did not want to see drivers working on replication in their driver.

> So I'm asking the Cinder core team to reconsider the milestone-1
> deadline for drivers, and to change it to a deadline for new major
> features (in milestone-1 or milestone-2), and to allow drivers to
> merge whenever*. This is the same pitch I'll be making to the Manila
> core team. I've been considering this idea for a few weeks now but I
> wanted to wait until after PTL elections to suggest it here.

During the release, a feature can be worked on, but adoption in drivers can be
difficult. Since just about every driver is behind on potential features, I'd
rather see driver maintainers focus on features that have been ready for some
time, not what we just merged a week ago.

There is no good reason to rush and suffer quality for the sake of vendors
wanting the latest and greatest feature. We're more mature than that.

Things like replication I'd rather see adopted in the next release. Otherwise,
if I can use your own word, you're "cramming" features into a release.


[1] - https://etherpad.openstack.org/p/cinder-liberty-drivers

-- 
Mike Perez


From adrian.otto at rackspace.com  Wed Sep 30 16:31:01 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Wed, 30 Sep 2015 16:31:01 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <1443628722351.56254@RACKSPACE.COM>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
 <1443628722351.56254@RACKSPACE.COM>
Message-ID: <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>

Thanks everyone who has provided feedback on this thread. The good news is that most of what has been asked for from Magnum is actually in scope already, and some of it has already been implemented. We never aimed to be a COE deployment service. That happens to be a necessity to achieve our more ambitious goal: We want to provide a compelling Containers-as-a-Service solution for OpenStack clouds in a way that offers maximum leverage of what's already in OpenStack, while giving end users the ability to use their favorite tools to interact with their COE of choice, with the multi-tenancy capability we expect from all OpenStack services, and simplified integration with a wealth of existing OpenStack services (Identity, Orchestration, Images, Networks, Storage, etc.).

The areas where we have disagreement are whether the features offered for the k8s COE should be mirrored in other COEs. We have not attempted to do that yet, and my suggestion is to continue resisting that temptation because it is not aligned with our vision. We are not here to re-invent container management as a hosted service. Instead, we aim to integrate prevailing technology, and make it work great with OpenStack. For example, adding docker-compose capability to Magnum is currently out-of-scope, and I think it should stay that way. With that said, I'm willing to have a discussion about this with the community at our upcoming Summit.

An argument could be made for feature consistency among various COE options (Bay Types). I see this as a relatively low value pursuit. Basic features like integration with OpenStack Networking and OpenStack Storage services should be universal. Whether you can present a YAML file for a bay to perform internal orchestration is not important in my view, as long as there is a prevailing way of addressing that need. In the case of Docker Bays, you can simply point a docker-compose client at it, and that will work fine.
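A minimal sketch of that last point, under the assumption of a hypothetical bay endpoint (the address below is illustrative; a real deployment would use the API address Magnum reports for the bay): a Docker bay speaks the plain Docker API, so a stock docker-compose client only needs its environment pointed at the bay.

```python
import os

# Hypothetical bay endpoint; substitute the address Magnum reports.
env = dict(os.environ)
env["DOCKER_HOST"] = "tcp://bay-master.example.com:2376"
env["DOCKER_TLS_VERIFY"] = "1"

# subprocess.run(["docker-compose", "up", "-d"], env=env) would now
# schedule the compose application onto the bay -- no Magnum-specific
# API involved.
print(env["DOCKER_HOST"])  # -> tcp://bay-master.example.com:2376
```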

Thanks,

Adrian

> On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni <devdatta.kulkarni at RACKSPACE.COM> wrote:
> 
> +1 Hongbin.
> 
> From perspective of Solum, which hopes to use Magnum for its application container scheduling
> requirements, deep integration of COEs with OpenStack services like Keystone will be useful.
> Specifically, I am thinking that it will be good if Solum can depend on Keystone tokens to deploy 
> and schedule containers on the Bay nodes instead of having to use COE specific credentials. 
> That way, container resources will become first class components that can be monitored 
> using Ceilometer, access controlled using Keystone, and managed from within Horizon.
> 
> Regards,
> Devdatta
> 
> 
> From: Hongbin Lu <hongbin.lu at huawei.com>
> Sent: Wednesday, September 30, 2015 9:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>   
> 
> +1 from me as well.
>  
> I think what makes Magnum appealing is the promise to provide container-as-a-service. I see coe deployment as a helper to achieve the promise, instead of  the main goal.
>  
> Best regards,
> Hongbin
>  
> 
> 


From jpeeler at redhat.com  Wed Sep 30 16:30:59 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Wed, 30 Sep 2015 12:30:59 -0400
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <D2305CD3.13957%stdake@cisco.com>
References: <D2305CD3.13957%stdake@cisco.com>
Message-ID: <CALesnTzAqC_3DV-WZZyqj4FHXn-ft-mWPEUwQsyppigJTASUfA@mail.gmail.com>

On Tue, Sep 29, 2015 at 6:20 PM, Steven Dake (stdake) <stdake at cisco.com> wrote:
> Hi folks,
>
> I am proposing Michal for core reviewer.  Consider my proposal as a +1 vote.
> Michal has done a fantastic job with rsyslog, has done a nice job overall
> contributing to the project for the last cycle, and has really improved his
> review quality and participation over the last several months.

Agreed, +1!


From dk068x at att.com  Wed Sep 30 16:41:38 2015
From: dk068x at att.com (KARR, DAVID)
Date: Wed, 30 Sep 2015 16:41:38 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>,
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>,
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
 <1443139720841.25541@vmware.com>
 <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
Message-ID: <B8D164BED956C5439875951895CB4B223BF1E8D7@CAFRFD1MSGUSRIA.ITServices.sbc.com>

I think I'm seeing similar errors, but I'm not certain.  With the OVA I downloaded last night, when I run "./rejoin-stack.sh", I get "Couldn't find ./stack-screenrc file; have you run stack.sh yet?"
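For reference, the guard that produces that message can be reproduced with a small sketch (this mimics, and is not, devstack's actual code):

```shell
# Sketch of the check rejoin-stack.sh performs: it needs the
# stack-screenrc file that a successful stack.sh run leaves behind.
check_rejoin() {
    if [ -f "$1/stack-screenrc" ]; then
        echo "ok"
    else
        echo "Couldn't find stack-screenrc; have you run stack.sh yet?"
    fi
}

workdir=$(mktemp -d)
check_rejoin "$workdir"              # fails: stack.sh has not run here
touch "$workdir/stack-screenrc"      # what a successful stack.sh leaves
check_rejoin "$workdir"              # now succeeds
```

So the error simply means the OVA was captured before (or without) a completed stack.sh run.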

Concerning the original page with setup instructions, at https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub , I note that the login user and password are different (probably obvious), and so is the required path to "cd" to.

Also, after starting the VM, the instructions say to run "ifconfig" to get the IP address of the VM, and then to ssh to the VM.  This seems odd.  If I've already done "interact with the console", then I'm already logged into the console.  The instructions also describe how to get to the Horizon client from your browser.  I'm not sure what this should say now.

From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Friday, September 25, 2015 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Thanks Alex, Zhou,

I get errors from Congress when I do a re-join. These errors seem to be due to the order in which the services come up, so I still depend on running stack.sh after the VM is up and running. Please try out the new VM, and advise if you need to add any of your use cases. Also, re-join starts "screen"; do we expect the end user to know how to use "screen"?

I do understand that running "stack.sh" takes time to run, but it does not do anything that looks like magic; avoiding magic is what we want in order to get the user excited.

I have uploaded a new version of the VM please experiment with this and let me know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:ayip at vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling tempest.  So, I think it uses the loopback IP address, and that does not change, so rejoin-stack.sh works without a network at all.
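If anyone wants to reproduce that setup, disabling tempest in devstack is a one-line local.conf entry (a sketch; merge it into your own local.conf as appropriate):

```shell
# Sketch: append a localrc section to devstack's local.conf so the
# tempest service is never started.
cat >> local.conf <<'EOF'
[[local|localrc]]
disable_service tempest
EOF
```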



- Alex





________________________________
From: Zhou, Zhenzan <zhenzan.zhou at intel.com<mailto:zhenzan.zhou at intel.com>>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if the VM's IP address has not changed, so using a NAT network and a fixed IP inside the VM can help.
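For anyone setting that up, the VirtualBox side looks roughly like this (the VM name "congress-vm" is an assumption; the commands are printed for review rather than executed):

```shell
# Sketch: pin the guest to a NAT network (stable 10.0.2.x address) and
# forward host port 2222 to the guest's sshd. Printed so they can be
# checked before running against a real VM.
vm="congress-vm"
cmds="VBoxManage modifyvm \"$vm\" --nic1 nat
VBoxManage modifyvm \"$vm\" --natpf1 \"ssh,tcp,,2222,,22\""
echo "$cmds"
```

With NAT the guest's address no longer depends on the host's network, which is what keeps rejoin-stack.sh happy.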

BR
Zhou Zhenzan

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM, which takes a minute or so.  Then I run rejoin-stack.sh, which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and OpenStack state that were running before.



- Alex





________________________________
From: Shiv Haris <sharis at Brocade.com<mailto:sharis at Brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the Usecase-VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time consuming) when the VM is restarted.

The other option is to take a snapshot. It appears that taking a snapshot of the VM and using it in another setup is not very straightforward: it involves modifying the .vbox file and seems prone to user error. I am leaning towards halting the machine and generating an OVA file.
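Both options come down to one VBoxManage invocation each (a sketch; the VM name "congress-vm" is assumed, and the commands are printed rather than run):

```shell
# Sketch of the two approaches: export an OVA from a halted VM, or take
# a live snapshot (which, as noted, is awkward to move between hosts).
# Printed for review only.
vm="congress-vm"
cmds="VBoxManage export \"$vm\" -o Congress_Usecases.ova
VBoxManage snapshot \"$vm\" take devstack-running --live"
echo "$cmds"
```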

I am looking for suggestions ....

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you raised; however, I am still working on some of the subtler ones. Once I have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB, but the OVA compresses the image and disk to 3GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up the VM. I agree a snapshot would be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your use cases; I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first use case; I would prefer it be the same for the other use cases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From kevin.carter at RACKSPACE.COM  Wed Sep 30 17:00:13 2015
From: kevin.carter at RACKSPACE.COM (Kevin Carter)
Date: Wed, 30 Sep 2015 17:00:13 +0000
Subject: [openstack-dev] [openstack-ansible] Proposing Steve
	Lewis	(stevelle) for core reviewer
In-Reply-To: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
References: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
Message-ID: <1443632413417.738@RACKSPACE.COM>

+1 from me


--

Kevin Carter
IRC: cloudnull

________________________________
From: Jesse Pretorius <jesse.pretorius at gmail.com>
Sent: Wednesday, September 30, 2015 3:51 AM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the last cycle and always makes an effort to ensure that his responses are made after thorough testing where possible. I have found his input to be valuable.

--
Jesse Pretorius
IRC: odyssey4me

From harm at weites.com  Wed Sep 30 17:04:43 2015
From: harm at weites.com (Harm Weites)
Date: Wed, 30 Sep 2015 19:04:43 +0200
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <D2305CD3.13957%stdake@cisco.com>
References: <D2305CD3.13957%stdake@cisco.com>
Message-ID: <b2f9e70f866b46d5451c8f7391f7e1eb@weites.com>

Looks like he passed 3 already, but here's another +1 :)

Steven Dake (stdake) schreef op 2015-09-30 00:20:
> Hi folks,
> 
> I am proposing Michal for core reviewer. Consider my proposal as a +1
> vote. Michal has done a fantastic job with rsyslog, has done a nice
> job overall contributing to the project for the last cycle, and has
> really improved his review quality and participation over the last
> several months.
> 
> Our process requires 3 +1 votes, with no veto (-1) votes. If your
> uncertain, it is best to abstain :) I will leave the voting open for 1
> week until Tuesday October 6th or until there is a unanimous decision
> or a veto.
> 
> Regards
> -steve
>  
>  
>  
> _______________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From ben at swartzlander.org  Wed Sep 30 17:10:44 2015
From: ben at swartzlander.org (Ben Swartzlander)
Date: Wed, 30 Sep 2015 13:10:44 -0400
Subject: [openstack-dev] [cinder] The Absurdity of the Milestone-1
 Deadline for Drivers
In-Reply-To: <20150930161157.GA31056@gmail.com>
References: <5609790C.60002@swartzlander.org>
 <20150930161157.GA31056@gmail.com>
Message-ID: <560C1794.4080005@swartzlander.org>

On 09/30/2015 12:11 PM, Mike Perez wrote:
> On 13:29 Sep 28, Ben Swartzlander wrote:
>> I've always thought it was a bit strange to require new drivers to
>> merge by milestone 1. I think I understand the motivations of the
>> policy. The main motivation was to free up reviewers to review "other
>> things" and this policy guarantees that for 75% of the release
>> reviewers don't have to review new drivers. The other motivation was
>> to prevent vendors from turning up at the last minute with crappy
>> drivers that needed a ton of work, by encouraging them to get started
>> earlier, or forcing them to wait until the next cycle.
>>
>> I believe that the deadline actually does more harm than good.
>>
>> First of all, to those that don't want to spend time on driver
>> reviews, there are other solutions to that problem. Some people do
>> want to review the drivers, and those who don't can simply ignore
>> them and spend time on what they care about. I've heard people who
>> spend time on driver reviews say that the milestone-1 deadline
>> doesn't mean they spend less time reviewing drivers overall, it just
>> all gets crammed into the beginning of each release. It should be
>> obvious that setting a deadline doesn't actually affect the amount of
>> reviewer effort, it just concentrates that effort.
> Some bad assumptions here:
>
> * Nobody said they didn't want to review drivers.
>
> * "Crammed" is completely an incorrect word here. An example with last release,
>    we only had 3/17 drivers trying to get in during the last week of the
>    milestone [1]. I don't think you're very active in Cinder to really judge how
>    well the team has worked together to get these drivers in a timely way with
>    vendors.

Those are fair points. No argument. I think I managed to obscure my main 
point with too many assumptions and too much rhetoric, though.

Let me restate my argument as simply as possible.

Drivers are relatively low risk to the project. They're a lot of work to 
review due to the size, but the risk of missing bugs is small because 
those bugs will affect only the users who choose to deploy the given 
driver. Also drivers are well understood, so the process of reviewing 
them is straightforward.

New features are high risk. Even a small change to the manager or API 
code can have dramatic impact on all users of Cinder. Larger changes 
that touch multiple modules in different areas must be reviewed by 
people who understand all of Cinder just to get basic assurance that 
they do what they say. Finding bugs in these kinds of changes is tricky. 
Reading the code only gets you so far, and automated testing only 
scratches the surface. You have to run the code and try it out. These 
things take time and core team time is a limited and precious resource.

Now, if you have some high risk changes and some low risk changes, which 
do you think it makes sense to work on early in the release, and which 
do you think is safe to merge at the last minute? I asked myself that 
question and decided that I'd rather do high risk stuff early and low 
risk stuff later. Based on that belief, I'm making a suggestion to move 
the deadlines around.


>> The argument about crappy code is also a lot weaker now that there
>> are CI requirements which force vendors to spend much more time up
>> front and clear a much higher quality bar before the driver is even
>> considered for merging. Drivers that aren't ready for merge can
>> always be deferred to a later release, but it seems weird to defer
>> drivers that are high quality just because they're submitted during
>> milestones 2 or 3.
> "Crappy code" ... I don't know where that's coming from. If anything, CI has
> helped get the drivers in faster to get rid of what you call "cramming".


That's good. If that's true, then I would think it supports an argument 
that the deadlines are unnecessary because the underlying problem 
(limited reviewer time) has been solved.


>> All the above is just my opinion though, and you shouldn't care
>> about my opinions, as I don't do much coding and reviewing in Cinder.
>> There is a real reason I'm writing this email...
>>
>> In Manila we added some major new features during Liberty. All of the
>> new features merged in the last week of L-3. It was a nightmare of
>> merge conflicts and angry core reviewers, and many contributors
>> worked through a holiday weekend to bring the release together. While
>> asking myself how we can avoid such a situation in the future, it
>> became clear to me that bigger features need to merge earlier -- the
>> earlier the better.
>>
>> When I look at the release timeline, and ask myself when is the best
>> time to merge new major features, and when is the best time to merge
>> new drivers, it seems obvious that *features* need to happen early
>> and drivers should come *later*. New major features require FAR more
>> review time than new drivers, and they require testing, and even
>> after they merge they cause merge conflicts that everyone else has to
>> deal with. Better that that work happens in milestones 1 and 2 than
>> right before feature freeze. New drivers can come in right before
>> feature freeze as far as I'm concerned. Drivers don't cause merge
>> conflicts, and drivers don't need huge amounts of testing (presumably
>> the CI system ensures some level of quality).
>>
>> It also occurs to me that new features which require driver
>> implementation (hello replication!) *really* should go in during the
>> first milestone so that drivers have time to implement the feature
>> during the same release.
> I disagree. You're under the assumption that a feature being worked on in
> Liberty is intended to be ready for Liberty.
>
> No.
>
> I've expressed this numerous times at the Cinder midcycle sprint you attended
> that I did not want to see drivers working on replication in their driver.

I'm all for features being worked on over multiple releases. I only 
called out replication because of the recent controversy it caused. A 
better example would have been a feature like "expand volume" which was 
added some releases ago. It seems reasonable to me to add a feature like 
that and for drivers to implement it all inside of one release. I'm not 
sure what the next feature of that flavor will be, but whatever it is, I 
hope it merges early in a release and not right before feature freeze.


>> So I'm asking the Cinder core team to reconsider the milestone-1
>> deadline for drivers, and to change it to a deadline for new major
>> features (in milestone-1 or milestone-2), and to allow drivers to
>> merge whenever*. This is the same pitch I'll be making to the Manila
>> core team. I've been considering this idea for a few weeks now but I
>> wanted to wait until after PTL elections to suggest it here.
> During the release, a feature can be worked on, but adoption in drivers can be
> difficult. Since just about every driver is behind on potential features, I'd
> rather see driver maintainers focused on those features that have been ready
> for some time, not what we just merged a week ago.
>
> There is no good reason to rush and suffer quality for the sake of vendors
> wanting the latest and greatest feature. We're more mature than that.
>
> Things like replication I'd rather see adopted in the next release. Otherwise,
> if I can use your own word, you're "cramming" features in a release.
>
>
> [1] - https://etherpad.openstack.org/p/cinder-liberty-drivers

I hope you didn't read my post as a post from NetApp complaining about 
drivers and driver features. I have no vested interest whatsoever in 
what drivers get merged or what features get implemented. I'm writing 
from my experience as Manila PTL and how the Manila core team got burned 
by our own optimism regarding how many core features we could get into a 
release.

Mike, I completely agree with you that vendor interests have to come 
second to the project itself and I have no problem making vendors wait 
to get their features in. I think we're even in vehement agreement and 
you somehow assume I have a different motive. The balance of how much 
time goes into reviewing vendor stuff vs core stuff is a hard problem 
that I'm not addressing here; I'm ONLY trying to focus on the order in 
which we do things, and I'm proposing that features should come first 
and drivers should come last (in the release cycle).

-Ben Swartzlander



From carol.l.barrett at intel.com  Wed Sep 30 17:14:08 2015
From: carol.l.barrett at intel.com (Barrett, Carol L)
Date: Wed, 30 Sep 2015 17:14:08 +0000
Subject: [openstack-dev] [election][TC] TC Candidacy
In-Reply-To: <CAHcn5b0ZGoS8H487-m0TMMD2zJFBY2HcY=VvPpTf5ydWK7QbhQ@mail.gmail.com>
References: <CAHcn5b0ZGoS8H487-m0TMMD2zJFBY2HcY=VvPpTf5ydWK7QbhQ@mail.gmail.com>
Message-ID: <2D352D0CD819F64F9715B1B89695400D5C920D9E@ORSMSX113.amr.corp.intel.com>

Mike - Congrats on your new position! Looking forward to working with you.
Carol

-----Original Message-----
From: Mike Perez [mailto:thingee at gmail.com] 
Sent: Wednesday, September 30, 2015 1:55 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [election][TC] TC Candidacy

Hi all!

I'm announcing my candidacy for a position on the OpenStack Technical Committee.

On October 1st I will be employed by the OpenStack Foundation as a Cross-Project Developer Coordinator to help bring focus and support to cross-project initiatives within the cross-project specs, DefCore, the Product Working Group, etc.

I feel the items below have enabled others across this project to strive for quality. If you would all have me as a member of the Technical Committee, you can help me to enable more quality work in OpenStack.

* I have been working in OpenStack since 2010. I spent a good amount of my time
  working on OpenStack in my free time before being paid full time to work on
  it. It has been an important part of my life, and rewarding to see what we
  have all achieved together.

* I was PTL for the Cinder project in the Kilo and Liberty releases for two
  cross-project reasons:
  * Third party continuous integration (CI).
  * Stop talking about rolling upgrades, and actually make it happen for
    operators.

* I led the effort in bringing third party continuous integration to the
  Cinder project for more than 60 different drivers. [1]
  * I removed 25 different storage drivers from Cinder to bring quality to the
    project to ensure what was in the Kilo release would work for operators.
    I did what I believed was right, regardless of whether it would cost me
    re-election for PTL [2].
  * In my conversations with other projects, this has enabled others to
    follow the same effort. Continuing this trend of quality cross-project will
    be my next focus.

* During my first term as PTL for Cinder, the team (with much respect to Thang
  Pham) worked on an effort to end the rolling upgrade problem, not just for
  Cinder, but for *all* projects.
  * First step was making databases independent from services via Oslo
    versioned objects.
  * In Liberty we have a solution coming that helps with RPC versioned messages
    to allow upgrading services independently.

* I have attempted to help with diversity in our community.
  * Helped lead our community to raise $17,403 for the Ada Initiative [3],
    which was helping address gender-diversity with a focus in open source.
  * For the Vancouver summit, I helped bring in the ally skills workshops from
    the Ada Initiative, so that our community can continue to be a welcoming
    environment [4].

* Within the Cinder team, I have enabled all to provide good documentation for
  important items in our release notes in Kilo [5] and Liberty [6].
  * Other projects have reached out to me after Kilo feeling motivated for this
    same effort. I've explained in the August 2015 Operators midcycle sprint
    that I will make this a cross-project effort in order to provide better
    communication to our operators and users.

* I started an OpenStack Dev List summary in the OpenStack Weekly Newsletter
  (What you need to know from the developer's list), in order to enable others
  to keep up with the dev list on important cross-project information. [7][8]

* I created the Cinder v2 API which has brought consistency in
  request/responses with other OpenStack projects.
  * I documented Cinder v1 and Cinder v2 API's. Later on I created the Cinder
    API reference documentation content. The attempt here was to enable others
    to have somewhere to start, to continue quality documentation with
    continued developments.

Please help me to do more positive work in this project. It would be an honor to be a member of your technical committee.


Thank you,
Mike Perez

Official Candidacy: https://review.openstack.org/#/c/229298/2
Review History: https://review.openstack.org/#/q/reviewer:170,n,z
Commit History: https://review.openstack.org/#/q/owner:170,n,z
Stackalytics: http://stackalytics.com/?user_id=thingee
Foundation: https://www.openstack.org/community/members/profile/4840
IRC Freenode: thingee
Website: http://thing.ee


[1] - http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:cinder-driver-removals,n,z
[3] - http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-May/064156.html
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Block_Storage_.28Cinder.29
[6] - https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Block_Storage_.28Cinder.29
[7] - http://www.openstack.org/blog/2015/09/openstack-community-weekly-newsletter-sept-12-18/
[8] - http://www.openstack.org/blog/2015/09/openstack-weekly-community-newsletter-sept-19-25/

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From wayne at puppetlabs.com  Wed Sep 30 17:21:33 2015
From: wayne at puppetlabs.com (Wayne Warren)
Date: Wed, 30 Sep 2015 10:21:33 -0700
Subject: [openstack-dev] Infra needs Gerrit developers
In-Reply-To: <CABf-f+oVrWB9A+ZOOpNMZdccJn=Gv95_RpLa8SXM9K6jp75scA@mail.gmail.com>
References: <CABf-f+oVrWB9A+ZOOpNMZdccJn=Gv95_RpLa8SXM9K6jp75scA@mail.gmail.com>
Message-ID: <CAMq41s=Hf9exO7oQ5Sh5j5168bRezpszSg1hUvc=wrXwFqtACQ@mail.gmail.com>

I am definitely interested in helping out with this as I feel the pain
of gerrit, particularly around text entry...

Not a huge fan of Java but might be able to take on some low-hanging
fruit once I've had a chance to tackle the JJB 2.0 API.

Maybe this is the wrong place to discuss, but is there any chance the
Gerrit project might consider a move toward Clojure as its primary
language? I suspect this could be done in a way that slowly deprecates
the use of Java over time but would need to spend time investigating
the current Gerrit architecture before making any strong claims about
this.

On Tue, Sep 29, 2015 at 3:30 PM, Zaro <zaro0508 at gmail.com> wrote:
> Hello All,
>
> I believe you are all familiar with Gerrit.  Our community relies on it
> quite heavily and it is one of the most important applications in our CI
> infrastructure. I work on the OpenStack-infra team and I've been hacking on
> Gerrit for a while. I'm the infra team's sole Gerrit developer. I also test
> all our Gerrit upgrades prior to infra upgrading Gerrit.  There are many
> Gerrit feature and bug fix requests coming from the OpenStack community
> however due to limited resources it has been a challenge to meet those
> requests.
>
> I've been fielding some of those requests and trying to make Gerrit better
> for OpenStack.  I was wondering whether there are any other folks in our
> community who might also like to hack on a large scale java application
> that's being used by many corporations and open source projects in the
> world.  If so this is an opportunity for you to contribute.  I'm hoping to
> get more OpenStackers involved with the Gerrit community so we can
> collectively make OpenStack better.  If you would like to get involved let
> the openstack-infra folks know[1] and we will try help get you going.
>
> For instance our last attempt to upgrading Gerrit failed due to a bug[2]
> that makes repos unusable on a diff timeout. This bug is still not fixed,
> so a nice way to contribute is to help us fix things like this so we can
> continue to use newer versions of Gerrit.
>
> [1] in #openstack-infra or on openstack-infra at lists.openstack.org
> [2] https://code.google.com/p/gerrit/issues/detail?id=3424
>
>
> Thank You.
> - Khai (AKA zaro)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


From tim at styra.com  Wed Sep 30 17:22:01 2015
From: tim at styra.com (Tim Hinrichs)
Date: Wed, 30 Sep 2015 17:22:01 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <B8D164BED956C5439875951895CB4B223BF1E8D7@CAFRFD1MSGUSRIA.ITServices.sbc.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
 <1443139720841.25541@vmware.com>
 <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
 <B8D164BED956C5439875951895CB4B223BF1E8D7@CAFRFD1MSGUSRIA.ITServices.sbc.com>
Message-ID: <CAJjxPAAQ3NtR2Tty9HoTh1ccppw_pBgaUtBv2zbUFKbG+v1OsA@mail.gmail.com>

Hi David,

There are 2 VM images for Congress that we're working on simultaneously:
Shiv's and Alex's.

1. Shiv's image is to help new people understand some of the use cases
Congress was designed for.  The goal is to include a bunch of use cases
that we have working.

2. Alex's image is the one we'll be using for the hands-on-lab in Tokyo.
This one accompanies the Google doc instructions for the Hands On Lab:
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
.

It sounds like you might be using Shiv's image with Alex's hands-on-lab
instructions, so the instructions won't necessarily line up with the image.

Tim



On Wed, Sep 30, 2015 at 9:45 AM KARR, DAVID <dk068x at att.com> wrote:

> I think I'm seeing similar errors, but I'm not certain.  With the OVA I
> downloaded last night, when I run "./rejoin-stack.sh", I get "Couldn't find
> ./stack-screenrc file; have you run stack.sh yet?"
>
>
>
> Concerning the original page with setup instructions, at
> https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
> , I note that the login user and password are different (probably
> obvious), as is the required path to "cd" to.
>
>
>
> Also, after starting the VM, the instructions say to run "ifconfig" to get
> the IP address of the VM, and then to ssh to the VM.  This seems odd.  If
> I've already done "interact with the console", then I'm already logged into
> the console.  The instructions also describe how to get to the Horizon
> client from your browser.  I'm not sure what this should say now.
>
>
>
> *From:* Shiv Haris [mailto:sharis at Brocade.com]
> *Sent:* Friday, September 25, 2015 3:35 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Thanks Alex, Zhou,
>
>
>
> I get errors from Congress when I do a re-join. These errors seem to be due
> to the order in which the services come up. Hence I still depend on
> running stack.sh after the VM is up and running. Please try out the new VM
> -- and also advise if you need to add any of your use cases. Also, re-join
> starts "screen" -- do we expect the end user to know how to use "screen"?
>
>
>
> I do understand that running "stack.sh" takes time -- but it does not do
> things that appear to be any kind of magic, which we want to avoid in
> order to keep the user engaged.
>
>
>
> I have uploaded a new version of the VM please experiment with this and
> let me know:
>
>
>
> http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova
>
>
>
> (root: vagrant password: vagrant)
>
>
>
> -Shiv
>
>
>
>
>
>
>
> *From:* Alex Yip [mailto:ayip at vmware.com <ayip at vmware.com>]
> *Sent:* Thursday, September 24, 2015 5:09 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> I was able to make devstack run without a network connection by disabling
> tempest.  So, I think it uses the loopback IP address, and that does not
> change, so rejoin-stack.sh works without a network at all.
>
>
>
> - Alex
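The tempest-disabling approach Alex describes is typically a one-line devstack setting; a hedged sketch of the relevant local.conf fragment (assuming a standard devstack layout -- the rest of the VM's configuration is unknown):

```ini
# local.conf fragment -- sketch only, assuming a standard devstack layout.
# Disabling the tempest service removes the network-dependent test setup,
# so stack.sh / rejoin-stack.sh can run against the loopback address.
[[local|localrc]]
disable_service tempest
```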
>
>
>
>
> ------------------------------
>
> *From:* Zhou, Zhenzan <zhenzan.zhou at intel.com>
> *Sent:* Thursday, September 24, 2015 4:56 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> rejoin-stack.sh works only if the VM's IP address has not changed, so using
> a NAT network and a fixed IP inside the VM can help.
>
>
>
> BR
>
> Zhou Zhenzan
>
>
>
> *From:* Alex Yip [mailto:ayip at vmware.com <ayip at vmware.com>]
> *Sent:* Friday, September 25, 2015 01:37
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> I have been using images, rather than snapshots.
>
>
>
> It doesn't take that long to start up.  First, I boot the VM which takes a
> minute or so.  Then I run rejoin-stack.sh which takes just another minute
> or so.  It's really not that bad, and rejoin-stack.sh restores vms and
> openstack state that was running before.
>
>
>
> - Alex
>
>
>
>
> ------------------------------
>
> *From:* Shiv Haris <sharis at Brocade.com>
> *Sent:* Thursday, September 24, 2015 10:29 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Hi Congress folks,
>
>
>
> I am looking for ideas. We want OpenStack to be running when the user
> instantiates the Usecase-VM. However, creating an OVA file is possible only
> when the VM is halted, which means OpenStack is not running and the user
> will have to run devstack again (which is time consuming) when the VM is
> restarted.
>
>
>
> The other option is to take a snapshot. It appears that taking a snapshot
> of the VM and using it in another setup is not very straightforward. It
> involves modifying the .vbox file and seems to be prone to user errors. I
> am leaning towards halting the machine and generating an OVA file.
>
>
>
> I am looking for suggestions.
>
>
>
> Thanks,
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Shiv Haris [mailto:sharis at Brocade.com <sharis at Brocade.com>]
> *Sent:* Thursday, September 24, 2015 9:53 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> First of all, I apologize for not making it to the meeting yesterday; I
> could not cut short another overlapping meeting.
>
>
>
> Also, Tim, thanks for the feedback. I have addressed some of the issues you
> posed; however, I am still working on some of the subtler issues raised.
> Once I have addressed them all, I will post another VM by the end of the week.
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Tim Hinrichs [mailto:tim at styra.com <tim at styra.com>]
> *Sent:* Friday, September 18, 2015 5:14 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> It's great to have this available!  I think it'll help people understand
> what's going on MUCH more quickly.
>
>
>
> Some thoughts.
>
> - The image is 3GB, which took me 30 minutes to download.  Are all VMs
> this big?  I think we should finish this as a VM but then look into doing
> it with containers to make it EVEN easier for people to get started.
>
>
>
> [shivharis] Yes, unfortunately that is the case. The disk size I set is
> 20 GB -- but the OVA compresses the image and disk to 3 GB. I will be
> looking at other options.
>
>
>
>
>
> - It gave me an error about a missing shared directory when I started up.
>
> [shivharis] will fix this
>
>
>
> - I expected devstack to be running when I launched the VM.  devstack
> startup time is substantial, and if there's a problem, it's good to assume
> the user won't know how to fix it.  Is it possible to have devstack up and
> running when we start the VM?  That said, it started up fine for me.
>
> [shivharis] OVA files can be created only when the VM is halted, so
> devstack will be down when you bring up the VM. I agree a snapshot would be
> a better choice.
>
>
>
> - It'd be good to have a README to explain how to use the use-case
> structure. It wasn't obvious to me.
>
> [shivharis] added.
>
>
>
> - The top-level dir of the Congress_Usecases folder has a
> Congress_Usecases folder within it.  I assume the inner one shouldn't be
> there?
>
> [shivharis] my automation issues, fixed.
>
>
>
> - When I ran the 10_install_policy.sh, it gave me a bunch of authorization
> problems.
>
> [shivharis] fixed
>
>
>
> But otherwise I think the setup looks reasonable.  Will there be an undo
> script so that we can run the use cases one after another without worrying
> about interactions?
>
> [shivharis] tricky, will find some way out.
>
>
>
> Tim
>
>
>
> [shivharis] Thanks
>
>
>
> On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com> wrote:
>
> Hi Congress folks,
>
>
>
> BTW the login/password for the VM is vagrant/vagrant
>
>
>
> -Shiv
>
>
>
>
>
> *From:* Shiv Haris [mailto:sharis at Brocade.com]
> *Sent:* Thursday, September 17, 2015 5:03 PM
> *To:* openstack-dev at lists.openstack.org
> *Subject:* [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Hi All,
>
>
>
> I have put my VM (virtualbox) at:
>
>
>
> http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova
> <https://urldefense.proofpoint.com/v2/url?u=http-3A__paloaltan.net_Congress_Congress-5FUsecases-5FSEPT-5F17-5F2015.ova&d=BQMGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=djA1lFdIf0--GIJ_8gr44Q&m=3IP4igrLri-BaK8VbjbEq2l_AGknCI7-t3UbP5VwlU8&s=wVyys8I915mHTzrOp8f0KLqProw6ygNfaMSP0T-yqCg&e=>
>
>
>
> I usually run this on a MacBook Air -- but it should work on other
> platforms as well. I chose VirtualBox since it is free.
>
>
>
> Please send me your use cases -- I can incorporate them in the VM and send
> you an updated image. Please take a look at the structure I have in place
> for the first use case; I would prefer it be the same for the other use
> cases. (However, I am still open to suggestions for changes.)
>
>
>
> Thanks,
>
>
>
> -Shiv
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/e5a5d421/attachment-0001.html>

From jordan.pittier at scality.com  Wed Sep 30 17:25:24 2015
From: jordan.pittier at scality.com (Jordan Pittier)
Date: Wed, 30 Sep 2015 19:25:24 +0200
Subject: [openstack-dev] Announcing Liberty RC1 availability in Debian
In-Reply-To: <560BCE52.5080100@debian.org>
References: <560BCE52.5080100@debian.org>
Message-ID: <CAAKgrcnK+ays05Uioyhv55DhF8L0Lx5gMCPNMMNZSMF9zQPWjQ@mail.gmail.com>

On Wed, Sep 30, 2015 at 1:58 PM, Thomas Goirand <zigo at debian.org> wrote:

> Hi everyone!
>
> 1/ Announcement
> ===============
>
> I'm pleased to announce, in advance of the final Liberty release, that
> Liberty RC1 not only has been fully uploaded to Debian Experimental, but
> also that the Tempest CI (which I maintain, and which is a package-only CI,
> no deployment tooling involved) shows that it's also fully installable and
> working. There are still some failures, but these are, I am guessing, not
> due to problems in the packaging but rather to some Tempest setup problems,
> which I intend to address.
>
> If you want to try out Liberty RC1 in Debian, you can either try it
> using Debian Sid + Experimental (recommended), or use the Jessie
> backport repository built out of Mirantis Jenkins server. Repositories
> are listed at this address:
>
> http://liberty-jessie.pkgs.mirantis.com/
>
> 2/ Quick note about Liberty Debian repositories
> ===============================================
>
> During DebConf 15, someone reported that having the Jessie backports
> hosted at a Mirantis address is disturbing.
>
> Note that, while the above really is a non-Debian (ie: non-official,
> private) repository, it only contains unmodified source packages,
> just rebuilt for Debian Stable. Please don't be put off by the
> "mirantis.com" domain name; I could just as well have set up a debian.net
> address (which has been on my todo list for a long time). But these are
> still Debian-only packages. Everything there is straight out of the Debian
> repositories; nothing is added, modified or removed.
>
> I believe that Liberty release in Sid, is currently working very well,
> but I haven't tested it as much as the Jessie backport.
>
> Starting with the Kilo release, I have been uploading packages to the
> official Debian backports repositories. I will do so as well for the
> Liberty release, after the final release is out, and after Liberty is
> fully migrated to Debian Testing (the rule for stable-backports is that
> packages *must* be available in Testing *first*, in order to provide an
> upgrade path). So I do expect Liberty to be available from
> jessie-backports maybe a few weeks *after* the final Liberty release.
> Before that, use the unofficial Debian repositories.
>
> 3/ Horizon dependencies still in NEW queue
> ==========================================
>
> It is also worth noting that Horizon hasn't been fully approved by the FTP
> masters, and that some packages are still remaining in the NEW queue.
> This isn't the first release with such an issue for Horizon. I hope
> that 1/ the FTP masters will approve the remaining packages soon and 2/ for
> Mitaka, the Horizon team will take care to freeze external dependencies
> (ie: new Javascript objects) earlier in the development cycle. I am
> hereby proposing that the Horizon 3rd party dependency freeze happens
> not later than Mitaka b2, so that we don't experience it again for the
> next release. Note that this problem affects both Debian and Ubuntu, as
> Ubuntu syncs dependencies from Debian.
>
> 5/ New packages in this release
> ===============================
>
> You may have noticed that the below packages are now part of Debian:
> - Manila
> - Aodh
> - ironic-inspector
> - Zaqar (this one is still in the FTP masters NEW queue...)
>
> I have also packaged a few more, but there are still blockers:
> - Congress (antlr version is too low in Debian)
> - Mistral
>
> 6/ Roadmap for Liberty final release
> ====================================
>
> Next on my roadmap for the final release of Liberty is to finish
> upgrading the remaining components to the latest version tested in the
> gate. It has been done for most OpenStack deliverables, but about a
> dozen are still in the lowest version supported by our global-requirements.
>
> There's also some remaining work:
> - more Neutron drivers
> - Gnocchi
> - Address the remaining Tempest failures, and widen the scope of tests
> (add Sahara, Heat, Swift and others to the tested projects using the
> Debian package CI)
>
> I of course welcome everyone to test Liberty RC1 before the final
> release, and report bugs on the Debian bug tracker if needed.
>
> Also note that the Debian packaging CI is fully free software, and part
> of Debian as well (you can look into the openstack-meta-packages package
> in git.debian.org, and in openstack-pkg-tools). Contributions in this
> field are also welcome.
>
> 7/ Thanks to Canonical & every OpenStack upstream projects
> ==========================================================
>
> I'd like to point out that, even though I did the majority of the work
> myself, for this release there was way more collaboration with
> Canonical on the dependency chain. Indeed, for this Liberty release,
> Canonical decided to upload every dependency to Debian first, and then
> only sync from it. So a big thanks to the Canonical server team for
> doing community work together with me. I just hope we can push this
> even further, especially trying to have consistent Nova and Neutron
> binary package names, as the current inconsistency is an issue for the
> Puppet folks.
>
> Last, I would like to hereby thank everyone who helped me fix issues
> in these packages. Thank you for being patient enough to explain,
> and for your understanding when I wrongly thought an issue was upstream
> when it really was in the packages. Thank you, IRC people, you
> are all awesome!
>
> 8/ Note about Mirantis OpenStack 7.0 and 8.0
> ============================================
>
> By the time you read this, MOS and Fuel 7.0 should already be out. For
> this release, lots of package sources have been taken directly from
> Debian. It is on our roadmap to push this effort even further for MOS
> 8.0 (working over Trusty). I am pleased that this is happening, so that the
> community version of OpenStack (ie: the Debian OpenStack) will have the
> benefits of more QA. I also hope that the project of doing packaging on
> upstream OpenStack Gerrit with gating will happen at least for a few
> packages during the Mitaka cycle, and that Debian will become the common
> community platform for OpenStack as I always wanted it to be.
>
> Happy OpenStack Liberty hacking,
>
> Thomas Goirand (zigo)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Good work, Thomas, thanks a lot!

We are not used to reading "thanks" messages from you :) So I enjoy this
email even more!

Jordan

From david.wilde at rackspace.com  Wed Sep 30 17:37:27 2015
From: david.wilde at rackspace.com (Dave Wilde)
Date: Wed, 30 Sep 2015 17:37:27 +0000
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis
 (stevelle) for core reviewer
In-Reply-To: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
References: <CAGSrQvyepXcdV8bBov0+jHBzgEN-5=-jeg2QvAsiMwB_-viZag@mail.gmail.com>
Message-ID: <etPan.560c1dd7.1e8660a8.a875@Crackintosh.local>

+1 from me as well

--
Dave Wilde
Sent with Airmail


On September 30, 2015 at 03:51:48, Jesse Pretorius (jesse.pretorius at gmail.com<mailto:jesse.pretorius at gmail.com>) wrote:

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the last cycle and always makes an effort to ensure that his responses are made after thorough testing where possible. I have found his input to be valuable.

--
Jesse Pretorius
IRC: odyssey4me
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sathlang at redhat.com  Wed Sep 30 17:43:04 2015
From: sathlang at redhat.com (Sofer Athlan-Guyot)
Date: Wed, 30 Sep 2015 19:43:04 +0200
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
	'composite namevar' or 'meaningless name'?
In-Reply-To: <560B8091.4060500@redhat.com> (Gilles Dubreuil's message of "Wed, 
 30 Sep 2015 16:26:25 +1000")
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <55F76F5C.2020106@redhat.com>
 <87vbbc2eiu.fsf@s390.unix4.net> <560A1110.5000209@redhat.com>
 <560ACDD1.5040901@redhat.com> <560B8091.4060500@redhat.com>
Message-ID: <87mvw3iz13.fsf@s390.unix4.net>

Gilles Dubreuil <gilles at redhat.com> writes:

> On 30/09/15 03:43, Rich Megginson wrote:
>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>>
>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>
>>>>> On 15/09/15 06:53, Rich Megginson wrote:
>>>>>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>>>>
>>>>>>>> A. The 'composite namevar' approach:
>>>>>>>>
>>>>>>>>      keystone_tenant {'projectX::domainY': ... }
>>>>>>>>    B. The 'meaningless name' approach:
>>>>>>>>
>>>>>>>>     keystone_tenant {'myproject': name='projectX',
>>>>>>>> domain=>'domainY',
>>>>>>>> ...}
>>>>>>>>
>>>>>>>> Notes:
>>>>>>>>    - Actually using both combined should work too with the domain
>>>>>>>> supposedly overriding the name part of the domain.
>>>>>>>>    - Please look at [1] this for some background between the two
>>>>>>>> approaches:
>>>>>>>>
>>>>>>>> The question
>>>>>>>> -------------
>>>>>>>> Decide between the two approaches, the one we would like to
>>>>>>>> retain for
>>>>>>>> puppet-keystone.
>>>>>>>>
>>>>>>>> Why it matters?
>>>>>>>> ---------------
>>>>>>>> 1. Domain names are mandatory in every user, group or project.
>>>>>>>> Besides
>>>>>>>> the backward compatibility period mentioned earlier, where no domain
>>>>>>>> means using the default one.
>>>>>>>> 2. Long term impact
>>>>>>>> 3. Both approaches are not completely equivalent which different
>>>>>>>> consequences on the future usage.
>>>>>>> I can't see why they couldn't be equivalent, but I may be missing
>>>>>>> something here.
>>>>>> I think we could support both.  I don't see it as an either/or
>>>>>> situation.
>>>>>>
>>>>>>>> 4. Being consistent
>>>>>>>> 5. Therefore the community to decide
>>>>>>>>
>>>>>>>> Pros/Cons
>>>>>>>> ----------
>>>>>>>> A.
>>>>>>> I think it's the B: meaningless approach here.
>>>>>>>
>>>>>>>>     Pros
>>>>>>>>       - Easier names
>>>>>>> That's subjective; creating unique and meaningful names doesn't
>>>>>>> look easy to me.
>>>>>> The point is that this allows choice - maybe the user already has some
>>>>>> naming scheme, or wants to use a more "natural" meaningful name -
>>>>>> rather
>>>>>> than being forced into a possibly "awkward" naming scheme with "::"
>>>>>>
>>>>>>    keystone_user { 'heat domain admin user':
>>>>>>      name => 'admin',
>>>>>>      domain => 'HeatDomain',
>>>>>>      ...
>>>>>>    }
>>>>>>
>>>>>>    keystone_user_role {'heat domain admin user@::HeatDomain':
>>>>>>      roles => ['admin']
>>>>>>      ...
>>>>>>    }
>>>>>>
>>>>>>>>     Cons
>>>>>>>>       - Titles have no meaning!
>>>>>> They have meaning to the user, not necessarily to Puppet.
>>>>>>
>>>>>>>>       - Cases where 2 or more resources could exists
>>>>>> This seems to be the hardest part - I still cannot figure out how
>>>>>> to use
>>>>>> "compound" names with Puppet.
>>>>>>
>>>>>>>>       - More difficult to debug
>>>>>> More difficult than it is already? :P
>>>>>>
>>>>>>>>       - Titles mismatch when listing the resources (self.instances)
>>>>>>>>
>>>>>>>> B.
>>>>>>>>     Pros
>>>>>>>>       - Unique titles guaranteed
>>>>>>>>       - No ambiguity between resource found and their title
>>>>>>>>     Cons
>>>>>>>>       - More complicated titles
>>>>>>>> My vote
>>>>>>>> --------
>>>>>>>> I would love to have the approach A for easier name.
>>>>>>>> But I've seen the challenge of maintaining the providers behind the
>>>>>>>> curtains and the confusion it creates with name/titles and when
>>>>>>>> not sure
>>>>>>>> about the domain we're dealing with.
>>>>>>>> Also I believe that supporting self.instances consistently with
>>>>>>>> meaningful name is saner.
>>>>>>>> Therefore I vote B
>>>>>>> +1 for B.
>>>>>>>
>>>>>>> My view is that this should be the advertised way, but the other
>>>>>>> method
>>>>>>> (meaningless) should be there if the user need it.
>>>>>>>
>>>>>>> So as far as I'm concerned the two idioms should co-exist.  This
>>>>>>> would
>>>>>>> mimic what is possible with all puppet resources.  For instance
>>>>>>> you can:
>>>>>>>
>>>>>>>     file { '/tmp/foo.bar': ensure => present }
>>>>>>>
>>>>>>> and you can
>>>>>>>
>>>>>>>     file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>>>>>>> present }
>>>>>>>
>>>>>>> The two refer to the same resource.
>>>>>> Right.
>>>>>>
>>>>> I disagree, using the name for the title is not creating a composite
>>>>> name. The latter requires adding at least another parameter to be part
>>>>> of the title.
>>>>>
>>>>> Also in the case of the file resource, a path/filename is a unique
>>>>> name,
>>>>> which is not the case of an Openstack user which might exist in several
>>>>> domains.
>>>>>
>>>>> I actually added the meaningful name case in:
>>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>>>>>
>>>>>
>>>>> But that doesn't work very well because without adding the domain to
>>>>> the
>>>>> name, the following fails:
>>>>>
>>>>> keystone_tenant {'project_1': domain => 'domain_A', ...}
>>>>> keystone_tenant {'project_1': domain => 'domain_B', ...}
>>>>>
>>>>> And adding the domain makes it a de-facto 'composite name'.
>>>> I agree that my example is not similar to what the keystone provider has
>>>> to do.  What I wanted to point out is that puppet users are used
>>>> to having this kind of *interface*: one where you put something
>>>> meaningful in the title and one where you put something meaningless.
>>>> The fact that the meaningful one is a compound one shouldn't matter to
>>>> the user.
>>>>
>>> There is a big blocker to making use of the domain name as a parameter.
>>> The issue is the limitation of autorequire.
>>>
>>> Because autorequire doesn't support any parameter other than the
>>> resource type and expects the resource title (or a list of) [1].
>>>
>>> So for instance, keystone_user requires the tenant project1 from
>>> domain1, then the resource name must be 'project1::domain1' because
>>> otherwise there is no way to specify 'domain1':
>>>
>
> Yeah, I kept forgetting this is only about resource relationship/order
> within a given catalog.
> And therefore this is *not* about guaranteeing referred resources exist,
>  for instance when created (or not) in a different puppet run/catalog.
>
> This might be obvious but it's easy (at least for me) to forget that
> when thinking of the resources list, in terms of openstack IDs for
> example inside self.instances!
>
>>> autorequire(:keystone_tenant) do
>>>    self[:tenant]
>>> end
>> 
>> Not exactly.  See https://review.openstack.org/#/c/226919/
>> 
>
> That's nice and makes the implementation easier.
> Thanks.
>
>> For example::
>> 
>>     keystone_tenant {'some random tenant':
>>       name   => 'project1',
>>       domain => 'domain1'
>>     }
>>     keystone_user {'some random user':
>>       name   => 'user1',
>>       domain => 'domain1'
>>     }
>> 
>> How does keystone_user_role need to be declared such that the
>> autorequire for keystone_user and keystone_tenant work?
>> 
>>     keystone_user_role {'some random user@some random tenant': ...}
>> 
>> In this case, I'm assuming this will work
>> 
>>   autorequire(:keystone_user) do
>>     self[:name].rpartition('@').first
>>   end
>>   autorequire(:keystone_tenant) do
>>     self[:name].rpartition('@').last
>>   end
>> 
>> The keystone_user require will be on 'some random user' and the
>> keystone_tenant require will be on 'some random tenant'.
>> 
>> So it should work, but _you have to be absolutely consistent in using
>> the title everywhere_.  
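The rpartition-based title splitting quoted above can be sketched and exercised in plain Ruby (the helper name is illustrative, not part of the provider):

```ruby
# Split a 'user@tenant' style title on the *last* '@', mirroring the
# autorequire blocks above; using the last '@' lets the user part itself
# contain '@' characters.
def split_user_role_title(title)
  user, _sep, tenant = title.rpartition('@')
  { user: user, tenant: tenant }
end

parts = split_user_role_title('some random user@some random tenant')
# parts[:user]   => "some random user"
# parts[:tenant] => "some random tenant"
```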

Ok, so it seems I found a puppet pattern that enables us not to depend
on the particular syntax used in the title to retrieve the resource.
If one uses "isnamevar" on multiple parameters, then using
"uniqueness_key" on the resource enables us to retrieve the resource in
the catalog, whatever the title of the resource is.

I have a working example in this change
https://review.openstack.org/#/c/226919/ for keystone_tenant with name
and domain as the keys.  All of the following work and can be easily
retrieved using [domain, name]:

  keystone_domain { 'domain_one': ensure => present }
  keystone_domain { 'domain_two': ensure => present }
  keystone_tenant { 'project_one::domain_one': ensure => present }
  keystone_tenant { 'project_one::domain_two': ensure => present }
  keystone_tenant { 'meaningless_title_one': name => 'project_less', domain => 'domain_one', ensure => present }

This will raise an error:

  keystone_tenant { 'project_one::domain_two': ensure => present }
  keystone_tenant { 'meaningless_title_one': name => 'project_one', domain => 'domain_two', ensure => present }

As puppet will correctly find that they are the same resource.
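The duplicate detection described here amounts to keying resources on the [name, domain] pair rather than on the title. A minimal plain-Ruby sketch of the idea (independent of Puppet's real internals; all names here are illustrative):

```ruby
# Plain-Ruby sketch: resources keyed by their uniqueness key [name, domain]
# instead of by title.  Two declarations with different titles but the same
# key are flagged as the same resource.
def uniqueness_key(resource)
  [resource[:name] || resource[:title], resource[:domain]]
end

def detect_duplicate(resources)
  seen = {}
  resources.each do |r|
    key = uniqueness_key(r)
    return [seen[key], r] if seen.key?(key)  # same key seen twice: duplicate
    seen[key] = r
  end
  nil
end

catalog = [
  { title: 'project_one::domain_two', name: 'project_one', domain: 'domain_two' },
  { title: 'meaningless_title_one',   name: 'project_one', domain: 'domain_two' },
]
duplicate = detect_duplicate(catalog)  # both entries share the same key
```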

>> That is, once you have chosen to give something
>> a title, you must use that title everywhere: in autorequires (as
>> described above), in resource references (e.g. Keystone_user['some
>> random user'] ~> Service['myservice']), and anywhere the resource will
>> be referenced by its title.
>> 
>
> Yes, the title must be the same everywhere it's used, but only within a
> given catalog.
>
> No matter how the dependent resources are named/titled as long as they
> provide the necessary resources.
>
> For instance, given the following resources:
>
> keystone_user {'first user': name => 'user1', domain => 'domain_A', ...}
> keystone_user {'user1::domain_B': ...}
> keystone_user {'user1': ...} # Default domain
> keystone_project {'project1::domain_A': ...}
> keystone_project {'project1': ...} # Default domain
>
> And their respective titles:
> 'first user'
> 'user1::domain_B'
> 'user1'
> 'project1::domain_A'
> 'project1'
>
> Then another resource to use them, let's say keystone_user_role.
> Using those unique titles one should be able to do things like these:
>
> keystone_user_role {'first user@project1::domain_A':
>   roles => ['role1']
> }
>
> keystone_user_role {'admin role for user1':
>   user    => 'user1',
>   project => 'project1',
>   roles   => ['admin'] }
>
> That's look cool but the drawback is the names are different when
> listing. That's expected since we're allowing meaningless titles.
>
> $ puppet resource keystone_user
>
> keystone_user { 'user1::Default':
>   ensure    => 'present',
>   domain_id => 'default',
>   email     => 'test@Default.com',
>   enabled   => 'true',
>   id        => 'fb56d86a21f54b09aa435b96fd321eee',
> }
> keystone_user { 'user1::domain_B':
>   ensure    => 'present',
>   domain_id => '79beff022efd4011b9a036155f450af8',
>   email     => 'user1@domain_B.com',
>   enabled   => 'true',
>   id        => '2174faac46f949fca44e2edab3d53675',
> }
> keystone_user { 'user1::domain_A':
>   ensure    => 'present',
>   domain_id => '9387210938a0ef1b3c843feee8a00a34',
>   email     => 'user1@domain_A.com',
>   enabled   => 'true',
>   id        => '1bfadcff825e4c188e8e4eb6ce9a2ff5',
> }
>
> Note: I changed the domain field to domain_id because it makes more
> sense here
>
> This is fine as long as when running any catalog, a same resource with a
> different name but same parameters means the same resource.
>
> If everyone agrees with such behavior, then we might be good to go.
>
> The exceptions must be addressed on a per case basis.
> Effectively, there are cases in Openstack where several objects with the
> exact same parameters can co-exist, for instance with the trust (See
> commit message in [1] for examples). In the trust case running the same
> catalog over and over will keep adding the resource (not really
> idempotent!). I've actually re-raised the issue with Keystone developers
> [2].
>
> [1] https://review.openstack.org/200996
> [2] https://bugs.launchpad.net/keystone/+bug/1475091
>

For the keystone_tenant resource, name and domain are isnamevar
parameters.  Using the "uniqueness_key" method we get the always unique,
always identical pair [<domain>, <name>]; then, once we have found the
resource, we can associate it in prefetch[10] and in autorequire without
any problem.  So if we create a unique key by using isnamevar on the
required parameters for each resource that needs it, then we get rid of
the dependence on the title to retrieve the resource.

Examples of resources that should have a composite key:
 - keystone_user: name and domain should be isnamevar. Then all the
   questions about title parsing would go away, replaced by robust key
   finding.

 - keystone_user_role: with username, user_domain_name, project_name,
   project_domain_name, and domain as its elements.

When any of the keys is not filled, it defaults to nil. A nil domain
would be associated with the default domain.

The point is to move away from "title parsing" to "composite key
matching". I'm quite sure it would simplify the code in a lot of
places and address the concerns raised here.
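As a rough illustration of composite key matching, here is a hypothetical plain-Ruby sketch of prefetch-style matching by [name, domain]; the helper names and the 'Default' fallback are assumptions for illustration, not Puppet's actual API:

```ruby
# Hypothetical sketch: declared resources are matched to instances
# discovered from Keystone by the composite key [name, domain], ignoring
# titles entirely.  The 'Default' fallback for a nil domain is an
# assumption, per the discussion above.
def composite_key(attrs)
  [attrs[:name], attrs[:domain] || 'Default']
end

def prefetch_by_key(declared, discovered)
  by_key = discovered.each_with_object({}) { |i, h| h[composite_key(i)] = i }
  declared.each_with_object({}) do |d, matched|
    found = by_key[composite_key(d)]
    matched[d[:title]] = found if found
  end
end

declared   = [{ title: 'meaningless_title_one', name: 'project_one', domain: 'domain_one' }]
discovered = [{ name: 'project_one', domain: 'domain_one', id: 'abc123' }]
matched = prefetch_by_key(declared, discovered)
# the declared resource is matched despite its meaningless title
```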

>> 
>>>
>>> Alternatively, as Sofer suggested (in a discussion we had), we could
>>> poke the catalog to retrieve the corresponding resource(s).
>> 
>> That is another question I posed in
>> https://review.openstack.org/#/c/226919/:
>> 
>> I guess we can look up the user resource and tenant resource from the
>> catalog based on the title?  e.g.
>> 
>>     user = puppet.catalog.resource.find(:keystone_user, 'some random user')
>>     userid = user[:id]
>> 
>>> Unfortunately, unless there is a way around it, that doesn't work
>>> because no matter what, autorequire wants a title.
>> 
>> Which I think we can provide.
>> 
>> The other tricky parts will be self.instances and self.prefetch.
>> 
>> I think self.instances can continue to use the 'name::domain' naming
>> convention, since it needs some way to create a unique title for all
>> resources.
>> 
>> The real work will be in self.prefetch, which will need to compare all
>> of the parameters/properties to see if a resource declared in a manifest
>> matches exactly a resource found in Keystone. In this case, we may have
>> to 'rename' the resource returned by self.instances to make it match the
>> one from the manifest so that autorequires and resource references
>> continue to work.
>> 
>>>
>>>
>>> So it seems for the scoped domain resources, we have to stick together
>>> the name and domain: '<name>::<domain>'.
>>>
>>> [1]
>>> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type.rb#L2003
>>>
>>>>>>> But, If that's indeed not possible to have them both,
>>>>> There are cases where having both won't be possible like the trusts,
>>>>> but
>>>>> why not for the resources supporting it.
>>>>>
>>>>> That said, I think we need to make a choice, at least to get
>>>>> started, to
>>>>> have something working, consistently, besides exceptions. Other options
>>>>> to be added later.
>>>> So we should go with the meaningful one first for consistency, I think.
>>>>
>>>>>>> then I would keep only the meaningful name.
>>>>>>>
>>>>>>>
>>>>>>> As a side note, someone raised an issue about the delimiter being
>>>>>>> hardcoded to "::".  This could be a property of the resource.  This
>>>>>>> would enable the user to use weird name with "::" in it and assign
>>>>>>> a "/"
>>>>>>> (for instance) to the delimiter property:
>>>>>>>
>>>>>>>     Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/",
>>>>>>> ... }
>>>>>>>
>>>>>>> bar::is::cool is the name of the domain and foo::blah is the project.
>>>>>> That's a good idea.  Please file a bug for that.
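For the record, the configurable-delimiter idea above could be parsed
roughly like this (an illustrative Python sketch with a hypothetical
helper name; not the module's actual code, and it assumes the name part
comes before the domain part, split at the last occurrence of the
delimiter):

```python
# Hypothetical title parser with a configurable delimiter.

def parse_title(title, delimiter="::"):
    """Split '<name><delimiter><domain>' at the LAST occurrence of the
    delimiter, so choosing a custom delimiter (e.g. "/") frees up '::'
    for use inside the name and the domain themselves."""
    name, sep, domain = title.rpartition(delimiter)
    if not sep:
        # No delimiter at all: the whole title is the name and the
        # domain falls back to the default domain.
        return title, None
    return name, domain

# Default delimiter:
assert parse_title("foo::bar") == ("foo", "bar")
# Custom delimiter, as in the Keystone_tenant example above:
assert parse_title("foo::blah/bar::is::cool", delimiter="/") == \
    ("foo::blah", "bar::is::cool")
```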
>>>>>>
>>>>>>>> Finally
>>>>>>>> ------
>>>>>>>> Thanks for reading that far!
>>>>>>>> To choose, please provide feedback with more pros/cons, examples and
>>>>>>>> your vote.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Gilles
>>>>>>>>
>>>>>>>>
>>>>>>>> PS:
>>>>>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>>>>>
>>>
>>> __________________________________________________________________________
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[10]: there is a problem in puppet with the way it handles prefetch and
composite namevars.  I still have to open the bug, though.  In the
meantime you will find in
https://review.openstack.org/#/c/226919/5/lib/puppet/provider/keystone_tenant/openstack.rb
an idiom that works.

-- 
Sofer Athlan-Guyot


From mfedosin at mirantis.com  Wed Sep 30 17:49:38 2015
From: mfedosin at mirantis.com (Mikhail Fedosin)
Date: Wed, 30 Sep 2015 20:49:38 +0300
Subject: [openstack-dev] [stable][glance] glance-stable-maint group
	refresher
In-Reply-To: <560BE389.6040505@gmail.com>
References: <EA70533067B8F34F801E964ABCA4C4410F4DF006@G9W0745.americas.hpqcorp.net>
 <560BE389.6040505@gmail.com>
Message-ID: <CAGk9pwaMKvjX=b24a+2crHJXNCRP-cUk37+K5T_Ui194b906pw@mail.gmail.com>

Thank you for your confidence in me, folks! I'll be happy to maintain the
stability of our project and continue working on its improvements.

Best regards,
Mike

On Wed, Sep 30, 2015 at 4:28 PM, Nikhil Komawar <nik.komawar at gmail.com>
wrote:

>
>
> On 9/30/15 8:46 AM, Kuvaja, Erno wrote:
>
> Hi all,
>
>
>
> I'd like to propose the following changes to the glance-stable-maint team:
>
> 1)      Removing Zhi Yan Liu from the group; unfortunately he has moved
> on to other ventures and is not actively participating in our operations
> anymore.
>
> +1 (always welcome back)
>
> 2)      Adding Mike Fedosin to the group; Mike has been reviewing and
> backporting patches to glance stable branches and is working with the right
> mindset. I think he would be a great addition to share the workload.
>
> +1 (definitely)
>
>
>
> Best,
>
> Erno (jokke_) Kuvaja
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
>
> Thanks,
> Nikhil
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/62e5de89/attachment.html>

From sharis at Brocade.com  Wed Sep 30 17:51:06 2015
From: sharis at Brocade.com (Shiv Haris)
Date: Wed, 30 Sep 2015 17:51:06 +0000
Subject: [openstack-dev] [Congress] Congress Usecases VM
In-Reply-To: <CAJjxPAAQ3NtR2Tty9HoTh1ccppw_pBgaUtBv2zbUFKbG+v1OsA@mail.gmail.com>
References: <d8281396042c4cec948a73ba37d0648a@HQ1WP-EXMB12.corp.brocade.com>
 <CAJjxPADrU=WWO0eSCnCQWR2=0=YQx-M5LW8ttYKad3e2hVOmNQ@mail.gmail.com>
 <27aa84ce3bd540f38ce0ffe830d71580@HQ1WP-EXMB12.corp.brocade.com>
 <c3f05df9db9644cba942892651815b0a@HQ1WP-EXMB12.corp.brocade.com>
 <1443116221875.72882@vmware.com>
 <EB8DB51184817F479FC9C47B120861EE1986D904@SHSMSX101.ccr.corp.intel.com>
 <1443139720841.25541@vmware.com>
 <e3c1fccdacc24c0a85e813f843e6b3d0@HQ1WP-EXMB12.corp.brocade.com>
 <B8D164BED956C5439875951895CB4B223BF1E8D7@CAFRFD1MSGUSRIA.ITServices.sbc.com>
 <CAJjxPAAQ3NtR2Tty9HoTh1ccppw_pBgaUtBv2zbUFKbG+v1OsA@mail.gmail.com>
Message-ID: <5089ab5826e64bd18deb7b5b32c75311@HQ1WP-EXMB12.corp.brocade.com>

Hi David,

Exactly what Tim mentioned in his email - there are 2 VMs.

The VM that I published has a README file in the home directory; log in with the credentials vagrant/vagrant.

Looking forward to your feedback.

-Shiv



From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Wednesday, September 30, 2015 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi David,

There are 2 VM images for Congress that we're working on simultaneously: Shiv's and Alex's.

1. Shiv's image is to help new people understand some of the use cases Congress was designed for.  The goal is to include a bunch of use cases that we have working.

2. Alex's image is the one we'll be using for the hands-on-lab in Tokyo.  This one accompanies the Google doc instructions for the Hands On Lab: https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub.

It sounds like you might be using Shiv's image with Alex's hands-on-lab instructions, so the instructions won't necessarily line up with the image.

Tim



On Wed, Sep 30, 2015 at 9:45 AM KARR, DAVID <dk068x at att.com<mailto:dk068x at att.com>> wrote:
I think I'm seeing similar errors, but I'm not certain.  With the OVA I downloaded last night, when I run "./rejoin-stack.sh", I get "Couldn't find ./stack-screenrc file; have you run stack.sh yet?"

Concerning the original page with setup instructions, at https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub , I note that the login user and password are different (probably obvious), and likewise the required path to "cd" to.

Also, after starting the VM, the instructions say to run "ifconfig" to get the IP address of the VM, and then to ssh to the VM.  This seems odd.  If I've already done "interact with the console", then I'm already logged into the console.  The instructions also describe how to get to the Horizon client from your browser.  I'm not sure what this should say now.

From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Friday, September 25, 2015 3:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Thanks Alex, Zhou,

I get errors from Congress when I do a re-join. These errors seem to be due to the order in which the services come up; hence I still depend on running stack.sh after the VM is up and running. Please try out the new VM, and advise if you need to add any of your use cases. Also, re-join starts "screen"; do we expect the end user to know how to use "screen"?

I do understand that running "stack.sh" takes time, but it does not do things that appear to be any kind of magic, which is what we want to avoid in order to get the user excited.

I have uploaded a new version of the VM; please experiment with it and let me know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant, password: vagrant)

-Shiv



From: Alex Yip [mailto:ayip at vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling tempest.  So, I think it uses the loopback IP address, and that does not change, so rejoin-stack.sh works without a network at all.



- Alex





________________________________
From: Zhou, Zhenzan <zhenzan.zhou at intel.com<mailto:zhenzan.zhou at intel.com>>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if the VM's IP address has not changed, so using a NAT network and a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:ayip at vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM, which takes a minute or so.  Then I run rejoin-stack.sh, which takes just another minute or so.  It's really not that bad, and rejoin-stack.sh restores the VMs and OpenStack state that were running before.



- Alex





________________________________
From: Shiv Haris <sharis at Brocade.com<mailto:sharis at Brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user instantiates the use-case VM. However, creating an OVA file is possible only when the VM is halted, which means OpenStack is not running and the user will have to run devstack again (which is time consuming) when the VM is restarted.

The alternative is to take a snapshot. However, taking a snapshot of the VM and using it in another setup is not very straightforward: it involves modifying the .vbox file and seems prone to user error. I am leaning towards halting the machine and generating an OVA file.

I am looking for suggestions.

Thanks,

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you raised; however, I am still working on some of the subtler ones. Once I have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:tim at styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this big?  I think we should finish this as a VM but then look into doing it with containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB, but the OVA compresses the image and disk to 3GB. I will look at other options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup time is substantial, and if there's a problem, it's good to assume the user won't know how to fix it.  Is it possible to have devstack up and running when we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack will be down when you bring up the VM. I agree a snapshot would be a better choice.

- It'd be good to have a README to explain how to use the use-case structure. It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script so that we can run the use cases one after another without worrying about interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris <sharis at brocade.com<mailto:sharis at brocade.com>> wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sharis at Brocade.com<mailto:sharis at Brocade.com>]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova<https://urldefense.proofpoint.com/v2/url?u=http-3A__paloaltan.net_Congress_Congress-5FUsecases-5FSEPT-5F17-5F2015.ova&d=BQMGaQ&c=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs&r=djA1lFdIf0--GIJ_8gr44Q&m=3IP4igrLri-BaK8VbjbEq2l_AGknCI7-t3UbP5VwlU8&s=wVyys8I915mHTzrOp8f0KLqProw6ygNfaMSP0T-yqCg&e=>

I usually run this on a MacBook Air, but it should work on other platforms as well. I chose VirtualBox since it is free.

Please send me your use cases; I can incorporate them in the VM and send you an updated image. Please take a look at the structure I have in place for the first use case; I would prefer it be the same for the other use cases. (However, I am still open to suggestions for changes.)

Thanks,

-Shiv

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/c47d618e/attachment.html>

From jpeeler at redhat.com  Wed Sep 30 17:54:00 2015
From: jpeeler at redhat.com (Jeff Peeler)
Date: Wed, 30 Sep 2015 13:54:00 -0400
Subject: [openstack-dev] [kolla] new yaml format for all.yml, need feedback
Message-ID: <CALesnTwtq5wRwOpZtYRaSAuQ8EYYZabTSdgHFOPNi3f4mHSvfA@mail.gmail.com>

The patch I just submitted[1] modifies the syntax of all.yml to use
dictionaries, which changes how variables are referenced. The key
point being in globals.yml, the overriding of a variable will change
from simply specifying the variable to using the dictionary value:

old:
api_interface: 'eth0'

new:
network:
    api_interface: 'eth0'

Preliminary feedback on IRC sounded positive, so I'll go ahead and
work on finishing the review immediately assuming that we'll go
forward. Please ping me if you hate this change so that I can stop the
work.

[1] https://review.openstack.org/#/c/229535/
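For clarity, here is what the change means once the YAML is parsed; the
`network` grouping key comes from the example above, and the snippet is a
sketch rather than actual kolla code:

```python
# Parsed globals.yml data, old flat format vs. new nested format.

# Old: every override is a top-level key.
old_globals = {"api_interface": "eth0"}

# New: related settings are grouped under a dictionary.
new_globals = {"network": {"api_interface": "eth0"}}

# In Ansible/Jinja2 terms the reference therefore changes from
#   {{ api_interface }}    to    {{ network.api_interface }}
assert old_globals["api_interface"] == "eth0"
assert new_globals["network"]["api_interface"] == "eth0"
```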


From stdake at cisco.com  Wed Sep 30 18:03:43 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Wed, 30 Sep 2015 18:03:43 +0000
Subject: [openstack-dev] [kolla] new yaml format for all.yml,
 need feedback
In-Reply-To: <CALesnTwtq5wRwOpZtYRaSAuQ8EYYZabTSdgHFOPNi3f4mHSvfA@mail.gmail.com>
References: <CALesnTwtq5wRwOpZtYRaSAuQ8EYYZabTSdgHFOPNi3f4mHSvfA@mail.gmail.com>
Message-ID: <D23171FD.13B8B%stdake@cisco.com>

I am in favor of this work if it lands before Liberty.

Regards
-steve


On 9/30/15, 10:54 AM, "Jeff Peeler" <jpeeler at redhat.com> wrote:

>The patch I just submitted[1] modifies the syntax of all.yml to use
>dictionaries, which changes how variables are referenced. The key
>point being in globals.yml, the overriding of a variable will change
>from simply specifying the variable to using the dictionary value:
>
>old:
>api_interface: 'eth0'
>
>new:
>network:
>    api_interface: 'eth0'
>
>Preliminary feedback on IRC sounded positive, so I'll go ahead and
>work on finishing the review immediately assuming that we'll go
>forward. Please ping me if you hate this change so that I can stop the
>work.
>
>[1] https://review.openstack.org/#/c/229535/
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



From stdake at cisco.com  Wed Sep 30 18:05:14 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Wed, 30 Sep 2015 18:05:14 +0000
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <D2305CD3.13957%stdake@cisco.com>
References: <D2305CD3.13957%stdake@cisco.com>
Message-ID: <D231722B.13B8D%stdake@cisco.com>

Michal,

The vote was unanimous.  Welcome to the Kolla Core Reviewer team.  I have added you to the appropriate gerrit group.

Regards
-steve


From: Steven Dake <stdake at cisco.com<mailto:stdake at cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 3:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

Hi folks,

I am proposing Michal for core reviewer.  Consider my proposal as a +1 vote.  Michal has done a fantastic job with rsyslog, has done a nice job overall contributing to the project for the last cycle, and has really improved his review quality and participation over the last several months.

Our process requires 3 +1 votes, with no veto (-1) votes.  If you're uncertain, it is best to abstain :)  I will leave the voting open for 1 week, until Tuesday, October 6th, or until there is a unanimous decision or a veto.

Regards
-steve
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/fa1d0e41/attachment.html>

From kkushaev at mirantis.com  Wed Sep 30 18:32:53 2015
From: kkushaev at mirantis.com (Kairat Kushaev)
Date: Wed, 30 Sep 2015 21:32:53 +0300
Subject: [openstack-dev] [glance] Models and validation for v2
In-Reply-To: <560C07FB.9090505@gmail.com>
References: <CAAetzei6b0emZ9JEy+W0jYA_VujXeMK+WcaFKSL-=pmnKmDgKQ@mail.gmail.com>
 <560C07FB.9090505@gmail.com>
Message-ID: <CAAetzejDK0_548nm2y-7KEAsg3saMusS3aSVKGPcKJVKNuAJZA@mail.gmail.com>

Agree with you. That's why I am asking about the reasoning. Perhaps we
need to figure out how to get rid of this in glanceclient.
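To make the pattern in question concrete, here is a simplified rendering
in plain Python (a hand-rolled stand-in for the jsonschema/warlock check,
not glanceclient's actual code; the schema and field names are invented):

```python
# The client fetches the image schema from the server, then validates
# every server response against it before handing back a model.

schema = {
    "properties": {"id": {"type": "string"}, "name": {"type": "string"}},
}

def validate(obj, schema):
    """Tiny stand-in for the schema validation glanceclient runs."""
    for key, spec in schema["properties"].items():
        if key in obj and spec["type"] == "string" \
                and not isinstance(obj[key], str):
            raise ValueError("%s is not a string" % key)

# Response from a read-only call such as image-show:
response = {"id": "abc123", "name": "cirros"}

# The thread's point: this adds a schema round-trip per call and can only
# ever fail if the *server* returned malformed data.
validate(response, schema)
```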

Best regards,
Kairat Kushaev

On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes <jaypipes at gmail.com> wrote:

> On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
>
>> Hi All,
>> In short terms, I am wondering why we are validating responses from
>> server when we are doing
>> image-show, image-list, member-list, metadef-namespace-show and other
>> read-only requests.
>>
>> AFAIK, we are building warlock models when receiving responses from
>> server (see [0]). Each model requires schema to be fetched from glance
>> server. It means that each time we are doing image-show, image-list,
>> image-create, member-list and others we are requesting schema from the
>> server. AFAIU, we are using models to dynamically validate that object
>> is in accordance with schema but is it the case when glance receives
>> responses from the server?
>>
>> Could somebody please explain me the reasoning of this implementation?
>> Have I missed some use cases where validation is required for server
>> responses?
>>
>> I also noticed that we have already faced some issues with this
>> implementation that led to "mocking" the validation ([1][2]).
>>
>
> The validation should not be done for responses, only ever requests (and
> it's unclear that there is value in doing this on the client side at all,
> IMHO).
>
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/5b5dba74/attachment.html>

From mestery at mestery.com  Wed Sep 30 18:55:06 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Wed, 30 Sep 2015 13:55:06 -0500
Subject: [openstack-dev] [neutron] pypi packages for networking sub-projects
Message-ID: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>

Folks:

In trying to release some networking sub-projects recently, I ran into an
issue [1] where I couldn't release some projects due to them not being
registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
but before that can merge, we need to make sure all projects have pypi
registrations in place. The following networking sub-projects do NOT have
pypi registrations in place and need them created following the guidelines
here [3]:

networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow the directions to grant
openstackci "Owner" permissions, which allow for the publishing of
packages to pypi:

networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2] which will then allow the
neutron-release team the ability to release pypi packages for those
packages.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3]
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/4cb0e619/attachment.html>

From mestery at mestery.com  Wed Sep 30 18:56:48 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Wed, 30 Sep 2015 13:56:48 -0500
Subject: [openstack-dev] [Neutron] Release of a neutron sub-project
In-Reply-To: <CAL3VkVws=+ynad4u5HEXiuOsrd92=b1b_vMVijeujdy2u9YB_Q@mail.gmail.com>
References: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>
 <CAL3VkVws=+ynad4u5HEXiuOsrd92=b1b_vMVijeujdy2u9YB_Q@mail.gmail.com>
Message-ID: <CAL3VkVxjJ=TKAnHDPkhn4jVx4+Z-ut5wm5bba1CPrakviG5+kw@mail.gmail.com>

On Tue, Sep 29, 2015 at 8:04 PM, Kyle Mestery <mestery at mestery.com> wrote:

> On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
> vadivel.openstack at gmail.com> wrote:
>
>> Hi,
>>
>> As per the Sub-Project Release process - i would like to tag and release
>> the following sub-project as part of upcoming Liberty release.
>> The process says to talk to one of the members of the 'neutron-release'
>> group. I couldn't find a group mail-id for this group, hence I am sending
>> this email to the dev list.
>>
>> I have just removed the version from setup.cfg and got the patch merged,
>> as specified in the release process. Can someone from the neutron-release
>> group make this sub-project release?
>>
>>
>
> Vlad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there so
> I can get your IRC nick in case I have questions.
>
>
It turns out that the networking-ale-omniswitch pypi setup isn't correct;
see [1] for more info and how to correct it. This turned out to be OK,
because it forced me to re-examine the other networking sub-projects and
their pypi setups to ensure consistency, which the thread found here [1]
will resolve.

Once you resolve this ping me on IRC and I'll release this for you.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075880.html


> Thanks!
> Kyle
>
>
>>
>> ALE Omniswitch
>> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
>> Launchpad: https://launchpad.net/networking-ale-omniswitch
>> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>>
>> Thanks,
>> Vad
>> --
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/694a0b50/attachment.html>

From michal.jastrzebski at intel.com  Wed Sep 30 18:57:48 2015
From: michal.jastrzebski at intel.com (Jastrzebski, Michal)
Date: Wed, 30 Sep 2015 18:57:48 +0000
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for
 core reviewer
In-Reply-To: <D231722B.13B8D%stdake@cisco.com>
References: <D2305CD3.13957%stdake@cisco.com> <D231722B.13B8D%stdake@cisco.com>
Message-ID: <5C451B3D6BCE1443888323FE7B0AA3061E61F1A7@IRSMSX107.ger.corp.intel.com>

Thanks everyone!

I really appreciate this, and I hope to help make kolla an even better project than it is right now (and right now it's pretty cool ;)). We have a great community, very diverse and very dedicated. It's a pleasure to work with all of you; let's keep up the great work in the following releases :)

Thank you again,
Michał

> -----Original Message-----
> From: Steven Dake (stdake) [mailto:stdake at cisco.com]
> Sent: Wednesday, September 30, 2015 8:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core
> reviewer
> 
> Michal,
> 
> The vote was unanimous.  Welcome to the Kolla Core Reviewer team.  I have
> added you to the appropriate gerrit group.
> 
> Regards
> -steve
> 
> 
> From: Steven Dake <stdake at cisco.com <mailto:stdake at cisco.com> >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org <mailto:openstack-
> dev at lists.openstack.org> >
> Date: Tuesday, September 29, 2015 at 3:20 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org <mailto:openstack-
> dev at lists.openstack.org> >
> Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core
> reviewer
> 
> 
> 
> 	Hi folks,
> 
> 	I am proposing Michal for core reviewer.  Consider my proposal as a
> +1 vote.  Michal has done a fantastic job with rsyslog, has done a nice job
> overall contributing to the project for the last cycle, and has really improved his
> review quality and participation over the last several months.
> 
> 	Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
> uncertain, it is best to abstain :)  I will leave the voting open for 1 week until
> Tuesday, October 6th, or until there is a unanimous decision or a veto.
> 
> 	Regards
> 	-steve



From e0ne at e0ne.info  Wed Sep 30 19:00:09 2015
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 30 Sep 2015 22:00:09 +0300
Subject: [openstack-dev]  [cinder] [Sahara] Block Device Driver updates
Message-ID: <CAGocpaF8ha4jz-UNG=-Wu+qzLvkhABWvmdcgAnfLF_VYsTV9bA@mail.gmail.com>

Hi team,

I know that the Block Device Driver (BDD) is not popular in the Cinder
community. The main issues were:

* the driver is not well maintained
* it doesn't meet the minimum feature set
* there is no CI for it
* it's not the Cinder way / it works only when the instance and volume
are created on the same host
* etc.

AFAIK, it's widely used in the Sahara & Hadoop communities because it
works fast. I won't discuss the driver's performance in this thread; I'll
share my performance test results once I finish them.

I'm going to share driver updates with you about the issues above.

1) the driver is not well maintained - we are working on it right now and
will fix any issues we find. We've got a devstack plugin [1] for this
driver.

2) it doesn't meet the minimum feature set - I've filed a blueprint [2]
for it. There are patches in gerrit that implement the needed features [3].

3) there is no CI for it - in the Cinder community, we've got a strong
requirement that each driver must have CI. I absolutely agree with that.
That's why a new infra job is proposed [4].

4) it works only when the instance and volume are created on the same
host - I've filed a blueprint [5], but after testing I've found that it's
already implemented by [6].


I hope I've answered all the questions that were asked in IRC and in the
comments for [6]. I will do my best to support this driver, and I will
propose a patch to remove it from the cinder tree if the community
decides to delete it.


[1] https://github.com/openstack/devstack-plugin-bdd
[2]
https://blueprints.launchpad.net/cinder/+spec/block-device-driver-minimum-features-set
[3]
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/block-device-driver-minimum-features-set,n,z
[4] https://review.openstack.org/228857
[5]
https://blueprints.launchpad.net/cinder/+spec/block-device-driver-via-iscsi
[6] https://review.openstack.org/#/c/200039/


Regards,
Ivan Kolodyazhny
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/aa5cefa0/attachment.html>

From samuel at yaple.net  Wed Sep 30 19:18:47 2015
From: samuel at yaple.net (Sam Yaple)
Date: Wed, 30 Sep 2015 14:18:47 -0500
Subject: [openstack-dev] [kolla] new yaml format for all.yml,
	need feedback
In-Reply-To: <D23171FD.13B8B%stdake@cisco.com>
References: <CALesnTwtq5wRwOpZtYRaSAuQ8EYYZabTSdgHFOPNi3f4mHSvfA@mail.gmail.com>
 <D23171FD.13B8B%stdake@cisco.com>
Message-ID: <CAJ3CzQVBoL+FCczAYKwuoOcTA94Q4WfMWFRUJKrRuva9AoZgpA@mail.gmail.com>

Also in favor if it lands before Liberty. But I don't want to see a format
change straight into Mitaka.

Sam Yaple

On Wed, Sep 30, 2015 at 1:03 PM, Steven Dake (stdake) <stdake at cisco.com>
wrote:

> I am in favor of this work if it lands before Liberty.
>
> Regards
> -steve
>
>
> On 9/30/15, 10:54 AM, "Jeff Peeler" <jpeeler at redhat.com> wrote:
>
> >The patch I just submitted[1] modifies the syntax of all.yml to use
> >dictionaries, which changes how variables are referenced. The key
> >point being in globals.yml, the overriding of a variable will change
> >from simply specifying the variable to using the dictionary value:
> >
> >old:
> >api_interface: 'eth0'
> >
> >new:
> >network:
> >    api_interface: 'eth0'
> >
> >Preliminary feedback on IRC sounded positive, so I'll go ahead and
> >work on finishing the review immediately assuming that we'll go
> >forward. Please ping me if you hate this change so that I can stop the
> >work.
> >
> >[1] https://review.openstack.org/#/c/229535/
> >
> >__________________________________________________________________________
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
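The behavioral difference Jeff describes, where overriding a variable in
globals.yml now means supplying a nested mapping instead of a flat key,
can be sketched with a small recursive merge. Plain Python dicts stand in
for the parsed YAML here, and Ansible's real variable-precedence rules
are more involved, so treat this only as an illustration:

```python
# Sketch of how a nested globals.yml override could be merged over
# all.yml defaults once variables are grouped into dictionaries.
# Plain dicts stand in for parsed YAML; Ansible's actual variable
# precedence is more involved than this.

def deep_merge(defaults, overrides):
    """Return defaults updated by overrides, recursing into sub-dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# all.yml defaults (the 'tunnel_interface' key is a made-up sibling)
all_yml = {'network': {'api_interface': 'eth1',
                       'tunnel_interface': 'eth1'}}
# new-style globals.yml override from Jeff's example
globals_yml = {'network': {'api_interface': 'eth0'}}

merged = deep_merge(all_yml, globals_yml)
# Only the overridden key changes; sibling keys keep their defaults.
```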
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/0ea08a07/attachment.html>

From sukhdevkapur at gmail.com  Wed Sep 30 19:21:39 2015
From: sukhdevkapur at gmail.com (Sukhdev Kapur)
Date: Wed, 30 Sep 2015 12:21:39 -0700
Subject: [openstack-dev] [neutron] pypi packages for networking
	sub-projects
In-Reply-To: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
References: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
Message-ID: <CA+wZVHRp1O1b5oujkSX7ccRJmbHp1wHbawPVD4Sbx374hYgYbg@mail.gmail.com>

Hey Kyle,

I am a bit confused by this. I just checked networking-arista and see
that the co-owner of the project is openstackci.
I also checked [1] and [2], and the settings for networking-arista are
correct as well.

What else is missing that makes you put networking-arista in the second
category?
Please advise.

Thanks
-Sukhdev


[1] - jenkins/jobs/projects.yaml
<https://review.openstack.org/#/c/229564/1/jenkins/jobs/projects.yaml>
[2] - zuul/layout.yaml
<https://review.openstack.org/#/c/229564/1/zuul/layout.yaml>

On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery <mestery at mestery.com> wrote:

> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
> networking-infoblox
> networking-powervm
>
> The following pypi registrations did not follow directions to ensure
> openstackci has "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
> networking-arista
> networking-l2gw
> networking-vsphere
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/95500dd1/attachment.html>

From muralirdev at gmail.com  Wed Sep 30 19:29:09 2015
From: muralirdev at gmail.com (Murali R)
Date: Wed, 30 Sep 2015 12:29:09 -0700
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <56084E35.20101@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56084E35.20101@redhat.com>
Message-ID: <CAO=R8o=W26dEp1VmvnnxtA2C9kSoBeNJQ-JKkGtMqX4qvbdXfw@mail.gmail.com>

Russell,

Are any additional option fields used in Geneve between hypervisors at
this time? If so, how do they translate to VxLAN when they hit the
gateway? For instance, I am interested to see whether we can translate
custom header info in VxLAN to Geneve headers and vice versa, and
whether flow commands are available to add conditional flows at this
time, or whether it is possible to extend them if need be.

Thanks
Murali

On Sun, Sep 27, 2015 at 1:14 PM, Russell Bryant <rbryant at redhat.com> wrote:

> On 09/27/2015 02:26 AM, WANG, Ming Hao (Tony T) wrote:
> > Russell,
> >
> > Thanks for your valuable information.
> > I understood Geneve is some kind of tunnel format for network
> virtualization encapsulation, just like VxLAN.
> > But I'm still confused by the connection between Geneve and VTEP.
> > I suppose VTEP should be on behalf of "VxLAN Tunnel Endpoint", which
> should be used for VxLAN only.
> >
> > Does it become some "common tunnel endpoint" in OVN, and can be also
> used as a tunnel endpoint for Geneve?
>
> When using VTEP gateways, both the Geneve and VxLAN protocols are being
> used.  Packets between hypervisors are sent using Geneve.  Packets
> between a hypervisor and the gateway are sent using VxLAN.
>
> --
> Russell Bryant
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/fe843260/attachment.html>

From jbelamaric at infoblox.com  Wed Sep 30 19:33:32 2015
From: jbelamaric at infoblox.com (John Belamaric)
Date: Wed, 30 Sep 2015 19:33:32 +0000
Subject: [openstack-dev] [neutron] pypi packages for networking
 sub-projects
In-Reply-To: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
References: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
Message-ID: <F554F900-EF91-4CA0-AEB0-BB476F475704@infoblox.com>

Kyle,

I have taken care of this for networking-infoblox. Please let me know if anything else is necessary.

Thanks,
John

On Sep 30, 2015, at 2:55 PM, Kyle Mestery <mestery at mestery.com<mailto:mestery at mestery.com>> wrote:

Folks:

In trying to release some networking sub-projects recently, I ran into an issue [1] where I couldn't release some projects due to them not being registered on pypi. I have a patch out [2] which adds pypi publishing jobs, but before that can merge, we need to make sure all projects have pypi registrations in place. The following networking sub-projects do NOT have pypi registrations in place and need them created following the guidelines here [3]:

networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow directions to ensure openstackci has "Owner" permissions, which allow for the publishing of packages to pypi:

networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2] which will then allow the neutron-release team the ability to release pypi packages for those packages.

Thanks!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/f3099e22/attachment.html>

From mestery at mestery.com  Wed Sep 30 19:42:03 2015
From: mestery at mestery.com (Kyle Mestery)
Date: Wed, 30 Sep 2015 14:42:03 -0500
Subject: [openstack-dev] [neutron] pypi packages for networking
	sub-projects
In-Reply-To: <CA+wZVHRp1O1b5oujkSX7ccRJmbHp1wHbawPVD4Sbx374hYgYbg@mail.gmail.com>
References: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
 <CA+wZVHRp1O1b5oujkSX7ccRJmbHp1wHbawPVD4Sbx374hYgYbg@mail.gmail.com>
Message-ID: <CAL3VkVyg7vF_VedqOw8Sfii8Jy0tZB0tmhq8N09CeA673Ont_g@mail.gmail.com>

Sukhdev, you're right; for some reason that one didn't show up in a pypi
search on pypi itself, but it does in Google. And it is correctly owned [1].

[1] https://pypi.python.org/pypi/networking_arista

On Wed, Sep 30, 2015 at 2:21 PM, Sukhdev Kapur <sukhdevkapur at gmail.com>
wrote:

> Hey Kyle,
>
> I am bit confused by this. I just checked networking-arista and see that
> the co-owner of the project is openstackci
> I also checked the [1] and [2] and the settings for networking-arista are
> correct as well.
>
> What else is missing which make you put networking-arista in the second
> category?
> Please advise.
>
> Thanks
> -Sukhdev
>
>
> [1] - jenkins/jobs/projects.yaml
> <https://review.openstack.org/#/c/229564/1/jenkins/jobs/projects.yaml>
> [2] - zuul/layout.yaml
> <https://review.openstack.org/#/c/229564/1/zuul/layout.yaml>
>
> On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery <mestery at mestery.com>
> wrote:
>
>> Folks:
>>
>> In trying to release some networking sub-projects recently, I ran into an
>> issue [1] where I couldn't release some projects due to them not being
>> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
>> but before that can merge, we need to make sure all projects have pypi
>> registrations in place. The following networking sub-projects do NOT have
>> pypi registrations in place and need them created following the guidelines
>> here [3]:
>>
>> networking-calico
>> networking-infoblox
>> networking-powervm
>>
>> The following pypi registrations did not follow directions to ensure
>> openstackci has "Owner" permissions, which allow for the publishing of
>> packages to pypi:
>>
>> networking-ale-omniswitch
>> networking-arista
>> networking-l2gw
>> networking-vsphere
>>
>> Once these are corrected, we can merge [2] which will then allow the
>> neutron-release team the ability to release pypi packages for those
>> packages.
>>
>> Thanks!
>> Kyle
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
>> [2] https://review.openstack.org/#/c/229564/1
>> [3]
>> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/9bb55a1e/attachment.html>

From harlowja at outlook.com  Wed Sep 30 19:48:40 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 30 Sep 2015 12:48:40 -0700
Subject: [openstack-dev] [election][TC] TC Candidacy
Message-ID: <BLU436-SMTP157548A21C7D99A7E37D506D84D0@phx.gbl>

Hi folks,

I'd like to propose my candidacy for the technical committee
elections.

I've been involved in OpenStack for around ~four~ years now, working
to help integrate it into various Yahoo! systems and infrastructure.
I've been involved with integration and creation (and maturation) of
many projects (and libraries); for example rpm and venv packaging (via
anvil), cloud-init (a related tool), doc8 (a doc checking tool),
taskflow (an oslo library), tooz (an oslo library), automaton (an oslo
library), kazoo (a dependent library) and more.

As mentioned above, my contributions to OpenStack have been at the
project and library level. My experience in oslo (a group of
folks that specialize in cross-project libraries and reduction of
duplication across projects) has helped me grow and gain knowledge
about how to work across various projects. Now I would like to help
OpenStack projects become ~more~ excellent technically. I'd like to
be able  to leverage (and share) the experience I have gained at
Yahoo! to help make OpenStack that much better (we have tens of
thousands of VMs and thousands of hypervisors, tens of
thousands of baremetal instances split across many clusters with
varying network topology and layout).

I'd like to join the TC to aid some of the on-going work that helps
overhaul pieces of OpenStack to make them more scalable, more fault
tolerant, and in all honesty more ~modern~. I believe we (as a TC)
need to perform ~more~ outreach to projects and provide more advice
and guidance with respect to which technologies will help them scale
in the long term (for example instead of reinventing service discovery
solutions and/or distributed locking, use other open source solutions
that provide it already in a battle-hardened manner) proactively
instead of reactively.

I believe some of this can be solved by trying to make sure the TC is
on-top of: https://review.openstack.org/#/q/status:open+project:openstack
/openstack-specs,n,z and ensuring proposed/accepted cross-project
initiatives do not linger. (I'd personally rather have a cross-project
spec be reviewed and marked as not applicable vs. having a spec
linger.)

In summary, I would like to focus on helping this outreach and
involvement become better (and yes some of that outreach goes beyond
the OpenStack community), helping get OpenStack projects onto scalable
solutions (where applicable) and help make OpenStack become a cloud
solution that can work well for all (instead of work well for small
clouds and not work so well for large ones). Of course on-going
efforts need to conclude (tags for example) first but I hope that as a
TC member I can help promote work on OpenStack that helps the long
term technical sustainability (at small and megascale) of OpenStack
become better.

TLDR; work on getting the TC more involved with the technical outreach
of OpenStack; reduce the focus on approving projects and tags, and
hopefully help shift the focus to the long-term technical sustainability
of OpenStack (at small and megascale), using my own experiences to help
in this process //

Thanks for considering me,

Joshua Harlow

------

Yahoo!

http://stackalytics.com/report/users/harlowja

Official submission @ https://review.openstack.org/229591


From rbryant at redhat.com  Wed Sep 30 19:49:41 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Wed, 30 Sep 2015 15:49:41 -0400
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <CAO=R8o=W26dEp1VmvnnxtA2C9kSoBeNJQ-JKkGtMqX4qvbdXfw@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56084E35.20101@redhat.com>
 <CAO=R8o=W26dEp1VmvnnxtA2C9kSoBeNJQ-JKkGtMqX4qvbdXfw@mail.gmail.com>
Message-ID: <560C3CD5.6020202@redhat.com>

On 09/30/2015 03:29 PM, Murali R wrote:
> Russell,
> 
> Are any additional options fields used in geneve between hypervisors at
> this time? If so, how do they translate to vxlan when it hits gw? For
> instance, I am interested to see if we can translate a custom header
> info in vxlan to geneve headers and vice-versa. 

Yes, geneve options are used. Specifically, there are three pieces of
metadata sent: a logical datapath ID (the logical switch, or network),
the source logical port, and the destination logical port.

Geneve is only used between hypervisors. VxLAN is only used between
hypervisors and a VTEP gateway. In that case, the additional metadata is
not included. There's just a tunnel ID in that case, used to identify
the source/destination logical switch on the VTEP gateway.

> And if there are flow
> commands available to add conditional flows at this time or if it is
> possible to extend if need be.

I'm not quite sure I understand this part.  Could you expand on what you
have in mind?

-- 
Russell Bryant


From sukhdevkapur at gmail.com  Wed Sep 30 19:53:33 2015
From: sukhdevkapur at gmail.com (Sukhdev Kapur)
Date: Wed, 30 Sep 2015 12:53:33 -0700
Subject: [openstack-dev] [neutron] pypi packages for networking
	sub-projects
In-Reply-To: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
References: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
Message-ID: <CA+wZVHQVWcfgEb3hykTNh=r3fN+Mp30XQDu4FTyy5HPsvh1kKw@mail.gmail.com>

Hey Kyle,

I have updated the ownership of networking-l2gw. I have +1'd your patch. As
soon as it merges the ACLs for the L2GW project will be fine as well.

Thanks for confirming about the networking-arista.

With this both of these packages should be good to go.

Thanks
-Sukhdev


On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery <mestery at mestery.com> wrote:

> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
> networking-infoblox
> networking-powervm
>
> The following pypi registrations did not follow directions to ensure
> openstackci has "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
> networking-arista
> networking-l2gw
> networking-vsphere
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/24313cf4/attachment.html>

From shardy at redhat.com  Wed Sep 30 20:05:34 2015
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 30 Sep 2015 21:05:34 +0100
Subject: [openstack-dev] [tripleo] How to selectively enable new services?
Message-ID: <20150930200533.GA16049@t430slt.redhat.com>

Hi all,

So I wanted to start some discussion on $subject, because atm we have a
couple of patches adding support for new services (which is great!):

Manila: https://review.openstack.org/#/c/188137/
Sahara: https://review.openstack.org/#/c/220863/

So, firstly I am *not* aiming to be any impediment to those landing, and I
know they have been in-progress for some time.  These look pretty close to
being ready to land and overall I think new service integration is a very
good thing for TripleO.

However, given the recent evolution towards the "big tent" of OpenStack, I
wanted to get some ideas on what an effective way to selectively enable
services would look like, as I can imagine not all users of TripleO want to
deploy all-the-services all of the time.

I was initially thinking we simply have e.g "EnableSahara" as a boolean in
overcloud-without-mergepy, and wire that in to the puppet manifests, such
that the services are not configured/started.  However comments in the
Sahara patch indicate it may be more complex than that, in particular
requiring changes to the loadbalancer puppet code and os-cloud-config.

This is all part of the more general "composable roles" problem, but is
there an initial step we can take, which will make it easy to simply
disable services (and ideally not pay the cost of configuring them at all)
on deployment?

Interested in people's thoughts on this - has anyone already looked into it,
or is there any existing pattern we can reuse?

As mentioned above, not aiming to block anything on this, I guess we can
figure it out and retro-fit it to whatever services folks want to
selectively disable later if needed.

Thanks,

Steve


From muralirdev at gmail.com  Wed Sep 30 20:09:34 2015
From: muralirdev at gmail.com (Murali R)
Date: Wed, 30 Sep 2015 13:09:34 -0700
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <560C3CD5.6020202@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56084E35.20101@redhat.com>
 <CAO=R8o=W26dEp1VmvnnxtA2C9kSoBeNJQ-JKkGtMqX4qvbdXfw@mail.gmail.com>
 <560C3CD5.6020202@redhat.com>
Message-ID: <CAO=R8o=mGRLXcetbZOLu_x9hjqNqUpbyDroToL=O5HiUEqzZMw@mail.gmail.com>

Russell,

For instance, if I have an NSH header embedded in VxLAN in an incoming
packet, I was wondering if I can transfer that to Geneve options
somehow. This is just an example; I may have other header info in either
VxLAN or IP that needs to enter the OVN network, and if we have generic
OVS commands to handle that, it will be useful. If the commands don't
exist but are extensible, then I can add that as well.





On Wed, Sep 30, 2015 at 12:49 PM, Russell Bryant <rbryant at redhat.com> wrote:

> On 09/30/2015 03:29 PM, Murali R wrote:
> > Russell,
> >
> > Are any additional options fields used in geneve between hypervisors at
> > this time? If so, how do they translate to vxlan when it hits gw? For
> > instance, I am interested to see if we can translate a custom header
> > info in vxlan to geneve headers and vice-versa.
>
> Yes, geneve options are used. Specifically, there are three pieces of
> metadata sent: a logical datapath ID (the logical switch, or network),
> the source logical port, and the destination logical port.
>
> Geneve is only used between hypervisors. VxLAN is only used between
> hypervisors and a VTEP gateway. In that case, the additional metadata is
> not included. There's just a tunnel ID in that case, used to identify
> the source/destination logical switch on the VTEP gateway.
>
> > And if there are flow
> > commands available to add conditional flows at this time or if it is
> > possible to extend if need be.
>
> I'm not quite sure I understand this part.  Could you expand on what you
> have in mind?
>
> --
> Russell Bryant
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/245f68e4/attachment.html>

From harlowja at outlook.com  Wed Sep 30 20:13:25 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 30 Sep 2015 13:13:25 -0700
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
 <1443628722351.56254@RACKSPACE.COM>
 <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>
Message-ID: <BLU436-SMTP2546834433CE3DA7DED7F00D84D0@phx.gbl>

Adrian Otto wrote:
> Thanks everyone who has provided feedback on this thread. The good
> news is that most of what has been asked for from Magnum is actually
> in scope already, and some of it has already been implemented. We
> never aimed to be a COE deployment service. That happens to be a
> necessity to achieve our more ambitious goal: We want to provide a
> compelling Containers-as-a-Service solution for OpenStack clouds in a
> way that offers maximum leverage of what's already in OpenStack,
> while giving end users the ability to use their favorite tools to
> interact with their COE of choice, with the multi-tenancy capability
> we expect from all OpenStack services, and simplified integration
> with a wealth of existing OpenStack services (Identity,
> Orchestration, Images, Networks, Storage, etc.).
>
> The areas we have disagreement are whether the features offered for
> the k8s COE should be mirrored in other COEs. We have not attempted
> to do that yet, and my suggestion is to continue resisting that
> temptation because it is not aligned with our vision. We are not here
> to re-invent container management as a hosted service. Instead, we
> aim to integrate prevailing technology, and make it work great with
> OpenStack. For example, adding docker-compose capability to Magnum is
> currently out-of-scope, and I think it should stay that way. With
> that said, I'm willing to have a discussion about this with the
> community at our upcoming Summit.
>
> An argument could be made for feature consistency among various COE
> options (Bay Types). I see this as a relatively low value pursuit.
> Basic features like integration with OpenStack Networking and
> OpenStack Storage services should be universal. Whether you can
> present a YAML file for a bay to perform internal orchestration is
> not important in my view, as long as there is a prevailing way of
> addressing that need. In the case of Docker Bays, you can simply
> point a docker-compose client at it, and that will work fine.
>

So an interesting question: how is tenancy going to work? Will there be
a Keystone tenancy <-> COE tenancy adapter? From my understanding, a
whole bay (COE?) is owned by a tenant, which is great for tenants that
want to ~experiment~ with a COE, but it seems disjoint from the end goal
of an integrated COE, where the tenancy model of both Keystone and the
COE is either the same or adapted via some adapter layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

    1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
    1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
    ...

All that tenancy information is in Keystone, not replicated/synced into

Thoughts?

This one becomes especially hard if said COE(s) don't even have a 
tenancy model in the first place :-/
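One shape such an adapter layer could take (purely hypothetical; none of
these class or method names exist in Magnum) is a thin mapping from
Keystone projects onto COE-native isolation units, e.g. Kubernetes
namespaces, so tenancy stays in Keystone and is projected into the bay
rather than replicated:

```python
# Hypothetical sketch of a Keystone -> COE tenancy adapter; none of
# these names exist in Magnum.  The idea: Keystone stays the source of
# truth, and each project is projected onto a COE isolation unit
# (a Kubernetes namespace here) on demand.

class TenancyAdapter:
    def __init__(self):
        # keystone project id -> COE namespace name
        self._mapping = {}

    def namespace_for(self, project_id, project_name):
        """Return (creating on first use) the COE namespace for a project."""
        if project_id not in self._mapping:
            # A real adapter would call the COE API here to create the
            # namespace and attach quota/RBAC derived from Keystone roles.
            self._mapping[project_id] = 'ks-%s' % project_name.lower()
        return self._mapping[project_id]

adapter = TenancyAdapter()
ns = adapter.namespace_for('abc123', 'yahoo-mail.us')
# Repeated calls for the same project return the same namespace.
```

This still falls over for COEs with no tenancy model at all, which is
exactly the hard case noted above.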

> Thanks,
>
> Adrian
>
>> On Sep 30, 2015, at 8:58 AM, Devdatta
>> Kulkarni<devdatta.kulkarni at RACKSPACE.COM>  wrote:
>>
>> +1 Hongbin.
>>
>> From perspective of Solum, which hopes to use Magnum for its
>> application container scheduling requirements, deep integration of
>> COEs with OpenStack services like Keystone will be useful.
>> Specifically, I am thinking that it will be good if Solum can
>> depend on Keystone tokens to deploy and schedule containers on the
>> Bay nodes instead of having to use COE specific credentials. That
>> way, container resources will become first class components that
>> can be monitored using Ceilometer, access controlled using
>> Keystone, and managed from within Horizon.
>>
>> Regards, Devdatta
>>
>>
>> From: Hongbin Lu<hongbin.lu at huawei.com> Sent: Wednesday, September
>> 30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
>> usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
>> compose = k8s?
>>
>>
>> +1 from me as well.
>>
>> I think what makes Magnum appealing is the promise to provide
>> container-as-a-service. I see coe deployment as a helper to achieve
>> the promise, instead of  the main goal.
>>
>> Best regards, Hongbin
>>
>>
>> From: Jay Lau [mailto:jay.lau.513 at gmail.com] Sent: September-29-15
>> 10:57 PM To: OpenStack Development Mailing List (not for usage
>> questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
>> k8s?
>>
>>
>>
>> +1 to Egor, I think that the final goal of Magnum is container as a
>> service but not coe deployment as a service. ;-)
>>
>> Especially as we are also working on the Magnum UI, which should
>> export some interfaces to enable end users to create container
>> applications, not only coe deployments.
>>
>> I hope that the Magnum can be treated as another "Nova" which is
>> focusing on container service. I know it is difficult to unify all
>> of the concepts in different coe (k8s has pod, service, rc, swarm
>> only has container, nova only has VM,  PM with different
>> hypervisors), but this deserves some deep dive and thinking to see
>> how we can move forward...
>>
>>
>>
>>
>>
>> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz<EGuz at walmartlabs.com>
>> wrote: definitely ;), but there are some thoughts on Tom's email.
>>
>> I agree that we shouldn't reinvent apis, but I don't think Magnum
>> should only focus at deployment (I feel we will become another
>> Puppet/Chef/Ansible module if we do it ):) I believe our goal should
>> be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
>> (Neutron/Cinder/Barbican/etc) even if we need to step in to
>> Kub/Mesos/Swarm communities for that.
>>
>> -- Egor
>>
>> From: Adrian
>> Otto<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>
>> Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>
>>
Date: Tuesday, September 29, 2015 at 08:44
>> To: "OpenStack Development Mailing List (not for usage
>> questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>
>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>
>> This is definitely a topic we should cover in Tokyo.
>>
>> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
>> (danehans)<danehans at cisco.com<mailto:danehans at cisco.com>>  wrote:
>>
>>
>> +1
>>
>> From: Tom Cammann<tom.cammann at hpe.com<mailto:tom.cammann at hpe.com>>
>> Reply-To:
>> "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>
>>
Date: Tuesday, September 29, 2015 at 2:22 AM
>> To:
>> "openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
>>
>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>
>> This has been my thinking in the last couple of months to
>> completely deprecate the COE specific APIs such as pod/service/rc
>> and container.
>>
>> As we now support Mesos, Kubernetes and Docker Swarm it's going to
>> be very difficult and probably a wasted effort trying to
>> consolidate their separate APIs under a single Magnum API.
>>
>> I'm starting to see Magnum as COEDaaS - Container Orchestration
>> Engine Deployment as a Service.
>>
>> On 29/09/15 06:30, Ton Ngo wrote: Would it make sense to ask the
>> opposite of Wanghua's question: should pod/service/rc be deprecated
>> if the user can easily get to the k8s api? Even if we want to
>> orchestrate these in a Heat template, the corresponding heat
>> resources can just interface with k8s instead of Magnum. Ton Ngo,
>>
>> Egor Guz ---09/28/2015 10:20:02 PM---Also I believe
>> docker compose is just a command line tool which doesn't have any api
>> or scheduling feat
>>
>> From: Egor Guz<EGuz at walmartlabs.com><mailto:EGuz at walmartlabs.com>
>> To:
>> "openstack-dev at lists.openstack.org"<mailto:openstack-dev at lists.openstack.org>
>> <openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>
>>
>>
Date: 09/28/2015 10:20 PM
>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>> ________________________________
>>
>>
>>
>> Also I believe docker compose is just a command line tool which
>> doesn't have any api or scheduling features. But during last Docker
>> Conf hackathon PayPal folks implemented docker compose executor for
>> Mesos (https://github.com/mohitsoni/compose-executor) which can
>> give you pod like experience.
>>
>> -- Egor
>>
>> From: Adrian
>> Otto<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>
>>
>>
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
>> Date: Monday, September 28, 2015 at 22:03 To: "OpenStack
>> Development Mailing List (not for usage
>> questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
>>
>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>
>> Wanghua,
>>
>> I do follow your logic, but docker-compose only needs the docker
>> API to operate. We are intentionally avoiding re-inventing the
>> wheel. Our goal is not to replace docker swarm (or other existing
>> systems), but to complement it/them. We want to offer users of
>> Docker the richness of native APIs and supporting tools. This way
>> they will not need to compromise features or wait longer for us to
>> implement each new feature as it is added. Keep in mind that our
>> pod, service, and replication controller resources pre-date  this
>> philosophy. If we started out with the current approach, those
>> would not exist in Magnum.
>>
>> Thanks,
>>
>> Adrian
>>
>> On Sep 28, 2015, at 8:32 PM, Wanghua
>> <wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com>>
>> wrote:
>>
>> Hi folks,
>>
>> Magnum now exposes service, pod, etc to users in kubernetes coe,
>> but exposes container in swarm coe. As I know, swarm is only a
>> scheduler of containers, which is like nova in openstack. Docker
>> compose is an orchestration program which is like heat in openstack.
>> k8s is the combination of scheduler and orchestration. So I think
>> it is better to expose the apis in compose to users which are at
>> the same level as k8s.
>>
>>
>> Regards Wanghua
>> __________________________________________________________________________
>>
>>
OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
>>
>>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>>
>>
OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>
>>
>>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>> __________________________________________________________________________
>>
>>
OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
__________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>>
>>
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
>>
>>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>>
>>
OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>>
>>
>>
>>
>>
>> Thanks, Jay Lau (Guangya Liu)
>>
>> __________________________________________________________________________
>>
>>
OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
>
>
OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From adrian.otto at rackspace.com  Wed Sep 30 20:38:38 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Wed, 30 Sep 2015 20:38:38 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <BLU436-SMTP2546834433CE3DA7DED7F00D84D0@phx.gbl>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
 <1443628722351.56254@RACKSPACE.COM>
 <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>
 <BLU436-SMTP2546834433CE3DA7DED7F00D84D0@phx.gbl>
Message-ID: <F794D8B2-433F-4182-B09F-097E2F32F0EA@rackspace.com>

Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever single-tenant COE you want into the bay (Kubernetes, Mesos, Docker Swarm). This allows you to use native tools to interact with the COE in that bay, rather than an OpenStack-specific client. If you want to use the OpenStack client to create bays, pods, and containers, you can do that today. You also have the choice, for example, to run kubectl against your Kubernetes bay, if you so desire.

Bays offer both management and security isolation between tenants. There is no intent to share a single bay between multiple tenants. In your use case, you would simply create two bays, one for each of the yahoo-mail.XX tenants. I am not convinced that having an uber-tenant makes sense.
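The one-bay-per-tenant model Adrian describes can be sketched as a trivial mapping. This is purely illustrative; the function, tenant names, and bay naming scheme below are hypothetical and do not reflect Magnum's actual API:

```python
# Illustrative sketch only: one isolated bay per Keystone tenant, as
# described above. Names are hypothetical; this is not Magnum's API.
def bays_for_tenants(tenants, coe="kubernetes"):
    """Give each tenant its own bay; bays are never shared between
    tenants, so no COE-level tenancy model is required."""
    return {t: {"bay_name": "%s-bay" % t, "coe": coe} for t in tenants}

bays = bays_for_tenants(["yahoo-mail.us", "yahoo-mail.in"])
```

The point of the sketch is that tenancy stays entirely in Keystone: isolation comes from giving each tenant a separate bay, not from teaching the COE about tenants.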

Adrian

On Sep 30, 2015, at 1:13 PM, Joshua Harlow <harlowja at outlook.com<mailto:harlowja at outlook.com>> wrote:

Adrian Otto wrote:
Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what's already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COEs. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I'm willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.


So an interesting question, but how is tenancy going to work, will there be a keystone tenancy <-> COE tenancy adapter? From my understanding a whole bay (COE?) is owned by a tenant, which is great for tenants that want to ~experiment~ with a COE but seems disjoint from the end goal of an integrated COE where the tenancy model of both keystone and the COE is either the same or is adapted via some adapter layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

  1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us<http://yahoo-mail.us/>'
  1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
  ...

All that tenancy information is in keystone, not replicated/synced into the COE (or in some other COE-specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a tenancy model in the first place :-/

Thanks,

Adrian

On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni<devdatta.kulkarni at RACKSPACE.COM<mailto:devdatta.kulkarni at RACKSPACE.COM>>  wrote:

+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu<hongbin.lu at huawei.com<mailto:hongbin.lu at huawei.com>> Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau.513 at gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially as we are also working on the Magnum UI, which should
export some interfaces to enable end users to create container
applications, not only coe deployments.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all
of the concepts in different coe (k8s has pod, service, rc, swarm
only has container, nova only has VM,  PM with different
hypervisors), but this deserves some deep dive and thinking to see
how we can move forward...





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz<EGuz at walmartlabs.com<mailto:EGuz at walmartlabs.com>>
wrote: definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent apis, but I don't think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):) I believe our goal should
be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
(Neutron/Cinder/Barbican/etc) even if we need to step in to
Kub/Mesos/Swarm communities for that.

-- Egor

From: Adrian
Otto<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>


Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage
questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>


Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
(danehans)<danehans at cisco.com<mailto:danehans at cisco.com><mailto:danehans at cisco.com>>  wrote:


+1

From: Tom Cammann<tom.cammann at hpe.com<mailto:tom.cammann at hpe.com><mailto:tom.cammann at hpe.com>>
Reply-To:
"openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>


Date: Tuesday, September 29, 2015 at 2:22 AM
To:
"openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>


Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to
completely deprecate the COE specific APIs such as pod/service/rc
and container.

As we now support Mesos, Kubernetes and Docker Swarm it's going to
be very difficult and probably a wasted effort trying to
consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote: Would it make sense to ask the
opposite of Wanghua's question: should pod/service/rc be deprecated
if the user can easily get to the k8s api? Even if we want to
orchestrate these in a Heat template, the corresponding heat
resources can just interface with k8s instead of Magnum. Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I believe
docker compose is just a command line tool which doesn't have any api
or scheduling feat

From: Egor Guz<EGuz at walmartlabs.com<mailto:EGuz at walmartlabs.com>><mailto:EGuz at walmartlabs.com>
To:
"openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>"<mailto:openstack-dev at lists.openstack.org>
<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>><mailto:openstack-dev at lists.openstack.org>


Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
________________________________



Also I believe docker compose is just a command line tool which
doesn't have any api or scheduling features. But during last Docker
Conf hackathon PayPal folks implemented docker compose executor for
Mesos (https://github.com/mohitsoni/compose-executor) which can
give you pod like experience.

-- Egor

From: Adrian
Otto<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com><mailto:adrian.otto at rackspace.com>>


Reply-To: "OpenStack Development Mailing List (not for usage questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03 To: "OpenStack
Development Mailing List (not for usage
questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org><mailto:openstack-dev at lists.openstack.org>>


Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker
API to operate. We are intentionally avoiding re-inventing the
wheel. Our goal is not to replace docker swarm (or other existing
systems), but to complement it/them. We want to offer users of
Docker the richness of native APIs and supporting tools. This way
they will not need to compromise features or wait longer for us to
implement each new feature as it is added. Keep in mind that our
pod, service, and replication controller resources pre-date  this
philosophy. If we started out with the current approach, those
would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, Wanghua
<wanghua.humble at gmail.com<mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com><mailto:wanghua.humble at gmail.com>>
wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe,
but exposes container in swarm coe. As I know, swarm is only a
scheduler of containers, which is like nova in openstack. Docker
compose is an orchestration program which is like heat in openstack.
k8s is the combination of scheduler and orchestration. So I think
it is better to expose the apis in compose to users which are at
the same level as k8s.


Regards Wanghua
__________________________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__________________________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe<mailto:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)


Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org><mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--





Thanks, Jay Lau (Guangya Liu)

__________________________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org<mailto:OpenStack-dev-request at lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/da5cf3f4/attachment-0001.html>

From zigo at debian.org  Wed Sep 30 20:59:01 2015
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 30 Sep 2015 22:59:01 +0200
Subject: [openstack-dev] [all] -1 due to line length violation in commit
 messages
In-Reply-To: <56056195.3030900@redhat.com>
References: <47F2F128-0427-4048-B088-ACA087B554C2@redhat.com>
 <56056195.3030900@redhat.com>
Message-ID: <560C4D15.1010406@debian.org>

On 09/25/2015 05:00 PM, Ryan Brown wrote:
> I believe the 72 limit is derived from 80-8 (terminal width - tab width)

If I'm not mistaken, 72 comes from the email format limitation.

Thomas



From sean at dague.net  Wed Sep 30 21:03:04 2015
From: sean at dague.net (Sean Dague)
Date: Wed, 30 Sep 2015 17:03:04 -0400
Subject: [openstack-dev] [nova] how to address boot from volume failures
Message-ID: <560C4E08.1030404@dague.net>

Today we attempted to branch devstack and grenade for liberty, and are
currently blocked because in liberty with openstack client and
novaclient, it's not possible to boot a server from volume using just
the volume id.

That's because of this change in novaclient -
https://review.openstack.org/#/c/221525/

That was done to resolve the issue that strong schema validation in Nova
started rejecting the kinds of calls that novaclient was making for boot
from volume: the bdm v1 and v2 code paths shared common code and
got a bit tangled up, so 3 bdm v2 params were being sent on every request.

However, https://review.openstack.org/#/c/221525/ removed the ==1 code
path. If you pass in just {"vda": "$volume_id"} the code falls through,
volume id is lost, and nothing is booted. This is how the devstack
exercises and osc recommends booting from volume. I expect other people
might be doing that as well.

There seem to be a few options going forward:

1) fix the client without a revert

This would bring back the ==1 code path, which is basically just setting
volume_id, and moving on. This means that until people upgrade their
client they lose access to this function on the server.
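Concretely, the restored special case would look something like the sketch below. This is not novaclient's actual code; the function name and the output field names are assumptions modeled on the block device mapping v2 format:

```python
def parse_legacy_bdm(bdm):
    """Sketch (not novaclient internals) of the ==1 path: a lone
    {device: volume_id} entry maps directly to a volume-backed boot
    mapping instead of falling through and silently dropping the id."""
    if len(bdm) != 1:
        raise ValueError("multi-entry mappings go through the v2 parser")
    device, volume_id = next(iter(bdm.items()))
    # Field names assumed from the bdm v2 request format.
    return [{"device_name": device,
             "uuid": volume_id,
             "source_type": "volume",
             "destination_type": "volume",
             "boot_index": 0}]

mapping = parse_legacy_bdm({"vda": "my-volume-id"})
```

The key property is that the single `{"vda": "$volume_id"}` form keeps working and the volume id is preserved rather than lost in the fallthrough.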

2) revert the client and loosen schema validation

If we revert the client to the old code, we also need to accept the fact
that novaclient has been sending 3 extra parameters to this API call
for as long as anyone can remember. We'd need to relax the Nova schema to
let those in and just accept that people are going to pass those.

3) fix osc and the novaclient cli to not use this code path. This also
requires everyone to upgrade both of those to not explode in the common
case of specifying boot from volume on the command line.

I slightly lean towards #2 on a compatibility front, but it's a chunk of
change at this point in the cycle, so I don't think there is a clear win
path. It would be good to collect opinions here. The bug tracking this
is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435

	-Sean

-- 
Sean Dague
http://dague.net


From rmeggins at redhat.com  Wed Sep 30 21:03:42 2015
From: rmeggins at redhat.com (Rich Megginson)
Date: Wed, 30 Sep 2015 15:03:42 -0600
Subject: [openstack-dev] [puppet][keystone] Choose domain names with
 'composite namevar' or 'meaningless name'?
In-Reply-To: <87mvw3iz13.fsf@s390.unix4.net>
References: <55F27CB7.2040101@redhat.com> <87h9mw68wd.fsf@s390.unix4.net>
 <55F733AF.6080005@redhat.com> <55F76F5C.2020106@redhat.com>
 <87vbbc2eiu.fsf@s390.unix4.net> <560A1110.5000209@redhat.com>
 <560ACDD1.5040901@redhat.com> <560B8091.4060500@redhat.com>
 <87mvw3iz13.fsf@s390.unix4.net>
Message-ID: <560C4E2E.7030205@redhat.com>

On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
> Gilles Dubreuil <gilles at redhat.com> writes:
>
>> On 30/09/15 03:43, Rich Megginson wrote:
>>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>>
>>>>>> On 15/09/15 06:53, Rich Megginson wrote:
>>>>>>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Gilles Dubreuil <gilles at redhat.com> writes:
>>>>>>>>
>>>>>>>>> A. The 'composite namevar' approach:
>>>>>>>>>
>>>>>>>>>       keystone_tenant {'projectX::domainY': ... }
>>>>>>>>>     B. The 'meaningless name' approach:
>>>>>>>>>
>>>>>>>>>      keystone_tenant {'myproject': name='projectX',
>>>>>>>>> domain=>'domainY',
>>>>>>>>> ...}
>>>>>>>>>
>>>>>>>>> Notes:
>>>>>>>>>     - Actually using both combined should work too with the domain
>>>>>>>>> supposedly overriding the name part of the domain.
>>>>>>>>>     - Please look at [1] this for some background between the two
>>>>>>>>> approaches:
>>>>>>>>>
>>>>>>>>> The question
>>>>>>>>> -------------
>>>>>>>>> Decide between the two approaches, the one we would like to
>>>>>>>>> retain for
>>>>>>>>> puppet-keystone.
>>>>>>>>>
>>>>>>>>> Why it matters?
>>>>>>>>> ---------------
>>>>>>>>> 1. Domain names are mandatory in every user, group or project.
>>>>>>>>> Besides
>>>>>>>>> the backward compatibility period mentioned earlier, where no domain
>>>>>>>>> means using the default one.
>>>>>>>>> 2. Long term impact
>>>>>>>>> 3. Both approaches are not completely equivalent, with different
>>>>>>>>> consequences for future usage.
>>>>>>>> I can't see why they couldn't be equivalent, but I may be missing
>>>>>>>> something here.
>>>>>>> I think we could support both.  I don't see it as an either/or
>>>>>>> situation.
>>>>>>>
>>>>>>>>> 4. Being consistent
>>>>>>>>> 5. Therefore the community to decide
>>>>>>>>>
>>>>>>>>> Pros/Cons
>>>>>>>>> ----------
>>>>>>>>> A.
>>>>>>>> I think it's the B: meaningless approach here.
>>>>>>>>
>>>>>>>>>      Pros
>>>>>>>>>        - Easier names
>>>>>>>> That's subjective; creating unique and meaningful names doesn't look
>>>>>>>> easy
>>>>>>>> to me.
>>>>>>> The point is that this allows choice - maybe the user already has some
>>>>>>> naming scheme, or wants to use a more "natural" meaningful name -
>>>>>>> rather
>>>>>>> than being forced into a possibly "awkward" naming scheme with "::"
>>>>>>>
>>>>>>>     keystone_user { 'heat domain admin user':
>>>>>>>       name => 'admin',
>>>>>>>       domain => 'HeatDomain',
>>>>>>>       ...
>>>>>>>     }
>>>>>>>
>>>>>>>     keystone_user_role {'heat domain admin user@::HeatDomain':
>>>>>>>       roles => ['admin']
>>>>>>>       ...
>>>>>>>     }
>>>>>>>
>>>>>>>>>      Cons
>>>>>>>>>        - Titles have no meaning!
>>>>>>> They have meaning to the user, not necessarily to Puppet.
>>>>>>>
>>>>>>>>>        - Cases where 2 or more resources could exists
>>>>>>> This seems to be the hardest part - I still cannot figure out how
>>>>>>> to use
>>>>>>> "compound" names with Puppet.
>>>>>>>
>>>>>>>>>        - More difficult to debug
>>>>>>> More difficult than it is already? :P
>>>>>>>
>>>>>>>>>        - Titles mismatch when listing the resources (self.instances)
>>>>>>>>>
>>>>>>>>> B.
>>>>>>>>>      Pros
>>>>>>>>>        - Unique titles guaranteed
>>>>>>>>>        - No ambiguity between resource found and their title
>>>>>>>>>      Cons
>>>>>>>>>        - More complicated titles
>>>>>>>>> My vote
>>>>>>>>> --------
>>>>>>>>> I would love to have the approach A for easier name.
>>>>>>>>> But I've seen the challenge of maintaining the providers behind the
>>>>>>>>> curtains and the confusion it creates with name/titles and when
>>>>>>>>> not sure
>>>>>>>>> about the domain we're dealing with.
>>>>>>>>> Also I believe that supporting self.instances consistently with
>>>>>>>>> meaningful name is saner.
>>>>>>>>> Therefore I vote B
>>>>>>>> +1 for B.
>>>>>>>>
>>>>>>>> My view is that this should be the advertised way, but the other
>>>>>>>> method
>>>>>>>> (meaningless) should be there if the user need it.
>>>>>>>>
>>>>>>>> So as far as I'm concerned the two idioms should co-exist.  This
>>>>>>>> would
>>>>>>>> mimic what is possible with all puppet resources.  For instance
>>>>>>>> you can:
>>>>>>>>
>>>>>>>>      file { '/tmp/foo.bar': ensure => present }
>>>>>>>>
>>>>>>>> and you can
>>>>>>>>
>>>>>>>>      file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>>>>>>>> present }
>>>>>>>>
>>>>>>>> The two refer to the same resource.
>>>>>>> Right.
>>>>>>>
>>>>>> I disagree, using the name for the title is not creating a composite
>>>>>> name. The latter requires adding at least another parameter to be part
>>>>>> of the title.
>>>>>>
>>>>>> Also, in the case of the file resource, a path/filename is a unique
>>>>>> name, which is not the case for an OpenStack user, which might exist
>>>>>> in several domains.
>>>>>>
>>>>>> I actually added the meaningful name case in:
>>>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>>>>>>
>>>>>>
>>>>>> But that doesn't work very well because without adding the domain to
>>>>>> the
>>>>>> name, the following fails:
>>>>>>
>>>>>> keystone_tenant {'project_1': domain => 'domain_A', ...}
>>>>>> keystone_tenant {'project_1': domain => 'domain_B', ...}
>>>>>>
>>>>>> And adding the domain makes it a de-facto 'composite name'.
>>>>> I agree that my example is not similar to what the keystone provider has
>>>>> to do.  What I wanted to point out is that puppet users are used to
>>>>> having this kind of *interface*: one where you put something
>>>>> meaningful in the title and one where you put something meaningless.
>>>>> The fact that the meaningful one is a compound one shouldn't matter to
>>>>> the user.
>>>>>
>>>> There is a big blocker to making use of the domain name as a parameter.
>>>> The issue is the limitation of autorequire.
>>>>
>>>> Because autorequire doesn't support any parameter other than the
>>>> resource type, and expects the resource title (or a list of them) [1].
>>>>
>>>> So if, for instance, a keystone_user requires the tenant project1 from
>>>> domain1, then the resource title must be 'project1::domain1', because
>>>> otherwise there is no way to specify 'domain1':
>>>>
>> Yeah, I kept forgetting this is only about resource relationship/order
>> within a given catalog.
>> And therefore this is *not* about guaranteeing referred resources exist,
>>   for instance when created (or not) in a different puppet run/catalog.
>>
>> This might be obvious but it's easy (at least for me) to forget that
>> when thinking of the resources list, in terms of openstack IDs for
>> example inside self.instances!
>>
>>>> autorequire(:keystone_tenant) do
>>>>     self[:tenant]
>>>> end
>>> Not exactly.  See https://review.openstack.org/#/c/226919/
>>>
>> That's nice and makes the implementation easier.
>> Thanks.
>>
>>> For example::
>>>
>>>      keystone_tenant {'some random tenant':
>>>        name   => 'project1',
>>>        domain => 'domain1'
>>>      }
>>>      keystone_user {'some random user':
>>>        name   => 'user1',
>>>        domain => 'domain1'
>>>      }
>>>
>>> How does keystone_user_role need to be declared such that the
>>> autorequire for keystone_user and keystone_tenant work?
>>>
>>>      keystone_user_role {'some random user@some random tenant': ...}
>>>
>>> In this case, I'm assuming this will work
>>>
>>>    autorequire(:keystone_user) do
>>>      self[:name].rpartition('@').first
>>>    end
>>>    autorequire(:keystone_tenant) do
>>>      self[:name].rpartition('@').last
>>>    end
>>>
>>> The keystone_user require will be on 'some random user' and the
>>> keystone_tenant require will be on 'some random tenant'.
>>>
>>> So it should work, but _you have to be absolutely consistent in using
>>> the title everywhere_.
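To make the splitting concrete, here is a plain-Ruby sketch of the `rpartition('@')` logic quoted above (the helper name is made up for illustration; it is not part of the provider code):

```ruby
# Sketch: how a keystone_user_role title like 'user_title@tenant_title'
# would be split for the two autorequires above.  rpartition splits on
# the *last* '@', returning [head, separator, tail].
def split_user_role_title(title)
  user, sep, tenant = title.rpartition('@')
  raise ArgumentError, "no '@' in #{title.inspect}" if sep.empty?
  { user: user, tenant: tenant }
end

parts = split_user_role_title('some random user@some random tenant')
# parts[:user]   => "some random user"
# parts[:tenant] => "some random tenant"
```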
> Ok, so it seems I found a puppet pattern that could enable us to not
> depend on the particular syntax used in the title to retrieve the
> resource.  If one uses "isnamevar" on multiple parameters, then using
> "uniqueness_key" on the resource enables us to retrieve the resource in
> the catalog, whatever the title of the resource is.
>
> I have a working example in this change
> https://review.openstack.org/#/c/226919/ for keystone_tenant with name,
> and domain as the keys.  All of the following work and can be easily
> retrieved using [domain, name]:
>
>    keystone_domain { 'domain_one': ensure => present }
>    keystone_domain { 'domain_two': ensure => present }
>    keystone_tenant { 'project_one::domain_one': ensure => present }
>    keystone_tenant { 'project_one::domain_two': ensure => present }
>    keystone_tenant { 'meaningless_title_one': name => 'project_less', domain => 'domain_one', ensure => present }
>
> This will raise an error:
>
>    keystone_tenant { 'project_one::domain_two': ensure => present }
>    keystone_tenant { 'meaningless_title_one': name => 'project_one', domain => 'domain_two', ensure => present }
>
> As puppet will correctly find that they are the same resource.

Great!
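As a plain-Ruby sketch of the behaviour described (a toy catalog; this `uniqueness_key` is only a stand-in for Puppet's real method, which returns the tuple of namevar values):

```ruby
# Toy model of composite namevars: a resource is identified by
# [name, domain], regardless of its title.
Tenant = Struct.new(:title, :name, :domain) do
  def uniqueness_key
    [name, domain]
  end
end

def add_to_catalog(catalog, tenant)
  key = tenant.uniqueness_key
  if catalog.key?(key)
    raise "Duplicate resource: #{tenant.title} collides with #{catalog[key].title}"
  end
  catalog[key] = tenant
end

catalog = {}
add_to_catalog(catalog, Tenant.new('project_one::domain_one', 'project_one', 'domain_one'))
add_to_catalog(catalog, Tenant.new('meaningless_title_one', 'project_less', 'domain_one'))
# Adding 'project_one::domain_two' and then a meaningless title with the
# same [name, domain] pair would raise, as in the example above.
```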

>
>>> That is, once you have chosen to give something
>>> a title, you must use that title everywhere: in autorequires (as
>>> described above), in resource references (e.g. Keystone_user['some
>>> random user'] ~> Service['myservice']), and anywhere the resource will
>>> be referenced by its title.
>>>
>> Yes, the title must be the same everywhere it's used, but only within a
>> given catalog.
>>
>> It doesn't matter how the dependent resources are named/titled, as long
>> as they provide the necessary resources.
>>
>> For instance, given the following resources:
>>
>> keystone_user {'first user': name => 'user1', domain => 'domain_A', ...}
>> keystone_user {'user1::domain_B': ...}
>> keystone_user {'user1': ...} # Default domain
>> keystone_project {'project1::domain_A': ...}
>> keystone_project {'project1': ...} # Default domain
>>
>> And their respective titles:
>> 'first user'
>> 'user1::domain_B'
>> 'user1'
>> 'project1::domain_A'
>> 'project1'
>>
>> Then another resource to use them, let's say keystone_user_role.
>> Using those unique titles one should be able to do things like these:
>>
>> keystone_user_role {'first user@project1::domain_A':
>>    roles => ['role1']
>> }
>>
>> keystone_user_role {'admin role for user1':
>>    user    => 'user1',
>>    project => 'project1',
>>    roles   => ['admin'],
>> }
>>
>> That looks cool, but the drawback is that the names are different when
>> listing. That's expected, since we're allowing meaningless titles.
>>
>> $ puppet resource keystone_user
>>
>> keystone_user { 'user1::Default':
>>    ensure    => 'present',
>>    domain_id => 'default',
>>    email     => 'test@Default.com',
>>    enabled   => 'true',
>>    id        => 'fb56d86a21f54b09aa435b96fd321eee',
>> }
>> keystone_user { 'user1::domain_B':
>>    ensure    => 'present',
>>    domain_id => '79beff022efd4011b9a036155f450af8',
>>    email     => 'user1@domain_B.com',
>>    enabled   => 'true',
>>    id        => '2174faac46f949fca44e2edab3d53675',
>> }
>> keystone_user { 'user1::domain_A':
>>    ensure    => 'present',
>>    domain_id => '9387210938a0ef1b3c843feee8a00a34',
>>    email     => 'user1@domain_A.com',
>>    enabled   => 'true',
>>    id        => '1bfadcff825e4c188e8e4eb6ce9a2ff5',
>> }
>>
>> Note: I changed the domain field to domain_id because it makes more
>> sense here
>>
>> This is fine as long as, when running any catalog, the same resource
>> with a different title but the same parameters means the same resource.
>>
>> If everyone agrees with such behavior, then we might be good to go.
>>
>> The exceptions must be addressed on a per case basis.
>> Indeed, there are cases in OpenStack where several objects with the
>> exact same parameters can co-exist, for instance trusts (see the
>> commit message in [1] for examples). In the trust case, running the same
>> catalog over and over will keep adding the resource (not really
>> idempotent!). I've actually re-raised the issue with the Keystone
>> developers [2].
>>
>> [1] https://review.openstack.org/200996
>> [2] https://bugs.launchpad.net/keystone/+bug/1475091
>>
> For the keystone_tenant resource, name and domain are isnamevar
> parameters.  Using the "uniqueness_key" method we get the always-unique,
> always-the-same pair [<domain>, <name>]; then, when we have found the
> resource, we can associate it in prefetch [10] and in autorequire without
> any problem.  So if we create a unique key by using isnamevar on the
> required parameters for each resource that needs it, then we get rid of
> the dependence on the title to retrieve the resource.
>
> Example of resource that should have a composite key:
>   - keystone_user: name and domain should be isnamevar. Then all the
>     questions about title parsing would go away, with robust key
>     finding.
>
>   - user_role with username, user_domain_name, project_name, project_domain_name, domain as its elements.
>
> When any of the keys are not filled, they default to nil. A nil domain
> would be associated with the default domain.
>
> The point is to move away from "title parsing" to "composite key
> matching". I'm quite sure it would simplify the code in a lot of
> places and solve the concerns raised here.
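A minimal sketch of the nil-to-default rule in plain Ruby (assuming 'Default' is the default domain's name; the helper is hypothetical, not Puppet API):

```ruby
DEFAULT_DOMAIN = 'Default'

# Build the composite key for a user_role, filling unset (nil) domain
# parts with the default domain, as described above.
def user_role_key(username:, user_domain: nil, project: nil, project_domain: nil)
  [username,
   user_domain || DEFAULT_DOMAIN,
   project,
   project_domain || DEFAULT_DOMAIN]
end

user_role_key(username: 'user1', project: 'project1')
# => ["user1", "Default", "project1", "Default"]
```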

How does resource naming work at the Puppet manifest level?  For example:

   keystone_user {'some user':
     name => 'someuser',
     domain => 'domain',
   }

Would I use

   some_resource { 'name':
     requires => Keystone_user['some user'],
   }

or ???

>
>>>> Alternatively, as Sofer suggested (in a discussion we had), we could
>>>> poke the catalog to retrieve the corresponding resource(s).
>>> That is another question I posed in
>>> https://review.openstack.org/#/c/226919/:
>>>
>>> I guess we can look up the user resource and tenant resource from the
>>> catalog based on the title?  e.g.
>>>
>>>      user = puppet.catalog.resource.find(:keystone_user, 'some random user')
>>>      userid = user[:id]
>>>
>>>> Unfortunately, unless there is a way around it, that doesn't work
>>>> because, no matter what, autorequire wants a title.
>>> Which I think we can provide.
>>>
>>> The other tricky parts will be self.instances and self.prefetch.
>>>
>>> I think self.instances can continue to use the 'name::domain' naming
>>> convention, since it needs some way to create a unique title for all
>>> resources.
>>>
>>> The real work will be in self.prefetch, which will need to compare all
>>> of the parameters/properties to see if a resource declared in a manifest
>>> matches exactly a resource found in Keystone. In this case, we may have
>>> to 'rename' the resource returned by self.instances to make it match the
>>> one from the manifest so that autorequires and resource references
>>> continue to work.
>>>
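A toy sketch of that prefetch-and-rename idea in plain Ruby (the structs and helper are illustrative only; a real provider would compare the full parameter set):

```ruby
# Toy prefetch: match a declared resource to a discovered instance by
# [name, domain] rather than by title, then "rename" the instance so
# its title matches the manifest's and references keep working.
Declared = Struct.new(:title, :name, :domain)
Instance = Struct.new(:title, :name, :domain)

def prefetch(declared, instances)
  by_key = instances.each_with_object({}) do |i, h|
    h[[i.name, i.domain]] = i
  end
  declared.each_with_object({}) do |d, found|
    if (inst = by_key[[d.name, d.domain]])
      inst.title = d.title # rename to the manifest's title
      found[d.title] = inst
    end
  end
end

declared  = [Declared.new('some random tenant', 'project1', 'domain1')]
instances = [Instance.new('project1::domain1', 'project1', 'domain1')]
found = prefetch(declared, instances)
# found['some random tenant'] is the discovered instance, renamed to
# match the manifest title.
```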
>>>>
>>>> So it seems that for the scoped domain resources, we have to stick the
>>>> name and domain together: '<name>::<domain>'.
>>>>
>>>> [1]
>>>> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type.rb#L2003
>>>>
>>>>>>>> But, If that's indeed not possible to have them both,
>>>>>> There are cases where having both won't be possible, like trusts,
>>>>>> but why not for the resources supporting it?
>>>>>>
>>>>>> That said, I think we need to make a choice, at least to get
>>>>>> started, to
>>>>>> have something working, consistently, besides exceptions. Other options
>>>>>> to be added later.
>>>>> So we should go with the meaningful one first for consistency, I think.
>>>>>
>>>>>>>> then I would keep only the meaningful name.
>>>>>>>>
>>>>>>>>
>>>>>>>> As a side note, someone raised an issue about the delimiter being
>>>>>>>> hardcoded to "::".  This could be a property of the resource.  This
>>>>>>>> would enable the user to use weird names with "::" in them and
>>>>>>>> assign a "/" (for instance) to the delimiter property:
>>>>>>>>
>>>>>>>>      Keystone_tenant { 'foo::blah/bar::is::cool': delimiter => "/",
>>>>>>>> ... }
>>>>>>>>
>>>>>>>> bar::is::cool is the name of the domain and foo::blah is the project.
>>>>>>> That's a good idea.  Please file a bug for that.
>>>>>>>
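The proposed delimiter property could be sketched like this in plain Ruby (assuming the delimiter defaults to '::' and that the *last* occurrence separates the domain, as in the example above):

```ruby
# Split a keystone_tenant title into [project, domain] using a
# configurable delimiter; a title without the delimiter has no
# explicit domain.
def split_title(title, delimiter = '::')
  project, sep, domain = title.rpartition(delimiter)
  sep.empty? ? [title, nil] : [project, domain]
end

split_title('foo::blah/bar::is::cool', '/')
# => ["foo::blah", "bar::is::cool"]
split_title('project_one::domain_one')
# => ["project_one", "domain_one"]
```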
>>>>>>>>> Finally
>>>>>>>>> ------
>>>>>>>>> Thanks for reading that far!
>>>>>>>>> To choose, please provide feedback with more pros/cons, examples and
>>>>>>>>> your vote.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Gilles
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> PS:
>>>>>>>>> [1] https://groups.google.com/forum/#!topic/puppet-dev/CVYwvHnPSMc
>>>>>>>>>
>>>> __________________________________________________________________________
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
> [10]: there is a problem in puppet with the way it handles prefetch and
> composite namevars.  I still have to open the bug though.  In the
> meantime you will find in
> https://review.openstack.org/#/c/226919/5/lib/puppet/provider/keystone_tenant/openstack.rb
> an idiom that works.
>



From rbryant at redhat.com  Wed Sep 30 21:11:57 2015
From: rbryant at redhat.com (Russell Bryant)
Date: Wed, 30 Sep 2015 17:11:57 -0400
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <CAO=R8o=mGRLXcetbZOLu_x9hjqNqUpbyDroToL=O5HiUEqzZMw@mail.gmail.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56084E35.20101@redhat.com>
 <CAO=R8o=W26dEp1VmvnnxtA2C9kSoBeNJQ-JKkGtMqX4qvbdXfw@mail.gmail.com>
 <560C3CD5.6020202@redhat.com>
 <CAO=R8o=mGRLXcetbZOLu_x9hjqNqUpbyDroToL=O5HiUEqzZMw@mail.gmail.com>
Message-ID: <560C501D.2030601@redhat.com>

On 09/30/2015 04:09 PM, Murali R wrote:
> Russell,
> 
> For instance, if I have an NSH header embedded in VXLAN in the incoming
> packet, I was wondering if I can transfer that to Geneve options
> somehow. This is just an example. I may have other header info, either
> in VXLAN or IP, that needs to enter the OVN network, and if we have
> generic OVS commands to handle that, it will be useful. If the commands
> don't exist but are extensible, then I can do that as well.

Well, OVS itself doesn't support NSH yet.  There are patches on the OVS
dev mailing list for it, though.

http://openvswitch.org/pipermail/dev/2015-September/060678.html

Are you interested in SFC?  I have been thinking about that and don't
think it will be too hard to add support for it in OVN.  I'm not sure
when I'll work on it, but it's high on my personal todo list.  If you
want to do it with NSH, that will require OVS support first, of course.

If you're interested in more generic extensibility of OVN, there's at
least going to be one talk about that at the OVS conference in November.
 If you aren't there, it will be on video.  I'm not sure what ideas they
will be proposing.

Since we're on the OpenStack list, I assume we're talking in the
OpenStack context.  For any feature we're talking about, we also have to
talk about how that is exposed through the Neutron API.  So, "generic
extensibility" doesn't immediately make sense for the Neutron case.

SFC certainly makes sense.  There's a Neutron project for adding an SFC
API and from what I've seen so far, I think we'll be able to extend OVN
such that it can back that API.

-- 
Russell Bryant


From emilien at redhat.com  Wed Sep 30 21:14:27 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 30 Sep 2015 17:14:27 -0400
Subject: [openstack-dev] [puppet] [infra] split integration jobs
Message-ID: <560C50B3.8010904@redhat.com>

Hello,

Today our Puppet OpenStack Integration jobs are deploying:
- mysql / rabbitmq
- keystone in wsgi with apache
- nova
- glance
- neutron with openvswitch
- cinder
- swift
- sahara
- heat
- ceilometer in wsgi with apache

Currently WIP:
- Horizon
- Trove

The status of the jobs is that some tempest tests (related to compute)
are failing randomly. Most failures are because of timeouts:

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/neutron/server.txt.gz#_2015-09-30_18_38_32_425

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/nova/nova-compute.txt.gz#_2015-09-30_18_38_34_799

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/nova/nova-compute.txt.gz#_2015-09-30_18_38_12_636

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/1d88f34/logs/nova/nova-compute.txt.gz#_2015-09-30_20_26_34_730

The timeouts happen because Nova needs more than 300s (default) to spawn
a VM. Neutron is barely able to keep up with Nova's requests.

It's obvious we have reached the Jenkins slave resource limits.


We have 3 options:

#1 increase timeouts and try to give more time to services to accomplish
what they need to do.

#2 drop some services from our testing scenario.

#3 split our scenario to have scenario001 and scenario002.

I feel like #1 is not really a scalable idea, since we are going to test
more and more services.

I don't like #2 because we want to test all our modules, not just a
subset of them.

I like #3 but we are going to consume more CI resources (that's why I
put [infra] tag).


Side note: we have some non-voting upgrade jobs that we don't really pay
attention to now, because of lack of time to work on them. They consume 2
slaves. If resources are a problem, we can drop them and replace them
with the 2 new integration jobs.

So I propose option #3, and either:
* drop the upgrade jobs if infra says we're using too many resources with
2 more jobs, and replace them with the 2 new integration jobs;
* or keep the upgrade jobs and add 2 more jobs with a new scenario, where
services would be split.

Any feedback from Infra / Puppet teams is welcome,
Thanks,
-- 
Emilien Macchi


From harlowja at outlook.com  Wed Sep 30 21:18:26 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 30 Sep 2015 14:18:26 -0700
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <F794D8B2-433F-4182-B09F-097E2F32F0EA@rackspace.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
 <1443628722351.56254@RACKSPACE.COM>
 <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>
 <BLU436-SMTP2546834433CE3DA7DED7F00D84D0@phx.gbl>
 <F794D8B2-433F-4182-B09F-097E2F32F0EA@rackspace.com>
Message-ID: <BLU436-SMTP588A4F6F58729D8697DAC9D84D0@phx.gbl>

Wouldn't that limit the ability to share/optimize resources then and 
increase the number of operators needed (since each COE/bay would need 
its own set of operators managing it)?

If all tenants are in a single openstack cloud, and under, say, a single 
company, then there isn't much need for management isolation (in fact I 
think said feature is actually an anti-feature in a case like this). 
Especially since that management is already handled by keystone and the 
project/tenant & user associations and such there.

Security isolation I get, but if the COE is already multi-tenant aware 
and that multi-tenancy is connected into the openstack tenancy model, 
then it seems like that point is nil?

I get that the current tenancy boundary is the bay (aka the COE right?) 
but is that changeable? Is that ok with everyone, it seems oddly matched 
to say a company like yahoo, or other private cloud, where one COE would 
I think be preferred and tenancy should go inside of that; vs an eggshell 
like solution that seems like it would create more management and 
operability pain (now each yahoo internal group that creates a bay/coe 
needs to figure out how to operate it? and resources can't be shared 
and/or orchestrated across bays; hmmmm, seems like not fully using a COE 
for what it can do?)

Just my random thoughts, not sure how much is fixed in stone.

-Josh

Adrian Otto wrote:
> Joshua,
>
> The tenancy boundary in Magnum is the bay. You can place whatever
> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
> Swarm). This allows you to use native tools to interact with the COE in
> that bay, rather than using an OpenStack specific client. If you want to
> use the OpenStack client to create both bays, pods, and containers, you
> can do that today. You also have the choice, for example, to run kubectl
> against your Kubernetes bay, if you so desire.
>
> Bays offer both a management and security isolation between multiple
> tenants. There is no intent to share a single bay between multiple
> tenants. In your use case, you would simply create two bays, one for
> each of the yahoo-mail.XX tenants. I am not convinced that having an
> uber-tenant makes sense.
>
> Adrian
>
>> On Sep 30, 2015, at 1:13 PM, Joshua Harlow <harlowja at outlook.com> wrote:
>>
>> Adrian Otto wrote:
>>> Thanks everyone who has provided feedback on this thread. The good
>>> news is that most of what has been asked for from Magnum is actually
>>> in scope already, and some of it has already been implemented. We
>>> never aimed to be a COE deployment service. That happens to be a
>>> necessity to achieve our more ambitious goal: We want to provide a
>>> compelling Containers-as-a-Service solution for OpenStack clouds in a
>>> way that offers maximum leverage of what?s already in OpenStack,
>>> while giving end users the ability to use their favorite tools to
>>> interact with their COE of choice, with the multi-tenancy capability
>>> we expect from all OpenStack services, and simplified integration
>>> with a wealth of existing OpenStack services (Identity,
>>> Orchestration, Images, Networks, Storage, etc.).
>>>
>>> The areas we have disagreement are whether the features offered for
>>> the k8s COE should be mirrored in other COEs. We have not attempted
>>> to do that yet, and my suggestion is to continue resisting that
>>> temptation because it is not aligned with our vision. We are not here
>>> to re-invent container management as a hosted service. Instead, we
>>> aim to integrate prevailing technology, and make it work great with
>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>> currently out-of-scope, and I think it should stay that way. With
>>> that said, I'm willing to have a discussion about this with the
>>> community at our upcoming Summit.
>>>
>>> An argument could be made for feature consistency among various COE
>>> options (Bay Types). I see this as a relatively low value pursuit.
>>> Basic features like integration with OpenStack Networking and
>>> OpenStack Storage services should be universal. Whether you can
>>> present a YAML file for a bay to perform internal orchestration is
>>> not important in my view, as long as there is a prevailing way of
>>> addressing that need. In the case of Docker Bays, you can simply
>>> point a docker-compose client at it, and that will work fine.
>>>
>>
>> So an interesting question, but how is tenancy going to work, will
>> there be a keystone tenancy <-> COE tenancy adapter? From my
>> understanding a whole bay (COE?) is owned by a tenant, which is great
>> for tenants that want to ~experiment~ with a COE but seems disjoint
>> from the end goal of an integrated COE where the tenancy model of both
>> keystone and the COE is either the same or is adapted via some adapter
>> layer.
>>
>> For example:
>>
>> 1) Bay that is connected to uber-tenant 'yahoo'
>>
>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>> ...
>>
>> All those tenancy information is in keystone, not replicated/synced
>> into the COE (or in some other COE specific disjoint system).
>>
>> Thoughts?
>>
>> This one becomes especially hard if said COE(s) don't even have a
>> tenancy model in the first place :-/
>>
>>> Thanks,
>>>
>>> Adrian
>>>
>>>> On Sep 30, 2015, at 8:58 AM, Devdatta
>>>> Kulkarni <devdatta.kulkarni at RACKSPACE.COM> wrote:
>>>>
>>>> +1 Hongbin.
>>>>
>>>> From perspective of Solum, which hopes to use Magnum for its
>>>> application container scheduling requirements, deep integration of
>>>> COEs with OpenStack services like Keystone will be useful.
>>>> Specifically, I am thinking that it will be good if Solum can
>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>> Bay nodes instead of having to use COE specific credentials. That
>>>> way, container resources will become first class components that
>>>> can be monitored using Ceilometer, access controlled using
>>>> Keystone, and managed from within Horizon.
>>>>
>>>> Regards, Devdatta
>>>>
>>>>
>>>> From: Hongbin Lu <hongbin.lu at huawei.com>
>>>> Sent: Wednesday, September 30, 2015 9:44 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>
>>>>
>>>> +1 from me as well.
>>>>
>>>> I think what makes Magnum appealing is the promise to provide
>>>> container-as-a-service. I see coe deployment as a helper to achieve
>>>> the promise, instead of the main goal.
>>>>
>>>> Best regards, Hongbin
>>>>
>>>>
>>>> From: Jay Lau [mailto:jay.lau.513 at gmail.com]
>>>> Sent: September-29-15 10:57 PM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>
>>>>
>>>>
>>>> +1 to Egor, I think that the final goal of Magnum is container as a
>>>> service but not coe deployment as a service. ;-)
>>>>
>>>> Especially since we are also working on the Magnum UI; the Magnum UI
>>>> should export some interfaces to enable end users to create container
>>>> applications, not only coe deployments.
>>>>
>>>> I hope that the Magnum can be treated as another "Nova" which is
>>>> focusing on container service. I know it is difficult to unify all
>>>> of the concepts in different coe (k8s has pod, service, rc, swarm
>>>> only has container, nova only has VM, PM with different
>>>> hypervisors), but this deserves some deep diving and thinking to see
>>>> how we can move forward.....
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com>
>>>> wrote: definitely ;), but there are some thoughts on Tom's email.
>>>>
>>>> I agree that we shouldn't reinvent APIs, but I don't think Magnum
>>>> should only focus on deployment (I feel we will become another
>>>> Puppet/Chef/Ansible module if we do it :)) I believe our goal should
>>>> be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
>>>> (Neutron/Cinder/Barbican/etc) even if we need to step in to
>>>> Kub/Mesos/Swarm communities for that.
>>>>
>>>> -- Egor
>>>>
>>>> From: Adrian Otto <adrian.otto at rackspace.com>
>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>> questions)" <openstack-dev at lists.openstack.org>
>>>> Date: Tuesday, September 29, 2015 at 08:44
>>>> To: "OpenStack Development Mailing List (not for usage
>>>> questions)" <openstack-dev at lists.openstack.org>
>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>
>>>> This is definitely a topic we should cover in Tokyo.
>>>>
>>>> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
>>>> (danehans) <danehans at cisco.com> wrote:
>>>>
>>>>
>>>> +1
>>>>
>>>> From: Tom Cammann <tom.cammann at hpe.com>
>>>> Reply-To: "openstack-dev at lists.openstack.org"
>>>> <openstack-dev at lists.openstack.org>
>>>> Date: Tuesday, September 29, 2015 at 2:22 AM
>>>> To: "openstack-dev at lists.openstack.org"
>>>> <openstack-dev at lists.openstack.org>
>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>
>>>> This has been my thinking in the last couple of months to
>>>> completely deprecate the COE specific APIs such as pod/service/rc
>>>> and container.
>>>>
>>>> As we now support Mesos, Kubernetes and Docker Swarm, it's going to
>>>> be very difficult and probably a wasted effort trying to
>>>> consolidate their separate APIs under a single Magnum API.
>>>>
>>>> I'm starting to see Magnum as COEDaaS - Container Orchestration
>>>> Engine Deployment as a Service.
>>>>
>>>> On 29/09/15 06:30, Ton Ngo wrote:
>>>> Would it make sense to ask the opposite of Wanghua's question: should
>>>> pod/service/rc be deprecated if the user can easily get to the k8s api?
>>>> Even if we want to orchestrate these in a Heat template, the
>>>> corresponding heat resources can just interface with k8s instead of
>>>> Magnum.
>>>> Ton Ngo,
>>>>
>>>> From: Egor Guz <EGuz at walmartlabs.com>
>>>> To: "openstack-dev at lists.openstack.org"
>>>> <openstack-dev at lists.openstack.org>
>>>> Date: 09/28/2015 10:20 PM
>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>> ________________________________
>>>>
>>>>
>>>>
>>>> Also I believe docker compose is just a command line tool which
>>>> doesn't have any API or scheduling features. But during the last Docker
>>>> Conf hackathon PayPal folks implemented a docker compose executor for
>>>> Mesos (https://github.com/mohitsoni/compose-executor) which can
>>>> give you pod like experience.
>>>>
>>>> -- Egor
>>>>
>>>> From: Adrian Otto <adrian.otto at rackspace.com>
>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>> questions)" <openstack-dev at lists.openstack.org>
>>>> Date: Monday, September 28, 2015 at 22:03
>>>> To: "OpenStack Development Mailing List (not for usage
>>>> questions)" <openstack-dev at lists.openstack.org>
>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>
>>>> Wanghua,
>>>>
>>>> I do follow your logic, but docker-compose only needs the docker
>>>> API to operate. We are intentionally avoiding re-inventing the
>>>> wheel. Our goal is not to replace docker swarm (or other existing
>>>> systems), but to complement it/them. We want to offer users of
>>>> Docker the richness of native APIs and supporting tools. This way
>>>> they will not need to compromise features or wait longer for us to
>>>> implement each new feature as it is added. Keep in mind that our
>>>> pod, service, and replication controller resources pre-date this
>>>> philosophy. If we started out with the current approach, those
>>>> would not exist in Magnum.
>>>>
>>>> Thanks,
>>>>
>>>> Adrian
>>>>
>>>> On Sep 28, 2015, at 8:32 PM, Wanghua
>>>> <wanghua.humble at gmail.com> wrote:
>>>>
>>>> Hi folks,
>>>>
>>>> Magnum now exposes service, pod, etc. to users in the Kubernetes
>>>> COE, but exposes only container in the Swarm COE. As I understand
>>>> it, Swarm is only a scheduler of containers, which is like Nova in
>>>> OpenStack. Docker Compose is an orchestration program, which is
>>>> like Heat in OpenStack. k8s is the combination of scheduler and
>>>> orchestration. So I think it is better to expose the APIs in
>>>> Compose to users, which are at the same level as k8s.
>>>>
>>>>
>>>> Regards Wanghua
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thanks, Jay Lau (Guangya Liu)
>>>>
>>>
>>
>


From andrew at lascii.com  Wed Sep 30 21:45:08 2015
From: andrew at lascii.com (Andrew Laski)
Date: Wed, 30 Sep 2015 17:45:08 -0400
Subject: [openstack-dev] [nova] how to address boot from volume failures
In-Reply-To: <560C4E08.1030404@dague.net>
References: <560C4E08.1030404@dague.net>
Message-ID: <20150930214508.GM8745@crypt>

On 09/30/15 at 05:03pm, Sean Dague wrote:
>Today we attempted to branch devstack and grenade for liberty, and are
>currently blocked because in liberty with openstack client and
>novaclient, it's not possible to boot a server from volume using just
>the volume id.
>
>That's because of this change in novaclient -
>https://review.openstack.org/#/c/221525/
>
>That was done to resolve the issue that strong schema validation in Nova
>started rejecting the kinds of calls that novaclient was making for boot
>from volume, because the bdm v1 and v2 code was sharing common code and
>got a bit tangled up. So 3 bdm v2 params were being sent on every request.
>
>However, https://review.openstack.org/#/c/221525/ removed the ==1 code
>path. If you pass in just {"vda": "$volume_id"} the code falls through,
>volume id is lost, and nothing is booted. This is how the devstack
>exercises and osc recommends booting from volume. I expect other people
>might be doing that as well.
>
>There seem to be a few options going forward:
>
>1) fix the client without a revert
>
>This would bring back a ==1 code path, which is basically just setting
>volume_id, and moving on. This means that until people upgrade their
>client they lose access to this function on the server.
>
>2) revert the client and loosen up schema validation
>
>If we revert the client to the old code, we also need to accept the fact
>that novaclient has been sending 3 extra parameters to this API call
>since as long as people can remember. We'd need a nova schema relax to
>let those in and just accept that people are going to pass those.
>
>3) fix osc and novaclient cli to not use this code path. This will also
>require everyone upgrades both of those to not explode in the common
>case of specifying boot from volume on the command line.
>
>I slightly lean towards #2 on a compatibility front, but it's a chunk of
>change at this point in the cycle, so I don't think there is a clear win
>path. It would be good to collect opinions here. The bug tracking this
>is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435

I have a slight preference for #1.  Nova is not buggy here, novaclient 
is, so I think we should contain the fix there.

Is using the v2 API an option?  That should also allow the 3 extra 
parameters mentioned in #2.
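For readers unfamiliar with the novaclient internals, the lost ==1 code path Sean describes can be sketched roughly as below. This is a simplified, hypothetical reconstruction, not the actual novaclient code; the function and field names only approximate the real block-device-mapping parameters.

```python
def parse_legacy_bdm(device, spec):
    """Sketch of a legacy --block-device-mapping parser.

    A bare value like {"vda": "<volume-id>"} must hit the len == 1
    branch; review 221525 removed that branch, so the volume id was
    silently dropped and nothing was booted.
    """
    fields = spec.split(":")
    bdm = {"device_name": device}
    if len(fields) == 1:
        # The restored ==1 path: just set the volume id and move on.
        bdm["volume_id"] = fields[0]
        return bdm
    # Longer legacy forms: id:type:size:delete-on-terminate
    bdm["volume_id"] = fields[0]
    for key, value in zip(("source_type", "volume_size",
                           "delete_on_termination"), fields[1:]):
        if value:
            bdm[key] = value
    return bdm

print(parse_legacy_bdm("vda", "6f1b4f9c"))
# {'device_name': 'vda', 'volume_id': '6f1b4f9c'}
```

Option #1 in Sean's list amounts to restoring the first branch; option #2 instead relaxes the server-side schema so the extra keys the old client sends are tolerated.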

>
>	-Sean
>
>-- 
>Sean Dague
>http://dague.net
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From muralirdev at gmail.com  Wed Sep 30 22:01:28 2015
From: muralirdev at gmail.com (Murali R)
Date: Wed, 30 Sep 2015 15:01:28 -0700
Subject: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support
 to setup multiple neutron networks for one container?
In-Reply-To: <560C501D.2030601@redhat.com>
References: <56001064.3040909@inaugust.com>
 <CAGi==UV39aMOogdBf+kOr35Vt3rGcnbbAtMz2PbTTXPBkJ1ubA@mail.gmail.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78CA2A@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <560185DC.4060103@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D123@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <5602B570.9000207@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78D92F@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56041F07.9080705@redhat.com>
 <F1F484A52BD63243B5497BFC9DE26E5A1A78E403@SG70YWXCHMBA05.zap.alcatel-lucent.com>
 <56084E35.20101@redhat.com>
 <CAO=R8o=W26dEp1VmvnnxtA2C9kSoBeNJQ-JKkGtMqX4qvbdXfw@mail.gmail.com>
 <560C3CD5.6020202@redhat.com>
 <CAO=R8o=mGRLXcetbZOLu_x9hjqNqUpbyDroToL=O5HiUEqzZMw@mail.gmail.com>
 <560C501D.2030601@redhat.com>
Message-ID: <CAO=R8onZxbbXGqXF3pm0JZvG15eA88oMTbM1w5cU9t_ZnzLyFQ@mail.gmail.com>

Yes, sfc without nsh is what I am looking into and I am thinking ovn can
have a better approach.

I did an implementation of sfc around nsh that used ovs & flows from custom
ovs-agent back in mar-may. I added fields in ovs agent to send additional
info for actions as well. Neutron side was quite trivial. But the solution
required an implementation of ovs to listen on a different port to handle
nsh header so doubled the number of tunnels. The ovs code we used/modified
to was either from the link you sent or some other similar impl from Cisco
folks (I don't recall) that had actions and conditional commands for the
field. If we have generic ovs code to compare or set actions on any
configured address field was my thought. But haven't thought through much
on how to do that. In any case, with ovn we cannot define custom flows
directly on ovs, so that approach is dated now. But hoping some similar
feature can be added to ovn which can transpose some header field to geneve
options.
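To make the "transpose a header field to geneve options" idea concrete, here is a rough sketch of packing an NSH service path (24-bit SPI plus 8-bit SI) into a single Geneve TLV option. The option class and type values are made up for illustration, not registered assignments, and real OVN/OVS support would look quite different; this only shows the bit layout being discussed.

```python
import struct

def nsh_path_to_geneve_option(spi, si, opt_class=0x0102, opt_type=0x01):
    """Pack an NSH service path header into one Geneve TLV option.

    A Geneve option is: 16-bit option class, 8-bit type, 3 reserved
    bits plus a 5-bit length counted in 4-byte words, then the data.
    """
    # NSH service path header: SPI in the high 24 bits, SI in the low 8.
    path_hdr = ((spi & 0xFFFFFF) << 8) | (si & 0xFF)
    data = struct.pack("!I", path_hdr)           # 4 bytes of option data
    length_words = len(data) // 4                # length field is in words
    header = struct.pack("!HBB", opt_class, opt_type, length_words)
    return header + data

opt = nsh_path_to_geneve_option(spi=42, si=255)  # 8 bytes total
```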

I am trying something right now with ovn and will be attending the ovs
conference in November. I am skipping the OpenStack summit to attend
something else in the Far East during that time. But let's keep the
discussion going and collaborate if you work on sfc.

On Wed, Sep 30, 2015 at 2:11 PM, Russell Bryant <rbryant at redhat.com> wrote:

> On 09/30/2015 04:09 PM, Murali R wrote:
> > Russel,
> >
> > For instance if I have a nsh header embedded in vxlan in the incoming
> > packet, I was wondering if I can transfer that to geneve options
> > somehow. This is just as an example. I may have header other info either
> > in vxlan or ip that needs to enter the ovn network and if we have
> > generic ovs commands to handle that, it will be useful. If commands
> > don't exist but extensible then I can do that as well.
>
> Well, OVS itself doesn't support NSH yet.  There are patches on the OVS
> dev mailing list for it, though.
>
> http://openvswitch.org/pipermail/dev/2015-September/060678.html
>
> Are you interested in SFC?  I have been thinking about that and don't
> think it will be too hard to add support for it in OVN.  I'm not sure
> when I'll work on it, but it's high on my personal todo list.  If you
> want to do it with NSH, that will require OVS support first, of course.
>
> If you're interested in more generic extensibility of OVN, there's at
> least going to be one talk about that at the OVS conference in November.
>  If you aren't there, it will be on video.  I'm not sure what ideas they
> will be proposing.
>
> Since we're on the OpenStack list, I assume we're talking in the
> OpenStack context.  For any feature we're talking about, we also have to
> talk about how that is exposed through the Neutron API.  So, "generic
> extensibility" doesn't immediately make sense for the Neutron case.
>
> SFC certainly makes sense.  There's a Neutron project for adding an SFC
> API and from what I've seen so far, I think we'll be able to extend OVN
> such that it can back that API.
>
> --
> Russell Bryant
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/79fabb93/attachment.html>

From mscherbakov at mirantis.com  Wed Sep 30 22:10:57 2015
From: mscherbakov at mirantis.com (Mike Scherbakov)
Date: Wed, 30 Sep 2015 22:10:57 +0000
Subject: [openstack-dev] [Fuel] Remove nova-network as a deployment option
	in Fuel?
Message-ID: <CAKYN3rOs-mshwEu3ZAzL0yrgMZi3tF4LLnMcwX6ZJerD3Daneg@mail.gmail.com>

Hi team,
where do we stand with it now? I remember there was a plan to remove
nova-network support in 7.0, but we've delayed it due to vcenter/dvr or
something which was not ready for it.

Can we delete it now? The earlier in the cycle we do it, the easier it will
be.

Thanks!
-- 
Mike Scherbakov
#mihgen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/cdedf579/attachment.html>

From mriedem at linux.vnet.ibm.com  Wed Sep 30 22:13:10 2015
From: mriedem at linux.vnet.ibm.com (Matt Riedemann)
Date: Wed, 30 Sep 2015 17:13:10 -0500
Subject: [openstack-dev] [ops] Operator Local Patches
In-Reply-To: <9FFC2E05-79FC-4876-A2B9-FD1C2BE1269D@godaddy.com>
References: <9FFC2E05-79FC-4876-A2B9-FD1C2BE1269D@godaddy.com>
Message-ID: <560C5E76.9050505@linux.vnet.ibm.com>



On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:
> Hello All,
>
> We have some pretty good contributions of local patches on the etherpad.
>   We are going through right now and trying to group patches that
> multiple people are carrying and patches that people may not be carrying
> but solves a problem that they are running into.  If you can take some
> time and either add your own local patches that you have to the etherpad
> or add +1's next to the patches that are laid out, it would help us
> immensely.
>
> The etherpad can be found at:
> https://etherpad.openstack.org/p/operator-local-patches
>
> Thanks for your help!
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> From: "Kris G. Lindgren"
> Date: Tuesday, September 22, 2015 at 4:21 PM
> To: openstack-operators
> Subject: Re: Operator Local Patches
>
> Hello all,
>
> Friendly reminder: If you have local patches and haven't yet done so,
> please contribute to the etherpad at:
> https://etherpad.openstack.org/p/operator-local-patches
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> From: "Kris G. Lindgren"
> Date: Friday, September 18, 2015 at 4:35 PM
> To: openstack-operators
> Cc: Tom Fifield
> Subject: Operator Local Patches
>
> Hello Operators!
>
> During the ops meetup in Palo Alto we were talking about sessions for
> Tokyo. A session that I proposed, which got a bunch of +1's, was about
> local patches that operators were carrying.  From my experience this is
> done to either implement business logic,  fix assumptions in projects
> that do not apply to your implementation, implement business
> requirements that are not yet implemented in openstack, or fix scale
> related bugs.  What I would like to do is get a working group together
> to do the following:
>
> 1.) Document local patches that operators have (even those that are in
> gerrit right now waiting to be committed upstream)
> 2.) Figure out commonality in those patches
> 3.) Either upstream the common fixes to the appropriate projects or
> figure out if a hook can be added to allow people to run their code at
> that specific point
> 4.) ????
> 5.) Profit
>
> To start this off, I have documented every patch, along with a
> description of what it does and why we did it (where needed), that
> GoDaddy is running [1].  What I am asking is that the operator community
> please update the etherpad with the patches that you are running, so
> that we have a good starting point for discussions in Tokyo and beyond.
>
> [1] - https://etherpad.openstack.org/p/operator-local-patches
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I saw this originally on the ops list and it's a great idea - cat 
herding the bazillion ops patches and seeing what common things rise to 
the top would be helpful.  Hopefully some of that can then be pushed 
into the projects.

There are a couple of things I could note that are specifically operator 
driven which could use eyes again.

1. purge deleted instances from nova database:

http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html

The spec is approved for mitaka, the code is out for review.  If people 
could test the change out it'd be helpful to vet its usefulness.
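As background on what such a purge command does, here is a toy sketch. This is not the proposed nova code: nova's real tables, sessions, and retention rules differ, and the schema below is a stand-in for illustration only.

```python
import sqlite3

# Nova soft-deletes rows by setting deleted != 0 and recording
# deleted_at; a purge command hard-deletes soft-deleted rows older
# than a cutoff. Toy schema standing in for nova's instances table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances (uuid TEXT, deleted INTEGER, deleted_at TEXT)")
conn.executemany(
    "INSERT INTO instances VALUES (?, ?, ?)",
    [("live", 0, None),
     ("old-deleted", 1, "2015-01-01"),
     ("new-deleted", 1, "2015-09-01")],
)

def purge_deleted(conn, before):
    """Hard-delete soft-deleted rows whose deleted_at precedes the cutoff."""
    cur = conn.execute(
        "DELETE FROM instances WHERE deleted != 0 AND deleted_at < ?",
        (before,))
    conn.commit()
    return cur.rowcount

purged = purge_deleted(conn, "2015-06-01")  # removes only "old-deleted"
```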

2. I'm trying to revive a spec that was approved in liberty but the code 
never landed:

https://review.openstack.org/#/c/226925/

That's for force resetting quotas for a project/user so that on the next 
pass it gets recalculated. A question came up about making the user 
optional in that command so it's going to require a bit more review 
before we re-approve for mitaka since the design changes slightly.

3. mgagne was good enough to propose a patch upstream to neutron for a 
script he had out of tree:

https://review.openstack.org/#/c/221508/

That's a tool to delete empty Linux bridges.  The neutron linuxbridge 
agent used to remove those automatically but it caused race problems 
with nova so that was removed, but it'd still be good to have a tool to 
remove them as needed.

-- 

Thanks,

Matt Riedemann



From kamil.rogon at intel.com  Wed Sep 30 22:33:41 2015
From: kamil.rogon at intel.com (Rogon, Kamil)
Date: Wed, 30 Sep 2015 22:33:41 +0000
Subject: [openstack-dev] [Large Deployments Team][Performance Team] New
 informal working group suggestion
In-Reply-To: <CACsCO2wSs030XA7RiqBGyhtSVRpAhBPSFfb_X-=tB1Gg9GcZoA@mail.gmail.com>
References: <CACsCO2yHugc0FQmXBxO_-uzaOvR_KXQNdPOEYYneU=vqoeJSEw@mail.gmail.com>
 <CAOwk0=0BhkFqjNgzx7zWrS_tk9xj8Vh8uz7qgbO42wt_dpYoXw@mail.gmail.com>
 <CACsCO2wSs030XA7RiqBGyhtSVRpAhBPSFfb_X-=tB1Gg9GcZoA@mail.gmail.com>
Message-ID: <2800C6B567E9454499CC37828E68BA2D567636C5@IRSMSX106.ger.corp.intel.com>

Hello,

Thanks Dina for bringing up this great idea.



My team at Intel is working on performance testing, so we would like to be 
part of that project.

The performance aspect at large scale is an obstacle for enterprise 
deployments. For that reason the Win The Enterprise group 
<https://wiki.openstack.org/wiki/Enterprise_Working_Group> may also be 
interested in this topic.



Regards,

Kamil Rogon

----------------------------------------------------------------------------

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk



From: Dina Belova [mailto:dbelova at mirantis.com]
Sent: Wednesday, September 30, 2015 10:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Large Deployments Team][Performance Team] New 
informal working group suggestion



Sandeep,



sorry for the late response :) I'm hoping to define 'spheres of interest' and 
the most painful moments using people's experience at the Tokyo summit, and 
we'll find out what needs to be tested most and what can actually be done. 
You can share your ideas of what needs to be tested and focused on in the 
https://etherpad.openstack.org/p/openstack-performance-issues etherpad; this 
will be a pool of ideas I'm going to use in Tokyo.



I can either create an IRC channel for the discussions or we can use the 
#openstack-operators channel, as LDT is using it for communication. After the 
Tokyo summit I'm planning to set up a Doodle vote for a time people will be 
comfortable with to have periodic meetings :)



Cheers,

Dina



On Fri, Sep 25, 2015 at 1:52 PM, Sandeep Raman <sandeep.raman at gmail.com 
<mailto:sandeep.raman at gmail.com> > wrote:

On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova <dbelova at mirantis.com 
<mailto:dbelova at mirantis.com> > wrote:

Hey, OpenStackers!



I'm writing to propose organising a new informal team to work specifically on 
OpenStack performance issues. This will be a sub-team of the already existing 
Large Deployments Team, and I suppose it will be a good idea to gather people 
interested in OpenStack performance in one room, identify what issues are 
worrying contributors and what can be done, and share the results of 
performance research :)



Dina, I'm focused on performance and scale testing [no coding background]. 
How can I contribute and what is the expectation from this informal team?



So please volunteer to take part in this initiative. I hope many people will 
be interested, and we'll be able to use a cross-project session slot 
<http://odsreg.openstack.org/cfp/details/5> to meet in Tokyo and hold a 
kick-off meeting.



I'm not coming to Tokyo. How could I still be part of the discussions, if 
any? I also feel it would be good to have an IRC channel for perf-scale 
discussion. Let me know your thoughts.



I would like to apologise for writing to two mailing lists at the same time, 
but I want to make sure that all possibly interested people will notice the 
email.



Thanks and see you in Tokyo :)



Cheers,

Dina



-- 

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/ea25cece/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 6586 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/ea25cece/attachment.bin>

From stdake at cisco.com  Wed Sep 30 22:39:29 2015
From: stdake at cisco.com (Steven Dake (stdake))
Date: Wed, 30 Sep 2015 22:39:29 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <BLU436-SMTP588A4F6F58729D8697DAC9D84D0@phx.gbl>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
 <1443628722351.56254@RACKSPACE.COM>
 <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>
 <BLU436-SMTP2546834433CE3DA7DED7F00D84D0@phx.gbl>
 <F794D8B2-433F-4182-B09F-097E2F32F0EA@rackspace.com>
 <BLU436-SMTP588A4F6F58729D8697DAC9D84D0@phx.gbl>
Message-ID: <D231AF25.13BFD%stdake@cisco.com>

Joshua,

If you share resources, you give up multi-tenancy.  No COE system has the
concept of multi-tenancy (kubernetes has some basic implementation but it
is totally insecure).  Not only does multi-tenancy have to "look like" it
offers multiple tenants isolation, but it actually has to deliver the
goods.

I understand that at first glance a company like Yahoo may not want
separate bays for their various applications because of the perceived
administrative overhead.  I would then challenge Yahoo to go deploy a COE
like kubernetes (which has no multi-tenancy or a very basic implementation
of such) and get it to work with hundreds of different competing
applications.  I would speculate the administrative overhead of getting
all that to work would be greater than the administrative overhead of
simply doing a bay create for the various tenants.

Placing tenancy inside a COE seems interesting, but no COE does that
today.  Maybe in the future they will.  Magnum was designed to present an
integration point between COEs and OpenStack today, not five years down
the road.  It's not as if we took shortcuts to get to where we are.

I will grant you that density is lower with the current design of Magnum
vs a full on integration with OpenStack within the COE itself.  However,
that model, which is what I believe you proposed, is a huge design change to
each COE and would overly complicate the COE for the gain of increased
density.  I personally don't feel that pain is worth the gain.
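The two models under debate can be sketched in a few lines. These are purely illustrative data structures; neither Magnum nor any COE exposes these names, and the namespace mapping in the second model is exactly the part today's COEs do not securely provide.

```python
# Model 1: Magnum today -- one bay per keystone project; isolation
# comes from the bay boundary, and each project runs its own COE.
bay_per_project = {
    "yahoo-mail.us": {"bay": "bay-mail-us"},
    "yahoo-mail.in": {"bay": "bay-mail-in"},
}

# Model 2: the proposal -- one shared bay for an uber-tenant, with
# keystone projects mapped onto isolation units inside the COE.
shared_bay = {
    "bay": "bay-yahoo",
    "project_to_namespace": {
        "yahoo-mail.us": "ns-mail-us",
        "yahoo-mail.in": "ns-mail-in",
    },
}

def endpoint_for(project):
    # Model 1: look up the project's own bay; no in-COE namespace needed.
    return bay_per_project[project]["bay"]

def shared_endpoint_for(project):
    # Model 2: one bay, plus a per-project namespace inside it.
    return shared_bay["bay"], shared_bay["project_to_namespace"][project]
```

Model 1 trades density for isolation that the COE itself never has to enforce; Model 2 gains density but only if the COE's namespace boundary can actually be trusted.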

Regards,
-steve


On 9/30/15, 2:18 PM, "Joshua Harlow" <harlowja at outlook.com> wrote:

>Wouldn't that limit the ability to share/optimize resources then and
>increase the number of operators needed (since each COE/bay would need
>its own set of operators managing it)?
>
>If all tenants are in a single openstack cloud, and under say a single
>company then there isn't much need for management isolation (in fact I
>think said feature is actually an anti-feature in a case like this).
>Especially since that management is already handled by keystone and the
>project/tenant & user associations there.
>
>Security isolation I get, but if the COE is already multi-tenant aware
>and that multi-tenancy is connected into the openstack tenancy model,
>then it seems like that point is nil?
>
>I get that the current tenancy boundary is the bay (aka the COE right?)
>but is that changeable? Is that ok with everyone, it seems oddly matched
>to say a company like yahoo, or other private cloud, where one COE would
>I think be preferred and tenancy should go inside of that; vs a eggshell
>like solution that seems like it would create more management and
>operability pain (now each yahoo internal group that creates a bay/coe
>needs to figure out how to operate it? and resources can't be shared
>and/or orchestrated across bays; hmmmm, seems like not fully using a COE
>for what it can do?)
>
>Just my random thoughts, not sure how much is fixed in stone.
>
>-Josh
>
>Adrian Otto wrote:
>> Joshua,
>>
>> The tenancy boundary in Magnum is the bay. You can place whatever
>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
>> Swarm). This allows you to use native tools to interact with the COE in
>> that bay, rather than using an OpenStack specific client. If you want to
>> use the OpenStack client to create both bays, pods, and containers, you
>> can do that today. You also have the choice, for example, to run kubectl
>> against your Kubernetes bay, if you so desire.
>>
>> Bays offer both a management and security isolation between multiple
>> tenants. There is no intent to share a single bay between multiple
>> tenants. In your use case, you would simply create two bays, one for
>> each of the yahoo-mail.XX tenants. I am not convinced that having an
>> uber-tenant makes sense.
>>
>> Adrian
>>
>>> On Sep 30, 2015, at 1:13 PM, Joshua Harlow <harlowja at outlook.com
>>> <mailto:harlowja at outlook.com>> wrote:
>>>
>>> Adrian Otto wrote:
>>>> Thanks everyone who has provided feedback on this thread. The good
>>>> news is that most of what has been asked for from Magnum is actually
>>>> in scope already, and some of it has already been implemented. We
>>>> never aimed to be a COE deployment service. That happens to be a
>>>> necessity to achieve our more ambitious goal: We want to provide a
>>>> compelling Containers-as-a-Service solution for OpenStack clouds in a
>>>> way that offers maximum leverage of what's already in OpenStack,
>>>> while giving end users the ability to use their favorite tools to
>>>> interact with their COE of choice, with the multi-tenancy capability
>>>> we expect from all OpenStack services, and simplified integration
>>>> with a wealth of existing OpenStack services (Identity,
>>>> Orchestration, Images, Networks, Storage, etc.).
>>>>
>>>> The areas where we disagree are whether the features offered for
>>>> the k8s COE should be mirrored in other COEs. We have not attempted
>>>> to do that yet, and my suggestion is to continue resisting that
>>>> temptation because it is not aligned with our vision. We are not here
>>>> to re-invent container management as a hosted service. Instead, we
>>>> aim to integrate prevailing technology, and make it work great with
>>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>>> currently out-of-scope, and I think it should stay that way. With
>>>> that said, I?m willing to have a discussion about this with the
>>>> community at our upcoming Summit.
>>>>
>>>> An argument could be made for feature consistency among various COE
>>>> options (Bay Types). I see this as a relatively low value pursuit.
>>>> Basic features like integration with OpenStack Networking and
>>>> OpenStack Storage services should be universal. Whether you can
>>>> present a YAML file for a bay to perform internal orchestration is
>>>> not important in my view, as long as there is a prevailing way of
>>>> addressing that need. In the case of Docker Bays, you can simply
>>>> point a docker-compose client at it, and that will work fine.
>>>>
>>>
>>> So an interesting question, but how is tenancy going to work, will
>>> there be a keystone tenancy <-> COE tenancy adapter? From my
>>> understanding a whole bay (COE?) is owned by a tenant, which is great
>>> for tenants that want to ~experiment~ with a COE but seems disjoint
>>> from the end goal of an integrated COE where the tenancy model of both
>>> keystone and the COE is either the same or is adapted via some adapter
>>> layer.
>>>
>>> For example:
>>>
>>> 1) Bay that is connected to uber-tenant 'yahoo'
>>>
>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>>> ...
>>>
>>> All that tenancy information is in keystone, not replicated/synced
>>> into the COE (or in some other COE specific disjoint system).
>>>
>>> Thoughts?
>>>
>>> This one becomes especially hard if said COE(s) don't even have a
>>> tenancy model in the first place :-/
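A rough sketch of what such an adapter layer could look like, mapping Keystone projects onto COE-native isolation units such as Kubernetes namespaces (all names here are hypothetical; nothing like this exists in Magnum today):

```python
# Hypothetical adapter layer: map Keystone projects onto COE-native
# isolation units (e.g. Kubernetes namespaces). All names are made up.
class TenancyAdapter:
    def __init__(self):
        self._namespaces = {}  # keystone project id -> COE namespace name

    def namespace_for(self, project_id, project_name):
        # Derive a deterministic, COE-safe namespace from the Keystone
        # project, creating the mapping on first use and caching it.
        ns = self._namespaces.get(project_id)
        if ns is None:
            ns = project_name.lower().replace(".", "-")
            self._namespaces[project_id] = ns
        return ns

adapter = TenancyAdapter()
# Two sub-tenants of the uber-tenant 'yahoo' land in separate namespaces:
print(adapter.namespace_for("id-1", "yahoo-mail.us"))  # -> yahoo-mail-us
print(adapter.namespace_for("id-2", "yahoo-mail.in"))  # -> yahoo-mail-in
```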
>>>
>>>> Thanks,
>>>>
>>>> Adrian
>>>>
>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni
>>>>> <devdatta.kulkarni at RACKSPACE.COM> wrote:
>>>>>
>>>>> +1 Hongbin.
>>>>>
>>>>> From the perspective of Solum, which hopes to use Magnum for its
>>>>> application container scheduling requirements, deep integration of
>>>>> COEs with OpenStack services like Keystone will be useful.
>>>>> Specifically, I am thinking that it will be good if Solum can
>>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>>> Bay nodes instead of having to use COE specific credentials. That
>>>>> way, container resources will become first class components that
>>>>> can be monitored using Ceilometer, access controlled using
>>>>> Keystone, and managed from within Horizon.
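The Keystone-token idea above can be sketched as a simple authorization check (purely illustrative; the token and bay shapes are invented, and this is not existing Solum or Magnum code):

```python
# Illustrative-only sketch: gate container scheduling on a Keystone
# token's project scope rather than on COE-specific credentials.
def can_schedule(token, bay):
    """Allow container scheduling only when the (unexpired) token is
    scoped to the project that owns the bay."""
    return (not token.get("expired", False)
            and token.get("project_id") == bay.get("project_id"))

token = {"project_id": "solum-app-1", "expired": False}
print(can_schedule(token, {"project_id": "solum-app-1"}))  # -> True
print(can_schedule(token, {"project_id": "other-proj"}))   # -> False
```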
>>>>>
>>>>> Regards, Devdatta
>>>>>
>>>>>
>>>>> From: Hongbin Lu <hongbin.lu at huawei.com>
>>>>> Sent: Wednesday, September 30, 2015 9:44 AM
>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>
>>>>>
>>>>> +1 from me as well.
>>>>>
>>>>> I think what makes Magnum appealing is the promise to provide
>>>>> container-as-a-service. I see coe deployment as a helper to achieve
>>>>> the promise, instead of the main goal.
>>>>>
>>>>> Best regards, Hongbin
>>>>>
>>>>>
>>>>> From: Jay Lau [mailto:jay.lau.513 at gmail.com]
>>>>> Sent: September-29-15 10:57 PM
>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>
>>>>>
>>>>>
>>>>> +1 to Egor, I think that the final goal of Magnum is container as a
>>>>> service but not coe deployment as a service. ;-)
>>>>>
>>>>> Especially since we are also working on the Magnum UI, it should
>>>>> expose some interfaces that enable end users to create container
>>>>> applications, not only COE deployments.
>>>>>
>>>>> I hope that Magnum can be treated as another "Nova" which is
>>>>> focused on container service. I know it is difficult to unify all
>>>>> of the concepts in the different COEs (k8s has pod, service, rc; swarm
>>>>> only has container; nova only has VM and PM with different
>>>>> hypervisors), but this deserves some deep diving and thinking to see
>>>>> how we can move forward.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com> wrote:
>>>>>
>>>>> Definitely ;), but here are some thoughts on Tom's email.
>>>>>
>>>>> I agree that we shouldn't reinvent APIs, but I don't think Magnum
>>>>> should only focus on deployment (I feel we will become another
>>>>> Puppet/Chef/Ansible module if we do that :). I believe our goal should
>>>>> be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
>>>>> (Neutron/Cinder/Barbican/etc) even if we need to step into the
>>>>> Kub/Mesos/Swarm communities for that.
>>>>>
>>>>> — Egor
>>>>>
>>>>> From: Adrian Otto <adrian.otto at rackspace.com>
>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>> Date: Tuesday, September 29, 2015 at 08:44
>>>>> To: "OpenStack Development Mailing List (not for usage
>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>
>>>>> This is definitely a topic we should cover in Tokyo.
>>>>>
>>>>> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
>>>>> <danehans at cisco.com> wrote:
>>>>>
>>>>>
>>>>> +1
>>>>>
>>>>> From: Tom Cammann <tom.cammann at hpe.com>
>>>>> Reply-To: "openstack-dev at lists.openstack.org"
>>>>> <openstack-dev at lists.openstack.org>
>>>>> Date: Tuesday, September 29, 2015 at 2:22 AM
>>>>> To: "openstack-dev at lists.openstack.org"
>>>>> <openstack-dev at lists.openstack.org>
>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>
>>>>> This has been my thinking in the last couple of months to
>>>>> completely deprecate the COE specific APIs such as pod/service/rc
>>>>> and container.
>>>>>
>>>>> As we now support Mesos, Kubernetes and Docker Swarm, it's going to
>>>>> be very difficult and probably a wasted effort trying to
>>>>> consolidate their separate APIs under a single Magnum API.
>>>>>
>>>>> I'm starting to see Magnum as COEDaaS - Container Orchestration
>>>>> Engine Deployment as a Service.
>>>>>
>>>>> On 29/09/15 06:30, Ton Ngo wrote:
>>>>> Would it make sense to ask the opposite of Wanghua's question: should
>>>>> pod/service/rc be deprecated if the user can easily get to the k8s
>>>>> API? Even if we want to orchestrate these in a Heat template, the
>>>>> corresponding Heat resources can just interface with k8s instead of
>>>>> Magnum.
>>>>> Ton Ngo,
>>>>>
>>>>>
>>>>> From: Egor Guz <EGuz at walmartlabs.com>
>>>>> To: "openstack-dev at lists.openstack.org"
>>>>> <openstack-dev at lists.openstack.org>
>>>>> Date: 09/28/2015 10:20 PM
>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>> ________________________________
>>>>>
>>>>>
>>>>>
>>>>> Also I believe docker compose is just a command line tool which
>>>>> doesn't have any API or scheduling features. But during the last Docker
>>>>> Conf hackathon PayPal folks implemented a docker compose executor for
>>>>> Mesos (https://github.com/mohitsoni/compose-executor) which can
>>>>> give you a pod-like experience.
>>>>>
>>>>> — Egor
>>>>>
>>>>> From: Adrian Otto <adrian.otto at rackspace.com>
>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>> Date: Monday, September 28, 2015 at 22:03
>>>>> To: "OpenStack Development Mailing List (not for usage
>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>
>>>>> Wanghua,
>>>>>
>>>>> I do follow your logic, but docker-compose only needs the docker
>>>>> API to operate. We are intentionally avoiding re-inventing the
>>>>> wheel. Our goal is not to replace docker swarm (or other existing
>>>>> systems), but to complement it/them. We want to offer users of
>>>>> Docker the richness of native APIs and supporting tools. This way
>>>>> they will not need to compromise features or wait longer for us to
>>>>> implement each new feature as it is added. Keep in mind that our
>>>>> pod, service, and replication controller resources pre-date this
>>>>> philosophy. If we started out with the current approach, those
>>>>> would not exist in Magnum.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>> On Sep 28, 2015, at 8:32 PM, ??
>>>>> <wanghua.humble at gmail.com> wrote:
>>>>>
>>>>> Hi folks,
>>>>>
>>>>> Magnum now exposes service, pod, etc. to users in the kubernetes COE,
>>>>> but exposes container in the swarm COE. As I know, swarm is only a
>>>>> scheduler of containers, which is like nova in openstack. Docker
>>>>> compose is an orchestration program which is like heat in openstack.
>>>>> k8s is the combination of scheduler and orchestration. So I think
>>>>> it is better to expose the APIs in compose to users, which are at
>>>>> the same level as k8s.
>>>>>
>>>>>
>>>>> Regards Wanghua
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Thanks, Jay Lau (Guangya Liu)
>>>>>
>>>>
>>>
>>
>


From zigo at debian.org  Wed Sep 30 22:41:48 2015
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 01 Oct 2015 00:41:48 +0200
Subject: [openstack-dev] Announcing Liberty RC1 availability in Debian
In-Reply-To: <CAAKgrcnK+ays05Uioyhv55DhF8L0Lx5gMCPNMMNZSMF9zQPWjQ@mail.gmail.com>
References: <560BCE52.5080100@debian.org>
 <CAAKgrcnK+ays05Uioyhv55DhF8L0Lx5gMCPNMMNZSMF9zQPWjQ@mail.gmail.com>
Message-ID: <560C652C.2060401@debian.org>

On 09/30/2015 07:25 PM, Jordan Pittier wrote:
> We are not used to reading "thanks" messages from you :) So I enjoy this
> email even more !

I am well aware that I have a reputation within the community for
complaining too much. I'm the world champion of starting monster troll
threads by mistake. :)

Though mostly, I do like everyone I've approached so far (except maybe 2
people out of a few hundred, which is unavoidable), and feel like we
have an awesome, very helpful and friendly community.

It is my hope that everyone understands the number of "WTF" situations I
have to face every day due to what I do, and that I'm close to burning out
at the end of each release. Liberty isn't an exception. Seeing that
Tempest finally ran yesterday evening filled me with joy. These last
remaining 15 days before the final release will be painful, even though
I'm nearly done for this cycle: I do need holidays...

So let me do it once more: thanks everyone! :)

Looking forward to meeting so many friends in Tokyo,

Thomas Goirand (zigo)



From rochelle.grober at huawei.com  Wed Sep 30 22:46:42 2015
From: rochelle.grober at huawei.com (Rochelle Grober)
Date: Wed, 30 Sep 2015 22:46:42 +0000
Subject: [openstack-dev] [election][TC] Candidacy
Message-ID: <DA7681A6D234954992BD2FB907F9666208EF3F3B@SJCEML701-CHM.china.huawei.com>

Hello People!

I am tossing one of my hats into the ring to run for TC.  Yes, I believe you
could call me a "diversity candidate" as I'm not much of a developer any more,
but I think my skills would be a great addition to the excellent people who are
on the TC (past and present).

My background:  I am currently an architect with Huawei Technologies.  My role
is "OpenStack" and as such, I am liaison to many areas and groups in the
OpenStack community and I am liaison to Huawei engineers and management for the
OpenStack community.  I focus energy on all parts of Software products that
aren't directly writing code.  I am an advocate for quality, for effective and
efficient process, and for the downstream stakeholders (Ops, Apps developers,
Users, Support Engineers, Docs, Training, etc).  I am currently active in:
*             DefCore
*             RefStack
*             Product Working Group
*             Logging Working Group (cofounder)
*             Ops community
*             Peripherally, Tailgaters
*             Women of OpenStack
*             Diversity Working Group

What I would like to help the TC and the community with:
*             Interoperability across deployed clouds begins with cross project
communications and the realization that each engineer and each project is
connected and influential in how the OpenStack ecosystem works, responds, and
grows. When OpenStack was young, there were two projects and everyone knew
each other, even if they didn't live in the same place.  Just as processes
become more formal when startups grow to be mid-sized companies, OpenStack
has formalized much as it has exploded in number of participants.  We need to
continue to transform Developer, Ops and other community lore into useful
documentation. We are at the point where we really need to focus our energies
and our intelligence on how to effectively span projects and communities via
better communications.  I'm already doing this to some extent.  I'd like to
help the TC do that to a greater extent.
*             In the past two years, I've seen the number of "horizontal" projects grow
almost as significantly as the "vertical" projects.  These cross functional
projects, with libraries, release, configuration management, docs, QA, etc.,
have also grown in importance in maintaining the quality and velocity of
development.  Again, cross-functional needs are being addressed, and I want to
help the TC be more proactive in identifying needs and seeding the teams with
senior OpenStack developers (and user community advisors where useful).
*             The TC is the conduit between, translator of, and champion for the
developers to the OpenStack Board.  They have a huge responsibility and not
enough time, energy or resources to address all the challenges.  I am ready to
work on the challenges and help develop the strategic vision needed to keep on
top of the current and new opportunities always arising and always needing some
thoughtful analysis and action.

That said, I have my own challenges to address.  I know my company will support
me in my role as a TC member, but they will also demand more of my time,
opinions, presence and participation specifically because of the TC position.
I also am still struggling to make inroads on the logging issues I've been
attempting to wrangle into better shape.  I've gotten lots of support from the
community on this (thanks, folks, you know who you are ;-), but it still gives
me pause for thought that I, myself, need to keep working on my effectiveness.

Whether on the TC or not, I will continue to contribute as much as I can to the
community in the ways that I do best.  And you will continue to see me at the
summits, the midcycles, the meetups, the mailing lists and IRC (hopefully more
there as I'm trying to educate my company how they can provide us the access we
need without compromising their corporate rules).

Thank you for reading this far and considering me for service on the TC.

--Rocky

ps.  I broke Gerrit on my laptop, so infra is helping me, but I've stumped them and wanted to get this out.  TLDR: this ain't in the elections repository yet
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/682c1da8/attachment.html>

From matt at mattfischer.com  Wed Sep 30 22:47:21 2015
From: matt at mattfischer.com (Matt Fischer)
Date: Wed, 30 Sep 2015 16:47:21 -0600
Subject: [openstack-dev] [ops] Operator Local Patches
In-Reply-To: <560C5E76.9050505@linux.vnet.ibm.com>
References: <9FFC2E05-79FC-4876-A2B9-FD1C2BE1269D@godaddy.com>
 <560C5E76.9050505@linux.vnet.ibm.com>
Message-ID: <CAHr1CO9e2d0E+Pd7vz=N9+x4egscTWpGZ8sWU-nSSuovHHkqCA@mail.gmail.com>

Is the purge-deleted command a replacement for nova-manage db archive-deleted?
The latter hasn't worked for several cycles, so I assume it's abandoned.
On Sep 30, 2015 4:16 PM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com>
wrote:

>
>
> On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:
>
>> Hello All,
>>
>> We have some pretty good contributions of local patches on the etherpad.
>>   We are going through right now and trying to group patches that
>> multiple people are carrying and patches that people may not be carrying
>> but solves a problem that they are running into.  If you can take some
>> time and either add your own local patches that you have to the ether
>> pad or add +1's next to the patches that are laid out, it would help us
>> immensely.
>>
>> The etherpad can be found at:
>> https://etherpad.openstack.org/p/operator-local-patches
>>
>> Thanks for your help!
>>
>> ___________________________________________________________________
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: "Kris G. Lindgren"
>> Date: Tuesday, September 22, 2015 at 4:21 PM
>> To: openstack-operators
>> Subject: Re: Operator Local Patches
>>
>> Hello all,
>>
>> Friendly reminder: If you have local patches and haven't yet done so,
>> please contribute to the etherpad at:
>> https://etherpad.openstack.org/p/operator-local-patches
>>
>> ___________________________________________________________________
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: "Kris G. Lindgren"
>> Date: Friday, September 18, 2015 at 4:35 PM
>> To: openstack-operators
>> Cc: Tom Fifield
>> Subject: Operator Local Patches
>>
>> Hello Operators!
>>
>> During the ops meetup in Palo Alto we were talking about sessions for
>> Tokyo. A session that I proposed, which got a bunch of +1's, was about
>> local patches that operators were carrying.  From my experience this is
>> done to either implement business logic,  fix assumptions in projects
>> that do not apply to your implementation, implement business
>> requirements that are not yet implemented in openstack, or fix scale
>> related bugs.  What I would like to do is get a working group together
>> to do the following:
>>
>> 1.) Document local patches that operators have (even those that are in
>> gerrit right now waiting to be committed upstream)
>> 2.) Figure out commonality in those patches
>> 3.) Either upstream the common fixes to the appropriate projects or
>> figure out if a hook can be added to allow people to run their code at
>> that specific point
>> 4.) ????
>> 5.) Profit
>>
>> To start this off, I have documented every patch, along with a
>> description of what it does and why we did it (where needed), that
>> GoDaddy is running [1].  What I am asking is that the operator community
>> please update the etherpad with the patches that you are running, so
>> that we have a good starting point for discussions in Tokyo and beyond.
>>
>> [1] - https://etherpad.openstack.org/p/operator-local-patches
>> ___________________________________________________________________
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> I saw this originally on the ops list and it's a great idea - cat herding
> the bazillion ops patches and seeing what common things rise to the top
> would be helpful.  Hopefully some of that can then be pushed into the
> projects.
>
> There are a couple of things I could note that are specifically operator
> driven which could use eyes again.
>
> 1. purge deleted instances from nova database:
>
>
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html
>
> The spec is approved for mitaka, the code is out for review.  If people
> could test the change out it'd be helpful to vet its usefulness.
>
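For context, the soft-delete pattern that purge command cleans up after looks roughly like this (an illustrative sketch, not nova's actual schema or code):

```python
import time

# Illustrative sketch of the soft-delete pattern: rows are flagged as
# deleted rather than removed, so a separate purge step is needed later.
def soft_delete(row):
    row["deleted"] = True
    row["deleted_at"] = time.time()

def purge_deleted(rows, older_than=0):
    """Return only the rows not soft-deleted before the cutoff."""
    cutoff = time.time() - older_than
    return [r for r in rows
            if not (r.get("deleted") and r["deleted_at"] <= cutoff)]

instances = [{"id": 1}, {"id": 2}]
soft_delete(instances[1])
print([r["id"] for r in purge_deleted(instances)])  # -> [1]
```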
> 2. I'm trying to revive a spec that was approved in liberty but the code
> never landed:
>
> https://review.openstack.org/#/c/226925/
>
> That's for force resetting quotas for a project/user so that on the next
> pass it gets recalculated. A question came up about making the user
> optional in that command so it's going to require a bit more review before
> we re-approve for mitaka since the design changes slightly.
>
> 3. mgagne was good enough to propose a patch upstream to neutron for a
> script he had out of tree:
>
> https://review.openstack.org/#/c/221508/
>
> That's a tool to delete empty linux bridges.  The neutron linuxbridge
> agent used to remove those automatically but it caused race problems with
> nova so that was removed, but it'd still be good to have a tool to remove
> them as needed.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/3961703b/attachment.html>

From adrian.otto at rackspace.com  Wed Sep 30 22:47:28 2015
From: adrian.otto at rackspace.com (Adrian Otto)
Date: Wed, 30 Sep 2015 22:47:28 +0000
Subject: [openstack-dev] [magnum] New Core Reviewers
Message-ID: <C675F3D0-4226-453E-A1FB-257DB1D85858@rackspace.com>

Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a simple majority of existing core reviewers, or by lazy consensus concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto

From emilien at redhat.com  Wed Sep 30 22:48:54 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 30 Sep 2015 18:48:54 -0400
Subject: [openstack-dev] [puppet] prepare 5.2.0 and 6.1.0 releases
Message-ID: <560C66D6.2070400@redhat.com>

Hi,

I would like to organize a "release day" sometime soon, to release
5.2.0 (Juno) [1] and 6.1.0 (Kilo) [2].

Also, we will take the opportunity of that day to consolidate our
process and bring more documentation [3].

If you have backport needs, please make sure they are all sent in
Gerrit, so our core team will review them.

If there is any volunteer to help in that process (documentation,
launchpad, release notes, reviewing backports), please raise your hand
on IRC.

Once we release 5.2.0 and 6.1.0, we will schedule the 7.0.0 (Liberty)
release (probably end of October / early November), but for now we're still
waiting for UCA & RDO Liberty stable packaging.

Thanks!

[1] https://goo.gl/U767kI
[2] https://goo.gl/HPuVfA
[3] https://wiki.openstack.org/wiki/Puppet/releases
-- 
Emilien Macchi

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/8ba37781/attachment.pgp>

From harlowja at outlook.com  Wed Sep 30 22:55:11 2015
From: harlowja at outlook.com (Joshua Harlow)
Date: Wed, 30 Sep 2015 15:55:11 -0700
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <D231AF25.13BFD%stdake@cisco.com>
References: <CAH5-jC-AC4sVFf9A=c3UCfwa==qsoxvGQ1MH+0M79zpkwZwU3g@mail.gmail.com>
 <F83E9D77-E6CE-4D18-9E7C-BB769571E114@rackspace.com>
 <D22F6B8E.1DA5F%eguz@walmartlabs.com>
 <201509290530.t8T5UHfm012967@d01av01.pok.ibm.com> <560A5856.1050303@hpe.com>
 <D22FFC4A.689B3%danehans@cisco.com>
 <EF6D28F3-62B9-4355-8116-F8013E418736@rackspace.com>
 <D230119F.1DB05%eguz@walmartlabs.com>
 <CAFyztAF59b23Bprr0myVskB=TDw+favv7PeY+znnta06xnamXA@mail.gmail.com>
 <0957CD8F4B55C0418161614FEC580D6BCECE8E@SZXEMI503-MBS.china.huawei.com>
 <1443628722351.56254@RACKSPACE.COM>
 <D98092A2-4326-44CE-AD0E-654D7DAA8738@rackspace.com>
 <BLU436-SMTP2546834433CE3DA7DED7F00D84D0@phx.gbl>
 <F794D8B2-433F-4182-B09F-097E2F32F0EA@rackspace.com>
 <BLU436-SMTP588A4F6F58729D8697DAC9D84D0@phx.gbl>
 <D231AF25.13BFD%stdake@cisco.com>
Message-ID: <BLU436-SMTP912FDD514C5049F03C4C94D84D0@phx.gbl>

Totally get it,

And its interesting the boundaries that are being pushed,

Also interesting to know the state of the world, and the state of magnum
and the state of COE systems. I'm somewhat surprised that they lack
multi-tenancy in any kind of manner (but I guess I'm not too surprised,
it's a feature that many don't add on until later, for better or
worse...), especially kubernetes (coming from google), but not entirely
shocked by it ;-)

Insightful stuff, thanks :)

Steven Dake (stdake) wrote:
> Joshua,
> 
> If you share resources, you give up multi-tenancy.  No COE system has the
> concept of multi-tenancy (kubernetes has some basic implementation but it
> is totally insecure).  Not only does multi-tenancy have to "look like" it
> offers multiple tenants isolation, but it actually has to deliver the
> goods.
> 
> I understand that at first glance a company like Yahoo may not want
> separate bays for their various applications because of the perceived
> administrative overhead.  I would then challenge Yahoo to go deploy a COE
> like kubernetes (which has no multi-tenancy or a very basic implementation
> of such) and get it to work with hundreds of different competing
> applications.  I would speculate the administrative overhead of getting
> all that to work would be greater than the administrative overhead of
> simply doing a bay create for the various tenants.
> 
> Placing tenancy inside a COE seems interesting, but no COE does that
> today.  Maybe in the future they will.  Magnum was designed to present an
> integration point between COEs and OpenStack today, not five years down
> the road.  It's not as if we took shortcuts to get to where we are.
> 
> I will grant you that density is lower with the current design of Magnum
> vs a full on integration with OpenStack within the COE itself.  However,
> that model which is what I believe you proposed is a huge design change to
> each COE which would overly complicate the COE at the gain of increased
> density.  I personally don't feel that pain is worth the gain.
> 
> Regards,
> -steve
> 
> 
> On 9/30/15, 2:18 PM, "Joshua Harlow"<harlowja at outlook.com>  wrote:
> 
>> Wouldn't that limit the ability to share/optimize resources then and
>> increase the number of operators needed (since each COE/bay would need
>> its own set of operators managing it)?
>>
>> If all tenants are in a single openstack cloud, and under say a single
>> company then there isn't much need for management isolation (in fact I
>> think said feature is actually an anti-feature in a case like this).
>> Especially since that management is already handled by keystone and the
>> project/tenant & user associations and such there.
>>
>> Security isolation I get, but if the COE is already multi-tenant aware
>> and that multi-tenancy is connected into the openstack tenancy model,
>> then it seems like that point is nil?
>>
>> I get that the current tenancy boundary is the bay (aka the COE right?)
>> but is that changeable? Is that ok with everyone? It seems oddly matched
>> to say a company like yahoo, or other private cloud, where one COE would
>> I think be preferred and tenancy should go inside of that; vs an eggshell
>> like solution that seems like it would create more management and
>> operability pain (now each yahoo internal group that creates a bay/coe
>> needs to figure out how to operate it? and resources can't be shared
>> and/or orchestrated across bays; hmmmm, seems like not fully using a COE
>> for what it can do?)
>>
>> Just my random thoughts, not sure how much is fixed in stone.
>>
>> -Josh
>>
>> Adrian Otto wrote:
>>> Joshua,
>>>
>>> The tenancy boundary in Magnum is the bay. You can place whatever
>>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
>>> Swarm). This allows you to use native tools to interact with the COE in
>>> that bay, rather than using an OpenStack specific client. If you want to
>>> use the OpenStack client to create bays, pods, and containers, you
>>> can do that today. You also have the choice, for example, to run kubectl
>>> against your Kubernetes bay, if you so desire.
>>>
>>> Bays offer both a management and security isolation between multiple
>>> tenants. There is no intent to share a single bay between multiple
>>> tenants. In your use case, you would simply create two bays, one for
>>> each of the yahoo-mail.XX tenants. I am not convinced that having an
>>> uber-tenant makes sense.
>>>
>>> Adrian
>>>
>>>> On Sep 30, 2015, at 1:13 PM, Joshua Harlow
>>>> <harlowja at outlook.com> wrote:
>>>>
>>>> Adrian Otto wrote:
>>>>> Thanks everyone who has provided feedback on this thread. The good
>>>>> news is that most of what has been asked for from Magnum is actually
>>>>> in scope already, and some of it has already been implemented. We
>>>>> never aimed to be a COE deployment service. That happens to be a
>>>>> necessity to achieve our more ambitious goal: We want to provide a
>>>>> compelling Containers-as-a-Service solution for OpenStack clouds in a
>>>>> way that offers maximum leverage of what's already in OpenStack,
>>>>> while giving end users the ability to use their favorite tools to
>>>>> interact with their COE of choice, with the multi-tenancy capability
>>>>> we expect from all OpenStack services, and simplified integration
>>>>> with a wealth of existing OpenStack services (Identity,
>>>>> Orchestration, Images, Networks, Storage, etc.).
>>>>>
>>>>> The areas we have disagreement are whether the features offered for
>>>>> the k8s COE should be mirrored in other COEs. We have not attempted
>>>>> to do that yet, and my suggestion is to continue resisting that
>>>>> temptation because it is not aligned with our vision. We are not here
>>>>> to re-invent container management as a hosted service. Instead, we
>>>>> aim to integrate prevailing technology, and make it work great with
>>>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>>>> currently out-of-scope, and I think it should stay that way. With
>>>>> that said, I'm willing to have a discussion about this with the
>>>>> community at our upcoming Summit.
>>>>>
>>>>> An argument could be made for feature consistency among various COE
>>>>> options (Bay Types). I see this as a relatively low value pursuit.
>>>>> Basic features like integration with OpenStack Networking and
>>>>> OpenStack Storage services should be universal. Whether you can
>>>>> present a YAML file for a bay to perform internal orchestration is
>>>>> not important in my view, as long as there is a prevailing way of
>>>>> addressing that need. In the case of Docker Bays, you can simply
>>>>> point a docker-compose client at it, and that will work fine.
>>>>>
>>>> So an interesting question, but how is tenancy going to work, will
>>>> there be a keystone tenancy<->  COE tenancy adapter? From my
>>>> understanding a whole bay (COE?) is owned by a tenant, which is great
>>>> for tenants that want to ~experiment~ with a COE but seems disjoint
>>>> from the end goal of an integrated COE where the tenancy model of both
>>>> keystone and the COE is either the same or is adapted via some adapter
>>>> layer.
>>>>
>>>> For example:
>>>>
>>>> 1) Bay that is connected to uber-tenant 'yahoo'
>>>>
>>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us
>>>> <http://yahoo-mail.us/>'
>>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>>>> ...
>>>>
>>>> All that tenancy information is in keystone, not replicated/synced
>>>> into the COE (or in some other COE specific disjoint system).
>>>>
>>>> Thoughts?
>>>>
>>>> This one becomes especially hard if said COE(s) don't even have a
>>>> tenancy model in the first place :-/
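A minimal sketch of the keystone-to-COE adapter layer Josh describes above, assuming the bay is owned by an uber-tenant and sub-tenants map to COE namespaces. The class and method names here are invented for illustration; no such component exists in Magnum.

```python
# Hypothetical sketch of a keystone <-> COE tenancy adapter. Nothing
# like this exists in Magnum today; names are invented for illustration.

class TenancyAdapter:
    """Maps keystone projects onto namespaces inside a single bay's COE."""

    def __init__(self, uber_tenant):
        self.uber_tenant = uber_tenant  # the bay owner, e.g. 'yahoo'
        self.namespaces = {}

    def to_coe_namespace(self, keystone_project):
        # Derive a COE-safe namespace from the keystone project name,
        # e.g. 'yahoo-mail.us' -> 'yahoo--yahoo-mail-us'.
        ns = "%s--%s" % (self.uber_tenant,
                         keystone_project.replace(".", "-"))
        self.namespaces[keystone_project] = ns
        return ns


adapter = TenancyAdapter("yahoo")
print(adapter.to_coe_namespace("yahoo-mail.us"))  # yahoo--yahoo-mail-us
print(adapter.to_coe_namespace("yahoo-mail.in"))  # yahoo--yahoo-mail-in
```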
>>>>
>>>>> Thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta
>>>>>> Kulkarni <devdatta.kulkarni at RACKSPACE.COM> wrote:
>>>>>>
>>>>>> +1 Hongbin.
>>>>>>
>>>>>> From the perspective of Solum, which hopes to use Magnum for its
>>>>>> application container scheduling requirements, deep integration of
>>>>>> COEs with OpenStack services like Keystone will be useful.
>>>>>> Specifically, I am thinking that it will be good if Solum can
>>>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>>>> Bay nodes instead of having to use COE specific credentials. That
>>>>>> way, container resources will become first class components that
>>>>>> can be monitored using Ceilometer, access controlled using
>>>>>> Keystone, and managed from within Horizon.
>>>>>>
>>>>>> Regards, Devdatta
>>>>>>
>>>>>>
>>>>>> From: Hongbin Lu <hongbin.lu at huawei.com>
>>>>>> Sent: Wednesday, September 30, 2015 9:44 AM
>>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>>
>>>>>>
>>>>>> +1 from me as well.
>>>>>>
>>>>>> I think what makes Magnum appealing is the promise to provide
>>>>>> container-as-a-service. I see coe deployment as a helper to achieve
>>>>>> the promise, instead of the main goal.
>>>>>>
>>>>>> Best regards, Hongbin
>>>>>>
>>>>>>
>>>>>> From: Jay Lau [mailto:jay.lau.513 at gmail.com]
>>>>>> Sent: September-29-15 10:57 PM
>>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>>
>>>>>>
>>>>>>
>>>>>> +1 to Egor, I think that the final goal of Magnum is container as a
>>>>>> service but not coe deployment as a service. ;-)
>>>>>>
>>>>>> Especially since we are also working on the Magnum UI, which should
>>>>>> export interfaces that enable end users to create container
>>>>>> applications, not only coe deployments.
>>>>>>
>>>>>> I hope that Magnum can be treated as another "Nova" which focuses on
>>>>>> container service. I know it is difficult to unify all of the
>>>>>> concepts in the different coes (k8s has pod, service, rc; swarm only
>>>>>> has container; nova only has VM and PM with different hypervisors),
>>>>>> but this deserves some deep dive and thinking to see how we can move
>>>>>> forward.
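The mismatch Jay describes can be made concrete with a small sketch; the primitive names below are just the ones listed in the email, and the mapping itself is illustrative only.

```python
# Illustrative sketch of the concept-unification problem: each backend
# exposes different primitives, so a single Magnum API would need some
# mapping between them. The sets below restate the email's examples.

COE_CONCEPTS = {
    "kubernetes": {"pod", "service", "replication-controller"},
    "swarm": {"container"},
    "nova": {"vm", "baremetal"},
}

# There is no primitive common to all three backends, which is exactly
# why a single unified API is hard.
common = set.intersection(*COE_CONCEPTS.values())
print(common)  # set()
```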
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <EGuz at walmartlabs.com>
>>>>>> wrote: definitely ;), but here are some thoughts on Tom's email.
>>>>>>
>>>>>> I agree that we shouldn't reinvent apis, but I don't think Magnum
>>>>>> should only focus on deployment (I feel we will become another
>>>>>> Puppet/Chef/Ansible module if we do :)). I believe our goal should
>>>>>> be seamlessly integrate Kub/Mesos/Swarm to OpenStack ecosystem
>>>>>> (Neutron/Cinder/Barbican/etc) even if we need to step in to
>>>>>> Kub/Mesos/Swarm communities for that.
>>>>>>
>>>>>> -- Egor
>>>>>>
>>>>>> From: Adrian Otto <adrian.otto at rackspace.com>
>>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>>> Date: Tuesday, September 29, 2015 at 08:44
>>>>>> To: "OpenStack Development Mailing List (not for usage
>>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>> This is definitely a topic we should cover in Tokyo.
>>>>>>
>>>>>> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
>>>>>> (danehans) <danehans at cisco.com> wrote:
>>>>>>
>>>>>>
>>>>>> +1
>>>>>>
>>>>>> From: Tom Cammann <tom.cammann at hpe.com>
>>>>>> Reply-To: <openstack-dev at lists.openstack.org>
>>>>>> Date: Tuesday, September 29, 2015 at 2:22 AM
>>>>>> To: <openstack-dev at lists.openstack.org>
>>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>> This has been my thinking in the last couple of months to
>>>>>> completely deprecate the COE specific APIs such as pod/service/rc
>>>>>> and container.
>>>>>>
>>>>>> As we now support Mesos, Kubernetes and Docker Swarm it's going to
>>>>>> be very difficult and probably a wasted effort trying to
>>>>>> consolidate their separate APIs under a single Magnum API.
>>>>>>
>>>>>> I'm starting to see Magnum as COEDaaS - Container Orchestration
>>>>>> Engine Deployment as a Service.
>>>>>>
>>>>>> On 29/09/15 06:30, Ton Ngo wrote: Would it make sense to ask the
>>>>>> opposite of Wanghua's question: should pod/service/rc be deprecated
>>>>>> if the user can easily get to the k8s api? Even if we want to
>>>>>> orchestrate these in a Heat template, the corresponding heat
>>>>>> resources can just interface with k8s instead of Magnum. Ton Ngo,
>>>>>>
>>>>>> Egor Guz ---09/28/2015 10:20:02 PM---Also I believe
>>>>>> docker compose is just command line tool which doesn't have any api
>>>>>> or scheduling feat
>>>>>>
>>>>>> From: Egor Guz <EGuz at walmartlabs.com>
>>>>>> To: <openstack-dev at lists.openstack.org>
>>>>>> Date: 09/28/2015 10:20 PM
>>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>> ________________________________
>>>>>>
>>>>>>
>>>>>>
>>>>>> Also I believe docker compose is just a command line tool which
>>>>>> doesn't have any api or scheduling features. But during the last Docker
>>>>>> Conf hackathon PayPal folks implemented docker compose executor for
>>>>>> Mesos (https://github.com/mohitsoni/compose-executor) which can
>>>>>> give you pod like experience.
>>>>>>
>>>>>> -- Egor
>>>>>>
>>>>>> From: Adrian Otto <adrian.otto at rackspace.com>
>>>>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>>> Date: Monday, September 28, 2015 at 22:03
>>>>>> To: "OpenStack Development Mailing List (not for usage
>>>>>> questions)" <openstack-dev at lists.openstack.org>
>>>>>> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>>>>> Wanghua,
>>>>>>
>>>>>> I do follow your logic, but docker-compose only needs the docker
>>>>>> API to operate. We are intentionally avoiding re-inventing the
>>>>>> wheel. Our goal is not to replace docker swarm (or other existing
>>>>>> systems), but to complement it/them. We want to offer users of
>>>>>> Docker the richness of native APIs and supporting tools. This way
>>>>>> they will not need to compromise features or wait longer for us to
>>>>>> implement each new feature as it is added. Keep in mind that our
>>>>>> pod, service, and replication controller resources pre-date this
>>>>>> philosophy. If we started out with the current approach, those
>>>>>> would not exist in Magnum.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Adrian
>>>>>>
>>>>>> On Sep 28, 2015, at 8:32 PM, Wanghua
>>>>>> <wanghua.humble at gmail.com> wrote:
>>>>>>
>>>>>> Hi folks,
>>>>>>
>>>>>> Magnum now exposes service, pod, etc to users in kubernetes coe,
>>>>>> but exposes container in swarm coe. As I know, swarm is only a
>>>>>> scheduler of container, which is like nova in openstack. Docker
>>>>>> compose is an orchestration program which is like heat in openstack.
>>>>>> k8s is the combination of scheduler and orchestration. So I think
>>>>>> it is better to expose the apis in compose to users which are at
>>>>>> the same level as k8s.
>>>>>>
>>>>>>
>>>>>> Regards Wanghua
>>>>>>
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks, Jay Lau (Guangya Liu)
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


From davanum at gmail.com  Wed Sep 30 23:00:19 2015
From: davanum at gmail.com (Davanum Srinivas)
Date: Wed, 30 Sep 2015 19:00:19 -0400
Subject: [openstack-dev] [magnum] New Core Reviewers
In-Reply-To: <C675F3D0-4226-453E-A1FB-257DB1D85858@rackspace.com>
References: <C675F3D0-4226-453E-A1FB-257DB1D85858@rackspace.com>
Message-ID: <CANw6fcEZuxWbBDWUUd80fyaBuE9-_6+Q5aHj8TTR=fSWhxfufA@mail.gmail.com>

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.otto at rackspace.com>
wrote:

> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/771bea45/attachment.html>

From Cathy.H.Zhang at huawei.com  Wed Sep 30 23:02:33 2015
From: Cathy.H.Zhang at huawei.com (Cathy Zhang)
Date: Wed, 30 Sep 2015 23:02:33 +0000
Subject: [openstack-dev] [neutron] pypi packages for networking
 sub-projects
In-Reply-To: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
References: <CAL3VkVw=vpjCKnngx6J1_FESJ5t_Xk6v+M1xhNqAcaK4vV5K1A@mail.gmail.com>
Message-ID: <A2C96F6779E6A041BC7023CC207FC99421809449@SJCEML701-CHM.china.huawei.com>

Hi Kyle,

Is this only about the sub-projects that are ready for release? I do not see the networking-sfc sub-project in the list. Does this mean we have done the pypi registration for the networking-sfc project correctly, or was it not checked because it is not ready for release yet?

Thanks,
Cathy

From: Kyle Mestery [mailto:mestery at mestery.com]
Sent: Wednesday, September 30, 2015 11:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] pypi packages for networking sub-projects

Folks:
In trying to release some networking sub-projects recently, I ran into an issue [1] where I couldn't release some projects due to them not being registered on pypi. I have a patch out [2] which adds pypi publishing jobs, but before that can merge, we need to make sure all projects have pypi registrations in place. The following networking sub-projects do NOT have pypi registrations in place and need them created following the guidelines here [3]:
networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow the directions to ensure openstackci has "Owner" permissions, which allow for the publishing of packages to pypi:
networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2], which will then give the neutron-release team the ability to release pypi packages for those projects.
Thanks!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/84f8bb87/attachment.html>

From hongbin.lu at huawei.com  Wed Sep 30 23:07:03 2015
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Wed, 30 Sep 2015 23:07:03 +0000
Subject: [openstack-dev] [magnum] New Core Reviewers
In-Reply-To: <CANw6fcEZuxWbBDWUUd80fyaBuE9-_6+Q5aHj8TTR=fSWhxfufA@mail.gmail.com>
References: <C675F3D0-4226-453E-A1FB-257DB1D85858@rackspace.com>
 <CANw6fcEZuxWbBDWUUd80fyaBuE9-_6+Q5aHj8TTR=fSWhxfufA@mail.gmail.com>
Message-ID: <0957CD8F4B55C0418161614FEC580D6BCED68C@SZXEMI503-MBS.china.huawei.com>

+1 for both. Welcome!

From: Davanum Srinivas [mailto:davanum at gmail.com]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.otto at rackspace.com> wrote:
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a simple majority of existing core reviewers, or by lazy consensus concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/92da3a8e/attachment.html>

From klindgren at godaddy.com  Wed Sep 30 23:26:10 2015
From: klindgren at godaddy.com (Kris G. Lindgren)
Date: Wed, 30 Sep 2015 23:26:10 +0000
Subject: [openstack-dev] [magnum]swarm + compose = k8s?
In-Reply-To: <D231AF25.13BFD%stdake@cisco.com>
Message-ID: <0F93DC63-18CF-4F67-AEB3-051DA48058C9@godaddy.com>

We are looking at deploying magnum as an answer for how we do containers company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience tells me this won't be practical or scale; however, from experience I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of the projects are currently doing some form of containers on their own, with more joining every day.  If all of these projects were to convert over to the current magnum configuration we would suddenly be attempting to support/configure ~1k magnum clusters.  Considering that everyone will want it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + floating ips.  From a capacity standpoint this is an excessive amount of duplicated infrastructure to spin up in projects where people may be running 10-20 containers per project.  From an operator support perspective this is a special level of hell that I do not want to get into.  Even if I am off by 75%, 250 still sucks.

From my point of view an ideal use case for companies like ours (yahoo/godaddy) would be to support hierarchical projects in magnum.  That way we could create a project for each department, and then the subteams of those departments can have their own projects.  We create a bay per department.  Sub-projects, if they want to, can create their own bays (but support of the kube cluster would then fall to that team).  When a sub-project spins up a pod on a bay, minions get created inside that team's sub-project, and the containers in that pod run on the capacity that was spun up under that project; the minions for each pod would be in a scaling group and as such grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable, number of kube clusters, give people who can't/don't want to fall in line with the provided resources a way to make their own, and still offer a "good enough for a single company" level of multi-tenancy.
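The capacity arithmetic above can be sketched out as follows. The 4000-project and one-in-four figures come from this message; the 40-department count is an invented example value.

```python
# Back-of-the-envelope sketch of the flat-vs-hierarchical bay math.
# 4000 projects and the one-in-four containerizing fraction are from
# this message; 40 departments is an assumed example.

def bay_count(projects, containerized_fraction=1.0):
    return int(projects * containerized_fraction)

def min_kube_nodes(bays, nodes_per_bay=2):
    # each HA bay needs at least two kube nodes, before LBaaS VIPs
    # and floating IPs are even counted
    return bays * nodes_per_bay

flat = bay_count(4000, 0.25)       # one bay per containerizing project
per_dept = bay_count(40)           # one bay per department
print(flat, min_kube_nodes(flat))          # 1000 2000
print(per_dept, min_kube_nodes(per_dept))  # 40 80
```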

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to "look like" it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.

>

>I understand that at first glance a company like Yahoo may not want separate bays for their various applications because of the perceived administrative overhead. I would then challenge Yahoo to go deploy a COE like kubernetes (which has no multi-tenancy or a very basic implementation of such) and get it to work with hundreds of different competing applications. I would speculate the administrative overhead of getting all that to work would be greater than the administrative overhead of simply doing a bay create for the various tenants.

>

>Placing tenancy inside a COE seems interesting, but no COE does that today. Maybe in the future they will. Magnum was designed to present an integration point between COEs and OpenStack today, not five years down the road. It's not as if we took shortcuts to get to where we are.

>

>I will grant you that density is lower with the current design of Magnum vs a full on integration with OpenStack within the COE itself. However, that model which is what I believe you proposed is a huge design change to each COE which would overly complicate the COE at the gain of increased density. I personally don't feel that pain is worth the gain.


___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/36d41a60/attachment.html>

From emilien at redhat.com  Wed Sep 30 23:30:22 2015
From: emilien at redhat.com (Emilien Macchi)
Date: Wed, 30 Sep 2015 19:30:22 -0400
Subject: [openstack-dev] [puppet] should puppet-neutron manage third
 party software?
In-Reply-To: <560ABB1F.6000701@redhat.com>
References: <56057027.3090808@redhat.com>
 <DDF6F73E-716D-439A-87B8-158008AD31AA@workday.com>
 <56057731.7060109@anteaya.info> <560ABB1F.6000701@redhat.com>
Message-ID: <560C708E.3070309@redhat.com>



On 09/29/2015 12:23 PM, Emilien Macchi wrote:
> My suggestion:
> 
> * patch master to send deprecation warning if third party repositories
> are managed in our current puppet-neutron module.
> * do not manage third party repositories from now and do not accept any
> patch containing this kind of code.
> * in the next cycle, we will consider deleting legacy code that used to
> manage third party software repos.
> 
> Thoughts?

Silence probably means lazy consensus.
I submitted a patch: https://review.openstack.org/#/c/229675/ - please
review.

I also contacted Cisco and they acknowledged it, and will work on
puppet-n1kv to externalize third party software.


> On 09/25/2015 12:32 PM, Anita Kuno wrote:
>> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>>> Hi There,
>>>
>>> I just added my comment on the review. I do agree with Emilien. There should be specific repos for plugins and drivers.
>>>
>>> BTW. I love the sdnmagic name  ;-)
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 9/25/15, 9:02 AM, "Emilien Macchi" <emilien at redhat.com> wrote:
>>>
>>>> In our last meeting [1], we were discussing about whether managing or
>>>> not external packaging repositories for Neutron plugin dependencies.
>>>>
>>>> Current situation:
>>>> puppet-neutron is installing (packages like neutron-plugin-*) &
>>>> configure Neutron plugins (configuration files like
>>>> /etc/neutron/plugins/*.ini
>>>> Some plugins (Cisco) are doing more: they install third party packages
>>>> (not part of OpenStack), from external repos.
>>>>
>>>> The question is: should we continue that way and accept that kind of
>>>> patch [2]?
>>>>
>>>> I vote for no: managing external packages & external repositories should
>>>> be up to an external module.
>>>> Example: my SDN tool is called "sdnmagic":
>>>> 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>>>> configure the .ini file(s) to make it work in Neutron
>>>> 2/ create puppet-sdnmagic that will take care of everything else:
>>>> install sdnmagic, manage packaging (and specific dependencies),
>>>> repositories, etc.
>>>> I'm -1 on puppet-neutron handling it. We are not managing SDN solutions:
>>>> we are enabling puppet-neutron to work with them.
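A rough sketch of the 1/ and 2/ split proposed above ('sdnmagic' is the made-up plugin name from the email; the class bodies, resource names, and paths are illustrative assumptions, not real modules):

```puppet
# 1/ stays in puppet-neutron: the OpenStack-side package and .ini file
class neutron::plugins::sdnmagic {
  package { 'neutron-plugin-sdnmagic':
    ensure => present,
  }
  file { '/etc/neutron/plugins/sdnmagic/sdnmagic.ini':
    ensure => file,
    # plugin configuration rendered here
  }
}

# 2/ lives in a separate puppet-sdnmagic module: the vendor software,
# its packaging repositories, and everything else
class sdnmagic {
  yumrepo { 'sdnmagic':
    baseurl => 'http://repo.example.com/sdnmagic',
    enabled => 1,
  }
  package { 'sdnmagic':
    ensure  => present,
    require => Yumrepo['sdnmagic'],
  }
}
```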
>>>>
>>>> I would like to find a consensus here, that will be consistent across
>>>> *all plugins* without exception.
>>>>
>>>>
>>>> Thanks for your feedback,
>>>>
>>>> [1] http://goo.gl/zehmN2
>>>> [2] https://review.openstack.org/#/c/209997/
>>>> -- 
>>>> Emilien Macchi
>>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> I think the data point provided by the Cinder situation needs to be
>> considered in this decision: https://bugs.launchpad.net/manila/+bug/1499334
>>
>> The bug report outlines the issue, but the tl;dr is that one Cinder
>> driver changed the licensing on a library required to run in-tree code.
>>
>> Thanks,
>> Anita.
>>
>>
> 
> 
> 
> 

-- 
Emilien Macchi
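
For illustration, the split proposed in the thread above might be sketched
roughly as follows. All module, class, and resource names here are
hypothetical ("sdnmagic" is the made-up example name from the thread), and
the neutron_plugin_sdnmagic resource is an assumed custom type, not a real
puppet-neutron interface:

```puppet
# In puppet-neutron: only the Neutron-side wiring for the plugin.
class neutron::plugins::sdnmagic {
  package { 'neutron-plugin-sdnmagic':
    ensure => present,
  }

  # Assumed custom type managing /etc/neutron/plugins/sdnmagic/sdnmagic.ini
  neutron_plugin_sdnmagic { 'DEFAULT/controller_ip':
    value => '192.0.2.10',
  }
}

# In a separate vendor module (puppet-sdnmagic): the third-party
# repository and software, kept out of puppet-neutron entirely.
class sdnmagic {
  yumrepo { 'sdnmagic':
    baseurl  => 'http://repo.example.com/sdnmagic',
    gpgcheck => 0,
    enabled  => 1,
  }

  package { 'sdnmagic':
    ensure  => present,
    require => Yumrepo['sdnmagic'],
  }
}
```

With this layout, an operator who wants the plugin includes both classes;
one who already manages the vendor software another way includes only the
puppet-neutron class.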


From sthillma at cisco.com  Wed Sep 30 23:34:02 2015
From: sthillma at cisco.com (Steven Hillman (sthillma))
Date: Wed, 30 Sep 2015 23:34:02 +0000
Subject: [openstack-dev] [puppet] should puppet-neutron manage third
 party software?
In-Reply-To: <560ABB1F.6000701@redhat.com>
References: <56057027.3090808@redhat.com>
 <DDF6F73E-716D-439A-87B8-158008AD31AA@workday.com>
 <56057731.7060109@anteaya.info> <560ABB1F.6000701@redhat.com>
Message-ID: <D231BCE4.5894D%sthillma@cisco.com>

Makes sense to me.

Opened a bug to track the migration of agents/n1kv_vem.pp out of
puppet-neutron during the M-cycle:
https://bugs.launchpad.net/puppet-neutron/+bug/1501535

Thanks.
Steven Hillman

On 9/29/15, 9:23 AM, "Emilien Macchi" <emilien at redhat.com> wrote:

>My suggestion:
>
>* patch master to send a deprecation warning if third party repositories
>are managed in our current puppet-neutron module.
>* do not manage third party repositories from now on, and do not accept
>any patch containing this kind of code.
>* in the next cycle, consider deleting the legacy code that used to
>manage third party software repos.
>
>Thoughts?
>
>On 09/25/2015 12:32 PM, Anita Kuno wrote:
>> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>>> Hi There,
>>>
>>> I just added my comment on the review. I do agree with Emilien. There
>>>should be specific repos for plugins and drivers.
>>>
>>> BTW. I love the sdnmagic name  ;-)
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 9/25/15, 9:02 AM, "Emilien Macchi" <emilien at redhat.com> wrote:
>>>
>>>> In our last meeting [1], we were discussing whether or not to manage
>>>> external packaging repositories for Neutron plugin dependencies.
>>>>
>>>> Current situation:
>>>> puppet-neutron installs Neutron plugins (packages like
>>>> neutron-plugin-*) and configures them (configuration files like
>>>> /etc/neutron/plugins/*.ini).
>>>> Some plugins (Cisco) are doing more: they install third party packages
>>>> (not part of OpenStack), from external repos.
>>>>
>>>> The question is: should we continue that way and accept that kind of
>>>> patch [2]?
>>>>
>>>> I vote for no: managing external packages & external repositories
>>>> should be up to an external module.
>>>> Example: my SDN tool is called "sdnmagic":
>>>> 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
>>>> configure the .ini file(s) to make it work in Neutron
>>>> 2/ create puppet-sdnmagic that will take care of everything else:
>>>> install sdnmagic, manage packaging (and specific dependencies),
>>>> repositories, etc.
>>>> I am -1 on puppet-neutron handling it. We are not managing SDN
>>>> solutions: we are enabling puppet-neutron to work with them.
>>>>
>>>> I would like to find a consensus here, that will be consistent across
>>>> *all plugins* without exception.
>>>>
>>>>
>>>> Thanks for your feedback,
>>>>
>>>> [1] http://goo.gl/zehmN2
>>>> [2] https://review.openstack.org/#/c/209997/
>>>> -- 
>>>> Emilien Macchi
>>>>
>>> 
>>>
>> 
>> I think the data point provided by the Cinder situation needs to be
>> considered in this decision:
>>https://bugs.launchpad.net/manila/+bug/1499334
>> 
>> The bug report outlines the issue, but the tl;dr is that one Cinder
>> driver changed the licensing on a library required to run in-tree code.
>> 
>> Thanks,
>> Anita.
>> 
>> 
>> 
>
>-- 
>Emilien Macchi
>
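
A minimal sketch of the deprecation-warning step suggested in the quoted
plan above, under assumed names: the class, parameter, and repository shown
here are illustrative only, not the real puppet-neutron interface:

```puppet
class neutron::plugins::vendor_repo (
  $manage_repo = true,
) {
  if $manage_repo {
    # Deprecation path: warn now, delete the repo-management code next cycle.
    warning('Managing third-party repositories in puppet-neutron is deprecated; use the vendor module instead.')

    yumrepo { 'vendor-sdn':
      baseurl => 'http://repo.example.com/vendor-sdn',
      enabled => 1,
    }
  }
}
```

Gating the legacy behavior behind a parameter like this lets operators opt
out immediately while the warning runs for a cycle before removal.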



From fungi at yuggoth.org  Wed Sep 30 23:37:14 2015
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 30 Sep 2015 23:37:14 +0000
Subject: [openstack-dev] [puppet] [infra] split integration jobs
In-Reply-To: <560C50B3.8010904@redhat.com>
References: <560C50B3.8010904@redhat.com>
Message-ID: <20150930233714.GC4731@yuggoth.org>

On 2015-09-30 17:14:27 -0400 (-0400), Emilien Macchi wrote:
[...]
> I like #3 but we are going to consume more CI resources (that's why I
> put [infra] tag).
[...]

I don't think adding one more job is going to put a strain on our
available resources. In fact it consumes just about as much to run a
single job twice as long since we're constrained on the number of
running instances in our providers (ignoring for a moment the
spin-up/tear-down overhead incurred per job which, if you're
talking about long-running jobs anyway, is less wasteful than it is
for lots of very quick jobs). The number of puppet changes and
number of jobs currently run on each is considerably lower than a
lot of our other teams as well.
-- 
Jeremy Stanley


From xarses at gmail.com  Wed Sep 30 23:43:51 2015
From: xarses at gmail.com (Andrew Woodward)
Date: Wed, 30 Sep 2015 23:43:51 +0000
Subject: [openstack-dev] [puppet] [infra] split integration jobs
In-Reply-To: <20150930233714.GC4731@yuggoth.org>
References: <560C50B3.8010904@redhat.com> <20150930233714.GC4731@yuggoth.org>
Message-ID: <CACEfbZjj6t+if=ockGbbm7njgoqEC4jDSW4_3QktZT4KJL+1iA@mail.gmail.com>

Emilien,

What image is being used to spawn the instance? We see 300 sec as a good
timeout in Fuel with a CirrOS image. The time can usually be cut
substantially, if the image is of any size, by using Ceph for ephemeral
storage...

On Wed, Sep 30, 2015 at 4:37 PM Jeremy Stanley <fungi at yuggoth.org> wrote:

> On 2015-09-30 17:14:27 -0400 (-0400), Emilien Macchi wrote:
> [...]
> > I like #3 but we are going to consume more CI resources (that's why I
> > put [infra] tag).
> [...]
>
> I don't think adding one more job is going to put a strain on our
> available resources. In fact it consumes just about as much to run a
> single job twice as long since we're constrained on the number of
> running instances in our providers (ignoring for a moment the
> spin-up/tear-down overhead incurred per job which, if you're
> talking about long-running jobs anyway, is less wasteful than it is
> for lots of very quick jobs). The number of puppet changes and
> number of jobs currently run on each is considerably lower than a
> lot of our other teams as well.
> --
> Jeremy Stanley
>
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/ca3be04e/attachment.html>

From vadivel.openstack at gmail.com  Wed Sep 30 23:50:24 2015
From: vadivel.openstack at gmail.com (Vadivel Poonathan)
Date: Wed, 30 Sep 2015 16:50:24 -0700
Subject: [openstack-dev] [Neutron] Release of a neutron sub-project
In-Reply-To: <CAL3VkVxjJ=TKAnHDPkhn4jVx4+Z-ut5wm5bba1CPrakviG5+kw@mail.gmail.com>
References: <CALtWjwbJb4Q=+9LXmBVpRTWt4R9a6BsiCrNYfbgejJ0z1zs4Bw@mail.gmail.com>
 <CAL3VkVws=+ynad4u5HEXiuOsrd92=b1b_vMVijeujdy2u9YB_Q@mail.gmail.com>
 <CAL3VkVxjJ=TKAnHDPkhn4jVx4+Z-ut5wm5bba1CPrakviG5+kw@mail.gmail.com>
Message-ID: <CALtWjwb8juroYkpYuoc4tNrvOtryUv4Y7Zn8iXR=Ck860-N+1A@mail.gmail.com>

Kyle,

We referenced Arista's setup/config files when we set up PyPI for our
plugin, so if it is OK for Arista, then it should be OK for
ale-omniswitch too, I believe. In another email you said Arista was OK
when you checked via Google instead of a PyPI search. So can you please
check ale-omniswitch again as well and confirm?

If it still has an issue, can you please give me some pointers on where
to enable the openstackci owner permission?

Thanks,
Vad
--

The following pypi registrations did not follow directions to enable
openstackci with "Owner" permissions, which allow for the publishing of
packages to pypi:

networking-ale-omniswitch
networking-arista

On Wed, Sep 30, 2015 at 11:56 AM, Kyle Mestery <mestery at mestery.com> wrote:

> On Tue, Sep 29, 2015 at 8:04 PM, Kyle Mestery <mestery at mestery.com> wrote:
>
>> On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
>> vadivel.openstack at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> As per the Sub-Project Release process, I would like to tag and release
>>> the following sub-project as part of the upcoming Liberty release.
>>> The process says to talk to one of the members of the 'neutron-release'
>>> group. I couldn't find a group mail-id for this group, hence I am sending
>>> this email to the dev list.
>>>
>>> I have just removed the version from setup.cfg and got the patch merged,
>>> as specified in the release process. Can someone from the neutron-release
>>> group make this sub-project release?
>>>
>>>
>>
>> Vad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there
>> so I can get your IRC nick in case I have questions.
>>
>>
> It turns out that the networking-ale-omniswitch pypi setup isn't correct;
> see [1] for more info and how to correct it. This turned out to be OK,
> because it forced me to re-examine the other networking sub-projects and
> their pypi setup to ensure consistency, which the thread found here [1]
> will resolve.
>
> Once you resolve this ping me on IRC and I'll release this for you.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075880.html
>
>
>> Thanks!
>> Kyle
>>
>>
>>>
>>> ALE Omniswitch
>>> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
>>> Launchpad: https://launchpad.net/networking-ale-omniswitch
>>> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>>>
>>> Thanks,
>>> Vad
>>> --
>>>
>>>
>>>
>>>
>>
>
>
>

From skinjo at redhat.com  Wed Sep 30 23:58:51 2015
From: skinjo at redhat.com (Shinobu Kinjo)
Date: Wed, 30 Sep 2015 19:58:51 -0400 (EDT)
Subject: [openstack-dev] [Manila] CephFS native driver
In-Reply-To: <5605E68E.7060404@swartzlander.org>
References: <CALe9h7datjEmxQ8i+pZPPXA_355o4wMs7MVH56eVrPACFjCKSg@mail.gmail.com>
 <5605E68E.7060404@swartzlander.org>
Message-ID: <1430360729.24243057.1443657531099.JavaMail.zimbra@redhat.com>

Is there any plan to merge those branches into master?
Or is there anything more that needs to be done?

Shinobu

----- Original Message -----
From: "Ben Swartzlander" <ben at swartzlander.org>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Saturday, September 26, 2015 9:27:58 AM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

On 09/24/2015 09:49 AM, John Spray wrote:
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph

Awesome! This is something that's been talked about for quite some time 
and I'm pleased to see progress on making it a reality.

> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.

This makes sense, but have you given thought to the optimal way to 
provide NFS semantics for those who prefer that? Obviously you can pair 
the existing Manila Generic driver with Cinder running on Ceph, but I 
wonder how that would compare to some kind of Ganesha bridge that 
translates between NFS and CephFS. Is that something you've looked into?

> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable snapshots).  Currently deletion is
> just a move into a 'trash' directory; the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!

All snapshots are read-only... The question is whether you can take a 
snapshot and clone it into something that's writable. We're looking at 
allowing for different kinds of snapshot semantics in Manila for Mitaka. 
Even if there's no create-share-from-snapshot functionality, a readable 
snapshot is still useful and something we'd like to enable.

The deletion issue sounds like a common one, although if you don't have 
the thing that cleans them up in the background yet I hope someone is 
working on that.

> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.

So quotas aren't enforced yet? That seems like a serious issue for any 
operator except those that want to support "infinite" size shares. I 
hope that gets fixed soon as well.

> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> enforce the separation between shares on the OSD side.

I think it will be important to document all of these limitations. I 
wouldn't let them stop you from getting the driver done, but if I was a 
deployer I'd want to know about these details.

> However, for many people the ultimate access control solution will be
> to use a NFS gateway in front of their CephFS filesystem: it is
> expected that an NFS-enabled cephfs driver will follow this native
> driver in the not-too-distant future.

Okay, this answers part of my question above, but how do you expect the 
NFS gateway to work? Ganesha has been used successfully in the past.

> This will be my first openstack contribution, so please bear with me
> while I come up to speed with the submission process.  I'll also be in
> Tokyo for the summit next month, so I hope to meet other interested
> parties there.

Welcome! I look forward to meeting you in Tokyo!

-Ben


> All the best,
> John
>


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev